A bottom-up approach: IG processes and multistakeholderism | IGF 2023 Open Forum #23

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Anriette Esterhuysen

The analysis emphasises the significance of multi-stakeholder engagement in policy processes, specifically in the context of internet governance and UN processes. It argues that involving multiple stakeholders in policy development promotes compliance, understanding, and commitment to the outcome. When stakeholders are included and have a clear understanding of the policy process, they are more likely to adhere to it. Additionally, engaging multiple stakeholders creates a demand-side angle, further promoting and advocating for the policy’s outcomes.

However, there are concerns raised regarding the loose use of the term “multi-stakeholder engagement.” It is suggested that bad policy processes are being labelled as multi-stakeholder, which undermines the credibility and effectiveness of such processes. This raises questions about the transparency and appropriateness of applying the multi-stakeholder approach in policy development.

The analysis also highlights the shrinking opportunities for participation in UN processes related to internet governance, as discussions become increasingly centralised in New York. This centralisation leads to less diversity in representation and cross-pollination of ideas. Furthermore, certain stakeholder groups are being overlooked, such as the technical community, which is not seen as a legitimate stakeholder group in discussions on the Global Digital Compact. This limited representation in UN processes restricts the perspectives and expertise that could contribute to better internet governance.

Although the principle of multi-stakeholder engagement has been widely adopted in the UN and other institutions, there is a lack of effective implementation. While there may be some use of the principle in these institutions, the application falls short. The analysis suggests that the implementation of the multi-stakeholder principle needs improvement to ensure its effectiveness in policy processes.

It is argued that a meaningful application of the multi-stakeholder process requires a granular understanding of stakeholder groups. To ensure an inclusive and diverse representation of interests, stakeholders from various backgrounds and areas of expertise should be involved. For instance, discussions on AI policy should involve not only technologists but also educators and sociologists. This highlights the importance of considering a wider range of perspectives in developing policies to address complex issues effectively.

Another noteworthy point is the unique characteristic of the WSIS process when it was based in Europe. During this time, the process involved different institutions such as UNESCO, WIPO, ITU, and human rights institutions, ensuring a comprehensive approach to internet governance. This observation highlights the need for collaboration among various organizations and institutions in policy development.

The analysis also highlights the role of power dynamics in multi-stakeholder processes. It points out that power imbalances between different countries and within gender and racial dynamics affect the outcomes of these processes. Therefore, it is crucial to acknowledge and counter the impact of power dynamics in the design of multi-stakeholder processes. Transparency about power dynamics is also emphasized as an essential aspect of fostering trust and inclusiveness.

Lastly, the analysis underscores the significance of clarity of purpose and flexibility in multi-stakeholder processes. Not every multi-stakeholder process is the same, and it is essential to assess the objectives and desired outcomes of the process. Furthermore, the design of these processes should allow for relationship building and a deeper understanding of differences among stakeholders.

In conclusion, the analysis highlights the importance of multi-stakeholder engagement in policy processes, with specific reference to internet governance and UN processes. It stresses the need for a more thoughtful and nuanced application of the multi-stakeholder approach, ensuring diverse representation, addressing power dynamics, and promoting clarity of purpose and flexibility. By addressing these factors, the multi-stakeholder process can become a more effective and credible means of policy development, avoiding its misuse as a mere shortcut or superficial tactic.

Panelist

The discussion revolved around the different aspects of internet governance and its effects on economic growth, inclusion, and stakeholder engagement. Several key points were raised during the discussion.

One of the main topics of the conversation was the positive impact of WSIS (World Summit on the Information Society) activities on economic growth. The participants pointed out that global GDP has more than doubled since 2003, with examples such as Nigeria’s GDP increasing from $100 billion to about $500 billion. The argument put forward was that the activities of WSIS have contributed to this growth.

Another important aspect that was discussed was the need for inclusive participation and representation in internet governance. The participants noted that having focal people to represent different regions can encourage multi-stakeholder participation. This approach enables representatives from different areas to participate on behalf of others, promoting inclusivity.

The conversation also highlighted concerns about the lack of diverse and engaged participants in internet governance meetings. The sentiment expressed was negative, with participants raising the issue that the same individuals have been attending these meetings for years and that new participants often do not stick around. This lack of new and engaged participants was deemed problematic for effective discussions and decision-making.

A significant point that emerged from the discussion was the importance of active outreach and the use of alternative communication channels in countries where the internet is not a priority. The participants emphasised that in many countries, the internet is still considered a luxury, and people face more pressing issues like water scarcity or environmental challenges. Therefore, the UN and other organisations were advised to adopt different means of communication, such as radio, television, or traditional letter-writing, to engage with uninvolved individuals.

Additionally, the panelists highlighted concerns about the societal and economic impact of trade agreements, including their digital trade provisions. It was noted that experts with backgrounds in trade policy observe and analyse the trade-related concerns raised by the internet community.

The importance of interdisciplinary communication and collaboration was stressed throughout the discussion. The panelists emphasised the need for professionals from various fields, such as engineers, trade lawyers, and economists, to work together in an interdisciplinary approach to address internet governance issues effectively.

Furthermore, the conversation shed light on the importance of knowledge for effective engagement and representation. The panelists believed that adequate knowledge, especially in technical areas related to the internet, is necessary for someone’s voice to be heard.

The lack of global representation and contribution from local organisations in internet governance was also discussed. It was pointed out that the majority of participants are from Western countries, with little involvement from local organisations in countries like Japan. This apparent inequality in representation was considered a negative aspect of internet governance.

The participants also highlighted the lack of knowledge and awareness about internet-related issues among organisations such as the Japan Consumer Organisation. This observation raised concerns about the need for education and awareness programmes to bridge the knowledge gap.

Barriers to inclusive participation in global internet governance meetings were also mentioned, such as the high costs associated with attending these events and the burdensome administrative processes, including visa applications. These barriers were seen as hindrances to inclusivity in the internet governance process.

The discussion concluded with the consensus that there is a need for stakeholder inclusion and consultation through channels beyond physical and virtual meetings. This approach would enable a more diverse range of individuals and organisations to provide input and be involved in internet governance discussions and decisions.

Finally, the importance of genuine and credible multi-stakeholderism in internet governance was stressed. It was emphasised that multi-stakeholderism should not be mere window dressing, but a genuine and credible approach that includes bringing people along, listening to different perspectives, and enacting positive change.

Overall, the discussion highlighted the complexities and challenges of internet governance, emphasising the need for inclusive participation, knowledge dissemination, and interdisciplinary collaboration. It underscored the significance of actively reaching out to uninvolved individuals and organisations and the importance of genuine multi-stakeholderism in achieving effective and inclusive internet governance.

Keywords: WSIS, economic growth, multi-stakeholder participation, inclusion, diverse participants, active outreach, alternative communication channels, societal and economic impact, trade agreements, interdisciplinary collaboration, knowledge gap, global representation, barriers to access, stakeholder inclusion, consultation channels, capacity building, genuine multi-stakeholderism.

Timea Suto

The International Chamber of Commerce (ICC) is represented by Timea Suto, who serves as its global digital policy leader. The ICC is a global business organisation representing over 45 million companies across more than 170 countries. It has been the primary business focal point in the World Summit on the Information Society (WSIS) process for two decades.

In discussions on effective decision-making and progress, there is a consensus that an essential aspect of a successful multi-stakeholder process is ensuring the inclusion and active participation of all stakeholders. Various speakers stress the significance of hearing every stakeholder’s voice and point out that stakeholder mapping plays a crucial role in identifying those who agree or disagree on certain matters. They also highlight the need for capacity building at all levels to enable stakeholders to effectively engage and contribute.

Furthermore, there is a strong desire to expand the multi-stakeholder model to make it more inclusive. The involvement of new voices in decision-making processes is seen as essential for promoting diversity and reducing inequalities. Mentorship is viewed as a valuable tool for learning from experienced stakeholders, while sponsorship is seen as crucial for representing and promoting innovative approaches in processes where they are not yet present.

The United Nations (UN) has made progress in recognising the importance of a multi-stakeholder process. The UN acknowledges the concept and describes how governments, businesses, civil society, and the technical community can come together to achieve common goals. However, there is a call to establish standing UN modalities for multi-stakeholder engagement, which would save time and resources by avoiding a debate at the start of each process over whether to admit stakeholders.

In conclusion, the ICC, with its global digital policy leader Timea Suto, plays a crucial role in representing millions of companies worldwide. The discussions highlight the significance of hearing every stakeholder’s voice, stakeholder mapping to identify diverse perspectives, capacity building at all levels, and the expansion of the multi-stakeholder model to make it more inclusive. Additionally, there is a recognition of the progress made by the UN in acknowledging the importance of a multi-stakeholder process, along with a call for UN multi-stakeholder modalities to streamline engagement and maximise efficiency.

Alan Ramirez Garcia

The analysis explores the multi-stakeholder model and its efficiency in addressing Internet governance. Various speakers argue that this model, which involves researchers, businesses, government, and users, is the most effective way to govern the internet. They emphasize the importance of continuous engagement and allocation of resources to ensure the model’s success.

Furthermore, the involvement of United Nations leaders and governments is seen as crucial in supporting and advancing the multi-stakeholder model. Alan Ramirez Garcia stresses the need for their increased participation in the process. This is considered essential for creating a more connected world and maximizing opportunities in the digital governance sphere.

The speakers also emphasize that the multi-stakeholder model should yield benefits for public problems and allow for the exercise of human rights. Alan Ramirez Garcia urges the need for solid evidence on how the model can address public issues and safeguard human rights.

In addition to these points, there are suggestions for a prospective approach in facing future challenges. Alan Ramirez Garcia highlights the importance of applying a forward-looking method to identify and address emerging risks. It is also advocated to evaluate these risks and their potential impact.

To mitigate the risks identified, speakers recommend the immediate implementation of appropriate strategies. Alan Ramirez Garcia specifically supports the prompt execution of such mitigation measures, in line with the broader goal of taking action on pressing issues such as climate-related risks.

Overall, the analysis concludes that the multi-stakeholder model is effective in addressing Internet governance. However, it highlights the need for continuous engagement, the involvement of United Nations leaders and governments, and the consideration of public problems and human rights. The analysis also emphasizes the importance of applying a prospective approach to identify and evaluate emerging risks, and the immediate implementation of mitigation strategies to address those risks.

Rosalind KennyBirch

The UK is actively preparing for the WSIS plus 20 review process and has strived to ensure that the process is fully inclusive of the multistakeholder community. The goal is to create a collaborative discussion platform that allows for direct input from a wide range of stakeholders. While the main focus of the session is on WSIS plus 20, discussions may also cover inclusion in other UN processes related to internet governance.

The session, designed as a panel discussion, aims to encourage active participation and foster collaborative discussions among the multistakeholder community. This format recognizes the importance of diverse stakeholders’ direct input in achieving an inclusive WSIS plus 20 review process. The evidence suggests that this approach has garnered positive sentiment and support.

Additionally, the inclusive approach to the WSIS plus 20 review process aligns with several Sustainable Development Goals (SDGs), including SDG 10 (Reduced Inequalities), SDG 16 (Peace, Justice and Strong Institutions), and SDG 17 (Partnerships for the Goals). By promoting inclusivity, these goals can be better addressed, leading to a more equitable and just society.

Overall, the UK’s proactive efforts to ensure an inclusive WSIS plus 20 review process through collaborative discussions with the multistakeholder community are commendable. The session’s format and goal of increasing direct input reflect a commitment to creating a participatory and inclusive process. This approach not only supports the achievement of SDGs but also demonstrates the significance of engaging diverse stakeholders in shaping internet governance policies.

Mary Uduma

The emergence of the Internet has necessitated a multi-stakeholder approach due to its boundary-less nature, requiring involvement from various stakeholders in decision-making processes. This approach was discussed in the World Summit on the Information Society (WSIS), highlighting its importance in achieving inclusive Internet governance.

Regulators now recognise the value of collaboration and consultation in policy-making, opting for a more inclusive and multi-stakeholder approach. They take into account the opinions and feedback of stakeholders before implementing policies, leading to positive reception.

UN agencies have also expanded their processes and involved more actors, embracing the outcomes of the WSIS. Agencies such as UNESCO, ITU, and UNCTAD have opened their doors to promote inclusivity. The Secretary-General is exploring the creation of a Global Digital Compact, indicating the expansion of UN processes.

However, concerns exist regarding the Internet Governance Forum (IGF). The participation of certain stakeholders, such as ICANN, has diminished over time. Additionally, government representation in the IGF is insufficient. Language barriers also pose obstacles, though the UK government’s sponsorship of translations in UN languages has been appreciated.

To address challenges effectively, consultations, collaborations, and grassroots involvement are crucial. Government departments and various actors are encouraged to prepare collectively and engage in discussions for the next level of WSIS Plus 20.

In addition to inclusive global processes, proactive national-level preparation is vital. Preparatory meetings for the WSIS in 2003 have demonstrated the benefits of such an approach. Understanding the global landscape and proactively engaging contribute to more effective decision-making and governance.

In summary, the impact of the Internet highlights the importance of a multi-stakeholder approach. Regulators, UN agencies, and stakeholders recognise the significance of inclusivity in decision-making processes. Efforts are being made to overcome challenges, such as diminishing participation and language barriers. Consultations, collaborations, and grassroots involvement are seen as key, along with proactive national-level preparation.

Session transcript

Rosalind KennyBirch:
Okay, I think we can go ahead and start. Okay, thanks very much everyone for joining the session today. I hope you’re all settled comfortably. Just to start off and say I think we should all give a round of applause, a big congratulations to ourselves for making it through four days of the IGF. And especially to you all who have made it to a 6 p.m. session on day, or technically day three I suppose of the IGF, but really four. Fantastic, so essentially for this session just a little bit of housekeeping and structure to begin with. We’ll be starting off with a sort of mini panel discussion just to sort of warm up the room, get some thoughts going around, but really we would encourage you and we want to leave a lot of time for you to participate in the discussion and share your ideas on our topic today which is on inclusion in the WSIS plus 20 review process. So we really want it to be a collaborative discussion today. So essentially just for some context, first of all, the UK is preparing for the WSIS plus 20 review process, and one of our key goals is to ensure the process is fully inclusive to the multistakeholder community. However, while that is our objective, we don’t want to just guess at what can make the process most inclusive. We want to hear directly from you, the wider multistakeholder community, and therefore that’s why we’ve chosen to take this more interactive approach to our open forum. Now while the focus of this session is on WSIS plus 20, the discussion could of course relate to inclusion in other UN processes that include internet governance, for example the GDC. So while we’ll be focused on WSIS plus 20 today, there may be some ideas you might want to take back that are applicable in other spaces as well. Now without further ado, I’d like to have my fellow panelists introduce themselves. So I’m going to pass the microphone across, and we’ll start with Mary. Oh, and you’ve got one there. Perfect.

Mary Uduma:
Okay. It’s morning in my continent. Good morning, Africa. Good afternoon, wherever you are, and good evening. I’m glad to be here. My name is Mary Uduma. I coordinate the West African Internet Governance Forum. I am part of the African Internet Governance Forum, and I have been a member of the MAG at the UN Internet Governance Forum. I am also the first convener of the Nigeria Internet Governance Forum. So internet governance, internet governance, internet governance.

Rosalind KennyBirch:
Thank you so much, Mary. And over to Alan next.

Alan Ramirez Garcia:
Thank you. Good afternoon. My name is Alan Ramirez. I’m currently a MAG member at IGF. I’m a policymaker in Peru and a university lecturer in Lima, and I’m pleased to participate here at my personal capacity in such a vital discussion. Thank you.

Rosalind KennyBirch:
Thank you. Just because I’m going to pass it one more, just to make sure that we’re not going to disturb the stand.

Timea Suto:
Hi, everyone. I’m Timea Suto. I’m the global digital policy leader at the International Chamber of Commerce. For those of you who don’t know, ICC is a global business organization representing over 45 million companies in more than 170 countries. So companies of all different sizes and all different sectors. And why we are here, it’s because we were the business focal point to the WSIS process almost 20, well, 20 years ago now, if you count 2003. And we have been following up on every WSIS outcome process on behalf of global business ever since. Thanks.

Anriette Esterhuysen:
Anriette Esterhuysen, Senior Advisor, Internet Governance with Association for Progressive Communications, and OLD, which means I was there 20 years ago when we were negotiating those outcomes from WSIS.

Rosalind KennyBirch:
Fantastic. Thanks so much. And my name’s Roz KennyBirch. I’m an international policy advisor focused on internet governance within the UK Government Department for Science, Innovation, and Technology. And so I’d just like to kick off with sort of a warm-up question. We’ve been hearing all week, and it will come as no surprise to anyone that this has been said, that the multi-stakeholder model is crucial. So just to sort of set the scene in context before we delve into more specific questions, why exactly is multi-stakeholder engagement so essential to the internet governance space? How can we articulate that? And perhaps, Anriette, I can start with yourself.

Anriette Esterhuysen:
I’m nodding because I think people say it without necessarily saying why. You know, it’s like, why is ice cream bad for you? Well, Timea says it’s not. It’s bad for me. And I think that we’ve lost, actually, the substance of why. It’s become like a brand. I mean, the GDC says they’re following a multi-stakeholder process. The Secretary General is talking about making the UN more multi-stakeholder. And I think we’ve started using it as a kind of a label, you know, like it’s been approved. It’s like it’s kosher, it’s halal, it’s multi-stakeholder. And bad policy processes are getting the multi-stakeholder stamp. That doesn’t make them good policy processes, even if they are multi-stakeholder. It also doesn’t necessarily make them fully multi-stakeholder. So why? I think, well, I’m going to speak from a national level or a telecoms level. You know, I’ve done a lot of work with regulators. If you work with the people that have to comply with a policy process, they’re more likely to comply with it. You might have to compromise. You know, business might not like what the public sector or civil society want it to do in terms of regulatory intervention. And civil society might not, you know, want to concede. But in the end, if you come up with something that actually is known and understood by the different stakeholders that matches their capacity and willingness. Pushing boundaries, you have to push boundaries a little bit, but then you’re more likely to have compliance. And I think the one thing we don’t need in our environment are lots of policies and guidelines and principles and regulations that no one complies with. Because then you have an unpredictable, unstable environment. And as far as human rights, you know, is concerned, you don’t, you also want understanding. And so that’s the second thing. Compliance and then knowledge, understanding. It must make sense to people. And then they’re more likely to work with it, to believe in it, and commit to implementing it. 
And then, you know, I think the third thing is it’s almost like if you participate in a process, you, and I mean, I think maybe ICANN is a good example of that. If you’re part of a process, you invest in it. And you actually invest in promoting it. And getting other people to be part of the outcome of that process. So it’s kind of, it builds in a demand side angle to your policy and regulatory environment, which top-down processes simply don’t have.

Timea Suto:
Going back. Perfect. Thanks, Roz. Well, it’s no surprise that I’m going to agree with Anriette. We tend to agree on a lot of things. And she said a lot of what I wanted to actually say. So I’m just going to bring it back to a business decision-making theory, let’s say. That is one of the values of a company I’m not going to name. But there’s this principle of disagree and commit, right? If you want to make a decision in any type of setting, if you’re a team in an organization and you’re trying to move forward on what you’re going to do, you need to hear the voices and really hear the voices of everybody in your team on how to make forward. Now, it’s not going to mean that you will be able to take a decision. There are people that are still going to disagree with a decision or they’re going to have a different opinion. But then somebody will take the decision and the decision will have consequences. But for the team to get behind that decision, as Anriette said, to have that buy-in, people need to be heard. People need to feel not just that they can talk or share, but they need to be heard. And a multi-stakeholder process that is effective is a process that hears every stakeholder, that creates meaningful opportunities for those voices to be said, but also to be heard. And once that’s done, yes, there will be a decision. Yes, we will have to make compromises. But we will be able to get behind a decision a little bit more effectively, if I can say so. That is my hope. It works on a small scale. The more voices you have, the more difficult it gets. But I think it’s worth the investment. And the example of that is the internet itself. The internet itself is a multi-stakeholder creation. It had the origin of an idea that came from, you can discuss, but it was a government idea or a researcher’s idea. It became a business product. It is something that all of us are benefiting from and shaping every day. It works. 
And in order for us to be able to make the governance of it and on it, for those of you who were on the panel before, work, we need to embody the same principles. And I think that’s why we need the multi-stakeholder model. We need to be able to buy into it, keep it ours, feel that it’s all of us have our voices shared and heard. And I think that moves us forward. Thank you.

Alan Ramirez Garcia:
Before all, I want to take a few seconds to thank you, Roz KennyBirch, and the UK government for this invitation and overall for the permanent commitment to supporting and empowering the multi-stakeholder model. And let me say that it is a great honor for me to share the panel with Mary, Timea, and Anriette. I’m sure we all agree on why the multi-stakeholder model is not only essential, but the most efficient way to address the Internet governance process. Having said that, it is a model that permanently needs to be fed with commitment from different stakeholders that could be potentially jeopardized if proper engagement is not applied. So what we need for the Internet we want is to empower the model to get more resources, to get more engagement, to be more strategic on how to avoid risks which can lead to losing engagement by parties involved. Thank you so much all.

Rosalind KennyBirch:
And now over to Mary to cap off that question. Okay. Okay. I think I have.

Mary Uduma:
Hello? Yes. Thank you very much. And the previous speakers, they’ve all said it well. And what I want to say is that I met the international forum when it was ITU. And ITU, you have to be a member to be allowed to attend their meetings. And it’s a closed meeting. And government to government negotiation between government to government, they make treaty. And all of a sudden, this big elephant in the house called the Internet showed up by research and those that developed it. And it became a sort of concern to the governments that do go into the room and make negotiations and agree on what to do and how to manage their spectrum, their numbers. And they have their boundaries. And here comes the monster that does not have a boundary. And how do you get everybody to agree on how to participate in this new world called the Internet? So it needed for us to think out of the box and look at other things because I started hearing this multi-stakeholder approach from WSIS. WSIS, the second WSIS when we were trying to come up with the IGF. So the buy-in, the understanding, I don’t know whether the government, they’ve really understood it, especially from my own environment, whether they’ve really gotten what the multi-stakeholder process is. But the truth is that we needed everybody’s voice to be heard and everybody to participate in the process of making sure that we benefit from the new process called the Internet. And just like Timea said, Internet is an elephant. If you touch the head, you think that is all about the elephant. And you touch the leg, you think that is all about the elephant. And so many actors, so many people, participants. So it’s open. I don’t know whether it’s television, computer, or a radio, or a telephone. So if I hold this, this is everything. I can do my television, I can do my telephone, I can do even my camera. 
So for this, the process of getting everybody understand, everybody participate, all the actors participate, give room to this multi-stakeholder process. And the good thing is it is bottom-up, it’s not top-bottom. Because even at the national level, now our legislators and our government, our regulators. When I was a regulator, well, we do consultation not as much as, okay, tell us what you want, we do what we want to do. But these days, the regulator, you know, goes, ah, let’s have multi-stakeholder process in coming up with regulation. So those are the things we have gained from this process. So that’s what I can say for now. Thank you.

Rosalind KennyBirch:
Thank you so much. And I think that elephant sort of metaphor is really, really valuable in that regard as well. So now moving on, and now that we’ve sort of set the scene. We’ll start to dig into a little bit more of the meat on the bones, so to speak. So now, looking at the current landscape, in your opinion, are opportunities to participate in UN processes surrounding internet governance expanding or shrinking, and how have you seen these processes evolve, and what direction are they evolving in since that initial summit, those initial summits back in 2003, 2005? And perhaps we can go down, just given the microphone situation, in the same order again for this one.

Anriette Esterhuysen:
Thanks, Ros. So, as I tried to say earlier, it's about better policy outcomes. It's not always harmonious. But stakeholder groups are not fixed; that's the one thing I didn't say. If we really apply the multi-stakeholder process in a way that's going to be meaningful, so that the discussion and the process are rich enough and diverse enough, you need to analyze the issue that's being discussed and then make sure that the stakeholders are ready. And it's in that context that it is concerning that with the Global Digital Compact there's the idea that the technical community is not a stakeholder group in its own right. In fact, if I were the UN and I were looking at AI policy, I would bring educators in as a particular group; I would bring in sociologists as a particular group. What we've seen within the UN system is that WSIS started with, and evolved, quite a fixed understanding of what the different stakeholder groups are: civil society; technical and academic (it was fluid in how it classified that group); business; and governments. And we worked with that. Where we worked with it well, we made it more granular: we brought in women's rights groups, human rights groups, small businesses, big businesses, and so on. And where we did not work with it well, we started treating those stakeholder groups as check boxes: if you had a business person, a government person, a civil society person, and a technical person, then you were multi-stakeholder. The positive thing is that the UN has adopted the principle much more widely; even in the ITU you see much wider use of the principle of multi-stakeholderism. But is it being applied well, in a meaningful way? I think not really.
And then secondly, on where opportunities are shrinking: I think it also has to do with more of these discussions moving to New York. There was a unique characteristic, as far as the UN is concerned, of the WSIS process: much of it was based in Europe. You had UNESCO, which dealt with culture, education, human rights, and media, a very important aspect of this. And then you had the Geneva institutions: WIPO, dealing with intellectual property, which overlaps hugely with what we do here; the ITU, dealing with infrastructure and access issues; the Commission on Science and Technology for Development in UNCTAD, dealing with the follow-up; and the human rights instruments in Geneva. You also have some of the women's organizations in Geneva. But with more of these decisions moving to New York, you're going to have less of that; I think you'll just have less participation. And it's also a question of perspective: in New York, you're dealing mostly with UN missions, which means your process is really being run by the ministries of foreign affairs. When it's in Geneva, you have a slightly more diverse mix of people, and for developing countries you often have the same person in the mission covering all of those agencies. So there's just more cross-pollination. Thanks.

Timea Suto:
Thanks, Ros. Thanks, Anriette. Again, I'll have to agree with a lot of this; I think we're getting into a good dynamic: Anriette's throwing up the balls, and all I can do is catch them and maybe add a couple of things. In terms of where we have evolved: if you were in the session just now on the future of digital governance, I read out how the paragraph in Our Common Agenda that came up with the idea of having a GDC, a Global Digital Compact, sets out a multi-stakeholder process. It doesn't say multilateral and multi-stakeholder. It doesn't say governments in consultation with all relevant stakeholders, as we've seen in other UN resolutions and documents before. It says governments, business, civil society, and the technical community coming together to forge a Global Digital Compact. I think that's the progress Anriette was talking about, the progress we have made in the past 20 years. A caveat here: every single UN process still needs to negotiate its modalities, and it needs to negotiate in what form, if at all, it will allow multi-stakeholder input, and there's a huge array of differences in that. In a dream world, the UN would come up with multi-stakeholder modalities once, and then we would all save ourselves two or three meetings at the start of every process discussing whether or not we want to let stakeholders in. That's one. Two: who are the stakeholders? I think Anriette hit the nail on the head with that. Yes, there are the main groups of stakeholders, but none of those stakeholders is homogenous. Government is not homogenous: there are various government agencies and branches, and parts of the administration that also need to be consulted. In business, no two businesses are the same; no two business models are the same; there are different industries.
Here we are mostly with telecommunications or digital companies, but business is a lot larger than that, and now every business is becoming digital. It's not homogenous. Society is not homogenous. So if you want a meaningful multi-stakeholder process, it needs to start with a stakeholder mapping: who are the people that are likely to agree with you, who are the people that are not likely to agree with you, and who are the people that should be at the table but don't even know about these things? So there's another element we need to discuss: what does multi-stakeholder mean, applied in a truly effective, meaningful way? Those are the two things, Anriette, that you raised and I really agree with. A third thing I want to add is the layers of decision-making, the layers of governance. We talk a lot about the international level because we are in an international setting. But has the multi-stakeholder model trickled down to regional, national, and sub-national levels of decision-making? Have we matched it with adequate capacity building for all the stakeholders that need to take part in it, including governments but also businesses, society, and others that might not even know how to be part of a multi-stakeholder conversation, or that they can be part of it?

Alan Ramirez Garcia:
I fully agree with Anriette and Timea. Of course, we live in a more connected world, also in the digital governance sphere, so today I find plenty of opportunities for participation in UN processes by different stakeholders. From the government perspective, I think United Nations leaders and governments need to get more involved with the process, with solid evidence of how the model helps address public problems and enables the exercise of human rights, and so on. That's it.

Rosalind KennyBirch:
Thanks. Just some initial reflections there as well: I'm definitely hearing the cultural differences between Geneva and New York in the UN sense, and also how important it is to proactively engage different groups, not just looking at these groups as specific boxes but understanding the nuances as well. So I'll hand over to Mary. After Mary goes, we're going to do a quick rapid-fire last question, so get your wheels turning now; we'll be asking for a bit more participation from the audience quite soon. Start thinking about what points you may want to make, but Mary will finish us off on this question and then we'll do one last rapid-fire question.

Mary Uduma:
Okay, thank you very much. Are the UN processes shrinking or expanding? I will say that the UN agencies have been running with the WSIS outcomes; they have their own communities and they've been trying to bring many of their actors in within their own confines. Look at UNESCO, look at the ITU or UNCTAD, on trade. So they are opening their doors now. They've been opening their doors, and I will say that it's not shrinking; if anything, they are co-opting more people, more actors, into their own individual processes that feed into the global one. And I think that's what informed the Secretary-General to say: look, can we do a Global Digital Compact, so that everybody, all the actors, will come in? But on the other hand, when we look at the processes, for instance the ITU, there are some actors we have not seen, some of the stakeholders, as Timea said, who are not as prominent. When we started, we had a lot of ICANN people all over the place, and the businesses, who were looking after their bottom line. But I don't know whether they are still as present when it comes to the IGF processes; we don't see most of them. We also find that some governments are not appearing here at the IGF, maybe because of language issues, though we have been trying to move forward from there, thanks to the UK government that sponsored translation into the UN languages. So there are still communities and stakeholders that are not here that we need to bring into the process. Thank you.

Rosalind KennyBirch:
Absolutely. Thank you, Mary, for those insights. So last quick rapid-fire question from the panel before we open it up more broadly is just simply, we’ve pointed out some challenges or opportunities here. What can we do about those? How can we act on those observations that we’ve made? Are we equipped to address the challenges that have been raised? So I’ll start it going this way with Mary first this time and we’ll come down that way.

Mary Uduma:
Yeah. Consultations and collaborations: those are two words I think we should look at. And also the grassroots. At the national level, are we having preparation? At the national level, you can see there are so many other actors within the government: the departments of foreign affairs, telecommunications, education, all of that. I think preparation at that level will help carry the conversation; the conversation will get to the actors that will be participants at the next level, WSIS Plus 20, right? So that preparation is very, very key. When we were preparing for WSIS in 2003, there were preparatory meetings; I remember we went to Ghana. I don't know whether governments and countries are still doing that, or blocs like the African bloc, the West African bloc, even the Commonwealth, which I think used to do some preparation. I don't know whether that is still going on. I think we have to have the conversation, discussion, and debate at that level, and know what we are going with to the global level.

Rosalind KennyBirch:
Thank you, Mary. Alan, over to you next.

Alan Ramirez Garcia:
Thank you. Well, I want to propose applying a prospective method to address the challenges that will be raised in the future, or maybe are already here. First, we need to identify the emerging risks that could affect the multistakeholder model as we know it. Second, we need to evaluate how probable each risk is and how impactful it could be. With that identification, which is objective, mitigation strategies need to be applied now by all stakeholders interested and involved. Thanks.

Timea Suto:
Thanks, Alan. Just another analogy, if you allow me one more. When you're trying to bring new voices into a decision-making process, or even raise up new talent in an organization, people say you need mentorship and you need sponsorship, right? You need mentors who tell you how they have done it, what they have learned, and how you can apply that. I think there's a lot of that going on around this room: taking stock of the good lessons learned, seeing how some processes have done it, and what we can learn and apply. Then you need the sponsors, and I don't mean that in a financial way: the sponsors that speak up for this when we're not in the room, in those processes we are not being let into. There are some people there who know different ways of doing things; we need to count on them to carry the flag. Since we're in a UK Open Forum: the UK does that, and thank you for that. I think we need more of it. In order to expand the multi-stakeholder model, and to get it down to the layers I was talking about earlier, we need that sponsorship. Otherwise, we are going to keep talking to each other in this multi-stakeholder echo chamber, and we will be sitting here 20 years from now still wondering what we need to do.

Anriette Esterhuysen:
Thanks, Timea; thanks, Ros; and thanks for doing this when we are all so tired. It's very brave of you. I think: don't use the multi-stakeholder process as a shortcut. Don't use it as window dressing. That will just discredit what is, in fact, a very powerful way of making policy, and I see everyone doing it, actually. Be serious about it; otherwise, just don't do it. There are other ways of making policy; it's not the only way. So if you want to do it, do it well. Timea mentioned stakeholder mapping. The one thing that none of us has maybe explicitly said is: look at power. Don't assume that power doesn't play out in a multi-stakeholder process. It does. Global North and Global South, rich country and poor country, big company and small company, local civil society organization and big global advocacy organization: all of those dynamics, gender dynamics, race dynamics, play out in these processes. If you're really serious about your multi-stakeholder processes, acknowledge that. Don't abuse it. Actually design in order to counter the impact of that, or at least be very explicit about it; transparency about that is also important. Clarity of purpose is important too: not every multi-stakeholder process is the same, so design according to that, and assess what you want out of it. Design it in such a way that even if you don't achieve consensus, you achieve relationship building and you deepen understanding of why there are differences. And then be flexible and adapt accordingly.

Rosalind KennyBirch:
Brilliant. Well, thank you so much to the panelists. Now I'd like to move us into discussion and challenge us all to start thinking about potential answers to these questions. So turn to your neighbors around you, hopefully someone new you might not have met before, and start the conversation: how can the WSIS Plus 20 review process be shaped to encourage more inclusive participation from the multi-stakeholder community? I'm actually hoping that lots of you may not know the people around you. Perhaps we can take five minutes or so. Those online, I think there's about five of you; perhaps you can discuss in your own breakout group as well. Let's get the wheels turning.
You've still got a few minutes, don't worry. All right, everyone, one more minute, so we can wrap up your conversations. All right, thank you very much. Let's come back in. If you've moved seats around, and I know a few of you have, feel free to come back. Fantastic. Well, now's an opportunity to reconvene after those discussions. Hopefully we've got some good ideas going; I certainly saw lots of interactive conversations. Does anyone want to be brave enough to go first? I may call names. Otherwise, Jimson, yeah, please. Thanks for being brave. Oh, yeah, apologies.
Pass the microphone, whatever you're comfortable with. Thank you very much.

Panelist:
Well, a very excellent interaction. Thank you very much for bringing this up. First, to say that what matters to everybody is their economic standard, their well-being. When WSIS started in 2003, global GDP was just $50 trillion; today, it has more than doubled, and ICT and WSIS have helped a lot. In fact, Nigeria's GDP used to be $100 billion in 2003; now it has increased fivefold to about $500 billion. WSIS activities contributed to that, because governments changed direction and involved everybody. So how can we increase that? That is the crux of the matter. First and foremost, we need this to continue. I'm speaking from a private sector perspective: the private sector and the public sector have a common purpose, to boost the welfare of the people. We know how to create jobs; as Timea mentioned, we can create opportunities, and when we work together, we create more opportunities. But it's expensive for the private sector to self-fund to come here. So what we try to do in Africa, for example, is to bring all stakeholders together so that we have focal people to speak for Africa. Just as Timea is here, she's representing us; at times, we don't go to New York, but she goes for us. That kind of mechanism we have in place. Then also, as we discussed, organizations like ISOC can create more awareness within their system. They are mostly technical; they can engage other stakeholder groups, maybe more governments, other civil society, in their area of influence. And then, create a budget line. We need a special budget line for inclusivity. We see how the UK government is supporting this event; we need more of that, because if there's no budget line, then you can't really do much. This is where I will stop for now. Thank you.

Rosalind KennyBirch:
Thank you so much. And a really good point, I think, about reaching out and going to places: not putting the onus on people to necessarily all travel to New York for discussions, but that proactive outreach. Really interesting. Sorry, there's a point I need to make.

Panelist:
You know, there is a lot of hope in this idea of encouraging hubs. Like in Nigeria now, there's a hub following many of these proceedings. So we can encourage more hubs, so that more people can participate.

Rosalind KennyBirch:
Thank you, Jimson. Tracy, I see your hand up.

Panelist:
Hello. Yeah, so I've been thinking a lot about this for the last few years. When we come to these meetings, who do we see here? The same people, getting grayer, getting older. Every now and then we have a new batch of people coming in through ISOC and others. If they stick around, we're happy; if not, they disappear. So how do we keep folks engaged? The reason I think we don't do well at this is that we're talking to ourselves a lot. We are the IGF, we are WSIS: come to us, come and talk about the internet. But we don't go out to them. Because in many countries, the internet is not a priority. First, we have the unconnected, so we have that group. Then, in many countries, the internet is still a luxury. We have road issues; in my region, we have climate change problems. And when people are trying to gravitate to issues to participate in, they gravitate to issues that affect them, right? If your island is sinking, the internet is not the issue you're dealing with today. And if you have no water, it's not the internet. So how do we reach out to people who could be engaged? I think there needs to be a really concerted effort from the UN and others, with outreach, that's the word we're talking about, to get to those people who are not involved and get them involved. You don't have to come to these meetings; nobody's saying, come to Kyoto. But get them involved in some way, and if they're not connected, use other means: use radio, use television, write a letter. What does the internet mean for you? How do you engage people that way? I think we're not doing well in that at all. So we'll still have 8,000 people coming to the IGF, and of that, a few of the same people coming every year, and then next year it's a recycle.
Then we go to Riyadh, and there'll be another batch, maybe local people coming in, et cetera. So how do we reach those people? I think we are not doing well there, and if you're doing bottom-up engagement, how do you actually engage in a bottom-up way? I really think there needs to be a better job done at this, because it affects everybody; it will affect everybody. If we look at other areas, like climate change, I think they do a much better job at engaging their stakeholders, because it means something to them. We probably haven't done a good job at saying that internet governance means something to you, because we haven't explained the topic very well. What is internet governance, exactly? What does it mean to you? I asked my colleague next to me how he got here. He's relatively new; someone told him, right? But if no one had told him, he would just be, I guess, riding a bike or whatever you do at home. And what's he doing now? There's something going on here. So how do we get that resolved? Yeah, thanks.

Rosalind KennyBirch:
Thank you, Tracy. You make the point generally, but it's especially true, isn't it, of UN processes: if you're going about your day, you may not even be aware of what's going on. So a really pertinent point for that specifically as well. Did I see hands over here? I'll go to you next, and then yourself. Great.

Panelist:
Thank you. I'm Minako Morita-Yaga, from the University of Sussex. I'm an academic, and my field is trade policy: I'm an international trade economist and lawyer. The reason I came here, to my first IGF, is trade: all these trade agreements now include digital trade provisions, and I'm really worried about their societal and economic impact. I have been learning a lot coming here and looking at things from a different side, because this is the internet community. The point I would like to make is that even within one stakeholder group, like academia, there are different fields. It is really interesting to see engineers, trade lawyers, and trade economists trying to work together in an interdisciplinary approach. Communication across different fields is really important; this is one thing we were talking about. My colleagues are also here for the first time, and we agree we have learned a lot; this is a really excellent approach. But on the other hand, when you talk about this "plus 20" something, I had never heard of it. What is this panel about? I don't know these UN processes. The other thing, and I completely agree with the previous speaker, is voice: voices should be heard, but voice requires knowledge, especially about the internet, which is really technical; it requires sponsorship, as somebody said, money to get that knowledge; and it requires institutions.
And when I look at the participants here, they are mostly from Western countries. Even locally: Japan is now the host, but I see very little contribution from Japanese organizations, civil society organizations, and so on, and I was wondering why that is. One shocking example is the consumer organizations: in Japanese consumer organizations, there are no people with knowledge of the internet and consumer issues, so who can represent them? There is no knowledge about this, and that is a really critical thing. We say, "come and talk to us," but they don't have anything to talk about; they just want to learn. And there is very limited awareness among the general public. So that is my point, sorry.

Rosalind KennyBirch:
No problem, I think this is a really engaging discussion, it’s hard to have a time limit, but I might just go ahead. Before I come to yourself, and I’m very sorry, I do want to acknowledge our online participants as well. Would anyone from the breakout group online like to make some points from their discussion, or perhaps my colleague, Marek, who’s online moderating could as well summarize for the group, whatever’s most comfortable.

Panelist:
Yeah, I'm happy to summarize quickly. We had an interesting discussion that brought up some points around barriers to access: both the costs of participating in these global internet governance meetings and the related costs and barriers to physical attendance at a meeting like the IGF, as well as the administrative burdens, whether that's restrictions on travel or processes like visa applications. So we talked about some of those challenges to having more inclusive processes, and highlighted the need to think about different channels for consultation and inclusion of stakeholders beyond solely physical meetings, such as virtual meetings and other types of coordination mechanisms. So yeah, thank you.

Rosalind KennyBirch:
Thank you so much, Marek. And thank you to our online participants for participating in the breakout group as well. And finally, over to yourself. And then we’ll conclude with some closing remarks, but over to you first. There we go.

Panelist:
Thank you. My name's Greg Shatan. Tracy, I'm one of these newcomers you've been hearing about, coming to the IGF for my first time, but I guess I'm a newcomer with a plus sign: I've been involved with ICANN since 2007. I've been to 30 ICANN meetings and been on a dozen working groups. But I've changed my focus there, and that's why I'm here. I started out in the private sector, and I still spend my daytime in the private sector. But having been president of the Intellectual Property Constituency for three years, I left that side of the world. Sorry, Jimson. I'm now in At-Large, where I'm the chair of the North American Regional At-Large Organization, and also president of ISOC New York. So I have absolutely no excuse not to be more involved than I already have been with the GDC and with the UN in New York. Jimson and I were talking about the barriers to access to information, capacity development, and the like for civil society and for individual end users. So I'm going to make somewhat of an offer, which is: if there's anything that ISOC New York can do. While ISOC Global is a technically oriented organization, some of our U.S. chapters, especially New York and D.C., as you might imagine, are very policy-oriented. If there's anything we can do to help give a project a home online, we have the indefatigable Joly MacFie, who can put anybody online at any time and has been a part of our ISOC New York cabal. We can do something to help provide a hub, a project, a space, virtual or even physical. I have my day job at a law firm, which has some conference rooms, and we're on 42nd and Lexington in the Chrysler Building, five blocks from the UN.
So if there's anything I can do to help provide some form of a node, a nexus, for any of this through the capacities that I have, I'm more than happy to do that, because I like the idea that something's finally happening in New York instead of Geneva. You know, I was born on the island; that's my island. So I want to make it work, and not make it feel like all of a sudden we've lost the Geneva kind of environment and that the New York environment is cold and unfriendly. Because New Yorkers are only cold and unfriendly until you ask them to help you, and then they're as warm and engaged as can be. So I'm asking to help you. Thank you.

Rosalind KennyBirch:
Thank you so much. I think we can all really help each other. I would encourage those of you who are new to this space: tell us what we can do to be more inclusive. The onus is on us to come to you, but don't be afraid to reach out equally; we have to work together in this process. And with that, I'd like to pass over for some closing remarks, before I formally conclude the session, to the Deputy Director for Internet Governance at the UK Government Department for Science, Innovation and Technology. Thank you.

Panelist:
No, and thank you, Roz. I think it's a real achievement that here we are, between six and seven on day four of the IGF, with a really good turnout and a really animated discussion. So I want to thank you, Roz, and I also want to thank your fellow panelists, Mary, Alan, Timea, and Anriette, for sharing your time. The UK government organized this session because we are really committed to multi-stakeholderism, and that's going to be so important as we head into the WSIS Plus 20 process and, before then, the Global Digital Compact. The Internet Governance Forum is a really important vehicle for that. My one reflection, and Anriette, you spoke about window dressing: we use the M-word, multi-stakeholderism, a lot, but it's really important that we mean it. First, to make sure that it remains credible; but also, if you want to enact change, you need to bring people with you. It's through listening, not just hearing, and making a reality of multi-stakeholderism that you can make change happen and make change stick. So it's a false economy to pretend or to do window dressing; you're probably just better off not doing it at all. If you're going to do it, do it properly, and that's certainly a commitment of the UK government. But I'll stop there.

Rosalind KennyBirch:
It's late, but Roz, back to you. And thank you very much again. Thank you so much, Paul. And just to say: what engagement, at 7 p.m. at night. If this community is anything, it is engaged. Thank you for your time today, and for your excellent commentary and engagement; there is lots for us to take away. Certainly we will be doing, as Paul says, all that we can to ensure that this WSIS Plus 20 review process is fully inclusive of the multi-stakeholder community. And with that, I'll leave it. I think it might be reception time, if I heard correctly. Thank you very much.

Alan Ramirez Garcia

Speech speed

160 words per minute

Speech length

418 words

Speech time

156 secs

Anriette Esterhuysen

Speech speed

172 words per minute

Speech length

1466 words

Speech time

512 secs

Mary Uduma

Speech speed

132 words per minute

Speech length

1140 words

Speech time

520 secs

Panelist

Speech speed

185 words per minute

Speech length

2645 words

Speech time

860 secs

Rosalind KennyBirch

Speech speed

157 words per minute

Speech length

2016 words

Speech time

771 secs

Timea Suto

Speech speed

182 words per minute

Speech length

1516 words

Speech time

501 secs

WSIS at 20: successes, failures and future expectations | IGF 2023 Open Forum #100


Full session report

Shamika Sirimanne

The World Summit on the Information Society (WSIS) aims to create a people-centred, inclusive, and development-oriented information society. However, this vision remains unfulfilled, as evidenced by the stark disparities in connectivity across different regions and countries. While 95% of the world’s population has access to mobile broadband networks, only 36% of people in Least Developed Countries (LDCs) are connected. This disparity highlights the existence of a digital divide, where disadvantaged populations are left behind in the global information society.

Furthermore, the digital divide is not only limited to LDCs but also extends to rural areas within developed countries. Massive rural-urban divides in connectivity exist, creating further barriers to accessing information and participating in the digital economy. This divide has become a serious development issue, as those who lack access to digital technologies struggle to reap the benefits of innovation and technological advancements.

The WSIS process and digital transformation are described as a massive technological revolution of our time. This transformation poses both opportunities and challenges for individuals, governments, and societies as a whole. Navigating the emerging world of digital technologies can be daunting, as it is largely uncharted territory. While advancements in technology have the potential to drive economic growth and foster innovation, they also bring risks, such as cyber threats and privacy concerns.

Recognising the complexity of the digital sphere, the need for multistakeholder cooperation and a One UN approach is emphasised. The WSIS highlights the importance of enhancing partnerships and collaboration among different stakeholders, including governments, private sector entities, civil society organisations, and international institutions. It is through this multilateral cooperation that the challenges and opportunities of the digital age can be effectively addressed.

Additionally, data collection is seen as crucial for fact-based reporting and decision-making. The importance of accurately collecting data through questionnaires is emphasised, as it allows for evidence-based analysis and monitoring progress towards achieving the Sustainable Development Goals (SDGs). By capturing and reporting relevant data, policymakers and stakeholders can make informed decisions and take targeted actions to bridge the digital divide and promote an inclusive information society.

In conclusion, the WSIS vision of an inclusive and development-oriented information society remains relevant and unfulfilled. The digital divide persists, with significant disparities in connectivity between different regions and populations. Navigating the digital transformation brings both opportunities and challenges, necessitating multistakeholder cooperation and a One UN approach. The collection of accurate and comprehensive data is essential for effective decision-making and monitoring progress towards achieving the SDGs.

Audience

Audience interventions centred on the importance of the World Summit on the Information Society (WSIS) process and the role of youth in it. The youth voice was emphasised, with advocates calling for greater representation and inclusion in the WSIS process. They also stressed the need for initiatives that improve access to education and empower young people. This underlines the importance of involving youth in shaping policies and decisions related to the digital world.

Artificial Intelligence (AI) was recognised for its potential in advancing human development and the Sustainable Development Goals (SDGs). It was noted that AI can enhance education and healthcare access, help address global challenges like climate change and poverty, and improve accessibility for persons with disabilities through assistive technologies. However, concerns were raised about potential threats to data privacy, the creation of surveillance systems, and the emergence of new forms of discrimination and exclusion. It was argued that AI should be developed and employed in a manner that upholds human rights and follows ethical guidelines.

The alignment of the WSIS with various UN processes was discussed, highlighting the importance of digital cooperation. The WSIS was seen as setting the foundation for digital cooperation and being aligned with different UN processes, including the WSIS Forum, special initiatives, and prizes. This emphasises the collaborative efforts needed between stakeholders to achieve common goals in the digital realm.

The impact of digital transformation was examined, noting its success in terms of technological advancements but also its shortcomings in terms of knowledge dissemination. Issues such as disinformation, misinformation, hate speech, and fake news were identified as challenges of digital transformation. The monopolisation of digital platforms in the media landscape was also highlighted, with significant effects on community publishing and media diversity.

The need for periodic reviews of the WSIS process was stressed, as well as the importance of acknowledging progress and evolution in Internet governance. It was argued that the presence of 176 indicators in the SDGs serves as Key Performance Indicators (KPIs) and distinguishes them from the Millennium Development Goals. The inclusion of low-literate individuals in the digital space was advocated, highlighting the current lack of internet tools and platforms designed for those who do not know how to read or write.

Overall, the speakers expressed a mix of positive and negative sentiments, advocating for responsible development and utilisation of AI, digital inclusion, and closer alignment with the SDGs. Collaboration, periodic reviews, and a multi-stakeholder and human-centric approach were seen as crucial in achieving sustainable digital development. The discussion offered valuable insights into the challenges and opportunities associated with the WSIS process and the role of digital technologies in advancing societal and development goals.

Moderator – Ana Cristina Ferreira Amoroso Das Neves

The WSIS Plus 20 review at CSTD takes a progressive and forward-thinking approach, focusing not only on the present but also on planning for the future. This review is in alignment with SDG 9: Industry, Innovation, and Infrastructure. The General Assembly adopted a resolution for a high-level meeting to be held in 2025, known as the WSIS Plus 20 review. To facilitate this process, the ECOSOC adopted a resolution requesting the CSTD to organize substantive discussions on the progress made in the implementation of the outcomes of the WSIS over the past 20 years. Furthermore, CSTD members have adopted a roadmap to guide their work on the WSIS Plus 20 review.

Stakeholder engagement with open consultations and a survey questionnaire is a vital part of the WSIS Plus 20 review. The CSTD’s Secretariat plans to conduct a survey questionnaire from late 2023 until late 2024. Additionally, open consultations will be held at national, regional, and international levels, involving various stakeholders such as multilateral agencies, the private sector, the technical community, national governments, civil society, and academia. This inclusive approach ensures that the review takes into account a wide range of perspectives and experiences in shaping the future of the WSIS.

The Internet Governance Forum (IGF) plays a significant role in the WSIS process and is an outcome of it. The CSTD is grateful to the IGF for providing a platform to launch its open multi-stakeholder consultation at the 18th IGF. This collaboration allows for meaningful discussions and engagement with different stakeholders to inform the WSIS Plus 20 review.

The WSIS forum not only aligns with several UN processes but also highlights the role of digital transformation in achieving sustainable development goals. There are examples of the alignment of the WSIS forum, special initiatives, and WSIS prizes with various UN processes. Additionally, the UN promotes the role of digital technology in healthy ageing, further emphasising the importance of digital transformation for overall development.

Given the positive progress made in the WSIS process, the moderator invited Cedric Vashvalt to continue the discussion on digital policies and transformation, suggesting that he possesses valuable insights and perspectives in this area.

In conclusion, the WSIS Plus 20 review at CSTD takes a forward-thinking approach by focusing on future planning. Stakeholder engagement through open consultations and a survey questionnaire plays a crucial role in shaping the review’s outcomes. The IGF provides a platform for open multi-stakeholder consultation, and the WSIS forum aligns with various UN processes, emphasising the role of digital transformation in achieving sustainable development goals. Cedric Vashvalt should continue the discussion on digital policies and transformation as he is seen as an important contributor in this domain.

Speaker 1

The Government of Japan has reiterated its commitment to the multistakeholder approach in internet governance during the Internet Governance Forum (IGF) held in Kyoto in 2023. This event saw a remarkable registration of over 8,000 participants, indicating widespread interest in global internet governance issues. Mr. Yasunori Ueno delivered a speech on behalf of Mr. Yoshi Ida, affirming Japan’s dedication to the multistakeholder approach, which involves collaborating with various stakeholders, including governments, civil society, and the private sector, to shape internet policies.

The principle of a people-centred, inclusive, and development-oriented internet society has remained unchanged since 2005. Despite the immense changes and advancements in the internet landscape over the years, the focus on placing people at the centre of internet policies, ensuring inclusivity, and fostering development, continues to be paramount.

The significance of a free, open, and global internet was emphasised. It was highlighted that such an internet is vital for socio-economic development, as it provides opportunities for innovation, entrepreneurship, and knowledge-sharing. Additionally, it helps in debunking misinformation by promoting the free flow of accurate and reliable information. It also enhances cybersecurity, ensuring that individuals and organizations can conduct activities online with confidence and trust.

As the G7 presidency, Japan is playing a leading role in discussions on AI governance through the Hiroshima AI process. This initiative recognizes the importance of establishing principles and guidelines to govern the ethical and responsible use of artificial intelligence, considering its potential impact on various sectors of society.

Japan views the IGF Kyoto as a crucial opportunity, especially in anticipation of the upcoming WSIS plus 20 review in 2025. The WSIS plus 20 review refers to the United Nations’ World Summit on the Information Society, a series of gatherings since 2003 aimed at bridging the digital divide and harnessing the potential of information and communication technologies for sustainable development. By participating in the IGF Kyoto, Japan seeks to contribute to the discussions and preparations for the WSIS plus 20 review, ensuring that the outcomes align with the goals and principles of the multistakeholder approach and advance partnerships for global development.

In conclusion, the Government of Japan’s reaffirmation of commitment to the multistakeholder approach in internet governance, the enduring principles of a people-centred internet society, the importance of a free and global internet, and its active involvement in AI governance discussions demonstrate Japan’s dedication to promoting inclusive, secure, and responsible internet governance practices. Through its participation in the IGF Kyoto and its leadership role in the Hiroshima AI process, Japan aims to play a significant role in shaping the future of internet governance and contributing to global partnerships for sustainable development.

Kamel Saadaoui

The analysis of the speakers’ discussions highlighted several important points regarding internet governance and the role of international institutions. Firstly, it was noted that, despite significant technological advancements since the Tunis Agenda and Outcomes in 2005, these agreements continue to hold relevance in promoting an open, resilient, unfragmented, and inclusive internet. The Tunis Agenda endorses the importance of human rights and cultural diversity in the digital space. The speakers pointed out that while platforms like social media, artificial intelligence, clouds, and blockchains have emerged since the Tunis Agenda, its recommendations have maintained their importance and applicability.

Furthermore, the analysis identified improvements in institutions like the Internet Corporation for Assigned Names and Numbers (ICANN) and the International Telecommunication Union (ITU) with regards to transparency, accountability, and support for international domain names. However, it was suggested that further improvements could be made, particularly in increasing the government’s participation in ICANN beyond an advisory level.

Another significant point raised during the discussions was the need to reconsider the framework of enhanced cooperation. The analysis highlighted that developing countries often struggle to engage with major platform providers on an equal-to-equal basis, especially in areas such as taxation and local rules for personal data protection. Additionally, emerging issues such as cyber threats, misuse of the internet for money laundering, and human trafficking require cooperative efforts among nations. The speakers proposed that a reconsideration of the framework of enhanced cooperation is necessary to effectively address these challenges.

The analysis also emphasized the importance of monitoring the potential digital gap between regions and social groups within each country. It was suggested that local digital problems should be addressed locally, and each country should actively monitor and work towards minimizing disparities in internet access and usage.

Lastly, the analysis highlighted the significance of supporting institutions involved in internet governance, such as ICANN, ITU, World Intellectual Property Organization (WIPO), World Trade Organization (WTO), and United Nations Educational, Scientific and Cultural Organization (UNESCO). These institutions play a crucial role in ensuring a stable internet and should be supported to foster a stable and inclusive digital environment for all nations.

In conclusion, the analysis of the speakers’ discussions identified the continued relevance of the Tunis Agenda and Outcomes, improvements in institutions like ICANN and ITU, the need for reconsideration of the framework of enhanced cooperation, the importance of monitoring the digital gap, and the necessity of supporting institutions involved in internet governance. These observations provide insights into the ongoing efforts and challenges in shaping an open, inclusive, and secure internet.

Anita Gurumurthy

The analysis provides a comprehensive overview of digital cooperation and the challenges posed by the data and AI economies. The first argument highlights that the promise of collective digital potential has not been realised. It notes that algorithms today are largely opaque and indiscernible to the public. This lack of transparency creates concerns about accountability and the power wielded by AI systems. Furthermore, the analysis suggests that this algorithmic society is also a society of fragmentation, as AI systems often lead to polarisation and echo chambers, hindering the creation of a cohesive and inclusive digital space.

The second argument focuses on the neocolonial dynamics embedded within the data and AI economy. The analysis points out how trade forums are being misused for discussions on data flows, with potential negative consequences for privacy and data protection. Additionally, it highlights how data for development initiatives are frequently extractive, benefiting powerful global entities at the expense of local communities and economies. Moreover, the weaponization of intellectual property regimes by big tech companies further exacerbates the imbalance of power, rendering the situation even more concerning.

In response to these challenges, the analysis proposes a four-pronged strategy for digital cooperation. The first prong advocates for initiating a consensus for a global digital human rights constitutionalism. This would entail establishing a set of principles and standards that safeguard individuals’ rights in the digital sphere, addressing issues such as privacy, freedom of expression, and equitable access to technology.

The second prong focuses on better governance of global data public goods. The analysis argues for the need to develop robust frameworks and mechanisms that ensure responsible data management, protection, and sharing. This would involve addressing issues of data ownership, control, and fairness to promote more equitable data ecosystems.

The third prong of the proposed strategy calls for the mobilisation of public financing to galvanise digital innovation ecosystems. The analysis recognises that public investment is vital to foster innovation, particularly in areas where the private sector may not prioritise development due to market limitations or social impact considerations. By channelling public funding strategically, the aim is to nurture digital entrepreneurship and create a conducive environment for sustainable technological advancements.

Finally, the fourth prong suggests the internationalisation of internet governance. The analysis argues that internet governance should be a collective effort involving multiple stakeholders on a global scale. This would facilitate a more inclusive decision-making process and ensure that diverse perspectives are represented. By internationalising internet governance, the aim is to create a more balanced and democratic digital ecosystem that respects the interests of all nations and individuals.

In conclusion, the analysis highlights the complexity and urgency of addressing the challenges posed by the data and AI economies. It emphasises the need for greater transparency, accountability, and collaboration in digital cooperation. The proposed four-pronged strategy provides a comprehensive framework to navigate these challenges, with recommendations ranging from the protection of digital human rights to the internationalisation of internet governance. By implementing such strategies, it is hoped that the potential of the digital era can be harnessed for the benefit of all, fostering innovation, inclusivity, and equitable development.

Pearse O’Donohue

The analysis of the discussions reveals several key points from the different speakers. Firstly, the multi-stakeholder model is viewed positively as an effective instrument for internet governance. The speakers acknowledge that the model and the World Summit on the Information Society (WSIS) have played a vital role in the unprecedented success of the internet. They highlight the importance of a cooperative approach, as demonstrated by the Internet Governance Forum (IGF), in making effective decisions for the governance of the internet.

Secondly, human rights are seen as crucial to maintaining an open, free, and secure online space. The EU strongly supports a proactive approach towards human-centric digitalization, emphasizing the impact of artificial intelligence (AI) technologies. They advocate for human rights to be the foundation of an open and secure online environment. The EU AI Act is highlighted as a significant step towards placing the impact of AI technologies at the centre of digitalization efforts.

Furthermore, the importance of bridging digital divides and creating a more inclusive digital future is emphasized. The EU and its member states are committed to deploying digital networks and infrastructures worldwide, focusing on underserved regions, countries, and populations. This commitment aligns with SDG 9, which aims to promote industry, innovation, and infrastructure. The EU’s efforts aim to ensure that everyone has equal access to the benefits of the digital world.

Additionally, the analysis reveals opposition to the centralization of control over the internet. One speaker explicitly stated their stance against the centralization of control, but no further supporting facts were provided for this argument.

Lastly, strengthening the role of the IGF is seen as crucial in fostering an inclusive, open, and sustainable digital environment. The EU believes that it is critical for the IGF to evolve into an even more impactful and inclusive model. This aligns with the goal of creating a digital environment that encompasses diverse perspectives and promotes cooperation among stakeholders.

In conclusion, the analysis highlights the significance of the multi-stakeholder model for effective internet governance. Human rights are deemed essential for an open and secure online space, and the EU is committed to bridging digital divides and creating a more inclusive digital future. Strengthening the role of the IGF is seen as crucial in fostering an inclusive, open, and sustainable digital environment. Opposition to the centralization of control over the internet is also stated.

Isabelle Lois

The analysis highlights several key points made by the speakers regarding various aspects of internet governance. Firstly, inclusive, transparent, and multi-stakeholder processes are deemed to be of utmost importance in effectively addressing the digital governance challenges at hand. The past Internet Governance Forums (IGFs) have successfully applied such processes, demonstrating their effectiveness in fostering collaboration and achieving meaningful outcomes.

Furthermore, trust-building between stakeholders is acknowledged as a crucial element in realizing the vision of the World Summit on the Information Society (WSIS). It is widely believed that establishing and nurturing trust among different actors involved in internet governance is essential for creating an environment of cooperation, enabling collective efforts towards achieving common goals.

The analysis also emphasizes the need to empower individuals and centre the governance of emerging technologies. The speakers argue that putting individuals at the forefront and ensuring their equal participation in shaping the governance of emerging technologies is vital. This approach aims to maximize the societal benefits of these technologies while mitigating potential risks and ensuring that they serve the needs and interests of all.

Furthermore, it is asserted that participation in IGFs must exhibit consistency, inclusivity, and representation from all regions of the world. This will contribute to a diverse range of perspectives and ensure that the global internet governance dialogue reflects the varied needs and challenges faced by different regions. The analysis suggests that the Commission on Science and Technology for Development (CSTD) could seek synergies with national and regional IGF initiatives to enhance inclusivity and improve regional representation in the overall IGF process.

In addition, the analysis highlights the importance of expanding internet access to the remaining 2.6 billion individuals who are currently not connected. It is noted that while significant progress has been made over the past two decades, with internet access increasing from 6% to approximately 70% of the global population, concerted efforts are still required to bridge the digital divide and ensure universal connectivity.

Lastly, gender inclusion in artificial intelligence (AI) and its governance is recognized as a critical aspect. The analysis suggests adopting a gender lens and valuing women’s perspectives in all aspects of internet governance and decision-making related to AI. This approach aims to address existing gender disparities and biases, ensuring that AI technologies and policies are developed in a way that promotes gender equality and inclusivity.

In conclusion, the analysis underscores the significance of inclusive, transparent, and multi-stakeholder processes in addressing digital governance issues. Trust-building, empowering individuals, regional representation, universal internet access, and gender inclusion in AI governance are key focus areas identified by the speakers. These points highlight the collective efforts required to ensure that internet governance initiatives are equitable, efficient, and responsive to the evolving needs and challenges of the digital age.

Robert Opp

According to one speaker, ICT (Information and Communication Technology) is considered an absolute mega trend and plays a crucial role in driving global changes. It is seen as a super mega trend that has a positive impact on various aspects of society. The speaker highlights the significant role of ICT in shaping industries, innovation, and infrastructure, aligning with SDG 9: Industry, Innovation, and Infrastructure.

On the other hand, another speaker raises concerns about the urgency of addressing ICT-related issues. They emphasize that the pace of global changes is accelerating, necessitating immediate action. Undoubtedly, ICT advancements have brought about rapid changes in various domains, such as technology and communication. However, the negative sentiment expressed by this speaker suggests that there are potential challenges and risks associated with these changes. They stress the need to address these issues promptly, emphasizing the importance of SDG 13: Climate Action.

Both speakers agree on the critical role of UN partnerships and multi-stakeholder groups. They acknowledge that the collaborative efforts of these entities are essential in tackling the complex challenges posed by ICT-related issues. These partnerships and groups provide a platform for various stakeholders to come together, share expertise, and develop effective strategies. The positive sentiment expressed towards these partnerships highlights their significance in addressing not only ICT-related concerns but also broader sustainable development goals. SDG 17: Partnerships for the Goals is particularly relevant in this context.

The analysis reveals that despite the positive outlook towards ICT’s contributions to global changes, there is an underlying sense of urgency and a recognition of potential risks. It is essential to strike a balance between harnessing the benefits of ICT while mitigating any negative implications. The importance of collaboration, as indicated by the positive sentiment towards UN partnerships and multi-stakeholder groups, further underscores the need for collective action in navigating the challenges presented by ICT advancements.

Overall, the discussion highlighted ICT as a mega trend driving global change, the urgency of addressing the risks that accompany it, and the significance of UN partnerships and multi-stakeholder groups in navigating those challenges. It reflects the contrasting perspectives surrounding ICT advancements and the need for collective, prompt action.

Prateek Sibal

UNESCO, the United Nations Educational, Scientific and Cultural Organization, plays a crucial role in addressing digital challenges through the principles of multistakeholderism and cooperation. They co-facilitate action lines, demonstrating their commitment to involving multiple stakeholders in finding effective solutions to digital challenges. This approach is essential in promoting partnerships and collaborations to achieve sustainable development goals, particularly SDG 17: Partnership for the goals.

Furthermore, UNESCO actively works towards assessing internet environments in about 45 countries. Their approach is based on a rights-based, open, accessible, and multistakeholder approach. This demonstrates their dedication to ensuring that internet access is not only available but also respects human rights and fosters an inclusive digital society. However, despite UNESCO’s efforts, there have been approximately 1,200 internet shutdowns globally between 2016 and 2023. This highlights the ongoing challenges faced in ensuring universal access to the internet and the need for continued efforts to address these issues.

In addition to addressing connectivity challenges, UNESCO promotes open access to information. They celebrate Access to Information Day on the 28th of September, emphasizing the importance of transparency and access to information in fostering sustainable development. This commitment aligns with SDG 16: Peace, Justice and Strong Institutions, which emphasizes the importance of promoting accountable and inclusive societies.

Another significant challenge in the digital age is the prevalence of disinformation and misinformation. UNESCO recognizes this and has developed programs to support the upscaling of civil society organizations and fact-checking mechanisms. This effort aims to enhance media and information literacy, which is crucial in combating the spread of false information. By supporting these initiatives, UNESCO contributes to achieving SDG 16 and building resilient societies.

Moreover, UNESCO implements standard-setting instruments to facilitate open science and ethical considerations regarding artificial intelligence. They have various standards in place, including recommendations on open science, the ethics of AI, and open educational resources. This commitment to setting global standards promotes the responsible use of technology and ensures that scientific and educational resources are freely available to all.

In conclusion, UNESCO plays a significant role in addressing digital challenges through multistakeholderism, cooperation, and various initiatives. Despite the persistent challenges, such as internet shutdowns and disinformation, UNESCO’s efforts to assess internet environments, promote open access to information, enhance media literacy, and implement standard-setting instruments demonstrate their commitment to building a more inclusive and sustainable digital society. Their work aligns with the UN’s sustainable development goals, emphasizing the importance of partnerships, transparency, and ethical considerations in harnessing the potential of digital technologies for the benefit of all.

Anna Margaretha (Anriette) Esterhuysen

The World Summit on the Information Society (WSIS) has received high praise for its impactful process in facilitating participation and producing unique outcome documents. These documents are notable for incorporating the perspectives and involvement of non-state actors, reflecting a comprehensive and inclusive approach. The WSIS documents strike a balance between broad overarching principles and specific subject areas, providing a holistic yet detailed framework.

There is a strong call to continue building upon the WSIS process, advocating for inclusivity and increased civil society involvement at various levels. Enabling civil society to shape debates at the grassroots level is seen as crucial in bringing about significant change. It is proposed that WSIS provides space for civil society involvement, and collaboration within the U.N. system is needed to facilitate greater engagement.

Lessons from WSIS are valuable in addressing macro issues such as public financing and digital public infrastructure. The issue of insufficient public financing is identified, and considering digital public infrastructure in the light of WSIS lessons can provide innovative solutions. By drawing upon WSIS experiences, these issues can be effectively tackled.

The WSIS documents stand out by focusing on people-centered development. They emphasize human rights, open innovation, and open source as important factors in creating an enabling environment. This approach is demonstrated in efforts to bring education to remote areas and promote trade justice and small-scale agriculture. Emphasizing the needs and rights of individuals and communities, rather than solely focusing on technological advancements, is crucial for achieving fair and inclusive development.

In conclusion, the WSIS process is praised for facilitating participation and generating unique outcome documents that incorporate the views of various stakeholders. Continuing to build on the process with a focus on inclusivity and civil society involvement is recommended. WSIS lessons are also valuable in addressing macro challenges such as public financing and digital public infrastructure. The people-centered approach advocated by WSIS is essential for promoting fair and inclusive development.

Session transcript

Moderator – Ana Cristina Ferreira Amoroso Das Neves:
big letter, so I have to put my glasses. It’s horrible. No, come on, I’m kidding, of course. So, welcome to this first open consultation on the WSIS Plus 20. I don’t think that a lot of people here in this IGF know that this is the first open consultation for the future of the WSIS, but nevertheless, we are doing that during the IGF, and I’m sure that we will get more involvement from the different stakeholders throughout the next two years. So, excellencies, distinguished delegates, ladies and gentlemen, good afternoon. A warmest welcome to the launch of the open consultation of the United Nations Commission on Science and Technology for Development, the CSTD, on the WSIS Plus 20 review, as an open forum event at the Internet Governance Forum 2023. So, I am Ana Cristina das Neves, and I’m chair of the CSTD. And today, I’m moderating this event entitled WSIS at 20: Successes, Failures, and Future Expectations, a partnership between the CSTD, the ITU, UNESCO, and UNDP, all key actors of the WSIS. As you know, the WSIS vision is to establish, and now I’m going to quote some Geneva Action Plan text, a people-centered, inclusive, and development-oriented information society for enhancing the potential of information and communication technologies for sustainable development. So, we have different questions. To what extent and how has the vision of a people-centered, inclusive, and development-oriented information society evolved over the past 20 years since WSIS? We are still talking nowadays about people-centered, human-centric, inclusiveness, and development. In 2003, we were talking about the information society. Nowadays, we are talking about digital. We have to understand what we mean by that, but we know that from the Tunis Agenda of 2005 this Internet Governance Forum was set up, and we had all these problems and diagnoses already, and we were trying to make the world a better place to live.
How will ongoing trends and emerging technologies, nowadays particularly artificial intelligence, but also so many other emerging technologies such as quantum technologies, 3D printing, et cetera, impact progress towards human development and the sustainable development goals? Moreover, how can these trends enable or hinder the realization of the WSIS vision? What measures should be taken to advance international cooperation, including in terms of governance, to leverage emerging technologies for sustainable development in its economic, social, environmental, and cultural dimensions? These are some of the questions that we invite you to consider today. Put simply, the questions could be: how much progress have we made towards that vision, and what challenges remain in the way ahead? Those are also issues that the CSTD has been addressing in implementation of its ECOSOC and General Assembly mandate to review the implementation of the WSIS outcomes, including through annual reports to the ECOSOC and to the General Assembly, and the contribution of input to the WSIS Plus 10 review by the General Assembly in 2015. And as some of you may remember, the General Assembly adopted a resolution on the 16th of December, 2015, and in that resolution called for a high-level meeting to be held in 2025 to review the overall implementation of the outcomes of the World Summit on the Information Society, known as the WSIS Plus 20 review.
For the WSIS Plus 20 review, ECOSOC, the Economic and Social Council of the United Nations, adopted a resolution in June 2023 that requests the CSTD to collect inputs from member states, all facilitators, and other stakeholders, and to organize, during its upcoming session in March 2024 and its 28th session the year after, in 2025, substantive discussions on the progress made in the implementation of the outcomes of the WSIS during the past 20 years, and to report thereon through ECOSOC to the General Assembly. The CSTD members adopted a roadmap at the annual session last March, 2023, to guide the CSTD’s work on the WSIS Plus 10, sorry, 20 review. Here is a snapshot of the roadmap. So you can see the roadmap. And thanks, Eva, for sharing the slide. You can see on that slide that in March 2023 it was determined that the CSTD has a major role in the WSIS Plus 20 review, and that the CSTD has to produce a synthesis report, whose first draft will be presented to the CSTD in March 2024, and whose final draft will be presented at the 28th session of the CSTD in 2025. The CSTD outcomes will be submitted via ECOSOC to the General Assembly’s WSIS review in 2025. This open consultation will be held at national, regional, and international levels, and with multilateral agencies, the private sector, the technical community, national governments, civil society, and academia. So we have these two years ahead of us until a resolution is adopted in UNGA, the United Nations General Assembly, in 2025. I wish to underline that the WSIS Plus 20 review at the CSTD will take a progressive and prospective, or forward-looking, approach, not only looking at the present, but more importantly looking to the future. Equally, I want to emphasize that an integral part of the CSTD’s WSIS Plus 20 review is the engagement of all stakeholders in open consultations and in the survey questionnaire that the CSTD Secretariat will conduct from late 2023 until late 2024.
The CSTD is grateful to the Internet Governance Forum for giving the space to the CSTD to launch its open multi-stakeholder consultation at the 18th IGF in this ancient and beautiful city of Kyoto. Internet governance is an important component of the WSIS process, and of course, the IGF itself is an outcome of WSIS. But it’s important to remember that the WSIS process covers all aspects of digitalization, including the issues concerned with aspects of development and the environment that are discussed in the WSIS Forum and elsewhere. The objective of today’s event is to enable a candid and open dialogue drawing on the collective wisdom, perspectives, and experiences of the various stakeholders. The insights and recommendations stemming from today’s interactions will undoubtedly contribute to the synthesis report to be prepared by the CSTD Secretariat, which is intended to shape substantive discussions at the CSTD’s WSIS Plus 20 review sessions as mandated by ECOSOC. Our session is structured as follows. Ms. Sirimanne, Head of the CSTD Secretariat and Director of the Division on Technology and Logistics, UNCTAD, will make opening remarks. And Mr. Ueno, on behalf of Mr. Yoichi Iida, Deputy Director General for G7 and G20 Relations, Government of Japan, will deliver a keynote speech. He will be followed by six discussion starters who will share with us their views and insights. Thereafter, I will open the floor to both in-person attendees and remote participants for a freestyle roundtable discussion under my moderation, including judicious time management. I would like to remind remote participants that they should request the floor through the hand-raising feature on the Zoom platform. Ladies and gentlemen, it is now my great pleasure to invite Ms. Shamika Sirimanne, Director of the Division on Technology and Logistics of UNCTAD and Head of the CSTD Secretariat, to make opening remarks. Ms. Sirimanne, the floor is yours.

Shamika Sirimanne:
Good afternoon, good morning, and good evening to all of you joining online. So let me take the opportunity to join the chair of the CSTD in welcoming you to this CSTD Open Consultation on the WSIS Plus 20 Review. And I think we all agree that the WSIS vision of a people-centered, inclusive, and development-oriented information society remains as valid as ever, and also quite unfulfilled. So the journey has not ended. And in fact, it is quite concerning. We have enormous concerns when we know that, although 95% of the world’s population today lives within range of a mobile broadband network, in LDCs just 36% of people are connected, and these are concerns. And women remain digitally marginalized in many of the world’s poorer countries, there are massive rural-urban divides, and there are many multifaceted divides. And I think we have also seen that these divides have become a quite serious development divide. It is no longer just a digital divide; it has transformed into a development divide, and we experienced that during COVID-19 times. Those who had access to the internet managed to live, to work, to have education online, and to buy things online, while those who were not connected were basically locked out of the functioning world, and that is the concern. So as our CSTD chair, Anna, said, today the CSTD is opening a consultation process, and we want your input on the lessons of 20 years of WSIS implementation. And from those lessons, we also need to understand, from your own viewpoints, how we are going to navigate this emerging world. You know, we are going through a massive digital technological revolution, and this is probably the technological revolution of our lifetimes. We are walking into uncharted territory, as we have heard in many, many rooms of the IGF. So we want to hear from you about the new themes, the threats, and the opportunities that need responses from the UN system.
We also ask you as WSIS stakeholders, you who ushered in WSIS, to tell us about ways to ensure that the WSIS process contributes to preserving and improving multilateral cooperation in the digital sphere. I also want to say that as the CSTD Secretariat, we are working very closely with the other key WSIS players and our great partners, such as the ITU, UNESCO, UNDP, and the regional commissions, in this review. And I am very happy to have all of us coming together, because we will all do our own consultations, since we all have our own action lines, but we will all merge our inputs into the WSIS process as we prepare the material for the General Assembly. So I’m very happy to let you know that this is done with a one-UN kind of approach. And I’m also very happy to say, as Anna said earlier, that this is truly a multi-stakeholder consultation, and that’s what we need. It is not just governments that can navigate this emerging world, and not just the international organizations or academia; civil society, the private sector, we all need to be at the table and we all need to put forward our voices. So I would now like to showcase the timeline of the various WSIS processes over the years, showing the different agencies’ contributions. You may not be able to see it very well, but we will share this document with you, because it shows how the processes are going to converge, so that we will not walk on parallel roads. And I also want to emphasize this example of interagency collaboration: we have jointly circulated a questionnaire to seek stakeholder inputs for the WSIS Plus 20 review, and please help us circulate it among your networks. It is extremely important that we get proper data, otherwise it is all going to be anecdotal, somebody said this and somebody said that. So we really are looking to collect data and have a report which is fact-based. So please help us; I think we spent a lot of time on this.
Yeah, it’s a simple questionnaire, but we spent time to make sure that we get good data, so please also do that. So let me end here, and I look forward to a productive discussion. And just as Anna said, this is just the beginning. We will work with our regional commissions, we will have regional consultations, and we will have consultations wherever you open a room for us. Thank you.

Moderator – Ana Cristina Ferreira Amoroso Das Neves:
Thank you very much for your very insightful and interesting remarks and reflections on WSIS. And I would use this opportunity to invite everyone to read or to revisit the Geneva Principles, the Geneva Action Plan, and the Tunis Agenda. Maybe it’s good for everybody to reread these documents, because from three days of IGF, it seems like some of us have never read them. And now that we are discussing WSIS Plus 20, it’s very important to have those in mind. Now I’m pleased to invite Mr. Ueno, who will read the keynote speech on behalf of Mr. Yoichi Iida, Deputy Director General for G7 and G20 Relations, Government of Japan. So, Mr. Ueno, please deliver Mr. Iida’s keynote speech.

Speaker 1:
Thank you, Chair. Hello, ladies and gentlemen. My name is Yasunori Ueno, from the Ministry of Internal Affairs and Communications, Japan. My boss, Mr. Iida, was planning to deliver a keynote speech here, but he could not attend this session because of urgent business, so please allow me to deliver his speech on his behalf. Firstly, I would like to say welcome to Kyoto, and welcome to IGF 2023. I appreciate both the physical participants and the remote participants. I heard that the number of registrations for IGF Kyoto is now over 8,000. We as the host country are very glad with this figure. This number also shows how important the IGF is. The Government of Japan has strongly committed to the multistakeholder approach for internet governance. In this IGF, it is important to respect the efforts of the past 17 editions of this event and to build on new efforts. From this perspective, we actively participate in the leadership panels, and we have contributed from the standpoint of the host country. As the concept note of this session points out, the current digital society has greatly changed since 2005, but the idea of a people-centered, inclusive, and development-oriented information society has not changed. For this purpose, a free, open, and global internet is important. In this context, we should continue making our efforts to bring about socio-economic development so that no one is left behind by human-centered innovation. Also, to enhance the reliability of the internet, it is important to address the issues of misinformation and cybersecurity. It is also important for the international community to develop digital infrastructure and address the issue of the digital divide. The idea of a people-centered, inclusive, and development-oriented information society applies equally to AI governance, which is currently an important issue for the international community. Currently, the Government of Japan, as the G7 presidency, is leading the discussion on AI governance through a process named the Hiroshima AI Process.
We are planning to advance the Hiroshima AI Process in cooperation with the United Nations. With the WSIS Plus 20 review coming up in 2025, I believe the IGF Kyoto this year is a very important opportunity. We are looking forward to sharing and exchanging opinions and ideas among the various stakeholders. Thank you very much.

Moderator – Ana Cristina Ferreira Amoroso Das Neves:
Thank you, Mr. Ueno, very much for sharing Mr. Iida’s deep thoughts and for setting the scene for our discussions today. And congratulations on the 8,000 registrations. That is huge. Nobody can say that it is not huge and powerful. Wow. Congratulations again. So next, I will invite the five discussion starters to share their views and insights with us. Each speaker has four or five minutes at most, and then it will be open for everyone in this room and online, of course, to make a statement or to say whatever they want. So I will start with Ms. Isabelle Lois. Lois? Lois. Yes, the Spanish family. Senior policy advisor at the Swiss Federal Office of Communications. I think that you are going to reply to a question, which is: what needs to be done in international cooperation for a better achievement of the WSIS vision? Isn’t that right? Please. Thank you.

Isabelle Lois:
Thank you, Anna. So it’s been two decades since the first World Summit on the Information Society took place in Geneva, and I would like to start by acknowledging the tremendous progress that has been achieved since then. Inclusive, transparent, and multi-stakeholder processes have proven to be essential in addressing the complex issues of digital governance. And we have learned that cooperation, exchange of information, and joint identification of the relevant cross-sectorial issues are key in this process, as well as fostering strong partnerships between all the stakeholders. Building trust between stakeholders and taking action against prejudice is crucial. As we look forward to the future and the WSIS Plus 20 review, our task is clear. We need to strengthen the processes that empower individuals, regardless of their gender, age, or origin. We need to center the governance of emerging technologies on sustainable development in its economic, social, and environmental dimensions. As a government representative, I have to remind myself and my fellow counterparts that by actively listening and taking into account the needs and knowledge of civil society, the private sector, technical communities, academia, and more, we can create an inclusive environment that thrives on constructive criticism and draws on experience that we may not have. And that is what I have been seeing in the past IGFs, and here again in this successful Kyoto edition. The WSIS has been one of the most inclusive processes to date, and the outcome reflects it. There is a sense of community and a commitment of all involved that demonstrates the power of democratic multi-stakeholder participation. However, we must ensure that this participation remains consistent, inclusive, and representative of all regions of the world.
For instance, the CSTD could seek synergies with the wide and rich network of national and regional IGF initiatives, as was mentioned before. And whilst we have made significant progress, challenges still lie ahead. Two decades ago, only 6% of the world population had access to the internet. Today, that number stands at about 70%. And the principles laid down in 2003 remain valid, especially regarding the multi-stakeholder approach. But we must build upon the knowledge and experience we have gained since, and focus on connecting the remaining 2.6 billion individuals who are still disconnected, then ensuring that the connection is meaningful, and then effectively governing together based on those common principles. This can only be achieved through collaboration and inclusion. The WSIS Plus 20 review comes at a particular time, as AI and its governance is at the forefront of many minds. In this process, it is essential to adopt a gender lens as well. Women’s voices and perspectives must be included and valued in all aspects of internet governance. By doing so, we will create a human-centered, free, and secure digital world that benefits everyone. This is a perspective that many organizations are now taking very seriously; as examples, we can take the ITU or UNESCO, and we should do so as well within the CSTD process and the WSIS Plus 20 review. Switzerland firmly believes that inclusive, multi-stakeholder, and cross-silo cooperation are prerequisites for achieving our vision of a digital world that prioritizes humanity, freedom, security, and inclusivity. Let us continue to work together, breaking down the silos and ensuring that the benefits of the internet are accessible to all. Thank you, and I look forward to the discussion.

Moderator – Ana Cristina Ferreira Amoroso Das Neves:
Thank you very much, Isabelle Lois. The next discussant will be Mr. Kamel Saadaoui, Chief of the Minister’s Office at the Ministry of Communication Technologies of Tunisia. And he’s online, right? Yes. Yes. Good morning, everyone. Good morning. Good morning from Tunisia. I think that you are going to talk a little bit about international multi-stakeholder discussion: how has it evolved since WSIS, and more importantly, how should it evolve in the future? Mr. Kamel Saadaoui, I will give you four or five minutes, please. Thank you.

Kamel Saadaoui:
Thank you, Chair. Good morning from Tunisia. It’s 7.30 in the morning. So ladies and gentlemen, distinguished guests and participants, I’m honored to be with you today on this panel about the successes and challenges of the 20-year journey of the WSIS. Tunis has the privilege of being linked to this summit through its report, referred to as the Tunis Agenda and outcomes. Some of you remember that the IGF itself is an outcome of the WSIS 2005. Even if the WSIS 2005 took place before the boom of social media platforms, artificial intelligence, clouds, and blockchains, its recommendations remain relevant, promoting an open, resilient, unfragmented, and inclusive internet, and endorsing human rights and cultural diversity. Tunisia recognizes that institutions such as ICANN and IQ have gone through many improvements since 2005 by supporting internationalized domain names and allowing for more transparency and accountability, even if governments’ participation in ICANN remains at an advisory level; that can be further improved so as to have a meaningful impact. Today, emerging issues force themselves onto the agenda as high items, more than the mere technical aspects, such as artificial intelligence applications and protecting people’s privacy and personal data when using over-the-top services and social media. Developing countries cannot deal with major platform providers on equal-to-equal ground when it comes to taxation, for example, or imposing local rules for personal data protection. For that reason, Tunisia proposes to reconsider the framework of enhanced cooperation. We’re not suggesting bringing back the sterile debate on governments taking over the regulation of the internet, because the agility of the private sector, civil society, and the expert community is needed to keep leading the internet to new horizons. What we’re recommending is simply the following. The multi-stakeholder approach in internet governance remains the most appropriate.
We should keep seeking to implement equal footing and meaningful participation. ICANN, the ITU, and other technical bodies have played major roles within their respective responsibilities to ensure a stable internet. They should be supported for a resilient and unfragmented internet for all. We are facing more complicated challenges than mere technical coordination. The IGF today is more relevant for sharing the outcomes of the multiple forums and institutions involved in internet governance, and it needs secure financing for a bigger role and better outcomes. Enhanced cooperation led by the CSTD can be useful to tackle emerging issues involving nations, such as managing cyber threats, cyber terrorism, and the misuse of the internet for money laundering and human trafficking. Other international institutions involved in internet issues, such as WIPO, the WTO, UNESCO, and others, should be supported to ensure a stable internet for all nations. Each, in its areas of competence and expertise, must further develop the multi-stakeholder approach, including open consultations and transparent reporting. And finally, each country should watch out for the potential digital gap between regions and social groups, since local digital problems have to be managed locally. Thank you.

Moderator – Ana Cristina Ferreira Amoroso Das Neves:
Thank you very much. And now I will give the floor to Ms. Anita Gurumurthy, founding member and executive director of IT for Change, India. I think that you are going to give some remarks on how, given the spread of digitalization into almost all aspects of our life, WSIS should be seen and how international cooperation should be shaped. Please, Anita.

Anita Gurumurthy:
Thank you so much. It’s an honor to be here. As we move towards the WSIS Plus 20 mark, digital public policy issues, we all acknowledge, have expanded infinitely. Some of us here will recall that one of the WSIS Geneva principles in 2003 held an optimism. And I quote, we are firmly convinced that we are collectively entering a new era of enormous potential, it said. The problem today is that this promise of collective potential is broken. The AI moment is similar in many ways, and not so similar, to the Gutenberg moment in the 15th century, when Gutenberg’s letterpress printing revolutionized the world of information and knowledge. AI is moving us to a society of archiving, like the printing press did. But unlike Gutenberg’s technology, the algorithms that order society today are indiscernible to the public. The printing press shifted power into the hands of private forces, taking it away from the state and destabilizing the authority of the church. Today, the force field of knowledge is similarly controlled by corporations who seek servitude in exchange for data and information. Our connections are growing, as they did in the age of enlightenment, but algorithmic society is also a society of fragmentation. As much as the press led to suffragette movements, and in my own country, a struggle against coloniality, it also led to competing visions of the good and to bloodshed. So also, the geopolitical risks of AI for war and annihilation cannot be wished away.
The digital is indeed a lever of power, and it is no accident that those who control these technologies have little incentive to change the status quo. The WSIS did call out the respective roles of governments and public policy unequivocally, including the need to advance international cooperation, especially for the governance of digital technologies. The starting point is to recognize what ails that cooperation, and I would like to highlight, in particular, the neocolonial dynamics of the data and AI economy that demand our immediate attention at this point in the WSIS Review. The first is that trade forums are often used, quite wrongly, by the more powerful countries to frame rules for cross-border flows of data. We need a separate space, a separate forum, for the negotiations around data governance globally. Secondly, data for development initiatives tend to be extractive. They open up individuals to panspectric surveillance, normalizing dependencies of public systems on extractive private firms. Thirdly, intellectual property regimes have been weaponized by big tech, and given the prohibitive licensing costs and other barriers to entry, including patents owned by big tech firms, Global South firms find it an uphill task to scale up and leverage market share. Big tech firms also often resort to preemptive patenting to retain their competitive advantage, stifling innovation and the development of domestic industry. Fourthly, the overemphasis on personal data protection at the cost of market regulation has proven to be detrimental. This means that large tech companies, typically owned and primarily operated by white men, are extracting data from uninformed users and controlling that data to profit via predictive analytics. Unfortunately, strong data protection laws will not prevent this domination. Fifthly, the silence around development financing is very, very loud.
In fact, yesterday, The New York Times carried an article by African leaders on why African debt needs to be written off. The odds are stacked against developing countries, as pathways to digital sovereignty are really uphill. Recent shocks owing to COVID-19 have broken supply chains, and inflation is pushing many Global South nations to the verge of crises. So finally, I think there is a four-pronged strategy that is needed in digital cooperation. Firstly, we need to initiate consensus for a global digital human rights constitutionalism that is not only liberal but supraliberal, incisive enough to cut through the systemic injustices in the international economic order. Secondly, we need to effectively govern global data public goods, and it may be useful to consider rules for varying contributions from varying groups of actors, for instance through principles like common but differentiated responsibilities, as explored in various other international negotiations. Thirdly, we need to urgently mobilize public financing to galvanize digital innovation ecosystems. We often talk about digital public goods, but we don’t talk enough about public digital financing; I think we need to set that right. And the digital development tax mechanism proposed by the UN Secretary-General is particularly relevant in this regard. And fourthly and finally, I think it is still important, although it seems far away in our memories, to meaningfully internationalize internet governance. It is imperative that the internet as a global commons is governed democratically, and so we need a new arrangement to oversee the technical governance of the internet, an issue that was dropped from the policy table when there was momentum for it, but which is indeed long overdue. Thank you very much.

Moderator – Ana Cristina Ferreira Amoroso Das Neves:
Thank you very much, Ms. Anita Gurumurthy, for your interesting remarks. And now I will give the floor to Ms. Anriette Esterhuysen, senior advisor on global and regional internet governance at the Association for Progressive Communications, South Africa. So, Anriette, to what extent can the situation today be improved, and how can we improve the way in which we communicate with each other and facilitate this process? The floor is yours.

Anna Margaretha (anriette) Esterhuysen:
Thank you, Anna, and thanks very much for inviting me. I think that, you know, I’d like to respond both at the level of process and also at the level of substance. But first I want to say that we should recognize how powerful WSIS has been in terms of facilitating this process, and I think it’s very important for us to recognize that. I think Anna talked in her opening remarks about reading the WSIS outcome documents. I read them all the time, actually, and I read other U.N. outcome documents. There’s something very unique about the WSIS outcome documents: the fact that they are the outcome of contestation and of participation, participation that was mediated by non-state actors, through texts and views they submitted. We weren’t always in the room, but if we were not in the room, we were at the back of the room, we were caucusing in the corridors, and we submitted our own content and our own statements, and I think that is a very important part of the process. And I think that’s also reflected in the fact that all the groups had their own outcome documents. Our views are reflected in that. So that’s one thing, the process. Secondly, there is the fact that WSIS is granular. It has the Geneva principles, which are broad-based and people-centered, not tech-centered, unlike many of our current documents. As somebody who believes in social justice and equity from the global south, I feel the WSIS documents speak much more powerfully. It’s about people-centered development. And it has human rights, the importance of human rights. It mentions open innovation and open source. When do you get that? So it has those broad overarching principles and the emphasis on governments having to play a role in creating an enabling environment. But then it’s also granular.
It addresses some of the issues that are fundamental to having inclusive societies and effective, accountable governance. It talks about education. It talks about food security through the agriculture action line, and about media freedom. And if you look at all the action lines, they are all very relevant. And then the action line on enabling environment talks about security and trust. So I think it actually has a lot of relevance, and perhaps it’s because it was not about the Internet but about ICTs that it has been given a longevity, that it remains relevant. But it also means that it allowed space for advocacy groups that are working on social justice issues, on trade justice issues, on small-scale agriculture, on bringing education to people in remote areas who don’t have access via technology. In a sense, I think many of the responses that helped us cope with the COVID crisis, that was groundwork laid by WSIS approaches and implementation. So to go forward, I would say let’s build on that. Let’s continue to have spaces in the WSIS process with opportunity for civil society to be not just consulted, but to actually shape the debate on some of the macro issues, the issues Anita just mentioned, the issues of financing. I think the lack of sufficient public financing is one of the failures. That debate is on the table again now because we’re talking about digital public infrastructure. Can we look at that with the lessons of WSIS? And it also still has those specific subject areas. So I think the important thing is to make the process inclusive. Not just consultation, but real collaborative shaping, both at the sort of broader advocacy level, but also at the grassroots level. 
That’s where civil society comes in: the only way in which you actually achieve change is when community organizations have the power to have their own connectivity, to collaborate with small businesses, to work with local government. That’s where you actually have change on the ground. I think WSIS creates the space for civil society to be involved at all those levels. I definitely think that the IGF needs to be strengthened, and so does the WSIS Forum. And I think that the tension between the two processes being so different, the one being more global south and the other maybe being a little bit global north, is a productive tension. So let’s work with that. Let’s build on that. And then I think the final point, on civil society collaboration, also relates to the U.N. system. The U.N. system has its own diversity. It has relationships with different types of civil society organizations who work in different disciplines and different areas. I’m sure they can all work together, and I’m sure they can all be strengthened in terms of their inclusivity. But better collaboration within the U.N. system will also facilitate better collaboration and involvement of civil society. Thank you.

Moderator – Ana Cristina Ferreira Amoroso Das Neves:
Thank you very much, Anriette. I know that you read both documents and you were very much involved, and from your intervention, of course, we can acknowledge that you really read them and you were part of the process. So it’s very good to have you here, and to refresh some memories. Thank you. Now I will give the floor to Ms. Temilat Adalakun, our first speaker, who will speak under two different hats: Temilat is a youth ambassador of the Internet Society, and she is also an associate product marketing manager at Google Africa. So, Temilat, you are going to respond to two questions because of your two hats. First, from the youth perspective, as youth ambassador: what do youth want from the WSIS process? And secondly, from the private sector perspective, as a Google manager: how can ongoing trends and emerging technologies, particularly artificial intelligence, enable or hinder the realization of the WSIS vision?

Audience:
Thank you so much, Ana. Good afternoon, good morning, good evening, everyone, depending on wherever you are in the world. And standing on the existing protocol, my name is Temilat Adalakun. I’m a youth ambassador of the Internet Society and also a product marketer at Google. Just like Ana mentioned, I’ll be wearing two hats today, one as a global youth ambassador and also as an African youth, and I’m here to advocate for a more inclusive and youth-centered approach in the WSIS process. I will start with the first question. As a member of the African youth at the World Summit on the Information Society, WSIS, I say unequivocally that the youth of today are deeply connected to the digital world through the use of platforms such as media, digital payment platforms, tech hubs and incubators, and even entrepreneurship, just to mention a few. Furthermore, we want WSIS to focus on initiatives to improve access to education and to empower us to be actively represented in the social and economic programs that influence technology in our respective countries across the continent. I have a strong conviction that the WSIS process holds immense potential to be a powerful force for change and solutions in Africa and the world. It’s a platform where we can work collaboratively to develop targeted solutions and ensure that ICT contributes to the development of the future. On the second question, I would like to say that AI is a powerful tool for advancing human development and the Sustainable Development Goals. AI can enhance education and health care access, it can influence job creation, and it can help curtail global issues like climate change and even poverty. Also, AI can pave the way for a more inclusive and people-centered information society by improving accessibility for persons living with disability through assistive technologies, including screen readers and text-to-speech software, which I know we are all aware of. Furthermore, AI plays a pivotal role in providing essential services such as health care, education, and government services. 
It contributes to the diagnosis and treatment of diseases and even the delivery of educational resources to remote areas. During COVID, we saw how useful AI was in some of the initiatives that were brought about. However, AI also presents potential threats to the WSIS vision, in the form of data privacy risks, including the creation of surveillance systems and new forms of discrimination and exclusion. It is imperative to ensure that AI is developed and employed in a manner that upholds human rights and aligns with the WSIS vision. The impact of AI on human development, the SDGs, and the WSIS vision hinges on responsible development and utilization. To achieve this, it is essential that we invest in research to understand AI’s ethical and social implications. It is also important that we establish international standards and guidelines for its development and use to foster transparency and accountability, that we educate the public about AI’s potential benefits and risks, and that we teach individuals to have control over their own data and its usage. In closing, I want to reiterate that the time is now. Together we can ensure that the WSIS process truly reflects the WSIS vision, and that this vision is the foundation for the future. Thank you, and I look forward to the discussions. Thank you.

Moderator – Ana Cristina Ferreira Amoroso Das Neves:
Thank you, Temilat. Now we are almost opening the roundtable discussion, but before that, I will give the floor to our co-partners. I would like to start by thanking you for your vision, and also Shamika and the CSTD secretariat, for ensuring that it’s a joint process and that we are all working together on it.

Audience:
As we have been discussing all throughout, we have limited resources and capacities across the world, but we are all on the same team; we are all trying to get together, allowing for our differences, and joining hands in this process. Well, WSIS actually set the foundation of digital cooperation. We have been doing a lot in terms of digital cooperation, at least, you know, looking at how the UN has been working together and how we have been aligning with the different UN processes. For example, those of you who are involved in the WSIS Forum, the WSIS special initiatives, the WSIS Prizes: we have been aligning them with the Decade of Indigenous Languages with UNESCO. And, of course, we are working with the UN to promote the role of digital in healthy aging, and with so many other UN processes like the HLPF. We have been asked to align the WSIS action lines with the 2030 Agenda for Sustainable Development, highlighting the role of digital in achieving the Sustainable Development Goals. We have also been working with the UN General Assembly, where we again highlighted the importance of digital preceding the UN General Assembly. So there are great examples of what the UN has been doing in collaboration, not only with each other, but also with all the stakeholders. The WSIS Prizes are such a great example, where we work with stakeholders to highlight and elevate their projects. Thank you for being here, so many of you. I think you achieved your two minutes. So I will give the floor to Mr. 
Cédric Wachholz, the chief of the digital policies and digital transformation section at UNESCO. Thank you, chair.

Prateek Sibal:
Thank you. I’m a program specialist at UNESCO and will speak on behalf of UNESCO today. Thanks to the secretariat and the CSTD for having us here. I would echo the remarks on multistakeholderism and cooperation, and I would like to focus more on some of the thematic achievements and challenges. UNESCO has been co-facilitating or leading about six action lines. On access to information, I would like to share that between 2016 and 2023 there have been about 1,200 Internet shutdowns globally. This remains a major challenge, and UNESCO has been working on assessing Internet environments based on a rights-based, open, accessible and multistakeholder approach in about 45 countries, and this work will need to be strengthened with the support of civil society, academia and the private sector. We have been able to establish the Access to Information Day, celebrated on the 28th of September, which is a great moment for advocacy on open access to information in governments as well. As Gitanjali mentioned, there is the dimension of the Decade of Indigenous Languages, so inclusion forms an important part of the work that the UN as a whole is doing here. Media and information literacy remains a major challenge when we are talking about disinformation and misinformation, and in this domain we have strengthened our programs, including youth, building dynamic coalitions with different actors and promoting different kinds of responses, whether it comes to fact-checking or supporting civil society organizations in upscaling. On e-science, we have several standard-setting instruments which are really bringing global communities together: the recommendation on open science, the recommendation on the ethics of artificial intelligence, and the recommendation on open educational resources, which have become central tools being mobilized to build communities globally. So we are looking forward to the input and the feedback here, and we continue to remain engaged with the WSIS process and with our partners. Thank you. Okay, thank you

Moderator – Ana Cristina Ferreira Amoroso Das Neves:
very much. So now Mr. Robert Opp, Chief Digital Officer at UNDP. Hello, two minutes.

Robert Opp:
Thank you. I don’t plan to take two minutes. We’re here to listen, and I think the issues that are coming up already are extremely relevant, and it’s great to hear the themes that are emerging. If I think back 20 years and consider what’s changed: for the people in this room and those who were involved, ICT, or ICT for development, would have been considered quite important, but today you cannot deny that it is an absolute mega trend driving global change and issues worldwide, on the order of climate change, which has also become a super mega trend. And I think that means the urgency is greater than ever. We’ve done a lot in the last 20 years, but the pace of change is accelerating, and so as we look forward to WSIS plus 20 and to working together with our UN partners and all of the multi-stakeholder groups, this is what we need to keep in mind: it’s an urgent situation. So I will leave it there, and thank you, chair, and thank you

Moderator – Ana Cristina Ferreira Amoroso Das Neves:
to the secretariat. Thank you very much, sir. So now I will open our freestyle roundtable discussion, after thanking all the discussion starters. Yes, so now we have to manage all the people that are going to intervene. So I think, sir, you were first, then we have you, and three: so then the European Commission on behalf of the European Union, and then Cuba. So please, who can help me? Can you help me? Looking around. So we have already, so I have four in my mind

Audience:
here and not in the rear. Thank you very much, Ana. I’m Peter Bruck, the chairman of the World Summit Award. So you have three minutes. We started in 2003 to focus on best practice in content with ICT. It’s an Austrian member state initiative which we have run every year now for the last 20 years. We have had 12,800 participants and 1,600 winners, according to action line C7 of the WSIS Plan of Action. I think that the review is a very difficult exercise, and I’m saying this very clearly, because I think that WSIS and the WSIS process have lost half of their focus. The first focus is on the digital divide and digital inclusion; others were speaking about this, and also a friend from India. But the other side was, and still is, the question of the transformation into the knowledge society, and I would say that we have technology success but knowledge failure. And if you are looking at disinformation, misinformation, hate speech, fake news and things like this, these are issues which we need to take seriously. That’s the first point. The second point is that this is pre Sustainable Development Goals; it is Millennium Development Goals, and the difference between the Millennium Development Goals and the Sustainable Development Goals is that in the Sustainable Development Goals we have 176 indicators, which actually give us KPIs of what we have achieved. The Tunis Plan of Action does not have KPIs, and therefore we have a real issue in terms of the review, and the review will be a moving back and forth and so on. The last thing which I want to stress is that we have in the Tunis Plan of Action a line which many people have not known, C9, and that is media, and the media landscape is completely different today due to the economics of digital platform monopolization. 
This year, for the first time in human history (people are talking about the Gutenberg moment), 54% of global advertising revenues go to five American companies, and it means that for digital publishing, and I’m talking about community publishing, in Canada 567 papers have been closed in the last two months. So if you’re thinking about media diversity and things like this, then you’re really looking at a loss of all those intermediaries which we have called editorial added value, and that needs to be front and center in the review. And I’m offering the World Summit Award and the International Center for New Media and its partners in 182 countries as a partner in this exercise. Thank you

Moderator – Ana Cristina Ferreira Amoroso Das Neves:
very much. Thank you very much, and I hope that you will be able to respond to the questionnaire in written form, as everything that you said is very important, as is what everybody here is saying, but responding to the questionnaire will be very, very important. Now I’ll give the floor to Bangladesh. Bangladesh

Audience:
Madam Chair, when I entered the WSIS era, at that time I was a youth ambassador of the WSIS. Now I am a former youth. Okay, I asked my CSTD secretary to share what Bangladesh expects from the future WSIS. I’m so grateful for being given the opportunity. Bangladesh is a very unique country regarding implementation of the WSIS action lines. The Bangladesh government formed the Bangladesh Working Group on WSIS on a multi-stakeholder basis, and I am one of the proud members of the Bangladesh Working Group on WSIS. After the summit, the Bangladesh government integrated action lines C1 to C11 into the five-year plan, and CSOs also integrated C1 to C11 into their annual plans. Thirdly, as a result, the Bangladesh government, CSOs and the technical community combined have received lots of WSIS Prizes, as winners and as champions also. Fourth, CSOs and the Bangladesh government successfully addressed the COVID-19 disaster through ICT applications, that is, action line C7 on ICT applications. And as a WSIS outcome, Bangladesh has been organizing the Bangladesh Internet Governance Forum, the national Internet Governance Forum, with multi-stakeholderism. Madam Chair, we expect much from the future of WSIS. The Bangladesh government has already declared Smart Bangladesh, in line with Digital Bangladesh. There are four areas: achieving sustainable economic growth, reducing poverty, ensuring social justice, and, very importantly, creating a digital and knowledge-based society. Madam Chair, this works through four areas: one is the smart citizen, another is the smart society, another is the smart economy, as well as smart government. In conclusion, we need a WSIS Forum regularly; it is really multi-stakeholderism. Second, we are now preparing for participation in the WSIS Forum 2024, including members of parliament as well as mayors of municipal corporations, promoting local governance. 
Third, we would appreciate it if the WSIS Secretariat would publish a handbook for parliamentarians as well as mayors, like the IGF Secretariat does. It would be very useful. Fourth, the Summit of the Future and GDC processes can learn from the WSIS as well as the IGF Secretariat what multi-stakeholderism is. Thank you, madam.

Moderator – Ana Cristina Ferreira Amoroso Das Neves:
Okay, thank you very much, and now I think it’s time to give the floor to Cuba and then the European Commission. Dear colleagues, on December 12, 2003, the first phase of the

Audience:
World Summit on the Information Society ended with the adoption of its final documents by the heads of state and government of 175 countries. After arduous discussions, the developing countries succeeded in having the so-called digital divide recognized as a dimension of the existing economic and social divisions, which allowed this topic to move out of the technical debate at the expert level and become a political issue of concern to the international community. Twenty years later, it has been demonstrated without a doubt that information and communication technologies in general, and the Internet in particular, are essential tools for the development of countries, but it has also been confirmed that this beneficial impact of ICTs and the Internet is significantly lower in developing countries compared to developed countries. The non-fulfilment of many of the WSIS agreements has had a negative impact on developing countries. For example, financial mechanisms to address the challenges of using ICT for development have not been established. In addition, the application of unilateral measures not in accordance with international law and the Charter of the United Nations persists, which impedes the full achievement of economic and social development of the affected countries. Madam Chair, dear colleagues, all these issues were addressed with concern by numerous heads of state and government at the G77 and China Summit held in Havana, Cuba, last September, whose central theme was current development challenges and the role of science, technology and innovation. The final declaration of this summit reaffirmed the 2005 Tunis Agenda for the Information Society, stating that the G77 and China promote a close alignment between the World Summit on the Information Society process and the 2030 Agenda for Sustainable Development. 
It also called for a close correspondence of the WSIS process with the Addis Ababa Action Agenda and other outcomes of relevant intergovernmental processes, including the Global Digital Compact and the Summit of the Future. It was further agreed to work towards a strong and concerted position of the G77 and China to ensure that the WSIS plus 20 general review process, the Global Digital Compact and the Summit of the Future contribute to, inter alia, the achievement of sustainable development and closing the digital divide between developed and developing countries. It was reiterated that the Tunis Agenda and the Geneva Declaration of Principles and Plan of Action shall lay down the guiding principles for digital cooperation. Madam Chair, dear colleagues, I am finishing. The Declaration of Principles of the first phase of WSIS, entitled Building the Information Society: a Global Challenge for the New Millennium, established a common view of the information society, which among other attributes should be people-centered, inclusive and development-oriented. In addition, the declaration noted… Okay, I think that your main message was already conveyed. No, I will finish by saying that it’s up to us now to finally make a reality of that common vision that was envisioned 20 years ago. Thank you.

Moderator – Ana Cristina Ferreira Amoroso Das Neves:
Muchas gracias. Very well, thank you very much. Now I’ll give the floor to Mr. Pearse O’Donohue, who is going to speak on behalf of the European Union.

Pearse O’Donohue:
Thank you. Yes, I’m speaking on behalf of the European Union and its member states. We welcome the opportunity to share lessons learned from the WSIS implementation process at this crucial moment in the history of the Internet. We cannot appreciate the Internet’s unprecedented success without recognizing the vital role of the WSIS and the multi-stakeholder model it has advocated. The European Union continues to support the principles set out in the Geneva Action Plan and the Tunis Agenda, but our efforts do not end here. The multi-stakeholder model is not flawless, but it is still our most reliable instrument for effective Internet governance and the foundation for a dynamic system involving all stakeholders in the running of the Internet. We shall make every effort to ensure that it will never be replaced. The IGF is living proof that this cooperative approach works. Its value stems from adopting a vibrant multi-stakeholder approach and ensuring that voices from governments to private sector, civil society, the technical community and academia are heard and engaged in pivotal discussions on the Internet’s future and governance. The EU strongly supports a proactive and ambitious approach towards keeping human rights as the foundation of an open, free and secure online space, based on human-centric digitalization, preserving human dignity and equality of all people without discrimination of any kind, online and offline. We welcome the setting up of the UN Secretary-General’s high-level Advisory Board on Artificial Intelligence. The EU AI Act, which puts the impact of artificial intelligence technologies at the centre, may serve as a model for regulation elsewhere. But as we approach WSIS plus 20, we have a golden opportunity to bolster this framework and to reinforce our foundational multi-stakeholder principles. Establishing centralized control over the Internet and its governance system is not an option. 
Our focus should be, on the contrary, to keep its openness and freedom. This vision aligns with the SDGs. As the EU highlighted in its recent statement on the UN Global Digital Compact, swifter progress on the SDGs goes hand-in-hand with our commitment to a more inclusive digital future and to bridging the digital divides. In this regard, the EU and its member states are working hard through the Global Gateway, as Team Europe, to deploy digital networks and infrastructures worldwide, prioritizing under-served regions, countries and populations. In pursuing this digital future, it is critical that the IGF strengthens its role in fostering an inclusive, open and sustainable digital environment and evolves into an even more impactful and inclusive model. This is a shortened version of our statement, you’ll be glad to know, and the full version will be made available shortly. Thank you very

Moderator – Ana Cristina Ferreira Amoroso Das Neves:
much. And now I will give the floor to the participant behind you. Thank you very much, thank you

Audience:
Madam Chair, thank you, everybody; thank you, colleagues here. My name is Atsushi Yamanaka. I’m a senior advisor on digital transformation at the Japan International Cooperation Agency, but I actually wanted to put a question to you, as an individual who has been intimately involved in the WSIS process, especially in the financial mechanism part; I used to work in the UNDP. Now, three weeks ago, I was at the SDG Digital summit, and it struck me that the fundamental questions that we actually asked 20 years ago remain the same. Fundamental things such as digital inclusion and financial mechanisms, all these things actually remain the same, and it struck me, saying, I should have worked harder, perhaps. In 20 years, I should actually have worked harder, so I could make a difference and solve some of these fundamental challenges. Now we are at the verge, as Robert Opp said, of digital technology as a mega trend, but it was like a fad, right? It was like the ICT for development fad at the WSIS process. And then remember what happened after 2005. Everybody actually went out and said, sorry, we have had enough of this. The development partners, they left, and even the developing countries who were really excited about this, they left. So my question to you is: what can we do, how can we do something different this time? Because these two years are going to be critical. We’re going to have the Summit of the Future next year. We’re going to have the Global Digital Compact. We’re going to have WSIS plus 20 in 2025. If we cannot make a difference this time, I believe that we’re going to have the same failure we saw from 2005 to around 2012, these seven years of a dark age of digital development. 
So I urge you, all of you: can we actually come up with real concrete solutions, something with which we can make a difference, and say, yes, digital technology and ICT can actually work for everybody, to create the information society and also for the development of developing countries. Thank you. Thank you. Thank you very much. So please. Oh my God. Poncelet, please go ahead. Thank you very much. Poncelet from the Gambia NRI. I just want us to go back and look at why there are big disparities as we look into WSIS plus 20. You discover that it’s good when we have all of them, the UNDP, the ITU, UNESCO; they claim they work together at the top level. But if you go down to the bottom level, there’s no cooperation at country level, and that is why we are still dealing with big gaps in the digital divide. And the NRIs are now well-strengthened, so that they can help all these big international agencies on the ground. And I will suggest that, moving forward, you have to work with the NRIs, because the governments are part of that process. A good example I will stop at: I’m currently working on the UNESCO IUI, the Internet Universality Indicators, as the lead researcher in the Gambia. UNDP are not involved. When I contacted the UNDP country office, they didn’t know about the ROAM-X. So how do you filter down information? We have to really look at that seriously. It’s good, yes, to sit in New York or Geneva and say, blah, blah, blah, we work together. On the ground level, do you really work together? Thank you. Thank you. Thank you very much, Poncelet. Now, please. My name is Izumi Aizu. I live in Japan. I participated in all the prep processes of the WSIS. I do agree with what my friend and colleague Atsushi said. Have we done our homework? We used to have many more dissenting voices to hear, from the East and South. Where is China now, here? Iran, India, Russia, South Africa? I don’t see them. 
In the very beginning, we had heated debates about who was going to speak on this or that, and a lot of rules, but also substance. And much of it the civil society like us didn’t really agree with, but it was a very interesting, creative process in which we tried to listen to each other. And I feel like it’s gone. I might be naive, but WSIS was proposed right after the 9-11 events. There were big fears from the South that digital development would leave them out, the digital divide. But even the countries like the US or Japan, and others who have the money, wanted to let them come in and discuss together. That, to me, was the genesis of the WSIS. Where is it? I was invited to China’s World Internet Conference Dialogue on Digital Civilization in June this year. I was the only Japanese; there were only 19 participants from overseas. You can blame China for not inviting them, but that’s not the way. There will be another World Internet Conference next month; let’s see how many will join together. Also, today we heard what’s going on in Israel and Palestine. Nobody in this room really tried to address that. Does it relate to ICT? Of course. How about the things going on in Eastern Russia or on the Russian side? We carefully avoided these hot potatoes, perhaps. But I feel like ICT is very powerless. And talking about the SDGs and human rights: how about the status of women in certain countries? It went worse than 20 years ago, as we all know. So how much could ICT play? How much have we done? Of course, there are areas where it did a very good job, but we cannot just remain optimistic and ask AI for rescue or whatever. So, the multi-stakeholder approach and the human-centric approach, I agree with Anriette. Yes, it was a great result of our sweat and tears of many days in Geneva and elsewhere. So multi-stakeholder was not a given; it will erode if you don’t do more. How about climate change? 
We had the worst summer in Japan and many other places. What can technologies do, while AI consumes so many servers and so much energy? They might be regulated. So I’d really like all of you, and me, to address these real, difficult issues and not just resort to the friendly, nice, warm environment that Kyoto presents. Thank you. Thank you very much. Now, I’ll give the floor to Peter Major, please. Thank you. I want to react. I didn’t intend to react, directly or indirectly, but anyway. So, multi-stakeholderism. I think in this environment and in this CSTD Open Forum, we can be proud. We can be proud because we were one of the first who implemented, or tried to introduce, the multi-stakeholder approach in our work, in the working groups. It wasn’t simple. In the first working group, it was very controversial. In the second working group, it was smooth. So, personally, I’m proud of it. And on behalf of the CSTD, I think we can be proud of that. Probably, this is the way to go forward, not only in the CSTD, but in the whole UN. That’s for one. Hot potato: we are in the setting of the IGF. The IGF, we may think, is very relevant. That’s what we think. Do other stakeholders think the same way? Do governments think the same way? I’m afraid not. And why? Because we have the IGF with its outcomes, and it is mandated to have outcomes, but it is also mandated not to have resolutions. And if there is no resolution in the UN, you don’t exist. So how can we bridge this gap? The CSTD is mandated to review the processes within the UN system. The IGF is part of the UN system. The IGF Secretariat regularly gives a report to the CSTD. We listen to it, and we write one sentence: the IGF Secretariat gave the report. We don’t know about the content. We don’t know about any recommendations. We don’t follow up, and that is the key. We should follow up. We should have the key messages in the resolution. And we can manage. If not in the resolution, because it is a very hot debate all the time, and I have the experience, then we can manage to have it in the annex of the chair’s report. But we should forward it to ECOSOC, and through ECOSOC to the UN General Assembly, in order that all stakeholders, that is, all member states, will be aware that we are doing something which is relevant, relevant not only in the field of ICTs, but for the Sustainable Development Goals. Thank you.

Moderator – Ana Cristina Ferreira Amoroso Das Neves:
Hi. Thank you very much, Peter. I would just like to emphasise that it is a pity that some of the governments that sometimes make our lives very difficult in our discussions at the CSTD are not here, so they cannot hear us. No, Cuba is still here. Cuba is here, yes. It’s listening. Nigel, Nigel. Who asked first? I don’t know. I thought, please, sir, go ahead, and then Nigel. Introduce yourself, please.

Audience:
My name is Dinesh. I come from near Bangalore, in a rural area where we have a community network. Our main activity is research on how to include low-literate people as first-class citizens of the internet. I think there are more than three billion people who come under this, and I don’t see that we are focusing on them. We can provide internet access, but what does it mean for low literacy? If you don’t know how to read or write, what is the internet for you? What is Google for you? Can you type keywords? What are the results for you? What can you do with them? Think like that. There are too many people we just don’t want to see, or we are not willing to think that there is something we can do about it. It’s time to bring together technical people from the ground who are interested in addressing this issue and see what can be done about making the internet a first-class, available thing for low-literate people.

Moderator – Ana Cristina Ferreira Amoroso Das Neves:
Thank you very much. Nigel, UK, please.

Audience:
Yes, thank you very much. I’ll sit next to you, yeah. God, I’m exhausted after that. Sorry, Nigel Hickson, Department for Science, Innovation and Technology of the UK. I just wanted to say about three things. First, it’s really excellent to have this debate here. If we can’t have a thorough, controversial, open and constructive debate about the WSIS process at the IGF, then we can’t have it anywhere. So it’s really excellent to have it here. Secondly, I listened to Pearse O’Donohue speak and I thought, well, he’s speaking for the whole of the European Union, so I needn’t say anything. And then I remembered that we’re not in the European Union. So, yeah. I’m getting to the end of my career, so I can say things like that. But I wanted to completely endorse what he said. I think it’s just so important that we have these discussions, and that we recognise where we have come from and what’s happened since 2005, not just in terms of technology, which we all understand and many of us have experienced here. We should also recognise the work that’s taken place in the UN CSTD, in the reports that have been written, in the WSIS Forum, where we’ve had excellent discussions year after year on many issues, the work that UNESCO has done as well, and many other regional UN bodies and other bodies. There hasn’t been silence since 2005. There’s been evolution, evolution across the spectrum: at ICANN, at the IGF, in the UN CSTD. We have moved forward. And of course it’s not perfect. That’s why it was so wise at Tunis that the language in the Tunis Agenda said, yes, you ought to review it. We reviewed it after 10 years, and we ought to review it again. Let’s lift the drain covers up, as other people have said, and review it.
But let’s not forget where we’ve come from and the progress we’ve made. And let’s not forget why we got together in Geneva and Tunis. It wasn’t to discuss internet governance as such. It wasn’t to discuss the mechanisms of the internet, though there’s no harm in discussing that. It was to do better than that. It was to connect people. It was to discuss why we have disparities, regional disparities. And as we heard from the ITU Secretary-General at the beginning of this conference, and from many other people, we still have those issues. So let’s focus. Let’s focus on development. Let’s focus on sustainability. Let’s focus on what really matters. Thank you.

Moderator – Ana Cristina Ferreira Amoroso Das Neves:
And with Nigel Hickson from the UK, I think we come to the end of this first open consultation. I would like to extend my heartfelt gratitude to each and every one of you for your active and valuable participation throughout this session. Your contributions have greatly enriched our discussions. Don’t forget the questionnaire. Please fill in this questionnaire. It is, of course, not here on paper; it is in the virtual world, on the CSTD website. It will also be on the IGF website for this session, and it will be spread widely. It will be very important to have your inputs in written form. And with that, I declare the first open consultation closed. If we were at the UN, I would have a hammer. As I don’t have a hammer, I will use a bottle of water.

Anna Margaretha (Anriette) Esterhuysen: speech speed 261 words per minute; speech length 988 words; speech time 227 secs

Audience: speech speed 178 words per minute; speech length 5251 words; speech time 1773 secs

Isabelle Lois: speech speed 162 words per minute; speech length 650 words; speech time 240 secs

Kamel Saadaoui: speech speed 158 words per minute; speech length 549 words; speech time 209 secs

Moderator – Ana Cristina Ferreira Amoroso Das Neves: speech speed 145 words per minute; speech length 2728 words; speech time 1130 secs

Pearse O’Donohue: speech speed 165 words per minute; speech length 483 words; speech time 175 secs

Prateek Sibal: speech speed 207 words per minute; speech length 385 words; speech time 112 secs

Robert Opp: speech speed 170 words per minute; speech length 206 words; speech time 73 secs

Shamika Sirimanne: speech speed 166 words per minute; speech length 883 words; speech time 319 secs

Speaker 1: speech speed 127 words per minute; speech length 418 words; speech time 197 secs

Anita Gurumurthy: speech speed 168 words per minute; speech length 1027 words; speech time 368 secs

What is the nature of the internet? Different Approaches | IGF 2023 WS #445


Full session report

Pablo Castro

The internet is becoming increasingly integral to people’s quality of life as it enables them to connect with family and friends, while also facilitating the exercise of fundamental rights. This positive sentiment towards the internet is expressed through the argument that it should remain open, safe, interconnected, and accessible to all. It is believed that the internet is essential in enabling the exercise of other rights as well.

Although there is general agreement about the importance of the internet, there is a debate surrounding whether internet access should be considered as a standalone right in public policy debates. Some argue that the internet should be viewed as a tool for exercising other rights, rather than being a right on its own. Questions arise about who should be responsible for guaranteeing internet access if it is indeed considered a right in itself.

Reducing the economic, geographic, and technological barriers to internet access is seen as a significant policy challenge. It is recognized that these barriers limit access to the internet and hinder people’s ability to fully benefit from its advantages. Efforts are being made to address these challenges and ensure that internet access becomes more readily available to all individuals.

In Chile, the issue of internet accessibility is being addressed through the discussion of a proposed bill that acknowledges the internet as a public service. The aim of this bill is to reduce the accessibility gap that exists, particularly in the 35% of Chilean homes that currently lack internet access. By recognizing the internet as a public service, it is hoped that measures can be put in place to bridge this accessibility gap and ensure that all individuals have equal opportunities to benefit from the internet.

The role of internet providers is also a topic of discussion. There are concerns about balancing the public interest with the protection of individual rights when it comes to regulating these providers. Internet providers are seen as agents that give access to an essential service, raising questions about how they should be regulated and their responsibilities towards ensuring equal and fair access to the internet.

In conclusion, the internet is increasingly seen as vital to people’s quality of life, connecting them with loved ones and enabling the exercise of their rights. The debate surrounding internet access as a standalone right continues, with efforts being made to reduce barriers and ensure equal access for all. The discussion of Chile’s proposed bill recognizing the internet as a public service highlights efforts to address the accessibility gap. Balancing the role of internet providers in ensuring equal access is also a point of contention. Overall, the internet’s importance and the need to ensure its accessibility and regulation are key considerations in public policy debates.

Bruna Martins dos Santos

The discussions surrounding internet governance have emphasized the need for a normative framework that focuses on rules and values. The internet is considered critical for societies and development, and there is a rich heritage and experience in understanding how it works. This positive sentiment suggests that the existing discussions have guided us well through the processes.

However, the resilience of the internet is being affected by complex societal issues and problematic governmental interventions. There are significant problems and discrepancies related to access and empowerment, and merely having access to platforms like Facebook is not equivalent to having access to the internet. This negative sentiment highlights the challenges posed by societal complexities and the need to address governmental interventions.

An argument is made that the internet should be recognized as a global public resource that is universally accessible and affordable. Its governance should be based on human rights standards and public interest principles. The internet plays a crucial role in addressing global challenges, and global calls to action are needed to advance access to technology and promote country-level development. This positive sentiment emphasizes the importance of viewing the internet as a public good.

The proactive engagement of the technical community is identified as a key requirement in internet governance. The operation of the internet relies on technical expertise, and the success of spaces like the Internet Governance Forum (IGF) relies on the involvement of this community. This positive sentiment underlines the significance of technical expertise in shaping internet governance.

Furthermore, it is argued that the global equity crisis needs to be a central aspect of internet-related discussions. Internet-related problems are rooted in inequality, and the abuse of power and insufficient collaboration further exacerbate these issues. Narratives from regions hardest hit by interventions should play a role in shaping policies. This negative sentiment highlights the urgency of addressing the global equity crisis in internet governance.

There are gaps in internet governance that need to be addressed urgently. These gaps have existed since the inception of internet governance and still persist today, indicating a negative sentiment. It is essential to work towards filling these gaps.

Safeguards are necessary to protect rights, privacy, and data, as well as to ensure the inclusion of all communities, genders, and regions. This positive sentiment highlights the need to discuss appropriate measures that guarantee the protection of individual rights while fostering inclusivity in the digital space.

Lastly, there is a call for the recognition of the internet and information as a commons or a public good. This positive sentiment suggests that more recognition is needed for the internet and information as collective resources that require appropriate governance and management.

In conclusion, internet governance discussions have highlighted the importance of establishing a normative framework based on rules and values. However, challenges related to complex societal issues and governmental interventions impact the internet’s resilience. There is a need for the internet to be considered a global public resource that is universally accessible and affordable, with governance based on human rights and public interest principles. The involvement of the technical community plays a crucial role in shaping internet governance. Addressing the global equity crisis and implementing safeguards to protect rights and ensure inclusivity are other significant aspects of internet-related discussions. Recognition of the internet and information as a commons or public good tops off the list of essential considerations in internet governance. Cooperation is crucial in advancing the development and use of digital public goods. The analysis reveals a range of sentiments, both positive and negative, showcasing the multifaceted nature of internet governance and the need for comprehensive and inclusive solutions.

Audience

The analysis highlights the issue of internet inaccessibility for individuals who cannot read or write. It reveals that globally, approximately 3 billion people lack basic literacy skills, around 1 billion of them in India, who face difficulties performing tasks such as conducting Google searches and comprehending the results. This underscores the need to address the digital divide in society.

One speaker expresses a negative sentiment, emphasizing that the internet is inaccessible to those who lack literacy skills. This poses a barrier to accessing information, participating in online activities, and benefiting from online resources and opportunities. The speaker calls for innovative solutions to bridge this gap and make the internet accessible to all.

Contrasting the negative sentiment, another speaker takes a positive stance, suggesting that technical experts focus on developing solutions for hyperlinks that do not rely on text. This would enable individuals with low literacy levels to navigate the web and access information without the need for reading text-based links. By removing this barrier, the internet can become more inclusive and provide opportunities for personal growth, education, and economic development.

The analysis also introduces the Internet Governance Forum (IGF) as potentially influential in advocating for technology that supports web accessibility for low-literate individuals. Collaborative efforts between governments, technology companies, and civil society organizations are vital in addressing internet inaccessibility. By uniting diverse perspectives, progress can be made in developing technology solutions that enhance internet accessibility and reduce inequalities in accessing digital resources.

In conclusion, the analysis illuminates the pressing issue of internet inaccessibility for those who cannot read or write. It emphasizes the impact on billions of people globally and particularly in India. However, it also presents potential solutions and initiatives to improve internet accessibility. By tackling this challenge, we can work towards a more inclusive and equitable digital world where everyone can benefit from the internet’s resources and opportunities.

Anriette Esterhuysen

Anriette Esterhuysen, a staunch advocate for reclaiming the Internet as a connector of people and a disruptor of power concentration, argues that the concepts of the commons, rights, and the public good can work harmoniously together to create a more inclusive and equitable online environment. She believes that these concepts should not be seen as opposing forces, but rather as synergistic entities.

Esterhuysen supports the protection of the public core of the Internet, emphasizing the need for effective governance involving not only the state but also commercial and common interest parties. She recognizes the importance of coexistence and collaboration among different stakeholders to ensure affordable and accessible Internet connectivity without government and corporate interference.

While Esterhuysen opposes the nationalization of the Internet, she expresses concern about surveillance capitalism and the commercial exploitation of data. She calls for greater control over the collection and use of personal information, advocating for regulations that safeguard individuals’ privacy and prevent their data from being solely used for commercial gain. She commends the European Union’s efforts to regulate data access for researchers.

Highlighting the shortcomings of current Internet regulations, Esterhuysen suggests a shift in focus from targeting users to addressing the concentration of power among corporations and manufacturers. By holding these entities more accountable and responsible, she believes that regulations can play a more effective role in ensuring a fair and democratic Internet.

Esterhuysen also raises concerns about the environmental impact of the Internet, calling for regulatory measures to limit the proliferation of electronic waste (e-waste). She proposes the introduction of standards for device production to extend their lifespan and reduce e-waste, aligning with the principles of responsible consumption and production.

Challenging the notion that economic growth alone equals development, Esterhuysen argues for a more holistic approach that incorporates poverty alleviation and reduced inequalities. She suggests redefining the concept of development to address social and economic disparities and create a more equitable society.

In terms of community empowerment, Esterhuysen advocates for communities to have agency in determining their level of interaction with the Internet. This includes recognizing the value of local Internet access over global connectivity, allowing certain communities to prioritize specific online services according to their needs and preferences.

Lastly, Esterhuysen stresses the need for increased accountability and transparency from governments in regulating the Internet. She highlights the tendency of government entities to prioritize their own interests over the public’s best interest, emphasizing the importance of stringent oversight and regulation to ensure Internet governance serves the public interest.

In conclusion, Anriette Esterhuysen presents a comprehensive approach to reclaiming and governing the Internet. By emphasizing the concepts of the commons, rights, and the public good and addressing issues such as power concentration, data exploitation, environmental impact, and development, she advocates for a more inclusive, equitable, and democratic Internet. Her insights and analysis provide valuable perspectives on the future of the Internet, its potential to empower communities, and its role in fostering social and economic equality.

Azin Tadjdini

The debate surrounding whether internet access should be considered a human right is complex and ongoing. Some argue that internet access is a tool or means to an end, rather than an inherent right. However, others believe that internet access is essential for individuals to exercise their other rights, such as freedom of expression and access to information.

Restrictions on internet access raise concerns about the potential violation of human rights, as they can unduly interfere with rights such as freedom of expression and assembly. Denying individuals access to the internet significantly impacts their ability to participate in public discourse, seek and share information, and engage in political and social activities.

Several countries, including Greece, France, Costa Rica, Finland, and Estonia, have recognized internet access as either an individual or constitutional right. These countries have implemented laws that place a positive duty on the state to ensure universal and affordable internet access, reflecting a commitment to promoting equal opportunities for all citizens.

At the international level, there is a growing recognition of the importance of internet access in enabling individuals to enjoy and exercise their rights. The Human Rights Council passed a resolution in 2012 calling upon states to promote and facilitate access to the internet. Additionally, human rights mechanisms have increasingly acknowledged the significance of the internet in relation to the enjoyment and exercise of rights.

Despite these developments, further exploration is needed to define the parameters of a human right to internet access. Questions remain regarding the conditions for imposing restrictions, the roles of the state and private sector in ensuring access, and the state’s duty to protect individuals from cyber attacks. Continued discussion and analysis are required to establish a comprehensive framework for the right to internet access.

In the context of internet governance, the development of a rights framework has influenced discussions and decision-making processes. However, it is important to note that the existing rights framework does not cover all aspects of internet governance. Therefore, further exploration and adaptation of the rights framework are necessary to address the dynamic challenges and complexities of the digital era.

In summary, the debate on whether internet access should be considered a human right is multifaceted. While some view it as a means to an end, others emphasize its role in enabling the exercise of other rights. Restrictions on internet access can pose obstacles to the enjoyment of human rights, and several countries have recognized the importance of universal and affordable internet access. At the international level, there is a growing recognition of the significance of internet access, but further exploration is needed to establish a comprehensive framework. The ongoing development and application of a rights framework to internet governance require continuous examination and adaptation to address emerging challenges.

Valeria Betancourt

Various spaces and processes have emerged to address the complex landscape of Internet policy, governance, and digital governance. This proliferation reflects the ongoing evolution of Internet governance, which is closely interconnected with the governance of the digital realm. However, important unresolved questions remain regarding what the internet is and how it should be governed, giving rise to the need for compromises among stakeholders and guiding principles.

Efforts to imagine the future of Internet governance have been initiated, aiming to foster a constructive dialogue among multiple actors. These conversations intend to nurture the development of necessary compromises, acknowledging the diverse perspectives and interests involved. The push for compromises is essential in navigating the complexities of internet governance and addressing the challenges that arise from this rapidly evolving environment.

Proper governance of the internet is crucial to avoid potential harm and ensure accountability. In the absence of adequate governance mechanisms, there is a risk of further harm, as well as the concentration of power in the hands of corporations. This concentration may have detrimental consequences for individuals and society as a whole. It is also essential to hold public actors accountable in the digital realm to safeguard the interests and rights of the public.

Recognising the potential of the internet to contribute to a dignified life for all, it is important to address the unresolved questions surrounding its governance. By doing so, we can work towards achieving basic compromises that can help redress structural inequalities. The internet can play a significant role in reducing inequality and promoting sustainable development, ensuring that everyone has equal access to the benefits of this powerful tool.

In conclusion, the evolution of Internet governance intertwines with the governance of the digital realm. To address unresolved questions and overcome challenges, compromises between stakeholders and guiding principles are necessary. Proper governance is required to prevent harm, prevent the concentration of power in corporations, and ensure accountability of public actors. The internet should serve the purpose of fostering a dignified life for everyone and addressing structural inequality. By actively engaging with these issues, we can create a more inclusive and equitable digital future.

Paula Martins

The analysis emphasises the importance of comprehensively discussing the nature of the internet and the necessary policy responses. It highlights the need to consider both the current state and future implications of the internet. The primary focus is on exploring the policy consequences and formulating responses that are aligned with the unique characteristics of the internet. This perspective is viewed positively, demonstrating recognition of the importance of addressing and adapting to the evolving nature of the internet.

Importantly, the discussion moves beyond theoretical contemplation and delves into practical applications and implications. It acknowledges the need to establish policies that are not only effective but also feasible in addressing the challenges posed by the internet. By adopting this practical approach, policymakers can navigate the complexities associated with the internet and optimise its potential benefits.

Furthermore, the analysis highlights the relevance of Sustainable Development Goal 9: Industry, Innovation, and Infrastructure. This goal underscores the need to foster inclusive and sustainable economic growth through advancements in technology and infrastructure development. By aligning policy responses to the nature of the internet with SDG 9, policymakers can contribute to achieving broader global objectives and promoting positive societal outcomes.

Overall, the analysis reinforces the necessity of engaging in a comprehensive and nuanced dialogue on the nature of the internet and subsequent policy responses. It encourages policymakers to consider the practical implications and adapt their strategies accordingly. By leveraging the potential of the internet while addressing its challenges, policymakers can effectively shape the present and future landscape, fostering inclusive and sustainable development in the process.

Nandini Chami

This analysis explores various aspects of internet governance and its impact on society. One argument posits that the internet should be treated and governed as a global communication commons, emphasising unmediated communication as fundamental to its nature. Progressive movements and feminists view the internet as a promising space for unrestricted communication.

However, concerns arise regarding the control exerted by large corporations over internet infrastructure, obstructing the concept of commoning. It is noted that four companies currently own 67% of the cloud services infrastructure. Additionally, companies in the network infrastructure sector are encroaching into the communication services sector, raising concerns about the obstruction of commoning practices.

Furthermore, the negative effects of surveillance advertising on the generative power of the web are discussed. Surveillance advertising has transformed the open expanse of the internet into echo chambers, limiting diverse and open dialogue. The Digital Services Act, currently in place, is deemed insufficient to effectively address this issue.

The analysis also raises concerns about the internationalization of internet governance. Specifically, the incomplete internationalization of the Internet Corporation for Assigned Names and Numbers (ICANN) is pointed out, highlighting single-state control. It further emphasizes the political nature of technical choices, indicating that decisions made in internet governance have significant political implications.

The significance of a public goods approach and commons approach to internet governance is explored. It is argued that these approaches are not antagonistic but rather complementary. The provision of infrastructure for commoning supports the public goods approach. It is suggested that an ideal internet governance model should incorporate a mixture of public, private, and cooperative enterprises.

In terms of accessibility, the analysis underscores the necessity of a proper public financing model to ensure universal and affordable internet access. The World Summit on the Information Society is mentioned as a longstanding effort to develop an appropriate model, with particular concern for marginalized communities. Insufficient financing may limit these communities to walled garden-type internet services.

Furthermore, the analysis emphasizes the importance of equitable connectivity in order for everyone to access the development dividends of the internet. Research from ICT Africa suggests that the current state of connectivity often worsens digital inequality. The need for connectivity to guarantee a fair share in data and development dividends for all is highlighted.

Lastly, the analysis underscores the importance of defining and addressing “Access to what” when discussing internet access. The current landscape often provides connectivity without yielding substantive benefits for the community, leading to a “connectivity paradox.” This highlights the need to consider the purpose and impact of internet access to effectively address digital inequality.

In conclusion, this analysis sheds light on various aspects of internet governance and its implications. It highlights the need for a global communication commons, concerns about corporate control, the detrimental effects of surveillance advertising, the necessity of internationalization, the complementary nature of public goods and commons approaches, the significance of proper public financing for universal access, and the importance of equitable connectivity. These insights contribute to a deeper understanding of the complexities surrounding internet governance.

David Norman Souter

The analysis delves into the conceptualisation of the internet and raises concerns about the potential dangers of becoming entangled in the semantics of different conceptualisations. It asserts that it is crucial to move away from such debates and instead view the internet as a public good. The initial perception of the internet as a utility that provides a service to everyone supports this argument, underlining the belief that the internet should be universally accessible and available at affordable prices.

The study also acknowledges the presence of infrastructure, intermediaries, and power structures within large-scale systems like the internet. It recognises that these elements are necessary for the functioning of the internet. This understanding further emphasises the need for regulatory structures based on traditional economic models, taking into consideration the inevitable power structures that arise.

However, while traditional economic models are viewed as essential for regulatory frameworks, the analysis points out the limitations of the rights framework in this context. It argues that the rights framework primarily focuses on states rather than corporations. Furthermore, it highlights the under-emphasis of economic, social, and cultural rights within the rights framework. This observation suggests that the rights framework may not adequately cover all the aspects needed in the context of the internet.

Additionally, the study explores the complexity of empowerment through the internet. It points out that while the internet can empower individuals, including those who are traditionally marginalised, it also has the potential to empower those who abuse their power. This observation highlights the need for careful consideration and balancing of power dynamics in internet governance, recognising the potential for misuse of power.

Lastly, the analysis draws attention to the environmental impacts of the digital sector and proposes the adoption of a broader perspective. It identifies three key areas of unsustainability: overexploitation of scarce resources, high energy consumption, and improper management of e-waste, often leading to its improper disposal in developing countries. In response, the study suggests the introduction of an environmental ethos in internet governance, directing decision-making processes such as setting standards, developing new applications, and deploying networks towards more sustainable practices.

Overall, the analysis sheds light on various facets of the conceptualisation of the internet. It underscores the need to move beyond the semantics of different conceptualisations and recognise the internet as a public good. It highlights the presence of infrastructure, intermediaries, and power structures in the internet ecosystem, necessitating the consideration of traditional economic models in regulatory frameworks. It also urges a critical examination of the limitations of the rights framework in addressing the complexities of the internet. Moreover, it emphasises the necessity of vigilance in ensuring that empowerment through the internet does not enable the abuse of power. Finally, it urges a broader perspective on the environmental impacts of the digital sector, advocating for the integration of sustainability principles into internet governance.

Luca Belli

The impact of the internet on public goods and social good is a complex issue with mixed sentiments. On one hand, the internet has the potential to facilitate justice, democracy, security, and public health. It provides a platform for citizens to engage in democratic processes, access information and services, and participate in public discourse. The internet has been instrumental in promoting transparency, accountability, and citizen empowerment.

However, the internet also poses challenges and threats to public goods. The rise of infodemics, or the spread of misinformation and disinformation, has become a significant problem with the proliferation of fake news and manipulation tactics. This can negatively affect public perception, distort facts, and ultimately undermine the democratic process. Moreover, cybersecurity attacks pose a serious threat to the security of individuals and nations. These attacks can disrupt critical infrastructure, compromise sensitive information, and hinder the functioning of democratic institutions.

In addition to its impact on justice and democracy, the internet is also considered a global public good. It has broken down cultural barriers by making culture more accessible than ever before. However, the benefits of internet access are not evenly distributed. Factors such as availability and affordability of good internet connectivity greatly influence the extent to which individuals can benefit from the internet as a global public good. This digital divide creates disparities in access to information and opportunities, exacerbating existing inequalities.

Furthermore, the internet has a dual nature, simultaneously serving as a tool for strengthening public goods while also undermining them. The manipulation of individuals through the internet puts at risk the principles of democracy, human rights, and economies. By locking people into a few social media platforms and exposing them to fake news, there is a decrease in diversity of information sources, leading to echo chambers and a reduction in critical thinking. This not only undermines democracy but also impacts the economy by distorting public perceptions and decision-making processes.

Cooperation is essential for the effective management of the internet as a public good. As public goods often transform into utilities, the market is unable to effectively price them, leading to the need for state provision. The challenge lies in determining how to manage and govern the internet in a way that protects public goods while balancing the interests of different stakeholders.

Measuring the impact of internet restrictions on public goods, such as democracy and the economy, is a challenging task. The internet is a complex and dynamic system, making it difficult to quantify its precise impact. Additionally, determining who should bear the cost of internet restrictions is another challenge. Balancing the interests of governments, internet service providers, and users is crucial for finding effective solutions.

Overall, the internet has the potential to be a powerful tool for promoting public goods and social good. However, it also comes with risks and challenges that need to be addressed proactively. Cooperation and effective governance are key in harnessing the positive impacts of the internet while mitigating its negative effects.

Session transcript

Pablo Castro:
increasingly linked with people’s quality of life, enabling them not only to connect with family and friends, but also to exercise fundamental rights. It is essential that the internet, including its support and infrastructure, remains open, safe, interconnected, and accessible to all. From our country’s perspective, it is only possible to address this significant challenge through increasing international cooperation among different stakeholders. In recent years, there has been a debate around the nature of the internet. Is it a right by itself, or a tool to exercise other rights? The answers to these questions could have deep implications for the formulation and development of public policies related to the internet and for defining the obligations that each state has regarding this technology. On one hand, if we consider the internet as a right by itself, different questions arise in relation to who should guarantee access to it, and under what conditions, as well as the implications for sharing infrastructure and for the regulation of internet providers. On the other hand, if we consider the internet as a tool for the exercise of other rights, such as freedom of expression or access to information, then the obligations states have will be focused on guaranteeing that people have the capacity to use the internet for this purpose. This includes reducing the economic, geographic, and technological barriers that limit access to the internet. The question of whether internet providers from the private sector are concession holders of a public service is another key issue surrounding this debate. If the internet is considered a public good, then internet providers can be seen as agents that give access to an essential service, which poses questions about regulation and ensuring equitable access. In Chile, Congress is discussing a bill related to acknowledging the internet as a public service.
The goal that underpins the discussion is to reduce the gap in internet accessibility that keeps nearly 35% of Chilean homes without access to this network. The project considers imposing certain obligations regarding universality in relation to the areas of coverage and the time to connect them. Related to this is the possibility of giving subsidies for the demand for the connection service. The motivation behind the discussions, which affect both public and private investment, is to allow most citizens to have access to all services offered through the internet. Furthermore, the intervention of a state in the name of public interest is an issue that must be analyzed with caution. Even though it is necessary to guarantee equitable access and protection of citizens’ rights, it can also invoke risks, such as censorship and limitations to freedom online. Ultimately, the governance of common goods, such as the internet, is a complex challenge. It requires a balance between the protection of individual rights and the promotion of the public interest. It also implies collaboration among different stakeholders. In this dialogue, we invite you to think about these questions, in search of answers that can reflect the evolving essential nature of the Internet in our current society. Thank you very much for all your attention.

Paula Martins:
Thank you very much, Mr. Castro, for setting the scene. And what we’ll have now is the first round of comments. So we have some speakers that will talk about different approaches to looking at the Internet. We’ll have each speaker talking about one specific approach. And then in a second round of contributions, we’ll have speakers from all over the world. And as I said, I’m going to move on introducing them as I pass on the microphone to each of them. And we’ll start looking at the Internet as a public good, something that Mr. Castro already referred to in his opening remarks. And I’ll invite Luca Belli. Luca Belli is professor at the Fundação Getúlio Vargas Law School in Brazil and coordinator of the Center for Technology and Society. And he’s going to talk about the importance of the Internet as a public good. So, Luca, what does it mean to refer to the Internet as a public good? What are the concrete policy consequences of doing that? The concrete responsibilities that arise to different stakeholders based on that?

Luca Belli:
Thank you very much, Paula, for the questions, and apologies for being late. I was lost in the conference venue. So, let’s start with the first question, which is, how do we understand the importance of the Internet and its relation with public goods, or whether the Internet itself can be considered as a public good? The first dimension is that the Internet is a facilitator of public goods, but at the same time it can also undermine public goods. So if we think of public goods like justice, democracy, security, public health, they can all be facilitated by ICTs and by the internet. But at the same time, if we think about all the infodemics that we have just lived through during the pandemic, if we think about cybersecurity attacks, if you think about meddling in democratic processes, they can also be undermined by the internet. So this first dimension of the internet as a potential instrument to strengthen public goods or to undermine them is very important and depends on the kind of governance that was just mentioned, the kind of regulation that ensues from the governance that we are able to define. The other dimension is the internet itself as a global public good. There are people that are much smarter than me, like Joseph Stiglitz, who spoke about this already in the late 90s when he was theorizing culture as a global public good that is non-rival in its access. And the benefits are not exclusionary. But there is a huge transaction cost to have culture. And the internet breaks this transaction cost. But here again, it really depends on which kind of internet we are speaking of.
Because if we think about Joseph Stiglitz having a very good broadband connection, a nice PC, and living in a wealthy area where it’s easy and affordable to have good internet connectivity, that is an excellent example of how the internet is a global public good, because it allows him to directly connect with the global community and freely exchange, seek, and impart ideas, and distribute or obtain culture. If you think about how the majority world accesses the internet, through a smartphone, usually through prepaid internet access plans where you have zero-rated social media that are sponsored for free. Of course, paid with your data, but not paid with money, whereas all the rest is capped. That is not really what Joseph Stiglitz had in his mind, and that strongly undermines, actually, public goods. So at the same time, the internet could be an enormous engine for strengthening public goods, a public good itself, but also an incredible machine for undermining public goods, and also just to capture individuals into a set of tools that basically datafy them to then manipulate them, and therefore undermine democracy, human rights, the economy of entire states, right? So I think that we have to consider this double nature of the internet when we think about the internet in comparison with public goods.

Paula Martins:
Thank you so much, Luca, and thank you for calling attention to the different ways of looking at the internet as both an instrument for accessing public goods and the internet itself as a public good and what kind of internet we are talking about. So I’ll now pass on the floor to one of our online speakers, and we’ll have now two speakers that will be discussing the internet as a commons. Where is this concept coming from? How do you apply it to the internet? And again, what are the concrete consequences of applying it to the internet, especially when we use it in policy spaces? So I will start with Nandini Chami. Nandini is Deputy Director of IT for Change, and she’ll be joining us online. Nandini. Hi, are you able to hear me? We are.

Nandini Chami:
Yeah, okay. So, quickly getting into the subject at hand: what does it mean to apply a commons perspective to the internet? I think at a very foundational level, all of us recognize that the entire charm and promise of the internet is its affordance of being a communication commons for the entire world. And compared to all the other communication technologies that came before it, there is this possibility of many-to-many communication that is unmediated by a central broadcaster. And right from the beginning, this is the promise that feminists and progressive movements have always seen. And this is the web that we all celebrate. But when reflecting on the internet as a commons, I want to come to this with a reality check orientation also. The communicative ecology of the internet today is such that, as somebody once quipped about realistic perspectives on the world, the end of the world seems far easier for us to imagine than the end of the big tech business model and its stranglehold over the internet. We are staring at a reality like that. I think that if we have to reclaim the internet as a global communication commons, there are three critical areas that I place before all of you for consideration. And I think we need to fix that. So the first is the issue of the infrastructure layer and its capture by a few powerful corporations. We all know who holds the lion’s share of the world’s deep sea cables. And in the other layer of the web, when we look at cloud services, we know that just four companies own 67% of the cloud services infrastructure today. And if the infrastructure is controlled by the private sector in such a massive way, what is the scope for commoning?
And then we also know that there are these egregious moves where the companies operating in the network infrastructure sector are also getting into the communication services sector. In my own country, there is a debate on net neutrality violations about what happens if telecom companies that hold the lion’s share of the Indian market are going to operate in the digital services layer, and also demand a network usage fee from the digital services. There is a policy push like that. So what does it really mean for net neutrality? And what does it mean for the voices of everyone? And the other problem we are all familiar with: what happens when big tech, who started out in the communication services layer, is controlling the deep sea cables that bring the internet to the most marginalized regions of the world as well. And we know this issue and we have to do something about it. The second issue, when you look at the digital services itself: the entire point that there is a surveillance advertising model, and this has completely destroyed the generative power of the hyperlink, which was the open sea of the web, where the web ran on serendipity. And instead you have this kind of social media model of the stream, where we are all locked in our own echo chambers and bubbles, and there is no possibility of commoning or any community building. We have discussed this problem ad nauseam, so I will not get into that problem again. But in terms of looking at solutions, if the most radical thing we can come up with is a model like the Digital Services Act, that bodes very ill for all of us, I think. How do you actually get out of surveillance advertising and think of something different to reclaim communication platforms and the digital spaces? I think this is a question that stares us in the face. The third point that I would like to make:
Some of you may have seen this op-ed piece that came out a couple of days ago, which was co-authored by the chair of the IGF leadership panel and the technology envoy, where they actually argue about how, in the future of Internet governance, there needs to be a protective mode around the technical governance in terms of preserving its apolitical structure and functioning. But looking back at what has transpired and the history that has occurred, is Internet governance really apolitical? We all know the story of the incomplete internationalization of ICANN, and we know the default is that one state is able to control the critical Internet resources still. So if we are not truly internationalizing Internet governance, we are still continuing to hold on to a somewhat fictionalized idea of an apolitical technical governance, where at another level we all recognize that all technical choices are political choices. As Lawrence Lessig said, code is law and architecture is a policy choice. And we see that the different geopolitical or economic visions of development have led us to a situation where, in contrast to the Internet we have known, there is another vision of the Internet that stands before us today, imagined by another state with a different political model. And we don’t want to be caught in a geopolitical war where different political visions lead to Internet fragmentation. But if we have to address this question, we have to talk about what these political choices are, and not rest in this same old idea of an apolitical Internet governance. We must move beyond that and look at what it would mean to truly internationalize Internet governance.
The final point, and I’ll take very little time: I think that when we look at the approaches, it may be important not to see global public goods and commons as oppositional approaches, as is traditionally done when you talk about commons and when you talk about public goods. At some level, there is this interesting research work coming out of Utrecht University, which is talking about the fact that oftentimes a public goods approach will provision the infrastructure on which commoning can take place. Because it’s important to think of commoning as a noun and also remember the communities, the people who are commoning. And then we have to make choices where we may be having this realistic approach: when you think about the economy in India, my country has traditionally thought of a mixed economy. You have public enterprise, you have private enterprise, you have cooperativist enterprise. So when we think of an Internet governance model, we must actually be asking ourselves how we get the public infrastructure and the public financing and generally the governance of the standards in a way that is truly public, so you build a commons that truly belongs to the people and not a pro-capitalist commons that is then just cannibalized by big tech. So looking forward to hearing from everyone, I’ll just stop here.

Paula Martins:
Thank you Nandini for calling attention about the idea of the commons and the reality check of the critical areas that we need fixing so that we can actually have a commons and for already touching on what will be the second part of our discussions here today. And I’ll invite now Bruna, Bruna will also talk about the Internet as a commons, Bruna Martins Santos is a global campaigns manager at Digital Action. Bruna, you have the floor.

Bruna Martins Santos:
Always forget to turn it on. Thanks a lot, Paula, and thanks, APC, for the invitation as well, and for bringing such a relevant debate to the IGF, right? I guess in the past years all of us have spent at least a week in one of these forums or spaces discussing the most relevant issues, but we all leave these places feeling really bad and heavy from the pessimistic approach that the Internet governance discussion has normally taken, so it’s good for once to be having a more positive debate and one that looks towards the future. I’ll maybe start by saying that for the past 20, 30 years, Internet governance discussions have guided us through the processes, right? And operated under a normative framework that’s focused on rules and values, all of that because it acknowledges how relevant the Internet is for societies and for their development. There is a rich history and experience on how the Internet works and what are the bodies that are involved in it, but we do sometimes lack the further inclusion of some other parts and some other groups in society, or we even fail in bringing in this very strong global majority narrative to the conversation, which is so relevant for everyone involved in this discussion to understand what the problems are in reality. But despite that, we have also been watching the world grow more and more complex in the past years, and the equation around things such as elections or human rights gets even more complex, with many stakeholders being called to the responsibility to protect ecosystems and rights, and to me this is very much connected to this discussion, right?
But at the same time, I think it’s also possible for us to say that, despite a lot of the societal issues we face nowadays and sometimes problematic governmental intervention, the Internet remains resilient and its core characteristics and aspects are still there, and it still allows everyone to exist and express themselves, and it’s still perceived as a basic, common, and really core aspect of every place that we’re at. I would like to also add that it’s interesting to see how the community is evolving towards reflections and debates about things such as the failure of some companies and corporations, but at the end of the day, this is really a conversation about what the discrepancies are, what is lacking when we talk about access, what is lacking when we talk about empowering stakeholders and bringing in more voices. As the good old saying goes, access to Facebook is not access to the Internet, and we should not really rest until everybody understands and adds layers to that. Just to maybe move further into the topic: as a global public resource, I would say the Internet has to be universal and affordable, and its governance should be grounded in international human rights standards and public interest principles. Technologies and the Internet itself play a crucial role in addressing a lot of the global challenges, and we do need global calls to action to advance access to them and encourage countries to build what is needed in terms of technologies for everybody. This says a lot about enhancing safeguards, addressing governance issues, and mitigating some of the abuse and some of the problems in this space, as Luca was also referring to. Last but not least, I would say that more proactive engagement of the technical community is also required in this debate. The Internet doesn’t magically exist or simply operate.
We do rely on a lot of folks and expertise for it to exist, so the success of all of that, and the success of spaces such as the IGF, is also reliant on communities such as the technical one. The last thing that I would say is that when we talk about all of this, the global equity crisis needs to be a key aspect of such discussions. A lot of Internet-related problems are rooted in inequality. This is one of them. Abuse of power is another of the issues, and insufficient collaboration is another one of the topics. The fact that the narratives from the parts of the world that suffer most from said interventions are not the ones shaping the policies is problematic, so we need the global majority to be making the calls and leading the processes, and I do hope that in the future, in spaces such as the GDC and the Summit of the Future, we take that into consideration. So I guess I’ll stop here, and thanks a lot, Paula.

Paula Martins:
Thanks to you, Bruna. Thanks for talking about the importance of actually not only reflecting on different approaches but also different realities, the role of inequalities and the power politics behind the policy choices. And with that, I’ll move on to our final speaker of this round, who will, again, be joining us online, and I’ll invite Azin Tadjdini. Azin is a Human Rights Officer from the Rule of Law and Democracy section of the Office of the United Nations High Commissioner for Human Rights. Azin, can you hear us? You have the floor. Very well.

Azin Tadjdini:
Thank you so much, and I hope you can hear me as well. It’s a great pleasure to be with you and to also listen to people whose work I admire so much. So I’ll say a few words about internet access as a human right or as an enabler of human rights, and this is a question that also Mr. Castro alluded to in the beginning, about how to actually legally conceptualize the internet and internet access, which is a big debate. It’s also a big debate in the human rights world. Is it a human right in itself? Is it not? And what difference would it make? So there are these two poles that we have. On the one side, the view that the internet is a technology and it’s a means to an end, but not an end in itself, and so it is not a human right and it shouldn’t be a human right. And the view that, well, it’s an enabler of human rights is what is currently reflected in the state of international human rights law, where internet access is not yet recognized as a right in itself, but it is recognized as an enabler right in the sense that restrictions to internet access can constitute undue interference with rights such as freedom of expression or freedom of peaceful assembly, but also with socioeconomic rights such as the right to education and the right to health. So in other words, from this point of view, this conceptualization means that there is a negative duty on states not to unduly interfere with rights as exercised online, but there is not a positive duty to actually provide internet access. And on the other hand, there is the view that internet access is not only a tool, but has become so intertwined with our basic ability to exercise our rights that it should be considered a human right, and then, in other words, there is a positive duty on states to ensure access.
And I’ll go a bit back to this, but even though I initially mentioned that internationally we haven’t reached a state where international human rights law actually creates a legal obligation to ensure internet access, at the domestic level in some states this is already considered a right. So it’s an individual or a constitutional right in the sense that the law establishes a positive duty on the state to ensure universal access and, in some cases, also affordable access. So some examples that I think we have all heard of are, for example, the Constitution of Greece, which states that the facilitation of access to electronically transmitted information and the production, exchange and diffusion of such information constitutes an obligation of the state. And there is also similar language from the Constitutional Council of France and the Constitutional Court of Costa Rica. And a bit similar to the bill that I think is proposed in Chile, there is a law in Finland that has declared broadband access a basic right. And similarly in Estonia, internet access is part of the universal service that the state has to provide to all its people. So at the domestic level there has been a development towards recognizing internet access as a right and as a positive obligation of the state. At the international level that’s not the case, although here I think it’s also important to bear in mind where we’ve come from, so the development of how international human rights law deals with the internet. And one milestone is, of course, the 2012 Human Rights Council resolution that was adopted to protect free speech on the internet and that called upon states to promote and facilitate access to the internet and international cooperation aimed at developing media and information communication facilities in all countries. So it was the first UN resolution of its kind.
And since then there has been growing attention from human rights mechanisms, treaty bodies and special procedures, as well as regional human rights mechanisms, to the importance of the internet and internet access for the ability of people to enjoy and exercise their rights. And this is also recognized by the Sustainable Development Goals, which reinforce an obligation to work towards a universal and accessible internet. So there is this development, and I think from an international human rights perspective, what would be important to ask is: if we recognize universal internet access as a right, what should that mean? Because there is no one form of right. You have rights that can be realized progressively depending on the resources of the state. So what would a human right to internet access actually look like? Would it be an absolute right? Probably not. But if not, what would be the conditions for restrictions? What would it mean for the interplay between state and private sector, given precisely the global nature of the internet? And what would it mean, for example, for a state duty to protect people from cyber attacks that inhibit their access to the internet? So even if international human rights law moves towards increasingly recognizing universal access as a human right, the question still remains: what kind of right do we want it to be? And what kind of obligations should it actually create? I’ll stop there and I look forward to the discussion. Thank you.

Paula Martins:
Thank you so much for, well, telling us a little about the state of the art, where exactly we are in the discussions. We still have two views: the internet as a right in itself and the internet as an enabler. But I understand from the examples of national legislation, as well as the other examples of global policy that you mentioned, that there is a direction we seem to be taking towards recognizing access to the internet as a right in itself. And with that, we’ll move on to the second part of our discussion here today, which is inviting some commentators to react to these different approaches that were shared with us. And the questions are: what are the main takeaways from what you’ve heard? What are the main commonalities and divergences? How can these different concepts be combined and complemented, how do they relate to each other, and how could we use them in global policy spaces? Easy questions. So, I’ll start with you. Anriette Esterhuysen is Senior Advisor for Global and Regional Internet Governance with the Association for Progressive Communications.

Anriette Esterhuysen:
Thanks. I was just trying to convince David to go first. Thanks very much, and I’m sorry I was late; I had another session that ran a little bit over. I don’t know if I have anything useful to say, even though this is a concept that I’ve personally been grappling with for a long time. And what I really want to celebrate and thank APC for doing is actually getting us to talk about this at the IGF, because I think what this is responding to is a conversation that internet governance should have started with, not one it should be trying to end with, or go into its next evolution cycle with. Anyone from the technical community, if they were here, would say to you: the internet is not one thing. And actually, as I listened to the speakers, that resonated with me. The internet is not one thing. And for me, that also resonated with Nandini’s comment about not looking at the commons and the public good as being in opposition. Similarly, looking at the internet, or access to the internet, as a human right or as an enabler of other human rights, those are not in conflict either. So I think there is real synergy between these three different concepts that are being discussed. And I think what they all speak to is the fundamental problem, which I think Nandini articulated well, and maybe others did also before I came: the fact that what we imagined this internet to be, a connector of people, a disruptor of concentrations of power, a leveler distributing more voice and more influence, we’ve lost that to a large extent. And what we are really talking about here, whether we enter from the commons perspective, the rights perspective or the public good perspective, is a framework that will allow us to reclaim that. 
And then I want to go back and reflect on this idea that the internet is not one thing. I think we also have to keep in mind that it evolves, and that if we are going to try to make a contribution to internet governance, and encourage some kind of paradigm shift if we can get that far, it has to be something that is future-oriented, that can address the challenge of how a technology that emerges within the public domain can become appropriated by commercial interests, further developed and invested in. It’s a little bit like the pharmaceutical industry: we wouldn’t have medication if it wasn’t for private sector money going into it, but we also wouldn’t have antiretroviral drugs in my country if governments had not intervened to ensure that those patents were available. So you need a shift of some kind. I’m going to let David give you all the answers, but what I want to say is that I think we can look at the concept of the public core of the internet. That is a norm that has been developed; I was personally involved in that, with the Dutch government, and Dennis Broeders, a Dutch academic, did some work on this. It tries to identify whether there are parts of this distributed, interconnected internet which we can actually govern, regulate and manage as a public good, a public infrastructure. There are analogies: we can look at places where water is supplied in a municipality by private companies, but there are standards and procedures that those private companies have to adhere to, to ensure that the water is affordable, available and clean. And then we have the notion of the protection of the public core as well, which requires both governments and companies not to interfere with it in particular ways. Then I think there’s the idea of the commons, and I like Nandini 
telling us that we don’t have to think of them as alternatives: public good, which generally comes with more of a sense of the public sector and the state having to play a stewardship role, and the commons, which is much more of a bottom-up process. There are parts of the Internet which I think are the commons, but there are also parts of the Internet that are run and built out by the private sector, and we have to accommodate that. We can’t change that. I don’t believe in nationalizing the Internet, but I also don’t believe in surveillance capitalism and models where our data, our behavior, our communications are being extracted for commercial gain, and I think there are very practical ways of looking at that, for example, the data that emerges from all these companies. The European Union now has regulation that ensures researchers have access to that data. If you’re in Brazil and you want to access what happened with social media during the election, was it last year or the year before? Last year. You’d have to pay millions of dollars to get access to that data. So there are lots of things, parts of the Internet, that I think we should be able to claim as being in the public domain. And then I’m just going to end with two bullet points before I give this to David. I think it is important to recognize that the architecture of both the Internet and of Internet governance, so the technical architecture and the governance, including the technical governance, is not apolitical. I think that’s a very important starting point, and I think Bruna also made that point about power, and I think that’s important. Then my final bullet point is, Mr. 
Castro, I didn’t hear your comments, but regarding Luca’s point about both the positive and the negative outcomes of looking at the Internet as a public good: for all governance models, including the status quo governance models, we need to look at both the intended and the unintended positive and negative consequences. I don’t think we’re doing enough of that with the status quo, but as we develop alternative models we also need to look at intended and unintended consequences.

Paula Martins:
Thank you. David, we are ready for all the answers now, so I invite you to share with us. Let me just introduce you, wait, let me just introduce you, because I failed to do so. David Souter is Managing Director of ICT Development Associates. David, now you go.

David Norman Souter:
Sometimes I have the misfortune to examine PhD theses in the social sciences, and to some extent this felt like the opening chapters of those theses, where the poor student has to go through every kind of theory there has been in the area in which they’re trying to work, and then bring those things together and come up with some kind of new concept, a new contribution, which draws on each of those areas of previous work. In very many cases they get trapped in the semantics of this, and I think there is a danger here of being trapped in the semantics of this. Okay, so we have different conceptualizations here, and you can draw on all of them; it’s not that A is better than C, or that A plus B plus C is essential. You can draw on these things without putting them together. I was trained as an historian, not as a social scientist, and historians start by looking at the evidence and only come to the conceptualizations that social scientists use at the end of their analysis of the process, so that’s how I tend to look at things. So I’m saying: don’t be trapped in the semantics. These are fairly random thoughts about the three points that were made, so there is no constructed theory here, because you can’t write an essay while sitting listening to the presentations. First, in terms of public goods: when this was first talked about in relation to the internet, about 20 years ago, there was a lot of confusion, because there is the economic definition of public goods as non-excludable and non-rival, which was understood by economists, but then non-economists were using the term to refer to things that were for the good of the public, and in particular to see the internet as a delivery mechanism for things that were for the good of the public, and that very much puts it in the area of traditional regulation of utilities. 
It’s saying it’s a utility which provides a service to people and therefore should be universally accessible and available at affordable prices, so that everyone can benefit from it. It seems to me that that’s the genesis of that concept of public good, whereas the concept of the commons seems to me to be derived more from the internet as a communications medium: it should be universally available because it is a communications medium that enables everybody to talk to everybody else, as opposed to a medium that delivers products and services to them. And I think the issue here has to do, to some extent, with the evolution of the internet. Once upon a time it was small and now it is large, and in things that are as large as the internet, you have infrastructure and you have intermediaries. It’s not possible for something that large to be conducted without those things, and if you have infrastructure and intermediaries, you have the power structures that go with that. You have a requirement for capital investment, and there are not many places where that capital investment can come from. And in the case of the internet, as with other communications media, you also have network externalities, so the more people there are on Facebook, the more valuable it is, by a factor that is more than equivalent to the numbers. If you have 10 million people on something, it is much more valuable to you, one person, than if there are only 10,000 on it. And those things drive the power of data corporations. It’s very difficult to see how you can avoid those power structures, and if you can’t avoid them, you have to regulate them. That takes us back to the regulatory structures that are required, and there are traditional economic models for doing that as well. 
The rights perspective is obviously crucially important, but the international rights framework doesn’t cover everything that needs to be considered here, and that’s, I think, part of the difficulty in trying to root things in a purely rights perspective. The rights framework focuses on states, not on corporations, and increasingly it’s with corporations that the real power lies. So where are the responsibilities and obligations placed on those corporations? Generally, the discussion in a rights context has been about particular rights, and there has been an under-emphasis on economic, social and cultural rights, and on the development and environment domains. And I think the rights perspective is a different lens to that used by the most powerful actors within the internet world: a different lens to that used by governments, a different lens to that used by corporations, and a different lens in particular to that used by some governments and some corporations. So it’s an extremely important perspective, but it’s not sufficient in itself to cover all that is required. Those are random thoughts, not yet in essay form.

Paula Martins:
Thank you, David, and I think our proposal is not to get lost in translation. What we are proposing for the discussion here today is not merely a theoretical exercise, but really to think about the policy consequences and to use these entry points to discuss the nature of the internet and what kind of policy responses we’re looking for, for the upcoming years and for right now, actually for the present, not only for the future. What we’ll do now is open the floor for questions. So Nandini, in a second I’ll give you the floor to share if there are any questions online, then I’ll ask if there are any questions here in the room, and after that I’ll give all of you, speakers from the first round, a final chance to react to the comments that you heard from the commentators. You have two minutes each, so it’s like firing back, especially if there are any specific questions to you, and then we’ll wrap up with my colleague Valeria, who will try to systematize the key points coming out of the discussion. So Nandini, do we have any questions from the online participants?

Nandini Chami:
Yeah, there are about four questions, so how many can we take, Paula? Let me see, how many questions do we have in the room? One.

Paula Martins:
If you are going to read them quickly, are they all questions or comments? They’re all questions. Okay, so yeah, just read them all. We’ll give the floor here in the room, and I’ll invite you all to do your best to respond to them. Go ahead, Nandini.

Nandini Chami:
Yeah, so the first question is about the best approaches to ensure that we bring affordable connectivity to the largest number of people. The second one is actually a comment about the internet as a tool for empowerment, so I’ll skip that. The third is a question about what we do about the energy footprint and the consequences of the greenhouse gas emissions of digital technologies, and how we think of the sustainability question in the era of 4IR. And the last question is about when it will be possible to make the internet available and accessible for the underprivileged in developing countries, and what can actually be done about it, like what can we do next?

Paula Martins:
Thank you, Nandini. Dinesh, if you could use the microphone to your left.

Audience:
What you said actually kind of addresses what I want to ask, but I want to be very specific in what I want to say, which is that the internet, for all of us, is the web, not accessibility in terms of infrastructure, the fiber and all that. The internet is the web, which is hyperlinks, interlinking, hypertext and all those things. Now there are 3 billion people in the world who cannot read and write, 1 billion in India who cannot do a Google search and make sense of the results, and if you can’t read and write, what is the internet for you? Why are we not pushing technical people to look at this and see what hyperlinking is when you don’t have text? There’s so much we can do, and we, as a group of very highly literate people, stop asking questions when it comes to the web and the internet. For example, when you are given a book, you write in the margins; how many of you have asked, why don’t I have that on the web? This is the start of a beginning; we need to push things, that is all I want to say. And we need to look at it: the IGF is a forum, not like Silicon Valley, not like the corporate world, where we can actually take a decision on what technology helps push the web to low-literate people.

Paula Martins:
Thank you, Dinesh. Mr. Castro, can we start with you? We have two minutes, everybody will have two minutes to react, both to respond to questions and also some reactions, would you like to, yeah? Okay, so Luca, we’ll start with you.

Luca Belli:
All right, so I want to react quickly to the comment that has just been made about the fact that for most people the internet is the web. I think I disagree, and my point before was precisely that for most people the internet is actually not the web, because those who have the privilege of fixed internet access can use the web as an app to browse the internet, and that is a chance. But for most people, a couple of billion people at least in the world, the internet is not the web. The web is only a small part of what they do on the internet. What they primarily do on the internet is use two or three applications on their mobile phones. So it’s a very tiny part of the internet, and it’s really an enormous walled garden in which they don’t have the privilege of being internet users browsing the web. But they are data-fied, basically, the entire day, and they are fed with algorithmically recommended content based either on their profile or on who pays the most. So, if we want to get into what the IGF could do: well, what it has been doing over the past years is providing a platform for these issues to be discussed. What the IGF has not been doing, but should have been doing, is also trying to recommend solutions that could be utilized to, if not solve, at least mitigate these kinds of issues. And these kinds of issues require cooperation. If you take my point on the possibility of considering the internet a public good, any kind of public good requires cooperation to be managed, right? The commentator explained that at the end of the day public goods end up being utilities, of course, because the market does not know how to price them, and then you usually have the state intervening to provide them. But it is also a huge problem when you undermine them. 
Because if you lock, for instance, 3 billion people into a couple of social media platforms and you feed them only fake news, then you are undermining democracy, you’re undermining human rights, you’re undermining the economy. But you cannot price it. You can say 10% more connectivity penetration equals a 3.6% increase in GDP, but you cannot say 70% of people in a country only having access to Facebook equals minus 50% of democracy. Because you cannot price democracy. You cannot price it. You cannot meter it. So I think we have to be a little bit creative, not only discussing these issues but also understanding them. My point on public goods is not necessarily advocacy; I’m not advocating for considering the Internet a public good. I think one has to consider both sides of the coin, and the interesting part would then be not to understand whether the Internet should be considered a public good, but, when it undermines public goods, how to meter that, what kind of solutions there are, and who should pay the cost.

Paula Martins:
Thank you, Luca. Nandini, thank you so much for doubling as our speaker and our online moderator, I would like to give you the floor now to have your final remarks.

Nandini Chami:
Yeah, so I would just like to take on one of the questions that came from the Bangladesh remote hub: what does it mean to make the Internet accessible and available and affordable for the most marginalised? Because I think it speaks to everything we have been talking about so far, including the Internet as a right, the public good questions and the provisioning, and the Internet as a communication commons that is not cannibalised by surveillance capital; it speaks to all dimensions of the question. Here, I think that right from the time of the World Summit on the Information Society, we have failed to find a proper public financing model that can ensure that the Internet in its foundational sense, and not a zero-rated-services type of walled garden version, is available to everyone in the world, and 20 years after the WSIS we are still talking about that agenda. So I think we need a way to discuss the financing question. And secondly, I think when we are talking about Internet access, the question of access to what, which many of us have been talking about today, also needs to be addressed, because what is it that this access is supposed to do? There’s a study by Research ICT Africa which talks about the connectivity paradox, where connectivity actually means more digital inequality. Is that what we want, or are we talking about a world where connectivity means equitable access to the data dividends and the development dividends of the Internet for everyone? So I think we have to push this in however we engage with the Global Digital Compact process and the World Summit on the Information Society action lines that may come up. So I feel we all have a lot of work to do here. And thank you again for organizing this very, very important and critical discussion.

Paula Martins:
Thank you, Nandini. Bruna?

Bruna Martins dos Santos:
Thanks, Paula. Yeah, just to add some more thoughts to this: I agree with both our commentators’ additions to the conversation, with Anriette’s point that this is a debate with which Internet governance might have started, right? But it’s also important to be coming back to it at this point in history, where we still see the gaps. And I also agree that some of the concepts are definitely not opposed, as well as with the idea of not getting lost in the semantics of the whole conversation. At the same time, addressing the gaps is urgent, as is mitigating the abuses and discrepancies. We are at a moment in history where there seems to be a need to cooperate even further in order to advance the development and use of digital public goods, or the Internet as a commons or as a human right. At the same time, it’s important to discuss appropriate safeguards to ensure the protection of rights, privacy and data protection, as well as the inclusion of all communities and genders and regions and claims. This will help us build trust around this space and foster an inclusive technology ecosystem that also meets the needs of local populations, because that’s what we’re talking about. And last but not least, I would say we do need, and I hope we get out of the GDC, more recognition of the Internet and information as a commons or a public good, and of how we’re moving forward in governing and managing that as a collective resource. So, thanks a lot.

Paula Martins:
Thank you, Bruna. Azin, can we hear from you?

Azin Tadjdini:
Thank you. And thanks to everyone, including for the questions online. So, just to start, I completely agree with David’s point that the rights framework doesn’t and cannot cover all the elements of Internet governance, all the questions that arise there. But I think developments in the rights landscape will still very much impact the reality. Rights do not simply reflect the facts; very often they don’t. But the way in which these discussions are later articulated in terms of rights, legally conceptualized rights, will also impact how Internet governance takes place. So it’s a very central part of the discussion. And also, just on the question of infrastructure versus access, I think one added value of the rights framework is precisely the question of what type of Internet, or what type of access, we actually speak about. And what kind of content do we speak about, considering that so much of the content in places where people actually do have access to the Internet is very heavily interfered with by states, but also by the private sector? That’s where I think there’s an added value of the rights framework, which provides for the free flow of ideas of all kinds.

Paula Martins:
Thank you, Azin. I only referred to the speakers, but we still have some time. So I’ll give our commentators also the chance to share some final remarks before we have a quick systematization of the key takeaways. David, do you want to start? I’m asking if you want to.

David Norman Souter:
Thank goodness for that. Maybe I’ll just pick up on a couple of the questions and raise some complications with them. So, firstly, on empowerment and tools for empowerment: I’ve always had a slight problem with the way in which this concept is used, because innovations like the Internet empower everyone, not just those who are disempowered, and not just those whom we might hope to empower because they are disempowered or for progressive reasons. The Internet empowers everybody, including people who are powerful and abuse their power. So it can empower people who are disadvantaged, in the sense that it gives them additional resources which they can use in effective ways, but at the same time it can empower those who have power over them more than it empowers them. Think of that in terms of employer-employee relationships, say, or landlord-tenant relationships, or some gender and family relationships. So empowerment is a more complicated issue, I think, than is often discussed here. On the environmental point, I didn’t quite catch the question, I’m afraid, but I spoke from the platform this morning in the main session on the environment. I do think this is a particularly important dimension that is still often missing from our discussions about the direction of Internet development and, more generally, digital development, which is now much larger than Internet development. And I think there are many dimensions to it. It’s often discussed in digital fora on the basis of what can be done with digital technology to address climate change in particular. But it’s important to look at this in a broader perspective. 
So there are three ways in which the digital sector is currently unsustainable in the way it is progressing, and those have to do with the overexploitation of scarce resources which are used in digital devices, the energy consumption growth rate, and e-waste, which is rarely recycled and substantially dumped in developing countries. All of those things need to be addressed, and what I was proposing in the session this morning was that we need to introduce, across the board in Internet governance and in the thinking of governments, businesses and the technical sector, an environmental ethos which considers environmental impacts as part of decision-making processes, for example in setting standards, in developing new applications, in deploying networks. So the goal is both to maximize the potential value of those innovations and to minimize their environmental footprint.

Anriette Esterhuysen:
Thanks very much. Well, thanks for the questions. Much as we want a paradigm shift, we’re not going to get one, so we have to start somewhere. And one thing to remember is that there is a lot of regulation of the Internet. We’ve talked about the European Digital Services Act and Digital Markets Act, and there’s regulation at many, many country levels as well. But I think the problem is that a lot of the regulation is actually not targeting the powerful; it’s targeting people, it’s targeting users. It’s not targeting the manufacturers of the devices or the builders of the infrastructure. So maybe a shift: yes, a regulatory response is necessary, but shift that regulatory response in such a way that it actually tries to address concentration of power more effectively. And this applies also to the issue of the environment and environmental impact. So we need more common acknowledgment, and some principles, that we need to regulate how companies do business on the Internet, and not just in the European Union and not just targeting the big tech companies. Actually, we need to look at business as a whole and look at diversifying and opening markets more and opening innovation more. You know, this notion of permissionless innovation is another thing. We need to be able to regulate innovation in such a way, not to prevent it, but to prevent harm. And we have principles from the pharmaceutical industry, like the idea of the precautionary principle. If new apps are developed that could change how children learn or how people interact with one another, why should those not first be tested before they can go into operation? I think we also need to regulate how infrastructure is built from an environmental impact and energy perspective. And yes, I’ve mentioned devices already, and this includes the disposal of the devices, but also the endurance of the devices. 
I mean, surely we should have regulations that stop e-waste from proliferating at the rate that it does. And one can introduce standards for the manufacturing of devices so that they last longer and so that they can be updated. It is possible to change that. But to change that, we also need to shift our understanding of development, this whole notion that growth equals development, that we will alleviate poverty by creating more rich people. We need to shift away from those conceptions. But then I think one of the real challenges here is that because we’re dealing with a globalized network and a globalized market, we cannot rely on our traditional frameworks. By the way, I think the rights framework does give us what we want, but there’s one thing in the rights framework that doesn’t work that well for us, and that’s the concept of the state as duty bearer and the rest of us as rights holders, and that it’s the state’s responsibility to ensure that private sector actors comply with rights frameworks. It’s very hard to make that work within the current internet context, both because the companies are so powerful and because the states are so unaccountable. It becomes very hard to trust states to play the role of holding business accountable when states themselves are not accountable for acting in the public interest and for managing, growing and regulating the internet in the public interest. And then I think we also need a third dimension here, and that is about empowering communities and allowing communities to create their own approach to the internet. One of the things that we’ve learned in APC by working with community networks is that there are communities who only want internet in their village. They don’t actually want to connect to the global internet. They’re quite happy just to be able to have free WhatsApp calls in their own area. So I think that’s the third thing. 
So I want to stop at that, but I do want to say, as Azin mentioned, the complexity of the human rights framework, I think it gives us a lot to work with. I disagree with you on that, David. And she didn’t mention it specifically, but the guidelines on business and human rights will probably never become a treaty because some governments will block it, but they are providing us with really good guidelines on how to ensure that companies are accountable.

Paula Martins:
Thank you all for the very insightful contributions. Very, very interesting. And I’ll now give the floor to my colleague, Valeria, who took lots of notes, and she’s going to tell us how we are going to use all these insights that we compiled here today. Valeria.

Valeria Betancourt:
Thank you so much, Paula, and thank you so much to all of you for your insights. Actually, Anriette, you did my job, so thank you so much, Anriette, because, yes, I won’t repeat what has been said. I just want to say that we were motivated to convene this conversation because obviously there are questions that are not resolved, starting with the basic one about what the Internet is, and also how we can deal, moving forward, with the governance of not only the Internet itself but the governance of the implications and impacts of its use. And we are in a very particular moment in which the evolution of Internet governance is proving that it interplays very directly and deeply with the broader governance of the digital realm. So in this current moment, several spaces and processes dealing with issues of Internet policy, Internet governance and digital governance have proliferated. So what can we do? What’s next? How can we keep contributing? Two years ago we started a process to imagine the future of Internet governance, and this session closes that initial phase of our reflection, and also the thinking and analysis of how we can fit into these several processes that are looking at the configuration of the digital future. So in this moment, well, the speakers have said it all. I think what we can push for, and how we expect to use the outcomes of this conversation, is to precisely nurture and feed into the thinking of what are the compromises that are needed between the different stakeholders and what are the principles that should guide Internet governance and digital governance moving forward, to precisely avoid what Anriette was saying, to avoid causing further harm, and to put people in the center and regulate in a way that addresses the structural problems, particularly the concentration of power in corporations but also the lack of accountability of public actors, in particular governments.
So we expect to use those inputs not only in the framework of the IGF and the conversations happening here, but also in the framework of the Global Digital Compact and the WSIS+20 review, and the commitments emerging in the context of the acceleration of the implementation of the Sustainable Development Goals. So the role that we want to play, and how we want to use these inputs, is to make sure that these questions that have remained unresolved for so long are still addressed, and addressed in a way that helps us to move towards basic compromises to address structural inequality, towards ensuring that the internet can serve the purpose of ensuring a dignified life for everyone. So that’s what I want to say, and then I pass it to you, Paula, for closing.

Paula Martins:
And I would just like to thank you all again. Thank you, Valeria, the participants online, the speakers online. And with that, we wrap up today’s session. Bye. Thank you. Thank you. Thank you.

Anriette Esterhuysen
Speech speed: 174 words per minute
Speech length: 1974 words
Speech time: 679 secs

Audience
Speech speed: 154 words per minute
Speech length: 266 words
Speech time: 104 secs

Azin Tadjdini
Speech speed: 147 words per minute
Speech length: 1193 words
Speech time: 486 secs

Bruna Martin-Santos
Speech speed: 171 words per minute
Speech length: 1118 words
Speech time: 391 secs

David Norman Souter
Speech speed: 149 words per minute
Speech length: 1418 words
Speech time: 570 secs

Luca Belli
Speech speed: 175 words per minute
Speech length: 1191 words
Speech time: 409 secs

Nandini Chami
Speech speed: 174 words per minute
Speech length: 1891 words
Speech time: 653 secs

Pablo Castro
Speech speed: 178 words per minute
Speech length: 630 words
Speech time: 212 secs

Paula Martins
Speech speed: 177 words per minute
Speech length: 1327 words
Speech time: 449 secs

Valeria Betancourt
Speech speed: 147 words per minute
Speech length: 527 words
Speech time: 215 secs

VoD Regulation: Fair Contribution & Local Content | IGF 2023 WS #149


Full session report

Cho Changeun

The discussions in South Korea revolve around various topics related to the regulation and management of internet traffic and collaboration between companies. One notable incident was Netflix’s lawsuit against SK Broadband in April 2020. Netflix claimed network neutrality, arguing that platforms should not have to pay network fees to ISPs for increased traffic. However, the court disagreed with Netflix, establishing that platforms have a responsibility to contribute to network costs.

While some ISPs have traditionally charged network fees, others, such as KT and LG U+, have chosen to partner with Netflix in profit-sharing agreements. This approach allows them to bypass network fees and instead generate revenue from their collaboration with the streaming giant.

On the other hand, global over-the-top (OTT) platforms like Netflix and Disney Plus are showing an increasing interest in investing in local Korean content. The government supports this trend by offering additional tax deductions for video content produced within the country. Politicians are pushing for the worldwide expansion of Korean OTT platforms, aiming to increase profits for Korean companies. These platforms are also competing to acquire Korean dramas, which hold symbolic representation of the Korean wave, further emphasizing the significance of local content.

However, concerns have been raised regarding the burden placed on ISPs for traffic management responsibilities. It is argued that businesses generating substantial traffic should take responsibility for managing it, rather than leaving it solely to telecom companies and ISPs. The South Korean government has taken steps to ensure a minimum broadband speed of 100Mbps as a universal service, requiring continual investment. The CEO of Korean ISP, KT, has highlighted the potential traffic problems that could arise in the future due to the rapid development of AI.

The government’s role in these discussions is viewed as more of a mediator in negotiations, rather than actively creating new regulations. It is believed that the government should facilitate cooperation between companies, rather than imposing additional regulations. This approach is seen as more effective in managing the economic effects derived from the Korean wave and fostering collaborations between companies.

Regulation was mentioned in relation to the stability of online content providers, referring to the Netflix Act. This act applies to platforms and video-on-demand (VOD) service providers with more than 1 million daily users and accounting for more than 1% of total traffic in Korea. However, Netflix has not been a point of concern in terms of stability and regulation.

Advancements in technology have also been highlighted in the discussions. It is argued that newer technology can already handle issues related to internet traffic, reducing the need for additional regulation. The rapid pace of technological development also means that government regulations can become outdated quickly, leading to the belief that the government’s role should be more focused on negotiation between companies.

Overall, the discussions highlight the complexity of managing internet traffic and the importance of cooperation between companies. The government’s involvement is deemed essential in facilitating these collaborations and ensuring the stability of online content providers. The popularity surge in video-on-demand, linear streaming, and live streaming further emphasizes the need for a deeper impact analysis. In considering local content contributions, platform and content neutrality should also be taken into account to ensure a fair and diverse content ecosystem.

Toshiya Jitsuzumi

Telecom carriers worldwide are heavily investing to improve Internet quality due to the surge in Internet usage driven by video-on-demand (VOD) platforms, resulting in increased operating costs. The digital divide, especially in rural and disadvantaged areas, requires additional investment from network operators and over-the-top (OTT) players. Different approaches, such as the Pigouvian tax and the Coasean method, have been proposed to resolve disputes between telecom operators and OTT players.

In Japan, the extensive coverage of a high-speed fiber optic network has reduced the need for funding from OTT players, and the oligopolistic nature of the Japanese telecom market may contribute to operators’ hesitation in seeking financial support. Drafting regulations in this rapidly changing market is challenging and can cause additional problems; voluntary negotiations and alternative dispute resolution mechanisms are proposed solutions.

New regulations targeting major VOD operators and incentivizing them to provide better quality services have been suggested. However, concerns arise about fair contributions and potential implications for net neutrality, and excessive regulation may discourage VOD operators from entering or staying in the market. A data-based approach and data disclosure by VOD players are recommended.

Moderator

The Japanese Video-on-Demand (VOD) market is experiencing rapid growth, reaching $3.5 billion in 2022. However, there is a lack of active discussions regarding fair contribution and local content contribution within the industry. Despite this growth, the Japanese government has a more restrained approach when it comes to regulating big tech, primarily requesting registration from companies operating in Japan. Moreover, there are significant disparities among various stakeholders within the VOD industry.

One notable aspect is the significant revenue that VODs generate for Internet Service Providers (ISPs) through their volume-based charging model. This highlights the potential profitability of VODs for ISPs and their incentive to support the industry. Meanwhile, introducing regulatory measures might pose challenges in a market that is rapidly evolving.

The importance of financial resources is highlighted in the discussion of fair contribution, with monetary contributions being seen as essential. Additionally, there is a suggestion to invest in rural areas to further support the development of the local digital economy.

Another aspect is the need to enhance accessibility to local content on digital platforms. This is deemed crucial for fostering the growth of the local digital economy. Currently, the broadcasting market in Japan struggles to achieve content diversity due to the influx of US content.

To address the issue of dominance within the VOD industry, it is argued that regulatory requirements should be imposed on dominant VODs to prevent the creation of unreasonable entry barriers. Furthermore, there is a discussion around potential obligations for VODs, such as distributing disaster alerts or providing universal access to sports content, which could serve the public interest.

Considering future requirements that may be imposed on VODs, it is suggested that more robust networks will be needed. Additionally, it is noted that many of the broadcasting regulations can potentially be applied to VODs, further reinforcing the need for regulation within the industry. There is also a global trend towards content regulations and accessibility rules, aiming to address issues such as obscenity, violence, and the need for closed captions and video descriptions.

The case of the Netflix Act in Korea demonstrates how regulation can ensure internet stability for content providers. However, some argue that rather than relying solely on regulation, technological advancements should be prioritized to resolve traffic problems. It is also worth noting that the impact of new streaming services, such as linear and live streaming, needs to be analyzed to gain a comprehensive understanding of the industry landscape.

Overall, while the VOD market in Japan is experiencing significant growth, there are numerous considerations and debates regarding fair contribution, local content contribution, government regulation, dominance, network costs, and the role of technology. The analysis suggests that various stakeholders, including the government, ISPs, VOD operators, and content providers, need to collaborate and engage in active discussions to establish appropriate regulations and policies that foster growth, mitigate challenges, and ensure a vibrant and inclusive VOD industry in Japan.

Nami Yonetani

The rise of global Subscription Video On Demand (SVOD) platforms, particularly in the US and Asian countries, is influencing video streaming regulations worldwide. SVOD giants like Netflix, Amazon Prime Video, and Disney+ have targeted global expansion and have a significant impact on the traditional audiovisual media industry. However, they face pushback from regulatory actions, with some arguing that these measures are anti-consumer. Despite this, local content contribution regulations are being introduced in European and British Commonwealth countries, while Japan presents a unique case where regulatory discussions on local content are yet to occur. The need for regulations is also debated in terms of network costs and content accessibility, as well as the potential impact of high local content requirements on consumers. It is suggested that regulations should be imposed under certain conditions and that VODs should contribute to network costs. Also highlighted are the importance of promoting local content without discriminatory requirements and of fostering collaboration between VODs and telecommunications companies. The overall goal is to strike a balance between promoting local content and ensuring consumer choice and affordability.

Audience

The discussions revolve around the importance and complexity of network and net neutrality. Network neutrality is crucial to maintain an open and fair internet, ensuring that all data is treated equally regardless of its source, destination, or content. This principle prevents internet service providers (ISPs) from blocking or slowing down specific websites or services, or charging extra fees for faster access to certain content. Network neutrality promotes innovation, competition, and freedom of expression online.

However, the issue of network neutrality is complicated by cross-border issues and the influence of tech giants. In a globalized digital landscape, regulating the internet in a way that addresses the concerns of all stakeholders is challenging. Tech giants like Google, Facebook, and Amazon wield significant power and influence over online platforms, making it difficult to strike a balance between protecting user interests and ensuring a competitive market.

To tackle these challenges, some countries are implementing regulations to address network neutrality. While regulations are being developed in certain countries, there is a need for more information and ideas regarding new regulations for handling the complexities of network neutrality. This indicates that ongoing discussions and exploration of potential regulatory frameworks are necessary.

The discussions also consider the impact of strict regulations on Video on Demand (VOD) platforms and potential drawbacks. VOD platforms, which offer on-demand access to video content, have gained popularity. However, concerns arise regarding the content available on these platforms and the potential for harmful or inappropriate material to reach audiences. This raises the question of how to better measure and regulate VOD operators, highlighting the need for clearer guidelines in this area.

Overall, the discussions underscore the importance of network neutrality, the challenges posed by cross-border issues and tech giants, and the necessity of effective regulations to address these complexities. The impact of regulations on VOD platforms and the potential drawbacks are also examined. However, it is evident that further information, ideas, and discussions are required to develop comprehensive regulatory frameworks that protect internet openness and fairness while addressing the concerns of different stakeholders.

Session transcript

Moderator:
Okay. It’s time to start. Hello, everyone. This is workshop number 149, VOD Regulations: Fair Contribution and Local Content Contributions. Okay. So, let’s start the workshop. Oh, I wanted to. Oh, yeah. Thank you. Here are the speakers of this session. Local speakers are here at the venue in Kyoto: Dr. Nami Yonetani, and I’m Ichiro Mizukoshi. I’m also the organizer of this session. And remote speakers: Dr. Toshiya Jitsuzumi from New York. Can you hear me? Oh, yes, I can hear you, but I’m not from New York. I’m from DC. Wow. Yeah. Thank you. And Mr. and Ms. from Seoul. Can you hear me? Hello. I can’t hear you. It’s okay. I can hear you. All right. Each one’s background will be covered in their presentations. So, yep. Today we discuss two regulations. One is fair contribution and the other is local content contribution. This is a very, very rough explanation of these regulations. Fair contribution redistributes money from VODs to invest in telecommunication infrastructure such as fiber, towers, et cetera. And local content contribution redistributes money from VODs to produce local content and orders them to promote local content. Again, these are very, very rough explanations. So, after this workshop, I hope everyone gets a more correct understanding of these regulations. We want to make this workshop a bit interactive with a polling system on Slido. Please access Slido with code 3965076 and click the URL that’s posted here now. Just a moment. Yeah, I’m sending the URL. Could you click that URL and take the poll? Yep. Sorry. What’s going on? Yeah, yes. This is a practice. Please choose one. Before this workshop, we want to know your background. Have you ever heard of fair contribution, or local content contribution, or both, or neither? Just wait for the polling. Yes. Oh, no. Okay.
Polling is coming. Just a moment. I’ll share the slide to see the result. Yeah. Five people are voting. Oh, everybody knows about this session. Oh, no. One knows only fair contribution. One knows nothing. Okay. So now, that was a practice, so I switch to the real polling now. Just a moment. Yep. I want to hear your opinion; it’s too small here, but you can see it on your display. Just a moment. Yeah, yeah, yeah. Oh, no. Why? Just a moment, please. Got it. Okay. Wait a moment. The polling question is: do you think the government should make rules for VODs to distribute some of their profit to the local telecom industry, or the local content industry, or both the local telecom and content industries, or none of them? Even if you are in the middle of the workshop, please vote, and you can edit or change your vote at any time. Okay. I will show you the result at the end of the session. So, back to the agenda. Here’s today’s agenda. First, Toshiya Jitsuzumi will talk about fair contribution. Next, Ichiro Mizukoshi talks about the case study in Japan. Then, Mr. Cho shows the situation in Korea. And finally, Dr. Yonetani gives a presentation about local content contribution. After the presentations, we have time for Q&A and discussion. So, Professor Jitsuzumi, please, is it okay?

Toshiya Jitsuzumi:
Okay. First, let me share my slides. Yeah. I stopped the slides. Can you see my slides? Yeah, we can see it. Okay. So, let me start. Hello, ladies and gentlemen. My name is Jitsuzumi, of Chuo University, currently staying at Georgetown University and enjoying my sabbatical leave here. While I deeply regret not having the chance to meet you in person in Kyoto, I’m grateful for this opportunity to present a brief summary of the fair contribution debate. But let me first introduce myself very briefly. After an 18-year tenure at the Japanese telecom authority, the Ministry of Internal Affairs and Communications, I am now a faculty member at Chuo University and teach microeconomics focusing on network industries. My recent research includes net neutrality, AI regulation, platform regulation, fragmentation of the internet, and fair contribution, which is a central theme of today’s panel discussion, I think. All right. Let me summarize three backgrounds that have ignited the discussion on fair contribution so far. The first is, of course, the remarkable surge in global internet usage. This phenomenon is notably driven by the rapid expansion of internet video usage, which, according to a Cisco estimate in 2018, has quadrupled in consumer usage and tripled in business usage over the five years from 2017. As you can easily understand, this escalating internet usage imposes a substantial strain on the operation of the underlying network. According to an estimate by Frontier Economics, the cost to European broadband networks can range from billions to tens of billions of euros per year. Facing such a robust surge in network utilization, telecom operators worldwide are channeling considerable investment into their networks to uphold and, where possible, augment the quality of the internet experience for their users.
However, in Japan, the growth in investment by telecom carriers has not resulted in a proportionate improvement in network quality, leading to challenges in offering enhanced broadband services that meet higher network standards. As my research in the 2010s reveals, displayed in this slide, especially on the left-hand side, network investment by Japanese operators initially led to notable improvement in latency and then gradual enhancement in effective download speeds. Unfortunately, as the right-hand graph shows, the download percentage, which is the ratio of effective speed, I mean actual speed, to advertised speed, has been declining, indicating that Japanese telecom operators need to further expand their investment in order to accommodate the demand of broadband users. As you can easily understand, this situation is just one example of the difficulty faced by telecom operators in many countries, and it is one of the biggest backgrounds of the fair contribution controversy. The second one relates to the fact that in most countries there still remains a substantial digital divide or digital gap in rural and poor areas. In its 2019 report, the Broadband Commission, which was established by the ITU and UNESCO, underscored the lingering global digital gap and called for additional investment from network operators, which of course includes OTT players. The third one pertains to the sustainability of universal service mechanisms. Currently, many governments bear the responsibility of ensuring essential telecom services for their citizens. This policy necessitates a program of support to network operators which supply services in unprofitable remote regions. And the funds that support this subsidy mechanism are often collected from users of very basic voice services. However, unfortunately, the proliferation of broadband has led to a significant decline in voice service demand, raising concerns about the long-term sustainability of such subsidy schemes.
Therefore, in the U.S., this has prompted discussion, as you can see in this slide, regarding alternative funding sources, which I believe are very compatible with the fair contribution concept. In order to address these three challenges, operators must find new sources of revenue. But in the real world, numerous constraints curtail their capacity to freely pursue this quest. First, telecom carriers are usually bound by rate and entry restrictions originally attributable to their natural monopolistic nature. Second, considering the pivotal role of telecom services, even legally permissible actions may encounter political constraints. One example involves a significant reduction in mobile prices in Japan following an influential political statement by the chief cabinet secretary at that time, who then became our prime minister. Third, as shown in the graph on the right, the average revenue per user has diminished in recent years, indicating that any fee increase risks significant user attrition. Fourth, in an era where IoT devices are penetrating all around us, acquiring additional frequency licenses is imperative for telecom operators, demanding further investment to secure frequency licenses and ultimately thinning the financial resources for physical network maintenance. As a result, telecom carriers’ expansion strategy realistically has just two options. First, augmenting revenue by introducing non-telecom services; I think this is eagerly pursued by Japanese mobile operators right now. Second, charging additional network usage fees on OTT operators that utilize broadband networks for content distribution and make money, which is of course the heart of the fair contribution debate. The debate about fair contribution has gained additional momentum due to the noticeable difference in profit between OTT and telecom operators.
This contrast is clearly illustrated in a report by the GSMA, a group of mobile operators, which states that while telecom operator profit has been declining, OTT operators continue to make high profits. Yet the theoretical remedy for this issue is remarkably straightforward. The crux of the matter lies in the reality that network expansion by telecom operators inherently benefits OTT players, yet the compensation flowing from OTT operators is inadequate. Internalizing the external economy properly is important to avoid under-investment by telecom operators. The resolution in theory involves two approaches. The first entails appropriate taxation of OTT operators. This is an approach proposed by the economist Arthur Pigou and known as a Pigouvian tax. The second revolves around direct negotiation between the relevant parties, in our case between the telecom operators and OTT players. This approach was suggested by another economist, Nobel laureate Ronald Coase. As an economist, it is a great pleasure for me to say that economic theory has produced such a very elegant pair of solutions, but I have to remind you that we should not forget the existence of other externalities. Investment by OTT players in content and applications can drive greater demand for broadband services, resulting in a positive external effect on telecom carriers. In order to properly deal with this externality, telecom operators would need to pay compensation to OTTs in return. So, given this dual existence of externalities, determining whether OTTs or telecom operators bear the burden rests on a case-by-case analysis. Although the theoretical solutions are so simple, translating them into practice presents significant challenges, of course. First, the majority of interconnection agreements that make up today’s global Internet are either settlement-free peering, where nobody pays for the usage of other networks, or unidirectional transit agreements where lower-tier networks always pay the upper tier.
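[Editor's note: the Pigouvian logic described above — a positive externality from network investment to OTT players leads to under-investment, and a transfer equal to the marginal external benefit restores the social optimum — can be illustrated with a toy model. All payoff numbers below are invented for illustration; none come from the session.]

```python
# Toy Pigouvian illustration (invented numbers). An operator chooses an
# investment level x. Its private payoff ignores the benefit its network
# creates for an OTT player (2 per unit of x), so it under-invests.
# A per-unit transfer equal to that marginal external benefit aligns
# the operator's private optimum with the social optimum.

def operator_profit(x, transfer_per_unit=0.0):
    """Private benefit 10x - x^2, capital cost 4x, plus any transfer."""
    return 10 * x - x ** 2 - 4 * x + transfer_per_unit * x

def social_welfare(x):
    """Operator's profit plus the OTT side's external benefit of 2x."""
    return operator_profit(x) + 2 * x

def argmax(f, grid):
    return max(grid, key=f)

grid = [i / 100 for i in range(0, 1001)]  # investment levels 0.00 .. 10.00

private_optimum = argmax(operator_profit, grid)              # externality ignored
social_optimum = argmax(social_welfare, grid)                # society's preferred level
corrected = argmax(lambda x: operator_profit(x, 2.0), grid)  # with the transfer

print(private_optimum)  # 3.0 -> under-investment
print(social_optimum)   # 4.0
print(corrected)        # 4.0 -> transfer aligns private and social optima
```

The same model also shows why the burden can run the other way, as the speaker notes: adding a reverse externality term (OTT content driving broadband demand) would call for a transfer in the opposite direction, so the net payer depends on which effect dominates.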
Clearly, most of them do not work well with a regime that enforces payment based on network usage. Second, for the Pigouvian tax and the Coasean method to succeed, the participating parties must possess sufficient information about the surrounding situation and be able to reach consensus without much difficulty. But achieving this in the dynamic broadband landscape is exceedingly difficult. Last but not least, theoretical resolutions can clash with net neutrality principles, since net neutrality prohibits network operators from charging users outside a direct contractual relationship and forbids QoS differentiation. So, under strict net neutrality, fair contribution cannot be realized. Europeans notably recognized these issues ahead of the global curve, and just before the World Conference on International Telecommunications in 2012, European operators considered introducing a new approach, sending party network pays (SPNP), to reshape interconnection dynamics, which consist of peering and transit. However, this proposal encountered substantial opposition in the proceedings of the international forum, and after this attempt, we had a decade of stagnation in the SPNP discussion. However, probably influenced by developments in Korea, which I think will be explained by my fellow panelists, the SPNP discussion began to resurface. Currently, Europe has published a proposal that calls for all stakeholders to make a fair and proportionate contribution to the cost of public goods, services, and infrastructure as a means of achieving universal gigabit coverage by 2030. Of course, this proposal sparks ongoing debate, with counter-arguments such as concerns about tighter regulation of broadband, disproportionate usage charges, and potential negative impact on broadband adoption. And while policymakers, industry representatives, and academics from various countries were engaged in heated discussion here, shocking news came on September 18 of this year.
That is, after three years of heated disagreement, SK Broadband and Netflix suddenly announced a ceasefire and ended the lawsuit. The detailed terms of the agreement are still unclear, but in any case, the situation in Korea seems to have been settled to a point. Very interestingly, however, the issue of fair contribution is hardly discussed in the neighboring country across the Sea of Japan. Several hypotheses are possible. The first one is that Japanese carriers have already installed a network of excessive capacity and coverage, so that the impact of internet video traffic is not severe. In fact, the average speed of over 80 megabits per second as of 2019 is sufficient for video viewing, and the fiber optic network has achieved more than 99.7% coverage nationwide. Therefore, there would be no need to ask OTT operators to fund new investment at this stage. However, this explanation is limited to investment expenditure and has no explanatory power regarding the operating and renewal costs of the network. Furthermore, capital investment will continue to be required for mobile networks because mobile traffic is expected to skyrocket with 5G. As a result, I think this hypothesis alone is not sufficient. Second, because of the high degree of oligopoly in the Japanese telecom market, existing network operators have long been earning sufficient profit and thus do not feel the need to ask OTTs to share the cost. At the very least, I feel this explanation applies well to the three largest companies, NTT, KDDI, and SoftBank. On the other hand, it has limited explanatory power with regard to small and medium-sized players, which are less profitable. The last but not least hypothesis is that Japanese mobile operators heavily utilize Wi-Fi offloading in order to manage the huge amount of traffic, especially at peak times. Thus, if they loudly claim fair contribution, they may end up paying large usage fees to fixed network operators.
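[Editor's note: a quick back-of-the-envelope check of the "80 megabits per second is sufficient for video viewing" point above. The per-stream bitrates used here are common rule-of-thumb figures for adaptive streaming, not numbers from the presentation.]

```python
# How many simultaneous video streams fit in an 80 Mbit/s access line?
# Per-quality bitrates are typical published streaming recommendations
# (assumptions for illustration, not data from the session).
LINK_MBPS = 80
BITRATES_MBPS = {"SD (480p)": 3, "HD (1080p)": 5, "4K (2160p)": 25}

streams = {quality: LINK_MBPS // rate for quality, rate in BITRATES_MBPS.items()}
for quality, n in streams.items():
    print(f"{quality}: about {n} concurrent streams")
```

Under these assumptions, an 80 Mbit/s line comfortably carries several HD streams and a few 4K streams at once, which supports the speaker's first hypothesis for fixed networks while leaving open the operating-cost and mobile-traffic caveats that follow.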
Whether any of these hypotheses can explain the unique situation in Japan, which differs greatly from that of Korea, is a topic for my future empirical research. All right, let's turn our focus to the Japanese telecom authorities. The Japanese telecom authorities have considered the fairness of cost distribution within the context of net neutrality discussions, but have yet to implement definitive actions. Even the recent amendment of the Telecommunications Business Act, which identified broadband as a universal service, failed to incorporate a mechanism to procure the necessary costs from OTT players. So, as a researcher, I maintain a keen interest in Japan's forthcoming decisions. With that, I conclude my presentation. Your attention is deeply appreciated, and now I yield the floor. Thank you very much.

Moderator:
Thank you very much. Yes, thank you, Professor Jitsuzumi. So the next presenter is me. Just a moment. I will send the URL again, so please cast your vote in the poll. Just a moment, please. Where is it? I can't see it. Oh, my goodness. OK. Let me start my presentation with a case study about the two regulations for VODs in Japan. First of all, my name is Ichiro Mizukoshi. I'm an internet engineer, specializing in the operation of packet forwarding and security. I'm working for NTT East, and I also serve on the external board of JPCERT/CC. But this presentation is solely my opinion and doesn't reflect my company's position, OK? Let me skip ahead. First of all, VOD is growing in Japan as well. This is the market size of VOD in Japan: it grew by 18% over the last three years and reached $3.5 billion in 2022. Here's the detail. Oh, no, no, no. VOD is getting more popular in Japan. Seven years ago, only 15% of people had used VOD, but now almost 40% have used it. Here's a detail of the Japanese subscription video-on-demand (SVOD) market. Familiar global VOD players, such as Netflix, Amazon Prime Video, DAZN, Disney Plus, and Hulu, together hold more than half of the market. Then how are the two regulations going in Japan? As Professor Jitsuzumi said, fair contribution is very inactive, and local content contribution is also inactive. I'm an ISP industry guy, so fair contribution is occasionally talked about in the ISP industry: what's going on in the EU, and how is the situation in Korea developing? But no concrete actions have occurred as far as I know. As for local content contribution, I'm not a content industry person, but based on my understanding, this topic is rarely discussed in Japan, partially because such broadcasting-style content regulations are non-existent here. So, why? Here is my humble opinion. There are three reasons, or rather hypotheses: the government's stance, the market size, and the numerous stakeholders. First, the government's stance toward big tech.
The Japanese government has a much more restrained stance compared to the EU. Here are two recent activities in Japan. The first one is the request for registration in Japan. Companies that do business in Japan are legally required to register not only their local arm but also their global headquarters with the authorities here, but for a long time this was merely requested. It means that before June 2022, even global big-tech companies such as Microsoft, Google, and Meta had not registered in Japan; they were just asked to. Some of them held out, but almost all of them are registered now. The second one is simply being made subject to regulation. This year, search engines and social media with more than 10 million users became newly subject to regulation, but there are no heavy punitive fines like in the EU; the Japanese government mostly takes a name-and-shame approach. If I may add, those two activities mainly focus on protecting personal information, so they are not much concerned with market behavior. Anyway, this approach is much more restrained. The second reason is, as Jitsuzumi-san also mentioned, the market size. As I showed in the previous slide, the VOD market is $3.5 billion, but related industries such as telecommunications, television, and video production are an order of magnitude larger than VOD. The last finding is the numerous stakeholders. It's not easy to define an ISP. However, assigning IP addresses to customers is a core function of an ISP, and currently over 500 organizations in Japan have been delegated IP address blocks to assign to their customers. Yes, that includes hosting, clouds, and so on. In any case, a fairly large number of ISPs exist in Japan. As for television stations, there are many in Japan: terrestrial broadcasters number more than 120, cable stations about 450, and satellite channels 40 or more. And within the ISP and television sectors, the sizes of these stakeholders differ enormously.
Large ones target the national market, and small ones target rural areas, so it's too hard to discuss the issue at one table. It means there are too many stakeholders to face the VOD industry together. In addition, the VOD industry itself has various domestic players. That is another reason, I think. Yeah, this is my last slide. I have been thinking about fair contribution for more than 10 years. One statement significantly impacted me. In 2010, a friend of mine, an executive at a mobile operator, said: we charge end users based on the volume of packets, so growing traffic is good for us; it is a golden egg. I think this is an important message to the whole world. Yeah, thank you. So let's pass to the next presentation. Ms. Cho, please. Can you hear me?

Cho Changeun:
Yes, I’m here. OK. Yes. OK. I’ll start. Hello, I’m Chang Cho, a research fellow at KDDI Research. Thank you for joining our workshop. My research focuses on broadcasting, ICT, and related policies. I have been writing about Korean ICT trends for various magazines of Nikkei BP, an affiliate of the Nikkei newspaper in Japan. Today, I will be speaking about fair contribution and local content contribution in Korea. Korean dramas, K-pop, and various other contents are gaining popularity around the world these days. Korea is not only a great content creator but also an internet powerhouse. It was the first country in the world to spread broadband to ordinary households, and it had the highest broadband penetration rate among OECD member countries in 2022. It created new industries, such as internet broadcasting, online game tournaments, and webtoons. When it comes to the internet, there are many instances where Korea starts first and influences the rest of the world. A recent example of this is the debate over network fees. In Korea, platforms have usually paid network fees to ISPs. It has been taken for granted that both users and service providers pay for the network. There are many companies that want to provide services in Korea, but there are only four ISPs. The situation changed when global big-tech companies like Google, YouTube, and Netflix launched their services in Korea. With the majority of Korean internet users using Google, YouTube, and Netflix on a daily basis, the platforms began to become more powerful than the ISPs. In April 2020, Netflix filed a lawsuit against SK Broadband, a Korean ISP, which had a significant impact on the fair contribution debate in Korea. First, let me briefly explain what the lawsuit was about.
Netflix asked the Seoul Central Court to rule that it was not obligated to negotiate or pay network fees for the delivery of content over SK Broadband’s domestic and international networks and for the operation and expansion of these networks. Netflix argued network neutrality, but the Seoul Central Court ruled in June 2021 that network neutrality is a principle that prohibits telecom companies from unreasonably discriminating against lawful traffic on their networks and is not directly relevant to the network fee discussion. The court also recognized that Netflix was obligated to pay SK Broadband for the paid services of accessing, connecting to, and remaining connected to SK Broadband’s network, and ruled that the two companies should negotiate how the network fees would be paid. Korean media reported that Netflix had lost in court and that all businesses offering services in Korea must pay network fees. To understand how Netflix’s lawsuit against SK Broadband began and why the Korean court upheld network fees, we need to go back to 2006. In 2006, the Korean ISP Hanaro Telecom launched Hana TV, an IPTV service that allows users to stream video over the internet with a set-top box on their TV. It was the first IPTV in Korea and gained a lot of popularity due to the convenience of being able to watch video on TV using a remote control. However, other ISPs did not like this and blocked Hana TV, claiming that it increased the traffic of the subscribers who were using Hana TV’s VOD. Hana TV is still in service today as SK Broadband’s BTV. With the enactment of the Internet Multimedia Broadcasting Business Act in February 2008, KT, SK Broadband, and LG Uplus, which are both telecom companies and ISPs, launched closed IPTV services for their own subscribers. With these three companies offering combined internet, phone, and IPTV discounts, most households now subscribe to IPTV. In 2012, Samsung Electronics’ newly launched Smart TV became a problem.
When the Smart TV offered services that allowed access to video content from various companies, the ISP KT blocked the Samsung Smart TV from accessing KT’s internet, citing increased traffic. In the end, smart TVs wound up paying network fees to ISPs, or paying something even if it wasn’t called a network fee. Then in 2016, Netflix launched in Korea, its subscribers started to grow, and YouTube users skyrocketed. Naver, the portal platform with the most users in Korea, raised the issue of network fees. Naver asked ISPs to disclose how much YouTube, which accounts for 17.8% of video viewing time in Korea, paid in network fees. Naver argued that network fees that are strictly applied to Korean companies but not to global companies are problematic. The National Assembly also said that it was not fair to discriminate between Korean companies and global companies. The debate started over paying network fees fairly, and the discussion on fair contribution to internet network maintenance and investment began in earnest. As Netflix subscribers grew, so did the traffic from Netflix VOD. When Netflix launched in Korea, two of the country’s ISPs, KT and LG U+, partnered with Netflix to make it available via their IPTV remotes; instead of a network fee, they chose to share the profit when their subscribers signed up for Netflix. SK Broadband decided to charge a network fee. Starting in 2018, SK Broadband expanded its network because it felt that the traffic from Netflix subscribers was harming users who didn’t subscribe to Netflix. Since it expanded its network for Netflix, it proposed that Netflix should share the cost. In 2015, Netflix had insisted on installing Open Connect inside SK Broadband’s facilities, claiming that it could reduce traffic by 95%. SK Broadband refused, saying that Open Connect would not solve the problem, and asked the Korea Communications Commission to mediate in order to make Netflix negotiate a network fee.
Netflix rejected that arbitration and asked the court to rule that it did not have to pay the network fee and did not have to negotiate. The result was a de facto defeat for Netflix. In September 2021, SK Broadband sued Netflix under the unjust enrichment provision of the Civil Act, claiming that Netflix had ignored the first judgment and refused to pay network fees or to negotiate. Netflix appealed in December 2021, insisting that SK Broadband could not bill Netflix for the increased traffic because SK Broadband charges its subscribers based on internet speed. Google and YouTube sided with Netflix. The National Assembly was considering a law change that would require CPs to sign contracts with ISPs stating that they must pay for network usage, which they opposed. In September 2023, Netflix, SK Telecom, and SK Broadband formed a strategic partnership and ended all litigation. The three companies announced that they would work together for the good of their users. Korean media reported the following reasons for the end of the litigation: Netflix would have paid SK Broadband for its network behind closed doors, because it was likely to lose on appeal, and setting precedents would have triggered lawsuits in other countries. Experts calculated that Netflix’s network fees could have been up to $108 million. In Korea, why do we have to pay ISPs network fees? The Korean Telecommunications Business Act defines interconnection as paid peering by default. Interconnection between ISPs is also paid peering; ISPs are contracted on a bill-and-keep basis, so they don’t pay each other, but it’s not free. In Korea, there are only four ISPs, but there are many OTTs. The most popular OTT in Korea is Netflix; in September 2023, it was reported that more Koreans use Netflix than all Korean OTTs combined. For ISPs, the most important question is what kind of agreement they have with Netflix to address network fees and fair contribution issues.
Korean ISPs are working with European telecom companies who have the same concerns. In Europe, the majority of traffic is dominated by a handful of big-tech companies. But even in Korea, some people are questioning network fees. Civic groups argue that subscribers pay for the internet every month, and ISPs also collect network fees from service providers; this is like a toll tax. Does this mean that a popular K-pop content platform should pay network fees to overseas ISPs because it has a lot of overseas users, civic groups ask. The National Assembly is considering amending the Telecommunications Business Act to make network fee agreements mandatory. However, the legislation has stalled amid growing public opinion that the matter should be left to negotiations between ISPs and other companies. In Korea, the Netflix Act was enacted in December 2020, following the lawsuit by Netflix, stating that big-tech platforms are also obligated to keep the internet network stable. Apart from network fees, platforms will be held liable if they cause excessive traffic and disrupt services. From here, I’ll talk about local content. As the Korean wave has become a global phenomenon, global OTTs like Netflix and Disney Plus are investing in Korean local content. They are competing to acquire Korean dramas. The government has given more tax deductions for video content produced in Korea. Korean productions are happy. But politicians think that Korea shouldn’t be a content production subcontractor for Netflix. For the sake of more profits for Korean companies, they are not satisfied with exporting content and want to support the worldwide expansion of Korean OTT platforms. Finally, who should pay for sustainable internet service? In 2020, the Korean government designated 100 Mbps broadband as a universal service: in any region, telecom companies must provide internet at a speed of at least 100 Mbps to all subscribers. This requires ongoing investment.
The number of people using VOD services is growing, and a small number of companies are generating huge amounts of traffic. We need to think about whether telecom companies and ISPs should bear all the responsibility for the traffic, or whether the businesses that cause the traffic are also responsible. The CEO of the Korean ISP KT said that in the future, not only VOD but also AI may cause traffic problems. In the case of local content, Korea will invest more in content to maintain the economic effects derived from the Korean wave. Global OTTs that distribute Korean local content are also important, so cooperation matters. In order to create an environment where everyone makes a fair contribution, rather than one where someone has to sacrifice, I think there should be more communication between companies, like the workshop we are doing today. Thank you for listening.

Nami Yonetani:
Thank you. Hello, everyone. Hi. I’m Nami Yonetani. Today in my presentation, I would like to talk about the regulatory responses around the world to the rise of global video streaming giants, with a particular focus on the discussions on local content contribution. Before getting down to the main point, please allow me to introduce myself briefly. Again, my name is Nami Yonetani. I’m a researcher at the Foundation for Multimedia Communications in Tokyo, Japan. Actually, this is my very first time attending the IGF, and I’m very excited to share my research findings today. So, let’s move on to our main topic. First of all, I would like to give you a quick overview of the current state of video streaming. Although there are many types of video streaming services, they can be classified into five categories by their revenue model and distribution model: AVOD, FAST, SVOD, vMVPD, and TVOD. Among all of these, SVOD, which stands for subscription video on demand, is leading the global streaming market. As you can see in this table, SVODs based in the US and Asian countries have a great number of global subscribers. In particular, the three major US SVODs, which are Netflix, Amazon Prime Video, and Disney+, have targeted global expansion since an early stage. One of the reasons for their global popularity is their original and exclusive content. For example, Netflix’s original series called Wednesday became a global smash hit last year, and it is reported that it has been watched for more than 1 billion total hours by over 150 million households. Various previous studies have shown that this popularity of US SVODs is having a great impact on the traditional audiovisual media industry. For example, in the broadcasting industry, consumers are moving away from pay TV to US SVODs, which are rich in content and much cheaper. The same thing is also happening in the film industry.
Recent studies have discovered that consumers, especially the younger generation, are moving away from the cinema to watch films on US SVODs. Given this situation, more countries are regulating video streaming platforms to counter US SVODs. There are several points of contention, but one of the biggest is possible unfair market competition caused by regulatory asymmetry between broadcasting and video streaming platforms. Although broadcasting and streaming are similar in that they both distribute video, the law tends to lag behind video streaming, which is a relatively new technology. Since around 2020, to reduce such regulatory asymmetry, some countries have imposed regulations on video streaming platforms similar to the ones they have imposed on broadcasters. In particular, regulations on local content contribution, which is the subject of this presentation, are being introduced mainly in European and British Commonwealth countries. As you can see in this figure, governments are applying or trying to apply local content requirements, such as content quotas and financial contribution obligations, not only to broadcasters but also to video streaming platforms. This table shows you the countries which have introduced or are in the process of introducing local content requirements on video streaming platforms. As a side note, as Dr. Cho mentioned in her presentation, South Korea takes a quite different approach from these listed countries. While these countries are taking a defensive approach to protect their local audiovisual industries against US SVODs, South Korea is adopting a more aggressive approach to foster a local industry that can compete with US SVODs. Here I want to point out that there is more than one policy approach towards large US streaming platforms, and my presentation is focusing on the defensive approach taken by the countries listed in the table.
In the next few slides, I would like to share the regulatory discussion in the countries highlighted in sky blue, which I consider to be important and distinctive. We’ll start with the European Union, which is the pioneer in regulating local content contribution. In response to the accelerated influx of US content in the 1980s, the European Union introduced the Television Without Frontiers Directive in 1989, with the aim of protecting the European audiovisual industry from the US, and required broadcasters to contribute to European works. Subsequently, in 2007, the Audiovisual Media Services Directive (AVMSD) was introduced to regulate video-on-demand (VOD) platforms, and the directive was revised in 2018 to require VOD platforms to contribute to European works as well. Firstly, the revised AVMSD instructs member states to make VOD providers under their jurisdiction secure at least a 30% share of European works in their catalogs and ensure the prominence of those works. It does not mention exactly how to give or measure prominence, but the expectation is that European works should be immediately viewable on VOD platforms, for example on their homepages, in recommendations, and in search results. Secondly, the revised AVMSD says that member states may require both domestic and foreign providers targeting audiences in their territories to contribute financially to European works. Among all member states, France is the one with the strictest requirements on VOD platforms. France transposed the revised AVMSD into national law in 2021, but with further, stricter requirements. Moreover, they have their own system called the media chronology rule, an industry agreement on the release windows of films, which is formed in the government’s presence. The latest version of the rule came into effect last February, and it was decided that SVODs can distribute films 17 months after cinematic release. However, this may be shortened to six months with the agreement of the film industry.
And in fact, Netflix has succeeded in shortening the distribution period from 17 to 15 months in return for extra investment in French films. On the other hand, Walt Disney is raising a strenuous objection to this rule, arguing it’s outdated, and they are protesting by refusing to release some new Disney films in French cinemas and distributing them exclusively on Disney+. In contrast to France, the United Kingdom transposed the revised AVMSD into national law with minimum requirements before Brexit. After Brexit, it started to explore its own regulatory framework to ensure the Britishness of content, and announced the draft Media Bill this March. The focus of this bill is to maintain and strengthen public service broadcasters (PSBs). So what is a PSB? PSBs are broadcasters intended for public benefit rather than to serve purely commercial interests, and they are granted privileged access to spectrum in return for various content obligations. Returning to the bill, there are a number of proposals aimed at strengthening the presence of PSBs, but one of the most eye-catching is to give prominence to PSBs’ video streaming services on connected devices, including smart TVs and streaming devices such as the Amazon Fire TV Stick. What is more, the BBC has sent a letter to Parliament calling for a dedicated button to provide a shortcut to PSB streaming services on all television remote controls, so the current regulatory discussion in the UK affects not only video streaming platforms but also manufacturers. Now I would like to turn our attention from Europe to a British Commonwealth country, Canada. Canada is a country of immigrants and a neighbour of the superpower, the US; therefore, the overriding issue for Canada has always been how to shape its identity as a nation. With this in mind, the Broadcasting Act was amended in 1991 to protect Canadian culture from the arrival of American television.
The Act implements the so-called Canadian content rules for broadcasters and requires content quotas and financial contributions. However, in 2015, the government expressed concern that the Canadian content rules on broadcasters were becoming a dead letter as video streaming services grew in popularity, and argued the need to support the discoverability of Canadian content. Finally, the Online Streaming Act was passed this April to modernise the Broadcasting Act. This Act expands the definition of broadcasting to include video streaming and applies the Canadian content rules to both domestic and foreign video streaming platforms, and they are currently in the process of public consultations and hearings to finalise the regulatory framework. Finally, turning to a country that is in neither Europe nor the British Commonwealth, I would like to mention Israel. First and foremost, I would like to extend my deepest sympathies and condolences to the victims and their families in Israel and Gaza; the situation is absolutely heartbreaking. So let me get back to the regulatory discussion. Pay TV operators are already obliged to make financial contributions to local content in Israel, and the Netanyahu government, which was formed last November, introduced the Broadcasting Bill this August to apply the same obligation to video streaming platforms. However, there are voices predicting that the government, which is said to be the most far-right and religious in Israel’s history, might introduce stricter video streaming regulations, including content quotas, for political reasons in the future. So here, in this slide, I wanted to point out that there is a possibility that local content requirements may be imposed not just for cultural or economic reasons but also for political reasons, and of course this could apply to other countries as well, not only Israel. The video streaming platforms are, of course, trying to push back against such regulatory actions.
Netflix claims that legal compulsion to contribute to local content would be anti-consumer. Firstly, they point out that content quotas would merely encourage spending on quantity over quality and result in the mass production of cheaper, low-quality works. Secondly, they argue that tweaking recommendations in ways that undermine viewers’ choices and preferences to meet the prominence obligation would lead to a disappointing viewing experience. So let’s say there’s a viewer out there who loves romantic comedies, but Netflix recommends a zombie film to them only because it’s local content. It is easy to imagine that the viewer would be less satisfied and might give a low rating to the zombie film. What is more, they might even cancel Netflix. So Netflix is saying that the prominence obligation would make no one happy: neither the viewers, the creators, nor the video streaming platforms. Thirdly, Netflix insists that market forces have already made Netflix spend significant money on local content, and thus there is no need for a legal obligation. Finally, they point out that it is discriminatory that foreign video streaming platforms are often obliged to contribute to a local content fund but are not allowed to use it. On another front, Walt Disney mentioned a new global expansion strategy for Disney Plus this August. They revealed that they are thinking of prioritizing markets to make Disney Plus profitable by 2024. To be specific, while they are going to continue to invest in local content in high-potential markets, they will invest less in local content in mid-potential markets and may even shut down Disney Plus in low-potential markets. So this means that in some countries, Walt Disney might decide that following local content requirements is unprofitable and shut down Disney Plus. So far, I’ve shown the discussion in European and British Commonwealth countries, but from now on, I would like to look specifically at what is happening in Japan.
In Japan, so far, there has been no regulatory discussion regarding contribution to local content by video streaming platforms. I hypothesize that there are two reasons for this. Firstly, the regulatory playing field between broadcasting and video streaming platforms is relatively level in Japan. And secondly, Japan is a Galapagos market where broadcasters are not much damaged by the US SVOD giants. Let me explain the first reason. While broadcasting falls under the Broadcasting Act of 1950, video streaming is exempted from the Act; therefore, I think it is safe to say that regulatory asymmetry does exist between them. However, such asymmetry is slight in Japan, as broadcasting regulations, especially content regulations, are much more relaxed than in other countries. For example, we do not have local content requirements for broadcasters like the European and British Commonwealth countries do. In this sense, broadcasting and video streaming are already placed in a fair regulatory environment in Japan. The second reason is based on the uniqueness of the broadcasting market in Japan, which has two unique aspects. Firstly, unlike in many other developed countries, including the ones we’ve seen in this presentation today, pay TV households remain in the minority in Japan: 80% of TV households watch TV via over-the-air (OTA) signals, and privately owned OTA broadcasters are the most popular television stations. This means that there is no direct conflict of interest between popular broadcasters and the US SVOD giants, due to the difference in their revenue models. Secondly, Japanese viewers prefer to watch local content, that is, Japanese content, on US SVOD platforms. The table on the left shows the result of a nationwide questionnaire I conducted in Japan in 2021, and it revealed that people, both young and old, highly prefer to watch Japanese content, especially TV programs and anime, on US SVOD platforms.
And the figure on the right is from Bloomberg last year. It shows the share of the top ten programs in major Netflix markets that also appear in the global rankings. The further to the right, the more a market’s tastes overlap with global ones; Japan is at the left end, which means that Japanese viewers have quite different tastes from global trends. So when the US SVODs arrived in Japan around 2015, most of us thought that they would be a significant threat to the domestic broadcasters. But it turns out that the US SVODs are now just new channels for broadcasters to distribute Japanese content to Japanese viewers. However, does this mean that Japanese broadcasters can keep enjoying these easy days? In my opinion, US FAST platforms, which are gradually expanding globally, could emerge as a potential threat to Japanese OTA broadcasters if launched in Japan, as both are based on the free, ad-supported business model and thus have a direct conflict of interest. If that happens, some sort of regulatory discussion on video streaming might finally occur in Japan. And that brings us to the end of my presentation. Thank you very much.

Moderator:
Oh, thank you so much. So all the presentations have been done. Before going to the Q&A, let’s just check the polling result; just a moment here. Oh, unfortunately, only five people voted, sorry, and most of them disagree with regulation by the government. Okay. Hmm. So before going to the discussion, we wanted to know the presenters’ own answers to the poll. In the order of the presentations: Professor Jitsuzumi, please, what’s your opinion?

Toshiya Jitsuzumi:
Oh, thank you. First of all, I have to say thank you for all the very interesting presentations and the information. As for the questionnaire result which you presented on the screen, my answer is that we do not need any regulation on this matter. In my presentation, I said that, concerning the existence of externalities, some kind of compensation can be justified in order to make resource allocation in the market more efficient. However, introducing a government regulation is a different matter: given the information asymmetry the government faces when making regulations, drafting a regulation in this rapidly changing market is very difficult and could cause additional problems for the market itself. So my recommendation is to leave the negotiation to the actual players as much as possible in order to come up with efficient outcomes: how much compensation is needed, and which side should be obliged to pay the other. Instead of asking the government to introduce regulation on these matters, I recommend the government make those negotiations as easy as possible. For example, it’s very difficult for small and medium-sized network operators to start negotiating with the OTT giants of the US; according to a friend of mine who is in the business of small ISPs, it’s very difficult even to make contact with the likes of Netflix and YouTube. So maybe the government can dispatch some experts to help them start negotiations with the content giants on efficient terms and conditions for those compensations. Or it can introduce some alternative dispute resolution mechanism to prepare for the situation where negotiations break down. That is my thought on this question.

Moderator:
All right, thank you. It’s my turn. I choose to support both; it’s a minor point, anyway. The reason is very simple: money is essential. And I’m not a content guy, so I can’t say much about the content. But about the fair contribution, yes, as I said in my last slide, charging based on packets is important; it would solve all of these problems. And if the purpose of the fair contribution is a broadband universal service fund to invest in rural areas, I could agree with it. That’s my opinion. So, Ms. Cho, please.

Cho Changeun:
Yes, I can hear you. I think the government needs to play a role in helping companies negotiate well, rather than creating new regulations, because technology is developing very rapidly but it takes time for the government to set regulations. When it comes to the traffic generated by VOD and similar services, there is a need for fair contributions to network infrastructure investment and maintenance. But I don’t think that requires a new regulation; the government just needs to play a role in helping the negotiations.

Moderator:
All right, so your answer is also none. Okay. How about you, Yonetani-san?

Nami Yonetani:
I think I would vote for the fourth option, content, because I do think that the government has a role to play in making VODs promote local content. I believe increasing accessibility to local content on digital platforms is one of the key elements in developing a local digital economy, and it is hard to achieve by market forces alone. We’ve already seen in the broadcasting market that it is very difficult to achieve content diversity when there is a flood of cheap and exciting US content in front of us. However, I also believe that such requirements should only be imposed under two conditions. First, they should be imposed when broadcasters are already under similar requirements and there is a need to reduce regulatory asymmetry. Second, only minimal requirements should be imposed, and only on dominant VODs; otherwise the requirements would be an unreasonable entry barrier. In terms of fair contribution to network costs, I think legally requiring VODs to negotiate deals with local telcos would be enough for now, because I’m not sure the government is able to define fairness. But I also have to point out that some countries are currently discussing imposing public interest requirements on video streaming platforms, including VODs. For example, some are debating whether to obligate video streaming platforms to distribute disaster alerts, and some are debating whether to include video streaming technology in ensuring universal access to major sports content. If such requirements are imposed on video streaming platforms in the future, we will need even more robust networks than we have right now. Considering such possibilities, I am open to the idea of widening the scope of contributors to network costs.

Moderator:
Okay, so your answer is none, or partially yes to the content?

Nami Yonetani:
Yes, that would be right. Partially, yes; minimal requirements.

Moderator:
Minimal requirements, okay. So, most of the panelists believe in the market mechanism, right? Then let’s start the Q&A and discussion session. Do you have any comments or questions? Anybody?

Audience:
Hi, I’m Keisuke Kaneyasu from NEC Corporation. Thank you for your presentations and the discussion; it’s a very informative topic, and it seems to be an important issue about networks and net neutrality. The particularly difficult points are the cross-border issues, the role of each tech giant, compensation rules, and the right to know, like access to information or content. So there are a lot of issues, and I know some countries have made rules or have regulation ongoing, so it’s a very confusing picture. To help clarify: do you have any ideas for new regulations that could be introduced for major VOD operators?

Moderator:
So, you’re asking whether there are any new regulations for the VOD operators? Yes. All right. How about you, Yonetani-san, please?

Nami Yonetani:
Thank you for the question. In terms of media policy, I think that most broadcasting regulations could potentially be applied to VODs; there are even countries that have introduced licensing for VODs. But as a global trend, the most prominent discussions currently revolve around content regulations and accessibility rules. In this case, content regulation means prohibiting obscenity and violence and introducing age rating systems for content, and accessibility rules mean requiring closed captions and video descriptions.

Moderator:
All right. How about you, Ms. Cho?

Cho Changeun:
About regulation: Korea imposes a duty on content providers to secure internet stability, called the Netflix Act. Major platforms and VOD service providers which have more than 1 million users a day and account for more than 1% of total traffic in Korea are obligated to maintain the stability and quality of the internet network. The name is the Netflix Act, but Netflix has never been a problem; only Korean companies have failed to comply with it. So regulation is possible, but it is also possible that new technology has already come out and solved the traffic problems. So I think we don’t need a new regulation.

Moderator:
Thank you. How about you, Professor Jitsumi?

Toshiya Jitsuzumi:
All right, it’s a very good question: the necessity of introducing new regulation of the major VOD operators. Let’s focus the discussion on fair contributions. I was thinking about what happens if the VOD operators refuse to pay fair contributions to the local network operators. I think what happens is that the QoS for local viewers deteriorates. Even if VODs refuse to pay, local operators cannot block the inflow of OTT content from abroad, because there are lots of gateways into the local network; that is how the internet was designed in the first place. So an incentive for the VOD operators to pay the fair contribution is required. One possible incentive is to provide them better quality: no congestion even in peak hours, or lower latency. But from the viewpoint of net neutrality, which is my major research focus, this situation is equivalent to providing paid prioritization to content providers. So introducing a regulation under which VOD operators pay fair contributions to local network operators in exchange for better quality is equal to allowing fast lanes on the internet, which is a very controversial issue in net neutrality. From the viewpoint of policy or cultural reasons, it might be a good thing for the local government to introduce such a regulation, but from the viewpoint of net neutrality it is very questionable. I personally have to say that I am not a strong supporter of net neutrality, but there are supporters, and I think they would mount a strong argument against a regulation that applies only to the major VOD operators. That’s my impression.

Moderator:
All right, thank you. So, you’re talking about net neutrality around the world, and then some local neutrality within each country, right?

Toshiya Jitsuzumi:
No, my point is that in order to motivate the VOD operators to pay a fair contribution to the local network operators, there have to be some incentives. The incentive can be the network operators providing better quality. Otherwise, the VODs will just refuse to pay the fair contribution and let the QoS of services in the local market deteriorate.

Moderator:
All right, thank you. My own opinion is that if VOD moves into news, I mean journalism, cross-media ownership regulation should be applied, but I’m not sure about other areas. My feeling is that such VODs have strong power over local countries, so some kind of network or content neutrality rule should be applied. That’s my feeling. Is there any comment? All right, thank you very much. Any other questions?

Audience:
Okay, can you hear me? My name is Tomohiro Fujisaki from the Internet Society Japan chapter, but here I’m speaking in my own capacity. My question is about regulations. I already got some answers in the discussion, but I want to know: are there any potential drawbacks or unintended consequences associated with the regulations? I know regulations can be very strict, so I want to know their side effects.

Moderator:
All right, thank you. So, the question is about the bad effects of regulation: what are the drawbacks of these regulations? First, Professor Jitsuzumi, please.

Toshiya Jitsuzumi:
All right. It depends on what kind of regulation we have in mind. Consider the situation where the government obliges VOD operators to pay some amount of fee to the local networks. In my opinion, it is very difficult for the regulator to come up with the exact appropriate fee level which the VOD operators should pay in order to cover the externalities and make investment efficient from the viewpoint of economic efficiency. If government officials are asked by politicians or research groups to come up with numbers, they have to ask the operators themselves; I mean, they have to ask the network operators or the VODs to provide figures. This situation is called regulatory capture, and it causes a different efficiency problem. And the numbers they come up with can be too high or too low, because they will be influenced by the local operators and the VOD giants. If the regulators introduce a fee which is higher than the efficient level, maybe the VOD operators will decide that this is not a high-potential market, especially Japan, which has an aging population and declining income in the long-term perspective. If the government introduces too strong a regulation on this matter, maybe Netflix or YouTube will decide to stop doing business in this country and leave. If that happens, it will have a huge negative impact on the Japanese economy. That is the second reason I think regulation is not appropriate in this matter.

Moderator:
Thank you. Withdraw from the market? That’s the worst case. So, how about you, Yonetani-san?

Nami Yonetani:
I think the same can be said for local content requirements. If the local content requirements are too high, it may result in higher prices for consumers, and in the worst-case scenario, VODs might withdraw from the market, which means that consumers would have less choice of content. What is more, it might encourage piracy, because consumers would seek to access content outside the legitimate channels. So I believe that requirements should be non-discriminatory and minimal, because the ultimate goal of local content requirements on VODs should not be protecting local industry, but ensuring consumers’ access to diverse content.

Moderator:
All right. Thank you. Any comments? Thank you so much. So, we are at the end of our time. Any more questions? Okay. Thank you so much. Oh, yes, we would like to ask for a final comment from each panelist. Sorry. First, Professor Jitsuzumi, please.

Toshiya Jitsuzumi:
Okay. I don’t want to repeat my previous comments, but I will just say that everything should be determined based on data, not on political statements or emotions. As economists, we need data; based on data, we can reach a more rational conclusion or recommendation to the government. So, in order to get that data, I ask all network operators and VOD players to disclose their data.

Moderator:
All right. Thank you. Ms. Cho?

Cho Changeun:
Yes, thank you for listening. I think regulation always starts out with good intentions, but technology adjusts so quickly that another problem always arises. I think there is a need for more occasions that bring government organizations and companies together, like this workshop, and for more negotiation and cooperation where needed. Thank you.

Moderator:
Thank you. In today’s workshop the focus was on VOD, but considering the increasing popularity of linear streaming and live streaming, I think it is our future task to analyze the impact of those new streaming services to deepen our discussion. Also, as Professor Jitsuzumi mentioned net neutrality during the discussion on fair contribution, I think we have to consider platform neutrality or content neutrality when discussing local content contributions. I hope we can have an open discussion again at a future IGF. Thank you, everyone, for joining us today.

Moderator:
Thank you. This truly is the end. Thank you for coming. Thank you.

Audience
Speech speed: 100 words per minute
Speech length: 198 words
Speech time: 119 secs

Cho Changeun
Speech speed: 125 words per minute
Speech length: 2285 words
Speech time: 1098 secs

Moderator
Speech speed: 116 words per minute
Speech length: 2075 words
Speech time: 1073 secs

Nami Yonetani
Speech speed: 115 words per minute
Speech length: 3118 words
Speech time: 1624 secs

Toshiya Jitsuzumi
Speech speed: 140 words per minute
Speech length: 3168 words
Speech time: 1361 secs

The perils of forcing encryption to say “AI, AI captain” | IGF 2023 Town Hall #28


Full session report

Sarah Myers West

The analysis provides a comprehensive overview of the concerns and criticisms surrounding artificial intelligence (AI). One notable concern is that AI can be misleading and misunderstood, leading to flawed policies. It is argued that in the field, there is a tendency to make claims about AI without proper validation or testing, which undermines trust in the technology.

At present, AI is primarily seen as a computational process that applies statistical methods to large datasets. These datasets are often acquired through commercial surveillance or extensive web scraping. This definition emphasizes the reliance on data-driven approaches to derive insights and make predictions. However, the ethical implications of this reliance on data need to be considered, as biases and inequalities can be perpetuated and amplified by AI systems.

The lack of validation in AI claims is another cause for concern. Many AI systems are said to serve specific purposes without undergoing rigorous testing or validation processes. Discrepancies and problems often go unnoticed until auditing or other retrospective methods are employed. The absence of transparency and accountability in AI claims raises questions about the reliability and effectiveness of AI systems in various domains.

Furthermore, it is evident that AI systems have the potential to mimic and amplify societal inequality. Studies have shown that AI can replicate patterns of discrimination and exacerbate existing inequalities. Discrimination within AI systems can have adverse effects on historically marginalised populations. This highlights the importance of considering the social impact and ethical implications of AI deployment.

In terms of content moderation, AI is often seen as an attractive solution. However, it is acknowledged that it presents challenges that are difficult to overcome. For example, AI-based content moderation systems are imperfect and can lead to violations of privacy as well as false positive identifications. Malicious actors can also manipulate content to bypass these AI systems, raising concerns about the effectiveness of AI in tackling content moderation issues.
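
The false-positive worry can be made concrete with a quick base-rate calculation. The numbers below are illustrative assumptions, not figures from the session: even a highly accurate classifier produces mostly false alarms when the content it hunts for is rare.

```python
# Illustrative base-rate calculation (hypothetical numbers): why even a
# very accurate scanner yields mostly false positives on rare content.

def flag_precision(prevalence, sensitivity, specificity):
    """Probability (Bayes' rule) that a flagged item is truly harmful."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Suppose 1 in 100,000 scanned items is actually harmful, and the
# classifier detects 99% of them while wrongly flagging 1% of the rest.
p = flag_precision(prevalence=1e-5, sensitivity=0.99, specificity=0.99)
print(f"Share of flags that are correct: {p:.2%}")  # about 0.10%
```

With these assumed rates, roughly 999 out of every 1,000 flags would point at innocent content, which is why the scale of false positives, not just headline accuracy, matters in content moderation debates.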

To address these concerns, there is a need for more scrutiny and critical evaluation of the use of AI in content moderation. Establishing rigorous standards for independent evaluation and testing is crucial to ensure the effectiveness and ethical use of AI technology. This approach can help mitigate the risks associated with privacy violations, false positives, and content manipulation.

In conclusion, the analysis underscores the importance of addressing the concerns and criticisms related to AI. The potential for misrepresentation and flawed policies, the lack of validation and transparency in AI claims, the amplification of societal inequality, and the challenges in content moderation highlight the need for thoughtful and responsible development and deployment of AI technologies. Ethical considerations, rigorous testing, and ongoing evaluation should be central to AI research and implementation to ensure that the benefits of AI can be realized while mitigating potential harms.

Audience

During the discussion on child safety in online environments, several speakers emphasised the necessity of prioritising the protection of children from harm. They stressed the importance of distinguishing between general scanning or monitoring and the specific detection of harmful content, particularly child sexual abuse material (CSAM). This distinction highlighted the need for targeted approaches and solutions to address this critical issue.

The use of artificial intelligence (AI) and curated algorithms to identify CSAM content received support from some participants. They mentioned successful implementations in various projects, underlining the potential effectiveness of these advanced technologies in detecting and combating such harmful material. Specific examples were provided, including the use of hashing techniques for verification processes, the valuable experience of hotlines, and the use of AI in projects undertaken by the organisation InHope.
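
The hash-based verification mentioned above can be sketched in a few lines. Real deployments typically use perceptual hashes (such as PhotoDNA) so that re-encoded or slightly altered copies still match; this simplified illustration instead uses exact SHA-256 matching against a hypothetical curated list of known-bad hashes, so it only ever detects exact copies of already-verified material.

```python
import hashlib

# Minimal sketch of hash-list matching: an item is flagged only if its
# digest appears in a curated list of hashes of previously verified
# material. Exact SHA-256 matching is a simplification; production
# systems use perceptual hashing to catch near-duplicates.

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def is_known_match(data: bytes, known_hashes: set) -> bool:
    """True only for exact copies of items already in the curated list."""
    return sha256_hex(data) in known_hashes

# Hypothetical curated list; in practice supplied by hotlines.
known = {sha256_hex(b"previously-verified harmful file")}

assert is_known_match(b"previously-verified harmful file", known)
assert not is_known_match(b"any other content", known)  # unknown content passes
```

A key property, echoed elsewhere in the session, is that such matching can only identify known content; novel material never appears in the list.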

However, concerns were raised regarding the potential misuse of child safety regulations. There was apprehension that such regulations might extend beyond their intended scope, encroaching on other important areas such as encryption and counterterrorism. It was stressed that policymakers should be wary of unintended consequences and not let child safety regulations become a slippery slope that serves other narratives or compromises important tools like encryption.

The participants also emphasised the significance of online safety for everyone, including children, and the need to prioritise this aspect when developing online solutions. Privacy concerns and the protection of personal data were seen as vital considerations, and transparency in online platforms and services was highlighted as a crucial element in building trust and safeguarding users, particularly children.

The existing protection systems were acknowledged as generally effective but in need of improvement. Participants called for greater transparency in these systems, expansion to other regions, and better differentiation between various types of technology. They stressed that a comprehensive approach was required, involving not only the use of targeted technology but also education, safety measures, and addressing the root causes by dealing with perpetrators.

There were also concerns voiced about law enforcement’s use of surveillance tools in relation to child safety. Instances of misuse or overuse of these tools in the past created a lack of trust among some speakers. An example was given from Finland, where a hacker compiled about 90% of the secret list of websites blocked by a censorship tool and revealed that less than 1% of them contained actual child sexual abuse material.

In conclusion, the discussion on child safety in online environments highlighted the need to differentiate between general scanning and scanning for specific harmful content. It emphasised the importance of targeted approaches, such as the use of AI and curated algorithms, to detect child sexual abuse material. However, concerns were raised about the potential misuse of regulations, particularly in the context of encryption and other narratives like counterterrorism. The protection of online safety for everyone, the improvement of existing systems, and a comprehensive approach involving technology, education, and safety measures were identified as crucial elements in effectively protecting children online.

Namrata Maheshwari

The discussion revolves around the crucial topic of online safety and privacy, with a specific emphasis on protecting children. While there may be various stakeholders with different perspectives, they all share a common goal of ensuring online safety for everyone. The conversation acknowledges the challenges and complexities associated with this issue, aiming to find effective solutions that work for all parties involved.

In line with SDG 16.2, which aims to end abuse, exploitation, trafficking, and violence against children, the discussion highlights the urgency and importance of addressing online safety concerns. It acknowledges that protecting children from online threats is not only a moral imperative but also a fundamental human right. The inclusion of this SDG demonstrates the global significance of this issue and the need for collective efforts to tackle it.

One notable aspect of the conversation is the recognition and respect given to the role of Artificial Intelligence (AI) in detecting child sexual abuse material (CSAM). Namrata Maheshwari expresses appreciation for the interventions and advancements being made in this area. The use of AI in detecting CSAM is a critical tool in combating child exploitation and safeguarding children from harm.

The conversation highlights the need for collaboration and cooperation among various stakeholders, including government authorities, tech companies, educators, and parents, to effectively address online safety concerns. It emphasizes the shared responsibility in creating a safe online environment for children, where their privacy and security are protected.

Overall, this discussion underscores the significance of online safety and privacy, particularly for children. It highlights the importance of aligning efforts with global goals, such as SDG 16.2, and recognizes the positive impact that technology, specifically AI, can have in combating online threats. By working together and adopting comprehensive strategies, we can create a safer and more secure digital space for children.

Udbhav Tiwari

The analysis conducted on content scanning and online safety highlights several significant points. One of the main findings is that while it is technically possible to develop tools for scanning certain types of content, ensuring their reliability and trustworthiness is a difficult task. Platforms already perform certain forms of scanning for unencrypted content. However, Mozilla’s experience suggests that verifying the reliability and trustworthiness of such systems poses challenges. Currently, no system has undergone the level of independent testing and rigorous analysis required to ensure their effectiveness.

Another concerning aspect of content scanning is the involvement of governments. The analysis reveals that once technological capabilities exist, governments are likely to leverage them to detect content deemed worthy of attention. This raises concerns about the potential misuse of content scanning technology for surveillance purposes. Over time, the ability of companies to resist requests or directives from governments has diminished. An example of this is seen in the implementation of separate technical infrastructures for iCloud due to government requests. Therefore, the law and policy aspect of content scanning can be more worrying than the technical feasibility itself.

The importance of balancing the removal of harmful content with privacy concerns is emphasized. Mozilla’s decision not to proceed with scanning content on Firefox Send due to privacy concerns demonstrates the need to find a middle ground. The risk of constant content scanning on individual devices and the potential scanning of all content is a significant concern. Different trust and safety measures exist for various use cases of end-to-end encryption.

The analysis brings attention to client-side scanning, which already exists in China through software like Green Dam. It highlights the fact that the conversation surrounding client-side scanning worldwide is more nuanced than commonly acknowledged. Government measures and regulations pertaining to client-side scanning often go unnoticed on an international scale.

Platforms also need to invest more in understanding local contexts to improve enforcement. The study revealed that identifying secret Child Sexual Abuse Material (CSAM) keywords in different languages takes platforms years, suggesting a gap in their ability to effectively address the issue. Platforms have shown a better record of enforcement in English than in the global majority, indicating a need for more investment and understanding of local contexts.

The issue of child sexual abuse material is highlighted from different perspectives. The extent to which child sexual abuse materials are pervasive depends on the vantage point. The analysis reveals that actors involved in producing or consuming such content often employ encrypted communication or non-online methods, making it difficult to fully grasp the magnitude of the problem. Further research is needed to understand the vectors of communication related to child sexual abuse material.

Finally, the analysis stresses that users have the ability to take action to address objectionable content. They can report such content on platforms, directly involve law enforcement, or intervene at a social level by reaching out to the individuals involved. Seeking professional psychiatric help for individuals connected to objectionable content is also important.

In conclusion, the analysis of content scanning and online safety identifies various issues and concerns. It emphasizes the need to balance the removal of harmful content with privacy considerations while cautioning against potential government surveillance practices. Furthermore, the study underscores the importance of understanding local contexts for effective enforcement. The issue of child sexual abuse material is found to be complex, requiring further research. Finally, users are encouraged to take an active role in addressing objectionable content through reporting, involving law enforcement, and social intervention.

Eliska Pirkova

The analysis of the arguments reveals several important points regarding the use of technology in different contexts. One argument highlights the potential consequences of using AI tools or content scanning in encrypted environments, particularly in crisis-hit regions. The increasing use of such technologies, even in democracies, is a cause for concern as they can only identify known illegal content, leading to inaccuracies.

Another argument raises concerns about risk-driven regulations, suggesting that they might weaken the rule of law and accountability. The vague definition of ‘significant risks’ in legislative proposals is seen as providing justification for deploying certain technologies. The need for independent judicial bodies to support detection orders is emphasized to ensure proper safeguards.

Digital platforms are seen as having a significant role and responsibilities, particularly in crisis contexts where the state is failing. They act as the last resort for protection and access to remedies. It is crucial for digital platforms to consider the operational environment and the consequences of complying with government pressures.

The pending proposal by the European Union (EU) on child sexual abuse material is seen as problematic from a rights perspective. It disproportionately imposes measures on private actors that can only be implemented through technologies like client-side scanning. This raises concerns about potential violations of the prohibition of general monitoring.

Similar concerns are expressed regarding the impact of the EU’s ongoing, still-negotiated proposal in relation to the existing digital services act. If the proposal remains in its current form, there could be direct violation issues. The argument also suggests that the EU’s legitimization of certain tools could lead to their misuse by other governments.

The global implications of the EU’s regulatory approach, known as the Brussels effect, are also discussed. Many jurisdictions worldwide have followed the EU’s approach, which means that well-intentioned measures may be significantly abused if they end up in inappropriate systems.

The importance of children’s rights is acknowledged, with a recognition that the protection of children is a shared goal. However, differing means, policy approaches, and regulatory solutions may generate counterproductive debates when critical views towards technical solutions are dismissed.

In conclusion, the analysis highlights the complexities and potential implications of technology use in various contexts, particularly concerning online security, accountability, and rights protection. Dialogue and negotiations among stakeholders are crucial to understand different perspectives and reach compromises. Inclusive and representative decision-making processes are essential for addressing the challenges posed by technology.

Riana Pfefferkorn

The analysis explores various arguments and stances on the contentious issue of scanning encrypted content. One argument put forth is that scanning encrypted content, while protecting privacy and security, is currently not technically feasible. Researchers have been working on this problem, but no solution has been found. The UK government has also acknowledged this limitation. This argument highlights the challenges of striking a balance between enforcing online safety regulations and maintaining the privacy and security of encrypted content.

Another argument cautions against forced scanning of encrypted content by governments. This argument emphasizes that such scanning could potentially be expanded to include a wide range of prohibited content, jeopardizing the privacy and safety of individuals and groups such as journalists, dissidents, and human rights workers. It is argued that any law mandating scanning could be used to search for any type of prohibited content, not just child sex abuse material. The risk extends to anyone who relies on secure and confidential communication. This argument underscores the potential negative consequences of forced scanning on privacy and the free flow of information.

However, evidence suggests that content-oblivious techniques can be as effective as content-dependent ones in detecting harmful content online. Survey results support this notion, indicating that a content-oblivious technique was considered equal to or more useful than a content-dependent one in almost every category of abuse. User reporting, in particular, emerged as a prevalent method across many abuse categories. This argument highlights the effectiveness of content-oblivious techniques and user reporting in identifying and mitigating harmful online content.

Furthermore, it is argued that end-to-end encrypted services should invest in robust user reporting flows. User reporting has been found to be the most effective detection method for multiple types of abusive content. It is also seen as a privacy-preserving option for combating online abuse. This argument emphasizes the importance of empowering users to report abusive content and creating a supportive environment for reporting.
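
A user reporting flow in an end-to-end encrypted service is largely client-driven: the plaintext reaches moderators only because the recipient chooses to disclose it, so the server never scans traffic. The sketch below is a hypothetical illustration; the class and queue names are assumptions, not any platform's actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of a user-report flow for an E2EE messenger.
# Disclosure is initiated by the reporting user, which preserves the
# no-server-side-scanning property of end-to-end encryption.

@dataclass
class AbuseReport:
    reporter_id: str
    reported_sender_id: str
    category: str                  # e.g. "csam", "harassment", "spam"
    disclosed_messages: list       # plaintext the reporter chose to share
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def triage(report: AbuseReport) -> str:
    """Route a report; highest-harm categories go to a priority queue."""
    urgent = {"csam", "threats_of_violence"}
    return "priority_review" if report.category in urgent else "standard_review"

report = AbuseReport("user-123", "user-456", "csam",
                     ["<message text shared by the reporter>"])
print(triage(report))  # prints "priority_review"
```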

On the topic of metadata analysis, it is noted that while effective, this approach comes with significant privacy trade-offs. Metadata analysis requires services to collect and analyze substantial data about their users, which can intrude on user privacy. Some services, such as Signal, purposely collect minimal data to protect user privacy. This argument highlights the need to consider privacy concerns when implementing metadata analysis for online content moderation.
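
To make the metadata trade-off concrete: a purely content-oblivious detector looks only at traffic patterns, never message bodies, but it works only if the service retains exactly the per-user behavioural records that minimal-collection services like Signal deliberately avoid keeping. The heuristic below is a hypothetical illustration; the thresholds and field names are arbitrary assumptions.

```python
# Hypothetical metadata-only abuse heuristic: scores an account by its
# traffic pattern (volume, fan-out, account age) without reading any
# message content. Thresholds are illustrative, not from any real system.

def metadata_abuse_score(msgs_last_hour: int,
                         distinct_recipients: int,
                         account_age_days: int) -> float:
    """Crude 0..1 score: bursty, wide fan-out, brand-new accounts look risky."""
    volume = min(msgs_last_hour / 500, 1.0)
    fanout = min(distinct_recipients / 100, 1.0)
    newness = 1.0 if account_age_days < 7 else 0.2
    return round((volume + fanout + newness) / 3, 3)

print(metadata_abuse_score(600, 150, 2))   # spam-like pattern, high score
print(metadata_abuse_score(10, 3, 400))    # ordinary use, low score
```

Note that computing even this crude score requires logging who messages whom and how often, which is precisely the data-collection cost the analysis flags.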

The analysis concludes by emphasizing the need for both advocates for civil liberties and governments or vendors to recognize and acknowledge the trade-offs inherent in any abuse detection mechanism. There is no abuse detection mechanism that is entirely beneficial without drawbacks. It is crucial to acknowledge and address the potential negative consequences of any proposed solution. This conclusion underscores the importance of finding a balanced approach that respects both privacy and online safety.

The analysis also discusses the challenging practical implementation of co-equal fundamental rights. It asserts that fundamental rights, including privacy and child safety, should be considered co-equal, with no single right taking precedence over another. The difficulty lies in effectively implementing this principle in practice, particularly in contentious areas like child safety.

Furthermore, the analysis highlights the importance of holding governments accountable for maintaining trustworthiness. It is argued that unrestricted government access to data under the guise of child safety can exceed the necessity and proportionality required in a human rights-respecting framework. Trustworthiness of institutions hinges on the principle of government accountability.

In summary, the analysis provides insights into the complications surrounding the scanning of encrypted content and the trade-offs associated with different approaches. It emphasizes the need for a balanced approach that considers privacy, online safety, and fundamental rights. Acknowledging the limitations and potential risks associated with each proposed solution is crucial for finding effective and ethical methods of content moderation.

Session transcript

Namrata Maheshwari:
I'd ask the organizers to help us see the speakers on the screen. The two speakers joining online. Oh, there we go. Hi. Just to check, Riana, Sarah, can you hear us? Yes. Great. Do you want to try saying something so we can check if your audio is working? Hi. Can you hear me? Yes. I can hear you. Great. Can you hear me? Yes, we can. Thank you. All right. So, Udbhav will be joining us shortly, but maybe we can start just to make the most of time. My name is Namrata Maheshwari. I'm from Access Now, an international digital rights organization. I lead our global work on encryption, and I also lead our policy work in South Asia. I have the relatively easy task of moderating this really great panel, so I'm very excited about it, and I hope we're able to make it as interactive as possible, which is why this is a roundtable. So, we'll open it up, hopefully halfway through, but definitely for the last 20 minutes. So, if you have any questions, please do note them down. Well, quick introductions, and then maybe I'll do some context setting. I'll start with Eliska Pirkova on my left, who is also my colleague from Access Now. She is Senior Policy Analyst and Global Freedom of Expression Lead, and as a member of the European team, she leads our work on freedom of expression, content governance, and platform accountability. Thank you so much for being here. I will introduce Udbhav anyway while we wait for him to come here. He is the Head of Global Product Policy at Mozilla, where he focuses on cybersecurity, AI, and connectivity. He was previously on the Public Policy Team at Google, and a Non-Resident Scholar with the Carnegie Endowment. And online, we have Riana and Sarah. Riana Pfefferkorn is a Research Scholar at the Stanford Internet Observatory. A lawyer by training, her work focuses on encryption policy in the US and other countries, and related fields such as privacy, surveillance, and cybersecurity.
Sarah Myers West is the Managing Director of the AI Now Institute, and recently served a term as a Senior Advisor on AI at the Federal Trade Commission. She holds a decade of experience in the field of the political economy of technology, and her forthcoming book, Tracing Code, examines the origins of commercial surveillance. Thank you so very much. These are people who I believe have played a very important role in shaping the discourse around encryption and AI in recent times. So thank you so much for lending your insights and expertise to this panel, and thank you all for sharing your time with us here today. Well, we're seeing a lot of proposals across the world in different regions on AI and encryption. So this session really is an effort to shed some light on the intersections between the two, which we think lie within the content scanning proposals that we're seeing in different countries: the US, UK, EU, India, Australia, and a lot of others. These proposals mainly suggest scanning the content of messages on encrypted platforms. Proponents say that there is a way to do this that would not undermine privacy and would help eliminate harmful material. And opponents say that there is an over-reliance on AI, because the tools that would need to be developed to scan this content are AI tools, automated scanning tools, which are prone to biases and false outputs, and also that it would undermine privacy and erode end-to-end encryption as we know it. So I'm hoping that the speakers on this panel can tell us more about it. With that, I'll get us started. Just some housekeeping: online, we have my colleague, Reits, moderating. So whoever is joining online, if you have questions, drop them in chat, and Reits will make sure we address those. Riana, if I could start with you. Proposals in many countries to moderate content on encrypted platforms are premised on the idea that it is possible to do this without undermining privacy.
Could you tell us a little bit more about what the merits of this are, what the real impact is on encryption, and on the user groups that use these platforms, including the groups that these proposals seek to protect?

Riana Pfefferkorn:
Sure. So there's a saying in English, which is that you want to have your cake and eat it, too. And that's what this idea boils down to: the idea that you can scan encrypted content to look for bad stuff, but without breaking or weakening end-to-end encryption, or otherwise undermining the underlying privacy and security guarantees intended for the user. We just don't know how to do this yet, and that's not for lack of trying. Computer security researchers have been working on this problem, but they haven't yet figured out a way to do this. So the tools don't exist yet, and it's doubtful that they will, at least in a reasonable timeframe. You can't roll back encryption to scan for just one particular type of content, such as child sex abuse material, which is usually what governments want end-to-end encrypted apps to scan for. If you're trying to scan for one type of content, you have to undermine the encryption for all of the content, even perfectly innocent content. And that defeats the entire purpose of using end-to-end encryption, which is making it so that nobody but the sender and the intended recipient can make sense of the encrypted message. This has been in the news lately because, perhaps most prominently, the United Kingdom government has been pretending that it's possible to square this particular circle. Basically, the UK has been one of the biggest enemies of strong encryption for years now, at least among democracies. It's been trying to incentivize the invention of tools that can safely scan encrypted content through government-sponsored tech challenges, and it just passed a law, the Online Safety Bill, that engages in the same magical thinking that this is possible. The issue here is that, like I said, there isn't any known way to scan encrypted content without undermining privacy and security.
And nevertheless, this new law in the UK gives their regulator for the internet and telecommunications the power to serve compulsory notices on encrypted app companies, forcing them to try and do just that. The regulator has now said, actually, OK, we won't use this power, because they've basically admitted that there just isn't a way to do this yet. They say, we won't use that power until it becomes technically feasible to do so, which might effectively be quite a while, because we don't have a way of making this technically feasible. And part of the danger of having this power in the law is that it's premised upon the need to scan for child sex abuse material. But there isn't really any reason that you couldn't expand that to whatever other type of prohibited content the government might want to be able to find on the service, which might be anything that's critical of the government. It might be lèse-majesté. It might be content coming from a religious minority, et cetera. And so requiring companies to scan, by undermining their own encryption, for whatever content the government says they have to look for could put journalists at risk, dissidents, human rights workers, anybody who desperately needs their communications to stay confidential and impervious to outside snooping by malicious actors, which might be your own government, might be somebody else who has it in for you, even in cases of domestic violence, for example, or child abuse situations within the home. So we've seen some at least positive moves in this area in terms of a lot of public pushback and outcry over this. Several of the major makers of encrypted apps, including Signal, WhatsApp, which is owned by Meta, and iMessage, which is owned by Apple, have threatened to just walk away from the UK market entirely rather than comply with any compulsory notice telling them that they have to scan encrypted material for child sex abuse content.
So I take that as a positive sign that not only some of the major makers of these apps are saying that isn’t something that we could do, and that they’re saying we would rather just walk away rather than undermine what our users have come to expect from us, which is the level of privacy and security that end-to-end encryption can guarantee.

Namrata Maheshwari:
Thank you, Riana. Sarah, if you could just zoom out a bit for a second. There have been a lot of thoughts about how artificial intelligence is a misleading term, and how it could lead to flawed policies based on a misrepresentation of the kind of capabilities the technology has. Do you think there is a better term for it? And if so, what would it be? The second limb of the question: there have also been a lot of studies and debates around the inherent biases and flaws of AI systems. If these systems were to be implemented within encrypted environments, would those characteristics be transferred to encrypted platforms in a way that would lead to unique consequences?

Sarah Myers West:
Sure, it's a great question. I think it is worth taking a step back and really pinning down what it is that we mean by artificial intelligence, because that's a term that has meant many different things over an almost 70-year history. And it's one that's been particularly value-laden in recent policy conversations. In the current state of affairs… No worries, maybe we can come back to Sarah once she rejoins. Reits, could you let me know when Sarah's back online? Oh, she's back. Okay. Hi, Sarah. I'm back. Yes, sorry about that. What I was about to say was, what we mean by artificial intelligence in the present-day moment is the application of statistical methods to very large data sets, data sets that are often produced through commercial surveillance or through massive amounts of web scraping, and then mining for patterns within that massive amount of data. So it's essentially a foundationally computational process. But really, what Riana was talking about here was surveillance by another means. And I think a lot of ideals get imbued onto what AI is capable of that don't necessarily bear out in practice. The FTC has recently described artificial intelligence as largely a marketing term. And there's a frequent tendency in the field to see claims about AI being able to serve certain purposes that lack any underlying validation or testing, where benchmarking standards may vary widely, and very often companies are able to make claims about the capabilities of these systems that don't end up bearing out in practice. We discover that through auditing and other methods after the fact.
And to that point, given that AI is essentially grounded in pattern matching, there is a very well documented phenomenon in which artificial intelligence mimics patterns of societal inequality and amplifies them at scale. So we see widespread patterns of discrimination within artificial intelligence systems, in which the harms accrue to populations that have historically been discriminated against, and the benefits accrue to those who have experienced privilege. And AI is broadly being claimed to be some magical solution, but not necessarily with robust independent checks that it will actually work as claimed.

Namrata Maheshwari:
Thank you. Eliska, given your expertise on content governance and the recent paper you led on content governance in times of crisis, could you tell us a bit about the impact of introducing AI tools or content scanning in encrypted environments in regions that are going through crisis?

Eliska Pirkova:
Sure. Thank you very much. And maybe I would also like to start from a content governance perspective, and from what we mean by the danger when it comes to client-side scanning and weakening encryption, which is the main precondition for security and safety within the online environment, and which of course becomes even more relevant when we speak about regions impacted by crisis. Unfortunately, these technologies are spreading also in democracies across the world, and legislators and regulators increasingly sell the idea that they will provide magical solutions to ongoing, very serious crimes such as child sexual abuse material, and I will get to that. This also concerns other types of illegal content, such as terrorist content, or potentially even misinformation and disinformation spreading on encrypted spaces such as WhatsApp or other private messaging apps. So, of course, there are a number of questions that must be raised when we discuss content moderation. Content moderation has several phases: it starts with the detection of the content, then evaluation and assessment of the content, and then, ideally, there should also be some effective access to remedy once there is an outcome of this process. And when we speak about violations of end-to-end encryption and client-side scanning, the most worrisome stage is precisely the detection of the content, where these technologies are being used. This is usually done through different hashing technologies; PhotoDNA is quite well known. And with these technologies, and I very much like what Sarah mentioned, it's quite questionable whether we can even label them as artificial intelligence. I would rather go for machine learning systems in that regard.
And what is very essential to recognize here is that these technologies simply scan the content, and they are usually used for identifying content that was previously identified as illegal, depending on the category they are supposed to identify. They then trace either identical or similar enough content to that which was already captured. And the machine learning system as such cannot distinguish whether this is harmful content, whether this is illegal content, or whether this is a piece of disinformation, because this content, of course, doesn't have any technical features per se that would provide that sort of information. This ultimately results in a number of false positives and negatives and errors of these technologies, which impose serious consequences for fundamental rights protection. What I find particularly worrisome in this debate, and this is also very much relevant for regions impacted by crisis, is the significant-risk justification: the idea that these types of technologies can be deployed if there is a significant risk to safety, or other significant risks, which are usually very vaguely defined in the legislative proposals popping up across the world. And if we have this risk-driven, I don't want to call it ideology, but trend behind the regulation, then of course what will be significantly decreased is precisely the requirements for the rule of law and accountability, such as that detection orders should be fully backed up by independent judicial bodies, and they should be the ones who actually decide whether something like that is necessary and conduct that initial assessment.
And when we finally put it in the context of crisis: in times when the rule of law is weakened, either by an authoritarian regime in power that seeks to use these technologies to crack down on dissidents, human rights activists and defenders, and we at Access Now, being a global organization, see that over and over again, that this is the primary goal of these types of regulations and legislation, or by regimes inspired by the democratic Western world, where these regulations are also proliferating more and more, then of course the consequences can be fatal. Sensitive information can be obtained about human rights activists as part of a broader surveillance campaign. It also means that under such contexts, where the state is failing and the state is the main perpetrator of violence in times of crisis, it is the digital platforms and private companies who often act as a last resort of protection and access to any sort of remedy. And under those circumstances, not only does their importance increase, but so do their obligations and their responsibility to actually get it right. That of course contains the due diligence obligation: understanding what kind of environment they operate in, what is technically feasible, and what the consequences are if they, for instance, comply with the pressure and wishes of the government in power, which we often see especially when it comes to informal cooperation between governments and platforms. That was a lot, so I'll stop here. Thank you.
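
The hash-matching detection described above can be sketched in a few lines. The hash values and threshold below are made up for illustration; real systems such as PhotoDNA use far more sophisticated perceptual hashes, but the structural point is the same: a hash carries no semantic information about legality or context, so "similar enough" matching is exactly where the false positives come from.

```python
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit perceptual hashes."""
    return bin(a ^ b).count("1")

# Hypothetical database of hashes of previously identified illegal images.
KNOWN_HASHES = {0xDEADBEEFCAFEF00D, 0x0123456789ABCDEF}

def matches_known(candidate_hash: int, max_distance: int = 8) -> bool:
    """Flag content whose hash is 'similar enough' to a known-bad hash.
    The hash encodes visual structure only; nothing in it distinguishes
    illegal material from an innocuous look-alike, so every near-match
    is a potential false positive that a human must review."""
    return any(hamming(candidate_hash, h) <= max_distance for h in KNOWN_HASHES)
```

Note also what the sketch cannot do: it can only find content close to something already in the database, which is why such systems cannot assess novel material, harmfulness, or disinformation at all.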

Namrata Maheshwari:
Thank you. Our fourth speaker, Udbhav Tiwari, is having some trouble with his badge at the entrance. I don't know if the organizers can help with that at all, but he's at the venue, just having trouble getting a copy. So just a request, in case you are able to help; no worries if not, he'll be here shortly. In the meantime, we can keep the session going. Riana, I'd like to come back to you. A lot of the conversations and debates on this subject revolve around the very important question of: what are the alternatives? There are very real challenges in terms of online safety and harmful material online, and very real concerns around privacy and security. So the question is, if not content scanning, then what? In that context, could you tell us more about your research on content-oblivious trust and safety techniques, and whether you think there are any existing or potential privacy-preserving alternatives?

Riana Pfefferkorn:
Sure. So I published research in Stanford's own Journal of Online Trust and Safety in early 2022. There's a categorization that I did in this research, which is content-dependent versus content-oblivious techniques for detecting harmful content online. Content-dependent means that the technique requires at-will access by the platform to the contents of user data. Some examples would be automated scanning, PhotoDNA as an example, or human moderators who go to look for content that violates the platform's policies against abusive uses. I would also include client-side scanning, at least as I was describing it, as a content-dependent technique, because it's looking at the contents of messages before they get encrypted and transmitted to the recipient. Content-oblivious, by contrast, means that the trust and safety technique doesn't need at-will access to message contents or file contents in order to work. Examples would be analyzing data about a message rather than the contents of a message, so metadata analysis, as well as analysis of behavioral signals: how is this user behaving, even if you can't see the contents of their messages. Another example would be user reporting of abusive content, because the reason the platform gets access to the contents of something isn't that it had the ability to go and look for it; it's that the user chose to report it to the platform itself. So I conducted a survey in 2021 of online service providers, which included both end-to-end encrypted apps as well as other, non-E2EE types of online services. I asked them what types of trust and safety techniques they use across 12 different categories of abusive content, from child safety crimes to hate speech to spam to mis- and disinformation and so on. And I asked them which of three techniques (automated content scanning, which is content-dependent, and metadata analysis and user reporting, which are content-oblivious) they found most useful for detecting each of those 12
different types of abusive content. And what I found was that for almost every category, a content-oblivious technique was deemed to be as or more useful than a content-dependent one. User reports in particular prevailed across many categories of abuse. The only exception was child sex abuse material, where automated scanning was deemed to be the most useful, meaning things like PhotoDNA. These findings indicate that end-to-end encrypted services ought to be investing in making robust user reporting flows, ideally ones that expose as little information about the conversation as possible apart from the abusive incident. I find user reporting to be the most privacy-preserving option for fighting online abuse. Plus, once you have a database of user reports, you could apply machine learning techniques to users or groups across your service if you want to look for trends, without necessarily searching across the entire database of all content on the platform. Another option is metadata analysis. In my survey, that didn't fare as well as user reporting in terms of usefulness as perceived by the providers, but that was a couple of years ago, and even then the use of AI and ML was already helping to detect abusive content, so those tools surely have room to improve. I do want to mention, though, that it's important to recognize that there are trade-offs to any of the proposals that we might come up with. Metadata analysis has major privacy trade-offs compared to user reporting, because the service has to collect and analyze enough data about its users to be able to do that kind of approach. There are some services, like Signal, that choose to collect extremely minimal data about their users as part of their commitment to user privacy. So when we're talking about trade-offs, trade-offs might be inaccuracy, there might be false positive rates or false negative rates associated with a particular option, privacy intrusiveness, what have you. There's no abuse detection
mechanism that is all upside and no downside. We can't let governments or vendors pretend otherwise, especially when it comes to pretending that you're going to have all of the upside without any kind of trade-offs whatsoever, which is what I commonly see: oh yes, it's worth these privacy trade-offs or these security trade-offs because we're going to realize this upside. Well, that's not necessarily guaranteed. But at the same time, I think that as advocates for civil liberties, for human rights, for strong encryption, it's important for us not to pretend that the things we advocate as alternatives don't also have their own trade-offs. There's a great report that CDT published in 2021 that looked at a bunch of different approaches, called Outside Looking In. It's also a great resource for looking at the different sorts of options and the tension between doing trust and safety in end-to-end encrypted services and continuing to respect strong encryption.
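
A minimal sketch of the kind of reporting flow described above, where only the reported message (decrypted and forwarded by the reporting client, not by the server) reaches the platform, never the rest of the conversation. All class and field names here are hypothetical, not any real platform's API; the trend-analysis method illustrates applying analysis over the report database only, rather than over all content.

```python
from dataclasses import dataclass
from typing import Dict, List, Set

@dataclass
class UserReport:
    """A minimal-disclosure abuse report: the reporting user chooses to
    reveal one message and its sender, nothing else."""
    reporter_id: str
    reported_sender_id: str
    category: str          # e.g. "spam", "hate_speech"
    reported_plaintext: str

class ReportQueue:
    def __init__(self) -> None:
        self._reports: List[UserReport] = []

    def submit(self, report: UserReport) -> None:
        self._reports.append(report)

    def repeat_offenders(self, min_reports: int = 3) -> Set[str]:
        """Trend analysis confined to user reports: surface senders who
        are reported repeatedly, without scanning any other content."""
        counts: Dict[str, int] = {}
        for r in self._reports:
            counts[r.reported_sender_id] = counts.get(r.reported_sender_id, 0) + 1
        return {s for s, n in counts.items() if n >= min_reports}
```

A real design would also authenticate the forwarded message (so reports can't be forged), which is one of the harder problems in this space.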

Namrata Maheshwari:
Great. Udbhav, we'll come to you now. A lot of proposals on content scanning are premised on the admittedly well-intentioned goal of wanting to eliminate harmful material online. From a product development perspective, do you think it is possible to develop tools that are limited to scanning certain types of content? And looking at the cross-border implications as well, from a platform that provides services in various regions, what do you think is the impact of implementing such capabilities in one region on other regions, with different kinds of governments and contexts?

Udbhav Tiwari:
Thanks, Namrata. I think the first angle with which to look at it is whether it's technically feasible or not, and the second is whether it's feasible in law and policy, and the two have two different answers. Purely from the technical feasibility perspective, it depends on how one decides to define client-side scanning and what constitutes client-side scanning or not. But there are different ways in which platforms already do certain kinds of scanning for unencrypted content, which some of them claim can be done for encrypted content in a way that is reliable. Personally speaking, and also from Mozilla's own experiences as we've evaluated them, it's quite difficult to take any of those claims at face value, because almost none of these systems, when they do claim to only detect a particular piece of content, have undergone the level of independent testing and rigorous analysis required for those claims to actually be verified by the rest of either the security community or the community that generally works in trust and safety, like Riana was talking about. And the second aspect, the law and policy aspect, is I think the more worrying concern, because it's very difficult to imagine a world in which we deploy these technologies for a particular kind of content, presuming it meets the really high bar of being reliable and trustworthy and also somehow privacy-preserving, and the legal challenges end there. They don't end with the technology existing: other governments may be inspired by these technological capabilities existing in the first place. Once these capabilities exist, various governments would want to utilize them for whatever content they may deem worth detecting at a given point in time. And that means that what may be CSAM in one country may be CSAM and terrorist content in another country, and in a third country it may be CSAM, terrorist content, and content that maybe
is critical, say, of the political ruling class in that particular country as well. If there's one thing that we've seen with the way the internet has developed over the last 20 to 25 years, it's that the ability of companies, and especially large companies, to resist requests or directives from governments has only reduced over time. The incentives are very much aligned towards them just complying with governments, because if a government places pressure upon you over an extended period of time, it's simply much easier from a business perspective to give in to certain requests. And we've already seen examples of that happen with other services, parts of which are ostensibly end-to-end encrypted, such as iCloud, where in certain jurisdictions they have set up separate technical infrastructures because of requests from governments. So if it has started happening there, I think it's very difficult to see a world in which we won't see it happening for client-side scanning and these kinds of content as well.
One other thing that I will say, especially from a product development perspective, is that Mozilla has actually had some experience with this, and with the challenges that come with deploying end-to-end encrypted services, deciding to do them in a privacy-preserving manner, and not collecting metadata. This was a service called Firefox Send, which Mozilla had originally created a couple of years ago to allow users to share files easily and anonymously. You went onto a portal, it had a fairly low size limit, you could upload a file onto the portal, and once you uploaded a file you got a link, and then an individual could click on the link and download it. The service worked reasonably well for a couple of years, but what we realized towards the end of its lifespan was that there were also some use cases in which it was being used by malicious actors to actively deploy harmful content, in some cases malware, in some cases materials that otherwise would be implicated in investigations. And once we evaluated whether we could deploy mechanisms that would scan for such content on devices, which in our case was the browser, which has even less of a possibility of doing such actions, we decided that it was better for that piece of software not to exist, rather than for it to create the risks that it did for users without the trust and safety options that were available to us, because it was end-to-end encrypted. So that's also a nod towards the fact that there are different streams and levels of use cases to which end-to-end encryption can be deployed, and different kinds of trust and safety measures that could be deployed to account for different threat vectors, if you would like to call them that. The one we're specifically talking about, client-side scanning, is most popular right now for messaging, but the way that it's actually been
deployed in the past, or almost deployed in the past, by, say, a company like Apple, was actually scanning all information that would be present on a device before it would be backed up. So, and that's the final point that I'm making, there's also this implication that we are presuming this is a technology that will only scan your content after you hit the send button, or when you're hitting the send button. But in most of the ways in which it's actually been deployed, it's been deployed in a way where it would proactively scan individuals' devices to detect content before it is backed up or uploaded onto a server in some form. And that's a very, very thin line to walk between doing that and just scanning content all the time in order to detect whether there's something that shouldn't exist on the device. And that's a very scary possibility.

Namrata Maheshwari:
Thanks, Udbhav. Is Sarah online, Reits? I believe she's had an emergency, but that's fine, I will come back to her if she's able to join again. Eliska, in many ways the EU has been at the center of this discourse, or what is also known as the Brussels effect: we see a lot of policy proposals, debates and discourses on internet governance, privacy and free expression traveling from the EU to other parts of the world. It also happens horizontally across other countries, but still in a disproportionate way from the EU to elsewhere. More recently, there have also been proposals around looking at the question of content moderation on encrypted platforms. What would you say are the signals from the EU for the rest of the world, from a privacy, free expression and safety perspective, on what to do and what not to do? Thank you.

Eliska Pirkova:
Indeed, the EU regulatory race has been quite intense in the past few years, also in the area of content governance and platform accountability. Specifically in the context of client-side scanning, I am sure many of you are aware of the still-pending proposal on child sexual abuse material, the EU regulation which, from a fundamental rights perspective, is extremely problematic. As part of the EDRi network there is a position paper that contains the main critical points around the legislation, and a couple of them I have already summarized during my first intervention. The entire regulation is problematic due to the disproportionate measures it imposes on private actors, from detection orders to other measures that can only be implemented through technical solutions such as client-side scanning; due to the very short-sighted justifications for the use of these technologies, very much based on the risk approach that I explained at the beginning; and ultimately because it does not recognize or acknowledge that the use of such technology will also violate the prohibition of general monitoring, since the technology will have to scan content indiscriminately. I mention the ban on general monitoring because, if you ask about the impact of EU regulation, another very decisive law in this area is the Digital Services Act. Even though the DSA regulates user-generated content disseminated to the public, if we speak about platforms, there are to some minimum extent basic requirements for private messaging apps too, even though they are not its main scope, and the DSA still has a lot to say in terms of accountability, transparency criteria and the other due diligence measures the regulation contains. We are really worried about the interplay between this horizontal legislative framework within the EU and the still-negotiated proposed regulation on child sexual abuse material. If it stayed in its current form, and we are really not there yet, there would be a number of issues in direct violation of the existing Digital Services Act, especially in those measures that stand at the intersection between the two regulations. Of course, this sends a very dangerous signal to governments outside the European Union, governments that will definitely abuse these kinds of tools, especially if democratic governments within the EU legitimize the use of such a technology, which could ultimately happen, and we hope it will not. There is a significant effort to prevent this regulation from being adopted at all, which at this stage is probably way too late, but at least to do as much damage control as possible. So we have to see how this goes, but of course the emphasis on regulation of digital platforms within the European Union is very strong. A number of other laws have been adopted in recent years, and this will definitely trigger the Brussels effect that we saw in the case of the GDPR, but also in the case of individual member states, especially in the context of content governance: for instance the infamous NetzDG in Germany, where Justitia runs a report every year clearly showing how many different jurisdictions around the world follow that regulatory approach. If it comes directly from the European Union, this will only intensify. As much as I believe in some of those laws and regulations and what they try to achieve, everything in the realm of content governance and freedom of expression can be significantly abused if it ends up in the wrong hands, in a system that does not take constitutional values and the rule of law seriously.

Namrata Maheshwari:
Thank you. Udbhav, my question for you is actually the flip side of my question for Eliska. Given that so much of this debate is still dominated by regions in the global north, mostly the US, UK and EU, how can we ensure that the global majority plays an active role in shaping these policies and the contexts that are taken into account when they are framed? And what do you think tech platforms can do better in that regard?

Udbhav Tiwari:
Thanks, Namrata. To speak for the first minute or so in the context of end-to-end encrypted messaging: I would say that probably the only country that already has a law on the books, even if the government does not yet seem to have made a direct connection between possibilities like client-side scanning and regulatory outcomes, is India. India currently has a law in place that gives the government the power to demand the traceability of pieces of content in a manner that still preserves security and privacy. So I don't think it's too much of a stretch for, say, a government stakeholder in India to say: why don't we develop a system where a model, or a hash-matching system, runs on every device and scans for certain messages, with the government providing the hashes? You would essentially have to scan a message before it gets encrypted and report any match to the government, a match meaning that the individual is spreading messages that are leading to public order issues, or other kinds of misinformation that they want to clamp down on. The reason I raise that, even though traceability is not necessarily a client-side scanning issue, is that across much of the global majority this conversation is both a lot less nascent and has a lot more potential to cause much more harm. A lot of these proposals float under the radar, don't get as much attention internationally, and ultimately the only thing that protects individuals in these jurisdictions is the willingness of platforms to comply with those regulations or not.
So far, apart from the notable exception of China, where the amount of control the state has had over the internet has been quite different for long enough that alternative systems exist, the only known system I have read of that actually has this capability is the Green Dam filter in China. It was originally installed, and I think at one point close to mandatory, on personal computers, recommended as a filter for pornographic websites and adult content; but there have been reports that it may since have reported to the government when people searched for certain keywords or looked for content that was not approved. That showcases that in some places client-side scanning may not be a hypothetical that will exist in the future but may already have existed for some time. And given that we are relying, for better or for worse, on the will of platforms to resist such requests before these systems end up being deployed, the conversation we need to start having is: in what ways are people outside these jurisdictions actually holding platforms to account when these measures get passed? If they do get passed: do you intend to comply? If you don't intend to comply, what is your plan for when the government escalates its enforcement actions against you?
As we've seen in many countries in the past, those enforcement actions can get pretty severe. Ultimately, I think this is something that will need to be dealt with at a country-to-country level, not necessarily a platform-to-country level, because depending on the value of the market for the business, or the strength of the market as a geopolitical power, the ability of a platform to resist demands from a government is ultimately limited. They can try, and some of them do, and many of them don't, but ultimately only international attention and international pressure can reasonably move the needle. The final point I will make there is that even when it comes to the development of these technologies, they are still very much Western-centric: a lot of the models they are trained on, and a lot of the information these systems are designed around, come from a very different realm of information that may not really match up with realities in the global majority.
I have read of numerous examples outside the end-to-end encrypted context where, for instance, platforms block certain keywords that are known to be secret code words for CSAM. These are not well known and vary radically across jurisdictions: a term may seem like an innocuous word that means something completely different in a local language, but if you search for it, you will find users and profiles where CSAM actually exists. Just finding out what those keywords are in various local languages, in individual jurisdictions, is something many platforms take years to do well. And that is not even an end-to-end encryption or client-side scanning problem; it is a question of how much you are investing in understanding local context and local realities. It is partly because those measures fail, because when it comes to unencrypted content platforms do not act quickly enough or account for local context enough, that governments end up resorting to measures like recommending client-side scanning. That is by no means to say that it is the fault of these platforms that these measures or ideas exist, but there is definitely a lot more they could do in the global majority to deal with the problem on open systems, where their record of enforcement in English and in countries outside the global majority is much better than within it.

Namrata Maheshwari:
Thank you. I have one last question for Sarah, and then we'll open it up to everybody here and anybody attending online, so please feel free to jump in after that. Sarah, as our AI expert on the panel, what would your response be to government proposals that treat AI as a sort of silver bullet that will solve the problems of content moderation on encrypted platforms?

Sarah Myers West:
So, I think one thing that has become particularly clear over the years is that content moderation is, in many respects, an almost intractable problem. And though AI may present as a very attractive solution, in many ways it is not a straightforward one; in fact, it is one that introduces new and likewise troubling problems. AI, for all of its many benefits, remains imperfect, and there is a need for considerably more scrutiny of the claims being made by vendors, particularly given the current state of affairs, where quite few models go through any sort of rigorous independent verification or adversarial testing. There are concerns about harms to privacy. There are concerns about false positives that could paint innocent people as culprits and lead to unjust consequences. And lastly, research has shown that malicious actors can manipulate content in order to bypass these automated systems, an issue that is endemic across AI, underscoring even further the need for much more rigorous standards of independent evaluation and testing. So before we put all of our eggs in one basket, so to speak, I think it is really important, one, to evaluate whether AI, broadly speaking, is up to the task, and two, to really look under the hood and get a much better picture of what kinds of evaluation and testing are needed to verify that these AI systems are in fact working as intended, because by and large the evidence indicates that they are very much not.

Namrata Maheshwari:
Thank you, Sarah. And thank you all so much on the panel. I'll open it up to all the participants, because I'm sure you have great insights and questions to share as well. Do we have anybody who wants to go first? Great. Before you make your intervention, could you just share, in a line, who you are?

Audience:
Oh, no. Is it? Okay, it's better now. Good morning, everyone, or, yes, still good morning. My name is Katarzyna Staciwa, and I represent the National Research Institute in Poland, although my background is in law enforcement and criminology, and soon also clinical sexology. I really want the voices of children to be present in this debate, because they were already mentioned in the context of CSAM, which is child sexual abuse material, and scanning, and on some other occasions. I think there is a need to distinguish between general monitoring, or general scanning, and scanning for this particular type of content. It is such a big difference, because it helps to reduce this horrendous crime. And there are already techniques that can be reliable, like hashes. By hashes I also mean the experience of hotlines, the INHOPE hotlines present all over the world; that is now, I believe, more than 20 years of this sort of cooperation. So hashes are gathered in a reliable way: there is "three-eyes" verification in the process of determining whether a particular photo or video is CSAM. It is not general scanning; it is scanning for something that has been corroborated beforehand by an expert. Then, on AI: I am lucky, because my institute is actually working on an AI project, and we train our algorithms to detect CSAM in a large set of photos or videos, and I can tell you that this has been very successful so far. We also use a current INHOPE project that follows a specific ontology, so we train the algorithms in a very detailed way to pick up only those materials that are clearly defined in advance. Again, this is the experience of years of international cooperation. And I can tell you that general monitoring is something very different from scanning for a photo or video of a six-month-old baby being raped.
So, please take into consideration in any future discussions that while we have an obligation to take care of privacy and online safety, we first have an obligation to protect children from being harmed. This is also deeply rooted in the UN conventions and in EU law. We have to make a decision, because for some of these children it will be too late. And I will leave you with this dilemma. Thank you.
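The targeted, hash-based matching described above can be sketched in a few lines. This is only an illustration under assumptions: real deployments use perceptual hashes (for instance Microsoft's PhotoDNA) rather than exact cryptographic digests, so that re-encoded copies of a known image still match, and the vetted database here is a made-up set containing the SHA-256 digest of the string "foo".

```python
import hashlib

# Hypothetical vetted list of digests, standing in for a hotline-verified
# hash database. The entry below is simply the SHA-256 of b"foo".
KNOWN_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def digest(data: bytes) -> str:
    """Hex SHA-256 digest of a file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def matches_known_list(data: bytes) -> bool:
    """Targeted check: flag a file only if its digest is on the vetted list.

    Nothing about non-matching content is inspected or retained; the
    comparison is digest-to-digest, which is the distinction the speaker
    draws between targeted detection and general monitoring.
    """
    return digest(data) in KNOWN_HASHES

print(matches_known_list(b"foo"))  # True: digest is on the vetted list
print(matches_known_list(b"bar"))  # False: unknown content is not flagged
```

The exact-match property is also the approach's limit: changing a single byte yields a different digest, which is why production systems rely on perceptual hashing, and why critics in the session worry about what else a vetted list could be extended to contain.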

Namrata Maheshwari:
Thank you. Thank you so much for that intervention; I respect all the work you're doing, and thank you for sharing that experience. One thing I can say for everybody on the panel and in the room is that all of us are working towards online safety. We're at a point where we're identifying similar issues but looking at the solution through different lenses, so I do hope that conversations like this lead us to solutions that work for safety and privacy for everybody, including children. Thank you so much for sharing that; I really value it. Anybody else?

Audience:
Thank you for the great presentation. I'm Marlena Wozniak from the European Center for Not-for-Profit Law, and thank you for your intervention; I'm following up on that. I'd love to hear from you, Eliska. You mentioned the potential misuse of EU regulation. More broadly, how can this kind of child safety narrative also become a slippery slope for other narratives, like counterterrorism or fighting human trafficking, which are all laudable goals that, as human rights advocates, we all fight for? And thank you for your mention of child protection: indeed, online safety applies to all, especially marginalized groups. But I'd love to hear from you how it's not a black-or-white picture, and how these narratives can often be abused and weaponized to actually undermine encryption.

Eliska Pirkova:
Thank you so much, great question, and thank you very much for your contribution. As a digital rights organization, we of course advocate for the online safety and protection of the fundamental rights of all. Children also have the right to be safe, and they equally have the right to privacy. We could go into the nitty-gritty details of general monitoring, how these technologies work and whether there is any way general monitoring would not occur, and I think we might even disagree to some extent. But the point is that the goal is definitely the same for all of us, and especially when it comes to marginalized groups, as Marlena rightly pointed out, it is a major priority for us too. I do find it difficult, mainly as an observer, because we rely fully on the EDRi network in Brussels, the European Digital Rights network, which leads our work on child sexual abuse material. I often see that precisely the question of children's rights is, to some extent, and I am trying to find the right term, used in the debate to counter opinions that are slightly more critical of certain technical solutions, even though no one ever disputes the need to protect children or that they come first. That often complicates the debate and can become almost counterproductive, because I do not think we have any differences in the goals we are trying to achieve. We are all aiming at the same outcome, but perhaps the means, the ways, and the policy and regulatory solutions we are aiming at differ, and that is of course subject to debate and to ongoing negotiations. None of those solutions will ever be perfect, and compromises will have to be made in that regard. But I do find this dichotomy, these very straightforward black-and-white framings when we do our advocacy, occasionally almost trying to put it as if we should choose a side, incredibly problematic, because there is no need for that. As I said, we all have the same outcome in mind. I don't know whether I have answered your question, but indeed this is a very complex and complicated topic, and we need to continue having this dialogue, as we have today, to inform each other's positions and try to see each other's perspectives in order to achieve the successful outcome we are striving for, and that is the highest level of protection.

Audience:
Thank you. Vagesha, and then Hioranth. Hi, thank you. Is this working? Okay. Hi, I'm Vagesha, a PhD scholar at Georgia Tech. Kudos to all of you for condensing all of that material into 40 minutes; it's a vast area you've covered here. I have a comment that will lead to a question, so I'll be quick. Eliska, you mentioned significant risk at the beginning, and I was thinking about how significant risks on online platforms are often masked by concerns of national security when it comes to governments directly; national security risk can be subjective and depends on the context of which government is involved and how it interacts with it. And Udbhav also mentioned that harmful content is a big problem in this space; we all agree on that. My question, and this is to everybody: one, when you talk about scraping of content to be used further, how many of the apps available online actually store data in an encrypted format, and how big is the problem of scraping that data given that it is encrypted? And two, how do we think about it from a user's perspective: what can a user do directly, not to solve this problem, but to intervene in it and present their perspective? Thank you.

Namrata Maheshwari:
Actually, do you want Udbhav to take the question, given the platform reference? And could I request everyone to keep their questions brief so that more people can participate? We have only a few more minutes to go. Thank you.

Udbhav Tiwari:
On the platform front, how big and how pervasive the problem is, is an interesting question, because it depends on whose perspective you look at it from. If you look at it from the perspective of an actor that either produces or consumes child sexual abuse material, then arguably a lot of it happens this way, because one would argue this is how they communicate and share information with each other, through channels that either aren't online at all or are encrypted. But that is definitely a space that needs a lot more study, especially on the vectors through which these pieces of information are shared. There has been some research on how much of it is online versus offline, and how much is natural discovery versus having to seek out the fact that it exists, but a lot of that information is very jurisdiction-specific, and overall the question has not been answered to the degree it should be. On what users themselves can do, it falls broadly into three categories. One is reporting itself: even on end-to-end encrypted systems, the ability for a user to say, I have received or seen content like this and I want to tell the platform, is one route.
The second route, which applies to more limited systems, is that the content exists in this form and the user takes it directly to the police or a law enforcement agency, saying: I received this from this user, in this way, and this is the problem it creates. The third, and there has been a lot of research on this, is intervening at the social level: if it is somebody you know, you talk to them about why this is problematic and ask them to get professional psychiatric help, so that it is essentially treated as an illness. Platforms may or may not play a role in this: some can proactively prompt you to seek help, some can tell you it is a crime, and in some countries, including India, courts have mandated that such warnings be proactively surfaced. But ultimately I think it is an area that needs a lot more study, which just hasn't happened so far.

Namrata Maheshwari:
None of this is to say that platforms don't have a role to play in keeping people safe. By all means, we need measures to make platforms more accountable, including the ones that are end-to-end encrypted, absolutely; the question is just how to do that in a way that best respects fundamental rights. I'll pass it to you, and then to the lady in the back. Online, Riana and Sarah, if there's anything you want to add to any question, please just raise your hand and we'll make sure you can come in.

Audience:
Yeah, my name is Rao Palme from Electronic Frontier Finland. I don't have a question, but I would like to respond to the law enforcement representative here: law enforcement has lost a lot of credibility with me when it comes to using their tools for what they actually say they use them for. For example, in Finland they introduced a censorship tool in 2008, and in the end it took a hacker scanning the internet to reconstruct the police's secret list of censored websites. The rationale for the tool was that it had to work that way because the CSAM was hosted in countries that our law enforcement has no access to, or even any cooperation with. The hacker was able to compile about 90% of that list, and we actually went through the material and had a look at what was in there. First of all, less than 1% was actual child sexual abuse material. The second point, which I think is even stronger: guess the biggest country hosting the material? It was the US. After that, the Netherlands; after that, the UK. In fact, the first ten countries were Western countries, where all you need to do is pick up the phone and call them to take it down. That's it. So why do they need a censorship tool? The same goes for this kind of client-side scanning: I feel it's going to be abused, used for different purposes; next it extends to gambling and so on. It's a real slippery slope, and it's been proven before that that's how it goes. Thank you. And thank you very much for your comment.
I work for ECPAT International, which some of you will know; we are at the forefront of advocating for the use of targeted technology to protect children in online environments. What is interesting, even just about the people in this room, is that we are seeing an example of both sides of a divided conversation. I am very happy to be here and am really enjoying this discussion, because I absolutely believe in critically challenging our own perspectives and views on different issues, and it has been really interesting to hear, in particular, the points about the global South and different jurisdictions. I think we have a system that is working; it is not perfect, and there are examples where there have been problems, but in general the system is working very well, and we could give many other examples of why that is, but we need to build on the existing system to expand into other regions. One thing I find interesting, and this has been a key theme of the IGF for me, is the issue of trust in institutions and trust in tech. Trust in general is very difficult to achieve: easy to lose, hard to gain, and on this issue it is at the forefront of the problem. One of the things I always regret is that there isn't more discussion of where we do agree, because there are areas where we agree. One thing that comes up when we deal with issues of trust is transparency, whether in processes, algorithmic transparency, oversight, or reporting. They are not perfect, but as civil society we can call for accountability, so those are areas where we agree, and I do wish we were speaking a little more about that.
In terms of the legislation and general monitoring, you are right, we are not going to go into the details of the EU processes, but I do think there is sometimes a convenient conflation of technology in general with the specific technologies used for certain purposes. If we talk about targeted CSAM detection tools and spyware, they are not the same thing, and sometimes there is a convenient conflation of different technologies used for different means. The other thing, and this is very much to your point about the data sets on which these tools are trained: it is true that we need to do much better at understanding and having data that avoids any kind of bias in the identification of children. But on this final point, one of the reasons for differentiating between kinds of hosting of content, which is very much related to internet infrastructure, though that is shifting, is that we also need to talk about victim identification. One of the reasons to take down and refer child sexual abuse material is that it gets into processes where children can be identified, and we now have decades of experience of very successful processes whereby law enforcement actually identify children and disclose on their behalf, because we have to remember that child sexual abuse material is often the only way a child will disclose, because children do not disclose. And one of the fallacies in the debate about the child rights argument, I'm sorry, I will finish here, is that we are supposedly calling for technical solutions as a silver bullet. Absolutely not. One thing we all agree on is that this is a very complex puzzle: prevention means technology, prevention means education, prevention means safety measures, prevention means working with perpetrators; it is everything that we need to be doing, and we are absolutely calling for all of it.
So I suppose it is not a question, but I wanted to make that point. Or maybe it is a question, or a call to action: we really need to be around the table together, because there are areas where we absolutely do agree.

Namrata Maheshwari:
Absolutely agree with that, and I do hope we'll have more opportunities to talk about the issues we all care about. Unfortunately we're over time already, but I know Riana has had her hand raised for a while, so Riana, do you want to close us out in one minute?

Riana Pfefferkorn:
Sure. To close, I'll emphasize something that Eliska said, which is that all fundamental rights are meant to be co-equal, with no one right taking precedence over any other. How to actually implement that in practice is extremely difficult, but it applies to contentious topics like child safety that we can get stuck on. That is the subject of a report I helped author, with an emphasis on child rights, as part of DFRLab's recent Scaling Trust on the Web report, which goes into more depth on all the different ways we need to be forward-looking in finding equitable solutions to the various problems of online harms. I also want to make sure to mention that when it comes to the trustworthiness of institutions, we need everybody to hold governments accountable as well. There was recent reporting that, in some of the closed-door negotiations over the child sexual abuse regulation in the EU, Europol demanded unlimited access to all data that would be collected, to be passed on to law enforcement so that they could look for evidence of other crimes, not just child safety crimes. So in addition to looking to platforms to do more, we also need everybody, child safety organizations included, to hold governments to account and ensure that if they demand these powers, they cannot go beyond them, using one particular topic as the tip of the spear to demand unfettered access for all sorts of criminal investigations, because that goes beyond the necessity and proportionality that are the hallmark of a human-rights-respecting framework. Thanks.

Namrata Maheshwari:
Thank you. A big thank you to all the panelists: Sarah, Riana, Udbhav, Eliska, and Reitz, thank you for moderating online. And thank you all so very much for being here and sharing your thoughts. We hope all of us are able to push our boundaries a little and arrive at common ground that works best for all users online. Thank you so much. Have a great IGF.

Audience — speech speed: 168 words per minute; speech length: 1948 words; speech time: 695 secs

Eliska Pirkova — speech speed: 171 words per minute; speech length: 2097 words; speech time: 734 secs

Namrata Maheshwari — speech speed: 177 words per minute; speech length: 2092 words; speech time: 709 secs

Riana Pfefferkorn — speech speed: 189 words per minute; speech length: 2000 words; speech time: 636 secs

Sarah Myers West — speech speed: 156 words per minute; speech length: 762 words; speech time: 294 secs

Udbhav Tiwari — speech speed: 194 words per minute; speech length: 2785 words; speech time: 863 secs

The Internet in 20 Years Time: Avoiding Fragmentation | IGF 2023 WS #109


Full session report

Henri Verdier

Henri Verdier, a pioneering internet entrepreneur, took a leap of faith in 1995 by starting his first internet company during a time when there were only 15,000 web surfers in France. Initially, Verdier harboured doubts about the potential of crowd-sourced knowledge bases like Wikipedia. However, he has since come to embrace the transformative power of the internet.

One of Verdier’s concerns is the fragmentation and privatization of the internet. It is disconcerting to see certain big states and tech companies disregarding the importance of a free, open, and decentralized internet. This issue raises questions about the future of an internet that is accessible and available to all.

Cyberspace has become intertwined with various aspects of life, including education, health, business, and even matters of war and peace. This highlights the enormity of the impact of the digital revolution in recent times. Furthermore, there has been a rise in digital diplomats as part of this revolution.

Understanding the distinction between technical fragmentation and legal fragmentation of the internet is crucial. Technical fragmentation leads to a higher temptation to disconnect from each other, while the legal aspect of internet governance empowers individuals to shape their own future.

In advocating for a free, open, and decentralized internet, Verdier acknowledges the importance of respecting each country’s right to establish its own legal framework. He believes in the right of the people to make decisions about their own future and is a strong proponent of an open and neutral internet. He opposes the idea of a single unified market dominated by tech giants such as Mark Zuckerberg’s.

Another vital aspect is the need for network standards and legal standards to be interoperable. This ensures seamless connectivity and compatibility between different systems.

Verdier highlights the distinction between private online spaces, such as social networks, and the internet itself. He sees entering a social network as akin to leaving the internet, emphasising that social networks are “private places” built on top of the internet’s infrastructure. Additionally, Verdier expresses a preference for European rules over private rules from platforms like Elon Musk’s.

The golden age of the internet has ushered in an unprecedented openness of access to information, knowledge, and culture. This has been a monumental shift, allowing people from various backgrounds to engage with a vast array of resources. Furthermore, this period has uniquely empowered communities and individuals, enabling them to have a greater say in shaping their own futures. The permissionless innovation that characterizes this era has also spurred remarkable progress.

Verdier cautions that threats to individuals’ autonomy, empowerment, and creativity can stem from many sources, not solely rogue states. He has expressed concerns about the role of the private sector in potentially impeding these freedoms.

In conclusion, Henri Verdier, a respected internet entrepreneur, has witnessed and experienced the incredible evolution of the internet. While initially doubtful, he now recognises its transformative potential. However, he remains watchful of the dangers of fragmentation, privatization, and the potential threats to people’s autonomy and creativity. By advocating for a free, open, and decentralized internet, he strives to strike a balance between global connectivity and respecting the sovereignty of individual nations. Overall, his insights and observations shed light on the complex challenges and opportunities presented by the internet in the modern world.

Izumi Aizu

Predicting the future is a challenging task, especially when it comes to disasters and conflicts. These events are often unpredictable in nature, as exemplified by the earthquake and tsunami that occurred 12 years ago, which was not foreseen. Recent conflicts in Gaza and Ukraine were also unexpected. Despite advancements in technology, such as the Internet, smartphones, and AI, natural calamities and conflicts continue to impact the world unexpectedly. This suggests that while optimism about the future is important due to technological advancements, reality often brings unexpected events.

The future is multi-faceted, consisting of both positive and negative aspects. It is composed of different elements, including both dark and bright aspects. While there may be positive advancements, there are also dark and challenging aspects to consider. It is important to have a holistic understanding of the future, considering its multi-dimensional nature.

One perspective on the future of the Internet is presented by Izumi Aizu. He believes that the future scenarios of the Internet will be characterised by mixed networks co-existing with the traditional Internet, a fragmented Internet shaped by national bloc politics, and a globally unified, strong Internet. This suggests that the future of the Internet may be chaotic and fragmented.

However, Aizu also believes in the Internet as a tool for global communication and knowledge sharing. Despite the potential fragmentation due to political and economic reasons, he emphasizes that the underlying ethos of the Internet as a communication tool is likely to persist. Aizu challenges the view that achieving a ‘better internet’ alone should be the ultimate goal. Instead, he emphasizes the importance of focusing on creating better societies and better people.

Aizu agrees with Sheetal Kumar’s statements about the need to harmonize legal frameworks to international human rights standards and make governance bodies more inclusive, particularly in the context of internet governance. He suggests that the future of technology, including the Internet, should be informed by current politics and environmental changes. This includes considering potential regulations on servers, data centers, and artificial intelligence (AI) due to environmental factors.

Furthermore, Aizu emphasizes that the focus should not solely be on the future of the Internet but on the future of humanity as a whole. He argues that it is essential to address global goals like good health and well-being, quality education, and sustainable cities and communities alongside technological advancements.

The current discourse on artificial intelligence (AI) is criticized for its lack of inclusivity. Aizu points out that important countries like China and India were not adequately represented in the discussion. This highlights the need for broader participation and diverse perspectives in shaping the future of AI.

The present state of the Internet Governance Forum (IGF) is perceived as peaceful but unremarkable. Although the IGF has evolved from being tense and fearful in the past, it is now considered to be less impactful and engaging.

In conclusion, predicting the future is a challenging task, particularly regarding disasters and conflicts. Advancements in technology do not eliminate the unpredictability of these events. The future is multi-faceted, composed of both positive and negative aspects. The future of the Internet may be chaotic, but it also holds potential as a tool for global communication and knowledge sharing. The focus should be on creating better societies and better people, rather than solely improving the Internet. Harmonizing legal frameworks and governance bodies to international human rights standards is crucial for responsible internet governance. Considering current politics and environmental changes is important when shaping future technology. Inclusivity is key when discussing topics like AI, and broader participation is needed. The present state of the IGF is perceived as peaceful but unremarkable, highlighting the need for more impact and engagement. It is essential for IP fundamentalists to expand their perspectives and engage with other global issues. By doing so, they can learn from and contribute to discussions on topics like climate change.

Olaf Kolkman

Predicting the future of the internet is a challenging task due to the complexities and rapid advancements in technology. However, there are differing viewpoints on what the future may hold.

One perspective is that openness is a key feature that should define the future of the internet. This notion is supported by the belief that the scientific method of sharing knowledge, criticizing each other, and making knowledge readily available has been instrumental in driving innovation and progress. Openness allows for collaboration and the exchange of ideas, thereby fostering continuous development and improvement. Furthermore, empowering communities through bottom-up methods, such as building Internet Exchange Points (IXPs) and providing cookbooks for community networks, helps ensure that everyone has equal access to the benefits of the internet.

However, there is another argument that proposes a future scenario where the internet becomes closed and proprietary. This model envisions a world where services are primarily developed to generate profits, prioritising monetary gain over network connectivity. Under this system, the concept of openness may be overshadowed by profit-driven motives, potentially hampering innovation and limiting access for certain groups of people.

Additionally, the lack of infrastructure is identified as a significant challenge that leads to fragmentation. Without adequate infrastructure, internet services may be limited or nonexistent in certain regions, impeding connectivity and hindering progress.

One area of concern is the influence of industry politics on standardisation bodies. It is recognised that choices made by these bodies can be influenced by industry interests and agendas, potentially impacting the open and transparent nature of internet standards.

The notion of consolidation is another topic of discussion. Even with open technologies, companies may seek to extract profits and monopolise the market, leading to consolidation and reducing diversity. This trend raises concerns about fair competition and innovation within the internet ecosystem.

On the other hand, innovation does not always require strict standards. For example, the development of blockchain technology by Satoshi Nakamoto, where an innovative approach was taken without relying on a predefined standard, showcases the possibility of permissionless, open, and individual-driven innovation.

Open architecture, open-source, open standards, and transparency are highlighted as essential components for a positive future of the internet. Open architecture allows people to build upon existing technologies, while open-source encourages collaboration and reuse of building blocks. Open standards and transparency promote inclusivity and foster trust among users.

Internet regulation and governance are acknowledged as crucial aspects for the future of the internet. A principle-based approach that considers factors such as individualism, autonomy, and societal values is suggested as a means of organising the internet. However, achieving global consensus on these matters is expected to be challenging given the diverse perspectives and interests of various stakeholders.

In conclusion, predicting the future of the internet is a complex task, given the rapid pace of technological advancements. While there are differing opinions on what the future may hold, the importance of openness, infrastructure development, community empowerment, and fair governance are recurrent themes in shaping a positive future for the internet.

Lorrayne Porciuncula

The analysis explores different perspectives on the impact and governance of the internet and technology. It begins by highlighting the initial optimism surrounding these tools, with the belief that they would serve as liberating and empowering forces. Lorrayne Porciuncula grew up closely involved in the evolution of the internet through her father’s local ISP in Brazil. She conducted a survey that revealed widespread optimism about the benefits that technology would bring to society. However, it is noted that the reality of technology’s impact is more nuanced than early optimistic predictions.

Porciuncula acknowledges that while the internet and technology have brought some positive changes, they have not fully lived up to the idealistic visions many had held. The argument presented is that the future concern lies more in the legal and regulatory aspect of technology rather than the technical layer. It is stressed that there is a need to consider how to build alignment across different national legal and regulatory frameworks to avoid fragmentation.

Furthermore, it is suggested that coordination and collaboration are essential in creating a more agile perspective towards internet infrastructure. This includes having a multi-stakeholder approach and addressing the challenges of cross-border coordination. Porciuncula emphasizes the importance of finding institutions and processes that are capable of considering various perspectives and adapting to the ever-evolving nature of technology.

The analysis also highlights the complexity of the internet and the need for international cooperation in its governance. It is recognized that the internet is difficult for one government to regulate and comprehensive governance requires collaboration on an international scale. The argument is made that the focus should be on governing the complex adaptive system of the internet through international cooperation.

Narratives are identified as playing a crucial role in discussions about the internet and digital society. Porciuncula emphasizes the importance of addressing issues such as walled gardens with competition tools and identifying the requirements society has for the internet. The analysis also notes that there is a lack of clarity about what society wants from the internet.

The need for remedies that allow users to switch between internet platforms is highlighted, drawing parallels with the example of telecoms where users have the right to switch. This is seen as a means to promote competition and reduce inequalities.

Addressing the complexity of internet governance requires a clear objective, an incremental and iterative approach, and multi-stakeholder inclusion. The analysis stresses the importance of considering the perspectives of underrepresented communities and incorporating them into the decision-making process. It is argued that multi-stakeholderism is not about relinquishing government decision-making power but rather about creating a more inclusive and democratic approach.

Lastly, the analysis suggests that sandboxes can serve as a valuable tool for testing new policies and understanding potential issues. By allowing for real-world testing of regulations, sandboxes can provide insights into the effectiveness of policies and help address any unintended consequences.

In conclusion, the analysis highlights the need for a more nuanced understanding of the impact and governance of the internet and technology. While there was initial optimism about their liberating and empowering potential, it is recognized that their impact is more complex. The focus should shift towards the legal and regulatory aspects and finding alignment across national frameworks to avoid fragmentation. Additionally, a more agile perspective, international cooperation, and multi-stakeholder inclusion are crucial in addressing the challenges of internet governance. Clear objectives, an iterative approach, and multi-stakeholder involvement are necessary to tackle the complexity of the system.

Emily Taylor

In the discussions surrounding the future of the Internet, Emily Taylor raises the need to explore potential risks and scenarios. Taylor outlines three possible scenarios for the Internet’s future: muddling along as it currently is, fragmentation due to various factors, or a more positive collective future created by society.

Taylor also reflects on the optimism once associated with the Internet, expressing a desire to rediscover that sense of potential for liberation and empowerment. This highlights the importance of not losing sight of the positive aspects of the Internet’s evolution.

The discussions emphasize viewing technology as an integral part of society rather than something separate. Izumi’s views on the chaotic nature of the Internet and the need for focus on better societies and individuals support this argument. The concept of a better future should encompass technological advancements as well as advancements in society and individuals.

In conclusion, the future of the Internet requires consideration of potential risks, a renewed sense of optimism, and recognition of the integration between technology and society. This comprehensive analysis offers insights into the discussions surrounding the future of the Internet and the need to align technological advancements with societal progress for a more inclusive and beneficial future.


Sheetal Kumar

The future of the internet is predicted to become increasingly intertwined with our daily lives and more challenging to separate from our activities, according to multiple speakers. They assert that advancements in technology have resulted in devices becoming smaller and faster, leading to the omnipresence of cameras through mobile phones. This development has made capturing and sharing images an effortless part of our routine.

Furthermore, the speakers emphasize the accuracy of past predictions regarding technological advancements. This observation highlights the potential for future visions and creations to shape the evolution of the internet. It implies that our anticipation and actions today can play a crucial role in determining the trajectory of technological progress.

Sheetal Kumar, one of the speakers, underlines the significance of actively shaping the future of technology. She stresses that technology should feel liberating for all individuals, especially those who lack positions of power. Kumar emphasizes the need to address and overcome current social inequalities in shaping the future of the internet. This call for inclusivity is accompanied by an appeal for engagement and cooperation among technology builders and standard-setters.

Moreover, the speakers stress the importance of harmonizing legal frameworks with human rights standards and making decision-making bodies more inclusive. This notion is grounded in the existence of international human rights law and standards. The speakers argue that aligning legal systems with human rights principles leads to more equitable and just outcomes. They advocate for increased transparency and a reinstated sense of user control in internet data, as recent trends have demonstrated a shift in control from users to corporate actors and governments.

Protecting the openness of the internet is seen as paramount. The speakers highlight the value of open access, enabling people to go online, build new applications, and develop technologies. They argue that maintaining openness fosters innovation, collaboration, and an inclusive digital environment.

In conclusion, the future of the internet is expected to be tightly integrated into our lives, making it difficult to disassociate from our activities. Promoting a future where technology feels liberating and inclusive is a shared goal among the speakers. They advocate for engagement, cooperation, and the alignment of legal frameworks with human rights standards. Reinstating user control and transparency while protecting the openness of the internet is also considered essential. Ultimately, the future world should be built upon the principles of liberation and the safeguarding of human rights.

Raul Echeverria

During the discussion, the speakers covered various important topics, including the challenge of internet fragmentation and its negative impact. They acknowledged the already existing fragmentation in the internet and expressed the mission to minimize it as much as possible, promoting a more unified and accessible internet for everyone.

Another significant aspect discussed was the need for gradual objectives and commitments in policy making. The speakers emphasized the importance of starting with simple agreements and progressively improving upon them. This approach encourages collaboration and partnerships among different stakeholders, in line with the goal of achieving the Sustainable Development Goal (SDG) 17 of “Partnerships for the Goals.”

To improve messaging and policymaking, the speakers emphasized the importance of clear and concise messages regarding internet fragmentation and its implications. Simplifying these messages would enhance policymakers’ understanding and enable them to make informed decisions. This approach aligns with SDG 16, which aims for “Peace, Justice, and Strong Institutions” and underscores the need for effective communication in policy-making processes.

Additionally, the discussion shed light on the impact of fear on shaping future policies, particularly in relation to artificial intelligence (AI). The speakers observed that discussions at a global conference primarily focused on fears and concerns about AI, with a negative bias. They argued against formulating policies solely based on fear, advocating for a balanced and rational approach rooted in evidence-based decision-making.

The speakers also emphasized the importance of involving youth in policy discussions. They believed that regardless of their level of expertise, young individuals should have a voice in shaping the future. This recognition aligns with SDG 16 and highlights the value of diverse perspectives in the policy-making process.

In summary, the speakers stressed the need for collaboration, clear messaging, and gradual improvement in policy-making processes, while cautioning against the negative influence of fear. By involving various stakeholders, particularly youth, in discussions, they aimed for a comprehensive and inclusive approach to envision and shape the future of the internet.

Audience

The future of the internet is heavily influenced by innovation in use cases and applications. Younger engineers are seen as key drivers of this innovation, as they come up with new ideas that shape the development of internet protocols and technology. However, there are concerns about the current state of the internet. It has shifted from being a force for good to being driven by aspects such as surveillance, capitalism, malware, and misinformation. This observation highlights the need for measures to address these negative aspects and ensure that the internet continues to serve as a positive force in society.

Diversity and inclusion also emerge as crucial factors in the development of internet standards. The lack of female participation and end-user representation in standards bodies is seen as a problem that needs to be addressed. Having more diversity and inclusivity in these bodies allows for a wider range of perspectives, leading to more comprehensive and effective standards.

Predicting future advancements in technology should focus on understanding user demands rather than solely relying on technological capabilities and government regulations. The speaker suggests that the best way to anticipate future developments is by understanding what individual users want technology to do. This user-centric approach ensures that technological advancements align with the needs and desires of the people.

While there is technological optimism, challenges arise from governmental regulation fragmentation and enforcement contradictions. The existence of contradictory laws and regulations related to privacy and online content does not seem to inhibit governments from enforcing them, raising concerns about the effectiveness and coherence of regulation in the internet landscape.

Incentives, particularly money, play a significant role in driving internet development, especially in the context of web 3 crypto. However, it is acknowledged that money may not be the sole incentive driving technology development. Other factors such as societal impact, innovation, and user satisfaction should also be considered.

The influx of cryptocurrencies is expected to make the future of the internet more complex and fragmented. This observation raises concerns about the possibility of increased fragmentation and the need for regulation to address these complexities effectively. Government regulation fragmentation is seen as a major risk that could hinder the development of a cohesive and secure internet.

There is also a focus on the need for more inclusive regulation, particularly in the context of AI. The lack of consensus and the competition surrounding AI regulation are seen as challenges. It is suggested that businesses, civil societies, and the engineering sector should document the consequences of fragmented regulation to increase awareness and promote more balanced and inclusive approaches.

Inclusivity and engagement of users from the global south and countries with geopolitical differences are highlighted as essential for the future of the internet. By incorporating diverse perspectives, the development and governance of the internet can be more representative and inclusive.

There are concerns about the negative aspects of the internet, such as internet shutdowns and the exploitation of ICT by bad actors. These issues call for regulation and measures to ensure the proper and ethical use of technology.

The importance of aligning government regulations with human rights norms and standards is emphasized. Both governments and companies have responsibilities to uphold human rights through their actions and policies.

Inclusive governance and the involvement of diverse stakeholders, particularly users, are seen as crucial. By including different voices and perspectives, decisions about the internet’s future can be more comprehensive and representative.

In conclusion, the future of the internet is shaped by innovation in use cases and applications driven by younger engineers. However, challenges exist in terms of the internet’s trajectory towards negative aspects such as surveillance and misinformation. Ensuring diversity and inclusion in internet standards bodies is key, and predicting future technology advancements should focus on understanding user demands. Regulation, especially with regards to cryptocurrency and AI, needs to be comprehensive and inclusive. Inclusivity, human rights, and the prevention of negative impacts on society should be at the forefront of decision-making.

Session transcript

Emily Taylor:
There’s an expectant silence, so I’m going to fill it. Good morning, good afternoon, good evening to those who are joining us online, and welcome to this workshop organised by the DNS Research Federation entitled The Internet in 20 Years Time. So this is organised around the theme of avoiding fragmentation, and what we decided to do was to imagine ourselves into a future in 2043, and we will be reflecting on the internet as it has become in our prediction, how we got there, what good would look like in 2043, and what action we might need to take now to fulfil the hoped-for future that we want. So my name is Emily Taylor, I’m a founder of the DNS Research Federation, I’m also CEO of Oxford Information Labs, and an Associate Fellow at the international affairs think tank Chatham House. I’m joined today by a wonderful panel of experts who are going to indulge this act of imagination, but I also hope that we can involve you, the audience in the room, and also online. Please feel free to ask for the floor at any stage, we’re not doing the sort of opening remarks, we’re going to just travel through those themes of imagining the future and how we got there. So if at any point you would like to join the conversation, please, you’re more than welcome to do so. So I’m joined today on the panel, I’m going to kind of run through from end to end, we have Olaf Kolkman, who I think is your current job title, Principal Internet Technology Policy and Advocacy at the Internet Society? Thank you. But we’re very, very fortunate to have Olaf, those of you who know him will know how deeply his understanding and communication of the technical layers of the internet and his ability to communicate that to non-technical people is much appreciated on this panel, and I hope we’ll be hearing that and the reach across into standards as well. We have Lorrayne Porciuncula, who’s the Executive Director of the Datasphere Initiative, and I hope I haven’t pronounced your name completely wrongly.
We have Ambassador Henri Verdier from France, who is joining us today as well. We’re very delighted to welcome you to the panel, Ambassador. We have, you’ll see an empty seat beside me, that is for Raul Echeverria, who’s joining us at about the hour mark. He’s the Executive Director of the Latin American Internet Association. We then have Izumi Aizu, Senior Research Fellow at the Institute for Info-Socionomics at the … Something like that. Something like that. Did I say that right? At Tama University here in Japan, and then Sheetal Kumar, who’s Head of Advocacy at Global Partners Digital. So welcome to all of you on the panel. So my first question to you all is, if we imagine ourselves in 2043, what does the internet look like? Let’s try the sort of, you know, your best guess. Before we get started on that, as we’re in your hands as futurologists today, how good are you at predicting the future? Would you say, does anybody want to share any anecdotes about their prowess at predicting the future? Olaf, have you got anything for us?

Olaf Kolkman:
I knew this was coming. I told this story to Emily once. In the second half of the 90s, sort of 95, 96 or so, I was making webpages in the university, studying astronomy. At some point, a PhD student that I was working with came to me and said, let’s bail out. Let’s start a company making webpages. And I told him in his face, no, I will not do this. This whole web thing will not go beyond academic libraries and preprints. So that is how good I am at predicting the future.

Emily Taylor:
Great to have you on this future gazing panel, Olaf. With that, Sheetal, have you got anything for us?

Sheetal Kumar:
I’m not sure if I have a mic here. Sorry. I think we might have to share. So I think the danger with these questions is it also forces you to reveal your age to some extent, which is part of the game, perhaps. But I do remember, perhaps, it’s not me, but I remember my parents saying that they, well, one of them, that they imagined 20 years ago that in 20 years we would have cameras on us all the time, which is true. We have our phones. And we would be able to access, you know, what we want to see on smaller phones because they kind of imagined that the devices, the pagers and then the big block phones that we had and we were carrying around would just become smaller and smaller and faster and faster. And that’s what happened. They should be on the panel. But I think that that lends us to, you know, the question of, like, how does that then evolve and what does the internet look like? I think it’s more perhaps a question for people, what does the internet feel like? And it’s, I think, going to be along the lines of, of course, what we create and what we envision and how we build that. But an internet that is more, just more in our lives, more embedded, more difficult to disassociate from everything that we live in and inhabit. So that would be my prediction.

Emily Taylor:
And we’ll come back to the vision of the future in a second, Sheetal. Thanks for sharing that. Ambassador Verdier, shall we do the same with you?

Henri Verdier:
Hello and thank you for the invitation. I started my first internet company in 1995 in France, when we were 15,000 web surfers, so I didn’t miss the internet. Here I was right. And I remember, for example, do you remember when Bill Gates said that the internet would never work and Microsoft.net would be better? Here I didn’t miss the story either. But my company was a subsidiary of a publisher, and I remember the birth of Wikipedia, and I thought and I said that it was impossible to conceive an encyclopedia without a genius like Diderot or d’Alembert. So I said, this is impossible. And that was my first mistake in this story. Yeah, well, you know, what do they say?

Emily Taylor:
Predictions are difficult, especially about the future, right? But you got a lot of it right, like Sheetal’s parents. So Izumi, how about you?

Izumi Aizu:
Thank you. I think many of you know the very famous phrase: the best way to predict the future is to invent it, by Alan Kay. But 12 years ago, there was a big earthquake, and a tsunami happened. We never predicted it, right, and we didn’t want to invent it either, at all. If it’s positive, you can invent, you can make the future bright. But how many of us imagined that the Gaza thing would just start to fire, and what’s going to happen? How many of you, Ukraine, not to mention. Also 28 years ago, there was a big earthquake in Kobe that hit us, and many were killed. So yes, while we are very much optimistic about the future with all the great things like technology, the Internet, smartphones, AI, the reality may be composed of many different colors: dark, void, vacuum, green and white. So I don’t know how really to respond to your nice question, Emily, but I will try to come up with something later.

Emily Taylor:
Thank you very much. Lorraine.

Lorrayne Porciuncula:
Thank you so much, Emily, and thank you for the invitation to this panel. I keep thinking that, well, I grew up with the Internet, right? It’s hard for you to predict something that was part of the air, something that’s already a given, right? I remember when I was very young, my father had a local ISP in Brazil; it was one of the first in Brazil, actually. And I remember playing around servers in the cool rooms and all that. That was very much part of my life. I’ve been part of that. And so, trying to retrace when I started thinking about the Internet as something separate, it was probably around my undergrad, studying international relations and economics. And I was very much into Amartya Sen, the Nobel Prize winner, thinking about capabilities and how that is development, right? Rather than just thinking about how much money you’re going to make or how much GDP you’re going to grow in a country, it’s about the capabilities of individuals or communities. And it struck me particularly because that was around when, well, the green wave and the movements in the Middle East were happening. There was the Arab Spring. And I remember the sense of excitement about what technology was going to bring, the kind of empowerment and expansion of capabilities it was going to bring. And this realization, very naive at the time, and I think shared by many people, that there was no way to fight this, because it was going to come in terms of liberating people and populations, and technology was going to empower everyone. And it was interesting, because I did do a small survey asking people about their predictions. And there were a number of answers to that question, and a lot of people saying, well, there’s just a lot of positivity and optimism in terms of what it was going to bring to society.
So when I think back to that, I think, well, we certainly are not there in terms of it just being the solution to so many of the problems that we already have inherently as societies. But somehow it has brought good things. So the answer is way more nuanced, as with any prediction. It never comes in an extreme kind of scenario.

Emily Taylor:
Yeah. So we’re not going to get it right. But I think that that view from you, Lorraine, as somebody who never remembers not having the Internet: the future is difficult to predict, and it will have good and bad aspects, as Izumi has said. But one of the things I hope we can rediscover on this panel, in a small way, is that sense of optimism that you describe from your earlier time. And so we set out, in an accompanying paper to this session, three possible scenarios for the future, which we published on our blog. I don’t know if you’ve seen it. I think it’s on the page for this workshop. But in the TLDR version: first, we muddle along more or less as we’re going, and it’s a little bit worse, probably, but it all holds together somehow, a bit like what we’ve got today. Then, in 20 years’ time, there’s a fully fragmented future, which is divided at the technical layer, at the ideological layer, at the regulatory layer, or all three. And then there is the bright future, where we all collectively get our act together and almost deliberately work to create the Internet in that optimistic frame that you described so beautifully, Lorraine. So if I could just get a sense from our panel, and also please do raise your hand if you would like to join in this conversation. I’d like to hear from you according to your expertise or your area of interest; you don’t have to cover everything. What do you think is the most likely future that we will have for the Internet, and why, and at which layer do you see the most risk? Shall I start with you, Olaf, as you started by sharing so honestly your prediction about there being no future?

Olaf Kolkman:
Yeah. Again, predicting the future is incredibly hard. And what you normally do with scenario thinking is you go into the extremes. Now, when I read this paper for the first time, what I noticed is that the future is already here. What you’ve taken are points that we already see starting to happen, and that can explode and find their way into that future and become more prevalent. And if that happens, the world will look different. When I was thinking of the story of hope, and this is a way to classify those futures: when I was young, what I liked about the Internet, what drove me towards the Internet and what made me the professional that I am now, is the openness, really the scientific method of sharing knowledge. Criticizing each other, having knowledge available. Everything I learned about the Internet, I learned on the Internet, and I shared my learnings and I contributed to that Internet as well. And that feature of openness, I think, is another way to classify the scenarios that you have. The first scenario, a mixed scenario with closed networks or mixed networks coexisting with the traditional Internet, is about being closed, about being proprietary, about developing services where the services make the money and people pay for the services. And that’s the way that they connect: I connect to this service, and the network connectivity itself is not important anymore. And that’s different from the third evolution, which is more open and treats the Internet as a way to connect to the rest of the world and choose your services. So I leave it at this for the moment. We can go in deeper.

Emily Taylor:
Thank you. I’ve got three people waiting to join the conversation, and I’m so pleased to see that at such an early phase. And I would encourage any women in the room who would like to ask a question to raise their hands; I personally find the mic in the aisle quite a big step. But if any women would like to join the conversation from the floor, please do. And some younger people. And Asians. Thank you. But let’s just run through some very brief interjections from you to the conversation, if you’re ready to do so. Thank you very much. Sure.

Audience:
This is Barry Leiba. What I’m going to say is going to follow on very nicely from what Olaf just said. I have a talk that I’ve given in a few places about Internet architecture: how we collectively built the Internet, how it got where it is, and where it’s going. In how it got where it is, there’s a lot about the innovations that drove the architecture, and how we added to the suite of protocols that make up the Internet with things like media streaming and teleconferencing; we’re now working on protocols for autonomous cars to talk to each other, in a little pocket there. So where we’re going is a realization that what has driven the Internet is innovation in use cases and applications: the things that we can do with the Internet have built up the suite of protocols and the technology that makes the Internet. So as we look to the future, I can’t predict specifics, but what I can predict is that it’s going to be some brilliant engineer who’s a third my age who has the next great idea for an application on the Internet, and that’s going to drive another set of standards and technology that builds the Internet 20 years from now.

Emily Taylor:
Thank you. And thank you for also highlighting the role of standards in shaping the way that we experience technology. So I hope that we’ll come back to that on the panel. Andrew, do you want to give us a quick interjection? And then, after we’ve heard from Mike, I’m going to resume our panel discussion with Izumi.

Audience:
This might actually work quite well; it follows quite nicely on Barry’s point. So I’m going to come at it from a different point of view. I think when we look back in 20 years, we’re probably at, or close to, an inflection point. Up until now, the Internet’s largely been a force for good. And I would observe that when we consider things like surveillance capitalism, malware, disinformation and misinformation, and CSAM, we’re at the point where the balance is shifting to it no longer being a force for good, and actually being a force for harm rather than good, when you net out the various effects. So this is where we get to Barry’s point about standards bodies. I literally this morning received an email telling me that, from a survey of the IETF membership, it’s around 10% female. I’ll leave that without comment. There are no end-users, or virtually no end-users, present in the standards bodies; the IETF is not unusual in that regard. None of them are really very good in terms of multi-stakeholderism in any meaningful way. So I would suggest that when we look back in 20 years, the reason it’s an inflection point is that we either change the SDOs to be multi-stakeholder and diverse on all sorts of different axes, or potentially this will fail under the weight of the harms, because we need to design this as an internet for the users, not by the engineers. So we’ve got two very contrasting views already for the future.

Emily Taylor:
We’ve got from Barry the idea that there’s going to be some really unexpected piece of innovation that just comes out of nowhere, which picks up a point from Izumi: when you look back at the things we failed to predict, even in the last week, these are unexpected things. And a somewhat more pessimistic view from Andrew, highlighting some aspects of the standards development world that are not currently as inclusive as they should be, and even the internet becoming a force for harm rather than good, which I saw caused ripples in the room. Mike, can you help us out with another vision, and then we’ll resume the panel.

Audience:
Well, my comments are going to feed in very nicely to both previous speakers. I’m at the Carnegie Endowment for International Peace, but I’ve had eight dream jobs. A couple back, I was at Georgetown teaching about communications, culture and technology. My most popular class was called How to Predict the Future(s), and I taught the students that the best way to understand what’s coming isn’t to look at what the technology can do, and not to look at what governments think the technology should not do. The best thing to do is to look at what individual users will want the technology to do, and that’s whether it’s digital technology, whether it’s biotech, whether it’s cars. And I don’t think your panel is structured in a way to do that. So I want to rewrite your program description to spend at least a few minutes thinking about what it is that is driving the companies and the governments to make the internet better, because I’m a technological optimist and a political pessimist. In those countries that have policies that allow a lot of innovation and competition, I think we’re going to solve most of the problems that were just mentioned. But I don’t think we’re going to understand the future if we don’t understand what’s driving it, and I’m just challenging you to ask that question.

Emily Taylor:
That’s a really welcome challenge, and I think, Izumi, I’d like to turn to you on that. We often talk about fragmentation in very particular ways, like we’re in the layers, we’re talking about it at a technical level. But as Mike has challenged us, there are lots of different ways that fragmentation might emerge, and different ways of framing the issue as well.

Izumi Aizu:
Thank you very much, Professor Nelson. I’m a student of your class, okay, 20 years ago. Well, it’s a pity that we don’t have anybody in their teens and 20s on the panel, not to mention many in the room. I had an interesting discussion yesterday with a 12-year-old kid and a five-year-old, and talked about war and peace and the internet. But let’s put that aside. You prepared three scenarios, right? Mixed networks coexisting with the traditional internet; the second scenario is a fragmented internet with national bloc politics; and the third one is the globally unified, strong internet. I would say the first one, the mix, and I would call it chaos. I don’t see any globally coherent, strong internet. I asked Mr. GPT and Mr. Bard a few minutes ago. I got more than I can read in five minutes. But interestingly: overall, while fragmentation due to political and economic reasons might be predominant, the underlying ethos of the internet as a tool for global communication and knowledge will likely persist. That’s what Mr. GPT said. And Mr. Bard: personally, I believe that the internet is likely to become more unified in the coming years. Very noisy pictures, and these are the AIs, not me. So I would perhaps later explain a little bit why I would call it chaos. Mike may have already mentioned it, but you said for the better internet; I challenge that. I would say for a better society, better people, not a better internet. That’s a very different view of the world and the internet. So that’s my second contribution.

Emily Taylor:
Thank you very much, Izumi. And I think it sort of comes back to where you started us, Lorraine, and Sheetal: maybe it’s more and more false to think of technology as something that is separate from ourselves. It’s integrated; the future of the technology is very much about our own future as societies and as people. Lorraine, what do you think is the most likely of any scenarios? The three may have helped or not, but remember Mike’s challenge: you can frame it however you want.

Lorrayne Porciuncula:
Thank you so much. I actually really love the flow of the conversation and how organically we’re integrating the different arguments. I’m going to try to build on all of that. I think that the question around fragmentation needs to take a step back in terms of which layer we are talking about when we’re talking about fragmentation. Often people tend to confuse the issues in terms of what kind of fragmentation we’re actually talking about. And there’s a difference between fragmentation at the technical layer and fragmentation at the legal and regulatory layer. So for me it’s more useful to think about, I mean, if I’m talking about scenarios in 20 years, I don’t think we’re going to have such a big issue with the technical layer in itself. I think the real big hairy challenge is going to be around the legal and regulatory space. And that’s not a potential; it’s a reality right now. So I’d rather focus on that scenario, which is very much true around fragmentation in regulatory aspects, than on what could happen if a number of things happen in the domain name system, etc., right? So, that taken aside, I also think that ultimately it’s not only about fragmentation of the internet, as Izumi said; it’s about what’s happening with our digital society. And so the question that I’d like to pose, also instigated by Mike, is: how are we going to get along, really? And what are the incentives? Because ultimately policymakers are going to design regulatory and legal responses to what they are afraid of, to what they want to control. Talking to Mike just before the panel, he was saying he wrote a paper 25 years ago around the different incentives for what governments would like to control, which is basically taxes, content that is online, how you ensure national security, democratic processes.
All of these are incentives in terms of how you build the tools from a national angle and see them reflected in the technologies that we have today. So the questions are around how national concerns are going to be reflected in the technology, and the fragmentation happens at that angle, really, which is legal and regulatory; and around how we actually find convergence, or how we find interoperability across those different legal and regulatory regimes. And how do we find the institutions and the processes that are able to take this all in, from a more agile perspective, in a way that coordinates across borders and across multiple stakeholders.

Emily Taylor:
Thank you very much. So I hope we can expand on your final point about what we need to do, how we equip ourselves for the future we want. But we’ve heard from the audience and the panel that there might be fragmentation risks at a technical level, and also, from you, at a legal and regulatory level. Ambassador Verdier, I’d like to turn to you now for your prediction about where we are likely to end up. And thank you very much, I can see already five people, including two women, thank you very much for that. I’m going to go to Ambassador Verdier and then to Sheetal to articulate your predicted or preferred visions, and then I’d like to hear about your visions for the future as well, and we can join forces in that way.

Henri Verdier:
Okay, I’ll try to be brief and make four small remarks. First, I don’t know what will happen, but I know what we should fight for. Because you say that you don’t remember the world before the Internet; I can imagine the world without the Internet and the world after the Internet, and I’m not sure that my daughters will know the world we are living in today. Because, of course, there will probably always be a technical standard and the possibility to build interaction between computers, but you all know that some big states don’t really like this free, open, decentralized Internet, and most big tech don’t care about it. And you observed, like me, that for example now 80 percent of the submarine cables are private, and we can observe a tendency of privatization of something that was a commons. So there will always be an Internet, like there is a darknet, for example, but maybe we won’t live within this Internet, and that’s a main concern. So I wanted to share this with you: we have to fight for this open, neutral, free, decentralized Internet based on the open standards that we can share.
My second remark, and maybe that’s normal: I remember the Declaration of the Independence of Cyberspace. Most of you remember John Perry Barlow. At that time, the cyberspace could be seen as a foreign place, somewhere else. Now it has invaded and transformed everything: education, health, business, war, peace. So this is normal life; there is not anymore a digital life, this is life. Every problem we have as governments and as citizens, as democracies, has a digital aspect. That’s why you will observe more and more digital diplomats, because now half of diplomacy depends on the digital revolution. So we have to think about this, and this is new: six years ago you didn’t have any digital diplomats, now almost all of us have digital diplomats. So we’ll have to engage everything we conceived as states, as democracies, for centuries. And just to finish and to launch a conversation, I share your view that we should pay attention: we should separate out the possibility of a technical fragmentation, which would be a very, very bad scenario. I don’t know if you have thought about the fact that, so far, we are all interdependent; even the less digitalized countries rely on the internet the same as we do. If you could imagine two or three technical internets, the temptation to disconnect the other would be very high, and the war would become a war on the ground infrastructure itself. So far we have cyber war, we have attacks, we do observe lots of things, but no one has tried to disconnect the internet itself, because it would hurt even the attacker. So there is a technical layer, and there is a political layer, the legal layer. From my perspective, I will always fight for the unique, open, neutral, decentralized internet. The legal aspect is something different. Of course, it would be better for business, for everything, to converge in one direction, but if you believe in democracy, you believe in the right of the people to take decisions regarding their own future. So we cannot ask for one legal framework for all the world, because we won’t agree with some countries. In France, you cannot say publicly anti-semitic or homophobic words, because the French people want this. And we don’t have to comply with other regulations if we want this as a democratic country. Okay, I will finish with this: I will fight for one unique internet. I am not there to build one unique market for Mr. Zuckerberg; that’s another issue, that’s not mine.

Emily Taylor:
Thank you very much, and I found that very moving, actually, thinking, as someone of the same generation who remembers not having an internet, about our children’s future, and that it might be more similar to our past than we would like. But thank you, because policy people do like to be miserable, and to have you articulating so strongly the intention to really fight for a better future is something we need; we often feel very passive with technology, that it happens to us and we don’t get to design our future. And from the panel I’d like to hear, last but not least, from you, Sheetal, about what scenario you think is either most likely or what you would like to see.

Sheetal Kumar:
Well, quickly, as I would love to hear from everyone else: I can only speak to what I would like to see, because I think we do build, and we do design, our future. Unfortunately, some of us don’t have as much power to do so as others, and that’s something we have to be aware of, and that’s where the internet should act as a tool for changing that. But going back to my point of, instead of thinking about how the internet might look, thinking about how it should feel: in 20 years’ time it should feel liberating, and it should feel liberating to people who perhaps now don’t occupy those positions of power and don’t have them. And that, I think, is an opportunity for us to ensure that the internet, now or in the future, doesn’t reflect the inequalities of our society and their structures. And, to the point that was made earlier, what’s really important in that is ensuring that those who build the technologies, who build the standards, are engaging with each other and opening up these spaces to all those affected. And I know we’re going to come on to what can we do, what should we do. I think a lot of thinking has been done about that, both here within the Internet Governance Forum and outside, so I’m really happy to reflect on it, because I think there are a lot of positive and concrete recommendations. I also think, frankly, we know what we need to do, but we often don’t do it. So the more we enumerate, the more we vocalize, and the more we commit collectively to what we need to do, the better. So I’m glad we’re here to do that, and happy to pick up on the points of what we need to do to get to that third scenario.

Emily Taylor:
Thank you very much, Sheetal, and that tees up very nicely the next stage in our conversation. I’d like to thank you all very much for waiting patiently. I’m going to go to Georgia first, then I’m going to come to you, Steve, and then we’ll sort of zigzag across our audience members. Thank you very much.

Audience:
Thank you very much to the panel for all your comments. My name is Georgia Osborne; I’m a senior research analyst at the DNS Research Federation. You mentioned incentives, and one thing I think about when I think about the internet in 20 years’ time is: what are the incentives? Money is a massive incentive. We hear a lot about what people are talking about with Web3, crypto, and those kinds of different fragmentations that you have at a technical layer. And I would say that money is currently driving the incentive to build a Web3 through crypto, and perhaps this future will be much more complex with that coming into play, and perhaps it will be more than money that drives that incentive as the technology develops. I was wondering whether the panel could comment on this type of fragmentation. So you have the Ukraine war, which is funded mainly by crypto, and you can call it the metaverse, the Fediverse, or whatever you want to call it, but I’d be curious to know what type of fragmentation you might see in that kind of area, whether you see it being integrated or more fragmented. Thank you very much.

Emily Taylor:
Thank you very much for that. I’m going to take the comments, and then we can invite the panel to make some comments or reactions to that challenge on Web3 and crypto. Thank you very much for your question. Steve.

Audience:
Thank you. Steve DelBianco with NetChoice. Fragmentation by regulation is not only the largest risk, but it is the reality, as you indicated. So for the short term, for the front end of the next 20 years, that is what we will confront. There are scarcely any inhibitions for a government to legislate in any way it wishes, in a populist fashion in particular today, because the consequences are nonexistent. The governments that try to control what their citizens see and say, and contradictorily impose privacy at the same time they’re trying to enforce the laws against bad actors: those contradictions are not enough to stop governments from doing so. Seventy-five percent of this conference has been about AI, and it really isn’t about a drive to consolidate and cooperate on AI regulation. No, it’s been a competition. The speakers have competed with their visions of how they believe AI should be regulated, and that will continue. In the case of NetChoice in the United States, we try to push back on that fragmentation through lawsuits based on unconstitutional approaches. We’re having some success there, but that is not going to work in a cross-border fashion. I’m calling on business and civil society, particularly the engineering sector, to begin to document the consequences of fragmenting regulation at all the layers: document not only the costs, because costs become a barrier to entry and are passed on to the consumers, the voters of the countries that have embraced unilateral action by their governments. So I believe we need to raise the pain level, so that governments believe there is some cost to enacting unique legislation that imposes cross-border jurisdictional impacts and raises the cost of everything we’re trying to do.

Emily Taylor:
Thank you very much. And I want to hear from everyone on this. I knew this would happen, but it’s great to have such an interactive conversation. I want to come to you, Ambassador Verdier, on that point, and any others that want to join in: how do we maintain what you were talking about, which is democratic choice and that diversity, while also maintaining the unity? So let’s hold that thought and hear from the others in the audience. Thank you very much.

Audience:
Hi, my name is Nikki Colasso. I run global public policy at Roblox, which is a metaverse company. I think that comment was really well taken, because as I’ve been sitting here, I’ve been thinking about the difference between IGF last year and IGF this year. And for those of you that were at the conference last year or attended online, a lot of the conversation was around how we do approach technology from a perspective of inclusivity. So Sheetal, I really appreciate your thoughts on inclusivity. We talked a lot about incorporating the global south into the decisions that were being made. And so my question for the panel is, as we move to this third phase of the conversation, I think we understand at a high level what the issues are. Very crisply, what do you see, or what are the specific steps that companies, civil societies and others can take to engage users, others in parts of the world that may not get representation and in countries that have geopolitical differences? How do we actually go about having those conversations? What is the way to do that? If we know we need to do it, that’s agreed, how does that happen?

Emily Taylor:
Thank you very much for that. So far, we’ve got Web 3.0 and money, interoperable laws, and the involvement of the global south and of those places where there is disagreement on the basic ideology. Sir?

Audience:
I was really wondering if you knew my name. That was going to be impressive. Hi, I’m Jarell James, and I do have a question, similarly, with regards to money. I don’t really know how to predict the future of the internet, but I do know how to look at history and see that money is the overwhelming factor in how power is flexed on certain communities. My question is with regards to both regulation and policy around sanctionable actions. As we see regulation for monopolies in our traditional finance system, do we see a development where we stop what is essentially digital colonization from happening, from large actors like Facebook, into regions that are massively underdeveloped in the communications infrastructure sector? And do we create and enact actual policies to prevent those communities from only taking solutions from outside their regions? Instead of doing what we do, which is bring tech to these communities, how do we foster this development from within these communities, so that in 20 years we don’t sit on the same problem that we have now, which is 21st-century colonialism and resource extraction? And the further part of that is, with these shutdowns, are there sanctionable actions that can start to inform this direction?

Emily Taylor:
Thank you very much for that. And it’s Jarell, right? Thank you. So we’re adding digital colonialism, and how do we foster sort of indigenous development, if you like, if I can put it that way, and also sanction bad actors? Thank you.

Audience:
Good morning. I’m Jennifer Bramlett. I’m the ICT coordinator for the Counterterrorism Executive Directorate with the United Nations Security Council. The issue of money is very interesting. I was amazed by the remarks by the representative of Saudi Arabia yesterday when he was quoting the internet that we deserve and the billions of dollars that would be lost if we didn’t solve issues of fragmentation. Hundreds of thousands of jobs. So I thought it was interesting that he put it right in front of us so bluntly. And when you look at gaming and all of these other industries, multibillion dollar industries, mobile gaming, regular gaming, I mean, it’s an amazing amount of money being generated. Amazing amount of money being generated by bad actors, which is the space that I look at, is how terrorists and violent extremists are exploiting ICT for criminal purposes. One of the main areas that we’re looking at is counter-narratives, and so how language is being proliferated across various systems to recruit, to radicalize, and to keep this criminal enterprise going. And that’s one of the issues that we’re looking at with regard to regulation, is language across jurisdictions and what’s considered harmful language, what’s considered unlawful language, and how authorities in various jurisdictions are going to deal with that in these borderless zones for what’s out on the internet. Also, one of the spaces that we’re looking at in terms of futuristics is the concept of reality. I do remember life before the internet. And yet, looking at the kids growing up, especially those who are playing in the games and in the metaverse areas, we had some really good talks with Naver Z recently, and the concept of reality, what I consider to be real, I go into a game, I play, and I leave, and then I go do my life. Whereas for other people, it’s becoming less and less of a division. 
And when we’re looking at legislating crimes, and bringing crimes into domestic frameworks, we already have a problem where we don’t have the legislative frameworks and the capacities in member states to be able to even deal with the internet as it is now. Many states don’t have laws that say, terrorism recruitment online is illegal. They have laws against terrorist recruitment, but it doesn’t apply to cyberspace. And so as we move into metaverse, fediverse type realities, what if something happens in the metaverse? If you, for example, detonate the White House in a metaverse world, it could be very real to some people. How do you even deal with that? And so these are things that we’re starting to think about from our side of the house.

Emily Taylor:
Thank you very much, Jennifer, and that’s a fascinating addition as well. There’s a huge amount to unpack in that: the recruitment of terrorists online, the establishment of counter-narratives, and also, I think, the key point at the end about legislative frameworks and the gaps in those frameworks in many countries. Let’s have a very brief moment with Vittorio and Bertrand, because I feel like I’m neglecting our panel. I can see they’re like, hang on. But I want to have a reflection on what you’ve heard, and then, in the final stage of our workshop, to really think about action, and about how we really start to articulate the vision of the future we want and what we need to do now. So let’s go to Vittorio, and then Bertrand.

Vittorio:
Thank you. Actually, I’ll ask my question first, but then I want to make a comment. The question was: don’t we think that we need regulation to preserve the interoperability and the openness? Because I think that is the reason why, at least in Europe, we regulate: to preserve the interoperability and openness of the internet, not to break it. But I was prompted to make a comment by the previous round of comments about the future of the internet. If the future of the internet is decided by what the people want from it, what do the people want from the internet? Take the average internet user today: what do they want? Going to Kyoto and not even looking at the places, but taking selfies and posting them to social media and saying, hey, I’m here, give me attention, I need someone to tell me I’m beautiful and interesting. So we’ve been building something which is growing the bad parts of the internet, the worst parts of people’s internet personality. So the problem is: what’s the social purpose of the internet today? Because we see the purpose in terms of new technology. We want autonomous driving and AI and bionic arms, and why are we doing them? Well, because we can, and because someone will make money out of it. But what’s the social purpose of this? I think we’re missing that. We had one 30 years ago, but we don’t have it now.

Emily Taylor:
Yeah, I think there’s a real thread running through a lot of the comments on incentives and what people want, and going back to Mike’s earlier challenge about the users,

Bertrand:
I want to piggyback on the distinction that Lorraine introduced between technical fragmentation and legal fragmentation. What is interesting is a lot of the technical fragmentation, if it happens, is not driven by a technical objective, mostly. It’s driven by the political environment and the objective of preserving spaces that would become separated because they would correspond to the spaces that are politically separated today. So the fact is that the legal fragmentation is a fact of the international system because of the national sovereignty, and that’s a reality that goes, as Henri was mentioning, to the notion of territorially based national sovereignties. However, to go back to what Jennifer was saying, one of the challenges that we have is that even without interoperability of the legal framework, the separation and the legal fragmentation is what prevents us from addressing the abuses in many cases. When you have a criminal investigation, the framework for access to electronic evidence is nonexistent at the moment and completely insufficient. And in most cases, it’s a whack-a-mole game to avoid a certain number of contents. And so I would just want to finish by saying, I completely agree with Henri that there is the democratic process for each country to do what they think is best for their citizens. That being said, interoperability doesn’t mean alignment completely. Just like the architecture of the Internet allows autonomous networks to function through protocols in an interoperable manner, I think the big challenge that we have if we want to preserve the open Internet is to reduce friction at the legal level by building a governance protocol that allows heterogeneous governance frameworks, including the governments, the companies, and all other human organizations, to be interoperable yet autonomous.

Emily Taylor:
Thank you very much. And of course, Bertrand, maybe there’s somebody in the room who doesn’t know Bertrand, but I think he is very much leading the way in promoting that legal interoperability. Thank you for your question. What I’m going to do, if I may, with your permission, is come back to the panel to react, and then I’m going to come back to you first in the queue. So we have had seven interventions so far, and it’s not even half past twelve yet. What I’d like to do is go through our panel, and please choose one question that you would like to respond to, one that is clearly in your frame of expertise or that just stimulated some thoughts. Who would like to go first? Can I start with you, Ambassador Verdier? Because there was quite a lot that was building on your points, or maybe challenging your points, about interoperability, and how we reconcile that autonomy with still being in a network.

Henri Verdier:
I think that almost everything has been said. As I said, first we have to fight to protect a common infrastructure that can be interoperable. The Internet is a network of networks; the Internet is not just one standard everywhere. And there is a second and different question about legal fragmentation. I cherish your approach, Bertrand, that we should learn and try to build interoperable legislation. But so far we don’t really do this, and there is no world government. So let’s try to progress in this direction, but we are not there. I have one observation from my perspective. You know, I come from a very libertarian Internet; I cherished John Perry Barlow 30 years ago. So most of my friends ask me: why did you join the government, the bad actor? And why are you now fighting for regulation, for European regulation? I don’t feel that I changed, because from my perspective, and that’s personal, when you enter a social network, you leave the Internet. If you are within Facebook, within Twitter, within YouTube, you are not on the Internet anymore; you are within a private place built on the Internet. And I feel that there is a certain level of confusion in this conversation this morning, because we use “Internet” to speak about TCP/IP and to speak about Facebook or Twitter or TikTok or whatever you want. And just to mention it: as a European citizen, I prefer the European rules to Elon Musk’s rules. And I prefer to discuss with other countries and other citizens, to decide something, and to impose this decision on these big platforms. We have to put this in the conversation: we can still protect the free and open and decentralized Internet, but people don’t live there. My daughters are never on a blog, for example, or on the open Internet. They are always within something.

Emily Taylor:
Thank you very much. Sheetal and then Izumi. Okay, here we go.

Sheetal Kumar:
So there was a lot there to react to. I said earlier that I think we know what we need to do, and we are not doing it. And perhaps to elaborate a bit on that, what I would say is that on the legal fragmentation point and the fact that there is a need for harmonization of frameworks, we do have international human rights law and standards. And of course, there is a lack of agreement perhaps around how that is being effectively implemented. But it is there. We have the rule of law. We have our institutions. And we need to use what we have, which includes interpretations of international human rights law that already exist. And we need to commit to those. And I think global norms and standards, which include those that are discussed here at the IGF and are evolving, including in various UN institutions, requires referring to and, again, committing to constantly and adapting those to the digital age. Because ultimately, I think, we need to build on that. So there are a couple of areas. And I know we’re going to come to the question of what do we do. But I just wanted to highlight that protecting what is essentially the openness, the ability to get online, to perhaps build new apps or technologies and shape the future, I think is so key. We have the internet that Olaf discussed. We understand the need to protect that. We understand the need to align and to harmonize our legal frameworks to human rights standards. We know that we need to make these bodies more inclusive. And there are ways to do that. And I think that what’s really important is that we refer to these commitments and that we are all taking that home as well in various democratic institutions and in global forums that we are vocalizing these values and ensuring that there are mechanisms for implementing and instituting them. Perhaps I can say this at the end, but I’ll quickly just throw it in here. I know that we don’t have many people in their teens and perhaps younger people here. 
But I think it’s important for us, perhaps the older people, not to be so self-absorbed and nostalgic about a world that, you know, isn’t that great, or wasn’t that great either, and to look forward to building a world that is ultimately about liberation and about ensuring human rights are protected. When it comes to the internet, I think what is really key is ensuring user control: ensuring that the kind of control that has just been discussed, corporate actors or governments controlling and deciding what the internet is, doesn’t happen. So we need to wrest that back, and how we do that, I know we’re having that conversation. As I said, we already have many, many ideas and ways to do that. But that’s key, because it’s that that’s shifting, and that’s what people are worried about. When I was a child, I remember thinking it was so cool that I could just get online, and it was transparent; I knew how things worked and where information was. And now I think what is happening is that the journey is becoming less transparent; we don’t know why certain things are happening, and that needs to shift. Thank you very much.

Izumi Aizu:
Izumi here. Being a little bit older than you, I’d agree with maybe 70 percent of what you said. When I was 40, I was really excited to see all the new things online. But to respond to Vittorio’s and Bertrand’s comments: yes, it’s the people’s will, as Vittorio said, or the jurisdictions and legal frameworks, or international politics. I’d like to add a little more. Imagine that we’re now in 2043, at, let’s say, a world governance forum, not an internet governance forum, with a new United Nations, or new United Communities, 20 years from now. It could happen, right, after whichever side wins this war or that war. And then comes the climate change war. There might be a regulation that says one shouldn’t have too many servers in the data center, not too many AIs or crypto, because of these environmental factors. You may think these are external factors to the Internet, but I would suggest you turn that upside down. Back to history: in the 40s we fought a lot, people killed each other a lot, and with some reflection came some of the uses of information, and then the ballistic calculators for sending bombs, and then came the Internet during the Cold War phase. But then in the 80s to 90s the world changed. The Cold War was over, and we hoped that East and West would come closer; that’s why the Eastern side wanted to be united using the Internet. Perhaps China wanted to have technology and science and economic growth; that’s why they accepted the Internet, and we tried hard to talk with them. So these wills of people and entities allowed some technology to be picked up and made global. How about now? China got there, India got there; do they still really need a globally united Internet, and science sources from the West? Yes and no, right?
So my first take is a mixture of the fragmented one as well as some chaotic one; we need both, but we don’t know the reality of the politics and environmental changes and so on in the near future, not to mention the far future. So I think we should go out of the box, out of the ivory tower, or the IP tower, the Internet-centric thing. The future of the Internet: who cares? The future of us: we care. So then, how can technologies, including the current and future Internet that we will work on, make something better? That’s kind of my suggestion. Thank you.

Emily Taylor:
Thank you very much. Lorrayne?

Lorrayne Porciuncula:
Thank you, and I think that’s a perfect segue to my point as well. I completely agree with Izumi. I mean, we can think about the sort of golden age of the Internet, when it was a lot of personal blogs and websites, and of course it wasn’t just that. But I’m trying to think about the average person, who ultimately doesn’t care about that; what they want is to be able to access content. And narratives really matter here, and the words that we’re using really matter; I think that’s a common thread in this discussion. We’re talking about “the Internet” and we’re talking about “fragmentation”, so questioning those terms also comes into play; it’s important. Ultimately, I don’t think it’s about the Internet in itself; it is about a digital society, right? As Izumi was saying, I do think that we need to think about what we want as a society first. And as Sheetal was saying: what do people want the Internet to feel like, or their lives on the Internet to be? I feel we actually don’t have the answer very clearly for that yet, and that’s why we’re struggling. Because if we did know, we would not just be creating big enemies out of entities that are not entirely bad, certainly not entirely good, but not entirely bad, because they offer services that are useful. And at the same time, I think we should then focus on what the harms are and what the policy objectives are that we are trying to achieve. If the issue is with walled gardens, there are competition tools that we can use to address that. If the issue is around competition, we can look into portability and interoperability as tools that have been used in different markets, in telecoms for example, with the right of a user to actually move from one platform to the other; that’s not possible right now. If we identify as a society that that’s an issue, we need to try to find a remedy,
regulatory and legal, to address it. The thing with the Internet is that it’s not one government that can do it by itself, and so the question comes back to the point that Izumi made and the point that Bertrand made: it’s about governance, it’s about how we cooperate in the international setting. So for me, instead of using 75 percent, or 18 workshops, of the IGF to talk about AI, mostly at a very high level, I would rather that we use that time to talk about how we cooperate and how we govern this complex adaptive system, which is really difficult to address, and how we actually identify what our objectives are, economic and social, and what the best remedies are to get there.

Emily Taylor:
Thank you very much, and that tees up the final part of our conversation today, which is: how do we get there? But Olaf, unfortunately you’re the last to go, and there are several questions that have not been addressed by the panel so far: Web3, money, incentives, the global south, geopolitical aspects, digital colonization, counter-narratives, and capacities and legislative frameworks. We’ve talked a bit about that. Or anything you like.

Olaf Kolkman:
Yeah, my first response would be “stack overflow”. I had indeed made a bunch of notes, and I actually don’t know where to start. But let me start with this: not having infrastructure at all is the ultimate fragmentation. And I so associate with Jarell: empower communities. What we do at the Internet Society is empower communities, by building IXPs, by giving cookbooks for building community networks. And that’s not the only way; I know that Jarell is working on his own stuff and is very smart about it. But that’s really empowering, and in the end that’s bottom-up.
We talked a lot about economics and government rules. When you talk about economics and standardization, I think we have to be honest: standardization is also, to a large extent, industry politics. The standardization body you choose to do your work in has to do with industry politics, and we need to put that on the table and understand it. On economics: consolidation happens. Even if you have open technologies, companies will try to extract money out of employing that open technology. Consolidation happens, and with that you have accumulation of power, to a point where we may say this is too much, or at least governments might say this is too much.
What I was also thinking: when we talked about standards, you don’t need standards for every innovation. The name Satoshi Nakamoto came up, and I hope you all know who that was. Yes, the blockchain inventor. That’s permissionless innovation that has changed the world, for the good and for the bad; I have an opinion about that. But this is somebody who wrote a paper and published it online for everybody to read. That’s open innovation, and it happens every day. We’re in this room, but on the internet people are sharing code fragments and open building blocks all the time. Innovation happens today, and it doesn’t happen only in standards organizations. It happens by individuals in chats, it happens in companies, it happens everywhere.
If you ask me where I want to go with this internet in the future, from a technical perspective, then I would say: open, open and open. Open architecture, so people can build against pieces made by others; open source, so people can reuse those building blocks; and open standards, with a lot of transparency around all of that. I think that the building blocks of a positive future are basically open. And with that, I’ve drained my stack.

Emily Taylor:
Brilliant, thank you very much. We’ve got a little bit of time left, and normally by the time we unpack all of the problems we’ve run out of time. I’ve got a question from the audience, and then I’m going to introduce our latest speaker, Raul. I’m not going to make him summarize the conversation so far. What I’m going to do is start with you, Raul, to think about the how: not about the what, not about all the problems we’ve rehearsed, not about the different visions, but how do we get to a better place, to the 20 years’ time where we’ve sussed it all out? What do we need to do? How do we get there? So first of all, thank you very much for your patience, and thank you for your question.

Audience:
Hi, I’m an Internet Society youth ambassador. I am probably too young to be conservative, but I have my doubts on how much the internet can fundamentally change. My fear is that with the pace of innovation and this bunch of emerging technologies coming in, we’ll have applications built on the base of the internet, and it will get really complex, to the extent that we won’t have enough expertise, understanding and knowledge to make sure that all these different parts work together. One of the panelists mentioned complex adaptive systems; I’m not sure how we are going to adapt to this. So 20 years from now, I think we will have fragmentation, not by design but by default, because we just don’t know how things work. Standardization has been mentioned quite a lot, and that is possibly one of the solutions, but standardization is also an extremely slow process, and with this pace of innovation I am not sure standardization can keep up. Also, the interests that go into standardization, as Olaf was mentioning, are largely industry-driven. So I want to press the panel and the audience to think about how we can include more stakeholders, especially the users, in the standardization process and make it accessible to them. Thank you.

Emily Taylor:
Thank you very much. So that’s a good “how” question as well: how do we make our processes more inclusive? But let’s run through the panel, starting with you, Raul. Thank you very much for joining us, and welcome to the panel, Raul Echeberria. How do we get there? What do we need to do to get to that brighter future that we’ve articulated? Thank you very much.

Raul Echeberria:
I’m sorry for the delay in joining. I feel like an impostor here, because I was in another session and I ran to get here. Unfortunately, I don’t have the answer; that’s the first point. If I had that answer, I would probably get a good position, a good job in one of the companies or governments or institutions here. So we can just reflect and think about it. As you described in the possible scenarios, I think we already have a certain level of fragmentation in the internet, and we have to live with that. So I don’t think there is a feasible scenario where we have the ideal, perfect internet that all of us want to have. The theory of the rational human, that people act according to rationality and do what is best for everybody, is something that died in the 80s. The incentives of policymakers are diverse. Many times, even knowing that the decisions being taken are not the best ones for an objective like keeping the internet unfragmented, the decisions will go in a different direction. So our mission is to keep the internet as little fragmented as possible. I could say two things. One is that we need to have gradual objectives and commitments: instead of going for the whole package, let’s try to get agreements on simple things and gradually improve on that. The other is that we really have to make our messages much simpler. I heard Olaf in a session earlier this morning, and he was excellent, as is usual for him, but as I was listening to him I thought: if I have to explain this to policymakers in Latin America, I’ll have to bring Olaf with me. This is very complex. We need to make our messages much simpler about what governments should not do and what
the policies should not produce, and so give them more tangible tools for taking better decisions.

Emily Taylor:
Thank you very much. I was thinking, do I include Mark now, or do I go back to the panel? Mark, do you have a quick point, or can we continue? Let’s hear two more from the panel and then come back to you. So, Lorrayne and then Ambassador Verdier. Okay, so what do we do? Let’s be action-orientated, really focused on that, even if it’s just one thing. So from Raul we had the incremental approach: try to do small things well.

Lorrayne Porciuncula:
You know, maybe we can play that game where we just add on to what the others said. So: incremental. Being clear on the objectives is the first thing; being incremental is another; and being iterative is important as well. I think a lot of the issue with the processes being built to address the challenges we have, and there are many, is that we are under the impression that it suffices for us to design and develop the ultimate regulation, and that that’s going to solve all of our problems. There’s a whole lot of questions that go unanswered once you do that, because the fact that we are in a complex adaptive system means that it’s very hard for us to predict it. The system moves in such a way that just one element in it can have real implications for the whole, in ways that are very hard to predict. So what we do with those systems is observe and try to adapt. In order to do that, we need processes and institutions that are much more agile than the ones we have now. Instead of looking to linear approaches to developing regulation, we need to think of it almost like software development, where we have versioning of policies and regulation, where we’re able to actually identify a bug and then correct it through inclusive multi-stakeholder consultations. And the problem, an issue that I was trying to unpack with Bertrand the other day, is in how we think about multi-stakeholderism: that it’s trying to take away governments’ decision-making power so that everyone is deciding in a sort of a global happy assembly, right? And that’s not what multi-stakeholderism is supposed to be. And certainly it’s not what is sometimes, and often, applied by governments, which is: oh, I’ll give you 30 days to participate in this consultation, and I’ve done a multi-stakeholder process. That’s not it either.
It’s actually being intentional about including people and different stakeholders, inclusive and iterative, in a way that, not only at the national level but at the international level, includes the global south, youth, and different communities that are underrepresented as well. And it’s being intentional about a process that is not simply aiming to produce the ultimate legal text to rule them all, but is actually a process where we’re trying to learn, where we acknowledge our inability to predict the future accurately, and where we are adapting along the way and trying to get better at it. We do not yet have those processes and institutions. One of the things that we are looking at building is sandboxes, as a possible avenue for testing out those issues that we don’t know about. And a lot more needs to be shared on how they work best and how they don’t work. We almost need a sandbox of sandboxes itself, but I won’t get into that for now.

Emily Taylor:
Sort of a playful approach, an iterative approach, building on where we started with Raul about incremental approaches, simpler messaging and this sort of sense of agility. Ambassador Verdier.

Henri Verdier:
So I don’t know what we have to do, but maybe we could agree on a compass. I was surprised when you mentioned the nostalgia for the golden age, because usually I’m not a nostalgic kind of guy. But I think that we should start from at least three aspects of this golden age. The first one is the unprecedented openness of access to information, knowledge and culture. This was a big shift, and it is not finished; we still have the digital divide, and half of humanity doesn’t have access to this. The second one was the unprecedented empowerment of communities and people. And the last one was permissionless innovation. From my perspective, this could be a kind of compass for future decisions: are we increasing people’s autonomy and creative capacity? And that’s why I mentioned some concern, for example, with the private sector sometimes, because if you think in terms of autonomy, empowerment and creativity, the threat can come from everywhere, not just from rogue states.

Emily Taylor:
Thank you. I like the idea of a compass; that can be very good for organising. I want to come to Mark quickly, and then, I was going to come to you in a bit, or are you reacting? Yeah, go ahead.

Olaf Kolkman:
If I may, very quickly: what I wrote down is actually what Henri said, principle-based, as the add-on. And I like the principles you pose. The note I made was “principle-based”, and then there’s a differentiation, I think, between the regulator’s approach to the evolution of the Internet and to evolution on the Internet. I think that we can get to those shared principles much more easily if we talk about the evolution of the Internet, whereas when we talk about empowerment, individualism and autonomy on the Internet, getting a global consensus about that might be where the trouble is going to be in setting joint principles as a guide.

Emily Taylor:
Thank you very much. I’m going to pause in our final round-up to just take these two questions from the audience. We’ve got Mark and then Lucien, and we are in the last five minutes, so thank you very much for that reminder.

Audience:
Thank you very much. Mark Datysgeld from Brazil; I’m an Internet governance consultant for small media organizations. I found the comments by Lorrayne and Olaf to be quite stimulating, in the sense of something that I have been talking a lot about, which is that the AI forum, as it has become now, hasn’t been focusing much on one of the questions that I find most key about it: open source versus closed source. Why are we in the AI space we are in right now? Because there were very early open source developments that just let the technology loose. A lot of the debate goes around Midjourney and these proprietary technologies, but very few people are looking at things like Stable Diffusion, which are basically being iterated upon on the basis of papers, right? Like I said, it’s paper after paper, and that gets incorporated into the technology, and that’s how it’s expanding. And then the private companies need to port that back into their proprietary code. So this reality seems very feasible, because we’re watching it happen right now: open source and papers are actually starting to drive things. There’s no AI consortium or forum or anything; it’s literally being driven by research that’s published in an open space. So this is something that we should be looking towards, not as “hey, this is about AI”, but rather: is this the new paradigm for how different protocols, standards and approaches will be developed? Just to complement that point, thank you very much.

Emily Taylor:
And a very, very valuable point about the role of research in acting as that sort of snowball effect.

Audience:
It’s Lucian Taylor from the DNS Research Federation, and I think my point follows on from Mark’s about protocols. I’ve been an internet engineer for over 20 years with my team, and I’ve just had the privilege of meeting Vint Cerf with Emily and talking to him about protocols, an absolutely wonderful thing. I think there’s a gap between the IETF, which is frankly not a safe space for women and people to develop standards, and how you develop protocols. Vint Cerf got together with a few other universities and they developed a way of doing things, and that was TCP/IP, and they then invented the internet. And then we bake it in through standards bodies like the IETF. I think standards are a very good place to test these new ideas, and we are at an inflection point. We’ve got regulation hammering down on us, and that regulation needs to be tested. Things like know-your-customer, putting that into a free and open internet, is really challenging. And my question to the panel is: is the IETF the place to develop, in a free and open way, those protocols that we need next?

Emily Taylor:
Thank you. So we’ve got probably less than three minutes left, and three panelists and questions.

Audience:
Okay, I’ll quickly say that I think, Vittoria, you posed some philosophical questions earlier, and I’m not going to judge what people do in terms of, you know, the question of what young people do, and selfies, and all of that. But what I am going to judge, and I think what we should all judge, is what governments do when they regulate. Do they align regulation with human rights norms and standards? Do companies, who also have obligations to do so, do that? Are they transparent? Are they accountable? And are our governance forums and standards bodies inclusive? No, clearly not. But there are recommendations for how to change that. The Office of the High Commissioner for Human Rights, OHCHR, has released a report with many recommendations on how to improve inclusiveness. And I can say that we’re doing a session on Thursday where we’ll be exploring that report. So I think protecting critical properties as they evolve, having that principles-based approach, building on the human rights framework we already have, and creating more inclusive spaces is really key.

Emily Taylor:
Thank you. Raul, and then Izumi. I endorse everything she said. Good. But beside that, another point.

Raul Echeverria:
A few weeks ago, I participated in a global conference of parliamentarians speaking about the future. But the average age was over 50, and so all the discussion was about fears: fears about the future and fears about AI. We have to be very careful that policies are not developed based on fear. Of course it’s normal that they have fears. I have fears. I’m terrified about the future. I’m scared. But don’t let my fears stop the evolution. This is why we have to involve youth in the discussion. And probably if we bring in people who are 18 years old, they may not have the deep expertise in internet architecture and other things, but they can say what kind of internet they want. And that would help very much.

Emily Taylor:
Thank you very much. Give the microphone to Izumi. Our last thoughts.

Izumi Aizu:
I saw neither China nor India in the main session yesterday, while they are talking about AI. To me, it’s fragmented. The IGF wasn’t like that 18 years ago. We had tensions, we had fears, we had battles. Now we are peaceful and boring. Go out to the chaos, or make the chaos, please. Fear? Fine. We don’t know the future. Be bold. And to the IPers and IP fundamentalists, I would say: go outside the box. Go to the World Governance Forum. Go to the climate change discussions and talk with them. Learn from them. Eat their food. Don’t give them the food. Otherwise, all these complex things would happen in 20 years, and the internet wouldn’t work for the youth. Thank you. Thank you very much.

Emily Taylor:
So that brings our session to an end. Thank you very much for all the interaction, and to our brilliant panel. Thank you.

Speaker statistics

Audience: speech speed 175 words per minute; speech length 3836 words; speech time 1317 secs

Emily Taylor: speech speed 160 words per minute; speech length 3229 words; speech time 1209 secs

Henri Verdier: speech speed 152 words per minute; speech length 1386 words; speech time 549 secs

Izumi Aizu: speech speed 156 words per minute; speech length 1132 words; speech time 436 secs

Lorrayne Porciuncula: speech speed 173 words per minute; speech length 2092 words; speech time 727 secs

Olaf Kolkman: speech speed 136 words per minute; speech length 1041 words; speech time 460 secs

Raul Echeverria: speech speed 146 words per minute; speech length 586 words; speech time 241 secs

Sheetal Kumar: speech speed 174 words per minute; speech length 1269 words; speech time 437 secs

Technology and Human Rights Due Diligence at the UN | IGF 2023 Open Forum #163

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Nicholas Oakeschott

The United Nations High Commissioner for Refugees (UNHCR) has developed a comprehensive digital transformation strategy that will be implemented from 2022 to 2026. This strategy aims to leverage technology and innovation to improve efficiency and effectiveness in providing support and assistance to forcibly displaced and stateless individuals. Additionally, the UNHCR is in the process of developing a formal policy framework on human rights due diligence. This framework will ensure that the organisation’s use of digital technology is aligned with international human rights and ethical standards.

In managing the risks associated with digital technology, the UNHCR has a wide range of policies and guidance in place. These policies cover areas such as privacy, data protection, and procurement. By implementing these policies, the organisation aims to mitigate any potential negative impacts and ensure the responsible use of digital technology.

The UNHCR is actively exploring the implementation of guidance on human rights due diligence. This includes considering the use of artificial intelligence (AI) and developing a user-friendly digital tool for implementation. The organisation takes a hands-on approach in reviewing existing policies and guidance on human rights due diligence to ensure the practical implementation of these measures.

The engagement with the UN Human Rights team has been crucial for the UNHCR in foreseeing and addressing potential challenges. The guidance on human rights due diligence has been increasingly seen as implementable at the field level, indicating its effectiveness in practice.

The protection of forcibly displaced and stateless people is integral to the mission of the UNHCR. They prioritise the provision of support, assistance, and protection for these vulnerable populations. This commitment is underpinned by the organisation’s concern for reducing inequalities and fostering peace, justice, and strong institutions, which are reflected in the relevant Sustainable Development Goals.

The UNHCR recognises the importance of engaging with states and the private sector based on the guidance they provide. By applying the guidance, the organisation believes it can establish a stronger basis for increased engagements and partnerships in pursuing its mission.

The UNHCR’s digital transformation strategy is a significant step forward in serving people digitally. Prioritising digital protection, digital inclusion, and providing more digital services are key goals of this strategy. It demonstrates the organisation’s commitment to embracing innovative approaches to enhance service delivery and access to information.

Engaging civil society stakeholders in the implementation of human rights due diligence is seen as a major opportunity and challenge for the UNHCR. The organisation values the input and collaboration of civil society in ensuring the responsible and ethical use of digital technology. Ongoing dialogue and discussions with civil society are recognised as necessary for continuous learning and improvement.

While the UNHCR acknowledges the importance of independent assessments and auditing, they are still evaluating the capability of independent entities to undertake audits beyond the existing system. Currently, the organisation has an independent auditing function that reviews the work of UN agencies. Expert suppliers also assist with data protection impact assessments at the global and field level.

In conclusion, the UNHCR is committed to the responsible use of digital technology and the preservation of human rights in its operations. The development of a digital transformation strategy, policy framework on human rights due diligence, and active engagement with stakeholders reflect the organisation’s dedication to continuous improvement and the pursuit of its mission. By leveraging digital innovation and technology, the UNHCR aims to provide better support and protection for forcibly displaced and stateless individuals while upholding international human rights and ethical standards.

Quintin Chou-Lambert

The United Nations (UN) recognises the importance of utilising digital technology across various aspects, such as peace and security, sustainable development, and human rights. This commitment extends to the UN’s internal operations, encompassing recruitment, procurement, and IT services. The UN aims to ‘walk the talk’ when it comes to its own use of digital technology.

A rights-based approach is considered paramount in the UN’s utilisation of digital technology. This is emphasised in the UN’s Roadmap for Digital Cooperation, which was initiated in 2020. The roadmap places great significance on incorporating a framework that upholds human rights in the UN’s digital endeavours. It serves as a guide and reference point in decision-making processes related to digital technology, aiding individuals in making informed judgments on a day-to-day basis.

Non-binding guidance plays a vital role in promoting horizontal alignment among UN agencies. This guidance assists in addressing the challenges of harmonisation and facilitating effective translation into local contexts. It encourages each entity within the UN to integrate these principles into their own local procedures. By doing so, entities can ensure optimal outcomes, particularly in areas such as the tendering process. Non-binding guidance acts as a beacon, guiding UN entities in hard-coding these principles into their operational procedures.

Each entity within the UN possesses distinct governing bodies and housekeeping rules. The Secretariat follows housekeeping rules prescribed by the General Assembly, while other agencies, such as UNHCR, enjoy more flexibility in certain cases. This diversity highlights the unique nature of each entity’s governance structure within the UN.

The COVID-19 pandemic served as a test for the UN’s application of the human rights due diligence framework. This framework assists in decision-making processes and helped the Secretariat make appropriate judgments during the pandemic. An example of this is when the UN considered the introduction of a contact tracing system. The human rights due diligence framework offered a guiding principle in ensuring that human rights were upheld throughout the decision-making process.

In conclusion, the UN is committed to employing digital technology in various areas, both externally and internally. A rights-based approach is fundamental to the UN’s use of digital technology, as highlighted in the Roadmap for Digital Cooperation. Non-binding guidance aids in maintaining alignment among UN agencies, while entities are encouraged to incorporate principles into their operational procedures to achieve optimal outcomes. Each entity within the UN has its own governing bodies and housekeeping rules. The human rights due diligence framework serves as a guide in decision-making processes, ensuring human rights are upheld.

Scott Campbell

The process of developing guidance on human rights due diligence for digital technology has been lengthy but valuable. It has involved multiple rounds of bilateral consultations, open forums, and heavy consultation with UN entities as well as external partners. Scott Campbell, in emphasising the importance of policy alignment and coherence, notes that the UN has considered these factors at the forefront of its thought process regarding digital technology and human rights due diligence.

The process has not only facilitated the development of guidance on human rights due diligence for digital technology but has also helped in mainstreaming these efforts across the UN. It has fostered a better understanding and alignment across the system, which is crucial for ensuring effective implementation.

The UN is now nearing the completion of this process. Despite its lengthy duration, the value it has generated cannot be overstated. The involvement of various stakeholders has ensured a comprehensive and inclusive approach to developing this guidance.

In addition to the development of guidance for digital technology, there has been a parallel process to study the implications of expanding the scope of the current human rights due diligence policy. The guidance on digital technology intersects with this broader expansion, and there has been a broad agreement that it should be grounded in the parameters for this expansion. Recently, at an Executive Committee meeting, the parameters for the human rights due diligence framework policy were agreed upon.

Overall, the process of developing guidance on human rights due diligence for digital technology has been challenging but rewarding. It has facilitated the mainstreaming of these efforts across the UN and cultivated a better understanding and alignment within the system. The emphasis on policy alignment and coherence highlights the UN’s commitment to ensuring that digital technology is used responsibly and in a manner that protects and upholds human rights.

Audience

The analysis explores various concerns and issues related to the decision-making processes within the United Nations (UN) system. One of the key concerns raised is the lack of transparency in the procurement of technologies within the UN system. It argues for the consideration of the influences and priorities of specific actors who fund the UN system. This suggests that external factors may influence the decision-making process, potentially not aligning with the UN’s best interests.

Additionally, the analysis acknowledges the current lack of clarity within the internal mechanism of the UN system. This lack of transparency and clarity can impede effective decision-making and hinder the efficiency and effectiveness of the UN system.

Furthermore, the analysis questions the selection of technologies within the UN system. It suggests that the selection process should consider more than just efficiency. An example is given, citing the use of biometric technology by the United Nations High Commissioner for Refugees (UNHCR). While biometric technology has shown some efficiency gains in reducing fraud, the potential risks to the populations exposed to these technologies outweigh the minimal benefits. This highlights the importance of prioritising not only efficiency but also the potential risks associated with implementing certain technologies.

Another topic discussed in the analysis is the independence of assessments within UN agencies. Concerns are raised about who should conduct these assessments and ensuring their independence. Ana Cristina Ruelas from UNESCO specifically questions the independence of assessments and how to navigate diverse member states’ views when conducting evaluations. This raises considerations about maintaining unbiased assessments and managing potentially conflicting perspectives within UN agencies.

Furthermore, an anonymous participant questions the absence of a due diligence process in the current development of the UNESCO guidelines, highlighting a potential gap that could lead to oversights and negative impacts.

To conclude, the analysis highlights several key concerns within the decision-making processes of the UN system: the need for transparency and consideration of external influences, the importance of weighing potential risks when selecting technologies, ensuring the independence of assessments within UN agencies, and incorporating a due diligence process into guidelines such as those being developed by UNESCO. Addressing these concerns would contribute to more effective and accountable decision-making across the UN system.

Peggy Hicks

The United Nations (UN) is nearing completion of a guidance document on human rights due diligence, a milestone in its efforts to protect human rights. The directive aims to standardize and harmonize approaches across the entire UN system, emphasizing the commitment and dedication of the UN and its partners to safeguarding human rights.

Partners within the UN have already started applying human rights due diligence as they introduce new technologies. This proactive approach ensures careful assessment and mitigation of potential human rights impacts. The directive seeks to harmonize these approaches and ensure consistency across different sectors of the UN system, preventing any gaps or inconsistencies in human rights protection.

The UN recognizes the importance of aligning actions across various sectors, viewing the directive as necessary to facilitate collaboration and achieve common goals. By implementing this directive, the UN demonstrates its commitment to SDG 16: peace, justice, and strong institutions.

Despite the complexity of some issues, UN agencies are genuinely committed to implementing the human rights due diligence framework. This dedication reflects the UN’s determination to protect human rights and uphold its institutional values.

However, the UN faces challenges in establishing public-private partnerships and engaging with corporations. Adequate funding is essential for the UN’s functioning and the protection of human rights and refugees. Insufficient funding impedes the UN’s work, making partnerships with the private sector crucial in bridging this gap.

Transparency, assessments, and enforcement are crucial aspects that the UN interagency working group will consider. These elements ensure accountability, identify areas for improvement, and enforce the human rights due diligence framework.

The UN’s digital transformation is heavily reliant on funding and partnerships. Many UN agencies recognize the lack of funding available for digital transformation initiatives, except within the context of important partnerships. The UN acknowledges the importance of advancing its digital capabilities to improve efficiency and effectiveness in addressing human rights issues.

In conclusion, the UN’s progress in completing the human rights due diligence guidance document reflects its commitment to promoting and protecting human rights. The directive aims to standardize approaches and ensure consistency within the UN system. Addressing challenges related to public-private partnerships, funding, and digital transformation is necessary to support the UN’s work effectively. Transparency, assessments, and enforcement are critical components that further strengthen the UN’s commitment to human rights.

Marwa Fatafta

The analysis presents several significant points related to human rights evaluation, transparency in decision-making, enforcement of human rights due diligence tools, technology implementations, public-private partnerships, and the need for evidence-based solutions.

One key argument highlighted is the necessity of incorporating independent assessment in human rights evaluation. The analysis argues that internal assessments lack oversight and accountability, and having an independent third party would mitigate bias and ensure proper scrutiny. This approach is perceived as essential in ensuring fair and accurate evaluations.

Transparency in practice and decision-making is emphasized as another crucial aspect. The analysis suggests that transparency in decisions allows affected communities to evaluate the decisions and assess how well they serve their needs and protect their rights. By providing transparency, decision-makers can be held accountable for their actions, leading to better outcomes.

Furthermore, the analysis advocates for the effective enforcement of human rights due diligence tools. It is argued that the tools themselves are only as good as their enforcement. If not implemented properly, sensitive personal data can be exposed to risks. Therefore, strong enforcement mechanisms are necessary to protect individuals’ rights and ensure the effective functioning of these tools.

The potential fallout of not conducting due diligence on technology implementations is also cautioned against. It is highlighted that when technology is used without due diligence, it can result in exposing sensitive personal data and create challenges in safeguarding the collected data. Therefore, it is crucial for organizations to thoroughly assess and evaluate the risks associated with technology implementation before deploying it.

The analysis also underscores the importance of transparency in public-private partnerships, particularly in regards to the selection of specific companies for partnership with United Nations (UN) agencies. It notes a lack of available information on why certain companies are chosen, advocating for greater transparency in these partnerships to ensure fairness and accountability.

Additionally, the need for evidence-based solutions is addressed. The analysis suggests that technologies are sometimes deployed or used without proper evidence, which can have negative consequences. It cautions against relying on “snake oil” solutions that potentially harm human rights. Instead, the focus should be on implementing solutions that have been thoroughly researched and proven effective.

Notably, the analysis raises the significance of scrutinizing certain technologies, such as artificial intelligence (AI). It highlights the importance of examining the narratives behind these technologies to ensure they align with ethical principles and human rights standards.

Moreover, the analysis supports the initiative by the Office of the High Commissioner for Human Rights (OHCHR) to incorporate human rights due diligence in UN bodies. This step is regarded as important and much-needed in the effort to protect and uphold human rights globally.

Lastly, the analysis acknowledges the role of civil society in building bridges and reaching out to stakeholders. Access Now, for instance, is mentioned as an organization willing to connect and provide consultation when needed. This highlights the potential for civil society to contribute to promoting human rights and fostering collaboration among different stakeholders.

In conclusion, the analysis sheds light on various aspects related to human rights evaluation, transparency in decision-making, enforcement of human rights due diligence tools, technology implementations, public-private partnerships, and evidence-based solutions. It emphasizes the importance of independent assessment, transparency, and effective enforcement in safeguarding human rights. The analysis also advocates for responsible technology implementation, transparency in public-private partnerships, and the need for evidence-based solutions. The support for the OHCHR’s initiative and the role of civil society in building bridges further strengthen the call for greater human rights protection and collaboration among stakeholders.

Catie Shavin

The guidance for human rights due diligence has undergone significant enhancements as a result of thorough consultations with both UN and external stakeholders. Four drafts of the guidance document have been circulated, and valuable feedback has been gathered during this process. The consultation process involved engagement with various actors, both within and outside the UN system. This inclusive approach ensured that a wide range of perspectives were considered, resulting in a more robust and comprehensive guidance.

One key aspect that emerged from the feedback received was the need for gender and intersectionality sensitivity in human rights due diligence. Entities highlighted the importance of considering the diverse impacts on girls, women, and gender non-conforming individuals. As a response, the new approach to the guidance explicitly incorporates this inclusion, addressing the concerns raised during the consultation process. By incorporating gender and intersectionality sensitivity, the aim is to ensure that the guidance is applicable and effective in promoting equality and reducing inequalities.

Furthermore, the consultation process resulted in insightful learnings that will support UN entities in effectively implementing the guidance. Entities with significant experience in human rights due diligence have been identified, and their insights have been gathered to understand the best practices and challenges associated with the implementation. Additionally, the consultation process helped identify the language and approaches that resonate with colleagues across the UN. These valuable learnings will aid in supporting UN entities and enhancing their capacity to implement the guidance.

The consultation process also shed light on the areas of alignment and divergence between UN entities and business enterprises when it comes to implementing human rights due diligence for digital technology use. The process initiated conversations regarding the adaptation of approaches such as the UN Guiding Principles for Business and Human Rights. By identifying areas of alignment and divergence, the consultations have contributed to a better understanding of the challenges and opportunities in implementing human rights due diligence in the digital technology sector.

In conclusion, the guidance for human rights due diligence has significantly evolved through extensive consultations with UN and external stakeholders. The process has led to the inclusion of gender and intersectionality sensitivity, generating valuable insights that will aid in supporting UN entities in effectively implementing the guidance. Moreover, the consultation process has provided a clearer understanding of the alignment and divergence between UN entities and business enterprises in implementing human rights due diligence for digital technology use. These findings contribute to a more comprehensive and adaptable framework for promoting human rights and ensuring accountability in various contexts.

David Satola

The World Bank’s operational work differs from other organizations in that it provides member states with financing for projects, known as recipient-executed activities, instead of directly conducting the work themselves. This approach allows for a distribution of resources and responsibilities, as member states are responsible for implementing and managing the projects. It promotes economic growth and development within member states by leveraging the World Bank’s financial support.

However, the World Bank’s approach faces challenges due to a lack of clarity and synthesis of rules for member states to apply. The nature of the World Bank’s business model presents member states with difficulties in understanding and navigating the set of rules required for project implementation. The absence of a unified framework can result in confusion and discrepancies in the application of rules across member states. To address this issue, an improved clarity and synthesis of rules are needed to provide a more uniform approach to project implementation.

In the realm of human rights due diligence, the use of a principles-based approach is seen as commendable and beneficial. This approach allows for more flexibility in interpreting and applying human rights standards. By considering the evolving nature of standards and rules, the principles-based approach seeks to overcome the limitations of a rigid ‘one-size-fits-all’ approach. It recognizes that different countries have varying levels of development and economic maturity, making it challenging to impose the same model on all member states. Adopting a principles-based approach enables the World Bank to acknowledge and address these differences, promoting a more inclusive and adaptable framework for human rights due diligence.

While accountability is considered crucial within the World Bank, there is currently no specific mechanism in place to address human rights issues. However, the World Bank possesses multiple tools, such as the grievance redress mechanism, fraud and corruption guidelines, the Inspection Panel, and the Independent Evaluation Department. These tools have the potential to be adapted and expanded to include human rights considerations. Incorporating human rights issues into these existing mechanisms enhances the World Bank’s accountability measures and ensures that human rights violations or concerns are properly addressed.

The World Bank has also taken measures to protect personal data within its projects, particularly in the context of the COVID-19 pandemic. Recognizing the significance of data protection, the World Bank has worked with borrowers to embed data protection frameworks in sovereign-to-sovereign agreements. This incorporation of data protection into projects serves as a powerful legal tool to safeguard personal data in countries without existing data protection laws.

In conclusion, the World Bank’s operational work provides member states with financing for projects, promoting economic growth and development. However, challenges arise from a lack of clarity and synthesis of rules for member states to apply. To address these challenges, a principles-based approach is lauded for its flexibility and adaptability. While accountability mechanisms within the World Bank currently lack specificity regarding human rights, existing tools can be tailored to include human rights considerations. Additionally, the World Bank has implemented measures to protect personal data in projects with countries lacking data protection laws. It is suggested that the World Bank enforce specific mechanisms for human rights as part of its accountability measures, enhancing its commitment to promoting peace, justice, and strong institutions.

Session transcript

Peggy Hicks:
Hello, everyone. Welcome. Very glad that you’ve decided to join us competing against receptions and other responsibilities for this session on technology and human rights due diligence at the UN, from guidance to practice. My name is Peggy Hicks. I’m the director of the thematic division at the Office of the High Commissioner for Human Rights in Geneva. And I’d like to very much welcome all of you and thank our co-sponsors, the Office of the Tech Envoy and the European Union, for their help in this session. We thought it’d be good to reconvene around these issues. Many of you have been involved in this process for some time. We are working on the human rights due diligence guidance for the UN, and we’re nearing the finish line is the phrase that I’ve been told to use. Scott Campbell from our office will tell us more about what that means in practice. But I do want to emphasize that while we’ve been working on this document, it hasn’t stopped our partners within the UN from applying human rights due diligence on an ongoing basis as they have rolled out and used new technologies. And through that process, they’re also sort of seeing some of the challenges they face in implementing the need to harmonize approaches across the UN system. So it’s sort of reinforced our desire to move forward on this process. And we’ll talk about that more as we go forward. Before we do that, I’d like to give the floor to Quentin Lambert from the Tech Envoy’s office for some opening words.

Quintin Chou-Lambert:
Thank you very much, Peggy. Yes, and thank you all for being here. My job is very simple and short, just to give a couple of welcoming remarks and to frame this discussion. So this human rights due diligence for digital technology is really about the United Nations walking the talk when it comes to its own use of digital technology. Grounded in the roadmap for digital cooperation back in 2020, this was already on the agenda for the UN as technology was coming into the UN’s work. And since then, the UN’s been grappling with this issue internally like many other organizations. As you can imagine, the different areas of the UN’s work across the peace and security pillar, sustainable development, and kind of human rights work itself, but also in its internal operations. So the use of digital technology in things like UN recruitment or recruiting UN personnel, UN procurement, IT services, and that kind of thing. So obviously there are challenges which all organizations are grappling with, and this is a really good opportunity to take a rights-based approach and walk the talk when it comes to digital technology in the UN. So back over to you, Peggy.

Peggy Hicks:
Great, thanks very much. We're very glad to partner with the office on this important area of work. As I said, I'm now going to turn the floor to Scott Campbell, our senior human rights officer with UN Human Rights who leads on this process, and he'll give us an update on where the process currently stands. Over to you, Scott.

Scott Campbell:
Thank you very much, Peggy, and just a quick sound check to make sure you're hearing me okay in the room. All great. Fantastic. Very pleased that we're here today and, as Quintin very rightly put it, seeing us all moving forward at the United Nations on walking the talk, and also pleased, as Peggy mentioned, to be nearing the finish line on this process. The drafting and consultation process for the guidance has been quite lengthy. It's involved multiple rounds of bilateral consultations, open forums like this one and other public events, and we've consulted heavily with UN entities as well as with external partners, including member states, tech companies, and diverse members among our civil society partners. Internally, while the process has been lengthy, I should underscore it's been very useful in giving us an opportunity to engage on human rights with a large number of entities across the full UN family. Some of these entities are very familiar with human rights mainstreaming and human rights due diligence, and we'll hear from a couple of them today; other entities are far less familiar with human rights. So the process has really reinforced a broader mainstreaming of human rights due diligence efforts across the UN, and has assisted us in building more understanding and aligning approaches across the system. Externally, the mandate given to us by the Secretary General to develop the guidance called specifically for consultations with external partners, and in particular those most affected by digital tech, and I think this has really added a lot of value to where we've landed in terms of the content of the guidance.
And I want to give a shout out to Access Now for having facilitated a number of public events and consultations with civil society partners. Just quickly on the timing: a fourth draft of the guidance was circulated back in July to the Secretary General's Call to Action Interagency Working Group, which is a UN body. Comments were received, we did some consultations in August and September, and we are, as mentioned, nearing the finish line. I just want to mention one note before handing it back over. As the process has evolved, alignment and policy coherence across the UN system has really been forefront in our thinking. This guidance on digital tech intersects with another parallel and very much related process, which is a study to examine the implications of expanding the scope of the current human rights due diligence policy of the United Nations. As many of you may be aware, this is a policy that has been in effect since 2011, but has a narrow focus on UN support to non-UN security forces. That study on expanding the existing policy, which was also mandated by the Secretary General's Executive Committee, was begun before we began our work on this non-binding guidance for our use of digital tech at the UN. And in discussion with many actors throughout the process, there was broad agreement that in drafting the human rights due diligence guidance, we first needed to be grounded in the parameters for the broader expansion of the existing human rights due diligence policy, which is a binding policy. That groundwork, that foundation, first needed to be set, and the non-binding guidance we would develop should of course align with that broader policy.
So we were very pleased to see, back in June, Executive Committee agreement on the parameters of the human rights due diligence framework policy, and agreement on the next steps: to draft that policy, to develop an implementation plan and to seek resources. With that foundation set, we now have the space to move forward on finalizing the draft guidance for tech, ironing out any remaining details and preparing for consideration of the guidance by the Secretary General's Executive Committee; we're now trying to get on the calendar for that committee's meeting. Following consideration by the Executive Committee and, we trust, with their endorsement, the Secretary General may decide to share the guidance with the Chief Executives Board for their consideration and potential use across the full UN system. So I'll leave it at that on the process and hand it back over to you, Peggy. Thank you.

Peggy Hicks:
Great. Scott will stay online for interpretation of all of that, which some who are maybe not as deep in the UN system may need at some point. But before we move to that, I'd like to beg your indulgence for one more member of our team, who will give us a substantive update on the issues that have arisen through the consultations, where we stand, and the guidance itself. So Catie Shavin, who's our Senior Project Advisor on the project, will come in now. Over to you, Catie.

Catie Shavin:
Thanks very much, Peggy and everyone. It’s wonderful to be here with you. As Peggy mentioned, I’m a Business and Human Rights Specialist and I’ve been working with Scott and his team to support the development of this guidance. I thought I would very briefly just offer a description of how the guidance has evolved through these rounds of consultation and share a few lessons and insights that we’ve gained throughout the process. Sorry, my computer’s a little slow today. So in terms of the process itself, as Scott mentioned, we’ve now circulated four drafts of this guidance for feedback with both UN and external stakeholders. And we’ve been very grateful for the time reviewers have given to this process. We’ve received an enormous amount of very constructive, thoughtful and helpful input, which has really supported us to strengthen the guidance, but also to ensure that it responds to the needs in the different contexts of UN entities. And it’s also given us some insight to inform early planning to support the guidance’s implementation, if indeed it goes forwards. For those who haven’t been following this process, earlier rounds of feedback really focused in on seeking clarity about the status of the guidance. Is it a guidance document? Is it UN policy? Will it be mandatory? Is it there to support entities? What is the guidance about? What is it trying to achieve? We also gained some insight into the different levels of familiarity that UN entities have with human rights due diligence processes, which has really helped us tailor the language and the approach to the guidance, particularly for those users who are newer to working with these types of concepts. Some of the earlier feedback provided an opportunity also for us to reflect on and to discuss with entities the appropriate scope of the guidance, what’s practicable, what best helps us steer towards a strong longer-term approach to managing the human rights risks of digital technology use. 
We received some requests for more concrete examples to help bring the material to life and give people a sense of what it would actually look like to implement in practice. And some of the UN entities actually worked with us to develop some examples that are hypothetical but also realistic of the types of situations that they face or anticipate facing as the digital technology use grows. We’ve also had some opportunities to explore how the guidance can be applied in sensitive contexts, for example by entities involved in the provision of emergency or humanitarian support, so we could tailor it to enable those entities to apply the guidance while navigating often very challenging and complex contexts and considerations. And we’ve also been able, through the process of consultation, to really explore how best to apply concepts and language around human rights due diligence that were originally developed for the private sector to UN entities, you know, recognising that there are some differences in the mandate, in the purpose, in the everyday language that is used across the UN family. When it comes to the most recent round of feedback, we generally heard very strong support for the approach that the guidance now takes, which was encouraging, and also some very targeted and very helpful feedback to support us to further hone and strengthen it. So, for example, a number of entities provided some helpful suggestions as to where we could more closely align the guidance with other agendas that are important across the UN system. So, for example, more prominently highlighting where digital technology use and human rights due diligence need to be sensitive to the different impacts on girls, women and gender non-conforming people, and to support an approach that is based on the principles of inclusion and intersectionality, so we’ve made that much more explicit. 
Some of the reviewers also helped us to identify other relevant principles, guidance documents and other resources on human rights and technology that are already in use across the UN, which has supported us to really promote an approach to the guidance that aligns with, rather than duplicates, those existing processes. The most recent round of review also offered us a chance to test some of the hypothetical examples that were now included in the guidance with other stakeholders, and we received some input on how we could further refine those to reflect issues that arise across different entities, not just the entities that helped us develop these examples, and to ensure that the language that we’re using resonates with users across different parts of the UN family. Finally, we heard that our efforts to clarify the relationship between this guidance and the process to develop a framework policy on human rights due diligence hadn’t quite hit the mark, and we received some helpful suggestions on how we could make this clearer for readers earlier in the guidance. As Scott mentioned, we’re currently working on the fifth and hopefully final or near final draft, and I think it’s likely to look very similar to the fourth draft, for those of you who have seen that. As I mentioned, the most recent round of feedback has really yielded input that’s helped us strengthen the guidance by tweaking the language in subtle but, I think, important ways, and to include more explicit connections to existing processes and resources, so they’re not major changes. Stepping back to reflect on the process as a whole, it’s generated some interesting learnings that are supporting us to start to think through how best we might support UN entities to implement the guidance when finalised. 
The consultation process, it’s not just helped us to hone the guidance, it’s provided us with some time and opportunities to learn more about where different entities are at when it comes to human rights due diligence, meaning that we’ve got a better sense of what might be needed to support capacity building in a more targeted and hopefully helpful way going forwards. We’ve learned a lot about what language resonates with colleagues across the UN. We’ve also been able to identify entities across the UN that already have significant experience working with human rights due diligence and have practical approaches and insights that they could potentially share with others to support that capacity building process. Our engagement across the UN has also generated a lot of food for thought on what risk management for the UN looks like in a world where a proactive approach to human rights is increasingly expected. Perhaps especially as we enter into an era in which increasing use of digital technology paves the way for a new world of human rights risks as well as potential human rights benefits. We’re very mindful that expectations not just of business but also of other organisations, including UN entities, when it comes to managing human rights risks and issues are becoming stronger and are also becoming increasingly connected to discussions about how to address environmental issues including the climate and biodiversity crises. Related to that, the process overall has provided a great opportunity to reflect ourselves and with both UN and external stakeholders who’ve been involved in the various rounds of consultation on the similarities and differences between UN entities and business enterprises when it comes to implementing human rights due diligence for digital technology use. 
We went into this process, I think it’s fair to say, with a general sense that it made sense to leverage and build on standards such as the UN guiding principles for business and human rights which were developed for business and we’ve been able to start the work of initiating conversation with those involved in the consultations on the nuances of adapting that approach. I might leave my comments there, though like Scott I will stay on the line in case there are any questions later.

Peggy Hicks:
Great, thanks very much, Catie, for that overview. We're going to turn now to a discussion that looks at the practical realities and challenges of applying human rights due diligence for the use of technology within the UN system. As noted, while the work on the due diligence guidance has been underway, a number of UN entities have already dived into the space, and we're going to hear from two of them right now, UNHCR and the World Bank, to share some of their experience. For this section, we're very fortunate to have with us David Satola, from the Office of the Legal Counsel at the World Bank, Legal Vice Presidency, and I do need to note that David is joining us at a miserably early hour in Washington, D.C., so thank you so much for being here, David. From UNHCR, Nicholas Oakeshott, Senior Policy Officer for Digital Protection at UNHCR, who we've worked closely with in the course of our work on the human rights due diligence guidance; he organized a workshop on applying HRDD to UNHCR's use of technology in complex field settings, and he's been spearheading those efforts across UNHCR. So I'm going to ask those two panelists a couple of questions to give us a sense of how this looks in practice. Turning to you, Nick, first, since UNHCR has been a real leader in applying human rights due diligence in its use of digital technology, and we've really appreciated your collaboration. Could you please give us a sense of how you're applying human rights due diligence, particularly in complex settings like those that UNHCR engages in, including the systems and mechanisms that are in place and how they're being strengthened? Thanks.

Nicholas Oakeshott:
Thanks, Peggy. I mean, as you'd expect, UNHCR has a wide range of policies and guidance that can help to manage risks in its use of digital technology. They range from privacy and data protection through to procurement, partnership due diligence and beyond. However, in terms of a formal policy framework on human rights due diligence, that's less developed, very much in line with what Scott was saying earlier on. But in our digital transformation strategy, which runs from 2022 to 2026, we've set the goal that UNHCR's own use of digital tech will align with international human rights and ethical standards, in line with what was said earlier on about walking the talk. These standards will also be promoted with states and the private sector, with a focus on high-risk technologies, uses and contexts. So our process of engagement with the guidance development has very much been about building UNHCR's understanding and capacity to apply human rights due diligence approaches to its use of digital technologies in order to meet this overall strategic objective. As you mentioned earlier on, in January we brought together a multifunctional team to implement a simulation of the third draft of the guidance, looking at field-based case studies. This approach allowed us to engage with experts on human rights due diligence from within the UN system, but also to receive advice from an international law firm, DLA Piper, which has expertise in advising the private sector on these issues. This was facilitated through a strategic partnership that we have with DLA, which gave us access to this advice on a pro bono basis, which always helps. By looking at the case studies, we were able to identify more clearly the potential implementation challenges, but also where the guidance added value to our existing policies and processes.
The second case study looked at the innovative use of social media platforms to deliver protection information to people on the move, such as how they could avoid risks of exploitation and trafficking in online ads related to accommodation or work. That was particularly positive and resulted in immediate follow-up. We've had a regional bureau bring together another multifunctional team to undertake a full risk assessment of this approach, and that resulted in some quite important adjustments and a decision to develop some more established guidance on this innovation. We've also got, I think, a reasonably clear and positive identification of the way forward. First of all, to meet an immediate priority, UNHCR will consider the guidance, even though it's still a draft, as part of a multifaceted assessment of UNHCR's developing approaches to the use of artificial intelligence, including generative AI. This will include the application of UNHCR's new and expanded general policy on data protection and privacy, as well as the principles on the ethical use of AI in the UN system, which were adopted in September last year. And secondly, we're going to review our set of existing policies and guidance to see how we can best implement the guidance once it's adopted. This will also include exploring whether a user-friendly digital tool could help the field and other internal stakeholders in implementation, as well as how best to engage with affected communities and civil society, which is an important but challenging part of the guidance. So UNHCR's journey down this road has begun, and I think a quite clear and useful way forward has been identified. Back to you, Peggy.

Peggy Hicks:
Great, thanks very much, Nick. It’s really clear that you’ve gotten a head start on a lot of this, and then the rest of us in the UN system will really be able to draw on some of those good practices that you’ve been working on. I’m going to turn to David now and ask you for a perspective from the World Bank. In my experience, it’s not always that easy to talk about human rights in a World Bank setting, so I’m a bit curious to hear how it’s been for those of you within the bank that are working in the area of human rights due diligence, and maybe if you could say a bit about whether you faced any pushback and how you’ve addressed it.

David Satola:
Sure. Good morning and good afternoon. Just a quick sound check. Can you hear me okay in the room? Great. Well, thank you all for inviting me here. Despite the early hour, I'm delighted to be here virtually with you, and thank you for including me in the panel. I'm sorry not to be in Kyoto in person. Before I get onto some of the specific challenges, I do want to take just a minute and applaud the effort that you all are making in trying to synthesize these disparate, evolving threads. In the past few years, all of us have taken different approaches to human rights and technology, whether it be in cybersecurity or, more recently, with artificial intelligence. I think the synthetic approach that you're taking here, a broad approach to human rights due diligence, is really to be applauded. I also think, and Katie and Scott and others have mentioned this before, that there are some elements of the process you're going through that are extremely important and that will resonate with those who have history with the Internet Governance Forum. One is the consultation process that you've undertaken: a multi-stakeholder consultation process will only reinforce the strength of this guidance. I can't underscore enough the issues of capacity building that are mentioned in the document itself. That's extremely important for us, as we are providing financing in these areas for different digital development activities. I'm also struck by the principles-based approach. I think this is a reflection of one of the main challenges, and it's not just for us but for all institutions working to integrate human rights due diligence: it's difficult to have a one-size-fits-all approach. But if you do it on a principles basis, which I think is reflected in the document, then it can be achieved.
I'd like to echo what Nick said as well: in the past few years, our organization, like UNHCR and others, has tackled different things in different ways, from procurement to human resources and beyond. Now this is an opportunity for us to bring those threads together. So the first challenge, I think, and this is exactly what you're trying to do in this document, is to recognize that there are standards out there and that they are evolving in different ways. This is, I think, a first attempt to synthesize them, and that in itself is a big challenge. The biggest challenge for the World Bank in this area is that the way we do business, the way our operations are conducted, is I think fundamentally different from most other UN organizations. I don't mean to speak for UNHCR or any of the others, so correct me if I'm mischaracterizing how you do business. When UNHCR does an operational activity, UNHCR is in the field; its staff are doing the work. Whereas in the World Bank context, when we do operational work in the field, we are generally providing financing to our member states to undertake a project. That's what we refer to as a recipient-executed activity. So we're one step removed from the kind of direct interaction that most of our other UN family organizations have. And that is a principal difference. So one of the challenges we are facing is that our member states are confronted with a lack of clarity, a lack of synthesis of a set of rules to apply. Even if we have a guidance for the UN family, it's not necessarily going to translate directly into how our member states might undertake their own due diligence. And with our renewed emphasis on digital as a principal way of doing business and development, I think this will be increasingly a challenge for us.
I think I'll leave it at that for now, but I appreciate the opportunity and look forward to the discussion today. Thank you.

Peggy Hicks:
Great, thanks very much, David. I think it's really interesting to hear about the downstream effects and the way the guiding principles on business and human rights that Catie mentioned... is that not coming in now? Sorry, microphone, yes, good. Sorry about that. I was just saying that David's comments about the downstream effects, and the indirect way that some of the guidance would need to apply given the nature of the way the World Bank works in different settings, are really interesting when you look at how it fits with the UN Guiding Principles on Business and Human Rights framework. So we'll probably come back to that. But I'll flip back to Nick now to ask a bit from your side about pushback in terms of human rights due diligence. I know we hear a lot of comments from those that are engaged about some of the challenges they face within their institutions. And it'd be great to hear a bit from your side about what sorts of things have come up and how you've been able to address them.

Nicholas Oakeshott:
Thanks, Peggy. I think that the process of engaging closely with the team at UN Human Rights has been really helpful in helping us to address and think through what the potential pushbacks might be. It's important to recognize that UNHCR, as David was saying, is a field-focused organization, and the protection of the forcibly displaced and stateless is a key part, perhaps the key part, of its institutional DNA. It's integral both to the agency and to its identity. So in this context, new processes could be seen as unnecessary steps potentially getting in the way of the immediate delivery of protection and humanitarian assistance in challenging emergency contexts, something which is a duplication rather than an addition. However, as the guidance has strengthened from draft to draft, it's been seen as being increasingly implementable at the field level, and the value add has become clearer, particularly in relation to existing risk management processes. And I'd flag up that in many ways, over the years, we've focused on similar questions, but through the lens of privacy and data protection rather than through a broader human rights due diligence perspective, which has obvious pluses in some contexts, but in other contexts is perhaps too narrow in scope. So overall, I'd say that UNHCR sees the guidance as an opportunity to realize the key digital protection strategic goal that I flagged up earlier on, and it provides us, through experience, with a stronger basis for increased engagement with states and the private sector, including technology companies, on promoting the protection of the forcibly displaced and stateless in digital contexts.
I think that there are enormous advantages from applying the guidance, even in its draft form to existing field contexts, because it means that we’re more relevant in our approaches and the advice that we can provide to states and the private sector. Back to you, Peggy.

Peggy Hicks:
Great, thanks very much, Nick. And I'm quickly going to turn back to David just to check in with you, to see if you wanted to add to your comments. I found your notes about the principles-based approach very interesting. In particular, given the amount of time that we've spent on this human rights due diligence guidance, one of our hopes is that it will be a document that has application beyond the UN system as well. And obviously, when you're looking at recipient countries and how they engage, perhaps that's one of the ways in which we could see that happen. But I'd love to get your thoughts on that point, David.

David Satola:
Yeah, thanks, Peggy. And just following up on that very point, I think that the principles-based approach will enable this. One does need to recognize that our member states are the same as your member states: they're at different levels of development, what one could call different maturity levels. There are some big middle-income countries who borrow from the World Bank who are going to be more sophisticated and have higher capacity to deal with some of these issues. If you take a big middle-income country versus, say, a small island country with a smaller population, less development, maybe even lower income levels, it's hard to impose the same model on both. So the fact that it is a principles-based approach allows for recognition of those different levels of maturity. I'm not suggesting at all a subjective approach to human rights, or a relative approach to human rights. No, it's the due diligence part, and the capacity to integrate how one approaches technology and technology issues, that would need to be recognized in those contexts. And I think we find this in our normal lending operations as well. There are some things that are universal, that apply across the board; we expect our borrowers to observe the same kind of procurement principles and things like that. So likewise, I think we can hope to achieve a universal approach to human rights due diligence. But in the process, we do need to recognize that different countries have different levels of development and economic maturity, and that would need to be taken into account. Over.

Peggy Hicks:
Great, thanks very much, David. I think we'll leave it with Nick and David at that point. And Marwa has been very patient. We're very fortunate to have Marwa Fatafta from Access Now with us. Access Now has been involved, as Scott noted, throughout this process. And we'd really like to hear from you, you know, Access Now's views on how the UN is doing in this area, why you think this guidance could be important, and what you'd like to see as we go forward. Thank you very much, Marwa.

Marwa Fatafta:
Okay, this works. I'm very happy to be here, and I hope our colleagues who follow us online can hear me clearly. As a starting point, I think this is a very important step that OHCHR has taken to ensure that human rights due diligence is mainstreamed across all UN entities. And we think it's a step that is, frankly, a bit overdue, where we've seen the rollout or deployment of technologies on a mass scale without sufficient assessment of potential negative human rights risks, some of which have materialized. And I think that's important especially in contexts where there is not necessarily strong rule of law or a strong human rights record, and where vulnerable individuals and communities who may be impacted by the use of digital technologies by UN agencies may not have access to effective remedy. So I think it's a very important step, and I really look forward to seeing the implementation of it and the final draft as well. We have been engaging on business and human rights on a number of fronts, especially with the private sector, and engaging with human rights due diligence has given us a number of lessons learned that I would like to share and that I think are important for this conversation. The first is that we've seen in the guide that its aim is basically to build the capacity of different UN agencies, in headquarters and field offices, to be able to conduct human rights impact assessments and use this guide. However, we think that it's very important to add an element of independent assessment. This is important for a number of reasons, the first of which is oversight and accountability. When those assessments are made internally, and especially when they or their findings are not published, it becomes hard for civil society to scrutinize the decisions made.
We’ve had situations, especially with the private sector, where there is a decision to expand in a certain market or use a specific technology, and where we see clearly in red letters that this technology will lead to negative human rights impact. However, we’re told that it’s fine, you can relax, because we’ve done our due diligence, we’ve done our human rights impact assessments, and you can trust us to take care of this matter. Therefore, I think independent assessments are very important. Truth be said, we’re all subject to bias, and having an independent third party that can assess the rollout of technologies, or of specific programs that rely on tech or digital solutions, especially when they’re already being implemented, is key. Even where the technology is in use and an assessment exists, having someone from the outside conduct it is important. The second point ties to the first one, and I’ve already alluded to it: transparency. Transparency on the process and the practice, and engagement with civil society, allows affected communities to evaluate from their own perspective the extent to which the decisions taken by the agency are actually serving their needs and protecting their rights. And for us this transparency is not optional, which I think the guide currently suggests it is. We think it is key to the success of this tool. The third point is around enforcement. Human rights due diligence tools are only as good as their enforcement. And again, from experience engaging with the private sector and also with some UN agencies, including UNHCR, we have seen that those tools, even where they are explicitly written down or mandated by internal policy handbooks or internal policies, are not necessarily implemented.
And that is especially so in challenging situations, such as humanitarian contexts where UN agencies have to rush to get refugees registered or to get people across the border. And we are, at the end of the day, operating in an ecosystem where private companies are aggressively selling solutions to very complex problems. Here we see that, in such situations, the technology is rolled out and the assessments are either not made or made later. And when they are made later or not made at all, we end up with these technologies implemented on a mass scale, a de facto situation, such as the biometric registration of refugees conducted by the UN Refugee Agency. When you have millions of people already registered with their biometric information, which is extremely sensitive personal data being used and processed, it of course exposes individuals and vulnerable communities to a number of risks. But it becomes hard to challenge these systems once they are already out. And then the question for us as civil society is, how can we work with UN agencies to mitigate these risks, when the more data you collect, the harder it becomes to protect it? So that’s just one example of what happens when no human rights due diligence is done, and of its long-term cost. One point also to raise here, and I think David mentioned it, is that sometimes even when headquarters are very diligent about enforcing and implementing human rights impact assessments and data protection impact assessments, when it trickles down to the level where field offices operate, those rules are not necessarily followed. It could be because of a lack of capacity or resources, or because of the sector or context in which they’re operating.
But here it’s key also to ensure that those tools are being implemented at the lowest level, where there is direct interaction with affected communities. So that’s one point to highlight on the enforcement bit. And then the last key point to raise here is around public-private partnerships, which I think is very important to help strengthen the Human Rights Due Diligence Guide. When private companies are being procured, we don’t see any information from a number of UN agencies about why they have selected specific companies. There have also been examples where companies with shady human rights records have partnered with UN agencies; Palantir is one classical example that comes to mind. And when civil society asks for more information or transparency on how company X has been selected, we don’t receive answers. So I think adding or strengthening transparency on public-private partnerships and due diligence on the companies that are procured is just as important as assessing the potential or foreseen negative human rights impacts of the programs or the technologies themselves.

Peggy Hicks:
Thanks very much, Marwa. It’s really great to get your reflections on it. Your second point related to transparency in the process and the involvement of civil society, which you said was key to making this process work, and I think your comments gave us a good example of that. We need that sort of input about where we’ve gotten and how much further we have to go. That doesn’t mean we’ll necessarily get there all in one step. But it’s very important to have that spotlight and to understand what needs to be done, and how we need to move not just on the guidance and not just on the implementation, but to look at some of these key questions around transparency and independent auditing, and to make sure the process is as deep and meaningful as possible. So thanks for that. With that, I think it’s time for us to move quickly to the question and answer. As I said, we’re very grateful to those of you who have joined us for this session. We have people online, I think, that may come in with questions as well, but we’d be very happy to prioritize questions in the room first. We’re a small enough group that I think you can just flag me, but I’m being told you need to go to the mic so the people online will be able to hear it. Anybody have any questions or comments on what they’ve heard? I’m seeing none. Sorry? There you go. Oh, thank you. And if you can introduce yourself as well, please.

Audience:
Sure. My name is Boshree Badi. My question is around these guidelines that are being created. I’m wondering how much space there is to actually talk about what’s influencing the decision-making process around procuring certain technologies within the UN system. There’s funding that goes into the system from certain actors that have their priorities and agendas clearly set out, but that’s not necessarily something that could be made transparent, I think, within the parameters of how the UN system currently functions. And even for an internal mechanism within the UN system, there’s a lack of clarity. So is there anything being developed to make that more transparent internally, even if it’s not something that can be publicly shared? Because I think that’s an important part of understanding the decision-making process of why certain technologies are being pushed, and the underlying narrative around those technologies. There’s the understanding that maybe they’ll improve efficiency; for example, with UNHCR’s use of biometric technology, a lot was said about it decreasing fraud instances, but those were seen to be so negligible that it didn’t merit the risks those populations were being exposed to as a result of the use of those technologies. So I’m wondering if there’s any mechanism being considered there as well. I’m sorry.

Peggy Hicks:
Great. Thanks. No, it’s a good follow-up question to the comments that Marwa made as well. I’ll just see if there are any others that wanted to come in with questions, and then we can go back to the panel and others for response on that point. Anybody else want to come in? Do we have any questions online, Eugene, that we should bring in? Oh, sorry, please.

Audience:
Hello. I’m Ana Cristina Ruelas from UNESCO, and I have a question about the independence of assessments. How do you identify who is going to do the independent assessment when you have different bodies? What is your experience of who would be the independent body to perform assessments when it comes to UN agencies, which have many member states with different views? Should the member states decide on the names to perform the assessments? Should civil society decide? Which civil society? What would be your recommendation on how to establish that independence?

Peggy Hicks:
Please. We’ll take this one last question, then I’ll go back to the panel with all three.

Audience:
Hi, my name is Oliver. I can’t name my organization because of security risks. My question is, since UNESCO just stood up: would the human rights due diligence that you’re developing apply to the UNESCO guidelines that are currently also in development? Because a lot of civil society have been asking why the UNESCO guidelines have no due diligence process. Thanks.

Peggy Hicks:
So we have three questions on the table. I think, Nick, if you’re there, maybe it makes sense to go to you first, as UNHCR has come up in the conversation on several occasions. But, you know, to potentially broaden it out, just give us a sense of how you’re dealing with some of the challenges that have been raised around public-private partnerships and the transparency around them, and the different factors in play when UNHCR is looking at some of these issues, including the use of independent assessments and other things. Thank you.

Nicholas Oakeschott:
Thanks, Peggy. I think that, you know, it’s a good question, the question about the purposes for which certain technologies are chosen. And I think one good reference point that civil society and other stakeholders now have is UNHCR’s digital transformation strategy, which I’m just going to drop a link into the chat so that you can see there what our objectives are on digital. And you can see in that strategy that it’s very much focused on the people that we serve. Three goals of the five are, you know, digital protection, digital inclusion, and providing more digital services for the people we serve. And so there I think that there’s more of a clear idea on the business side, if you like, of what we want to use digital technology for. And it’s the first strategy that we’ve had, so I think it’s an important reference point. On transparency questions, I think that one of the key opportunities, but also fundamental challenges that we’ve identified in the work we’ve done around the human rights due diligence guidance is how can we effectively engage with civil society stakeholders in the implementation of that guidance. I think that, you know, from talking to experts in the private sector, that’s also a challenge that businesses have faced, and I would very much welcome an opportunity to discuss with Access Now and other stakeholders ideas on how we can make that work. On the one hand, you know, respecting that there may be some confidentiality questions that arise, but also how important it will be to include civil society in those due diligence processes. On the question of independent assessments, I think that that’s a particular challenge within the UN system. There is an independent auditing function that does look at the work of UN agencies, and once the policy that Scott refers to is adopted, that policy will become auditable, if you like. 
And on the other hand, we have, say, in the context of data protection, established agreements with expert suppliers to help bring both expertise but also some independent rigor to data protection impact assessments that have been undertaken both at the global and the field level. But I think the jury is still out from UNHCR’s perspective about whether independent entities undertaking audits of the implementation of the guidance beyond the existing system would be something that we could work well with. But overall, I think that we’re on a learning process, as I said in my earlier comments, and would very much welcome greater dialogue and discussions with civil society about how we can best make this guidance work. Back to you.

Peggy Hicks:
Great. Thanks very much, Nick. And I’ll turn to David to see if you have any comments on that, and then to the panelists here.

David Satola:
Yeah, sure. Thank you. And just wanted to follow up very quickly on Marwa’s comment and the ensuing discussion on enforcement and related issues. I agree. I think that that lends itself towards accountability, which is definitely required. And while we don’t have anything specific on human rights at the moment, we do have a variety of other tools that are available both to our borrowers and to civil society and the beneficiaries of our work. And some of those are the following. One of them is we have a grievance redress mechanism in our projects. And so every project will have this, so that if there is a negative impact on someone, an individual, for example, they can then appeal to the World Bank to seek redress for whatever harm they’ve encountered. We also have, in terms of the โ€“ and I think this might address in part the PPP question or working with the private sector โ€“ a lot of the financing that we provide to governments goes to vendors or consultants or contractors. So if it’s a roads project, we’re not going to build the road. The government’s not going to build the road. They’re going to hire someone to build the road. But in that context, we have our fraud and corruption guidelines, which, to borrow the phrase, sort of follows the money and all the way down the chain to the most local subcontractors. So to make sure that they’re doing what they’re supposed to be doing with the money. We also have, in the broadest sense, an organization called the Inspection Panel, which is independent and which can be invoked if there are issues that arise in one of our projects that there was some serious breach or something like that. And we also have internally a group called the Independent Evaluation Department, which retrospectively looks at projects in terms of lessons learned and what worked, what didn’t work. And so collectively, there’s a lot of accountability mechanisms that are there. 
They’re not specifically designed right now necessarily to address human rights, but there’s no reason that they couldn’t be adapted to include human rights issues. And as Nick said, over the past few years since the entry into force of GDPR, personal data protection is a huge issue for us. We provided billions of dollars of financing in the COVID pandemic, and maybe some of you remember that from a couple of years ago. But the amount of personal data that was being collected by our recipients at that time, we realized, was going to be huge. And we wanted to put in place mechanisms in our lending instruments that would ensure that our borrowers had in place the right kind of legal and technical measures to protect personal data. Some of our borrowers had laws in place, and we could rely on those. In other cases, there weren’t legal frameworks in place, and so we worked with our borrowers to make sure that for those projects, the projects themselves had a framework in place. Now, let me just digress for a moment there. Our members are sovereigns. The World Bank is a sovereign. When we have a lending instrument, when we do a financing agreement, a sovereign-to-sovereign agreement is a treaty. It’s a very powerful instrument. And when we did those COVID projects with countries that didn’t necessarily have a data protection regime in place, we built it into our agreement. And so we were pretty comfortable with the fact that that sovereign-to-sovereign agreement, that treaty, for the purpose of the data that was collected in that context of COVID, was going to be protected. So not perfect, but certainly a tool that we had that we used to make sure that, to the extent that we could, those issues were being addressed. Over.

Peggy Hicks:
Great. Thank you very much, David. Great to get that insight. We only have a couple minutes left. We’re starting to hear noises outside of our room here in Kyoto. Marwa, I’ll turn to you quickly.

Marwa Fatafta:
I don’t want to hold people. Quickly, I couldn’t agree more with the first comment, and that’s an issue we also face. Often, technologies are deployed or used without evidence. And I think evidence-based solutions are very important in a context where, again, private companies are happy to sell you, and I use this term, snake oil, or solutions that could have serious ramifications or negative impacts on human rights. Therefore, as civil society organizations, we sometimes struggle to understand the rationale for why certain solutions that are disproportionate, given their human rights impacts, are being used and justified. So it is important to have evidence showing, for instance, that there is no solution other than biometric registration that would justify the collection and processing of sensitive data. Research, of course, is resource-intensive and time-intensive, and I understand, again, that in challenging contexts that’s hard to achieve all the time, but nevertheless, it is important to scrutinize the narratives behind certain technologies, such as AI. On the point on independent assessment, I’m not in the business of promoting certain entities, but there are, of course, companies and civil society organizations that are specialized in doing exactly that, doing human rights due diligence. And as someone who has participated in a number of consultations, my job as a civil society organization is to ensure that those auditors or companies are speaking to the right people. So they’re not just speaking to Access Now as a global organization, but can actually speak to grassroots organizations, the people who belong to the communities that might be affected.
That’s, I think, something that civil society should continue doing in building bridges. And I understand the difficulty in reaching out to stakeholders, which I believe Nick had mentioned, and that’s something civil society can help with. An organization like Access Now has partners across the world, and we’re more than happy to connect whenever a consultation is needed, and the same, of course, applies to other partners.

Peggy Hicks:
Great. Thanks, Marwa. Quintin, a closing word?

Quintin Chou-Lambert:
Yep, sure. Thanks very much. Just bringing it back to the overall role of this non-binding guidance: it helps to reconcile two challenges. One is how we have horizontal alignment across the different UN agencies and entities, while making the guidance principles-based such that it can be translated into local contexts. It’s not a surefire, outcome-guaranteeing kind of thing. To get to that, to hard-code it into operational procedures like procurement, one would probably need to have it baked into the entity-specific procedures, including the tendering process and the kinds of checklists and auditing that go on in those procurement processes. And this guidance can be a beacon for each entity to do that kind of hard-coding. It has to be done entity by entity because each entity has its own governing bodies. For example, in the Secretariat, the General Assembly prescribes what we call housekeeping rules: basically the way in which the UN does its procurement, with the criteria for procurement handed down by the GA, whereas UNHCR and other agencies have more flexibility in some cases. But this guidance can act as a beacon for each entity to hard-code these kinds of principles into its own local procedures. And also, just in closing, as a kind of beacon for individual people, staff members who are working in the organizations. I recall, for example, during the COVID days, when the Secretariat itself was considering how to deal with the pandemic and whether to introduce its own contact tracing, proximity tracking system. In the end, it was an emergency and it was a judgment. In my opinion, the correct judgment won out, which was that we were not going to do it, because the partner who was offering to do it was not going to be able to meet the privacy requirements that were appropriate for the case. But it was a judgment call.
This kind of human rights due diligence framework offers a lodestar for the system, both to translate the principles into its regular procedures and to guide individuals who are taking judgment calls on a day-to-day basis.

Peggy Hicks:
Great. Thanks very much, Quintin. We’ve run over time, so just to conclude: we’ve had a good conversation here. Some important questions have been raised around transparency, assessments, and enforcement. Those are issues that we will look at very seriously, and the interagency working group will take them on board as we look to implement and move forward in this process. Coming at it from a previously civil society perspective, I have to say that from my interaction with the UN agencies involved, there’s a real commitment to trying to move this forward in a positive way. But some of the issues raised are difficult ones for us to solve, such as public-private partnerships and corporate engagement. I’m in charge of digital transformation from a champion standpoint at my organization, and one of the things that emerged in reaching out to other UN agencies, to get a sense of how they’ve been able to do what they wanted within digital transformation, is the real recognition that the funding is not there for it to happen except in the context of some of these important partnerships. And we’re grateful for that, because the UN has to be an entity that functions with all of the tools necessary to protect human rights, in my regard, or to protect refugees, in UNHCR’s regard. So these are challenging things for us to implement, but we really appreciate the input and commit to continuing the conversation as we go forward. Thank you all for staying so late and missing the reception outdoors. I hope you’ll get a chance to enjoy the evening here in Kyoto, and thanks again for all your time. And thanks to our panelists for all their efforts.

Speaker statistics (speech speed / speech length / speech time):

Audience: 173 words per minute / 467 words / 162 secs
Catie Shavin: 182 words per minute / 1397 words / 461 secs
David Satola: 169 words per minute / 1856 words / 658 secs
Marwa Fatafta: 169 words per minute / 1696 words / 603 secs
Nicholas Oakeschott: 159 words per minute / 1524 words / 574 secs
Peggy Hicks: 195 words per minute / 2195 words / 675 secs
Quintin Chou-Lambert: 166 words per minute / 646 words / 233 secs
Scott Campbell: 172 words per minute / 848 words / 295 secs

Successes & challenges: cyber capacity building coordination | IGF 2023


Full session report

Claire Stoffels

The analysis reveals several key points about cyber capacity building coordination. Firstly, there is a lack of coordination among stakeholders, leading to diverging objectives, different approaches, and duplication of actions. This lack of coordination hinders the overall effectiveness of cyber capacity building efforts.

On the other hand, successful coordination requires an inclusive, demand-driven, and context-specific approach. Cybersecurity transcends many communities of practice, necessitating regional collaboration and a shared understanding of the specific needs and challenges faced by different regions.

Trust is identified as a crucial component for effective cooperation in capacity building. However, building trust is challenging due to the presence of different policy fields and institutions. Luxembourg, perceived as neutral and trustworthy, has played a role in relationship building by fostering trust among stakeholders.

Another challenge is the development of scalable models for coordination. Coordinating capacity building efforts sustainably is a significant concern. Establishing mechanisms that allow for the efficient coordination of efforts while adapting to different contexts and needs remains a challenge.

Furthermore, the analysis highlights the risks posed by a lack of coordination in cyber capacity building: duplication of efforts and a lack of coherence. Coordinating actions and sharing information across stakeholders is vital to avoid these risks and to ensure a cohesive and efficient approach to capacity building.

The importance of multi-stakeholder approaches and partnerships is emphasized. Bringing together stakeholders from diverse sectors and actively engaging them in capacity building efforts can lead to more comprehensive and effective outcomes. Luxembourg has been successful in fostering multi-stakeholder approaches and partnerships, collaborating with the national cybersecurity agency and coordinating efforts across sectors.

The analysis also points out the benefit of using coordination platforms and practitioner groups in cyber capacity building. Luxembourg has joined various coordination platforms and practitioner groups, such as the GFCE and EU Cybernet, finding them beneficial in facilitating coordination and collaboration.

The D4D Hub is highlighted as a valuable platform for exchanging information, sharing best practices, lessons learned, and improving projects. Despite the challenges in gathering information, the hub serves as an important element in project inception and formulation.

Lastly, the analysis underscores the role of donors and implementers in promoting awareness, enhancing communication, and facilitating cooperation and knowledge sharing. Claire Stoffels endorses the idea that donors and implementers have a responsibility to play a larger role in capacity building efforts.

In conclusion, the analysis identifies the need for enhanced coordination in cyber capacity building. It emphasizes the importance of inclusive, demand-driven, and context-specific approaches, building trust among stakeholders, developing scalable models for coordination, and fostering multi-stakeholder approaches and partnerships. Using coordination platforms and practitioner groups, such as the D4D Hub, can also support information exchange and project improvement. Additionally, donors and implementers should take an active role in promoting awareness and facilitating cooperation among stakeholders.

Donia

The discussion revolves around the concept of capacity-building in the context of community development and technological solutions. Both participants agree that capacity-building should be seen as a comprehensive approach encompassing various aspects, such as community awareness, legal frameworks, and governmental policies. They argue that solely focusing on technology solutions is insufficient.

The speakers emphasize the importance of adopting a holistic approach to capacity-building. This approach should involve not only technological advancements but also community awareness, including educating individuals about the benefits and implications of technology solutions. They also stress the significance of developing legislative frameworks and government policies that encourage capacity-building, as these are crucial for creating an enabling environment for sustainable development.

The participants provide supporting facts, including questions posed by online participants, which demonstrate a concern for broader aspects of capacity-building beyond technology. They also suggest that capacity-building extends beyond the requirements of SDG 9 (Industry, Innovation, and Infrastructure) and SDG 11 (Sustainable Cities and Communities) to include SDG 16 (Peace, Justice, and Strong Institutions). This highlights the extensive scope and potential impact of capacity-building beyond immediate development goals.

Throughout the discussion, the sentiment of both speakers remains neutral. They present their arguments in a balanced manner, without expressing a strong positive or negative stance on the topic. This neutral sentiment indicates a willingness to engage in an open and constructive dialogue on the subject of capacity-building and its multifaceted nature.

In conclusion, the discussion underscores the importance of considering capacity-building as an end-to-end process that encompasses technological solutions, community awareness, legal frameworks, and governmental policies. The participants argue that capacity-building should not be limited to technological advancements alone. By addressing these diverse aspects, capacity-building can foster sustainable development, promote social progress, and contribute to the achievement of various SDGs.

Anatolie Golovco

During the discussion on cybersecurity, speakers emphasised the significance of the human element in protecting computers against cyber threats. They stressed the need for individuals with the right values, ethics, and technical skills to be involved in the field. Cybersecurity is ultimately about good people safeguarding computers from bad actors.

Insufficient coordination and a lack of clarity in project objectives were identified as challenges in implementing cybersecurity initiatives. When beneficiaries lose sight of project goals midway, misalignment in project delivery occurs. This issue can be compounded by competition among donors and a lack of clarity in defining project needs. To address this, speakers advocated for improved planning and better coordination among states. States should clearly articulate project needs and roles to donors, facilitating better alignment of objectives and successful project implementation.

One proposed solution involved a three-layer mechanism for effective coordination in cybersecurity efforts. This mechanism consists of a cybersecurity council, smaller groups for peer review, and the Ministry of Economy and Digital Development, each with defined roles. This approach was regarded as efficient and conducive to better coordination, ensuring project objectives are met. The role of clear policies formulated by the Ministry of Economy and Digital Development, which help translate plans into action, was also highlighted.

Another crucial aspect discussed was the need for a people-centric approach and a re-evaluation of the cybersecurity architecture. Reducing the complexity of tools and rethinking the overall architecture are necessary steps. Speakers emphasised the importance of focusing efforts on strategy rather than merely adding layers of security to a faulty system. There should be a substantial effort invested in rethinking the ecosystem to ensure effective cybersecurity.

Throughout the discussion, it was noted that adapting project timelines to accommodate the speed of learning and the dynamic nature of cyber threats is often challenging. Donors may face difficulties synchronising their contributions with the rapidly evolving needs of the field, resulting in a focus on acquiring tools rather than developing the individuals involved. Therefore, speakers called for a greater focus on the people in the cybersecurity process, prioritising their training and education alongside procurement of tools.

In conclusion, the discussion underscored the vital role of the human element in cybersecurity. It stressed the need for individuals with the right values, ethics, and skills, alongside improved coordination and clear project objectives. A three-layer mechanism, supported by coordinated policies, can enhance coordination, and a people-centric approach, along with a reassessment of the cybersecurity architecture, may lead to more effective protection against cyber threats. Speakers called for greater attention to be given to the development of individuals in the field, emphasising their training and education as essential components of cybersecurity initiatives.

Louise Hurel Marie

The analysis emphasises the importance of better understanding and coordination among countries when it comes to supporting capacity building in specific regions. It argues that in order to avoid duplication of efforts and overloading of recipient countries, a more coordinated approach is needed. The analysis also highlights the crucial role of political buy-in for the success and sustainability of cyber capacity building initiatives. It states that without the government seeing capacity building as a priority, it becomes challenging to gain traction and achieve desired outcomes.

Another key point raised is the need to break down cyber capacity building into more specific categories. The analysis suggests that traditional cyber capacity building, capacity building for crisis response, and capacity building for conflict or post-conflict recovery can be considered as subcategories. By doing so, it becomes easier to define and address the specific needs and challenges in each area.

Insufficiencies in coordination of capacity building efforts can lead to poor sustainability measurement, according to the analysis. It argues that donor countries and recipient countries may lack effective measurements for longer-term sustainability efforts. This can result in one-off efforts or effects, with impact measurement focused on specific projects rather than holistic outcomes.

In contrast, the analysis also highlights the positive impact of longer-term programs and sustainable recommendations in capacity building. It suggests that building a longer-term capacity building program in a region could enhance sustainability. Additionally, both donors and implementers could benefit from developing and adopting broader measurements of impact beyond individual projects.

Insufficient domestic coordination is identified as a potential challenge in capacity building efforts. The analysis points out that multiple departments within a single government may conduct different types of capacity building efforts, potentially complicating coordination. Recipients might also be overwhelmed by multiple offers and struggle to designate the appropriate point of contact. This lack of coordination can lead to complications and inefficiencies in capacity building.

The analysis recommends that coordination and trust-building between countries prior to crisis assistance can enhance the effectiveness of capacity building efforts. It states that countries that have provided assistance in a crisis often had a previous relationship, highlighting the importance of trust and prior coordination. Mechanisms such as Memorandums of Understanding (MOUs) and institutionalized responses, such as the European Union’s Permanent Structured Cooperation (PESCO) framework, are cited as examples that can increase the effectiveness of coordinated responses.

Crisis response is seen as an opportunity for countries to gain political visibility and set up new coordination mechanisms to enhance sustainability. The analysis mentions the establishment of the Center for Cybersecurity Capacity Building in the Western Balkans as an example of leveraging crisis response to create new mechanisms. It suggests that the crisis response capacity building type and the broader cybersecurity capacity building can complement each other depending on the context.

Progress is reported in international-level discussions on addressing cybersecurity issues. The analysis highlights the existence of working groups on incident response and cyber diplomacy as part of the Global Forum on Cyber Expertise (GFC) platform. It also notes that different communities meet and discuss in informal settings at the international level, indicating ongoing efforts in addressing cybersecurity challenges.

Challenges still exist at the domestic level depending on the country and culture, states the analysis. It points out that different departments in the government may have varying understandings of cybersecurity. Additionally, community engagement varies depending on the maturity of a particular stakeholder group. This suggests the importance of considering context-specific challenges and cultural nuances when designing and implementing capacity building initiatives.

Civil society organizations and think tanks are highlighted as crucial actors in bridging different communities. The analysis emphasizes their role in involving as many stakeholders as possible during the planning and designing of specific projects. Their involvement can help ensure a more inclusive and comprehensive approach to capacity building.

The analysis also suggests including recipients in the design phase of projects. Providing a bigger inception phase, where stakeholders can engage and provide input, can help create ownership and increase the chances of successful implementation.

Lastly, the analysis calls for designing a typology that accounts for contextual considerations in cyber capacity building. It argues that the evolving landscape in terms of agencies, stakeholders, crises, and conflict or post-conflict situations should be taken into account. This would enable a more nuanced and tailored approach to address the diverse needs and challenges in different contexts.

In conclusion, the analysis underscores the importance of better coordination, political buy-in, and sustainability measurement in cyber capacity building efforts. It also highlights the need for longer-term programs, domestic coordination, and trust-building between countries. The analysis recognizes the progress in international-level discussions and acknowledges the challenges at the domestic level. Additionally, it emphasizes the role of civil society organizations and think tanks, as well as the involvement of recipients in project design. Overall, the analysis provides valuable insights for policymakers and stakeholders involved in enhancing cyber capacity building efforts.

Rita Maduo

The rapidly evolving and complex cyber landscape presents challenges in coordinating cyber capacity building projects. The difficulty lies in the constant need to update strategies and priorities in response to new technologies and their associated threats and vulnerabilities. This negative sentiment arises from the fast-paced nature of the cyber landscape, which makes coordination increasingly challenging.

Emerging economies like Botswana face additional obstacles due to limited resources. Adapting to the changing cyber environment is expensive, requiring substantial funding that may not be readily available. This limitation hinders the training of cybersecurity experts and the management of complex vulnerabilities, further exacerbating the challenges faced by these countries.

Insufficient coordination in cybersecurity efforts has negative consequences. It creates weaknesses in a country’s overall cybersecurity posture, making it exploitable by cybercriminals. Ineffectual coordination also leads to gaps and vulnerabilities, hindering the effectiveness of cybersecurity programs. Additionally, inefficient resource allocation is a direct result of insufficient coordination, leading to wasted resources and misplaced priorities. Overall, insufficient coordination limits the effectiveness of cybersecurity initiatives.

Effective information sharing is crucial for cybersecurity. Insufficient coordination hampers the sharing of threat intelligence between entities, making it more challenging to detect and mitigate cyber threats. Timely and accurate information sharing is essential for robust cybersecurity measures, underscoring the importance of coordination in this area.

A positive stance is taken, emphasizing the need for proper coordination among stakeholders for effective cybersecurity. Timely and accurate information sharing between stakeholders strengthens cybersecurity efforts and can only be achieved through coordination and collaboration. This positive sentiment highlights the significance of coordination in establishing robust cybersecurity measures.

Successful cyber capacity building requires a multifaceted approach and sustained commitment from all parties involved. Donors, implementers, and recipients must demonstrate ongoing commitment to achieve long-term success. The multifaceted approach includes embracing diverse perspectives and voices in cyber capacity building initiatives. By avoiding a stagnant approach, the positive sentiment emphasizes the importance of involving different stakeholders in cyber capacity building.

In conclusion, the summary highlights the challenges faced in coordinating cyber capacity building projects in the rapidly evolving and complex cyber landscape. Limited resources, insufficient coordination, and a lack of information sharing hinder progress in strengthening cybersecurity measures. However, the positive outlook emphasizes the importance of proper coordination, sustained commitment, and the inclusion of diverse voices in cyber capacity building initiatives. Addressing these challenges is crucial for enhancing cybersecurity globally.

Hiroto Yamazaki

The discussion on cybersecurity coordination explores the challenges that arise when multiple stakeholders are involved. One key issue is the presence of too many organizational stakeholders in cybersecurity, which hinders full coordination. This fragmentation of stakeholders is observed in various layers, including divisions between private and government entities, technical and policy experts, and different countries or regions. The lack of a unified approach and participation from all relevant organizations impedes effective coordination.

Another challenge is the difficulty in achieving full coordination due to the focus on bilateral cooperation. The Japan International Cooperation Agency (JICA), a key player in cybersecurity cooperation, bases its efforts on bilateral agreements between Japan and recipient countries. This approach requires JICA to align its initiatives with the recipient country’s own cybersecurity approach, strategy, and specific needs. While bilateral cooperation is important, it poses challenges in achieving comprehensive coordination across multiple countries and stakeholders.

However, it is stressed that respecting the recipient country’s ownership in bilateral agreements is crucial. JICA adheres to the policy of recognizing the recipient country’s authority and strives to follow their approach and strategy in cybersecurity cooperation. By acknowledging and respecting the recipient country’s ownership, JICA aims to foster a collaborative environment and ensure its efforts align with the recipient country’s priorities.

Inadequate coordination within JICA’s cybersecurity capacity building initiatives is identified as a problem, leading to negative effects such as reduced efficiency, failure to maximize development impact, and a lack of sustainability. The challenges stem from duplication of assistance, limited resources, an excessive number of resources, and isolated approaches to assistance. These factors contribute to suboptimal results and negative implications in JICA’s cybersecurity capacity building projects.

To address the lack of coordination, JICA employs two strategies: bilateral efforts and multi-stakeholder efforts. In bilateral efforts, interactions with Cambodian partners and organizations such as Cyber for Development are used to reduce duplication and enhance coordination. Additionally, JICA recognizes the importance of engaging multiple stakeholders, as evidenced by their technical cooperation project in Thailand, where they collaborate with ASEAN member states, the ASEAN Secretariat, and other donors. By incorporating multiple stakeholders in their initiatives, JICA aims to foster a more coordinated and comprehensive approach to cybersecurity capacity building.

A noteworthy success is JICA’s technical cooperation project in Thailand. With the collaboration of ASEAN member states, the ASEAN Japan Cyber Security Capacity Building Center conducts training and contests, contributing to the overall improvement of cybersecurity in the region. This success story highlights the positive outcomes that can be achieved through effective coordination and collaboration.

Furthermore, the discussion emphasizes the importance of coordinating with multiple stakeholders or through bilateral interactions to maximize development impact. It highlights the need to reduce duplication and harmonize efforts through coordination. The significance of creating sustainable outcomes, such as establishing guidelines and training materials, is also recognized in the cybersecurity field.

While some sentiment expresses negativity towards the one-time training or meeting approach, suggesting it is not an effective means of achieving coordination, there is positive sentiment towards delayed or time-difference coordination. This approach allows for longer periods of interaction and enables donors to engage with recipient countries even after initial engagement has taken place.

In conclusion, the discussion on cybersecurity coordination sheds light on the challenges faced by various stakeholders in the field. These challenges include the presence of numerous organizational stakeholders, difficulties in achieving full coordination due to the focus on bilateral cooperation, and inadequate coordination within JICA’s initiatives. Strategies such as bilateral efforts and multi-stakeholder engagement are identified as potential solutions. The importance of respecting recipient country ownership, creating sustainable outcomes, real-time coordination, and employing more long-term approaches is also emphasized. By addressing these challenges and implementing effective coordination strategies, collaboration and impact in cybersecurity capacity building can be improved.

Calandro Enrico

The proliferation of cyber capacity-building efforts has resulted in challenges in aligning strategies, priorities, and activities among donors, recipients, and implementers. These efforts aim to improve cyber resilience and skills in the face of increasing cyber incidents, state-sponsored attacks, and cybercrime. However, the sheer number of initiatives has created difficulties in coordinating and harmonising these efforts.

To address these challenges, a roundtable discussion is being organised, involving representatives from various sectors, such as the internet governance forum, government officials, civil society, technical community, recipients, donors, and implementers of cyber policy. The objective of this discussion is to assess the achievements and difficulties in coordinating cyber policy activities. The outcomes of this discussion will be formulated into a policy brief, which will serve as a guideline for stakeholders involved in the field of cyber capacity building.

In the realm of cybersecurity, it is crucial for project deadlines to adapt to the learning speed of the individuals involved. Human learning speed often falls behind the strict timelines set for cybersecurity projects. Thus, the focus should shift towards prioritising people and knowledge over rigid deadlines. This approach will ensure proper skill development and overall project success.

Political willingness and transparency are essential aspects of cyber capacity-building projects. Governments are investing substantial financial resources in these endeavours; however, political will from donors is necessary to secure funding. Additionally, transparency in the use of funds is crucial, as it provides stakeholders with an understanding of how the financial resources are being utilised.

Cyber capacity building not only serves as a means to enhance technical capabilities but also as a diplomatic tool to strengthen partnerships. It can be utilised to foster collaborations and build relationships between nations. This perspective highlights the multifaceted nature of cyber capacity building, extending beyond technical aspects.

The Global Forum on Cyber Expertise offers numerous mechanisms for improving coordination in cyber capacity building. These mechanisms include the Clearing House Mechanisms, regional donor meetings, and the publicly available Cybil Portal, which collects data and information related to cyber capacity building projects over the years. Despite these resources, there is a need for increased awareness and effort to enhance global coordination in cyber capacity building.

Inefficiencies and duplication of assistance can be avoided through effective communication and coordination. Examples from Cambodia demonstrate the importance of proper coordination in cybersecurity capacity building. The ASEAN Japan Cyber Security Capacity Building Centre (AJCCBC) serves as a coordination mechanism, hosting training sessions and facilitating collaboration among different organisations. Encouragingly, there is a desire for other donors to explore potential collaborations through the AJCCBC to improve coordination within the ASEAN region.

In conclusion, the influx of cyber capacity building efforts has led to challenges in aligning strategies and activities across various stakeholders. Coordinating these initiatives requires political willingness, transparency in the use of funds, and the use of available resources. Furthermore, there is a need for increased global coordination and effective communication to avoid duplication and enhance efficiency. The examples from Cambodia and the establishment of the AJCCBC exemplify the importance of coordination and collaboration in cybersecurity capacity building.

Tereza Horejsova

The coordination and effectiveness of cyber capacity building efforts face significant challenges due to a competitive environment and a lack of sharing. The competitive nature of the field makes coordination difficult, hindering cooperation and collaboration among actors involved in cyber capacity building. This leads to a lack of project continuity and a decrease in overall impact. Insufficient sharing of information and collaboration among stakeholders also contributes to problems, particularly with duplication of projects that overwhelm recipients and waste resources. Improvement is needed in the needs assessment process, which is currently time-consuming for individual projects.

The issue of projects being supply-driven rather than demand-driven is also prevalent in cyber capacity building. This approach fails to consider the specific needs and challenges faced by recipients, resulting in projects that may not fully meet their requirements. To address this, it is important to listen attentively to the needs of recipient countries and take their unique circumstances into consideration.

Various approaches and platforms have been suggested to enhance coordination and effectiveness in cyber capacity building. The Global Forum on Cyber Expertise (GFC) serves as a valuable platform for dialogue, information exchange, and networking among actors involved in cyber capacity building. The GFC’s Clearing House mechanism matches government needs with the right implementers and donors, while the Cybil portal aids in project mapping, improving coordination and resource utilization.

A sustainability outlook is crucial for lasting and effective impact in cyber capacity building. Projects lacking sustainability may provide quick fixes but not long-term impact. It is necessary to consider the goals of sustainable development and ensure projects contribute to them.

Connecting the development community with the cyber community is also important for improved efficiency and better solutions in the future. Learning from the development community’s expertise enhances cyber capacity building efforts and outcomes.

Promoting openness and increasing communication among stakeholders plays a vital role in enhancing coordination. Transparency, sharing best practices, and facilitating information exchange allow stakeholders to work together effectively.

To overcome these challenges, it is crucial to improve coordination through platforms like the GFC, listen attentively to the needs of recipient countries, promote dialogue and exchange between the development and cyber communities, and foster openness and increased communication. These measures will contribute to more efficient and sustainable outcomes in cyber capacity building.

Regine Grienberger

The analysis examines various aspects of cyber capacity building and explores the challenges and opportunities associated with it. Germany acknowledges that cyber capacity building is a relatively new topic within its foreign office. They recognize its significance as a diplomatic tool to strengthen partnerships and ensure stability in cyberspace. However, one of the primary obstacles is the difficulty in securing funding for such projects. This is largely due to budget restraints and the need for political willingness, which in turn depends on risk awareness.

While funding is crucial, it is not the sole factor in implementing cyber capacity building measures. The analysis highlights the need for human resources with expertise in cybersecurity. Simply having financial resources is not enough; experts are necessary for effective implementation. The establishment of platforms, such as EU CyberNet, is essential for facilitating the identification and development of train-the-trainer programs, ensuring a skilled workforce capable of implementing capacity building initiatives.

Transparency in the investment of trust funds is lacking within the field of cyber capacity building. It is important to understand how the allocated funds are being utilized and what outcomes are being achieved. This transparency ensures accountability and can help in identifying areas for improvement and learning from past experiences.

Understanding the needs of recipients is crucial for a successful cyber capacity building project. This understanding often begins with the development of cybersecurity strategies. Expressing and admitting these needs becomes a starting point for effective collaboration and assistance.

Coordination plays a significant role in the implementation of cyber capacity building initiatives. However, it is important to note that coordination should not favour certain recipients. In development cooperation, there are instances where some recipients are given preferential treatment, while others may be overlooked. Overcoming this bias is essential to ensure fair and equal distribution of assistance.

The analysis also emphasizes the importance of regional cooperation in addition to global cooperation. Mechanisms should be developed that foster collaboration among neighbouring countries, enabling them to assist each other in addressing common challenges in cyberspace.

The field of cyber capacity building should be viewed as a two-way street. It should not only focus on the traditional donor-recipient relationship seen in development cooperation. Instead, it should encourage mutual learning and knowledge sharing between all parties involved to create a more comprehensive and conducive cybersecurity environment.

Digital development cooperation should include cyber capacity building as it is integral to digital transformation. Enhancing the skills and capabilities of public administrations as they transition into the digital realm requires a strong focus on cybersecurity. This includes providing the necessary hardware and software to ensure robust cybersecurity measures are in place.

In conclusion, the analysis highlights various aspects of cyber capacity building, including the challenges of funding, the importance of human resources, the need for transparency, understanding recipients’ needs, the role of coordination, the significance of regional cooperation, and the integration of cyber capacity building into digital development cooperation. These insights provide valuable considerations for policymakers, funders, and implementers in their efforts to build strong and secure cyber capabilities.

Audience

The Budapest Convention plays a crucial role in cybersecurity by providing a legal basis for capacity building programs. These programs aim to ensure consistency and sustainability in equipping countries with the necessary knowledge and skills to combat cyber threats. A key feature of these programs is their emphasis on localized training, where trainees become trainers themselves, cascading the knowledge to others. This localized approach expands the reach and impact of capacity building efforts.

The Budapest Convention also highlights the importance of South-South Cooperation, where individuals from different regions participate in the capacity building program. For example, an African judge in Ghana may train judges in Kenya, fostering collaboration and knowledge sharing. This approach strengthens partnerships and promotes a collective response to cybersecurity challenges.

Regional cooperation plays a vital role in capacity building as well, facilitated by the Budapest Convention. Countries, such as Albania and Montenegro, collaborate to collectively address common cybersecurity challenges, sharing resources and expertise. This regional approach enhances collaboration, stability, and the effectiveness of capacity building initiatives.

The establishment of a point of contact in each country, compliant with international law, is strongly advocated. The 24-7 network provided by the Budapest Convention ensures a stable point of contact, enabling effective coordination and communication during cybersecurity incidents. This promotes international standards and legal obligations.

While a separate legal basis for capacity building programs is not immediately necessary, better utilization of existing legal frameworks is recommended. Utilizing existing treaties that already have capacity building programs ensures sustainable and coordinated efforts.

Donor countries have a significant role in supporting capacity building. Drawing lessons from past development experiences can enhance demand-driven capacity building in cybersecurity at the national level. By leveraging these experiences and knowledge, countries can improve their capacities and contribute to international cybersecurity goals.

Overall, the Budapest Convention serves as a foundation for capacity building programs in cybersecurity, promoting localized training, South-South Cooperation, and regional cooperation. It emphasizes the establishment of stable points of contact and the utilization of existing legal frameworks. Donor countries can improve capacity building efforts by learning from past experiences and improving capacity at the national level, ultimately contributing to global cybersecurity goals.

Session transcript

Calandro Enrico:
Thank you. Good morning, ladies and gentlemen. Welcome to a new session, a roundtable on the successes and challenges of cyber capacity building coordination. So today we’ll tackle these issues of cyber capacity building, and we’ll focus on the key areas of cyber capacity building, which are cyber resilience as well as cyber skills and competencies, and on the coordination of efforts aimed at enhancing cyber capacity building. So in a world where, as you know, state-sponsored cyber attacks, cyber crime, and cyber incidents are proliferating, governments are allocating substantial resources and funding to bolster cyber capacity building. Developing nations are receiving vital support to fortify their cyber defense capabilities, encompassing the ability to detect cyber threats, promptly report on cyber incidents, and respond effectively to cyber attacks. However, the proliferation of cyber capacity building efforts has emerged as a challenge. The task of aligning strategies, priorities, and supported activities among donors, recipients, and implementers in the realm of cyber capacity building has grown increasingly intricate, and we’ll try to discuss all these things today. So our session aims to explore both the achievements and difficulties associated with coordination in the cyber policy area. Today we are privileged to be joined by a distinguished panel, with speakers from different regions of the world. We have various actors from the Internet governance forum space, government representatives, civil society, and the technical community, but also actors from the cyber capacity building community, which is defined in a slightly different way, because there we have recipients, donors, and implementers. So, we will start with the first part of the presentation.
We will explore the repercussions of inadequate coordination in the field of cyber capacity building, and we will share the existing mechanisms designed to enhance coordination in this sphere. Then we will identify what actions donors, implementers and recipients can take to improve the coordination of cyber capacity building efforts. And of course, most importantly, we hope that many of you in the room will participate in the discussion, and we hope you will share also your own experiences and recommendations for enhancing coordination mechanisms in this cyber policy area. We also would like to prepare at the end of this session a policy brief, which will be shared with many stakeholders in this area: the Global Forum on Cyber Expertise, the German Agency for International Cooperation, other agencies dealing with international cooperation and cyber capacity building, the European Commission, recipient nations, implementers, and ongoing projects and initiatives. So, without any further ado, we can start the conversation. I will let all panelists briefly introduce themselves when answering the first question, and I will simply go in the order of the table. So, let’s start with Rita. Thank you for joining us. So, can you tell us why it is difficult to coordinate cyber capacity building projects from your perspective?

Rita Maduo:
Thank you so very much for the question, and it’s really a pleasure to be in this forum today to share my views pertaining to capacity-building projects. First of all, I’d like to introduce myself. My name is Rita Madjoba-Dumiling, and I work for the Botswana National CSIRT. I’m actually a CSIRT responder. Before I jump into the answer, I would also like to emphasize what cyber capacity-building really encompasses, right? Cyber capacity-building actually encompasses all initiatives that drive towards the development of necessary skills, necessary capabilities, as well as the infrastructure that will ultimately or effectively address any cyber security challenges. Now, to go back to your question, why is it difficult to coordinate cyber capacity-building projects? One pressing issue that ultimately affects both developing countries as well as developed countries is actually the rapidly evolving cyber landscape and its complexity, right? We are living in a tremendously evolving technological era whereby we are seeing the emergence of new technologies, and these technologies are, however, taken advantage of by threat actors, so in such cases, we are seeing the emergence of sophisticated threats, sophisticated vulnerabilities. Now, therefore, in this dynamic… So, I think it’s important for us to understand that, you know, when we are talking about this dynamic environment, it rather becomes a challenge in coordinating, especially in reference to, like, strategies and priorities.
Strategies and policies are put in place to address these issues, but implementing them is itself a challenge because of the dynamic environment. For developing countries such as Botswana, this is also expensive: staying agile and keeping up to speed with these issues requires a lot of funding, a lot of training on how to manage complicated vulnerabilities, and a lot of investment in cyber security experts. We also lack capacity in the sense that there is no tailored training intended for the different aspects of cyber security. So, in essence, complexity, the rapidly evolving landscape, and resource constraints are the pressing challenges, especially for developing countries such as Botswana. Thank you.

Calandro Enrico:
Thank you very much, Rita. I think it’s clear that there is a need for support, there’s no doubt about that. And because of these complexities and evolving cyber threats, some organizations are also trying to reduce the complexity around the coordination of these efforts to support countries like Botswana. For instance, Tereza, one of the main goals of the GFCE is precisely to support coordination. So, what’s your take on these difficulties and challenges?

Tereza Horejsova:
Yeah. Thank you, Enrico. Thank you also, Rita, for setting us up with some really excellent points. Good to be here. My name is Tereza Horejsova, and I’m from the Global Forum on Cyber Expertise, the GFCE, working mostly on our regional hubs and regional efforts. Building on what you said already, I will try to provide a slightly more frank assessment of why it is difficult. Frankly speaking, cyber capacity building is quite a tough and very competitive environment. That’s why, for all the actors involved, be it the donors, the implementers, or the recipient countries, the intuitive answer is that less sharing will mean more projects, and maybe more control over what type of projects are delivered, and so on. And that’s a problem, as we will get to later, because when there is not enough sharing, one project does not build on another project; we do not connect the dots as we should. That’s why it is difficult to coordinate. And, as you pointed out a little, Rita, in many cases the capacity building support being provided is very supply-driven rather than demand-driven. So I still think there is a lot of room for manoeuvre in listening to the recipients of cyber capacity building support on what their needs actually are, rather than presuming that we, on the other side, know what the needs are. Thank you.

Calandro Enrico:
Thank you very much, Tereza. It’s very interesting to highlight this issue of the competitive environment. There is a lot of competition, and that is clear also from an implementer’s point of view, because I’ve been working on a project delivering cyber capacity building, and it’s easy to see that there is competition: the funds, even if they are available from a number of sources, are still somehow limited, because these projects require a substantial amount of funding to really deliver on their promises. That’s the reality. So, Claire, from your perspective as a government representative, what do you think about the complexities and challenges of cyber capacity building coordination?

Claire Stoffels:
Thank you, Enrico. Hello, everyone. My name is Claire Stoffels. I’m the Digital for Development focal point at the Luxembourgish MFA, within the Directorate for Development Cooperation and Humanitarian Action. First of all, I wanted to thank you very much for inviting me to participate in this panel on this really relevant topic, on which I hope I can share some useful insights from a donor perspective. From my experience I can definitely say that cyber capacity building coordination is lacking amongst stakeholders, and that we face a lot of challenges when attempting to coordinate: notably diverging objectives and approaches, and duplication of actions. There are, however, a number of positive efforts that have been undertaken, which I will get to a little later. But first of all, cyber capacity building coordination needs to be driven by several parties from within, meaning it requires a really inclusive, demand-driven and context-specific approach by which ownership is fostered among stakeholders at both national and regional levels in order to create sustainable change. I think this encapsulates a key challenge in cyber capacity building coordination efforts. As I said, it requires a regional approach, and because it transcends so many communities of practice, from technical incident responders to cybercrime police to civil society educators, it’s really challenging to gather all relevant parties around the same table. But beyond getting everybody to sit at the same table and actually discuss, one also needs to recognize that the success of cyber capacity building coordination processes is contingent upon operationalizing the consensus at international level and reflecting it in national policies and practices in a way that aligns with national and regional socioeconomic and security priorities. Then another essential component of cyber capacity building coordination is trust.
It sounds very basic, but trust is definitely a necessary component of practical cooperation between stakeholders. However, trust can be challenging to establish when working across so many different policy fields and institutions. Trust can be built through transparency and accountability, and Luxembourg has historically been perceived as neutral and trustworthy, which has definitely had a positive effect on relationship building and on developing different initiatives in cyberspace. Finally, one of the biggest challenges that I’ve encountered in the past year has been the development of scalable models to establish mechanisms to coordinate capacity building efforts. This basically comes down to how a project can be developed sustainably into the future.

Calandro Enrico:
Thank you. Thank you very much, Claire, for highlighting the requirement for an inclusive approach, success being contingent on consensus at the international level that then needs to be translated into national policy, possibly also on the coordination of cyber capacity building projects, and then trust. Trust is so important, right? Trust in cyber security is a recurring theme across so many issues, and also in cyber capacity building. So thank you for that. Anatolie, what’s your take on the challenges of cyber capacity building coordination?

Anatolie Golovco:
Hello, everyone. I’ll try to oversimplify things. Cyber security, from my perspective, is about good people protecting computers against bad people. So the main goal is to teach those good people to have the right values, the right ethics, and the right skills to do the engineering of the process. The fundamental problem is dealing with people. It’s difficult. It’s hard to plan. It’s not like building a construction or a road; you have to adapt to the speed of learning of the people you have in charge of the cyber security process. So what happens very often is that donors have a timeline for the project, and they can’t adapt to the speed of human learning. So they start buying tools, more and more sophisticated cyber security tools. It’s easier to manage the project that way, but you miss the main purpose. You miss the human beings who are fighting against, let’s say, the defects in cyberspace, in the engineering of cyberspace. So paying more attention to the people in the process is the main thing that can help with this complex puzzle. Thank you.

Calandro Enrico:
Yeah. Thank you. Another key issue: the timing of the project. Sometimes a project is not given enough time to deliver in terms of improving skills, because the learning curve might be slower, while there are very specific requirements, especially from a donor’s perspective. I think the recommendation of focusing more on the people, and on how much they can learn in how much time, would probably be a better way of approaching a project than having very specific and hard deadlines. From a donor’s perspective, unfortunately, that doesn’t always work, but we can try to discuss that a little more. Hiroto, from JICA, another donor organization dealing with international development, what do you think about the challenges?

Hiroto Yamazaki:
Thank you very much, Enrico. I’m Hiroto Yamazaki, Senior Advisor on ICT and Cybersecurity at the Japan International Cooperation Agency, JICA. JICA is an official development assistance agency under the Ministry of Foreign Affairs. Over the last five years, JICA has been involved in bilateral technical cooperation related to cybersecurity, mainly in the Asian region; technical cooperation has been implemented in Vietnam, Indonesia, Cambodia, the Philippines, Thailand and so on. Today I would like to share our experience from JICA’s activities. I have three points on the difficulties from JICA’s perspective. The first is that there are simply too many organisational stakeholders to coordinate. Some of them may not be globally identified, which makes coordination difficult. Cybersecurity has many communities divided into several layers, such as private versus government, technical versus policy, and country or region. In some cases, discussions among development partners do not include the communities of specialized security organisations, such as FIRST and so on. In addition, not all organisations participate every time, so even when a group or organisation coordinates something, there will always be organisations that are not included, making full coordination impossible. I have an example. JICA attends a regional coordination meeting: since 2009, Japan and ASEAN have established a framework of cybersecurity policy meetings and working groups, held four times a year. At the meeting, a capacity building session is held to share what kind of capacity building each organisation is implementing and to exchange opinions. This works well, but generally it covers cooperation for government agencies and does not include support from civil organisations, private companies and international organisations such as FIRST, except for the JPCERT Coordination Center. Sorry, I have a lot of examples, but time is almost up, so I will give one more reason. We are a bilateral cooperation agency, so our cooperation is based on a bilateral agreement between Japan and the recipient country. Even if we could coordinate something with other development partners or donors, we still basically have to follow the recipient country’s approach, strategy and needs, respecting the recipient country’s ownership. Sometimes that makes it difficult to coordinate. Thank you very much.

Calandro Enrico:
Thank you. Thanks a lot, Hiroto, for highlighting the number of stakeholders that a donor organisation is supposed to coordinate. I think the regional coordination meeting you described is a very interesting mechanism that could help, although, as you said, including all stakeholders might still be challenging. Louise, from your perspective, which I believe is primarily academic, what is your experience and your take on the challenges of cyber capacity building coordination?

Louise Hurel Marie:
Thank you very much, Enrico. My name is Louise Marie Hurel. I am a research fellow in the cyber programme at the Royal United Services Institute, RUSI for short, a security and defence think tank based in London, though we work globally across different regions. In my own background I worked in think tanks, mostly focusing on Latin America and the Caribbean, so hopefully I’ll be talking from that regional perspective, but maybe also, as Enrico mentioned, from a more scholarly, academic perspective. I have myself been in the position of implementer of many different capacity building initiatives; implementation is not something conducted only by governments, since all stakeholders have a place in implementing cyber capacity building initiatives. What I’ve observed in the past couple of years is that we use the term cyber capacity building, but in fact we’re talking about evolving mechanisms: MOUs that should be in place so that governments can activate them and build an agenda bilaterally; multi-donor funds that are being established; coordination among civil society organizations conducting cyber capacity building, academia, the private sector, and other colleagues at the international level developing agendas on this. So I have three key points. What I have also observed, in terms of context, is that many donor countries are in the second or third wave of developing programs for capacity building, so they are also restructuring the way they establish, let’s say, funds within the government, and which departments need to be brought together.
I think that’s quite interesting, but while you see that, for example, the GFCE, the Global Forum on Cyber Expertise, has the Cybil Portal mapping all the different capacity building initiatives that are publicly recognized, seeing lots of programs doesn’t necessarily mean coordination is there. So the first point I would make is that there needs to be a better understanding of how coordination happens amongst the countries that are willing to support. From a supporter’s perspective, one big challenge is coordinating investments in a particular region: some donor countries are more interested in some regions than others, and when there is no coordination among them, you can have one country receiving support from multiple other countries. How do you make sure you’re not overloading the recipient country, which also has to coordinate internally? So duplication is something we really need to think about. The second point is domestic buy-in. As someone who, as I said, worked in Latin America for many years, political buy-in is quite fundamental. If you don’t have political visibility over these capacity building programs, it’s very hard to ensure sustainability of implementation. You might have a civil society organization or a think tank, as I used to work in, trying to implement and bring visibility to cybersecurity capacity building, but if the government doesn’t see it as a priority, it’s sometimes very hard to gain traction. That is a very real challenge for thinking about coordination and sustainability.
And the final point is that I think we need to break down the term cyber capacity building a bit for us to have a better, more focused conversation; maybe we are challenged on the coordination element because we haven’t broken it down. So, as the good academic that I am, I would break it down into at least three sub-categories. First, there is traditional cyber capacity building: skills, and longer-term or short-term projects taking, let’s say, more whole-of-society approaches. A second element could be CCB for crisis response: for example, Costa Rica having to respond to a large-scale incident. That is a very different context for thinking about capacity building and investment, in a particular recovery scenario. And the third is capacity building for conflict or post-conflict recovery, which, as we’ve been seeing in Ukraine, for example, is a whole different landscape of investment and capacity building efforts. So, first, I think we need to break the discussion around capacity building down into the context in which it is applied; it’s very different when we’re talking about peacetime versus conflict or crises triggered by a particular incident. Second, domestic buy-in: we need to ensure there’s political buy-in. And third, coordination amongst the countries willing to support, given each of their regional priorities, so that we don’t duplicate efforts. So those are my three points.

Calandro Enrico:
Thank you. Thank you very much, Louise. Many interesting and thought-provoking points, especially on having a bit more granularity in the term cyber capacity building, because all these efforts of course have different goals, and I think it’s a good categorization between skills, cyber capacity building for crisis response, and cyber capacity building for conflict or post-conflict recovery. Regine, based on your experience as a cyber diplomat, what do you think about the challenges of cyber capacity building coordination?

Regine Grienberger:
Thank you, Enrico, and yes, I’m Regine Grienberger, the German cyber ambassador. I would subscribe to almost all the elements that have been mapped out here as parts of the difficulties we meet when coordinating cyber capacity building, and I would like to add perhaps four more. The first one, spoken from a donor’s perspective: it is actually difficult to fund cyber capacity building projects. For us in Germany, and this is, what, the 18th IGF, so quite a lot of time has passed, it is still a new topic in the Foreign Office, and it is a new experience for us to really go into the details of cyber capacity building. But we realize that it is not only capacity building for the sake of increasing cyber security; cyber capacity building is also a diplomatic tool to strengthen our partnerships, to strengthen the stability of cyberspace, and thus the security of us all. It is difficult to fund CCB projects because it needs political willingness, and political willingness depends on risk awareness; many people at the decision-making level of the Foreign Office, for example, are not as risk-aware as people at the working level, as Rita described from a CSIRT. The second reason is, of course, budget constraints. The third reason is that we have very short-term, cameralistic planning, for one year only, while there are mid-term needs and even long-term strategies, as you described when you broke down what capacity building actually is; our planning period somewhat contradicts the recipients’ horizon. And the last element that makes it difficult to fund is the declarability of the expenses: because this does not always fall within the definition of ODA, Official Development Assistance as defined by the OECD, we have to take it from other funds, other budget titles.
Another element I would like to add to the difficulties: we need to free up human resources to do this. It’s not only about money; we also need experts. We have established EU CyberNet, which should be a platform to find experts and also to develop train-the-trainer programmes, but this has really come out of the experience that we don’t have enough people to implement cyber capacity building measures. The third element is transparency. What we need is transparency, and what we have experienced, for example when we invest in trust funds to finance cyber capacity building measures, like with the World Bank, is that we are really missing the kind of transparency that lets us understand what happens with the money and what the recipients do with it. And the last element I would like to mention: to really coordinate effectively, we have to know the needs of the recipients, which requires them admitting and expressing their needs, and often it starts with a cybersecurity strategy, which is often missing and which would give structure to what the needs are in a particular place. Thank you.

Calandro Enrico:
Thank you, thank you very much, Regine, for sharing, really from a donor’s perspective, the difficulties of funding cyber capacity building projects. You touched upon political willingness from a donor’s perspective, and Louise mentioned political willingness from a recipient’s perspective; sometimes there is a mismatch there. Issues of transparency are also very important, because there is a need to understand what is happening with the money, so having monitoring and evaluation mechanisms while a project is implemented could help with that. And a very interesting point: let’s not forget that cyber capacity building is also a diplomatic tool to strengthen partnerships and to work on security issues with other countries. So let’s move on and try to understand the consequences of this insufficient coordination in cyber capacity building. For instance, Rita, your organization, the national CSIRT, is primarily a recipient, so if coordination is insufficient, what are the repercussions for your own organization?

Rita Maduo:
Okay, thank you, Enrico, for that question. As a member of the national CSIRT, I have had first-hand experience of and exposure to the repercussions of insufficient coordination of cyber capacity building. What we have identified is that insufficient coordination ultimately leads to disjointed cyber security efforts, leading to gaps and vulnerabilities in the country’s overall cyber security posture and making it easier for cyber criminals and malicious actors to exploit such weaknesses. Another consequence could be insufficient resource allocation: if the different stakeholders within a nation do not come together and coordinate on cyber security capacity building, resources may be wasted, efforts may be duplicated, or resources may be misallocated to areas of lower priority, and this inefficiency ultimately limits the overall effectiveness of cyber security programs. And with insufficient coordination there is certainly limited information sharing, because effective cyber security relies entirely on timely and accurate information sharing between different stakeholders, such as government agencies, civil society, the private sector, or even international partners. So insufficient coordination can hinder the sharing of threat intelligence, making it harder to detect and respond to cyber incidents. Thank you.

Calandro Enrico:
Okay, thank you very much, Rita. Those are really worrisome points, because it seems that lack of coordination could result in deteriorating a country’s cyber posture; you touched upon issues of limited information sharing and increased cyber vulnerability, so the effect of cyber capacity building could end up completely contrary to its final goals. Tereza, there are some mechanisms at the GFCE, like the clearing house and others, so from your perspective, what are the repercussions of insufficient coordination between all these actors?

Tereza Horejsova:
Well, thank you, and I feel that in answering this question we will be reconfirming a lot of what has been said. So yes, what are the consequences? Less impact than we could have, and definitely insufficient use of the limited resources, as several donors on this panel have already stressed. There is a lot of duplication going on: if you’re a recipient, you might have got into situations where the same or a very similar project was offered to you by various implementers, in some cases not knowing that it had already been delivered, or donors trying to support a project that might already have been delivered in that country. Another point worth mentioning is that we are often overwhelming the recipients. If there is not sufficient coordination, imagine that for every single project an implementer would come and, for instance, want to do a needs assessment for their particular project. A needs assessment is very time-consuming, and in this sense it is very unfair to overwhelm the already limited capacities in a given country. And yes, maybe it is a bit utopian to expect that if a needs assessment is a deliverable in your project, you would share it with other implementers; I understand this is probably not a typical situation, but I feel it is a topic we really need to talk about. Do you want me to go to the mechanisms now or later? Yes, okay, perfect. So now, sorry that I will be talking a little about the GFCE, the organization I represent. For those of you not familiar, we are a platform for actors involved and interested in cyber capacity building, with over 200 members from all stakeholder groups, and the main idea is exactly to provide a platform that will hopefully naturally lead to more exchanges, more networking, more conversations about cyber capacity building.
A few concrete mechanisms that we have experimented with. One, and you’ve named it, Enrico, is the clearing house mechanism, which in practice means that, let’s say, a government would express a concrete demand or need, and we would try to clear it through the richness of the GFCE network, connecting them to the right implementers and, in an ideal scenario, to the right donors. It’s not straightforward, for sure; the idea is probably good, the practice can of course be very complicated, but we feel this is an experiment worth pursuing further. We also try to organize donor alignment meetings in various regions, where we try to provide space for donors to come together, talk, and exchange notes. Again, it is delicate and tricky, and it is unrealistic to expect donors to come and share all their intentions, plans and strategies, but we feel some progress is being made in this regard, and if the GFCE can have even a minimal role in helping to facilitate the discussion, we would be very happy to continue doing that. Another mechanism available to the community is the Cybil Portal, available free of charge at cybilportal.org. This is a space where we try to provide a mapping of the projects available for a specific topic in the field of cybersecurity, for a specific region, or for a specific country, so it is possible to filter and play around. The resource will be more valuable the more comprehensive it is; of course, I cannot say we have 100% of projects everywhere covered. It relies a little on implementers and donors sharing this information with us so that we can feed it into the platform, and of course it also relies on us and our agility to keep the portal as up to date as it can be.
So this can be a basic resource to see: okay, what projects in the field of, I don’t know, cybersecurity skills have been implemented in Botswana, by whom, and from what angle? Again, that is cybilportal.org. I also, of course, have to stress our regional efforts a little. I talked about this already: we really think there should be more demand-driven capacity building, and it is not something that should happen from a headquarters somewhere; being much closer to the situation on the ground is essential, which we are trying to tackle through our regional hubs. And maybe to conclude with a general mechanism, building on what others have said, and Regine, you’ve been very frank in your response: the question of short-term versus long-term planning. If we have a project that is a quick fix for something but lacks a sustainability outlook, it is not as impactful as it could be in the long term, which I feel is the common goal of all of us. Sorry, I took more time now.

Calandro Enrico:
Thank you. Thank you, Tereza, for sharing. So there are many mechanisms available from the Global Forum on Cyber Expertise: the clearing house mechanism, the regional donor meetings, and the Cybil Portal, which is a great resource, because all the information is publicly available; you have been collecting data and information for a number of years, so you actually have a historical perspective, and it is available not only to donors but also to implementers and recipients. I think it is a great way of increasing transparency for the broader global community dealing with cyber capacity building. So the information is there, it is available; sometimes it is a matter of making an effort. Interestingly enough, I believe there are still organizations and donors that unfortunately do not know about these tools, but those are somehow also the foundations for improving these mechanisms globally. So I really invite everybody in this room, if you are involved in cyber capacity building, to have a look at these tools before embarking on your next project, because that could really help you improve your coordination efforts. Claire, from your perspective, what are the consequences of this insufficient coordination? And then, if you would like, talk about some of the mechanisms to improve coordination.

Claire Stoffels:
Yes, I don’t want to repeat anything that has been said already, the excellent points that were made. From a donor perspective, there is definitely a risk of duplication and lack of coherence because of the proliferation of actions in the cyber capacity building space. Coordination is therefore essential to increase situational awareness, and it allows us to learn whether some of the needs identified in a country, or that will be identified by a project, have already been addressed by other CCB projects. That is also why platforms like the ones from the GFCE are essential, especially for countries like Luxembourg, where we do not have as many resources to identify what is being carried out by other stakeholders in the field. So I want to address some mechanisms. As I just said, as a small country with limited resources, Luxembourg really has to foster multi-stakeholder approaches across sectors, not just in development cooperation. In digital for development, cyber capacity building is one of our main intervention sectors, and it is a key priority at national level, reflected in our policies, in our administrations, and in our private sector, and it has really trickled down
into development cooperation. Together with our implementing agency LuxDev, as well as other actors, we have fostered a lot of partnerships to carry out CCB, sorry, cyber capacity building interventions. We’ve coordinated efforts with our national cybersecurity agency in the framework of our projects, because we try to identify which needs and gaps can be filled by the different partners we work with. At the European level, Luxembourg is a founding member state of the Digital for Development Hub, which you might have heard of. It’s a global platform launched by the European Commission in 2021, which aims to foster digital cooperation among EU member states to promote digital transformation in partner countries. The D4D Hub works on different thematics and has different working groups dedicated to those thematics, which aim at fostering discussions and initiatives, among which is cybersecurity. Luxembourg shares the co-lead of this thematic working group with France and the European Commission, and the purpose of these working groups is really to provide a forum for information, best-practice and experience sharing between member states. We try to involve different external actors as much as we can on a regular basis, and it has been more or less successful: successful in the sense that it has created an informal forum for technical levels to exchange and share practices; less successful in the sense that, I would say, most European member states still have limited resources dedicated to digital for development, and therefore I don’t think it has reached its full potential yet in terms of how much information and knowledge sharing and coordination capacity it could carry. Allow me to share just one example of how the cyber thematic working group can actually work in practice. The European Commission is currently formulating a new project focusing on Sub-Saharan 
Africa, with one component on cybersecurity and one on e-governance; I’m happy to get into the details a little later. In parallel, Luxembourg is formulating a project at the bilateral level with our implementing agency LuxDev and the African Union Commission, with similar, complementary actions. So our respective formulation teams are now in contact, also with the African Union Commission and other stakeholders on the ground, to ensure that both projects are actually complementary and make efficient use of resources: that we don’t duplicate needs assessments and can base ourselves on a single needs assessment, and that we have the same contact points within the African Union Commission, so that they also feel that we are coordinated on our side and that it’s not going in every direction. We were able to share this information and our respective objectives through this thematic working group, which is, I think, quite a good example of how that can actually work in practice. Luxembourg has also joined other coordination platforms and practitioner groups, such as the GFCE and EU CyberNet, which is a great platform as well, and those have proven to be very beneficial, again, for a country with limited resources dedicated to D4D and small administrations to carry out these initiatives.

Calandro Enrico:
Thank you. Thank you very much, Claire, for highlighting some additional efforts from the national and European perspective. Maybe not everybody is familiar with this Digital for Development Hub, which is a European Union mechanism between member states where, as Claire said, they can share their priorities in terms of digital development and assistance to various countries. Of course, as you said, it’s got its own challenges, but it only started in 2021, so I think it’s good and significant to see that there is an effort towards improving the coordination of these efforts. Thank you for providing concrete examples from that point of view, and of course also for raising the issue of national coordination and the problem of not having enough resources, as Regine also highlighted before. So Anatolie, from your perspective, what are the consequences of this insufficient coordination, and can you highlight some other mechanisms in place to improve coordination that maybe we do not know?

Anatolie Golovco:
Thank you. Yeah, I’d like to start by elaborating a little on what Tereza just said about the competition of donors. What I’ve seen in the last year, since I’ve been serving my Prime Minister, is that sometimes the beneficiaries lose the point, they lose the scope of the project. We see the project as a process, but we forget why we started it. When the scope of the project is just to buy some hardware or software, it’s very clear: you have a shopping list, you split the shopping list between donors, you buy it, you put it in place and expect people to use it. It’s more complicated when you have people involved, because sometimes you have, for example, a training programme, but you don’t have enough brains to put that knowledge in; you have a shortage of people to train. I can give you an example of why it happens that sometimes the people don’t fit the cyber projects we have. We had a discussion with the European Commission, especially on the topic of a nationwide approach, where we have to improve cybersecurity in the regions, in the local authorities, and I discovered that the same problem exists not just in Moldova but also, for example, in Estonia. Because of this big decentralization of power, you have local authorities with a lot of autonomy, but at the same time, with this autonomy, they have to serve themselves, and what we discovered is that cyber, or IT in general, in the regions is handled by private companies. So the Commission said, we can’t invite non-public servants to trainings for cyber, and we then had to find a mechanism to fit in those employees of private companies who serve the infrastructure of the state in the regions. It’s not easy, because you have to find the legal mechanism and adjust the project scopes. So, as I said earlier, working with people is hard, and when you have technical people it’s even harder, because they are special. 
The solution to that is obvious: you need better planning. But it’s not planning that the donors have to do; it’s planning that the state actors have to do, in terms of making this wish list to the donors. That’s the solution. Regarding mechanisms: in the last six to eight months, we developed the following mechanism. Since 2015 we have had the discussion that the Prime Minister needs one single point-of-contact person in his office to be the window to the state. They found that person, it’s myself, and I organized the Council of Cyber Security in the Prime Minister’s office. We are spreading the Prime Minister’s vision across the institutions, to give an understanding of what we wish to achieve and to give the clarity that is sometimes missing in all cybersecurity projects. Another mechanism we have is small groups. This council is usually between 30 and 50 people, and when you have 50 people in the same room all talking together, it’s not efficient at all. The council usually meets every month, and every two months we have meetings with a rotation of different donors, usually in small groups of around five to seven people. These small groups deliver peer review between projects and between the activities of the donors, and it’s very, very efficient, because they can adjust their steps in delivering the project. Then, after all this coordination and identifying what has to be done, it’s up to our Ministry of Economic Development and Digitalization: this ministry puts all of that into policy, and it becomes a law or a government decision, to have it on paper, let’s say. So this three-layer mechanism is very efficient for now. 
We’ll see how we’ll move on critical infrastructure, because critical infrastructure goes beyond cybersecurity, and we’ll need extra coordination, because it’s a different profile when you have to take care of cables and other physical assets. Thank you.

Calandro Enrico:
Thank you very much, Anatolie, for highlighting some very important issues. I like the point on the need for better planning, not only from the donors’ perspective but also from the state actors’ perspective, because sometimes, as we highlighted before, there might be a lack of transparency at that level. I think some of the mechanisms you have identified at the national level are really great and could probably be replicated in other contexts. I like the small working groups because, of course, we have all been part of several working groups, and working with fewer people might be easier and more efficient, right? And your role of trying to coordinate all these smaller groups is something very concrete that other countries or mechanisms could try to replicate. So thank you very much for that. There is also the important point that the recipients are not only or always government officers but might also be private sector representatives, and that might create a problem of formality from the European Commission’s perspective, because they cannot really support them directly. Hence the difficulty, from a recipient perspective, of finding legal ways to show that those people are somehow acting as public sector employees, because they are the ones dealing with cyber capacity issues at the national and government level. So thank you for that. Hiroto, once again, from the JICA perspective, what are the consequences of insufficient coordination in your experience, and what mechanisms did JICA put in place to improve that?

Hiroto Yamazaki:
Thank you very much. From the development agency’s perspective, inadequate coordination in cybersecurity capacity building leads to negative effects such as reduced efficiency, failure to maximize development impact, and lack of sustainability. Looking at the negative effects in more detail, previous speakers have already mentioned duplication of assistance, lack of resources or too many resources, siloed approaches to assistance, and so on, so I will skip those parts. But conversely, by promoting coordination and cooperation, it is possible to eliminate these negative effects. I have two examples: one is a bilateral effort, the other is a more multi-stakeholder effort. Regarding duplication of assistance, I have an example from Cambodia. Our project started in May this year, and it included assessment activities for the national CSIRT. But it was discovered that Cyber4Dev had already conducted such an assessment a few years ago. In this case, since the project had not yet started, we had a chance to talk with our Cambodian partner and Cyber4Dev, and we decided to use the results of the Cyber4Dev assessment instead of conducting the same assessment, so that we could reduce the duplication of assistance. The other example is more multi-stakeholder. JICA is conducting a technical cooperation project in Thailand, where there is a training centre called the ASEAN-Japan Cybersecurity Capacity Building Centre, AJCCBC, established in 2018. In this training centre, we are coordinating with ASEAN member states and the ASEAN Secretariat so that we conduct training at least six times a year, as well as a CTF contest, to meet their needs. 
So in addition to the training provided by Japanese training companies and institutes, we also discussed with other donors and partners. Through this AJCCBC framework, more training has been provided with, for example, CISA of the United States, which offered an open-source cybersecurity evaluation framework, and we are now planning to provide more training in coordination with the FCDO of the United Kingdom, the ITU, and other organisations. So this AJCCBC programme is a training centre for the ASEAN region, but it also has a coordination function, to meet the needs of ASEAN and to reduce duplication. Okay, that’s all, thank you very much.

Calandro Enrico:
Thank you. Thank you very much for sharing a concrete example of two projects collaborating in order to avoid duplication, and also for sharing the existence of this centre, which I believe other donors beyond ASEAN and those you have mentioned could actually collaborate with in order to improve coordination within the ASEAN region. So I would invite everybody working in that region to get in touch with you and try to understand better how to work together through that centre, because it’s not only a physical centre but actually a coordination mechanism. Louise, same question for you: insufficient coordination, and some of the mechanisms that you might have identified to improve coordination?

Louise Hurel Marie:
Absolutely, and we’re right at the last mile of the panel now. I also don’t want to say the same things; I think we’re all biased over here, we understand the landscape and the challenges very well. But in terms of fragmentation of efforts, I would say that one of the consequences is really poor sustainability measurement. And that goes for an individual donor or recipient country, but also for different donor countries trying to assess the landscape. I don’t think we have good measurements for longer-term sustainability efforts. We’re very good at measuring KPIs for an immediate project that you’re implementing, but once you’ve implemented the project, that longer-term element is lost among so many other layers; I think Regine alluded to the bureaucracy of government and how it’s sometimes very hard to keep track of things beyond the financial year. So we need to be very realistic about how we build effective measurements of sustainability over a longer period of time. Is that something that should be discussed in other forums internationally? Be it at the ITU, be it at the UN, whatever works, to make those two communities meet, the CCB and the development community. This also leads to a higher propensity for one-off efforts or effects: you only measure impact based on the particular project you implemented, so you don’t have a holistic view. And I don’t think that’s just applicable to donor countries or recipient countries; implementers as well, civil society organizations and think tanks working on this, need a bigger measurement of how we actually see this as more than a one-off. 
Is there something in the recommendations for you as an organization? I can say, from a think-tank perspective, having conducted CCB in different countries and also helped with implementation: how do you actually provide recommendations that are sustainable? Is it about developing a longer-term capacity building programme in a region? I think we need to think more about that. In terms of other consequences, domestically from the donor side, as already alluded to, there are multiple departments dealing with different types of capacity building efforts within different governments. One example, and it’s not to put the U.S. in the spotlight: with Costa Rica and the Conti ransomware incident, you had USAID providing some support, and then you had the Foreign Military Financing programme providing other types of support to Costa Rica, which shows that there are different parts of government doing different types of CCB. Whether they’re coordinating, I have no insight into the particularities of the U.S. government, but as someone observing from the outside, based on the public information we see, I think this is something to consider in terms of the consequences of insufficient coordination from a donor perspective internally. And then, domestically from the recipient side, this can lead to a really overwhelming effect when recipient countries receive lots of offers. 
That is assuming a very good scenario where a country is receiving offers. But from a crisis-response CCB perspective, and going back to what I said previously, when you look at countries such as Montenegro: at RUSI we hosted a discussion at the sidelines of the Open-Ended Working Group on ransomware and requests for assistance, and a colleague from Montenegro talked about how, when they were attacked, it was very hard for them to designate one POC to respond to the multiple countries that were trying to support them at that moment. So that is a good scenario, where you actually have lots of countries wanting to support, but then you need to be realistic internally about whether you have a POC, whether you have a national cybersecurity agency with the authority to do that. So there are lots of different nuances to thinking about coordination: domestically from the recipient side, from the donor side, and also from organizations trying to measure sustainability more broadly. Very briefly on mechanisms, still sticking to my breaking down of the discussion and the language and terminology around CCB. More broadly, as I discussed, the mechanisms that we have for broader CCB are already out there; there are MOUs. For example, Australia has done very interesting work over the past couple of years of tying the Boe Declaration, looking at the Pacific Island countries, to particular funds for training on CERTs within the region. So there is a sequencing of actions, and there are MOUs that come before that and are renewed after a while. These are things that have been working quite well in different regions; there are different experiences. 
When it comes to crisis response mechanisms, I think there’s still a lot of experimentation in how governments seek to institutionalize this, in, let’s say, Costa Rica, Vanuatu, PNG, and others. What we’ve observed, from a research perspective, is that almost all of them were preceded by MOUs. So countries that provided assistance in the context of a crisis already had a previous relationship, which goes back to the point on trust and how you actually have to build that trust before you conduct any kind of assistance. There are also other evolving mechanisms: in the EU there is the PESCO framework, and there are the Cyber Rapid Response Teams, which are a way of getting particular EU countries to respond to particular crises, like cyber crises. That’s still at an experimental stage, even though the PESCO framework has been there for a couple of years, but I think it’s one way of thinking about how mechanisms can be explored within a group of countries. And finally, when we’re talking about crisis CCB, we see some mechanisms that go back to the broader lens of CCB. One other example we’ve identified is that Slovenia, France, and Montenegro set up a centre for cybersecurity capacity building in the Western Balkans in 2022. That is an example where Montenegro faced a very large-scale cyber incident, a ransomware attack, received crisis response assistance, and is now going back to ask: what are the mechanisms we can build together to have a longer-term, sustainable impact, and how can different countries come together to institutionalize that? 
So I think we need to see those different types of CCB as complements to each other, depending on the context, and that last example shows that you can go from crisis response back to a broader CCB lens, using the crisis response as an opportunity, with its political visibility, to set up new mechanisms. So hopefully that provides a bit of an idea of the landscape, based on this typology, let’s say.

Calandro Enrico:
Absolutely, thank you. You touched upon so many important points: the issue of sustainability, trying to find long-term sustainability mechanisms, and also, from our perspective, trying to have a broader understanding of impact beyond the single project. I think that would be great, and I don’t think it exists, unfortunately. I’m sure that all European-funded projects somehow have to demonstrate their impact, but that really happens at the project level. What about globally, observing the impact of all these projects? What will be the result? Do they look at sustainability? So thanks a lot for that, and also for some of the mechanisms, and how, as you said, from a cyber crisis we can then identify other, more long-term coordination mechanisms. Those are great examples. Regine, I think you are the last one at our table: is there anything else that you would like to add on insufficient coordination, and some of the mechanisms that Germany has put in place to improve coordination in cyber capacity building?

Regine Grienberger:
Yeah, not to be repetitive, I would like to just throw some ideas around, and we’ll see if we can follow up in the discussion. I think it’s important to flag that there is often a misunderstanding of what coordination actually is, and I have to explain in my own system that coordination does not mean telling other people what they should do. We have, for example, situations where a cyber capacity building project is established in a way that cannot be coordinated in the same way as one where you start with a white sheet of paper and map it out according to the needs of the recipients. In such cases I just go to the coordination meeting and tell them, okay, that’s what I’m going to do, because there is no leeway on my side to do something different, or to choose a different recipient. A second element: as in all development cooperation, there are darlings, darling recipients, and others, and we should be aware of that; a coordination mechanism should also help us overcome this bias of always leaning towards certain recipients who are well prepared to receive our assistance. Then you mentioned the Montenegro centre. What became very visible in, for example, the Albania case, when the ransomware attack hit them, was that they turned for help, including emergency support, to countries far away: to the US, to France, to others. They didn’t ask their neighbours, although familiarity with the structures might have been an argument to ask the neighbours. 
So I would say mechanisms should also look at regional cooperation, not only global cooperation. In cyberspace there are no borders, so neighbourhood doesn’t mean the same thing as in the analog world, but nevertheless, also in cyberspace there are reasons to ask your neighbours for help. On mechanisms: Claire mentioned the D4D Hub. What I also find very promising are the regional tables that the European External Action Service is setting up, for example for the Western Balkans and now also for Moldova, to integrate foreign policy and security policy considerations with development or assistance considerations, so as to bring these two perspectives together. Development cooperation often seeks to be not so political, more technical, but of course there are many reasons to integrate this technical perspective with a more foreign-policy perspective. Something I would also like to add: I think there is a good case, on the donor side, for also having a top-down approach to coordination, starting with the language and the paragraphs that we have in the OEWG reports and the GGE report from 2021, because there is a good outline there of what cyber capacity building is like. There is this very good idea, expression and notion of a two-way street. We haven’t talked about this yet, but I think cyber capacity building should in principle be a two-way street, so that north-south, south-north, south-south and north-north cooperation are all included, and not only a donor-recipient relationship as in development cooperation. 
So the reports on the cyber norms provide us with a very good general concept, and the programme of action now under discussion will give us even more opportunities to make it sustainable, so that we do not reopen the case every five years and renegotiate the same document, but have a more long-term perspective on where we would like to arrive one day.

Calandro Enrico:
Thank you. Thank you very much, Regine, for highlighting how regional cooperation could improve these coordination mechanisms, and the fact that it is not always the preferred way to ask for support; thank you for the example of Albania, which had the Montenegro centre next door. It’s also very interesting how these new discussions on linking policy considerations and development considerations are now growing, so that the political aspects and the technical ones are trying to find a way towards a better dialogue. And for those who are not familiar with it, I would invite all of you to read the Open-Ended Working Group’s final report on responsible state behaviour in cyberspace, which highlights some of the principles of cyber capacity building, one of which is this two-way street: the relationship between donor and recipient, south and north, needs to be revised and understood as being on a more equal basis, also in cyber capacity building. So I would now like to open the floor to questions or additional comments before we wrap up and conclude. Okay, there is somebody, yeah, you can start.

Audience:
Thank you. Hi, I’m an attorney, a lawyer, so I like to consider a legal basis as a sustainable way of going forward. It occurred to me that, of the different things we talked about, the OEWG is really just resolutions that have been passed, whereas the Budapest Convention provides a legal basis. I was thinking about the work that I’ve done with them and what it helps to do; their programmes have sustainability because of the legal basis. I’ll mention three things. The first is the legal basis itself. The second is that their capacity building programmes make sure they don’t do what we call drive-by trainings: if they’re training judges, they make sure the training is localized, and after that those trainers will train others. And on the point you made about south-south cooperation: an African judge in Ghana, and I was part of this just a few weeks ago, trained judges in Kenya, as an example, and someone from the Philippines is training someone in Morocco; this happens under their auspices. Also, the T-CY, the Cybercrime Convention Committee of the Council of Europe, provides a measurement mechanism that is global, for all countries, to check what you’re doing when it comes to capacity building, and they do this consistently, maybe not too publicly, but it’s something to look into. And of course they do regional cooperation constantly, for Albania, Montenegro; I’ve been part of many of these exercises where they brought many of these countries together. But the most important one, and I’ll leave you with this, is the point of contact, the 24/7 Network, which gives you a stable, internationally and domestically law-compliant, sustainable point of contact, where everybody can get in touch and talk to each other. 
And it’s not a question of saying, well, which country does what? No. Every country has one designated point of contact; it is known who they are, and if there’s a problem, they know exactly whom to contact. So this might be just one alternative. Thank you.

Calandro Enrico:
Thank you very much for your comments, and for adding some other mechanisms for coordination, more from a legal perspective. I think that was great. So I’m wondering if somebody would like to address the question of a legal basis for cyber capacity building, for sustainability and better coordination, because it’s something we actually didn’t touch upon. Is there anybody who would like to? Okay. Yeah. I think it’s an interesting question, because actually there isn’t one, and creating one could also be a way of creating more clarity and somehow addressing some of these issues. I think it’s a good question.

Audience:
At the risk of repeating myself, my apologies: it wasn’t really a question for the panel. I’m not suggesting we should have a legal basis for capacity building as such. I’m just saying that there are legal bases in treaties which have capacity building programmes, and that brings sustainability. Maybe using them more is something to think about; donors should look at them and say, that’s something stable, it provides structure, and it’s legally sustainable on an international basis as well as domestically. So that’s it. Thank you.

Regine Grienberger:
Actually, also, understanding it as a tasking for both sides, both donors and recipients, if you want, or partners in crime here.

Calandro Enrico:
Thank you. Thank you very much for that. There was another question.

Audience:
Thank you so much, and thank you again to the panelists for these very comprehensive points of view from different regions and areas of expertise. My question, slash comment: first of all, I’m Yasmin, I work at the ITU in cyber capacity building project implementation, so many of the things that were mentioned really resonated. My question relates to this challenge of demand-driven capacity building. Donors obviously have different political priorities, different budgets, and everything else that was mentioned, but we need to remember that this is not only a cybersecurity issue; there are a lot of lessons that can be learned from the development world. There are decades of development work, and there are mechanisms in place like donor pledging conferences and donor coordination conferences. So something that can be done on the donor side, at the national level, is to look into what is being done in other government agencies and divisions to see if there are lessons learned there, and of course this can then be improved at the international level. In short, my question is basically: how can donor countries look inward at their national lessons learned when it comes to development, so that we can improve demand-driven cyber capacity building? Thank you.

Calandro Enrico:
Thank you very much. One of our donor countries would like to answer the question. Thank you.

Claire Stoffels:
Thank you for your question. I think the D4D Hub that I mentioned earlier actually gives a very good platform to do that. What we aim to do within the cyber thematic working group is exactly that: to have the technical-level people share good practices and lessons learned, what went wrong in your project and how to do it better, and to really provide an informal platform to exchange on that basis. Sometimes it's a bit difficult, actually, to gather the information that you mentioned, and those are the important elements that we need in the inception and formulation of a project. So, yeah, I think the D4D Hub provides a very good stage for that.

Calandro Enrico:
Teresa, would you like? Teresa, please.

Tereza Horejsova:
Is it on now? Yes, okay. Thank you, Yasmin, also for your question. I think it's an important point: learning from the development community, but also connecting the development community with the cyber community, which is also of quite some interest to the GFCE. Together with our partners, we are organizing a conference on global cyber capacity building in Ghana at the end of November, and this is exactly one of the objectives: to bring these communities together a little bit more, for more efficiency in the future, hopefully.

Calandro Enrico:
Thank you, Teresa. Regina?

Regine Grienberger:
Yeah, I would like to add one aspect, and that is that it is also true the other way around. If you do digital development cooperation, meaning development cooperation for digital transformation or for the digital transition of public administration, you should include the cyber capacity building part, because it doesn't make sense to help an administration transition to a digital system without also enhancing the skills, the capabilities, and the hardware and software that are necessary for good cybersecurity. So I would say it works in the other direction too.

Calandro Enrico:
Thank you. Donia, probably there is an online question.

Donia:
Yes, we do have a question from online participants. First, a message of support in response to Claire's earlier intervention. And then the question: can we see capacity building as end-to-end across the whole community, rather than just technology solutions? What are the panel's views on thinking of capacity building as more than just technology solutions, but also building community awareness, legislative frameworks, government policies, et cetera?

Calandro Enrico:
Thank you very much for that. Would anybody like to answer that? My understanding is that the question is about seeing capacity building beyond the technical issues: building awareness, a legal basis, and so on, if I understood the question correctly.

Louise Hurel Marie:
Yes, sure, and thank you for the question. Maybe I'm biased, because I feel we have probably burst that bubble a lot. A huge part of that effort, and I take my hat off to them, comes from the GFCE, the Global Forum on Cyber Expertise, which tries to provide that platform. There are working groups on incident response and groups on cyber diplomacy and norms, and those communities meet up in the coffee breaks. I think it's about those small efforts of mixing communities and having the space to do that. So from my perspective, the CCB discussion across communities has gradually progressed at the international level through these mechanisms. But for sure, when you look domestically, depending on the country, the culture, and where they are on the development spectrum, it's still very challenging. I remember, in my previous job at a think tank based in Brazil, sometimes speaking to different departments across the government, or to people working more on CCB for, let's say, CERTs. It's a very different kind of community. So there's also an effort required from us, civil society organizations and think tanks, to do that as much as possible when we're planning and designing a particular project. It's part of how we design engagement with those communities within our own stakeholder group.

Calandro Enrico:
Thank you very much, Louise. We only have four minutes left, so very quickly before we conclude, I would like to invite the panelists to offer a maximum of two takeaways or recommendations to improve cyber capacity building coordination.

Rita Maduo:
Okay, thank you, Enrico. I'll be brief. Effective cyber capacity building requires a multifaceted approach and sustained commitment from donors, implementers, and recipients. Drivers and coordinators of such efforts should be intentional about onboarding new strategic members, so that the voices of all parties are diversified and initiatives do not remain stale because the same players are at the table at all times. By working together and following these actions, we can effectively enhance national cybersecurity resilience and capabilities.

Calandro Enrico:
Thank you. Thank you, Rita. Teresa.

Tereza Horejsova:
So yes, let’s consider this openness that we want to build to the extent possible as a win-win-win situation. And without wanting to sound pathetic, let’s really try to talk more and exchange more. Yes, thank you.

Calandro Enrico:
Thank you. Claire.

Claire Stoffels:
Thank you. I just have one major point. I really believe it is the role of donors as well as implementers to promote and carry out awareness-raising measures, enhance communication, facilitate cooperation amongst actors, bring relevant parties together, and facilitate knowledge and information sharing as much as possible. I think that's my main point. Thanks.

Calandro Enrico:
Anatoli?

Anatolie Golovco:
I believe my main point is that we need a people-centric approach across all of cybersecurity. We need to reduce the complexity of the tools used for protection and rethink the entire cybersecurity architecture of states from scratch. Having a good strategy is what we need, and we should probably put more effort into strategy and into rethinking the existing ecosystem, instead of putting layers of security on top of badly designed things.

Calandro Enrico:
Thank you. Hiroto?

Hiroto Yamazaki:
Okay, thank you very much. Through this discussion it is clear that coordination is very important, whether multi-stakeholder or bilateral, to reduce duplication and to maximize development impact. But we must also think about sustainability from the coordination perspective. We can reduce some duplication, but we should not provide just one training, one awareness session, or one meeting; that is not a good way. We should create sustainable outcomes or outputs, such as guidelines, training materials, or trained trainers. That allows coordination not only in real time with stakeholders online, but also with a time difference: we may leave one country, but if a donor joins that country later, they can see our outcomes and coordinate with us even after we have left, without needing to talk directly.

Calandro Enrico:
Thank you very much, Hiroto. Louise.

Louise Hurel Marie:
Wonderful. Three key points. First, I'd say, for think tanks, implementers, and others: include recipients in the design phase. That is a very practical thing we can do, for example by providing a longer inception phase for projects, where we actually get to engage with stakeholders. From an implementer's perspective that is great, and from a donor perspective it's about thinking how to embed that in the timeline. Second, and this has been my point since the start, we really need to break down the typology. We need to design a typology for thinking about CCB that accounts for different contexts as the landscape evolves, in terms of agencies, stakeholders, and, as I said, crises up to conflict and post-conflict situations. And finally, on the South-South point, there is a next step for developing countries: to empower their development agencies, or to bring more of the cyber agenda into those agencies. Sometimes it's a different part of the government doing that, but there is an element of bringing cyber into development. So those are my three key points.

Calandro Enrico:
Thank you very much, Louise. And Regine.

Regine Grienberger:
I think I have nothing to add, so amen.

Calandro Enrico:
It was a lot. No, no, no problem at all. So thank you very much to all our panelists for sharing your experience and insights, and of course to all participants. With that, we conclude our session. Thanks a lot. But the conversation, of course, doesn't end here. Thank you all.

Audience:
Thank you all. Next time I'll play the devil's advocate, and y'all will be like, no, I don't agree, I don't agree. We all agree, pretty much.

Louise Hurel Marie
Speech speed: 185 words per minute
Speech length: 2928 words
Speech time: 948 secs

Anatolie Golovco
Speech speed: 148 words per minute
Speech length: 1111 words
Speech time: 450 secs

Audience
Speech speed: 207 words per minute
Speech length: 838 words
Speech time: 243 secs

Calandro Enrico
Speech speed: 172 words per minute
Speech length: 3207 words
Speech time: 1116 secs

Claire Stoffels
Speech speed: 168 words per minute
Speech length: 1487 words
Speech time: 531 secs

Donia
Speech speed: 204 words per minute
Speech length: 77 words
Speech time: 23 secs

Hiroto Yamazaki
Speech speed: 149 words per minute
Speech length: 1164 words
Speech time: 468 secs

Regine Grienberger
Speech speed: 150 words per minute
Speech length: 1487 words
Speech time: 594 secs

Rita Maduo
Speech speed: 128 words per minute
Speech length: 863 words
Speech time: 404 secs

Tereza Horejsova
Speech speed: 150 words per minute
Speech length: 1412 words
Speech time: 563 secs

Stronger together: multistakeholder voices in cyberdiplomacy | IGF 2023 WS #107


Full session report

John Hering

The analysis includes various speakers discussing cybersecurity and multi-stakeholder inclusion in dialogues. One speaker notes the increasing professionalism of cybercrime, with a growing focus on critical infrastructure sectors. Microsoft’s annual digital defense report highlights this trend. Moreover, 41% of observed nation state cyber operations target critical infrastructure.

Another speaker raises concerns about the integration of cyber operations in armed conflict, citing the situation in Ukraine as an example. Urgent discussions, particularly at the United Nations, are needed to address this rising concern.

The ownership and operation of cyberspace by private entities is also discussed. It is emphasised that cyberspace is primarily owned and operated by private entities, necessitating a proper multi-stakeholder approach to tackle conflicts in this shared domain.

Improving the United Nations’ processes for including multi-stakeholder voices in cybersecurity dialogues is identified as a key issue. The current approach is described as ad hoc and patchwork.

The importance of accountability and understanding existing cybersecurity norms is highlighted. Holding countries accountable for violating norms and focusing on implementation rather than creating new norms are deemed important.

Another speaker advocates for multi-stakeholder inclusion in future cybersecurity dialogues. The non-governmental stakeholder perspective is considered essential for impactful outcomes, transparency, and credibility.

Challenges faced by non-governmental stakeholders in engaging with processes like the Open-Ended Working Group are discussed. The speaker acknowledges the progress made since the first multi-stakeholder consultation in 2019.

Improving the process of multi-stakeholder engagement and learning from successful first committee processes are advocated for. Structured non-governmental stakeholder engagement and a comparison with successful processes are seen as crucial.

The hindrance of multi-stakeholder inclusion in dialogues by escalating geopolitical tensions is mentioned. It is noted that these tensions have blocked voices, including Microsoft, from participating effectively.

The importance of multi-stakeholder inclusion in future dialogues is stressed, highlighting its role in transparency, credibility, and aiding in implementation efforts.

Insights from different stakeholders are valued for a holistic understanding of the issues. Effective dialogues and engagement with governments are seen as important for gaining insights into their perspectives.

The goal of achieving a gold standard of multi-stakeholder inclusion is expressed. Working towards a higher level of inclusion is seen as necessary.

The legitimacy of questioning the involvement of private companies in discussing governance at national or international levels is acknowledged. However, it is argued that these companies should have a voice in such dialogues, with decision-making authority ultimately resting with governments.


Joyce Hakmeh

The analysis explores the challenges and benefits of multi-stakeholder participation in UN Information Security Dialogues. One of the significant challenges mentioned is that some states actively block multi-stakeholder participation. Additionally, there is a lack of conviction among states regarding the value that multi-stakeholders bring to the table. States often perceive the multi-stakeholder community as a uniform group with the same agenda, which further hampers their participation. Moreover, there is a lack of strategic and consistent engagement with multi-stakeholders by supportive states. This lack of engagement creates uncertainty for multi-stakeholder groups regarding their accreditation in UN processes.

On the other hand, there is a supportive stance towards increased multi-stakeholder participation. The role of multi-stakeholders in the cybercrime convention marks an important milestone as it is the first time they are attempting to shape a legal instrument within the UN regarding cyber issues. Participants argue that multi-stakeholders bring diverse perspectives, and their input can significantly influence decision-making processes. Furthermore, in the context of establishing new processes in cyber and digital technologies governance, it is crucial to include multi-stakeholder participation from the beginning. Transparency and clear criteria for inclusion and exclusion are seen as essential components of good modalities in these governance processes.

The speakers emphasize the need for multi-stakeholders to prove their value through concrete actions such as providing data, conducting research, and offering capacity building. This is especially necessary because some member states do not fully understand the value that multi-stakeholders can bring. Additionally, the analysis highlights the importance of not solely focusing on the multilateral level but also considering the national and regional levels in digital technologies governance.

Collaboration and input from various stakeholders, including civil society organizations and industry, are seen as mutually beneficial. Multi-stakeholder involvement aids governments in quality control and gathering diverse ideas during negotiations and decision-making processes related to digital issues. However, the speakers emphasize the need for these collaborations and inputs to be more strategic, ambitious, and inclusive, rather than narrowly involving only big tech companies.

Furthermore, the analysis suggests that the current composition of multi-stakeholder groups is primarily Western-dominated, calling for more regional inclusion. It is argued that there is a wealth of valuable experiences and perspectives at the regional and national levels that can enhance UN processes and initiatives.

The analysis also highlights the importance of better coordination among multi-stakeholders. While it is important to improve collaboration with governments, it is equally crucial to enhance collaboration among the multi-stakeholders themselves to ensure diverse voices are included in the discussion.

The fragmentation of cyber negotiations is acknowledged as a present reality, with various negotiations focusing on different aspects of cyber issues. The interconnectivity and overlap of activities in cyberspace challenge the artificial separation between negotiations dealing with international peace and security and those dealing with criminal activities.

In conclusion, the speakers advocate for increased multi-stakeholder participation in UN Information Security Dialogues. While there are challenges such as states blocking participation and lack of conviction, the benefits include diverse perspectives, shaping legal instruments, and influencing decision-making processes. The analysis calls for the development of good modalities from the start, the provision of concrete evidence of value by multi-stakeholders, inclusion of regional and national levels, better coordination, and a focus on inclusive collaboration.

Nick Ashton Hart

The analysis explores the need for increased stakeholder participation in policy-making and decision processes, focusing on cybersecurity and international commerce negotiations. The lack of stakeholder involvement and frustration with current procedures are identified as significant issues that need attention.

One speaker emphasises the value that stakeholders bring to these decision-making processes. The absence of their input not only results in the loss of valuable perspectives and expertise but also undermines the legitimacy and effectiveness of the policies and decisions made. Additionally, frustration is expressed concerning the application and veto process in cybersecurity procedures. The closed nature of the World Trade Organization (WTO) negotiations on electronic commerce excludes stakeholders completely, limiting their ability to contribute and raising concerns about transparency and fairness.

In response to these challenges, one speaker proposes the implementation of a policy on stakeholder participation. Such a policy would transform stakeholder involvement into an administrative process, ensuring their perspectives are consistently considered and incorporated into policy-making. It is suggested that many states would support this policy if a vote were to take place, indicating a growing recognition of the need for increased stakeholder participation.

Another speaker supports a campaign to address the issue of stakeholder participation once and for all. Some states are indifferent to involving stakeholders and find the arguments and disagreements on this topic tiresome. A resolution would save time and energy by establishing a clear framework for stakeholder participation. The importance of stakeholder involvement, particularly in the context of cybersecurity, is stressed. It is believed that their participation would drive a more ambitious cybersecurity agenda, bridging the gap between current offerings in international cybersecurity and the actual need for comprehensive and effective solutions.

In conclusion, the analysis highlights the necessity of enhanced stakeholder participation in policy-making and decision processes related to cybersecurity and international commerce negotiations. The establishment of a clear policy or a campaign to address this issue is crucial to bring valuable perspectives and expertise to these processes and to achieve more effective and legitimate outcomes. Furthermore, stakeholder involvement is essential for bridging the gap between the current offerings and the actual need in international cybersecurity, leading to a more comprehensive and robust approach to addressing cyber threats.

Charlotte Lindsey

In a recent analysis, it has been highlighted that the veto power within the Open-Ended Working Group limits the participation of various organizations, a concern raised by Charlotte Lindsey. This poses a challenge for multi-stakeholder civil society organizations who strive to contribute to multiple parallel processes. However, the analysis also acknowledges that civil society organizations play a significant role by providing valuable data, evidence, and practical recommendations.

Another area of concern is the lack of transparency and clarity in the process for non-state actors to contribute. This issue is seen as a barrier to their meaningful engagement. To promote inclusivity, it is suggested that the scope of participation should be extended to include organizations operating at national and regional levels.

Charlotte Lindsey urges the creation of a dedicated forum that includes all stakeholders, as it would foster legitimacy and help shape future instruments. The involvement of civil society organizations in such a forum could facilitate the implementation of cyber norms by connecting different actors and building partnerships.

Additionally, it is recommended that states establish a mechanism that reflects the multi-stakeholder nature of cyberspace. This would enable relevant stakeholders to contribute to discussions and ensure transparency and credibility in decision-making processes.

The analysis also highlights the importance of increasing the representation of African countries in global processes. It notes that there is a willingness among ambassadors from the African Union in Geneva to engage and learn more about these processes. To foster the participation of African countries, there is a need for capacity-building efforts to enhance the skills of representatives from the African Union in negotiations.

To encourage wider participation, it is necessary to demystify the processes involved. Participants from the African Union reported a misconception that they could not contribute due to a lack of familiarity with the debates. Efforts should be made to provide clear information and guidance to potential participants.

Lastly, the analysis emphasizes the importance of fact-based framing and timely input for effective engagement. Even if organizations cannot actively participate in discussions, the ability to produce valuable input is recognized and valued.

In conclusion, the analysis highlights the need for greater inclusivity, transparency, and recognition of the value that civil society and multi-stakeholders bring to the table. Creating dedicated forums, enhancing representation, demystifying processes, and promoting fact-based engagement are essential steps towards achieving these goals.

Speaker

Joyce Hakmeh is the director of the international security program at Chatham House and actively participates in various UN cyber projects. In her role, she leads these projects, focusing on advancing cybersecurity and addressing emerging challenges in the evolving digital landscape. Hakmeh follows UN cyber processes such as the open-ended working group and the cyber crime convention, which play a pivotal role in shaping global standards and policies in the fight against cyber threats. Moreover, she is part of the international security National Research Institute, further showcasing her expertise and dedication to the field.

Nisha serves as the director of the Cyber Security Institute in Geneva and actively engages in UN processes. She is particularly involved in the open-ended working group and the ad hoc committee on cybercrime. Nisha’s primary focus lies in providing evidence and data-driven analyses of the cyber landscape, aiming to develop a comprehensive understanding of the challenges and potential solutions. By utilizing facts and data, she contributes to the formulation of effective strategies and policies to combat cyber threats and ensure a secure digital environment.

Joyce Hakmeh and Nisha both play crucial roles in the field of cybersecurity, making significant contributions to UN cyber processes. They bring their expertise and experiences to the table, actively participating in discussions and decision-making processes concerning global cybersecurity challenges. Through their involvement, they strive to enhance international cooperation and strengthen partnerships in addressing cyber threats.

Overall, the work of Joyce Hakmeh and Nisha underscores the importance of collaboration and knowledge-sharing in tackling cybersecurity issues. Their commitment to the field and active participation in UN cyber processes demonstrate their dedication to improving the security and resilience of digital infrastructure worldwide. Their expertise and insights serve as valuable resources in shaping effective strategies to combat cyber threats and ensure a safer digital future for all.

Pablo Castro

Pablo Castro, a cybersecurity expert, emphasises the importance of implementing existing norms rather than establishing new ones. He believes that instead of focusing on developing new norms, it is more crucial to focus on effectively implementing the current 11 norms. Castro argues that regional-level implementation of norms should be a priority for Latin America. This approach would ensure a strong foundation of cybersecurity practices and strengthen the overall security posture in the region.

Castro also supports the role of stakeholders in assisting states to improve the implementation of cybersecurity norms. He believes that stakeholders, such as industry experts and civil society organizations, can provide valuable insights, expertise, and resources to help states in the process of moving forward. To exemplify this, he mentions that Chile proposed a new set of Confidence Building Measures (CBMs) specifically aimed at leveraging stakeholder involvement to enhance the implementation of cybersecurity norms.

In addition to implementation, Castro highlights the need for capacity building in the Latin American region. He argues that capacity building is crucial to improve cybersecurity efforts and to bridge any existing gaps in expertise and resources. He mentions that several Latin American states made a joint statement in July, highlighting the importance of capacity building in the region.

Castro also emphasizes the need for a strategic approach to engage stakeholders in cybercrime processes. He suggests creating a clear strategy that defines specific roles for stakeholders in future dialogues, such as the Program of Action (PoA). This approach ensures that stakeholders are actively involved in shaping cybercrime policies and addressing challenges related to international law, norms, and Confidence Building Measures.

Advocating for partnerships between stakeholders and states, Castro calls for increased collaboration in specific tasks. He believes that by working together, stakeholders and states can better address the complex challenges of cybersecurity. He encourages stakeholders and states to establish strong working relationships to foster effective collaboration and improve cybersecurity efforts.

Furthermore, Castro underscores the importance of strategic dialogue with stakeholders. He observes that stakeholder opponents often have clear strategies and goals, making it essential for proponents to engage in more strategic and well-planned dialogues. He suggests developing a counter-narrative to address opposition and effectively advocate for stakeholder participation.

Castro also mentions the significance of working beyond formal meetings and rooms to achieve progress in cybersecurity. He believes that a lot of influence can be exerted outside formal settings, particularly at the regional level. He highlights the major opportunities for meetings and collaboration that regional initiatives present, making them critically important for advancing cybersecurity efforts.

From his analysis, Castro notes the struggles countries face in cyber discussions due to geopolitical and cultural differences. He highlights how these differences can lead to fragmentation in discussions and potentially result in different internets in the future. This underscores the importance of finding common ground and fostering collaboration despite these challenges.

In conclusion, Pablo Castro provides valuable insights into the importance of implementing existing norms, engaging stakeholders, building capacity, and forming partnerships in the field of cybersecurity. His emphasis on strategic dialogue, regional initiatives, and the need for an action-oriented approach through frameworks like the Program of Action demonstrates his comprehensive understanding of the challenges and opportunities in the cybersecurity landscape. Overall, his viewpoints contribute to a more holistic and collaborative approach to addressing cybersecurity concerns.

Bert

The Internet Governance Forum (IGF) and the United Nations (UN) have different discussion approaches. While the IGF promotes equal discussions, the UN discussions are more intergovernmental and less friendly to stakeholders. This discrepancy is concerning as it highlights the lack of stakeholder inclusion and equality in the UN's discussions on cyber governance. The Open-Ended Working Group faces challenges in discussing real-world threats like cyber espionage. It struggles to have an open discussion on these issues, which is important for addressing the evolving threat landscape. To address this, the Open-Ended Working Group needs to be more transparent and open about cyber espionage discussions. Clear violations should be called out, ensuring a better understanding among stakeholders.

Implementing international law is crucial in cyber governance. The General Assembly has confirmed that international law applies fully, but there is a need to focus on better implementation and understanding of the existing normative framework. The Open-Ended Working Group will dedicate sessions to this question next year. Some argue for new norms, while others believe that a better understanding of existing norms is sufficient.

Inclusive multi-stakeholder involvement is key in decision-making processes related to cyber governance. Non-state participants have been invited to negotiations in the Human Rights Commission, and NGO representatives are involved in government delegations in some countries. The Program of Action (POA) should focus on implementing the existing normative framework and involve non-state actors. This collaboration can facilitate efforts and coordination between stakeholders.

The involvement of stakeholders has been politicized, and moving it from a political process to an administrative matter is suggested. This administrative approach can remove unnecessary barriers and streamline decision-making. A one-size-fits-all forever resolution for stakeholder participation may not be ideal, as future circumstances may require different rules.

The upcoming Global Digital Compact discussions should involve various stakeholders, despite opposition from some countries. The input and perspectives of different stakeholders are essential for an inclusive and effective digital compact. Bert supports a strong role for the multi-stakeholder model and the IGF, advocating for an inclusive approach involving industry partners, academics, and experts.

Negotiations must be inclusive, with representation from different countries. Availability of funding for travel aids representation, ensuring active participation from a broader range of countries. The quality of discussions varies based on the level and diversity of participation. Inclusive discussions lead to a better understanding of the issues at hand.

More funding and support are needed to facilitate multi-stakeholder participation in cyber governance. Denial of funding for extensive travels hinders effective participation. The COVID-19 pandemic has unintentionally democratized multilateral processes, allowing for more remote participation and inclusivity. While negotiations occur internationally, it is essential to engage at the national level as well.

Stakeholder involvement in the global digital compact process is emphasized, utilizing the national IGF for discussions and preparation. Partnerships and value contribution are crucial for effective decision-making, amplifying the impact and improving feedback provision.

In conclusion, there are discrepancies between the IGF and the UN discussions on cyber governance. Open and transparent discussions are crucial for addressing real-world threats. Implementation and understanding of existing norms are necessary, alongside multi-stakeholder involvement and inclusivity. Adequate funding and support are needed for equal and inclusive participation. The COVID-19 pandemic has unintentionally increased remote participation and democratized multilateral processes. National and stakeholder engagement are vital for effective cyber governance. The development of a global digital compact requires multi-stakeholder involvement and partnerships, with organizations having a potentially underestimated impact.

Eduardo

The discussion at hand revolves around questioning the legitimacy of companies participating in multi-stakeholder discussions within the sphere of international law development. This topic is relevant to Sustainable Development Goal (SDG) 16, which focuses on achieving peace, justice, and robust institutions.

Several concerns are raised regarding the involvement of companies in these discussions. One concern relates to democratic issues. It is argued that when companies participate in discussions shaping international law, it raises questions about democratic representation. In a democratic system, decisions about laws and regulations are ideally made by elected representatives who are accountable to the citizens. However, the inclusion of companies in these discussions potentially bypasses this democratic process.

Another point of contention revolves around the conflict of interest that companies may have when participating in these discussions. Companies, by their nature, prioritize their own interests and profits. In international law development, where decisions are made with the aim of benefiting society as a whole, the alignment of companies’ interests with broader societal interests becomes a concern. The question arises as to whether the participation of companies in these discussions could lead to biased outcomes that favor their own agendas.

Furthermore, the lack of direct election by citizens is raised as a valid concern in questioning the legitimacy of companies’ involvement. Unlike elected representatives who are accountable to their constituents, companies operate under their own governance structures. This lack of democratic oversight over their participation in multi-stakeholder discussions adds to concerns about the legitimacy and transparency of the decision-making process.

The sentiment towards these issues is negative, as the concerns raised highlight potential flaws in including companies in multi-stakeholder discussions on international law development. However, it is important to note that Eduardo’s stance is neutral as he is simply relaying a question posed by Amir Mokaberi on this matter.

The analysis emphasizes the complexity of balancing the involvement of various stakeholders, including companies, in shaping international law. The insights gained from this discussion emphasize the need for further exploration and deliberation on how to ensure legitimacy, transparency, and democratic representation in such multi-stakeholder forums.

Marie

Cybersecurity discussions have been ongoing since 1998, but their scale has significantly increased in recent years. There is a clear need for broader multi-stakeholder involvement in these discussions, including the participation of the technical community. However, the current level of inclusivity falls short of expectations.

Collaboration between different stakeholders is crucial in effectively addressing cybercrime issues, both within the United Nations and in other forums. Marie emphasizes the importance of connecting cybersecurity discussions in various domains to promote a secure and trustworthy online environment. The emergence of numerous multi-stakeholder initiatives is inspiring and can potentially enrich engagements beyond traditional diplomacy.

The lack of mention of the technical community in the report of the open-ended working group highlights the need for its inclusion in cybersecurity discussions. Marie insists on continuing dialogues with stakeholders such as the technical community, as their involvement enhances understanding of their potential contributions.

While discussions have grown in scale, it is challenging for developing countries to allocate resources and time to processes primarily taking place at UN venues in Western cities. Marie highlights the importance of ongoing discussions at national and regional levels, emphasising the value of long-term engagement in shaping informed policies.

Marie further emphasizes the significance of stakeholder engagement, drawing from her experience working on cyber issues in the Netherlands. She advocates for the use of platforms like the IGF, RightsCon, and GFC for open discussions and aims to demystify discussions in the first committee for stakeholders.

Capacity-building and the spread of knowledge regarding the normative framework are identified as essential elements in the field of cybersecurity. Marie’s team endeavors to share their knowledge about the first committee to enhance engagement, participating in regional meetings and holding cyber policy discussions.

Marie encourages non-governmental stakeholders to share information, facts, and the impact of projects, as this input can add value to the discussions within the context of the UN. Continuous involvement of all stakeholders and their accountability in taking the right positions are crucial. Marie acknowledges that the process can be frustrating but assures that raised issues do make their way into the final reports.

The idea that all stakeholders, including the private sector and civil society, should have a voice in policy-making dialogues related to cybersecurity is strongly supported. This inclusive approach recognizes the importance of considering a wide range of perspectives in shaping effective and comprehensive cybersecurity policies.

In conclusion, cybersecurity discussions have grown significantly since their inception in 1998. Broader multi-stakeholder involvement, particularly including the technical community, is needed to effectively address cybercrime. Inclusivity in these discussions must be improved, and collaboration between different stakeholders is crucial. Regional and national initiatives, capacity-building, and knowledge sharing are essential for robust engagement. Continuous involvement and accountability of all stakeholders are emphasized to ensure the right positions are taken and all perspectives are considered in policy-making dialogues.

Audience

The analysis reveals a significant issue concerning the lack of representation from African stakeholders in multi-stakeholder discussions. This absence is viewed as a negative aspect, highlighting the need for better ways to enhance the participation of African stakeholders in these discussions. The argument is made that the current level of engagement must be improved to ensure that the perspectives and interests of African stakeholders are adequately represented.

Additionally, the analysis stresses the importance of stakeholder engagement at both the national and regional levels, noting that it is crucial to strengthen and improve this engagement. It is believed that by doing so, a more inclusive and effective multi-stakeholder approach can be achieved.

The analysis also identifies a common problem faced by civil society organisations, which is a lack of access to engage with the government. However, it is suggested that national and regional level engagement could offer a sustainable solution in addressing this issue.

Furthermore, the analysis highlights the potential benefits of better engagement, stating that it could help strengthen the broader ecosystem of civil society organisations. This indicates that by actively involving and consulting various stakeholders, a more robust and collaborative approach can be fostered.

The analysis brings attention to the fragmentation of the cybersecurity debate, which is seen as a challenge not only for non-state stakeholders but also for many developing countries. Keeping up with multiple tracks of discussion at the UN is particularly challenging for developing countries, making it difficult for them to actively participate in these discussions.

The analysis also touches upon the polarisation of positions on the future of institutional dialogue after OEWG (Open-Ended Working Group). There is a division between those supporting the continuation of discussions on the proposal of a Program of Action (POA) and those against the idea of something legally binding at the moment. Brazil, for example, supports continuing discussions on the proposal of a POA.

Furthermore, concerns are raised about the potential underutilisation of OEWG if the POA is adopted this year. If the decision to adopt the POA is made two years ahead of the end of OEWG’s mandate on regular institutional dialogue, it is feared that OEWG discussions might be undermined.

The analysis also considers the involvement of users in the multi-stakeholder process, highlighting the importance of including users’ perspectives and addressing issues related to defective use and abuse. The role of Microsoft in involving users in multi-stakeholder processes is specifically mentioned.

Lastly, the analysis emphasises the engagement of young people in the tech industry, advocating for their perspective to be taken into account. It highlights how Microsoft incorporates the youth perspective into its submission and ensures that everything is on track.

Overall, the analysis underscores the need for greater inclusivity and participation in multi-stakeholder discussions, particularly concerning African stakeholders. It also highlights the importance of various levels of engagement, the concerns regarding fragmentation and difficulty faced by developing countries in the UN, and the significance of involving users and young people in the decision-making processes.

Session transcript

John Hering:
Actually, this is a well-timed conversation, because last week Microsoft released its annual Digital Defense Report, which, if you haven’t had a chance to dive into it in the days since it’s come out, I would encourage you to do so. It’s our summative annual threat intelligence report, a pretty comprehensive overview of how Microsoft sees the threat landscape. It’s not necessarily the entire landscape. We only see our sliver of the internet on our platforms, but it does give a pretty illustrative view of what the contemporary challenges are. Unsurprisingly, cybercrime continues to be an increasing challenge. In particular, we’ve noticed over the past year that it’s increasingly professionalized, which has improved the scale and the impact of cybercrime operations. And then when it comes to nation-state activities as well, we’ve seen continued escalations in that space, in particular with a focus on espionage operations over the past year; 41% of all nation-state cyber operations observed by Microsoft threat intelligence teams were focused on critical infrastructure sectors across various regions of the globe. None of this is especially new. It’s been an escalating concern for decades. But now the integration of cyber operations in armed conflict is becoming a rising concern, including over the past year and a half in Ukraine, most notably, which is making conversations around peace and security online, in particular at the UN, all the more urgent. We have seen over the same time period the UN also stepping up to try and meet the moment and keep pace with an evolving threat environment: standing up various working groups and new processes, evolving its mandate to make sure it’s meeting the moment. And this has also introduced a new challenge in how we include the right multi-stakeholder voices in those conversations. 
Cyberspace is, after all, a much more shared domain of conflict than perhaps any other, given that it’s inherently synthetic and a lot of it is owned and operated by private entities. It also raises important questions about how to ensure human rights are protected and the necessary multi-stakeholder and academic voices are at the table as well. And thus far, we’ve seen sort of an ad hoc, patchwork approach to trying to include more multi-stakeholder voices in those conversations. So that brings us to today, and I think a two-fold goal for this conversation. One, on the one hand, it’s to hopefully keep everyone appropriately informed on where these conversations are at the United Nations and beyond, and to hopefully help people feel like they are equipped to more effectively engage in those conversations. And two is going to be to hear from you in the room, from those in the IGF community, about the challenges, recommendations, or guidance you might have around how we might improve the relevant inclusion of multi-stakeholder voices in cybersecurity dialogues. That will be essential, I think, both for our guests on the stage, as well as for an after-action report that we’ll put together following this session. So to that end, we will save the bulk of the time of this session towards the end for audience Q&A. So that’s not just question and answer, but also commentary, other suggestions, or things you’d like to contribute to this conversation, or that you’d like to hear our guests respond to. But without further ado, then, I’d like to welcome our speakers, first on the stage, and then Charlotte online, to introduce themselves: let us know who you are, what organization you’re from, and maybe your relation to the cybersecurity dialogues at the UN. We start at the end of the table, and come on down.

Marie:
Now working. Thank you very much, and thank you for having me on this prestigious panel. My name is Marie Mou, I’m working at the permanent mission of the Netherlands to the UN in Geneva, and I’m First Secretary Cyber, so I’m the incarnation of what cyber diplomacy is from a member state’s perspective.

Pablo Castro:
Thank you very much, John. I’m Pablo Castro, I’m Cybersecurity Coordinator at the Ministry of Foreign Affairs of Chile. I basically cover cybersecurity, cyber…

Joyce Hakmeh:
I’m the director of the international security program at Chatham House. My relationship to this conversation today is that my team leads a number of projects following UN cyber processes, the Open-Ended Working Group as well as the cybercrime convention, and that makes up a fair share of the work that we do.

Speaker:
So, I’m very happy to be here. I’m also very happy to be here as a member of the international security NRI; when you know that there is a trilateral project, there might be a possibility of multi-stakeholder engagement, or an attempt at it. Thank you. If you’re online, could you introduce yourself? My name is Nisha. I’m the director of the cybersecurity institute in Geneva, and with my team, we engage in the UN processes, the Open-Ended Working Group, the Ad Hoc Committee on cybercrime, and other fora, in order to try to bring evidence- and data-driven analyses of the cyber landscape.

John Hering:
Thank you for those introductions. I just gave an outline of how to see the environment from the industry side, but let me ask Marie and Bert to start us off: how has the conversation around nation-state activity in particular evolved at the UN in the time that you’ve been there? And where are we living up to, and where are we falling short of, the international expectations that have been set?

Marie:
Thank you. So, I’m going to focus on cybersecurity in the strict sense. I think those discussions are not really new. They’ve been going on since 1998. But there is also a broader picture that we need to take into consideration, because while the cybersecurity discussions are not new, what is new is, first, the scale at which they’re being discussed, so in more and more places, but also the integration of other stakeholders into those processes, which is pretty new. When we look at the broader cyber picture, we’ve actually seen more multi-stakeholder engagement, and that goes back to 2003. But in the strict cybersecurity sense of the discussion, the stronger multi-stakeholder involvement has unfortunately not yet achieved the inclusivity that we had expected in the first place. And we would like to have more inclusivity. It’s already nice that the Open-Ended Working Group is now open to all member states, which was not the case with the GGE. So there is already more inclusivity, but I think we would like to go a bit further and make sure that all relevant stakeholders can have their voices heard in those discussions as well.

Bert:
Thank you. Thank you so much. From my perspective, I have to say I’m very much looking forward to the discussion here, because what I can observe is, of course, quite a discrepancy between the way we discuss things here at the IGF, where everyone is on an equal footing, and the UN: as soon as you step your foot into the UN, if I stay with the metaphor, it becomes very intergovernmental. And by its very nature, it’s not very stakeholder-friendly. So we always see that it is really an uphill battle, every time, in all these processes. And when it comes to your specific question on the threat landscape and how it is being discussed, there too there’s a bit of a discrepancy between the real world and the UN world. In the real world, as you described yourself, you just released your own annual report, which lays out a landscape that raises many concerns: about state actors, non-state actors, the collusion between the two, cybercrime activities by state actors, espionage activities, how they’re combined, how malicious cyber actors become more and more involved in disinformation and information campaigns. All this is there. But it’s very difficult, if not impossible, to have a frank discussion on this in the Open-Ended Working Group when we discuss threats. There’s a strong sense that it’s an uncomfortable discussion, so to say, and people rather skim over it. There’s a stronger interest in discussing things like confidence building than in discussing the hard stuff. Why do we need to build confidence? Because there is a problem of growth in malicious cyber activities. Particularly when it comes to the issue of cyber espionage, we are therefore of the view that we should become much clearer in calling such activities out as clear violations of the normative framework when they are directed against states or critical infrastructure, et cetera. Thank you.

John Hering:
Thank you both so much. Sticking sort of on the government side of this conversation, we’ll go right back to you, actually, Bert, and then bring Pablo into the conversation as well. You have both mentioned, I think, the Open-Ended Working Group, which is the current information security dialogue, the second iteration of that body. Previous to that, it was the Group of Governmental Experts, and there were successive rounds of that. As of 2015, there have been established norms for responsible state behavior online, and some recognition that international law ought to govern state behavior online as well. There have been no new norms established since that period of time. And I was wondering if you could just shed some light on how we should think of the current status of the Open-Ended Working Group. What is its mandate and mission? And what is the importance of multi-stakeholder inclusion in that?

Pablo Castro:
Thank you. Regarding norms, and especially new norms, I have to be really honest that, maybe even from the perspective of Chile, we’re not really thinking about new norms. It’s more the implementation of the 11 norms, especially at the regional level. It’s probably one of our main interests right now. And I would say this is also very important for our Latin American region, to try to move forward on this. Now, expectations could be different regarding the Open-Ended Working Group. And from Latin America, and from our conversations with our colleagues, I would just say right now we have good coordination with other states, after years in which states and Ministries of Foreign Affairs didn’t have, you know, someone in charge of cyber. Now it’s possible to do this sort of coordination. Capacity building, for example, is very important. But when it comes to norms, I think implementation is something important, especially at the regional level. And that could also be a good chance for stakeholders, you know, to see how they can help this process and help states improve this implementation. It’s one of the reasons why, last year, Chile proposed a new CBM; you know, there’s a working group at the OAS for the establishment of CBMs in cyberspace. It started back in 2017. We have now 11 CBMs, which is quite something. One of them is about strengthening the implementation of the 11 norms, you know. And that was proposed specifically to try to, you know, encourage states to work more on this, with the assistance of the Organization of American States, which has a good program. It’s very good, very important, this process, and it has been working, you know, a lot with stakeholders. So I think that it could be a good opportunity, you know, for stakeholders: they can help states move on this. 
Now, moving forward, the Open-Ended Working Group is a tough one; it’s a really good question, you know, because we have different expectations. For Latin America, for example, capacity building could probably be something really important. We managed in July to make a joint statement, you know, by several states from Latin America about capacity building. And the current situation, I think, is a little bit complicated. We always have this conversation between the states in our region, because trying to get consensus in the Open-Ended Working Group is really difficult. And so trying to decide which subjects, which items, we really move on is also very complicated. So it’s a complicated balance, you know. So far, I think we’ve been trying to agree on the things that everyone could agree on, like the CBMs, you know, and the directory portal. But I cannot really foresee how we can move on to new topics and new discussions. That’s because, of course, of the current context of the complicated conversations we are having, and I don’t see that being easy to resolve in the coming years, given the geopolitical situation we have right now. Thank you.

Bert:
Thanks so much. I can subscribe to everything Pablo said. And just to add: our sense is also that we need to focus on how we implement better, and not only implement, but understand better the normative framework as we have it. We also see some countries out there who try to produce confusion. They say, oh, we only have voluntary norms, therefore we need a legally binding treaty to clarify what the legal obligations are, ignoring the fact that the General Assembly has repeatedly confirmed that international law, as enshrined in the Charter, et cetera, fully applies. Therefore, we need to have more dedicated discussions to look specifically at what it means that international law applies. We are just finalizing our national position paper on this. We are very happy, as we have been lobbying for this for a while, that next year one of the intersessional sessions of the Open-Ended Working Group will be dedicated to the question of the application of international law. And as we see it, the voluntary norms almost muddy the waters a bit; they lead to some confusion. We also will have more discussions on the voluntary norms, and, again, we don’t need new norms, we need better understanding of the existing ones, what exactly they mean, and then, of course, particularly, that they are implemented and that the countries who violate them are held accountable. Thank you.

John Hering:
Thank you both so much. I heard accountability, confidence-building measures, clarifying the existing norms obligations, all the spaces where we can move forward within the context of the current Open-Ended Working Group and future cybersecurity dialogues, and things which need multi-stakeholder inclusion and participation and engagement. And so to that end, I wanted to bring in Charlotte online and Joyce from the sort of non-governmental stakeholder perspective. Could you just take us through, first, since I think we have a lot of government folks in the room: what is it like to try and participate and engage in the UN information security dialogues as a non-governmental stakeholder?

Joyce Hakmeh:
Thanks, John. And very, very good question. I think this is something, sort of like an experience, that we reflect on quite a lot, and we share stories within the multi-stakeholder community. I think, speaking from our experience at Chatham House, but also observing multi-stakeholder participation more generally, there are maybe four issues that I believe act as a challenge to multi-stakeholder participation. And of course, you know, I’m not going to talk about the biggest one, which is states actively blocking, sometimes going out of their way to block, multi-stakeholder participation in UN processes. The first point I want to make is, I guess, there is a disbelief, or perhaps insufficient conviction, from some states about the value that multi-stakeholders bring to the table. And this often comes from states who arguably need this support, or could benefit from this support, the most. So often the starting point is really making the case about why it is important that you’re at the table and what it is that you can contribute. And so this basically leads to states either not engaging with multi-stakeholders or, if they tolerate your presence, engaging only at a superficial level. And perhaps this stems from the second point I want to make, which is a perception that some states have that the multi-stakeholder community is a uniform group, like a monolith, and that we all have the same agenda, the same approach, the same objectives. And this is obviously more true when it comes to civil society rather than to industry. But of course that’s not true, right? Because civil society is a very diverse group with sometimes overlapping but more often complementary mandates, and the role that they can play is diverse. So if you don’t understand what they can bring to the table, then it is hard to engage with them properly. 
The third issue is, and this is more directed at countries who actually support multi-stakeholder participation and can be called the champions of multi-stakeholder participation, who really make that point over and over again in UN processes and beyond. I think sometimes the challenge with that relationship is that there is a lack of strategic but also consistent engagement. And this could be related to time issues, resource issues, or perhaps sometimes a lack of coordination within the government itself between the different agencies. So this means that the relationship with multi-stakeholders isn’t as good as it can be, isn’t as impactful as it can be. Now, I’m not suggesting that this is just the responsibility of government. I think of course this is a shared responsibility; it’s a relationship, so it has to go both ways. And perhaps, you know, on this current OEWG specifically, probably the word that describes it best when it comes to multi-stakeholder participation is uncertainty, right? With every session, multi-stakeholder groups don’t know whether they’re going to be accredited or not. I know Microsoft, you’ve had your fair share of that, and so has Chatham House, until we finally got ECOSOC status, which in a way, you know, gave us that right to be in the room. And, you know, the ability to influence UN processes, in this kind of very complex geopolitical climate that Pablo described, requires strategic planning over time. So if you’re uncertain whether you’re going to be in the room or not, it makes it very hard to actually influence. I want to also talk about other ways where you can influence, but maybe for later. But maybe I want to conclude with this point: although the participation hasn’t been great, it has been possible, right? 
And I think from our perspective, it is, it has been a learning curve, and particularly, for example, if we look at the cybercrime convention, this is the first time the multi-stakeholder community is trying to shape a legal instrument within the UN on cyber, right? And so we are learning a lot of lessons that will definitely help us in the future and also help us sharpen our tools.

John Hering:
And Charlotte, you’re up if you’re online.

Charlotte Lindsey:
Yes, I am. Thank you. I agree with the points that Joyce raised. I would just like to focus on a couple of points. I think that while the Open-Ended Working Group is officially open to stakeholders, states have this veto power, which Joyce mentioned, which limits participation: dozens of organizations, including the Cyber Peace Institute, are regularly vetoed. And I think that that makes it very complicated for us to plan strategically, but also to really be representative and to bring added value to this forum. Clearly, it has been an achievement that the GGE and the Open-Ended Working Group ran these sort of parallel processes and came out with consensus reports which were aligned. However, it’s very complicated for multi-stakeholder and civil society organizations to be able to participate in all of these parallel processes and to really be able to contribute. The Cyber Peace Institute, we have been able to contribute to the objectives of several of the UN working groups. We’ve submitted comments and recommendations on pre-drafts, on zero drafts, on final reports of the Open-Ended Working Group. We have also submitted a multi-stakeholder engagement statement, which we led with a group of other organizations and contributed ahead of substantive sessions. So we are able to find ways to contribute, but it does take a lot of navigation, a lot of engagement behind the scenes, to really be able to be present and to put statements and positions forward. I think that we, as civil society organizations, do have added value that we can bring. And I think that what we have been able to demonstrate, and what many states demonstrate that they really appreciate about these contributions, is bringing data and evidence on many of the issues that are being addressed in the Open-Ended Working Group. 
And we have been able to, for example, bring things like a compendium of best practices on protecting the healthcare sector from cyber harm, and bring practical recommendations that can really help negotiations and help discussions. And I think that by bringing these recommendations, we can add diversity, we can bring voices which really represent the full range of how the cyber landscape is actually being managed, and the threats in that landscape today. I would just like to make a couple of final points. While we see that a number of governments have really reiterated their commitment towards an inclusive process in which the multi-stakeholder community really does have a voice, we think it is important that there is more clarity on what these potential contributions from civil society and other non-state stakeholders can really bring. And this can encourage other states to really advocate for and pursue this more inclusive process. If there is an understanding of the added value, then each organization is not having to demonstrate it every time. We also think it's very complex when documents aren't shared ahead of consultative meetings, or are shared very late, and therefore it's really hard to play, as Joyce mentioned, this very strategic role if we're not able to actually receive any of the documents, understand what the subjects are going to be, and then also not necessarily able to participate in the room. So we think it would be important to have real clarity on non-state actors and how they can participate in the substantive sessions, and clarity on the level of transparency and visibility offered for multi-stakeholder contributions throughout the process.
And we think that there also needs to be inclusion, not just of international organizations and civil society organizations operating at an international level, but also of those operating nationally and regionally. And this could also help build more of a global understanding of the challenges, but also of the contributions that different actors can make. Thank you.

John Hering:
Thank you so much, Charlotte. I think this, as a non-governmental stakeholder who has also tried to engage in these processes before, covered pretty well what some of those challenges have been. But I do think, to your point, Joyce, this is also a learning process, and I think we should also give credit where credit is due. I remember the first ever multi-stakeholder consultation for the OEWG that happened in conference room B at the UN in 2019, and we really have come a long way since then in terms of regularizing things and having much greater inclusion. That's a credit, I think, to a lot of support from various member states, increasing numbers of member states, and also to the current chair of the OEWG, who I think we should recognize as well, having worked to create regularized, at least intersessional, consultations with non-governmental stakeholders. But to the broader point here, indeed, it has been highly ad hoc, and especially for resource-limited organizations, that's a particular challenge: how best to structure that level of engagement. So then, thinking about moving forward and how to begin to be a little more accommodating and inclusive, I'd love to hear from Bert and from Charlotte as well, but thinking beyond just the OEWG and the GGE, Pablo and Marie, I'd love to hear a comparison to other First Committee processes that maybe have had greater success with multi-stakeholder inclusion, whether that's ECOSOC status or any other ways that we've seen stakeholders more successfully included in the past.

Pablo Castro:
Okay, I'm going to start. Well, as you mentioned, in the current cybercrime process right now, we do have this modality we agreed on, and it was working pretty well. We can mention also, for example, the discussions about weapons systems in Geneva. But when I think about future dialogues and future processes, I have to think about the Programme of Action, which is something that is coming. It is going to be a very good opportunity to really create, as you mentioned, and I really love the word, a strategy, something we definitely need, and in a way also to try to define a specific role for multi-stakeholders in the future PoA: how they can help or assist in terms of identifying needs, assessments, how to help states with implementation. We can in some way, in our case, not in a fixed structure, but define some roles in this future dialogue. That way, we can identify stakeholders who are good for, let's say, international law, the 11 norms, or CBMs. I think it would be good if we can actually try to start this discussion. The Cyber Peace Institute, for example, has been very good at reporting on this on its website. This is something that is coming soon, in the next couple of years, and that's going to be a good opportunity for states to think about this. Now, I would like to see also more partnerships between stakeholders and states, something that in Latin America, even from Chile, I would like to do more on, as I said before, in these specific tasks. This is something we could probably start to think about and work on in the near future. Thank you.

Marie:
Thank you. I think I will just go a bit further than the UN and the First Committee. Looking first from a purely UN perspective, we have seen these discussions popping up in so many different fora. When the pandemic started, the WHO started talking about cybersecurity; we were seeing cybersecurity-related discussions in the context of e-commerce, but also, obviously, with the ongoing situation in Europe and in Ukraine, in the humanitarian dialogue. So I think we also need to take this into consideration, and how other stakeholders are involved in those discussions. Because if we don't connect the dots across all those discussions as well, then we will never be able to have the open, free, and secure environment that we want, that people trust, and that we can all benefit from. So that's on a positive note. But we are also seeing a growing number of multi-stakeholder initiatives outside of the UN, be it at the national, regional, or international level. And those are really inspiring. And I think we need to look at how stakeholders, in those multi-stakeholder initiatives, engage with each other. I find it difficult to really compare, because obviously we are either in a UN setting or in a multi-stakeholder one. But I think we need to look for inspiration wherever we can, and not only in the First Committee or purely in the UN. Because, as was pointed out, it's really new to have multi-stakeholder engagement within First Committee discussions on cyber. So I think that's one of the things. The other thing that was mentioned a bit earlier on the panel is that it's true that we, as diplomats, don't always really understand the entire breadth of what civil society and the other stakeholders can bring. And one thing I would like to point out is that in the context of the Open-Ended Working Group, we never mention the technical community.
If you look at the report, we talk about civil society, the private sector, academia, but the technical community, for example, is not there. So I think that's also a sign that we need to continue that dialogue, and we need to understand how much other stakeholders can bring to those discussions. And then, little by little, I think we will make that space, and we will hopefully see more participation in those discussions.

John Hering:
Thank you both. And Pablo, I think you mentioned the elephant in the room here, perhaps, in these conversations, which is the Programme of Action, the recently passed resolution to establish what would be the first standing body focused on cybersecurity at the UN. There are a lot of open questions about that, but I'd like to invite Bert and Charlotte back into the conversation to share a little bit about what that might look like and how that could regularize, perhaps, some more multi-stakeholder inclusion in the UN processes in the First Committee.

Bert:
Yes, with pleasure. And again, as I said in the beginning, the discrepancy between the IGF and the General Assembly, we will not overcome this. There are many great initiatives out there. And again, we should also look more at the IGF experience and what that can bring, similar to the WSIS Forum, et cetera. We have to see this, but also, just to mention, what's the best way forward? We have these limitations, and I must also say I found it more frustrating in the Open-Ended Working Group, because there the no-objection procedure that is now the practice for the invitation of multi-stakeholders was used much more extensively than in the Ad Hoc Committee's cybercrime process, where basically, if you look at the list, it's a long, long list. More or less everyone who wanted to participate was able to participate, which is exactly how it should be. There are also other ways. I have done many things; I was a delegate to the CSW in the past. There you have a practice where many countries involve NGO representatives in government delegations. This was also done by some countries in the Open-Ended Working Group after some organizations were blocked. I'm not totally sure that's the right message, because, for me, to put multi-stakeholders in your delegation sounds like they are aligned with you. They have their own voice. I want you to be there whether you agree with me or not; that's the idea. I don't want them to be part of a government delegation. So I'm not absolutely sure that's the right way. I was often a delegate to the Human Rights Commission. There, it depends a bit on the country, but in a number of negotiations and resolutions, non-state participants are invited to participate in the negotiations as well. So there is precedent for almost everything. When it comes to the POA, I will not go into the question of how it should look, et cetera. That is a separate discussion.
It's an important project, and we're preparing another resolution for the General Assembly that is happening as we speak. But again, the idea would indeed be to have more stability by having a permanent body, and by having it inclusive. A strong focus of such a POA should, of course, be on implementing the existing normative framework, including capacity building, where, of course, multi-stakeholders play a key role. They are major actors in this field, so they also need to have a proper seat at the table. But this we have to see: we want the POA to be a UN body, so UN rules and regulations will apply, and we will have these difficult negotiations to have multi-stakeholders as prominently as possible at the table. Because, as the saying goes, if you're not sitting at the table, you're on the menu. So we really want multi-stakeholders to be at the table. Thank you.

Charlotte Lindsey:
Yes, thank you. I think it's really important to underline that the POA does present a unique opportunity to try to advance peace and security in cyberspace by really focusing on the implementation of the agreed norms and ensuring practical and needs-driven capacity building. We do think that this initiative needs to address a variety of issues related to the operationalization of the agreed-upon framework that would benefit from real practical implementation and meaningful multi-stakeholder participation. And that needs to be reflected in the modalities. So the modalities for stakeholder participation need to be very much clarified, to make sure that the multi-stakeholder nature of cyberspace is reflected. And the inclusion of all the relevant stakeholders in a dedicated forum would build legitimacy and would shape any future instruments. This inclusiveness could create a process that really reflects the lived realities and addresses the real threats that affect the safety, security and wellbeing of people. And stakeholders can assist states in building their capacity and understanding of how to apply the norms. So I think there's a real added role that civil society and other multi-stakeholder organizations can play on a practical, day-to-day level, which, if we can contribute, would be invaluable. And I think civil society organizations are particularly well positioned to connect different actors, to build partnerships across a variety of communities and geographies, and to help the practical implementation of the cyber norms. And we can help in national and regional implementation efforts, including reporting on progress. So the real added value is there.
And, to come back to the point I started with: the POA modalities in relation to the scope, the method of establishment, the format, the frequency of meetings, the decision-making structures and stakeholder participation, all of these points are being debated. And we urge states to really create a mechanism that reflects this multi-stakeholder nature. And as some of the previous participants mentioned, it does need to include civil society, industry, academia, the technical community and other experts, who can really play a vital role and bring expertise to future dialogues on cybersecurity in the context of international security. And this will really drive much more impactful outcomes from the process and really contribute to ensuring the transparency and credibility of the agreed decisions, as well as the sustainability of implementation.

John Hering:
Thank you both so much; points very well taken. I'm putting everybody on notice that we're going to have maybe one or two more questions here, and then I would love to hear from folks in the room: again, either questions or comments on things that you think would be helpful for including more multi-stakeholder voices, at the UN or elsewhere, in conversations around peace and security online. Speaking of online, if you're part of the online audience, please do put questions in the chat, and my colleague Eduardo will make sure that he addresses them to the room. But I want to pull over to what has been mentioned a couple of times and underscores a lot of this, which is the geopolitics of the moment. And so, maybe to Joyce and then to Pablo: rising tensions mean that it seems to be more difficult to have any kind of productive conversation in diplomatic spaces, certainly multilateral ones, but in particular, as it relates to multi-stakeholder inclusion, it is increasingly difficult to have multi-stakeholder voices heard. Microsoft is certainly among many, many other multi-stakeholder voices that would seem to be relevant to these dialogues, but have been blocked from participating by respective member states amid escalating geopolitical tensions. How can we address this, do you think, such that we can ensure that we have the necessary voices and the inclusive dialogues we need in future conversations, without letting geopolitics play such a weighty role? And Joyce, if you want to start.

Joyce Hakmeh:
Thank you, John. I think this is a very important question: how do we understand our reality and work within the confines of that? Maybe I'll split my answer into two parts, or talk about it through two different lenses. So first of all, there will be new processes, right? We heard about the POA, but outside of cyber, there are processes being established, and in cyber, there are calls for new processes, whether leading to something binding or otherwise. So I think it is very, very important, and this point has been mentioned before, that the starting point ought to be figuring out good modalities for the process. It's much harder to fight for multi-stakeholder participation when you have bad modalities; it's much easier when it's already enshrined in the process from the very beginning. And in that, there has to be transparency. There have to be clear criteria for inclusion, but importantly, clear criteria for exclusion as well. And I think we can perhaps also aim to be a little bit more ambitious than just that, because even if a certain member state can object and can say why it's objecting, this won't really go much further than that. Maybe we should be more ambitious and ask for some sort of formal procedure to resolve disputes when it comes to multi-stakeholder participation. If we believe multi-stakeholderism is the way forward in digital technologies governance, which it should be, then we ought to have it as part and parcel of these processes, rather than something we try to beg for every time. It has to be there, and it has to be unquestioned. But of course, this is a journey, bit by bit, and as you said, we've already had some successes, and we hope to build on that.
The other lens is what we can do with existing processes, in the kind of geopolitical context that you described. I think an important point is that while it is very important to be in the room, and if you're not at the table, you'll be on the menu, that's probably true, it is also important to know that the ability to influence is not just in the room. There's a lot that can be done outside the room, and arguably you can have a better impact outside the room. When we take the floor in the Open-Ended Working Group, they give us three minutes to speak. How much can you influence in three minutes? That's very, very arguable. So combining this with other initiatives outside of the UN processes is extremely important, as is working on that relationship with states on a long-term basis. I talked about the fact that some member states don't understand the value of multi-stakeholders, and I think there is an onus on multi-stakeholders to actually prove through actions what their value is. Charlotte talked about data and research and the importance of that. Capacity building is absolutely important. And then, through actions, member states can understand why multi-stakeholders are valuable, and the champions' circle will expand beyond the current few. I think it is also important to focus not just on the multilateral level; we talked about the national level, but also the regional one. Pablo talked about the OAS and the different kinds of initiatives there, because that also has huge potential for influence. If you can get ECOSOC status, then do, because that will help you overcome a lot of challenges. And maybe a final point: we are working on new, emerging areas, and I think we can't always use just old or existing models to solve new and emerging problems.
I think it's very important to be innovative, to be creative, to think outside the box, particularly since we as a multi-stakeholder community have limited resources. So yes, we might be able to participate in meetings if the door is open, but we might not manage to even if they let us, right? So I think there's also the need to think about how we can do it creatively and differently than the way we do it now.

Pablo Castro:
Thank you. Well, I agree 100% with what has been said, so I'm not sure I can add much more. But thinking about this, while you mentioned modalities and also transparency and the regional aspect, let me come back again to the strategy point. I think we definitely need, and this is the perspective of a government, to work more with stakeholders on how to face this problem, and, again, on how to create our own strategy for doing so. I don't think there has been much dialogue; again, from the perspective of my region, Latin America, I don't think we do a lot, every time we have new meetings of the Open-Ended Working Group or on cybercrime, to take this chance to start this dialogue with stakeholders. We tried to move on this last year with the Dutch initiatives: in Chile, we managed to organize a dialogue with stakeholders and representatives from ministries of foreign affairs in the region, which was really good, to basically discuss the Open-Ended Working Group and the Ad Hoc Committee. But I think we definitely need to work more on a strategy to face this, because the member states that are actually against participation already have a strategy; they have a goal. That's the problem: we are not facing something where they are indifferent; they really have a very clear mission, a goal, to stop this. So I think we don't have, and this is my impression at least, this sort of coordination to say: okay, they have a strategy, they want to do this, how can we actually create the counter-narrative and do more than this? And I agree also with Joyce very much: there is a lot we can do at the margins of all these meetings, especially at the regional level, at the OAS or in Africa, et cetera, which probably offers the best chances to come and meet together and really think about the things that are also important to move on, as you mentioned: capacity building, implementation. Those are the things that in some regions are really critical, really important, and there we can have the chance to work together in that space. Thank you.

John Hering:
Thank you so much, Pablo. You brought up a really good point at the end there that I want to circle back on at some point here, which is sort of what are the opportunities for engagement outside the U.N., and how can sort of cyber diplomats in that community help to facilitate that, and what can others from the nongovernmental community do? But sort of before diving in there, I do want to invite, now that we’re sort of in the latter half of the program, anyone in the room or online who has a question or a comment or other ways to contribute to this conversation and invite them to please take the floor. There are microphones in the aisles here. And there is certainly the chat box online. And Eduardo, if you’re able to come on, maybe you could ask the first question if there is one.

Eduardo:
Sure, John. Maybe actually I’ll pass the floor over to Nick Ashton-Hart, who’s had his hand up and I think wants to make a comment. Go for it, Nick.

Nick Ashton Hart:
Good morning from New York; it's like 2:30 or something here. No, 3, sorry. I wanted to follow up on the point that Joyce made. I agree with everything everyone has said about the value of stakeholders and what we bring to the table; I think we all know that's true. But I think we have to do something about it. Because, just like when women got the vote: they didn't get the vote because those who had the vote decided it would be the right thing for them to get it. They got the vote because they went out and said, you're giving us the vote, right? And made it unavoidable. And I follow a lot of processes at the UN. The cybersecurity processes are frustrating because of this theater of the absurd of applying and then being vetoed. The WTO negotiations on electronic commerce are completely closed to all stakeholders; it's the least open process. So, believe it or not, it's actually somewhat better in the First Committee. But I spend a lot of time with delegates in New York, and I think they're tired of having this stakeholder argument every time a new First Committee process is launched. I know they're tired of it. I think a majority of states think it's a lot of wasted time going on arguing about this. It's the same argument every time. And I do believe that there is appetite to make a set policy on stakeholders that would turn it into more of an administrative process that happens each time a First Committee process is convened, especially related to the internet. And then that would be the end of it. The decision would be taken, we would be able to participate, and that would be that. States would still take decisions, and we would speak last, and all the rest of it. But we would have something more like what we have at the Ad Hoc Committee on cybercrime, where it's really an administrative process. It's not a political process, which is, of course, what it's being turned into. And I think, as stakeholders, that's something we want.
We're going to have to advocate for it. We're going to have to do the legwork on the ground with the delegates and get someone to propose a General Assembly resolution. And I think we would win. I think we would win on votes, if there's voting. It wouldn't be consensus, of course, because the states that don't want us, don't want us, and that's the way it is. I think we would have a clear majority in favor of an administrative process, just because we're right, basically. We're right. But also, even the states who don't care that much one way or the other are tired of fighting about it and wasting a great deal of time arguing over the subject. So I think it would be interesting, if any of the rest of you have thoughts on that, to actually mount a campaign to solve this problem on a horizontal basis, once and for all. Because I think that's the only real way we're going to get a solution. And the honest truth is, states would be far better off if we were around to bug them, because they need a more ambitious agenda when it comes to cybersecurity, really. I mean, if you look at what's on the table to be decided at the OEWG, and you look at what's going on in international cybersecurity, there is a huge gap between the need and what's actually being addressed.

John Hering:
Well taken, Nick, and thank you so much. I will leave it to the panel at the table to see if there's anyone who would like to take up that thought about moving this to an administrative matter as opposed to a political process, and whether or not Nick's read of the appetite has some accuracy and validity to it. I would be happy to try to

Bert:
answer, to respond to Nick's question. It's a good point. You're right that people are very tired of this question, because it comes up again and again, and particularly because it has become such a politicized question; it has been politicized by a number of countries, and it will be difficult all along. I think the idea of a one-size-fits-all, forever resolution of the General Assembly on how stakeholders should participate in such processes is an interesting one. It has to be discussed. I see a number of drawbacks, namely that it's difficult if you set this out without knowing what type of future process it will apply to. I'm not sure we would get the best result. We might get better results on a specific process, under specific circumstances, than with modalities fixed for any future process, where it might become quite narrow, and it might be a difficult process. But it's something to be discussed. My concern is that if it succeeded, and a new process were then set up, we would again have a fight over whether the agreed framework is being applied or whether specific rules have to be decided upon. So we might come back to square one. But it is certainly an urgent matter, and the issue which I think is particularly urgent is that we are discussing here a lot, both in sessions and informally, the upcoming process of negotiating the Global Digital Compact, which is part of a much broader process preparing for the Summit of the Future, where we need the strongest possible multi-stakeholder involvement in the General Assembly process, which is, as I said already, intergovernmental by nature. We have the challenge that basically our key objective out of this process is a reaffirmation of the multi-stakeholder model, and also, in our view, a strong role for the IGF. But then again, to get there, the process must be as multi-stakeholder-oriented as possible, and this will again be an uphill battle.
It's not even clear whether the multi-stakeholder arrangement would be made specifically for the Global Digital Compact negotiations or for the entire process. I would be in favor of doing it specifically for the Global Digital Compact, because there is a better understanding there, whereas, for instance, when you negotiate a New Agenda for Peace, I think there's a sense that states have a much stronger role to play. But we also need to see where countries stand; there are some countries who are very critical, who are opposed to multi-stakeholder involvement for a number of reasons. We also need to do more work on why it is of benefit to all of us. It has become a political issue; for me, it's an issue of expertise, of quality control. I can only say, I come from a country whose capacity in the area of cyber, digital, et cetera, is limited. We benefit a lot from talking to industry partners, academics, experts, et cetera; without them, we can't survive such negotiations. This is where we get ideas and inputs, with a quality check on our ideas, and I'm sure it's the same for others. And it's also therefore important that multi-stakeholder involvement is as inclusive and as representative as possible, because there's also a sense that multi-stakeholder basically means big tech companies sitting at the table. It must be clear that this must be broad, and every effort must be made that it is as inclusive as possible. Thank you.

Joyce Hakmeh:
And maybe if I can add: I agree with everything you said, and I agree with the sentiment behind Nick's message that we need more passion to have this issue resolved, and we need to be a little bit more strategic and have more ambitious plans. But on your point, Bert, about how you benefit from multi-stakeholders' input, I think it also goes both ways, because we benefit also when we speak with governments about what's on their mind and how they're thinking about the different priorities. Sometimes, even if we follow online or are in the room, we might not know what's really going on. So speaking to them is also very valuable to us, because it makes our role much better if we have our fingers on the right pulse. So I just wanted to add that.

John Hering:
Absolutely. Thank you both so much, and thank you again to Nick. Even if we don't have something that's going to be the be-all, end-all, I think moving towards a gold standard of multi-stakeholder inclusion, what in the US we might call the Cadillac of multi-stakeholder inclusion, though noticing that we're in Japan, maybe it's the Lexus of multi-stakeholder inclusion, could be a good framework to work towards. I think we have a question in the room here.

Audience:
Thank you very much to our speakers for such an interesting conversation and discussion. I think I'm just going to point out the elephant in the room. We are talking about multi-stakeholderism, and I was just looking at the representation of different stakeholders on the panel, and I don't see representation from African stakeholders. So I guess my question would be: how involved are African stakeholders in these discussions and debates, and what can they do to improve their participatory role in these discussions? I understand that for government actors, there could be different processes being followed. But with the private sector, academia, civil society, what exactly is being done to increase or improve their participation in discussions like this? And just giving this as an example: we talk about inclusion, but if we are not going to have African voices being part of these discussions, it becomes a bit difficult to understand how we approach multi-stakeholderism. Thank you.

John Hering:
Absolutely. Thank you so much for the question, and I will leave it to those on the table to comment on multi-stakeholder inclusion and participation in the dialogues from across geographic regions and lines of difference.

Joyce Hakmeh:
Yeah, thank you for your question. I’m happy to take a stab. I think you’re absolutely right. We talk about multi-stakeholder participation, but if we look at the composition of the multi-stakeholder groups, it tends to be more sort of Western-dominated. So you’re absolutely right that there’s a need for inclusion also at the regional level: not just bringing in different actors, but actors who represent different regions. And that’s why I talked about the importance of regional efforts, that we don’t put all our focus just on UN processes, because there’s a lot going on at the regional level, at the national level, and the experience of those stakeholders who are very much in the field would be absolutely very, very valuable to the UN processes and beyond. So I think, you know, I definitely agree with you there. And, you know, I think we also need to be honest about how multi-stakeholders coordinate with each other. And I don’t think it’s great either. I think there is definitely room to improve, but as I said, it’s a learning curve on several different fronts. The focus for today is how we work better with governments, but there is also a bigger question around how we work better with each other and how we bring more voices into the debate.

Marie:
Thank you. Thank you for the question. I think, indeed, there is a lot that can still be done, but it’s also a capacity issue, I think. And coming from a developing country, I think it’s even more difficult to dedicate some time to come to New York and to come to those processes. And I think that’s why the initiatives at the national and regional level are so important. And it was mentioned earlier, it’s not only about what you are saying in the room, it’s actually the ongoing discussion that you have with your representative who will go to New York and will represent those points. And having those long-running discussions, not in one go during the Open-Ended Working Group, but really an ongoing discussion where you actually bring to your governments, to the people who will be present in the room negotiating, the arguments that they will need to shape an informed policy that will benefit not only us but everyone, every stakeholder group. And that’s completely part of the entire process. So we have the luxury that we can do it. We also have some diplomats there in different countries who can also have those discussions, not only with our national stakeholders, but also with other stakeholders from other regions. But really, we need that information to take informed policy decisions that we will then bring to those fora. And thank you, Nick, for being a very dedicated stakeholder, still up at 3 a.m. for this discussion. That’s exactly the kind of stakeholder that we need, really dedicated as well. And we understand that it’s a capacity issue too. So wherever you can go, at any level, try to bring your expertise and knowledge so we can take better-informed policy decisions.

John Hering:
I believe Bert and then Charlotte online also asked for the floor, so.

Bert:
Okay, very briefly, sorry. I think a very important point, just two comments. One is, it’s the same challenge also on the government side: how to ensure negotiations that are inclusive. What I noticed, for instance, if you compare the Open-Ended Working Group with the Ad Hoc Committee negotiations: through a number of measures, including that some funding is available for travel, far more countries are represented by experts from capital in the cybercrime negotiations than in the Open-Ended Working Group. And you see that the quality of the discussion is quite different in a way. I mean, I learn a lot from listening to the different perspectives, and that’s extremely positive. The same applies on the multi-stakeholder side. Some initiatives have been taken to facilitate, to provide funding to participate in such meetings, etc., but of course we all agree, even for us, I mean, we sometimes get denied the funding for travel to New York because it’s too expensive to spend two weeks in New York, and I’m sure it’s even worse for multi-stakeholders. And so we have to see what more is possible there to allow participation, because if you have seen it once, then you also understand better how it works. And that’s maybe one of the positive side effects of COVID: there’s a sort of democratization of such multilateral processes. All of this is now hybrid, all of this is streamed, and you can participate much more easily. And again, a key point is, for any government in New York, the position is formulated back in capital, so you need to work with the people in capital so that the people who sit in New York, Geneva, or wherever press the right button or make the right statement. So a lot of the work has to happen at national level in any event. Thank you.

John Hering:
Thank you, and Charlotte, if you’re going to take the floor, please do.

Charlotte Lindsey:
Yes, just very quickly, and I think it’s a really important point, particularly about African representation. It’s something that we tested a year and a half ago, when we invited ambassadors, representatives of the African Union in Geneva, to come for a half-day workshop on all of these processes. There was definitely an appetite; there was representation from most countries of the African Union at ambassador level, so there’s definitely an appetite to engage and to learn more about these processes. And I think it’s also really important to demystify these processes, because we heard feedback, for example, that, oh, well, you know, we specialize more in human rights. Well, actually, a lot of what’s being discussed at the Open-Ended Working Group is about human rights, and so there are very transferable skills. It’s just that sometimes the language is very exclusive, or it is very difficult for people who feel, oh, I haven’t followed these debates for many years, therefore I can’t contribute. And actually what we saw was that there were very key messages and participation possibilities from the representatives of the African Union, who could very easily transfer their skill set into these negotiations. So I think there’s an appetite. We just need to focus much more on the capacity building side.

John Hering:
Thank you all. I think we have two questions I saw in the room. Patrick, were you at the mic a moment ago? And then the young woman over here. And then back over to you, Eduardo, if there’s anyone online after that.

Audience:
Hi, I’m Patrick Pawlak from Carnegie Europe. My question was partly asked and partly answered. So let me use the microphone to push back a bit and get a bit more precise answers. In answering the colleague’s question, many of you said, yes, the engagement with stakeholders at the national and regional level is important and we have to do it more. How exactly do you envisage this? We have three governments on the podium. Could you describe to us how each of your governments engages with your civil society ahead of the open-ended working groups? I know for a fact that we very often talk about this engagement of the multi-stakeholder community, and it happens through the side events during the Open-Ended Working Group sessions or other events there. And very often those meetings are really used as a fig leaf, let’s say, for the lack of engagement at the national level. So if you could share some concrete examples, that would be great. Secondly, speaking about national engagement with civil society, a lot of organizations from many countries around the world will tell you that actually they have no access. It will be easier for Joyce from Chatham House to talk to anybody in the world, to cyber ambassadors, and get the access, than for regional civil society organizations, who are completely ignored, right? So how do we break that sort of ceiling at the national level? And thirdly, I think that this engagement at the national and regional level indeed might be a more sustainable solution, if we really want to create, let’s say, better functioning cyber diplomacy engagement, simply because so many countries in the world actually have this shrinking space for civil society organizations. So by creating opportunities for engagement around cyber issues, we’re also contributing to strengthening the broader ecosystem of civil society organizations. So yes, I agree, but I wonder how you think we could do this in a more specific way. Thank you.

John Hering:
Thank you so much, Patrick. And maybe a question over here as well, and we’ll just sort of take both together.

Audience:
Thank you very much. I’m Larissa Calza, Head of Cybersecurity at the Ministry of Foreign Affairs of Brazil. I would like to, first of all, thank all the panelists for their interventions. And I would have one question building on a point that I believe Charlotte made about the fragmentation of the debate on cybersecurity and how detrimental it was, in the period from 2019 to 2021, to the participation of non-government stakeholders. Well, for Brazil, fragmentation is a huge concern. It is a challenge not only for non-state stakeholders, but also for most developing countries. It’s always difficult to have enough delegates to follow multiple tracks at the UN. And so one question I would have is, well, we’ve spoken a lot about the POA, and in very supportive terms, and Brazil very much supports continuing discussions on the proposal. But it is not a consensus within the UN. We have observed recently a fragmentation between states that support a POA and states that are still very much in favor of starting negotiations on a legally binding instrument, which we nationally feel it is not quite the moment for yet, though we do not oppose the idea of something legally binding. So I guess my question would be: do you see a risk of having this fragmentation once again, given the polarization of positions on the future of institutional dialogue after the OEWG? And second, if the POA is indeed adopted this year, how do we avoid that the OEWG is in a way undermined, or has its discussions emptied, due to a decision on regular institutional dialogue being made two years ahead of the end of its mandate? Thank you very much.

John Hering:
Thank you both so much. I will leave it to you to sort of take the questions in turn or in the order that you’d like. Anyone on the stage would like to hop in? Or of course, Charlotte online.

Joyce Hakmeh:
So I can’t answer the question of what governments are doing to engage civil society. I know my government doesn’t do anything, so, how they should do it, I suppose. But this is of course a very important problem, and we think about it as well, because inclusive governance is one of the strategic priorities for our work at Chatham House. Someone talked about the appetite that exists, and I agree with that. I think there’s a huge appetite. We organized, I think it was last year, a conference in Jordan, and we have a representative from the MFA here, about cyber diplomacy in the Arab region and what their perspectives are. And I was amazed by the turnout, by how much eagerness there was from not just governments, but also non-state stakeholders, to be part of this conversation. But there is of course the issue of, you know, sort of subject matter expertise with these UN processes, and as Charlotte mentioned, it can be a little bit too intimidating, because, I mean, even for us, if I miss one OEWG session, I’m like, I don’t know what’s happening anymore, you know? It’s very hard to stay on top of these very lengthy negotiation processes and feel like you have the expertise to contribute every time in an informed way. And so I guess there is a responsibility on both sides. If we look at the list of accredited organizations to the Cybercrime Convention, which, as Bert mentioned, were all accredited after maybe a little bit of a pushback, I think there were around 160, something like that. But if you look at how many organizations actually participate, I think maybe 20, or maybe a little bit more, consistently participate, although there is the opportunity for online engagement, et cetera. So there is also this: if you want to engage, you need to put in the effort, and that’s very true. But then perhaps, how do we encourage that?
I think maybe, the way governments have been supporting developing states to come to the negotiations, perhaps there could be some funding dedicated to bringing multi-stakeholders more into the debate. And I know, Patrick, you’ve done work on that in the past, and I think more initiatives like this would be extremely important. On the fragmentation point that was mentioned, I think the question was: should we be concerned about fragmentation with new processes? To be honest, I think the fragmentation is already here to a certain extent. Because we engage in both the Open-Ended Working Group and the cybercrime process, you feel that there is this huge desire to keep those different conversations separate. And of course, this one is dealing with international peace and security, that one is dealing with criminal activities. But the reality of cyberspace is that this division is not very clear. Sometimes it’s artificial, and the distinction is not as clear-cut. So there are overlaps that need to be understood. If we take, for example, as you probably know, the Open-Ended Working Group is trying to operationalize this points-of-contact directory, whereby each state will have one organization dedicated to answering requests and helping to de-escalate. So we need to be more conscious about how we make sure that we have the same expectations in the Cybercrime Convention about 24/7 networks and having a point of contact, et cetera.
Of course, they will have different mandates, but, as we know, in a lot of states there will be one agency doing the same role, right, doing sort of cybersecurity stuff but also cybercrime stuff. So we need to be more conscious of where the touch points are, how we understand them, and how we make sure that we have the same expectations about the same things. And I think here, really, multi-stakeholders can play a very big role, right, in bringing those nuances together and talking about them in a clearer way. So that’s my answer.

Pablo Castro:
Thank you. Let me start with the question from Larissa. This is a very good question, by the way, because we have these internal discussions, you know, in our countries, Chile, Brazil, Argentina, et cetera, about the situation we are in right now, as I mentioned before, about this geopolitical context, which is quite difficult. And the problem is not just in the Open-Ended Working Group; it is also in the cybercrime process, and if you go to the discussions on autonomous weapons systems, it seems that we have this sort of fracture, you know, that is already there. So what can we do, basically? One of the reasons we supported the POA, regarding the Open-Ended Working Group, not from the very beginning but from 2021, was because it was action-oriented. By the way, Chile voted against the Open-Ended Working Group back in 2019, as I say, for the anecdote. I can agree with the idea, because it’s something that we definitely need from the perspective of our country and Latin America: action-oriented, you know, focused on capacity building and implementation. We can keep the discussion about international law application, et cetera, but we have very critical needs that we need to somehow address. So that’s the reason why we weren’t against this; we support it. But you’re right that we have this situation, and what can we do now that we have the Open-Ended Working Group, the POA, and, of course, the regional dialogues? And I’m not quite sure that I have the right answer. In a way, I think it’s something connected with what’s going on today, I mean, worldwide: whether this situation is going to stay for the long term, or whether we can reach a point where we can actually establish some consensus.
The discussion is quite frustrating, by the way. In the cybercrime process it is sometimes even impossible to agree on some technical, I mean, practical solutions, because we have this problem, you know. And it’s not so simple, because some states have their own views, principles, and values, and other states are advocating different ones. So it’s a cultural problem, geopolitical problems; maybe in the near future we have different internets, I don’t know. But I agree that this fracture is already there, I mean, and how we can manage this is something that we definitely need to discuss more and see how we can move on. But I agree with you that several states have a lot of concern about how to deal with the process. So it’s a very important matter, I mean, part of our discussions. Regarding Patrick’s questions, always very good questions, very fundamental questions. I have to confess to you, Patrick, that, you know, I see a lot of states that, regarding multi-stakeholders, in our statements, you know, are very clear that we support multi-stakeholder engagement, and so on, and at the very end, you come back to capital and realize that probably I did not do enough, you know, to work with them. That is true. I have to confess, in my case, when I started cyber security at the Ministry of Foreign Affairs ten years ago, I had no idea about this thing. The first time, I think, it was Microsoft, at an event in Singapore, that taught me that Microsoft has a cyber diplomacy approach. I just came back to our Ministry of Foreign Affairs to explain to my bosses, you know, ambassadors, the role of Microsoft in international security, to try to make them understand that. So that’s been quite fascinating. I mean, my background is non-proliferation, arms control. You don’t see that, you know, in other processes.
Sometimes, you know, and I can tell from our reality, and maybe also Latin America’s, there is a lack, you know, of people. You still don’t have too many experts in our Ministries of Foreign Affairs. You have the capacity, you know, to cover one thing or another. In my case, I have to cover cyber security, cybercrime, and many other things. So we would have to have more time, you know, to engage. What I would like to do more with stakeholders is to work on some specific lines of action, you know. Again, the idea of a strategy, you know. When it comes to international law, I mean, we have maybe, you know, some idea to do something, I mean, next year in Chile. When it comes to CBMs, or, I don’t know, with the implementation, IHL, which is something very important, with Switzerland being one of the champions in this, you know, how we can actually work with some specific stakeholders in our regions. It’s something that can be done. It’s sometimes a lack of time, a lack of resources, you know, many things to do back in the capital. But I would mention again, I mean, something that we did with the Netherlands, you know, which is dialogue. I don’t think we have too much dialogue in our regions, you know, where we can. And for that dialogue, something that we agreed with the Dutch Ministry of Foreign Affairs was to invite representatives of the Ministries of Foreign Affairs, not just people from Ministries of Interior or CSIRTs, you know, to bring, I mean, the people in charge of cyber security at the Ministries of Foreign Affairs, to talk and engage with multi-stakeholders, you know, and talk about, you know, the processes we have, you know, at the UN level. And I would add, just keep in mind the important role that regional organizations play in this. Most of our engagement with stakeholders has been thanks to the OAS, and I think in other regions it’s the same.
Chile is now the chair of CICTE, you know, the Inter-American Committee against Terrorism, with the cybersecurity program placed there. So, especially on implementation and CBMs, that is something that we definitely would like to do more on, and engage stakeholders more in this process. But I totally agree with you that what we’re doing right now is maybe not good enough. Thank you.

Marie:
Yeah, there couldn’t be a better advocate for our way of doing stakeholder engagement than you are. But maybe I’ll give you a bit of my background, and how it was when I started working on cyber issues in the Netherlands, a few years back. So at the national level, it was back in the preparation of the GGE and the Open-Ended Working Group in 2019. And back then, I went back to The Hague from Geneva, and we were actually having a consultation with other stakeholders so that, before we entered into those rooms, more or less open, we would be able to have an informed policy position. And I’m not saying we’re doing it enough, and I think that’s one thing, probably not enough, but we’re already trying at that level. The other thing that we are doing is, obviously, those conferences. I think Bert pointed out quite well that the IGF is a place where we can have lots of open discussion. And we should grab those opportunities that we have at the IGF, at our national IGF, and also at the regional IGFs, to talk about those issues that we are facing in the First Committee and grab all the expertise that is there. Because there are so many people around here who know so much more than we do, we really need to grab those opportunities. I’m talking about the IGF, but I can also talk about non-UN forums like RightsCon or the GFCE conference in Accra next month, for example. I think we need to take all those opportunities to really engage ourselves with the stakeholder community as well. I mean, capacity building, we are doing lots of capacity building, and we’re also trying to bring this knowledge about what the First Committee is about, what we’re discussing, what the normative framework is, what our objectives are, really looking into the implementation and what it means for people. So when they are informed, they can also engage.
So, Charlotte, as you said, we need to demystify what’s happening in the First Committee. And I think that’s, on our side, also an effort that we need to make, because it’s already complex for us to understand how those processes work. So for someone who hasn’t been there for so long, or is not engaged every day or in all the discussions, or cannot be engaged in all the discussions, it’s even more complicated. So I think, on our side, we also need to do more on demystifying those processes and explaining what you can bring to those discussions. I have to say, we have the luxury of having a dedicated cyber policy capacity, and we now have 34 cyber diplomats around the world. So we also participate in regional meetings and so on and so forth. And we try to grab all of this, but we also try to share our knowledge and our experience to enable everyone to engage too. And we still have so much to learn, and I’m sure some others have better ways of doing things, but I think it’s about exchanging how we do it and then learning from others what they have been doing, and then we can just improve the way we engage. But it’s true that we have the luxury of having a bit more people, so I’m happy to share, but also really happy to get some feedback on how you would like us to engage with you, because that’s the only way we can make it better.

Bert:
Thank you. Thanks so much. Also, as I mentioned already, for us it’s important; we learn a lot from others, both governments and other multi-stakeholders, and it’s of critical importance. Joyce, you said that many multi-stakeholders were accredited to the Ad Hoc Committee, but not so many make use of it, and that’s of course a challenge, even for governments. I mean, these cybercrime negotiations, this year that’s three times two weeks. It’s a huge investment. It’s difficult to take on, and indeed, if you’re not following it closely, it’s difficult to do so. So that’s a challenge in terms also of resources. And by the way, if I may go back for one second to the Global Digital Compact, which is coming up. Also there, I hope that many multi-stakeholders will make the investment, because it’s important that they do. I was a bit concerned that everyone was invited to provide input and so on, already late last year, until I think March or April this year. And then I think nobody ever heard what happened to the input. And then we had the policy briefs, which I can’t imagine would really somehow be a reflection of the input received. So I also hear, when I talk to people about the process, that there are some who say, well, is it really worthwhile to invest? It’s so difficult anyway, there’s such limited access, and so far our input has not been appreciated. That’s a huge concern to me, because we need multi-stakeholder involvement in the process in order to get the reaffirmation of the multi-stakeholder model as an outcome. So that’s certainly an issue in the Global Digital Compact. Responding to the question of Patrick, how do we involve multi-stakeholders: maybe I start with the Global Digital Compact. There, basically, we used a national IGF to discuss the process, but also to prepare input. So we had both a government input and a multi-stakeholder input, but we used a sort of national IGF for it.
And also, I have to say, we organized a second stakeholder conference on technology security where we basically bring all the people together. Telecoms, they’re hugely interested in how this treaty turns out, because it has some serious implications for them. Some of them also actively participate, and we have regular exchanges with them. So there’s a lot of interest, and we have to make sure that they’re fully aware of it. We are a relatively small country, so the interest is limited, because for most people it’s not clear what’s in it for them. So there we need to mobilize interest, so that they’re fully aware of it. But as I mentioned, we’re working on a national position paper on international law. There we’re now finalizing our government draft, and this one we want to consult on with multi-stakeholders. Soon, we will have very important questions to go before the First Committee with the idea of the POA. I also think it makes a difference in terms of implementation, et cetera. We will hopefully settle the question of multi-stakeholder involvement permanently, at least for that process. But again, also there, the idea certainly would be that we use the current Open-Ended Working Group to discuss in detail how this should be figured out, what the elements should be, and then any sort of configuration would be the follow-up to the Open-Ended Working Group. But we will have to see how it pans out. Negotiations on this are ongoing, but we very much hope that in the end we have an inclusive process and we end up with one mechanism after the Open-Ended Working Group, because also, as we discussed, for any one of us more is difficult to entertain. Thank you so much.

John Hering:
Thank you. Thank you all for this. We’re coming up on time here, so I’m just going to say, Eduardo, is there a question online? Otherwise, I think we’ve exhausted things in the room, and I’ll move to just a final quick lightning round question.

Eduardo:
We do have a question. I wonder if we have time to answer it, but Amir Mokaberi was questioning the legitimacy of companies participating in multi-stakeholder discussions, especially in the field of international law development and norm-making, due to their undemocratic nature, conflicts of interest, and lack of election by citizens. So I wonder if you have a quick response to that. John?

John Hering:
Yeah, maybe I’ll take that one. No, I think it’s a fair question and a fair thing to be concerned about in terms of what is the proper scope and size of private industry engagement in any conversations that relate to governance, whether at a national level or international level. The only thing I’ll say is Microsoft makes products and services that we sell, and that helps to augment the digital domain. And we certainly don’t want to be contributing to a space that is getting increasingly unsafe and more unstable. And so supporting these dialogues is critically important to us as an organization that is a large technology company. But I think we are always clear in that, and we would want to always be as transparent in this as possible, in saying that obviously governments make the decisions here. We don’t. We are, you know, I think together with our other multi-stakeholder partners, pushing for a voice at the table, not a vote. And that seems to be always the proper boundary and limitation there. So I hope that answers that question well enough. And I will just say, in the last couple minutes, a lightning round: if there are non-governmental stakeholders in the room who have not engaged before in any of these processes at the UN, what would be a quick piece of guidance on the way they could be most impactful in helping to support government dialogues on cybersecurity at the UN? Anyone can start.

Marie:
I’ll start because I’m at the back of the table. I’ll be short because we don’t have so much time, but I would say: approach us. We will listen and be there. Provide information, numbers, facts; show the impact of the projects that you’re doing in the different countries, in the different regions; report on what’s happening. That information can only give an added value to the discussion that will happen in the context of the UN. But also, it’s not a one-go: if you start following it, then come back to us and tell us, oh, you did this, but you haven’t yet talked about that or this. And actually, I have to say, I find it very sad that people are saying that they don’t see the impact of what they bring to the table. Because I can say that for some of the outcomes from the Open-Ended Working Group and the GGE report in 2021, there are actually things there that I heard at the beginning of the process, in side events and in discussions that we had with civil society, the private sector, and academia, and they actually made their way into the end report. So it’s a long process. It’s frustrating because it takes time, and you don’t always get everything you would like to see, but it made its way through. So just continue, and hold us accountable for making sure that we are taking the right positions when it comes to those discussions.

Pablo Castro:
Yeah, I would agree with what Marie said. And I also encourage you to approach the states, not just in our conversations in New York, Vienna and other places, but also in capitals. Most of the work we did with stakeholders happened because they approached us, proposing side events. We did a very good one in July on toolkits for the implementation of norms for marginalized stakeholders; that was very interesting, together with other states, Mexico and Colombia. And most of these relationships came about thanks to stakeholders approaching us to propose ideas, to exchange views about, say, the next POA, the open-ended working group, or the cybercrime conversations. So I encourage you to get to know the states. During our meetings in New York and Vienna we had the chance to create even a sort of friendship, you know. That’s one of the things I really like: you share a beer, go dancing, whatever, and then you come back and say, hey, let’s work on something together, or just have some meetings. We were having conversations very much like this with, for example, Microsoft, on cybercrime. I also always appreciate the submission of documents. We probably never really thank stakeholders enough for these really good documents; sometimes we end up using some of their very good ideas without their contribution ever really being recognized. They are very good documents, and we need them for both sides of the conversation. So thank you for that.

John Hering:
We are one minute over, so 20 seconds for everybody else. No, please.

Joyce Hakmeh:
Okay, 20 seconds. I think maybe choose one thing where you think you can contribute value, and don’t try to do everything if you’re new to this, right? If you want an example: the OEWG in July agreed on an annual progress report with a whole load of recommendations, very concrete actions. If you’re a CSO, industry, whatever, and you want to be involved, look at those recommendations and see, can I contribute to one or more of them from my perspective, whether national or regional? Take that as your first step, and gradually you’ll feel that you are more involved. And as Marie mentioned, states are reading and listening, so your input will make a difference.

Bert:
I would fully subscribe to that. Also build partnerships with others. And what I also often notice is that we receive a lot of proposals and ideas, sometimes general, sometimes very specific, and very often you pick certain elements up in a statement, in negotiations, as an argument, etc. But rarely do you write back to the organizations to say: thank you, I used this here or there. So I thought, I want to get better at this, because it’s sometimes difficult to measure impact. Often you might not hear about it, you might have no idea how your input was used, and you might have more impact

John Hering:
than you think you have. Thank you. And Charlotte, if you’re online, the last word, for 20 seconds.

Charlotte Lindsey:
Yes, I am. I would just say very quickly, in terms of engagement, I think what is critical is fact-based framing and the timing of the input, particularly for states, so that they can then take that input and engage. Even if you can’t speak at the table, you can produce that input, but you have to do it in a timely way.

John Hering:
Thank you so much. Thank you all for showing up, especially this late in the day, and to everybody online, especially folks like Nick who are up at 3 in the morning. I really appreciate the engagement and look forward to seeing you all throughout the week here at IGF. And thank you to our panel. Please join me in giving them a big round of applause.

Audience:
I am a computer science major at Georgia Tech. We were talking about multi-stakeholders, and I thought it was really interesting that no one brought up defective use and abuse, and how users should be more involved in the multi-stakeholder process. So I was wondering, at Microsoft, when you’re talking about multi-stakeholder capabilities, how do you involve users? Do you involve users, or what does that look like? Good question. We have to think about what our role is. We’re engaging them, and our goal is to provide the opportunity for them to be a part of it and say, hey, we have this new gadget that we want.

Speaker              Speech speed            Speech length   Speech time
Audience             141 words per minute    1357 words      576 secs
Bert                 203 words per minute    3094 words      914 secs
Charlotte Lindsey    171 words per minute    1493 words      523 secs
Eduardo              170 words per minute    104 words       37 secs
John Hering          211 words per minute    2806 words      797 secs
Joyce Hakmeh         204 words per minute    3109 words      913 secs
Marie                184 words per minute    2146 words      699 secs
Nick Ashton Hart     187 words per minute    662 words       213 secs
Pablo Castro         198 words per minute    3043 words      923 secs
Speaker              191 words per minute    124 words       39 secs

Searching for Standards: The Global Competition to Govern AI | IGF 2023

Full session report

Michael Karanicolas

During a session on AI governance, organized by the School of Law and the School of Engineering at UCLA, the Yale Information Society Project, and the Georgetown Institute for Technology, Law and Policy, Michael Karanicolas hosted a discussion on the development of new regulatory trends around the world. The focus was on major regulatory blocks such as China, the US, and the EU, and their influence on AI development globally.

The session aimed to explore the tension between the rule-making within these major regulatory blocks and the impacts of AI outside of this privileged minority. It recognized their dominant position and sought to understand their global influence in shaping AI governance. The discussion highlighted the need to recognize the power dynamics at play and ensure that the regulatory decisions made within these blocks do not ignore the wider issues and potential negative ramifications for AI development on a global scale.

Michael Karanicolas encouraged interactive participation from the audience, inviting comments and engagement from all present. He stressed the importance of active participation over passive listening, fostering an environment that encouraged inclusive and thoughtful discussions.

The speakers also delved into the globalised nature of AI and the challenges posed by national governments in regulating it. As AI consists of data resources, software programs, networks, and computing devices, it operates within globalised markets. The internet has enabled the rapid distribution of applications and data resources, making it difficult for national governments to control and regulate the development of AI effectively. The session emphasised that national governments alone cannot solve the challenges and regulations of AI, calling for partnerships and collaborative efforts to address the global nature of AI governance.

Another topic of discussion revolved around the enforcement of intellectual property (IP) rights and privacy rights in the online world. It was noted that the enforcement of IP rights online is significantly stronger compared to the enforcement of privacy rights. This discrepancy is seen as a result of the early prioritisation of addressing harms related to IP infringement, while privacy rights were not given the same level of attention in regulatory efforts. The session highlighted the need to be deliberate and careful in selecting how harms are understood and prioritised in current regulatory efforts to ensure a balance between different aspects of AI governance.

Engagement, mutual learning, and sharing of best practices were seen as crucial in the field of AI regulation. The session emphasised the benefits of these collaborative approaches, which enable regulators to stay updated on the latest developments and challenges in AI governance. It also emphasised the importance of factoring local contexts into regulatory processes. A one-size-fits-all approach, where countries simply adopt an EU or American model without considering their unique circumstances, was deemed problematic. It was concluded that for effective AI regulation, it is essential to develop regulatory structures that fit the purpose and are sensitive to the local context.

In conclusion, the session on AI governance hosted by Michael Karanicolas shed light on the influence of major regulatory blocks on AI development globally. It emphasised the need for inclusive and participatory approaches in AI governance and highlighted the challenges posed by national governments in regulating AI. The session also underscored the need for a balanced approach to prioritise different aspects of AI governance, including intellectual property rights and privacy rights. The importance of engagement, mutual learning, and the consideration of local contexts in regulatory processes were also highlighted.

Tomiwa Ilori

AI governance in Africa is still in its infancy, with at least 466 policy and governance items referred to in the African region. However, there is currently no major treaty, law, or standard specifically addressing AI governance in Africa. Despite this, some countries in Africa have already taken steps to develop their own national AI policies. For instance, countries like Mauritius, Kenya, and Egypt have established their own AI policies, indicating the growing interest in AI governance among African nations.

Interest in AI governance is not limited to governments alone. Various stakeholders in Africa, including multilateral organizations, publicly funded research institutions, academia, and the private sector, are increasingly recognizing the importance of AI governance. This indicates a collective recognition of the need to regulate and guide the development and use of artificial intelligence within the region. In fact, the Kenyan government has expressed its intention to pass a law aimed at regulating AI systems, further demonstrating the commitment towards responsible AI governance in Africa.

However, the region often relies on importing standards rather than actively participating in the design and development of these standards. This makes African nations more vulnerable and susceptible to becoming pawns or testing grounds for potentially inadequate AI governance attempts. This highlights the need for African nations to actively engage in the process of shaping AI standards rather than merely adapting to standards set by external entities.

On a positive note, smaller nations in Africa have the potential to make a significant impact by strategically collaborating with like-minded initiatives. International politics often stifle the boldness of smaller nations, but when it comes to AI governance, smaller nations can leverage partnerships and collaborations to amplify their voices and push for responsible AI practices. By working together with others who share similar goals and intended results, the journey towards achieving effective AI governance in Africa could be expedited.

In conclusion, AI governance in Africa is still in its early stages, but the interest and efforts to establish responsible AI policies and regulations are steadily growing. While there is currently no major treaty or law specifically addressing AI governance in Africa, countries like Mauritius, Kenya, and Egypt have already taken steps to develop their own national AI policies. Moreover, various stakeholders, including governments, multilateral organizations, academia, and the private sector, are recognizing the significance of AI governance in Africa. Despite the challenges that smaller nations in Africa may face, strategic collaborations and partnerships can empower them to actively shape the future of AI governance in the region.

Carlos Affonso Souza

In Latin America, several countries, including Argentina, Brazil, Colombia, Peru, and Mexico, are actively engaging in discussions and actions related to the governance and regulation of Artificial Intelligence (AI). This reflects a growing recognition of the need to address the ethical implications and potential risks associated with AI technology. The process of implementing AI regulation typically involves three stages: the establishment of broad ethical principles, the development of national strategies, and the enactment of hard laws.

However, different countries in Latin America are at varying stages of this regulatory process, which is influenced by their unique priorities, approaches, and long-term visions. Each country has its specific perspective on how AI will drive economic, political, and cultural changes within society. Accordingly, they are implementing national strategies and specific regulations through diverse mechanisms.

One of the challenges in regulating AI in the majority world lies in the nature of the technology itself. AI can often be invisible and intangible, making it difficult to grasp and regulate effectively. This creates a need for countries in the majority world to develop their own regulations and governance frameworks for AI.

Moreover, these countries primarily serve as users of AI applications rather than developers, making it even more crucial to establish regulations that address not only the creation but also the use of AI applications. This highlights the importance of ensuring that AI technologies are used ethically and responsibly, considering the potential impact on individuals and society.

Drawing from the experience of internet regulation, which has dealt with issues such as copyright, freedom of expression, and personal data protection, can provide valuable insights when considering AI regulation. The development of personal data protection laws and decisions on platform liability are also likely to significantly influence the shape of AI regulation.

Understanding the different types of AI and the nature of the damages they can cause is essential for effective regulation. It is argued that AI should not be viewed as purely autonomous or dumb but rather as a tool that can cause both harm and profit. Algorithmic decisions are not made autonomously or unawarely but rather reflect biases in design or fulfill their intended functions.

Countries’ motivations for regulating AI vary. Some view it as a status symbol of being future-oriented, while others believe it is important to learn from regulation efforts abroad and develop innovative solutions tailored to their own contexts. There is a tendency to adopt European solutions for AI regulation, even if they may not function optimally. This adoption is driven by the desire to demonstrate that efforts are being made towards regulating AI.

In conclusion, Latin American countries are actively engaging in discussions and actions to regulate AI, recognizing the need to address its ethical implications and potential risks. The implementation of AI regulation involves multiple stages, and countries are at different phases of this process. Challenges arise due to the intangible nature of AI, which requires countries to create their own regulations. The use of AI applications, as well as the type and nature of damages caused by AI, are important considerations for regulation. The experience of internet regulation can provide useful insights for AI regulation. The motivations for regulating AI vary among countries, and there is a tendency to adopt European solutions. Despite the shortcomings of these solutions, countries still adopt them to show progress in AI regulation.

Irakli Khodeli

The UNESCO recommendation on AI ethics has become a critical guide for global AI governance. It was adopted two years ago by 193 member states, demonstrating its widespread acceptance and importance. The principles put forward by UNESCO are firmly rooted in fundamental values such as human rights, human dignity, diversity, environmental sustainability, and peaceful societies. These principles aim to provide a solid ethical foundation for the development and deployment of AI technologies.

To ensure the practical application of these principles, UNESCO has operationalized them into 11 different policy contexts. This highlights the organization’s commitment to bridging the gap between theoretical principles and practical implementation. By providing specific policy contexts, UNESCO offers concrete guidance for governments and other stakeholders to incorporate AI ethics into their decision-making processes.

One of the key arguments put forth by UNESCO is that AI governance should be grounded in gender equality and environmental sustainability. The organization believes that these two aspects are often overlooked in global discussions on AI ethics and governance. By highlighting the need to disassociate gender discussions from general discrimination discussions and emphasising environmental sustainability, UNESCO aims to bring attention to these crucial issues.

Furthermore, UNESCO emphasises the significant risks posed by AI, ranging from benign to catastrophic harms. The organization argues that these risks are closely intertwined with the pillars of the United Nations, such as sustainable development, human rights, gender equality, and peace. Therefore, global governance of AI is deemed critical to avoid jeopardizing other multilateral priorities.

While global governance is essential, UNESCO also recognises the significant role of national governments in AI governance. Successful regulation and implementation of AI policies ultimately occur at the national level. It is the responsibility of national governments to establish the necessary institutions and laws to govern AI technologies effectively. This highlights the importance of collaboration between national governments and international organisations like UNESCO.

In terms of regulation, it is evident that successful regulation of any technology, including AI, requires a multi-layered approach. Regulatory frameworks must exist at different levels (global, regional, national, and even sub-national) to ensure comprehensive and effective governance. The ongoing conversation at the United Nations revolves around determining the appropriate regulatory mechanisms for AI. Regional organisations such as the European Union, African Union, and ASEAN already play significant roles in AI regulation. Meanwhile, countries themselves are indispensable in enforcing regulatory mechanisms at the national level.

To achieve coordination and compatibility between different layers of regulation, various stakeholders, including the UN, European Union, African Union, OECD, and ASEAN, are mentioned as necessary participants. The creation of a global governance mechanism is advocated to ensure interoperability and coordination among different levels of regulation, ultimately facilitating effective AI governance on a global scale.

Additionally, bioethics is highlighted as a concrete example of how a multi-level governance model can function successfully. UNESCO’s Universal Declaration on Bioethics and Human Rights, along with the Council of Europe’s Oviedo Convention, serve as global and regional governance examples, respectively. These principles are then translated into binding regulations at the country level, further supporting the notion that a multi-level approach can be effective in governing complex issues like AI ethics.

In conclusion, the UNESCO recommendation on AI ethics provides crucial guidance for global AI governance. By grounding AI ethics in fundamental values, providing specific policy contexts, and emphasising the importance of gender equality and environmental sustainability, UNESCO aims to ensure that AI technologies are developed and deployed responsibly. This requires collaboration between international organisations, national governments, and other stakeholders to establish regulatory frameworks at different levels. Ultimately, a global governance mechanism is advocated to coordinate and ensure compatibility between these levels of regulation.

Kyoko Yoshinaga

Japan takes a soft law approach to AI governance, using non-binding international frameworks and principles for AI R&D. These soft laws guide Japanese companies in developing their own AI policies, ensuring flexibility and adaptation. Additionally, Japan amends sector-specific hard laws to enhance transparency and fairness in the AI industry. Companies like Sony and Fujitsu have already developed AI policies, focusing on responsible AI as part of corporate social responsibility and ESG practices. Publicly accessible AI policies are encouraged to promote transparency and accountability. Japan also draws on existing frameworks, such as the Information Security Governance Policy Framework, to establish robust AI governance. Each government should tailor AI regulations to their own context, considering factors like corporate culture and technology level. Hard laws on AI risks may be dangerous due to their varying nature, and personal data protection laws are essential for addressing privacy concerns with AI.

Simon Chesterman

The analysis of the given text reveals several key points regarding AI regulation and governance. Firstly, it is highlighted that jurisdictions are wary of both over-regulating and under-regulating AI. Over-regulation, especially in smaller jurisdictions like Singapore, might cause tech companies to opt for innovation elsewhere. On the other hand, under-regulation may expose citizens to unforeseen risks. This underscores the need for finding the right balance in AI regulation.

Secondly, it is argued that a new set of rules is not necessary to regulate AI. The text suggests that existing laws are capable of effectively governing most AI use cases. However, the real challenge lies in the application of these existing rules to new and emerging use cases of AI. Despite this challenge, the prevailing sentiment is positive towards the effectiveness of current regulations in governing AI.

Thirdly, Singapore’s approach to AI governance is highlighted. The focus of Singapore’s AI governance framework is on human-centrality and transparency. Rather than creating new laws, Singapore has made adjustments to existing ones to accommodate AI, such as changing the Road Traffic Act to allow for the use of autonomous vehicles. This approach reflects Singapore’s commitment to ensuring human-centrality and transparency in AI governance.

Additionally, it is mentioned that the notion of AI not being biased is covered under anti-discrimination laws. This highlights the importance of ensuring that AI systems are not prejudiced or discriminatory, in alignment with existing laws.

The text also emphasises the need for companies to police themselves regarding AI regulations. Singapore has released a tool called AI Verify, which assists organizations in self-regulating their AI standards and evaluating if further improvements are needed. This self-regulation approach is viewed positively, highlighting the responsibility of companies in ensuring ethical and compliant AI practices.

Furthermore, the text acknowledges that smaller jurisdictions face challenges when it comes to AI regulation. These challenges include deciding when and how to regulate and addressing the concentration of power in private hands. These issues reflect the delicate balance that smaller jurisdictions must navigate to effectively regulate AI.

The influence of Western technology companies on AI regulations is another notable observation. The principles of AI regulation can be traced back to these companies, and public awareness and concern about the risks of AI have been triggered by events like the Cambridge Analytica scandal. This implies that the regulations of AI are being influenced by the practices and actions of primarily Western technology companies.

Regulatory sandboxes, particularly in the fintech sector, are highlighted as a useful technique for fostering innovation. The Monetary Authority of Singapore has utilized regulatory sandboxes to reduce risks and enable testing of new use cases for AI in the fintech sector.

In terms of balancing regulation and innovation, the text emphasizes the need for a careful approach. The Personal Data Protection Act in Singapore aims to strike a balance between users’ rights and the needs of businesses. This underscores the importance of avoiding excessive regulation that may drive innovation elsewhere.

Furthermore, the responsibility for the output generated by AI systems is mentioned. It is emphasized that accountability must be taken for the outcomes and impact of AI systems. This aligns with the broader goal of achieving peace, justice, and strong institutions.

In conclusion, the text highlights various aspects of AI regulation and governance. The need to strike a balance between over-regulation and under-regulation, the effectiveness of existing laws in governing AI, and the importance of human-centrality and transparency in AI governance are key points. It is also noted that smaller jurisdictions face challenges in AI regulation, and the influence of Western technology companies is evident. Regulatory sandboxes are seen as a useful tool, and the responsibility for the output of AI systems is emphasized. Overall, the analysis provides valuable insights into the complex landscape of AI regulation and governance.

Audience

During the discussion on regulating artificial intelligence (AI), several key challenges and considerations were brought forward. One of the main challenges highlighted was the need to strike a balance in regulating generative AI, which has caused disruptive effects. This task proves to be challenging due to the complex nature of generative AI and its potential impact on multiple sectors. It was noted that the national AI policy of Pakistan, for example, is still in the draft stage and is open for input from various stakeholders.

Another crucial consideration is the measurement of risks associated with AI usage. The speaker from the Australian National Science Agency emphasized the importance of assessing the risks and trade-offs involved in AI applications. There was a call for an international research alliance to explore how to effectively measure these risks. This approach aims to guide policymakers and regulators in making informed decisions about the use of AI.

The discussion also explored the need for context-based trade-offs in AI usage. One example provided was the case of face recognition for blind people. While blind individuals desire the same level of facial recognition ability as sighted individuals, legislation that inhibits the development of face recognition for blind people due to associated risks was mentioned. This highlights the need to carefully consider the trade-offs and context-specific implications of AI applications.

The global nature of AI was another topic of concern. It was pointed out that AI applications and data can easily be distributed globally through the internet, making it difficult for national governments alone to regulate AI effectively. This observation indicates the necessity of international collaboration and partnerships in regulating AI in order to mitigate any potential risks and ensure responsible use.

The impact of jurisdiction size on regulation was also discussed. The example of Singapore’s small jurisdiction size potentially driving businesses away due to regulations was mentioned. However, it was suggested that Singapore’s successful publicly-owned companies could serve as testing grounds for regulation implementation. This would allow for experimentation and learning about what works and what consequences may arise.

Data governance and standard-setting bodies were also acknowledged as influential in AI regulation. Trade associations and private sector standard-setting bodies were highlighted for their significant role. However, it was noted that these structures can sometimes work at cross-purposes and compete, potentially creating conflicts. This calls for a careful consideration of the interaction between different bodies involved in norm-setting processes.

The issue of data granularity in the global South was raised, highlighting a potential risk for AI. It was noted that the global South might not have the same fine granularity of data available as the global North, which may lead to risks in the application of AI. This disparity emphasizes the need to address power dynamics between the global North and South to ensure a fair and equitable AI practice.

Several arguments were made regarding the role of the private sector in AI regulation and standard-setting. The host called for private sector participation in the discussion, recognizing the importance of their involvement. However, concerns were expressed about potential discrimination in AI systems that learn from massive data. The shift in AI learning from algorithms in the past to massive data learning today raises concerns about potential biases and discrimination against groups that do not produce a lot of data for AI to learn from.

The speakers also emphasized the importance of multi-stakeholder engagement in regulation and standard-setting. Meaningful multi-stakeholder processes were deemed necessary for crafting effective standards and regulations for AI. This approach promotes inclusivity and ensures that various perspectives and interests are considered.

Current models of AI regulation were criticized for being inadequate, with companies sorting themselves into risk levels without comprehensive assessment. Such models were seen as box-ticking exercises rather than effective regulation measures. This critique underscores the need for improved risk assessment approaches that take into account the nuanced and evolving nature of AI technologies.

A rights-based approach focused on property rights was argued to be crucial in AI regulation. New technologies, such as AI, have created new forms of property, raising discussions around ownership and control of data. Strict definitions of digital property rights were cautioned against, as they might stifle innovation. Striking a balance between protecting property rights and fostering a dynamic AI ecosystem is essential.

The importance of understanding and measuring the impact of AI within different contexts was highlighted. The need to define ways to measure AI compliance, performance, and trust in AI systems was emphasized. It was suggested that pre-normative standards could provide a helpful framework but acknowledged the lengthy time frame required for their development and establishment as standards.

Collaboration with industry was deemed essential in the regulation of AI. Industry was seen as a valuable source of resources, case studies, and knowledge. The mutual benefit between academia and industry in research and development efforts was acknowledged, emphasizing the significance of partnerships for effective regulation and innovation.

In conclusion, the discussion on regulating AI delved into various challenges and considerations. Striking a balance in the regulation of generative AI, measuring risks associated with AI usage, addressing context-specific trade-offs, and promoting multi-stakeholder engagement were key points raised. The impact of data granularity, power dynamics, and the role of the private sector were also highlighted. Observations were made regarding the inadequacy of current AI regulation models, the need for a rights-based approach focused on property rights, and the importance of understanding and measuring the impact of AI within different contexts. Collaboration with industry was emphasized as crucial, and various arguments and evidence were presented throughout the discussion to support these points.

Courtney Radsch

In the United States, there is a strong focus on developing frameworks for the governance and regulation of artificial intelligence (AI). The White House Office of Science and Technology Policy is taking steps to create a blueprint for an AI Bill of Rights, which aims to establish guidelines and protections for the responsible use of AI. The National AI Commission Act is another initiative that seeks to promote responsible AI regulation across various government agencies.

Furthermore, several states in the US have already implemented AI legislation to address the growing impact of AI in various sectors. This reflects a recognition of the need to regulate and govern AI technologies to ensure ethical and responsible practices.

However, some argue that the current AI governance efforts are not adequately addressing the issue of market power held by a small number of tech giants, namely Meta (formerly Facebook), Google, and Amazon. These companies dominate the AI foundation models and utilize aggressive tactics to acquire and control independent AI firms. This dominance extends to key cloud computing platforms, leading to self-preference of their own AI models. Critics believe that the current market structure needs to be reshaped to eliminate anti-competitive practices and foster a more balanced and competitive environment.

Another important aspect highlighted in the discussion is the need for AI governance to address the individual components of AI. This includes factors like data, computational power, software applications, and cloud computing. Current debates on AI governance mostly focus on preventing harm and exploitation, but fail to consider these integral parts of AI systems.

The technical standards set by tech communities also come under scrutiny. While standards like HTTP, HTTPS, and robots.txt have been established, concerns have been raised regarding the accumulation of rights-protected data by big tech companies without appropriate compensation. These actions have significant political and economic implications, impacting other industries and limiting the overall fairness of the system. It is argued that a more diverse representation in the tech community is needed to neutralize big tech’s unfair data advantage.
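For readers unfamiliar with the convention discussed here: robots.txt is a plain-text file in which a site states, per crawler, what may be fetched; compliance is voluntary, which is central to the compensation concern above. The snippet below is purely illustrative and not from the session (the crawler names are real, publicly documented user agents, but the policy shown is an invented example):

```
# Illustrative robots.txt: opt out of known AI-training crawlers,
# while leaving the site open to other crawlers such as search indexers.

User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
```

Because nothing technically enforces these directives, a crawler that ignores them faces no protocol-level barrier, which is why the session framed the issue as one of market power rather than standards alone.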

The notion of unfettered innovation is challenged, as some argue that it may not necessarily lead to positive outcomes. The regulation of AI should encompass a broader set of policy interventions that prioritize the public interest. A risk-based approach to regulation is deemed insufficient to address the complex issues associated with AI.

The importance of data is emphasized, highlighting that it extends beyond individual user data, encompassing environmental and sensor data as well. The control over and exploitation of such valuable data by larger firms requires careful consideration and regulation.

A notable challenge highlighted is the lack of oversight of powerful companies, particularly for non-EU researchers due to underfunding. This raises concerns about the suppression or burying of risky research findings by companies conducting their own risk assessments. It suggests the need for independent oversight and accountability mechanisms to ensure that substantial risks associated with AI are properly addressed.

In conclusion, the governance and regulation of AI in the United States are gaining momentum, with initiatives such as the development of an AI Bill of Rights and state-level legislation. However, there are concerns regarding the market power of tech giants, the need to focus on individual components of AI, the political and economic implications of technical standards, the lack of diversity in the tech community, and the challenges of overseeing powerful companies. These issues highlight the complexity of developing effective AI governance frameworks that strike a balance between promoting innovation, protecting the public interest, and ensuring responsible and ethical AI practices.

Session transcript

Michael Karanicolas:
Hi, Simon. How are you? Can you hear us? I can indeed. Great to see you also. And we can hear you. You can hear me? Just give me a thumbs up if you can. Can you hear? I can hear you. I’m not sure if I’m coming through on your side. Welcome. So just off the top, I want to invite folks that are sitting in the back to come join us at the table. We want this to be as interactive as possible. So please don’t be shy. It’s OK if you’re doing your emails. We won’t judge. We just want people to be participating in the conversation as opposed to, you know, 75 or 90 minutes of us talking at you. Welcome to today’s session, Searching for Standards: The Global Competition to Govern AI. My name is Michael Karanicolas. I’m the executive director of the UCLA Institute for Technology, Law and Policy, which is a collaboration between the School of Law and the School of Engineering. And this session is co-organized with the Yale Information Society Project and the Georgetown Institute for Technology Law and Policy. Our objective today is to foster a conversation on the development of new regulatory trends around the world, particularly through the influence of a few major regulatory blocs, namely China, the U.S. and the EU, whose influence is increasingly being felt globally, and the tension between rulemaking within these centers of power and the impacts of AI as they’re being felt outside of this privileged minority. As part of that conversation, we have a fantastic set of panelists. We’re not going to be setting aside a specific time at the end for Q&A. Rather, we’re hoping to run this session more as an inclusive conversation. So what that means is that after an initial round of short three-minute interventions from each of our panelists, strictly policed three minutes, we’ll have a set of discussion questions. 
And for each of those discussion questions, after a couple of interventions from our panel, we’re going to be inviting interventions and comments from the rest of you to engage on these questions as well. So please, again, for those of you who are just joining us in, come join us here at the table and participate. So without further ado, let’s kick things off with a set of short introductory comments from our panelists to discuss trends in AI governance related to their region and area of specialization. Out of deference to our wonderful host country, I’m going to start with Kyoko Yoshinaga, who is a project associate professor of the Graduate School of Media and Governance at Keio University and also an expert at GPAI’s Future of Work Working Group. Kyoko.

Kyoko Yoshinaga:
Thank you, Michael. Welcome to Japan. I’m Kyoko in Kyoto. Okay. So let me, first of all, give you a brief overview of AI regulations in Japan. Japan adopts a soft-law approach to AI governance horizontally, while revising some sector-specific laws. It’s not really known worldwide that Japan took the lead in introducing principles for AI research and development designed to guide related G7 and OECD discussions. In 2016, the then Internal Affairs Minister, Minister Takaichi, proposed eight AI R&D principles, transparency, controllability, safety, security, privacy, ethics, user assistance, and accountability, as a non-binding international framework, which was agreed by participating G7 and OECD countries. And it contributed to the OECD’s AI principles. Japan has the Social Principles of Human-Centric AI, which were developed by a Cabinet Office council as principles for implementing AI in an AI-ready society. There are seven principles to which society, especially state legislative and administrative bodies, should pay attention: human-centricity; education and literacy; privacy; ensuring security; fair competition; fairness, accountability, and transparency; and innovation. Then we have the AI R&D Guidelines, made in 2017, which added collaboration to the eight AI R&D principles I mentioned earlier, for developers and business operators of AI. And we also have the AI Utilization Guidelines, which consist of ten principles to address the dangers associated with AI systems, and which were also developed by the Ministry of Internal Affairs and Communications, but these were for the developers, users, and data providers of AI. These user-perspective guidelines were made because AI may change its implications and output continuously by learning from data in the course of its use. 
Also, we have the Governance Guidelines for Implementation of AI Principles, issued by the Ministry of Economy, Trade and Industry, which guide how to analyze the risks associated with AI system implementations and offer some examples to help organizations adopt the suggested principles. So these non-regulatory, non-binding soft laws are used by prominent Japanese companies to develop their AI policies and communicate them to external parties. As for the sector-specific side, I won’t go into detail right now, but Japan is amending sector-specific hard laws, such as the Act on Improving Transparency and Fairness of Digital Platforms and the Financial Instruments and Exchange Act, which require businesses to take appropriate measures and disclose information about risks. Also, for doctors, there is a notification from the ministry that the doctor bears the responsibility for the final decision in treatment that uses AI. So we have a soft-law approach at the horizontal level, combined with some hard-law approaches revising existing laws. Thank you.

Michael Karanicolas:
Let’s go next to Carlos Affonso Souza, the Director of the Institute for Technology and Society of Rio de Janeiro and a professor at Rio de Janeiro State University Law School.

Carlos Affonso Souza:
So thanks, Michael. It’s a pleasure to be here among friends to discuss this very important topic of how we think about regulation of AI in the region. So in this brief introduction, just to say that even though national AI strategies do end up sharing a common language, different states will, of course, have different priorities, different approaches, and different long-term visions about how AI will end up producing relevant economic, political and cultural changes in society. And especially when we look at the region, and by region I mean Latin America, we see different countries looking into the issue of governance and regulation of AI; Argentina, Brazil, Colombia, Peru and Mexico have all been very active in this discussion. But one thing that I would like to pinpoint, which we can see right now in the region, and which I think we might scale up to a discussion across different regions, is how we’re moving through almost a three-step process, in which it all began with very broad ethical principles about AI, which turned into a second phase in which different countries designed their different national strategies 
to think about AI. And now it seems we are in this third phase, in which different countries are actually regulating AI through hard law, through different mechanisms. And that is, I think, one of the greatest moments for us to take a look at, especially because governance and regulation is itself a form of technology, and we need to understand how we are approaching those different topics concerning the future of AI, making sure that regulation and governance are appropriate to deal with the challenges that we are facing right now, and at the same time come up with solutions that could be future-proof in terms of the challenges we are going to face going forward. So I’ll just stop here in this very brief introduction, just to provide this quick look at the region, seeing different countries going through these different stages of thinking about national strategies, regulation, and governance tools for AI. So thanks, Michael.

Michael Karanicolas:
Perfect. Courtney Radsch is the director of the Center for Journalism and Liberty at the Open Markets Institute and a member of the IGF’s Multistakeholder Advisory Group.

Courtney Radsch:
Thank you so much. So in the United States, the focus right now is on creating frameworks for figuring out what governance of AI should look like and what regulation should look like. And I think one of the challenges is that we talk about AI as if it is a brand-new thing, without actually thinking about its components and breaking down what exactly it is we mean by AI, including the infrastructure, data, cloud computing, computational power, as well as decision-making. So right now, a few of the major regulatory or standard-setting initiatives include the Blueprint for an AI Bill of Rights by the White House Office of Science and Technology Policy, which is mainly focused on risk management and mitigation. It includes a set of five principles and associated practices designed to help guide the design, use, and deployment of automated systems. These are automated decision-making systems, so again, only one small component of AI, and the principles are designed to protect the rights of the American public in this age through safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives, considerations, and fallbacks. It is intended to inform policy decisions and guide regulatory agencies and rulemaking, but it is non-binding. The OSTP is soliciting input to develop a comprehensive national AI strategy, and it is focused on promoting fairness and transparency in AI. Meanwhile, the National AI Commission Act, a proposal that would create a 20-member multi-stakeholder commission to explore AI regulation within the federal government itself, is focused on responsible AI, specifically how the responsibility for regulation is distributed across agencies, their capacity to address regulatory challenges, alignment among enforcement actions, and a binding risk-based approach, much like the EU, I would say. So there is support for the creation of a new federal agency dedicated to regulating AI, which could include licensing activities for AI technology, although there are alternative views which hold that some of this regulatory expertise should be embedded within each individual agency. There is also, at the federal level, the Safe Innovation Framework, which sets priorities for AI legislation, focusing on security, accountability, protecting foundations, and explainability, as well as a proposed privacy bill, the American Data Privacy and Protection Act, which would set out rules for AI, including, again, risk assessment obligations. The federal agencies are providing guidance to regulated entities. So, for example, the FTC is regulating deceptive and unfair practices attributed to AI, and agencies are increasingly using their antitrust authority to look at whether they can break up some companies. I’d also just add that at least nine states have enacted AI legislation, with another 11 having proposed legislation. And we need, I think, to look at competition interventions as well, which are not yet part of the regulatory landscape but are occurring through some court cases happening alongside these regulatory standard-setting initiatives. Thank you.

Michael Karanicolas:
So we have three fantastic panelists in the Zoom as well. Let’s go first to Simon Chesterman, who is the David Marshall Professor and Vice Provost for Educational Innovation at the National University of Singapore, as well as the Senior Director of AI Governance at AI Singapore.

Simon Chesterman:
Thanks so much, and I’m sorry not to be there in person. But coming from the Singapore and Southeast Asian perspective, I think one of the challenges that every jurisdiction is facing is that we’re wary both of under-regulating and of over-regulating. Under-regulate and you expose your citizens to risk; over-regulate and, particularly for small jurisdictions, you risk driving innovation elsewhere. And so when the European Union adopts the AI Act, Meta might determine that it’s not going to roll out Threads there, but it’s not going to withdraw from that market completely. If a very small jurisdiction like Singapore adopted something like that, it might lead some of the tech companies to opt out of that jurisdiction completely. So that’s one of the sort of baseline considerations that I think is operative here. A second consideration is that in these discussions, certainly over the last eight or so years, the tendency has been to try and come up with new sets of rules, very much like Isaac Asimov’s laws of robotics, that will address this problem of AI. But as Courtney just said, AI is not that new, and indeed laws are not that new. And I think that kind of approach often misunderstands the problem as both too hard and too easy. Too hard in that it assumes that you’ve got to come up with entirely new rules, whereas a lot of my own work has been essentially arguing that most laws can govern most AI use cases most of the time. But that approach also misunderstands the problem as being too easy, because I think it fails to understand that the real devil is in the application, in the application of rules to new use cases. So in Singapore’s context, rather than coming up with a whole slew of new laws, we have had some tweaks. So for example, the Road Traffic Act had to be adjusted so that leaving a vehicle unattended wasn’t necessarily a crime, which would be a problem for autonomous vehicles, and so on. 
But at the larger level, what we’ve really focused on is two things: human centricity and transparency. And the majority of the Model AI Governance Framework that was adopted here back in 2019 is looking at use cases, at what this actually means in practice. Because saying that AI shouldn’t be biased is merely repeating anti-discrimination laws. Discrimination should be illegal, whether it’s done by a person, a company, or by a machine. But applying that to particular use cases can be a challenge. So recently, Singapore released AI Verify, which is a tool intended to help organizations police themselves and determine whether or not they’re actually holding themselves up to the standards that they’ve been espousing and whether more work needs to be done. So I’m looking forward to a really interesting discussion, but I’ll hand the time back to the chair. Thank you very much.
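A self-testing tool of the kind Simon describes pairs process checklists with technical tests of a deployed model. As a rough illustration of what one such technical test looks like (this is not AI Verify's actual code; the function, data, and threshold are invented for this sketch), a demographic-parity check quantifies the bias that anti-discrimination principles merely name:

```python
# Illustrative sketch of a fairness self-check: compare a model's
# positive-outcome rates across demographic groups. All names and
# data here are invented; this is not AI Verify's implementation.

def demographic_parity_gap(outcomes, groups):
    """Largest difference in positive-outcome rates between any two
    groups. outcomes: list of 0/1 decisions; groups: group labels
    aligned with outcomes."""
    counts = {}
    for outcome, group in zip(outcomes, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + outcome)
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

# Example: a hypothetical hiring model's decisions for two groups.
decisions = [1, 1, 0, 1, 0, 0, 0, 1]
group_ids = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(decisions, group_ids)
print(f"demographic parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> gap 0.50
```

An organization would run checks like this against its own model outputs and compare the gap to a threshold it has committed to, which is the sense in which such tools help organizations "police themselves" rather than impose an external standard.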

Michael Karanicolas:
Thanks. Let’s go next to Tomiwa Ilori. Tomiwa is a postdoctoral research fellow at the Centre for Human Rights at the University of Pretoria. Tomiwa, are you there? Oh, yes. Yes, I am.

Tomiwa Ilori:
Thank you very much, Michael. And quickly, to my presentation: I’ll be focusing more on the regional initiatives in Africa on AI governance. According to the African Observatory on Responsible Artificial Intelligence, there are at least 466 AI policy and governance items, or, as used in this conversation, initiatives, that make direct reference to AI in the African region, covering quite a broad period, from 1960 to 2023. Those initiatives are categorized in various ways: some as laws, some as policies, some as reports, and some as organizations or projects. Currently, across the region, there is no major treaty or law or standard when it comes to AI governance. When it comes to policies, there are just about two to three of them. And when it comes to organizations and projects currently working on AI governance, there are about 25 of them. So I wanted to give just a high-level summary of what is happening with respect to initiatives across the region. These initiatives cover at least 17 policy areas, including access to information and accountability, data sharing and management, digital connectivity and computing, and so on. Generally, these initiatives are led by governments, multilateral organizations, publicly funded research, academia and the private sector. The jurisdictions these initiatives cover include the national level, where countries like Mauritius, Kenya and Egypt already have a kind of national AI policy; regional initiatives, such as the AU Working Group on AI, and documents that refer tangentially to the regulation and governance of artificial intelligence systems, such as the Digital Transformation Strategy, which covers 2020 to 2030, and the African Union Data Policy Framework; global jurisdictions, like the OECD AI initiatives; and subnational initiatives. 
Quickly, that said, artificial intelligence governance in Africa is still very much in its infancy. Most approaches for now are soft, but we are already seeing growing interest in a hard-law approach. A recent example comes from the Kenyan government, which has signalled its interest in passing a law regulating AI systems. However, while governance may tarry for a while, interest is increasing from diverse key stakeholders such as governments, businesses, civil society, regional institutions, and many others. What this signals is that governance will not only have to catch up, but when it does, it needs to be dynamic and respond to the unique challenges faced by Africa as a region, in order to ensure that we do not replicate ongoing inequalities. I will stop there for now. Thank you.

Michael Karanicolas:
Thank you. And finally, let’s go to Irakli Khodeli, who is a program specialist for UNESCO to introduce their initiatives in this area.

Irakli Khodeli:
Thank you very much, Michael. Good day, everyone. Thank you for inviting UNESCO to join this panel. My name is Irakli Khodeli. As announced, I’m from the Ethics of Science and Technology team of UNESCO, and I’ll be contributing to our discussion today from a specific angle, the angle of UNESCO’s Recommendation on the Ethics of AI, focusing on the tool for its proven potential to guide countries on AI governance and AI regulation. So in a way, I’ll be very happy to be bringing in a global perspective on AI governance, global because the recommendation I’ve mentioned was adopted almost two years ago by 193 member states of UNESCO. It is grounded in overarching fundamental values, such as human rights, human dignity, diversity, environmental sustainability, and peaceful societies, and then these broad values are translated into 10 principles. There was a lot of mention of the principles already, and there is perhaps nothing new in the UNESCO principles; Kyoko, for instance, has mentioned some of the principles that were guiding the national discussions in Japan, and the OECD principles were also mentioned. What does make UNESCO’s framework distinctive is the specific emphasis on gender, because UNESCO believes that this should actually be disassociated from the general discussion on discrimination, as there are some specific and severe harms and threats to gender diversity and gender equality. There is also an emphasis on environmental sustainability, because this dimension is often overlooked in the global discussions. And then finally, these values and principles are translated into concrete policy action areas by the recommendation, to show governments how you can actually operationalize these principles in specific policy contexts, whether this is education and scientific research, whether it’s economy and labor, whether it’s healthcare and social well-being, etc. 
There are 11 different policy areas in the recommendation, including communication and information. Now, there has been, as you’re aware, a lot of discussion globally focusing on the risks posed by AI, ranging from benign to catastrophic and from unintended to very much intended and deliberate harms. And we understand that the risks are significant, and these risks are also cross-border. AI is also closely related to pillars of the UN, such as sustainable development, human rights, gender equality, and peace. So in this sense, a UN-led effort is, in our view, critical, not only because AI requires a global multilateral forum for governance, but also because unregulated AI could undermine other multilateral priorities, like the Sustainable Development Goals and others. So what I would like to postulate today in our discussion is that UNESCO’s recommendation represents a comprehensive normative background that can guide the design and operation of the global governance mechanism. I will end by saying that, despite this focus on global governance, we must admit that successful regulation happens at the national level. Ultimately, it is the national governments that are responsible for setting up institutions and laws for AI governance. And here again, the Recommendation on the Ethics of AI comes in handy, because we are currently working with governments around the world, both in the global North and global South, to help them make concrete use of this recommendation by reinforcing their institutions and regulatory frameworks based on this overall ethical framework. Thank you very much. Really looking forward to engaging in these discussions with you today.

Michael Karanicolas:
Thanks. So I think that’s a fantastic framing of different initiatives as they’re taking place in different parts of the world and by different agencies. I want to start now by opening things up with a discussion of the global North-global South, majority world-minority world dynamics that are at play in the broader regulatory landscape, and particularly the pressures from standard-setting emerging from major regulatory blocs and the challenges that creates in trying to make space for smaller nations and for voices from the majority world. I think that Simon might be a good place to start there, in terms of the challenges that smaller nations face in trying to make their own way from a regulatory perspective, and then we’ll maybe go to someone else from there.

Simon Chesterman:
Sure, thanks so much, and again it’s great to be part of this conversation. I think, as Carlos said earlier, there are phases that countries go through. It starts with principles, but indeed those principles themselves, this sort of set of ideas that, as Irakli said, we’ve seen in the UNESCO document, can actually be traced back through primarily Western technology companies. It was around 2016 to 2018 that Western technology companies took these up, partly because it was around that time that the Cambridge Analytica scandal revealed to many that the risks of errant AI went beyond a weird Amazon recommendation or a biased credit or hiring decision to potentially impacting elections, and now, with generative AI, everyone is suddenly realizing that AI could actually affect their jobs. So we’ve seen this spread around the world, and it is now, I think, a truly global discourse. But there are three challenges, I think, facing small countries in particular. The first is the one I’ve already highlighted: whether to regulate, because if you’re a small jurisdiction and you regulate too quickly, one of the concerns is that all you will do is drive innovation elsewhere. That can happen to big countries as well. An example of this is when, in 2001, the United States imposed a moratorium on stem cell research, and that really just led to a lot of that research moving elsewhere. So that’s the first question, whether to regulate, for fear of driving innovation elsewhere. The second is when to regulate. And here, there’s a useful idea that some of you might be familiar with, called the Collingridge dilemma. This goes back to David Collingridge’s book, The Social Control of Technology. Basically, what he argued back in 1980 is that at an early stage of innovation, regulation is easy, but you don’t know what the harms are. You don’t know what you should do, and the risk of over-regulation is significant. 
The longer you wait, however, the clearer the harms become, but also the cost of regulation goes way up. And so I think, again, for smaller jurisdictions, there is this wariness of losing out on the benefits of artificial intelligence. And as we carry on this discussion, I do think it’s important to keep in mind that there are risks associated with over-regulating as well as under-regulating AI. The third challenge, and this faces many countries around the world, but again, in particular, smaller countries, is that in many ways, the biggest shift over the past decade of machine learning is the extent to which fundamental research as well as application has moved from public to private hands. Back 10 years ago, at the start of the machine learning revolution, a lot of the research was going on in publicly funded universities. Now, a decade later, almost all of it is happening in private institutions, in companies. And that means a couple of things. Firstly, it means it greatly shortens the speed of deployment from an idea to application. We’ve seen that in generative AI in particular. But secondly, again, it limits the ability of governments to constrain behavior, to nudge behavior, or even to be involved in the deployment cycle. So with those ideas, I’ll hold off, but again, I’m really looking forward to an exchange of views. Thank you.

Michael Karanicolas:
Let’s go to Carlos next, and then I would like to hear from someone in the room. So please express your interest now if you’re interested in intervening in this.

Carlos Affonso Souza:
And since we’re talking about regulation, of course, architecture is a form of regulation as well, and the architecture of this room might be uninviting for people to come up, offer their ideas and join the conversation. So please feel free to do that. And just to offer a quick segue on what Simon was saying, one challenge that we might face when we think about governance and regulation of AI in the majority world is that AI might be invisible, might be something really ethereal, hard to grasp, making it hard to see how regulators could enter into this discussion to begin with. And of course, that means the examples we can take from different countries that have already regulated this topic end up creating a very strong influence on the models, the categories, the way in which this conversation is actually being set up in those countries. And we now face a challenge, because we want countries from the majority world to be protagonists in the discussion about governance and regulation of AI. But at the same time, when we think especially about the largest countries in the majority world, they end up serving mostly as a source of users for AI’s most famous applications, more than anything else. And this is something to which we need to pay extra attention, because when we think about the regulation and the governance of AI, we need to think about what we are trying to communicate, what we are addressing. Because one thing is the deployment, the creation and the design of AI, and another thing is the usage of these AI applications. And when it comes to the majority world, we can see pretty often that the applications are not going to be designed and created in those countries, but they will be used heavily there. So it’s quite obvious that the discussion about how to regulate not only the creation but also the use of those applications will be key to the success of those initiatives on regulation and governance. 
I’ll just stop here, Michael.

Michael Karanicolas:
Perfect. Let’s go here to the back and then over this way afterwards.

Audience:
Thank you so much. Dr. Ali Mahmood, I’m from Pakistan. I’m heading a provincial government entity that is involved in policymaking. It was interesting to listen to Simon, who mentioned that there has to be a balance between under-regulation and over-regulation. The thing is that we have a national AI policy. It’s in the draft stage, and we’re currently getting input from a lot of stakeholders. Now, it does touch upon the aspect of generative AI, because it’s the newer phenomenon, but it has had a really disruptive effect in so many ways. And we talk about the ethical use of AI; for generative AI, for example, I’ll consider a use case: as long as it’s assistive in nature, it’s acceptable, but beyond that, it can be considered unethical. So I just want to learn from the panelists how we can strike a balance there. Because at the government level, if we look at the education sector, there are a lot of problems already being raised by different government institutions, educational institutions, and universities that generative AI is misused. So at a policy level, I would like to know how we can address this problem. Thank you. All right. Thanks very much. Liming Zhu from Australia’s national science agency. Our staff chairs Australia’s AI standards body, and we also developed Australia’s AI ethics principles back in 2019. As a science agency, interestingly, we are not allowed to comment on policy and regulation, because we provide scientific evidence into these policy discussions. But I want to raise two points. One is that no matter the standardization, the policy, and the regulation, you need to measure the risks, the size of the risks. And that’s a scientific question, and we need an international research alliance on how to measure those risks. Once you fully understand how to measure those risks, then you can probably reduce those risks and make a very informed decision. 
The second point I want to make is that a lot of these are trade-off decisions. Only when those risks are well understood and measured can you make those trade-offs. For example, when the US Census Bureau released their data for further research, they had to make a very concrete trade-off between data utility and privacy. You have to make a conscious decision to sacrifice some privacy for the gain of some benefits. And that informed decision is made by stakeholder groups, including privacy advocates. But it’s even more complex than that. Since then, studies have come out saying that, actually, privacy, utility, and fairness all have trade-offs. Privacy-preserving approaches sometimes harm fairness and sometimes promote fairness, so you have to have the fairness foundation for that as well. And many of these are context-driven. I don’t know whether people have seen the recent ChatGPT vision model. They said one of the use cases is blind people wanting to have face recognition; that’s the number one feature they requested, because blind people say, I just want to have the same ability as other humans in a room to recognize faces. But based on face recognition risks and the various legislations, they are not allowed to have that, and that is the number one feature they requested. So I think that’s an interesting discussion about how standardization, policy, and science will enable these kinds of trade-off decisions. Thank you. Milton, did you? So I just wanted to raise a question about something. I can’t remember which panelist said that when we talk about regulation, we’re necessarily talking fundamentally about national governments. But if you look at what AI consists of and break it down into its component parts, as Courtney said, you’re looking at a combination, really, of data resources, software programs, networks, and computing devices. And all of those are globalized markets. 
And we… with the internet, and here’s where I’m trying to create a link to internet governance, which is what this forum is supposed to be about, although we might wanna rename it the AIGF. The internet makes it all very easy to distribute applications and to distribute data resources very quickly and hard to control. So I understand that many forms of applications will be regulated at the national level. Like medical devices or something, where you have a nicely defined thing, but AI as a whole is going to be a very globalized form of human interaction, and I don’t think that national governments are all going to solve this by themselves.
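The census-style privacy-utility trade-off Liming Zhu describes can be made concrete with a toy Laplace-mechanism sketch. The numbers here are entirely hypothetical, and real deployments such as the Census Bureau's are far more elaborate; the point is only that the privacy parameter epsilon directly buys or sells accuracy:

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Return a differentially private estimate of a numeric query.

    Noise scale is sensitivity/epsilon: a smaller epsilon means stronger
    privacy but a noisier, less useful answer.
    """
    scale = sensitivity / epsilon
    # A Laplace variate is the difference of two i.i.d. exponential variates.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_value + noise

# Toy count query over a hypothetical population: "how many records match?"
true_count = 4200.0
random.seed(0)
for eps in (0.01, 0.1, 1.0):
    errors = [abs(laplace_mechanism(true_count, 1.0, eps) - true_count)
              for _ in range(1000)]
    print(f"epsilon={eps}: mean absolute error ~ {sum(errors) / len(errors):.1f}")
```

Running the loop shows the trade-off directly: at epsilon 0.01 the published count is off by around a hundred on average, while at epsilon 1.0 it is off by around one, which is exactly the kind of measured risk Liming Zhu argues stakeholders need before making an informed decision.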

Michael Karanicolas:
Let’s hear from Tomiwa, and then maybe one more intervention, and then we’ll move on to the next question.

Tomiwa Ilori:
Thank you very much, Michael. Discussing North-South dynamics, especially from an African perspective, when it comes to AI governance, I think the race towards global AI governance will favor the bold. And while I will not delve into the ethics of that sentence, it is the reality, especially in Africa, especially with how the region is often bedeviled by importation, especially of standards, sometimes even being referred to only as standards-takers, not people who design standards for themselves. And we know that in international law, as in international politics, smaller nations are seldom bold, and they often end up as pawns or testing grounds for bad governance attempts. However, in my view, smaller nations in this context can be bold if they strategize and work together with like-minded initiatives or systems. And when I use the word small, I also mean small in terms of progress with AI governance and initiatives on the ground. The way I see it, it is a long way for a small nation to move alone, but that journey towards responsible AI governance could be shorter if we work with others who may share similar goals and intended results. That would be my quick contribution on that. Thanks.

Michael Karanicolas:
So let’s go to one more intervention from the room and then I’m going to move on to it. Yeah.

Audience:
Jeanette Hofmann, Germany. I have a question for Simon Chesterman on the situation in Singapore. You pointed out that Singapore is a small jurisdiction and thereby always faces the risk of driving companies out of the country. But I was thinking of the fact that Singapore has quite a number of really successful companies under public ownership. So I was wondering whether that does not create perfect conditions for regulatory sandboxes, where you can in fact test what type of regulation works and what effects it has on the companies.

Michael Karanicolas:
Sure. Simon, did you want to respond to that? Sure.

Simon Chesterman:
So it’s a great question. And indeed, regulatory sandboxes are something we’ve been exploring, in particular in the fintech sector. The Monetary Authority of Singapore has used this technique, which is not unique to Singapore. The basic idea is that you provide a kind of safe regulatory playground, with reduced risks, that enables companies to test out new use cases. But the larger point about the danger of driving innovation elsewhere really is a concern not limited to Singapore’s domestic economy; it extends to attracting the big tech companies, apart from anything else, to Singapore, which we saw 11 years ago when Singapore adopted the Personal Data Protection Act. That legislation was explicitly said to aim at balancing the rights of users against the legitimate needs of business. And so I think the combination of the small size, the openness to the world, and the regulatory flexibility of a country like Singapore does give us an opportunity, but we’ve still got to operate within those kinds of constraints. Maybe, if it’s appropriate, I can very quickly respond to earlier comments. I didn’t catch his name, but the gentleman from Pakistan: one of the key arguments that I think needs to be spread around the world about the use of generative AI is that if you’re going to use these things, in particular if you use them in a public-sector context, you’ve got to keep in mind two things. Firstly, if you share data with generative AI systems like ChatGPT and similar capabilities, you’re essentially sharing that data with private companies, so you need to be very careful what you share. The second is that it needs to be clear that whatever comes out of it, if you use it, you are responsible for it. And then really quickly, Liming, great to see you even at a distance, and I think it was Milton; I’d link those two to say that when it comes to the levels of regulation we’re talking about, you need to think of three. We do need the regulatory hammer. 
As Irakli said, states are the only entities with real coercive powers, and that is going to be essential for harsh regulation when it’s needed; that’s an important level. But above and below that, you also need self-regulation, industry standards, and interoperability; in practice, that kind of standard setting will be the most common form of regulatory intervention. We also need some measure of coordination, not just coordination of standards, which is what I think Milton was talking about, but also, to Liming’s point, the ability to share information about crises. And so I won’t get into it now, but elsewhere I’ve written, as others have, about possible comparisons with the International Atomic Energy Agency and the efforts to share information about safe uses of nuclear power in exchange for a promise not to weaponize that technology.

Michael Karanicolas:
I want to pick up on something you mentioned previously, the Collingridge dilemma, for the next question. We’re in the relatively early phases of this technology; there are a lot of unknowns, but there are also a lot of clear manifestations of potential and existing harms. So the regulatory questions are certainly not speculative, but we are in the relatively early phases of implementation, wide-scale implementation at least. Are there lessons to be drawn from previous eras of tech governance in how we approach the regulatory picture? Are there successes and failures of previous regulatory frameworks that can teach us what works and what doesn’t? Maybe I’ll go to Courtney on this first. Thanks so much.

Courtney Radsch:
Yeah, so my work is primarily focused on the so-called global south, or majority world, with a focus on the Middle East. And I think if you look at previous eras of tech governance, whether social media, search, app stores, online marketplaces, or even standards, they were all rolled out by, and remain controlled by, a few monopolistic tech firms. And so we need to really take this as instructive. The debate about AI governance has failed to grapple with the issue of market power. We are taking the economic ownership and control of AI as a given. And while the discussions around how to prevent AI from inflicting harm are important, and the issues of preventing exploitation and discrimination are absolutely necessary, they will meet with limited success if they are not accompanied by bold action to prevent a few firms from dominating the market. I think that is the biggest takeaway. No matter how well we design our rules, we will struggle to enforce them effectively on corporations that are too large to control, that can treat fines as the cost of doing business, and that can decide to simply censor news in an entire country if they don’t want to comply with the law, as we saw Meta do in Canada recently. So I think that we have to look at AI again in its component parts, as I mentioned earlier, and think about the dominance we’re already seeing by literally a handful of big tech firms that are providing the leading AI foundation models. They are taking aggressive steps to co-opt independent rivals through investment, partnership agreements, and their dominance over, for example, key cloud computing platforms. We know that between Meta and Google and Amazon, for example, nearly a thousand startup firms were bought with no merger oversight, no FTC intervention. 
This has to change, because as we’ve discussed the small-large divide, big economies and small economies, that framing is relevant but also almost irrelevant when you have massive firms creating new capabilities and new technologies that national governments do not have power over. And so I think that we have to look at reshaping the structure of markets and ensuring that we crack down on anti-competitive practices in the cloud market, and look at common carrier rules; for example, regulators should be considering forcing Microsoft, Amazon, and Google to divest their cloud businesses in order to eliminate the conflicts of interest that incentivize them to self-preference their own AI models over those of rivals, as we have seen in app stores, in search, and in the way that Amazon constrains and forces small businesses to comply with the rules it sets, because if you’re not on the Amazon marketplace, it’s very hard for you to do business. In many countries, if you’re not on Google search, you might as well not exist if you’re a news organization. So there’s much to be learned, but we need to get out of this idea that this is somehow some new, really scary thing that we’re trying to govern. Again, look at the components: data, computational power, software applications, cloud computing. Think about each of those component parts, as well as thinking about risk assessments, those risk-based frameworks that really sit at the far end, at the application layer and the implications of a certain subset of AI systems.

Michael Karanicolas:
The multidimensional nature of how power is concentrating is certainly well taken. If we’re thinking about a longer view of technological developments and regulation, maybe let’s go back to Irakli from UNESCO, which certainly has been present through a lot of these different eras of governance; I would be interested to hear their thoughts.

Irakli Khodeli:
Sure, thank you very much, Michael, and also to all the other participants for very insightful comments and discussion. This is actually a really nice question for me, and it lets me get back to something that Milton mentioned in terms of the difficulty for member states to govern something that is so cross-border in nature, having to do with things like the flow of data across borders, the internet, et cetera. Because that relates to whether there are cases where we have successfully regulated an emerging technology. My answer, and I might be reiterating some of the points that Simon and other speakers have made, is that the successful regulation of any technology, in our view, takes regulatory frameworks existing at different levels. At the global level, and this again responds to Milton’s question, that’s precisely why you need a global governance mechanism that coordinates and ensures compatibility and interoperability between different layers of regulation. Usually at the global level you have the softest level of regulation: it could be a declaration, it could be a recommendation, but it could also be a convention, which would be a more binding document. And this is what the conversation at the international level, at the UN level, is about right now: what kind of regulatory mechanism you have. And let’s not forget the importance of regional organizations and regional arrangements. Of course, the European Union comes immediately to mind, and it has been mentioned many, many times, but ideally we would want to have the same type of movement within the African Union and within ASEAN in Asia; the Council of Europe already has a concrete process towards an instrument, and of course there is the OECD. So regional regulation would be very important. 
Then again, the national level: we cannot avoid the fact that when it comes to redressing cases where harm has been done, or to the enforcement of different mechanisms, the national level is indispensable. And let’s not forget the sub-national level. Courtney has mentioned, for instance, a lot of state-level activity on AI regulation. We’re also aware that similar processes exist in other countries; in India, for instance, there is a lot of legislative activity at the state level, below the national level. So all these different levels, I think, can effectively work together to regulate the technology. And I’ll end with a concrete example. Bioethics, for instance, is something UNESCO has been engaged in for a long time. Simon mentioned stem cell research; perhaps that is not the best specific example, because for the US it may be an example of over-regulation. But in bioethics we have the Universal Declaration on Bioethics and Human Rights at UNESCO, which all member states, basically all countries around the world, have signed on to. Then you have an example of a stronger framework, the Oviedo Convention of the Council of Europe, also in bioethics, which provides a more stringent framework. And that is translated into specific, very binding and strong regulation at the country level in European countries to protect people against the risks emerging from biological and medical science technologies. So that’s a concrete example, thanks.

Michael Karanicolas:
Yeah, I think the structural framing is helpful. I would add trade associations and private-sector standard-setting bodies as well, which can be enormously influential. I’ll also note, though, that while you talk about how these different levels of regulation, these different structures, can work together, they can also compete and work at cross-purposes, which I think adds an interesting dimension to how norms get set and applied. Let’s go to another comment in the room and then to Carlos.

Audience:
Can you use this one? Just a moment. We have lots of microphones right here, probably too many. We can diffuse them out. So I think the microphone was broken, it was not just my fault. Anyway, Ingrid Volkmer, University of Melbourne, Digital Policy. I think this debate about power is really interesting. And it’s about new power dynamics between the global North and South, with Western companies producing a lot of data across the world, et cetera; we’ve addressed that. But I think there is another dimension, and that is the granularity of data. Because in the global South, perhaps the same quality of finely grained data is not available as in the global North. Through that process alone, I think a lot of risk could be produced through AI. And I don’t know a solution to that. I know that the ITU has a lot of initiatives around AI for Good, with farming and medical applications, et cetera, in the global South. But I think this issue of data granularity is perhaps another one that could be addressed in the power debate we’re having. Thank you.

Carlos Affonso Souza:
Super quickly, since we are discussing the lessons learned from at least 25 years of thinking about internet regulation: maybe one thing we should take into account is that copyright and freedom of expression were the two issues addressed early on in the regulation of the internet. And by the time social media appeared on the global scene, we had this surge of personal data protection laws that was fundamental for understanding what internet regulation has looked like in the last decade. So when we shift gears into the discussion about AI regulation, it looks like we have at least two very interesting questions in comparison with that experience of regulating the internet. The first is how much the model of personal data protection laws, such as their concern with risk analysis, will end up influencing the way AI regulation is shaped. The second is the decisions that ended up being taken on the issue of platform liability in different countries and regions, and how we can, in a certain way, bring those into the discussion about the damages caused by AI. Because, first of all, we need to ask ourselves what type of AI we’re talking about and what type of damages we’re talking about. And especially, I think we have an entirely different, opportunistic discussion when it comes to AI: if the AI application ends up causing trouble and damage to other people, the claim is that the application is super smart, that the robot decided on its own to cause the harm. But on the opposite end, when the AI application ends up generating profits, such as in this discussion about copyright, you want the machine, the application, to be as dumb as possible. 
So you as the developer, you as the deployer of the AI, end up having all the profits of having this type of application out there in the market. So I think this is a type of discussion that we didn’t have back then, in the discussion about internet regulation. And this is very unique to the discussions that we have right now on AI, this opportunistic usage of the autonomy of the AI application. I think this puts us in front of a very different set of questions.

Michael Karanicolas:
I think that anytime you’re talking about learning from previous eras of regulation, the copyright example is incredibly salient. And I would say that even today, enforcement of IP rights online is vastly stronger than, say, enforcement of privacy rights. The reason for that is entirely a legacy of the early prioritization of the harms that were viewed as the most pressing and the most urgent to address in early regulatory efforts. And I think the point about needing to be deliberate and careful in selecting how harms are understood and prioritized, as we grapple with developing new regulations now, is incredibly important, because this will ripple forward over time as these technologies continue to proliferate.

Courtney Radsch:
Yeah, just to build on that: I think we also have to recognize that there is a political economy to the protocols created by technical standards-setting bodies. For example, robots.txt is a standard, and HTTP versus HTTPS are standards, that were created in technical communities without necessarily considering the political-economic implications of the capabilities being created through those standards. And so, to build on your two points here on copyright: the ability to just hoover up all of this rights-protected data to create large language models, without any compensation to content creators, news producers, et cetera, has huge repercussions on the economics of certain industries. In my own work I focus on the journalism industry, but it also affects broader society, work, et cetera. And so I think we need to take into consideration that technical standards are not neutral; they have political-economic impacts. And we have to think about proactively neutralizing big tech’s unfairly acquired data advantage. We should also think about the fact that a representative from Meta sat on the AI high-level panel as a representative. We are recreating a lot of the problems of the past by elevating the same big tech companies instead of seeing a greater diversity of the technology and technical community. You have an overabundance of big tech and corporate tech representatives in a lot of the multi-stakeholder processes. So we need, I think, to reorient that.
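As a concrete illustration of the protocol politics Courtney describes, a robots.txt file lets a site ask crawlers, including AI training crawlers, to stay away, but compliance is entirely voluntary. The crawler tokens below (OpenAI's GPTBot and Google's Google-Extended) are publicly documented opt-out names; the site layout itself is hypothetical:

```text
# robots.txt: a purely advisory standard; honoring it is voluntary,
# which is precisely the political-economy point made above.

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
```

Nothing in the standard creates an enforceable right, and a site cannot opt out of training uses of data that was already crawled, which is why the economics are decided by the protocol's defaults rather than by the publishers.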

Michael Karanicolas:
OK, so we’re doing a lot of beating up on big tech and the tech sector at the moment. So let me ask then, the next question I want to get to is, what is the appropriate role for industry in regulation and standard setting? What does it mean to have meaningful multi-stakeholder engagement? I want to go to the room. So we have one over there. And I want to know, is anybody here from either industry, from tech sector, who could contribute as well, either here or from the back? No? Yes, well, let’s go to the corner first.

Audience:
Can you hear me now? Okay. This is Guo Wu, actually, from TWIGF. And it’s kind of interesting: I learned natural language processing in the 1980s. I don’t know how many of you were learning natural language processing in the early days, but there’s really a big difference between then and now, because in the early days we were talking about the algorithm, while these days we are talking about massive databases; the machine is learning from a massive database. But I don’t know of anybody who has studied this kind of situation. Today, AI learns from massive databases. Now think about two groups. Group A is a group of people who produce a huge amount of data, so the machine can learn a lot from group A. Group B is a group of people who don’t produce a lot of data, which means the machine cannot learn enough from group B. Now, when the AI, after all this training, generates its comments or results, is it possible, because one group’s data is less and the other’s is more, that it will generate a kind of discrimination against group B and a preference for group A? I don’t know of anybody studying such a case.

Michael Karanicolas:
Yeah, I’m going to make one more call for anybody from the private sector or industry to discuss the role of the private sector in regulation, in crafting good regulations and standard setting. If nobody speaks up, you don’t get to complain if our outcome document is not fair. Well, let me frame it this way, then: we hear a lot about multi-stakeholderism. What does it mean to have a meaningful multi-stakeholder process in terms of crafting either standards or regulation in this space? Why don’t we go to Kyoko first, and then someone in the room who would like to chime in, or in the Zoom.
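Guo Wu's data-imbalance scenario is easy to reproduce in a toy experiment. The sketch below is an illustrative, hypothetical setup, not a study of any real system: a trivial one-feature classifier is trained on data dominated by group A, whose feature-label relationship is inverted for the much smaller group B, so the learned rule serves A well and B poorly:

```python
import random

random.seed(42)

def make_group(n, flip):
    # x is informative of label y; in group B (flip=True) the
    # relationship between x and y is inverted relative to group A.
    data = []
    for _ in range(n):
        y = random.randint(0, 1)
        x = (1.0 if y == 1 else -1.0) * (-1.0 if flip else 1.0) + random.gauss(0, 0.5)
        data.append((x, y))
    return data

group_a = make_group(1000, flip=False)  # data-rich majority group
group_b = make_group(20, flip=True)     # data-poor minority group
train = group_a + group_b

# "Training": choose the decision rule from the sign of the x-label
# correlation over the pooled data, which group A dominates numerically.
corr = sum(x * (1 if y == 1 else -1) for x, y in train)
predict = (lambda x: int(x > 0)) if corr > 0 else (lambda x: int(x < 0))

def accuracy(data):
    return sum(predict(x) == y for x, y in data) / len(data)

print(f"accuracy on group A: {accuracy(group_a):.2f}")  # high
print(f"accuracy on group B: {accuracy(group_b):.2f}")  # low: the model learned A's pattern
```

The aggregate accuracy still looks excellent because group B is a tiny fraction of the data, which is exactly why per-group evaluation, rather than a single overall score, is needed to surface this kind of discrimination.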

Kyoko Yoshinaga:
So let me talk about the industry role. Industry should consider developing or using responsible AI as part of their corporate social responsibility, or as part of their environmental, social, and governance (ESG) practices, since the way in which they develop, sell, or use AI will have a huge impact on society as a whole. I would like to point to three main things organizations can do. One is to create guidelines on the development and use of AI, including a code of conduct, internal R&D guidelines, and AI utilization principles, and to provide publicly accessible documents, such as an AI policy on how the organization develops and utilizes AI systems, like we did for privacy policies. And this is very meaningful. I know this because I was working in a think tank developing AI systems, where I was in charge of AI risk management and compliance. When we made our AI policy publicly available, all the people involved in the AI development process became really responsible about building it ethically. So it’s like a public declaration, and I think this declaration is very important and effective for developer companies and user companies in being responsible for making and using AI appropriately. Many companies in Japan, like Sony, Fujitsu, NEC, and NTT Data, have already developed AI policies based on the guidelines I mentioned at the beginning. And it seems to be working well, even though these are non-binding guidelines, a soft-law approach. I’m seeing a similar situation now to what I saw back in 2005. In 2005, the Ministry of Economy, Trade and Industry created what we call the Information Security Governance Policy Framework. At that time there were many information security incidents, and the government realized that we needed to do something. I was in the think tank assisting in making that Information Security Governance Policy Framework, and we made three tools for establishing information security governance. 
First, we made an information security benchmark to help organizations rigorously and comprehensively self-assess the gap between their current condition and the desirable level of information security. Second, we made a model for information security reporting to encourage companies to disclose their information security efforts. And third, we made a guideline for business continuity planning to encourage companies to develop such plans. This initiative has led many companies to build robust information security governance. So in the context of AI governance, creating similar frameworks may encourage management to establish robust AI governance within their organizations, perhaps functioning as part of their ESG efforts.

Michael Karanicolas:
So let’s go to Simon next, and then someone else in the room if they want to chime in.

Simon Chesterman:
Thanks. Yeah, on the role of companies, I do think it’s sort of amazing how things have changed. Back in 2011, Ryan Calo, who is a great scholar in this area, wrote something very silly, I think, where he argued that in order to encourage research into AI, we needed to give companies immunity from suit; otherwise, the risks would be so great that they wouldn’t innovate. Now, clearly that hasn’t happened. Jump forward to today, and you’ve got companies lining up to call for regulation. But they’re doing that for at least three reasons. One reason is that I think many of them do actually accept that some regulation would be useful. Second, they know that some kind of regulation is coming, and they’d like to be part of that conversation. But thirdly, especially for the big market players, they know that if regulatory costs go up, that becomes an additional barrier to entry for their competitors, so it’s good for them. So by all means, I think it’s important to involve companies in these processes. And I echo what Kyoko and others have said about the importance of standards; these emerging interoperable standards are going to be very, very important. But we’ve also got to be clear-eyed about the incentives that drive these companies, which is to make money. A lot of what is being deployed now seems to be making money in the two ways that have been revealed as the money-making aspects of AI. The first is to monetize human attention, and we’ve seen that through surveillance capitalism and the experience of social media. And the second is to replace human labor. So for all these reasons, I do think it’s important to involve companies, but also to understand that, yes, they’ve got to pay attention to ESG and so on, the triple bottom line, but ultimately they are businesses. 
And if we, the community, or the regulators, make a determination that these companies are too big, then you’ve got three choices. You’ve got the litigation path, which the US is going down at the moment, with the slim, slim possibility that some of these FTC or DOJ actions might actually lead to the breakup of companies. You’ve got the European approach, which is to say, okay, we’re going to identify gatekeepers, and these six companies are now going to be subject to much heavier regulation. Or you’ve got the Chinese approach, which is to say, through executive action, Alibaba is going to be broken up into six companies, and we address the problem that way. So yes, by all means, I think it’s important to involve companies, but also to understand their perspectives, where they’re coming from, and not to expect them to be turkeys that vote in favor of Christmas.

Michael Karanicolas:
Yeah, I think that leads pretty neatly to the next question that I wanted to raise, which is related to risk-based versus rights-based approaches towards regulation. And not to say self-regulation, because I think there’s probably going to be good consensus in this room and in most rooms that we’re not satisfied with a self-regulatory solution, but I mean the emphasis within a lot of early regulatory models on self-assessment and risk assessment as a critical component of regulatory structures. That’s an important part of the EU’s draft regulation in this space. In the US, the AI Commission Bill has explicitly endorsed a risk-based assessment model. Are there thoughts on the role of this kind of assessment in effective regulation, the role of companies in carrying out these assessments, and the challenge of developing an effective framework in a robust way if it relies on internal assessments by companies? Milton?

Audience:
Oh, yeah. Well, I think the risk assessment approach that is in one American bill and in the European bills is kind of a joke. Basically, they are asking for self-assessments. And this is not because I’m anti-industry and don’t trust them to do this. I think it’s just going to be a ticking-the-box exercise. And the point that I think we need to think about is that you don’t know what the risks are going to be in many cases. These things don’t exist yet, right? And so that’s what makes me laugh about the European model. People are supposed to sort themselves into the five different risk levels. But how do you know what the risk is until it happens? And so I don’t believe in these kinds of ex-ante forms of regulation, where the government pretends that it is all-knowing and thinks it can decide. And I’d like to bring your attention more to a rights-based approach based on property rights, which is: whenever you have a new technology, you create new forms of property. We saw this in the domain name industry, Michael. These things were nothing. They were given out for free. And then suddenly, they were valuable. And they conflicted with trademark rights. So we had a policymaking process ex-post where we figured out who had a right to what. And now with so-called surveillance capitalism, we are discovering the value of data resources. And we have to renegotiate the boundary between who owns or who controls what data when users interact with platforms. And I think it’s a mistake to view that as this extraction process where a helpless human just gets data taken away from them by a platform. You’re engaged in an exchange. You are getting something, and you are giving up something. And we have to decide how that data gets monitored and owned and regulated. And that’s not an easy problem. So I would think that with AI, the issue is going to be a lot about property rights.
And it’s interesting to see how we’re replaying some of the conservative protectionist stuff about copyright now. Remember, when we started the internet, some copyright people were saying, every time you move a file from one server to another, you’re making an illegitimate copy. And that would have killed the internet. They wanted the definition of property rights in digital items to be so strict that we simply would not have had an internet.

Michael Karanicolas:
So we have to be careful about that. I’ll also say it’s interesting because regulatory ambiguity in other use cases can lead to overly cautious approaches if it’s accompanied by aggressive enforcement, coercion, or extremely severe penalties. So in the speech realm, a vague law is always viewed as being really dangerous, because if it can be aggressively enforced, people steer really clear of the line. But it’s just not clear to me that any of the proposed AI regulatory frameworks would or could incorporate that level of enforcement. And so that’s why I think it’s interesting that ambiguity can work both ways, but it’s unlikely to work that way in this context. Let’s go, Courtney, and then, yeah.

Courtney Radsch:
Yeah, I definitely agree with Milton on one of the problems of the risk-based approach: you just don’t know, and that limits what you’re even talking about. We’re not talking about, for example, regulation that is aimed at reclassifying some types of companies as common carriers, or having public-utility-type requirements, common carrier requirements, for example. I think that, yes, the way that we addressed property rights on the internet gave rise to the internet. On the other hand, the way that we implemented some of those copyrights, or the lack thereof, and some of the digital advertising structures that emerged, have killed off a large part of the news media industry, which is considered an essential component of democratic systems. So there is a trade-off. We have to recognize that unfettered innovation is not necessarily good. So I think that the rights-versus-risk-based assessment does not get to many of the issues at stake. We talk a lot about individual-level data. I think you’re right, Milton, about user data; there is some exchange. But there’s a lot of data that is not individual data. It’s sensor data, it’s environmental data, it is data about movement and data about data. That is also incredibly valuable and currently dominated, again, by larger firms that have more access to data, et cetera. So I feel like the rights-based and risk-based approaches are important for a specific subset of AI, particularly when we’re talking about maybe generative AI, or decision-making AI systems in certain sectors, but that is only a small component of AI. And we have to think about public interest-oriented regulation and a wider set of policy interventions.

Michael Karanicolas:
So let’s, did you raise your, oh, I’m sorry. Oh, yeah.

Audience:
Thank you. I wanted to respond to Milton and politely disagree. My objection concerns the fact that you think it’s ridiculous to ask platforms to assess the risks. First of all, all companies have lots of experience with risk modeling as a technique. It’s not new to them. They’re used to doing that. And now they are asked to assess the risks vis-à-vis some fairly specific groups, vulnerable groups, to wellbeing, to all sorts of things. And as we know through various leaks, they know themselves what they’re doing to specific user groups. That, also, is not new to them. And finally, the DSA now gives researchers privileged access to data produced by platforms in the area of risk. And I think it will be possible, to some extent, for research to assess how platforms assess the risks they impose on societies. It will be, I think, fairly interesting to see, particularly, how platforms deal with the question of general risks to society. I have no clue how they’re going to operationalize that. They will have to do that to tick boxes, but there are research groups that will be able to hold them to account on how they approach this problem.

Michael Karanicolas:
Yeah, I think a lot of us poor non-EU researchers are jealous of our colleagues that are gonna be able to do some really interesting research based on that. I have-

Courtney Radsch:
At least the funding to do that, right? You’re relying on underfunded civil society and academia to provide oversight of powerful, wealthy companies that do their own risk assessments, but may fire the people who find the risks or bury that research. So it’s not a perfect solution.

Michael Karanicolas:
All right, so let’s… So Carlos, did you want to enter in? And then I think we have a comment on the call.

Carlos Affonso Souza:
So just very quickly, to react to Milton’s provocative comments on the status of regulation. I think there’s something for us to take into account here: for countries, having something about regulation of AI is almost like a brand, a signal of being part of a group that is thinking about the future. And that’s been leading us to situations in which we come up with regulations that are far from perfect, but we keep hearing, in different countries and in different discussions, people say it’s better than nothing. So I think this is a moment in which we should ask, should we be happy with having something that is better than nothing, just to be part of the group of countries that have already done something? I think quite the opposite. We are in a very important moment in which we could learn from the experience from abroad, drawing on lessons learned and best practices to come up with interesting and innovative solutions. But just to react to Milton’s comments, I think that when we look, especially, at the influence of the European solution on some of those topics, we have shadows of the European solutions appearing in different countries, solutions that might not even function properly, but legislators will say, hey, we have done something. So at the end of the day, better than nothing.

Michael Karanicolas:
Yeah, I think there’s an interesting tension between the undoubted need for, and benefits of, engagement, mutual learning, and sharing best practices, and the importance of factoring local contexts into regulatory processes, right? Obviously I don’t know that the world benefits from 195 radically different frameworks, but it can also be problematic when countries sort of cut and paste, say, an EU model or an American model into their local context, which can either lack the appropriate regulatory structure of related legislation, right? Like the EU AI Act, but without the GDPR and without the DSA and without the Digital Markets Act, which are also important components of the same regulatory ecosystem. Or they just import a conception of harm that’s not necessarily fit for purpose in the local context. We had a comment in the corner.

Audience:
Thank you. My name is Sonny. I’m from the National Physical Laboratory of the United Kingdom. There’s a few words I thought we were drilling towards in what I was trying to say earlier on, and it’s actually helped me a bit more now that everyone else has spoken. So the word assessment, I think, is where I’m going to start. That’s expressly a measurement activity, is probably where I’d start with that one. So how are we going to measure all these things? How are we going to measure compliance, performance? How am I going to measure how I can trust a system, or that it’s safe, depending on the context? Because every context is different. So there’s a lot of thought needed. It’s a bit of a paradigm shift that’s coming, and it’s coming in our part of the world as well as just generally. And when I say our part of the world, I mean the measurement part of the world rather than any geographical part. So how do we measure these things? And before standards come things called pre-normative standards, which can be anywhere from two to 20 years in development before you get to the standard. This is how you can measure that it does what it says on the tin. And so there’s a lot of work that needs to go in from that side of things. The kind of work that NIST does in America is what NPL does in the UK. And there are around a hundred nation-state signatories to this around the world, so it could be an interesting platform where some collaboration and a multi-stakeholder approach occurs. And the reason I say that is that organizations like us sit on that cusp between industry, academia, and civil society. And then, just to hit on the what-can-industry-do-for-us part: we need to collaborate with industry. They will provide access to resources, be that compute or the models. They bring us access to case studies and use cases, and also knowledge and understanding. We can help them, and they can help us to help them.
Because then we open things up, and different lenses can be brought to bear on various things. And then the last thing is this context thing. It’s not just about quantitative measurement, it’s about qualitative. So what does a socio-technical test bed to measure the trustworthy outputs of AI actually look like? That’s something that the world needs to work on together. Thank you.

Kyoko Yoshinaga:
Yes, I understand that the EU, US and Japan are all taking a risk-based approach right now. And I think it is important to examine what the risks are beforehand. But regulating these risks pre-emptively with hard law is somewhat dangerous, because the risks vary according to context. Also, the level of AI technology varies among countries. So we should not impose hard law on other countries, but instead agree on the basic principles and leave it to each country to decide whether to take hard-law or soft-law approaches. Factors like corporate culture, whether companies are compliant or not, safety, and the level of technology should all be taken into account in deciding how to regulate AI. For example, one of the threats caused by AI is the intrusion on privacy, for example through surveillance or real-time biometric ID systems. In that case, it is important to have a personal data protection law. And these factors vary among countries. So we should not say this law is better than that one. I think each government should make regulations in its own way, considering these factors in its own context. Thanks.

Michael Karanicolas:
So, that just about takes us to time. I was a bit daunted when I saw the IGF schedule get released and saw that there were so many different sessions on AI. I’m not going to say that ours is the best; I might put that in our outcome report. But I certainly learned a lot from the perspectives expressed here, both among our panelists and from the rest of you in the room. And I think it’s an incredibly important conversation, given both the importance and urgency of these challenges and this unusual combination of having something that urgently needs attention but is also incredibly important to get right. So thanks again to all of our panelists, thanks again to all of you who participated, and I look forward to keeping the conversation going. Thanks, Michael. Thanks everyone. Thank you. Have a nice day.

Speaker                 Speech speed    Speech length    Speech time
Audience                172 wpm         2755 words       964 secs
Carlos Affonso Souza    150 wpm         1572 words       627 secs
Courtney Radsch         166 wpm         1868 words       674 secs
Irakli Khodeli          138 wpm         1211 words       527 secs
Kyoko Yoshinaga         124 wpm         1186 words       574 secs
Michael Karanicolas     159 wpm         2197 words       828 secs
Simon Chesterman        201 wpm         2278 words       681 secs
Tomiwa Ilori            141 wpm         743 words        316 secs