High-Level Session 2: Transforming Health: Integrating Innovation and Digital Solutions for Global Well-being

Session at a Glance

Summary

This panel discussion at the 19th IGF 2024 focused on extending digital identity verification to protect internet transactions. The panelists, representing government, technology, and international organizations, explored the concept of a trusted digital identity framework and its key elements. They emphasized that digital identity is a fundamental infrastructure for digital transformation, not just a service.

The discussion highlighted the need to balance enhanced security with user privacy protection. Panelists suggested that clear core principles and values, along with independent oversight, could help manage this balance. They also explored the potential of emerging technologies like blockchain, biometrics, and AI in shaping the future of digital identity verification, while cautioning against over-reliance on new technologies without proper evaluation.

Barriers to international cooperation in developing standardized digital identity systems were addressed, including the digital divide, lack of basic infrastructure in some regions, and the complexity of the digital identity ecosystem. The importance of understanding regional contexts and needs when deploying solutions was stressed.

Panelists called for global collaboration in creating a high-level framework for digital identity, building on existing success stories in areas like international travel and telecommunications. They advocated for a phased approach to implementation, allowing countries to progress at their own pace while encouraging experimentation among those ready to advance.

Key takeaways included the potential of digital identity to accelerate inclusion, the need to protect user privacy, and the importance of investing in digital identity as infrastructure. The discussion concluded by emphasizing that while countries may face different challenges, they share a common goal in developing effective and trusted digital identity systems.

Keypoints

Major discussion points:

– Defining trusted digital identity frameworks and their key elements

– Balancing enhanced security with user privacy protection

– The role of emerging technologies like blockchain and biometrics in digital identity

– Barriers to international cooperation on standardized digital identity systems

The overall purpose of the discussion was to explore the challenges and opportunities in developing trusted digital identity systems on a global scale. The panelists aimed to share insights on creating effective frameworks, leveraging new technologies, and fostering international cooperation.

The tone of the discussion was largely optimistic and forward-looking. Panelists acknowledged challenges but focused on potential solutions and opportunities for progress. There was a sense of urgency about the importance of digital identity systems, balanced with calls for careful, principled approaches. The tone became slightly more pragmatic towards the end when discussing practical barriers to implementation, but remained generally positive about future possibilities.

Speakers

– Shivani Thapa: Moderator

– Prince Bandar bin Abdullah al-Mishari: Assistant Minister of Interior for Technology Affairs, Kingdom of Saudi Arabia

– Emma Theofelus: Minister of Information Communications and Technology, Namibia

– Siim Sikkut: Managing Partner of Digital Nations

– Sangbo Kim: Vice President for Digital Transformation, World Bank

– Kurt Lindqvist: CEO, ICANN

Additional speakers:

– Fatma: Mentioned briefly, no role specified

Full session report

Expanded Summary of IGF 2024 Panel Discussion on Digital Identity Verification

The Internet Governance Forum (IGF) 2024 hosted a panel discussion focused on extending digital identity verification to protect internet transactions. The panel, moderated by Shivani Thapa, brought together experts from government, technology, and international organisations to explore the concept of a trusted digital identity framework and its key elements.

1. Defining Trusted Digital Identity Frameworks

A central theme of the discussion was the definition and importance of trusted digital identity frameworks. Bandar Al-Mishari, Assistant Minister of Interior for Technology Affairs from Saudi Arabia, emphasised that digital identity is not merely a service but fundamental infrastructure for digital transformation. He highlighted Saudi Arabia’s initiatives in this area, including a national digital ID system used for various services.

Emma Theofelus, Minister of Information Communications and Technology from Namibia, stressed the need for frameworks to clearly define roles, responsibilities, and limitations. Kurt Lindqvist, CEO of ICANN, added that trust in digital identity systems involves both technical and human elements, highlighting the complexity of building truly trusted systems. He also noted ICANN’s role in providing infrastructure that could support digital identity systems.

Siim Sikkut, Managing Partner of Digital Nations, emphasised that user adoption and ease of use are key for building trust. Sangbo Kim, Vice President for Digital Transformation at the World Bank, noted that digital identity enables access to essential services, further underlining its importance as infrastructure and a starting point for various services.

2. Balancing Security and Privacy

A significant portion of the discussion focused on the challenge of balancing enhanced security with user privacy protection in digital identity systems. Siim Sikkut proposed that privacy and security can be advanced simultaneously, rather than being balanced against each other.

Emma Theofelus suggested that an independent oversight body could help manage privacy and security concerns, providing a governance-based approach to this challenge. Kurt Lindqvist offered a perspective on how existing technologies like the Domain Name System (DNS) can provide security and stability in identity management.

3. Emerging Technologies in Digital Identity

The panel discussed the role of emerging technologies in digital identity systems. Blockchain technology was mentioned as a potential tool for enhancing user control and privacy options. Biometrics and artificial intelligence were also discussed as technologies that could play significant roles in future digital identity systems.

However, Kurt Lindqvist cautioned that existing technologies like DNS might be sufficient for many identity management needs, sparking a debate on the role of emerging versus established technologies in digital identity systems.

4. Barriers to International Cooperation

The panel identified several barriers to international cooperation on standardised digital identity systems. Sangbo Kim highlighted the lack of connectivity and basic digital ID solutions in many countries, noting that 2.6 billion people still lack internet access and about 1 billion lack any form of legal identification.

Emma Theofelus emphasised the need to understand different regional contexts and needs when developing digital identity solutions. This point was echoed by Siim Sikkut, who noted the differing levels of readiness and priorities across countries.

Bandar Al-Mishari pointed out the lack of a global framework or standards for digital identity, suggesting that international organisations could play a role in developing such standards. He proposed a high-level framework that could be adapted to different national contexts.

5. Key Considerations for the Future

Looking towards the future of digital identity, the panellists offered several key considerations. Siim Sikkut stressed the importance of experimentation and learning by doing, advocating for a practical, phased approach to implementation. He also emphasised the critical importance of usability in digital identity systems.

Emma Theofelus highlighted the potential for digital identity to accelerate inclusion of underserved populations, particularly in regions like Africa. She stressed the importance of understanding regional differences in digital identity implementation.

Sangbo Kim emphasised the need to protect user privacy through decentralisation, while Bandar Al-Mishari reiterated that digital identity, as critical infrastructure, requires significant investment.

Kurt Lindqvist called for an inclusive, phased approach to implementation, allowing countries to progress at their own pace while encouraging experimentation among those ready to advance.

6. Conclusion and Future Directions

The discussion concluded with several key takeaways. Bandar Al-Mishari emphasised the need for a global framework and increased investment in digital identity infrastructure. Emma Theofelus stressed the importance of understanding regional contexts and needs. Siim Sikkut highlighted the need for experimentation and user-centric design. Sangbo Kim underlined the importance of addressing the digital divide and protecting user privacy. Kurt Lindqvist advocated for leveraging existing technologies and adopting a phased approach to implementation.

The panellists agreed that digital identity is crucial infrastructure enabling various services and transactions, including travel and banking. They emphasised the need for trusted frameworks that balance security, privacy, and user adoption. The importance of experimentation and phased implementation was highlighted, along with the critical need to protect user privacy and give users control over their data.

However, several challenges remain, including creating global standards while respecting national sovereignty, bridging the digital divide, and balancing centralised identity management with calls for decentralised, user-controlled systems. The discussion highlighted the complex nature of implementing global digital identity systems and the need to consider various national and regional contexts, providing a foundation for future work in this critical area of digital governance.

Session Transcript

Shivani Thapa: The 19th IGF 2024 it is, and it is a matter of great, great privilege, ladies and gentlemen, for me to come in front of you, get on the stage and set the ambience for this very, very important panel here at the IGF 2024. Thanks to Miss Fatma for this great privilege that I have just been entrusted with. I can see my fellow panelists coming in. I request them to kindly grace us on stage, and yes, as I turn to my esteemed members in the audience, ladies and gentlemen, yes, yes, Your Highness, if you could kindly be seated here. Please join me, ladies and gentlemen, as I introduce my very distinguished panelist, His Highness Prince Bandar bin Abdullah al-Mishari, the Assistant Minister of Interior for Technology Affairs, the Kingdom of Saudi Arabia. A warm welcome to you. We have joining us here at the panel Her Excellency Ms. Emma Theofelus, the Minister of Information Communications and Technology, Namibia, joined by Mr. Sangbo Kim, the Vice President for Digital Transformation, the World Bank, and we also have joining us in our panel Mr. Kurt Lindqvist, CEO of ICANN, and Mr. Siim Sikkut, the Managing Partner of Digital Nations. Thank you so much for gracing us at this very, very important occasion. Ladies and gentlemen, today we focus on a very, very important topic, a topic of central importance: extending digital identity verification to protect internet transactions. Now yes, we all live in an era of online transactions, and these online transactions certainly underpin everything, everything from the global economy and commerce to public services. Therefore, digital identity verification has become the cornerstone of trust in our digital ecosystem. Now, why is this topic even here amidst us at the IGF? Because this is not merely a technical issue. It is a multidimensional issue that intersects with innovation, governance, human rights, and inclusion, and the list is pretty long. 
So our session here today is carefully curated for us to reap an overall understanding as to why we need to strengthen digital identity verification, give us an outline of the global standards and frameworks for identity protection, of course, talk about some best practices that we would certainly be focusing on, and also quickly run through the central element, that is, the role of international cooperation and multisectoral collaboration, which again is of central importance, as I said. So we are fortunate to have amidst us esteemed and illustrious members on the panel, and I know that you bring a very unique set of ideas and expertise from the niches that you come from, and what I would certainly be looking forward to is exploring the synergies between the insights and the perspectives that you bring to this forum. Thank you one more time for gracing us. So without further ado, let's dive directly into the conversation for which we are here. We will begin with a deceptively simple question, so to say, yet a very complex one. How do you define a trusted digital identity framework, and what are the key elements that make it effective? Now, this is a blanket question for all our esteemed panelists, but as I pose this question to each one of you, I certainly encourage you to reflect on how your unique role and perspective shape your answer. That said, let me begin with Her Excellency, Ms. Emma Theofelus, the Minister of Information, Communications, and Technology from Namibia, whose work connects governance and technology in remarkable ways. So, what does it mean, Your Excellency, to have a trusted digital identity framework, and what are the indispensable elements that make such a framework effective?

Emma Theofelus: Thank you, thank you very much, and I'm very happy to be here at the IGF and discussing this very important topic around digital identity systems and how they relate to a broader digital economy. And I think trusted could have a very broad definition: a system that is dependable, that is trustworthy, and a system that can carry the systems and processes that require or depend on digital identity. But I think the biggest one I could say is that a trusted digital identity system is one that would ensure it's clear who does what and when they do it, and what their limitations are. Because I think it's very important when dealing with frameworks of this nature that it's very clear who does what, and very clear what the limitations are of that authority or that body. And especially when you go further, when you look into the administration and oversight, who performs the operational functions, such as incident management, change and release management, coordination, and fraud prevention, it should be very clear who does what and when, and what their limitations are, to ensure that there are no ambiguities, in order to trust the particular system. But of course this is all underpinned by legislation that puts out clear guidelines and clear regulations around these systems and frameworks. So I think that's very important. Thank you.

Shivani Thapa: Thank you, Your Excellency. May I now turn to Mr. Kurt Lindqvist from ICANN. From ICANN's vantage point, Mr. Lindqvist, where ensuring trust in the Internet is so, so fundamental, how does, I mean, how does the notion of a trusted digital identity align with the broader challenges of governance in a decentralized digital world?

Kurt Lindqvist: Thank you for your question. Let me start with an observation. When we talk about trust, or especially trust in identity, it isn't just a technical concept of trust; it's also the human element of trust. Trust in the system, trust that the system works, that it is safeguarding, that it is delivering on the promise of why we turn to the identities. And this is regardless of whether it is an individual trying to use this, going about their daily roles or jobs, or whether it is in a business context. And so when we talk about trusted digital identity frameworks, we need to encompass all this, and we need to make sure that the system provides this comfort, or the trust element. And at ICANN, we have this conversation very much alive; it's very much ongoing. As part of the ICANN ecosystem, with domain names and registration data, a lot of this data is no longer available to the general public. We have GDPR and other requirements. And this is great from a privacy point of view and for safeguarding privacy, but it creates an obvious challenge when there are legitimate requests for access, for example from law enforcement. And the question then becomes, when we get these requests, how do we validate or safeguard that the requester is who they say they are? And this isn't just a theoretical question; we have these discussions with Interpol, Europol and similar organizations. They have tools that are useful for their purposes and in their context, but they're not necessarily systems or models that scale to a global level or a global challenge in identity verification. So that's one challenge you have in creating and proving this trust, of course. And on top of that, there are the operational issues of building these systems. 
How do you do this at scale but retain confidence in the system, or trust in the model, without driving up costs or complexities that would hinder access or create uneven adoption or uneven access to an identity system? Because that would be, again, eroding trust in the system. And then we have the readiness of technologies: authentication and credential systems are emerging, but it's still very immature and very early days. And for this to work, it can't work just most of the time; digital identity systems have to work all the time for everyone, otherwise you erode trust in the system. ICANN's role in this is that we don't build identity systems, especially not global ones. That role is for governments or the institutions they designate. Our focus and our role is to safeguard the Internet's infrastructure, the domain names and IP addresses that underpin the functioning of the Internet and provide, as you said, the trust in the Internet and the Internet model. You can think of this as us providing a smooth road surface for the cars to drive on top of, to oversimplify perhaps a little bit, but that's very much our role. But the ICANN community has a unique role it can play in this. It's a space where stakeholders from across all of these spectrums, business, the technical community, and civil society, can all come together, form policies, discuss them, and create globally applicable standards. So this is something that we work on very much. ICANN has developed something called the Registration Data Request Service, which is a way to handle this registration data I talked about, and to provide it in a safe and secure manner. So that's something that we see as our foundational role, ensuring the Internet is stable and secure, because without that you can't build any trust in the higher-layer systems. So that's what we are very much focused on.

Shivani Thapa: Right. Thank you. Thank you, Mr. Lindqvist. Yeah. We're here talking about technological reliability and user confidence, and when there's a blend of both of these, this becomes very, very tricky. And we understand the problems are not the same everywhere. There are large communities that are still transitioning into the digital age, and we all really need to instill that trust again. That's again another part of the entire scenario. Of course, I'm sure we certainly will venture into this later in the conversation. Allow me now to turn to His Highness Prince Bandar bin Abdullah al-Mishari. Your Highness, the Kingdom of Saudi Arabia has been at the forefront of technological innovation. The world certainly is in awe, keeping watch on the advancement and how you embrace the digital world. Right. So what is your take on the trust that we're talking about?

Bandar Al-Mishari: First of all, let me thank the forum for inviting me to participate in this important panel. Let me start by saying that digital identity is not a service, it is an infrastructure. It is an infrastructure for digital transformation, for digital transactions between individuals and between entities, business and governments. Therefore, it is as important as any other infrastructure. It has to be secure, it has to be regulated, it has to be trusted. In Saudi Arabia, we started a long time ago with building the ID, or identity, in the physical world. We issued cards with chips which store the biometrics, the numbers, the name, and the birth date. A long time ago, 45 years ago, we established what we call the unified national number, where each citizen or resident has a unique number to which we tie all the information for identity. This enabled us to expand to the next level, which is the digital identity. Back to your question, trust is a broad word. It has a lot of angles. The identity itself has to be trusted, since the creation of the identity has to reflect the identity of the person himself, with high trust from the users. Trust from the user himself, the identity holder, means he has to trust that his digital identity is going to help him access all information everywhere, anytime, across borders, et cetera. So trust does not mean only secure and private. It does not mean only that it is protected from invasion or impersonation. It means more than this. It means more business. It enables the identity holder to use it for digital transactions. Framework. Framework means an ecosystem which covers legal, governance, infrastructure, and technical options, et cetera. Of course, in different countries there are different frameworks. 
In Saudi Arabia, thanks to God, we started a long time ago with the identity itself, the physical identity, and when we came to the digital identity we just upgraded the framework. So we are not creating a new framework; it is an extension of the framework of the physical, card-based identity. As for identity, we consider identity to be the identity of individuals and the identity of entities: business entities, government entities, NGO entities. So we have to cover all types of objects, persons, and identities, and nowadays we are talking about Internet of Things digital identity. So trusted digital identity framework is a combination of four words that means a lot. It means trusted infrastructure for digital transformation.

Shivani Thapa: That is so beautifully put, and I believe certainly it's not a service, it's an infrastructure. I think that adds a lot of value and meaning to how you're going to sketch or carve the way forward. And certainly, in this very essence, I'm sure there will be a lot of lessons from the initiatives that are underway here in Saudi Arabia, in Riyadh, for the IGF participants to see as well, and to help guide the creations that they would want to embrace in the near future. Thank you, Your Highness. We now move on to Mr. Siim Sikkut. What is your take on this?

Siim Sikkut: Going back to my own background from Estonia, where I had the privilege of serving the government, and now that we advise other governments, we clearly see one thing: digital identity is just a key. So when you talk about trust in digital identity and so forth, we have to talk about two things: one is trust in the key, does that work, and secondly, how is the usage of that key, and how is that secure and trusted? So from my perspective, and we really had to build a lot, and we now see that countries currently have to build a lot, the whole rest of the components around it matter, just like His Highness said. I mean, the framework is actually much wider than that. For example, elements like personal data management, or how users can have control and consent, start to matter so much more for there to be trust in digital identity, even in the key itself. And with that in mind, what I see as an element that we really have to think about when we talk about trust in digital identity is: are the users using it? Ultimately, it's trusted if users are using it, and there are two elements in that. Yes, we have to build, obviously, the frameworks that, you know, make sure that this is reliable, etc. But at the same time, we cannot forget the usability side of things. So how to make it work, how to make sure that, you know, people will be using it. We have to build these trust safeguards in a way that makes them actually easy to use, so there's no extra effort, for example, that I have to go through to make sure that now my transaction can be trusted. If it's easy, people will use it, then trust is really there, and then identity and all the impacts are there.

Shivani Thapa: Coming to Mr. Kim. Mr. Kim, the World Bank operates at the intersection of global development and technology. How do you define this trusted digital identity framework, and how do you distil the elements that make it effective?

Sangbo Kim: As other panelists have already provided a lot of good insights on digital identity, I would like to add one more point: digital identity, as many people said, is fundamental infrastructure. But at the same time, it is how we encourage our people to use digital services more frequently, more comfortably, and in a safe way. So if we can secure data protection through the digital ID and, you know, keep a safe space so that people can trust the system, maybe we can encourage people to use digital services more. So that's one point. On the other hand, it is actually the very beginning, the starting point for every service. If you look at commercial services provided by many, you know, global companies and tech companies, the sign-up process is the starting point to register your identity or account; username and password is the starting point. Actually, digital identity goes beyond that; it is a much more complicated system, but anyhow, knowing your customer and knowing your people is the starting point for government services. Through that identity, we can provide social protection services, financial services, so many government services, much more easily through this ID system, I think.

Shivani Thapa: Thank you, thank you. What a rich start to our discussion already, and thank you all. Now, while we understand what constitutes a trust framework, let us head on to address the tightrope walk that every nation is having to face: balancing enhanced security with the protection of user privacy. My second question is in this regard, and I would like to begin with Mr. Siim Sikkut. How can we balance the need for enhanced security with protecting user privacy and avoiding over-surveillance?

Siim Sikkut: Well, to me, and not just theoretically but in practice, it really starts from moving away from talking about balancing. They both can be advanced at the same time, as a lot of experience and practice has shown. So it's not a question of one or the other; how do we advance both? And the beginning point, as with any transformation, is to say, let's really define certain core principles we will always adhere to. For example, I would be willing to make the argument that if we really want to have trust in digital identity, people ultimately want to have control and assurance of privacy. Based on the values, if we define those principles that should always be held, then we have a chance of actually thinking, okay, how can we now do security within those confines, based on these values and so forth? So it's not a balancing act; it's defining what are the things we will always observe and then figuring out how we do the rest around it. For example, security in a way that privacy is kept. Going beyond that into practical stuff, there are several things that have worked. There are things you can look at technologically; that is, for example, one of the really strong arguments for distributed systems, for data sharing, et cetera. But practically speaking, really going through this exercise of a principle-based approach is really what helps to solve this. And lastly, why this principle base matters is that the devil is always in the details, as they say, right? Which means that the dilemma often occurs at the operational and technical level. So down the line, techies like system administrators have to make this call, right? Again, for them it's so much easier if there's a clear base of principles we will not steer away from, clearly set from the start.

Shivani Thapa: Let me come to Minister Theofelus. What would be your take on this, especially given that privacy concerns can be particularly acute in regions with limited data protections?

Emma Theofelus: Definitely, and I think I'm leaning towards what Siim has said. I think with clear core principles and values on what the data is to be used for, for each individual user, it becomes easier to know what parameters to keep within in the administration of a digital identity system. But there's also a school of thought around perhaps an independent oversight body or authority to manage the system, to ensure that the data of the users is used for exactly what it was meant for, and that there's no trespassing or going above and beyond the authority of the data processor. So I think having an oversight authority would be perhaps one of the practical ways to ensure that the system is properly managed and administered, and that there would be no competing interests. I think the balancing act can be carried forward by both keeping the privacy of the user and ensuring that there's no trespassing on the rights of those users and their data on the system.

Shivani Thapa: Well, thank you. Security and privacy certainly are two sides of the same coin, and yet constantly in conflict, so this can be tricky, but that's been pretty optimistic from your side. Now let's shift our gears from today to look into the future. Emerging technologies such as blockchain and biometrics are the food for our, I mean, third discourse. These, as you all know, are redefining the digital identity space at the moment, but how can we leverage these advancements without falling prey to their hype and risks? I think this has been a very, very burning question, or rather a concern, at all tiers and among all stakeholders. Now let's hear from our panelists. You'll have some three minutes each to answer this question, beginning with His Highness Prince Bandar bin Abdullah al-Mishari. With emerging technologies like blockchain and biometrics, what do you see as the future of digital identity verification?

Bandar Al-Mishari: Of course, as the wave of technologies comes around, it brings with it challenges and opportunities. Biometrics have been available for a long time, and we have been using biometrics for maybe more than 15 years to identify persons and individuals: face, fingerprint, DNA in special cases, voice in special cases, iris. All of these biometrics have been utilized since we started the digital ID initiative, or the ID initiative in fact. Blockchain has also been around since the Web3 wave, which started maybe more than eight years ago. It came after the Web 2.0 wave, and then the necessity for a more trusted internet brought what we call blockchain. It started with Bitcoin, and then it generalized itself as blockchain. Blockchain, in a nutshell, is an option to replace the database, or the central database, with a distributed ledger, or in simple words, with a distributed identity that is controlled mainly by the identity holder himself. It cannot be changed, it cannot be forged, and the holder can allow any other user to use the credentials of his identity. But in the end, it needs an identity issuer, an identity holder, and an identity user. So it is going to solve a problem in some societies which are sensitive to the concept of a central database for identity. Of course, it provides more security, more privacy, more control by the identity holder. What is the future, or what are the opportunities? Of course, it will add more options for any country, society, or individual who weighs privacy more than anything else. Of course, security is part of that. However, it should be evaluated in terms of accessibility, in terms of ease of use, in terms of cost. All these aspects have to be assessed in order to weigh the value of using blockchain in identity. 
Specifically, blockchain can be utilized efficiently for identity access management, not for identity issuing or identity management. What I mean by identity issuing is creating the identity itself: you have to create it outside the blockchain, then put it in the blockchain. You also have to manage the identity, in terms of updating the identity, stopping the identity, re-initiating the identity, et cetera. These are the services around the identity, and they have to be done either within the blockchain or outside it. So biometrics, blockchain, and I may add AI as well: AI is going to add more opportunities to protect identity or create new models of identity. As technology and business models progress, we will have more options. And hopefully this will give more options to other countries, societies, and individuals to adopt digital identity as a basic right for each individual in the world. As you know, the UN has sustainable development goals, and one of them is to provide access to digital services. Without these options, which offer themselves as a cure for the concerns of some societies, we will not be able to achieve this goal. So hopefully these technologies, biometrics, blockchain, AI, PKI, all of them, will add more options to the digital identity infrastructure. And we will see one day that more than 90% of the people on the planet have a digital identity and are enabled to access healthcare and education, open bank accounts, et cetera. This will add a lot of value to the economy of each country, each society, each individual.
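The issuer–holder–verifier triangle His Highness describes can be sketched in a few lines. The example below is a minimal illustration, not any country’s actual system: a symmetric HMAC stands in for the issuer’s digital signature (a real deployment would use public-key cryptography), and `ISSUER_KEY` and all data are hypothetical.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"hypothetical-issuer-secret"  # stand-in for the issuer's signing key

def issue_credential(subject: str, claims: dict) -> dict:
    """Issuer: serialize a credential and sign it (HMAC stands in for a signature)."""
    payload = json.dumps({"subject": subject, "claims": claims}, sort_keys=True)
    tag = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": tag}

def verify_credential(credential: dict) -> bool:
    """Verifier: check the issuer's signature over the presented credential."""
    expected = hmac.new(ISSUER_KEY, credential["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

# The holder presents the credential; the verifier checks it without
# consulting any central database.
cred = issue_credential("alice", {"age_over_18": True})
print(verify_credential(cred))      # True
tampered = {**cred, "payload": cred["payload"].replace("alice", "mallory")}
print(verify_credential(tampered))  # False
```

The point of the sketch is the division of roles: issuing happens once, off-ledger; verification can happen anywhere the issuer’s key (or public key) is trusted.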

Shivani Thapa: Thank you. May I now turn to Mr. Lindqvist? Please feel free to add to what has just been said, while answering this: blockchain has been hailed as a revolutionary technology for digital trust, but is it the panacea it is often made out to be, or are there blind spots that need to be addressed?

Kurt Lindqvist: I want to follow on from what His Highness just said about blockchain. One thing worth reiterating is that blockchain has some fantastic characteristics, but one of them is that you can’t revoke what’s in there. And if you lose a key, you might not be able to access the identity again, and recovery of identity is quite an important part of any identity system, right? I think there might be many systems in this space that you can build on. From the organization I represent, speaking slightly in my own interest, there are also existing technologies that provide very similar, or maybe even better, characteristics. The domain name system, the DNS, has a hierarchical structure, and we are seeing it used for identity management in, for example, social media platforms like BlueSky, which uses it as a way to verify the identity of users. The Internet Engineering Task Force, which defines the standards and technologies of the internet, is working to add new fields and parameters to this structure so it can be used for identity verification and management. So, going back to your question, we like to talk about future technologies as if they have to be something brand new and shiny, but sometimes new technologies can be built inside the context of what we have, or on top of what we have, to deliver perhaps more stable and expanded functionality without going through a lot of unknowns. And I think that’s worth keeping in mind when we talk about future technologies to enable secure, inclusive access and stability.
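The DNS-based verification Mr. Lindqvist mentions can be illustrated concretely: BlueSky’s AT Protocol binds a domain handle to a decentralized identifier (DID) via a TXT record at `_atproto.<handle>`. The sketch below mocks the DNS resolver with a plain dictionary (a production version would perform a real TXT lookup with a DNS library); the domain and DID values are made up.

```python
# Mock of a DNS zone: in reality this would be a TXT lookup on
# _atproto.<handle>, which is how BlueSky-style handle verification works.
FAKE_DNS_TXT = {
    "_atproto.alice.example": ["did=did:plc:abc123"],
}

def resolve_handle(handle: str, claimed_did: str) -> bool:
    """Verify that the DNS zone for `handle` attests to `claimed_did`.

    Whoever controls the domain's DNS controls the binding, so the
    hierarchical trust of DNS carries over to the identity claim.
    """
    records = FAKE_DNS_TXT.get(f"_atproto.{handle}", [])
    return any(record == f"did={claimed_did}" for record in records)

print(resolve_handle("alice.example", "did:plc:abc123"))   # True
print(resolve_handle("alice.example", "did:plc:evil999"))  # False
```

This is why existing infrastructure can carry identity workloads: the verification step reuses a resolution hierarchy that already exists and is already globally deployed.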

Shivani Thapa: Mr. Sikkut, eager to know, what would you like to add to this conversation?

Siim Sikkut: Well, first of all, obviously it’s important to experiment, to constantly keep trying things out. And if you look at the emerging-technologies side of things, I’m very much with His Highness in the sense that AI probably has the biggest impact, especially given our topic here: how do you ensure trust in these frameworks, from a defense point of view as well as through novel ways of doing that? But if we talk about the goal of having an identity for everyone globally, I think in some ways we risk thinking too much about emerging technologies. There is so much proven technology already out there; we just need to get it out, scale it, and bring people on board, and that does the trick. Then we can talk about the next stages of evolution with the next technology, and so forth.

Shivani Thapa: Mm-hmm. Okay, so we’ve explored the what and the how, trying to understand the landscape and the experts’ points of view on the future of the internet that we want. Let us now address the where: where do the barriers lie, and where can international cooperation make a big difference? May I turn to Mr. Sangbo Kim? With the plethora of experience that you and the entity you represent bring, I think you would be the right person to begin. What are the key barriers to international cooperation in developing standardized digital identity systems, and how do you think these barriers can be addressed?

Sangbo Kim: There are two barriers I would like to mention today. The first is that we are still struggling with a lack of connectivity and a lack of basic digital ID solutions. On the internet connectivity side, 2.6 billion people still have no access to the internet; that’s huge. On the other hand, 3.3 billion people in the world live in countries where there are no ID solutions or data-sharing mechanisms, and 850 million people are not even recognized by a government, so they have no official identity at all. This is a very fundamental issue: we are still struggling with the digital divide between developing and developed countries, and that is a huge challenge. We need to invest more in the fundamental infrastructure. At the same time, it is a really complicated ecosystem: not only government, but also many private-sector players, big tech, and startups dedicated to identity technology, and many companies need to follow the trend. People need to understand how to use these systems and need to be trained, and participation from academia and NGOs is crucial to the success of building the ecosystem. It is a very complicated ecosystem, so leadership is the key to bringing digital ID solutions across all these stakeholders. International collaboration is another key, to enable regional, interoperable services across borders. So collaboration and leadership are the key answers to the great complexity of digital ID.

Shivani Thapa: Of course, cooperation is key and central, but even after having realized what the scenario is and what needs to be done, things are very difficult on the ground and in practical life. How do you view this scenario, Your Excellency Minister Theofelus?

Emma Theofelus: Yes, I fully agree that cooperation and collaboration would definitely go a long way. Notwithstanding the barriers indicated earlier, I think there also needs to be a better understanding of the contexts of the different regions and of the solutions to address those barriers and challenges. If we don’t understand the existing barriers of a particular region or area, the digital identity solutions might not work for that region. For example, the African region has a completely different context from the MENA region, the North American region, or the European region, and if we try to deploy solutions that work for Europe, it does not necessarily mean they will work for the needs of the African region. So better understanding the needs of every region around digital identity and access is, I think, quite important, and then we can take it from there. Because with collaboration and cooperation, you must understand the context, and the solutions must be applicable to the various needs of a particular region or area. Thereafter we can cooperate better and deal with the barriers, if possible one by one, to ensure that we reach a standardized digital identity system or framework that takes care of the entire context and ensures that we are able to deploy the solutions needed. Thank you.

Shivani Thapa: Thank you, Your Excellency. May I turn to His Highness one more time? What would be your perspective on these cooperation barriers? At the same time, Saudi Arabia’s initiatives have been deeply rooted in collaboration. Could you also share with us some lessons that can be drawn to inspire global partnerships in the context that we are deliberating now?

Bandar Al-Mashari: What will stimulate international cooperation and make it stronger than what we have right now is to have an owner for a global digital identity framework. Maybe the UN, maybe the ITU, maybe someone else, maybe a combination of these entities has to come forward, with clear responsibility, to bring all bodies, all countries, or at least the advanced countries, together to build such a framework: not a detailed one, but a high-level one. The framework has to come to fill the gaps, not to interfere with specific countries’ internal laws, culture, or definition of privacy. It has to focus on the gaps at the global level. As far as the challenges go, the challenge is to extend the current success stories. We have success stories right now in the travel business. As everyone knows, you travel from country to country with a passport, and the passport is a standardized document. There is an international body, the International Civil Aviation Organization, and passports are used between all countries in a standard way, carrying standard information. Right now, most countries are moving to the digital passport, so we are very close to a global digital identity. We have to focus on the international standard for the digital passport and extend it to become an option for global digital identity. On the other hand, we have a similar story in roaming services in telecommunications, with GSM: when anyone travels from one country to another, they can use their mobile in the other country. By the same token, we can generalize digital identity so that when you travel from one country to another, you can simply activate certain credentials, and the digital identity you have in your country might be extended as a trusted digital identity in another country.
So we have such stories in other fields that we should build on and extend a little, without interfering with the detailed culture of each country. In Saudi Arabia, we have a mature digital identity, thanks to God. We will soon be approaching 28 million users, which is more than 85% of the population; those remaining are mostly children and young people, or those who do not use any services. Taking the success story in each country, and building on it to extend it to other countries, will expand coverage and also overcome the obstacles. So we have a lot of the ingredients. To me, we are not far from a global digital identity; we are very close to that achievement, inshallah.

Shivani Thapa: May I now turn to Mr. Siim Sikkut? We’ve talked about global standards; in practice, that’s quite an optimistic view to build on, but global standards often face resistance due to local nuances. How can nations strike a balance between alignment and sovereignty, given the very concerns His Highness just raised?

Siim Sikkut: Well, I’m not really going to answer you directly, for good reason. I come from a European context, mind you. In the European context, even as small Estonia, we tried steering exactly these things at the European Union level: making identities mutually usable, exchanging data across borders, and so forth. All the aspects, including the ones that have been voiced here, can be summed up as one basic challenge: alignment of readiness. You have different priorities, different maturity, different cultural contexts, different connectivity; the readiness is different. And we have to be practical and pragmatic about that, even to the point where I would be willing to argue that today it is still too early for a global standardization or framework effort. Most countries are not ready for this. Look, most of these countries also have tiny teams; if you make these people work on global matters, they don’t have time to work on domestic ones. Building up, implementing, adopting, and scaling identities within countries should come first. But, again, going back to learning from how we got things going in Europe: we had the same alignment issue, and it’s okay if there are different tracks, a two-track approach. Those who are not yet ready, let’s give them the time and support and everything they need to get ready. Those who are ready, and there are more and more of those countries, starting with Saudi Arabia, for example, let’s really allow them to start experimenting, trying, playing, figuring out these frameworks, also because that practice then shows what could work at an international or global level.
And I really want to say, again very much my own and the Estonian approach, that we should not debate before we try things out. In standardization efforts there is so much talk about regulating, debating, and standardizing ahead of time, by people who don’t know what they’re talking about: they’ve never seen the elephant, yet they’re describing the elephant. Let’s start trying and experimenting with those who are ready, and then we can scale as others become ready as well.

Shivani Thapa: All right, coming to Mr. Lindqvist, building on the very point Mr. Sikkut just made: ICANN operates as a global steward of internet governance. How do you navigate differing national priorities while advancing collaborative solutions?

Kurt Lindqvist: As was just said by the panelists before me, the national priorities are what they are. You have to respect them, and countries have to move forward at the pace they can. You can, of course, incentivize those priorities, and I’m going to come back to a point made earlier: when we define these frameworks and solutions, it’s important that they are universally implementable, so that we don’t come up with technological solutions or frameworks so complicated, expensive, or burdensome to implement that regions, communities, or users are left out of them. The priority has to be to accommodate this readiness, and it can be a phased approach, at the national level, the regional level, or the user and application level, whatever it may be. But it has to be some sort of phased approach that is inclusive and enables everyone to follow, and that should be the priority: enablement. As you said, it has to be a phased approach rather than waiting for everyone to join at the same time, because they will never all get there together.

Shivani Thapa: Well, I believe it’s been quite an enriching and incredible discussion from this very esteemed panel. Before we conclude, let me turn back to our panelists: could you share a thought on the key takeaways, or on something we may have missed, on this very important topic? We are running out of time, but I can extend at least a minute to each of the panelists. May we begin with Mr. Sikkut?

Siim Sikkut: Well, to be really short, I would emphasize perhaps two things. We started by talking about how to build trust and what a trusted framework is, and my whole point was that it’s more than just the technology; it’s everything around it, the policies. Fundamentally, technology is the easy part of any digital transformation; it is how we transform ourselves, and we have to do proper change management for that. And that takes me to the key point: the biggest challenge is building up the leadership, the capability, and the governance for this. That is what I really believe we should all be investing more in, from within countries to donors and internationally. Secondly, going back to my last point: even if not everyone is ready, let’s experiment more, let’s try more, and let’s start thinking about how to build something like a global framework at some point. Even if not everyone is ready, experimentation is the way to start.

Shivani Thapa: All right, Mr. Lindqvist?

Kurt Lindqvist: Well, to build on the same theme: let’s remember that digital identity is as much about people as it is about systems, tools, and technologies. We should make sure these systems become inclusive at all layers and in all intersections, so that they actually become usable. And I very much agree: let’s experiment and try, so that we gain experience and can build around the pitfalls.

Shivani Thapa: Yeah, Mr. Kim?

Sangbo Kim: I would like to highlight the decentralization feature of digital ID, which advances the privacy of users. Many countries are now giving back to users the right to control their privacy and their private data. It is really important to protect privacy; that is very important. At the same time, I would like to highlight that we do not always need the full identity record held centrally: to secure an authentication or verification process, just a small piece of the information is often more than enough.
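Mr. Kim’s point that "a small piece of the information is more than enough" is the idea behind selective disclosure. The sketch below is a simplified illustration, not any production scheme: an issuer commits to salted hashes of each attribute, the holder later reveals only one attribute and its salt, and the verifier checks it against the commitment while the other attributes stay hidden. All names and data are made up.

```python
import hashlib
import secrets

def commit_attributes(attributes: dict) -> tuple[dict, dict]:
    """Issuer: commit to each attribute with a salted hash.
    Returns (public commitments, private salts kept by the holder)."""
    salts = {k: secrets.token_hex(16) for k in attributes}
    commitments = {
        k: hashlib.sha256(f"{k}={v}|{salts[k]}".encode()).hexdigest()
        for k, v in attributes.items()
    }
    return commitments, salts

def verify_disclosure(commitments: dict, key: str, value, salt: str) -> bool:
    """Verifier: check one revealed attribute against the published commitment,
    learning nothing about the attributes that remain undisclosed."""
    digest = hashlib.sha256(f"{key}={value}|{salt}".encode()).hexdigest()
    return commitments.get(key) == digest

# The holder reveals only the age attribute, not name or address.
commitments, salts = commit_attributes(
    {"name": "Alice", "address": "123 Example St", "age_over_18": True}
)
print(verify_disclosure(commitments, "age_over_18", True, salts["age_over_18"]))
print(verify_disclosure(commitments, "age_over_18", False, salts["age_over_18"]))
```

The salt prevents the verifier from brute-forcing the hidden attributes by hashing guesses, which is what makes minimal disclosure privacy-preserving rather than merely partial.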

Emma Theofelus: Indeed, we should be cautious about those privacy factors. I think a key takeaway for me would be that with digital identity we can accelerate the inclusion of many people who might have been left behind. For people who have been excluded, it makes a huge difference to their access to government services and their ability to transact, especially in a region like Africa, where I come from, which continues to have some of those gaps. Secondly, I think there is a lot of room to collaborate and to peer-learn from those who have gone before and experimented, as Mr. Sikkut said, and achieved some successes, so that we can build on them.

Bandar Al-Mashari: First of all, build on the success stories in digital identity: in travel, banking, telecommunications, and global cooperation. Second, consider digital identity as infrastructure, and therefore invest in it; I can say, invest less to get more. Digital identity touches people’s jobs and salaries, their economy, their status, their education, their healthcare; COVID-19 is the proof of that. Thank you.

Shivani Thapa: Digital identity systems are already forming the backbone of global commerce and connectivity. So this is not something limited to our desires; it has become imperative and of paramount importance. As I heard from our esteemed panelists, we may be in different boats, dealing with different realities and different degrees of will, but the direction we are headed and the destination are certainly the same. And there is so much we have already achieved, doors of opportunity we have already worked on, that can be built upon for the future we all mutually desire. That, in essence, is what this forum had to give to the IGF 2024. I want to take a moment to offer thanks on behalf of the IGF and the host of partners that have set up this very important gathering here in Riyadh, and especially to our panelists for gracing us and extending your valuable time, thoughts, and experience to this panel. With that, I also rest the microphone. That’s all from this panel. Thank you.

B

Bandar Al-Mashari

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Digital identity is fundamental infrastructure, not just a service

Explanation

Digital identity is viewed as essential infrastructure for digital transformation, enabling transactions between individuals, businesses, and governments. It is not merely a service, but a foundational element that requires security, regulation, and trust.

Evidence

Saudi Arabia’s long-term approach to building physical and digital identity systems over 45 years

Major Discussion Point

Defining a trusted digital identity framework

Agreed with

Sangbo Kim

Agreed on

Digital identity as fundamental infrastructure

Framework should encompass legal, governance, and technical aspects

Explanation

A trusted digital identity framework must cover various aspects including legal, governance, infrastructure, and technical options. It should be comprehensive while respecting different countries’ laws, cultures, and definitions of privacy.

Evidence

Saudi Arabia’s experience in upgrading their physical identity framework to a digital one

Major Discussion Point

Defining a trusted digital identity framework

Agreed with

Emma Theofelus

Agreed on

Need for clear frameworks and governance

Blockchain offers more user control and privacy options

Explanation

Blockchain technology provides options for replacing central databases with distributed ledgers, giving identity holders more control over their data. It offers increased security, privacy, and user control, but should be evaluated in terms of accessibility, ease of use, and cost.

Evidence

Discussion of blockchain’s potential in identity management and access management

Major Discussion Point

Balancing security and privacy

Agreed with

Siim Sikkut

Sangbo Kim

Agreed on

Importance of user privacy and control

Differed with

Kurt Lindqvist

Differed on

Role of blockchain in digital identity systems

Lack of global framework or standards for digital identity

Explanation

There is a need for a global digital identity framework, possibly overseen by international organizations like the UN or ITU. This framework should focus on filling gaps at the global level without interfering with specific countries’ laws or cultural definitions of privacy.

Evidence

Success stories in travel business with standardized passports and international roaming in telecommunications

Major Discussion Point

Barriers to international cooperation on digital identity

Differed with

Siim Sikkut

Differed on

Approach to global standardization of digital identity systems

Digital identity as critical infrastructure requires investment

Explanation

Digital identity should be considered as critical infrastructure, requiring investment for its development and implementation. The speaker emphasizes that investing in digital identity can yield significant returns in various sectors of society and the economy.

Major Discussion Point

Key considerations for the future of digital identity

E

Emma Theofelus

Speech speed

151 words per minute

Speech length

778 words

Speech time

307 seconds

Framework must clearly define roles, responsibilities and limitations

Explanation

A trusted digital identity system should clearly outline who does what, when they do it, and what their limitations are. This clarity is crucial for ensuring trust in the system and avoiding ambiguities in operational functions.

Evidence

Examples of operational functions such as incident management, change and release management, coordination, and fraud prevention

Major Discussion Point

Defining a trusted digital identity framework

Agreed with

Bandar Al-Mashari

Agreed on

Need for clear frameworks and governance

Independent oversight body can help manage privacy and security

Explanation

An independent oversight body or authority could be established to manage the digital identity system. This body would ensure that user data is used only for its intended purposes and prevent any misuse or overreach by data processors.

Major Discussion Point

Balancing security and privacy

Need to understand different regional contexts and needs

Explanation

Different regions have varying contexts and needs when it comes to digital identity solutions. Understanding these differences is crucial for developing effective solutions that address the specific barriers and challenges of each region.

Evidence

Comparison of different contexts in African, MENA, North American, and European regions

Major Discussion Point

Barriers to international cooperation on digital identity

Digital identity can accelerate inclusion of underserved populations

Explanation

Digital identity systems have the potential to accelerate the inclusion of people who have been left behind in terms of access to government services and ability to transact. This is particularly important for regions like Africa that continue to have significant gaps in digital inclusion.

Major Discussion Point

Key considerations for the future of digital identity

S

Siim Sikkut

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

User adoption and ease of use are key for trust

Explanation

Trust in digital identity systems is ultimately determined by user adoption and usage. It’s crucial to build trust safeguards in a way that makes the system easy to use, without requiring extra effort from users to ensure transaction security.

Major Discussion Point

Defining a trusted digital identity framework

Agreed with

Bandar Al-Mashari

Sangbo Kim

Agreed on

Importance of user privacy and control

Privacy and security can be advanced simultaneously, not balanced against each other

Explanation

The speaker argues against the notion of balancing privacy and security, suggesting that both can be advanced simultaneously. This approach involves defining core principles that should always be adhered to, and then figuring out how to implement security measures within those confines.

Evidence

Suggestion of a principle-based approach and the use of distributed systems for data sharing

Major Discussion Point

Balancing security and privacy

Differing levels of readiness and priorities across countries

Explanation

Countries have different levels of readiness and priorities when it comes to digital identity systems. This creates challenges in aligning efforts for international cooperation and standardization.

Evidence

Example of European context where countries had different priorities and maturity levels

Major Discussion Point

Barriers to international cooperation on digital identity

Differed with

Bandar Al-Mashari

Differed on

Approach to global standardization of digital identity systems

Importance of experimentation and learning by doing

Explanation

The speaker emphasizes the importance of experimentation and practical implementation over theoretical debates. He suggests that countries that are ready should start experimenting and trying out digital identity systems, which can then inform future global frameworks.

Evidence

Reference to Estonia’s approach of trying things out before extensive debate

Major Discussion Point

Key considerations for the future of digital identity

S

Sangbo Kim

Speech speed

123 words per minute

Speech length

602 words

Speech time

292 seconds

Digital identity enables access to essential services

Explanation

Digital identity is seen as a fundamental infrastructure that encourages people to use digital services more frequently, comfortably, and safely. It serves as a starting point for various services, including social protection and financial services.

Evidence

Comparison to commercial services where sign-up processes are the starting point for user engagement

Major Discussion Point

Defining a trusted digital identity framework

Agreed with

Bandar Al-Mashari

Agreed on

Digital identity as fundamental infrastructure

Lack of connectivity and basic digital ID solutions in many countries

Explanation

Many countries still struggle with lack of internet connectivity and basic digital ID solutions. This creates a significant digital divide between developing and developed countries, hindering the implementation of global digital identity systems.

Evidence

Statistics on global internet access (2.6 billion people without access) and lack of ID solutions (3.3 billion people in countries without ID solutions)

Major Discussion Point

Barriers to international cooperation on digital identity

Need to protect user privacy through decentralization

Explanation

The speaker highlights the importance of decentralization in digital ID systems to enhance user privacy. He emphasizes that countries are now giving users more control over their private data and that full identification information is not always necessary for authentication or verification processes.

Major Discussion Point

Key considerations for the future of digital identity

Agreed with

Bandar Al-Mashari

Siim Sikkut

Agreed on

Importance of user privacy and control

K

Kurt Lindqvist

Speech speed

163 words per minute

Speech length

1267 words

Speech time

466 seconds

Trust involves both technical and human elements

Explanation

Trust in digital identity systems involves both technical aspects and human elements. It’s not just about the technical concept of trust, but also about user trust in the system’s functionality, safeguards, and ability to deliver on its promises.

Evidence

Discussion of ICANN’s challenges with domain name registration data and privacy requirements

Major Discussion Point

Defining a trusted digital identity framework

Existing technologies like DNS can provide security and stability

Explanation

The speaker suggests that existing technologies, such as the Domain Name System (DNS), can provide similar or even better characteristics than newer technologies like blockchain for identity management. He emphasizes the importance of building on existing stable technologies.

Evidence

Examples of DNS being used for identity management in social media platforms like BlueSky

Major Discussion Point

Balancing security and privacy

Differed with

Bandar Al-Mashari

Differed on

Role of blockchain in digital identity systems

Need for inclusive, phased approach to implementation

Explanation

The speaker advocates for a phased approach to implementing digital identity systems that is inclusive and enables everyone to participate. This approach should respect national priorities while incentivizing progress and ensuring that solutions are universally implementable.

Major Discussion Point

Barriers to international cooperation on digital identity

Agreements

Agreement Points

Digital identity as fundamental infrastructure

Bandar Al-Mashari

Sangbo Kim

Digital identity is fundamental infrastructure, not just a service

Digital identity enables access to essential services

Both speakers emphasize that digital identity is a crucial infrastructure enabling various services and transactions, rather than just a standalone service.

Importance of user privacy and control

Bandar Al-Mashari

Siim Sikkut

Sangbo Kim

Blockchain offers more user control and privacy options

User adoption and ease of use are key for trust

Need to protect user privacy through decentralization

The speakers agree on the importance of giving users control over their data and ensuring privacy in digital identity systems.

Need for clear frameworks and governance

Bandar Al-Mashari

Emma Theofelus

Framework should encompass legal, governance, and technical aspects

Framework must clearly define roles, responsibilities and limitations

Both speakers stress the importance of comprehensive frameworks that clearly define roles, responsibilities, and governance structures for digital identity systems.

Similar Viewpoints

Both speakers recognize that different countries and regions have varying levels of readiness and specific needs when it comes to implementing digital identity systems.

Emma Theofelus

Siim Sikkut

Need to understand different regional contexts and needs

Differing levels of readiness and priorities across countries

Both speakers advocate for a practical, phased approach to implementing digital identity systems, emphasizing the importance of experimentation and inclusivity.

Siim Sikkut

Kurt Lindqvist

Importance of experimentation and learning by doing

Need for inclusive, phased approach to implementation

Unexpected Consensus

Potential of existing technologies for digital identity

Bandar Al-Mashari

Kurt Lindqvist

Blockchain offers more user control and privacy options

Existing technologies like DNS can provide security and stability

While Bandar Al-Mashari emphasizes the potential of blockchain for digital identity, Kurt Lindqvist unexpectedly suggests that existing technologies like DNS can provide similar benefits. This consensus on the importance of leveraging both new and existing technologies for digital identity solutions is noteworthy.

Overall Assessment

Summary

The speakers generally agree on the importance of digital identity as fundamental infrastructure, the need for user privacy and control, clear governance frameworks, and understanding regional differences. There is also consensus on the need for practical implementation approaches.

Consensus level

The level of consensus among the speakers is relatively high, with agreement on core principles and challenges. This suggests a strong foundation for international cooperation on digital identity systems, but also highlights the complexity of implementation due to varying regional needs and technological considerations.

Differences

Different Viewpoints

Role of blockchain in digital identity systems

Bandar Al-Mashari

Kurt Lindqvist

Blockchain offers more user control and privacy options

Existing technologies like DNS can provide security and stability

While Bandar Al-Mashari sees blockchain as offering increased security and privacy options, Kurt Lindqvist suggests that existing technologies like DNS may provide similar or better characteristics for identity management.

Approach to global standardization of digital identity systems

Bandar Al-Mashari

Siim Sikkut

Lack of global framework or standards for digital identity

Differing levels of readiness and priorities across countries

Bandar Al-Mashari advocates for a global digital identity framework, possibly overseen by international organizations, while Siim Sikkut emphasizes the need to consider differing levels of readiness and priorities across countries before pursuing global standardization.

Unexpected Differences

Emphasis on experimentation vs. established frameworks

Siim Sikkut

Bandar Al-Mashari

Importance of experimentation and learning by doing

Lack of global framework or standards for digital identity

While most speakers focused on establishing frameworks and standards, Siim Sikkut unexpectedly emphasized the importance of experimentation and practical implementation over theoretical debates. This contrasts with Bandar Al-Mashari’s call for a global framework, highlighting a difference in approach to developing digital identity systems.

Overall Assessment

Summary

The main areas of disagreement revolve around the role of emerging technologies like blockchain, the approach to global standardization, and the balance between privacy and security in digital identity systems.

Difference level

The level of disagreement among the speakers is moderate. While there is general consensus on the importance of digital identity systems and the need for trust and security, speakers differ on the specific approaches and technologies to achieve these goals. These differences reflect the complex nature of implementing global digital identity systems and the need to consider various national and regional contexts. The implications of these disagreements suggest that achieving a universally accepted approach to digital identity may be challenging and may require flexible frameworks that can accommodate different priorities and levels of technological readiness.

Partial Agreements

Both speakers agree on the importance of addressing privacy and security in digital identity systems, but they propose different approaches. Emma Theofelus suggests an independent oversight body, while Siim Sikkut argues for advancing both aspects simultaneously through core principles.

Emma Theofelus

Siim Sikkut

Independent oversight body can help manage privacy and security

Privacy and security can be advanced simultaneously, not balanced against each other

Takeaways

Key Takeaways

Digital identity is fundamental infrastructure, not just a service, that enables access to essential services and economic participation

A trusted digital identity framework must balance security, privacy, and user adoption

There are significant barriers to international cooperation on digital identity, including varying levels of technological readiness and differing regional needs

Experimentation and phased implementation approaches are needed to advance global digital identity solutions

Protecting user privacy and giving users control over their data is crucial for building trust in digital identity systems

Resolutions and Action Items

Experiment more with digital identity solutions to gain practical experience

Invest in digital identity as critical infrastructure

Work towards developing a high-level global digital identity framework, potentially through international bodies like the UN or ITU

Focus on building leadership, capability and governance for digital identity within countries

Unresolved Issues

How to create truly global standards for digital identity while respecting national sovereignty and priorities

How to bridge the digital divide and bring digital identity solutions to the 3.3 billion people currently without access

How to balance the need for centralized identity management with calls for decentralized, user-controlled systems

How to ensure digital identity systems are inclusive and do not leave behind certain populations or regions

Suggested Compromises

Adopt a phased, multi-track approach where countries can implement digital identity at their own pace while still working towards interoperability

Build on existing successes in areas like international travel documents and mobile roaming to extend digital identity globally

Focus on high-level frameworks and principles rather than prescriptive technical standards to allow for local adaptation

Thought Provoking Comments

Digital identity is not a service, it’s an infrastructure. It’s an infrastructure for digital transformation, digital transactions between individuals, between entities, business and governments.

speaker

Bandar Al-Mashari

reason

This reframes digital identity from a service to a fundamental infrastructure, emphasizing its critical importance.

impact

Set the tone for discussing digital identity as a foundational element rather than just an add-on service. Led to further exploration of the broad implications and requirements for digital identity systems.

Trust means from the user itself, from the identity holder, he has to trust that his identity, his digital identity is going to help him to access all information everywhere, anytime, across the border, etc. So the trust doesn’t mean only secure and private. It doesn’t mean that it’s protected from invasion or impersonation. It means more than this. It means more business.

speaker

Bandar Al-Mashari

reason

Expands the concept of trust beyond security to include utility and economic benefits.

impact

Broadened the discussion to consider user perspectives and practical benefits of digital identity systems, not just technical aspects.

We have to talk about two things. One is trust in the key: does that work? And secondly, how is that key used, and how is that usage secure and trusted?

speaker

Siim Sikkut

reason

Distinguishes between trust in the identity itself and trust in how it’s used, highlighting the complexity of trust in digital systems.

impact

Led to a more nuanced discussion of trust, considering both technical reliability and user experience/control.

I think with clear core principles of values on what the data is to be used for, for each individual user, becomes easier on what parameters to keep within the administration of a digital identity system.

speaker

Emma Theofelus

reason

Emphasizes the importance of clear principles and user consent in managing digital identities.

impact

Shifted the conversation towards the importance of governance and user rights in digital identity systems.

Blockchain, in a nutshell, it’s an option to replace the database or the central database with distributed ledger, or in a simple word, with distributed identity that controlled mainly by the identity holder himself.

speaker

Bandar Al-Mashari

reason

Provides a clear explanation of blockchain’s potential role in digital identity, highlighting user control.

impact

Sparked discussion on the pros and cons of blockchain for digital identity, leading to consideration of various technological approaches.
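The quote’s core idea, replacing a trusted central database with a tamper-evident ledger, can be shown with a toy hash chain. This is a conceptual sketch only (a single node, no distribution, no signatures), and the record fields are invented for illustration; it demonstrates the one property the speaker relies on, that later tampering with earlier entries is detectable.

```python
import hashlib
import json

# Toy hash-chained ledger: each block commits to the previous block's hash,
# so modifying any earlier record invalidates every later link on re-check.

def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain: list, record: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev, "record": record}
    block["hash"] = block_hash({"prev": prev, "record": record})
    chain.append(block)

def verify(chain: list) -> bool:
    prev = "0" * 64
    for b in chain:
        if b["prev"] != prev:
            return False
        if b["hash"] != block_hash({"prev": b["prev"], "record": b["record"]}):
            return False
        prev = b["hash"]
    return True

chain = []
append(chain, {"holder": "alice", "attribute": "over_18", "value": True})
append(chain, {"holder": "alice", "attribute": "residency", "value": "verified"})
print(verify(chain))                 # True
chain[0]["record"]["value"] = False  # tamper with an earlier entry
print(verify(chain))                 # False
```

In a real distributed-ledger identity system this verification would be replicated across many parties, which is what removes the single central operator the quote contrasts against.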

Two barriers I would like to say today. One is still we are struggling with lack of connectivity and lack of basic digital ID solutions. From the internet connectivity point of view, we are still struggling with 2.6 billion people so no access to the internet.

speaker

Sangbo Kim

reason

Highlights fundamental infrastructure challenges that often get overlooked in discussions of advanced digital identity systems.

impact

Refocused the discussion on the need to address basic connectivity and access issues alongside more advanced digital identity solutions.

Overall Assessment

These key comments shaped the discussion by broadening the concept of digital identity from a narrow technical focus to a more holistic view encompassing infrastructure, trust, user control, governance, and global accessibility challenges. The conversation evolved from defining digital identity to exploring its implications for privacy, security, economic development, and international cooperation. The comments highlighted the complexity of implementing digital identity systems that are both technologically advanced and inclusive, leading to a rich discussion of potential solutions and ongoing challenges.

Follow-up Questions

How can we create globally applicable standards for digital identity verification while respecting local laws, cultures, and privacy definitions?

speaker

Bandar Al-Mashari

explanation

This is important to address the gaps at a global level without interfering with specific countries’ laws or cultural norms.

How can we extend the success of standardized digital passports to create a broader global digital identity framework?

speaker

Bandar Al-Mashari

explanation

Building on existing successful international standards could provide a pathway to more comprehensive global digital identity solutions.

What are the best approaches for experimenting with and implementing digital identity frameworks among countries that are ready, while allowing others time to develop?

speaker

Siim Sikkut

explanation

This is crucial for making progress on international cooperation without leaving behind countries at different stages of readiness.

How can we design digital identity solutions that are universally implementable and don’t exclude regions or communities due to complexity or cost?

speaker

Kurt Lindqvist

explanation

Ensuring inclusivity and accessibility in digital identity frameworks is essential for widespread adoption and effectiveness.

What are the most effective ways to build leadership, capability, and governance structures for digital identity initiatives within countries and internationally?

speaker

Siim Sikkut

explanation

Developing these capacities is fundamental to successfully implementing and managing digital identity systems.

How can we balance the need for centralized identity information with protecting user privacy through decentralization?

speaker

Sangbo Kim

explanation

Finding this balance is critical for creating trusted digital identity systems that respect user rights and privacy.

What strategies can be employed to accelerate digital inclusion in regions like Africa through digital identity initiatives?

speaker

Emma Theofelus

explanation

Addressing the digital divide and including underserved populations is a key potential benefit of digital identity systems.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Day 0 Event #171 Legalization of data governance

Session at a Glance

Summary

This discussion focused on the legalization of data governance and internet regulation in various countries, with a particular emphasis on China’s approach. Speakers from China, Singapore, Brazil, and international organizations shared insights on their respective legal frameworks and strategies for managing data in the digital age.

The discussion highlighted the importance of balancing data security with development and innovation. China’s approach, as outlined by several speakers, involves a comprehensive legal framework including cybersecurity, data security, and personal information protection laws. The country is also working on regulations for cross-border data flows and AI governance.

Singapore’s representative described their evolving approach, which began with minimal regulation but has expanded to address harmful content, online falsehoods, and cybersecurity. Brazil’s speaker outlined their efforts in data protection, AI legislation, and plans for a national data economy policy.

Corporate perspectives were provided by representatives from Lenovo and ZTE, who shared their companies’ data governance practices and compliance efforts. Both emphasized the importance of aligning with national and international regulations while fostering innovation.

Key themes that emerged across presentations included the need for international cooperation in data governance, the challenges of regulating rapidly evolving technologies like AI, and the importance of balancing security concerns with the benefits of data sharing and utilization.

The discussion underscored the complex and evolving nature of data governance in the digital age, with countries and companies alike grappling with how to protect privacy and security while promoting innovation and economic growth.

Keypoints

Major discussion points:

– The importance of balancing data security and development in data governance

– The need for international cooperation and harmonization of data governance approaches

– The challenges and opportunities presented by AI and emerging technologies for data governance

– The role of law and regulation in ensuring responsible data practices and cross-border data flows

– Industry perspectives on implementing data governance and compliance frameworks

The overall purpose of the discussion was to explore different national and industry approaches to data governance, with a focus on legal and regulatory frameworks. Speakers shared insights on how to balance innovation and security, address emerging challenges, and promote international cooperation on data governance issues.

The tone of the discussion was largely informative and collaborative. Speakers presented their country’s or organization’s approaches in a factual manner, while acknowledging common challenges and the need for continued dialogue and cooperation. There was an underlying sense of urgency about addressing data governance issues, but the overall tone remained constructive and solution-oriented throughout.

Speakers

Speakers from the provided list:

– Tang Lei: Director of the Internet Governance Research Center, Chinese Academy of Cyberspace Studies

– Wolfgang Kleinwächter: Professor Emeritus at the University of Aarhus, Denmark

– Shi Jianzhong: Professor and Vice President of China University of Political Science and Law

– Jose Roberto de Andrade Filho: Immediate past deputy consul general of the Consulate General of the Federative Republic of Brazil in Shanghai

– Daniel Seng: Director of Center for Technology, Robotics, Artificial Intelligence and Law Studies, National University of Singapore

– He Bo: Director of Research Center for Internet Law, China Academy of Information and Communications Technology

– Zhao Jingwu: Associate Professor at Law School of Beihang University

– Gao Huandong: Vice President of Lenovo Group

– Li Wen: Vice President of ZTE Corporation

Additional speakers:

– Wu Shenghua: Professor at Beijing Normal University (mentioned but did not speak)

Full session report

Data Governance in the Digital Age: Balancing Security, Development, and International Cooperation

This discussion brought together experts from various countries and sectors to explore the complex landscape of data governance in the digital age. The speakers, representing China, Singapore, Brazil, and major technology companies, shared insights on national approaches, legal frameworks, and corporate practices in data governance.

Key Themes and Approaches

1. Balancing Security and Development

A central theme throughout the discussion was the need to balance data security with development and innovation. Tang Lei, representing China’s perspective, emphasised that the country’s data governance framework aims to achieve “high-quality development and high-level security”. This approach is reflected in China’s comprehensive legal framework, which includes cybersecurity, data security, and personal information protection laws, as outlined by Shi Jianzhong. Tang Lei also noted China’s global initiative on data security, highlighting the country’s efforts to engage internationally on these issues.

In contrast, Daniel Seng described Singapore’s evolving approach, which began with minimal regulation but has expanded to address harmful content, online falsehoods, and cybersecurity. Singapore has recently enacted laws addressing online safety and criminal harms, demonstrating a shift towards more comprehensive regulation. This “light-touch” approach to content regulation differs from China’s more comprehensive framework, highlighting the diversity of national strategies in data governance. Seng also emphasized the importance of public education in Singapore’s approach to internet governance.

2. Legal Frameworks and Regulatory Challenges

The discussion revealed that countries are at different stages of developing and implementing legal frameworks for data governance. Brazil, as described by Jose Roberto de Andrade Filho, has had a data protection law (LGPD) in place for four years and is currently working on AI regulations and plans for a national data economy policy.

Speakers agreed that emerging technologies, particularly AI, are creating new challenges for data governance. He Bo highlighted the shift from model-centric to data-centric approaches in AI development, emphasising the crucial role of data quality management. He stressed the importance of high-quality data in training AI models and suggested strengthening policy guidance to encourage data sharing among companies, particularly to benefit smaller firms and startups in AI development.
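The data-centric point can be made concrete with a minimal curation step: before training, records are deduplicated and filtered by simple quality rules. The rules and field names below are illustrative stand-ins, not anything prescribed in the session; real pipelines use far richer checks (language filters, toxicity screens, near-duplicate detection).

```python
# Toy "data-centric" quality management: dedupe and filter training records
# before they reach a model. Field names and thresholds are hypothetical.

def curate(records: list, min_len: int = 10) -> list:
    seen = set()
    kept = []
    for r in records:
        text = (r.get("text") or "").strip()
        if len(text) < min_len:  # drop empty or too-short samples
            continue
        if text in seen:         # drop exact duplicates
            continue
        seen.add(text)
        kept.append({**r, "text": text})
    return kept

raw = [
    {"text": "A valid training example about data governance."},
    {"text": "A valid training example about data governance."},  # duplicate
    {"text": "short"},                                            # too short
    {"text": None},                                               # missing
]
print(len(curate(raw)))  # 1
```

Policies encouraging data sharing among firms, as suggested in the discussion, matter precisely because steps like this are only as good as the pool of data they start from.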

3. Cross-Border Data Flows

The regulation of cross-border data flows emerged as a complex challenge requiring international cooperation. Zhao Jingwu provided detailed insights into China’s approach, outlining four categories of rules governing cross-border data flows: general rules, rules for important data, rules for personal information, and rules for specific industries. He also mentioned a recent regulation issued in March 2024 aimed at facilitating data flows while ensuring security.

4. Corporate Data Governance Practices

Representatives from major technology companies provided insights into corporate data governance practices. Gao Huandong described Lenovo’s comprehensive data security and privacy framework, which includes a global privacy compliance system, data classification, and protection measures. Li Wen outlined ZTE’s data compliance system designed to improve efficiency, detailing their approach to data governance, risk management, and compliance with various international standards.

5. Multi-stakeholder Collaboration

Wolfgang Kleinwächter, Professor Emeritus at the University of Aarhus, emphasised the importance of multi-stakeholder collaboration in internet governance. This view was echoed by other speakers, who recognised the need for cooperation among governments, private sector entities, and civil society to address the complex challenges of data governance effectively.

Areas of Agreement and Disagreement

There was broad consensus among speakers on the fundamental importance of data governance in the digital age. All participants acknowledged the need for comprehensive frameworks to address security, privacy, and economic development concerns. However, specific approaches and priorities varied between countries and organisations.

The main areas of disagreement revolved around the level of regulation required. While China’s approach emphasises comprehensive legislation, Singapore’s evolving strategy represents a different philosophy. These differences reflect diverse regulatory landscapes and national priorities, which may complicate efforts to establish global standards for data governance.

Emerging Challenges and Future Directions

Several speakers highlighted the need to address emerging challenges in data governance:

1. AI Regulation: The rapid development of AI technologies requires new approaches to regulating training data and algorithms.

2. Data Economy: He Bo discussed data as an economic asset, citing Shanghai’s data exchanges as an example of how data can be leveraged for economic growth.

3. Data Quality Management: He Bo emphasized the critical role of data quality in AI development and the need for policies to support effective data management practices.

4. Adapting Governance Models: Kleinwächter emphasised the need for tailored governance models for specific digital issues, challenging the notion of a one-size-fits-all approach.

5. Blurring of Communication and Content Platforms: Daniel Seng highlighted the challenges posed by the convergence of communication and content platforms, necessitating new regulatory approaches.

Conclusion

The discussion underscored the complex and evolving nature of data governance in the digital age. While there is general agreement on the importance of balancing security, development, and innovation, the specific approaches vary significantly across nations and sectors. The speakers highlighted the need for continued international dialogue, flexible regulatory frameworks, and multi-stakeholder collaboration to address the challenges posed by rapidly evolving technologies and the increasing economic value of data.

As the digital landscape continues to evolve, policymakers, industry leaders, and academics must work together to develop governance models that protect privacy and security while fostering innovation and economic growth. The diversity of approaches presented in this discussion provides valuable insights for shaping future data governance strategies in an increasingly interconnected world.

Session Transcript

Tang Lei: Welcome to the Internet Governance Forum and to the day zero event on the legalization of data governance. At the World Internet Conference Wuzhen Summit, President Xi Jinping noted that we should see the trends of digitalization, networking and intelligentization, embrace innovation as a primary driver, uphold security as a baseline requirement, and pursue inclusiveness as a core value. Efforts must be accelerated to promote innovative, secure, and inclusive development in cyberspace, working together to usher in a brighter digital future. Today we are gathering here to jointly discuss issues related to data governance, which are of great importance. In cyberspace, China has consistently committed itself to placing equal emphasis on development and security, establishing and improving the legal system for data governance. Firstly, China has enacted the cybersecurity law, data security law, and personal information protection law, providing basic provisions for data security and personal information protection systems. Secondly, China has released the provisions for promoting and regulating the cross-border flow of data, fostering the free flow of data in a lawful and orderly manner. Thirdly, China has released several provisions on the management of automobile data security and several regulations to build data security management systems in some key areas. Fourthly, China has issued the global initiative on data security and released regulations for the secure and effective utilization of network data. Firstly, the regulations put forward the overall requirements and general provisions for network data security. Secondly, the regulations further specify rules concerning personal information protection. Thirdly, the regulations improve mechanisms for the management of important data security. Fourthly, the regulations enhance the provisions on the security management of
cross-border network data flows. In recent years, with the advancement of new technologies and applications such as artificial intelligence, the volume of data has been growing rapidly, accompanied by escalating security risks. Striking a balance between high-quality development and high-level security has become a common challenge all over the world, and it has become increasingly crucial to promote data governance through law-based approaches. To address this, the following actions are necessary. Firstly, improve the legal system for data governance: establish and improve the foundational rules for data governance and address the data security challenges brought about by the development of new technologies. Secondly, strengthen theoretical support for law-based data governance: promote the development of the theoretical system for data governance, ensuring a sound relationship between theory and practice. Thirdly, strengthen international exchanges and cooperation on the legalization of data governance, creating a win-win international cooperation pattern for data governance. Colleagues, data governance is a shared challenge for countries worldwide. I encourage all the guests here today to conduct in-depth discussions around relevant topics, jointly explore legal solutions for data governance, and work together to promote the legalization of global data governance, ensuring that the benefits of digital advancements are shared by people worldwide. In closing, I wish today’s event a full success. Thank you. Thank you, Mr. Tang, for sharing the relevant experience of China. Now I give the floor to Professor Wolfgang Kleinwächter, Professor Emeritus at the University of Aarhus, Denmark.

Wolfgang Kleinwächter: Okay, thank you. Thank you very much, thank you for the invitation, and thank you to the organizers from China for this important workshop. Mr. Tang Lei in his opening speech mentioned two very important concepts. One is education and the other one is collaboration. I think the whole internet is a permanent learning process, so we know more than we knew 20 years ago and we will know a lot more in 20 years from now. It means it’s a never-ending story and it will continue. And the other one is collaboration: no one is able to manage all these things with one shot. There is no silver bullet. There is no one road; there are many roads which have to be taken, and the only way to manage all the forthcoming problems is by collaboration of all stakeholders in their respective roles. I think this is the beauty of the Internet Governance Forum. This was the beauty of the outcome of the World Summit on the Information Society 20 years ago. It was a compromise at the time. Some governments wanted to have private sector leadership. Others wanted to have governmental leadership. And the compromise was: we need all stakeholders. The internet and all these new achievements are not for leadership; they are for collaboration. That means we have to work hand in hand in our respective roles. Governments are different from the private sector. The technical community is different from civil society. But we can manage the problems of the future only if we work hand in hand. I think this is the big message, and I’m very happy that Mr. Tang Lei mentioned these two concepts, education and collaboration, as the main guidance. As an academic person, I’m dealing with definitions; this is the core work of academics. And the title of this session is Legalization of Data Governance, so I want to ask: what is data and what is governance? Let’s start with data.
So 20 years ago, I was in another workshop where we said, okay, data is the starting point. That’s the resource, the raw material. Data leads to information. Information leads to knowledge. And in the best case, knowledge leads to wisdom. That’s why some people said: we start with the data society, then we have the information society. It was the World Summit on the Information Society, but we have to move upwards to the knowledge society and, some say, even the wisdom society. This is a little bit idealistic, but as an academic person, I think I’m allowed to think in idealistic terms. More complicated is the term governance. This is the Internet Governance Forum, and I think internet governance was the term we used 20 years ago because the internet was brand new. A lot of people did not understand what the internet is. Is it a technical issue? Is it an economic issue? Is it a political issue? It was very complex, and I was a member of the UN Working Group on Internet Governance, which was tasked by Kofi Annan to propose a definition, just to enable governments and other stakeholders to have an understanding of what internet governance means. The definition we proposed, which is reflected in the Tunis Agenda, had three main elements. The first is that internet governance means the involvement of all stakeholders: as I said already, it’s not a one-stakeholder approach, it’s a multi-stakeholder approach. The second is that governance means you have to share: you have to share protocols, codes, regulations, and decision-making. And the third is that we proposed a layered approach. We made a differentiation between the evolution and the use of the internet. At that time, the majority of the problems were with the technical layer, the evolution of the internet.
And the use of the internet is related to the so-called internet-related public policy issues. Today, the majority of problems are on the application layer and not on the technical layer. I think a lot of issues on the technical layer have been cleared. It's not a question anymore. So that means 20 years ago, it was a technical problem with some political implications. Today, it's a political problem with a technical component. And this makes it rather different. So what does governance mean now? I think today there is a confusion, because everything is governance. We have data governance in today's workshop. We have internet governance. People in the cybersecurity field speak about cyber governance. We now have AI governance. We have ICT governance. We have IoT governance for the Internet of Things. So there's a huge confusion: what governance do you mean? Data, internet, AI, and things like that. I think more or less this is all the same soup, because this is governance in the digital age. Governance in the digital age means you have to have a specific solution, a governance model, for each of the specific issues. There is no one-size-fits-all where you say this is the governance for data or this is the governance for AI. You have to identify the problem. What do you want to govern? What is the subject? And then build the governance model around the system. And this is complicated, because in this layered system you have on the one hand a universal set of norms and principles and codes. The technical layer: this is one world, one internet. But on the application layer, you have 193 national jurisdictions. So we have a problem: 193 sovereign nations, 193 national jurisdictions, but one world. And I think this is a challenge: you have a contradiction and you have to manage the contradiction. You cannot settle it. A settlement would mean the whole world would be ruled only by technical codes. This is an illusion.
The other alternative, that the whole world would be ruled by only one country, is also an illusion. So that means you have to find a compromise. And this is the challenge. To meet these challenges, you need more discussion, more dialogue among all stakeholders. And that's why the Internet Governance Forum is such a wonderful platform. Thank you very much.

Tang Lei: Thank you very much, Professor Kleinwächter, for your wonderful sharing. Next, let's invite Mr. Shi Jianzhong, Professor and Vice President of China University of Political Science and Law.

Shi Jianzhong: Please. Good afternoon, ladies and gentlemen. I'm very delighted to have the opportunity to visit this ancient and magnificent city, Riyadh, to attend this panel with you and to discuss a topic that is both cutting-edge and challenging, namely data governance under the rule of law. As we know, the mission of law is to adjust social relations by allocating rights, obligations, and responsibility, even liability, among different social entities, in order to maintain the security, justice, and efficiency of these relations. As we have observed, the continuous advancement of ICT and AI technology is driving the construction of the digital economy, digital government, and digital society. In the digital age, the identities of the subjects of various social relations are being digitalized and datafied, such as natural persons, firms, and government departments. The behavior of various subjects is also being digitalized and datafied, such as the digital consumption of individuals, the digital transactions of firms, and digital government affairs. And the objects are also being digitalized and datafied, such as the digitalization and datafication of goods, services, even equipment. In the digital age, social relations themselves are also being digitalized and datafied, such as those between individuals, between firms, between individuals and firms, between individuals and governments, between firms and government departments, and among government departments themselves. Consequently, in the digital age, the mechanisms, tools, and modes by which law adjusts social relationships must change accordingly, which poses many new challenges and opportunities. The challenges brought by digital intelligence technology to the law mainly refer to the need for the law to confront unprecedented challenges.
Unprecedented new issues, such as how to configure the rights related to data, how to ensure data security, how to maintain data sovereignty, how to make equal use of data resources, how to protect personal privacy, how to protect digital human rights, and how to regulate data processing, including data collection, storage, use, processing, transmission, provision, disclosure, and deletion. The opportunities brought by digital intelligence technology to law are primarily reflected in the fact that digital intelligence technology can serve as a tool for the rule of law. It can be internalized and embedded in the rule-of-law process in real time, thereby empowering all aspects of legislation, law enforcement, and justice, and achieving a higher level of scientific legislation, strict law enforcement, and fair justice, though seizing the opportunities that digital intelligence technology brings to law is a great challenge in itself. So we must acknowledge that in the current era of rapid advancement in AI, the challenges that digital intelligence technology poses to the law outweigh the opportunities. Now, we can find an interesting phenomenon: on the one hand, digital intelligence technology is creating new legal problems; on the other hand, digital intelligence technology is helping the law to solve problems. In other words, digital intelligence technology and data are both objects and tools of the rule of law. This is a special phenomenon that data governance must be aware of. Certainly, it is important to recognize that the data we discuss today refers to records of information in electronic form, that is, electronic data. Compared to other forms of information recording, such as paper-based forms, electronic data has several unique characteristics: the technical characteristic of reproducibility, the economic characteristic of non-rivalry, and the legal characteristic of non-exclusivity.
We believe that security is the prerequisite for development, and development is the guarantee for security. When it comes to data governance, it is about both security and development. In other words, in theory, ensuring security and promoting development are not opposites, and there is no irreconcilable contradiction between them. As one of the goals of data governance, ensuring security means protecting individual privacy, commercial security, and national security in the process of developing and utilizing personal data, corporate data, and government data. In this regard, the Chinese government has enacted corresponding laws such as the Cybersecurity Law, the Personal Information Protection Law, and the Data Security Law. Mr. Tang Lei has already touched upon this in his speech just now, so I will not elaborate further. As the other objective of data governance, promoting development entails effectively developing and utilizing personal data, corporate data, and government data, while protecting individual privacy, commercial security, and national security. Promoting development means making use of data and digital intelligence technology to foster technological advancement, economic prosperity, social development, and the well-being of the people. For this purpose, the Chinese government has also made corresponding provisions in many laws, such as the Civil Code. Moreover, it is formulating a number of laws and regulations to facilitate the development and utilization of data. For instance, the Chinese government places great emphasis on fair competition in the digital economy, specifically stipulating in the Anti-Monopoly Law that undertakings shall not use data and algorithms, technology, capital advantages, or platform rules to engage in monopolistic behavior prohibited by that law. As we know, in every country, the government holds the largest amount and the highest quality of data.
If effectively developed and utilized, such data can empower business development. To this end, the Chinese government has formulated the Regulation on Fair Competition Review in accordance with the Anti-Monopoly Law, which can guarantee fair and non-discriminatory exploration and utilization of government data by all types of firms. In the process of ensuring data security and promoting the development of the digital economy, Chinese law is continuously improving its systems and mechanisms for governance while addressing the challenges brought by digital intelligence technology. China is actively using this technology to empower the legal system. For example, China has not only established three specialized internet courts, but also actively employs digital intelligence technology in procedural rules to enable justice. These rules include online mediation rules, online litigation rules, and online court rules, ensuring that the law is implemented more justly, more efficiently, and with greater integrity. In summary, data governance requires the support of law and is inseparable from it. Therefore, only the organic integration of law and digital intelligence technology can build a more scientifically sound and reasonable structure and mechanism for data governance, and achieve positive interaction between higher-standard data security and higher-quality digital economic development. Thank you.

Tang Lei: Thank you very much, Professor Shi. Now let's turn to our online speaker, Mr. José Roberto de Andrade Filho, immediate past Deputy Consul General of the Consulate General of the Federative Republic of Brazil in Shanghai. Please. Thank you very much. Can you hear me well?

José Roberto de Andrade Filho: Good morning. Thank you. Good morning from Brasilia. It's a pleasure to join this event online from far away, and I'm happy to see many familiar faces. First of all, let me thank the friends and authorities from the Bureau of Internet Laws and the Cyberspace Administration of China, in particular Professor Wu Shenghua, with whom we in Brazil have a long-time collaboration, but also my fellow speakers and authorities, Mr. Tang Lei, Professor Kleinwächter, and Professor Shi Jianzhong. I would like to introduce to you today some examples of how we in Brazil are working on our internal governance and on the structuring of our data environment, and also how this will serve as a support and a way for Brazil to contribute to international, to global governance. And I liked very much how Professor Kleinwächter gave the example that, well, we have so many governances, but it's the digital age, governance in the digital age, and then we have several different approaches. The examples I will give I could divide as follows. First, where are we coming from right now, in late 2024, and what kind of structures do we have? Here I would like to speak about the consolidation of our data protection law, the LGPD, and our National Data Protection Authority. Second, I would like to give the example of our AI law, which is currently in fast development. And third, our views and current discussion on data economy policies. So, just last month, we completed four years of the implementation of Brazil's National Data Protection Authority, ANPD. ANPD is the most robust data authority and instance in Brazil. It was created because of the provisions that we had in the LGPD, our national data law, which was approved in 2021 but has a background of discussion since 2018. Well, Brazil, as you know, is one of the 10 biggest economies in the world. Our population is one that spends the most hours a day online.
We are a leader in terms of users on all platforms, especially now with e-commerce, with a strong connection to Chinese companies as well. So, we are a big data producer, but at the same time we have, in all fields of our country, especially, let's say, in agriculture and in biodiversity, a wealth of data. I always compare that to the Amazon forest. We have an Amazon forest of data, of wealth. So it's our duty to organize internally and promote development and new internal governance, I would say, but governance that reflects our values. One key element, even before I dip into the three examples: all discussions in Brazil are very much permeated by inclusion. The private sector and civil society are strong players in all discussions regarding our data laws and frameworks, and there is also the need to have people, the human beings, at the center. As Professor Wu Shenghua also said, well-being and development. So inclusion, people and human values at the center, and development and well-being. That is very much the spirit, and we can find many of these provisions in the LGPD, as I said, from four years ago, and now with the National Data Protection Authority. A very extensive report has just been published. It's available online, but unfortunately so far only in Portuguese, though I hope very soon in English as well. It gives a full vision of how ANPD, our national authority, has been working: first, in giving guidance, recommendations, and, let's say, soft norms to the private sector and data actors, by which I mean companies, public institutions, and others. So ANPD has been working actively and has already consolidated, in line with exchanges with other legal environments abroad, a robust set of norms and publications for the guidance of our data players or actors.
At the same time, we have increased the number of people at the National Data Authority. We are still building capacity. Currently, ANPD, according to the report, has 150 employees. We still don't have a specialized career track inside the agency, but the idea is to further professionalize it. I would say ANPD is our main vehicle for the implementation of data policies in Brazil. Also, one of its areas of work is what we call international data transfers, very much like the cross-border data transfers that are at the center of many discussions. So this is the key platform for us to establish international cooperation. At the same time, number two: just five days ago, we approved at the Senate (Brazil's legislature has two chambers, the Senate and the House of Representatives) the new bill for an AI law. This is the result of years of discussion which, if we look at the timeline, has increasingly absorbed and included new actors, especially civil society, which has made a very strong contribution. One of the key aspects, let me say, is, for example, the use of copyrighted material for training models. Our artistic and writers' associations have been very, very active. We don't expect to have a final model; I think right now nobody can expect to have a final model, of course, because the AI environment is a fast-changing one. But we do want to have something that reflects our values and is operational. The AI law will now go to the House of Representatives for further consolidation. There might be some modifications, but we expect that in the course of 2025 we will have another tool, another legal framework, together with the data protection law from 2021, that will give us a more robust legal framework internally to organize ourselves and this wealth of data, but also to promote our international cooperation.
As a third level that I would like to highlight, our Ministry of Industry, and this is very much connected with all the international collaboration, has already started organizing a forthcoming public consultation for a national data economy policy. This is very much aligned with what we see in other countries: using data, unlocking the power of data, unlocking the value of data to promote economic competitiveness, better products, better business models, but also, as I said in the beginning, people-centered, aimed at development, and reflecting values. So, this is at a very early stage of consultations. We will soon have something published and in the press, but it has been announced already from the industry side. And this is very important now, especially, I would say, with China as one of our main investors in Brazil, but also with the just-celebrated agreement between Mercosur, our trade bloc in South America, and the European Union, for us to attract new business and partnerships; the data economy policy promises a lot of future potential and very good results as well. We have to discuss many elements that will serve as the base for this data economy development. In China, for example, you have the Shanghai Data Exchange and local data exchanges all over. That is a fantastic example of how data assets can generate value; as I saw in Shanghai, data can even be declared on the balance sheet of companies. So, this is one very good example. We will certainly take into account all international experiences. We aim at having more and more delegations traveling and learning about the experience not only of China, but also of the European Union and other partner countries. But I see a very positive horizon. This is, I would say, more medium and longer term, but a very positive element as well. So, in conclusion, I don't want to overextend my time, although I think we could have an extensive and very productive discussion.
These three elements, namely the consolidation and strengthening of our National Data Authority; the coming approval of our AI law, with flexibilities to be adapted and kept in line with what we have to be prepared for, always people-centered, with a vision of protecting vulnerable groups and promoting and protecting human rights in the digital space; and our coming competitiveness-oriented national data economy policy, can give a very good vision of how Brazil is moving. Of course, as a diplomat, to conclude, I would say Brazil is very active, has been very active, and will continue to be very active in engaging to promote the organized order that we have to build in collaboration, government to government and within international organizations. And I congratulate all of you for organizing and participating in events like this, where we have the opportunity to exchange and share, including with other key stakeholders like civil society and the private sector. Thank you very much, and I look forward to participating further in the discussions and benefiting from this event.

Tang Lei: Thank you. Many thanks to Mr. Andrade, who has provided us with a new perspective. Thanks again to all speakers of the first session. Next is the second session, the roundtable discussion. Now, let's welcome Mr. Daniel Seng, Director of the Centre for Technology, Robotics, Artificial Intelligence and the Law, National University of Singapore. Can everyone hear me? Yes, we can hear you. Thank you very much.

Daniel Seng: First, I'd like to thank Professor Wu from Beijing Normal University for this very kind invitation, as well as the UN Internet Governance Forum and the Cyberspace Administration of China. It is a privilege to share with you Singapore's approach towards internet governance. In fact, as you will hear from my presentation, it is a fast-evolving approach that is adapting to the vicissitudes of recent uses of the internet and the problems they pose. I propose to start by outlining some basic principles that set the stage for the approach to the governance of the internet in Singapore. First, Singapore is an open society. We have achieved our growth through trade, finance, multiculturalism, and multilateralism. In fact, Singapore is one of the most connected countries in the world: 93.2% of our residents have broadband, and mobile phone penetration is as high as 166% of our population. I was really puzzled by this number until I realised that there are six phones amongst the four family members that I have, so this statistic is indeed true. Furthermore, we have been firm believers in free and open access to information, which we believe is key to education, research, and innovation. So one of the underlying premises behind our concept of internet governance is that we fundamentally do not believe in monitoring user access to information. But at the same time, there has to be a minimal, light-touch, across-the-board content regulation approach for all information that is accessible in Singapore. So since 1996, we have regulated content providers via something called the Internet Code of Practice. The way the Internet Code of Practice works is that it defines a category known as prohibited material: material defined to be contrary to public interest, public morality, public order, public security, and national harmony, or prohibited by applicable laws, in Singapore's multicultural society.
As you can see, the focus is on the fact that, as a multicultural society, we have to take steps, sometimes serious ones, to ensure that our multicultural society is stable and not disturbed. So some of the factors taken into account in prohibiting material that can be accessed in Singapore include pornography, materials pertaining to sexual violence, materials pertaining to extreme violence or cruelty, and materials that tend to incite ethnic, racial, or religious hatred, strife, or intolerance. Having given ourselves a very broad definition of prohibited material, in practice our internet service providers only restrict access to about 200 mass-impact websites, preventing these websites from being accessed in Singapore. From our research, most of these prohibited websites are pornographic in nature, and others pertain to content that is harmful to Singapore's racial or religious harmony or against national interests. As far as our internet content providers are concerned, we encourage them to exercise self-regulation by not hosting fora and programs that contain prohibited material, and where the content pertains to news websites and political websites, we require these websites to be registered. So as you can see, the level of internet regulation for these internet content websites is largely minimal. That was until recently. In 2022, we observed a growing phenomenon where internet websites are blurred in their usage, in that we can have online communication service providers that also provide content, done essentially via point-to-point and point-to-multipoint communications. If I describe to you social media services for communications, such as those provided by Facebook, TikTok, X, and YouTube, you understand what I mean, because many of these platforms are designed to communicate from individual to individual, sometimes with harmful and inappropriate content.
To deal with this situation, we passed a new law in 2022 called the Online Safety (Miscellaneous Amendments) Act to address this category of harmful and inappropriate content, which we define to include sexual content, violent content, suicide and self-harm content, cyberbullying content, content endangering public health, and content facilitating vice and organized crime. The platforms that I discussed are required under our new laws to establish content rules and employ content moderation to filter out such content, and especially to protect children from accessing it. Users are also empowered to report the availability of such content to the authorities, and these websites are required to publish annual online safety reports to show that they have complied with the law. The COVID incident at the turn of the decade gave rise to a new phenomenon called online falsehoods, and to deal with this problem we had to enact a particular piece of legislation. It applies where there are online falsehoods containing false statements of fact or misleading information that have a tendency, from a public interest perspective, to affect public health, safety, tranquility, public confidence, or public finances, or that bear on preventing ill will between different groups, preventing influence on elections, the security of Singapore, and relations with other countries. This piece of legislation, called the Protection from Online Falsehoods and Manipulation Act of 2019, or POFMA in short, is designed to allow the government to respond by issuing what are known as correction notices to counteract these posts. Companies that host these posts are required to also post the Singapore government's correction notices to counteract the falsehoods contained in them.
The advantages of such a mechanism are that, because we do not go through the courts but through a government system, the response of the government can be very fast, sometimes within a matter of hours or days, and it is designed to combat serious falsehoods, such as falsehoods arising from the COVID-19 incident and various other falsehoods claimed against government ministers, institutions, and policies. You can find the vast majority of these correction notices targeted at Facebook content posted by users. In another recent initiative in this regard, we have enacted two additional pieces of law to deal with the issue of harassment and online crimes. I'll focus on the Online Criminal Harms Act of 2023, where, in essence, we enacted this new piece of law to deal with online child sex exploitation, job scams, investment scams, product scams, and phishing attempts. Under this new law, directions can be issued by the government authorities to various online service providers where there is a reasonable suspicion that some kind of online activity is being perpetrated on the online service provider's services or content in furtherance of the commission of an offense. So, for instance, if there is a webpage that is used for phishing or other types of scams, the authorities can issue orders to the online service providers to block the content, to disable the content, or to prevent the content from being accessed by people in Singapore. Last but not least, on the issue of personal information and cybersecurity, we have, like many other countries in the world, a Personal Data Protection Act that seeks to regulate the collection, use, and disclosure of personal data by organizations, and it recognizes the rights of individuals to protect their personal data, especially in the online environment.
The act also regulates the cross-border data flows that pertain to Singapore as a trading hub and the exchange of data between Singapore and other countries in the world. We also have a Cybersecurity Act that is designed to preemptively prevent, manage, and respond to cybersecurity threats and incidents, in part by regulating owners of critical information infrastructure. So, in conclusion, Singapore's experience in this regard has largely been the result of starting with a minimal platform for regulating content online by way of prohibited material; but as online platforms developed and evolved into communications platforms enabling private individuals to communicate, we had to expand our concept of regulation to include harmful and inappropriate content, such as content that promotes suicide and self-harm or cyberbullying, to ensure that our people are protected. At the same time, we have had to update our laws to deal with online falsehoods and to protect personal data and cybersecurity. But amongst all these regulations that are put in place, it is still very important for the government to put in place public education measures to educate the public on both the advantages and disadvantages of accessing information online, and to teach the population to be discerning in its proper use of information. So on that note, thank you very much, and I look forward to the opportunity to hear from the other speakers and to learn about developments around the world. Thank you.

Tang Lei: Thank you, Professor Seng, for your wonderful words. Now I give the floor to the next speaker, Mr. He Bo, Director of the Research Center for Internet Law, China Academy of Information and Communications Technology.

He Bo: Thank you. Good afternoon, everyone. I'm He Bo from the China Academy of Information and Communications Technology. Thanks to the Cyberspace Administration of China for the invitation; I'm delighted to participate in today's discussion. Just now, several professors mentioned that we should identify the specific issues of data governance. So today, I would like to discuss the design of a legal system for data governance in the era of artificial intelligence. As we know, with the fast development and wide application of AI technology, data has become the most important factor of production. As we enter the era of the large language model, or LLM, the development of AI is shifting from being model-centric to being data-centric, and data resources have become the most core and fundamental element in the development of AI. In order to promote the healthy development of AI, it is particularly important to build a more suitable legal system. However, the breakthrough development of AI technology, as we know with AIGC, has created a huge demand for high-quality data, but the existing data governance rules and regulations have not been adjusted in time, leading to issues such as data being unusable or insufficient, which restrict the development of AI technology and industry. Facing the development needs of the new generation of AI, we should promote the adjustment and improvement of the relevant legal rules. Firstly, it is necessary to improve data security rules to resolve the problem of data being unusable. During the fast development of AI, the legal use of data has become an issue that urgently needs to be resolved. As the professor from Singapore also mentioned just now, many laws and regulations around the world have made clear provisions regarding data security protection, data collection, and data use. For instance, many laws prohibit any individuals or organizations from engaging in activities that endanger cybersecurity, such as stealing data online.
The GDPR of the EU, among others, has also clearly defined the legal bases for processing personal data. However, with the rapid development of AI technology, issues such as the lawful use of publicly available personal information have become very important questions, but the relevant rules have not been adjusted in step, which means LLMs may face legal problems with using data. For example, it is still unclear whether using publicly available personal information as training data for LLMs is legal or not. Therefore, it is recommended to further improve the system for the reasonable use of data: clearly clarify whether it is legal to use public personal information as training data, and formulate rules, standards, and guidelines for personal information protection issues at different stages, such as the training, generation, and application of large language models. Secondly, it is essential to establish comprehensive rules for data sharing and circulation, to resolve the problem of the supply of data being insufficient. Only when data circulates can value be created, and circulation is also an important way for LLM companies to obtain data. Data sharing and circulation are the key to unlocking the value of data. However, at present we can still see problems in aspects such as data sharing, data trading, and data openness. For example, there is a lack of effective incentive mechanisms for data sharing among companies, which limits the ability of small companies to access data. Many leading AI companies are also traditional large internet companies or big platform companies. They have a large amount of data resources from their existing internet services, and they use their own data to train models, thereby forming a competitive advantage in their development. But some AI companies restrict other, smaller companies from accessing and using their data, which may become a barrier for start-ups and small companies.
To solve this issue of insufficient data resources, it is recommended to strengthen policy guidance and to encourage and support leading companies to open and share their valuable data. Thirdly, it is necessary to improve data quality management rules to resolve the problem of data being inefficient. Data quality directly determines the development level of AI, and high-quality data is the core of improving the accuracy, stability, and interoperability of models. High-quality data sets can help large AI models gain a deeper understanding of different concepts, semantics, and grammatical structures, which can significantly enhance the value of large models. Currently, the requirements for data quality management mainly appear in industry self-regulatory norms, while the relevant laws and regulations have not made much progress on data quality. To some extent, this has affected the quality and efficiency of the training of large language models. Therefore, it is recommended to build and improve the rules for the quality management of training data, formulate data quality management standards, and refine the specific requirements for training data in terms of accuracy, objectivity, and diversity. That’s all for my speech. Thank you, and have a nice day.

Tang Lei: Thank you, Mr. He. Now, let’s turn to Mr. Zhao Jingwu, associate professor at the law school of Beihang University. Please.

Zhao Jingwu: Good afternoon, distinguished guests. It’s my honor to have the opportunity to give a short speech here. I’m Zhao Jingwu from Beihang University. What I would like to talk about today is a small question: how to ensure the security of cross-border data flows through legal instruments. Cross-border data flow is not just a matter of domestic data security regulation and commercial utilization; it is also a complex issue that affects the development of the global digital economy. In recent years, we can see that more and more countries, regions, and international organizations, including China, have tried to explore a safe and trustworthy model for cross-border data flows through domestic legislation. However, at the same time, there are also many controversies that need to be resolved urgently. In this context, China has been developing a governance path that promotes cross-border data flows. However, there is a misleading view in international governance activities, which is to encourage cross-border data flows without restriction. Perhaps the original intention was to achieve broader and more efficient data flows, but the key problem is the failure to understand the relationship between security and data flows. It is worth mentioning that, in Article 1 of China’s Data Security Law, the governing idea is to ensure data security while promoting data development and utilization. In summary, it means that we should pay equal attention to safety and utilization. So we believe that pursuing cross-border data flows without paying attention to data security will not only fail to realize the exchange value of data, but will also bring security risks such as data leakage and theft. In the international community, there is also a view that China follows a path of data control; this view essentially politicizes the issue of data security. 
That is because we don’t have a unified standard for international cross-border data flows around the world, and multinational corporations always have to comply with different domestic laws and international agreements. There is no denying that data security and personal privacy are generally recognized as the first and primary premise for cross-border data flows. Furthermore, across the globe, there is no country that allows cross-border data flows without any conditions. In most countries, domestic law puts data security or national security in the first and most important place. What I want to emphasize is that China’s insistence on an open and comprehensive governance model for cross-border data flows is not empty words. China’s domestic law has clearly defined four categories of rules for cross-border data flows, which include security assessment of outbound data transfers, standard contracts for the cross-border transfer of personal information, third-party security certification, and special rules for personal information and data in specific areas. All of these rules are supported by comprehensive laws and regulations. In March 2024, China issued the Provisions on Promoting and Regulating Cross-Border Data Flows. The purpose is to further clarify the applicable rules in the process of cross-border data flows and to promote the orderly and free flow of data. Many scenarios were considered in this regulation, such as international trade, cross-border transportation, academic cooperation, and other areas like manufacturing. China also released the Global Cross-Border Data Flow Cooperation Initiative this year. This document clarifies China’s position and offers some useful solutions to these kinds of problems. 
So, in order to truly and efficiently resolve institutional conflicts, we need to enhance the trust and confidence of multiple parties in carrying out international data cooperation. China’s supervision system for cross-border data flows is not empty words, and it is not simply meant to restrict data export; we can see that it also tries to better protect and promote data export. China’s legislation has established diverse channels for cross-border data flows, as I mentioned before, catering to the market demands of various industries and enterprises. Finally, I hope we can reach a consensus that, in the future, the governance of cross-border data flows can neither ignore data security nor set too many restrictions in the name of security. Security and utilization are both important parts of China’s data governance system. I believe that in the future we will find a wise and workable approach to solve the problem of cross-border data flows. That is all I want to say. Thank you.

Tang Lei: Thank you, Professor Zhao, for your wonderful insights. Now I give the floor to the next speaker, Mr. Gao Huandong, Vice President of Lenovo Group. Please.

Gao Huandong: Thank you, Professor Wu. Distinguished fellow speakers, guests, ladies and gentlemen, good afternoon. I’m Gao Huandong from Lenovo Group. It’s my great pleasure and honor to participate in this forum on the theme of the legalization of data governance. The year 2024 marks the 10th anniversary of Chinese President Xi’s proposal of the strategy of building China into a great cyber state. Looking back over the past decade, the internet and the digital economy have flourished and become an important engine of social progress and economic development, both in China and around the world. We have realized that high-quality data governance is essential to the high-quality development of the digital economy, and compliance is the foundation of data governance. Therefore, Lenovo Group has made tremendous efforts in data security and privacy protection over the last few years. Today, I would like to quickly share Lenovo’s practice in data security and privacy protection in China for further discussion with the experts present today. Lenovo China’s data security and privacy governance framework basically consists of five building blocks: first, governance structure; second, process and guidelines; third, key work streams; fourth, how we use technology to safeguard data security and privacy; and fifth, cultural awareness and internal education. This framework is in line with Lenovo’s strategy, Smart AI for All, and it reflects our mission in data security and privacy governance: security for the future. Our first building block is the governance structure. Why? Simply because, for a company like ours with more than 1,000 employees, data security and privacy protection is an enormous and complex project. How do we deal with this resource issue? We set up the Lenovo China Data Security and Privacy Protection Committee as a virtual team at the end of 2021, under the leadership of the Lenovo China Security Committee. 
This virtual team has three standing sub-committees, namely the data security team, the privacy compliance team, and the data cross-border transfer team, for collective decision-making, plus an ad hoc emergency response team, forming a so-called three-plus-one structure. The committee has adopted its bylaws, and all of our operational activities are strictly in compliance with them; the committee normally holds a bi-monthly meeting to discuss important issues and drive collective efforts on critical projects. Key to the success of this committee is cross-functional collaboration. Up to now, the committee has more than 200 representatives, or focal points, serving as ambassadors of more than 40 internal business units and functions. The committee has also trained more than 100 data compliance specialists as the frontline compliance team, and it has been coordinating and working closely with the China ESG Committee and the AI Compliance Committee, both of which I am also driving in China. Let’s move to the second building block, process and guidelines. Based on national legislation, administrative regulations, and a number of group policies, our committee has drafted and issued more than 40 guidelines and playbooks covering eight practical areas, for example, data cross-border transfer, data categorization and classification, and AI compliance. The guidelines and playbooks help us implement detailed rules in routine data security and privacy governance on a daily basis. Our third building block is the five work streams of the committee, which we normally roll out annually. The first three were initiated at the outset and are being deep-dived continuously. Among these three work streams, data inventory mapping is the cornerstone, and data cross-border transfer is the key challenge. Privacy protection is always our top priority. At the same time, we are exploring and navigating two new work streams. 
One is AI data security governance, and the other is how to utilize data as an asset. These two issues are also very hot topics in this forum. The fourth building block of our data security and privacy governance is how to use technology to safeguard data and privacy efficiently. Let me take just one example: the AI guardrail tool developed by Lenovo. This picture illustrates how the tool works in a large language model scenario. In the enterprise privacy domain, the guardrail can identify more than 17 types of personally identifiable information and sensitive personal information in order to block them from being input into the large language model and protect our customers’ privacy. The tool can also identify and reject data on the basis of self-defined keywords. Last but not least is culture awareness and internal education. Lenovo has established a training program for data compliance specialists, all of whom come from the business teams. The training program is a closed loop with four steps. First, the committee trains more than 100 specialists by providing a series of professional courses on data security governance and the five work streams of the committee. Then the specialists practice in their daily work the relevant rules they learned in step one. Next, the committee randomly checks and inspects what they have implemented in practice and provides suggestions for improvement where needed. The last step is to recognize and reward those specialists who contribute significantly to the committee. What I have just described is the primary governance framework and practice of Lenovo China’s data security and privacy protection. Again, as a China-based multinational company and a leading technology company, we are very honored to have this opportunity to share our thoughts and practice on data security and privacy governance, and our AI for All strategy. 
Companies’ internal data governance should be strictly in compliance with the rule of law, both in China and in other jurisdictions, to ensure that the products and services we provide are secure, reliable, and trustworthy. We have been learning from advanced international data governance experiences and best practices on the one hand; on the other hand, we will continue striving for a more rule-of-law approach, thereby contributing to the healthy and sustainable development of the global digital economy. That’s all, and thank you for listening.

Tang Lei: Thank you very much, President Gao. Now, let’s turn to Mr. Li Wen, Vice President of ZTE Corporation. Please.

Li Wen: Thank you, Professor Wu. Distinguished guests and experts, good afternoon. I am Li Wen from ZTE Corporation. It is a great pleasure to share with you the practice and exploration of ZTE’s data compliance governance at this meeting. Thank you for your trust and support. In the last few years, new generations of information technology, such as cloud computing, big data, and AI, have promoted and integrated with each other, fields like smart cities, smart transportation, and smart medical care have developed rapidly, and human society is moving towards a better future of digital intelligence. As a globally leading provider of comprehensive communication solutions since 1985, serving customers in more than 160 countries and regions worldwide, ZTE will always adhere to its vision: to enable connectivity and trust everywhere. In 2016, ZTE launched a comprehensive digital intelligence transformation from process-driven to data-driven, and we are dedicated to building an automated, cloud-first company with fully cloud-based, intelligent, and lightweight workflows. Releasing the value of data is a key element of corporate innovation, development, and management in the digital intelligence era, and the premise is to ensure data security and compliance. After the GDPR took effect in the EU in 2018, more than 160 countries or regions in the world formulated data protection laws and regulations, and China has also established a data protection legal framework with three laws at its core: the Cybersecurity Law, the Data Security Law, and the Personal Information Protection Law. Against this background, ZTE attaches great importance to data security and privacy protection, strictly abides by the relevant laws and regulations, continuously implements data compliance governance, and is devoted to forming a virtuous circle of creating value through compliance. 
The construction of ZTE’s data compliance governance system follows a risk-oriented methodology. It is based on management commitment, staffing, and organizational structure, and is carried out in six aspects, including the establishment of systems and rules, risk monitoring, risk assessment, personnel training, security audits, and incident response, to ensure the data compliance of business activities. For example, as we know, cross-border data transfer is a high-risk data processing activity. ZTE was among the first batch of companies in China to apply for a data export security assessment in accordance with the relevant national requirements, and early this year, in 2024, we obtained approval from the Cyberspace Administration of China for the export of the declared personal information. ZTE has conducted a special audit on the implementation of its signed data transfer contracts to evaluate and verify the effectiveness of the company’s cross-border data compliance program. In order to further improve the efficiency and quality of data compliance management, ZTE developed a data compliance system which integrates the Privacy Center app, privacy compliance scanning, data protection impact assessment, data leakage response, and so on. This system can display the status of data compliance management dynamically and in real time, realizing the digitization of data compliance management and providing a practical basis for the digital intelligence transformation of the company’s compliance management. ZTE continues to pay attention to authoritative certifications for data security and personal information protection. Currently, all related products and services have been certified under the ISO/IEC 27701:2019 privacy information management system, the European ePrivacy certification, and the U.S. 
Trusted certification, which indicates that ZTE has reached an internationally advanced level in privacy protection technology and management capability and is able to help global customers enter the era of digital intelligence with greater confidence. In recent years, China has stepped up its efforts to cultivate and build a market for data elements, which places high requirements on the data compliance management of enterprises; it is necessary for enterprises to dynamically adjust the strategy and focus of data compliance management according to changes in legislation, policies, and business development. On the one hand, enterprises still need to continuously improve their data compliance management systems. On the other hand, they also need to pay more attention to the new data compliance challenges of the digital intelligence era, such as AI training data security compliance, algorithm governance, data transaction compliance, data competition compliance, and so on. 2024 is a year of deepening development for China’s digital economy, and also a year in which ZTE’s data compliance governance has moved forward steadily. In the coming year, 2025, ZTE will stick to its role as a driver of the digital economy, fully promote the deep integration of digital intelligence business development and data compliance, and contribute wisdom to a better digital intelligence society. That is all my sharing for today. Thank you for listening.

Tang Lei: There will be more opportunities and more chances to deepen our insight sharing in the next steps, not only on the legal system but also on the so-called legal ecology. So thank you all, and we will stop here today. See you next time. Thank you.

Tang Lei

Speech speed

99 words per minute

Speech length

776 words

Speech time

467 seconds

China’s data governance framework emphasizes security and development

Explanation

Tang Lei outlines China’s approach to data governance, which focuses on both security and development. This framework is implemented through various laws and regulations.

Evidence

China has enacted the cybersecurity law, data security law, and personal information protection law, providing basic provisions for data security and personal information protection systems.

Major Discussion Point

Data Governance Approaches and Challenges

Agreed with

Wolfgang Kleinwächter

Shi Jainzhong

Jose Roberto de Andrade Filho

Daniel Seng

He Bo

Zhao Jingwu

Gao Huandong

Li Wen

Agreed on

Importance of data governance in the digital age

Differed with

Daniel Seng

Differed on

Approach to data governance

Wolfgang Kleinwächter

Speech speed

140 words per minute

Speech length

1082 words

Speech time

461 seconds

Multi-stakeholder collaboration is key for internet governance

Explanation

Kleinwächter emphasizes the importance of collaboration among all stakeholders in their respective roles for effective internet governance. He argues that no single entity can manage all aspects of internet governance alone.

Evidence

He cites the outcome of the World Summit on the Information Society 20 years ago as an example of this collaborative approach.

Major Discussion Point

Data Governance Approaches and Challenges

Agreed with

Jose Roberto de Andrade Filho

Zhao Jingwu

Agreed on

Need for collaboration in data governance

New technologies require updates to existing data governance rules

Explanation

Kleinwächter points out that the rapid advancement of new technologies necessitates updates to existing data governance rules. He suggests that the current governance landscape is complex and evolving.

Evidence

He mentions various forms of governance such as data governance, internet governance, cyber governance, AI governance, and ICT governance.

Major Discussion Point

Emerging Challenges in Data Governance

Agreed with

Tang Lei

Shi Jainzhong

Jose Roberto de Andrade Filho

Daniel Seng

He Bo

Zhao Jingwu

Gao Huandong

Li Wen

Agreed on

Importance of data governance in the digital age

Shi Jainzhong

Speech speed

98 words per minute

Speech length

1056 words

Speech time

642 seconds

China has enacted cybersecurity, data security and personal information protection laws

Explanation

Shi Jainzhong discusses China’s legal framework for data governance. He highlights the enactment of key laws to address various aspects of data security and protection.

Evidence

He specifically mentions the cybersecurity law, data security law, and personal information protection law as core components of China’s data protection legal framework.

Major Discussion Point

Legal Frameworks for Data Governance

José Roberto de Andrade Filho

Speech speed

112 words per minute

Speech length

1528 words

Speech time

815 seconds

Brazil is developing data protection laws and AI regulations

Explanation

Jose Roberto de Andrade Filho outlines Brazil’s efforts in developing comprehensive data protection laws and AI regulations. He emphasizes the country’s focus on creating a robust legal framework for data governance.

Evidence

He mentions the recent approval of an AI law project in the Senate and the ongoing development of a national data economy policy.

Major Discussion Point

Data Governance Approaches and Challenges

Agreed with

Tang Lei

Wolfgang Kleinwächter

Shi Jainzhong

Daniel Seng

He Bo

Zhao Jingwu

Gao Huandong

Li Wen

Agreed on

Importance of data governance in the digital age

Brazil’s data protection authority is consolidating implementation of data laws

Explanation

Andrade Filho discusses the progress of Brazil’s National Data Protection Authority (ANPD) in implementing data protection laws. He highlights the authority’s role in providing guidance and recommendations to data actors.

Evidence

He mentions that ANPD has been operational for four years and has published an extensive report on its activities.

Major Discussion Point

Legal Frameworks for Data Governance

Agreed with

Wolfgang Kleinwächter

Zhao Jingwu

Agreed on

Need for collaboration in data governance

Daniel Seng

Speech speed

132 words per minute

Speech length

1449 words

Speech time

654 seconds

Singapore uses a light-touch approach to content regulation

Explanation

Daniel Seng describes Singapore’s approach to internet content regulation as minimal and light-touch. He explains that while Singapore is an open society, it maintains some level of content regulation to ensure stability in its multicultural society.

Evidence

He mentions that internet service providers only restrict access to about 200 mass impact websites, mostly pornographic in nature.

Major Discussion Point

Data Governance Approaches and Challenges

Differed with

Tang Lei

Differed on

Approach to data governance

Singapore has laws addressing online safety, falsehoods and criminal harms

Explanation

Seng outlines Singapore’s legal framework for addressing various online issues. He discusses recent laws enacted to deal with harmful content, online falsehoods, and online criminal activities.

Evidence

He mentions the Online Safety Miscellaneous Amendments Act of 2022, the Protection from Online Falsehoods and Manipulation Act of 2019, and the Online Criminal Harms Act of 2023.

Major Discussion Point

Legal Frameworks for Data Governance

Agreed with

Tang Lei

Wolfgang Kleinwächter

Shi Jainzhong

Jose Roberto de Andrade Filho

He Bo

Zhao Jingwu

Gao Huandong

Li Wen

Agreed on

Importance of data governance in the digital age

He Bo

Speech speed

127 words per minute

Speech length

837 words

Speech time

395 seconds

Data quality management is crucial for AI development

Explanation

He Bo emphasizes the importance of data quality management in AI development. He argues that high-quality data is essential for improving the accuracy, stability, and interoperability of AI models.

Evidence

He suggests building and improving rules for the quality management of training data and formulating data quality management standards.

Major Discussion Point

Data Governance Approaches and Challenges

Agreed with

Tang Lei

Wolfgang Kleinwächter

Shi Jainzhong

Jose Roberto de Andrade Filho

Daniel Seng

Zhao Jingwu

Gao Huandong

Li Wen

Agreed on

Importance of data governance in the digital age

AI development is shifting from model-centric to data-centric approaches

Explanation

He Bo discusses the shift in AI development from being model-centric to data-centric. He argues that data resources have become the most core and fundamental elements in AI development, particularly with the advent of large language models.

Evidence

He mentions the emergence of AIGC (AI-generated content) and the increasing demand for high-quality data in AI development.

Major Discussion Point

Emerging Challenges in Data Governance

Zhao Jingwu

Speech speed

139 words per minute

Speech length

826 words

Speech time

354 seconds

Cross-border data flows require balancing security and utilization

Explanation

Zhao Jingwu discusses the need to balance data security and utilization in cross-border data flows. He argues that China’s approach to cross-border data flows emphasizes both security and development.

Evidence

He mentions China’s regulation on promoting and regulating cross-border data flows issued in March 2024.

Major Discussion Point

Data Governance Approaches and Challenges

Agreed with

Tang Lei

Wolfgang Kleinwächter

Shi Jainzhong

Jose Roberto de Andrade Filho

Daniel Seng

He Bo

Gao Huandong

Li Wen

Agreed on

Importance of data governance in the digital age

Cross-border data flows pose complex regulatory challenges

Explanation

Zhao Jingwu highlights the complexity of regulating cross-border data flows. He argues that there is no unified standard for international cross-border data flows, leading to challenges for multinational corporations.

Evidence

He mentions that most countries prioritize data security or national security in their domestic laws regarding cross-border data flows.

Major Discussion Point

Emerging Challenges in Data Governance

Agreed with

Wolfgang Kleinwächter

Jose Roberto de Andrade Filho

Agreed on

Need for collaboration in data governance

Gao Huandong

Speech speed

127 words per minute

Speech length

1010 words

Speech time

476 seconds

Lenovo has implemented a comprehensive data security and privacy framework

Explanation

Gao Huandong outlines Lenovo’s data security and privacy governance framework. He explains that the framework consists of five building blocks to ensure comprehensive data protection and compliance.

Evidence

He describes the five building blocks: governance structure, process and guidelines, key work streams, technology safeguards, and cultural awareness and internal education.

Major Discussion Point

Corporate Data Governance Practices

Agreed with

Tang Lei

Wolfgang Kleinwächter

Shi Jainzhong

Jose Roberto de Andrade Filho

Daniel Seng

He Bo

Zhao Jingwu

Li Wen

Agreed on

Importance of data governance in the digital age

Li Wen

Speech speed

103 words per minute

Speech length

779 words

Speech time

450 seconds

ZTE has developed a data compliance system to improve efficiency

Explanation

Li Wen discusses ZTE’s efforts in developing a data compliance system. He explains that this system integrates various data protection and privacy management tools to improve efficiency and quality of data compliance management.

Evidence

He mentions that the system includes features such as Privacy Center APP, Privacy Compliance Scanning, Data Protection Impact Assessment, and Data Leakage Response.

Major Discussion Point

Corporate Data Governance Practices

Agreed with

Tang Lei

Wolfgang Kleinwächter

Shi Jainzhong

Jose Roberto de Andrade Filho

Daniel Seng

He Bo

Zhao Jingwu

Gao Huandong

Agreed on

Importance of data governance in the digital age

Agreements

Agreement Points

Importance of data governance in the digital age

Tang Lei

Wolfgang Kleinwächter

Shi Jainzhong

Jose Roberto de Andrade Filho

Daniel Seng

He Bo

Zhao Jingwu

Gao Huandong

Li Wen

China’s data governance framework emphasizes security and development

New technologies require updates to existing data governance rules

Brazil is developing data protection laws and AI regulations

Singapore has laws addressing online safety, falsehoods and criminal harms

Data quality management is crucial for AI development

Cross-border data flows require balancing security and utilization

Lenovo has implemented a comprehensive data security and privacy framework

ZTE has developed a data compliance system to improve efficiency

All speakers emphasized the importance of data governance in the digital age, highlighting the need for comprehensive legal frameworks, balancing security and development, and addressing emerging challenges.

Need for collaboration in data governance

Wolfgang Kleinwächter

Jose Roberto de Andrade Filho

Zhao Jingwu

Multi-stakeholder collaboration is key for internet governance

Brazil’s data protection authority is consolidating implementation of data laws

Cross-border data flows pose complex regulatory challenges

These speakers emphasized the importance of collaboration among various stakeholders, including governments, private sector, and civil society, in addressing data governance challenges.

Similar Viewpoints

Both speakers highlighted China’s comprehensive approach to data governance, emphasizing the enactment of key laws to address various aspects of data security and protection.

Tang Lei

Shi Jainzhong

China’s data governance framework emphasizes security and development

China has enacted cybersecurity, data security and personal information protection laws

Both speakers from major Chinese technology companies discussed their organizations’ efforts to implement comprehensive data governance and compliance systems.

Gao Huandong

Li Wen

Lenovo has implemented a comprehensive data security and privacy framework

ZTE has developed a data compliance system to improve efficiency

Unexpected Consensus

Balancing data utilization and security

Tang Lei

Zhao Jingwu

He Bo

China’s data governance framework emphasizes security and development

Cross-border data flows require balancing security and utilization

Data quality management is crucial for AI development

Despite representing different sectors (government, academia, and industry), these speakers all emphasized the need to balance data utilization for development with ensuring data security, showing a surprising alignment across different stakeholders.

Overall Assessment

Summary

The speakers generally agreed on the importance of comprehensive data governance frameworks, the need to balance security and development, and the challenges posed by emerging technologies and cross-border data flows.

Consensus level

There was a high level of consensus among the speakers on the fundamental importance of data governance. This consensus suggests a growing recognition of the critical role of data in the digital economy and the need for robust governance frameworks. However, specific approaches and priorities varied somewhat between different countries and organizations, indicating that while there is agreement on the importance of the issue, there may still be divergence in implementation strategies.

Differences

Different Viewpoints

Approach to data governance

Tang Lei

Daniel Seng

China’s data governance framework emphasizes security and development

Singapore uses a light-touch approach to content regulation

While China emphasizes a comprehensive framework balancing security and development, Singapore adopts a more minimal, light-touch approach to content regulation.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement revolve around the specific approaches to data governance, with variations in emphasis on security, development, and regulation across different countries.

Difference level

The level of disagreement among the speakers appears to be moderate. While there are differences in approaches and emphases, there is a general consensus on the importance of data governance and the need to address emerging challenges. These differences reflect the diverse regulatory landscapes and priorities of different countries, which may complicate efforts to establish global standards for data governance.

Partial Agreements

Both speakers agree on the importance of collaboration and balance in data governance, but Kleinwächter emphasizes multi-stakeholder involvement, while Zhao focuses specifically on balancing security and utilization in cross-border data flows.

Wolfgang Kleinwächter

Zhao Jingwu

Multi-stakeholder collaboration is key for internet governance

Cross-border data flows require balancing security and utilization

Takeaways

Key Takeaways

Data governance requires balancing security and development/utilization

Multi-stakeholder collaboration is crucial for effective internet and data governance

Countries are developing and refining legal frameworks for data protection, AI regulation, and cross-border data flows

Corporate data governance practices are becoming more comprehensive, with frameworks addressing security, privacy, and compliance

Emerging technologies like AI are creating new challenges for data governance that require updates to existing rules

Cross-border data flows pose complex regulatory challenges that require international cooperation

Resolutions and Action Items

None identified

Unresolved Issues

How to effectively regulate AI training data and algorithms

Balancing data security requirements with the need for cross-border data flows

Addressing potential monopolization of data by large tech companies

Harmonizing different national approaches to data governance internationally

Suggested Compromises

Light-touch content regulation combined with self-regulation by internet companies

Balancing data security requirements with mechanisms to promote data sharing and circulation

Developing flexible AI regulations that can adapt to rapidly changing technology

Thought Provoking Comments

Governance in the digital age means you have to have a specific solution, governance model, for each of the specific issues. There is no one-size-fits-all that you say this is the governance for the data or this is the governance for AI. You have to identify the problem. What do you want to govern? What is the subject? And then to build the governance model around the system.

speaker

Wolfgang Kleinwächter

reason

This comment challenges the notion of a universal governance model and emphasizes the need for tailored approaches to different digital issues. It’s insightful because it recognizes the complexity and diversity of digital governance challenges.

impact

This comment shifted the discussion towards a more nuanced understanding of governance in the digital age, encouraging participants to consider specific solutions for different aspects of digital technology rather than seeking a one-size-fits-all approach.

In the digital age, the social relations are also being digitalized and datafied, such as between individuals, between firms, between individuals and firms, between individuals and governments, between firms and government departments, and among the government departments at center.

speaker

Shi Jianzhong

reason

This comment provides a comprehensive view of how digitalization is transforming various social relationships. It’s thought-provoking because it highlights the pervasive impact of digital technology on all aspects of society.

impact

This observation broadened the scope of the discussion, encouraging participants to consider the wide-ranging implications of digitalization on social structures and relationships, beyond just technical or legal aspects.

We have to discuss a lot of, I would say, elements that will serve as base for this data economy development. In China, for example, you have Shanghai Data Exchange and the local data exchanges all over. That is a fantastic example of how data assets can generate value even being, as I saw in Shanghai, data can be declared in the balance sheet of companies.

speaker

Jose Roberto de Andrade Filho

reason

This comment introduces the concept of data as an economic asset and provides a concrete example of how this is being implemented in China. It’s insightful because it bridges theoretical discussions about data governance with practical economic applications.

impact

This comment shifted the discussion towards more practical considerations of data governance, particularly in terms of economic value and business practices. It encouraged participants to think about the tangible impacts of data policies on economic development.

To solve this issue of insufficient data resources, it is recommended to strengthen the policy guidance, and to encourage and support leading companies to open and share their valuable data.

speaker

He Bo

reason

This comment addresses a key challenge in AI development – access to data – and proposes a solution that involves collaboration between large and small companies. It’s thought-provoking because it suggests a shift in how data is viewed and shared in the AI industry.

impact

This comment introduced a new perspective on data sharing and collaboration in the AI industry, encouraging participants to consider policy solutions that could foster innovation while addressing data access inequalities.

Overall Assessment

These key comments shaped the discussion by broadening its scope from purely legal or technical considerations to encompass social, economic, and practical aspects of data governance. They encouraged a more nuanced and multifaceted approach to understanding the challenges and opportunities of the digital age, emphasizing the need for tailored solutions, consideration of social impacts, economic potential of data, and collaborative approaches to data sharing. The discussion evolved from theoretical frameworks to more concrete examples and practical policy considerations, reflecting the complex and rapidly evolving nature of data governance in the digital era.

Follow-up Questions

How can we strike a balance between high-quality development and high-level security in data governance?

speaker

Tang Lei

explanation

This is important as it addresses the core challenge of promoting data utilization while ensuring data security, which is crucial for sustainable development in the digital age.

How can we improve the legal system for data governance to address new challenges brought by emerging technologies?

speaker

Tang Lei

explanation

This is crucial for ensuring that legal frameworks keep pace with rapid technological advancements, particularly in areas like AI and big data.

How can we strengthen international exchanges and cooperation on the legalization of data governance?

speaker

Tang Lei

explanation

This is important for creating a globally coordinated approach to data governance, which is essential in an interconnected digital world.

How can we define and differentiate between various types of governance (data, internet, AI, cyber, etc.) in the digital age?

speaker

Wolfgang Kleinwächter

explanation

This is important for clarifying the scope and boundaries of different governance areas, which can help in developing more targeted and effective policies.

How can we manage the contradiction between the need for universal norms and the existence of 193 national jurisdictions in data governance?

speaker

Wolfgang Kleinwächter

explanation

This is crucial for addressing the challenge of creating global standards while respecting national sovereignty in the digital realm.

How can we configure rights related to data in the digital age?

speaker

Shi Jianzhong

explanation

This is important for establishing clear legal frameworks around data ownership, use, and protection in the evolving digital landscape.

How can we ensure fair competition in the digital economy, particularly regarding the use of data and algorithms?

speaker

Shi Jianzhong

explanation

This is crucial for preventing monopolistic behaviors and ensuring a level playing field in the data-driven economy.

How can we develop a national data economy policy that balances economic competitiveness with people-centered development?

speaker

Jose Roberto de Andrade Filho

explanation

This is important for harnessing the economic potential of data while ensuring that the benefits are distributed equitably and align with societal values.

How can we improve mechanisms for data sharing among companies, particularly to benefit smaller companies and startups?

speaker

He Bo

explanation

This is crucial for fostering innovation and preventing data monopolies in the AI and tech industries.

How can we establish comprehensive rules for data quality management, particularly for AI training data?

speaker

He Bo

explanation

This is important for ensuring the accuracy, objectivity, and diversity of data used in AI development, which directly impacts the quality and fairness of AI systems.

How can we ensure the security of cross-border data flows while promoting necessary data exchange for global digital economy development?

speaker

Zhao Jingwu

explanation

This is crucial for balancing national security concerns with the need for international data flows in an increasingly interconnected global economy.

How can companies effectively implement data compliance governance systems that adapt to rapidly changing legislation and business environments?

speaker

Li Wen

explanation

This is important for ensuring that businesses can maintain compliance while remaining agile in a fast-evolving regulatory landscape.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Day 0 Event #35 Empowering consumers towards secure by design ICTs

Session at a Glance

Summary

This discussion focused on the Internet Standards Security and Safety Coalition (IS3C) and its efforts to promote a more secure and safer internet. The session began with an overview of IS3C’s work, including reports on IoT security by design, education and skills, and government procurement of secure ICT. Janice Richardson presented the concept of a “hub” for cybersecurity collaboration, emphasizing the need for education and diversity in the field.

Bastiaan Goslings discussed IS3C’s report on the deployment of DNSSEC and RPKI standards, highlighting the importance of these technologies for internet security. The panel on consumer protection featured representatives from Lithuania and Singapore, who shared their countries’ approaches to internet safety and regulation. They emphasized the need for international cooperation and a balance between regulation and industry incentives.

The discussion then turned to IS3C’s future plans, including a new project on IoT security and post-quantum cryptography in collaboration with AFNIC. This project aims to examine the societal impacts of IoT and the challenges posed by quantum computing to current security measures. The speakers stressed the importance of addressing these emerging technologies and their potential consequences.

Finally, the session concluded with an update on IS3C’s organizational development, including plans to become an Internet Society special interest group and potentially establish itself as a non-profit foundation. These changes aim to expand IS3C’s reach and funding opportunities while maintaining its role as a dynamic coalition within the IGF structure.

Keypoints

Major discussion points:

– The Internet Standards Security and Safety Coalition (IS3C) is working on initiatives to improve internet security and safety, including IoT security, education/skills, and government procurement practices

– IS3C is planning to create a “hub” to bring together experts and stakeholders to collaborate on cybersecurity solutions

– International cooperation is crucial for addressing cross-border cyber threats and creating harmonized security standards

– Consumer protection and empowerment is an important focus, including through security labeling schemes and regulations

– IS3C is launching a new project on the societal impacts of IoT and post-quantum cryptography

Overall purpose:

The discussion aimed to provide an overview of IS3C’s work and future plans to improve internet security and safety through various initiatives, research, and stakeholder collaboration.

Tone:

The tone was informative and optimistic throughout, with speakers enthusiastically describing ongoing and planned efforts to address cybersecurity challenges. There was a sense of urgency about the need for action, but also confidence that progress is being made through collaboration and new initiatives.

Speakers

– WOUT DE NATRIS: Moderator, Coordinator of the Internet Governance Forum dynamic coalition on Internet Standards Security and Safety (IS3C)

– JANICE RICHARDSON: CEO of Insight, IS3C Working Group 2 Chair on Education and Skills

– BASTIAAN GOSLINGS: Works for the .nl registry SIDN, former member of IS3C Working Group 8

– STEVEN TAN: Assistant Director of the Cyber Security Agency of Singapore, leads the Safer Internet Mobile and IoT security team

– KRISTINA MIKOLIŪNIENĖ: Council member at RRT (Lithuanian Communication Regulatory Authority)

– NICOLAS FIUMARELLI: Chair of IS3C Working Group 1 on IoT security by design

– ELIF KIESOW CORTEZ: Member of IS3C Working Group 9 on emerging technologies

– JOÃO MORENO FALCÃO: Member of IS3C working group on IoT

Additional speakers:

– Mark Carvell: IS3C senior policy advisor and rapporteur for the session

– David Huberman: Chair of IS3C Working Group 8 (mentioned but not present)

Full session report

The Internet Governance Forum session on the Internet Standards Security and Safety Coalition (IS3C) provided a comprehensive overview of ongoing efforts to enhance internet security and safety. The discussion, moderated by Wout de Natris, brought together experts from various backgrounds to explore key initiatives, challenges, and future plans in the realm of cybersecurity.

Internet Security Standards and Best Practices

A central theme of the discussion was the critical need for widespread deployment of existing security standards. Bastiaan Goslings, formerly of IS3C Working Group 8, highlighted the importance of DNSSEC and RPKI for securing internet infrastructure. However, he noted that implementation challenges persist due to perceptions of cost and complexity. This sentiment was echoed by Steven Tan from Singapore’s Cyber Security Agency, who emphasised the importance of balancing regulation and incentives for industry adoption.
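The routing half of this can be made concrete. Under RPKI-based route origin validation (RFC 6811), a BGP announcement is classified as valid, invalid, or not-found against the published Route Origin Authorizations (ROAs). The sketch below illustrates only that classification logic, using hypothetical ROA data built from documentation-reserved prefixes and AS numbers; it is not how any production validator is implemented:

```python
# Simplified sketch of RPKI route origin validation (RFC 6811 semantics).
# The ROA list below is a hypothetical illustration, not real registry data.
import ipaddress

def rov_state(prefix, origin_asn, roas):
    """Classify a BGP announcement as 'valid', 'invalid', or 'notfound'.

    roas: iterable of (roa_prefix, max_length, asn) tuples.
    """
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_length, asn in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        # A ROA covers the announcement if the announced prefix
        # falls inside the ROA's prefix (same address family).
        if net.version == roa_net.version and net.subnet_of(roa_net):
            covered = True
            # A covering ROA matches when the origin AS agrees and the
            # announced prefix is no more specific than maxLength allows.
            if asn == origin_asn and net.prefixlen <= max_length:
                return "valid"
    # Covered by at least one ROA but matched by none -> invalid;
    # covered by no ROA at all -> notfound.
    return "invalid" if covered else "notfound"

roas = [("192.0.2.0/24", 24, 64500)]  # one hypothetical ROA

print(rov_state("192.0.2.0/24", 64500, roas))     # valid
print(rov_state("192.0.2.0/24", 64501, roas))     # invalid: wrong origin AS
print(rov_state("192.0.2.0/25", 64500, roas))     # invalid: exceeds maxLength
print(rov_state("198.51.100.0/24", 64500, roas))  # notfound: no covering ROA
```

Networks that drop "invalid" announcements are protected against the most common accidental prefix hijacks, which is the deployment benefit the report argues for.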

Kristina Mikoliūnienė from Lithuania’s Communication Regulatory Authority advocated for a holistic approach to internet security regulation. This perspective aligns with the overall consensus that a comprehensive strategy is necessary to address the multifaceted challenges of cybersecurity.

Consumer Protection and Empowerment

The discussion highlighted the challenges and opportunities in consumer protection and empowerment. Steven Tan stressed the importance of building digital trust and secure systems, arguing that developers and service providers must prioritize security. The speakers discussed the potential role of certifications and security labels in empowering consumers to make informed decisions about online products and services.

Both Tan and Mikoliūnienė agreed on the importance of raising awareness and educating consumers about cybersecurity risks and best practices. They emphasized the need for collaborative efforts between governments, industry, and civil society to address these challenges effectively.

International Cooperation on Cybersecurity

The speakers unanimously agreed on the crucial need for international cooperation in addressing cybersecurity challenges. Steven Tan highlighted the importance of shared threat intelligence and common security standards, as well as partnerships between countries and industry. Kristina Mikoliūnienė emphasised the value of learning from other countries’ experiences and the need for clear problem definition and active participation in international efforts.

This focus on collaboration was further reinforced by a video presentation on the concept of a “hub” for cybersecurity collaboration. This hub would bring together experts and stakeholders to work on solutions, addressing the need for better education and diversity in the field. The presentation outlined the potential benefits of such a hub, including improved knowledge sharing and more effective problem-solving.

Emerging Technologies and Future Challenges

The discussion also touched upon the challenges posed by emerging technologies. Nicolas Fiumarelli reported on the analysis of IoT security regulatory documents across various countries, highlighting the fragmentation of approaches. Elif Kiesow Cortez and João Moreno Falcão emphasised the need for research on the societal impacts of IoT and post-quantum cryptography, stressing the importance of understanding the social implications of current IoT security status.

IS3C Organisation and Future Plans

Wout de Natris outlined IS3C’s future plans, including becoming an Internet Society special interest group and potentially establishing itself as a non-profit foundation. These changes aim to expand IS3C’s reach and funding opportunities while maintaining its role as a dynamic coalition within the IGF structure.

De Natris also announced a new project on IoT security and post-quantum cryptography in collaboration with AFNIC, with a report to be delivered at IGF 2025. This initiative underscores IS3C’s commitment to addressing emerging technologies and their potential consequences.

Additionally, IS3C plans to create capacity-building programs and continue its work beyond 2025. The coalition’s previous work on procurement was also highlighted, demonstrating its ongoing commitment to improving cybersecurity practices across various sectors.

Key Takeaways and Action Items

The discussion yielded several key takeaways, including the need for more widespread deployment of existing security standards, the importance of consumer protection and empowerment, and the critical role of international cooperation in addressing global cybersecurity challenges.

Action items emerging from the session included IS3C’s plans to organise a first event on consumer protection in the new year, apply to become an Internet Society special interest group, and convene a meeting in January to discuss the creation of a cybersecurity hub.

Nicolas Fiumarelli announced an upcoming IS3C session on Thursday, encouraging participants to attend for further discussions on cybersecurity initiatives.

In his closing remarks, Wout de Natris provided an overview of IS3C’s history and achievements, highlighting the coalition’s growth and impact since its inception. He also mentioned a QR code available for accessing additional IS3C resources.

In conclusion, the session provided a comprehensive overview of IS3C’s work and future plans, emphasising the need for collaborative efforts to improve internet security and safety. The discussion highlighted the complex challenges facing the cybersecurity landscape and the importance of multi-stakeholder cooperation in addressing these issues.

Session Transcript

WOUT DE NATRIS: Thank you and welcome to this IS3C workshop on empowering consumers towards secure by design ICTs. But I have to admit that this title does not cover all the topics we are about to share with you. Things change over time. My name is Wout de Natris and I’m the coordinator of the Internet Governance Forum dynamic coalition on Internet Standards Security and Safety, or IS3C, and I am your moderator today. IS3C has an overarching theme: to make online activity and interaction more secure and safer by achieving more widespread and rapid deployment of existing security-related Internet standards and ICT best practices. We cover, through reports, IoT security by design, tertiary cybersecurity education and skills, and government procurement. We have also published two tools: the first presenting a list covering the most important Internet standards aimed at interoperability, plus how to secure websites. And the second we present to you today, in a few moments. You can find our work on our website, www.is3coalition.org. In this session we will present our upcoming work and our plan to create a hub. You will also see a video on this topic, so stick around. IS3C has ended the first phase of some of our priorities. It’s time to move forward by putting theory into practice. IS3C strives to create capacity-building programs so that our guidelines, recommendations and tools will be implemented around the globe in the coming years, leading to more harmonized and less isolated security actions. But that is the future. Let’s turn to now. Today we will first learn about the hub from Janice Richardson. Next, Bastiaan Goslings will present IS3C’s latest tool, our outcome for 2024. And this is followed by a panel on consumer protection. And we end with our plans for 2025 and beyond. But first, the hub. Janice, I think that you’re online and I would like to give the floor to you.
Janice is the CEO of Insight, based in Luxembourg, and is the IS3C Working Group 2 Chair on Education and Skills. Janice, the floor is yours.

JANICE RICHARDSON: Thank you and good afternoon, everyone. I’m sure you’re all aware that we’ve gone through a tectonic shift in the security landscape over the last couple of years. The speed, the ferocity of cyber attacks are coming faster and faster, and no one is really prepared for this. The rise of generative AI also has made it much easier to cyber attack many of the applications that we use daily. Organizations have increasingly moved their business to the cloud. And once again, this is a point of fragility. Also, identity-based attacks are growing considerably through social engineering. This raises a question, what can we do because the traditional way of cyber attacks is no longer valid? We need to educate, educate at all levels. We learned a couple of years ago when we did a study that in fact, young people are coming out of tertiary education, they’re really not prepared to kickstart their career in industry. Industry is decrying this lack, decrying the gap and asking for better tertiary education. But I’d like to go back even further, because cybersecurity depends on every single one of us. We are all the weak link in the chain. And therefore, I think we all need to be much more aware of what cybersecurity means for us. And this goes right back to the first classes of elementary school. Over the last couple of weeks, I’ve done a quick scan of what’s available to help young people know how to use computers, technology safely and securely. And what I realized is that we’re really not getting to the heart of cybersecurity. We teach about hard passwords, but we’re not teaching the fundamentals. And this is actually what we learned from the study that we did and that we published at the IGF two years ago. Industry considers we need to get back to basics. Young people need to understand the architecture of the internet, the architecture of the cloud, if they’re really going to help find innovative solutions. 
Having education and training, I’ve already mentioned that, but every single person must be aware of how we can very easily be victim of social engineering. Even people like ourselves, who consider ourselves experts in the field. We need to improve collaboration. In tertiary education, professors are lecturing with their own resources, and yet industry has some fabulous resources available. If only they would share these resources, if they would improve the collaboration. There is a real gap: industry doesn’t know what’s being taught, but just knows that not the right things are being taught, and education is struggling to find the answers. We also need to boost diversity. I don’t know how many people are in this room right now, but usually I’m one of the few women talking about cyber security. If we don’t have women, if we don’t have different races, if we don’t have a broad overview of the population working in cyber security, we really cannot fully understand where the breaches are, and how to improve them. And of course, we need to upgrade recruitment procedures. These in-service trainings are really not working for anyone. Young people are there making the coffee when they should be there, really understanding how cyber security needs to work, and how they can be part of a team. This has led us to push for a hub. What is a hub? Well, it’s a place where people from all walks of life, interested and involved in the cyber security system, would meet, would exchange ideas. It’s a place where there would be room for the general public, room for youth, room for everyone to discuss and find the best ways ahead. Cyber security is not going to lessen. Every day we’re learning about new AI tools. This morning I was listening to intuitive AI, which adds further burdens to the system. So my call for action here is join us. Join us to create a hub.
Create a hub where we can all work together and start finding solutions and making the public aware that they also are the weakest link in the chain. And when I talk about young people, I’d like to say that they very often have a lot of solutions. If only we know how to work with them, how to guide them, but not put ideas into their mouth. We’ve worked with young people, thanks to Buchanan Coal and Tony Grillo. Pixel Blue was the company. We’ve actually worked with young people in Canada. They have created a video. And I really think that this brings together the ideas of why we need a hub, how to make that hub, and maybe a glimpse of the future. So I’m calling on you. Join us. We’ll be running meetings in January. Join us to help the hub become a reality. Back to you, Shelby, to play the video.

VIDEO: So it is the dawn of the internet. The world is suddenly connected like never before. The free flow of information reveals a global community brimming with innovation. Welcome to the world wide web. But there are those who seek to subvert the web, to poison its promise for ill-gotten profit. It is the dawn of the Internet. The world is suddenly connected like never before. The free flow of information reveals a glimmer of hope. We are still trying to find out how to get the movie on screen. Okay, are there any questions for me while you’re getting the movie on screen? I’m very

WOUT DE NATRIS: Is there a question in the room? I don’t see any fingers. So, let’s watch this video. Shelby is trying to figure it out with the guys at the technical section.

VIDEO: So Shelby is getting back and here is our video on the hub. It is the dawn of the internet. The world is suddenly connected like never before. The free flow of information reveals a global community brimming with innovation. Welcome to the world wide web. Seek to subvert the web. To poison its promise for ill-gotten profit. Necessary and existing security measures are not built in by design. Cybercrime becomes big business, exploiting the cracks in our defenses, taking advantage of our trust. taxing our resources, leaving countless victims. Our leadership struggles to develop a coordinated response. Our defense is disorganized and outdated. We’re left to fend for ourselves. To protect our global connection, experts around the world come together to form the vanguard of cybersecurity. The Hub. Populated with the smartest people on the planet, using the most effective solutions available. With adequate funding and collaboration, the Hub grows. Schools are empowered to provide state-of-the-art training. A new generation of cyber warriors enters the battlefield. Citizens of the web have open access to protection, ensuring the security of every link in the system. Put an end to cybercrime, once and for all. Support the Internet Standards Security and Safety Coalition. Let’s build the Hub.

WOUT DE NATRIS: Yes. This is made by a good friend of mine called Tony Grillo. And he works with a university in Canada where the department is called Pixel Blue. And their students made this as a graduation assignment. And then it was finished by the head of the department, who added some finishing touches. But I think it’s a very powerful video, as Janice said. Are there any questions on the idea of the Hub, or what it could do, or what it could do for you? Janice, as a final question from my side: how do you envision the next step for early 2025? What are your plans?

JANICE RICHARDSON: First, I think that all of those interested need to sign up. We will inform you when we’ll be conducting a meeting in January to see concretely how we can put this together. So first step, call for action: sign up please to the IS3C. Keep an eye on the date that we will announce and then come with your ideas on how we can put this together and the road ahead.

WOUT DE NATRIS: Great, Janice, thank you very much, and we’ll be looking forward to the dates that will be announced on the IS3C website and beyond very soon. Thank you very much. Next is Bastiaan Goslings, who is in the room. Bastiaan works for the .nl registry SIDN nowadays, but when we started this project he was a member of Working Group 8. SIDN is one of the two sponsors of this project. The result is a set of guidelines that we produced on arguments for deployment, and Bastiaan will lead us through his presentation to show what this work is, how it came about, and what the recommendations are. Bastiaan. Thank you, oh, you can understand me?

BASTIAAN GOSLINGS: Thank you, Wout, for the introduction, and I think what’s just been announced, you know, emphasizes the urgency of security standards having to be deployed, and I’m proud, you know, that I can be here to share an overview of an IS3C endeavor that was recently finalized, in this particular case on the deployment of the standards DNSSEC and RPKI. So I have 10 minutes to go through this, and, you know, I also want to give you the opportunity to reflect on it and give statements or questions. I’m not going to be able to go into details; the report is publicly available on the IS3C website. But I think, you know, it’s good to take the opportunity here to give you an overview of what we’ve been doing. So in a nutshell, the problem statement: probably you’re all aware, but the domain name system as well as the global system for internet routing are both fundamentally important when it comes to the functioning of the internet overall. Everything else depends on it. The functioning of naming, numbering, and then, you know, the combination of that and the way that internet routing works. If there’s an issue there, then any content or any communication that relies on it, you know, is affected. So that leads to the conclusion that if there are standards available that can improve those fundamental technologies, the security of them, and increase trust in online services provided and the online presence of entities and individuals, then that would at least give you an indication, right? This is something that you need to implement, or, if you purchase services from someone else, that that particular vendor has taken this into account. These technologies have been available for quite a long time in internet terms, but deployment is different across operators, it’s different across regions, and we’ve seen growth, but it’s still lacking.
So in order to have a real impact, this deployment needs to be increased. But what is the reason it lags? This is what fed into this effort, which, as mentioned, ICANN and the RIPE NCC kindly supported. There's a lot of technical documentation available, many reports over the years looking at these techniques. When I worked for the RIPE NCC, also with regard to RPKI and improving the security of routing, all the knowledge was there, and there has been quite some engagement effort to increase deployment. But we thought: maybe a different narrative is necessary, and that's what the working group aimed at. So again, the deployment of these standards is fundamental. I think it's really important to emphasize that routing and the way the DNS works underpin everything else. Whether for organizations, public entities and public services, businesses, or individuals, to maintain trust in the internet content consumed, the internet services used, and one's internet presence, it's fundamental that those technologies work properly and are secure. So it sounds like a no-brainer, but at least consider looking at them. Either when it comes to your own network and your own devices, which you control and can configure, think about implementing them there; or otherwise, if you purchase services, whether from a transit provider, a cloud operator, or other infrastructure services, make it part of your procurement process to include these types of criteria. Because, again, everything else depends on secure internet routing and a secure DNS. So why is deployment lacking? I will not go into the numbers and details; there's more in the report, so please go ahead. The URLs, the links, are included later on.
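On the RPKI side, the route origin validation that routers apply can be sketched in a few lines. This is a simplified illustration of the three RFC 6811 outcomes (valid, invalid, not found) using hypothetical documentation prefixes and AS numbers, not a production validator:

```python
from ipaddress import ip_network

def rov_state(announcement_prefix, origin_asn, roas):
    # Classify a BGP announcement against a set of ROAs, following the
    # RFC 6811 logic in simplified form.
    # roas: list of (prefix, max_length, asn) tuples published in the RPKI.
    ann = ip_network(announcement_prefix)
    covered = False
    for prefix, max_length, asn in roas:
        roa_net = ip_network(prefix)
        # A ROA "covers" the announcement if the announced prefix
        # falls within the ROA's prefix (same address family).
        if ann.version == roa_net.version and ann.subnet_of(roa_net):
            covered = True
            # Origin AS must match and the announcement must not be
            # more specific than the ROA's maxLength allows.
            if asn == origin_asn and ann.prefixlen <= max_length:
                return "valid"
    # Covered but no matching ROA -> invalid; no covering ROA -> not found.
    return "invalid" if covered else "not-found"

roas = [("192.0.2.0/24", 24, 64500)]
print(rov_state("192.0.2.0/24", 64500, roas))    # valid
print(rov_state("192.0.2.0/25", 64500, roas))    # invalid (too specific)
print(rov_state("192.0.2.0/24", 64501, roas))    # invalid (wrong origin)
print(rov_state("198.51.100.0/24", 64500, roas)) # not-found
```

An "invalid" outcome is what lets a network drop announcements such as accidental leaks or hijacks of a prefix whose legitimate origin has published a ROA, which is the trust improvement the report argues for.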
But there are a number of points that were raised by the working group of experts involved in this. On the one hand, there's the perception of cost and resource constraints: it takes additional knowledge, additional software, maybe additional hardware, and control to manage all of this. People consider it quite technically complex, not only because you need the knowledge to actually use these types of standards, but also because the underlying technologies are so fundamental that if anything goes wrong with implementation, the provision of online services might be affected. The working group also considered that quite a few entities are simply occupied with their business, for commercial or other reasons, and are not even really aware of the risk; this all happens very much under the hood, so people are not aware of it. Then there is the lack of awareness, and maybe a lack of education, even among the engineers and ICT people that are employed. And last but not least, and here we get closer to the target group the report is aiming at: it's not part of priorities. Quite a few organizations don't even have an ICT strategy and everything that comes along with it, so it's not part of their strategic considerations and priorities. So, as I mentioned, although the technical reports are there and many analyses have taken place before, the working group felt there was a reason for a new narrative, and a number of elements fed into that.
On one hand, national cybersecurity resilience: the risks to the availability of online services are huge. If all of this breaks, if your internet doesn't work, if you cannot communicate with your public authorities because everything is done online, then you do have a serious problem. And we see in many countries, and I'm from the Netherlands, so looking more at that part of the world, the western part of the European Union specifically, that our sector, the internet sector, is more and more being regulated, and I think to some extent rightly so, because of the risks involved. So there's more and more regulatory pressure. If you include these types of standards as a best practice in the way you approach your ICT strategy, then I think you're already a step ahead. And then, of course, for commercial organizations, ICT, digital presence, and online services are part of your core business; it doesn't really matter which business you're in, it's that important. So you have to at least consider these types of standards. And then maybe from a moral perspective: it's not only about you as an individual or as an organization, it's about us as a society as a whole, and about the internet as a global phenomenon as a whole, I think. So again, go back to the report; all the details are there. But to take some of the main takeaways from the conclusion: it's about safeguarding an organization's reputation, and protecting critical services and vital infrastructure-related information. The integrity and authenticity of online services can be improved by implementing technologies like RPKI and DNSSEC. And, as I've mentioned a couple of times, this has to be part of your core business. Everything is online nowadays; it doesn't really matter which line of business you're in.
So then we'd argue: please, decision makers, take this on board and include it in your strategic plans in order to promote trust in online services and in your own online presence. These are the experts that contributed to the document; our gratitude goes out to them. A special shout-out to our chair, David Huberman from ICANN. He put a lot of time and effort into this, herding this group of people like cats. Unfortunately, he cannot be here, but I do want to mention him specifically. We're really grateful for all the time and effort he put into this together with the other experts, and of course Wout, as the secretariat. And as I mentioned, this could not have been possible without the financial support of both ICANN and the RIPE NCC. Those are the websites of IS3C itself and of the working groups; working group eight is there and you can find the report. This ends my summary. If anyone has remarks, comments, or questions, I'm happy to make an effort to answer them. Thank you.

WOUT DE NATRIS: Thank you, Bastiaan. And thank you, everybody who worked on this project, because we had really excellent comments from people from all over the world who worked to get this together. You can find the document by scanning the QR code. And what I can add is that I've heard from both organizations that they're really, really happy with this outcome. The RIPE NCC will actually share it as of today, now that it's officially released, with all their members, but also with their colleagues at the RIRs, the regional internet registries around the world. So if that is the sort of impact that our work has, then it means that we're changing, perhaps a little bit, how people who have to convince their bosses can actually do so. Let's hope that that will happen in the coming year. Working group eight will officially be closed this Wednesday, when we have our internal meeting. David, and also Bastiaan, thank you very much for getting this together; it is very much appreciated by the IS3C membership. Thank you. And a small applause for the work is certainly in order. Is there a question? It worked when we were at home. Soli, can you take a look at whether this is the right code? They say it's not working, but it worked when I tested it. Yeah. It's a different… We're going to try to change it so that the right code comes up, sorry for that. The next topic is, okay, I can't hear myself anymore for some reason. Oh yes, that's it. As soon as you put it into something, the sound disappears. The next topic is consumers. We tried to get a working group together in 2022 with consumer organizations, but the financing did not work out and then the specialists stepped away, so it never really got off the ground. We talked to people at the IGF in Kyoto last year and that sort of started to revive it, and we hope to start some work on this topic of consumer protection in the next year.
In the panel today, we have two consumer protection organizations. We have Steven Tan, Assistant Director at the Cyber Security Agency of Singapore, who currently leads the Safer Internet, Mobile and IoT security team under the Cyber Security Engineering Centre. His work focuses on assessing cyber risk in the internet, mobile, and IoT domains and developing initiatives to secure Singapore's digital landscape. They're both online, so hopefully we can see them on the screen soon. Welcome, Kristina and Steven. You each have two minutes to introduce your organization and what exactly it does. Kristina, you go first. Thank you.

KRISTINA MIKOLIŪNIENĖ: Yeah, hello, everybody in the room. I'm Kristina Mikoliūnienė, a council member at RRT, the Lithuanian communications regulatory authority. We are a small country in the eastern part of Europe. As for our institution: RRT started out as a purely technical organization, it was the National Radio Frequency Agency many years ago, and it has evolved into a big hub of regulation, starting from electronic communications and the postal and railway sectors, and going on to a big bunch of digital services such as electronic signatures, electronic stamps, safer internet, and a hotline in general. I have been with this organization for over 20 years. In the beginning I worked in the electronic communications field, more with the technical and economic aspects, then also with consumer disputes, moving on to postal and railway issues. And currently, as a council member, I oversee strategic decision-making across all these sectors, and I work deeply with digital services, including safer internet and measures to combat child sexual abuse material online, as well as filtering measures and mechanisms to protect minors. So, shortly, that's me and my organization. Thank you.

WOUT DE NATRIS: Thank you, Kristina. And now you, Steven.

STEVEN TAN: Hi there. Right. Firstly, thanks a lot for the introduction. Maybe a quick one. As we all know, online transactions used to be pretty straightforward: you click a button and you make a purchase, right? But as digital services evolve and become more interconnected, things have gotten a little more complex. And while this connectivity brings convenience, it also introduces a range of cyber risks that we can't ignore. Scammers and cyber criminals are constantly finding clever ways to exploit vulnerabilities. We have all heard about data breaches, identity theft, and online scams; it has become something none of us can ignore anymore. This makes digital trust more important than ever. It's about making people feel safe when they're online, whether they are shopping, banking, or just browsing the internet. But digital trust isn't just about users being careful, it's about building secure systems that people can rely on without having to think twice. The Cyber Security Agency of Singapore is the national agency dedicated to protecting Singapore's cyberspace. At CSA, we are all about co-creating a safer cyberspace: we work closely with industry partners, raise public awareness, and, of course, promote secure technology adoption. But at the heart of it, we also think that developers and service providers have a primary responsibility. They need to build security into their products right from the start, ensuring that privacy, data protection, and a secure development process are non-negotiable. And importantly, on the flip side, we also realize that consumers need to play a role. They should demand security from the products and services they use. This is where certifications, security labels, and standards come into play.
And that's also one of the core businesses that we have in CSA: providing transparency and giving companies a competitive edge when they prioritize security. So, essentially, that's what CSA does.

WOUT DE NATRIS: Thank you, Steven. I think you already answered my first question quite well: how does your organization currently contribute to a more secure and safer internet for your country? And you gave some excellent examples. Now, how is that in Lithuania, Kristina? How does your organization currently contribute to a more secure and safer internet for all the people living in Lithuania?

KRISTINA MIKOLIŪNIENĖ: You know, RRT, as a national regulatory authority, also helps promote the internet as such. We do market analyses to enhance competition in the market. We handle proposals for assigning frequencies and numbering resources to market participants. But at the same time, we see how the internet in general impacts end users and consumers, and we have to help them not get lost in the internet space. So, first of all, thank you for helping to make the internet safer; it's really very helpful to know each other's possibilities in the market. But the internet knows no borders: if one looks for some information online, that information can come from any country abroad. So it's really important for us to act together, I think. In Lithuania, we have a holistic approach. Being a hub of regulation, we can have an impact on market participants, beginning with the operators at the level of interconnection, and then moving on to the different problems occurring with numbering resources, so that numbering resources are not used for fraud or any forbidden actions in general. We also see that bullying, scams, and child sexual abuse material are online, and as a hotline we take many actions, actively clearing the internet of information prohibited for children. We also have some requirements for fixed and mobile networks and, as I said, for numbering resources. We act as an independent auditor for trust services and electronic identification services, so that these services are genuinely secure, especially where the state assigns them a high security level for consumers. We are also the pre-trial authority for consumer dispute resolution.
It means that you as a consumer and user can come to us if some operator does not act according to the requirements, if you somehow feel not so safe or secure under your agreement. We also have a very special attitude towards minors: we have had a special law since 2011, and the hotline is implemented at the state level. We also have international cooperation: we are part of the INHOPE and Project Arachnid networks, and we have an agreement with Interpol to make the internet safer. We are also a trusted flagger on different platforms such as Google, YouTube, TikTok, and Discord. And we try to raise awareness across all of these different layers. So the holistic approach and being a regulatory hub help us to be everywhere, or to try to be everywhere, on time. Because on the internet, every second matters: if you push a button, that same second it makes an impact on the consumer or any internet user, and not always a very positive impact. And of course, I think that prioritization is very important. Knowing that the internet is so huge and interacts across all these different layers, it's very important to set the right priorities. For example, there are over 200 countries in the world, but hotlines implemented at the state level exist in only 10 of them, and only five of those are in European countries. We are one of them. So, actually, I'm proud to be part of that system, which makes the internet safer for everybody, especially for minors, who do not have the possibility to be safer because they cannot protect themselves. Thank you.

WOUT DE NATRIS: Thank you, Kristina. I heard in your answer three topics that we can move on to. One is what we heard from Steven about the responsibility of end users themselves; we also heard about the industry and the role that industry plays; and next, that there is a complete international component that makes it extremely hard to actually do something as an organization from one specific country. Let's look at the industry itself to start with, because they are often the ones that could put forward a solution towards more security, like we heard earlier on internet standards and their deployment. Is this something you have considered: a duty of care that the ICT industry could have? For example, with the deployment of security-related internet standards, the end user would be far safer than they currently are. Is that something that you've discussed among yourselves? Let me start with you, Steven.

STEVEN TAN: Right. Firstly, the short answer would be: absolutely. Why so? Clear duty-of-care rules can push ICT providers to adopt stronger security measures. When regulatory frameworks set minimum security expectations, providers and developers out there have no choice but to comply. This helps make security a standard practice and not just a competitive edge. In Singapore, we have rolled out initiatives like the Internet Hygiene Portal, which sets a strong example by encouraging businesses to adopt secure practices by default and then publicly recognizing those that excel in security through an internet hygiene rating. Similarly, in Singapore we have also launched a Safe App Standard as well as a Cybersecurity Labelling Scheme for IoT products. This shows how setting clear expectations can offer developers and providers some public-facing recognition, drive compliance, and even give businesses a niche or market advantage. This balance of regulation and industry recognition is important; it helps to motivate companies to go beyond the bare minimum. And we do understand that, many a time, regulation isn't everything. It works best when you pair it with incentives like certifications, security labels, or industry recognition itself. This creates clear differentiation and gives businesses that competitive edge, encouraging them to not only meet but exceed minimum security requirements. What we really intend is for this to motivate continuous improvement and, of course, innovation in cybersecurity practices for the various enterprises and businesses out there. So when we look at the duty of care, we think some rules will be useful, but it should be a good mix between regulation and incentives, to help the industry move forward.

WOUT DE NATRIS: Yes, and creating a level playing field, as I also understand from your words. I think that is a very encouraging answer: it's not just about regulation and the hard side of the law; the softer side of the law is just as important. How is that in Lithuania, Kristina?

KRISTINA MIKOLIŪNIENĖ: As I mentioned before, yes, we have rules. At each level of internet interaction, in each field, we have some particular set of rules. But I totally agree with Steven that rules are not everything. Too many rules give market participants a feeling of insecurity, and then they do not want to invest, especially in areas where investments are not so profitable. So actually, as a representative of a regulator, I would suggest keeping a good balance between regulation and motivation. If you want to impose some requirements on market participants, you have to give something back, some regulatory relief or something like that, and not push very strictly on every point where you need more internet security. Because, you know, at the end of the day, everything costs money, and if you only require, all the investments will be paid for by the consumers. And are consumers ready for that? Are consumers ready to pay for every security implementation on the market? I am not so sure. So I think that the right balance is the best idea.

WOUT DE NATRIS: Let's also talk about the international component. In what way could citizens of your countries profit from international cooperation that would ensure a more secure and safer internet? Steven.

STEVEN TAN: I think when it comes down to international cooperation, we must firstly understand what global cooperation could mean: shared threat intelligence, common security standards, and, of course, faster responses to incidents. At times, we do understand that that's not what is really happening, but if we were to do it carefully and intricately, this is what we foresee. Governments play a crucial role by sharing cyber threat information, coordinating responses, and even collaborating on joint research initiatives. This transparency helps build collective resilience and ensures that no country is left vulnerable due to isolated cybersecurity efforts. In CSA, some of the things we have done include building strong partnerships with key industry players like Akamai, Google, and Microsoft, and with non-profit organizations like APNIC and the Internet Society. These collaborations, coupled with government-led information-sharing efforts, enhance our cybersecurity capabilities through joint intel sharing, training, and research initiatives. For example, by working together on securing IoT devices, we can align on common security baselines, ensuring that consumers worldwide have access to safer products. These partnerships also help address cross-border cyber threats more effectively, making it harder for attackers, even scammers, to exploit gaps between different regions. In the long run, international cooperation means better protection, enhanced trust, and more resilient digital services for everyone. We have identified, and noted, that cross-border cyber threats are tough to tackle alone.
International partnerships between countries, and between government and industry, create a united front, making it harder for attackers to exploit gaps between different regions. At the end of it, I really hope that international cooperation will help to enhance protection and that, at some point in time, we will gain back digital trust for everybody.

WOUT DE NATRIS: Some very important comments on making the world more secure and safer. Kristina, what are your thoughts about international cooperation, and could it make citizens more secure and safer?

KRISTINA MIKOLIŪNIENĖ: Yeah, as I mentioned before, the internet has no borders, so it's very important to be part of a big family. Almost everybody knows that synergy sometimes makes 1 plus 1 equal not 2, but 3 or even 4. I think this is the result of international cooperation, and this is the reason why we are part of the Arachnid and INHOPE projects, which work globally to make our children, and consumers in general, safer on the internet. And you know, we even have a proverb in Lithuania: the fool learns from their own mistakes, but the wise person learns from the mistakes of others. So I think it's a very good sign to learn from others' mistakes and not repeat the same mistakes in every country because of separate views or attitudes to the same issues. Every time we do a market analysis, we look for experience in other countries, and collecting that experience, we set obligations which suit Lithuania, a small country in the Eastern European part, but are still valid all around the globe. And I think the internet, being such an international thing, must also be treated internationally, because if we agree on the values we share, we do the best for all of us. So I think we have to cooperate and work together in order to have the best results, and then everybody will win.

WOUT DE NATRIS: Thank you, Kristina. I think that you're totally right that, in the end, the challenges for everybody, in every country and every organisation on the internet, are about the same, because the threats most likely come from the same sources. As IS3C, we hope that we can start working on this to create some sort of blueprint on this topic, or whatever we would like to call it, so that the same sort of information goes out to all the organisations involved. It would be a good first step, I think, to try and get this international cooperation going. What would be your advice, Steven?

STEVEN TAN: Right. When it comes down to a good step towards international cooperation, I think it can begin by forming multilateral working groups, such as those we currently see in IS3C. But it is always a good mix if you can involve governments, industry leaders, standard-setting bodies, and, last but not least, consumer groups as well, coming together to collaborate on global frameworks for internet and application security, ensuring that solutions work across borders while reducing fragmentation in cybersecurity practices. The last thing we want is each country coming up with different cybersecurity practices so that, in the end, we get fragmentation and balkanisation. This is something we are trying to avoid, and something that, as part of IS3C itself, we really want: everybody having a common internet, working together. Another essential step, I think, would be to establish regional forums and international workshops where experts can discuss pressing cybersecurity challenges like securing digital supply chains and mitigating cross-border cybercrime. Such events help create actionable roadmaps and foster partnerships that drive long-term improvements. I also feel that governments will always need to take the lead in sharing cyber threat intelligence with trusted global networks. Transparent communication and real-time data sharing enable faster and more coordinated responses to emerging threats, strengthening collective defenses against global cyber attacks. And last but not least, I think it's important that we advance capacity-building initiatives. Just now, when Janice was bringing up the hub, I realized I hadn't heard about it before; it was through this platform that I heard about it.
I'm very excited about whether we could pull in various experts from all around the place to work together. Hopefully we can share best practices and support technology transfers, and perhaps, nation-wise, we can help uplift each other's cybersecurity capabilities, ensuring that no country or region is left behind in the fight for a safer internet.

WOUT DE NATRIS: I could never have put that better myself. Thank you, Steven. Kristina, what are your thoughts about international cooperation, and what would be a good first step to get it started? … Yes, this question is for you, Kristina.

KRISTINA MIKOLIŪNIENĖ: Yeah, sorry, because it's very difficult to hear you. From my point of view, it's very important, first of all, to define the problem clearly, because the internet has so many different layers and in every layer there are different problems. So first, I think it's necessary to find a quite narrow description of the problem you would like to solve. Then it's important to find active people, the right people, because they are of critical importance. The third thing, I think, is to have the necessary tools, such as internet.nl and similar, so that you can really convince your partners that you have something which is truly suitable for them. Going forward, voluntary participation, as we have in the Arachnid or Interpol programs, is very important as well. And a good example motivates: I think of the AI Convention, which was opened for signature in Vilnius this year, on the 5th of September, a point on which countries around the world agree, and now the creators of that convention are looking for the parties who agree with it and trying to find signatories. So I think it's somewhat like lobbying activities: when you have a problem and you have the people around you, you can convince regulators to implement some necessary obligations, or some part of them; you can convince some market participants to be more active and more socially responsible on the internet; and there are end users for whom raising awareness could help them act in a safer way on the internet. So I think all the related parties must be involved in that work, because, as I said before, you are encompassing the whole world. So thank you for doing this.

WOUT DE NATRIS: Thank you, Kristina. I think that we've heard from the panel that we have quite some challenges, but also a lot of opportunities. And I suggest that, when the new year starts, we see if we can organize a first event to get this going. So I will be in contact with you in the new year. For now, thank you very much for participating and for your very clear and concise answers, because we have heard very good answers in this panel. So thank you, Steven, and thank you, Kristina. The next topic is... Thank you for inviting me. Thanks for inviting me. You're very welcome. I'm very happy to say that IS3C has received a new assignment: we're going to start new work next year. And I have the chair of Working Group 1 and the project leader of Working Group 9 on emerging technologies with me here and online. Working Group 1 produced a report last year on IoT security by design, led by Nicolas. And we're going to start a new project on that topic, combined with emerging technologies. I'll first give the floor for five minutes to Nicolas to say what exactly the current state of affairs is and where we're going. Then I'll ask Elif to talk about post-quantum cryptography, PQC, as she'll explain, and then Joao about the IoT components in that. So Nicolas first; you have five minutes, please. Thank you.

NICOLAS FIUMARELLI: Thank you so much, Wout. Good afternoon, everyone. I am Nicolas Fiumarelli, the chair of Working Group 1 on IoT security by design. It's a pleasure to be here discussing how we can empower consumers on the different topics we have raised. In 2022, we conducted a comprehensive analysis of IoT security regulatory documents and policies across 18 different countries and regions. We identified 442 different best practices around four key areas: data privacy, secure updating, user empowerment, and operational resilience. 442 best practices. We also found that many nations, particularly in the global south, lack enforceable IoT security policies; even where frameworks exist, and there are several of them, they are often voluntary or fragmented. And the global adoption of security by design in ICTs is hindered by these inconsistent standards. One of the most promising solutions for implementing cybersecurity is labeling schemes, as seen in Singapore and Finland. Labeling empowers consumers by providing clear information about products' security features, which drives manufacturers to prioritize security. But these systems require robust, independent testing mechanisms, and global standardization to ensure effectiveness is difficult. On the other hand, consumer empowerment must be complemented by strong regulatory frameworks: for example, the UK's new product security regime, the NIST 8425 standards, and the EU Cyber Resilience Act. In our research report, we recommend establishing these clear frameworks, promoting more interoperable global standards, and so on. Working Group 1 remains committed to advancing IoT security through education, research, and different advocacy mechanisms, as we recommend in our research.
But looking into the future, we will continue this research, now with a different approach, because we identified other factors that are important. My colleague Joao will tell us more about the 2025 action plan for our working group and beyond. And well, I invite you all to join our efforts, whether by implementing the recommendations we make in the report, by contributing to our ongoing research and repository of best practices (I mentioned we have 442, so we are looking for more examples from around the world), or by advocating for stronger policies in your own regions. Together we can ensure that IoT devices, and ICTs more broadly, not only connect us but also protect us, right? So I’m giving the floor to Joao to explain more about next year’s plans.

WOUT DE NATRIS: Thank you, Nicolas. I think the message here is that across the 18 constituencies in the world that were studied, we found 442 different best practices, or advices, or whatever you want to call them. And that’s unworkable for industry. I think I’m going to let Elif go first, and then Joao. Two years ago already, we launched the idea of starting a working group on emerging technologies. We talked to a lot of organizations, and finally one we met in Kyoto decided to work with us. That project is going to start pretty soon; the contract is signed. Elif, please explain from your side exactly what it is that we’re going to study and report on. Then Joao will explain how that interconnects with IoT. So Elif, the floor is yours.

ELIF KIESOW CORTEZ: Thank you very much, Wout. We are, of course, very happy to announce this new project of IS3C with AFNIC from France. This project will be delivered as a collaboration between Working Group 1 and Working Group 9. Our research will have two focus areas: one dedicated to the societal impacts of IoT, and the second to post-quantum cryptography. We will also provide a brief combined analysis of these domains. Our project will feature a multidimensional analysis looking at societal, legal, economic and environmental impacts, and we will include policy recommendations both at the state level and at the organization level. So we have a big task ahead of us for this project. At the next IGF, in 2025, we will also facilitate stakeholder engagement on these issues through a common workshop that will encourage dialogue on the societal implications as well as future directions. The project will be finalized with a combined report on both IoT security and PQC. We will also explore cross-cutting themes like digital transformation and future-proofing against emerging threats, which was also the focus of our Working Group 9. And we will make sure to address international cooperation and economic competitiveness within the broader context of global cybersecurity efforts. We think these are extremely relevant and important topics today, so we are happy to hear from you if you would like to collaborate with us in any of these domains. And I think I can give the floor to Joao.

WOUT DE NATRIS: Thank you, Elif.

JOÃO MORENO FALCÃO: Hello everybody. I’m here to represent the working group that will develop the part regarding IoT. When we were discussing this project and sketching it out, what we saw is that people understand there is a security problem with IoT. And what we wanted to know, after realizing that, is: if someone gets hacked, and if the current security status of IoT is maintained, what are the security implications? And what are the social implications? Because we are building a world based on the security levels that we see, and we want to look further and think about what would happen and what we need to change to make society safer with regard to IoT. So we want to examine this societal side of the work of making IoT safer.

WOUT DE NATRIS: Thank you, Joao. And I think that shows how the two topics intersect with each other, because when the quantum computer arrives, all IoT devices will have an instant security problem that’s even bigger than it is today. So that is where we are going to try to come up with, not immediate solutions, but at least an indication of where we are at this point in time and what the consequences will be. And from there, hopefully, build that into some sort of capacity-building program, which has already been discussed with AFNIC, on how to move forward after the IGF in 2025. What it shows is that IS3C is building and delivering. As you see, at this moment all the reports we promised to deliver are there, and you can find them on our website. Is it possible already, Selby, to show the QR code? The gentleman in the back, can we show the correct QR code, please? Thank you. Now to wrap this session up, because we’re about to end. But first, are there any questions? And are there any online questions? That is something that I cannot see from the stage. Are there any questions? No, we don’t have any. So I’ll wrap the session up and let you go to the next two sessions. To talk about IS3C, again, the Internet Standards, Security and Safety Coalition, the dynamic coalition within the IGF structure: we have now been in existence for four IGF cycles. We started at the virtual IGF in 2020 with our inaugural meeting. And we can look back at being a dynamic coalition that started by making promises. We painted a picture of where we wanted to be in about two years’ time, and we decided on three topics to start with. The first was IoT security by design. The second was education and skills. And the third is procurement, the only one you haven’t heard about today. That was also a report we published, and it showed that most governments in the world do not procure their ICTs secure by design. They have no policy for it.
In 2021, we were able to present solid plans on these three topics. With them came the first funding in 2022, then the first research, and then our first reports. From there we grew and more topics came aboard; in fact, we have seen a new one presented just now. But it has also proven to be a struggle to find funding, to attract attention, and to be recognized within the IGF system, and this has still not been solved satisfactorily. It has, however, led to ideas on how to organize ourselves differently, and that is what we are seriously studying at this moment. We’re looking at two options simultaneously. The leadership team, that is, Mark Carvell, who’s sitting next to me, who is the rapporteur of this session and our senior policy advisor, and our working group chairs, have decided to apply to become an Internet Society special interest group, because this will allow IS3C to reach out beyond the IGF, but also to bring the funding of projects closer. This does not mean that we will not remain a dynamic coalition, because we will; only that we are spreading our wings, and that is also logical. If we manage to take the next step, and that is what we strive to do, to move from theory to practice, to come up with recommendations and turn them into capacity-building programs or workshops or whatever we call them, we move ourselves out of the IGF system, because that is not what the IGF is for. The IGF doesn’t do capacity-building programs or workshops, and we do strive to do that, so that there will be some form of harmonization around the world on specific topics, so that organizations start thinking the same about, for example, procurement and the Internet standards that you can procure on. So IS3C is striving to become more mature, but that also means it has to organize itself differently. And that is the second option we’re studying: do we establish ourselves as a not-for-profit foundation?
And that is something that people are investigating at this moment; we get the first report in our closed EC session on Wednesday. The benefits would be, of course, that we would be allowed to have members who pay a membership fee and to accept donations, and from there be funded in a more structural way, hopefully, so that our plans can go through. Well, these are our plans. If anybody has experience with these sorts of topics, please talk to us after this session. On Tuesday at 1230, we’ll be showing the video again at the Dynamic Coalition booth, so you’re invited to join that session, and if you’re interested in joining the hub, let us know and we will send you the invitation to the first meeting, which I will be organizing with Janice Richardson in January. For now, I want to thank the presenters, the people online, Elif, Steven, Janice, and Kristina, Mark for reporting, and the people in the back for the technical support, somewhere in the world, probably. Thank you very much. And for now, thank you for joining. I hope you had a good session in which you learned about some new topics, and if you’re interested in IS3C, please join us and just talk to us during the week. And Nico has a final comment. Nico.

NICOLAS FIUMARELLI: Just to invite everyone also to our session on Thursday, from 11.15 to 12.15. That will be our main session, a joint session between our Dynamic Coalition and the Dynamic Coalition on the Internet of Things, so you are all invited to that session as well.

WOUT DE NATRIS: Thank you for reminding me, Nico. Thank you very much. Have a very good IGF, and we’ll see you soon, probably.

WOUT DE NATRIS

Speech speed

141 words per minute

Speech length

2980 words

Speech time

1259 seconds

Need for widespread deployment of existing security standards

Explanation

WOUT DE NATRIS emphasizes the importance of implementing existing security-related Internet standards and ICT best practices more widely and rapidly. This is aimed at making online activity and interaction more secure and safer.

Evidence

IS3C has published reports on IoT security by design, tertiary cybersecurity education and skills, and government procurement.

Major Discussion Point

Internet Security Standards and Best Practices

Agreed with

BASTIAAN GOSLINGS

STEVEN TAN

Agreed on

Importance of implementing security standards

Plans to become an Internet Society special interest group

Explanation

WOUT DE NATRIS discusses IS3C’s plans to apply to become an Internet Society special interest group. This move aims to allow IS3C to reach out beyond the IGF and bring funding of projects closer.

Evidence

Mentions the decision made by the leadership team and working group chairs.

Major Discussion Point

IS3C Organization and Future Plans

Consideration of establishing as a not-for-profit foundation

Explanation

WOUT DE NATRIS mentions that IS3C is considering establishing itself as a not-for-profit foundation. This would allow the organization to have members who can pay a membership fee and accept donations, potentially leading to more structural funding.

Major Discussion Point

IS3C Organization and Future Plans

Goal to move from theory to practice in implementing recommendations

Explanation

WOUT DE NATRIS expresses IS3C’s goal to move from theory to practice by turning recommendations into capacity building programs or workshops. This aims to create some form of harmonization around the world on specific topics.

Evidence

Mentions the example of procurement and Internet standards that can be procured.

Major Discussion Point

IS3C Organization and Future Plans

BASTIAAN GOSLINGS

Speech speed

170 words per minute

Speech length

1480 words

Speech time

522 seconds

Importance of DNSSEC and RPKI for securing internet infrastructure

Explanation

BASTIAAN GOSLINGS highlights the critical role of DNSSEC and RPKI in securing fundamental internet technologies like the domain name system and global routing. These standards are essential for maintaining trust in online services and presence.

Evidence

The domain name system and global routing system are described as fundamentally important for the functioning of the internet overall.

Major Discussion Point

Internet Security Standards and Best Practices

Agreed with

WOUT DE NATRIS

STEVEN TAN

Agreed on

Importance of implementing security standards

Challenges in implementing security standards due to cost and complexity perceptions

Explanation

BASTIAAN GOSLINGS discusses the barriers to implementing security standards, including perceived costs and resource constraints. Many organizations view these standards as technically complex and potentially risky to implement.

Evidence

Mentions perceptions of cost, resource constraints, technical complexity, and potential risks associated with implementation.

Major Discussion Point

Internet Security Standards and Best Practices

STEVEN TAN

Speech speed

152 words per minute

Speech length

1355 words

Speech time

531 seconds

Importance of building digital trust and secure systems

Explanation

STEVEN TAN emphasizes the critical need for building digital trust in the face of evolving cyber risks. He stresses the importance of creating secure systems that users can rely on without hesitation.

Evidence

Mentions the complexity of digital services, increasing cyber risks, and the need for digital trust in various online activities.

Major Discussion Point

Consumer Protection and Empowerment

Need for developers and service providers to prioritize security

Explanation

STEVEN TAN argues that developers and service providers have a primary responsibility to build security into their products from the start. This includes ensuring privacy, data protection, and secure development processes.

Major Discussion Point

Consumer Protection and Empowerment

Agreed with

WOUT DE NATRIS

BASTIAAN GOSLINGS

Agreed on

Importance of implementing security standards

Role of certifications and security labels in empowering consumers

Explanation

STEVEN TAN discusses the importance of certifications, security labels, and standards in empowering consumers. These tools provide transparency and give companies a competitive edge when they prioritize security.

Evidence

Mentions initiatives like the Internet Hygiene Portal, Safe App Standard, and cybersecurity labeling scheme for IoT products in Singapore.

Major Discussion Point

Consumer Protection and Empowerment

Agreed with

KRISTINA MIKOLIŪNIENĖ

Agreed on

Consumer education and empowerment

Need for shared threat intelligence and common security standards

Explanation

STEVEN TAN emphasizes the importance of global cooperation in cybersecurity, including shared threat intelligence and common security standards. This cooperation is crucial for building collective resilience and ensuring no country is left vulnerable.

Evidence

Mentions partnerships with key industry players like Akamai, Google, Microsoft, and non-profit organizations like APNIC and Internet Society.

Major Discussion Point

International Cooperation on Cybersecurity

Agreed with

KRISTINA MIKOLIŪNIENĖ

Agreed on

International cooperation in cybersecurity

Importance of partnerships between countries and industry

Explanation

STEVEN TAN stresses the need for international partnerships between countries and industry to create a united front against cyber threats. These collaborations are essential for addressing cross-border cyber threats effectively.

Evidence

Mentions the potential benefits of such partnerships, including better protection, enhanced trust, and more resilient digital services.

Major Discussion Point

International Cooperation on Cybersecurity

Importance of balancing regulation and incentives for industry adoption

Explanation

STEVEN TAN argues for a balance between regulation and incentives to motivate companies to adopt stronger security measures. He suggests that clear duty of care rules can push ICT providers to adopt stronger security measures, while incentives can encourage them to exceed minimum requirements.

Evidence

Mentions initiatives in Singapore like the Internet Hygiene Portal, Safe App Standard, and cybersecurity labeling scheme for IoT products.

Major Discussion Point

Internet Security Standards and Best Practices

Differed with

KRISTINA MIKOLIŪNIENĖ

Differed on

Approach to regulation and incentives

KRISTINA MIKOLIŪNIENĖ

Speech speed

111 words per minute

Speech length

1533 words

Speech time

828 seconds

Need for holistic approach to internet security regulation

Explanation

KRISTINA MIKOLIŪNIENĖ advocates for a comprehensive approach to internet security regulation. This involves impacting market participants at various levels, from interconnection to addressing issues like fraud and child sexual abuse material online.

Evidence

Mentions RRT’s role in market analysis, frequency allocation, and addressing various internet-related issues.

Major Discussion Point

Internet Security Standards and Best Practices

Differed with

STEVEN TAN

Differed on

Approach to regulation and incentives

Importance of raising awareness and educating consumers

Explanation

KRISTINA MIKOLIŪNIENĖ emphasizes the importance of educating consumers about internet safety. She highlights the role of regulatory authorities in helping consumers navigate the internet space safely.

Evidence

Mentions RRT’s role in consumer dispute resolution and efforts to make the internet safer, especially for minors.

Major Discussion Point

Consumer Protection and Empowerment

Agreed with

STEVEN TAN

Agreed on

Consumer education and empowerment

Value of learning from other countries’ experiences

Explanation

KRISTINA MIKOLIŪNIENĖ stresses the importance of international cooperation and learning from other countries’ experiences in addressing internet security issues. She argues that this approach can help avoid repeating mistakes and lead to more effective solutions.

Evidence

Mentions a Lithuanian proverb about learning from others’ mistakes and the importance of collecting experiences from other countries.

Major Discussion Point

International Cooperation on Cybersecurity

Agreed with

STEVEN TAN

Agreed on

International cooperation in cybersecurity

Need for clear problem definition and active participation

Explanation

KRISTINA MIKOLIŪNIENĖ emphasizes the importance of clearly defining the problem and involving active participants in international cooperation efforts. She suggests that this approach is crucial for addressing internet security issues effectively.

Evidence

Mentions the need to find a narrow description of the problem and involve the right people in the process.

Major Discussion Point

International Cooperation on Cybersecurity

NICOLAS FIUMARELLI

Speech speed

125 words per minute

Speech length

487 words

Speech time

232 seconds

Analysis of IoT security regulatory documents across countries

Explanation

NICOLAS FIUMARELLI discusses the comprehensive analysis of IoT security regulatory documents across 18 different countries and regions. The analysis identified 442 different best practices in four key areas: data privacy, secure updating, user empowerment, and operational resilience.

Evidence

Mentions the identification of 442 best practices across 18 different countries and regions.

Major Discussion Point

Emerging Technologies and Future Challenges

ELIF KIESOW CORTEZ

Speech speed

146 words per minute

Speech length

265 words

Speech time

108 seconds

Need for research on societal impacts of IoT and post-quantum cryptography

Explanation

ELIF KIESOW CORTEZ outlines a new research project focusing on the societal impacts of IoT and post-quantum cryptography. The project aims to provide a multidimensional analysis looking at societal, legal, economic, and environmental impacts.

Evidence

Mentions the collaboration between Working Group 1 and Working Group 9, and the plan to provide policy recommendations at both state and organization levels.

Major Discussion Point

Emerging Technologies and Future Challenges

JOÃO MORENO FALCÃO

Speech speed

114 words per minute

Speech length

139 words

Speech time

72 seconds

Importance of understanding social implications of current IoT security status

Explanation

JOÃO MORENO FALCÃO emphasizes the need to understand the social implications of the current IoT security status. He argues that it’s crucial to consider what would happen if the current security levels are maintained and what changes are needed to make society safer regarding IoT.

Major Discussion Point

Emerging Technologies and Future Challenges

Agreements

Agreement Points

Importance of implementing security standards

WOUT DE NATRIS

BASTIAAN GOSLINGS

STEVEN TAN

Need for widespread deployment of existing security standards

Importance of DNSSEC and RPKI for securing internet infrastructure

Need for developers and service providers to prioritize security

Multiple speakers emphasized the critical need for implementing existing security standards to enhance internet security and maintain trust in online services.

International cooperation in cybersecurity

STEVEN TAN

KRISTINA MIKOLIŪNIENĖ

Need for shared threat intelligence and common security standards

Value of learning from other countries’ experiences

Both speakers stressed the importance of international cooperation in addressing cybersecurity challenges, sharing knowledge, and developing common standards.

Consumer education and empowerment

STEVEN TAN

KRISTINA MIKOLIŪNIENĖ

Role of certifications and security labels in empowering consumers

Importance of raising awareness and educating consumers

Both speakers highlighted the need to educate and empower consumers about internet safety and security through various means such as certifications, security labels, and awareness programs.

Similar Viewpoints

Both speakers recognized the challenges in implementing security standards and emphasized the need for a balanced approach that combines regulation with incentives to encourage adoption by the industry.

BASTIAAN GOSLINGS

STEVEN TAN

Challenges in implementing security standards due to cost and complexity perceptions

Importance of balancing regulation and incentives for industry adoption

These speakers all emphasized the importance of understanding the broader implications of IoT security, including its societal impacts and the need for comprehensive research and analysis.

NICOLAS FIUMARELLI

ELIF KIESOW CORTEZ

JOÃO MORENO FALCÃO

Analysis of IoT security regulatory documents across countries

Need for research on societal impacts of IoT and post-quantum cryptography

Importance of understanding social implications of current IoT security status

Unexpected Consensus

Holistic approach to internet security

KRISTINA MIKOLIŪNIENĖ

STEVEN TAN

Need for holistic approach to internet security regulation

Importance of building digital trust and secure systems

Despite coming from different backgrounds (regulatory authority and cybersecurity agency), both speakers emphasized the need for a comprehensive approach to internet security that goes beyond technical measures to include trust-building and broader regulatory frameworks.

Overall Assessment

Summary

The main areas of agreement included the importance of implementing security standards, the need for international cooperation in cybersecurity, and the significance of consumer education and empowerment. There was also consensus on the challenges of implementing security standards and the need for a balanced approach combining regulation and incentives.

Consensus level

The level of consensus among the speakers was relatively high, particularly on the fundamental issues of cybersecurity and the need for international cooperation. This consensus suggests a shared understanding of the critical challenges in internet security and the potential for collaborative efforts to address these issues. However, there were some variations in emphasis and approach, reflecting the diverse backgrounds and perspectives of the speakers.

Differences

Different Viewpoints

Approach to regulation and incentives

STEVEN TAN

KRISTINA MIKOLIŪNIENĖ

Importance of balancing regulation and incentives for industry adoption

Need for holistic approach to internet security regulation

While both speakers emphasize the importance of regulation, STEVEN TAN advocates for a balance between regulation and incentives, whereas MIKOLIŪNIENĖ focuses more on a comprehensive regulatory approach without explicitly mentioning incentives.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement were subtle and primarily focused on the approach to regulation and the specific aspects of international cooperation to prioritize.

Difference level

The level of disagreement among the speakers was relatively low. Most speakers generally agreed on the importance of cybersecurity, international cooperation, and the need for improved standards and practices. The differences were mainly in the nuances of approach rather than fundamental disagreements. This low level of disagreement suggests a general consensus on the importance of the issues discussed, which could facilitate more unified action in addressing cybersecurity challenges.

Partial Agreements

Both speakers agree on the importance of international cooperation, but STEVEN TAN emphasizes shared threat intelligence and common standards, while KRISTINA MIKOLIŪNIENĖ focuses more on learning from others’ experiences and avoiding mistakes.

STEVEN TAN

KRISTINA MIKOLIŪNIENĖ

Need for shared threat intelligence and common security standards

Value of learning from other countries’ experiences

Takeaways

Key Takeaways

There is a need for more widespread deployment of existing internet security standards and best practices.

Consumer protection and empowerment are crucial for building digital trust and securing the internet.

International cooperation is essential for addressing global cybersecurity challenges.

Emerging technologies like IoT and quantum computing pose new security risks that need to be studied and addressed.

The Internet Standards, Security and Safety Coalition (IS3C) is working to move from theory to practice in implementing cybersecurity recommendations.

Resolutions and Action Items

IS3C to start a new project on IoT security and post-quantum cryptography, with a report to be delivered at IGF 2025

IS3C to organize a first event on consumer protection in the new year

IS3C to apply to become an Internet Society special interest group

IS3C considering establishing itself as a not-for-profit foundation

IS3C to organize a meeting in January to discuss the creation of a cybersecurity hub

Unresolved Issues

How to effectively implement security standards across different countries and regions

How to balance regulation and incentives for industry adoption of security measures

How to address the fragmentation of IoT security best practices across different jurisdictions

How to prepare for the security implications of quantum computing on existing infrastructure

Suggested Compromises

Balancing regulatory requirements with industry incentives to promote security adoption

Combining mandatory security standards with voluntary labeling schemes to empower consumers

Collaborating internationally while respecting national sovereignty in cybersecurity matters

Thought Provoking Comments

We learned a couple of years ago when we did a study that in fact, young people are coming out of tertiary education, they’re really not prepared to kickstart their career in industry. Industry is decrying this lack, decrying the gap and asking for better tertiary education.

speaker

Janice Richardson

reason

This comment highlights a critical gap between education and industry needs in cybersecurity, challenging assumptions about the effectiveness of current educational approaches.

impact

It shifted the discussion towards the importance of education reform and industry collaboration in cybersecurity, leading to ideas about creating a hub for knowledge exchange.

These in-service trainings are really not working for anyone. Young people are there making the coffee when they should be there, really understanding how cyber security needs to work, and how they can be part of a team.

speaker

Janice Richardson

reason

This insight critiques current training practices and suggests a need for more meaningful engagement of young professionals in cybersecurity roles.

impact

It deepened the conversation about practical skills development and led to discussions about reforming recruitment and training procedures in the industry.

On one hand, you know, there’s the perception of cost and resource constraints, right? Like it takes additional knowledge, additional software, maybe additional hardware, control of this to manage all of this. People consider this to be quite technically complex.

speaker

Bastiaan Goslings

reason

This comment provides insight into the barriers to implementing security standards, highlighting both technical and resource challenges.

impact

It shifted the discussion towards addressing practical obstacles in implementing security measures and led to considerations of how to overcome these barriers.

In Singapore, we have rolled out initiatives like the internet hygiene portal which sets a strong example by encouraging businesses to adopt secure practices by default and then publicly recognizing those that excel in security through internet hygiene rating.

speaker

Steven Tan

reason

This comment introduces a concrete example of how government initiatives can incentivize better security practices in the private sector.

impact

It sparked discussion about the role of government in promoting cybersecurity and led to considerations of similar initiatives in other countries.

I think it’s important that we could advance capacity-building initiatives. Just now, when Janice brought up the hub, right, I hadn’t heard about it before; I heard about it through this platform. I’m very excited about whether we could pull in various experts from all around the place, right, to work together.

speaker

Steven Tan

reason

This comment demonstrates how the discussion itself led to new connections and enthusiasm for collaborative initiatives.

impact

It reinforced the value of the discussion forum and led to increased interest in the proposed hub concept.

Overall Assessment

These key comments shaped the discussion by highlighting critical gaps in cybersecurity education and implementation, introducing concrete examples of successful initiatives, and fostering enthusiasm for collaborative approaches. They shifted the conversation from theoretical concerns to practical solutions and emphasized the need for multi-stakeholder cooperation in addressing cybersecurity challenges. The discussion evolved from identifying problems to exploring potential solutions and international cooperation opportunities.

Follow-up Questions

How to create and implement a hub for cybersecurity collaboration?

speaker

Janice Richardson

explanation

A hub would bring together people from various backgrounds to discuss and find solutions for cybersecurity challenges, addressing the need for better education and collaboration in the field.

How to increase deployment of DNSSEC and RPKI security standards?

speaker

Bastiaan Goslings

explanation

Despite being available for a long time, these standards lack widespread adoption. Increasing their deployment is crucial for improving the security of internet routing and domain name systems.

How to balance regulation and incentives in promoting cybersecurity practices?

speaker

Steven Tan

explanation

Finding the right mix of regulatory requirements and incentives is important to encourage businesses to adopt and exceed minimum security standards without stifling innovation.

How to establish effective international cooperation on cybersecurity?

speaker

Steven Tan and Kristina Mikoliūnienė

explanation

Given the borderless nature of the internet, international cooperation is crucial for addressing cross-border cyber threats and creating unified security standards.

What are the societal implications of current IoT security levels?

speaker

João Moreno Falcão

explanation

Understanding the broader societal impacts of IoT security vulnerabilities is crucial for developing appropriate security measures and policies.

How will post-quantum cryptography affect IoT security?

speaker

Elif Kiesow Cortez

explanation

The advent of quantum computing will create new security challenges for IoT devices, requiring proactive research and planning.

How can IS3C organize itself to better achieve its goals?

speaker

Wout de Natris

explanation

IS3C is exploring options like becoming an Internet Society special interest group or establishing itself as a non-profit foundation to expand its reach and funding opportunities.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Day 0 Event #61 Accelerating progress for unified digital cooperation


Session at a Glance

Summary

This discussion focused on global digital governance, addressing key issues in artificial intelligence (AI), data management, and internet governance. Participants from government, industry, and international organizations shared insights on recent developments and future challenges.


The conversation highlighted the need for interoperable regulatory approaches to AI governance, balancing innovation with risk management. Speakers emphasized the importance of multi-stakeholder collaboration in developing frameworks that are flexible enough to adapt to local contexts while maintaining global consistency.


On data governance, the discussion centered on initiatives promoting data free flow with trust, addressing privacy concerns, and facilitating cross-border data sharing. Participants stressed the need for harmonized approaches to reduce fragmentation and ensure legal clarity for businesses and citizens.


The panel also examined the future of internet governance, particularly in light of the upcoming WSIS+20 review. Speakers advocated for strengthening existing multi-stakeholder processes like the Internet Governance Forum (IGF) rather than creating new structures. They emphasized the importance of inclusive participation, especially from developing countries and underrepresented groups.


Key themes throughout the discussion included the urgency of addressing governance challenges posed by rapidly evolving technologies, the need to preserve what works in current systems, and the importance of trust-building among stakeholders. Participants called for more focused, action-oriented approaches to governance that can deliver tangible results while maintaining the benefits of open, multi-stakeholder dialogue.


The discussion concluded with reflections on improving inclusivity, gender representation, and the overall effectiveness of global digital governance processes. Speakers emphasized the need for clear mandates, strategic vision, and practical outcomes in future governance efforts.


Keypoints

Major discussion points:


– The need for interoperable and aligned approaches to AI and data governance across jurisdictions


– The importance of preserving multi-stakeholder approaches in internet governance


– Preparing for the WSIS+20 review and the future of the Internet Governance Forum (IGF)


– Balancing innovation with risk mitigation in emerging technologies like AI


– Improving inclusivity and representation in internet governance processes


The overall purpose of the discussion was to take stock of recent developments in digital policy and governance, particularly around AI and data, and to look ahead to upcoming processes like WSIS+20 and the implementation of the Global Digital Compact. The goal was to identify priorities and approaches for improving global digital cooperation between governments, businesses, and other stakeholders.


The tone of the discussion was largely constructive and forward-looking. Speakers acknowledged challenges but focused on opportunities for progress. There was a sense of urgency about addressing governance gaps, balanced with caution about preserving what works well in the current system. The tone became more action-oriented towards the end, with calls to move beyond talk to concrete outcomes.


Speakers

– Timea Suto: Moderator


– Maria Fernanda Garza: Honorary Chairwoman of ICC (International Chamber of Commerce)


– Thomas Schneider: Ambassador and Director of International Relations at Ofcom Switzerland, Vice Chair of the Council of Europe’s Committee on Artificial Intelligence


– Flavia Alves: Director and Head of International Organizations for Meta


– Yoichi Iida: Assistant Vice Minister for International Affairs of the Ministry of Internal Affairs and Communications of Japan


– Maarit Palovirta: Deputy Director General at Connect Europe


– Irina Soeffky: Director for National, European, and International Digital Policy at the German Federal Ministry for Digital and Transport


– Larisa Galadza: Director General for Global Affairs Canada and Senior Official for Cyber, Digital and Critical Technology at the Government of Canada


– Amr Hashem: MENA Policy Director for the GSMA


Additional speakers:


– Bertrand de La Chapelle: Audience member


– Jacques Beglinger: Member of the board of EuroDIG and co-chair of the Swiss IGF


– Desiree Milosevic-Evans: Audience member


Full session report

Global Digital Governance: Navigating AI, Data, and Internet Challenges


This comprehensive discussion on global digital governance brought together key stakeholders from government, industry, and international organisations to address pressing issues in artificial intelligence (AI), data management, and internet governance. The panel explored recent developments, future challenges, and potential solutions for creating a more cohesive and effective global digital governance framework.


AI Governance: Balancing Innovation and Interoperability


A central theme of the discussion was the need for interoperable regulatory approaches to AI governance. Thomas Schneider, Ambassador and Director of International Relations at Ofcom Switzerland, emphasized the importance of the Council of Europe’s AI convention as a potential global standard. He stressed the need for flexible frameworks that can adapt to rapidly evolving AI technologies while ensuring interoperability between different regulatory approaches.


Flavia Alves, Director and Head of International Organizations for Meta, highlighted the potential of open-source AI to drive innovation and create better, safer products accessible on a global scale. She emphasized the importance of open-source approaches in fostering collaboration and improving AI systems.


Yoichi Iida, Assistant Vice Minister for International Affairs of the Ministry of Internal Affairs and Communications of Japan, discussed the G7 Hiroshima AI process and code of conduct as an example of international cooperation on AI governance. Audience members raised concerns about potential biases in AI datasets and the need for inclusive governance approaches that represent marginalized communities.


Data Governance: Trust, Privacy, and Cross-Border Flows


The discussion on data governance centered on initiatives promoting data free flow with trust, addressing privacy concerns, and facilitating cross-border data sharing. Yoichi Iida introduced the OECD’s work on data free flow with trust, highlighting the importance of balancing data utility with privacy protection. He also addressed the complex issue of government access to data for law enforcement purposes.


Maarit Palovirta, Deputy Director General at Connect Europe, outlined the EU approach to data protection and cross-border data flows, emphasizing the need for harmonized regulations that protect privacy while enabling innovation.


Amr Hashem, MENA Policy Director for the GSMA, highlighted the mobile industry’s crucial role in expanding internet access and connectivity. He stressed the importance of considering infrastructure development alongside governance issues, particularly in developing regions.


Future of Internet Governance: Multi-stakeholder Collaboration and Reform


The panel examined the future of internet governance, particularly in light of the upcoming WSIS+20 review. Irina Soeffky, Director for National, European, and International Digital Policy at the German Federal Ministry for Digital and Transport, emphasized the continued importance of the multi-stakeholder model in internet governance.


Audience members, including Bertrand de La Chapelle, called for updating the WSIS vision and structures to reflect current technological realities. There was a strong push for improving the Internet Governance Forum (IGF) mandate and structure, with de La Chapelle proposing a dedicated effort to discuss new institutional arrangements.


Speakers advocated for strengthening existing multi-stakeholder processes rather than creating new structures. They emphasized the importance of inclusive participation, especially from developing countries and underrepresented groups. Jacques Beglinger, a member of the board of EuroDIG and co-chair of the Swiss IGF, raised concerns about defining stakeholders too narrowly and excluding grassroots participation.


Global Digital Cooperation: Aligning Priorities and Addressing Challenges


Larisa Galadza, Director General for Global Affairs Canada, discussed the implementation of Global Digital Compact commitments and Canada’s upcoming G7 presidency, which will focus on AI governance. She framed the coming year as “an inflection point” for global digital governance.


Maria Fernanda Garza, Honorary Chairwoman of ICC, highlighted the crisis in multilateralism and the need for greater alignment in digital governance while preserving flexibility to meet diverse local needs. She emphasized the importance of business involvement in shaping effective governance frameworks.


Gender Inclusion and Accessibility


An audience member raised the critical issue of gender inclusion in digital governance processes. Panel members acknowledged the importance of this concern and discussed strategies for improving gender representation and diversity in governance discussions and decision-making bodies.


Unresolved Issues and Future Directions


Several key issues remained unresolved, including how to effectively include developing countries and underrepresented groups in AI and data governance frameworks, addressing biases in AI datasets and algorithms, and determining the appropriate division of work between new AI governance bodies and existing internet governance structures.


The discussion concluded with reflections on improving inclusivity, gender representation, and the overall effectiveness of global digital governance processes. Speakers emphasized the need for clear mandates, strategic vision, and practical outcomes in future governance efforts.


Conclusion


As the global community grapples with rapidly evolving digital technologies, this discussion underscored the critical importance of collaborative, flexible, and inclusive approaches to governance that can adapt to local contexts while maintaining global consistency. The coming year promises to be a pivotal period for shaping the future of global digital governance, with significant implications for innovation, equity, and human rights in the digital age. Key takeaways include the need for improved coordination between stakeholders, greater inclusivity in governance processes, and more action-oriented approaches to addressing global digital challenges.


It’s worth noting that the panel experienced some technical difficulties throughout the discussion, which occasionally impacted the flow of conversation but did not significantly detract from the overall quality of the dialogue.


Session Transcript

Timea Suto: Can you hear me? Okay, perfect. Thank you so much. All right. Well, good afternoon, everyone. Welcome to this session. Everything is good with the technology. Everybody can hear. Everybody has a microphone. Channel three. Should be channel three. Can you hear me now? Yes. Perfect. Okay, good. I feel like a rock star with this microphone on. So, hello, everyone. Welcome to this business-government roundtable that looks like a panel, but it will be a roundtable. We will talk about what has happened this year on all the various fronts of digital policymaking and a number of issues. And we will try and see how we move forward towards a more common digital cooperation and how we can work better together between the business and the government sectors. I don’t want to take up too much time in doing an introduction, but really just want to share with you how we are envisioning this session to go. We have set up three mini discussions within these two hours that we have together today. First, we are going to talk a little bit about the governance of artificial intelligence, what has happened throughout the year on this topic and where we are hoping to go forward. Then we will take the same stock of the conversations on data governance. So, where are we today with initiatives on data governance? What have we done so far? And where do we hope to go under the aegis of digital cooperation? And then we are looking at a couple of processes that we have all been engaged in as part of the IGF community, the Global Digital Compact and the WSIS+20 process, trying to look ahead after we have taken stock of these policy developments and see where we want to go in the context of these policy fora, a conversation we see as necessary for all of us up here on the panel, together with you in the community. So we will have two speakers per topic to start a discussion, and then we’re going to turn to all of you in the room for a dialogue on those topics.
So we won’t wait till the end to have the dialogue. We have two speakers and then you, and then again, two speakers and then you. But to set the scene, we will first have a keynote. I want to start by, first of all, thanking all of you panelists who’ve accepted to be here with us. Just a quick introduction on who we have here, in no particular order at the moment, but just the way it appears on my list here. We have Ms. Flavia Alves, Director and Head of International Organizations at Meta. We have Mr. Thomas Schneider, Ambassador and Director of International Relations at Ofcom Switzerland. He is also the Vice Chair of the Council of Europe’s Committee on Artificial Intelligence. So they will be my first panel on AI. We also have Mr. Yoichi Iida, Assistant Vice Minister for International Affairs of the Ministry of Internal Affairs and Communications of Japan. And Dr. Irina Soeffky, I hope I’m pronouncing that correctly, Director for National, European, and International Digital Policy at the German Federal Ministry for Digital and Transport. So they will be my second panel on data. And then the third panel is for the business conversations. We will have Ms. Larisa Galadza, who is the Director General for Global Affairs Canada and Senior Official for Cyber, Digital and Critical Technology at the Government of Canada; Ms. Maarit Palovirta, Deputy Director General at Connect Europe; and Mr. Amr Hashem, MENA Policy Director for the GSMA. Thank you all for joining us. To kick us off, we also have the Honorary Chairwoman of ICC to give a quick keynote and a few thoughts on where we are and where we’re hoping to go. So Maria Fernanda, please.


Maria Fernanda Garza: Thank you very much. Do you mind? Just nod your head if you can hear me, please. Thank you. Let me start with a few quick words about the International Chamber of Commerce. For those of you who might not know, the ICC is the institutional representative of more than 45 million businesses in over 170 countries, with a mission to enable peace and prosperity through trade. We deeply believe that a world based on rules benefits business and society. And this mission is particularly relevant today. In a rapidly evolving digital world, the stakes have never been higher for us to collaborate effectively to shape policies that are inclusive, sustainable and forward-looking. This year, we have seen meaningful discussions on digital policy across multilateral fora, whether it’s the G7, the G20 or the OECD, including the adoption of the Global Digital Compact and the preparations for the 20-year review of the outcomes of the World Summit on the Information Society. These discussions address a number of pressing issues, from digital divides and cybersecurity to the governance of data, AI and our digital world in general. But these discussions are happening against the backdrop of a crisis in multilateralism. Deepening geopolitical tensions and competing national priorities have made it harder to achieve alignment, and the result is increasing regulatory and policy fragmentation. For business, this fragmentation creates uncertainty, disrupts cross-border digital trade, increases compliance costs and stifles innovation. For governments, it makes it more challenging to establish interoperable frameworks that support economic growth and cross-border collaboration. To address these challenges, we must pursue greater alignment while preserving the flexibility to meet diverse local needs. A single, centralized, global regulatory superstructure is neither feasible nor desirable.
Instead, we should build on the strengths of expert organizations and forums, allowing them to contribute within their mandates while fostering collaboration across sectors and regions. So looking ahead to 2025, our priorities must include, first, data governance, establishing principles and frameworks that support the free flow of data while addressing legitimate concerns about privacy; second, AI governance, developing frameworks that deliver on what we agreed while addressing societal risks and ensuring equitable benefits, especially for underserved regions; and third, internet governance, reinforcing the principles of an open, interoperable, and inclusive internet. So at the heart of this effort must be the multi-stakeholder approach, a model that brings together governments, businesses, civil society, academia, and technical experts to develop policies that are pragmatic, inclusive, and effective. The IGF is the embodiment of this approach. It is not a decision-making body, but it is invaluable in its ability to bring together all stakeholders to share knowledge and expertise, ensuring interoperable policy approaches that meet the diverse needs of everyone, everywhere. So looking ahead to the implementation of the Global Digital Compact and the WSIS+20 review, we must follow through on the promise made 20 years ago to make the multi-stakeholder model the rule and not the exception. It is how we address the policy issues around the internet and digital technologies more broadly. So to move forward, we need to ensure that the voices of all stakeholders are heard and valued. Business has a critical role to play, not just in implementing the policies, but in shaping them through expertise and practical experience.
Today, I encourage us to have an honest, focused discussion, in true IGF fashion, about how we can align our priorities, reduce regulatory fragmentation, and prepare for WSIS+20 in a way that strengthens the next decade. So thank you all for your engagement and your commitment to these issues. Back to you, Timea. Thank you. Thank you, Maria Fernanda. I hope that everybody could hear you. Just trying to check with the panelists that everybody’s okay with the microphones and everybody’s okay with the headsets.


Timea Suto: Okay, thank you so much, Maria Fernanda, for leading us into this discussion. So on this imperative of talking openly and really in true IGF fashion, into our first panel. As I said, we will be starting with artificial intelligence and try and take a little bit of stock of the current state of play in global AI governance, trying to identify some commonalities in the initiatives that we are all aware of, but also trying to see if there are any barriers that we still need to surmount in the implementation. So to kick us off, I’m going to turn first to Thomas Schneider, and I’m going to ask you to wear two hats in this conversation. First of all, talk a little bit about the opportunities and challenges you see in operationalizing AI governance, and about the work that you’ve done at the CAI and the Council of Europe.


Thomas Schneider: Yes, thank you, and I hope you can hear me. Okay, thank you very much, Timea. Before I go into more detail, one thing that helped me understand or get a vision of the concept of AI governance is to note that AI is not the first disruptive technology from which mankind has learned to seize opportunities and minimize risks. And there are a number of parallels that can be drawn with the way that we actually managed engines. Combustion-engine-driven machines in the 19th century started to replace physical human and animal labor, through putting engines into machines that were either used to move something from A to B or were used to automate the production of goods or of food. And there are lots of parallels with the digital revolution of today, where we use AI systems to replace not physical labor but cognitive labor, mainly also in two ways: to analyze data and to prepare or take decisions. In both cases, the risks and impacts of the technology are very much context-based. And if we try to figure out how to govern AI, I think it may be worth looking at how we’ve more or less managed to govern engines in different areas of their use. And if we look at engines, of course we are aware that there’s no single engine convention, no one engine law that regulates all aspects of the use of engines. In fact, there are thousands of technical norms and of legal norms, which also differ from culture to culture in how they manage the risks of engines used in different contexts. And there are areas where we have quite advanced harmonization internationally. If you take the airline industry, of course, to land an airplane is the same at every airport in the world. But if you take cars, even in Europe people drive on different sides of the road, and so on and so forth. But there’s some level of interoperability, so that the British are also able to drive in Switzerland, although we are driving on the other side of the road.
And I think the same is already happening in the field of AI. There, too, we have institutions in the technical field, ISO, IEC, ITU, IEEE, but then also institutions like NIST in the US and CEN-CENELEC in Europe that are working on technical standards. We have a lot of legal instruments, binding and non-binding ones, starting from the UNESCO recommendation and the OECD, and the Council of Europe had already done some work before this binding instrument, and others have contributed to a number of legal instruments. And we will also have differences in how a particular society deals with risks, or whom you trust to actually cope with the risk, whether you task the government or take it into your own hands; these things will probably keep varying. And in this sense, the convention that the Council of Europe has negotiated, and I happen to have been leading these negotiations in the last two years, is one, but not the only, instrument that will hopefully help us to cope with AI, in the sense that the purpose of this convention is not to create new rights or to raise protection levels or make new restrictions. It is to make sure that the existing safeguards and protection levels of human rights, democracy, and rule of law are also applied to AI, like they apply to any other environment or technical development that we’ve seen.
It’s important also to note that the instrument is meant to secure these rights and freedoms, but at the same time to be conducive to innovation, and not to disadvantage those that are part of this structure compared to others that may not be, because we think that there is a mutual interest from the industry, from consumers, and from the states in a certain level of trust and clarity, and in rules that allow us to be innovative while being more or less able to assess risks and impacts and deal with them in a reasonable and appropriate way. If the name Council of Europe may imply that this is something European, it is first of all not the same as the European Union: the European Union has 27 member states, the Council of Europe has 46. And the Council of Europe has a system of observers, including ad hoc observers to a process, that actually allows the inclusion of countries from all regions, which can become signatories of an instrument, and that is the case also with this convention. We’ve had 11 countries participating in the negotiations, from Latin America, from North America, from Asia, and we’re in touch also with countries from Africa to join the work now. So the idea is to have a global instrument, global in the sense that you would require a minimum level of respect for human rights, democracy, and rule of law, because otherwise the whole system would not be credible, but every country that respects a certain level of democracy, rule of law, and human rights is invited to join the process. The convention is also an instrument, unlike a law, that is meant to be more future-proof in terms of time and development. It therefore needs to be a little bit more general, a little bit more abstract, but in a way that can be translated into concrete guidance for whatever the latest technology may be. So it establishes some general principles about safeguarding existing protection levels of democracy and rule of law, and goes into more detail about human rights, but remains
always at the level that it can be adapted to the concrete legal and institutional setting of a particular country, and thus helps not to fully harmonize the world, because that may not be possible, but at least to build on the shared fundamental values and legal norms that many countries share, and to align legal constructs in a way that they can be interoperable not just for the states but also for the industry and for the consumers. So there’s a common basis, and that is not just the legal text. It’s also, and I’ll end with this, a concrete instrument: a concrete methodology for a human rights, democracy and rule of law risk and impact assessment, which is fundamental also to build the bridge not just between technical standards and legal standards, but also to help operationalize something abstract like a convention into daily life, for consumers, but also for programmers and for regulators. Thank you. Thank you so much, Thomas,


Timea Suto: and you’ve raised quite a lot of ideas in your speech, so I’m just trying to pull out a couple of those. I’m noting interoperability of regulatory and policy approaches, working hand in hand with the stakeholders, making sure that we’re working towards policy frameworks that can be global in nature but flexible enough to be implemented in local contexts, and the importance of providing actual tools to make those happen. And from all of this, trying to connect to the rest of the conversation, I want to highlight one thing that you said: that we need trust and clarity, so that those who are implementing and working on the implementation side of these technologies trust the principles that we develop and actually make them part of their work. So I think that’s a good segue to Flavia, who’s going to speak next. I wanted to ask her about the voluntary commitments that industry is taking in the field of AI. And how do you see that linking up with some of these global conversations in policy, and treaties, and guidelines, and others that are happening around the world? And what is Meta’s focus in this? And how do you see that?


Flavia Alves: Thanks. Here today, I’m Flavia Alves, Director and Head of International Organizations for Meta. So first, let me tell you, Meta is committed to developing responsible AI, and we work to help ensure that AI at Meta benefits people and society. In addition to our internal processes to develop AI responsibly, we are also active at the international level in contributing to the development and implementation of AI governance frameworks. International cooperation is key to ensuring people around the world can fully harness the benefits of AI. Global AI governance frameworks promote trust and help to prevent fragmentation across jurisdictions. Given the quickly evolving capabilities of GenAI, we need frameworks that are agile and adaptable. As a company, we participate in industry bodies and international commitments and organizations. Industry bodies, to name a few: the AI Alliance, Partnership on AI, the Frontier Model Forum, and others. As for voluntary and international commitments, we are signatories to the White House Voluntary AI Commitments, the Bletchley Declaration, the Munich Accord on AI and Elections, and the Frontier AI Safety Commitments, as well as the G7 Hiroshima process implementation. We need to avoid fragmentation. Governments should build on their progress to establish consistent international positions that support the development of AI that benefits society in a responsible way. This was a key underpinning of the UN resolution on AI approved earlier this year. Similarly, the G7 leaders just committed to step up efforts on the interoperability of AI governance frameworks. Recently, our President of Global Affairs, Nick Clegg, was on a stage with the Prime Minister of Japan as they discussed the importance of the G7 Hiroshima process in bringing stakeholders together in order to harness the benefits of AI. We are also very active in the G7 task force that helped develop the survey to apply the code of conduct under the G7 Hiroshima AI process.
And in fact, we are looking forward to working with G7 Canada on the next steps of implementation of the Hiroshima AI process. As a stakeholder forum, we are active participants at the OECD. We are members of Business at OECD and of the OECD experts on AI. We were involved in supporting the development of the 2019 OECD AI Principles, and we were also very pleased to see those principles turning into parts of the EU AI Act. So this is exactly what we want to see: these frameworks evolving and building upon each other instead of fragmenting among themselves. Special thanks to my fellow panelist, Mr. Iida, for his leadership at the G7 Digital and Tech Working Group, but also at the OECD Digital Policy Committee. On multi-stakeholder frameworks, as I also said, we are part of the AI Governance Alliance. So there is no one response that fits all; we are part of all these different efforts. There is also the global effort from the UN. We participated in the UN Global Digital Compact and are looking forward to the implementation of that. We are also very pleased to see the outcomes of the work of the UN High-Level Advisory Body on AI. The report they issued was excellent, particularly on governance. We are now looking at how to participate in the Independent International Scientific Panel on AI and the Global Dialogue on AI Governance. Can you hear me? Okay. So now, one part that we are looking to see in all of these frameworks is the open approach to AI development. Through all of these initiatives, one aspect of governance that is crucial and very important for us is the promotion of open-source AI models. Open-source AI has real potential to provide access to the world’s most advanced models at a global scale. We favor this approach because in many contexts we believe it is the right thing to do. It drives innovation. It creates better, safer products that everyone can benefit from.
We also believe open source will be key to unlocking the potential of AI across developing nations. Open source has several strategic benefits. It's good for Meta: we benefit from a developed ecosystem of tools, efficiency, and proven integrations. It's also good for developers: open source AI allows developers to train their own models and control their own destiny without being locked into a single closed model. And above all, it's good for the world. Open source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn't concentrated in a small number of companies, and that the technology can be deployed more evenly and safely across society. As of today, our models have been downloaded more than 600 million times and are being used by broad communities of researchers, entrepreneurs, developers, and governments, as well as international government bodies. For example, we created the No Language Left Behind AI model, which UNESCO is using to help support high-quality translation, including in low-resource and marginalized languages, such as indigenous languages. As we converge around frameworks, it is critical that they support an open approach to the development of AI, that those frameworks are interoperable and non-duplicative, and that they enable AI to deliver on its potential, also advancing progress toward the SDGs.


Timea Suto: Thank you so much, Flavia, for that. So with these two introductory statements, both from the government and the industry side, I would like to turn first to the panelists to see if there are any reaction statements, and then to those of you on the floor, if you have questions or reactions to what we heard. Iida-san, I'll turn to you from the panel, both in your own name and also for the work of Japan.


Yoichi Iida: Okay, thank you very much. My name is Yoichi Iida, Assistant Vice Minister at the Japanese Ministry of Internal Affairs and Communications. I have been working as Chair of the OECD committee for digital policymaking, and last year I also chaired the G7 Working Group as well as the Hiroshima AI Process Working Group. Having listened to the two wonderful previous speakers, I want to pick up three points from the progress and development over the last two or three years in AI governance. The first is that, as frequently mentioned, the G7 works on AI governance very actively. We agreed on the Hiroshima Process code of conduct last year, and this year, under the Italian presidency, we are discussing the monitoring mechanism and also the brand for the companies and organizations that implement the code of conduct. We have a lot of support from the OECD Secretariat, and we are close to agreeing on the monitoring mechanism and the brand. It takes a little bit of time, but I hope we will put the Hiroshima Process reporting mechanism into action and invite private sector players to announce their commitment to those instruments early next year. Of course, this is my personal hope, but I believe the G7 can continue to move quickly together. And this year, under the Brazilian presidency, the G20 discussed AI for development. I believe this is a very important aspect of AI governance, because we always talk about AI governance that leaves no one behind, and developing countries and people in marginalized communities should of course not be left behind. AI for development is a very important notion. One of the efforts to put this into action is the second element of my presentation: the Hiroshima Process Friends Group. The Friends Group is still a kind of Japanese government initiative, but with a lot of support from other G7 member countries, and this group now covers more than 50 countries, including all EU members. 
And we cover a lot of countries, from Asia to Africa, and we are still actively increasing the number of members. We often hear from those countries that they very much welcome these opportunities, because they have fewer opportunities to listen to the discussions on international AI governance and fewer opportunities to be involved. So we need to provide such opportunities to countries, to communities, and to people in marginalized communities, and we need to realize the multi-stakeholder approach in the AI governance discussion too. That is the second element and the second development through the year. The third one is that the Global Partnership on AI and the OECD AI community, those two communities, have been integrated into one. The Global Partnership on AI was launched in June 2020 under the US presidency of the G7, and this year the two communities were brought together into a single integrated partnership.




Audience: We've got a microphone here for the lady, please. A microphone. Yeah, I am a parliamentarian from Egypt. I'm talking about AI, and there is a great concern about the data used by AI. I don't think it is really happening that we are governing AI in a way that would not leave anybody behind. The data sets are not representative of our people. This is not what is happening. I'm representing the people in Egypt, and they are not well represented. The platforms are biased. People feel that they have to go around them to express their opinions. So I have a great concern about that. Thank you.


Timea Suto: Thank you for that question and that reflection. I think it's going to be a good segue to our next conversation on data, because if we had another one after that, it would have to be on connectivity, and if we had another one after that, it would have to be on electricity. So the divides start very deep, from the very beginning, and the question is how we bridge them. But I think the spirit we hear is that we do want to bridge them, and we need to find the right partnerships on where we start closing those gaps, and how we can make sure everyone comes along even as we sit at the far end of the development spectrum with generative AI today; and who knows, tomorrow there comes quantum and other things. But I think this commitment that we see here, that I've heard also on the panel, is the first step. Would anybody from the panel like to react any further to that? Please, we've got a microphone here.


Larisa Galadza: I think it's a really good comment, and I would say a couple of things at the risk of taking away from my main speaking segment. In my last few months in this job I have seen a willingness to build a different kind of partnership when it comes to AI, and AI for good and AI for development and all those things. But it's not enough to point at algorithms and say they're not good enough, or they're biased, or the data being used is not representative. I think the partnership requires someone to say: hey, we've got data sets in our country, can you help us put them together? Can you help us make them usable? We would like to support an initiative that uses our local language, and we would like to work with you. So when you hear those of us working in country, doing what we can to try to bridge the divide, "nobody left behind" wouldn't be the language that I use, but it's for the common good that we're looking for partners who say: yes, we've got a language, and we've got models, and we've got skills, and we've got data sets; we need compute, or we need someone to do some translation for us, or whatever it is that's required. That's the kind of partnership Canada is going to be looking for as we head into our G7 presidency.


Timea Suto: Thank you for that. Any other comments from the floor, or is there anybody online? If not, then I'm just going to give the microphone back for one minute each to Thomas and then to Flavia to close up this segment, and then we move into our data discussion, which I hope will be as exciting as this one was.


Thomas Schneider: Thank you. I'll also react to the parliamentarian from Egypt. I think it is important that we try to align these different initiatives and instruments and make them interoperable, but also that they help provide solutions for the ones that are not yet part of them. Both. And I also invite you to come and join the Council of Europe convention, but that is the normative, legal part of it. The HUDERIA is supposed to help all countries do risk and impact assessments. And of course the data component is an important one: if there's no data about your people, then the algorithm is of no use. So there are several aspects to this, and discussions like these are good to raise awareness of what the elements are, where we have made progress, where we need more progress, and what the priorities are. So thanks very much for this.


Flavia Alves: Let's see if we can hear you with that microphone, Flavia. Yeah, take the one that's, yeah, that's the one. Thank you. Yes, yes, yes. So first, one thing I want to make clear is that our project No Language Left Behind is about translation; it's not necessarily about data sets. Yeah, I'll keep talking, I don't know if anyone can hear me. No. All right. Sorry, guys, I have 30 seconds. With regards to data sets, we agree with you, and we have open source models that can actually help. We have partnered with the Gates Foundation and have funded projects in Africa and the Pacific. Echoing what the delegate of Canada said: let your input come to us, so we can see what types of data sets we can work on out there. I'll stop here because it seems the microphone is not working. Can we take a two-minute break now and try to find a microphone that works for the panel? Can we try that one then? Okay, so these two, we can, I think, get back to you. Yeah, third time's the charm. Hello? Yeah, okay. Yes, it works for now. So yes, please, that's what we want. We want to work together. That's why it's an open source approach, with stakeholders, researchers and developers, countries, governments, and international bodies, so we can help develop AI, particularly open source AI, that is an equalizer. We want to make sure open source AI, or AI in general, gets to everyone, and that we don't end up with the same divide we had before with connectivity, where people were left behind. Of course we need connectivity to get to AI, but at the same time we want to bridge that divide, if possible. So, back to you.


Timea Suto: Thank you so much, Flavia. And as I said before, it's a good segue into our next discussion. They can hear me with this one, I think, so this one is okay. Yeah, okay. So, we're going to talk a little bit about what we have done as a global community this year to try to advance the conversations on data governance. What are the challenges that we faced? Where can we still go, or where do we need to work more to expand on this? And what can we do to make sure our approaches to data governance hold together? I'm very sorry that we have these microphones that don't work. Maybe there is some newly developed technology that we can use for this, but we'll bear through it; next year at the IGF we'll all be transported into virtual headsets. But until then, let's talk a little bit about data: where we are with our data governance issues, what has happened, and where we hope to go. I'll turn first to Iida-san again to talk a little bit about his insights on the operationalization of data free flow with trust, what has been done to find enablers for trusted government access to data and for privacy protection, the considerations around transfers and sharing of data across borders, and where he still sees barriers that we need to overcome. Thank you very much. We can't hear each other.


Yoichi Iida: Thank you for this very complicated and difficult question. I'm not quite sure I can answer appropriately, but I'd be happy to share what I know from my experience of the developments over the last year. The Japanese government proposed the concept of data free flow with trust, which encourages the relevant stakeholders to make data flows across borders as free as possible while appropriately ensuring trust regarding aspects such as privacy protection, intellectual property protection, and probably other human rights protections. This concept was discussed over the years, and early this year, if I remember correctly in February, the OECD launched a DFFT expert community, with some 200 experts getting together to discuss how we can promote data flows across borders while ensuring legitimate protection of human rights and other freedoms and rights. This community is now discussing three pillars. The first is promoting data flows across borders while of course ensuring the security of data and privacy protection. The second pillar is privacy-enhancing technologies, often called PETs. There are a lot of different types of technological solutions to protect privacy when we move data across borders, and this group is discussing how we can enhance and deploy such technologies to promote data flows across borders. The third element is regulatory transparency around data flows. Different jurisdictions are taking different approaches to data flows and data protection, and just as people discussed with AI, data policy also needs a lot of interoperability. This group is discussing how we can promote interoperability across different jurisdictions and how we can ensure transparency about data governance frameworks, including regulations. 
So that is the development regarding data free flow with trust. One of the important elements here is what we call trusted government access to data held by private entities. This is based on the declaration adopted by the member countries of the OECD at the end of 2022. The declaration sets out principles the government has to follow when it accesses data held by private sector entities, even when it wants to use the data for law enforcement or other legal purposes. Different countries have different systems for when law enforcement bodies, the police, and other entities want access to private data, and we discussed what the common elements are and what the gaps are. This group is also now discussing what the next element would be. One point is that this is an agreement among only the 38 member countries of the OECD, so they are now trying to approach countries outside the OECD to understand what the OECD members commonly follow and what the potential gaps or potential commonalities with countries outside the group are, and probably to try to find global commonality and consistency regarding government access to private sector data. From a similar perspective, the OECD also has a recommendation on data sharing and data access, and it is now being discussed how to put its practical implementation into action. So quite a lot of approaches are being taken now, and the main point is, again, interoperability across different jurisdictions, while we protect commonly held, perhaps universal, principles across different countries, different communities, and different cultures around the world. There is a kind of presumption that data should be used to produce as much benefit as possible for the people, for the common good. 
So I think, again, we always talk about no one being left behind. It is always very difficult to achieve, but continuous, endless effort is very important, and we should never forget this concept of no one left behind. That is what I can share at this moment, and I look forward to further discussion. Thank you very much.


Timea Suto: Thank you. Two things I pick up from your input, echoing what we heard in the AI conversation, are the need for interoperability of approaches to policy and regulation, and the need to avoid fragmented approaches, in the spirit of wanting to make sure, first of all, that we create an environment where all stakeholders and businesses have certainty and reliability about where we're going, but also that everybody is well represented and is part not just of the services themselves but also of the governance conversations around them. So I'm going to turn to Maarit now, from Europe. I hope you will tell us a little bit about the European approach to this, but also how industry in Europe sees the conversation on data governance developing. The floor is yours.


Maarit Palovirta: Super, thank you very much, Timea. I hope you can hear me loud and clear. For those of you who don't know Connect Europe, we are a trade association based in Brussels, and our members are Europe's leading connectivity operators. Just to give you an idea, our members today serve about 270 million Europeans with different types of connectivity services. Now you might be asking yourself, well, what is this lady doing here in the data session? She should be in the connectivity session. But of course there is a very close link between connectivity and content: the data travels in the networks that our members are running. Also, the provision of connectivity and network services by European operators relies on cross-border cooperation with various third parties, whether vendors, partners, or other types of service providers. And to make things even more complex, cloud computing has certainly brought another aspect into data governance, in that the data traveling between the networks is then stored and processed in the cloud. If you look at the ecosystem, not only the specialized cloud service providers but also the operators are increasingly involved in the cloud business, in edge cloud, et cetera. So there is a kind of interdependence between the different players, and it's very important that we have a data governance model, and hopefully some level of interoperability, to make sure that costs, especially costs for the operators and the different parties, are kept in check. Now, I'll talk a little bit about the approach in Europe, quite briefly. I think Europe has been leading in data protection in many ways, because data privacy and protection is something that both our policymakers and our citizens hold very dear, and I think we have a solid policy framework within Europe. 
And now, more recently, we also start to have a data framework that goes beyond Europe, looking at third-party relations; but let's look within Europe first. We of course have the GDPR, the General Data Protection Regulation, for personal data, and I believe this is quite well known globally. We consider it as really being the baseline, the basic rules for data in Europe, and while the GDPR is not perfect, we do consider it a good example, globally speaking. Then there are rules maybe not so well known internationally, in what we call the ePrivacy Directive, a kind of historical legacy piece of regulation which imposes some sector-specific, very restrictive rules regarding data management, especially on telecom operators. We believe that today this type of sector-specific rule has become out of touch with the data economy. And here we really come to the question that, while we protect, we need at the same time to start promoting innovation; this one, I don't think, is such a good example coming from Europe, if you like. Putting it into today's context, we believe that when we look at rules on privacy, all digital players should be subject to the same horizontal privacy rules, as they often process the same kinds of data, for example localization data, and we can think of many digital services players or even car manufacturers that process such data today. So we believe that a horizontal solution would be the most effective one, and hopefully this would also level the playing field a little bit, or reduce the fragmentation in terms of data governance frameworks. And then, more recently, we have some new rules on cross-border data. 
Especially when it comes to cross-border government access to data, the Data Act that was adopted earlier this year includes provisions that, for example, require cloud and other data processing services to prevent third-country governmental access to, and transfer of, industrial data held in the EU, if such transfer or access is illegal under EU or member state law. This complements the GDPR in many ways, and we have welcomed it as European telecom operators, as it provides some level of legal certainty for our members. On the enabling side, the EU has concluded various free data flow agreements, including with the US, which is a major one. From the industry side, we believe these agreements are very welcome: they bring more legal clarity around data, and also safeguards for businesses and citizens. As a final point from Europe, it is also very important to note that it's not only about policy frameworks or regulation; it is also about technical solutions and interoperability. For example, in a slightly more marginal context, there is ongoing work in the EU on common cybersecurity certification schemes, which can be seen as helping to limit foreign government access to EU data and helping to secure EU data. Now, going to your question about risks of fragmentation, Timea, I think the risks concern our common global, open, and interoperable internet, especially at the level of technical standards and protocols. There are many risks, as we have already heard, but I'll mention two examples that have certainly come up in the EU context lately. One is the evolving global connectivity infrastructure and ecosystem that carry our data traffic and store our data. 
I already mentioned the cloud, but today we also talked a lot about submarine cables and satellites becoming really part of the connectivity ecosystem. Our members are involved in these activities too, but we then also need to consider interoperability and the legal certainty of carrying our data through this new and evolving connectivity value chain. And of course, it's also a question of resilience. Now, we're not asking for regulation on this; we're just hoping that when we look at the data governance framework, we take a holistic look at these things. The second thing, if I may, is data sovereignty. This is an example that has come up quite a lot, and we see it also in the EU context. Of course, different parts of the world have the legitimacy to try to protect their own businesses and citizens with different kinds of data governance regimes. But there can be a kind of protectionist or commercial incentive to create these data areas, and when they go too far, they start to stifle businesses, including businesses from that very region. If everybody starts looking at things too much from a protectionist perspective, businesses will face increased costs and also legal complexity. So here we would call for a balancing act, while of course, in the European context, privacy and data protection are very important. So, very briefly, to conclude on the way forward: we believe that innovation and global digital commerce are important and also need to be protected, but this needs to be very clearly in balance with rights and values, in the European context especially data protection. 
We believe, from the operator side, that this is best achieved through horizontal, not sector-specific, policy frameworks that are also flexible, ideally future-proof, although that is a challenge for all policymakers, and we are not, of course, jealous of their role, and also technologically neutral. So I would maybe stop my initial remarks there. Thank you.


Timea Suto: Thank you so much, Maarit. To the conversation we had on AI highlighting the need for interoperability, I think we can add from these remarks the need for a more holistic approach, so that we look not just at the various sectors of the economy but also at the sectors of regulation that we have; sometimes regulation in one area might impact regulation in another, and we don't realize those impacts. And then, of course, there's the need to harmonize across regions. As Maria said, and she had to leave, unfortunately, for another speaking engagement, what she was emphasizing in her opening remarks is to try to reinstate trust in global cooperation and multilateral cooperation as well. Otherwise, we will get into too much of an inward-looking situation, which is not going to be good for what we want to achieve, which, as we were saying just earlier, is including everyone in the data conversation. So with that, are there any comments or questions from the floor regarding the data segment of our panel? Would anybody like to say anything? Can we please get a microphone to the lady? Thank you.


Audience: Thank you. I will try to be quick. It's just about the data. I think that's why we are fighting to get legislation, the EFD, you know, on data and information, and not just in Egypt; in several countries in Asia and other places the concern is very high. But thank you.


Timea Suto: I heard your concern, and I share it. Yes. And I agree.


Yoichi Iida: It's about incomplete data, and our capacity. I think that's the question, and I agree; it's a very important issue that we all have to think about. Technically, though, there are ways we can share. If I take an example, one particular application is federated learning: you do not transfer the data; you send the model to somebody to train it, and then you bring back the weights, so you do not need to transfer the data at all. I think it's important to think about and look at such approaches.
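The federated learning pattern described here, send the model out, train locally, bring back only the weights, can be sketched in a few lines. The following is a minimal illustration, not any specific framework mentioned in the session: three simulated clients each hold private data, each trains a small linear model locally, and a server averages the returned weights (plain federated averaging with equal client weighting, an assumption for simplicity). All function and variable names are illustrative.

```python
import numpy as np

def local_update(global_w, X, y, lr=0.1, epochs=20):
    """Train a linear model on one client's private data; only weights leave the client."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """One round: distribute the current model, collect trained weights, average them."""
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    return np.mean(local_ws, axis=0)  # simple equal-weight federated averaging

# Three clients, each holding data that never leaves them
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(30):  # server-side loop; raw X and y are never transferred
    w = federated_round(w, clients)
```

After enough rounds, the averaged weights approach what joint training on the pooled data would have produced, which is the point of the technique: the model learns from everyone's data without the raw records ever crossing a border.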


Audience: Thank you. I come with a big problem: access to electronic evidence for law enforcement. I'm really surprised that in this whole series of discussions about free data flows, the question of access to information for law enforcement has hardly been addressed. It is extremely important around the world, and the question is how it can be realized. Within a country, and especially in areas like health, we need to think about human participation. As long as we don't address this question directly, together with the people and organizations in situ that have the answers, I wonder how accessible the data really is.


Timea Suto: Seeing no more questions, let me summarize what was raised: we've had a question about how we process data when we talk about governance, and a question around whether we are talking about data transfer or access to data, and how law enforcement fits into that sphere. I'm sure you can answer those questions; thank you for volunteering. When we want to put such a notion into practice, we recognize there will be a lot of differences and gaps to be addressed across jurisdictions. And yes, government access is a very difficult question, and there is also a cybersecurity convention or something like that. The Cybercrime Convention.


Yoichi Iida: The Cybercrime Convention, yes. So these are very sensitive and very difficult questions across jurisdictions. This is something we have to address, and there is no single answer coming overnight. Actually, when we started the discussion on government access to data, we were very much surprised to see different people gathering: we are digital economy policy people, but we also had a group of intelligence people, police and law enforcement people, and even some lawyers and people from the courts. Data regulation and governance always have widespread aspects, and this is something we have to tackle all together; the answer is not always easy. On the bias of data for AI, we are also struggling with the gap. Japan is struggling with the development of an AI ecosystem: most of the technologies are based on English, and Japanese is a very small language, so we are now trying to develop multilingual models using different kinds of marginal, small-scale languages together, and we are working with different partners from Asia and other regions to develop our own language models, reflecting differences not only in languages but also in cultures, which is very important when developing language models. So we share a lot of difficulties and challenges, and we hope to tackle these challenges all together, not only as governments but across different communities. Thank you very much.


Maarit Palovirta: If I may, let me address a little bit Bertrand's question about evidence, or let's say the regulatory obligations that operators, for example, have vis-à-vis law enforcement. It's a very tricky topic. If we look at it purely from a data economy perspective, without any societal responsibility, of course it is a cost to operators; it complicates things. Recently we discussed the lawful intercept obligations within Europe, and an operator from one of the bigger European countries said it costs them 15 million a year just to comply with the lawful intercept obligation. So it is not nothing; it is a big responsibility and obligation. But at the same time, for society to work well and to put criminals in jail, this may of course be necessary. How you fit it into the data governance framework is, of course, not an easy task, and from our side we wouldn't want to go judging the rightfulness or wrongfulness of it for the moment. It is the way it is.


Timea Suto: Thank you. Thank you for that. I see that we have no comments online, and I don't see any hands up in the room, so I think we can move on from the data conversation. I think we've thrown up quite a few highballs in these first two rounds, and these are the topics that we see not just individual governments or stakeholders struggling with; when we look at the global level, at the United Nations or various regional fora, we see quite a lot of struggle over how we actually make sense of the governance of all of this. How do we take all these issues, try to connect those who are working on the various policy areas and make them more aware of one another, and then what are some of the other structures that might or might not be necessary to help not only with practical implementation but with the global governance discussions around all of this? We had quite a significant process this year where Germany was one of the pen holders for the Pact for the Future, a huge feat in multilateralism on many issues, but we'll focus on digital today, complemented by the Global Digital Compact.
And then we have a number of multilateral fora. We've mentioned the G7 so many times, and Canada is now taking up the baton moving forward, so how do we deal with all of this in that context? We also talked about taking the work of the G7 and broadening it in other fora, so how do we move towards that idea? And then, of course, where does the private sector come in in all of this? That's what we're trying to figure out in the remaining 40 minutes or so that we have on this panel, also keeping in mind that we are at the IGF, a product of WSIS that is coming up to a very significant milestone next year. In this context I'm going to ask Irina first to take a little bit of stock of what has happened this year with the Pact for the Future and the Global Digital Compact, and how we move all these discussions ahead, both in the GDC context and looking ahead to the WSIS Plus 20. Over to you, Irina.


Irina Soeffky: Thank you very much. Indeed, internet governance is very important to us, and maybe it's not too much to say it's the basis of everything that we've been discussing so far, so we are really digging down to the core now. Important decisions have been taken, others are still about to be taken, and I think a lot is at stake. The internet as we know it is working incredibly well on a technical level, also through the pandemic. At the same time, there are very challenging developments taking place on the internet: deep fakes, misinformation, lots of phenomena that are deeply troubling, probably to all of us, and that really go to the root of our democratic societies. All this makes clear that we need internet governance also in the future, and maybe not only internet governance but digital governance, though I know an entire academic debate could turn around this question, so maybe I leave that out for the moment. What I have to say is that, for us, at the core of internet governance really is multi-stakeholder collaboration, and the IGF is really the prime example of how this is done, as we can experience here today and in the coming days. This is really the key for us, the basis that everything else turns around. We think we need, on the one hand, to protect it, because it is not a given that internet governance in a multi-stakeholder way is going forward. On the other hand, I think it's also necessary to develop it further, because nobody would believe me if I said it's already perfect in every single way and there is nothing to fix or develop further in any regard. And this has also been, so to speak, the guideline for the processes that have already taken place, which were very important for us, and for the processes that are coming.
We were very much involved in the Pact for the Future, a little less in the Global Digital Compact, but that such a document, such a compact, exists is really a major achievement: countries at UN level managed to agree on a document. And we are happy with what we have as a product now. It's not yet implemented, and these implementation processes will also be very important, but to have such a document, I think, is of very big importance. Obviously, if we had a vote now, probably not every single country, person or stakeholder would be happy with every single bit of the Global Digital Compact, but that is probably not achievable at all. So we are happy with what came out, and that something came out at all; it wasn't clear from the start, or along the way, that we would succeed. There are some elements in there that are particularly important to us, and again they go to the core, as I said: the multi-stakeholder model is mentioned; the IGF is mentioned as an important forum of multi-stakeholder discussion and involvement. We have something really new, which is that internet shutdowns are not acceptable. So there are a lot of important things that we agreed to on a very broad basis. As I said, implementation will be important now. And, as is probably the rule at such a high level and with so many partners involved, not everything is clear yet; we really need to work hard to figure out how all this works and how it can all fit together. This is maybe the major challenge we see: it would not be helpful to have a bunch of new institutions and different fora discussing the same topics with different players, because I think that makes things more difficult, not easier. And in particular, if we want stakeholders really to participate, it would hardly be possible — it's hard already for governments to cover everything.
Yoichi already hinted at it when he was talking about AI governance, which is really a complex picture by now, but it's even more difficult for stakeholders, civil society for example, to really cover all those different fora. So we really need focus. And as I said already, we really do think that the IGF should not only be one important forum for internet governance but the premier forum for it, and we try to work hard on that. We have been having lots of discussions with different partners and stakeholders on what it would need, how we could develop it further, and what it could look like — the IGF or maybe even a DGF, a Digital Global Forum. This is really a process that we want to work hard on. As you said already, this is one milestone, a major milestone I would say; implementation remains to be done and is a fragile thing, and things can go well or not so well in this regard. And then there is another process coming up, the WSIS Plus 20 review. This is also very important, because there we have to decide how we will go on with the IGF and how it will look in the future. Therefore we as a government try to be as involved and engaged as possible: next year, which is quite around the corner, we will become a member of the Commission on Science and Technology for Development, which has quite an important role to play in this process. Something we also find important — and I think there is probably still room for manoeuvre or improvement if we look back at the GDC process — is to really involve stakeholders. If we talk about multi-stakeholder collaboration and how it can be done in a meaningful way in the future, we certainly cannot do it without involving all those players. So this is something we really hope to do, definitely not alone, but together with partners.
And I think the more conversations we can have in this regard — also on how to really convince people and governments, but also those outside governments, that this is an important moment in time, that it really goes right or wrong now, not just for us or for something we find important, but for something that is so basic for everybody in the world — the better. We are really looking forward to doing our part of the job, but also to working together with partners and stakeholders to find a good solution there. For that it's also important to have these conversations in as many fora as we can, to really get a sense of what is important and what is maybe missing. That is also why we are very much looking forward to the Canadian presidency of the G7, because I think this is one of the fora where we can really talk and also strategize about how we can get this right. With that, I think it's best if I just


Larisa Galadza: give you the microphone. Thanks. It's really good to be here, and it's really good to be learning: multi-stakeholderism really is a great education about what happens in all the different parts of society, business and government, and for all people, in managing this resource that we share collectively in the internet and digital space. I'm not going to repeat all the progress that has been made, because you've heard about it here from people who have been part of the process or even leading it. The contribution I'd like to make is from the perspective of where I sit. I am Canada's senior official on cyber and critical tech, and I also have responsibility for democratic resilience in the Government of Canada. I sit in the international security branch of the foreign ministry — I'm not a digital official or anything — and I work for our political director. So my perspective on progress and the year ahead is from that seat, and also, being in Canada, from the seat of the next president of the G7. I'd like to call our attention to the context for the work over the next year. First, I think we're in a context where, actually, it ain't broke. The internet is working a heck of a lot better than the technology in this room today, and it wasn't until I started this job that I realized just how much complexity there is to making it work. We have many sayings about this in English: if it ain't broke, don't fix it; don't throw the baby out with the bath water; leave well enough alone. That's not to say, as my colleagues have said, that there isn't room for improvement, but we are in a context where the system works. I know someone who lives in Kiev; I work out with him three times a week. The Zoom works, the encrypted texting works, and I can send him money once a month.
And when I tried to send him money to an address of a bank in a part of Ukraine that is occupied, I wasn't allowed to send it. So we see every day in our lives that the internet works. That's my first point. The second thing that has struck me is the extent to which geopolitical strategic competition is playing out at the most strategic levels, and also at the most practical levels. What I'm very heartened to hear is that when it comes to the practical, it works. This is an area where there is still trust, where countries that have very different ideas about how this universe should work still manage to cooperate, because it's at that very practical level that we see the benefit of the current system to all of us. We feel the benefits. The third bit of context is that there's real urgency to this work. Serious urgency. Why? Because the technology isn't waiting for us to figure it out. It's not waiting for our policy frameworks or our legislative frameworks or for us to figure out how we're going to do things together. And the technology is dual-use technology. As Thomas said, we've done this before, but never, I don't think, at this speed of evolution of a dual-use technology. It is imperative for us to deal with technology like AI. Fourth, as the opening speaker said, we are living through a crisis of the multilateral system. Over the next year, the multilateral system needs to figure out some of these questions; whether that system breaks or survives will be borne out in these discussions. The last thing is that, even in something as hard-edged as the international security world, there's a real recognition of the importance of the multilateral system:
that what we're talking about is a global public good that must be shared; that our security and the stability of the world are not assured unless we share the benefits of the technology, of connectivity, of the internet. So ensuring participation of the global majority at all governance tables is really important, and doing development differently with the new tools we have is the only option. The year ahead has lots of opportunities. Yes, Canada will continue to focus on AI in particular, and we really thank the Japanese and the Italians for making significant progress and working with the urgency that the subject matter demands. There is WSIS Plus 20 — you all know better than I do what's at stake there. There's another IGF. Then there's the definition of how we're going to implement the GDC commitments on AI, the scientific panel and the global dialogue; how those are defined matters, because form will define function, so this is a real opportunity. In all of that, Canada will be active because of the urgency of the situation. We will uphold multi-stakeholderism fervently. We will look to do things that advance the SDGs. We will advocate for transparency in all of these processes, and we will do it because we recognize that trust is absolutely critical: trust between every aspect of the puzzle that sits on this stage, trust with our citizens, trust between different parts of the world. I would just conclude by saying this: the next year is really an inflection point. There is an awful lot going on. It's going to test our resolve. It'll make or break some of the governance that has served us very well. And I think it's going to be a year where geopolitical strategic competition continues to play itself out. Managing that — the pace of change, the urgency, the demands and the commitments — will continue to be exhausting. But we've done it before. We'll just keep doing it.


Timea Suto: Thank you so much, Larisa, for that. I'm going to ask you to please pass the microphone to the end of the row. Thank you so much.


Amr Hashem: Thank you. I know that I'm the last speaker, so I will not take much of your time, especially as I'm an engineer. As engineers, we usually don't manage so many words, but we like to talk in numbers. So let me share with you some numbers about the mobile industry that I am representing in this gathering. Currently, 96% of the world population has mobile and mobile broadband coverage. 4.6 billion people, almost 57% of the global population, access the internet primarily through mobile broadband, and I believe this percentage will go even higher in developing countries, where mobile coverage is much wider, much more reachable and more affordable to people compared to fiber connectivity and other means of internet access. Yet we are facing a challenge: the original WSIS document reflects the time it was written, in the early days of the information society, so it doesn't really recognize the key role that mobile has come to play in communities and businesses around the world. On the SDGs, again using numbers to describe the impact the mobile industry is contributing, our figures show that the mobile industry has achieved almost 58% of its potential contribution to the SDGs. We tried to measure in which dimension we were most impactful, and we found that it was SDG 9, on industry, innovation and infrastructure, mainly driven by the reach of mobile. By the end of 2023, the share of the world population without mobile broadband coverage was less than 4%, about 350 million people, while 57% of the world population were actually using mobile broadband, as mentioned before. And the use of mobile is not limited to connectivity or accessing the internet:
financial services are a major area where the mobile industry has made an impactful contribution, with almost 3 billion people, more than 50% of mobile subscribers, using mobile money and mobile banking services by 2023. Yet we hope that throughout the process there will be a real multi-stakeholder approach to the way forward in order to connect the rest of the people. While we, the mobile operators and the mobile industry, are projected to spend about 1.5 trillion dollars between now and 2030, there remains a gap between the projected investment and that needed to realize governments' digital policy objectives, especially when we take into consideration the growing need for broadband. The broadband we talk about now is a completely different experience from the broadband we will need in 2030, when the metaverse is realized and these new technologies arrive. So in order to realize these technologies, we need to think about new means of financing this connectivity and about creating an environment that will enable that. We hope that the WSIS Plus 20 review will recognize this and will encourage governments and other stakeholders to help bridge this investment gap, in order to really leave nobody behind. Thank you.


Timea Suto: Thank you, Amr. So we've had three speakers take stock of the governance conversations that happened this year and look ahead to WSIS Plus 20. What I've noted as you were talking is how we look ahead and what we would like to see done. You've all talked about more cooperation: fewer new structures, but better coordination of what is already out there. You've talked about the importance of informed policymaking and the need to have stakeholders contribute to it. You've talked about making sure we don't throw babies out with bathwater, that we preserve the core of what is actually working, the technology, and adjust our policy and regulatory frameworks so they enable the technology to keep working rather than pose extra barriers. And you've all mentioned multistakeholderism: multistakeholder input to policymaking, the multistakeholder approach to policy conversations, but also to implementation, whether that comes in forging partnerships, in making the necessary investments, or in making sure that the policy frameworks we come up with enable the technology to work, enable those who don't have a voice to join the conversation, and enable the innovation we need to balance against the areas where we want to address risks. So a lot of rich ideas are coming out of the panel, but we have about 15 minutes to hear from the audience on how you see the road ahead. Are these the right elements to take away from this panel discussion into the WSIS Plus 20 process and the GDC implementation? Do you have other ideas? Do you have any remaining questions for our speakers? I would like to turn it over to the audience.
And with that, I hope our technician colleagues are ready to share the microphone with you all so that you can speak. Are there any questions or comments from the floor? Raise your hand and we'll get you a microphone. Yes, there's a question from Jorge and then from Desiree. A microphone please, there. Thank you. There in the back. Yes, I think we'll share the one up here.


Audience: I hope you can hear me okay. I just wanted to break the ice, but I saw that Desiree also raised her hand, so I'll be very brief. This is Jorge Cancio, Swiss government. I think it would be really great to pass a very clear message coming from this IGF, and I think we've been hearing it in your panel. The first part is that we are still very deeply committed to the vision laid out in WSIS of a human-centric information society, a digital society, and that we want to work towards that goal. That we have to update, of course, the substance of what we agreed 20 years ago and reviewed 10 years ago, looking into connectivity and what it means today, the human rights implications of our digital world, data governance, and AI governance — and you cannot have one without the other. And that we are eager to update also the structures we have to govern this: that we have a very good basis with WSIS, that there's a good impulse with the new chapter written by the GDC, but it's just a new chapter in a book we've been writing for 20 years; that we are ready to be innovative in how we update the multi-stakeholder way of doing things, with good ideas coming, for instance, from the Sao Paulo multi-stakeholder guidelines agreed earlier this year; and that it's very important to commit to a non-proliferation of processes, because more processes and more governance kill inclusivity — you wouldn't want two spoons and a Swiss army knife. So let's be functional, let's respect the fora, and let's avoid duplications that are unnecessary. Thank you. Thank you, Jorge.


Timea Suto: We have the question there from Desiree first, and then Bertrand.


Audience: Thank you, my name is Desiree Milosevic-Evans. I listened to the takeaways of what's been discussed, and much of what you've suggested seems like common ground, but I'm also looking forward in terms of reviewing the WSIS action lines, bearing in mind that a lot of good things are already laid out in the Tunis Agenda and the Geneva Plan of Action. What does the panel think about gender as an issue? Was it there in 2003 and 2005? Because we're talking about inclusivity and digital inclusion, I wonder whether that's something the panel thinks should also be discussed. The WSIS action lines are good, and we'll see how much progress is made, but I wanted to single out that particular issue. Thank you. Thank you, Desiree. Bertrand, just behind you. Hi, Bertrand de La Chapelle again. There are a lot of issues that are going to be addressed in 2025 in the context of the WSIS Plus 20, among other things. I like very much the comment that it's going to be an inflection point, and I hope it will be. We're taking stock of 20 years, and some of us have been here for those 20 years. I even have here the bag from the first IGF in 2006, which is a testimony to the sustainability approach they adopted, by the way. Kudos to Markus Kummer — a private joke for those of you who were there. But more seriously, among all those issues there is one topic of particular importance, which is the future of the IGF: not just its continuation, but how we improve it and restructure it.
Isn't it time to have a serious discussion, maybe a little like the Working Group on Internet Governance back in 2004 — a dedicated effort, not just a series of reports, some of which were very good but, let's be honest, most of which were filed away the moment they were issued — a group that could, after the WSIS Plus 20 review, seriously discuss what the new structure, the new institutional arrangement to be set in place, should be? We know, and I finish with this, that there will be no agreement from the start by all governments, and therefore I think there is a particular role for the governments who have made the effort to host the IGF — and that includes Japan, Germany and Switzerland — and for the countries who will hold the presidency of different groups, and that includes Canada, to put their weight behind an effort that could take place at the IGF in Norway in the middle of the year, to send a clear message to the drafters of the resolution in the UN General Assembly that there needs to be a paragraph saying it is time for a serious discussion on the new mandate. We would then have 2026 to really discuss this in a multi-stakeholder manner, and not just in New York among governments. It is an important element of the agenda; it doesn't exclude all the other topics, but I'm just taking the opportunity of having a few key governments here on the panel to raise the idea.


Timea Suto: Thank you, Bertrand. Jacques, please, and then we’ll go back to the panel.


Audience: My name is Jacques Beglinger, and I'm speaking here with my hat as a member of the board of EuroDIG and co-chair of the Swiss IGF, and I would like to focus my question on the definition of a stakeholder. I found it pretty demanding to follow everything that has been said in the past two hours, and explaining this to stakeholders might be quite difficult. Now, whom do we see as a stakeholder? Are stakeholders just different groups represented by the topmost umbrella bodies, or is, in the future of the IGF, the little citizen — the individual citizen, the corporate citizen — still at the core of the multi-stakeholder process?


Timea Suto: Thank you, Jacques. So I'm going to turn back to the panel. We've had four interventions, four questions, that I think all point in the same direction — as Bertrand put it, the future doesn't have to equal only continuation; the future needs to mean some sort of progress or improvement. So how do we take that? This is going to be my final question to the panel, and you can pick and choose which question you want to elaborate on: what do we take from our discussions as a hope for improvement as we look into the future? How do we improve the existing WSIS action lines? How do we improve the inclusion of various stakeholders, or our governance models? How do we improve the IGF's mandate? And how do we improve inclusion, especially gender inclusion? So what are your one-sentence takeaways — I know the X character limit is longer now, but let's say 140 characters — what are your short takeaways with a view to improvement as we move to WSIS Plus 20? I'm going to start with you, Flavia, since you volunteered.


Flavia Alves: Yes, sure. I want to start with the question about interoperability, about getting all the governments and groups together at the IGF. This is a great idea. In fact, from what we understand, Norway is planning to do that, similar to last time: to put a group together to prepare some documents to discuss at the IGF. As I said today, there are several international frameworks around AI, but not only on AI — there are international frameworks on several other issues. So it would be crucial for these groups to get together to see how real interoperability can exist, even of the working methodologies, because we don't have enough time for people to engage substantially in all the various issues. In addition to that, I think it will be important for us to really give a voice to all stakeholders. Frameworks where you invite stakeholders only to speak in a session are not necessarily inclusive; we need to give stakeholders time to comment on what is being proposed, actually get feedback, and then work it out together. A good example of this is NetMundial, of course — a document that has existed and was developed again earlier this year in Brazil. But how can we actually implement that? So: giving a real voice to stakeholders, and getting the groups together — the hosts of the G7, the G20, the OECD, and the IGF — what is it that we can do at the IGF together to address that?


Timea Suto: Thank you, Flavia. Any volunteers? If not, I'm just going to ask you to pass the microphone. Yeah, Larisa.


Larisa Galadza: On the question of committing to a non-proliferation of processes, I think it's really incumbent upon every process to be clear about its comparative advantage over all the other processes, because the decisions about which processes survive aren't going to be made by people who have participated in all of them — we're all stakeholders in our own understanding. If the comparative advantage of a given process isn't clear, then it won't survive, and perhaps it shouldn't. So it's incumbent upon the IGF, in this case, to make sure that is clear. And I think the IGF should be as open as possible — not just the topmost bodies, but whoever wants to come. It's a low-stakes environment: come and participate; it doesn't crowd the space, there's lots of room here. In terms of the future of the IGF, I really like the idea that we talk about what the new mandate is, because it sets the default not at "should it continue?" but at "it is continuing — how do we make sure it's fit for purpose?" So however that goes forward is important. As for gender, at this point gender should be mainstreamed through everything, and that's actually what we should be aiming for; that's how to future-proof the issue. In Canada we have a model that analyses everything through a gender lens without putting up a lightning rod that says "gender".


Irina Soeffky: Yeah, thank you, I can go on. Well, I have a lot of sympathy for many of the suggestions I heard — to really have a deep discussion of where we want to go, who the relevant actors are going to be, and how we can achieve this in practice. And I do agree that it is probably a hard thing to do in the next year, or even less than a year, and this is maybe the note of Realpolitik I want to finish with. I think our minimum goal should be that there is an IGF with an unlimited term, full stop. Looking back at the GDC discussions, we have seen that there are very different, strongly held ideas, and it's all about alliance building. So we should have reasons, and probably we can't convince others without having at least a glimpse of a vision. But I also think we shouldn't overburden the discussion ahead of us; we should really focus on the core of what we want to achieve, as I said. If the trick is done by having an ongoing process afterwards that really digs into the details, that would be wonderful. But, especially having followed the New York discussions quite closely, I'm indeed a bit worried that things could also turn in the wrong direction. We want to avoid that, so we have to be visionary but also tactical about how best to move ahead, build alliances and convince partners. I'm not decided on that yet, but I think we really have to think hard about how to preserve the multi-stakeholder world that we do have.


Thomas Schneider: Thank you. First, to start with a reply to Jacques: 20 years ago we thought the world was complicated and the internet was something complicated; looking back now, it seems that things then were quite simple. What I'm trying to say is that we basically have to walk a fine line between trying to be inclusive and trying to be specific, while at the same time trying to be understandable to an average person — although that is a little bit of an illusion, if we're honest, because we don't have 5 billion average persons sitting here. We have to serve different levels of interest and knowledge. To Bertrand's question: something struck me again this morning — I think it's paragraph 72, point g, or thereabouts, of the Tunis Agenda. One of the key things about the IGF, no matter what the latest emerging technology is, and something we should not change, is the IGF's mandate to look into emerging issues in internet governance, or whatever you call it, because if the latest thing today is AI, tomorrow there will be something else. This is one of the deliverables the IGF has: it is always the first platform to get new issues on the agenda and to set them on the agenda of others. So this is, in my view, one of the core deliverables. Inclusivity, of course, is another. Although the question is: how do we get those to the table that do not want to be at the table? Not those that cannot — we can fund and support them — but those that do not want to be at the table, which may be because of who else is at the table; I will not go into too much detail. But a good question is also about the new structures being built around AI: what do you do with the rest? Do we just subordinate everything else under the new structures created for AI — and by everything else, I mean everything else?
Or what is the division of work between those new things that will be created on AI and the more legacy things like the IGF and the WSIS process, and the even more legacy structures that look into issues per se, whether health or climate or whatever? And I think, well, maybe not everything is yet fully thought through with the…


Timea Suto: Great, thank you. Thank you, Thomas. All right, Iida-san? Okay, so just quickly, on gender: I think gender equality is very important.


Yoichi Iida: But now, look at the panel. Men are the minority now. So I am not saying that gender balance is not important, but probably we need to address some of the asymmetries, because in the digital space women and girls face different types of risks and challenges than men do. We need to address those challenges and risks, and then we may achieve a genuinely equally enabling space for both men and women. That is probably the central issue in the future discussion on gender equality. And the comment by Bertrand really struck me. Yes, we often talk about the bottom line: to keep, protect, and promote the IGF and the multistakeholder approach. But I believe that is just the bottom line, a minimum level, as Irina said. Of course, we want to achieve more, because the conditions and the situation in the digital economy are very different now. We have AI, we have mobile, we have very many different factors compared to 2005. So we have to look and see, because the GDC negotiation was really difficult, and we saw a lot of gaps, differences, and diversity, not only among different governments but also among different communities. So opening a more open and enabling discussion space for different stakeholders would probably be very important. And then we need to think about our own strategy, probably for the IGF itself, but maybe also to reform and strengthen this framework for the future internet space, which I believe covers AI and other new technologies as enabling factors. I think that is all I can say for now, but thank you very much for the very productive questions.


Timea Suto: Thank you. We have two more panelists left who haven’t had their last words. Who wants to go first? Yeah.


Maarit Palovirta: Yes, I mean, I can only agree. Just maybe on Jacques’s question of how to involve everybody, who the stakeholders are, and how we can make sure that everybody who wants to have a say gets one: I think it is all about preparation, and it is also important that, although we are talking about global internet governance, at the same time it comes bottom-up. For example, in Europe we have the regional initiatives on internet governance, and when we now look at WSIS+20, the European Commission, which is the European voice there together with the member states, has put out open public consultations on internet governance, so that European citizens can comment, et cetera. Of course, you can argue that you still need to be in the know, and maybe it is not accessible to everybody. But I think that, at least in that way, you also open up the discussion a little at the national and regional level for people who want to have a say. Thank you.


Amr Hashem: And Amr, for the last word. Okay, I have the challenge of saying something new after everything that has been said. My idea is that sometimes you could turn this nice governance forum into a platform for taking actions that lead to impactful results. I mean, the open discussion, having everybody say things, and maintaining this multi-stakeholder approach are all great and always welcome. But you might want to start thinking about supporting reforms, supporting people, recognizing effort, something like that. I hope that when you are thinking about the WSIS+20 process, you think about it more from a private sector perspective rather than from a government perspective. In the private sector, we do not like to talk; we like to work and to achieve our objectives. So our KPIs are not that we went and talked. No, our KPIs should be results: that we have improved our situation, that we have changed this. So we hope that the forum will become more private sector driven, with more private sector inclusion. All the best of luck.


Timea Suto: Thank you, Amr. All that leaves me with is thanking our panelists. We are already over time, so I will not share my takeaways. But I do want to thank you all for bearing with us and with the technology issues. It is day zero; we always have kinks to work out. It will get better from here, I am sure. I also want to thank you for all the rich contributions from industry, for what we discussed here, and for actually contributing to the process that we are working toward. So thank you all, and a big round of applause to all of you. Thank you. I hope I did not exhaust you.


T

Thomas Schneider

Speech speed

168 words per minute

Speech length

1717 words

Speech time

611 seconds

Need for interoperable regulatory approaches

Explanation

Schneider emphasizes the importance of creating interoperable regulatory frameworks for AI governance. He suggests looking at how previous disruptive technologies were managed to find parallels for AI governance.


Evidence

Draws parallels between AI and the governance of combustion engines in the 19th century


Major Discussion Point

AI Governance


Agreed with

Flavia Alves


Yoichi Iida


Maarit Palovirta


Maria Fernanda Garza


Agreed on

Need for interoperable regulatory approaches


Differed with

Flavia Alves


Differed on

Approach to AI governance


Balancing innovation and risk mitigation

Explanation

Schneider discusses the need to balance innovation with risk mitigation in AI governance. He argues for approaches that allow for technological advancement while addressing potential societal risks and ensuring equitable benefits.


Major Discussion Point

Global Digital Cooperation


F

Flavia Alves

Speech speed

152 words per minute

Speech length

1424 words

Speech time

561 seconds

Importance of voluntary industry commitments

Explanation

Alves highlights Meta’s commitment to developing responsible AI and participating in various international AI governance frameworks. She emphasizes the need for agile and adaptable frameworks given the rapidly evolving capabilities of GenAI.


Evidence

Meta’s participation in industry bodies like AI Alliance, Partnership on AI, Frontier Model Forum, and international commitments like White House Voluntary AI Commitments


Major Discussion Point

AI Governance


Agreed with

Irina Soeffky


Larisa Galadza


Agreed on

Importance of multi-stakeholder collaboration


Differed with

Thomas Schneider


Differed on

Approach to AI governance


Y

Yoichi Iida

Speech speed

96 words per minute

Speech length

1947 words

Speech time

1211 seconds

G7 Hiroshima AI process and code of conduct

Explanation

Iida discusses the progress made in AI governance through the G7 Hiroshima process. He mentions the development of a code of conduct and ongoing work on monitoring mechanisms and branding for companies implementing the code.


Evidence

G7 agreement on Hiroshima process for AI conduct, discussions on monitoring mechanism and branding under Italian presidency


Major Discussion Point

AI Governance


Agreed with

Thomas Schneider


Flavia Alves


Maarit Palovirta


Maria Fernanda Garza


Agreed on

Need for interoperable regulatory approaches


Data free flow with trust concept

Explanation

Iida explains the concept of data free flow with trust, which encourages stakeholders to make data flows as free as possible while ensuring appropriate trust regarding privacy protection and other rights. He mentions ongoing discussions at OECD on this topic.


Evidence

OECD launch of the DFFT expert committee in February, discussing three pillars to promote data flow across borders


Major Discussion Point

Data Governance


A

Audience

Speech speed

111 words per minute

Speech length

1251 words

Speech time

674 seconds

Concerns about biased data and representation

Explanation

An audience member raises concerns about the data used in AI systems, particularly regarding representation of diverse populations. They argue that current AI governance efforts are not adequately addressing these issues.


Evidence

Example of people in Egypt feeling underrepresented in AI datasets and platforms


Major Discussion Point

AI Governance


Need for government access to data for law enforcement

Explanation

An audience member highlights the importance of considering government access to data for law enforcement purposes in data governance discussions. They suggest that this aspect is often overlooked in conversations about free flow of data.


Major Discussion Point

Data Governance


Need to update WSIS vision and structures

Explanation

An audience member suggests that the vision and structures established by the World Summit on the Information Society (WSIS) need to be updated. They argue that while the core vision of a human-centric information society remains relevant, the substance and governance structures should be revised to reflect current realities.


Major Discussion Point

Future of Internet Governance


Improving IGF mandate and structure

Explanation

An audience member proposes a dedicated effort to discuss and improve the Internet Governance Forum’s (IGF) mandate and structure. They suggest that this discussion should take place after the WSIS+20 review and involve multiple stakeholders.


Evidence

Suggestion for a working group similar to the Working Group on Internet Governance from 2004


Major Discussion Point

Future of Internet Governance


Ensuring inclusivity and stakeholder participation

Explanation

An audience member raises questions about the definition of stakeholders and how to ensure true inclusivity in internet governance processes. They emphasize the importance of involving not just top-level representatives but also individual citizens and smaller entities.


Major Discussion Point

Future of Internet Governance


M

Maarit Palovirta

Speech speed

147 words per minute

Speech length

1755 words

Speech time

714 seconds

EU approach to data protection and cross-border data flows

Explanation

Palovirta discusses the European approach to data protection and cross-border data flows. She highlights the importance of GDPR as a baseline for data protection in Europe and mentions newer rules on cross-border data transfers.


Evidence

GDPR as the basic rules for data in Europe, Data Act adopted earlier in the year


Major Discussion Point

Data Governance


Agreed with

Thomas Schneider


Flavia Alves


Yoichi Iida


Maria Fernanda Garza


Agreed on

Need for interoperable regulatory approaches


A

Amr Hashem

Speech speed

113 words per minute

Speech length

760 words

Speech time

403 seconds

Mobile industry’s role in expanding internet access

Explanation

Hashem emphasizes the significant role of the mobile industry in providing internet access globally. He argues that mobile broadband is the primary means of internet access for a majority of the global population, especially in developing countries.


Evidence

96% of world population has mobile coverage, 4.6 billion people (57% of global population) access internet primarily through mobile broadband


Major Discussion Point

Data Governance


Private sector perspective on achieving concrete outcomes

Explanation

Hashem suggests that the Internet Governance Forum should focus more on achieving concrete outcomes rather than just discussions. He proposes thinking about the process from a private sector perspective, emphasizing results and measurable improvements.


Major Discussion Point

Global Digital Cooperation


Differed with

Irina Soeffky


Differed on

Focus of internet governance discussions


I

Irina Soeffky

Speech speed

156 words per minute

Speech length

1557 words

Speech time

598 seconds

Importance of multi-stakeholder model

Explanation

Soeffky emphasizes the critical role of multi-stakeholder collaboration in internet governance. She argues that this approach is essential for addressing complex digital policy issues and should be protected and further developed.


Evidence

Reference to IGF as a prime example of multi-stakeholder collaboration


Major Discussion Point

Future of Internet Governance


Agreed with

Flavia Alves


Larisa Galadza


Agreed on

Importance of multi-stakeholder collaboration


Differed with

Amr Hashem


Differed on

Focus of internet governance discussions


L

Larisa Galadza

Speech speed

152 words per minute

Speech length

1553 words

Speech time

612 seconds

Implementing Global Digital Compact commitments

Explanation

Galadza discusses the importance of implementing the commitments made in the Global Digital Compact. She emphasizes the need for clarity on how these commitments will be put into action, particularly regarding AI governance.


Evidence

Mention of upcoming definition of implementation for GDC commitments on AI, including the scientific panel and global dialogue


Major Discussion Point

Global Digital Cooperation


Agreed with

Irina Soeffky


Flavia Alves


Agreed on

Importance of multi-stakeholder collaboration


M

Maria Fernanda Garza

Speech speed

103 words per minute

Speech length

648 words

Speech time

377 seconds

Aligning priorities and reducing regulatory fragmentation

Explanation

Garza emphasizes the need to align priorities and reduce regulatory fragmentation in digital governance. She argues that this is crucial for creating certainty for businesses, supporting economic growth, and fostering cross-border collaboration.


Evidence

Reference to increasing regulatory and policy fragmentation due to geopolitical tensions and competing national priorities


Major Discussion Point

Global Digital Cooperation


Agreed with

Thomas Schneider


Flavia Alves


Yoichi Iida


Maarit Palovirta


Agreed on

Need for interoperable regulatory approaches


Agreements

Agreement Points

Need for interoperable regulatory approaches

speakers

Thomas Schneider


Flavia Alves


Yoichi Iida


Maarit Palovirta


Maria Fernanda Garza


arguments

Need for interoperable regulatory approaches


Importance of voluntary industry commitments


G7 Hiroshima AI process and code of conduct


EU approach to data protection and cross-border data flows


Aligning priorities and reducing regulatory fragmentation


summary

Multiple speakers emphasized the importance of creating interoperable regulatory frameworks for AI and data governance to reduce fragmentation and ensure consistency across jurisdictions.


Importance of multi-stakeholder collaboration

speakers

Irina Soeffky


Flavia Alves


Larisa Galadza


arguments

Importance of multi-stakeholder model


Importance of voluntary industry commitments


Implementing Global Digital Compact commitments


summary

Speakers agreed on the critical role of multi-stakeholder collaboration in internet governance and the implementation of global digital initiatives.


Similar Viewpoints

Both speakers emphasized the need for flexible and adaptable governance frameworks that can accommodate rapidly evolving AI technologies while ensuring responsible development.

speakers

Thomas Schneider


Flavia Alves


arguments

Need for interoperable regulatory approaches


Importance of voluntary industry commitments


Both speakers discussed approaches to balancing free flow of data with necessary protections for privacy and other rights, highlighting the need for trust in cross-border data transfers.

speakers

Yoichi Iida


Maarit Palovirta


arguments

Data free flow with trust concept


EU approach to data protection and cross-border data flows


Unexpected Consensus

Recognizing the effectiveness of current internet infrastructure

speakers

Larisa Galadza


Amr Hashem


arguments

Mobile industry’s role in expanding internet access


explanation

Despite coming from different sectors (government and industry), both speakers acknowledged the current effectiveness of internet infrastructure, particularly in mobile connectivity, which was unexpected given the focus on challenges and improvements in most discussions.


Overall Assessment

Summary

The main areas of agreement centered around the need for interoperable regulatory approaches, the importance of multi-stakeholder collaboration, and the balance between innovation and risk mitigation in AI and data governance.


Consensus level

There was a moderate level of consensus among speakers on key issues, particularly on the need for collaborative and flexible governance frameworks. This consensus suggests a shared understanding of the challenges in digital governance and a common direction for addressing them. However, there were also divergent views on specific implementation strategies and the role of different stakeholders, indicating that while there is agreement on broad principles, the details of implementation remain contentious.


Differences

Different Viewpoints

Approach to AI governance

speakers

Thomas Schneider


Flavia Alves


arguments

Need for interoperable regulatory approaches


Importance of voluntary industry commitments


summary

Schneider emphasizes the need for interoperable regulatory frameworks, while Alves focuses on voluntary industry commitments and agile, adaptable frameworks.


Focus of internet governance discussions

speakers

Irina Soeffky


Amr Hashem


arguments

Importance of multi-stakeholder model


Private sector perspective on achieving concrete outcomes


summary

Soeffky emphasizes the importance of multi-stakeholder collaboration, while Hashem argues for a more results-oriented approach focused on measurable improvements.


Unexpected Differences

Gender representation in digital governance

speakers

Yoichi Iida


Audience


arguments

Data free flow with trust concept


Concerns about biased data and representation


explanation

While discussing data governance, Iida unexpectedly brought up the issue of gender representation on the panel itself, suggesting that men were now a minority. This contrasts with the audience’s concern about underrepresentation of diverse populations in AI datasets, highlighting a potential disconnect in understanding representation issues.


Overall Assessment

summary

The main areas of disagreement revolve around the approach to AI and data governance, the focus of internet governance discussions, and the understanding of representation and inclusivity in digital spaces.


difference_level

The level of disagreement among speakers is moderate. While there is general consensus on the importance of addressing AI and data governance issues, speakers differ significantly on the specific approaches and priorities. These differences could potentially impact the development of cohesive global digital cooperation strategies, particularly in balancing regulatory frameworks with industry-led initiatives and in ensuring true inclusivity in governance processes.


Partial Agreements

Partial Agreements

All speakers agree on the need for some form of AI governance, but differ on the specific approach. Schneider advocates for interoperable regulatory frameworks, Alves emphasizes voluntary industry commitments, and Iida focuses on international processes like the G7 Hiroshima AI process.

speakers

Thomas Schneider


Flavia Alves


Yoichi Iida


arguments

Need for interoperable regulatory approaches


Importance of voluntary industry commitments


G7 Hiroshima AI process and code of conduct


Similar Viewpoints

Both speakers emphasized the need for flexible and adaptable governance frameworks that can accommodate rapidly evolving AI technologies while ensuring responsible development.

speakers

Thomas Schneider


Flavia Alves


arguments

Need for interoperable regulatory approaches


Importance of voluntary industry commitments


Both speakers discussed approaches to balancing free flow of data with necessary protections for privacy and other rights, highlighting the need for trust in cross-border data transfers.

speakers

Yoichi Iida


Maarit Palovirta


arguments

Data free flow with trust concept


EU approach to data protection and cross-border data flows


Takeaways

Key Takeaways

There is a need for interoperable and flexible AI governance frameworks that can be implemented globally while adapting to local contexts


Data governance approaches should balance free flow of data with privacy protection and security concerns


The multi-stakeholder model remains crucial for internet governance, but needs to be updated and strengthened


There is an urgency to address governance of emerging technologies like AI due to their rapid development and potential impacts


Future internet governance structures should avoid fragmentation and proliferation of processes, while improving inclusivity and stakeholder participation


Resolutions and Action Items

Work towards implementing the Global Digital Compact commitments on AI, including establishing the scientific panel and global dialogue


Develop a monitoring mechanism and branding for companies to implement the G7 Hiroshima AI Process code of conduct


Use the upcoming WSIS+20 review to update the vision and structures for internet governance


Discuss potential improvements to the IGF mandate and structure at the next IGF in Norway


Unresolved Issues

How to effectively include developing countries and underrepresented groups in AI and data governance frameworks


Balancing innovation with risk mitigation for emerging technologies


Addressing biases in AI datasets and algorithms


Determining the appropriate division of work between new AI governance bodies and existing internet governance structures


How to engage stakeholders who are unwilling to participate in multi-stakeholder processes


Suggested Compromises

Focus on protecting and developing the core multi-stakeholder model, while allowing flexibility for implementation in different contexts


Balance the need for global frameworks with preserving national sovereignty on certain governance issues


Mainstream gender considerations throughout governance frameworks rather than treating it as a separate issue


Combine visionary goals for internet governance reform with tactical, achievable steps in the near-term


Thought Provoking Comments

AI is not the first disruptive technology that mankind has learned to seize opportunities and minimize risks. And there’s a number of parallels that can be drawn with the way that we actually managed engines, combustion.

speaker

Thomas Schneider


reason

This comment provides a valuable historical perspective, framing AI governance within the broader context of how society has dealt with disruptive technologies in the past. It’s insightful because it suggests that while AI presents unique challenges, we can learn from previous experiences in technology governance.


impact

This comment shifted the discussion towards considering historical precedents and lessons learned, encouraging participants to think about AI governance in a broader context. It led to further discussion on the need for flexible, context-based approaches to AI governance.


Open source AI has real potential to provide access to the world’s most advanced models at a global scale. We favor this approach because in many contexts we believe it is the right thing to do. It drives innovation. It creates better, safer products that everyone can benefit from.

speaker

Flavia Alves


reason

This comment introduces the important concept of open source AI as a potential solution to issues of access and innovation. It’s thought-provoking because it challenges the notion that AI development should be proprietary and suggests a more collaborative, global approach.


impact

This comment sparked discussion about the role of open source in AI development and its potential to address issues of global access and equity. It led to further consideration of how open source approaches could be incorporated into AI governance frameworks.


To address these challenges, we must pursue greater alignment while preserving the flexibility to meet diverse local needs. A single, centralized, global regulatory superstructure is neither feasible nor desirable.

speaker

Maria Fernanda Garza


reason

This comment highlights the tension between global alignment and local flexibility in digital governance. It’s insightful because it acknowledges the complexity of creating a governance framework that can be both globally coherent and locally relevant.


impact

This comment set the tone for much of the subsequent discussion, encouraging participants to consider how to balance global and local needs in their approaches to digital governance. It led to further exploration of multi-stakeholder approaches and the role of different forums in governance.


The next year is really an inflection point. There is an awful lot going on. It’s going to test our resolve. It’ll make or break some of the governance that we have that has done very well.

speaker

Larisa Galadza


reason

This comment emphasizes the critical nature of the upcoming year for digital governance. It’s thought-provoking because it frames the current moment as a pivotal point that will significantly impact the future of digital governance.


impact

This comment heightened the sense of urgency in the discussion and encouraged participants to think concretely about the immediate future of digital governance. It led to more focused discussion on specific upcoming events and processes, such as the WSIS+20 review.


Isn’t it time to have a serious discussion, maybe a little bit like the Working Group on Internet Governance back in 2004, to have a dedicated effort, not just a series of reports, some of which were very good, but let’s be honest, most of them have been filed the moment they were assigned, having a group that could, after the WSIS Plus 20 review, discuss seriously what is the new structure, what is the institutional arrangement that will be set in place.

speaker

Bertrand de La Chapelle


reason

This comment proposes a concrete step forward in improving internet governance structures. It’s insightful because it suggests a specific mechanism for addressing the challenges discussed throughout the panel.


impact

This comment shifted the discussion towards more concrete, action-oriented proposals for the future of internet governance. It sparked discussion about the potential for a new working group and the need for substantive reform of governance structures.


Overall Assessment

These key comments shaped the discussion by broadening the historical and theoretical context of digital governance, highlighting the tension between global and local needs, emphasizing the urgency of current governance challenges, and proposing concrete steps for future action. They moved the conversation from abstract principles to more specific considerations of governance structures and processes, while also encouraging participants to think creatively about solutions to global digital challenges. The discussion evolved from a general overview of current issues to a more focused consideration of immediate next steps and long-term structural changes in internet governance.


Follow-up Questions

How can we ensure that AI development and governance includes and represents marginalized communities and developing countries?

speaker

Audience member (unnamed)


explanation

The speaker expressed concern that current AI governance efforts are not truly inclusive and may be biased against certain populations.


How can we improve government access to electronic evidence for law enforcement purposes while balancing privacy and security concerns?

speaker

Audience member (unnamed)


explanation

This was raised as an important issue that needs to be addressed in data governance discussions.


How can we update and improve the WSIS action lines to reflect current technological realities, particularly around AI and data governance?

speaker

Jorge Cancio


explanation

Updating the WSIS framework was identified as necessary to address new technological developments since its creation.


How can gender issues be better incorporated into future internet governance frameworks and processes?

speaker

Desiree Milosevic-Evans


explanation

The speaker highlighted gender as an important issue that may need more explicit focus in governance discussions.


What should be the new mandate and structure for the IGF to make it more effective and relevant?

speaker

Bertrand de La Chapelle


explanation

Reimagining the IGF’s role and structure was proposed as a key area to explore for improving internet governance.


How can we ensure true multi-stakeholder participation that includes individual citizens, not just high-level representatives?

speaker

Jacques Beglinger


explanation

The speaker raised concerns about defining stakeholders too narrowly and excluding grassroots participation.


How can we make internet governance processes and frameworks more understandable and accessible to the average person?

speaker

Thomas Schneider


explanation

This was identified as an ongoing challenge for ensuring broad participation in governance.


How can we address the specific risks and challenges faced by women and girls in digital spaces?

speaker

Yoichi Iida


explanation

The speaker highlighted this as an important aspect of gender equality in internet governance.


How can we make internet governance forums more action-oriented and focused on achieving measurable results?

speaker

Amr Hashem


explanation

The speaker suggested shifting from discussion to more concrete outcomes and private sector involvement.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Day 0 Event #75 Addressing Information Manipulation in Southeast Asia


Session at a Glance

Summary

This discussion focused on foreign information manipulation and interference (FIMI) in Southeast Asian countries. Experts from Indonesia, Australia, the Philippines, and Vietnam shared insights on the information landscape and challenges in their respective countries. They highlighted how disinformation, both domestic and foreign, impacts public opinion and political processes.

The speakers noted that while disinformation is widely recognized as a problem, FIMI is not consistently perceived as a threat across Southeast Asian nations. They discussed various approaches to combating disinformation, including government regulations, platform accountability, and digital literacy campaigns. However, they also acknowledged the difficulties in balancing effective governance with preserving democratic freedoms and free speech.

The discussion revealed that the sources and nature of disinformation vary across countries, with some facing more domestic issues while others contend with foreign interference. The rise of generative AI and deepfakes was identified as an emerging challenge, particularly in election contexts. The speakers emphasized the need for multi-stakeholder approaches involving governments, civil society, and tech platforms to address these complex issues.

Questions from the audience prompted discussions on the real-world impacts of disinformation, the role of social media platforms, and the challenges of determining who should be the arbiter of truth. The speakers agreed on the importance of regional cooperation and inter-regional dialogue to tackle FIMI effectively. They also highlighted the need for context-specific solutions and the challenges of implementing uniform approaches across diverse political systems in Southeast Asia.

Keypoints

Major discussion points:

– The information landscape and challenges with disinformation/foreign interference in Southeast Asian countries like Indonesia, Philippines, Vietnam

– Government and civil society responses to combat disinformation and foreign information manipulation

– The role of social media platforms and need for better content moderation

– Balancing regulation of disinformation with freedom of expression

– The need for regional cooperation and multi-stakeholder approaches

Overall purpose:

The goal of this discussion was to examine the issue of foreign information manipulation and interference (FIMI) in Southeast Asia, share case studies from different countries, and explore potential solutions and best practices for addressing this challenge.

Tone:

The overall tone was academic and analytical, with speakers presenting research findings and policy perspectives in a neutral, factual manner. There was a sense of concern about the impacts of disinformation, but the tone remained measured and solution-oriented throughout. The Q&A portion allowed for some more pointed questions and debate, but the tone remained largely collegial and constructive.

Speakers

– BELTSAZAR KRISETYA: Researcher from CSIS Indonesia, moderator

– PIETER ALEXANDER PANDIE: Researcher at the Safer Internet Lab and Department of International Relations at CSIS Indonesia

– FITRI BINTANG TIMUR (FITRIANI): Senior Analyst at the Australian Strategic Policy Institute (ASPI)

– MARIA ELIZE H. MENDOZA: Assistant Professor, Department of Political Science, University of the Philippines

– BICH TRAN: Postdoctoral fellow, Lee Kuan Yew School of Public Policy, National University of Singapore

Additional speakers:

– Alexander Pandi: Researcher from CSIS Indonesia

– Koichiro: Cybersecurity expert from Japan

– Luisa: Advisor for the German-Brazilian Digital Dialogue Initiative

– Nidhi: Audience member

– Eliza: From Vietnam, working in Germany

– Fawaz: From Center for Communication and Governance, New Delhi

Full session report

Foreign Information Manipulation and Interference in Southeast Asia: Challenges and Responses

This discussion brought together experts from Indonesia, Australia, the Philippines, and Vietnam to examine the issue of foreign information manipulation and interference (FIMI) in Southeast Asia. The speakers shared insights on the information landscape and challenges in their respective countries, highlighting how disinformation, both domestic and foreign, impacts public opinion and political processes.

Research Initiatives and Information Landscape

Beltsazar Krisetya introduced the Safer Internet Lab (SAIL) research program, which focuses on studying online harms, platform governance, and digital rights. SAIL collaborates with various stakeholders, including civil society organizations, to address these issues.

Pieter Alexander Pandie presented findings from an Indonesian case study, drawing on a database of FIMI instances in Southeast Asia from 2019 to 2024. He noted the increasing use of AI-generated disinformation in elections, including a deepfake video of a former Indonesian president.

Bich Tran outlined Vietnam’s information landscape, consisting of three main components: domestic media, foreign media with Vietnamese-language services, and social media. She highlighted Vietnam’s concerns about China’s disinformation campaigns regarding South China Sea disputes.

Maria Elize H. Mendoza described the Philippines’ information ecosystem as saturated with “independent” media practitioners spreading disinformation, mentioning AI-generated audio of the current Philippine president as an example.

Government and Societal Responses

The speakers discussed various approaches to combating disinformation, including government regulations, platform accountability, and digital literacy campaigns. However, they also acknowledged the difficulties in balancing effective governance with preserving democratic freedoms and free speech.

There were notable differences in how governments approach the issue. Bich Tran mentioned that Vietnam created Task Force 47 to counter “wrong views” on the internet, taking a more active and restrictive approach. In contrast, Maria Elize H. Mendoza stated that the Philippine government has failed to effectively address electoral disinformation, leading to civil society taking on more responsibility.

Fitri Bintang Timur (Fitriani) shared information about the ASEAN task force on countering fake news and its guidelines, highlighting regional efforts to address the issue.

The speakers agreed on the need for a multi-stakeholder approach involving government, civil society, and tech platforms to address disinformation effectively. They also emphasised the importance of regional cooperation and intelligence sharing, particularly given the disparities in cybersecurity capabilities among Southeast Asian nations.

Challenges in Combating Disinformation

Several key challenges were identified in the fight against disinformation:

1. Defining and attributing foreign information manipulation and interference consistently, with a need for context-specific definitions for Southeast Asia or the Asia-Pacific region

2. Balancing political stability concerns with freedom of expression

3. Addressing the lack of digital literacy, which exacerbates susceptibility to disinformation

4. Combating confirmation bias, which makes people susceptible to believing disinformation

5. Dealing with the rise of generative AI and deepfakes, particularly in election contexts

6. Potential misuse of anti-fake news laws to infringe on freedom of speech

The speakers agreed that technical solutions alone are insufficient to combat disinformation. They highlighted the need to consider sociological factors and implement a more holistic approach.

Recommendations and Future Directions

The discussion yielded several recommendations for addressing disinformation:

1. Develop a Southeast Asian or Asia-Pacific specific definition for FIMI

2. Strengthen regional cooperation and intelligence sharing on disinformation issues

3. Incorporate digital literacy education at all levels of schooling

4. Engage in multi-stakeholder and inter-regional cooperation to research disinformation and its real-world impacts

5. Implement voluntary codes for tech platforms while maintaining government’s ability to intervene if needed

6. Balance effective governance of information ecosystems with protections for democratic freedoms and civil liberties

7. Encourage tech platforms to address confirmation bias through algorithm transparency

The speakers emphasised the need for context-specific solutions, acknowledging that a one-size-fits-all approach to combating disinformation in Southeast Asia may not be effective.

Unresolved Issues and Future Considerations

Several issues remained unresolved and warrant further discussion:

1. How to effectively regulate tech platforms without infringing on freedom of speech

2. Who should be the arbiter of truth in determining what constitutes disinformation

3. How to address the broader sociological problem of confirmation bias and incentives for spreading disinformation

4. How to improve the effectiveness of digital literacy campaigns, especially for those who haven’t formed their opinions yet

5. How multiple countries can work together to more effectively demand action from tech platforms in addressing disinformation

6. Determining the best platform for addressing FIMI in the Asia-Pacific region, as raised by Koichiro from Japan

In conclusion, the discussion highlighted the complex and multifaceted nature of foreign information manipulation and interference in Southeast Asia. While there was consensus on the need for collaborative, multi-stakeholder approaches, the speakers also acknowledged the challenges in implementing uniform solutions across diverse political systems and information landscapes. As the threat of disinformation continues to evolve, particularly with the rise of AI-generated content, ongoing regional cooperation and adaptive strategies will be crucial in addressing this pressing issue.

Session Transcript

BELTSAZAR KRISETYA: to Pieter Alexander Pandie, my colleague, researcher from CSIS Indonesia as well, and also Dr Bich Tran, postdoctoral fellow, Lee Kuan Yew School of Public Policy, National University of Singapore. Also joining us online are Maria Mendoza, Assistant Professor, Department of Political Science, University of the Philippines, as well as Dr Fitriani, Senior Analyst at the Australian Strategic Policy Institute, or ASPI. Okay, before we begin our session, kindly allow me to provide a little bit of context about who we are as the organisers, and also why we picked this topic to be presented amongst the other ongoing research projects that we are also conducting in Southeast Asia and the Pacific in general. So, the Safer Internet Lab is a research program that was co-constructed, if you will, or co-concepted by CSIS, our home institution, in partnership with Google, Google Indonesia back then, followed by Google Asia Pacific later on. It is a research hub that convenes researchers and also practitioners working on the information ecosystem. In the first year, we tried to capture the whole supply chain, if you will, of disinformation. We conducted a kind of anthropological research on disinformation actors: we tried to cover how buzzers or cyber troopers and bots conducted influence operation campaigns in Indonesia. We also conducted user-centered research, surveys on public susceptibility to disinformation, to promote the balance between digital literacy and political literacy. And we also conducted platform-facing research, where we wanted to explore further what co-governance models are acceptable and yet offer better responses and mitigations, and can bring along government actors, tech platforms, as well as civil society, in one forum and in one institution. And so, we’ve been doing this for the second year in a row now. 
Our work was concurrent with the 2024 general elections conducted in Indonesia, and so we collaborated a lot with information actors as well as electoral actors in Indonesia. We also shaped the dialogue with international communities, in which we joined as speakers and participated in the UNESCO forum, UN forums, and also with several diplomatic embassies. We’ve also hosted an academic conference on disinformation in Indonesia, as well as publishing reports; you can find the printed version of the report in our booth just outside of this room. We’ve established a booth for the entire IGF 2024, so feel free to drop in anytime. For this year, 2024 going forward, we will be focusing on three research streams. The first is the impact of deepfakes on online fraud: how generative AI would worsen the topography of online scams in Southeast Asia. We also take a closer look into the impact of disinformation on democratic resilience: what is the net sum of democracy after a series of electoral tsunamis, if you will, in 2024, and where does information resilience play a part in this. And lastly, the one that we are going to present on this occasion is on information manipulation and interference. We are also part of the Global Network of Internet and Society Research Centers, or Global Network of Centers, in which institutions such as Harvard University, the Oxford Internet Institute, the CIS at Stanford, and probably 100 to 200 other institutions focused on internet and society convene in an academic discussion globally. So that’s a short presentation on SAIL, but we will delve further into one topic that is probably of growing interest across the region, which is information manipulation. We have an Indonesian case study, a Vietnamese case study, a Philippines case study, and some perspective from Australia. 
Without further ado, I will give Pieter probably 10 to 15 minutes to present the case on foreign information manipulation and interference, and whether there are parallels that can be drawn between what is happening in this part of the world, Southeast Asia, and instances that are happening elsewhere. So please, Pieter, the time is yours.

PIETER ALEXANDER PANDIE: Thank you very much, Beltz. Again, thank you everyone for attending the session. My name is Pieter Pandie, a researcher at the Safer Internet Lab and also a researcher at the Department of International Relations at CSIS Indonesia. So as Beltz has very well introduced, the Safer Internet Lab this year has three research streams that we’ve tried to conduct for the second year of this research lab, and I will be focusing mostly on foreign information manipulation and interference and instances of it occurring in Southeast Asia. I’ll also be covering a little bit about the information landscape in Indonesia specifically, and how that breaks down between foreign-based and domestically sourced disinformation. So as part of the research stream for FIMI in SAIL this year, we’ve tried to create a database that records FIMI instances in Southeast Asia from 2019 to 2024. What we’ve done is, from open sources, we’ve tried to make a database of cases where information operations, whether traditional, digital, or offline, have been conducted in Southeast Asia from 2019 to 2024. And for the data set, we’ve made those three categories. For traditional media influence, examples include when influence actors place advertisements, or hire, pay, or influence a journalist or opinion leader to share their part of the story in the media, and so on and so forth. For digital media influence, these would be cases such as coordinated inauthentic behavior and the creation of troll and bot networks to share narratives on digital media; and offline influence includes diplomatic influence, economic investment, and so on and so forth. But for this part of the research, we’ll be focusing mostly on the digital aspect of it. 
So part of the ongoing research, what we found so far as part of our data set, is that while disinformation has been discussed openly by countries in Southeast Asia, FIMI has not been discussed that much across Southeast Asian states. We’ll delve into the reasons why later, but for disinformation specifically, what we found is that countries in Southeast Asia tend to focus more on disinformation as the topic, but not FIMI. So what they’ve tried to address by policy is disinformation that has occurred domestically, but there is not much discussion of FIMI more broadly. As part of our data set, we’ve discovered through our early findings that so far the data shows a tale of two halves between 2019 and 2024. From 2019 to 2021, we found that cases of FIMI were not very high. Most of the disinformation cases that occurred in Southeast Asian states were domestically sourced, where attributed; they were mostly domestic, created by local actors or sometimes government actors. But from 2022 to 2024, what we found is that there has been an increase of reported FIMI cases, and also a greater diversity of threat actors operating in Southeast Asia’s information landscape. So the correlation that we’ve made as a result of these data findings is that there has been an increase of FIMI and influence operations in Southeast Asia, concurrent with rising geopolitical tensions between great powers and also a rising number of international conflicts: the Russia-Ukraine conflict, the ongoing conflict in the Middle East, and so on and so forth. These have in fact increased the number of influence and FIMI operations in Southeast Asia, whereas from 2019 to 2021, it was still mostly domestic in focus. 
So in addressing disinformation, as I’ve covered before, most countries still use national approaches to legislation; rarely through attribution, as very few countries, if any, attribute the sources of disinformation, whether foreign or domestic; and even more rarely through retaliation. I don’t think we have found a case of that so far. As part of our data set, we’ve recorded from 10 different countries in Southeast Asia, drawing on lessons from Taiwan and Australia as well. And what we found is that it was quite difficult to find cases, because our team is quite small and mostly English-speaking, so most of our sources were English-speaking media and newspapers and so on, and we found that that was a great limitation in how we identified cases, particularly in countries where the information space is much, much smaller and much less exposed to English-language media. In countries such as Cambodia and Laos, we found it was quite difficult to identify cases of foreign-based disinformation. Number one is because attribution rarely occurred, where a foreign actor was attributed as part of the disinformation operation. And number two is that if it were to occur, it would most likely be in the local language; the language would be localized, whereas in countries where the information landscape and the social media users were much more exposed to international media, it was a lot easier to detect cases of FIMI operations.

PIETER ALEXANDER PANDIE: And moving forward, we also identified a few foreign influence actors. From reported cases, these are actors such as China, Russia, Iran, and also some non-state actors that were unattributed, whether they were supported by a state actor or not. And one of the examples that we found was also the United States, in fact, engaging in some information operations in Southeast Asia. So to wrap up our data set: sources of disinformation, and the information landscape more broadly in Southeast Asia, are very different and very contextual across different Southeast Asian states, especially during election periods and so on and so forth. There are also very different threat perceptions, particularly relating to FIMI. While disinformation is considered a challenge, and is likely so for many states even outside of Southeast Asia, not all governments consider FIMI a current threat. Some are quite comfortable with leaving certain cases of FIMI to fester because it’s not deemed a big threat to the existing political regime, or it’s not creating the social disturbances that other sources of domestic disinformation might. There’s also, I think, with the different cyber capabilities across Southeast Asian states, a difficulty in addressing these issues or even attributing the source of disinformation. So in Southeast Asia, while there are, in ASEAN for example, cybersecurity cooperation agreements and so on, these are still mostly led or hosted by countries such as Singapore or Malaysia, who have higher, I would say, cyber capabilities compared to other Southeast Asian states who are still building those capabilities. So not everyone is on the same page, either threat-perception-wise or capabilities-wise. And moving on specifically to Indonesia, we just held presidential elections in 2024. 
And while the data is still very fresh, very new, because the election just occurred in February of this year, we found that most of the disinformation cases were still domestically sourced, either by non-state actors that were paid by government actors or by certain political actors, but still very, very domestic-based. And as part of that, we found that there were differences in how disinformation was created compared to previous elections. So in the 2016 or 2019 presidential and regional elections, the game was a lot different from 2024. In elections prior to 2020, most of the disinformation that was created was text-based and image-based and distributed on platforms suited to text and images, platforms such as Instagram, Twitter, and Facebook, or on messaging apps like WhatsApp. Whereas in 2024, we saw a greater proliferation of disinformation incidents that involved Gen AI, in either visual or audio form. Three examples that I’ve noted down here: the first is video-based, a deepfake of our former president, who has passed away, in which he stated support for one of the political candidates. So that was a deepfake that was made; he was making a speech saying that you should support this certain candidate. Two other examples that were posted on TikTok were audio-based. One of them was an argument that occurred between a certain political candidate and the head of the party that supported him, which was very convincing for a lot of people. And the third one was one of the presidential candidates giving a speech in fluent Arabic when he did not in fact speak fluent Arabic. So these are three different ways where Gen AI has affected how disinformation has proliferated in Indonesia. And one thing that we found is that our election bodies that are trying to deal with these disinformation cases are still playing by the playbook from 2019 and previous elections. 
They were not adequately prepared to deal with how disinformation would proliferate in future elections because of the emergence of Gen AI. And I think this is another problem that will continue moving forward. So to wrap up the presentation, what’s the way forward after this? I’ve identified three things. Number one, and this is especially for an Indonesian context; of course, I can’t speak for every country, since everyone has a very different contextual information landscape. But I think for Indonesia specifically, a multi-stakeholder approach involving government, civil society, and social media platforms will be needed to comprehensively address

PIETER ALEXANDER PANDIE: disinformation, whether during elections or in other instances. Obviously, with Gen AI developing the way it is, it will be very difficult to create policy that will serve as guardrails for it, since with increasing geopolitical tensions and the tech competition between great powers, I think we’re going to see the rapid development of Gen AI. So I think we need to do what we can and involve as many stakeholders as possible in that regard. Number two, as I said before, emerging technologies will intensify the speed, nature, and spread of disinformation. While I think there are still cases now of Gen AI video and audio where it is a little bit easy to identify whether it’s fake or not, I think moving forward the capabilities of these technologies will improve to where it will be increasingly difficult even for the trained eye to detect whether something is disinformation or not. And lastly, and I think this is very important to say, especially for the Indonesian context, we need to strike a balance between effective governance of the information landscape and ensuring that democratic freedoms for civilians are still upheld. Because, drawing from previous research at the Safer Internet Lab, while there are policy responses from the government to address disinformation, oftentimes they can step on civil freedoms for expressing opinions and so on. So they don’t address disinformation, but they limit freedoms of expression and so on. I think that balance is of course a very difficult one to strike, but it’s something that we need to note moving forward. I think that will be it for my presentation. I’ll pass it back to Beltz.

BELTSAZAR KRISETYA: Thank you, Pieter. Before we move on to Dr. Fitri, allow me to delve further into something that you just said. Please paint a further picture of the users. You’ve explained really well how threat perceptions inhibit the effort against information manipulation. You’ve also painted a picture of the different topography of threats in Southeast Asia. But what does the receiving end look like? What do the users look like? Do Indonesian users serve as a fertile ground for disinformation, if you will? Or, because they have been the quote-unquote victims of disinformation by domestic actors, does that make them a fertile ground for foreign interference, in your opinion?

PIETER ALEXANDER PANDIE: Right. So I think with disinformation, and this can be extrapolated not just to Indonesians but to people from other countries as well, disinformation is most effective when it reinforces certain opinions or ideas that someone already has. This is something that I’ve spoken about with counterparts from the US and Australia as well: whether foreign or domestic, confirmation bias is a very big factor in how disinformation is spread. When you already have pre-existing notions of a certain idea or a certain political position, disinformation can reinforce those ideas and in fact make them stronger. And in the Indonesian context more specifically, we are one of the most populated countries in the world, number four right now, I think. Digitalization is occurring rapidly and a lot of the youth are becoming more and more exposed to social media. And while that increase has happened, digital literacy has not increased with it. That’s another challenge that we need to tackle: improving digital literacy for social media users in Indonesia, whether young or old, so they are able to differentiate between fact and fiction, real or hoax information, is another really important step forward. This was also part of a public opinion survey that SAIL conducted last year, and the numbers were quite low for the number of people who had participated in a digital literacy program held by the government. Even though these programs existed for the public, not a lot of people were aware of them and even fewer people were involved in them. So I think this is another challenge moving forward.

BELTSAZAR KRISETYA: Thank you. Moving on to Dr Fitriani, Senior Analyst at the Australian Strategic Policy Institute. Can the IT team prepare Dr Fitriani’s slides?

FITRI BINTANG TIMUR (FITRIANI): Hi Beltz, thank you. Good afternoon everyone in Riyadh; in Canberra it’s 1am, so apologies if I look pretty sleepy. Thank you for having me. It’s an honour to be able to speak at the Internet Governance Forum 2024. And I would like to extend my gratitude to CSIS, as well as Google, for bringing this timely discussion on an issue that is essential, I think, for our digital future and security. So my presentation today, if the IT team can manage to pull up the slides, focuses on how we can tackle information manipulation in Southeast Asia by drawing lessons on what works and what does not from the Australian experience. If I can go to the next slide, I’ll share how disinformation, and if we can go to the next slide, disinformation and foreign information manipulation are a global challenge. As we know, and as has been discussed, it undermines democratic processes, exacerbates societal divides, and weakens public trust in institutions. And I would argue here that Australia is similar to Southeast Asia, where threats are happening in fertile ground, where society is diverse in social and political views and opinions. In Australia, for example, we’re open to protests on the street, and we have a large population coming from different parts of the world who have often left their country of origin but still have a connection to it. Sometimes the government of that country actually conducts information operations to influence them to say good things about the country where they’re from. If I can go back to the previous slide, I want to share how disinformation actually exploits the sensitive issues of different political ideologies, and it is not uncommon for a state-sponsored actor to employ disinformation campaigns aimed at fostering division, confusion, and mistrust among the population, and, in the Australian experience, to wedge distrust against allies. 
It happens, for example; the top example is from the recent US election, where BBC News reported that Mr. Simeon Boikov, an Australian-born individual known as a pro-Russian spokesperson in Australia, paid the X account Alphafox $7,800 to post on its X (formerly Twitter) account a fake AI video that falsely claimed Haitian immigrants were engaging in voting fraud in the Georgia swing state. This poses a concern for Australia because such activities could tarnish Australia’s reputation and connection to its allies, implying that Australia could be considered a launchpad for foreign interference in other countries, so this can be concerning. And I don’t say that ASEAN countries might be like this, but with increasing geopolitical tension we can see that such a situation might happen in the future. Another example is how disinformation, as Pieter was sharing, has become more sophisticated and is leveraging social platforms. The second example, the photo below, is from Southeast Asia, where there are actually (I think we’re losing Fitri. Are you still with us?) bogus websites and channels that carry news produced by AI, in posts that are unfounded, really fake. And they used a drone or aircraft that was being used in Ukraine in an example about the South China Sea, actually trying to increase tension by saying that the US is sending anti-tank missiles to support the Philippines, and so on. And they were actually copy-pasting from ChatGPT, I think, because the posting actually said, “I am a language model AI and I cannot perform tasks that require real-time information.” But concerningly, this news on the South China Sea was shared widely; one of the posts was shared over 25 times. And I think we need to be aware of how this campaign is not only exacerbated by regional tension but poses significant risks to the security and stability of Southeast Asia. 
And here in my presentation, I would like to share how Australia’s recent experience could provide valuable insight for addressing this challenge, and perhaps offer measures to combat information manipulation. So if we can go to the next slide, I will share how Australia dealt with information manipulation in last year’s Voice to Parliament, which was a referendum on whether the First Nations, the Indigenous Aboriginal people of Australia, could have a direct, allocated seat in Parliament. But this election, unlike the previous Russian operations in Australia, was identified as allegedly linked with the Chinese Communist Party. TikTok and other social media were used to distribute false narratives that included racial segregation, with the narrative, as you can read there, saying that it is a way to actually change how Australia currently works. So, learning from how the Voice to Parliament failed to provide a stronger position for the First Nations people of Australia, the government and the people are trying to address this challenge in three main ways. If I can go to the next slide, the three main ways are: one, legislative effort; two, public and joint attribution; and three, fact-checking and awareness campaigns. So let me start with making the law. I know creating a law is a process that takes long, and I don’t know when the ASEAN 10, perhaps soon with Timor-Leste joining, the countries of Southeast Asia, can issue updated laws. But even in Australia, the proposed Combatting Misinformation and Disinformation Bill was actually shut down by people who disagreed, saying that perhaps this is just a way of trying to silence the people. So the disinformation and misinformation bill campaign actually received a disinformation campaign itself. 
One of the senators actually thanked Elon Musk, because Elon Musk shared the draft of this bill, saying that Australia was creating it, and after Elon Musk tweeted it, the government received a wave of responses. Behind that, a local parliamentarian said, if you want to disagree with this bill, this is how you do it. After that, there were 16,000 submissions saying the bill should not go ahead. So the bill failed, although the effort should be appreciated. The second way is public and joint attribution. Attribution can be difficult and sometimes cannot be done; for example, small and medium countries may ask, what is the benefit of saying that a major power is conducting an information operation against us when we cannot respond to it? The way Australia responded to the APT40 cyber threat activities was by calling on other like-minded states that had also become victims of this advanced persistent threat, which infiltrated government computer systems. So the government called on the US, UK, Canada, New Zealand, South Korea, and Japan to issue a joint attribution, naming a specific Chinese state-sponsored group. And it was done not as political attribution but as technical attribution. Maybe this is one way it can be done. The third way is fact-checking and awareness campaigns. The government endorses and supports these, although the effort is carried out by independent institutions such as RMIT FactLab and AAP FactCheck, which systematically debunk false claims. Other countries in the Southeast Asian region have such institutions, like Mafindo in Indonesia and VERA Files in the Philippines, for example, which maybe Maria will share about later.
If I can go to the next slide: why is this relevant for Southeast Asia? Southeast Asia also has diverse social and political environments that present unique vulnerabilities to information manipulation, so I think Australia's experience is similar to Southeast Asia's. The difference is that fragmented regulation hinders platform accountability. To give an example, the top right shows how journalism departments at several universities in Indonesia recently signed an MOU with Russia's state media Sputnik on how to do journalism, which can be a bit concerning. Meanwhile, other countries in Southeast Asia, for example Singapore, actually implement sanctions against Russia. So there is a discrepancy in how regulation addresses certain actors, as Pieter was saying, that conduct information operations in the region. This can be a concern, especially when limited public awareness exacerbates susceptibility. What has happened in the region, I think, as well as in Australia, is that the government is then called on to play a greater role in verifying what is fact and what is disinformation. The bottom photo shows Singapore's Law Minister Shanmugam clarifying how an Israeli diplomat was being insensitive in posting a comment about how many times the word Israel is mentioned in the Quran. It was insensitive because that post was shared in the heat of the Gaza conflict, but Singapore managed to contain it and preserve the harmony of the country by not escalating the issue. So I call for regional cooperation to counter shared threats, to communicate together and share information about what happens in each country.
And perhaps content-sharing agreements, for example, are something the region needs to discuss with each other, because having a content-sharing agreement with Sputnik, or with other countries' state media that might not be democratic or might not report certain issues correctly, might increase tension in the region unnecessarily. If I can go to the next slide, on recommendations for Southeast Asia: there is a diagram of what kind of content can be addressed and regulated. The measure is to address first the content that leads to the most harm, matched with an equal level of intervention. There are five steps I suggest for how Southeast Asia can address information operations or information influence. First, adopt clear regulation. If there is a violation on a certain social media platform, and the government has established clear and enforceable regulation, then that violation can be brought into criminal justice processes. For example, the regulation should include minimum content moderation standards that are published, and mechanisms for holding platforms accountable. The second is strengthening regional cooperation and intelligence sharing, as well as the capacity of governments to address disinformation campaigns. The third is to enhance media literacy; ASEAN actually did this with training of trainers under the education ministers, sharing approaches to countering disinformation, and we have a model in ASEAN. The next step is to translate that model into the different ASEAN languages. The fourth is to promote transparency by encouraging platforms to label trusted sources and, for example, to label whether an image or a video is AI-generated.
The more difficult case is perhaps voice: how can we label voice as AI-generated? But maybe we can find a way. The last step is to build a multi-stakeholder framework with civil society and the private sector, because the technology that hosts the disinformation is owned by the private sector, civil society does most of the checking, and the government supervises how the game is played. I think that's the end of my presentation. I thank you so much for the time given to me, and I hand it back to the moderator.

BELTSAZAR KRISETYA: Thank you, Fitri. Perhaps two minutes of elaboration: what lessons can Southeast Asian countries learn from the Australian experience in developing the code of conduct against misinformation and disinformation, and what parallels can Southeast Asian countries adopt, whether unilaterally or through regional organizations?

FITRIANI: A good practical question. One thing that can be done is to ask, for example, Google, as well as other platforms, to rank the most credible websites to show first, like news from the government. This actually happened during COVID: there was a label saying "this is news related to COVID-19," which helped people be more aware. If they can do that for COVID-19, I think they can do it for other things, like scams, which are quite prevalent not only in Australia but perhaps also in Southeast Asia, because there are a lot of scams circulating on platforms. When a platform is showing, for example, job opportunities, advertisements, or a discount or sale somewhere, it needs to carry a verification or government disclaimer saying please double-check before you input your details. Those two are the ones I recommend. Thank you.

BELTSAZAR KRISETYA: Thank you. Thank you, Fitri. Let's move on to Maria Elize from the University of the Philippines Diliman. You have 10 to 15 minutes, and the floor is yours.

MARIA ELIZE H. MENDOZA: Okay. Hi. Good day, everyone. Good evening from Manila. I'm sorry, I also cannot join you physically, but I'm pleased to be given this opportunity to join the panel. I am Assistant Professor Maria Elize Mendoza from the Philippines, and I'm here to present the case of the Philippines in terms of addressing information manipulation. I don't have slides, so I'll just go through the suggested talking points. The first is to provide an overview of the Philippine information landscape. One thing the Philippines has been known for over many years is that we are the social media capital of the world, and we are also known as the patient zero of global disinformation, almost like the petri dish or the lab experiment of disinformation. Filipinos are hyper-connected to social media and are among the top internet users in the world, especially on Facebook, which is the top social media application used in our country. Television, radio, and the internet are among the top three sources of people's information about politics and the government. But since the 2016 presidential campaign of former President Rodrigo Duterte, the country has seen an increase in the use of social media for political and electoral purposes. The 2016 presidential elections marked a pivotal shift towards social media-driven campaigning, and Duterte set the playbook for it. His victory was significantly influenced by coordinated digital campaigns on Facebook and YouTube, where content creators we have come to know as social media influencers or bloggers spread and amplified narratives supporting his policies, including the controversial and violent war against illegal drugs. In the 2019 midterm elections, which were a vote for several national and local positions, the same playbook was adopted, and the opposition suffered an extreme blow in the Senate race: no opposition candidate won in the senatorial election.
So all candidates allied with the Duterte administration won in the 2019 midterm elections. And in our most recent presidential elections, in 2022, the victory of Ferdinand Marcos, Jr., the son of the late dictator Ferdinand Marcos, Sr., was also largely attributed to the spread of online disinformation across different social media platforms. These contents spread on social media did not necessarily promote Marcos, Jr. as a candidate; rather, they twisted historical narratives, attempted to cleanse the family name of the Marcoses, who still have a lot to answer for regarding the atrocities committed during the dictatorship, and contributed to the demonizing of the political opposition. Disinformation during Duterte's time also attempted to demonize the political opposition, and this continued until the 2022 presidential elections. Investigative reports from civil society groups and independent media outlets show that Marcos, Jr. benefited the most from disinformation, at the expense of the main opposition candidate, our former vice president. At present, the Philippine information system is saturated with so-called independent media practitioners. These are the vloggers or influencers who are not formally affiliated with any political party. What's interesting is that these vloggers and influencers, who are followed, watched, and heard by millions of Filipinos, are not covered by existing media accreditation policies or the regulations surrounding journalists, for example. They exert more influence in shaping public opinion than the official campaign teams of candidates, because their online content is extensively consumed by the general public.
There is also evidence that they have been hired by politicians in previous elections, and that millions of pesos, amounting to thousands or even approaching millions of dollars, have been spent on these kinds of campaigns. What's troubling is that the social media domain of these vloggers and influencers remains largely unregulated. The contents are there, and the poor content moderation policies of platforms such as Facebook and YouTube aggravate the problem. As a result of this saturation of the information ecosystem, a survey conducted in 2022 found that the majority of Filipinos find it difficult to detect fake news. Similarly, despite the internet being a top source of information about politics and the government, the internet is also perceived as a top source of disinformation, mostly spread by influencers. Moreover, Filipinos have developed a growing distrust towards traditional media and journalists. These findings, together with the fact that Filipinos are among the top social media users in the world, are a dangerous combination. So how does foreign information manipulation and interference, or FIMI, enter the picture? We have had our share of FIMI in the past. China, as a sponsor of disinformation and propaganda, has been active since Duterte's time; Duterte was relatively friendlier to China compared to previous Philippine presidents. From 2018 to 2020, China launched a disinformation campaign known as Operation Naval Gazing, an attempt by China to penetrate the Philippine information space. What happened was that a network of fake accounts originating from China promoted and supported the Duterte family and Imee Marcos, the sister of the current president. From 2018 to 2020, these fake accounts attacked government critics, Duterte critics, including opposition senators and the Philippine media.
However, platforms such as Facebook have taken down some of these accounts, pages, and groups linked to China for coordinated inauthentic behavior. So, in a nutshell, FIMI has not yet made an impact comparable to the domestic level of influence operations. A media outlet in the Philippines named SMNI is pro-Duterte and pro-China, but it was recently denied a legislative franchise to operate on television, so it mostly operates on social media. In the Philippines, disinformation and influence operations are mostly domestically created and spread by social media influencers, bloggers, celebrities, digital workers, independent media practitioners, or even ordinary Filipinos who make a living out of creating and spreading disinformation or hyper-partisan content online. The last part is actually interesting, the hyper-partisan content, because not all of it is fake or false. Some of it is fact, but exaggerated and twisted to suit a political agenda. Still, the threat of FIMI must not be disregarded, because we have had a glimpse of it in the form of pro-China content. One thing we must also be wary of is the potential use and misuse of generative AI in the upcoming elections. Very recently, a few months ago, our own president was a victim of this: an AI-generated audio of him ordering an attack against China, in light of the West Philippine Sea issue, was spread and flagged by the government as false. Given this, how has the Philippine government worked to address these challenges? Over the years, the Philippine government has failed to effectively address electoral disinformation. Three electoral cycles have passed since 2016, yet we are still facing a worsening problem, and we have an upcoming election in May 2025. Legislative proposals to combat false information and regulate social media campaigns have not seen any progress.
As a result, civil society actors, particularly media groups and academic institutions, have shouldered the responsibility of ensuring the integrity of facts by launching fact-checking initiatives, digital literacy campaigns, and voter education programs. However, without robust government support, a comprehensive legal framework, and systemic changes, the impact of these initiatives is limited. It was only last September 2024 that the country's election commission released a resolution providing guidelines on the use of artificial intelligence and penalties for the use of mis- or disinformation in elections, just in time for the 2025 elections. This September 2024 resolution also establishes the COMELEC's, or Commission on Elections', formal collaboration networks with civil society actors. However, this is very late, and it remains to be seen whether it will be implemented effectively, given the extent of the problem we have now. On the other hand, social media platforms such as Meta and TikTok have expressed their commitment to cooperate in the upcoming elections. This is good news, but proactive content moderation measures and accountability must still be demanded from, and exercised by, the social media platforms. At present, contents that are obviously false and hyper-partisan, even those posted in the last electoral cycle, are still present on these platforms. They have not been taken down despite multiple reports, so these content moderation policies really have to be looked at. Moving forward, COMELEC must also sustain and strengthen its engagement with civil society. Civil society actors alone cannot solve this problem; they have been shouldering the burden of fighting disinformation for the longest time, so strong cooperation between the government and civil society is needed. Moreover, cybersecurity infrastructure in the country must also be strengthened.
Outside of elections, Filipinos are highly susceptible to online scams, fraud, banking scams, and phishing attempts. Multiple government websites have also been hacked recently, and there were instances of data breaches in government agencies where millions of records have allegedly been sold on the dark web. Lastly, to end my short presentation: in the long run, digital and media literacy must be fully incorporated into basic and higher education, because at present, under the Philippine education system, only students in their last two years of high school have media literacy in their curriculum; for the rest it is not really institutionalized. This needs to be expanded across all levels of education to fully empower citizens in the fight against disinformation and information manipulation. So that's my short presentation on the case of the Philippines. I'm very much looking forward to the questions and the discussion later.

BELTSAZAR KRISETYA: Thank you so much, Maria. Again, another quick question. I remember that during COVID times there was an influence operation, allegedly run by the Pentagon, targeting the Philippine public to sow disbelief in Chinese-made vaccines. And the Filipino public bought that idea; they chose to wait for a non-Chinese vaccine instead, which had real consequences for Philippine public health at the time. So would you say that, in the realm of influence operations, what happens in the digital realm serves as an extension of geopolitical realities, particularly in the Philippines' relations with the great powers?

MARIA ELIZE H. MENDOZA: Probably yes, because in another forum I attended, there were some analysts who looked at posts in China related to the Philippines. Some posts were actually discrediting the US-Philippines alliance while still supporting the Dutertes, because Duterte is known as a president friendly to China, and Marcos Jr. is not exactly that; it is widely perceived that Marcos Jr. leans more towards the United States. So there are posts being spread on Chinese social media discrediting Marcos because he is pro-US, and discrediting the Philippines-US alliance. So yes, I think these kinds of disinformation can also be related to geopolitical realities.

BELTSAZAR KRISETYA: Thank you. So we've had case studies from Indonesia, Australia, and the Philippines, and none of them seems to bear good news. So we rely on you, Dr. Bich Tran, from Vietnam. What does the situation look like in Vietnam?

BICH TRAN: Thank you, and I'm grateful for the opportunity to be here. First, I would like to give a brief description of Vietnam's information landscape. There are three main components: domestic media, foreign media with Vietnamese-language services, and social media. In terms of domestic media, most outlets are state-owned or related to the government, so they are heavily regulated by the Communist Party of Vietnam and, of course, adhere to official narratives. In terms of foreign media with Vietnamese-language services, there are actually several, but I will give some examples from China and from Western media. For China, there are the China Global Television Network, or CGTN, and the PeopleGov Radio and TV, both of which have Vietnamese-language services. For Western media, from the UK there is the BBC, and there are US-funded outlets as well, like Voice of America and Radio Free Asia. The third component is social media. Unlike in China, you can access many Western platforms in Vietnam; according to several sources, Facebook, YouTube, and Instagram are among the top social media in Vietnam. Besides that, there is a Vietnamese platform called Zalo, a messaging app like WhatsApp, and TikTok is also very popular. So there are many social media platforms that the Vietnamese can access and use. In terms of foreign information manipulation and interference, I will focus on the foreign interference part. In Vietnam, because of its political system, FIMI in elections is actually not a big issue. The Vietnamese government is mostly concerned about China's disinformation about the South China Sea disputes, and also what it calls peaceful evolution from the West.
Peaceful evolution is defined, roughly, as efforts by external forces seeking regime change without the use of militaries. In terms of South China Sea issues, China has a lot of disinformation out there, but related to FIMI, the first thing I would say is that sometimes they misquote Vietnamese leaders. For example, in 2016, only two days after the ruling of the arbitral tribunal in the case initiated by the Philippines against China, the Vietnamese prime minister met his Chinese counterpart in Mongolia. After the event, many Chinese media and newspapers reported that the Vietnamese prime minister had said that Vietnam supported China's stance regarding the ruling. But he didn't say so. The Vietnamese media, having received permission from the government, immediately clarified this: during the meetings, the Vietnamese prime minister mentioned things like the 2011 agreement between Vietnam and China on principles for settling sea-related issues, the Declaration on the Conduct of Parties, the code of conduct itself, and UNCLOS, and he never said anything about Vietnam supporting China's stance. This kind of false information can undermine the legitimacy of the Vietnamese Communist Party; that is the concern here. Another of China's narratives tries to drive a wedge between Vietnam and its Western partners by saying that close relationships with external powers will not help Vietnam in the South China Sea disputes. Then there is peaceful evolution: the Vietnamese government perceives any kind of criticism of the Communist Party as peaceful evolution.
Sometimes this takes the form of narratives, for example, that the government is too weak in responding to China's behavior in the South China Sea, which try to undermine its legitimacy. Sometimes even the promotion of human rights or democracy can be seen as peaceful evolution. Other narratives advise the Vietnamese people that they should be anti-China or pro-US, and this kind of discourse can cause disunity in society. And sometimes, around the South China Sea disputes, certain groups urge people to stand up and join protests; the Vietnamese government is concerned that protests against China could lead to other issues and cause instability in society. Here I just want to emphasize that between disinformation and FIMI there is a very thin line. They are related, but they are two different concepts. In the case of Vietnam, perceived FIMI can also be quite significant, because the government and the Communist Party of Vietnam have their own concerns. For that reason, I think it is sometimes very difficult for them to strike the balance between political stability and freedom of speech. In terms of what the Vietnamese government has done to deal with FIMI, I focus on the government because there is not much from civil society itself. The government has repeatedly rebuked China's false narratives on the South China Sea, either through the spokespersons of the Ministry of Foreign Affairs or through state-owned media. They try to do that every time they discover any disinformation from China.
And to deal with peaceful evolution, in 2016 the Vietnamese Ministry of Defense created what it calls Task Force 47 to counter wrong views on the internet. Then in 2017, only one year later, it created a cyber command. It is interesting because, compared to some other cyber commands, the Vietnamese one is also in charge of countering peaceful evolution. I will end here and open it up for discussion. Thank you.

BELTSAZAR KRISETYA: Thank you, Dr. Bich. Before we get to the discussion part of the session, one little question for you. You mentioned the balance between regulation and freedom of expression, but I believe that's not the only balance the government faces: there is also the balance between countering information manipulation and economic dependence, or interdependence, on a certain actor. So how does the Vietnamese government balance this dependence with combating foreign interference?

BICH TRAN: Can you hear me now? Yes? Okay. I forgot to mention that, regarding FIMI, people in Vietnam can still access Chinese media, the Chinese newspapers with Vietnamese-language services, but they cannot access certain other media, for example the BBC or Voice of America. This speaks to what Pieter and Fitri already mentioned: I think the government knows that no matter what the Chinese say about the South China Sea, the Vietnamese people will not believe it, so they are not too concerned about Chinese media. But Western media is a different issue, because Vietnam is a one-party state, so I think they are a little more sensitive in that area. And to your question about economic dependence on certain partners, I think that could be one of the reasons as well.
But I believe that what I mentioned earlier is the main reason why, because for Chinese media, there’s not much worry. Thank you.

BELTSAZAR KRISETYA: I believe we have time for at least three questions. For anyone who wants to raise a question, please make yourself identifiable, and our staff will come to you.

AUDIENCE: Hello. Thank you for your presentations; they were very insightful. My name is Luisa. I am an advisor for the German-Brazilian Digital Dialogue, an initiative to promote digital transformation, and we also address disinformation as a topic. I haven't had much contact with the Southeast Asian context so far, so I wanted to ask whether you have any cases of disinformation having effects on the physical world, so to say. In Brazil, we had the attack on the Supreme Court, and in South Africa, I know there have been complications with the Electoral Commission, et cetera. Are there any records of this in Southeast Asian countries as well? Thank you.

BELTSAZAR KRISETYA: So that's one question on the impact of disinformation on real-life incidents. Shall we gather two more questions? Please, sir. And then the lady in the back. Okay. Thank you.

AUDIENCE: My name is Koichiro, from Japan. I'm a cybersecurity expert, and I have a few questions. First of all, regarding Fitriani's presentation, I feel there is a contradiction: on one hand, we need to expect platforms to do more in this regard, while at the same time countries like Australia, the United States, and others have already decided to ban certain online platforms from our markets. So I'd like to ask any panelist for their view on which is better: expect more from platforms, or ban them from your own economy? Of course, some of these initiatives are funded by one giant platform, so how can you trust one platform, how can you say one platform is more trustworthy than others? My last question: there is a movement to revitalize the discussion at the ASEAN Regional Forum. While listening to all the presentations, I was wondering which is the best platform to discuss our next steps on FIMI and disinformation, since at the ASEAN Regional Forum we have China, Russia, and others. Of course, the IGF might be a decent platform as well. But I'd also like to ask the panelists where we should go for our next round of discussion. Thank you very much.

BELTSAZAR KRISETYA: Thank you, Koichiro-san. And the last question for this round, please.

AUDIENCE: Hi, my name is Nidhi, and I have a question. When it comes to dealing with misinformation, we've all discussed digital literacy campaigns and technical solutions along those lines. But as you talked about, a large part of misinformation comes from confirmation bias, and it's also worth considering that the people with the most power actually tend to have a greater role in spreading it. So even if you did manage to achieve digital literacy, for which I think there are a lot of technical solutions, this is at this point a larger sociological problem: if you're getting views for it, or getting power out of it, there's no reason for anybody to stop putting out disinformation, and even knowing it's wrong doesn't stop people from believing it. So unless you have some way of tackling that larger sociological problem of what has become alternative truth, it won't really matter what technical solutions you come up with. But I'm not sure how you would go about doing that, because nobody has an incentive to do it right now.

BELTSAZAR KRISETYA: Thank you for the intervention. Let's take these three questions before we open another round. The first question, from Luisa: has disinformation ever transformed into real-life incidents in Southeast Asia? From Koichiro-san, specifically: which is better, should platforms do more, or should we ban them entirely? And a question to all the speakers: what would be the best regional platform to discuss this issue further, whether a multilateral platform such as the ASEAN Regional Forum or a multi-stakeholder forum such as the APrIGF, for example. And some remarks from Nidhi: no matter what technical solutions are available, there are key opinion leaders who can cut through them and play to the confirmation bias of the audience, so is there a means for us to curb the influence of these people in power, whether in government or in tech platforms? Please, Pieter, do you want to go first?

PIETER ALEXANDER PANDIE: Sure. For the first one, on cases of disinformation affecting the physical world, there is the case we discussed earlier of the US influence operation in the Philippines, which was declassified by the Pentagon and reported in a Reuters investigation. The influence operation was more or less an attempt to sow distrust of Chinese-made vaccines in the Philippines, which resulted in people not taking the vaccine and waiting for Western options. So that’s a really big example of a foreign entity outside Southeast Asia running an influence operation that had real-life physical effects. I’m sure there are others as well, but off the top of my head, that’s a big one we could reference. Then, to the question from Koichi Rosan about the best platform to discuss FIMI in the Asia Pacific: I think the conversation shouldn’t start with which platform is best. We should take the discussion a step back, to whether countries in Southeast Asia or the Asia Pacific share the same threat perceptions of FIMI. I can speak from a Southeast Asian perspective, where I don’t think everyone is on the same page as far as FIMI is concerned. I’ve said before that ASEAN has a cyber security cooperation strategy and a lot of different cyber initiatives, but they mostly focus on cyber crimes: financial scams, deepfakes, financial fraud, and so on. For FIMI, especially in the Asia Pacific, where you have some victims and some threat actors, both government and non-state, getting everyone on the same page first is the real challenge, because everyone has different threat perceptions and different ways of addressing FIMI.
And on the intervention from our colleague about confirmation bias and the broader socio-psychological problem with disinformation, I fully agree with your statement, and I think it’s why Fitri and I, along with Bich and Maria, are proposing that this research take on a more multi-stakeholder, multi-disciplinary approach. Most of us on this panel are IR or cyber security specialists, and involving people from other lines of academia and beyond would be a good step forward in understanding the problem more broadly.

BELTSAZAR KRISETYA: Fantastic.

BELTSAZAR KRISETYA: Bich, you want to go next?

BICH TRAN: I would like to add to what Pieter said regarding Nidhi’s question. Yes, even though certain biases give readers more of an appetite for disinformation, I still believe that digital literacy campaigns will help, especially for those who haven’t formed their opinion yet; the skills to identify trusted sources will serve them on a lot of issues.

BELTSAZAR KRISETYA: Thank you. Fitri, specific question on platforms.

FITRI BINTANG TIMUR (FITRIANI): Thank you. In Australia we have the Australian Communications and Media Authority, ACMA, whose voluntary code calls for digital media platforms to develop and report on safeguards to prevent harms that may arise from the propagation of mis- and disinformation on their services. So it’s a voluntary code. But there’s a concern about what happens if the code does not work, especially as we know there’s a certain platform that, after a rich person bought it, is being used for disinformation. That’s why in Australia there was a call for a misinformation and disinformation bill, but it failed to be tabled and was shut down. So whether we call to regulate platforms or do away with them, I think having a voluntary code is very mature, and if we expect platforms to show goodwill in doing their business, they need to be able to show that they can prevent harm. But we know there are platforms like Telegram that very rarely respond to government requests even when there is information about terrorism and the like, which is quite concerning. So perhaps we can do both: we can allow the voluntary code to let platforms safeguard themselves, and when that doesn’t work, the government needs to have tools to intervene. That’s one. And if I may answer how we can discuss this in a regional platform: in ASEAN we have the ASEAN Task Force on Countering Fake News, which managed to issue a guideline on how governments can manage and combat fake news. The task force was only established last year, and the guideline is also recent. So if ASEAN can do it, I encourage other regions to do the same, because that guideline actually sets out the pathways a government can take when fake news is detected.
So that’s my insight, my suggestion. Thank you.

BELTSAZAR KRISETYA: Thank you, Fitri. Maria, do you want to respond to any of the questions?

MARIA ELIZE H. MENDOZA: Okay, so hi. Yes, on the effects of disinformation on the physical world, the vaccine example is a good one. Aside from the campaign against Chinese vaccines, disinformation surrounding the side effects of vaccines in general has also had physical effects here in the Philippines, because there has been a high level of vaccine hesitancy in recent years stemming from another vaccine issue before COVID. So that’s one. And also, the lies that the Marcos family spread about themselves were actually cited by their supporters as the reasons for voting for them, especially when they attended campaign rallies and were interviewed about why they voted for the Marcoses. So I think that’s also an effect of disinformation on the physical world: people wholeheartedly believe these lies spread on social media. And regarding confirmation bias, an additional insight I can provide is that tech platforms still have a responsibility on this issue because of how they control the algorithm. We know that if we react to or comment on the same kinds of posts, those posts will keep appearing in our feeds. So if hyper-partisan content keeps appearing in our feeds due to the algorithm, it worsens the problem. With that, tech platforms also have a responsibility with regard to the transparency of the algorithm, or controlling the algorithm in general, because Facebook, for instance, has been under fire for allegedly prioritizing posts that have more angry reactions. Posts that are really emotionally charged get more exposure in people’s news feeds, and in that way platforms also contribute to the problem.
So even if it’s a sociological issue, Pieter is correct that a multi-sectoral approach involving digital platforms and society would still be an important step in solving this problem. Thank you.

BELTSAZAR KRISETYA: Thank you, Maria. One or two more questions before we close the session. Okay, the lady in the back.

AUDIENCE: Hi, my name is Eliza. I’m from Vietnam and working in Germany. My question is actually addressed to the first speaker, but I welcome responses and contributions from the other speakers as well. In your research, how would you define FIMI? Do you include people from diasporic communities as perpetrators of disinformation? And in your findings, you mentioned that there are state and non-state actors. Can you please give us an example of non-state actors? And did you also find evidence of the participation of the Islamic State in spreading disinformation in the case of Indonesia? I also want to add one input to the earlier question about online disinformation and real-life incidents. I must emphasize that in the case of Vietnam, only the government can decide what is disinformation or not. In a one-party state like Vietnam, the legislative, executive, and judiciary powers belong to the state, which means the heads of all state agencies must be Communist Party members. So when they say something is disinformation, they have the power to punish. I would say that on a monthly basis there are cases where online disinformation, whether it’s just a small post critical of a state-backed company or a small video mimicking a state leader, can be punished, and the highest punishment in Vietnam is 20 years of imprisonment. So I would just say that disinformation in Vietnam is very hard to detect. Oh, sorry, I forgot one question for Fitri. How do you see the political will of ASEAN in fighting disinformation spread and created by governments? You talked about ASEAN fighting disinformation in general. How about disinformation spread deliberately by a government? Thank you.

BELTSAZAR KRISETYA: Thank you. And one last question from the gentleman in the back. Preferably a quick question.

AUDIENCE: Thank you so much. I’m Fawaz from the Centre for Communication Governance, New Delhi. We’ve been having very similar conversations, so it was very interesting and very useful to join this. We also had a general election this year. And one problem that I think we are facing across the board, which the last question also spoke to, is that the discourse around disinformation and misinformation has itself become weaponized: fact-checking and counter-disinformation narratives are often appropriated by the very people who sometimes might be causing real-world harm. So this is just a short intervention to say that we are seeing very real-world harm linked to online disinformation, and at the same time, the lack of the kind of multi-stakeholder research we’ve been talking about is making this kind of appropriation possible. We really do need not just multi-stakeholder but also inter-regional cooperation to bring out how disinformation is happening, how it’s related to online events, and also how the discourse is being misappropriated. Thank you.

BELTSAZAR KRISETYA: Thank you, Fawaz. I think the parallel between Fawaz’s and Eliza’s interventions is the question of who the arbiter of truth is: with which power should we endow the government, civil society, or tech platforms to be the arbiter of truth, and what kind of multi-stakeholder or multilateral cooperation can be built toward that? A last response from each of the speakers on these two interventions. You can go first. Okay. Hello?

PIETER ALEXANDER PANDIE: Can you hear me? Okay. Yes. Great. So, addressing the question about defining FIMI: the way we’ve defined FIMI is as a pattern of mostly manipulative information that threatens, or has the potential to negatively impact, values, procedures, and political processes in a country, conducted by a foreign state or non-state actor and their proxies. Still, while conducting this research we also held focus group discussions with various experts from both Southeast Asia and external countries, and what we found through those discussions is that FIMI remains a very hard thing to define. FIMI was first coined by the European Union External Action Service, and that’s where we drew the first definition. But another step forward we can take is a more Southeast Asian or Asia-Pacific-specific definition of FIMI, and one of the research directions we could pursue is finding a definition of FIMI that is context-specific and more palatable, I would say, or more applicable to different information landscapes. So I understand that it is a very difficult thing to define. Another question was about the role of the Islamic State in information operations in Indonesia. Our research period was 2019 to 2024, and off the top of my head, while we’re still early in the data set and still adding cases to it, I don’t think we’ve found cases of the Islamic State perpetrating influence operations in Indonesia, although, with the disclaimer that this is still very early in the data set, we could find reported cases later on. So far, I don’t think we’ve found any. One explanation could be that, broadly speaking, terrorism and terrorist groups in Indonesia have seen a downturn in activity in recent years, though I’m not a terrorism studies expert.
I could be very wrong in that regard, but that is, I think, a broad assumption I could make as to why that has not occurred. Thank you.

BELTSAZAR KRISETYA: Bich, you want to add to that?

BICH TRAN: I just want to say something about your question about who we should give the power to be the arbiter of truth. In the discussion we mentioned digital literacy campaigns. If we make them mandatory in schools, they will of course reach more people, but then whose textbook would we use? What kind of curriculum, and whose definitions? That’s actually the very big issue that we…

BELTSAZAR KRISETYA: There you go. Okay. Okay. Fitri and Maria, quick response.

FITRI BINTANG TIMUR (FITRIANI): Thank you. Difficult question. The ASEAN guideline on management of government information in combating fake news and disinformation in the media actually states, strategically, the perspective and stand of the ASEAN governments. Interestingly, there’s a chapter there, if you want to take a look, on the types of approaches governments take to disinformation: the whole-of-government approach, the strategic government approach, or a combination. The whole-of-government approach involves different agencies and civil society as well. But in the strategic government approach, as Beltsazar mentioned, the government is the one that decides what is true and what the people can listen to. I think ASEAN embraces that and is aware of it. Being aware of it and having multiple ways of approaching the issue would not alienate countries that are non-democratic but are also struggling with disinformation, or with foreign information coming from abroad that tries to wedge ASEAN countries against each other. So that’s why ASEAN is actually trying to address disinformation.

BELTSAZAR KRISETYA: Thank you. Maria, one last remark.

MARIA ELIZE H. MENDOZA: Yes, I would just like to agree with the last intervention regarding cooperation within the region, whether in Southeast Asia or the Indo-Pacific, because in the case of the Philippines, we really have a lot to catch up on in terms of addressing disinformation. As I kept mentioning in my presentation, there are no clear legislative frameworks at present to address this problem, but we also have to be very careful about passing legislation that might infringe on freedom of speech, because, as far as I know, there are some countries with anti-fake-news laws that are being weaponized by the current government, so that anything that is dissent equals fake news. We must be careful about that. So we really have a lot to learn from our neighbors in Southeast Asia and the greater Indo-Pacific region in terms of addressing this problem, and I do agree that regional cooperation is important. And I think a single country like ours engaging with tech platforms and calling on them to be more accountable might have less effect than multiple countries coming together and really demanding action from the tech platforms; that latter strategy might be more effective in really addressing this problem. So that’s it from my end. Thank you.

BELTSAZAR KRISETYA: Thank you, Maria. Thank you to all the speakers and to the participants for engaging in the discussion. I will not conclude, simply because, one, there is not much time, and two, the only concluding remark I can deliver is that we have no option to isolate the disinformation issue solely as an information issue: when it becomes an electoral issue, we have to answer it through electoral means; when it becomes an economic and trade issue, we also need to consider the participation of economic and trade actors; and so on and so forth. So the discussion needs to continue beyond this room and beyond the region of Southeast Asia. Please feel free to drop by our booth whenever you have the time to learn more about our work and potentially cooperate on the next research. Thank you very much for your participation. Please join me in giving a round of applause to the speakers, and best of luck with your IGF participation. Goodbye. Thank you.

PIETER ALEXANDER PANDIE

Speech speed

171 words per minute

Speech length

2225 words

Speech time

776 seconds

Indonesia faces increasing use of AI-generated disinformation in elections

Explanation

In the 2024 Indonesian elections, there was a greater proliferation of disinformation incidents involving generative AI, particularly in visual and audio forms. This marks a shift from previous elections where disinformation was mostly text-based and image-based.

Evidence

Examples include a deepfake video of a former president supporting a candidate, an audio of an argument between a candidate and party head, and a candidate giving a speech in fluent Arabic when they couldn’t speak the language.

Major Discussion Point

Information landscape and foreign interference in Southeast Asian countries

Indonesian election bodies are unprepared to deal with AI-generated disinformation

Explanation

Election bodies in Indonesia are still using strategies from previous elections to deal with disinformation. They were not adequately prepared for the proliferation of AI-generated disinformation in the 2024 elections.

Major Discussion Point

Government and societal responses to disinformation

Agreed with

MARIA ELIZE H. MENDOZA

Agreed on

Importance of digital literacy

Difficult to define and attribute foreign information manipulation and interference

Explanation

FIMI (Foreign Information Manipulation and Interference) is challenging to define and attribute. While the research used a definition based on that of the European Union External Action Service, there is a need for a more context-specific definition for Southeast Asia and the Asia-Pacific region.

Evidence

The research conducted focus group discussions with experts from Southeast Asia and external countries, revealing the complexity of defining FIMI.

Major Discussion Point

Challenges in combating disinformation

Agreed with

BICH TRAN

MARIA ELIZE H. MENDOZA

Agreed on

Challenges in defining and combating foreign interference

Differed with

BICH TRAN

Differed on

Perception of foreign interference threats

Multi-stakeholder approach involving government, civil society and platforms needed

Explanation

A comprehensive approach to addressing disinformation requires involvement from government, civil society, and social media platforms. This is particularly important in the context of rapidly developing AI technologies and increasing geopolitical tensions.

Major Discussion Point

Recommendations for addressing disinformation

Agreed with

MARIA ELIZE H. MENDOZA

FITRI BINTANG TIMUR (FITRIANI)

Agreed on

Need for multi-stakeholder approach

BICH TRAN

Speech speed

128 words per minute

Speech length

1490 words

Speech time

694 seconds

Vietnam is concerned about China’s disinformation on South China Sea disputes

Explanation

The Vietnamese government is primarily concerned about China’s disinformation regarding the South China Sea disputes. This includes instances of Chinese media misquoting Vietnamese leaders and spreading false narratives about Vietnam’s stance on regional issues.

Evidence

An example was given of Chinese media falsely reporting that the Vietnamese prime minister supported China’s stance on a 2016 arbitral tribunal ruling, which the Vietnamese government had to immediately clarify.

Major Discussion Point

Information landscape and foreign interference in Southeast Asian countries

Differed with

PIETER ALEXANDER PANDIE

Differed on

Perception of foreign interference threats

Vietnam created Task Force 47 to counter “wrong views” on the internet

Explanation

In 2016, the Vietnamese Ministry of Defense established Task Force 47 to counter what they consider “wrong views” on the internet. This was followed by the creation of a cyber command in 2017, which is also responsible for countering “peaceful evolution”.

Major Discussion Point

Government and societal responses to disinformation

Differed with

MARIA ELIZE H. MENDOZA

Differed on

Role of government in combating disinformation

Balancing political stability and freedom of speech is challenging

Explanation

The Vietnamese government faces difficulties in striking a balance between maintaining political stability and ensuring freedom of speech. This challenge is particularly evident in their efforts to combat what they perceive as foreign information manipulation and interference.

Major Discussion Point

Challenges in combating disinformation

Agreed with

PIETER ALEXANDER PANDIE

MARIA ELIZE H. MENDOZA

Agreed on

Challenges in defining and combating foreign interference

MARIA ELIZE H. MENDOZA

Speech speed

149 words per minute

Speech length

2313 words

Speech time

928 seconds

Philippines information ecosystem is saturated with “independent” media practitioners spreading disinformation

Explanation

The Philippine information system is saturated with so-called independent media practitioners, including vloggers and influencers, who are not formally affiliated with political parties. These individuals have significant influence in shaping public opinion and are not covered by existing media accreditation policies.

Evidence

There is evidence that these influencers have been hired by politicians in previous elections, with millions of pesos spent on such campaigns.

Major Discussion Point

Information landscape and foreign interference in Southeast Asian countries

Philippine government has failed to effectively address electoral disinformation

Explanation

Despite three electoral cycles since 2016, the Philippine government has not effectively addressed electoral disinformation. Legislative proposals to combat false information and regulate social media campaigns have not progressed.

Evidence

Civil society actors, particularly media groups and academic institutions, have had to shoulder the responsibility of ensuring the integrity of facts through fact-checking initiatives and digital literacy campaigns.

Major Discussion Point

Government and societal responses to disinformation

Agreed with

PIETER ALEXANDER PANDIE

BICH TRAN

Agreed on

Challenges in defining and combating foreign interference

Differed with

BICH TRAN

Differed on

Role of government in combating disinformation

Lack of digital literacy exacerbates susceptibility to disinformation

Explanation

The rapid digitalization in the Philippines has not been accompanied by an increase in digital literacy. This gap makes the population, especially the youth, more susceptible to disinformation on social media platforms.

Evidence

A public opinion survey conducted by SAIL last year showed low numbers of people participating in government-held digital literacy programs, with many unaware of their existence.

Major Discussion Point

Challenges in combating disinformation

Agreed with

PIETER ALEXANDER PANDIE

Agreed on

Importance of digital literacy

Digital literacy must be incorporated into education at all levels

Explanation

To combat disinformation effectively, digital and media literacy must be fully incorporated into basic and higher education in the Philippines. Currently, only students in their last two years of high school have media literacy in their curriculum.

Major Discussion Point

Recommendations for addressing disinformation

Agreed with

PIETER ALEXANDER PANDIE

Agreed on

Importance of digital literacy

FITRI BINTANG TIMUR (FITRIANI)

Speech speed

114 words per minute

Speech length

2782 words

Speech time

1453 seconds

Australia experiences foreign interference attempts, particularly from China

Explanation

Australia has faced foreign interference attempts, with a notable focus on China’s activities. These attempts have included disinformation campaigns aimed at fostering division, confusion, and mistrust among the population and at driving distrust of allies.

Evidence

An example was given of an Australian-born individual known as a Russian spokesperson in Australia paying for a fake AI video claiming Haitian immigrants were engaging in voting fraud in Georgia, a US swing state.

Major Discussion Point

Information landscape and foreign interference in Southeast Asian countries

Australia is developing voluntary codes for platforms and considering legislation

Explanation

Australia has implemented a voluntary code through the Australian Communications and Media Authority (ACMA) that calls for digital media platforms to develop and report on safeguards against mis- and disinformation. There have also been attempts to introduce legislation, though a recent bill failed to pass.

Evidence

The misinformation and disinformation bill in Australia faced opposition and was ultimately shut down.

Major Discussion Point

Government and societal responses to disinformation

Regional cooperation and intelligence sharing should be strengthened

Explanation

To combat foreign information manipulation and interference effectively, there is a need for enhanced regional cooperation and intelligence sharing. This includes improving the capacity of governments to address disinformation campaigns.

Major Discussion Point

Recommendations for addressing disinformation

Agreed with

PIETER ALEXANDER PANDIE

MARIA ELIZE H. MENDOZA

Agreed on

Need for multi-stakeholder approach

BELTSAZAR KRISETYA

Speech speed

147 words per minute

Speech length

2319 words

Speech time

942 seconds

Confirmation bias makes people susceptible to believing disinformation

Explanation

Disinformation is most effective when it reinforces existing opinions or ideas that someone already holds. This confirmation bias plays a significant role in how disinformation spreads and is believed by individuals.

Major Discussion Point

Challenges in combating disinformation

Balance needed between effective governance and ensuring democratic freedoms

Explanation

There is a need to strike a balance between effective governance of the information landscape and ensuring that democratic freedoms for civilians are upheld. Policy responses to address disinformation should not infringe on civil liberties and freedom of expression.

Major Discussion Point

Recommendations for addressing disinformation

Agreements

Agreement Points

Need for multi-stakeholder approach

PIETER ALEXANDER PANDIE

MARIA ELIZE H. MENDOZA

FITRI BINTANG TIMUR (FITRIANI)

Multi-stakeholder approach involving government, civil society and platforms needed

Philippine government has failed to effectively address electoral disinformation

Regional cooperation and intelligence sharing should be strengthened

The speakers agree that addressing disinformation requires collaboration between government, civil society, and tech platforms, as well as regional cooperation.

Challenges in defining and combating foreign interference

PIETER ALEXANDER PANDIE

BICH TRAN

MARIA ELIZE H. MENDOZA

Difficult to define and attribute foreign information manipulation and interference

Balancing political stability and freedom of speech is challenging

Philippine government has failed to effectively address electoral disinformation

The speakers highlight the difficulties in defining foreign interference and balancing efforts to combat it with maintaining freedom of speech and political stability.

Importance of digital literacy

PIETER ALEXANDER PANDIE

MARIA ELIZE H. MENDOZA

Indonesian election bodies are unprepared to deal with AI-generated disinformation

Lack of digital literacy exacerbates susceptibility to disinformation

Digital literacy must be incorporated into education at all levels

The speakers emphasize the need for improved digital literacy to combat disinformation, particularly in the face of evolving technologies like AI.

Similar Viewpoints

Both speakers highlight the increasing sophistication of disinformation campaigns, particularly those originating from foreign actors, and their potential impact on domestic politics and regional disputes.

PIETER ALEXANDER PANDIE

BICH TRAN

Indonesia faces increasing use of AI-generated disinformation in elections

Vietnam is concerned about China’s disinformation on South China Sea disputes

Both speakers discuss the challenges posed by actors spreading disinformation, whether domestic ‘independent’ practitioners or foreign state-sponsored efforts, and the need for effective countermeasures.

MARIA ELIZE H. MENDOZA

FITRI BINTANG TIMUR (FITRIANI)

Philippines information ecosystem is saturated with “independent” media practitioners spreading disinformation

Australia experiences foreign interference attempts, particularly from China

Unexpected Consensus

Limitations of technical solutions

PIETER ALEXANDER PANDIE

MARIA ELIZE H. MENDOZA

BELTSAZAR KRISETYA

Indonesian election bodies are unprepared to deal with AI-generated disinformation

Lack of digital literacy exacerbates susceptibility to disinformation

Confirmation bias makes people susceptible to believing disinformation

There was an unexpected consensus among speakers that technical solutions alone are insufficient to combat disinformation. They agreed that sociological factors, such as confirmation bias and lack of digital literacy, play a crucial role in the spread and belief of disinformation, necessitating a more holistic approach.

Overall Assessment

Summary

The main areas of agreement among speakers include the need for a multi-stakeholder approach to combat disinformation, the challenges in defining and addressing foreign interference, and the importance of digital literacy. There was also consensus on the limitations of purely technical solutions and the need to consider sociological factors.

Consensus level

The level of consensus among the speakers was moderate to high, particularly on the need for collaborative efforts and the complexity of the disinformation landscape. This consensus implies that addressing disinformation in Southeast Asia and beyond requires a comprehensive, multi-faceted approach involving various stakeholders and considering both technical and sociocultural aspects. However, the specific strategies and priorities may vary depending on each country’s unique context and challenges.

Differences

Different Viewpoints

Role of government in combating disinformation

BICH TRAN

MARIA ELIZE H. MENDOZA

Vietnam created Task Force 47 to counter “wrong views” on the internet

Philippine government has failed to effectively address electoral disinformation

While Vietnam has taken a more active and restrictive approach through government intervention, the Philippines has struggled to effectively address disinformation through government action, leading to civil society taking on more responsibility.

Perception of foreign interference threats

BICH TRAN

PIETER ALEXANDER PANDIE

Vietnam is concerned about China’s disinformation on South China Sea disputes

Difficult to define and attribute foreign information manipulation and interference

Vietnam has a clear focus on China as a source of disinformation, while the Indonesian perspective acknowledges the difficulty in defining and attributing foreign interference, suggesting a more nuanced view of the threat landscape.

Unexpected Differences

Approach to platform regulation

FITRI BINTANG TIMUR (FITRIANI)

MARIA ELIZE H. MENDOZA

Australia is developing voluntary codes for platforms and considering legislation

Philippine government has failed to effectively address electoral disinformation

While both countries face challenges with disinformation, Australia’s approach of developing voluntary codes and considering legislation contrasts with the Philippines’ lack of progress in this area. This difference is unexpected given that both are democratic countries facing similar challenges.

Overall Assessment

Summary

The main areas of disagreement revolve around the role of government in combating disinformation, the perception of foreign interference threats, and the approaches to platform regulation.

Difference level

The level of disagreement among the speakers is moderate. While there is a general consensus on the need to address disinformation, there are significant differences in how each country perceives and approaches the problem. These differences reflect the varied political systems, levels of digital development, and geopolitical contexts of the countries represented. The implications of these disagreements suggest that a one-size-fits-all approach to combating disinformation in Southeast Asia may not be effective, and regional cooperation efforts will need to account for these diverse perspectives and approaches.

Partial Agreements

Partial Agreements

All speakers agree on the need for a comprehensive approach to combat disinformation, but they emphasize different aspects: Pandie focuses on multi-stakeholder involvement, Mendoza on education, and Fitriani on regional cooperation. While these approaches are not mutually exclusive, they represent different priorities in addressing the issue.

PIETER ALEXANDER PANDIE

MARIA ELIZE H. MENDOZA

FITRI BINTANG TIMUR (FITRIANI)

Multi-stakeholder approach involving government, civil society and platforms needed

Digital literacy must be incorporated into education at all levels

Regional cooperation and intelligence sharing should be strengthened

Similar Viewpoints

Both speakers highlight the increasing sophistication of disinformation campaigns, particularly those originating from foreign actors, and their potential impact on domestic politics and regional disputes.

PIETER ALEXANDER PANDIE

BICH TRAN

Indonesia faces increasing use of AI-generated disinformation in elections

Vietnam is concerned about China’s disinformation on South China Sea disputes

Both speakers discuss the challenges posed by actors spreading disinformation, whether domestic ‘independent’ practitioners or foreign state-sponsored efforts, and the need for effective countermeasures.

MARIA ELIZE H. MENDOZA

FITRI BINTANG TIMUR (FITRIANI)

Philippines information ecosystem is saturated with “independent” media practitioners spreading disinformation

Australia experiences foreign interference attempts, particularly from China

Takeaways

Key Takeaways

Foreign information manipulation and interference (FIMI) is an increasing concern in Southeast Asian countries, with different manifestations in each country

Governments in the region are struggling to effectively address disinformation, especially with the rise of AI-generated content

There is a need for multi-stakeholder approaches involving government, civil society, and tech platforms to combat disinformation

Digital literacy efforts are crucial but face challenges in implementation and reaching wide audiences

Balancing effective governance of information ecosystems with protecting democratic freedoms is a key challenge

Resolutions and Action Items

Explore developing a Southeast Asian or Asia-Pacific specific definition for FIMI

Strengthen regional cooperation and intelligence sharing on disinformation issues

Incorporate digital literacy education at all levels of schooling

Engage in multi-stakeholder and inter-regional cooperation to research disinformation

Unresolved Issues

How to define and attribute foreign information manipulation and interference in a consistent way

How to effectively regulate tech platforms without infringing on freedom of speech

Who should be the arbiter of truth in determining what constitutes disinformation

How to address confirmation bias and the sociological aspects of disinformation spread

How to balance political stability concerns with freedom of expression in addressing disinformation

Suggested Compromises

Implement voluntary codes for tech platforms while maintaining government ability to intervene if needed

Use a combination of whole-of-government and strategic government approaches to allow for different governance styles within ASEAN

Balance effective governance of information ecosystems with protections for democratic freedoms and civil liberties

Thought Provoking Comments

So in Southeast Asia, while there is, in ASEAN, for example, while there is the cybersecurity cooperation agreements and so on and so forth, these are still mostly led or hosted by countries such as Singapore or Malaysia, who have higher, I would say, cyber capabilities compared to other Southeast Asian states who are still building on those capabilities. So not everyone is on the same page, either threat perception-wise or capabilities-wise.

speaker

Pieter Alexander Pandie

reason

This comment highlights the disparity in cybersecurity capabilities and threat perceptions among Southeast Asian countries, which is a crucial factor in addressing regional information manipulation issues.

impact

It led to a deeper discussion on the challenges of regional cooperation and the need for context-specific approaches in combating disinformation.

Contents that are obviously false and hyper-partisan, even if they were posted in the last electoral cycle, are still present in these platforms. They have not yet been taken down despite multiple reports, so these content moderation policies really have to be looked at.

speaker

Maria Elize H. Mendoza

reason

This comment brings attention to the ongoing issue of ineffective content moderation by social media platforms, highlighting a critical gap in addressing disinformation.

impact

It shifted the discussion towards the responsibilities of tech platforms and the need for more effective content moderation policies.

So I think for the Vietnamese government, and this speaks to what Pieter and Fitriani already mentioned, they know that no matter what the Chinese say about the South China Sea, the Vietnamese people will not believe it.

speaker

Bich Tran

reason

This comment provides insight into the unique dynamics of information manipulation in Vietnam, highlighting how cultural and historical factors influence the effectiveness of foreign disinformation campaigns.

impact

It introduced complexity to the discussion by showing how different countries may have varying vulnerabilities to foreign information manipulation based on their specific contexts.

Even if you did manage to achieve digital literacy, which I think there are a lot of technical solutions for, this is a larger sociological problem at this point, where if you’re getting views for it or if you’re getting power out of it, there’s no reason for anybody to stop sort of putting out disinformation.

speaker

Nidhi (audience member)

reason

This comment challenges the effectiveness of purely technical solutions to disinformation, highlighting the deeper sociological roots of the problem.

impact

It prompted the speakers to address the need for a multi-disciplinary approach to tackling disinformation, beyond just technical solutions.

Only the government can decide what is disinformation or not. And in the case of one party state in Vietnam, we have the legislative, executive, and judiciary powers belonging to the state, which means the head of all these state agencies must be the Communist Party members. And so when they say that is disinformation, they have the power to punish.

speaker

Eliza (audience member)

reason

This comment raises important questions about who has the authority to define and combat disinformation, especially in non-democratic contexts.

impact

It led to a discussion about the challenges of addressing disinformation in different political systems and the potential for misuse of anti-disinformation measures.

Overall Assessment

These key comments shaped the discussion by highlighting the complexity of addressing information manipulation and disinformation in Southeast Asia. They brought attention to the disparities in capabilities and threat perceptions among countries, the responsibilities of tech platforms, the influence of cultural and historical factors, the limitations of purely technical solutions, and the challenges of defining and combating disinformation in different political systems. The discussion evolved from a focus on specific country cases to a broader consideration of regional cooperation, multi-stakeholder approaches, and the need for context-specific strategies in combating disinformation.

Follow-up Questions

How can we define FIMI (Foreign Information Manipulation and Interference) in a way that is context-specific and more applicable to different information landscapes in Southeast Asia or the Asia Pacific?

speaker

Pieter Alexander Pandie

explanation

A more regionally-specific definition could help better understand and address FIMI issues in the context of Southeast Asian countries.

What are the best platforms or forums to discuss FIMI issues in the Asia Pacific region, considering the different threat perceptions and approaches of various countries?

speaker

Koichiro (audience member)

explanation

Identifying appropriate platforms for discussion could lead to more effective regional cooperation in addressing FIMI.

How can we address the broader sociological problem of confirmation bias and the incentives for spreading disinformation, beyond just technical solutions?

speaker

Nidhi (audience member)

explanation

Addressing the root causes of disinformation spread could lead to more effective long-term solutions.

How can we improve digital literacy campaigns and make them more effective, especially for those who haven’t formed their opinions yet?

speaker

Bich Tran

explanation

Effective digital literacy campaigns could help prevent the spread of disinformation and improve information resilience.

How can we balance the need for platform regulation with concerns about censorship and freedom of expression?

speaker

Fitri Bintang Timur (Fitriani)

explanation

Finding this balance is crucial for effective policy-making in combating disinformation while preserving democratic values.

How can we improve multi-stakeholder and inter-regional cooperation to better understand and address disinformation, its real-world impacts, and the misappropriation of anti-disinformation discourse?

speaker

Fawaz (audience member)

explanation

Enhanced cooperation could lead to more comprehensive and effective approaches to combating disinformation.

How can we develop curricula and textbooks for digital literacy that are objective and widely accepted?

speaker

Bich Tran

explanation

Developing appropriate educational materials is crucial for implementing effective digital literacy programs.

How can multiple countries work together to more effectively demand action from tech platforms in addressing disinformation?

speaker

Maria Elize H. Mendoza

explanation

Collective action by multiple countries could potentially have a greater impact on tech platform accountability.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Day 0 Event #1 IGF LAC Space

Session at a Glance

Summary

This discussion focused on various aspects of internet governance and digital issues in Latin America and the Caribbean. Representatives from regional organizations like ICANN, LACNIC, and Internet Society shared updates on their work in areas such as policy development, technical training, and promoting multi-stakeholder participation. Key themes included efforts to strengthen regional cooperation, address cybersecurity challenges, and increase internet access and digital skills.


The session also featured presentations from researchers on topics including online child protection, indigenous youth and technology adoption, and cybersecurity policies in Brazil. These studies highlighted issues such as the risks of online grooming in gaming platforms, disparities in internet access between urban and rural indigenous communities, and the need for clearer cybersecurity frameworks.


Additionally, researchers presented work on combating misinformation through AI-assisted fact-checking tools for journalists, as well as the adoption of AI in judicial systems. The discussions emphasized the importance of responsible AI implementation that respects human rights and maintains human oversight.


Throughout the session, participants stressed the value of multi-stakeholder dialogue and regional cooperation in addressing internet governance challenges. They also noted the evolving nature of digital issues, with new topics emerging alongside longstanding concerns. The discussion underscored the ongoing need for spaces that facilitate cross-sector collaboration and knowledge sharing on internet policy in Latin America and the Caribbean.


Keypoints

Major discussion points:


– Updates from various Latin American and Caribbean internet governance organizations on their recent activities and initiatives


– Reflections on current global internet governance processes like WSIS+20, IGF, and the Global Digital Compact


– Presentations of research projects on topics like online child protection, indigenous youth and technology, cybersecurity policy in Brazil, and AI tools for journalism and the judicial sector


Overall purpose:


The goal of this discussion was to provide a space for Latin American and Caribbean internet governance stakeholders to share updates, discuss regional perspectives on global processes, and highlight relevant research being conducted in the region. It aimed to foster collaboration and knowledge-sharing among regional actors.


Tone:


The overall tone was informative and collaborative. Speakers shared updates and research findings in a professional manner. There was an underlying sense of enthusiasm about regional cooperation and contributions to global internet governance dialogues. The tone became slightly more academic during the research presentations but remained accessible overall.


Speakers

– LITO IBARRA: Moderator


– FEDERICA TORTORELLA: Co-host


– LIDIA ANCHAMORO: Part of Colnodo, Colombian organization; Participates in IGF Secretariat


– OLGA CAVALLI: Organizer of South School on Internet Governance


– RODRIGO DE LA PARRA: ICANN Latin America representative


– SEBASTIAN BELAGAMBA: Internet Society representative


– BASILIO RODRIGUEZ PEREZ: LAC-ISP representative


– ROCIO DE LA FUENTE: LAC-TLD representative


– LIA SOLIS: LACNIC representative


– MARIA FERNANDA MARTINEZ: CETIS representative


– PAULA OTEGUY: LACNIC representative, moderator for research presentations


Additional speakers:


– JOSE ROJAS: Lawyer, expert on civil crime, researcher on child grooming


– CAMILO ARATIA: Sociologist from Bolivia, researcher on indigenous youth and technology


– THAIS AGUIAR: Lawyer from Brazil, researcher on cybersecurity policies


– SOLEDAD ARENGUEZ: Expert in new technologies and education, researcher on Trust Editor project


– MARIA PILAR CHORENZ: Doctor in law and social rights, expert on technology and rights, researcher on AI adoption in judicial sector


Full session report

Internet Governance in Latin America and the Caribbean: A Multi-Stakeholder Dialogue


This session focused on the Latin America and Caribbean Internet Governance Forum (LAC IGF) space, a regional initiative that brings together diverse stakeholders to discuss internet governance issues. As explained by moderator Lito Ibarra and co-host Federica Tortorella, the LAC IGF aims to foster collaboration, knowledge-sharing, and multi-stakeholder dialogue on internet governance in the region.


LAC IGF Structure and Purpose


The LAC IGF serves as a platform for discussing regional perspectives on global internet governance processes and highlighting relevant research. It operates through a multi-stakeholder model, involving participants from government, civil society, the private sector, and the technical community. The forum’s structure includes a steering committee and working groups focused on various aspects of internet governance.


Regional Organizational Updates


Representatives from key regional internet governance organizations provided updates on their recent activities:


1. Lidia Anchamoro (IGF Latin America and Caribbean Secretariat): Implementing new statutes and collaborating with Colnodo on various initiatives.


2. Olga Cavalli (South School on Internet Governance): Organizing the 17th edition in Mexico and celebrating their WSIS prize and champion status.


3. Rodrigo de la Parra (ICANN Latin America): Focusing on increasing regional participation in policy development processes and reaffirming multi-stakeholder principles.


4. Sebastian Belagamba (Internet Society): Implementing a new 5-year strategic plan and addressing challenges related to the WSIS+20 review and IGF mandate renewal.


5. Basilio Rodriguez Perez (LAC-ISP): Advocating for 6 GHz frequency allocation for Wi-Fi and expressing concerns about “fair share” proposals impacting network neutrality for small ISPs.


6. Lito Ibarra (LAC-IX): Deploying new infrastructure across 34 internet exchange points in the region.


7. Paula Oteguy (LACNIC): Supporting local internet governance initiatives through programs like FRIDA and detailing the LIDRES program for developing internet governance leaders.


8. Rocio de la Fuente (LAC-TLD): Developing a single server for domain name queries to improve efficiency and reduce costs for ccTLDs in the region.


LIDRES Program


Paula Oteguy introduced the LIDRES program, an initiative by LACNIC to develop internet governance leaders in Latin America and the Caribbean. The program aims to strengthen regional participation in global internet governance discussions and foster local expertise.


Research Presentations on Internet Governance Issues


The session featured presentations from researchers on various internet governance topics relevant to Latin America:


1. José Rojas: Presented research on child grooming risks in online gaming environments, highlighting the paradox between real-world and online safety practices for children. His study emphasized the need for better education and awareness about online risks in gaming platforms.


2. Camilo Aratia: Shared findings on technological appropriation among indigenous youth in Bolivia, revealing significant disparities in internet access between urban and rural indigenous communities. His research employed ethnographic methods to understand how limited access affects digital literacy and cultural practices.


3. Thais Aguiar: Discussed the evolution of cybersecurity policies and frameworks in Brazil, focusing on the complex governance structure involving multiple stakeholders. Her research analyzed policy documents and interviewed key actors to map the cybersecurity ecosystem in Brazil.


4. Soledad Arenguez: Presented on the development of an AI tool called Trust Editor for detecting misinformation in news articles. Her work involved collaboration with computer scientists and journalists to create and test the tool’s effectiveness in identifying false or misleading information.


5. Maria Pilar Chorenz: Explored the adoption of generative AI in judicial systems in Argentina, raising questions about the implications of AI in legal decision-making. Her research methodology included surveys and interviews with legal professionals to gauge attitudes and concerns about AI integration in the justice system.


Conclusion


The LAC IGF space continues to play a crucial role in facilitating dialogue and collaboration on internet governance issues in Latin America and the Caribbean. This session demonstrated the diverse range of activities undertaken by regional organizations and the depth of research being conducted on topics ranging from cybersecurity to digital inclusion. By bringing together various stakeholders and perspectives, the LAC IGF contributes to a more inclusive and informed approach to addressing the complex challenges of internet governance in the region.


Session Transcript

FEDERICA TORTORELLA: Perfect, all set. You can remove me as host now. Thank you very much, Lito.


LITO IBARRA: Perfect, I'll say it now. Okay, you can… Shall we begin?


FEDERICA TORTORELLA: Yes, we are starting now, and I give the floor to Federica. Thank you.


LITO IBARRA: Hello, good afternoon. Can everyone here in the room hear me? Lito speaking. Hello everyone. Can you hear me on Zoom as well? We can hear you as well. Thank you very much. So, let's begin with today's session of the LAC Space. We have interpretation into English and into Spanish. If anyone needs interpretation into Spanish or into English, you can choose the language of your preference on Zoom. We had a technical issue, which is why we started a few minutes late, but it has now been solved, so you can choose, as I said, the language of your preference, either Spanish or English. Welcome to this space that traditionally takes place at international events, where we try to catch up with everything that Latin American and Caribbean organizations, the organizations that operate in the region, are doing. For today's agenda, we have several items. We have Federica Tortorella on Zoom; you can see her on the screen. She and Rocío de la Fuente from LAC-TLD will be co-hosts online of this session. Without further ado, we will start. We will have three main spaces: one for the reports of the organizations of the region, where each organization has three minutes for its presentation. We will try to stick to our schedule. Then we will have a second round with some organizations of the region, who will share their reflections on the main current processes: the GDC, WSIS+20, the IGF, and so on. And finally, reports on recent research on internet governance that is relevant for the region. Federica, would you like to say something?


FEDERICA TORTORELLA: Federica speaking. Thank you, Lito, for the support. Greetings, everyone. Thank you for being here for this eighth edition of the IGF LAC Space. As Lito said, I would like to recall the dynamics of the session. First, we will have the regional organizations in two rounds of three minutes each, so we ask you to please stick to the agenda so that we can cover all the topics. In the second part of the session, we will have our researchers, those who are part of the LIDRES program, who will share their findings, research, and projects. Lito, would you like to take the floor once again, or would you like me to continue? Lito speaking. Unless you need to add something, I can call upon people to participate, following the agenda you provided. If you see that I'm missing something, please do not hesitate to interrupt me. So we will call Lidia. Lidia, please go ahead. Please turn on the mic.


LIDIA ANCHAMORO: Can you hear me? This is Lidia speaking. Thank you, everyone. It's a pleasure for me to see you here in Riyadh, and to see those of you who are online. I am Lidia Anchamorro. I am part of Colnodo, a Colombian organization, and I also participate in the Secretariat of the IGF. From Colnodo, we accompany processes such as the Colombian Table of Internet Governance, which has participated in 11 editions of the forum, with the participation of different sectors of society, to discuss, draft documents, and make recommendations to the government on everything regarding digital policies. Based on this experience, in 2023 we applied to become the Secretariat of the IGF, and we were chosen by the stakeholders of the IGF. This is what I will share with you today. This year, we organized the 17th edition of the forum, which took place in Santiago de Chile. Last year, we had the 16th edition in Bogotá, which was the reactivation of these in-person events after the pandemic; we had organized these events virtually for three years. So it was a real challenge to foster once again the gathering and the interest of people to participate in the multi-sector, cross-section debates of the IGF. We had several sessions in a room where the program committee proposed panels that took place with the participation of different organizations. And the challenge for this year was to implement the new statutes of the IGF. These new statutes were adopted in 2021, after a consultation process organized with the Latin American community, and the goal was to specify more clearly the roles of the different bodies of the IGF. The bodies established in these statutes were the Multistakeholder Committee, that is to say, the strategic committee; the Committee for the Selection of Workshops; and a secretariat that, until that time, was organized by LACNIC.
From the beginning of the LAC IGF, the secretariat was held by LACNIC, until 2023, when the application process was opened and Colnodo was selected as secretariat. With this new structure, and with these roles more clearly defined, the idea was to strengthen the levels of representation and participation of the different actors, and throughout this year we have implemented these groups. This is Federica speaking; I'm sorry to interrupt. This is Lidia speaking. I will try to discuss the workshops later on.


LITO IBARRA: Lito speaking. We have just three minutes, OK? So I will give the floor to Olga Cavalli.


OLGA CAVALLI: Can you hear me? This is Olga Cavalli speaking. Thank you. Thank you for inviting us, and thank you for being here. I will do my best for you to hear me properly. At our school, we are organizing the 17th edition in Mexico in the second week of May, from the 6th to the 9th. The program has evolved and grown throughout the years. It started as a week of training, and now it's a six-month program: it has a prior two-month virtual course, followed by a course in a hybrid format. We have also signed an agreement with the University of Mendoza, where I studied and received my engineering degree; after passing the evaluation of the first two weeks, fellows are able to do research and can then obtain a diploma in internet governance. We've already had four cohorts of this course of studies. We have trained 8,000 fellows, and this training is free; fellows are also entitled to accommodation and meals. We've received the WSIS Prize and were also named WSIS Champion for the impact on internet training. We had 50 scholars or fellows with accommodation and meals, more than 100 speakers from all over the world, and in-person fellows from Latin America, while virtual fellows participated from all over the world: North America, Asia, and Europe, among others. We have also organized the eighth edition of our internet governance training, a three-day program similar to the one organized in a hybrid manner, which took place last month, in November. And I think that my three minutes are up. Thank you, Lito.


LITO IBARRA: Lito speaking. Thank you, Olga. Thank you for sticking to the three minutes. We will give the floor to ICANN, with Rodrigo de la Parra, who is connected online. Please, go ahead.


RODRIGO DE LA PARRA: Rodrigo de la Parra speaking. Thank you, Lito. Good afternoon. Greetings, everyone, to those of you who are in person and to those of you who are connected online. Congratulations, and thank you for organizing this space. It's very important to keep updated regarding the work of the organizations, what we do regarding internet governance and its importance all over the world, and the importance of the IGF. As you know, this work is always carried out in a very coordinated manner and with an amazing capability. ICANN's Latin American and Caribbean team continues to work on two main issues. The first one is to foster greater participation of actors from the Latin America and Caribbean region in ICANN's policy development processes. This work goes on, and many activities are organized at our in-person meetings, which this year took place in three different places in the world, as you know: San Juan, Kigali, and Istanbul, where we held the LAC Space with the participation of different colleagues working in ICANN. Something very important that I would like to focus on is that we centered many of our efforts on training, at the technical level, those actors that are involved in the operation of the DNS in the region. We have to remember that part of internet governance lies in these activities, and I think we've had very good results: these activities helped us understand, from the operational point of view, how the internet can be interoperable and safe. So thank you for this opportunity. We will go on working along these lines next year, and I hope to meet you all real soon. Thank you very much.


LITO IBARRA: Okay, let’s now go with ISOC. Sebastián is speaking. Thank you,


SEBASTIAN BELAGAMBA: Litos. I am Sebastián Belagamba. And let me tell you that this is a very particular year for our organization because we have a five-year strategic plan and we are starting to implement our strategic plan next year. So let me share with you some news on our strategic plan and the work we will do with the community. Our strategy was defined in March this year. The Internet Society Board approved the strategy based on two main challenges that were identified at one point in time. One of them is global inequality. We understand there is an issue of global inequality we need to address and the lack of trust on the Internet. People lack trust on the Internet. These two challenges are big issues that we can address. I mean, global inequality. We need to see how we can connect people, those that are not connected to the Internet. And we also need to see how we can improve connectivity to countries that are already connected. We need to make that connection more efficient. These two global challenges are being translated into our strategic goals that we have set for next year. One of them is that people all over the world may have access to a resilient and affordable Internet. And secondly, people need to have a safe and robust internet experience. They need to feel protected on their daily life. These are the two main goals we have for this next five-year plan. And implementation will be related to some programs. I am circulating a PowerPoint presentation where you’ll see more details on these programs. And I would like to invite you all to visit our website where you’ll see our strategic plan, our five-year strategic plan, and the implementation of our strategic plan during 2026.


LITO IBARRA: Lito speaking. Thank you so much, Sebastián, and thanks for sticking to the time; we are on schedule. Now we will give the floor to Esteban Lezcano, who will be speaking on behalf of LAC-ISP. Esteban, please go ahead.


ESTEBAN LEZCANO: Thank you, Lito. This is Esteban Lezcano speaking. And Basilio will be the one presenting today.


BASILIO RODRIGUEZ PEREZ: Basilio speaking. Good afternoon, everyone. LAC-ISP is an association of Latin American and Caribbean ISPs, and we work on the development of the ISP market. We work on the regulatory asymmetries that allow ISPs to provide their services, and their network maintenance services, particularly in regions that are underserved. In Brazil, thanks to these regulatory asymmetries, we can say that ISPs cover 52% of the fixed broadband market. There are two main issues in Latin America that we need to advocate on. One is the 6 GHz band for Wi-Fi. This is really important for small ISPs to be able to deliver their services with quality, and this band needs to be usable outdoors in order to improve service in areas that otherwise may not have wireless service. The other issue we are facing is the talk we hear from some regulators about "fair share". We are against the idea of fair share, because there is nothing fair about it. It implies a huge risk for the whole Internet, impacting network neutrality, and it will create problems for small ISPs. In South Korea, they started to charge for content, and this cannot be applied in Latin America, with the thousands of ISPs we have here. I am almost finished: this would be something that causes huge problems for ISPs in Latin America.


LITO IBARRA: Lito speaking. Thank you, Basilio, and thanks, LAC-ISP. Now Lito Ibarra, that is me, will speak about LAC-IX. LAC-IX is the Latin American and Caribbean association of internet exchange points. In case you don't know, an exchange point is where the local traffic of a country is exchanged among providers in order to minimize the cost and the delay in communications. The IXs, or exchange points, keep growing as time passes; together with the DNS, they are part of the internet's critical infrastructure. LAC-IX updates its database once a year, at the end of the year, and this year we had four new exchange points, so there are now 34 traffic exchange points in the region that are part of LAC-IX. These are members of our organization; they are not the total number of exchange points in the region, but they are the ones that belong to the organization. We keep working with other organizations, particularly the Internet Society, LACNIC, and other technical organizations in the region, and we are also articulating and working together with the technical community. We hold a general assembly at the LACNIC event in May, and we hold virtual meetings throughout the year, which is really relevant for participants. We held four training sessions for technicians. Bear in mind that these exchange points attract CDNs, which host copies of content from large Internet content providers in order to give users more efficient access. There are also working groups: one of them is a public policy working group, and another follows up on the LACNIC policy development process, which, as you know, is open and public; they report on any policy proposal concerning the critical infrastructure of the Internet, particularly those affecting IXPs.
And we have communications on LinkedIn and on our website for our members and anyone interested. This is the end of my intervention. Now I would like to give the floor to ALAI, Raúl Echeverría. Are you connected?


ROCIO DE LA FUENTE: Rocio speaking. We don’t see Raúl online, but he can take the floor at the end of the session.


LITO IBARRA: Lito speaking. Okay, we are now going to give the floor to LACNOG, Lia Solis. Please go ahead.


LIA SOLIS: Lia speaking. Hello, can you hear me? Okay, good morning, everyone. Let me begin my intervention. LACNOG is the Latin American and Caribbean network operators' group. It is a not-for-profit organization based in Montevideo, Uruguay, and it has been operating for 14 years now. Our mission is to gather operators together, that is, the technical people operating the networks, helping us communicate, and we aim at being a reference association. Our mission is to strengthen the relationships among operators around the region by fostering knowledge and by promoting the work of our working groups. We want to foster discussions, exchange information, and collaborate with our community. Our organizational culture is based on volunteers: we have more than 50 volunteers working in an inclusive manner. As for our structure, we have a program committee that calls for technical proposals related to internet operations, and we have a board, working groups, and the community. Throughout the year, we worked on different programs. We held webinars and interviews with members of the community. We recorded podcasts, which are talks and discussions aimed at technicians looking for information, and we also published relevant content and spread it through our discussion list. We also have alliances with different organizations, such as the Internet Society, LACNIC, and ICANN, among others, and we try to strengthen our institutional image and position our organization as a technical one. We have an annual event at the LACNIC meeting, and we hold an event in each of the countries. This year, within the framework of strengthening the technical community, we are working on creating a training working group, and we deliver trainings on IPv6, security, and peering, among other topics.
In our working groups, we also promote the participation of the Latin American and Caribbean region in the IETF. And I think this is the end of my intervention. Thank you.


LITO IBARRA: Thank you, Lia, for this report. Now I would like to give the floor to CETyS. Fernanda Martinez is here.


MARIA FERNANDA MARTINEZ: Fernanda speaking. Thank you, Lito. CETyS is the Center for the Study of Technology and Society at the University of San Andrés. It is an interdisciplinary academic center whose goal is to promote research and train people on different topics related to internet policies. We were able to meet our goals this year: we ran three programs, held two events throughout the year, participated in 19 events, and had over 450 people taking part in our different activities. I will share the annual report of our activities in a link that I will post in the chat, but let me give you a very brief summary. This year we are happy because one of our most important reports has been published, a report we had been working on throughout 2020 and 2021: the internet universality indicators report. This is a project led by UNESCO, whose goal is to map the situation in each country based on five categories: rights, openness, accessibility, multi-stakeholder participation, and cross-cutting issues such as gender. It is a very relevant report, and even though some elements may have changed, the methodology used is really useful, because it lets us map the situation in each country. This is also very relevant because we believe that in order to create public policies, we need to base ourselves on evidence. So this report covers an aspect that matters a lot to us: the creation of evidence, and getting that evidence to decision makers so that they can craft robust policies. The approach is really enlightening, because it is based on the recommendations that emerged, and the project includes an advisory board.
It is a multi-stakeholder body, and this gave us the opportunity to have a very relevant dialogue with the stakeholders in the ecosystem in Argentina: the private sector, the public sector, and civil society. We were able to draft recommendations for each of these areas, and we highly recommend reading this report. I will end my intervention now; there is another very interesting project, and I will talk about it in my second intervention. It is a pleasure to participate here. Thanks again, Lito.


LITO IBARRA: Lito speaking. Thank you, Fernanda. Let’s move to LACNIC. We will give the floor to Paula Oteguy. Thank you, Paula.


PAULA OTEGUY: Paula Oteguy speaking. Thank you, Lito. Greetings, everyone; good morning, and good afternoon to those of you who are in Riyadh. It is a pleasure to present, on behalf of LACNIC, the work we have been doing in this space, the IGF LAC space. I would like to describe our support program for local internet governance initiatives, tell you a little about what it is, and also seize this opportunity to emphasize the importance of these initiatives in the multi-stakeholder model. The program provides support to internet governance initiatives in our region so that they can organize their events. It is addressed to regional and national initiatives, young people's initiatives at the local level, and internet governance schools. To give you an idea of what it involves, the support consists mainly of funds for the organization, implementation, and execution of these spaces; when the initiative requires it, a webinar with up to 500 participants; and the possibility for LACNIC experts to contribute to the topics on those initiatives' agendas, participating actively in panels and discussions as technical experts dealing with cybersecurity and DNS security, among other topics. Some numbers: in 2024, we supported 10 local initiatives in the region and two youth initiatives at the local level. Here I would like to mention that these youth initiatives are emerging at the country level across the region. We also supported the LAC IGF in its 17th edition, the Youth IGF, highly recognized internet governance schools, the virtual school of internet governance, and a recent initiative of a Chilean university on internet governance and international relations. In order to apply for this support, you have to visit our website, in the opportunities section.
There you will find the internet governance section, with a form that you need to complete, and we will contact you to make this support possible. Yes, I am nearly finished. I would just like to highlight the role of the RIRs and their importance in the multi-stakeholder model.


LITO IBARRA: Lito speaking, thank you, Paula. We will close this section with the LAC-TLD presentation in charge of Rocio de la Fuente.


ROCIO DE LA FUENTE: Rocio de la Fuente speaking. Thank you, Lito, thank you for your support in the in-person event, and thank you everyone for participating in another edition of the IGF LAC space. I would like to tell you about the progress we have made on the single lookup service, which makes it possible to run a single query for a domain name under multiple ccTLDs of countries and territories in the region. In this manner, we provide an additional channel to check which domains are available for registration and which are already registered; for those already registered, it directs users to the websites of the ccTLDs to consult the available information. This year the service has been operating in beta, thanks to a lot of effort by the ccTLDs, so we invite you all to use it and spread the word, because it is a way to promote the use of domain names in the region. I would also like to take this opportunity to tell you about the activities and events we have been developing with other organizations in the technical community over the last two years, and mainly in the last year, where our goal was to strengthen the relationship with other actors of the ecosystem and with governmental bodies. I think these efforts have been very effective, because they have enabled us to gather different organizations, among them ICANN organizations. We are very happy with this work, and our intention is to keep consolidating the technical community as a sector. I would like to give Federica the floor for the last minute I have left. Federica, you have the floor.


FEDERICA TORTORELLA: Federica speaking. Greetings, everyone. I would like to share some information, as Rocio said. Thanks to this space, we have built a repository at the regional level, and the idea is to consolidate in a single document the main information about the organizations in the region that deal with internet governance issues. So we invite you to look at this repository, and those of you who would like to contribute are most welcome to do so. We will share the link in the chat; you can write to me or to Rocio, and we will tell you how you can cooperate. The main idea is to consolidate the information so that it is easily accessible, so people know which organizations are out there and what they are doing. That will be it on my side. Lito, you have the floor.


LITO IBARRA: Lito speaking. Thank you very much, and thank you for sticking to your three minutes. Yes, we will be waiting for that link; I think it will be very useful for many of us, because we have heard a lot of information and taken notes, and it is a great initiative to have it all online. Thank you very much. Let's move to the next round of the LAC space. The question was to share your organization's reflections on the processes that are taking place. We have five, rather six, organizations registered, sorry, and you have three minutes each. We will start with the Internet Society. Sebastián, please.


SEBASTIAN BELAGAMBA: Sebastián speaking. I know that three minutes is not a long time, but just to give you an idea, I think we are going through a very important year for internet governance. Next year the IGF mandate ends, and so does the WSIS mandate: not only the IGF, which is the mechanism we have to gather all this information, but also the elements related to WSIS, the action lines and so forth, that emerged in 2005 in Tunis. We had a mandate for 10 years that was renewed in 2015, and now, in 2025, we will reach the end of that mandate. This runs in parallel to other events. The Global Digital Compact, approved last year, has some points of contact with the WSIS action lines, and we need to understand, at the intergovernmental level, how they interact. It is not very clear how the implementation of the GDC will proceed, what the WSIS+20 review will look like, and how the action lines of both processes will interact, if they do. What is important is to bear this in mind: all of this is on the agenda to be discussed in the next 12 months. We, as a community devoted to internet governance, either directly or indirectly, have to have a position and a line of action. So in the next few weeks and months we have to define this coordinated action. I think the most important thing to highlight is that those of us who are part of the technical community of the internet are trying to reach a coordinated, collaborative position, maybe not a single position, but a consolidated one, at least in the context of the internet, in order to submit a productive proposal and get an effective outcome from these processes. So it is a year full of challenges. Thank you very much.


LITO IBARRA: This is Lito. We will give the floor to Rodrigo de la Parra with ICANN Latin America. You have three minutes, please.


RODRIGO DE LA PARRA: Rodrigo de la Parra speaking. Thank you, Lito. Yes, it is of course a challenge to share in three minutes so many thoughts about processes that are so different. But these three processes give us an opportunity to reaffirm the principles we agreed upon throughout the WSIS process, in the discussions on internet governance and everything that has been going on in this process, which has the participation and the consensus of the multi-stakeholder model. NETmundial has been a process that reminded us of these principles and of the bodies created around this consensus, and the Global Digital Compact reaffirms many of these main principles of internet governance. So I think there are some implementation challenges, and it is very important, as Sebastián said, that we be coordinated in order to verify that any implementation around these topics is based on these principles. Then we have WSIS+20 next year. I think that in our region we can point to important examples of collaboration, of how we have been working in the last few years, not only within the technical community, but of how we have integrated ourselves in a very practical and effective manner with other sectors, such as governments and intergovernmental organizations at the regional level. So it is a huge task for us as a region, I believe; we need to foster this collaborative mindset in the region. So once again, thank you for this opportunity.


LITO IBARRA: Lito speaking. Thank you, Rodrigo. Sebastián and Rodrigo have described the context we are in and the one we will have next year. It has been a very critical year with big changes, and I think many of us would like the decision to continue the IGF to be taken at the WSIS+20 meeting, with a greater budget to carry on these debates at the global level. We also need to think about something in the Latin American region. As was said, we have examples at the regional level, and we also have to look at the national editions of the IGF. If, in a very pessimistic scenario, the IGF were changed or suspended by a decision of the states, the national and regional events would not have to be suspended as well. We have developed a culture and an environment in which we can keep discussing these topics at the national and regional levels. Of course, we will have to find more financing; that is always a challenge. But the machine is running. I think this is a process we copied, so to speak, from the WSIS summits of 2003 and 2005, but we can contribute a Latin American and Caribbean flavor, and we have to continue these efforts. We have a very important community of events and people in the region and in our countries, some stronger or with more capabilities than others, but we can keep supporting one another. This is in the event that the IGF were suspended or given a different form; as Rodrigo and Sebastián said, we have to pay close attention to what happens. I believe we have a very strong and solid ecosystem of organizations at the regional level, many of them physically present at the House of the Internet in Montevideo, and we can keep working in this manner. Thank you very much. Let's give the floor to Olga Cavalli. Thank you.


OLGA CAVALLI: Olga speaking. I will echo Sebastián, Rodrigo, and other colleagues. What can I tell you about the South School of Internet Governance and the IGF? Well, this was our place; this is where we were born. What should we do, as part of these different processes, to strengthen ourselves? We need to be very engaging, we need to be diverse, and this takes time and resources. But we also need to be inclusive. The most important example was this year's IGF, and I would like to highlight Lillian's role; she has been key, because we were able to reinvigorate a space through very intense coordination work. This is an example to replicate, and something I said at the ICANN meeting, where I had a similar question, is that we need to understand that stakeholders are different, and being on an equal footing does not mean that we are the same. Governments have their own processes; they need to take their own decisions and they have their own responsibilities, but we need to work together anyway, and that is our responsibility. Reinforcing the space has to do with this. As for the renewal of the IGF mandate, at the school we expect to keep having an IGF; we are a very collaborative community, and we know we can work together, but the school is always open to offer its space.

LITO IBARRA: Thank you, Olga; you still have one minute left, but now we will give the floor to Lillian Chamorro, from Colnodo and the LAC IGF. I think there is an open mic.


LILLIAN CHAMORRO: Lillian speaking. To close this idea and open the next one, let me add the following. For this IGF edition, we received 99 proposals from different Latin American countries and organizations to be here. We selected 15 sessions of really high quality; it was a very complex process, recommendations were made, and defining the sessions was a very dynamic process as well. We had over 120 panelists from different countries and over 300 face-to-face participants, and all sessions were streamed. When it comes to the second question, we need to make the most of these platforms and of everything we have done in the Latin American and Caribbean region. We need to add our own flavor, as Lito said, and this is what we do at the LAC IGF: we give our own flavor to internet governance. Internet governance is not crafted only in these spaces, where we meet everyone from all over the world; it is also crafted in our local spaces, and these national and regional governance spaces are where realities, problems, and even good aspects are reflected. I loved seeing the IGF in El Salvador, because they had a rock session, a heavy metal music session. This is what we have in Latin America: we need to show ourselves as we are. We need to participate in the WSIS and in the GDC; we need to show what we do, the strength we have, and how we can help connect those spaces. We are addressing topics that are important for the GDC, and these topics are being discussed in our communities; they are part of our regional processes. We also need to understand that the discussion does not need to be technology-centered: we need to center ourselves not on technology, but on human and environmental aspects, because this is something we need to take into account in our region.
We have the Amazonia, and we have other regions and things to take care of, and this needs to be reflected in our discussions. The IGF has been a great platform for working with different ecosystems and stakeholders, but it is also an opportunity to create synergies and to show what we are doing. I think we have been appropriating this ecosystem, and we have new opportunities to strengthen our spaces and to give our flavor to them in order to work together. We are really learning a lot; I am always learning from the communities I work with, and I think we can replicate this in other discussion spaces.


LITO IBARRA: Thank you, Lillian. And to close this second block, I would like to give the floor to the LAC-ISP representative. Basilio, please go ahead.


BASILIO RODRIGUEZ PEREZ: Thank you so much. At LAC-ISP, we are always aware of and participating in the different discussions, such as the IGF, NETmundial, and WSIS, thanks to the support we receive to participate in those spaces. The multi-stakeholder mechanism of the Internet is really important for us; the Internet itself would not make sense without the whole mechanism that was created over time. And I like what Lito said about the possibility of the IGF not continuing and of keeping local IGFs. This is what we have to do: we need to focus and work to keep this multistakeholder model and mechanism, and we also need to work as required. Thank you.


LITO IBARRA: Lito speaking. Thank you, Basilio. We will now close this second block of the LAC space. We have a third block, in which we will hear from researchers supported by CETyS and LACNIC. I would like to give the floor to Paula Oteguy; she is online and will be the moderator of this part of the session. Paula, please go ahead.


PAULA OTEGUY: Paula Oteguy speaking. Thank you very much, Lito. We will now begin this part of the session. This is a space we started three years ago, and we would like to keep promoting it. The idea is to share research projects on internet development that are relevant to our region. This space will be devoted to researchers from the Líderes program at LACNIC and researchers supported by CETyS, who will present an overview of the work they have been doing so far. Having said this, we will start with LACNIC and the Líderes program, and then we will give the floor to the CETyS representative. The Líderes program at LACNIC supports researchers working on local projects over a period of three months, with the support of well-known mentors from our region, and the results of the research belong to the authors, while LACNIC supports their promotion. Lito, sorry, can you hear me?


ESTEBAN LEZCANO: we are hearing you. Can you hear me okay? Esteban is speaking. Yes, Paula, we are hearing you.


PAULA OTEGUY: Paula speaking. So, as I said, the results of the research belong to the authors, but we help promote them. There will be three presenters today, from the research carried out last year. They will have eight minutes each, so I kindly ask our presenters to stick to their time. I would like to introduce José Alberto Rojas, from Peru. He is a lawyer and an expert on cybercrime, and he will present research on child grooming in online gaming and the protection of children in Latin America. José, if you are there, you have the floor.


JOSE ROJAS: José Rojas speaking. Thank you, everyone. Today I have the honor of presenting research on child grooming in online gaming, one of the most concerning aspects of cybercrime. This research aims at describing the phenomenon and offering recommendations that could be useful for educators, parents, legislators, and the industry itself. Grooming is understood as sexual proposals made to children or adolescents, either face-to-face or through information and communication technologies. The problem is well known, but it takes on new dimensions when we speak about gaming. To give you a sense of it, think about this paradox: we teach children not to talk to strangers on the street, but in the digital world they talk to strangers without knowing who is behind the avatars or usernames. Online games are essential spaces for socializing, but they also represent a very risky environment. Platforms such as Minecraft, Roblox, or Fortnite gather young people together, and they are always interacting. Part of the research, carried out in Chile, reveals that 82 percent of children recognize the risk of sexual harassment online, yet 40 percent of these children have permission from their parents to play in virtual spaces with strangers; they are unsupervised, and they have their parents' consent to interact in these video games with people they do not really know. This gives us an idea of the risk and of the state of supervision. Grooming is facilitated in video games because groomers may adopt false identities, and interaction in chats and real-time communication tools allows groomers to establish relationships of trust very quickly. One of the main issues, something we saw in every case, is the sending of gifts or virtual coins, which are used as manipulation tools.
The main goal of this research is to see how grooming operates in online gaming and its implications for Latin America. Based on this, I set specific goals: to identify the most vulnerable platforms and video games, to analyze the behavior patterns of groomers in this environment, to assess the level of risk and the awareness of this risk among parents and educators, and to propose measures to mitigate the issue. The research combines qualitative and quantitative analysis: we gathered cases through interviews, we also interviewed digital experts, and we analyzed Latin American legislation to assess its effectiveness in fighting grooming in online gaming. One of the main findings is that 180 cases on Roblox and other platforms were reported to law enforcement agencies; I consulted law enforcement agencies in different Latin American countries, and this is detailed in the research. The most vulnerable platforms are those with real-time chat options; avatars are widely used by groomers; and the lack of awareness among parents is also important to take into account. During the COVID pandemic, this risk increased, and the cultural acceptance of these video games is something we saw in our research in Chile. As for the victims, minors said that they experienced anxiety, fear, and social isolation, among other situations; sometimes these experiences create problems in the long term. Among the examples we gathered, one in Peru, which gave rise to this investigation, happened in 2023: there was a first judgment against a man who contacted children through a platform and asked them for pictures. In Argentina, an adolescent was manipulated through a video game platform.
He was then contacted via WhatsApp and asked to provide sexual content. During the pandemic, grooming reports increased 81%, which has to do with the amount of time minors spend online. Among the proposals delivered in the research are: strengthening digital education; creating campaigns for parents and educators in order to prevent grooming; and teaching minors to recognize risky or suspicious behaviors online. It is also important to foster multi-stakeholder participation, engaging governments and child protection organizations to design protection measures, and to create agreements among countries to prosecute cases. We should also promote innovation, such as the use of artificial intelligence to moderate content and identify behavioral patterns, and improve verification and parental control features on platforms. Finally, it is important to promote laws in Latin America allowing the real prosecution of grooming in all its forms. One of the biggest issues we identified is that there are no proper protocols for online grooming: the video gaming platforms' support areas were the ones addressing these issues, but they were not able to share information when criminal investigations started. To finish my intervention, let me add the following: grooming in online gaming is a growing threat, and it requires a coordinated answer. My research aims at shedding light on this problem and promoting protection. Protecting children in online environments is a collective task involving families, governments, companies, and society as a whole. Thank you so much for your attention.


PAULA OTEGUY: Paula Oteguy speaking. Thank you, Jose. Thank you for sticking to the time, congratulations for that and for sharing the main findings of your investigation. Your research is very important for identifying vulnerabilities regarding this topic, and its recommendations are of high value to all of us. So thank you very much. We will go on with our next researcher, Camilo Aratia, a sociologist from Bolivia who is online with us. His research is called "Young indigenous people and technological appropriation." Camilo, welcome.


CAMILO ARATIA: Camilo speaking. Good morning or good afternoon, depending on where you are connecting from. Let me tell you what this investigation is about. My research is related to the appropriation of technology by young people who identify as indigenous. I tried to understand the framework of what we mean by technological appropriation, using a model that involves four stages: access, accessibility, learning, and transformation. For us to say there is technological appropriation among these young people, they should move through these stages. We first have to analyze their access to technology; then, once they have access, how they are learning; and then how they have integrated it, because we cannot forget that, regarding self-identified indigenous populations, we have to understand how this is integrated into their culture. And finally transformation: how much has technology transformed their perception as indigenous people and their communities, but also in a broader sense? That is what we are talking about when we talk about technological appropriation. Based on this, I interviewed many young indigenous people and ran focus groups, not across all of Bolivia, but in some specific regions. I worked with Aymara and Quechua young people, with young people from Afro-Bolivian communities and from Chiquitano communities, and also with Chaqueño young people. The Chaqueños are not one of the indigenous peoples of Bolivia as such, but in the Bolivian Chaco, on the border with Argentina, there are 20 indigenous peoples living there, so they identify themselves as Chaqueños, with roots in these indigenous peoples. When we did the focus groups, we included them because they identify themselves as indigenous people. 
So the first point was to understand that young indigenous people, at least in Bolivia, mostly inhabit urban or semi-urban spaces; they are not populations isolated from technology. In this sense, we asked them about accessibility, taking this into account, and there were many interesting answers. Depending on the population and the geographical location where the research was carried out, we found very different realities. For instance, regarding internet access, meaning not only that the cables exist, but that they have devices to connect to the internet, or that their schools, local and regional governments, and communities have internet access. In the more urban spaces with larger populations, such as the Aymaras, Chaqueños, and Afro-Bolivians, they had a good connection and good access to the internet. But the connection was not broadband; it was mostly mobile, using mobile devices or mobile data, so they depended on the three companies that we have in Bolivia to connect to the internet. There was a huge contrast with the lowlands, with the Chiquitano young people who lived in a community between Concepción, Bolivia, and the border with Brazil, next to the Amazonian territory. There, internet access was scarce. That was the first contrast. There is just Entel, one of the telecommunication companies; they could not choose, because with any other provider they would not have access to the internet. And internet was billed by the hour. They told me that they didn't have internet in their homes; they had to go to certain places in their village, or travel to a more urban area, in order to connect to the internet. 
So the first impression is that these young people who identify as indigenous have very different possibilities of accessing the Internet. When we started asking about learning in more advanced digital areas, such as ChatGPT, for instance, they didn't have this in mind; they couldn't access it because they had more basic problems accessing the Internet. And there we could see the difference between those who lived in the capital city and those who lived in more isolated or rural areas. This is one of the first differences that was very interesting to study. We could have discussed ChatGPT and the Internet, but their access to it was limited. Regarding transformation and integration in these communities, many answers showed that the people who are connected are trying to integrate technology, understanding that it is a reality that is here to stay. There was a collective of Aymara and Afro-Bolivian people in Coroico, in La Paz, devoted to artistic activities; they have a new technology law in their community, and they discussed problems such as digital violence and grooming, so there was a broader notion of all these matters there. In the more isolated spaces the reality was different, but there is an effort to integrate technology in all these processes. They saw technology as a bridge; that is what they told us. Something interesting as well is that in the Aymara community, technology boomed throughout the pandemic because it was their connection to civilization, and that created an interest in migrating to urban areas in order to study, because many of these young people cannot study technology-related courses in their communities. That is why they want to move to the city. 
And regarding transformation, it was very complicated, because technology as an element of transformation in these communities is not yet very visible. I think this is the most difficult aspect to grasp when we discuss technological appropriation, artificial intelligence, and all of this.


PAULA OTEGUY: Paula Oteguy speaking. Sorry to interrupt you. You need to wrap up.


CAMILO ARATIA: OK, Camilo speaking. The conclusion would be that indigenous people who live in rural areas or small urban areas still have problems accessing technology because, in many cases, there is no fixed network, only mobile data. So we are still in these preliminary stages. Thank you very much.


PAULA OTEGUY: Paula Oteguy speaking. Thank you, Camilo, for sharing your research and your conclusions. This enables us to learn about these specific perspectives and points of view. Thank you very much. To close the research presentations from the leaders program, I will give the floor to Thais Aguiar, a lawyer from Brazil and a researcher in digital topics. Her research is called "Cybersecurity policies in Brazil: where do we come from and where are we going?"


THAIS AGUIAR: Thais speaking. First of all, greetings, everyone. Thank you very much for this opportunity to present my research alongside these very interesting researchers; I am very happy to participate in this forum. I would like to thank Paula and my tutor, and also those of you who participate in the leaders program. It's an honor for me to present my research, which analyzes the regulatory framework of cybersecurity in the country. The goal was to research the gaps and the progress in Brazil in the implementation of these policies. Regarding methodology, it is a qualitative, exploratory, and documentary study that analyzes public policies and case studies through the COVID period, tracing the history of cybersecurity policies in Brazil and their future. This is a very brief summary, and I invite you to read the whole work on the internet. Where do we come from? In the last few decades, the rise of the internet society has been a great change for Brazil. The challenge is to promote the use of technology in a secure and safe manner, in order to preserve an open and safe internet that promotes human rights. The path of Brazilian cybersecurity is marked by a complex evolution of bodies and by the need to balance individual rights and cybersecurity. This is a very complex study that involves many bodies and actors, and this structure of cybersecurity and internet governance has many challenges, such as the need for greater clarity of roles and cooperation among the different bodies. In 1995, we had the Committee of Internet Management, which promoted the multi-stakeholder model and served as a model for different bodies, not only in the country but also at the worldwide level. 
There were several bodies, like NIC.br among others, and there are many related bodies across the states. We have the Committee of Cybersecurity, and, as I said before, we need more clarity and cooperation among the stakeholders. Several events gave shape to cybersecurity in the country, such as the 2014 FIFA World Cup, the 2016 Olympic Games in Rio, and the increase of cyber threats. These events required more participation from the bodies that needed to guarantee cybersecurity, and also pushed to include cybersecurity among fundamental rights. Brazil has a history of multi-stakeholder governance in spite of the challenges, and my research covers the approval of the National Strategy of Cybersecurity in 2020. This strategy has as its main goal to strengthen cybersecurity in the country with a multi-sector approach that involves civil society and the public and private sectors. The strategy reaches toward a mature model of cybersecurity across different dimensions, such as the political one, but it also has limitations and threats: transparency problems and the securitization of cyberspace. So, when we ask where we come from and where we are heading, Brazil faces the need for a more solid and effective cybersecurity framework that promotes cooperation among the stakeholders. That is to say, we need a unified and effective strategy to protect infrastructure, services, and individuals in the digital space. 
Policies need to have evidence-based approaches and to involve technical actors from the public and private sectors and civil society, so that they are in line with democratic values and respect fundamental rights. Brazil has to build on its experience with the multi-stakeholder model in order to become a digitally sovereign nation while protecting individual rights and promoting an inclusive and safe digital environment. Brazil has the possibility to become a leader in this sense and to guarantee cybersecurity, becoming an example in the region by improving public policies and consolidating a more solid multi-stakeholder model toward a sovereign digital nation. Thank you for your time.


PAULA OTEGUY: Thank you very much, Thais. Without a doubt, cybersecurity faces huge and complex challenges; you mentioned some of them, namely cooperation among different institutions and organizations. So I invite you all to have a look at all the research we have been sharing today. I will be posting the link in the chat for you to be able to access it. I would like to thank the presenters especially, for representing the group of 16 great researchers that we had in our 2023 edition. And now I would like to give the floor to Fernanda.


MARIA FERNANDA MARTINEZ: Fernanda speaking. Thank you, Paula. Let me echo your comments congratulating the presenters for their research; I will read it later on. Congratulations also on the leaders program for incentivizing research in Latin America and the Caribbean. Now I will introduce the next two researchers, who are going to share their findings with us: Soledad Arenguez and Pilar Chorenz. The idea of this space is to introduce some of CETI's research, but also to bring in new researchers. We do have different ongoing research projects at CETI, but we would like to bring colleagues and introduce them to such an important space as the IGF. So, Soledad Arenguez will be presenting first. She is an expert in new technologies and education, a graduate and postgraduate teacher at different universities in Argentina. She is also coordinator of the communication department at the USCA and a researcher on misinformation on social media and media literacy. Today she will be sharing with us the progress of her research on the Trust Editor. Soledad, please go ahead.


SOLEDAD ARENGUEZ: Soledad speaking. Thank you so much for the introduction. It's a pleasure to be part of this LAC Space session and to speak about our work on the Trust Editor. Let me tell you about the project. This is an organization that was born in Argentina. We are committed to fighting misinformation and we want an ecosystem based on information we can trust. We have different ongoing initiatives, such as media literacy and the development of ideas, products, and solutions. One of the solutions we have been working on is the Trust Editor. What is the Trust Editor about? We know we are facing a trust crisis that the media are undergoing, and this is not the case only in Argentina. This is a worldwide issue, because there is a situation of news disconnection, to which we also need to add fake news and misinformation. Misinformation is not new, but it has new dimensions and new complexities based on the advancements that we see in media. Given this situation, the quick proliferation of these new pieces of misinformation, and the challenges posed by artificial intelligence, journalism and communication media are affected because there is no time to check or verify information. So, the question of how we can face misinformation from within the media led us to create our Trust Editor prototype. What is this development about? It is a prototype that uses artificial intelligence to detect inconsistencies in news and posts before they are published. The idea is to alert editors so that they can adjust the information before publishing that piece. The goal is to intervene in a timely manner, that is, to work at the prebunking stage in order to avoid sharing misinformation or generating information based on fake news or false information. The solution is focused on two key aspects. 
On the one hand, the idea is to reduce the possibility of sharing fake news, using artificial intelligence as one of the tools, and I will expand on that; and on the other, to increase trust in the media and the news being published, reducing biases and polarization. This project is part of the LEAP project, with the support of Trusting News, an organization working on the creation of indicators to raise trust in the media. The prototype aims at working within a CMS, that is to say, working with publishers and media agencies and helping editors in the news-making process. Let me give you some background. We have the journalist and the editing unit. Journalists start drafting their article and add it to the CMS, where the Trust Editor is already working. When the post is sent to editors, the system gets activated, so you can read the article and get some indicators. This is what we call quality indicators, and when I say quality, I know that this concept may be quite complex. What do I mean by this, and why are we emphasizing this complexity? Well, we have been working with a group of publishers, editors, and journalists in order to understand what makes an article trustworthy. Let me give you an example: the article needs to cite sources or authoritative voices, or to provide diversity in the sources it cites. These indicators have been created together with professionals. The development is now in Spanish. We would like to scale the project and to keep working in the prebunking space. There is a user-friendly visual interface that will be integrated: in the text, colors identify the inconsistencies or the paragraphs or phrases that need to be improved, and this can be analyzed on a dashboard. 
In a nutshell, let me also share with you in the chat the presentation, so you can see the demonstration and how this dashboard would look and work with one particular article as an example. The Trust Editor identifies, for example, adjectives and entities; we are still working on the other indicators, and this requires training. What we want to see is how inconsistencies are reduced and translated into certain indicators, so we work with the sources and the expressions that are being used, for example. This Trust Editor, and that is the reason for the name, will analyze the companies that are mentioned, the people, the entities, and the number of times these names appear, because this may reveal certain biases; the adjectives that are used and their quantity; and the words or terms that allow us to differentiate between information and opinion. This is an ongoing project. We want to help publishers. That's why the Trust Editor will deliver red flags for the human eye, for journalists to review the information. This is not automated; I mean, this is not meant to eliminate journalists, but to strengthen their work. We want to show the red flags; the editor will check any indicator or red flag, and this will give room for improvement. Thank you so much for the time, and we expect to have further news in future sessions.
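[Editor's note: the Trust Editor's actual implementation is not public. As a rough illustration only of the kind of indicator logic described above (counting opinion adjectives and repeated entity mentions to raise red flags for a human editor), a minimal heuristic sketch in Python might look like this; the thresholds, lexicon, and function name are invented for this sketch.]

```python
import re
from collections import Counter

# Illustrative thresholds -- invented for this sketch, not the Trust Editor's real values.
ADJECTIVE_DENSITY_LIMIT = 0.08   # flag if more than 8% of words are opinion adjectives
ENTITY_DOMINANCE_LIMIT = 0.6     # flag if one name takes more than 60% of mentions

# Tiny stand-in lexicon; a real system would use a part-of-speech tagger.
OPINION_ADJECTIVES = {"terrible", "amazing", "disastrous", "brilliant", "shameful"}

def quality_flags(text):
    """Return human-readable red flags for an editor to review (never auto-reject)."""
    words = re.findall(r"[a-z']+", text.lower())
    flags = []
    if words:
        density = sum(w in OPINION_ADJECTIVES for w in words) / len(words)
        if density > ADJECTIVE_DENSITY_LIMIT:
            flags.append("high opinion-adjective density (%.0f%%)" % (density * 100))
    # Crude entity heuristic: count capitalized words (includes sentence starts).
    mentions = Counter(re.findall(r"\b[A-Z][a-z]+\b", text))
    total = sum(mentions.values())
    if total >= 3:
        name, count = mentions.most_common(1)[0]
        if count / total > ENTITY_DOMINANCE_LIMIT:
            flags.append("'%s' dominates entity mentions (%.0f%%)" % (name, 100 * count / total))
    return flags
```

Note how the function only returns advisory flags rather than blocking publication, matching the speaker's point that the tool strengthens, not replaces, the journalist.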


PAULA OTEGUY: Thank you, Soledad. This is really interesting. Please share the information in the chat so that we can check it when it is published. It is also very important to highlight that the human eye is there: there is human review behind this tool. Now I'm going to give the floor to Maria Pilar Chorenz. She is a doctor in law and social rights, an expert on technology and rights, a professor in Cordoba, and a CETI researcher. She will talk about her research on supporting judicial sectors in the responsible adoption of generative artificial intelligence, covering cases from Argentina, Brazil, Colombia, and Mexico. Pilar is leading the research in Argentina. Pilar, please go ahead.


MARIA PILAR CHORENZ: Pilar speaking. Thank you, everyone, and thanks to those of you who are still in this session; all presentations have been really interesting. Today, I will very briefly summarize the findings we have gathered throughout the year on generative AI adoption, particularly in the judicial sector of Argentina, because that is my area of expertise. The framework of this project is led by CETI, and the idea is to understand how the different judicial branches in Latin America are adopting artificial intelligence tools, particularly in decision-making processes. The questions are: are these branches open to these processes, or are they using them explicitly? What risks are associated with the use of these tools? Are judges and legal operators taking other elements into account? Are these tools being used only for adopting certain measures and not for final decisions? The idea of this project, as Fernanda said before, is to gather evidence and provide it to the legal ecosystem in order to develop products that allow the use of generative artificial intelligence in a responsible manner, respecting human rights and other legal standards. The project is based on interviews and document analysis, and the idea is to understand how legal operators and the judicial ecosystem embrace the use of generative artificial intelligence and related tools in decision-making. Argentina has a federal system, and therefore we have different jurisdictions with different realities in terms of resources and the cases they manage. For example, only four jurisdictions handle 60% of the cases in the country. This is quite important because it has a direct impact on the legal system when implementing these tools. 
The judicial branches that have adopted these tools are not the ones with the highest number of cases but the ones with a low volume of cases, and this calls our attention, because it does not solve the issues that all jurisdictions have. All jurisdictions have something in common in the country: there is low trust, or a perception of low trust, in the judicial sector. That is to say, society does not trust the judicial system, because people believe that justice is slow and that decisions are not fair. Judicial operators therefore see this as a way of improving the delivery of justice. The emergence of generative AI tools created interest among legal operators because they were able to draft resolutions or make decisions in a shorter time, and this translated into an improvement in the number of cases they could address. In this context, there are no specific institutional use cases of generative AI in Argentina, but we can identify three large universes where these tools are being tested. One is those having the support of the superior courts: the provinces of San Luis, San Juan, and Rio Negro have their own protocols for the use of AI tools. A second use is the one supported by academic institutions and some state institutions; this is a program being developed throughout the country, and the idea is to identify the use that legal operators are making of these tools. There are no published results, so we cannot really assess the impact of the tools in this context. And the largest universe has to do with the individual uses by legal operators, such as judges, in order to facilitate some tasks; in this case, we speak about judgment summaries, case summaries, or searching for case law elements. 
There is one particular case where a judge mentions the use of AI in a specific resolution, but it had no public repercussions. This leads me to analyze the reactions. There is no institutionalized reaction and no official standpoint from the bar associations. There is a wait-and-see position, if you will, from the judicial branch and from the representatives of the ecosystem; they are waiting to see what these tools will bring. However, there is some consensus in the interviews: the use and the introduction of AI is a fact, and the judicial branch has to embrace this and adapt to its use. There is also consensus among legal actors that some tasks, for example summarizing case law or a judgment, can be done by artificial intelligence, although they can also be done with other tools, not necessarily AI. But they know that in legal practice and in decision-making these tools should not be used, because there are certain responsibilities that need to be met by those making decisions. These functional responsibilities have to do with data protection and the way in which personal data are protected and managed in legal processes. Human control is another element that needs to be taken into account when working on resolutions or when using these tools. And there is consensus on the fact that personnel need to be trained in order to use generative AI. To wrap up, so far we have seen that most respondents believe they need to move past their reluctance regarding some processes and adopt generative AI, but that the lack of regulation may lead to issues when using these tools. Multi-stakeholder dialogue is also necessary when discussing the use of generative artificial intelligence, and data management is another aspect to be taken into account.


MARIA FERNANDA MARTINEZ: Thank you very much. This research will be published in mid-March on the CETI website. This is Fernanda Martinez speaking. Before listening to Federica for the close-up, I'd like to thank all the researchers for their research and for the large scope of themes that fall under the umbrella of internet governance. Many of the themes we discussed 10 years ago are still being discussed, and many new topics have emerged. Today, academia is in dialogue with technical experts, and there is a dialogue across different sectors and among professionals with very different backgrounds, and this is very enriching. Going back to the question regarding the second section and the first part, it shows how invigorating and how vital these dialogues are among the different sectors in practice, in the field. Then we will see what happens with those spaces and those sectors, but I think this shows the very rich dialogue that takes place among the different sectors. And something that we say at CETI when we start an activity or a project that needs continuity is that it's difficult to get it started, but once that space disappears, something about that debate goes away as well. So I am calling on you to maintain and keep up these dialogue spaces, because afterwards it's very hard to bring them back. Federica, you have the floor.


FEDERICA TORTORELLA: Federica speaking. Yes, thank you. Thank you very much to all researchers and the regional organizations, and to our remote participants. Thank you to our interpreters for helping us with this very valuable task. With this, we close this edition of the LAC space at the IGF; see you in Norway very soon. Thank you very much. Have a wonderful rest of the day. Thank you, Dito. Bye, everyone.



LIDIA ANCHAMORO

Speech speed

108 words per minute

Speech length

426 words

Speech time

235 seconds

IGF Latin America and Caribbean Secretariat implementing new statutes

Explanation

The IGF Latin America and Caribbean Secretariat is implementing new statutes adopted in 2021. These statutes aim to clarify the roles of different bodies within the IGF structure.


Evidence

New bodies established include the Multi-stakeholder Committee, the Committee for the Selection of Workshops, and a new secretariat organized by Colnodo.


Major Discussion Point

Updates from Regional Internet Governance Organizations



OLGA CAVALLI

Speech speed

118 words per minute

Speech length

563 words

Speech time

285 seconds

South School on Internet Governance organizing 17th edition in Mexico

Explanation

The South School on Internet Governance is organizing its 17th edition in Mexico. The program has evolved from a week-long training to a six-month program with various components.


Evidence

The program now includes a two-month virtual course, a hybrid format course, and an agreement with the University of Mendoza for a diploma in internet governance.


Major Discussion Point

Updates from Regional Internet Governance Organizations


Agreed with

RODRIGO DE LA PARRA


PAULA OTEGUY


LITO IBARRA


Agreed on

Need for regional cooperation and knowledge sharing



RODRIGO DE LA PARRA

Speech speed

113 words per minute

Speech length

573 words

Speech time

303 seconds

ICANN Latin America focusing on regional policy development participation

Explanation

ICANN Latin America is focusing on fostering greater participation of regional actors in ICANN’s policy development processes. They are also emphasizing technical training for DNS operators in the region.


Evidence

Activities organized in ICANN’s in-person meetings in San Juan, Kigali, and Istanbul, with participation from regional colleagues.


Major Discussion Point

Updates from Regional Internet Governance Organizations


Agreed with

OLGA CAVALLI


PAULA OTEGUY


LITO IBARRA


Agreed on

Need for regional cooperation and knowledge sharing


Need to reaffirm multi-stakeholder principles in global processes

Explanation

There is a need to reaffirm the principles of multi-stakeholder governance in global internet processes. This includes processes like the Global Digital Compact and WSIS+20 review.


Major Discussion Point

Current Internet Governance Processes and Challenges


Agreed with

SEBASTIAN BELAGAMBA


LITO IBARRA


LILLIAN CHAMORRO


Agreed on

Importance of multi-stakeholder model in internet governance


Differed with

BASILIO RODRIGUEZ PEREZ


Differed on

Approach to internet regulation and governance



SEBASTIAN BELAGAMBA

Speech speed

130 words per minute

Speech length

638 words

Speech time

294 seconds

Internet Society implementing new 5-year strategic plan

Explanation

The Internet Society is implementing a new 5-year strategic plan starting next year. The plan focuses on addressing global inequality and lack of trust in the Internet.


Evidence

Two main goals: ensuring people worldwide have access to a resilient and affordable Internet, and providing a safe and robust internet experience.


Major Discussion Point

Updates from Regional Internet Governance Organizations


Challenges of WSIS+20 review and IGF mandate renewal

Explanation

The upcoming year presents challenges with the WSIS+20 review and the renewal of the IGF mandate. These processes will shape the future of internet governance discussions.


Major Discussion Point

Current Internet Governance Processes and Challenges


Agreed with

RODRIGO DE LA PARRA


LITO IBARRA


LILLIAN CHAMORRO


Agreed on

Importance of multi-stakeholder model in internet governance



BASILIO RODRIGUEZ PEREZ

Speech speed

92 words per minute

Speech length

380 words

Speech time

246 seconds

LAC-ISP advocating for 6 GHz frequency for Wi-Fi

Explanation

LAC-ISP is advocating for the use of 6 GHz frequency for Wi-Fi, especially for outdoor use. This is seen as important for small ISPs to deliver quality services.


Major Discussion Point

Updates from Regional Internet Governance Organizations


Concerns about “fair share” proposals impacting network neutrality

Explanation

LAC-ISP expresses concerns about “fair share” proposals, arguing they could negatively impact network neutrality. They believe such proposals could cause significant problems for small ISPs in Latin America.


Evidence

The example of South Korea, where operators have started charging content providers, a model that LAC-ISP argues cannot be applied in Latin America.


Major Discussion Point

Current Internet Governance Processes and Challenges


LITO IBARRA

Speech speed

122 words per minute

Speech length

1346 words

Speech time

661 seconds

LAC-IX deploying new internet exchange point infrastructure

Explanation

LAC-IX is deploying new internet exchange point infrastructure in the region. They now have 34 traffic exchange points that are members of the organization.


Evidence

Four new exchange points added this year, bringing the total to 34 members.


Major Discussion Point

Updates from Regional Internet Governance Organizations


Agreed with

OLGA CAVALLI


RODRIGO DE LA PARRA


PAULA OTEGUY


Agreed on

Need for regional cooperation and knowledge sharing


Importance of regional examples of collaboration

Explanation

Lito Ibarra emphasizes the importance of regional examples of collaboration in internet governance. He suggests that these examples can contribute to global discussions.


Major Discussion Point

Current Internet Governance Processes and Challenges


Agreed with

RODRIGO DE LA PARRA


SEBASTIAN BELAGAMBA


LILLIAN CHAMORRO


Agreed on

Importance of multi-stakeholder model in internet governance


PAULA OTEGUY

Speech speed

118 words per minute

Speech length

1074 words

Speech time

544 seconds

LACNIC supporting local internet governance initiatives

Explanation

LACNIC is providing support to local internet governance initiatives in the region. This support includes funding, webinar services, and expert participation in events.


Evidence

In 2024, LACNIC supported 10 local initiatives, 2 youth initiatives, and several internet governance schools.


Major Discussion Point

Updates from Regional Internet Governance Organizations


Agreed with

OLGA CAVALLI


RODRIGO DE LA PARRA


LITO IBARRA


Agreed on

Need for regional cooperation and knowledge sharing


ROCIO DE LA FUENTE

Speech speed

128 words per minute

Speech length

294 words

Speech time

137 seconds

LAC-TLD developing single server for domain name queries

Explanation

LAC-TLD is developing a single server for domain name queries across multiple ccTLDs in the region. This tool aims to provide an additional channel for checking domain availability and registration information.


Evidence

The tool is currently operating in beta mode with participation from various ccTLDs.


Major Discussion Point

Updates from Regional Internet Governance Organizations


LILLIAN CHAMORRO

Speech speed

143 words per minute

Speech length

451 words

Speech time

188 seconds

Opportunity to showcase Latin American internet governance model

Explanation

Lillian Chamorro argues that there is an opportunity to showcase the Latin American internet governance model in global discussions. She emphasizes the importance of regional and national governance spaces in shaping internet governance.


Evidence

Examples of local initiatives like the IGF in El Salvador incorporating cultural elements like heavy metal music.


Major Discussion Point

Current Internet Governance Processes and Challenges


Agreed with

RODRIGO DE LA PARRA


SEBASTIAN BELAGAMBA


LITO IBARRA


Agreed on

Importance of multi-stakeholder model in internet governance


JOSE ROJAS

Speech speed

119 words per minute

Speech length

915 words

Speech time

460 seconds

Child grooming risks in online gaming environments

Explanation

Jose Rojas presents research on the risks of child grooming in online gaming environments. The research aims to describe this phenomenon and offer recommendations for various stakeholders.


Evidence

Study in Chile showing 82% of children recognize the risk of sexual harassment online, but 40% have permission to play with strangers without supervision.


Major Discussion Point

Research on Internet Governance Issues in Latin America


CAMILO ARATIA

Speech speed

104 words per minute

Speech length

1037 words

Speech time

596 seconds

Technological appropriation among indigenous youth in Bolivia

Explanation

Camilo Aratia presents research on technological appropriation among indigenous youth in Bolivia. The study examines access, learning, integration, and transformation aspects of technology use among different indigenous communities.


Evidence

Findings show varying levels of internet access and use among different indigenous communities, with urban areas having better access than rural areas.


Major Discussion Point

Research on Internet Governance Issues in Latin America


THAIS AGUIAR

Speech speed

102 words per minute

Speech length

735 words

Speech time

430 seconds

Evolution of cybersecurity policies and frameworks in Brazil

Explanation

Thais Aguiar presents research on the evolution of cybersecurity policies and frameworks in Brazil. The study analyzes the regulatory framework and identifies gaps and progress in implementing cybersecurity policies.


Evidence

Approval of the National Cybersecurity Strategy in 2020, which aims to strengthen cybersecurity through a multi-sector approach.


Major Discussion Point

Research on Internet Governance Issues in Latin America


SOLEDAD ARENGUEZ

Speech speed

107 words per minute

Speech length

960 words

Speech time

536 seconds

Development of AI tool to detect misinformation in news articles

Explanation

Soledad Arenguez presents the development of an AI tool called Trust Editor to detect inconsistencies in news articles before publication. The tool aims to alert editors to potential misinformation and improve trust in media.


Evidence

The tool uses quality indicators developed with publishers and editors to identify potential issues in articles.


Major Discussion Point

Research on Internet Governance Issues in Latin America


MARIA PILAR CHORENZ

Speech speed

106 words per minute

Speech length

983 words

Speech time

553 seconds

Adoption of generative AI in judicial systems in Argentina

Explanation

Maria Pilar Chorenz presents research on the adoption of generative AI in judicial systems in Argentina. The study examines how legal operators are embracing AI tools in decision-making processes and the associated risks and challenges.


Evidence

Findings show varying levels of AI adoption across different jurisdictions, with some courts developing protocols for AI use and others relying on individual use by legal operators.


Major Discussion Point

Research on Internet Governance Issues in Latin America


Agreements

Agreement Points

Importance of multi-stakeholder model in internet governance

speakers

RODRIGO DE LA PARRA


SEBASTIAN BELAGAMBA


LITO IBARRA


LILLIAN CHAMORRO


arguments

Need to reaffirm multi-stakeholder principles in global processes


Challenges of WSIS+20 review and IGF mandate renewal


Importance of regional examples of collaboration


Opportunity to showcase Latin American internet governance model


summary

Multiple speakers emphasized the importance of maintaining and strengthening the multi-stakeholder model in internet governance, both at regional and global levels.


Need for regional cooperation and knowledge sharing

speakers

OLGA CAVALLI


RODRIGO DE LA PARRA


PAULA OTEGUY


LITO IBARRA


arguments

South School on Internet Governance organizing 17th edition in Mexico


ICANN Latin America focusing on regional policy development participation


LACNIC supporting local internet governance initiatives


LAC-IX deploying new internet exchange point infrastructure


summary

Several speakers highlighted initiatives aimed at fostering regional cooperation, knowledge sharing, and capacity building in various aspects of internet governance.


Similar Viewpoints

Both speakers emphasized the importance of improving internet access and infrastructure in the region, albeit through different approaches.

speakers

SEBASTIAN BELAGAMBA


BASILIO RODRIGUEZ PEREZ


arguments

Internet Society implementing new 5-year strategic plan


LAC-ISP advocating for 6 GHz frequency for Wi-Fi


Both researchers focused on using technology to address online risks and improve trust in digital environments, particularly for vulnerable groups like children and news consumers.

speakers

JOSE ROJAS


SOLEDAD ARENGUEZ


arguments

Child grooming risks in online gaming environments


Development of AI tool to detect misinformation in news articles


Unexpected Consensus

Integration of cultural elements in internet governance discussions

speakers

LILLIAN CHAMORRO


CAMILO ARATIA


arguments

Opportunity to showcase Latin American internet governance model


Technological appropriation among indigenous youth in Bolivia


explanation

Both speakers, despite focusing on different aspects of internet governance, highlighted the importance of integrating local cultural elements into discussions and research on internet governance in Latin America.


Overall Assessment

Summary

The main areas of agreement among speakers included the importance of the multi-stakeholder model, regional cooperation in internet governance, and the need to address both infrastructure development and sociocultural aspects of internet use in Latin America.


Consensus level

There was a moderate to high level of consensus among speakers on the importance of regional collaboration and the multi-stakeholder approach. This consensus suggests a strong foundation for continued cooperation in addressing internet governance challenges in Latin America. However, speakers also presented diverse research topics and organizational focuses, indicating a rich and varied approach to internet governance issues in the region.


Differences

Different Viewpoints

Approach to internet regulation and governance

speakers

BASILIO RODRIGUEZ PEREZ


RODRIGO DE LA PARRA


arguments

LAC-ISP expresses concerns about “fair share” proposals, arguing they could negatively impact network neutrality. They believe such proposals could cause significant problems for small ISPs in Latin America.


Need to reaffirm multi-stakeholder principles in global processes


summary

While LAC-ISP emphasizes concerns about specific regulatory proposals like ‘fair share’, ICANN focuses on broader multi-stakeholder principles in global processes. This indicates a difference in approach to internet governance, with LAC-ISP focusing on specific industry concerns and ICANN emphasizing broader governance principles.


Unexpected Differences

Focus of technological development efforts

speakers

BASILIO RODRIGUEZ PEREZ


ROCIO DE LA FUENTE


arguments

LAC-ISP advocating for 6 GHz frequency for Wi-Fi


LAC-TLD developing single server for domain name queries


explanation

While both speakers represent technical organizations, their focus on technological development differs unexpectedly. LAC-ISP is advocating for specific frequency allocation for Wi-Fi, while LAC-TLD is developing a centralized domain name query system. This highlights the diverse technical priorities within the region’s internet governance ecosystem.


Overall Assessment

summary

The main areas of disagreement revolve around specific regulatory approaches, priorities in technological development, and the focus of multi-stakeholder involvement in global processes.


difference_level

The level of disagreement among speakers is moderate. While there are differences in specific approaches and priorities, there seems to be a general consensus on the importance of multi-stakeholder involvement and regional cooperation in internet governance. These differences reflect the diverse interests and perspectives within the Latin American internet governance ecosystem, which could lead to rich discussions and potentially comprehensive solutions that address various stakeholder needs.


Partial Agreements

Partial Agreements

All speakers agree on the importance of multi-stakeholder involvement in global internet governance processes. However, they differ in their specific approaches: Sebastian Belagamba focuses on the challenges of WSIS+20 review and IGF mandate renewal, Rodrigo de la Parra emphasizes reaffirming existing principles, while Lillian Chamorro suggests showcasing the Latin American model as a unique contribution.

speakers

SEBASTIAN BELAGAMBA


RODRIGO DE LA PARRA


LILLIAN CHAMORRO


arguments

Challenges of WSIS+20 review and IGF mandate renewal


Need to reaffirm multi-stakeholder principles in global processes


Opportunity to showcase Latin American internet governance model


Takeaways

Key Takeaways

Regional internet governance organizations in Latin America and the Caribbean are actively working on various initiatives to strengthen internet governance in the region


There are ongoing challenges and opportunities related to global internet governance processes like WSIS+20 review and IGF mandate renewal


Research on internet governance issues in Latin America covers a wide range of topics, from child online protection to AI adoption in judicial systems


Multi-stakeholder collaboration and dialogue remain crucial for addressing internet governance challenges in the region


Resolutions and Action Items

Continue promoting and supporting local and regional internet governance initiatives


Prepare for upcoming global internet governance processes like WSIS+20 review


Further develop and implement tools to combat misinformation and enhance cybersecurity


Expand research on emerging technologies and their impact on internet governance


Unresolved Issues

How to effectively address the digital divide, particularly for indigenous communities


Balancing cybersecurity needs with protection of individual rights and freedoms


Regulatory frameworks for emerging technologies like generative AI in various sectors


Long-term sustainability of the multi-stakeholder internet governance model


Suggested Compromises

Adopting a balanced approach to AI implementation in judicial systems, maintaining human oversight


Developing region-specific solutions for internet governance challenges while aligning with global principles


Fostering collaboration between technical experts and policymakers to address complex internet governance issues


Thought Provoking Comments

We teach children not to talk to strangers outside, but in the digital world, they talk to strangers without knowing who is the person behind avatars or usernames.

speaker

José Rojas


reason

This comment highlights the paradox between real-world and online safety practices for children, drawing attention to a critical issue in online child protection.


impact

It set the stage for a deeper discussion on the risks of online gaming platforms and the need for better digital education and protection measures for children.


There were several bodies like NIC.ER among others, and there are many findings around the states. We have the Committee of Cybersecurity, and also, as I said before, we need more clarity and cooperation among the stakeholders.

speaker

Thais Aguiar


reason

This comment emphasizes the complexity of cybersecurity governance and the need for better coordination among various stakeholders.


impact

It led to a discussion on the challenges of implementing effective cybersecurity policies and the importance of multi-stakeholder cooperation.


The idea is to understand how legal operators and the judicial ecosystem embraces the use of generative artificial intelligence and related tools in decision making.

speaker

Maria Pilar Chorenz


reason

This comment introduces the important topic of AI adoption in the judicial system, raising questions about its implications for legal decision-making.


impact

It sparked a discussion on the potential benefits and risks of using AI in the judicial sector, as well as the need for responsible implementation and human oversight.


Internet access was scarce. There we have the first contrast. There’s just Intel, one of the telecommunication companies. They could not choose, because if they had another company as a server, they wouldn’t have access to the internet.

speaker

Camilo Aratia


reason

This comment highlights the stark digital divide that exists even within a single country, particularly affecting indigenous communities.


impact

It broadened the discussion to include issues of digital inequality and the challenges of technological appropriation in marginalized communities.


Overall Assessment

These key comments shaped the discussion by highlighting critical issues in internet governance across various domains – from child online safety to cybersecurity policy, AI in judicial systems, and digital inequality. They broadened the scope of the conversation beyond technical aspects to include social, legal, and ethical considerations. The comments also emphasized the need for multi-stakeholder cooperation and context-specific approaches in addressing these complex challenges.


Follow-up Questions

How can the multi-stakeholder model of internet governance be preserved and strengthened at national and regional levels if the global IGF were to be suspended or significantly changed?

speaker

Lito Ibarra


explanation

This is important to ensure continued dialogue and collaboration on internet governance issues in Latin America and the Caribbean, even if global structures change.


What are the specific implementation challenges for the Global Digital Compact and how can they be addressed while maintaining core internet governance principles?

speaker

Rodrigo de la Parra


explanation

Understanding these challenges is crucial for effectively implementing the GDC while preserving the multi-stakeholder model and other key principles.


How can Latin American and Caribbean countries better coordinate their positions and contributions to global internet governance processes like WSIS+20?

speaker

Rodrigo de la Parra


explanation

Improved regional coordination could strengthen the voice and influence of LAC countries in shaping global internet governance.


What are the most effective strategies for improving digital literacy and awareness of online risks like grooming among parents, educators, and children in Latin America?

speaker

José Rojas


explanation

This is critical for protecting children from online exploitation and ensuring safe use of technology.


How can policies and infrastructure development be tailored to address the significant disparities in internet access and use between urban and rural indigenous communities?

speaker

Camilo Aratia


explanation

Addressing these disparities is essential for ensuring equitable digital inclusion of indigenous populations.


What are the best practices for fostering cooperation and clear role definition among the various institutions involved in Brazil’s cybersecurity framework?

speaker

Thais Aguiar


explanation

Improving institutional cooperation is key to developing a more effective and cohesive national cybersecurity strategy.


How can the Trust Editor tool be further developed and implemented to effectively combat misinformation while respecting journalistic integrity and freedom of expression?

speaker

Soledad Arenguez


explanation

Balancing technological solutions for misinformation with fundamental press freedoms is crucial for maintaining trust in media.


What ethical guidelines and regulatory frameworks are needed to ensure responsible adoption of generative AI in judicial decision-making processes across Latin America?

speaker

Maria Pilar Chorenz


explanation

Developing appropriate guidelines is essential to harness the benefits of AI in the justice system while protecting rights and maintaining public trust.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Day 0 Event #174 Human Rights Impacts of AI on Marginalized Populations

Session at a Glance

Summary

This panel discussion focused on the impacts of artificial intelligence (AI) on marginalized populations, exploring both the opportunities and risks presented by AI technologies. Experts from government, industry, and civil society organizations shared insights on how AI can advance equity in areas like healthcare and education, while also potentially exacerbating existing biases and inequalities.


Key concerns raised included the lack of diverse representation in AI development, biases in training data that reinforce discrimination, and the misuse of AI for surveillance and censorship by authoritarian governments. Panelists emphasized the need for human rights impact assessments, increased transparency from companies and governments, and the inclusion of marginalized voices in AI governance discussions.


Specific recommendations included conducting regular bias audits of AI systems, strengthening data protection regulations, creating inclusive digital spaces, and establishing clear accountability mechanisms. The importance of addressing AI use in military contexts was also highlighted as a critical area requiring more attention and safeguards.


The discussion underscored the necessity of a multi-stakeholder approach to AI governance, with governments, companies, and civil society working together to ensure AI benefits all of society. Panelists stressed that protecting human rights must be central to AI development and deployment, particularly for the most vulnerable populations.


The conversation concluded by emphasizing the timeliness of these issues in light of ongoing UN processes and the need for continued international collaboration to shape inclusive and equitable AI systems that respect the rights of all individuals, regardless of their background or identity.


Keypoints

Major discussion points:


– The risks and opportunities of AI for marginalized populations


– The need for diverse representation in AI development and governance


– Challenges of AI use in military/defense contexts and lack of transparency


– The importance of human rights considerations in AI policy and regulation


– The role of multistakeholder collaboration in addressing AI challenges


The overall purpose of the discussion was to examine how artificial intelligence impacts marginalized populations, identify key risks and opportunities, and explore ways that governments, companies, and civil society can work together to ensure AI benefits all of society while mitigating potential harms.


The tone of the discussion was largely serious and concerned, with speakers highlighting significant challenges and risks posed by AI to vulnerable groups. However, there were also notes of cautious optimism about AI’s potential benefits if developed responsibly. The tone became more action-oriented toward the end, with concrete suggestions for next steps and collaborations to address the issues raised.


Speakers

– Alisson Peters: Deputy Assistant Secretary of State for the Bureau of Democracy, Human Rights, and Labor in the U.S. State Department


– Desirée Cormier Smith: Special Representative for Racial Equity and Justice at the U.S. State Department


– Dr. Geeta Rao Gupta: Special Representative for Gender Equity and Equality at the U.S. State Department


– Jessica Stern: U.S. Special Envoy to Advance the Human Rights of LGBTQI+ Persons


– Kelly M. Fay Rodriguez: Special Representative for International Labor Affairs at the U.S. State Department


– Sara Minkara: U.S. Special Advisor on International Disability Rights


– Nicol Turner Lee: Expert on the intersection of race and technology, digital divide, and digital equality


– Nighat Dad: Founder and Executive Director of the Digital Rights Foundation, expert on online harassment and digital security for women


– Rasha Younes: Human rights advocate and researcher at Human Rights Watch, expert on LGBTQI+ rights


– Amy Colando: Head of Microsoft’s Responsible Business Practice, expert on technology and human rights


– Guus Van Zwoll: Representative from the Netherlands government, involved in the Freedom Online Coalition


Additional speakers:


– Dr. Lee: Audience member asking a question


– Usama Kilji: Representative from Bolo B, a digital rights organization in Pakistan


– Khaled Mansour: Member of Meta Oversight Board


Full session report

Expanded Summary of Panel Discussion on AI’s Impact on Marginalised Populations


Introduction


This panel discussion brought together experts from government, industry, and civil society to explore the impacts of artificial intelligence (AI) on marginalised populations. Alisson Peters, Deputy Assistant Secretary of State for the Bureau of Democracy, Human Rights, and Labor, opened the discussion by emphasizing the importance of a multistakeholder approach to understanding AI’s societal impacts and the U.S. government’s commitment to addressing AI governance.


Key Themes and Discussion Points


1. Risks and Opportunities of AI for Marginalised Populations


The panel acknowledged the dual nature of AI’s potential impact. Nicol Turner Lee, an expert on the intersection of race and technology, highlighted both opportunities and challenges. She noted AI’s potential to improve healthcare outcomes and educational access for underserved communities, while also cautioning that AI systems often reinforce historical discrimination against marginalised groups. Turner Lee provided specific examples, such as AI-driven hiring tools potentially discriminating against women and minorities, and facial recognition systems misidentifying people of color at higher rates.


Jessica Stern, U.S. Special Envoy to Advance the Human Rights of LGBTQI+ Persons, offered a nuanced perspective, stating that “Computers might be binary, but people are not.” She suggested that generative AI could help reimagine inclusive futures and allow for safe and authentic self-expression, while also cautioning about the need to address biases in AI training data.


Sara Minkara, U.S. Special Advisor on International Disability Rights, pointed out that AI development often leaves out the disability community entirely, highlighting the need for inclusive design and development processes.


2. Addressing Biases and Harms in AI Systems


A significant portion of the discussion focused on strategies to mitigate biases and potential harms in AI systems. Nicol Turner Lee emphasised the need to interrogate AI models for bias and question whether automation is appropriate in various contexts. Nighat Dad, Founder and Executive Director of the Digital Rights Foundation, called on companies to do more to address harms on their platforms.


Rasha Younes, a human rights advocate and researcher at Human Rights Watch, proposed concrete steps, suggesting that “developers should conduct regular bias audits and build diverse representative data sets.” She also recommended that policymakers require independent testing of AI systems for biases, particularly when deployed in public-facing roles.


Amy Colando, Head of Microsoft’s Responsible Business Practice, shared insights on how her company employs various approaches to combat societal biases in AI systems. She discussed Microsoft’s efforts to increase transparency while maintaining customer confidentiality, highlighting the tension between these two objectives. Colando also detailed Microsoft’s responsible AI development practices, including ethical guidelines, diverse team composition, and ongoing research into AI safety and fairness.


3. Ensuring Inclusive AI Governance


The need for more inclusive approaches to AI governance was a recurring theme throughout the discussion. Alisson Peters highlighted U.S. government efforts, including a national security memorandum on AI that emphasizes human rights considerations. She noted that the U.S. government has policies to ensure human rights assessments in AI procurement.


Nighat Dad pointed out that AI governance conversations are heavily concentrated in Global North countries, often excluding perspectives from regions where these technologies are deployed. Rasha Younes stressed the need to strengthen protections against digital targeting of vulnerable groups.


Guus Van Zwoll, representing the Netherlands government and the Freedom Online Coalition (FOC), discussed efforts to keep human rights central in AI governance discussions. He mentioned that the FOC would be updating its 2020 statement on AI and human rights in the coming year, organizing workshops to educate policymakers on AI challenges for human rights, and spotlighting examples of AI that can advance human rights for marginalised groups.


4. Transparency and Accountability in AI Development


Khaled Mansour, a member of Meta's Oversight Board, highlighted the challenge of transparency in human rights impact assessments. The discussion also touched on the embedded militarisation of everyday AI tools, a concern raised by audience member Usama Kilji, who called for more discussion of, and safeguards around, military uses of AI.


Unresolved Issues and Future Directions


Despite the comprehensive nature of the discussion, several issues remained unresolved. These included questions about how to effectively include marginalised voices in AI governance discussions, balancing transparency in human rights impact assessments with customer confidentiality, addressing AI use in military settings and its potential humanitarian impacts, and closing the digital divide to ensure equitable access to AI benefits.


The panel suggested several areas for further research and action, including:


1. Revising and updating the 2020 FOC statement on AI and human rights to reflect current AI developments and emphasise the disproportionate impact on marginalised communities.


2. Identifying and highlighting the disproportionate impact of AI on marginalised groups through tools like Stanford’s AI MISUSE tracker.


3. Conducting community-rooted AI research that prioritises diversity and addresses AI impacts on marginalised groups.


4. Exploring how AI can be leveraged to empower marginalised groups while ensuring accountability and ethical development.


Conclusion


The discussion underscored the necessity of a multi-stakeholder approach to AI governance, with governments, companies, and civil society working together to ensure AI benefits all of society. Panellists stressed that protecting human rights must be central to AI development and deployment, particularly for the most vulnerable populations. The conversation highlighted the ongoing challenges and opportunities in shaping inclusive and equitable AI systems that respect the rights of all individuals, regardless of their background or identity.


Session Transcript

Alisson Peters: All right, good afternoon, good evening, everyone, we’re going to get started, forgive us for some technical difficulties. Can everyone hear us? Hopefully everyone online can hear us as well. Well, thank you all for joining us, both in person and online. I’m going to try to talk as loudly as I possibly can, because I know it’s been a bit challenging for folks online today to hear every session. It’s my pleasure to be here on behalf of the United States government, where I serve as Deputy Assistant Secretary of State for our Bureau of Democracy, Human Rights, and Labor in the State Department. Before we get started in today’s session, we have the esteemed honor of welcoming virtually a number of our special envoys and representatives to the United States government representing various different marginalized populations, and they wanted to send their greetings as well.


Desirée Cormier Smith: So AI offers incredible potential to advance equity by increasing access to health care, education, and economic opportunity for those who need them the most. However, too often marginalized populations bear the worst harms of AI. There is vast evidence showing how AI systems can reinforce historical patterns of discrimination that disproportionately impact people of African descent, indigenous peoples, Roma people, and other marginalized racial and ethnic communities. And the risks of harm are the most pronounced for people who experience multiple and intersecting forms of discrimination.


Dr. Geeta Rao Gupta: That’s right. AI tools are aiding the creation and dissemination of technology-facilitated gender-based violence, or TFGBV, especially against women and children. This especially pernicious form of harassment and abuse is already threatening the ability of women and girls to participate in all spaces, online and offline, and has grave consequences for democracy.


Jessica Stern: Yes, computers might be binary, but people are not. Generative AI can help us reimagine inclusive futures and express ourselves safely and authentically. However, we need to be mindful about biases in the data that AI tools and systems are built on and how they translate into individuals’ lives. Nuanced data about LGBTQI plus people with appropriate privacy protections can help ensure that recommendation algorithms governing, for example, our shopping habits or content we consume on social media don’t entrench harmful social stereotypes or censor the beautiful diversity of humanity. AI is an exciting set of technologies that have the potential across all sectors to help us consider and integrate diverse perspectives.


Kelly M. Fay Rodriguez: Humans are the future of work, and freedom of association and collective bargaining are central to safeguarding workers’ rights and standards amid the rapid expansion of AI technologies, including and in particular for marginalized populations. Unions play an essential role in advocating for practices that can increase the meaningful representation of women and diverse groups and marginalized populations in AI. They advocate for safe work environments, limiting invasive and unsafe workplace monitoring. They ensure fair employment practices, secure equitable compensation, and ensure that benefits are shared.


Sara Minkara: AI is a reality of our present and our future, but also what is a reality is that a lot of time AI is built in a way that leaves us behind. And when I say us, I’m saying the disability community. We need to ensure that AI in development, in design, in the testing, in implementation is accessible for everyone, including the disability community. And not just on the assistive technology side of things, but for all technology.


Alisson Peters: Thank you very much to all of our special representatives and envoys. I think as you heard here at the start of our session, our shared goal in the United States government is really to harness the opportunities of artificial intelligence, whether that be on economic growth, increased access to quality education and advancement in medical care, while mitigating the risk. And we know all too often some of AI’s most egregious harms fall on marginalized populations and those experiencing multiple and intersecting forms of discrimination, including algorithmic biases, increased surveillance, and online harassment. We’re witnessing around the globe an unfortunate trend of governments misusing artificial intelligence in ways that significantly impact marginalized populations, such as through social media monitoring and other forms of surveillance, censorship, harassment, and information manipulation. To counter these abuses over the last four years, the United States government has taken several steps to encourage safeguards at the national level. We’ve introduced a number of executive orders and memos into our government system to safeguard the use and deployment and development of artificial intelligence for human rights. And at the international level, we’re working closely with the Freedom Online Coalition and other key like-minded partners through the UN and other multilateral systems to lay the groundwork for continued international and multi-stakeholder collaboration for years to come. But there’s more work to be done, and that’s where today’s discussion really comes in. The only way that governments can work to ensure that marginalized populations aren’t disproportionately harmed by technological advancements is in partnership with you all around the room, around IGF and those that are online.
We’re focused heavily on building safeguards into our systems to address the dissemination of disinformation and harmful synthetic imagery that can harm marginalized populations, how AI systems can exacerbate existing digital and real-world divides, and how they can reinforce stereotypes that further stigmatization, especially when these systems are not accessible for all their users. So we’re quite fortunate today to be joined by an esteemed panel of experts that have really gathered to work to fight back against these worrying threats and trends. We have a number of our panelists online and we’re fortunate to be joined here in the room by two of them as well. First, we’ll hear from Dr. Nicol Turner Lee, who’s a leading voice on the intersection of race and technology and the digital divide and is a recognized expert on issues of digital equality and inclusion. Her work ensures that all communities, particularly marginalized ones, benefit from technological advancements. We’re also joined by Amy Colando, who’s a lawyer with deep expertise on the intersection of technology and human rights. As the head of Microsoft’s Responsible Business Practice, she leads a team dedicated to advancing Microsoft’s commitment to human rights norms and a responsible value chain that respects and advances human rights. Nighat Dad, our friend and partner, is a globally recognized lawyer and advocate for women’s rights and digital privacy. She’s the founder and executive director of the Digital Rights Foundation, which focuses on issues of online harassment, data protection, and digital security for women and marginalized populations in Pakistan, and is a member of Meta’s Oversight Board. And Rasha Younes is a prominent human rights advocate and researcher at Human Rights Watch.
Her work has highlighted the systemic discrimination and violence faced by LGBTQI plus individuals in the MENA region and beyond, and her efforts have been instrumental in bringing international attention to these issues and pushing for legal reforms. So first I wanted to start out by setting the scene a little bit in terms of both the risks and opportunities that come from AI and the threats to marginalized populations. Let’s kick things off with Nicol, I’ll turn to you first. Some of these issues benefited from extensive international conversations, from the recognition in the engineering community over the past decade that it is critical to address harmful biases in AI, to efforts to curb the misuse of artificial intelligence and generative AI tools for image-based sexual abuse. Help us set the stage. Where do you think important progress has been made over the past several years, and what current challenges do you think need to be addressed or elevated on the agenda, particularly as we’re all gathered here this week at IGF to address critical internet governance discussions? I think it’s really important that you help us think a little bit thoughtfully about where the current gaps and opportunities exist that we can leverage.


Nicol Turner Lee: Well thank you so much for the kind introduction and also thank you to all of you and the IGF for hosting this conversation. Before we start though, I do also want to say that I am the author of a new book, Digitally Invisible, to ensure that people know that there is content that I’ve written about this disconnect between the opportunities of technology and those who are marginalized or impacted by it. So I want to lean into this conversation on where we have seen some opportunity and where we have challenges. And in particular, in my few short moments just answering this question, I do want to point out that one of the opportunities that has become most prominent is our ability to engage in artificial intelligence, given the distributed compute power that we have. So I think it’s really important to have this conversation, because the opportunities also pose threats themselves. But what we are seeing is the ability to distribute networks in a way because we are building compute power that has, you know, capacity. I’ve been doing this for about 30 years in terms of technology and its accessibility by people of color in particular, and we’ve not seen this very distributed network evolve as it has done today with chips and power. The other thing that has been an opportunity of AI has been the way it’s been integrated into a variety of verticals. At the Brookings Institution, we started what’s called an AI Equity Lab, which allows us to workshop journalism and AI, health care and AI, criminal justice and AI. And why we do that, first and foremost, by putting the name of the sector and then AI, is that we’ve seen an incredible influence of technology tools on these verticals that in essence determine quality of life on the social welfare side, as well as the economic opportunity side. And so I think we’ve come a long way, for example, in health care. We’re actually seeing personalized medicine.
We’re seeing more efficiency among doctors when it comes to personalized medicine and the management of health. We’re seeing a lot more contemporary reaction and quick reaction. We saw that during the COVID vaccine development, where pinpointing things that would have taken a very long time in our intellectual discovery is now happening through AI. And I think another area where we’ve seen a lot of promise has been in climate, where we’re able to use drone-enabled surveillance to look at where we have thermal outputs or throughputs that have potential danger for natural disaster or wildfires. We’re also seeing for agriculture, for example, because many of these are very intersectional, the ability to look at climate as it relates to watering times or when we’re able to be most productive in crop development. So I wanna put that out there because I often sound like a pessimist, which I will sound like now when it comes to AI and marginalized communities. So where we see these efficiency growth spurts, one of the areas that we’re seeing a lot of bias, as it’s already been indicated by many of the speakers, is when it comes to flipping these opportunities into challenges or hurdles. So I’ll just close with a couple of thoughts that will frame, hopefully, the rest of the conversation. Obviously, there’s demographic bias. In the United States, that demographic bias is profoundly defined by race and ethnicity, and gender has become more of a human rights concern. In other countries outside of the United States, class has also found its way into the demographic biases, and in both the United States and outside the United States, geography has become a bias. Where you live, who you are, and what you do matters because it is reflected in what we call at the Brookings Institution the traumatized nature of the data which is training these models.
It comes with those historical biases, and those historical biases are often traumatized, meaning if there are systemic inequalities that point to the unequal access to education, for example, they will show up in the training data, and as a result, have a consequential outcome of either greater surveillance or less utility for students that may be in that category or impacted. The other area where we actually have challenges is not just who’s commoditized by AI, those who are impacted by them, but who’s creating them. The lack of representation of who sits at the table to design the models, absent the people who are actually impacted by them, creates, I think, an over-judgment of power that has consequences that can foreclose on the economic and social opportunities that AI models can create, the ones that I just spoke about. For example, when we think about who is developing models for the health of black women, let’s just take that for example, people may not understand that the lack of participation of black women in clinical trials may mean that they may not show up, particularly when it comes to breast cancer diagnosis, in training models. This was actually recently put out by the Journal of the American Medical Association, that black women disproportionately experience breast cancer because their data is not represented in major data sets. That actually shows up in AI because AI is not divorced from the market-based data that is actually training these systems. The other thing when it comes to the challenges that we have with AI is the fact that, as it’s been mentioned and as my book suggests, we have a digital divide. We’re creating AI systems, and in many respects, we haven’t closed the accessibility divide. That creates its own set of challenges as to who will be able to benefit. And when you also think about generative AI, and I’ll sort of close here to provide enough time for my colleagues to chime in as well.
When we think about the global majority, we do a lot of work at the Brookings Institution on how these systems show up, not only in terms of marginalized populations in the US, but all over the world. In the African Union, for example, we know that there’s a digital language divide, and generative AI is primarily English-based, and it is not necessarily trained on the plethora of dialects that come out of a variety of global majority countries. As a result of that, we see challenges when it comes to representation, not only in training data, but whether or not populations actually see themselves in these tools, particularly generative AI that is meant and designed to be, again, a lever for economic and social mobility in those areas. I mean that, along with the rights of the workers who are taking those jobs to be able to annotate the data. I could go on and on, but there are so many structural, behavioral, as well as output or consequential outcomes that occur when we don’t have the right people at the table, when we continue to commoditize the subjects of marginalized populations to fuel the AI models that we’re developing. Third, we don’t interrogate these models. And I’ll just say this, we don’t interrogate them for bias. We also don’t interrogate whether or not they should be used at all, or whether a decision should be automated in the first place. So I will stop here and look forward to this conversation. Hopefully I gave you enough to talk about as we go into the next speakers. And thank you so much for having me.


Alisson Peters: Thank you so much, Nicol. I think you did a really phenomenal job, first and foremost, plugging your book, which I encourage everyone to buy, but also both laying out the real tangible opportunities that we see from AI, everything from journalism, healthcare, addressing the impacts of climate change, and then laying out in detail some of the tremendous risks that we see for marginalized populations. So you addressed issues around the accessibility divide, exacerbating existing biases in our societies through the use of big data. You talked about who gets a seat at the table in the design, deployment, and use of these technologies and beyond. So I next wanted to turn to Nighat. Your organization has really been on the front lines of documenting, I think, some of the risks that Nicol just laid out, the exact impact to marginalized populations, whether that be to women and girls. And I know you’ve done a lot of work on tech-facilitated gender-based violence or impacts to human rights defenders or religious minorities. And I’m hoping you can sort of build off of what Nicol was talking about in terms of the broader risks that she laid out and give us a tangible example or two of where you’ve seen both the benefits and risks of AI tools to marginalized populations, and then really because we do have many different stakeholders at the table this week in the IGF conversations, whether that be from governments or the private sector, where do you think there’s gaps that require more attention in our international discussions?


Nighat Dad: At Digital Rights Foundation, we have been doing a lot of work around addressing tech-facilitated gender-based violence, and I feel that talking about AI or AI tools is an extension of what we have been talking about for years around digital tools or digital rights, and all the harms that we are now connecting with AI are actually an extension of those harms with the usage of AI, and they have become more sophisticated and advanced. That’s the same case with tech-facilitated gender-based violence, where we are now seeing how deep-fake images of women and young girls are actually creating more risks for them, specifically when they are from the regions and cultures which are more conservative, where the honor of families or the society is connected to women’s bodies. One challenge that we are witnessing is basically verifying whether these deep fakes are actually real or unreal. That was not the case before AI-generated content when it comes to images and videos. I think another challenge is regulating this space. Tech companies really have to do a lot, and sitting on Meta’s Oversight Board, we actually framed our own experience as a board in terms of what companies like Meta can do to use automation around dealing with the harms on their platforms and released a paper on this. When it comes to the governments, I feel that there is a huge gap in governing AI. The conversations, and I always say this, even while sitting at the UN Secretary-General’s AI high-level body, are very much concentrated in some global North countries. And in the past, we have seen how technology that is developed, designed, and built elsewhere is mostly, you know, dumped in our regions, and we have no say into how, you know, these technologies are designed for the marginalized groups in our regions. Now, that exactly is the case with the AI tools as well.
I mean, there are some benefits where, you know, it’s also being used in health care and climate change monitoring, and AI-powered translation tools are also breaking down language barriers for marginalized groups. But I feel that all these opportunities are still connected to the entire cycle of how AI is being developed, designed, processed and deployed. I think there are lots of things to say, but there is a huge responsibility on AI companies and on tech platforms, where all these harms are being increased by the use of AI. But the governments also: how we can bring more accountability and oversight into the regulations that they are framing without including civil society voices and without having a conversation on human rights violations when it comes to AI tools.


Alisson Peters: Thanks so much, Nighat. I think, you know, you raise a really important point that I suspect we will have a lot of additional conversations about this week at IGF, which is that if we don’t protect this multistakeholder model of Internet governance, a multistakeholder model of conversations around the regulation and governance of AI and emerging technologies, then we will be missing sort of an entire part of the conversation, which is how are these tools being deployed and used in ways that are impacting the whole of society, not just the governments and the people representing them. I think that’s a good pivot over to you, Rasha, as you’ve done a lot of work looking at the impacts of AI tools from government misuse of these technologies. And I know you’ve done an incredible amount of work documenting the ways in which autocratic governments have used technology to repress marginalized populations, particularly LGBTQI plus persons. I’m hoping you could share a little bit of insight on how policymakers and AI developers should be thinking about these issues in relation to the governance and regulation of artificial intelligence, particularly sort of reflecting on the years of research that you’ve done.


Rasha Younes: Thank you so much, and thank you for having me today. In 2023, we published a report on the digital targeting of LGBTQI plus people across the Middle East and North Africa region, particularly in Iraq, Lebanon, Egypt, Jordan and Tunisia. What we found is that governments are using monitoring tools, usually manual monitoring, not sophisticated tools, to target and harass LGBTQI plus people. And the significant finding that we had is that these abuses do not end in the instance of online harm, in the sense that they are not transient, but reverberate through individual lives in ways that often ruin their lives entirely. In our report and in our follow-up campaign, which we published in 2024, we urge particularly technology platforms such as Meta’s platforms, Grindr, other same-sex dating apps, etc., to address some of the structural issues that are related to content moderation, that are related to biases, that facilitate and allow for these abuses to take place, especially when they are in the wrong hands. So especially when they are exploited for malicious purposes, such as government targeting of LGBTQI plus people in contexts where they already face criminalization, whether it be direct criminalization of same-sex relations or other laws, such as cybercrime legislation and morality and indecency, debauchery laws that are used to target LGBTQI plus people simply for expressing themselves online. In developing this work, I also want to acknowledge that we are building off of work that Article 19 has done for many years on this specific issue, as well as the framework that Afsaneh Rigot introduced, which is designing from the margins, specifically in technology and AI systems, being able to design technologies with the interests, impacts, and rights of the most marginalized in mind.
In some of the recommendations that we aim for, we really want to strengthen protections against digital targeting, while acknowledging that technology can always be used for malicious purposes. There are many ways that regulations and addressing biases in algorithms, for example, can help mitigate some of these abuses that take place offline as a result of online targeting. For example, AI systems often amplify historical biases, as my other co-panelists have said, embedded in the data that they are trained on, which leads to discriminatory outcomes for LGBTQI plus individuals. So to mitigate these biases, developers should conduct regular bias audits and build diverse representative data sets, and policymakers should also require independent testing of AI systems for biases, particularly when deployed in public-facing tools. Incentives for inclusive algorithm design that incorporate the input of LGBTQI plus advocates and civil society experts should be central in requiring and enhancing these systems to better protect the most vulnerable users. When it comes to content moderation systems, we saw and investigated that automated systems frequently misidentify LGBTQI plus content as harmful or inappropriate, especially in languages other than the English language, such as the many dialects of the Arabic language as we found in our reporting, which inadvertently silences advocacy around LGBTQI plus rights, especially in contexts where advocates, activists and community organizers resort to technology in order to empower and connect and build community around their rights, when public discourse and any offline organizing around gender and sexuality is either prohibited or could lead to criminalization and arbitrary harassment of these activists.
So particularly in content moderation, there must be a training of moderation algorithms on inclusive data sets that recognize the diversity of LGBTQI plus discourse, and incorporating human oversight, particularly for sensitive content, ensuring nuanced understanding of this context. And finally, establishing appeal mechanisms that allow for an effective remedy for users to challenge automated moderation decisions that unfairly remove LGBTQI plus content or otherwise leave content online that could be harmful and lead to the arbitrary arrest, harassment, torture and detention and other abuses of LGBTQI plus individuals that, as I said before, reverberate throughout their lives. Finally, I definitely think that this should happen with the privacy and data security of individuals in mind, and enforcing robust data protection regulations that allow for penalties for misuse of sensitive data, especially when it comes to the outing of individuals who are LGBTQI plus people on public platforms, online harassment, doxing and the resulting discrimination and violence that people face offline in their individual daily lives. As I said earlier, centering LGBTQI plus voices in the design of AI tools is extremely important. So engaging directly with organizers, activists, experts to understand the unique needs and challenges of LGBTQI plus individuals, and also for tech platforms to prioritize the creation of these inclusive digital spaces that actively counter discrimination and harassment that could also happen in tandem. Human rights impact assessments are extremely important. We already know that comprehensive evaluation of risks associated with content moderation, government surveillance and other issues is incredibly important in informing the changes and the upgrading of these tools to be able to safeguard the human rights of those most impacted by these technology-facilitated harms.
Establishing accountability platforms both for governments, for developers, and establishing clear grievance mechanisms for individuals and groups affected by AI-driven decisions is central to beginning to address these harms and the offline consequences of these harms across the globe. Thank you.


Alisson Peters: Thank you so much, Rasha. I think you gave us some really tangible recommendations on how to mitigate the harms from automated systems. You talked a bit about doing bias audits. I heard human rights impact assessments, providing access to grievance mechanisms, access to remedy. A number of the recommendations you raised are actually expectations set out in the UN Guiding Principles on Business and Human Rights. Earlier this year, the United States government led the full UN General Assembly in agreeing to a resolution on safe, secure, and trustworthy artificial intelligence, which encourages and calls for increased implementation of the UN Guiding Principles. Certainly, all governments of the UN have agreed with a number of the recommendations that you laid out in terms of expectations, both for governments and private industry, the private sector. I think that’s a good pivot over to you, Amy, as we’ve heard some really tangible recommendations that Rasha has laid out, building off of some of the risks that both Nicol and Nighat outlined. I’m hoping you can share a little bit of self-reflection from Microsoft’s perspective. What do you think that companies should be doing more of to mitigate the harms that have just been laid out by our speakers? Also, if there are particular steps that you feel like we as governments can and should be taking in terms of industry to help promote these steps, I think that would be quite helpful as well. So, over to you, Amy, and thank you for joining us.


Amy Colando: Thank you so much, and thank you so much for having me, both, oops, let me see whether the audio will work out. I’m just going to keep on talking, and we’ll hope it works out. So, thank you so much for inviting me, and I’m learning a lot already in terms of our engagement. These multi-stakeholder conversations are incredibly important to shine a light on our practices, to help us think of additional steps we can and should be taking to deliver on the promise of AI. So, let me start a little bit with sharing some examples from Microsoft, with the understanding that these are just simply examples, and the multi-stakeholder process is incredibly important in terms of getting that feedback and scrutiny in terms of areas we can do better. My team coordinates Microsoft’s corporate-level human rights due diligence, including human rights impact assessments, under our commitment to respecting human rights and providing remedy under the UN Guiding Principles. That process includes, and is very intentional about, interviewing marginalized populations, and allows us to understand the needs of diverse groups of our users, our supply chain, and our employees, so we can enhance our respect for the rights of marginalized populations. Turning to AI, we recognize there are particular areas of promise and potential, as well as particular areas that might exacerbate existing divides and harms. AI, at its foundation, as Nicol said, requires infrastructure and connectivity, and we’ve established our Global Data Center Community Pledge, which commits us to building and operating infrastructure that addresses societal challenges and creates benefits for communities. This forms the basis of how we engage with stakeholders during all steps of the data center process, including after it is up and operationalized, and is tailored to every location so it is respectful of local cultures and contexts and environmental needs.
For example, in Australia, this meant weekly meetings over an eight-month period to incorporate traditional indigenous practices into our design process. Through engagement, we introduce the project and gather insights that help inform our data center design, respecting our neighbors and the environmental resources around them. Next, for the development and deployment of AI, Microsoft’s Office of Responsible AI has partnered with the Stimson Center to bring a greater diversity of voices from the global majority to the conversation on responsible AI through our Global Perspectives Responsible AI Fellowship Program. The fellowship program convenes a multidisciplinary group of AI fellows from around the world, including Africa, Latin America, Asia, and Eastern Europe, across a series of facilitated activities. These activities, in which the fellows take part, are intended to foster a deeper understanding of the impact of AI in the global majority, exchange best practices on the responsible development and use of AI, and inform an approach to responsible AI. To combat the societal biases in AI systems, we employ a variety of approaches and are constantly learning from dialogues exactly like the one we’re having here. In 2018, we identified our six responsible AI principles, including fairness. Our policies are designed to clarify how fairness issues may arise and who may be harmed by them, and we take active steps to implement them into tactical controls and codes of conduct. For generative AI systems, we’ve leveraged the U.S. National Institute of Standards and Technology AI Risk Management Framework to develop tools and practices to map, measure, and manage bias issues, which involve the risk of generating stereotyping and demeaning outputs. In alignment with the goal of minimizing representational harms, we’ve made significant investment in red teaming to identify areas of harms across different demographic groups.
We use manual and automated measurements to understand the prevalence of stereotyping and demeaning outputs, and mitigations to flag and block those outputs. We look forward to working with governments, multilateral institutions and multi-stakeholder processes to continue to develop these frameworks, including through OECD due diligence conversations, to help build a consistent and aligned approach to improve the offering of A.I. and the potential to serve marginalized populations. For our own generative services, we’ve established a customer code of conduct which prohibits the use of Microsoft generative services for processing, generating, classifying or filtering content in ways that can inflict harm on individuals or society. We have developed and deployed a framework for customer use of sensitive A.I. features, including facial recognition and neural voice. Customers must register for these services, a process that includes defining proposed use cases, and may not use the service for other use cases. And we institute technical controls for abuse monitoring and detection. The classifier models that we’ve developed detect harmful text and/or images in user prompts (inputs) and completions (outputs). The abuse monitoring system also looks at usage patterns and employs algorithms and heuristics to detect and score indicators of potential abuse. Detected patterns consider, for example, the frequency and severity at which harmful content is detected. These prompts and completions are then flagged through content classification and, where identified as part of a potentially abusive pattern, are subject to additional review processes to help confirm the system’s analysis and inform actioning decisions. That’s conducted through human review and A.I. review. And then we have a feedback loop with customers, and that in turn includes improvements to our own systems. Finally, I’d like to close on a theme that has been identified by my fellow panelists in terms of the need for more representative data,
Finally, I’d like to close on a theme that has been identified by my fellow panelists: the need for more representative data, and to ensure that we are bringing forward marginalized populations to be able to see themselves in the promise of AI. Recently, we identified that our generative AI services could be improved in terms of representation of people with disabilities, a population of one billion around the world. We then partnered with Be My Eyes, a service and app that uses video to allow vision-impaired individuals to communicate with others on a crowdsourced platform and to visualize items that they’re looking at. This license to the Be My Eyes content allows us to ensure and advance the representation of people with disabilities in our service. In short, or not in short, because I’m closing now, I appreciate the opportunity to be here and to learn from others on the panel about how we can improve our processes and continue to work with government and civil society to advance AI. Thank you.


Alisson Peters: Thanks so much, Amy. I know I have a bunch more questions for you all. I mean, I think we just heard from you, Amy, about the amount of work that Microsoft is doing to develop effective safeguards, and what’s happening amongst industry in this space. And yet what we’ve heard from Nighat and from Nicole and Rasha is that there are real challenges in terms of developing effective safeguards. We know that, I think Nighat, you talked about the need to also ensure that we’re not concentrating a lot of these discussions in specific regions or specific countries or specific companies. And I think all of you across the board talked a bit about ensuring that we have more representative data, and recognizing that AI is exacerbating biases and discrimination that occur in our society, or, in the case of online harms, things like technology-facilitated gender-based violence, it’s exacerbating gender-based violence that exists in our societies already. And so, I do want to go to the audience for questions, but in sort of reflecting on the questions that we might get, it’s also helpful for us to hear a little bit more about your recommendations on how we overcome some of the challenges that we’re seeing in developing effective safeguards, if we have time. So, let me go over to the audience. I know we have folks online as well, so if I could just ask our IT friends to pull up any questions — please put them in the chat. And if there are any questions in the audience — I see folks are having problems hearing as well, so hopefully you can hear us — but if you have any questions, please put them in the chat, and any questions in the room.


Audience: Hi, thank you so much. Dr. Lee, I’m a big fan. Thank you all for taking the time. Amy just mentioned infrastructure and data centers, and I have a question: as the U.S. government is integrating AI more and more into public systems, what is the government doing to ensure that patterns of environmental racism, and issues with pollution and things that have affected marginalized communities in the U.S., will not be replicated with more and more AI use?


Nicol Turner Lee: I guess I can jump in. I think that’s a great question. I mean, the type of power generation that’s going to be required for data centers is definitely going to, in many respects, lead us into areas where there is either more land or less respect for the dignity of the land that some people have. So I think we have to — and I like the way Amy’s talked about it with Microsoft — come up with criteria and some values on where we decide to put those data centers. Because in the United States, the gigawatt-plus power that is required not just to keep these systems operating, but also to keep them cool, will have a disproportionate effect on communities that are either of color or indigenous, or communities in which, to use a term we had a long time ago in economic development, brownfields, there’s a possibility to go in and exploit the land for the purposes of the type of potential nuclear reactor projects that are going to be needed to power data centers. And so I urge — I’m not a government employee, but I urge — more conversation on this, right? Because it is an area that is becoming increasingly important as nuclear power becomes more distributed, and I hope that we can find the same type of reputational as well as harm reduction that we’ve spoken about today in terms of the models themselves when we deal with this physical infrastructure.


Alisson Peters: Amy, is there anything you wanted to add to that as well?


Amy Colando: No, Nicole, that was such an excellent comment. I think it’s about recognizing the continuing trends that we see; in other words, it’s not as if AI is a brand-new issue. There are many new aspects to it, but the trends in terms of power and discrimination continue. Again, like many aspects of AI, I’d say there are advantages and disadvantages. We are using AI, in fact, to develop new types of concrete that are less impactful on the environment. We have our own sustainability pledge. Other companies do as well, of course. We are continuing to uphold the pledge on carbon outputs that we made prior to the advances of AI in the last couple of years, and we’ll continue to uphold it as we move forward and look for carbon-free sources of power.


Alisson Peters: And I will just say, you know, from the U.S. government perspective, over the last four years under the Biden administration we have rolled out a number of new policies, executive orders, and memos from our White House that are really focused on ensuring that as our own government is purchasing artificial intelligence systems, is using automated systems for decision-making, is deploying AI in different ways, and is also providing AI to other governments, human rights is a core element of the risk assessment that we’re doing, and that it is a component in a lot of the new actions and regulations that we have rolled out. One of the things that I will note is that we are currently working in the Council of Europe, as a government, on a new convention on artificial intelligence, human rights, rule of law, and democracy. This framework convention is the globe’s first-ever legally binding treaty on artificial intelligence, and one of the key things that that process is doing is building out a risk assessment framework that has human rights at its core. So as governments, we have a framework that we can actually look to that helps us assess what the risks are, whether that be to environmental rights, to environmental defenders, or to other fundamental freedoms, freedom of expression and beyond, and that is core to everything that we’re working on. So this is a key piece of a lot of the work that we’re doing as it relates to safe, secure, and trustworthy AI in the US, and I know I speak for other governments that are here at IGF on that as well. And if we could just pull up the questions online, I just want to make sure that we’re not missing those.


Usama Kilji: Thank you very much for a very insightful discussion. I’m Usama Kilji. I’m with Bolo Bhi, which is a digital rights organization in Pakistan. My question is specifically around AI use in the military and in war. Around the world, we’ve seen increasing use of AI and facial recognition technologies in conflict and in war, but we’re seeing that a lot of these conversations leave out the military use of AI, which has acute human rights impacts. So I’m wondering, what can governments and companies do to have more conversations around military use, and what safeguards can they put in place? Because currently in conflicts, we’re seeing very bad consequences for civilian populations.


Alisson Peters: One more question.


Khaled Mansour: Thank you. My name is Khaled Mansour. I am with the Meta Oversight Board. It’s a follow-up to Osama’s question, because I bet you we will get the answer from you that you do all these checks, these human rights impact assessments. Our challenge here is transparency. So what is preventing you from publishing at least a portion of these reports, so people who are affected by AI technologies, especially clients of Microsoft or of the U.S., can see what is actually happening?


Alisson Peters: Thanks so much. So maybe I’ll turn it over to the panelists first. We have two questions. One is: how do we better address AI use in military settings, with the recognition that quite often, as we’re having conversations around safeguards and automated systems, we’re excluding the defense sector from those discussions? So what more could we be doing there? And then a second question on transparency reporting. And I know I saw another question back here; I’ll see if we get time. But maybe I’ll turn over to you online first, colleagues, if anyone wants to jump in on either of those questions.


Amy Colando: Sure, I can jump in a little bit. And this is an area on which I welcome feedback, because one of the cornerstones of how my team operates is our commitments to accountability and transparency in terms of how we uphold Microsoft’s responsibility to respect human rights. At the same time, of course, there are confidentiality commitments to our customers, and those commitments are the same regardless of the customer. Let me just put that out there; those are the cornerstones of how we operate. I mentioned briefly during my opening remarks that we designate certain of our AI services as potentially sensitive, including facial recognition and neural voice. For those services, we do require defined use cases, regardless of customer. And we review those defined use cases against our own responsible AI commitments, which are grounded in respect for human rights. We are endeavoring to increase transparency. So for example, during this last year, my team worked directly on updating some of our transparency around data center operations and the types of services we offer in data centers. But as you note, I’m sure there’s more we can do, as a company and as an industry, in terms of establishing an industry-accepted level of due diligence. I think that’s going to be enormously helpful, so that there’s a floor, and rather than a race to the bottom, it’s a race to the top in terms of how the private sector can work with governments and with civil society to ensure that we’re upholding universal human rights.


Nicol Turner Lee: And I’ll jump in with regards to that question on militarization. So one challenge that we have with AI is militarization as it relates to human rights and civil rights. But then we’ve also seen — and I like the way that the audience member talked about this — the integration of a variety of technologies embedded for the use of militarization. What do I mean by that? We’re seeing facial recognition embedded into other AI-enabled technologies that are being used for force. We’re seeing less accountability and transparency about that integration in many respects. And I think, you know, for the United States in particular, and other countries that have an ongoing AI race with China, these create certain vulnerabilities and national security concerns that we have to pay attention to. So that’s the first thing I want to say. The other thing I think is really important — and I love the way we’re talking about, particularly, the United States government’s integrated diplomacy on human rights and AI security — is that I once heard someone say, and I’ll share it because it was so profound, that in the absence of a data privacy or international data governance strategy, we are actually also contributing to a national security concern. So really, not handling data privacy, which Rasha also spoke about, lends itself to greater militarization, because it allows governments, particularly authoritarian governments, to obstruct the type of transparency and accountability that we need when it comes to these systems of weaponization. And so, you know, I think we’re probably going to see a shift to more national security conversations in the United States; the National Security Memorandum is an example of that. I just served under Secretary Mayorkas on the AI Safety Board, on critical infrastructure and AI protections.
And I think across the world, too. I was just in Barcelona at the Smart City Expo, and we’re seeing a lot of conversations about the embedded militarization of everyday AI tools and how they can be repurposed for that type of application. So I think it’s a conversation we definitely need to have, and the U.N. needs to continue it.


Alisson Peters: Thanks so much, Nicole. And I will just say, on the really important question of how we address the use of automated systems in our military apparatuses — and not just use, but also development and design — there are two things that we’re working on, at least in the U.S. government context. (Test, test, it may work. Apologies all, the usual IT issues.) First and foremost, I think we agree with you on the importance of these conversations. It’s why we started a political-military declaration to actually start a global conversation on use of AI in the military. And we would encourage governments that have not joined that declaration to do so, not just because of the importance of the declaration itself, but because of the importance of the policy conversations around it. And we’re happy to talk to any governments that are here at IGF and beyond. And then the second piece, which I think Nicole talked about, is our national security memorandum on AI use in our national security systems. We fully recognize that we can’t look at how to address everything from the human rights impacts of AI to how our government designs and deploys AI itself without also addressing this in our national security institutions. And so we issued a pretty groundbreaking national security memorandum, and, to the point on transparency, that’s all public. And that deals with not just deploying these tools, but all elements of our national security system. So if you have not already had a chance to take a look at that national security memorandum, I’m happy to also share it offline with you. But I think that is certainly an approach that we’re quite proud of as it relates to government transparency and accountability. Before I close this session, I wanted to invite our friends and colleagues from the government of the Netherlands.
We have Guus van Zwoll, who has been, I’ll say, a partner in crime in all of our efforts to address the human rights impacts of artificial intelligence. The Netherlands is going to be the chair of the Freedom Online Coalition Task Force on Artificial Intelligence and Human Rights next year. For those that are not familiar with the FOC, it’s a coalition of over 40 governments dedicated to addressing and ensuring the protection of human rights online, of which the Netherlands serves as the chair this year. So I’m hoping to turn it over to you, Guus, to close us out, and really just share some concrete ideas, particularly in reflection of some of the great questions that we’ve received, on how the FOC can work with other governments next year to address these challenges under your leadership and in partnership with my government and other governments around the room.


Guus Van Zwoll: Thank you so much, Alison. Can you hear me? Yeah, okay. So it will be the TFAIR next year, as we call it, the Task Force on AI and Human Rights. And human rights are, of course, a very different thing than humanitarian law, but I do want to briefly touch on the issue of military AI. In 2023, we started REAIM, which is the summit on Responsible Use of AI in the Military Domain. We did it as the Netherlands, and this year the conference was held by South Korea. And together with South Korea, we launched a First Committee resolution last month in the UN on exactly this issue, on what is responsible use of AI by the military. And that resolution had broad support: we had 165 countries in favor, only two against, and six countries abstained. So I think that that is a pretty good start, at least, for this conversation. I’m happy to discuss this later after the session as well. So, I made some quick notes on what our plans are for next year. Basically, we want to continue this discussion, this fabulous discussion — thank you so much, Alison and the US, for organizing this — because we see that the Freedom Online Coalition must be practically engaged on AI governance now, as critical global norms and standards are being shaped in the upcoming months. It will not take years; it will be literally months. And this is why we as the Netherlands want to co-lead the TFAIR next year. Our responsibility is to ensure that human rights remain central to these frameworks, protecting vulnerable populations and shaping inclusive and equitable AI systems. In 2020, the FOC already published a joint statement on AI and human rights. But 2020 is, in terms of AI, a couple of centuries ago, basically ancient history: it was before image generation, and it was also before large language models. So we know that the AI landscape has really changed.
And I think it’s our task next year to revise this successful 2020 FOC statement, emphasizing as well the disproportionate impact of AI on marginalized communities. The aim is that the updated statement will provide clear guidelines for embedding human rights principles into AI governance globally. How do we do this? Well, for example, by collaborating with Stanford’s AI MISUSE tracker, we will try to identify and highlight the disproportionate impact of AI on marginalized groups, such as through biased surveillance or exclusionary practices. This tool will ensure transparency and accountability while driving advocacy for equitable AI practices. We will also organize practical workshops and simulations to equip policymakers and diplomats with the tools and knowledge needed to address these AI challenges and opportunities for human rights, with a focus on marginalized communities and women. We will try to get leading voices, very much like the ones we have heard today, to educate us diplomats and policymakers on the challenges that they see as most daunting. Another focus would be on community-rooted AI research that prioritizes diversity and addresses AI impacts on marginalized groups. These contributions would offer valuable perspectives for fostering inclusive and rights-based AI governance. We will also spotlight examples of AI that can advance human rights, such as tools for bypassing censorship or supporting civic engagement. These studies will demonstrate how AI can be leveraged to empower marginalized groups while ensuring accountability and ethical development. A great example of this is the Signpost project by the International Rescue Committee. This initiative leverages AI to provide critical information to displaced populations via mobile apps and social media, delivering content in multiple languages.
The choices that we will make now and that we are discussing at the IGF this week will determine how AI helps create a fairer world or deepens the current inequalities. Through T-FAIR, we will aim to keep human rights at the center of the AI governance discussion, supporting marginalized communities and building a future based on fairness and accountability. As the upcoming chair for T-FAIR, but also as the chair of the Freedom Online Coalition this year, we are convinced that the FOC, the Freedom Online Coalition, provides a great networking platform to advance this goal. Thank you.


Alisson Peters: Thank you so much, Guus. The Netherlands has been such an incredible leader of the Freedom Online Coalition this year, and I know I speak for my government when I say we’re really eager and excited to work with you all and really build on today’s discussion next year on the task force. I know there’s never enough time for these conversations, especially when we have such incredible panelists, but I really do want to thank you all for joining on your weekends, wherever you’re located, and everyone for joining us in the room. I will say, in concluding this discussion, that at IGF throughout this week and beyond, as we look ahead to WSIS Plus 20 and other UN processes, we will have continued debates around the future of artificial intelligence. How do we leverage the opportunities from AI, the opportunities that Nicole talked about, and how do we also ensure that we are mitigating the risks? The rewards and the risks are a continued conversation that we’re having in AI policy debates. And I think what you have heard from each of our panelists is this question about who is actually setting the table for those debates. Who is at the table? Are they representative of the populations that we as governments are tasked with protecting? Are they representative of the communities in which industry is actually working and has access? And are they representative of the populations that will be most impacted by how these technologies are being designed, deployed, and used in their societies? We have a lot of conversations in the UN around ensuring that we’re advancing AI for good. But we know that we can only advance AI for good when the basic human rights of all people, no matter where they’re located, no matter their faith, no matter their gender, no matter their sexual orientation and beyond, are respected. So this is a really timely conversation for IGF. It’s a timely conversation given we just had Human Rights Day this last week.
I thank you all for coming, and I hope that we can continue these discussions throughout this week at IGF and beyond. So on behalf of the United States government and my Bureau of Democracy, Human Rights, and Labor, I want to thank you all, and thank you all online. And we look forward to being in touch through the Freedom Online Coalition to continue these important discussions. Thank you.


D

Desirée Cormier Smith

Speech speed

123 words per minute

Speech length

82 words

Speech time

39 seconds

AI can advance equity in healthcare, education, and economic opportunity

Explanation

AI has the potential to increase access to healthcare, education, and economic opportunities for those who need them most. This could help reduce inequalities and promote equity in these crucial areas.


Major Discussion Point

Risks and Opportunities of AI for Marginalized Populations


Agreed with

Nicol Turner Lee


Jessica Stern


Agreed on

AI can both advance equity and reinforce discrimination


N

Nicol Turner Lee

Speech speed

175 words per minute

Speech length

1906 words

Speech time

653 seconds

AI systems often reinforce historical discrimination against marginalized groups

Explanation

AI systems can exacerbate existing biases and inequalities in society. This is because they are often trained on historical data that reflects past discriminatory practices and societal inequities.


Evidence

Example of breast cancer diagnosis models not accurately representing black women due to lack of participation in clinical trials.


Major Discussion Point

Risks and Opportunities of AI for Marginalized Populations


Agreed with

Desirée Cormier Smith


Jessica Stern


Agreed on

AI can both advance equity and reinforce discrimination


Need to interrogate AI models for bias and whether automation is appropriate

Explanation

It is crucial to examine AI models for potential biases and discriminatory outcomes. Additionally, there should be careful consideration of whether certain decisions should be automated at all.


Major Discussion Point

Addressing Biases and Harms in AI Systems


Agreed with

Sara Minkara


Rasha Younes


Agreed on

Need for diverse representation in AI development


Differed with

Amy Colando


Differed on

Approach to addressing AI biases


D

Dr. Geeta Rao Gupta

Speech speed

113 words per minute

Speech length

53 words

Speech time

28 seconds

AI tools are enabling technology-facilitated gender-based violence

Explanation

AI technologies are being used to create and spread technology-facilitated gender-based violence (TFGBV). This form of harassment and abuse particularly targets women and children, threatening their ability to participate in online and offline spaces.


Major Discussion Point

Risks and Opportunities of AI for Marginalized Populations


J

Jessica Stern

Speech speed

124 words per minute

Speech length

112 words

Speech time

54 seconds

AI can help reimagine inclusive futures but biases in data must be addressed

Explanation

Generative AI has the potential to create more inclusive futures and allow for safe self-expression. However, it’s crucial to address biases in the data used to train AI systems to prevent reinforcing harmful stereotypes.


Major Discussion Point

Risks and Opportunities of AI for Marginalized Populations


Agreed with

Desirée Cormier Smith


Nicol Turner Lee


Agreed on

AI can both advance equity and reinforce discrimination


K

Kelly M. Fay Rodriguez

Speech speed

120 words per minute

Speech length

86 words

Speech time

43 seconds

Unions play a key role in safeguarding workers’ rights amid AI expansion

Explanation

Labor unions are essential in protecting workers’ rights as AI technologies rapidly expand. They advocate for fair employment practices, safe work environments, and equitable compensation in the context of AI implementation.


Major Discussion Point

Risks and Opportunities of AI for Marginalized Populations


S

Sara Minkara

Speech speed

123 words per minute

Speech length

79 words

Speech time

38 seconds

AI development often leaves out the disability community

Explanation

AI is frequently developed without considering the needs of the disability community. It’s crucial to ensure that AI is accessible for everyone, including people with disabilities, in all aspects of its development and implementation.


Major Discussion Point

Risks and Opportunities of AI for Marginalized Populations


Agreed with

Nicol Turner Lee


Rasha Younes


Agreed on

Need for diverse representation in AI development


N

Nighat Dad

Speech speed

130 words per minute

Speech length

474 words

Speech time

218 seconds

Companies must do more to address harms on their platforms

Explanation

Tech companies need to take more responsibility in addressing the harms caused by AI on their platforms. This includes issues like deep fake images and videos that disproportionately affect women and girls.


Evidence

Experience from META’s oversight board in framing how companies like Meta can use automation to deal with harms on their platforms.


Major Discussion Point

Addressing Biases and Harms in AI Systems


AI governance conversations are concentrated in Global North countries

Explanation

Discussions about AI governance are primarily taking place in developed countries. This leads to a lack of input from regions where these technologies are often deployed, particularly affecting marginalized groups in those areas.


Major Discussion Point

Ensuring Inclusive AI Governance


R

Rasha Younes

Speech speed

114 words per minute

Speech length

878 words

Speech time

458 seconds

Developers should conduct regular bias audits and build diverse datasets

Explanation

To mitigate biases in AI systems, developers need to regularly audit their systems for bias and ensure they are using diverse, representative datasets. This is particularly important for protecting LGBTQI+ individuals from discriminatory outcomes.


Evidence

Findings from a report on digital targeting of LGBTQI+ people across the Middle East and North Africa region.


Major Discussion Point

Addressing Biases and Harms in AI Systems


Agreed with

Nicol Turner Lee


Sara Minkara


Agreed on

Need for diverse representation in AI development


Need to strengthen protections against digital targeting of vulnerable groups

Explanation

There is a pressing need to enhance safeguards against the digital targeting of vulnerable populations, particularly LGBTQI+ individuals. This includes addressing biases in content moderation systems and ensuring privacy protections.


Evidence

Examples of government targeting of LGBTQI+ people using monitoring tools and cybercrime legislation.


Major Discussion Point

Ensuring Inclusive AI Governance


A

Amy Colando

Speech speed

153 words per minute

Speech length

1472 words

Speech time

576 seconds

Microsoft employs various approaches to combat societal biases in AI systems

Explanation

Microsoft has implemented multiple strategies to address societal biases in their AI systems. This includes policies on fairness, tools to map and manage bias issues, and investments in identifying areas of harm across different demographic groups.


Evidence

Examples include the Global Data Center Community Pledge, partnership with the Stimson Center for the Global Perspectives Responsible AI Fellowship Program, and development of a customer code of conduct for generative AI services.


Major Discussion Point

Addressing Biases and Harms in AI Systems


Differed with

Nicol Turner Lee


Differed on

Approach to addressing AI biases


A

Alisson Peters

Speech speed

157 words per minute

Speech length

3211 words

Speech time

1220 seconds

US government has policies to ensure human rights assessments in AI procurement

Explanation

The US government has implemented policies requiring human rights assessments when purchasing or deploying AI systems. This includes executive orders and memos aimed at safeguarding human rights in the development and use of AI.


Evidence

Mention of executive orders and memos introduced into the US government system over the last four years.


Major Discussion Point

Addressing Biases and Harms in AI Systems


Multistakeholder model needed to understand AI’s societal impacts

Explanation

A multistakeholder approach is crucial for comprehensively understanding how AI tools are impacting society. This model ensures that discussions include perspectives beyond just governments and the people representing them.


Major Discussion Point

Ensuring Inclusive AI Governance


US government working on frameworks for risk assessment with human rights at core

Explanation

The US government is developing risk assessment frameworks that prioritize human rights considerations in AI development and deployment. This includes work on international conventions and national security memoranda.


Evidence

Mention of the Council of Europe convention on AI, human rights, rule of law, and democracy, and the US national security memorandum on AI use in national security systems.


Major Discussion Point

Ensuring Inclusive AI Governance


K

Khaled Mansour

Speech speed

137 words per minute

Speech length

82 words

Speech time

35 seconds

Challenge is transparency in human rights impact assessments

Explanation

There is a lack of transparency in the human rights impact assessments conducted by companies and governments. Publishing at least portions of these reports would allow people affected by AI technologies to understand how their rights are being considered.


Major Discussion Point

Transparency and Accountability in AI Development


G

Guus Van Zwoll

Speech speed

152 words per minute

Speech length

710 words

Speech time

279 seconds

Freedom Online Coalition working to keep human rights central in AI governance

Explanation

The Freedom Online Coalition, through its Task Force on AI and Human Rights, is working to ensure that human rights remain at the center of AI governance frameworks. This includes updating previous statements to address the evolving AI landscape and its impact on marginalized communities.


Evidence

Plans for collaborating with Stanford’s AI MISUSE tracker, organizing workshops for policymakers, and spotlighting examples of AI that advance human rights.


Major Discussion Point

Ensuring Inclusive AI Governance


Agreements

Agreement Points

AI can both advance equity and reinforce discrimination

speakers

Desirée Cormier Smith


Nicol Turner Lee


Jessica Stern


arguments

AI can advance equity in healthcare, education, and economic opportunity


AI systems often reinforce historical discrimination against marginalized groups


AI can help reimagine inclusive futures but biases in data must be addressed


summary

The speakers agree that while AI has the potential to advance equity and create inclusive futures, it can also reinforce existing biases and discrimination if not properly addressed.


Need for diverse representation in AI development

speakers

Nicol Turner Lee


Sara Minkara


Rasha Younes


arguments

Need to interrogate AI models for bias and whether automation is appropriate


AI development often leaves out the disability community


Developers should conduct regular bias audits and build diverse datasets


summary

The speakers emphasize the importance of including diverse perspectives, particularly from marginalized communities, in the development and auditing of AI systems to mitigate biases and ensure inclusivity.


Similar Viewpoints

These speakers emphasize the need for more inclusive and diverse participation in AI governance discussions, particularly to address the needs and vulnerabilities of marginalized groups.

speakers

Nighat Dad


Rasha Younes


Alisson Peters


arguments

AI governance conversations are concentrated in Global North countries


Need to strengthen protections against digital targeting of vulnerable groups


Multistakeholder model needed to understand AI’s societal impacts


Unexpected Consensus

Importance of unions in AI governance

speakers

Kelly M. Fay Rodriguez


Alisson Peters


arguments

Unions play a key role in safeguarding workers’ rights amid AI expansion


Multistakeholder model needed to understand AI’s societal impacts


explanation

While most discussions focused on government and tech company roles, there was unexpected consensus on the importance of labor unions in shaping AI governance and protecting workers’ rights in the context of AI expansion.


Overall Assessment

Summary

The main areas of agreement include the dual nature of AI in both advancing equity and potentially reinforcing discrimination, the need for diverse representation in AI development and governance, and the importance of addressing biases and harms in AI systems.


Consensus level

There is a moderate to high level of consensus among the speakers on the key challenges and necessary steps for ensuring inclusive and responsible AI development and governance. This consensus suggests a growing recognition of the need for multistakeholder approaches and increased attention to the impacts of AI on marginalized communities, which could potentially influence future policy and industry practices in AI development and deployment.


Differences

Different Viewpoints

Approach to addressing AI biases

speakers

Nicol Turner Lee


Amy Colando


arguments

Need to interrogate AI models for bias and whether automation is appropriate


Microsoft employs various approaches to combat societal biases in AI systems


summary

While both speakers acknowledge the need to address biases in AI systems, they differ in their approaches. Turner Lee emphasizes the need for critical examination of AI models and questioning the appropriateness of automation, while Colando focuses on Microsoft’s implemented strategies and tools to manage bias issues.


Overall Assessment

summary

The main areas of disagreement revolve around the specific approaches to addressing AI biases and ensuring inclusive AI governance.


difference_level

The level of disagreement among the speakers appears to be relatively low. Most speakers agree on the fundamental issues surrounding AI’s impact on marginalized populations and the need for more inclusive governance. The differences mainly lie in the specific strategies and focus areas each speaker emphasizes. This level of disagreement suggests a general consensus on the importance of addressing AI’s risks for marginalized groups, but highlights the need for further discussion and collaboration on the most effective approaches to tackle these issues.


Partial Agreements

Partial Agreements

Both speakers agree on the need for more inclusive AI governance discussions. However, Dad emphasizes the lack of input from regions where these technologies are deployed, particularly affecting marginalized groups, while Peters focuses on the importance of a multistakeholder approach to comprehensively understand AI’s societal impacts.

speakers

Nighat Dad


Alisson Peters


arguments

AI governance conversations are concentrated in Global North countries


Multistakeholder model needed to understand AI’s societal impacts



Takeaways

Key Takeaways

AI offers opportunities to advance equity but also risks reinforcing discrimination against marginalized groups


Effective safeguards and inclusive governance are needed to mitigate AI harms to vulnerable populations


Multistakeholder collaboration is crucial to ensure AI development considers diverse perspectives


More transparency and accountability are needed in AI development, especially regarding human rights impacts


AI governance must center human rights and protect marginalized communities


Resolutions and Action Items

The Freedom Online Coalition will update its 2020 statement on AI and human rights in the coming year


The FOC will organize workshops to educate policymakers on AI challenges for human rights


The FOC will spotlight examples of AI that can advance human rights for marginalized groups


Unresolved Issues

How to effectively include marginalized voices in AI governance discussions


Balancing transparency in human rights impact assessments with customer confidentiality


Addressing AI use in military settings and its potential humanitarian impacts


Closing the digital divide to ensure equitable access to AI benefits


Suggested Compromises

Developing industry-accepted standards for due diligence and transparency in AI development


Creating inclusive digital spaces that actively counter discrimination while protecting privacy


Thought Provoking Comments

AI offers incredible potential to advance equity by increasing access to health care, education, and economic opportunity for those who need them the most. However, too often marginalized populations bear the worst harms of AI.

speaker

Desirée Cormier Smith


reason

This comment succinctly captures the core tension at the heart of AI’s impact on marginalized groups – its potential for both benefit and harm.


impact

It set the stage for the entire discussion by framing the key issues around AI and marginalized populations that subsequent speakers explored in more depth.


Computers might be binary, but people are not. Generative AI can help us reimagine inclusive futures and express ourselves safely and authentically. However, we need to be mindful about biases in the data that AI tools and systems are built on and how they translate into individuals’ lives.

speaker

Jessica Stern


reason

This comment insightfully highlights how AI systems can reinforce or challenge existing social constructs around identity, particularly for LGBTQI+ individuals.


impact

It broadened the conversation to consider AI’s impact on gender and sexual identity expression, which was further explored in later comments about LGBTQI+ rights.


We’re creating AI systems, and in many respects, we haven’t closed the accessibility divide. That creates its own set of challenges as to who will be able to benefit.

speaker

Nicol Turner Lee


reason

This comment draws attention to how existing digital divides can be exacerbated by AI, potentially widening inequality.


impact

It shifted the discussion to consider not just the design of AI systems, but also who has access to them, leading to further exploration of global inequities in AI development and deployment.


These conversations are very much concentrated in some Global North countries. And in the past, we have seen how technology that is developed, designed, and built elsewhere is mostly dumped in our regions, and we have no say in how these technologies are designed for the marginalized groups in our regions.

speaker

Nighat Dad


reason

This comment highlights the global power imbalances in AI development and governance, raising important questions about representation and self-determination.


impact

It prompted further discussion about the need for more inclusive, global approaches to AI governance and development.


To mitigate these biases, developers should conduct regular bias audits and build diverse, representative data sets, and policymakers should also require independent testing of AI systems for biases, particularly when deployed in public-facing tools.

speaker

Rasha Younes


reason

This comment offers concrete, actionable steps to address AI bias, moving the conversation from problem identification to potential solutions.


impact

It shifted the discussion towards more practical considerations of how to implement safeguards and protections in AI development and deployment.


Overall Assessment

These key comments shaped the discussion by progressively deepening the analysis of AI’s impact on marginalized populations. The conversation moved from identifying broad tensions and challenges to exploring specific impacts on different groups (e.g. LGBTQI+, Global South populations) and finally to proposing concrete actions and governance approaches. This progression allowed for a comprehensive exploration of the complex interplay between AI, human rights, and marginalized communities, while also highlighting the urgent need for more inclusive and equitable approaches to AI development and governance.


Follow-up Questions

How can we ensure patterns of environmental racism and pollution affecting marginalized communities in the U.S. are not replicated with increased AI use?

speaker

Audience member (Dr. Lee)


explanation

This question addresses the potential environmental impacts of AI infrastructure on marginalized communities, which is an important consideration as AI becomes more integrated into public systems.


What can governments and companies do to have more conversations around military use of AI and what safeguards can they put in place?

speaker

Usama Kilji


explanation

This question highlights the need for more discussion and safeguards around AI use in military and conflict situations, which can have severe human rights impacts on civilian populations.


What is preventing companies and governments from publishing at least a portion of their human rights impact assessment reports related to AI technologies?

speaker

Khaled Mansour


explanation

This question addresses the need for greater transparency in how companies and governments assess the human rights impacts of AI technologies, which is crucial for accountability and public trust.


How can we revise and update the 2020 FOC statement on AI and human rights to reflect current AI developments and emphasize the disproportionate impact on marginalized communities?

speaker

Guus Van Zwoll


explanation

This area for further research is important to ensure that human rights principles are embedded in AI governance globally, reflecting the rapid changes in AI technology since 2020.


How can we identify and highlight the disproportionate impact of AI on marginalized groups through tools like Stanford’s AI MISUSE tracker?

speaker

Guus Van Zwoll


explanation

This research area is crucial for ensuring transparency, accountability, and advocacy for equitable AI practices that don’t disproportionately harm marginalized communities.


How can we conduct community-rooted AI research that prioritizes diversity and addresses AI impacts on marginalized groups?

speaker

Guus Van Zwoll


explanation

This research direction is important for fostering inclusive and rights-based AI governance by incorporating diverse perspectives and experiences.


How can AI be leveraged to empower marginalized groups while ensuring accountability and ethical development?

speaker

Guus Van Zwoll


explanation

This area of research focuses on identifying and developing AI applications that can advance human rights and support marginalized communities, balancing the potential benefits with ethical considerations.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.