WS #194 The Internet Governance Landscape in The Arab World

Session at a Glance

Summary

This discussion focused on internet governance in the Arab region, exploring challenges, priorities, and future directions. Panelists emphasized the importance of multi-stakeholder engagement in shaping internet policies and governance frameworks. They highlighted the need for increased participation from civil society and the private sector, noting that funding and awareness were key barriers to involvement.

The conversation touched on several priorities for the region, including enhancing connectivity, addressing the digital divide, promoting digital literacy, and strengthening cybersecurity measures. Participants stressed the importance of aligning regional priorities with the global internet governance agenda.

The role of national and regional Internet Governance Forums (IGFs) was a recurring theme, with speakers noting their potential to foster dialogue and shape policies. The upcoming Arab IGF in Amman was highlighted as a crucial opportunity for regional stakeholders to contribute to the global WSIS+20 review process.

Panelists discussed the challenges of combating misinformation and disinformation, acknowledging the complexities introduced by emerging technologies like AI. They emphasized the need for balanced regulatory frameworks and international cooperation to address these issues.

The future of the IGF itself was debated, with participants calling for a more empowered and sustainable model. Suggestions included improving linkages between IGF outcomes and decision-making processes, and clarifying the distinction between digital governance and internet governance.

Overall, the discussion underscored the importance of continued collaboration, capacity building, and active participation from all stakeholders to shape the future of internet governance in the Arab region and beyond.

Key points

Major discussion points:

– The need for greater multi-stakeholder engagement and collaboration in Internet governance in the Arab region

– Challenges and opportunities for civil society and private sector participation

– The future of the Internet Governance Forum (IGF) and its evolution

– Priorities for Internet governance in the Arab region, including youth engagement and capacity building

– The upcoming WSIS+20 review process and Arab participation

The overall purpose of the discussion was to examine the state of Internet governance in the Arab region, identify key challenges and priorities, and explore ways to strengthen multi-stakeholder participation and regional engagement in global Internet governance processes.

The tone of the discussion was largely constructive and forward-looking. Participants spoke candidly about challenges but focused on opportunities for progress. There was a sense of optimism about the future of Internet governance in the region, particularly regarding increased collaboration between national and regional IGF initiatives. The tone became more urgent towards the end when discussing the need for Arab voices to be heard in upcoming global processes like the WSIS+20 review.

Speakers

– Qusai AlShatty: Moderator

– Christine Arida: Government, African Group

– Ayman El-Sherbiny: UN ESCWA

– Zeina Bouharb: Ogero, Lebanon

– Charles Shaban: International Trademark Association

– Ahmed N. Tantawy: NTRA Egypt

Additional speakers:

– Waleed Alfuraih: Saudi IGF

– Nana Wachuku: Advisory board member for Digital Democracy Initiative

– Maisa Amer: PhD researcher at Leipzig Berlin, Germany

– Nermine Saadani: Regional vice president at the Internet Society, Arab region

– Tijani bin Jum’ah: Former member of the Civil Society Bureau at WSIS

– Desire Evans: Technical community and RIPE region

– Shafiq (no surname provided)

Full session report

Internet Governance in the Arab Region: Challenges, Priorities, and Future Directions

This discussion, held during a global Internet Governance Forum (IGF) event, focused on the state of internet governance in the Arab region, exploring key challenges, priorities, and future directions. The conversation brought together a diverse panel of experts representing various stakeholders, including government bodies, international organisations, civil society, and the private sector.

Multi-stakeholder Engagement and Regional Initiatives

A central theme throughout the discussion was the critical importance of multi-stakeholder engagement in shaping internet policies and governance frameworks. Christine Arida emphasised the need for dialogue and collaboration among all stakeholders, while Ayman El-Sherbiny highlighted the importance of promoting regional cooperation through initiatives like the Arab Internet Governance Forum (IGF) and the Arab Digital Agenda.

There was broad agreement on the need for greater involvement from all sectors, with a particular focus on increasing participation from civil society and the private sector. Charles Shaban, representing the International Trademark Association, stressed the importance of strengthening civil society and private sector participation, noting the need for sustainable funding. Ahmed N. Tantawy emphasised the need to focus on youth engagement and capacity building.

Priorities and Challenges

The discussion touched on several priorities and challenges for internet governance in the Arab region:

1. Enhancing connectivity and digital infrastructure: Zeina Bouharb of Ogero, Lebanon, stressed the importance of improving digital infrastructure to ensure equitable access.

2. Addressing the digital divide and promoting digital literacy: Ahmed N. Tantawy and Zeina Bouharb highlighted these as key areas of focus.

3. Strengthening cybersecurity measures and data protection: Zeina Bouharb raised concerns about these issues.

4. Combating misinformation and disinformation: Maisa Amer, a PhD researcher, discussed the challenges of tackling these issues, particularly in light of emerging technologies like AI.

5. Aligning regional priorities with the global internet governance agenda: Ahmed N. Tantawy stressed the importance of this alignment to ensure the Arab region’s voice is heard in global discussions.

6. Balancing intergovernmental processes with multi-stakeholder engagement: Ayman El-Sherbiny provided a nuanced perspective on this challenge.

7. Clarifying concepts: Christine Arida noted the need to define the difference between digital governance and internet governance to focus future discussions and policy work.

The Role of Internet Governance Forums

The role of national and regional Internet Governance Forums (IGFs) was a recurring theme. Speakers noted the potential of these forums to foster dialogue and shape policies. Ayman El-Sherbiny called for revitalising the Arab IGF and creating an “Arab IGF 2.0”, while Desire Evans emphasised the importance of linking national, regional, and global IGF initiatives. Evans also suggested including NRI representatives on the IGF leadership panel.

The upcoming Arab IGF in Amman, scheduled for February 23-27, 2025, was highlighted as a crucial opportunity for regional stakeholders to contribute to the global WSIS+20 review process. Christine Arida and Nermine Saadani both stressed the importance of connecting IGF discussions to actual decision-making processes. Saadani also mentioned ongoing research on intermediary liability in the region.

WSIS+20 Review and Arab Participation

Tijani bin Jum’ah and other speakers emphasized the importance of the WSIS+20 review process and the need for strong Arab participation. This global review presents an opportunity for the Arab region to contribute to shaping the future of internet governance.

Future Directions and Call to Action

Looking to the future, participants called for a more empowered and sustainable model for the IGF. Suggestions included improving linkages between IGF outcomes and decision-making processes, and creating a network of regional internet governance initiatives to enhance collaboration and focus on regional challenges. Dr. Waleed highlighted the importance of educating society about the internet ecosystem.

The discussion concluded with a strong call to action for participants to engage in open consultations and upcoming meetings to contribute to these important conversations. Ayman El-Sherbiny specifically mentioned an upcoming consultation meeting, urging stakeholders to participate and provide input.

Conclusion

The discussion underscored the importance of continued collaboration, capacity building, and active participation from all stakeholders to shape the future of internet governance in the Arab region and beyond. As the region prepares for upcoming global processes like the WSIS+20 review and the Arab IGF in Amman, there is a clear need for coordinated input and a strong Arab voice in shaping the future of internet governance. Stakeholders are encouraged to engage in open consultations, participate in upcoming meetings, and contribute to the ongoing dialogue on internet governance in the Arab region.

Session Transcript

Christine Arida: wants to open up to dialogue with other stakeholders within the classical intergovernmental processes, so specifically within the Arab League, to shape the future of policies related to digital governance as we approach the WSIS plus 20, and also to talk about the future of the IGF in that perspective. So I think governments do play a role, and I think there is a need for enhancing and empowering the role and the dialogue that governments have by injecting further multi-stakeholder processes into that. And I see here that national and regional IGFs that are in the Arab region, you mentioned Tunisia, Lebanon, now we hear about Saudi Arabia, but also the Arab IGF, the North African IGF, they can play a great role to bring those processes and paths together. I hope this helps. Thank you, Qusai.

Qusai AlShatty: Thank you, dear Christine, for your intervention and very interesting points. I'll shift to my dear colleague Ayman El-Sherbiny from UN ESCWA, and I would like to address the question of how intergovernmental organisations such as UN ESCWA can facilitate more substantial synergies among the Arab countries and with the global internet governance ecosystem, including, if you can, shedding light on digital cooperation and the Panel on Digital Cooperation.

Ayman El-Sherbiny: Hello, thank you, do you hear me? Yes, we hear you. Okay, thank you very much, my dear colleague.

Qusai AlShatty: No, we’re not hearing you clearly.

Ayman El-Sherbiny: There you go. Thank you very much, Qusai, for organizing this session, and I see very dear faces and partners that we haven't met for some time, and that is not by coincidence. Actually, we in the UN Economic and Social Commission for Western Asia have collaborated with our dear colleagues in the Saudi government, the Digital Government Authority and others, for more than a year during the hosting arrangements and preparations for this very dear and very honorable event that we do in the region for the second time in the last 15 years. I recall Sharm el-Sheikh in 2009, and here we are in Riyadh, and we are proud to be here at the right time and in the right place, like 10 months or less from the WSIS plus 20 review. So what is the role of the United Nations in the internet governance arena and in the region? First of all, as you all know, the WSIS itself, which spun off the global IGF, was a platform under the auspices of the United Nations Secretary-General and General Assembly during 2003-2005. The inception of the definition of internet governance was part of the Working Group on Internet Governance during 2003-2005, which gave birth in the Tunis Agenda to the IGF. It also gave birth to other processes, like enhanced cooperation and others. During this period, in the beginning, a few IGFs spun off in different parts of the world. So in 2009 in Sharm el-Sheikh, we in ESCWA came up with the idea of having an Arab Internet Governance Forum, similar to other regions, and we took a couple of years for internal consultation with the League of Arab States.

Qusai AlShatty: Thank you.

Ayman El-Sherbiny: From 2009 in Sharm el-Sheikh till 2011, we got all the arrangements in place in partnership between us, the League of Arab States, and member countries, and I here commend the role of the Egyptian government, the National Telecom Regulatory Authority, in a very important event which Qusai, Shafiq, and many others shared, Christine also, called the Habtour event, in January 2012. We put the first building block for the Arab Internet Governance Forum, which is under the auspices of ESCWA and LAS, and has its MAG also, like the global model, as well as hosting that changes from one place to the other. I also recall the role of Kuwait as the first host of the Arab Internet Governance Forum and the role played by KITS and Qusai at that time. So we made a model that mimics the global model, and this is what the UN brought. We created the Arab IGF, created the model, built the partnerships and championships, and we have hosted six annual Arab Internet Governance Forums since then. And it's a good chance to ask my colleague Rita to distribute some information about the Arab IGF7 that will take place in Amman in February 2025, which is like six to eight weeks from now. We would like all of you here today, in this session on the Arab IGF at the global IGF, to do two things. First of all, mark your calendar for the Arab IGF7 in Amman, in the region, and spend the week over there, not only for the Arab IGF, but also for the Arab Voices, which is the Arab Forum for the WSIS and 2030 Agenda, and furthermore for the first conference on the Arab Digital Agenda, which we with all member states have developed in the region, for the region, adopted by heads of states, and which has 35 goals for digital development in the region until 2033. So we'll have three different communities coming together in the same week. You will have the brochure now from Rita on the DCDF, and we'd like you to register. But what is the second thing we want from you?
We want you to be here tomorrow in workshop room 11, at 10.30 if I'm not mistaken; we will double-check the time. And tomorrow we'll mention what Shafiq alluded to. We will hold a consultation on the WSIS plus 20 review, which runs over the next 10 months. We will also connect this to the internet governance process, as well as to the GDC, the Global Digital Compact process. It's a round of consultation that we are doing tomorrow for 90 minutes, and your presence tomorrow is no less important than your presence today. So please be there. We will touch more, deep dive into the GDC, its relationship with the Arab Digital Agenda, the Arab IGF, the IGF, and the Saudi IGF. And I'm proud that we have, as Shafiq mentioned, the North African IGF, and we have Charles here also; he is the head of the MAG of the Arab IGF. And now we have a colleague partner from the Saudi IGF, and we have the Lebanese IGF. So tomorrow it will be very important that we come together to shape our strengths and presence, not only for the Amman IGF7, but hopefully we are also going to make an announcement related to the next edition of the Arab IGF, Arab IGF8, very soon, inshallah. Thank you so much.

Qusai AlShatty: Thank you. Thank you, dear Ayman. I would like also to welcome our second online panelist, dear Zeina Bouharb from Ogero, Lebanon. And she is with us online.

Zeina Bouharb: Yes, good morning, everyone.

Qusai AlShatty: Good morning, Zeina. And I would like to direct to you the question: what policies and recommendations would you suggest in the Arab region to ensure a more robust and sustainable internet governance framework? I'll pass the floor to you.

Zeina Bouharb: Thank you, Qusai. First, let me join my colleagues in thanking the government of Saudi Arabia for hosting this important event, and thank you and Shafiq for organizing this workshop. Well, if I want to answer your question about recommendations to ensure a more robust and sustainable internet governance framework in the Arab region, the governments maybe should first address technical, regulatory, and social challenges. And the picture differs from country to country, because there is also this difference in technological advancement between the Arab countries. So first, my recommendation would be to enhance connectivity and to invest in the development of reliable digital infrastructure, because sustainable internet governance requires strong and accessible digital infrastructure before everything else to enable equitable participation across the region. My second recommendation would be regarding cybersecurity and data protection. A more secure and trusted internet is fundamental for internet governance. There is a need to develop comprehensive cybersecurity laws and data protection frameworks that are aligned with international best practices, while at the same time accounting for local needs and local contexts. And one very important recommendation would be to promote the multi-stakeholder governance approach, where all the relevant stakeholders, governments, business, civil society, the technical community, and academia, can play an active role in policy making, by establishing national frameworks for dialogue between stakeholders, such as the national IGFs and the regional IGF, and also by encouraging participation in the global internet governance institutions like ICANN and the IGF.
So, if you want, we can list a lot of recommendations, but there is also a need for digital literacy programs within the Arab countries, to integrate this literacy into educational curricula. And there is something very important, which is to foster regional cooperation. I think Mr. Ayman already tackled this issue and will go deeper into this topic. So these would be my recommendations. Thank you, Qusai.

Qusai AlShatty: Thank you, Zeina, for your valuable input. I will shift the floor to my dear colleague, Charles. And actually, I will ask you a compounded question, because you represent a civil society organization, but your membership base is also private sector. So my question will be in that compounded form. When we look at the internet governance landscape, civil society is the stakeholder that participates most, while the private sector is among the least. In that context, what role does civil society play, and what role would the private sector play, in advancing internet governance in the Arab region? And how can this role be further enhanced?

Charles Shaban: Thank you very much, my dear Qusai. And thank you for the invite. Thanks to the Saudi government for having this wonderful event here. In fact, as you mentioned, it is good to mention that, because now I represent the International Trademark Association, which is officially a nonprofit organization, but more than 95% of our members are private sector. So this is my history with the Arab IGF: I was in the private sector before even joining INTA. To answer your question, Qusai, I don't think I will talk a lot about the role, because everybody knows the important role of the multi-stakeholders and what you specifically were talking about all the time. So we need the opinion and the views of these two important sectors in everything we do here, of course, and in the Arab IGF and the local IGFs. But maybe on how to enhance it better, I can think about it. I think we need more sustainability financially, especially for civil society, to be able to continue participation. I know that many organizations usually don't have the budgets to attend such forums and participate actively. Even with the online option now it's easier, but still, sometimes the in-person presence helps a lot in this. Going to the private sector, I think we need to show them more the importance of being part of the policy making, because I know this is mainly a non-binding forum. At the same time, it's important not to wait to see what the others want. We need the private sector, which mainly runs the big economy around the world. We heard in the morning Minister Sowaha excellently put it in a wonderful presentation that we are talking about 20.6 trillion. So we know that the drivers are mainly the private sector. We need them from the beginning to know exactly what they expect from the Internet, and how to be part of it, as we talked.
So I think this, especially in our region, needs to be clearer for the private sector, because this is exactly what we face at the Arab IGF. We always had a lack of private sector. So this is exactly what we need to tell them: not only to concentrate on your work. You need to work, of course, but at the same time, be part of this, because this will affect your work in the future. How to pass this message, I'm not sure. So my recommendation would be more awareness, I think, of the importance of this for all the stakeholders, especially the private sector. Thank you. And if you allow me, Qusai, you know I have a conflicting meeting. If you allow me to leave quietly so I don't disturb anyone. I'll see you tomorrow in the other workshop if needed. Looking forward. Thank you.

Qusai AlShatty: Thank you, Charles, for being with us.

Qusai AlShatty: I will pass the floor to my dear colleague Ahmed from the NTRA Egypt, and I will ask him a question: what are, in your view, the top priorities for the Arab region in internet governance, and how can governments align these priorities with the global internet governance agenda?

Ahmed N. Tantawy: Thank you, Qusai; thank you, Shafiq, for organising this workshop, and thank you to KSA for hosting the global IGF and giving us the opportunity to be here this year. In my view, about the priorities in our region, I think there is a common need in our region called engagement. I think we should support multi-stakeholder engagement in all our processes in the Arab region. Actually, in the past years we have had many initiatives, as you mentioned in the presentation. We now have national IGFs, we have regional IGFs, and we have youth IGFs, by the way, in Lithuania. I think creating a kind of network between all these initiatives in our region is something that will help us to focus on our challenges in the region. I got this idea after the webinar organised one week ago, which was co-hosted between the Arab IGF, the North African IGF, and the Lebanese IGF. It was a remarkable webinar, and it gave me this idea about a network that manages and compiles the initiatives in our region, and through this networking we can share our thoughts. If we are talking about how to effectively reinforce our voices in the global tracks, I think this might help our priorities. Referring to the priorities, I think youth engagement and capacity building is one of mine. I see a lot of youth; they have the knowledge, they have the capability, and I think capacity building is needed for them to be engaged and to be leaders in the future. This is one of my priorities, I think, and of course the digital divide inside the same countries, and between men and women; all this comes as a priority in our region. Thank you.

Qusai AlShatty: If anyone would like to ask a question?

Nana Wachuku: Thank you very much. My name is Nana Wachuku, advisory board member for Digital Democracy Initiative. It’s a program that focuses on supporting initiatives that help provide digital access, particularly in the global south. My question, there’s a lot of conversation around multi-stakeholder engagement. and there’s also the conversation about funding for civil society in participating in these multi-stakeholder platforms. But I’m also curious because there’s also the part about the government not being a very active participant in these multi-stakeholder engagement platforms. And I was wondering, beyond funding, are there other reasons that prevent civil society organizations from participating? And for the government, how democratized are these engagements? Is there also a reason that keeps the government away from engaging actively with the different stakeholders in these platforms? That’s one. My second question is, what are the top two priorities for civil society in this region? Specifically civil society, what are the top two priorities for civil society in this region? Sorry, I just want to get the questions out. And then the last one is, considering we’ve had conversations around private sector, how involved or how open is the policy making process enough to invite private sector for them to understand that we can be willing participants in the process? How involved are they in some of these policies that are made in the region? Thank you.

Waleed Alfuraih: Let me start from your last question, the third question, if I recall it correctly. So, correct me if I'm wrong, please. First of all, what does it take to engage more civil society here in Saudi? I think more awareness will bring more focus to the internet community as a whole, especially educating society about the internet ecosystem as a whole, and the benefit the end user gets out of engaging with us. So we will engage their voice, take their voice all the way to the decision makers, and this is where we act as a society internally, to shape up the internet ecosystem as a whole. So they will be part of developing the whole ecosystem. So I think education is a focal point that we need, and we are doing it already. We are doing a lot of workshops and seminars, online and face-to-face, in Saudi Arabia, and we are gaining momentum every month. We are seeing growth, we are seeing reactions. We launched several initiatives where we educate the end user about what we can do for them, especially hearing their needs, requirements, and so on. This is on one front. I think the second question was about involvement in policy making, how to ensure that others can be involved in the development of policy making. Absolutely, and this is one of the major things we are doing right now. As part of the education, we are telling the end user: in order to be part of the effective development of the internet ecosystem, you have to be part of us. We have to hear your voice; you have to engage with us in seminars, workshops, discussion groups, and so on. And this is where we take their opinion very actively, passing their needs to the decision makers, whether on the policy-making or the regulator side. And this is on internet, on cybersecurity, and on other domains, not only internet usage.

Qusai AlShatty: Thank you. Thank you, Dr. Waleed. For Christine with us online: kindly, can you reply to the first question, on what stops civil society from participating, if you see another reason other than the issue of funding? So, dear Christine, if you are with us, can you reply to this?

Christine Arida: Thank you, Qusai. Can you hear me now? Yes. Okay. So, I think for civil society, likewise for all the other stakeholders, participating in a venue and engaging is all related to how much is at stake and how influential they can be in shaping the discussion, in participating, in really achieving something from that participation. So, of course, funding is a very important topic for civil society. But also, if I talk about the region, and I think this would apply to both the private sector and civil society in the region, a lot relates to the maturity of how they can be influential in policy making or policy shaping in the region. If I take an example from the private sector: if you look at the global north, you would always find in private sector companies a public policy division, whereas if you come to our region, this is sometimes the least of their worries, and it is only starting and picking up. Similarly with civil society from the region. So, bottom line, it is related to how much they can influence what is happening and how they can shape policies around what is at stake for them. Thank you.

Qusai AlShatty: Let me pass the floor also to our colleague Ayman. The second question is how to integrate all stakeholders in the policy development process. How do you think that engagement can take place?

Ayman El-Sherbiny: So this is a very important point. I think the two worlds of intergovernmentalism and multi-stakeholderism can live together very smoothly. They are two sides of a coin; no side can work without the other. So what is the complementarity here? The complementarity is the dialogue in the multi-stakeholder arenas and gatherings, such as forums or conferences and so on, which shape the ideas, the positions, and the messages that should be taken further to the policy making. So it is policy support or policy shaping. Policy making naturally happens in government circles for most of the issues, not all of the issues. So here comes the role of governments, and the way we put them together with the multi-stakeholders, on an equal footing and everything, is in the forums. Then, when we go to the League of Arab States or UN closed meetings, the General Assembly, or whatever, we take all these messages, we make the decisions and resolutions, and we go further to adopt what is in the interest of the global public. It is a global public good, in the globe or in the region. So it is public policymaking that has inputs from the citizens, civil society, business, and so on. The weakness here in the region is mainly in civil society, as Christine said, in alignment with what is happening, and also in the business sector. They see that most of the decisions pertaining to a global good are made elsewhere, outside of the region. But the reality is that through this regional forum, we contribute to the global public policymaking, and the global public policymaking will also impact the governmental bodies in the region. So it is a continued loop. The last thing I want to bring to this very important question: a forum with dialogue only, without contact points with the decision making, will also not help. As I said, they are two sides of the coin.
So what is needed here also is an agenda for the region that everyone agrees to. So there are objectives, a compass for everyone, to try to achieve certain targets. And this exists in Europe; this is now in the global arena, in the Global Digital Compact, and in Europe, the European Digital Agenda. So we have an Arab Digital Agenda with the same characteristics, and this has goals and targets agreed upon by everyone. Multi-stakeholder engagement in the Arab Digital Agenda is a must, as well as in these kinds of dialogue forums. So there is this complementarity between having a goal or a compass and having two places, for policy shaping and policymaking; the three of them work together very smoothly. And this is what we have in the region, and we guarantee the commitment of the heads of member states to what the citizens and the youth aspire to. The last message, which I will say more about tomorrow in the discussion of the Global Digital Compact: in the Summit of the Future, we have five different facets. Some of them are related to political dimensions, at the global level of the General Assembly even; some are related to finance, and some to the digital and the youth. So we are at the center of the Summit of the Future. This digital platform is really the place where we are shaping not only the digital future, but the real future of the planet, the policy, the political paradigm, and the next generation.

Qusai AlShatty: Thank you. Thank you. I will pass the floor to the lady in the back, please. You can introduce yourself and then ask your question.

Maisa Amer: Thank you all very much for the interesting discussion. First, I'm Maisa Amer, a PhD researcher at Leipzig Berlin, Germany. I would like to ask all the panelists: how do you see the regulatory framework to tackle disinformation in the Arab world? For example, if disinformation and misinformation are being disseminated on a digital platform, what is the primary approach states take at that point? Do you communicate with the platform directly, to negotiate or to work out how to tackle this phenomenon, or do you intervene directly? I'm just curious to know what the regulatory framework for tackling disinformation and misinformation is. Thank you.

Qusai AlShatty: I would let Ahmed go first, maybe, to address the…

Ahmed N. Tantawy: Thank you so much. I think all Arab countries have a legal framework. I think all regulatory authorities are working on developing these frameworks every three or four years, following up on any updates, as you mentioned, regarding misinformation and disinformation. I think there are also communication channels between the governments, the platforms, and the private sector within the region or within the country. However, many of the platforms are based outside the country, so the legal framework may not apply to them. Nevertheless, the channels are still running, with communication ongoing and kept up to date.

Qusai AlShatty: Thank you. Any further elaboration from the panelists?

AUDIENCE: Thank you very much for the question. It's really very important. And if I may add, it gets more complicated if you add AI as well. The misinformation enabled by AI is going to be massive, and I would urge you to read about the global initiative on how AI has been framed to make sure that this kind of misinformation is tackled. The framework talks about what the action should look like, and it is evolving. Everybody would like to make sure the internet is a safer, more trusted place, and without global cooperation, I don't think we'll reach that point. So thank you very much; I think it's a global phenomenon, and it is being tackled, I hope, very efficiently. Thank you so much. Adding to your intervention: AI is going to create not only a lot of misinformation and disinformation, but it will also propagate the disguising of the truth in general. So that is a really very complicated issue. The plenary session before us here discussed ideas related to truthfulness and ethical matters, as well as the explainability and discoverability of algorithms and these kinds of things. What we are doing in the international organizations in the region: we have developed a strategy and a vision for AI in the region, and we connected this to the Arab Digital Agenda. We are currently putting in place metrics for measurement, a baseline of readiness, maturity, and adoption of AI, and we are trying to set targets for the region to combat this phenomenon in collaboration with the pertinent UN body, which is UNESCO. So we are working on that. It's an uphill battle; it's like virus and antivirus. The more antiviruses you create, the more there are people trying to disguise. With AI, it's difficult. But as long as AI is under the jurisdiction of humans, it is still manageable. The governance is by humans, of the AI, and we will not be afraid to combat these kinds of threats. 
And we think it will continue like that: human intelligence has the upper hand and will control such malicious behaviors. Thank you. I'll pass the floor to dear Nermine Saadani from ISOC.

Nermine Saadani: Hello, everyone, and thank you so much for giving me the chance to contribute to this valuable discussion. My name is Nermine Saadani. I'm the regional vice president for the Arab region at the Internet Society. I actually have two points: one question for the distinguished panelists, and an intervention in response to our colleague here from Berlin, on her question about the regulatory framework, to complement what the distinguished panelists said. Can we start with the intervention, so we can remember the question better? On your question about the regulatory framework that could protect communities and societies from fraudulent information on the platforms: we conducted a very useful piece of research this year, and we will complete it in the coming year, inshallah, on intermediary liability. This is a legal framework that many developed regions and countries have been working on, and I think it will be very useful for our region as well to look at intermediary liability from that perspective: how to strike the balance between protecting the platforms and protecting people, the users of the internet, from any fraud or misinformation that could be prevailing on those platforms. Striking a balance is very useful in general, and this framework is very useful from a legal perspective. The document and the research will be announced on our platform, inshallah, in the second quarter of 2025. We have conducted the research in seven countries: Saudi Arabia, the United Arab Emirates, Egypt, Lebanon, Jordan, Oman, and Bahrain. So that would be very useful, and I think it will give context to whatever has been mentioned and to your concern or question. 
So I would refer this to you; it will be announced on our website shortly, inshallah. Now my question, and this is a huge opportunity, because all the pioneers from the Arab region who have been building internet governance in our region are here. I would like to pose a very direct question about the future of the Internet Governance Forum itself, with the WSIS plus 20 review coming up very soon. How do you see this process, and how can we encourage Arab governments to engage more and more in the Internet Governance Forum? We still see a lack of government presence, and definitely not all the stakeholder groups are there. So how do you see this, and how do you look at the review process for the forum? Thank you so much.

Qusai AlShatty: I’ll pass the floor to the panelists.

Ayman El-Sherbiny: Thank you so much to the panelists, who gave me the first intervention; of course, it is not the only intervention. I know Nermine's question, and tomorrow we will give more details. But now, in general, I see the future, and this is not political talk: I see the future of the Arab IGF as bright. I see it revived, I see it strengthened. It's another wave. Remember IGF-4 in 2009? One or two years afterwards, we strengthened our presence through this Arab IGF. With the booster of IGF-19 in 2024, we already have agreements and discussions with the Saudi government, we have discussed with the Emirati government, and we have discussed with people in the NDRA. We want to strengthen and create Arab IGF 2.0: much stronger, much more vital, with goals and targets, like the evolution we saw in the global IGF. The IGF lived a very long period bottom-up only, until at a certain point in time they created the so-called Leadership Panel, for example. Now it is connected more with the CSTD and more with the General Assembly; that is, connecting more with intergovernmentalism. We are already connected, so we don't have a problem there. But, as I said, we have a weakness in the business sector: they don't see a benefit. We have a weakness in civil society too: they are not sufficiently aligned. So, together, we are reviving a new wave, inshallah. The next eight to ten months are vital. As for what you asked about WSIS plus 20, let me give a word of commendation to Nermine herself. In her previous work, she was involved in the IGF and the WSIS plus 10 process, and she knows how important it is. At that time, the global IGF review in 2015 led to a connection between the IGF and the 2030 Agenda: in General Assembly resolution 70/125, the WSIS was connected with the 2030 Agenda as never before. 
So I see that ten months from today, there is going to be a strong connection between the IGF process and the WSIS process, and they will prevail, hopefully, for another ten years or more. The minimum would be until 2030, but I see it lasting at least 10 to 15 years. And in the meantime, there is the GDC; there has never been a Global Digital Compact before. So they will be strengthened, and we have an Arab Digital Agenda. So I see the future as really bright in the region, globally, and at the national level too. We started with the Arab IGF, then the Tunisian IGF, then the Lebanese IGF, and now we see the Saudi and the North African IGFs. So things are maturing, and inshallah the citizens themselves will play a bigger role, along with the next generation, together with ISOC and with other member countries.

Qusai AlShatty: Thank you.

Shafiq: Please, if I may add as well. Thank you very much, Ayman. The good thing about the IGF itself is that it is a platform for discussion, to be honest. People come here open-minded, with open topics. And the more we have this kind of platform to discuss ideas, the better, especially at this stage, where a lot of factors are at play. We are talking about ICANN and the new gTLD initiative and the talks around it, about what we discussed on AI and how it has impacted global usage of the internet. There are a lot of topics that are really emerging. So the more we have platforms like the IGF, where we discuss ideas and meet with experts and decision makers, the more our thoughts will come together. Everybody who has worked a long time on the internet understands that it is a multi-stakeholder approach: an internet without collaboration and connectivity everywhere is not the internet, per se. So thank you very much for raising this question. Anything to add from Ahmed?

Qusai AlShatty: Thank you, thank you, Dr. Ayman.

Ahmed N. Tantawy: I'm not going to add anything new, but maybe I will summarize in two words. Trust between stakeholders is mandatory, okay? And avoid avoidance: avoid avoiding being here, avoid avoiding participation, avoid avoiding engagement. Let's talk. We are different stakeholders; we have different mindsets, different perspectives, different priorities. It's normal not to be on the same line, but let's keep the dialogue going, let's keep being here, engaging, discussing, and sharing our perspectives. Thank you.

Shafiq: Thank you, thank you, Ahmed. I will just reflect on Nermine's question; it's a very critical question. I cannot agree more with all my colleagues and friends, but I will take this opportunity to make a call to the Arab community, a call for all Arab stakeholders to commit, to go and fill in all the open consultations, and to make the Arab IGF sustainable for the next five, ten, fifteen years. As Dr. Waleed said, it's the only venue for inclusivity, for bottom-up processes, for the multi-stakeholder community to voice their concerns. Throughout all these open consultations, as a technical community, as RIPE NCC, we recognized and committed to the multi-stakeholder environment that the IGF is fostering. This is why my call is for all Arab stakeholders to go online, fill in the open consultations, and make your voices heard: the IGF should be sustainable, and it should be there to discuss all the concerns and challenges that every one of us is facing. Thank you, Qusai.

Qusai AlShatty: Then let me take the privilege as moderator to give the floor to our dear colleague, Tijani bin Jum'ah, who has witnessed the WSIS process and internet governance since the start. He was a member of the Civil Society Bureau at the time of WSIS, and now we have the privilege of having him with us, looking at the evolution of WSIS and the continuation of the IGF. So I will take this privilege to give the floor to you.

AUDIENCE: Thank you very much, Qusai, and thank you all for having me. I think we are witnessing an evolution of the IGF. The IGF was created by the Tunis Agenda, and at IGF plus 10 we gained the possibility of having outputs, because at the beginning there were no outputs; it was not permitted to have them. Now we have outputs, but they are not binding on anyone; they are only considered if someone wants to look at them, as was said last time. So this is an evolution, but we need more: we need these recommendations out of the IGF to be considered by the decision makers, by the people who decide on our behalf. The multi-stakeholder model is the only model that gives me the possibility to express my opinion, since I am civil society and I don't have any decision-making rights. In the multi-stakeholder model I can speak, I can give my opinion. And this multi-stakeholder model should be improved, in my point of view, so that it becomes a real multi-stakeholder model, because the stakeholders are not equal; we need equal stakeholders. Civil society people often don't have the possibility to go to the meetings because they are not funded to go. Fortunately there are some sources of funding, but they are not enough, in my point of view. So they cannot express their opinions as well as the governments, who are paid for it, or the private sector, who have a financial interest and are paid to go there and express their point of view. This model should be improved, in my point of view, and we have to fight for it: there is no other model of governance that gives all people the right to express themselves. Now, about the evolution: as you know, there is the GDC, WSIS plus 20, and so on. The problem is that there is a lack of participation from our region. Unfortunately, I think ten months is too short to reach a consensus opinion of the region about the evolution of WSIS, about WSIS plus 20. 
Nermine is right: we don't have something already prepared, and we should have prepared it much earlier. But no problem; if you want, we can work on it, we can do that. But we need the engagement of everyone, real engagement. It is not about bringing this kind of people or that kind; we have to have the opinion of all people. In my point of view, we have to have the opinion of all the national and sub-regional IGFs in our region, and also of the Arab IGF. We also have to have the opinion of all the ISOC chapters in our region. This will help to shape, if you want, the consensus opinion about this evolution. We must not be absent on this occasion. Thank you, Qusai.

Qusai AlShatty: Thank you, thank you. I think Christine, our panelist, has a comment and a question.

Christine Arida: Thank you, Qusai. I really thank Nermine for putting forward that very important question, and I would like to recall a discussion that happened yesterday in the NRI coordination session, where, if I recall correctly, Bertrand de La Chapelle put forward the proposal that all NRIs across the globe, in their sessions between here in Riyadh and the Norway IGF, initiate a discussion about how they would like to see the mandate of the IGF renewed, and produce actual paragraphs or text about the IGF. I think that's a very innovative idea; though it's basic, it is very innovative, because if we can harness the power of the grassroots in looking at the future of the IGF, there is real impact to be made on the multilateral process that will conduct the WSIS plus 20 review and the renewal of the IGF's mandate. In that respect, I think we can lead by example within the Arab region. What we need to see is a very thorough discussion about the renewal of the IGF's mandate at the upcoming Arab IGF, and we should see that feeding into the ministerial sessions, whether within ESCWA, within the League of Arab States, or elsewhere, to actually shape the intergovernmental process in that area. To add one final comment: the IGF has produced a lot of dialogue, a lot of discussion, and even outcomes; we have policy outcomes and policy recommendations. What we should be focusing on at this stage is defining the difference between digital governance and internet governance, because this is one point that needs to be tackled. The other thing we need to look at is linkages: how can the outcomes of the IGF actually feed into decision making elsewhere? We've been saying that, but we haven't been doing it so well. 
I think those two points are very important for the IGF, in addition to a solid proposal on how to empower the IGF in terms of funding and resources. Thank you very much.

Qusai AlShatty: Thank you, Christine. Any questions from the audience? Let me just make a comment. I had the privilege to attend the GDC Action Day. During the preparation session there, the Arab world was represented by the Kingdom of Saudi Arabia, the Republic of Egypt, and the United Arab Emirates. They presented what they are doing in terms of digital cooperation and on the issues related to the digital compact. They represented the view from our part of the world, one that preserves the interests of the key players here in front of the global players. The global players are not necessarily government counterparts, but also companies like Amazon, Google, and Microsoft, and other regional groups. It was substantial in reflecting our points of view and in getting text into these documents that represents our interests and our priorities. So, coming back to the next ten months: I agree it is short, but it's important, with governments as our umbrella, representing the legitimate interests of all the stakeholders, since they are the people who will be at the table protecting our interests and priorities. We need to convey what we want from WSIS beyond the plus 20 review, what we want from the IGF as it evolves beyond 2025, and how we want this to be set up in the Arab world, given the evolution we see in the internet governance landscape: not necessarily a specific platform, but national and regional forums and our interaction with the global forum. In that sense, we really need to follow the timeline from here until July, when the high-level event will take place in Geneva, hosted, I think, by the International Telecommunication Union, and on to the General Assembly session where the final decision will be taken on whether to continue. So far, that is set for December 2025, but this is all to be confirmed. Any final comment from the audience? 
Well, Desiree, do you have an intervention? Yes, please.

Desire Evans: Thank you. Desiree Evans, from the technical community and the RIPE region. I really admire the Arab region for how much effort has been made to include everyone and to have so many internet governance forums taking place in different areas, especially for the youth. I just wish that this enthusiasm continues past this WSIS plus 20 review, with a revived IGF, and not just for 10 to 15 years; some people are calling for an unlimited life for this useful platform. One important point, in addition to what Shafiq was saying: please fill in the open consultation form on the ITU's website by the 14th of March, saying how useful this platform has been. We also have an opportunity in June in Norway, at the next IGF, to get together and think about the proposal Christine mentioned, about how NRIs could have more connections with other UN agencies. And as speakers on the panel, including Ayman, have said, we should have more of this bottom-up inclusion. So if we have a new Leadership Panel, for example, maybe its members should come from the NRIs, from people who have been involved in the process. Let's not miss that opportunity. Thank you.

Qusai AlShatty: Thank you. I’ll give the last word to my dear colleague Shafiq to wrap up the workshop. Thank you.

Shafiq: Thank you, Qusai. I just took some notes as final remarks, and I hope that I didn't miss any point; but please feel free to contact me or Qusai to add any other key message from this session. First of all, thank you very much, dear panelists, dear audience. It really was very interesting and very interactive, and this is the advantage of the IGF: getting all of us in the same room, tackling the challenges that are coming at us. So thank you once again. The key messages that I noted: first, congratulations to Saudi Arabia for hosting this, and congratulations to Dr. Ibrahim and Dr. Waleed for initiating the Saudi IGF; we hope we will have many meetings to coordinate among the NRIs: the Lebanon IGF, Saudi IGF, Arab IGF, and North Africa IGF. Second, there is a message about strengthening collaboration among NRIs, as I just said. Third, capacity building and inclusivity: yes, there is a demand for more capacity building, especially for the underrepresented communities that Dr. Waleed mentioned and for civil society, as our dear guest here mentioned. We need capacity building, fellowships, and funds to give them the tools to attend these meetings. The fourth point is about a sustainable and evolving IGF, on which our colleague Nermine raised a very interesting question: yes, we need to renew our commitment to a sustainable IGF that includes all voices. And the last point I noted here is the absence of Arab voices at the global level, on the global scene. So please, once again, Desiree, Nermine, Christine, and myself: go ahead, fill in the open consultations, and make your voices heard. These are the five points that I think, Qusai, will be the takeaway from this workshop, and hopefully these discussions will continue. Thank you once again, and I wish you a great day and a great IGF. Ayman, you have the last words.

Ayman El-Sherbiny: We all thank you, Qusai, for this very important session. Let me add one more message: the road to Amman. Arab IGF 7 is going to take place from 23 to 27 February. So on our way to Norway, we will pass by Amman. Remember: Amman, then Geneva for the CSTD, then Norway, and then all the way to the General Assembly. Amman, the third or fourth week of February. And before that, tomorrow, here, at 11.30 in room 10, we have the consultation, one step on the road to WSIS plus 20; ten months are still valuable. So tomorrow, 11.30 in room 10, inshallah. Shukran.

Qusai AlShatty: Let me take this opportunity to first give a warm round of applause to our distinguished panelists. I would also like to thank our wonderful audience, who remained with us all through the workshop, for your interactivity and for listening to us. Thank you all, and see you around, hopefully. Thank you. Bye.

C

Christine Arida

Speech speed

128 words per minute

Speech length

701 words

Speech time

327 seconds

Need for multi-stakeholder engagement and dialogue

Explanation

Christine Arida emphasizes the importance of involving all stakeholders in internet governance discussions. She suggests that this approach is crucial for shaping policies and influencing outcomes in the digital governance landscape.

Evidence

She mentions the need to open up dialogue within classical intergovernmental processes, specifically within the Arab League.

Major Discussion Point

The Future of Internet Governance in the Arab Region

Agreed with

Zeina Bouharb

Ayman El-Sherbiny

Charles Shaban

Ahmed N. Tantawy

Agreed on

Importance of multi-stakeholder engagement

Governments as key players in policy-making

Explanation

Christine Arida highlights the role of governments in policy-making processes. She suggests that while multi-stakeholder engagement is important, governments still play a crucial role in shaping and implementing policies.

Major Discussion Point

The Role of Different Stakeholders

Differed with

Zeina Bouharb

Differed on

Role of governments in internet governance

Z

Zeina Bouharb

Speech speed

97 words per minute

Speech length

341 words

Speech time

210 seconds

Importance of enhancing connectivity and digital infrastructure

Explanation

Zeina Bouharb emphasizes the need to invest in reliable digital infrastructure to enable equitable participation across the region. She argues that this is fundamental for sustainable internet governance.

Major Discussion Point

The Future of Internet Governance in the Arab Region

Agreed with

Ahmed N. Tantawy

Agreed on

Enhancing digital infrastructure and connectivity

Differed with

Christine Arida

Differed on

Role of governments in internet governance

Addressing cybersecurity and data protection

Explanation

Zeina Bouharb stresses the importance of developing comprehensive cybersecurity laws and data protection frameworks. She argues that these should be aligned with international best practices while accounting for local needs and contexts.

Major Discussion Point

Challenges and Priorities for Internet Governance

A

Ayman El-Sherbiny

Speech speed

148 words per minute

Speech length

1574 words

Speech time

634 seconds

Promoting regional cooperation and initiatives like Arab IGF

Explanation

Ayman El-Sherbiny emphasizes the importance of regional cooperation in internet governance. He highlights initiatives like the Arab IGF as crucial platforms for dialogue and policy shaping in the region.

Evidence

He mentions the creation of the Arab IGF and its evolution over the years, including the upcoming Arab IGF 7 in Amman.

Major Discussion Point

The Future of Internet Governance in the Arab Region

Agreed with

Christine Arida

Zeina Bouharb

Charles Shaban

Ahmed N. Tantawy

Agreed on

Importance of multi-stakeholder engagement

Intergovernmental organizations facilitating cooperation

Explanation

Ayman El-Sherbiny discusses the role of intergovernmental organizations in facilitating cooperation on internet governance. He emphasizes the complementarity between intergovernmentalism and multi-stakeholderism in shaping policies.

Evidence

He mentions the work of ESCWA and the League of Arab States in organizing regional internet governance forums.

Major Discussion Point

The Role of Different Stakeholders

Revitalizing the Arab IGF and creating “Arab IGF 2.0”

Explanation

Ayman El-Sherbiny proposes revitalizing the Arab IGF to create a stronger, more vital platform with clear goals and targets. He envisions an “Arab IGF 2.0” that would be more effective in addressing regional internet governance challenges.

Evidence

He mentions ongoing discussions with various governments in the region to strengthen the Arab IGF.

Major Discussion Point

Evolution of Internet Governance Forums

A

AUDIENCE

Speech speed

138 words per minute

Speech length

1720 words

Speech time

744 seconds

Increasing awareness and education about internet governance

Explanation

An audience member emphasizes the need for more awareness and education about internet governance in the region. They suggest that this would lead to more effective engagement from various stakeholders.

Evidence

The speaker mentions ongoing workshops, seminars, and online initiatives to educate users about internet governance.

Major Discussion Point

The Future of Internet Governance in the Arab Region

C

Charles Shaban

Speech speed

158 words per minute

Speech length

500 words

Speech time

189 seconds

Strengthening civil society and private sector participation

Explanation

Charles Shaban highlights the importance of involving both civil society and the private sector in internet governance discussions. He argues that their participation is crucial for a comprehensive approach to policy-making.

Major Discussion Point

The Future of Internet Governance in the Arab Region

Agreed with

Christine Arida

Zeina Bouharb

Ayman El-Sherbiny

Ahmed N. Tantawy

Agreed on

Importance of multi-stakeholder engagement

Ensuring sustainable funding for civil society participation

Explanation

Charles emphasizes the need for sustainable funding to enable civil society participation in internet governance forums. He argues that financial constraints often limit the involvement of civil society organizations.

Major Discussion Point

Challenges and Priorities for Internet Governance

Private sector’s importance in driving economic growth

Explanation

Charles Shaban highlights the crucial role of the private sector in driving economic growth in the digital economy. He argues that their involvement in internet governance is essential for shaping policies that support innovation and development.

Evidence

He mentions the significant economic impact of the digital economy, referencing a figure of 20.6 trillion.

Major Discussion Point

The Role of Different Stakeholders

A

Ahmed N. Tantawy

Speech speed

116 words per minute

Speech length

494 words

Speech time

254 seconds

Focusing on youth engagement and capacity building

Explanation

Ahmed N. Tantawy emphasizes the importance of engaging youth in internet governance processes and building their capacity. He sees this as crucial for developing future leaders in the field.

Major Discussion Point

The Future of Internet Governance in the Arab Region

Agreed with

Christine Arida

Zeina Bouharb

Ayman El-Sherbiny

Charles Shaban

Agreed on

Importance of multi-stakeholder engagement

Bridging the digital divide within countries

Explanation

Ahmed N. Tantawy highlights the need to address the digital divide within countries in the Arab region. He emphasizes that this is a priority for ensuring equitable access to digital resources and opportunities.

Major Discussion Point

Challenges and Priorities for Internet Governance

Agreed with

Zeina Bouharb

Agreed on

Enhancing digital infrastructure and connectivity

Aligning regional priorities with global internet governance agenda

Explanation

Ahmed N. Tantawy discusses the importance of aligning regional priorities with the global internet governance agenda. He suggests that this alignment is crucial for effective participation in global discussions and decision-making processes.

Major Discussion Point

Challenges and Priorities for Internet Governance

Youth as future leaders in internet governance

Explanation

Ahmed N. Tantawy emphasizes the role of youth as future leaders in internet governance. He argues for the importance of empowering young people with the knowledge and skills to take on leadership roles in this field.

Major Discussion Point

The Role of Different Stakeholders

M

Maisa Amer

Speech speed

134 words per minute

Speech length

116 words

Speech time

51 seconds

Tackling disinformation and misinformation

Explanation

Maisa Amer raises concerns about the spread of disinformation and misinformation on digital platforms. She inquires about the regulatory frameworks in place to address this issue in the Arab world.

Major Discussion Point

Challenges and Priorities for Internet Governance

N

Nana Wachuku

Speech speed

94 words per minute

Speech length

229 words

Speech time

145 seconds

Civil society’s role in shaping discussions

Explanation

Nana Wachuku highlights the importance of civil society in shaping internet governance discussions. She emphasizes the need for more active participation from civil society organizations in multi-stakeholder platforms.

Major Discussion Point

The Role of Different Stakeholders

N

Nermine Saadani

Speech speed

174 words per minute

Speech length

469 words

Speech time

161 seconds

Importance of connecting IGF to decision-making processes

Explanation

Nermine Saadani emphasizes the need to connect IGF discussions and outcomes to actual decision-making processes. She argues that this connection is crucial for the IGF to have a meaningful impact on internet governance policies.

Major Discussion Point

Evolution of Internet Governance Forums

S

Shafiq

Speech speed

135 words per minute

Speech length

380 words

Speech time

167 seconds

Ensuring equal participation of all stakeholders

Explanation

Shafiq emphasizes the importance of ensuring equal participation of all stakeholders in internet governance processes. He argues that the current multi-stakeholder model needs improvement to achieve true equality among participants.

Major Discussion Point

Evolution of Internet Governance Forums

D

Desire Evans

Speech speed

128 words per minute

Speech length

207 words

Speech time

96 seconds

Linking national, regional, and global IGF initiatives

Explanation

Desire Evans highlights the importance of connecting national, regional, and global IGF initiatives. She suggests that this linkage could strengthen the overall impact of internet governance forums at all levels.

Evidence

She mentions the opportunity to discuss this proposal at the upcoming IGF in Norway.

Major Discussion Point

Evolution of Internet Governance Forums

Agreements

Agreement Points

Importance of multi-stakeholder engagement

Christine Arida

Zeina Bouharb

Ayman El-Sherbiny

Charles Shabani

Ahmed N. Tantawy

Need for multi-stakeholder engagement and dialogue

Promoting regional cooperation and initiatives like Arab IGF

Strengthening civil society and private sector participation

Focusing on youth engagement and capacity building

Speakers agreed on the critical importance of involving all stakeholders in internet governance discussions and decision-making processes.

Enhancing digital infrastructure and connectivity

Zeina Bouharb

Ahmed N. Tantawy

Importance of enhancing connectivity and digital infrastructure

Bridging the digital divide within countries

Speakers emphasized the need to invest in digital infrastructure and address the digital divide to ensure equitable access and participation in the digital economy.

Similar Viewpoints

Both speakers highlighted the important role of governments and intergovernmental organizations in shaping internet governance policies while also emphasizing the need for multi-stakeholder engagement.

Christine Arida

Ayman El-Sherbiny

Governments as key players in policy-making

Intergovernmental organizations facilitating cooperation

Both speakers emphasized the importance of empowering underrepresented groups (civil society and youth) to participate effectively in internet governance processes.

Charles Shabani

Ahmed N. Tantawy

Ensuring sustainable funding for civil society participation

Focusing on youth engagement and capacity building

Unexpected Consensus

Revitalizing regional Internet Governance Forums

Ayman El-Sherbiny

Desire Evans

Revitalizing the Arab IGF and creating “Arab IGF 2.0”

Linking national, regional, and global IGF initiatives

Despite representing different stakeholder groups, both speakers agreed on the need to strengthen and revitalize regional IGFs, suggesting a shared recognition of the importance of these forums in shaping internet governance.

Overall Assessment

Summary

The main areas of agreement included the importance of multi-stakeholder engagement, the need to enhance digital infrastructure, the role of governments and intergovernmental organizations in facilitating cooperation, and the importance of empowering underrepresented groups in internet governance processes.

Consensus level

There was a moderate to high level of consensus among the speakers on key issues. This consensus suggests a shared understanding of the challenges and priorities for internet governance in the Arab region, which could facilitate more coordinated efforts to address these issues. However, the diversity of perspectives also highlights the complexity of internet governance and the need for continued dialogue and collaboration among all stakeholders.

Differences

Different Viewpoints

Role of governments in internet governance

Christine Arida

Zeina Bouharb

Governments as key players in policy-making

Importance of enhancing connectivity and digital infrastructure

Christine Arida emphasizes the crucial role of governments in shaping and implementing policies, while Zeina Bouharb focuses more on the need for infrastructure development, implying a less central role for governments.

Unexpected Differences

Approach to addressing disinformation

Maisa Amer

Ayman El-Sherbiny

Tackling disinformation and misinformation

Promoting regional cooperation and initiatives like Arab IGF

While Maisa Amer raises concerns about disinformation and seeks information on regulatory frameworks, Ayman El-Sherbiny’s focus on regional cooperation does not directly address this issue, highlighting an unexpected gap in addressing a critical challenge.

Overall Assessment

Summary

The main areas of disagreement revolve around the role of different stakeholders in internet governance, approaches to capacity building, and priorities in addressing regional challenges.

Difference level

The level of disagreement among speakers is moderate. While there are differing emphases on various aspects of internet governance, there is a general consensus on the importance of multi-stakeholder engagement and regional cooperation. These differences in perspective could lead to varied approaches in implementing internet governance policies in the Arab region, potentially affecting the balance between government-led initiatives and grassroots participation.

Partial Agreements

Both speakers agree on the importance of multi-stakeholder participation in internet governance, but Ayman El-Sherbiny emphasizes regional cooperation through initiatives like Arab IGF, while Charles focuses more on strengthening civil society and private sector involvement.

Ayman El-Sherbiny

Charles Shabani

Promoting regional cooperation and initiatives like Arab IGF

Strengthening civil society and private sector participation

Both speakers agree on the need for broader participation in internet governance, but Ahmed N. Tantawy emphasizes youth engagement and capacity building, while Charles focuses on sustainable funding for civil society participation.

Ahmed N. Tantawy

Charles Shabani

Focusing on youth engagement and capacity building

Ensuring sustainable funding for civil society participation

Takeaways

Key Takeaways

There is a need for greater multi-stakeholder engagement and dialogue in internet governance in the Arab region

Enhancing connectivity and digital infrastructure is crucial for sustainable internet governance

Regional cooperation and initiatives like the Arab IGF play an important role in shaping internet governance

Youth engagement and capacity building should be prioritized

Civil society and private sector participation needs to be strengthened in the region

Addressing cybersecurity, data protection, and misinformation are key priorities

The Arab IGF needs to be revitalized and evolved to be more effective

Resolutions and Action Items

Participants encouraged to fill out open consultations on the ITU website by March 14th regarding the usefulness of the IGF platform

Arab IGF 7 to be held in Amman from February 23-27, 2025

Consultation meeting on WSIS+20 to be held tomorrow at 11:30 in room 10

Unresolved Issues

How to effectively increase government participation in multi-stakeholder internet governance processes

Specific mechanisms to link IGF outcomes to actual decision-making processes

How to ensure equal participation of all stakeholders, especially civil society

Concrete steps to address the digital divide within Arab countries

Suggested Compromises

Balancing intergovernmental processes with multi-stakeholder engagement in internet governance

Finding ways to make IGF outcomes more impactful without making them binding

Striking a balance between protecting platforms and users in addressing misinformation

Thought Provoking Comments

The two worlds of intergovernmentalism and multi-stakeholderism, they can live together very smoothly. They are two sides of a coin. No side can work without the other.

speaker

Ayman El-Sherbiny

reason

This comment provides a nuanced perspective on the relationship between governmental and multi-stakeholder approaches to internet governance, challenging the notion that they are incompatible.

impact

It shifted the discussion towards considering how these two approaches can complement each other rather than viewing them as opposing forces. This led to further exploration of how to integrate multiple stakeholders in policy development processes.

We need to show them more the importance of being part of the policy making, let’s say, because I know this is mainly a non-binding forum. At the same time, it’s important not to wait to see what the others want.

speaker

Charles Shabani

reason

This comment highlights the challenge of engaging the private sector in internet governance forums and proposes a proactive approach.

impact

It sparked discussion on how to make internet governance forums more relevant and impactful for all stakeholders, particularly the private sector. This led to considerations of how to demonstrate the value of participation in these forums.

I think creating kind of a networking between all these initiatives in our region is something will help us to focus on our challenges in the region.

speaker

Ahmed N. Tantawy

reason

This comment introduces the idea of creating a network of regional internet governance initiatives, which could enhance collaboration and focus on regional challenges.

impact

It shifted the conversation towards discussing concrete ways to improve coordination and collaboration among various internet governance initiatives in the Arab region.

We need to have a new commitment, we need to renew our commitments for a sustainable IGF that includes all the voices.

speaker

Shafiq

reason

This comment emphasizes the need for renewed commitment to inclusive and sustainable internet governance forums.

impact

It served as a call to action, encouraging participants to actively engage in shaping the future of internet governance forums. This led to discussions about concrete steps to ensure sustainability and inclusivity in these forums.

Overall Assessment

These key comments shaped the discussion by highlighting the need for greater collaboration between different stakeholders, including governments, civil society, and the private sector. They emphasized the importance of creating more inclusive and sustainable internet governance forums, particularly in the Arab region. The discussion evolved from identifying challenges to proposing concrete solutions, such as networking regional initiatives and renewing commitments to multi-stakeholder engagement. Overall, the comments pushed the conversation towards a more action-oriented and forward-looking approach to internet governance in the region.

Follow-up Questions

How can we encourage Arab governments to get more engaged in the Internet Governance Forum?

speaker

Nermine Saadani

explanation

There is a lack of government presence at IGF events, which limits the forum’s effectiveness and representation.

How can we strengthen the Arab IGF and create an ‘Arab IGF 2.0’?

speaker

Ayman El-Sherbiny

explanation

There is a need to revitalize and strengthen the Arab IGF to make it more effective and relevant in the region.

How can we better connect IGF outcomes to actual decision-making processes?

speaker

Christine Arida

explanation

There is a need to improve the impact of IGF discussions by ensuring they feed into concrete policy decisions.

How can we define the difference between digital governance and internet governance?

speaker

Christine Arida

explanation

Clarifying these concepts is important for focusing future IGF discussions and policy work.

How can we improve the multi-stakeholder model to ensure more equal participation, especially for civil society?

speaker

Tijani bin Jum’ah

explanation

There is a need to address the imbalance in resources and representation among different stakeholder groups in internet governance discussions.

How can we develop a consensus opinion from the Arab region on the evolution of WSIS and WSIS+20?

speaker

Tijani bin Jum’ah

explanation

There is a need for more coordinated input from the Arab region into global internet governance processes.

How can we better integrate AI considerations into internet governance frameworks?

speaker

Audience member (unspecified)

explanation

The impact of AI on misinformation and other internet issues is becoming increasingly important and complex.

How can we improve digital literacy programs within Arab countries?

speaker

Zeina Bouharb

explanation

Enhancing digital literacy is crucial for effective internet governance and participation in the digital economy.

How can we foster more regional cooperation on internet governance issues in the Arab world?

speaker

Zeina Bouharb

explanation

Increased regional cooperation could strengthen the Arab voice in global internet governance discussions.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Open Forum #28 How to procure Internet, websites and IoT secure and sustainable

Session at a Glance

Summary

This discussion focused on the use of internet.nl, an open-source tool for measuring and improving internet security standards, and its adoption in various countries. The tool allows organizations to test their websites and email systems for compliance with modern security standards. Representatives from the Netherlands, Brazil, Singapore, and Japan shared their experiences and plans for implementing similar tools.


Key points included the importance of transparency in security ratings, the role of government in promoting adoption, and the potential for these tools to drive improvements in cybersecurity practices. The Dutch approach of using procurement processes to encourage better security practices was highlighted. Participants also discussed the challenges of implementing such tools in developing countries and the need for proactive measures to ensure no country is left behind in adopting internet security standards.


The discussion then shifted to sustainability in digital systems, introducing the concept of the “twin transition” – balancing digitalization with sustainability. The Dutch Coalition for Sustainable Digitalization was presented as an example of a public-private partnership addressing this issue. The importance of sustainable IT procurement was emphasized, with a focus on energy efficiency, emission reduction, and circular economy principles.


Overall, the session underscored the interconnected nature of internet security, sustainability, and procurement practices. It highlighted the potential for tools like internet.nl to drive improvements in these areas and the importance of international collaboration in addressing these challenges.


Keypoints

Major discussion points:


– Internet.nl tool for measuring adoption of internet security standards


– International efforts to implement similar tools (Brazil, Singapore, Japan)


– Using procurement processes to drive sustainability in IT


– Combining digitalization and sustainability efforts


– Building critical mass to influence big tech companies on security standards


Overall purpose:


The discussion aimed to share information about the Internet.nl tool for measuring internet security standards adoption, highlight international efforts to implement similar tools, and explore how procurement can be used to drive both security and sustainability improvements in IT.


Tone:


The tone was informative and collaborative throughout. Speakers shared their experiences and insights in a constructive manner, with an emphasis on learning from each other and working together to improve internet security and sustainability globally. There was a sense of optimism about the potential for these tools and approaches to make a positive impact.


Speakers

– Wout de Natris: Coordinator of the Internet Standard Security and Safety Coalition


– Wouter Kobes: Standardization Advisor at the Netherlands Standardization Forum


– Annemieke Toersen: Senior Policy Advisor at the Netherlands Standardisation Forum


– Gilberto Zorello: Project coordinator at NIC.br (Brazil)


– Steven Tan: Assistant director at the Cyber Security Agency of Singapore


– Daishi Kondo: Associate professor at Osaka Metropolitan University


– Hannah Boute: Program coordinator for the Dutch Coalition for Sustainable Digitalization


– Rachel Kuijlenburg: Coordinator sustainability for Logius, Ministry of the Interior (Netherlands)


– Coen Wesselman: Rapporteur for the session


Additional speakers:


– Flavio Kenji Yanai: System developer at NIC.br (Brazil)


– Peter Zanga Jackson, Jr.: From the regulatory body in Liberia


– Shawna Hoffman: With Guardrail Technologies


– Munzel Mutairi: CEO of Nataj Al Fikr


Full session report

Internet Security Standards and Sustainable Digitalisation: A Global Perspective


This comprehensive discussion focused on the implementation and promotion of internet security standards through tools like internet.nl, as well as the integration of sustainability principles in digital systems. Representatives from various countries shared their experiences and plans, highlighting the importance of international collaboration in addressing these challenges.


Internet Security Standards and Assessment Tools


The discussion began with an introduction to internet.nl, an open-source tool developed in the Netherlands for measuring and improving internet security standards adoption. Wouter Kobes, Standardization Advisor at the Netherlands Standardization Forum, emphasised that the tool provides a quick assessment of an organisation’s ICT environment security. Wout de Natris, Coordinator of the Internet Standard Security and Safety Coalition, added that it allows organisations to test themselves and improve their security posture. Importantly, it also enables governments to test organizations and pressure them to enhance their security practices.
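The kind of check such a tool automates can be illustrated with a small sketch. The example below validates an HTTP Strict-Transport-Security (HSTS) header, one of the website settings this class of tools inspects; the function name and the policy thresholds are illustrative assumptions, not internet.nl's actual scoring rules.

```python
# Minimal sketch of one check a tool like internet.nl automates:
# validating an HTTP Strict-Transport-Security (HSTS) header value.
# The one-year max-age threshold is a common recommendation, used
# here as an illustrative assumption, not internet.nl's scoring rule.

def check_hsts(header_value, min_max_age=31536000):
    """Return (ok, reasons): ok is True if the header passes all checks."""
    reasons = []
    directives = {}
    # HSTS directives are separated by ";" and are case-insensitive.
    for part in header_value.split(";"):
        part = part.strip().lower()
        if not part:
            continue
        name, _, value = part.partition("=")
        directives[name.strip()] = value.strip().strip('"')

    if "max-age" not in directives:
        reasons.append("missing required max-age directive")
    else:
        try:
            if int(directives["max-age"]) < min_max_age:
                reasons.append("max-age below recommended minimum")
        except ValueError:
            reasons.append("max-age is not an integer")

    if "includesubdomains" not in directives:
        reasons.append("includeSubDomains not set (recommended)")

    return len(reasons) == 0, reasons


# A strong policy passes; a weak one gets actionable feedback,
# mirroring how such tools explain each score they give.
print(check_hsts("max-age=63072000; includeSubDomains"))
print(check_hsts("max-age=300"))
```

In this sketch, a weak policy such as "max-age=300" is flagged both for the short max-age and for the missing includeSubDomains directive, which is the style of per-check feedback the session describes.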


Several countries have implemented or expressed interest in similar tools:


1. Brazil: Gilberto Zorello, Project coordinator at NIC.br, reported that Brazil has implemented Top.br, with increasing adoption rates.


2. Singapore: Steven Tan, Assistant director at the Cyber Security Agency of Singapore, shared that they have developed the Internet Hygiene Portal (IHP). Since its launch in 2022, IHP has conducted over 200,000 scans with users from across 40 countries, with more than 45% of domains showing improvements from their initial scan to their most recent evaluation. Singapore also introduced the Internet Hygiene Rating Initiative.


3. Japan: Daishi Kondo, Associate professor at Osaka Metropolitan University, expressed interest in implementing a Japanese version of the tool.


Government Strategies for Promoting Internet Standards


Speakers highlighted various government strategies to promote internet standards adoption:


1. The Netherlands: Annemieke Toersen, Senior Policy Advisor at the Netherlands Standardisation Forum, outlined a three-fold strategy involving mandates, monitoring, and community building. She noted that the Dutch government mandates specific open standards through a “comply or explain” list.


2. Singapore: Steven Tan explained that Singapore uses a transparent rating system to shift industry behaviour towards better security practices.


3. Brazil: Gilberto Zorello shared that Brazil promotes their tool through industry meetings and events.


A key point of agreement was the importance of engaging with major technology companies to improve standards support. Annemieke Toersen emphasised that by collaborating with other countries, they create a critical mass that enables more effective negotiations with suppliers.


International Collaboration and Challenges


The formation of an international community to share experiences with internet.nl-like tools was highlighted as a positive development. This collaboration was seen as crucial for creating a unified approach to improving internet security globally. The community plans to meet twice a year online, with the first meeting scheduled for spring 2025.


However, the discussion also revealed challenges faced by developing countries in adopting these standards. Peter Zanga Jackson, Jr., from the regulatory body in Liberia, raised concerns about the disparity in internet development between countries and sought guidance on how regulators in developing nations could implement tools like internet.nl with limited resources.


Sustainable Digitalisation


The latter part of the discussion shifted focus to sustainability in digital systems, introducing the concept of the “twin transition” – balancing digitalisation with sustainability.


Hannah Boute, Program coordinator for the Dutch Coalition for Sustainable Digitalization, presented their organisation as an example of a public-private partnership addressing this issue. She briefly mentioned the Corporate Sustainable Reporting Directive in the EU as a relevant development in this area.


Rachel Kuijlenburg, Coordinator sustainability for Logius at the Ministry of the Interior (Netherlands), emphasised the importance of sustainable IT procurement. She highlighted a framework for minimising energy needs and emissions in IT, noting that 80% of IT’s environmental footprint comes from hardware production. Kuijlenburg presented a sustainability framework based on the “refuse, reduce, reuse” strategy, although it was noted that her slide was in Dutch. She also discussed the SARD legislation in Europe related to procurement practices.


Key points in sustainable digitalisation included:


1. Focus on sustainable procurement of IT


2. Development of frameworks to minimise energy needs and emissions


3. Importance of contract management in sustainable IT procurement


Conclusions and Future Directions


The discussion underscored the interconnected nature of internet security, sustainability, and procurement practices. It highlighted the potential for tools like internet.nl to drive improvements in these areas and the importance of international collaboration in addressing these challenges.


Key takeaways included:


1. The effectiveness of tools like internet.nl in assessing and improving internet security standards adoption


2. The value of government strategies involving mandates, monitoring, and community building


3. The potential of international collaboration to influence big tech companies


4. The growing focus on sustainable digitalisation, particularly in IT procurement


Unresolved issues and areas for further exploration included:


1. Effective implementation of internet security tools in developing countries with limited resources


2. Development of specific metrics or targets for sustainable IT procurement


3. Balancing security and sustainability requirements in IT procurement


The discussion concluded with a call for continued international cooperation and the development of workshops and capacity-building initiatives to implement internet security recommendations globally. As a light-hearted note, it was mentioned that internet.nl t-shirts were available at the end of the session.


Session Transcript

Wout de Natris: which is a tool that tells you how secure your ICT environment is within seconds. Two, Internet.nl is an international community, launched in 2020. And your environment… There is no sound at the moment, so… I hear you, and I hear myself as well. Continue? There was a sound issue. Sorry for that. Three, ICT and sustainability is becoming an ever more serious topic, and it is discussed in the final part of the session. In the panel discussion, you will learn how the Dutch government uses procurement and how it negotiated with big tech to deploy… … Also, you will hear from other organizations that work with the Internet.nl tool, or in the future. I think the mic… Electricity wasn’t really working anymore, so I got a new microphone. So, the big tech to deploy important Internet standards. Also, you will hear from other organizations that work with the Internet.nl tool, or strive to do so and learn from their experiences. There will be ample time to ask questions after the two panels, but first, a short survey. Please show your hands. Who in the room is familiar with Internet.nl? I see a few hands. Who has used Internet.nl? And are you interested to deploy it in your country? Yes, but you have. Okay, thank you very much, but without further ado, let me… introduce the first speaker. Wouter Kobes, Standardization Advisor at the Netherlands Standardization Forum, is going to take you through the internet.nl tool and how it works.


Wouter Kobes: So Wouter, the floor is yours. Thank you very much Wout. Yes, so we will talk about measuring adoption of standards to improve security, and for that we use the internet.nl tool. And on the next slide we first have a motto. Why are we using internet standards? Well, to keep our internet open, free and secure. However, those standards do not implement themselves. You have to implement them actively. And to support this adoption of standards we developed the internet.nl tool, which can be used to measure your adoption of these important standards. And for demonstration purposes we measured the IGF donors and partners for this event using our dashboard. And the dashboard is a measuring tool with which you can automatically measure multiple domain names on the adoption of email and website standards. As you can see here, this is just a snippet of our results from last week. The adoption of standards for IGF donors is not optimal yet. However, some scores are over 50%, which is actually pretty good considering international standard adoption. This dashboard can be used for scheduled scanning and also for trend monitoring over time, and is regularly updated using the latest standards and measurements. And on the next slide you can see a more detailed scan result when you use our internet.nl website to scan individual domain names. Quite ironically, the IGF website of this year did not perform too well. And the nice thing is that we do not only present a score to each domain name, but we also explain why this score is given and give guidance on how to improve your score. For instance, in this case, to enable some settings to protect your HTTPS connection better. On the next slide, you can see who is behind internet.nl, which is a public-private collaboration of organizations in the Netherlands and outside. And we are also striving for more international use of internet.nl. And on the next slide, you can see some of our international users.
We are used at the moment in Brazil, in Portugal, in Denmark, and, well, hopefully after this session you get inspired to use internet.nl in your country as well. And using internet.nl is actually quite easy, because on this next slide I will explain to you that internet.nl is an open source software program. It’s available on GitHub if you really want to reuse it yourself. However, if you’re not in a position to host your own internet.nl instance, you can also use our dashboard or API functionality. For that, I ask you to send us an email at question at internet.nl, after which we will give you access to our tooling. And as already introduced by Wout, we are also launching a new international initiative where you can actually participate in using internet.nl internationally. So if you’re interested in joining that user community, please send an email to international at internet.nl. And then I thank you for your attention.


Wout de Natris: Martin, and actually, if you mail to that email address, internet.nl, or sorry, international at internet.nl, then it’s me who will respond to you, because I was recently appointed as its coordinator. And if you want to test it yourself, you can do so right now, because you can just type in internet.nl and, for example, type in the URL of your bank or your organisation, and you will immediately see within about 30 seconds what the security of your organisation is. That is what the tool tells you, and the sort of message you can take away. Thank you again Wouter for the presentation. We’re going into the panel discussion, in which we have four participants, and the first participant is Annemieke Toersen of the Platform Internet Standards and the Standardisation Forum, where she is a Senior Policy Advisor at the Netherlands Standardisation Forum. So Annemieke, please.


Annemieke Toersen: Thank you very much Wout for your brief introduction, and thank you Wouter for showing us the advantages of internet.nl. And thank you for joining our session. Lately more people also came into the room. Thank you very much indeed. My name is Annemieke, Annemieke Toersen from the Netherlands Standardisation Forum, and this is a think tank that aims for more interoperability within the Dutch government, and open standards are key to this goal. Think about interoperability for trustworthy data exchange, or security, which of course influences trust positively. As a government, we are obliged to inform society as a whole; and neutrality, no dependency on vendors, is very important in this case. The forum actively promotes and advises the Dutch government about the usage of standards, and you have to consider that it’s about 25 people from various backgrounds. So think about government, business and science. And when it comes to internet standards, the Dutch government has a threefold strategy, shown on the next sheet, and I will briefly go through it. It’s a bit, it’s another sheet. It’s backwards. Yes, that’s the one. Thank you very much. First, we mandate specific open standards. We can do so by including standards on the comply-or-explain list. This is done after careful research, in which we also consult technical experts. Standards on this list are required when governments invest in new IT systems or services. In a survey among some of the bigger ICT organizations within the Dutch government, we have seen quite some progress in the use of open standards. However, it also became clear that some organizations have not yet moved on. In addition to comply or explain, the Standardisation Forum can also make agreements with final implementation dates, and we have already done so for several modern internet standards such as HTTPS, DNSSEC and RPKI.
So first, we mandate, by law, specific open standards, for instance the open standards HTTPS and HSTS. Second, we monitor to promote the adoption of standards, reviewing tenders and procurement documents. For modern internet standards, we happily use internet.nl, as just mentioned, to frequently measure about 2,500 government domains, which is quite a lot. Finally, number three, we invest in community building. We try to bridge the gap between technical experts and government officials; therefore we are really happy with the Platform Internet Standards and are actively participating. This cooperation enables us to be more effective and helpful to governments with their technical questions, and also with their questions on how to request modern internet standards from their vendors. As for community building and international collaboration on digital standards, we engage in several efforts, for instance the Platform Internet Standards and the Secure Mail Coalition. Our international initiatives include MESHEU, a collaboration with European countries; Wouter already mentioned countries like Denmark, Czechia and Portugal. The internet.nl code is also reused by partners like Australia, and further on you will hear from Flavio from Brazil; together with Denmark this creates greater critical mass. We actively reach out to vendors and hosting providers like Cisco, Microsoft, Open-Xchange, Google and Akamai. This approach inspired Denmark to adopt similar practices, resulting in successes such as Microsoft's announcement last October of full support for the DANE email security standard on Exchange Online. We are pretty proud of that. This achievement is partly due to ongoing correspondence and discussions between the Dutch government and Microsoft since 2019. And by lobbying other countries, we create a critical mass that enables more effective negotiations with suppliers.
Our experience with Microsoft demonstrates the importance of formalising agreements through concrete correspondence, which is very important, of course. This strategy can be applied to other areas, such as sustainability, which we will hear about further on. Suppliers are more inclined to modify their services when multiple governments or countries raise an issue. The key is to build critical mass, and the next sheet shows the most important points: building critical mass, finding other partners, and formalising agreements through concrete correspondence are key to this success. Thank you very much.
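The kind of automated check behind the monitoring Annemieke describes can be sketched in a few lines. This is an illustrative, hypothetical helper, not internet.nl's actual code; the real tool tests far more (DNSSEC, RPKI, TLS configuration and so on), and the one-year max-age threshold is assumed here as a commonly recommended minimum.

```python
# Illustrative only: a tiny check in the spirit of internet.nl's
# HTTPS/HSTS tests. Real HTTP header lookup is case-insensitive;
# this sketch keeps it simple and uses the canonical header name.

def check_hsts(headers: dict) -> dict:
    """Evaluate a response-header dict for an HSTS policy."""
    value = headers.get("Strict-Transport-Security", "")
    directives = [d.strip() for d in value.lower().split(";") if d.strip()]
    max_age = 0
    for d in directives:
        if d.startswith("max-age="):
            try:
                max_age = int(d.split("=", 1)[1])
            except ValueError:
                max_age = 0
    return {
        "hsts_present": bool(value),
        # One year (31536000 s) is a commonly recommended minimum max-age.
        "max_age_ok": max_age >= 31536000,
        "includes_subdomains": "includesubdomains" in directives,
    }
```

A monitoring loop over a list of government domains would fetch each site's headers and feed them through a battery of checks like this one.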


Wout de Natris: Thank you, Annemieke. And as you can see, everybody thinks that with big tech everything is set in concrete. It is not, until you start discussing with them, and perhaps they change their ways and make us all more secure, because that apparently is what is going to happen. Now we're going to move outside of the Netherlands and listen to what other countries have been doing so far with internet.nl, giving it, of course, their own name. The first to speak is Gilberto Zorello, and he will talk about top.nic.br from Brazil, where he is the project coordinator at NIC.br. In the room is his colleague Flavio Kenji Yanai, who is a system developer, and if you have any questions about Brazil you can ask Flavio after the session. But first, Gilberto.


Gilberto Zorello: Hi, good afternoon to all. It's a pleasure to be in this meeting. The implementation of internet.nl in Brazil we call top.nic.br. In Brazil the tool was deployed using a middle-up-down approach, unlike the Netherlands. However, we hope that the key players in the Brazilian market will also adopt the best practices and the usage. We promote the tool in meetings that we have here in Brazil, in events of NIC.br and the association of internet service providers. We have some numbers on our utilisation here in Brazil. For website tests, the unique domains tested until now number about 4,000 websites. The Hall of Fame has about 600. Adoption of IPv6 is about 20% only, DNSSEC signed about 20%, and HTTPS about 6%. We are working with the government here in Brazil: they tested many government websites and are working internally with the organs of government to improve the implementation of best practices here in Brazil. For mail tests, we have about 21,000 unique domains tested. The Hall of Fame has about 80. IPv6 is only 30%, DNSSEC signed 11%, DMARC and SPF about 16%, and STARTTLS just 1%. The connection test has about 300,000 tests, more tests in this case. The unique ASes tested number about 7,000. It's an important number, because we have here in Brazil 9,000 ASes, so we can see that 7,000 were tested up to now. DNS resolvers validating DNSSEC account for about 210,000 tests, or 71%, and users with IPv6 are 60 to 70%. It is important to say that the adoption of IPv6 in Brazil has been increasing in the last year. In Brazil the tool was, as I said, deployed using a middle-up-down approach. The other important thing is that we have just implemented version 1.7; next year we will implement version 1.8 of internet.nl. I don't know if Flavio can add some information. Flavio is okay, he says. Thank you, Gilberto, for showing us how things have changed. Were you finished? Yes, I'm finished.


Wout de Natris: Okay, that’s what I thought, but I just wanted to check to be certain. Thank you very much. It’s encouraging to see that numbers are going up because of the work that you’re doing, and I want to congratulate on that. The next person who is going to speak is going to come in from Singapore, and that is Stephen Tan. He works as an assistant director at the Cyber Security Agency of Singapore and is responsible for internet security and mobile security. Stephen, you’re working with the Internet.nl tool, or something very similar to it, for some years. Can you tell us the experience that you have in Singapore?


Steven Tan: Right. Hi, everyone. I'm Steven. Similar to internet.nl, Singapore has developed our own Internet Hygiene Portal (IHP), which is meant to improve the country's cybersecurity landscape. The IHP encourages service providers to adopt key internet security best practices through a very transparent rating system. So on top of the tool itself, we have come up with a rating system... sorry, can I just double-check, because I'm seeing that I'm muted here. (We can hear you, Steven; you dropped away for a few seconds, but we can hear you.) Okay, sure. So basically we came up with a transparent rating system known as our Internet Hygiene Rating. This approach has helped to shift industry behaviour and perspective by promoting proactive security enhancements. Since its launch in 2022, the IHP has conducted more than 200,000 scans, with users from across 40 countries. Importantly, more than 45% of domains have also shown improvements from their first scan to their most recent evaluation. That has provided us with data points showing that the IHP has driven meaningful progress in cybersecurity readiness. Similar to what internet.nl has done, we have also included an API, available early next year, that will enable seamless integration for businesses looking to automate their security assessments. Several industry players and ICT providers have also signed up with us, showing keen interest and commitment to enhancing their cybersecurity posture. To date, the IHP has helped to shift the cybersecurity landscape in Singapore. We have seen ICT service providers within the APAC region, like Orion and Exabytes, step up and join the Internet Hygiene Rating initiative.
What it means is that they have configured their websites and email services to meet, by default, strong internet security best practices, which also means that any clients or businesses that go to these providers would, by default, have a high rating on internet security best practices. What has been even more encouraging is that even smaller ICT firms in Singapore have come on board. They are now featured in our Internet Hygiene Rating under the ICT website and email management providers category; I'll provide the link later on. It shows that more of these businesses are starting to recognise the importance of following recognised internet security best practices. So far, the response from vendors has been great. Similar to what the internet.nl team shared earlier, even tech giants like Microsoft, Amazon and Akamai have shown willingness to collaborate with us, recognising the tool's potential to drive collective cybersecurity improvements. Besides this, Bart and I have been comparing notes on our engagements with Microsoft, so that we can help nudge the different tech giants into action to really make the internet a safer place. To date, the engagement between us and the tech giants has significantly signalled an industry shift towards greater cooperation and shared responsibility in maintaining a secure internet environment. By adopting the IHP's recommended best practices, they have also strengthened trust in their platforms while contributing to a safer digital ecosystem. Such collaboration proves that we can set security norms through voluntary industry engagement where transparency and fair recognition are in place. Moving forward, besides the Internet Hygiene Portal, we see similar tools like internet.nl and, of course, NIC.br's tool as a strong starting point for establishing broader security norms.
And while we understand that formal regulations may come later, the primary focus of such tools is to encourage voluntary adoption through industry recognition, public visibility and, of course, healthy competition. So kudos to the various teams here that have created such wonderful tools for your countries, your regions and, in fact, internationally. Thanks.
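The transparent rating idea Steven describes can be illustrated with a toy scoring function. The check list and rating bands below are invented for illustration; CSA's actual Internet Hygiene Rating criteria are defined by the agency and are more detailed than this.

```python
# Hypothetical sketch of a transparent hygiene rating: count how many
# baseline checks a domain passes and map the fraction to a band.

CHECKS = ["https", "hsts", "dnssec", "spf", "dkim", "dmarc", "starttls"]

def hygiene_rating(results: dict) -> str:
    """Map per-check booleans to an illustrative rating band."""
    passed = sum(1 for c in CHECKS if results.get(c))
    score = passed / len(CHECKS)
    if score == 1.0:
        return "excellent"
    if score >= 0.7:
        return "good"
    if score >= 0.4:
        return "fair"
    return "poor"
```

Publishing such a band per provider, rather than a raw checklist, is what creates the public visibility and healthy competition mentioned above.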


Wout de Natris: Thank you, Steven. I think what the three presentations show is that, on the one hand, with this tool organisations can test themselves, but also, as you heard Annemieke and Steven say, that they test organisations and let them know what their current status is. And we heard from both that organisations that are tested and do not test so well have the inclination to move upwards, to better themselves and enter the Hall of Fame, but also that the big corporations are more or less exposed as being less secure. That also means that some pressure starts to exist on them to better themselves, and I think that is one of the major factors that makes this tool so successful. We've heard from three organisations that have deployed it. We will now hear from an organisation that is looking into deployment. So I'm inviting the next speaker, Daishi Kondo. He is an associate professor at Osaka Metropolitan University, and one of his research interests is internet security, including email security. So Daishi, can you tell us what you're doing in Japan to make internet.nl happen and what challenges you are running into? Thank you.


Daishi Kondo: Thank you very much. I'm Daishi Kondo from Osaka Metropolitan University. Actually, I have to say that, to the best of my knowledge, the Japanese government doesn't provide a tool similar to internet.nl. So I want to answer two questions. The first question is: what makes the internet.nl principle interesting for Japan? For me, this is still imagination, because we don't have a similar tool yet. The important point is security visualisation. internet.nl has a scoring system, and most people don't know the details of security measures such as SPF, DKIM and DMARC, although it's not necessary for people to know the details. Through the scoring system, however, people can understand the security level of their systems. This principle allows people to easily prepare specification sheets for introducing systems; for example, requiring a score of at least 80%. Also, internet.nl has a Hall of Fame and a compliance badge. These features can create peer pressure among competitors within the same industry, which is also very interesting to me. The second question is: what do you expect to achieve in Japan by using internet.nl? My answer is that using internet.nl can encourage people to take better care of their systems. Currently, in Japan, we don't have such a standard. So one potential use case is to create a specification sheet for introducing systems using a Japanese version of internet.nl, which can also be used to check the security measures implemented by the system provider. I hope at some point we can implement a Japanese internet.nl in Japan. Thank you very much.
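The email-authentication measures Daishi mentions (SPF, DKIM, DMARC) are published as plain DNS TXT records, so even a simple parser can surface the policy a domain enforces. This is an illustrative sketch with hypothetical function names, not part of internet.nl or any Japanese tool:

```python
# Illustrative: parse a DMARC TXT record ("tag=value; tag=value")
# and map its policy tag to a rough strength label.

def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record into a tag dictionary."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, value = part.split("=", 1)
            tags[key.strip()] = value.strip()
    return tags

def dmarc_strength(record: str) -> str:
    """'reject' is full enforcement, 'quarantine' partial, 'none' monitor-only."""
    policy = parse_dmarc(record).get("p", "none")
    return {"reject": "strong", "quarantine": "moderate"}.get(policy, "weak")
```

A scoring tool would fetch the `_dmarc.<domain>` TXT record via DNS and run checks like this, alongside SPF and DKIM lookups, to produce the kind of score a specification sheet could require.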


Wout de Natris: And I wish you good luck with the implementation of internet.nl in Japan. I understand that there's an online question. No? Then we go to the room to see if there are any questions on your side. Who would like to ask a question? Yes, please, sir. Do you have a microphone for a question in the room? And please introduce yourself and your affiliation first, please, sir.


Audience: Firstly, my name is Peter Zanga Jackson, Jr. I'm from Liberia, from the regulatory body. My question is: our countries are far behind in terms of internet development, and now we are talking about the internet. I'm hearing .nl; for my country it would be internet.lr, for Liberia. What can we, as regulators in a developing country like mine, do? What is required of regulators to ensure that we too have an internet.lr, which is for Liberia? What can we do?


Wout de Natris: Well, I’m going to give the question to Walter, because he is working in that field. So he’s going to ask a question, answer the question. Thank you. And a very good question, indeed. Yes, thank you very much for this question.


Wouter Kobes: Like I mentioned, it is possible to reuse the code; it's an open source project, and as a regulator I think it might be an interesting tool to reuse, because you can of course measure standards adoption. As for the question of how to launch it: from a technical perspective, it's basically a matter of following the installation instructions, or making use of our own tool through the dashboard or API features. Of course, if you want to use this for regulatory purposes, you will have to do some work of your own to make it land in your country: getting operators, hosting providers and ISPs to actually use this tool to measure and improve their own standards adoption. But I would say the starting point for adoption is making it possible to measure your adoption, and this source code can contribute to that. I hope that answers your question.


Wout de Natris: And I think it's available for free, so it's not like there's a business model; it's there for you to use if you want to start using it.


Audience: Yeah, well, my fear is: will we not be breaching anything by using internet.nl? Will we not be creating a problem by interfering with the setup of the Netherlands? Will that not create a problem for Liberia if we start using internet.nl in Liberia? How can we handle this?


Wout de Natris: I would say that if you decide to start using it, you deploy it in your own country, and from that moment you name it: Brazil, for example, calls it TOP, and in Singapore they gave it another name. So you give it your own name; it's no longer internet.nl, it's the name that you choose, and from that moment on, it's yours. It's available for free, with instructions; just exchange cards with Wouter and then it will be okay. And then you can join the community that we will be setting up very soon; the first invitation will go out probably in January or early February, so please join if you would like to. Is there another question in the room? Time for one more. Is there a question? Yes. And anything online, Doreen? No? Then the floor is yours. Please introduce yourself first, and your field. No? Or not? Oh, okay. Yes, the lady here. Okay, thank you.


Audience: So first off, thank you for this initiative. (Please introduce yourself.) Thank you so much. Shawna Hoffman with Guardrail Technologies. One of the questions that I have: I actually just looked up my website and realised we have some work to do ourselves. I come from the United States. So what advancements have you been making with the U.S., or is this something that we need to start in our country?


Wouter Kobes: I’m not quite aware actually if we have any initiative in the U.S. right now. But I think as Wouter is saying, this is partly the reason why we are having this session right now is to make this tool more known globally. And I think perhaps we can talk after this session how we could reach the U.S. as well. Because I think it’s in the end in the best interest of the internet and everyone here at the IGF to have this tool widely used and widely known. Not necessarily under the name internet.nl, but rather as a tool in your country that is known and used by as many providers as possible. So let’s talk after this session about this. Thank you.


Wout de Natris: You have a question, sir? Then please introduce yourself first, and your affiliation. Sure, yeah, it's my pleasure.


Audience: My name is... can you hear me? Okay. Engineer Munzel Mutairi, I'm CEO of Nataj Al Fikr. My question is regarding being proactive in building the capacity of countries, especially those with limited technical resources. Usually we talk, but is there any proactive initiative that measures their infrastructure and tries to develop it, so it is standardised as much as possible with other countries? (I'm not sure I got your question fully.) Okay, for example, the gentleman from Liberia is asking, and that's reactive to what's happening. How about being proactive, maybe through the UN or an international organisation in the field, to make sure that no country is left behind? That's my question.


Wouter Kobes: Yeah, I think that's a very good point. Well, the power of the tool is that it is not bound to a country: you can test domain names from anywhere. So, as I presented, you can already test the IGF website, or websites from all over the world. The challenge is not in the technology; the challenge is getting it adopted in those countries where the reach might be more difficult. And I consider this session one of our proactive measures to make this adoption more known. But if you have any ideas on that, perhaps we can also discuss after this session how to make sure no one is left behind, as was introduced this morning as well. Thank you.


Wout de Natris: Yes, thank you. What I can add: if you would like to be proactive on this, you can use internet.nl to test websites in your country, and it will work. If you really want to measure your country and make it better known, it would be better, as a next step, to deploy the system yourself. But to show your countrymen, you can just use internet.nl and, for example, test a bank or the government in your own country, and you will get the results too. So that's perhaps a more proactive way to start. Thank you for your question.


Steven Tan: Maybe I can respond to that question as well. Firstly, like what the Netherlands Standardisation Forum is doing, and similar to what CSA is doing in Singapore, what we are trying to advocate here is that when we identify the various best practices, or these various modern standards, we come up with a list to set a series of standards that all countries, hopefully at an international level, can deploy. We do believe that no country should be left behind when it comes to secure internet adoption. By making all these best practices available, on internet.nl, or NIC.br's tool, or the Internet Hygiene Portal on CSA's website, what we are saying is that every country can embrace this. We have gone through the pain and effort of trying many standards out there; we have also tried things that by now are dated, while new technologies came along the way, and we have curated this list for everybody to use. So for countries who are now starting to embrace the internet and to take on these standards, the de facto good standards that many have already tested and tried, and learned lessons from, are available; more or less, the correct answers are now there. For those of you trying to get your internet up now, these standards are, I wouldn't say the best, but second to none; you may want to look at these standards, adopt them across your country, and have your ICT providers also develop and adopt such best practices. They would, by default, be good and useful, especially since many countries have already tried and tested them.
So this is what we are trying to do here.


Wout de Natris: Yes, thank you very much, Steven. And apologies that I didn't understand where the voice came from, because we can't see anybody online here. Thank you for that comment, because it's very significant and, I think, very explanatory. I see the gentleman who asked the question saying thank you and nodding. As we have only an hour, we are going to move into the second section of our open forum, on sustainability. Many of you in the room will have knowledge, and most likely opinions, on sustainability and the role that different stakeholders have to play to reach a more sustainable future for all living creatures. But how to go about this? Well, the Dutch government is working on a novel approach that uses procurement processes. But what is the plan? What is the current state of the plan? What actions can we look forward to? These and more questions will be answered by Hannah Boute, who is programme coordinator for the Dutch Coalition for Sustainable Digitalisation, and next by Rachel Kuijlenburg, who is sustainability coordinator for Logius, which is part of the Ministry of the Interior. Rachel is committed to taking sustainable IT a step further. First we hear from Hannah about the project, and then from Rachel more about the policy side of it. So Hannah, please, first.


Hannah Boute: Thank you so much, Wout, and thank you all, live in Riyadh and here online, for attending the session. I'd like to take you along and make the bridge from security to sustainability, and in the Netherlands we do that with the Dutch Coalition for Sustainable Digitalisation. Could I have the next slide, please? To start with: the internet is obviously part of a digital system, and that digital system shouldn't only be secure but also sustainable. If we zoom out, in Europe we call this the twin transition: we are in a digitalisation transition, but the other side of it is the sustainability transition. We see growth in connections and growth in quality, and technology becoming more efficient, but nevertheless we as human beings have the tendency to use up the efficiency that comes free. Next slide, please. That means we have to see digitalisation in terms of sustainability as well, and we can look at the sustainability of the digital system in three scopes. The first scope has to do with your own company: think of your company facilities, but also your company's vehicles. We call this scope 1. Scope 2 is where purchasing comes into perspective: the emissions you cause by purchasing energy, for example. And scope 3 has to do with the indirect responsibility you have for what you purchase throughout the supply chain, and for the distribution to your end consumer. With regard to IT, you can think about e-waste and also the scarcity of minerals and metals. Next slide, please. So in the Netherlands we try to drive this so-called twin transition forward in a public-private cooperation, since all stakeholder groups have to play a role in this transition. Here you see a number of the logos of the parties we cooperate with.
The Coalition for Sustainable Digitalisation is possible with the help of all these parties. Together with the coalition and the Ministry of Economic Affairs in the Netherlands, we created the Action Plan for Sustainable Digitalisation, which has analysed and identified several action points to move this transition forward. Next slide, please. We do that in four programme lines, visualised in this House of Sustainable Digitalisation. A very important part is the first programme line, technological innovation. This has to do with making the digital system itself sustainable, and within the coalition we focus on sustainable artificial intelligence, given the rapid adoption of AI, but we are also looking at how to make the internet more sustainable. Then on the right you see 'sustainable by IT', and as you already heard in my introduction, digitalisation needs energy; therefore we are looking into the relation between the energy system and the digital system. Making the IT of organisations more sustainable is also a very important programme line. In the European Union we currently have a directive, the Corporate Sustainability Reporting Directive (CSRD), which requires companies to report on the three scopes I just mentioned. We have several working groups working towards solutions for organisations to start their journey to making their IT more sustainable. A very important working group in this programme line is the IT procurement working group, of which Rachel is the chair. This is a very natural moment to give the word to Rachel, so she can explain a bit more how we do that with regard to procurement. Thank you so much for your attention.
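The three emission scopes Hannah walks through can be made concrete with a toy accounting example. The activities and tonnage figures below are invented for illustration; real CSRD reporting follows the GHG Protocol scope definitions in far more detail.

```python
# Toy scope 1/2/3 accounting: classify IT-related emissions by scope
# and sum them. All numbers here are made up for illustration.

EMISSIONS = [
    # (activity, scope, tonnes CO2e per year)
    ("company vehicles",       1, 40.0),   # direct emissions
    ("purchased electricity",  2, 120.0),  # energy bought by the company
    ("hardware manufacturing", 3, 300.0),  # supply chain
    ("e-waste disposal",       3, 25.0),   # end of life
]

def totals_by_scope(rows):
    """Sum tonnes of CO2e per scope."""
    out = {1: 0.0, 2: 0.0, 3: 0.0}
    for _activity, scope, tonnes in rows:
        out[scope] += tonnes
    return out
```

Even in this toy example, scope 3 dominates, which mirrors the point made later in the session that most of IT's footprint sits in hardware production rather than in a company's own operations.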


Wout de Natris: And thank you, Hannah. And, Rachel, the floor is yours.


Rachel Kuijlenburg: Okay, thank you. Thank you very much for this opportunity to talk a little bit about sustainable procurement of IT. My name is Rachel and I work for Logius, the digital government service of the Netherlands, part of the Ministry of the Interior. We maintain government-wide ICT solutions and common standards to simplify communication between the government and society. As Logius, we procure a lot of IT, and that's why I'm also chair of the IT procurement working group within the coalition. And the proof of the pudding is in the eating, because we can talk a lot, but we should do this. Next slide. Here you see a policy frame. In my former job, when I was working on sustainability in the more tangible world of plastics and food waste, we developed this framework to connect our strategies to a policy within an organisation. We focus on refuse: that should be the strategy, and the tactics are reduce and reuse. So it was really nice to hear that our previous speakers really try to reuse the internet.nl standard. We really strive for a better sustainability footprint for IT, and that involves software, but also storage, cloud, data and hardware, because 80% of the footprint of IT is in the production of hardware. To make sure that hardware is bought as little as possible, this framework can help. But then the question is: how to procure this? On the next slide we show you our focus points. Firstly, we focus on minimising our energy needs; energy efficiency is really indispensable within the procurement of IT. Also emission-free, which is focused on hardware, and on the data centres becoming CO2-neutral in 2030 or 2050, whatever your policy is. And in the end, we really try to procure circular and climate-proof. But then the question is: how are you going to do this? At this moment, and that's on the next slide, we are developing a framework.
Here you can see it, and it's only a start, so please, if you want to help us, send me an email. It shows how to focus on several points in relation to hardware, software and cloud. For hardware, when we procure, you can really set energy requirements; there are also ISO standards, or an energy audit you can ask for. A very helpful piece of legislation at this moment is the CSRD within Europe: every big company from whom we procure has to report their sustainability credentials, which will also help you to become CO2-neutral. And also for cloud: within Logius, and the Dutch government, we have our own data centres, but we also procure a lot of data-centre capacity. For that, we also have standards to make sure that our suppliers aim to become as sustainable as possible. Within the coalition, we are currently trying to develop a better framework, so in the coming year we will focus on hardware, software and cloud. A very important part of procurement is also contract management. So the next step we will take in the coming year is to do more research, not only on how to procure, but also on how to ensure within contract management that all the IT procured is going to be sustainable. On the last slide: if you want to help us, or if you need any information, we are more than willing to share; just send me an email. My email address is here on the sheet. We are making nice steps in procuring sustainable IT, and hopefully, with your help, we can make better steps. So thank you for your attention.


Wout de Natris: Thank you, Rachel. As everything is in Dutch on this slide, perhaps you could translate what people are reading here for them.


Rachel Kuijlenburg: Oh, yeah. Well, it's about the aims of Logius. As I said, we are the digital government service. How do I translate this into English? It's early morning here. Accessibility, I would say; it's about interaction, data exchange, the infrastructure, and standards within IT. These are our aims within the organisation, and I should have used an English slide, so my apologies for the Dutch.


Wout de Natris: Sorry for putting you on the spot, Rachel. There's one question online, but first one short comment. What was promised is a third topic, namely how the two topics interact, and that is procurement. Because when you measure your internet standards, that is the moment you know what you can procure on to make yourself more secure, and the same goes for sustainability: when you know what it's about, you can start procuring measures that actually support sustainability. There's a question online; I'm pointed to my cell phone, and it's in the chat. Someone called Bart is asking: what does the international internet.nl community focus on, is it more the tool, or is it usage? Wouter, would you like to answer that? I can do that myself. I think what the community is trying to do is bring together the organisations that are currently working with internet.nl, or a tool like it, and the organisations that are interested in doing so in the future. What we would do is come together twice a year, online, not physically, and discuss where we actually are at that point in time and what challenges we run into, to learn from each other, but also, for example, to coordinate on next steps. So if we all move together to a next evolution of the project, we could do so together, develop together, and that way learn together. That is what we're going to do in the first year; after the first year we'll evaluate and see whether we continue in the same way or have to change it, or perhaps there's no interest, that's the other option, of course. But we'll be working on that in the next year, with two sessions, probably one in the spring and one in the fall, on the Northern Hemisphere calendar at least, and from there we'll see how we develop.
I saw there was a question in the room; no, there isn’t. Then I will move to Koen Wesselman, who is our rapporteur. Koen, can you tell us what we went through and what the lessons are that we learned here today? Koen, the floor is yours.


Koen Wesselman: Yes, I can. Thank you all for being here. We started this session with a clear explanation by Wouter of what internet.nl is, what it does and what it stands for: an open, free and accessible internet for everyone. We moved into the panel discussion, where Annemieke from the Dutch Standardisation Forum made a clear appeal to build a critical mass of countries and organisations to make lasting changes to internet standards and security. We saw very promising presentations from the Brazilian organisation for internet standards and the Cyber Security Agency of Singapore, many thanks for those, and we hope to see that Japan, with Daishi, is moving forward to start an internet.nl version for Japan and monitor its internet activities. We heard clear questions from the participant from Liberia, which were answered, and Hannah and Rachel gave a clear insight into what the Netherlands is doing to combine digitalisation and sustainability in a public-private cooperation between government and companies, and how procurement plays an important role in improving the situation for the Netherlands. I hope this concludes the session well for everyone.


Wout de Natris: Thank you very much, Koen. This is what we will put online as the report for this session. I think we’ve learned a lot here, and I’m not going to repeat what Koen said, because that is the summary already. What is important to know is that if you want to procure products, you have to know what you’re buying, and you can only know what you’re buying when you test it beforehand. That is a lesson we’re starting to learn where sustainability is concerned, but also where the security of the internet is concerned. I’ve got a minute, so I will make some advertisement for the dynamic coalition that I chair and coordinate, the Internet Standards, Security and Safety Coalition, which functions within the IGF. Internet standards, as I said, are the main part we’re working on: making sure that procurement starts happening, especially within governments, so that they procure secure by design. But we also look at education and skills, at how our youth are trained on cybersecurity in tertiary education and how we can improve that in the future, because there seems to be a skills gap of about 20 years, from what we’ve been hearing. We’re also looking at IoT security, and pretty soon at post-quantum encryption; that is starting as we speak, as we signed the research contract on Friday. We also hope to move to the next phase, which is not just looking at the theory of our recommendations but starting to produce, I can’t think of the word in English, the workshops, the capacity building, so that our findings are actually used by organisations around the world and the world becomes inherently safer for all users of the internet, not just the privileged few who can afford it. So that’s where we are. If you have any questions, please contact us, for example at internet.nl; that is me, I can say, because I will be replying to you.
And if you’re interested, you will get an invitation to the first meeting that we will be organising pretty soon. We’ll let you know when it happens, but it will be somewhere in the spring of 2025. Let me end here. We’re actually on time, which we had not expected, but our speakers really kept to their time; I didn’t have to correct anybody at any point, so all kudos to them. Let me thank the speakers first, because we’ve heard a lot about their experience, and I think it’s tremendously important that we keep improving ourselves; with internet.nl, it looks like that is certainly possible. Doreng, thank you for monitoring online, and Koen, thank you for rapporteuring for us. I also want to thank the technicians for setting everything up, and our scribes, somewhere in the world, I don’t know where you are, but thank you very much for making sure that everything is recorded. So let me stop there. Thank you very much for your attention, and I hope to meet again pretty soon. One last thing: I think we have a few t-shirts, so if anybody wants an internet.nl t-shirt, it’s yours. Thank you very much. Bye-bye.
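The self-assessment loop the speakers describe, scoring a domain against a checklist of open standards and naming what is missing, can be illustrated with a short sketch. The check names and the equal weighting below are assumptions made for the sketch, not internet.nl’s actual test set or scoring model:

```python
# Illustrative scorecard in the style of a tool like internet.nl.
# The checks and equal weighting are assumptions, not the real model.

WEB_CHECKS = ("ipv6", "dnssec", "https", "security_headers")
MAIL_CHECKS = ("starttls", "spf", "dkim", "dmarc", "dane")

def score(results: dict, checks: tuple) -> int:
    """Percentage of checks passed (True in results), rounded down."""
    passed = sum(bool(results.get(name, False)) for name in checks)
    return passed * 100 // len(checks)

def report(domain: str, results: dict) -> str:
    """One-line scorecard naming the failing checks, like a dashboard row."""
    web = score(results, WEB_CHECKS)
    mail = score(results, MAIL_CHECKS)
    failing = [n for n in WEB_CHECKS + MAIL_CHECKS if not results.get(n, False)]
    tail = f" - missing: {', '.join(failing)}" if failing else " - all checks pass"
    return f"{domain}: web {web}%, mail {mail}%{tail}"

if __name__ == "__main__":
    # Hypothetical measurement results for a hypothetical domain.
    print(report("example.org", {
        "ipv6": True, "dnssec": True, "https": True, "security_headers": False,
        "starttls": True, "spf": True, "dkim": True, "dmarc": False, "dane": False,
    }))
```

A real deployment would feed `results` from actual measurements; the point of the sketch is the feedback loop the panel credits for organisations bettering themselves: a visible score plus a concrete list of what to fix.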



Wouter Kobes

Speech speed

155 words per minute

Speech length

906 words

Speech time

350 seconds

Tool provides quick assessment of ICT environment security

Explanation

Internet.nl is a tool that quickly evaluates the security of an organization’s ICT environment. It measures the adoption of important internet standards and provides a score within seconds.


Evidence

Demonstration of measuring IGF donors and partners using the dashboard


Major Discussion Point

Internet.nl tool for measuring internet security standards


Agreed with

Wout de Natris


Annemieke Toersen


Gilberto Zorello


Steven Tan


Daishi Kondo


Agreed on

Importance of Internet.nl tool for measuring internet security standards



Wout de Natris

Speech speed

135 words per minute

Speech length

2557 words

Speech time

1131 seconds

Allows organizations to test themselves and improve security

Explanation

The Internet.nl tool enables organizations to assess their own security status. This self-assessment capability encourages organizations to identify areas for improvement and take steps to enhance their security measures.


Evidence

Organizations that are tested and not tested so well have the inclination to move upwards and to better themselves and enter this Hall of Fame


Major Discussion Point

Internet.nl tool for measuring internet security standards


Agreed with

Wouter Kobes


Annemieke Toersen


Gilberto Zorello


Steven Tan


Daishi Kondo


Agreed on

Importance of Internet.nl tool for measuring internet security standards


Formation of international community to share experiences

Explanation

An international community is being formed to share experiences with implementing and using Internet.nl-like tools. This community aims to facilitate learning and coordination among countries and organizations interested in promoting internet standards.


Evidence

Plans for biannual online meetings to discuss challenges and coordinate on next steps


Major Discussion Point

International collaboration on internet standards



Annemieke Toersen

Speech speed

131 words per minute

Speech length

718 words

Speech time

326 seconds

Promotes adoption of important internet standards

Explanation

The Internet.nl tool encourages the adoption of crucial internet standards. It provides a way to measure and visualize the implementation of these standards, motivating organizations to improve their practices.


Major Discussion Point

Internet.nl tool for measuring internet security standards


Agreed with

Wouter Kobes


Wout de Natris


Gilberto Zorello


Steven Tan


Daishi Kondo


Agreed on

Importance of Internet.nl tool for measuring internet security standards


Dutch government uses three-fold strategy: mandate, monitor, community building

Explanation

The Dutch government employs a comprehensive approach to promote internet standards. This strategy includes mandating specific standards, monitoring their adoption, and fostering community engagement to drive implementation.


Evidence

Mandating standards through ‘comply or explain’ list, monitoring over 2,500 government domains, and participating in community initiatives like the Internet Standard Platform


Major Discussion Point

Government strategies for promoting internet standards


Agreed with

Gilberto Zorello


Steven Tan


Agreed on

Government strategies for promoting internet standards


Engagement with big tech companies to improve standards support

Explanation

The Dutch government actively engages with major technology companies to enhance support for internet standards. This approach aims to create a critical mass of support for these standards, leading to wider adoption.


Evidence

Success with Microsoft announcing full support for the DANE email security standard on Exchange Online


Major Discussion Point

Government strategies for promoting internet standards


Agreed with

Gilberto Zorello


Steven Tan


Agreed on

Government strategies for promoting internet standards



Gilberto Zorello

Speech speed

76 words per minute

Speech length

341 words

Speech time

266 seconds

Implemented in Brazil as Top.br with increasing adoption

Explanation

Brazil has implemented its version of the Internet.nl tool called Top.br. The tool is being used to measure and promote the adoption of internet standards in the country, with growing usage and improvements in various security metrics.


Evidence

Statistics on website tests, email tests, and connection tests showing adoption rates for various standards


Major Discussion Point

Internet.nl tool for measuring internet security standards


Agreed with

Wouter Kobes


Wout de Natris


Annemieke Toersen


Steven Tan


Daishi Kondo


Agreed on

Importance of Internet.nl tool for measuring internet security standards


Brazil promotes tool through industry meetings and events

Explanation

The Brazilian organization NIC.br actively promotes the Top.br tool through various industry meetings and events. This outreach strategy aims to increase awareness and adoption of internet standards among key players in the Brazilian market.


Evidence

Promotion in meetings of NIC.br and the Association of Internet Service Providers


Major Discussion Point

Government strategies for promoting internet standards


Agreed with

Annemieke Toersen


Steven Tan


Agreed on

Government strategies for promoting internet standards



Steven Tan

Speech speed

160 words per minute

Speech length

1045 words

Speech time

391 seconds

Developed as Internet Hygiene Portal in Singapore

Explanation

Singapore has created its own version of the Internet.nl tool called the Internet Hygiene Portal. This tool is designed to improve the country’s cybersecurity landscape by encouraging service providers to adopt key internet security best practices.


Evidence

Over 200,000 scans conducted since 2022, with users from 40 countries and 45% of domains showing improvements


Major Discussion Point

Internet.nl tool for measuring internet security standards


Agreed with

Wouter Kobes


Wout de Natris


Annemieke Toersen


Gilberto Zorello


Daishi Kondo


Agreed on

Importance of Internet.nl tool for measuring internet security standards


Singapore uses transparent rating system to shift industry behavior

Explanation

The Internet Hygiene Portal in Singapore employs a transparent rating system to influence industry behavior. This approach promotes proactive security enhancements and has led to improved cybersecurity practices among service providers.


Evidence

ICT service providers in the APAC region joining the Internet Hygiene Rating Initiative and configuring their services to meet strong internet security best practices by default


Major Discussion Point

Government strategies for promoting internet standards


Agreed with

Annemieke Toersen


Gilberto Zorello


Agreed on

Government strategies for promoting internet standards


Sharing best practices internationally

Explanation

Singapore actively shares its experiences and best practices in implementing internet security standards internationally. This collaborative approach aims to drive collective cybersecurity improvements on a global scale.


Evidence

Collaboration with tech giants like Microsoft, Amazon, and Akamai, and sharing notes with other countries like the Netherlands


Major Discussion Point

International collaboration on internet standards



Daishi Kondo

Speech speed

132 words per minute

Speech length

312 words

Speech time

140 seconds

Interest in implementing a Japanese version

Explanation

There is interest in implementing a Japanese version of the Internet.nl tool. The potential benefits include security visualization and creating peer pressure among competitors to improve their security measures.


Major Discussion Point

Internet.nl tool for measuring internet security standards


Agreed with

Wouter Kobes


Wout de Natris


Annemieke Toersen


Gilberto Zorello


Steven Tan


Agreed on

Importance of Internet.nl tool for measuring internet security standards



Audience

Speech speed

123 words per minute

Speech length

331 words

Speech time

160 seconds

Interest from developing countries in adopting standards

Explanation

There is interest from developing countries in adopting internet standards and implementing tools like Internet.nl. This reflects a growing awareness of the importance of internet security and standards in countries at various stages of digital development.


Evidence

Question from a representative from Liberia about how regulators in developing countries can implement similar tools


Major Discussion Point

International collaboration on internet standards



Hannah Boute

Speech speed

136 words per minute

Speech length

615 words

Speech time

271 seconds

Dutch Coalition for Sustainable Digitalization coordinates public-private efforts

Explanation

The Dutch Coalition for Sustainable Digitalization is a public-private partnership that aims to drive forward the twin transition of digitalization and sustainability. It brings together various stakeholders to address the sustainability challenges of digital systems.


Evidence

Creation of the Action Plan for Sustainable Digitalization in collaboration with the Ministry of Economic Affairs


Major Discussion Point

Sustainable digitalization



Rachel Kuijlenburg

Speech speed

129 words per minute

Speech length

731 words

Speech time

338 seconds

Focus on sustainable procurement of IT

Explanation

The Dutch government is emphasizing sustainable procurement of IT as a key strategy for improving the sustainability of digital systems. This approach aims to minimize energy needs and reduce the environmental impact of IT infrastructure.


Evidence

Development of a framework for sustainable IT procurement focusing on hardware, software, and cloud services


Major Discussion Point

Sustainable digitalization


Framework for minimizing energy needs and emissions in IT

Explanation

A framework is being developed to guide the procurement of sustainable IT. This framework focuses on energy efficiency, emission-free hardware, and circular and climate-proof solutions for hardware, software, and cloud services.


Evidence

Specific focus points for hardware (energy requirements, ISO standards), software, and cloud services


Major Discussion Point

Sustainable digitalization


Importance of contract management in sustainable IT procurement

Explanation

Contract management is identified as a crucial aspect of sustainable IT procurement. Ongoing research is being conducted to determine how to ensure that sustainability requirements are maintained throughout the contract lifecycle.


Evidence

Plans for future research on contract management to ensure sustainable IT procurement


Major Discussion Point

Sustainable digitalization


Agreements

Agreement Points

Importance of Internet.nl tool for measuring internet security standards

speakers

Wouter Kobes


Wout de Natris


Annemieke Toersen


Gilberto Zorello


Steven Tan


Daishi Kondo


arguments

Tool provides quick assessment of ICT environment security


Allows organizations to test themselves and improve security


Promotes adoption of important internet standards


Implemented in Brazil as Top.br with increasing adoption


Developed as Internet Hygiene Portal in Singapore


Interest in implementing a Japanese version


summary

Speakers agree on the value of Internet.nl and similar tools for assessing and promoting internet security standards across different countries.


Government strategies for promoting internet standards

speakers

Annemieke Toersen


Gilberto Zorello


Steven Tan


arguments

Dutch government uses three-fold strategy: mandate, monitor, community building


Engagement with big tech companies to improve standards support


Brazil promotes tool through industry meetings and events


Singapore uses transparent rating system to shift industry behavior


summary

Speakers highlight various government strategies to promote internet standards, including mandates, monitoring, community engagement, and collaboration with industry.


Similar Viewpoints

Both speakers emphasize the importance of engaging with major technology companies and sharing best practices internationally to drive improvements in internet standards.

speakers

Annemieke Toersen


Steven Tan


arguments

Engagement with big tech companies to improve standards support


Sharing best practices internationally


Unexpected Consensus

Interest from developing countries in adopting standards

speakers

Audience


Wouter Kobes


arguments

Interest from developing countries in adopting standards


Tool provides quick assessment of ICT environment security


explanation

The interest from developing countries in adopting internet standards and tools like Internet.nl was unexpected, showing a growing awareness of cybersecurity importance across different levels of digital development.


Overall Assessment

Summary

There is strong agreement on the importance of tools like Internet.nl for measuring and promoting internet security standards, as well as the need for government strategies to drive adoption. Speakers from different countries shared similar approaches and experiences in implementing these tools and strategies.


Consensus level

High level of consensus among speakers on the core issues. This implies a growing international recognition of the importance of internet security standards and the potential for increased collaboration in developing and implementing tools and strategies to promote these standards globally.


Differences

Different Viewpoints

Unexpected Differences

Overall Assessment

summary

There were no significant disagreements among the speakers.


difference level

The level of disagreement was minimal to non-existent. The speakers largely presented complementary information about implementing and promoting internet security standards and sustainable digitalization in their respective countries or contexts. This high level of agreement suggests a shared understanding of the importance of these tools and approaches, which could facilitate international collaboration and adoption of similar practices across different regions.


Partial Agreements


Takeaways

Key Takeaways

The Internet.nl tool allows organizations to quickly assess and improve their internet security standards adoption


Several countries have implemented or are interested in implementing versions of the Internet.nl tool


Government strategies involving mandates, monitoring, and community building can effectively promote internet standards adoption


International collaboration and creating a critical mass of countries can help negotiate improvements with big tech companies


Sustainable digitalization efforts are focusing on areas like sustainable IT procurement and minimizing energy needs


Resolutions and Action Items

Formation of an international community to share experiences with Internet.nl-like tools


Plans to develop a more comprehensive framework for sustainable IT procurement


Invitation for interested parties to join the Internet.nl international community


Development of workshops and capacity building initiatives to implement internet security recommendations


Unresolved Issues

How to effectively implement Internet.nl-like tools in developing countries with limited resources


Specific metrics or targets for sustainable IT procurement


Details on how to balance security and sustainability requirements in IT procurement


Suggested Compromises

Using existing tools like Internet.nl to test websites in countries without their own versions, as a starting point before full deployment


Thought Provoking Comments

To keep our internet open, free and secure. However, those standards do not implement themselves. You have to implement them actively.

speaker

Wouter Kobes


reason

This comment succinctly captures the core purpose of internet standards and the need for proactive implementation, setting the stage for the entire discussion.


impact

It framed the subsequent conversation around the importance of tools like internet.nl in promoting and measuring the adoption of these standards.


We mandate specific open standards. We can do so by including standards on the comply or explain list.

speaker

Annemieke Toersen


reason

This insight into the Dutch government’s approach to mandating standards provides a concrete example of how to drive adoption at a national level.


impact

It sparked discussion about different approaches to promoting standards adoption, from government mandates to voluntary initiatives.


By lobbying other countries, we create a critical mass that enables more effective negotiations with suppliers.

speaker

Annemieke Toersen


reason

This comment highlights the strategic importance of international collaboration in influencing major tech companies to adopt standards.


impact

It shifted the conversation towards the global impact of coordinated efforts and the potential for smaller countries to influence industry giants.


Since its launch in 2022, IHP has conducted more than 200,000 scans with users from across 40 countries, right? And importantly, more than 45% of domains have also shown improvements from their first initial scan to their most recent evaluation.

speaker

Steven Tan


reason

This data-driven insight demonstrates the tangible impact of implementing such tools on improving internet security across multiple countries.


impact

It provided concrete evidence of the effectiveness of these initiatives, encouraging other countries to consider similar approaches.


Our countries are far behind in terms of internet development. And now we are talking about internet. I’m hearing nl for my country will be internet.LRO. We as regulators for a developing country like mine, what can we do?

speaker

Peter Zanga Jackson, Jr.


reason

This question from a representative of a developing country highlights the global disparities in internet infrastructure and the challenges faced by nations trying to catch up.


impact

It prompted discussion about how to make these tools and standards accessible and relevant to countries at different stages of internet development.


80% of the footprint of IT is in the production of hardware. So to make sure that hardware should be, well, bought as less as possible, this framework could help.

speaker

Rachel Kuijlenburg


reason

This comment introduces the important connection between IT procurement and sustainability, broadening the discussion beyond just security standards.


impact

It shifted the conversation to include sustainability considerations in IT procurement, linking the earlier discussion on security standards with environmental concerns.


Overall Assessment

These key comments shaped the discussion by broadening its scope from a focus on the technical aspects of internet standards to encompass global collaboration, the challenges faced by developing countries, and the intersection of security with sustainability. The conversation evolved from explaining the internet.nl tool to exploring its potential for driving systemic change in internet governance and IT procurement practices worldwide. The discussion highlighted the need for both top-down (government mandates) and bottom-up (community-driven) approaches to improving internet security and sustainability, while also acknowledging the disparities in resources and infrastructure between different countries.


Follow-up Questions

How can developing countries implement tools like internet.nl?

speaker

Peter Zanga Jackson, Jr. (Liberia)


explanation

Important for ensuring developing countries are not left behind in internet security and standards adoption


What advancements have been made with implementing internet.nl or similar tools in the United States?

speaker

Shawna Hoffman (Guardrail Technologies)


explanation

Explores potential for expanding the use of these security assessment tools to other major internet markets


How can international organizations proactively help build capacity in countries with limited technical resources to implement internet security standards?

speaker

Engineer Munzel Mutairi


explanation

Addresses the need for a coordinated approach to ensure global adoption of internet security standards


How to improve contract management for sustainable IT procurement?

speaker

Rachel Kuijlenburg


explanation

Identified as a next step for ensuring long-term sustainability in IT procurement practices


What does the international internet.nl community focus on – is it more the tool or its usage?

speaker

Bart (online participant)


explanation

Seeks clarification on the priorities and activities of the international community around internet.nl


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #45 Fostering EthicsByDesign w DataGovernance & Multistakeholder


Session at a Glance

Summary

This workshop focused on how data governance initiatives can promote ethics by design in AI and other data-oriented technologies. Speakers from various organizations discussed challenges and strategies for embedding ethical considerations into technological development.

Key themes included the need for multi-stakeholder collaboration, the challenge of defining and implementing ethics across different contexts, and the importance of moving from principles to actionable implementation. Speakers highlighted initiatives like UNESCO’s recommendation on AI ethics, which provides a global standard, and tools like ethical impact assessments to evaluate AI systems.

Challenges discussed included varying levels of AI readiness across countries, differing interpretations of ethical principles, and ensuring meaningful inclusion of civil society voices beyond tokenistic representation. The importance of education and capacity building around AI ethics was emphasized.

Speakers noted the value of open source AI for collaborative development and risk mitigation. Initiatives to bring together diverse stakeholders, including coalitions and expert networks, were described as ways to advance ethical AI governance globally.

Overall, participants agreed on the need to operationalize ethical principles through concrete actions and implementation strategies. Moving from high-level agreement on ethics to practical application across diverse contexts was seen as a key next step for advancing responsible AI development and deployment worldwide.

Keypoints

Major discussion points:

– The challenges of defining and implementing ethics in AI across different contexts and cultures

– The importance of multi-stakeholder collaboration in shaping ethical AI governance

– Various initiatives and frameworks being developed by organizations to promote ethical AI, including UNESCO’s recommendation on AI ethics

– The need to move from high-level principles to concrete implementation of ethical AI practices

– Involving civil society and underrepresented voices in AI governance discussions

The overall purpose of the discussion was to explore how data governance initiatives and multi-stakeholder collaboration can promote ethics by design in AI and other digital technologies. The panelists aimed to address challenges in embedding ethics in AI systems and debate strategies for meaningful inclusion of diverse stakeholders in shaping ethical norms and standards.

The tone of the discussion was largely constructive and solution-oriented. Panelists acknowledged the complexities and challenges involved, but focused on sharing concrete initiatives and proposing ways to make progress. There was a sense of urgency about moving from principles to implementation as the discussion progressed. The tone became slightly more critical when discussing the tokenistic inclusion of civil society voices, but remained overall collaborative and forward-looking.

Speakers

– José Renato Laranjeira de Pereira: Researcher at the University of Bonn, Co-founder of LAPIN (Laboratory of Public Policy and Internet)

– Thiago Moraes: Joint PhD candidate in law at University of Brasilia and University of Brussels, Specialist in data protection and AI governance at Brazilian Data Protection Authority (ANPD), Co-founder and counselor of LAPIN

– Ahmad Bhinder: Policy Innovation Director at the Digital Cooperation Organization

– Amina P.: Privacy Policy Manager at META

– Tejaswita Kharel: Project Officer at the Center for Communication Governance at National Law University Delhi

– Rosanna Fanni: Program Specialist in the ethics of AI unit at UNESCO

Additional speakers:

– Alexandra Krastins: Senior lawyer at VLK Advogados, Former project manager at Brazilian National Data Protection Authority, Co-founder and counselor of LAPIN

Full session report

Revised Summary of AI Ethics and Governance Discussion

This workshop explored strategies for embedding ethical considerations into AI and digital technologies. Speakers from various organizations discussed challenges and approaches to promoting ethics by design, emphasizing the need for multi-stakeholder collaboration and the importance of moving from principles to actionable implementation.

Ahmad Bhinder – Digital Cooperation Organization (DCO)

Ahmad Bhinder highlighted two main regulatory approaches to AI governance: a prescriptive, risk-based approach led by the EU and China, and a more flexible, principles-based approach favored by the US and Singapore. He noted varying levels of AI readiness across countries, complicating the development of global governance frameworks. Bhinder also mentioned the DCO’s Digital Space Accelerator program, which aims to bring together multiple stakeholders to address AI governance challenges.

Amina P. – META

Amina P. presented open source AI as a tool to enhance privacy and safety, challenging common perceptions by arguing that opening AI models to the wider community allows experts to identify, inspect, and mitigate risks collaboratively. She emphasized META’s partnerships with academia and civil society, including the Partnership on AI and the Coalition for Content Provenance and Authenticity. Amina also highlighted the need for better education on AI and privacy among stakeholders.

Rosanna Fanni – UNESCO

Rosanna Fanni emphasized UNESCO’s recommendation on AI ethics as a global standard agreed upon by 194 member states. She introduced UNESCO’s readiness assessment methodology for evaluating governance frameworks at the macro level and an ethical impact assessment tool for specific algorithms at the micro level. Fanni mentioned UNESCO’s plans to launch a global network of civil society organizations focused on AI ethics and governance, and their ongoing implementation of AI ethics recommendations through readiness assessments in over 60 countries. She also noted the upcoming AI Action Summit hosted by France in February and referenced the Global Digital Compact in her concluding remarks.

Tejaswita Kharel – Center for Communication Governance

Tejaswita Kharel highlighted the need for a context-specific understanding of ethical principles, emphasizing that ethics is a subjective concept varying across individuals and cultures. She raised concerns about ensuring meaningful inclusion of civil society voices in AI governance discussions, pointing out the challenges of moving beyond tokenistic representation to incorporate diverse perspectives effectively.

Challenges in Implementation

Speakers identified several key challenges in implementing ethics by design in AI systems:

1. Varying levels of AI readiness across countries

2. Difficulty in operationalizing ethical principles

3. Subjectivity and differing interpretations of ethics

4. Misconceptions about AI and privacy among stakeholders

5. Ensuring meaningful inclusion of civil society and Global South voices in AI governance processes

Tools and Frameworks for Ethical AI

Various tools and frameworks were presented to promote ethical AI development:

1. DCO’s AI governance assessment tool

2. Meta’s open source AI and Responsible Use Guide

3. UNESCO’s readiness assessment methodology and ethical impact assessment framework

Moving Forward: Resolutions and Unresolved Issues

The discussion led to several action items:

– UNESCO’s launch of a global network of civil society organizations focused on AI ethics and governance

– Continued implementation of UNESCO’s recommendation on ethics of AI through readiness assessments

– UNESCO’s planned launch, with the Thomson Reuters Foundation, of a voluntary survey for businesses to map AI use across their operations

Key unresolved issues include:

1. Effectively operationalizing ethical principles in AI development and deployment

2. Addressing varying levels of AI readiness across different countries and regions

3. Reconciling differing interpretations and applications of ethical principles across contexts

Conclusion

The discussion highlighted the complexity of implementing ethical AI across different regulatory approaches, cultural contexts, and levels of governance. While there was broad agreement on the importance of ethical AI and multi-stakeholder collaboration, the specific implementation strategies and tools varied among different organizations and stakeholders. Moving from high-level agreement on ethics to practical application across diverse contexts emerged as a key next step for advancing responsible AI development and deployment worldwide.

Session Transcript

José Renato Laranjeira de Pereira: here in our time zone, but also good morning, good evening for those watching us online in other time zones. My name is Jose Renato. I am a researcher at the University of Bonn in Germany, but originally from Brazil, also co-founder of LAPIN, the Laboratory of Public Policy and Internet, a non-profit organization based in Brasilia, Brazil. And well, we’re going to start now our workshop number 45 on AI ethics by design. Our main goal here is to delve into how data governance initiatives can serve as a cornerstone for promoting ethics by design in data-oriented technologies, in particular artificial intelligence. More specifically, this panel aims to, one, offer an overall understanding of ethics by design and the importance of embedding ethical considerations at the inception of technological development; two, address the challenges of embedding ethics in AI and other digital systems; and finally, debate multi-stakeholder collaboration and its relevance in shaping ethical norms and standards, particularly in the context of the recent UN resolution, A/78/L.49, which underscores the importance of internationally interoperable safeguards for AI systems. We have policy questions which will guide the panelists as they reflect upon these issues. The first one is: how can policymakers effectively promote the concept of ethics by design, ensuring integration of ethical principles into the design process of AI and digital systems in a way that meaningfully includes multiple stakeholders, especially communities affected by the systems? The second policy question is: what are the primary challenges to embedding ethics in AI and other systems, and how can policymakers, industry, and civil society collectively address them to ensure digital technologies’ responsible development and deployment?
And finally, what strategies and mechanisms can be implemented to foster this multi-stakeholder collaboration in an ethical way, considering the diverse interests among these communities? Moderating this session is Thiago Moraes, who is a joint PhD candidate in law at the University of Brasilia and the University of Brussels. Hope I pronounced that correctly, but my Dutch is not so good. Definitely. He also works as a specialist in data protection and AI governance at the Brazilian Data Protection Authority, ANPD. Thiago is also co-founder and now counselor of the Laboratory of Public Policy and Internet, LAPIN. The online moderator, who is also with us in person here, is Alexandre Crastins, a senior lawyer at VLK Advogados. She provides consultancy in privacy and AI governance, previously worked at the Brazilian National Data Protection Authority as a project manager, and is also co-founder and counselor of the Laboratory of Public Policy and Internet, LAPIN. Well, I hope you all enjoy the session. Looking forward to the great discussions that I’m sure we’re going to have. And I pass the floor to Thiago.

Thiago Moraes: Thank you, José. Well, we are really excited to be here today, because this is not only a relevant discussion, but also an opportunity for us to understand better what’s being done, in a more hands-on approach, when we discuss this topic of ethics in AI. And that’s why the by-design part of it is so important. We brought brilliant speakers today, and I will briefly introduce each one of them as they open their introductory remarks. To start, I would like to invite Mr. Ahmad Bhinder to speak. Ahmad Bhinder is the Policy Innovation Director at the Digital Cooperation Organization, leading digital policy initiatives to foster collaboration amongst its sixteen member states. With over 20 years of experience in public policy and regulation, he has shaped innovative policies driving connectivity and digital economic growth. Ahmad is dedicated to advancing the DCO’s mission of promoting digital prosperity for all. So, Ahmad, many thanks for coming here. Yesterday, we had the opportunity to see a bit of the DCO’s framework. It’s very interesting. And now we’ll have another interesting moment to learn how it relates to the questions that we are raising here in this session. So, please, the floor is yours.

Ahmad Bhinder: Thank you very much, Thiago. Thank you very much, everybody, for inviting me, on behalf of the Digital Cooperation Organization, to this session. So, just a very brief introduction to what the DCO is. We are an intergovernmental organization headquartered in Riyadh, with member states from the Middle East and Africa; we have European countries as member states, and also South Asian countries. We started in 2020. Within the last four years, we have grown from five member states to 16 member states, and we are governed by a council that has representatives, the ministers for digital economy and ICT, from our member states. Our sole agenda is to promote digital prosperity and the responsible growth of the digital economy. This makes us a one-of-a-kind organization: a global intergovernmental organization that is not looking at sectors, but looking broadly at the digital economy. Again, we have our offices here, so we welcome you from Brazil, the whole group of you, here to Riyadh. I hope you’re enjoying it. Okay, so coming to global AI governance, there are different initiatives that the DCO is pursuing. I will take you through one of those when I explain the framework, but broadly, AI development and governance is not a harmonized phenomenon across the globe. We see two types of approaches. One approach, led by the EU or China or some of those countries, is what we call a more prescriptive, rules-based, risk-based approach. We see the EU AI Act, which has come into place, which categorizes AI into risk categories, and then very prescriptive rules are set for the categories with the higher risk. And then we see in the US and Singapore and a lot of other countries a so-called pro-innovation approach, where the focus is to let AI take its space of development and to set rules which are broadly based on principles.
So initially, we called it a principles-based approach, but actually all the approaches are based on principles. Even the prescriptive regulatory approaches are based on certain principles. Some call them ethical AI principles, some call them responsible AI governance principles, et cetera. We have also seen, across nations, different means of approaching AI governance or AI regulation. For example, there are domain-specific approaches. So we have laws, for example, for the health sector, for education, and for a lot of other sectors, and those laws are being shaped and developed in advance to take into consideration the new challenges and opportunities posed by AI in those domains. Then we have framework approaches, where broader AI frameworks are being shaped in countries, and these would either reform some of those laws or impact the current laws. And the third one, as I said, is the specialized AI acts. The EU AI Act, for example; Australia is working on an AI act; China has an AI law. So, I just wanted to give you…

José Renato Laranjeira de Pereira: … where she served as a member of the jury for data protection officer certification. Amina is CIPP/C certified and has conducted numerous training sessions for professionals on personal data.

Amina P.: … or an additional layer of complexity. And so, Ahmad mentioned earlier the risk-based approach; you mentioned the principles-based approach as well. These are exactly what we advocate for when it comes to regulating AI. We also advocate for technology-neutral legal frameworks that build on existing legal frameworks without creating conflicts between them, and, most importantly, for collaboration between different stakeholders. The way we approach this collaborative work at Meta, when it comes to privacy or ethical standards in general, is that we rely mainly today on open source AI. Some people will ask a very simple question: normally, open-sourcing AI would bring more complexities, because we are opening the doors to malicious actions and malicious actors, so how come we are enhancing privacy with, or through, open source AI? Actually, the way we approach this, and our vision of open source AI, is that experts are involved. First of all, we are opening the models. When we talk about open source AI, it means that we are opening the models to the AI community, which can benefit from these AI tools or models, and everyone can use them. The impact of this is that when we open these models, experts can also help us identify, inspect, and mitigate some of the risks. It becomes a collaborative way of working on these risks, and of mitigating them, within the AI community altogether. Of course, all this work is also preceded by a privacy review before the launch of products. Pre-deployment risk assessments are done by Meta; safety fine-tuning and red teaming are also done ahead of any launch of any product. But in addition to all of these, of course, we have the Privacy Center. We can talk about the privacy-related tools that we have.
But if we want to be specific on collaborative work: once the model is launched, and at Connect 2024, which is a developer conference organized by Meta annually, we announced the launch of Llama 3.2, one of our open large language models, an open model that can be used by the AI community, of course. And one of the tools that we use in open source AI is the Purple Llama project. Before putting standards in place and sharing them with users, there is this project, which enhances privacy and safety: it is an open tool that developers and experts can use to test and mitigate the existing risks, and it combines both blue teaming and red teaming, both of which are necessary in our opinion. It puts standards in place, which we call the Responsible Use Guide, and it is, of course, accessible to everyone. So this is when it comes to open source AI. To conclude on open source AI: for us, it is a tool to enhance privacy; it is through open source AI that we can enhance privacy. Another project worth mentioning is the Open Loop project that we have at Meta, which is a collaborative, feedback-based way of working. We gather policymakers with companies and share feedback on prototypes of AI regulations and ethical standards, or on an issue that has been identified in a specific country. Prototypes are put in place, gathering policymakers and tech companies, and these prototypes, or testing rules, are tried under real-world conditions; starting from there, we can learn from the lessons and then issue policy recommendations.
These are the four steps that Open Loop uses. Actually, last year or the year before, we organized an Open Loop sprint. Open Loop accompanied, for instance, the EU AI Act in Europe, testing some of the provisions ahead of their official publication; the Open Loop sprints are a very small version of the Open Loop projects, and we organized one at the Dubai Assembly last year, or the year before. In the MENA region, the way we do it, as a privacy policy manager, for instance, is that I organize expert group roundtables ahead of the launch of any product, whether related to AI or not. We gather our experts, we have a group of experts, we share the specificities of the product, and we get their feedback to improve our products, whether that is legal, or in relation to safety, privacy, or human rights, et cetera; we take this feedback into consideration. We organize roundtables with policymakers. Recently, we had one in Turkey around AI and the existing data protection rules: whether they are enough to protect within the AI framework or not, what is necessary to do, and a discussion on data subject rights as well. We also contribute to public submissions in the region, in some of the countries, not all of them, depending on the regulation and its importance or nature. In Saudi Arabia, of course, recently, Saudi Arabia has been very active on that front, completing the legal framework around data protection and putting in place AI ethical standards as well. They have been very active on this, and we shared our public comments; we do believe that it is always a discussion with policymakers. Yeah, looking forward to your questions. Sorry if I took more than seven minutes.

Thiago Moraes: Yeah, it’s okay. The only challenge we have is that we have to go through this first round and then try to have some discussion. But it’s interesting to see the many different activities that Meta is involved in to try to bring a more collaborative approach. Open source AI is definitely a hot topic, and there are even some sessions here at the IGF that are also discussing it. So it’s nice to know that there are initiatives like that at Meta as well. Well, without further ado, I think I should move on to our next speaker. Our next speaker, online, is Tejaswita Kharel. I don’t know if I pronounced it right, but she’s a project officer at the Center for Communication Governance at National Law University Delhi. Her work relates to various aspects of information technology law and policy, including data protection, privacy, and emerging technologies such as AI and blockchain. Her work on the ethical governance and regulation of technology is guided by human rights-based perspectives, democratic values, and constitutional principles. So, Tejaswita, thanks a lot for participating with us. And, yeah, well, we’re looking forward to knowing more about your work regarding these topics.

Tejaswita Kharel: Thank you. Can you guys hear me? Just want to confirm.

Thiago Moraes: Yes.

Tejaswita Kharel: All right. So, I’m Tejaswita. I’m a project officer at the Center for Communication Governance at NLU Delhi. We do a lot of research work on the governance of emerging technology, and whether that governance is ethical is, I think, a large part of what our work is. In terms of what I want to talk about today: I know we have three policy questions, and out of these three, what I want to concentrate on is number two, on the primary challenges to embedding ethics. I think when we talk about embedding ethics into AI, or into any other system, it is very important to consider what ethics even means, in the sense that ethics, in itself, is a very subjective concept. What ethics might mean to me might be very different from what it means to somebody else. And that is something we can already see in a lot of existing AI ethical principles or guiding documents, where one document might consider transparency to be a principle, a recurring principle across documents, but privacy may not necessarily be one, which means there will be variation in which ethical principles get implemented. So what this means for us is that when you’re implementing ethics, there’s a good chance that not everybody is applying it in the same way, or even that the principles themselves are different. To illustrate what I mean when I say that people may not implement it in the same way, I will talk about fairness in AI. Fairness as a concept is different when you look at it in, let’s say, the United States versus what you would consider fairness as an ethical principle in India, right? In India, there will be various factors, such as caste and religion, which will be very highly weighted when you’re determining fairness. Meanwhile, in the US, these factors may look like race.
So I specifically mean this in terms of AI bias, when you’re looking at discrimination and bias in AI. With that in mind, the first challenge when we’re looking at embedding ethics is that ethics is different for everyone. And even though the principles may be similar, there will be a lot of varying factors, or differences, in how these ethical principles are understood. So with that in mind, we need to solve this issue, and how we deal with it brings me to point number three, on what strategies and mechanisms can be implemented. One way we solve this problem is by ensuring that there is collaboration between multiple stakeholders, in the sense that, very often, as civil society and policymakers, we have certain ideas of what ethics means, but do the developers and designers of these systems understand what this even means? Whether or not they have the ability to implement ethics by design in these systems is a very big question. The main way we can solve this issue is by first identifying what the ethical principles are and what they mean in each differing context. I am of the belief that we cannot define ethics as a larger concept. We must understand that, depending on the system and the regional and societal context, there will always be differences in what ethics by design is going to look like, and there must always be differences, because there cannot be a one-size-fits-all standard application of ethics by design when not everybody agrees on what ethics means. So first we determine what ethics even means and what these principles can be: whether, for example, we want to ensure that privacy is a part of the ethical principles. And then we get into the question of what factors will be included within these ethical principles.
Like I said, if it’s fairness, are we looking at fairness in the sense of non-discrimination, or inclusivity? Having one level of understanding of which factors fall within this is very important. And then we get into understanding how developers and designers can actually implement this in their systems, whether it’s by ensuring that their data is clean before they start working, so that no bias comes into the data inherently. So I think that the main way we ensure ethics by design is by ensuring good collaboration between stakeholders. This collaboration can perhaps take the form of a coalition. For example, in India, what we have right now is a coalition on the responsible evolution of AI, with a lot of stakeholders: some of them are developers, some are big tech, and there is also civil society participation. All of us talk about, number one, what the difficulties are in terms of AI and its responsible evolution, and then we also discuss how we solve them. So the only way we can do this is by actually creating a mechanism for collaboration between all of these different stakeholders, where we discuss and identify how we design it. So this is my point, predominantly, on how you can implement ethics by design. Thank you.
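[Editor's note] The bias check described above — inspecting outcomes across context-specific protected attributes such as caste, religion, or race — can be operationalized as a simple demographic-parity measure. The sketch below is illustrative only: the attribute name, outcome key, and records are assumptions for the example, not data or code from the session.

```python
from collections import defaultdict

def demographic_parity_gap(records, protected_attr, outcome_key="approved"):
    """Rate of positive outcomes per group, and the largest gap between groups.

    Which attribute counts as 'protected' is context-specific: caste or
    religion may matter most in India, race in the US.
    """
    pos = defaultdict(int)   # positive outcomes per group
    tot = defaultdict(int)   # total records per group
    for r in records:
        g = r[protected_attr]
        tot[g] += 1
        if r[outcome_key]:
            pos[g] += 1
    rates = {g: pos[g] / tot[g] for g in tot}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Illustrative (fabricated) decision records for two groups
data = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]
rates, gap = demographic_parity_gap(data, "group")
```

Here group A is approved at 2/3 and group B at 1/3, so the gap is 1/3 — the kind of disparity a data-cleaning or audit step would flag before training. Which groups and which outcome to measure remains the context-specific, multi-stakeholder question the speaker raises.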

Thiago Moraes: Thanks a lot, Tejaswita. Quite interesting to learn about these coalitions that are trying to engage different stakeholders to tackle issues such as fairness in AI. I think this is part of the puzzle that we have to solve here. When we’re discussing what we really mean by ethics by design, and where we go from here, it is definitely a challenge to consider these many different perspectives. And one of the challenges is how to make these collaborations actually work and come to results. So thanks for giving a glimpse of that. Hopefully we can have some time to come back to it, but we’ll move now to our last, but not least, speaker, who is Rosanna Fanni from UNESCO. Rosanna is a program specialist in the ethics of AI unit at UNESCO, part of the bioethics and ethics of science and technology team, focusing on technology governance and ethics. She supports the global implementation of UNESCO’s recommendation on the ethics of AI and the readiness assessment (RAM) process, and assists countries in shaping ethical AI policies. Previously, she coordinated AI policy projects at CEPS and contributed to research at the Brookings Institution and the European Parliament Research Service. Rosanna holds expertise in international AI governance and policy analysis. So thanks a lot for being here with us, Rosanna, and we are looking forward to knowing more about your work on the topic.

Rosanna Fanni: Thank you. Thank you very much, and thanks also to all my fellow panelists. A lot of things have already been mentioned that I might otherwise have repeated, so I hope I will not do that, but instead offer, first, a way of putting together the remarks that we’ve heard today, and also some perspectives for the discussion. I will outline that based, of course, on the work that we do at UNESCO to implement the ethics of AI around the world. First, thanks also for organizing the session, because I think it’s really important, when we think about new technologies and especially artificial intelligence, to look at the ethics, because ethics is what makes us human, what makes us come together, what makes us sit together in the room and discuss and interact and exchange different perspectives. So for us at UNESCO, ethics is not something philosophical, and it’s also not something that is built in as an afterthought, so to say, when we look at AI. It means, really, from the first moment, to respect human rights, human dignity, and fundamental freedoms, and to put people and societies at the center of the technology. We really believe that it should not be about controlling the technology, but rather steering its development in a way that serves our goals for humankind, because we believe that the technology conversation, especially the conversation about AI, is in the end a societal one, not a technological one. And this means that we must scale our governance and our understanding of the technologies in a way that matches the growth of the industry and the growth of the technology itself as it develops into every aspect of our societies. I don’t have to mention, I think, the examples of where we see AI already happening today, and also the risks that arise with it. And there was one point in the discussion when it came to
AI regulation: let’s think back a few years, when we didn’t yet have the AI Act in place, when we didn’t yet have the discussion about the US framework, nor other standards. There was still this moment, if you remember, when a lot of governments were saying: we see the technology is developing so fast, we can’t really do anything about it, we don’t really know how to steer it, and we need to let the market solve the problems on its own. But that was the moment when UNESCO started to implement its work on the recommendation on the ethics of AI. UNESCO has actually been working on ethical and governance considerations of science and technology for several decades. Previously, we have promoted rigorous analysis and multidisciplinary, inclusive debate regarding the development and implications of emerging technologies through our scientific committees. This started off, actually, as a debate about ethics and human genome editing, and since then, we have at UNESCO constantly reflected on the ethical challenges of emerging technologies. This work eventually culminated in the observation by member states that there are actually a lot of ethical risks in the development and application of artificial intelligence, and this is what led us to work on the recommendation on the ethics of AI. The recommendation on the ethics of AI, if you think about it today, is actually quite a fascinating instrument, I think, because it is approved by all 194 member states, and it has a lot of ethical principles, values, and policy action areas that everybody agreed to.
So maybe already reacting to my previous speaker and fellow panelist: there is actually a global standard. I can very quickly list the values we have: the respect, promotion, and protection of human rights and fundamental freedoms; environment and ecosystem flourishing, which is also something really important when you look at ethics; ensuring diversity and inclusiveness; and peaceful, just, and interconnected societies. And then we have ten principles for how these values are translated into practice: for example, fairness and non-discrimination, safety, the right to privacy of course, human oversight, transparency, responsibility. You can read them all up online; I will not outline them, for the sake of time. And this recommendation, which was adopted in 2021, is now being implemented in over 60 member states around the world, and counting. What does implementing the recommendation mean? It means implementing it through a very specific tool: the readiness assessment methodology. I have only the French version, but here it is. The readiness assessment methodology is, in effect, ethics by design for member states’ AI governance frameworks. What does that mean? It means that when member states work on AI governance strategies, or before they start working on them, we offer them this tool. It is basically a long questionnaire that gives member states a 360-degree vision of what their AI ecosystem looks like at home, across five dimensions: societal and cultural, regulatory, infrastructural, and other dimensions. Through this tool, we really ensure that member states know where they stand and how they can improve their governance structures.
We also have another tool, and I want to quickly spend a minute or so on explaining this tool as well. It’s the ethical impact assessment. The readiness assessment is something that is on a macro level and really looks at the whole governance framework. The ethical impact assessment looks at one specific algorithm and looks at to what extent the specific algorithm complies with the recommendation and the principles outlined in the recommendation. That’s really important when we look at AI systems used in the public sector. For example, when we see AI systems used for welfare allocation or where children go to school, or when we look at AI used in the healthcare context. It is really crucially important that these AI systems are designed in an ethical manner. The ethical impact assessment does exactly that. It analyzes the systems against the recommendation, and this is done in the entire life cycle. Looking at the governance, for example, the multicycler governance, how has it been designed, which entities have been involved. Then it looks at the negative impacts and the positive impacts. That’s also something that I think is really important to emphasize when you look at ethics by design. It’s not just about mitigating the risks, but also looking at the opportunities that exist in the use of AI systems. There’s also always the contextualization of weighing the negatives against the positives. That’s also something that the ethical impact assessment looks at. I will just very briefly, because I think I’m almost also over time, I will very briefly also mention that we work with the private sector. We work with the private sector as well, because we think that when it comes to AI governance, nobody can do it alone. key entity in ensuring that AI systems are being designed and implemented in an ethical manner. 
So we have been teaming up with the Thomson Reuters Foundation to launch a voluntary survey for business leaders and companies to map how AI is being used across their operations, products, and services. This is not yet live; we have already launched the initiative, and the questionnaire will be available next summer. The idea, really, is for businesses to conduct a mapping of their AI governance models and also to assess, for example, where AI is already having an impact: on the diversity and inclusion aspect, on human oversight, or on environmental impact, which is also featured there. By offering this tool to the private sector, we really want to support the sector in ensuring that their governance mechanisms become more ethical, that they can disclose this to their investors and shareholders, but also to the public, and really ensure that ethics is at the center of their operations. Last but not least, another aspect we have heard about many times today is multi-stakeholderism. We at UNESCO see that civil society is always a really critical part of the discussion about the ethics and governance of AI, but most often civil society is not properly sitting at the table in these discussions. So we at UNESCO want to change that. Over the last year, we have been mapping all the different civil society organizations working on the ethics and governance of AI, and we are bringing them all together next year, first at the AI Action Summit in Paris, and then at the Global Forum on the Ethics of AI, UNESCO’s flagship conference on ethics and AI governance. We will be bringing this global network of civil society organizations together for the first time at both of these events.
And we invite all civil society organizations that would like to join us, to ensure that we bring these voices into the major AI governance processes that are ongoing right now. And with that, I will close. I really look forward to the discussion. I have many more points to make, but thanks a lot, and back to you, Tiago, or to our next moderator.

MODERATOR: Thank you, Rosanna, for your speech. We’re moving to the second part of our panel, but first I would like to engage our audience online and on site. Does anyone have any questions, comments, or observations of any kind? Please come up to the standing mic. Okay, so I’m going to ask our speakers some questions, and you can answer as you like. How are you involving stakeholders from civil society and academia in the initiatives you have mentioned?

Ahmad Bhinder: I spoke last, so maybe I’ll pass the floor. Well, I’ll be quick. As Rosanna said, we as an intergovernmental organization are all about collaboration and discussion. First of all, we have member states with whom we hold discussions and workshops. We also have a growing network of observers, and for all the initiatives that we propose, we try to seek their inputs to improve and shape the dialogue. We then want to position ourselves as a collective voice and advocate for best practices on their behalf. So yeah, this is from an intergovernmental organization perspective. Would you want to take it?

Amina P.: Okay. We have an initiative at META. We partner with a non-profit community that has been created, called the Partnership on AI. It’s a partnership with academics, civil society, industry and media, creating solutions so that AI advances positive outcomes for people and society. Out of this initiative, specific recommendations are provided under what we call the synthetic media framework: recommendations on how to develop, create and share content generated or modified by AI in a responsible way. So this is one of the initiatives META collaborated on; we collaborate both with academia and with CSOs. We have other projects, like the Coalition for Content Provenance and Authenticity, with the publication of what we call content credentials about how and when digital content was created or modified. This is called C2PA, and it is another kind of coalition that we have, not limited to academia or CSOs. Another partnership is the AI Alliance, established with IBM, which gathers creators, developers, and adopters to build, enable, and advocate for open source AI. Tejaswita, do you want to join us?

Tejaswita Kharel: Yes. As somebody who represents civil society and academia, I can give more input on how I think we get involved in these conversations. As I said before, in a lot of these coalitions or other groups there is representation predominantly by industry, but academia and civil society organizations are often invited to give opinions, to listen, and to explain more about what our beliefs are. I do believe that very often, when this is done, it ends up being a bit of a minority perspective, and it feels like you’re not always taken very seriously. It’s a little bit like advocacy, where you know that you’re speaking about things that may not necessarily be what other people want to do. So even though academia and civil society representation exists, I don’t think it’s being done in a way that is actually useful, because it’s almost a tokenization of representation. I will be asked to attend an event representing civil society or academia, and I will do it, but I feel that at the end of the conversation I am there solely to mark a tick box: okay, we have had representation, we’ve heard from them. But ultimately, what we want to do is what we believe should be done. So it’s more of a criticism from my end on this part. That being said, I unfortunately have another clashing event, so I will not be able to stay any further. I really apologize. It’s been great; I really loved listening to everyone. I’m really grateful for this opportunity and to have been part of this panel with all of these other excellent panelists, and the moderators as well. Thank you very much. I will be leaving now. Thank you.

MODERATOR: Thank you very much for your participation.

Ahmad Bhinder: I just want to add one quick thing which had slipped my mind. We have a mechanism called the Digital Space Accelerator program, where we hold global roundtables on different digital economy issues. For this AI tool that we are developing, we gather expert stakeholders on the sidelines of big events, like we did yesterday, and we seek their inputs while we are shaping and designing the tool. We went to Singapore, for example, we went to Riyadh and a couple of other places and gathered the experts. This is a mechanism not just for AI; it’s a holistic program for the DCO. Please have a look at it on our website, and feel free to contribute or join as well. So the Digital Space Accelerator program is how we involve all the stakeholders in our initiatives. Thank you.

Rosanna Fanni: Yes, and I will also add a couple of points, maybe directly picking up on the panelist who has unfortunately now left us. It’s very much true that we also observe this tick-box exercise, especially when it comes to civil society involvement in global governance processes on AI. This is exactly why we are setting up the global network of civil society organizations. To give a bit more context, we will launch this in the context of the AI Action Summit hosted by France, which is happening in February next year. As many of you know, it is a government-led summit, the first one having taken place in the UK, known as the AI Safety Summit, followed by a second one hosted by the Republic of Korea. For us, it’s really crucial that we do not do it again as a tick-box exercise, but that we bring civil society into the discussions and leverage their voices during the ministerial discussions as well. This is something the organizers have actually already announced: if you go to the AI Action Summit website, you will see that civil society will be a high priority. Our idea is really to link the dots and to make this network permanent, to then also offer it as a consultative body, so to say, for future AI Action Summits or for other major governance processes on AI. This is something that is really at the heart of our endeavor, and we thank the Patrick McGovern Foundation, which funds this initiative, for their support in this project. The other part that I really wanted to mention is the work with academics. This is also a really crucial part of our work, and people from academia in particular support us in implementing the recommendation through the readiness assessment methodology that I mentioned beforehand as a 360-degree scanning tool for governments. And we bring together these experts. 
So imagine: we are conducting the readiness assessments in over 60 countries. That means we have already 60 experts engaged, one in each country, and every expert brings something unique from their country to the discussions. We assemble these experts in a network that we call AI Ethics Experts Without Borders. This network is really there to unite the knowledge on ethical governance of AI that we find in governments, on the country level and maybe even on the regional or local level, and to bring it together at UNESCO. What is really special about this is that experts can then exchange: hey, what was the experience with, let’s say, AI used in healthcare or in another sector, or maybe there was an issue with the supervision of AI. The idea is really to bring this expertise together and leverage the knowledge of local experts. I also want to emphasize something that links back to the civil society discussion: very often the same issue arises with countries from the Global South as with civil society, in that their involvement is really more of a tick-box exercise. Oh, we have someone from Africa here, but actually the grand majority of the countries that do AI governance are mainly developed economies. For us, it is very important that we bring in these voices from the Global South not as a tick-box exercise, but to really leverage them. That’s also why, out of these 60 countries, 22 are from Africa, and even more are small island developing states. For us, it’s really important to bring in these actors that are normally underrepresented, and we really hope to continue this work at the IGF, but also in many other contexts as well.

MODERATOR: Thank you. We’re going to ask you to bring us your final remarks, but as part of these final remarks, please share some last insights on one question: what is the feedback you have received from stakeholders participating in those collaborative approaches? What were the challenges they shared in doing this collaborative work? What were the key takeaways? And thank you very much for your participation.

Ahmad Bhinder: Well, okay. I think my concluding thoughts are actually connected to this question. We have engaged with stakeholders across our DCO member states, the governments as well as civil society. What we have noticed across our membership, and that’s a representative sample because we are as diverse as the global examples as well, is that there are varying levels of AI readiness across the member states. While some countries are struggling with basic infrastructure, others are really at the forefront of shaping AI governance. There are diverse definitions and diverse approaches to governance. The uniting factor, as Rosanna said, is the set of principles, which have been very widely adopted because they are not controversial; but how to action those principles has been quite diverse. Approaches differ, but the principles are common. So there’s a huge potential for engagement, for harmonization, for synchronization of policies, because for AI and all the emerging technologies, regulations are not restricted to the countries themselves; these are global actors, and borders do not define technologies. So I think it’s really important now, when we talk about multi-stakeholderism or multilateralism, to actually action it: to have those voices heard, to have these global forums and global discussions, and for the global rule-setting bodies to be more active and push the right set of rules for the nations to adopt. I think the dialogue that we are having here and across these forums is very important. Thank you.

Amina P.: Yeah, I would highlight one thing. I cannot, of course, provide detailed feedback, because when we work with experts, they provide feedback depending on the product we are asking them to review. But as a very general overview of the comments we receive: sometimes we feel that there are very varying levels of understanding of what AI is and of the risks being put on the table. Are we talking about existential risks in general, or are we trying to take a more specific, scoped approach, identifying a specific risk and trying to target and mitigate it properly in a very specific way? Sometimes we also face misconceptions from the experts, because if we are talking with experts who take a human rights-based approach, then when it comes to privacy or to AI specificities, there are sometimes misconceptions. So the educational work is absolutely indispensable, and hence some of the privacy tools that we put in place. For instance, the system cards for AI explain the model to the user: if a user does not understand how an AI model works and why it behaves the way it does, it’s very difficult to earn that user’s trust. This is why the system cards explain, for example, the ranking system in our ads: how our ads are ranked and how users are matched to ads. There is also the privacy center and some other educational tools as well. It’s very important to do this education work.

Rosanna Fanni: Yes, I will make it really short: implementation, implementation, implementation. We hear from member states that they want to operationalize the principles; they want to do something with AI, they want to use AI, but at the same time they don’t want to get it wrong. They don’t want to use it in an unethical manner. They want the benefits for everyone: for their citizens, for their businesses. And I think this applies to implementation of the recommendation, but also to implementation of the other tools that we have heard about today from other stakeholders and my fellow panelists, and to the implementation of the Global Digital Compact. The focus now really needs to shift from the principles, from the kind of agreement and consensus that we have found. Yes, we need ethics; yes, we need ethics by design; yes, we need a global governance for AI. But how do we do it, and how do we move from the principles to the action? There is still a lot of work to be done, and a necessity to build capacities in governments and in public administration, but also in the private sector and in civil society, to really be actionable, be operational, and at the same time use AI for the benefit of citizens while being aware of the risks and mitigating the ethical challenges that we have.

Thiago Moraes: Thanks a lot. It was amazing having this discussion, and I think Rosanna just captured the main question: now that we have a consensus, where do we go from here? How do we do it? We’re looking forward to the initiatives being developed by different organizations, the AI Action Summit that’s coming and many others that we have been sharing in the Internet Governance Forum. So thanks everyone. Thanks to our speakers for being here, and to the audience for joining and for the whole discussion. And yeah, looking forward to what’s coming.


Ahmad Bhinder

Speech speed

140 words per minute

Speech length

1105 words

Speech time

472 seconds

Risk-based and principles-based regulatory approaches

Explanation

Ahmad Bhinder discusses two main approaches to AI governance: prescriptive rules-based approaches (like the EU AI Act) and principles-based approaches focused on innovation. He notes that all approaches are ultimately based on certain principles, whether called ethical AI principles or responsible AI governance principles.

Evidence

Examples of EU AI Act and approaches in the US and Singapore

Major Discussion Point

Approaches to AI Governance and Ethics

Differed with

Amina P.

Tejaswita Kharel

Rosanna Fanni

Differed on

Approaches to AI Governance and Ethics

Varying levels of AI readiness across countries

Explanation

Ahmad Bhinder observes that across DCO member states, there are varying levels of AI readiness. While some countries struggle with basic infrastructure, others are at the forefront of shaping AI governance. This diversity presents challenges in harmonizing approaches to AI governance.

Evidence

Observations from DCO member states

Major Discussion Point

Challenges in Implementing Ethics by Design

Agreed with

Amina P.

Tejaswita Kharel

Rosanna Fanni

Agreed on

Challenges in implementing ethics by design

Digital Cooperation Organization’s collaborative initiatives

Explanation

Ahmad Bhinder discusses the DCO’s collaborative approach to AI governance. The organization engages with member states, governments, and civil society to gather inputs and shape dialogue on AI governance issues.

Evidence

DCO’s digital space accelerator program and global roundtables

Major Discussion Point

Multi-stakeholder Collaboration

Agreed with

Amina P.

Rosanna Fanni

Tejaswita Kharel

Agreed on

Need for multi-stakeholder collaboration in AI governance

DCO’s AI governance assessment tool

Explanation

Ahmad Bhinder mentions the development of an AI governance assessment tool by the DCO. This tool is being shaped through inputs from expert stakeholders gathered at global roundtables and events.

Evidence

Stakeholder consultations in Singapore and other locations

Major Discussion Point

Tools and Frameworks for Ethical AI


Amina P.

Speech speed

118 words per minute

Speech length

1488 words

Speech time

755 seconds

Open source AI as a tool to enhance privacy and safety

Explanation

Amina P. argues that open source AI can enhance privacy and safety by allowing experts to identify, inspect, and mitigate risks. She emphasizes that this collaborative approach within the AI community helps address potential issues before product launch.

Evidence

Meta’s open source AI initiatives, including the Llama 3.2 model and the Purple Llama project

Major Discussion Point

Approaches to AI Governance and Ethics

Differed with

Ahmad Bhinder

Tejaswita Kharel

Rosanna Fanni

Differed on

Approaches to AI Governance and Ethics

Misconceptions about AI and privacy among stakeholders

Explanation

Amina P. highlights that there are varying levels of understanding about AI and its risks among stakeholders. She notes that misconceptions can arise, particularly when experts from different backgrounds (e.g., human rights) engage with AI specificities.

Evidence

Feedback from expert consultations and the need for educational tools like system cards

Major Discussion Point

Challenges in Implementing Ethics by Design

Agreed with

Ahmad Bhinder

Tejaswita Kharel

Rosanna Fanni

Agreed on

Challenges in implementing ethics by design

META’s partnerships with academia and civil society

Explanation

Amina P. describes META’s collaborations with academia and civil society organizations to address AI ethics and governance. These partnerships aim to create solutions for responsible AI development and use.

Evidence

Partnership on AI initiative and Coalition for Content Provenance and Authenticity (C2PA)

Major Discussion Point

Multi-stakeholder Collaboration

Agreed with

Ahmad Bhinder

Rosanna Fanni

Tejaswita Kharel

Agreed on

Need for multi-stakeholder collaboration in AI governance


Tejaswita Kharel

Speech speed

175 words per minute

Speech length

1277 words

Speech time

436 seconds

Need for context-specific understanding of ethical principles

Explanation

Tejaswita Kharel emphasizes that ethics is subjective and can mean different things in various contexts. She argues that ethical principles for AI must be understood and applied differently based on regional and societal contexts, as there cannot be a one-size-fits-all approach.

Evidence

Example of fairness in AI differing between the United States and India due to factors like caste and religion

Major Discussion Point

Approaches to AI Governance and Ethics

Differed with

Ahmad Bhinder

Amina P.

Rosanna Fanni

Differed on

Approaches to AI Governance and Ethics

Subjectivity and differing interpretations of ethics

Explanation

Tejaswita Kharel points out that ethics is a subjective concept, leading to varying interpretations and implementations of ethical principles in AI. This subjectivity creates challenges in consistently applying ethics by design across different contexts and stakeholders.

Evidence

Variations in ethical principles across different AI guidelines and documents

Major Discussion Point

Challenges in Implementing Ethics by Design

Agreed with

Ahmad Bhinder

Amina P.

Rosanna Fanni

Agreed on

Challenges in implementing ethics by design

Need for meaningful inclusion of civil society voices

Explanation

Tejaswita Kharel criticizes the current state of civil society involvement in AI governance discussions, describing it as often tokenistic. She argues for more meaningful inclusion of civil society perspectives beyond just ticking a box for representation.

Evidence

Personal experiences in participating in stakeholder consultations

Major Discussion Point

Multi-stakeholder Collaboration

Agreed with

Ahmad Bhinder

Amina P.

Rosanna Fanni

Agreed on

Need for multi-stakeholder collaboration in AI governance


Rosanna Fanni

Speech speed

159 words per minute

Speech length

2581 words

Speech time

972 seconds

UNESCO recommendation on ethics of AI as global standard

Explanation

Rosanna Fanni presents UNESCO’s recommendation on the ethics of AI as a global standard approved by 194 member states. This recommendation provides a set of ethical principles, values, and policy action areas that have gained widespread agreement.

Evidence

UNESCO’s recommendation on the ethics of AI and its implementation in over 60 member states

Major Discussion Point

Approaches to AI Governance and Ethics

Differed with

Ahmad Bhinder

Amina P.

Tejaswita Kharel

Differed on

Approaches to AI Governance and Ethics

Difficulty in operationalizing ethical principles

Explanation

Rosanna Fanni highlights the challenge of moving from agreed-upon ethical principles to practical implementation. She emphasizes the need to shift focus from establishing principles to taking concrete actions in AI governance and ethics.

Evidence

Feedback from member states expressing the desire to operationalize principles and use AI responsibly

Major Discussion Point

Challenges in Implementing Ethics by Design

Agreed with

Ahmad Bhinder

Amina P.

Tejaswita Kharel

Agreed on

Challenges in implementing ethics by design

UNESCO’s readiness assessment methodology

Explanation

Rosanna Fanni describes UNESCO’s readiness assessment methodology as a tool for ethics by design in AI governance frameworks. This tool provides member states with a comprehensive view of their AI ecosystem across multiple dimensions.

Evidence

Implementation of the readiness assessment in over 60 member states

Major Discussion Point

Tools and Frameworks for Ethical AI

Ethical impact assessments for AI systems

Explanation

Rosanna Fanni introduces UNESCO’s ethical impact assessment tool, which evaluates specific AI algorithms against the principles outlined in the UNESCO recommendation. This tool is particularly important for AI systems used in the public sector.

Evidence

Examples of AI systems in welfare allocation, education, and healthcare

Major Discussion Point

Tools and Frameworks for Ethical AI

UNESCO’s global network of civil society organizations

Explanation

Rosanna Fanni discusses UNESCO’s initiative to create a global network of civil society organizations focused on AI ethics and governance. This network aims to amplify civil society voices in major AI governance processes and discussions.

Evidence

Planned launch of the network at the AI Action Summit in February

Major Discussion Point

Multi-stakeholder Collaboration

Agreed with

Ahmad Bhinder

Amina P.

Tejaswita Kharel

Agreed on

Need for multi-stakeholder collaboration in AI governance

Agreements

Agreement Points

Need for multi-stakeholder collaboration in AI governance

Ahmad Bhinder

Amina P.

Rosanna Fanni

Tejaswita Kharel

Digital Cooperation Organization’s collaborative initiatives

META’s partnerships with academia and civil society

UNESCO’s global network of civil society organizations

Need for meaningful inclusion of civil society voices

All speakers emphasized the importance of involving various stakeholders, including governments, industry, academia, and civil society, in shaping AI governance and ethics frameworks.

Challenges in implementing ethics by design

Ahmad Bhinder

Amina P.

Tejaswita Kharel

Rosanna Fanni

Varying levels of AI readiness across countries

Misconceptions about AI and privacy among stakeholders

Subjectivity and differing interpretations of ethics

Difficulty in operationalizing ethical principles

Speakers agreed that implementing ethics by design in AI systems faces various challenges, including differing levels of readiness, misconceptions, and the difficulty of translating ethical principles into practical actions.

Similar Viewpoints

Both speakers highlighted the importance of considering different approaches to AI governance and ethics, emphasizing the need for context-specific understanding and application of ethical principles.

Ahmad Bhinder

Tejaswita Kharel

Risk-based and principles-based regulatory approaches

Need for context-specific understanding of ethical principles

Both speakers presented tools and methodologies aimed at enhancing the ethical development and governance of AI systems, emphasizing transparency and comprehensive assessment.

Amina P.

Rosanna Fanni

Open source AI as a tool to enhance privacy and safety

UNESCO’s readiness assessment methodology

Unexpected Consensus

Importance of education and capacity building in AI ethics

Amina P.

Rosanna Fanni

Misconceptions about AI and privacy among stakeholders

Difficulty in operationalizing ethical principles

While not explicitly stated as a main argument, both speakers emphasized the need for education and capacity building to address misconceptions and enable the practical implementation of ethical principles in AI governance.

Overall Assessment

Summary

The main areas of agreement include the need for multi-stakeholder collaboration, recognition of challenges in implementing ethics by design, and the importance of context-specific approaches to AI governance and ethics.

Consensus level

There is a moderate to high level of consensus among the speakers on the fundamental aspects of AI ethics and governance. This consensus suggests a growing recognition of the complexities involved in ethical AI development and the need for collaborative, context-sensitive approaches. However, the specific implementation strategies and tools vary among different organizations and stakeholders, indicating that while there is agreement on the importance of ethical AI, the path to achieving it remains diverse and evolving.

Differences

Different Viewpoints

Approaches to AI Governance and Ethics

Ahmad Bhinder

Amina P.

Tejaswita Kharel

Rosanna Fanni

Risk-based and principles-based regulatory approaches

Open source AI as a tool to enhance privacy and safety

Need for context-specific understanding of ethical principles

UNESCO recommendation on ethics of AI as global standard

Speakers presented different approaches to AI governance and ethics, ranging from risk-based and principles-based regulatory approaches to open source AI and context-specific ethical principles. While Ahmad Bhinder discussed various regulatory approaches, Amina P. focused on open source AI, Tejaswita Kharel emphasized context-specific ethics, and Rosanna Fanni presented UNESCO’s global standard.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement centered around the specific approaches to implementing ethical AI governance and the challenges in operationalizing ethical principles across different contexts.

Difference level

The level of disagreement among the speakers was moderate. While they shared common goals of ethical AI governance, they presented different perspectives and approaches. This diversity of viewpoints highlights the complexity of the topic and the need for continued multi-stakeholder dialogue to develop comprehensive and effective ethical AI frameworks.

Partial Agreements

Partial Agreements

All speakers agreed on the need for ethical AI governance, but differed in their approaches to addressing the challenges. Ahmad Bhinder highlighted varying levels of AI readiness, Tejaswita Kharel emphasized context-specific ethics, and Rosanna Fanni focused on the difficulty of operationalizing ethical principles. They all recognized the complexity of implementing ethics by design but proposed different solutions.

Ahmad Bhinder

Tejaswita Kharel

Rosanna Fanni

Varying levels of AI readiness across countries

Need for context-specific understanding of ethical principles

Difficulty in operationalizing ethical principles

Similar Viewpoints

Both speakers highlighted the importance of considering different approaches to AI governance and ethics, emphasizing the need for context-specific understanding and application of ethical principles.

Ahmad Bhinder

Tejaswita Kharel

Risk-based and principles-based regulatory approaches

Need for context-specific understanding of ethical principles

Both speakers presented tools and methodologies aimed at enhancing the ethical development and governance of AI systems, emphasizing transparency and comprehensive assessment.

Amina P.

Rosanna Fanni

Open source AI as a tool to enhance privacy and safety

UNESCO’s readiness assessment methodology

Takeaways

Key Takeaways

There are varying approaches to AI governance and ethics globally, including risk-based and principles-based regulatory approaches

Open source AI and multi-stakeholder collaboration are seen as important tools for enhancing privacy, safety and ethical AI development

UNESCO’s recommendation on ethics of AI provides a global standard agreed upon by 194 member states

Implementing ethics by design in AI faces challenges due to varying levels of AI readiness across countries and differing interpretations of ethical principles

There is a need for context-specific understanding and application of ethical principles in AI

Moving from ethical principles to practical implementation and action remains a key challenge

Resolutions and Action Items

UNESCO to launch a global network of civil society organizations focused on AI ethics and governance

UNESCO to continue implementing its recommendation on ethics of AI through readiness assessments in over 60 countries

UNESCO, with the Thomson Reuters Foundation, to launch a voluntary survey for businesses to map AI use across their operations, with the questionnaire available in summer next year

Continued development of tools like DCO’s AI governance assessment tool and UNESCO’s ethical impact assessment framework

Unresolved Issues

How to effectively operationalize ethical principles in AI development and deployment

How to ensure meaningful inclusion of civil society and Global South voices in AI governance processes

How to address varying levels of AI readiness across different countries and regions

How to reconcile differing interpretations and applications of ethical principles across contexts

Suggested Compromises

Balancing prescriptive regulatory approaches with more flexible principles-based approaches to AI governance

Using open source AI as a way to enhance both innovation and ethical safeguards

Combining global ethical standards (like UNESCO’s recommendation) with context-specific implementations

Thought Provoking Comments

We see two types of approaches. One of the approaches which is led by the EU or China or some of those countries where we call it a more prescriptive rules-based, risk-based approaches. And we see the EU AI law, or AI Act, that has come into place, which categorizes the AI into risk categories, and then very prescriptive rules are set for those categories with the higher risk. And then we see in the US and Singapore and a lot of other countries, which have taken a pro, I mean, so-called pro-innovative approach, where the focus is to let AI take its space of development and set the rules, which are broadly based on principles.

speaker

Ahmad Bhinder

reason

This comment provides a clear overview of the two main regulatory approaches to AI governance globally, highlighting the key differences between prescriptive and principles-based approaches.

impact

It set the stage for discussing different regulatory frameworks and their implications, prompting further exploration of how ethics can be embedded in these different approaches.

Actually, the way we approach this and our vision or perception or work in relation to open-source AI is that experts are involved. First of all, we are opening the models. When we talk about open-source AI, it means that we are opening the models to the AI community that can benefit from these AI tools or these AI models, and everyone can use it. Now, the impact of this is that when we open these models, experts can also help us identify, inspect, and mitigate some of the risks.

speaker

Amina P.

reason

This comment challenges the common perception that open-sourcing AI models could lead to more risks, instead presenting it as a collaborative approach to identifying and mitigating risks.

impact

It shifted the discussion towards the potential benefits of open collaboration in AI development and ethics, prompting consideration of how transparency can contribute to ethical AI.

I think when we talk about embedding ethics into AI or into any other system, what is very important to consider what ethics even means in the sense that ethics, in what it is, is a very subjective concept. What ethics might mean to me might be very different to what it means to somebody else.

speaker

Tejaswita Kharel

reason

This comment highlights the fundamental challenge of defining and implementing ethics in AI, pointing out the subjective nature of ethical principles.

impact

It deepened the conversation by prompting reflection on the complexities of implementing ethical AI across different cultural and societal contexts.

The readiness assessment is something that is on a macro level and really looks at the whole governance framework. The ethical impact assessment looks at one specific algorithm and looks at to what extent the specific algorithm complies with the recommendation and the principles outlined in the recommendation.

speaker

Rosanna Fanni

reason

This comment introduces concrete tools for assessing ethical AI implementation at both macro and micro levels, providing practical approaches to the challenge.

impact

It moved the discussion from theoretical considerations to practical implementation strategies, offering tangible ways to embed ethics in AI development and governance.

Overall Assessment

These key comments shaped the discussion by highlighting the complexity of implementing ethical AI across different regulatory approaches, cultural contexts, and levels of governance. They moved the conversation from abstract principles to concrete challenges and potential solutions, emphasizing the need for collaboration, transparency, and practical assessment tools. The discussion evolved from identifying the problem to exploring multifaceted approaches for embedding ethics in AI development and governance.

Follow-up Questions

How can we move from ethical principles to actionable implementation of AI governance?

speaker

Rosanna Fanni

explanation

There is a need to operationalize ethical principles and implement AI governance in practice, beyond just agreeing on high-level concepts.

How can we ensure meaningful inclusion of civil society voices in AI governance discussions, beyond tokenistic representation?

speaker

Tejaswita Kharel

explanation

Civil society participation often feels like a ‘tick box exercise’ without real influence, so more effective ways of inclusion are needed.

How can we address varying levels of AI readiness across different countries while developing global AI governance frameworks?

speaker

Ahmad Bhinder

explanation

There are diverse approaches and capabilities related to AI across countries, which creates challenges for harmonizing global governance.

How can we improve public understanding of AI systems and their implications?

speaker

Amina P.

explanation

There are often misconceptions about AI among experts and the public, highlighting a need for better education and explanation of AI systems.

How can open source AI be leveraged to enhance privacy and security?

speaker

Amina P.

explanation

Open sourcing AI models allows for collaborative risk mitigation, but the implications and best practices need further exploration.

How can we ensure ethical principles are applied consistently across different cultural and societal contexts?

speaker

Tejaswita Kharel

explanation

Ethical principles like fairness can have different interpretations in different contexts, creating challenges for global standards.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #133 Better products and policies through stakeholder engagement


Session at a Glance

Summary

This discussion focused on the importance of stakeholder engagement in developing better technology products and policies. Participants from various sectors shared insights on effective engagement strategies and challenges.

Richard Wingfield from BSR emphasized the need for companies to engage with diverse stakeholders, especially vulnerable groups, to understand potential human rights impacts of their products. He outlined a five-step approach for meaningful stakeholder engagement. Thobekile Matimbe highlighted the importance of proactive engagement with communities in Africa, suggesting platforms like the Digital Rights and Inclusion Forum for companies to connect with stakeholders.

Fiona Alexander, drawing from her government experience, stressed the value of targeted questions and political will in successful stakeholder engagement. She noted that while the process can be messy, it ultimately leads to better policies and buy-in. Charles Bradley shared Google’s approach, describing their External Expert Research Program and how it has improved product development by incorporating stakeholder feedback early in the process.

The discussion also addressed challenges, including time constraints, the fast pace of technology development, and potential disincentives for companies to engage meaningfully. Participants debated the effectiveness of regulation versus voluntary engagement, with some arguing for a combination of frameworks and impact assessments.

Overall, the panel agreed that while progress has been made in stakeholder engagement over the past decade, there is still significant room for improvement. They emphasized the need for more transparent, proactive, and meaningful engagement practices across the technology sector to ensure products and policies better respect human rights and meet community needs.

Keypoints

Major discussion points:

– The importance of meaningful stakeholder engagement in technology product and policy development

– Challenges and best practices for effective stakeholder engagement by companies and governments

– The role of regulation and other external pressures in driving responsible tech development

– Unique challenges of stakeholder engagement in the fast-moving tech sector

– The need for more proactive and inclusive engagement, especially in regions like Africa

Overall purpose/goal:

The discussion aimed to explore how stakeholder engagement can lead to better technology products and policies, sharing perspectives from industry, civil society, and former government officials on effective approaches and ongoing challenges.

Tone:

The overall tone was constructive and solution-oriented, with speakers acknowledging progress made while also highlighting areas needing improvement. There was a shift to a more critical tone when discussing ongoing shortcomings in tech company engagement practices, but the conversation remained professional and focused on identifying ways to advance meaningful stakeholder engagement.

Speakers

– Jim Prendergast: Moderator

– Richard Wingfield: Director, Technology and Human Rights, BSR

– Thobekile Matimbe: Senior Manager Partnerships and Engagements at Paradigm Initiative

– Fiona Alexander: Former official at the U.S. Department of Commerce

– Charles Bradley: Manager for trust strategy on knowledge information products at Google

Additional speakers:

– Lena Slachmuijlder: Executive Director, Digital Peacebuilding, Search for Common Ground; Co-Chair, Council on Tech and Social Cohesion

Full session report

Stakeholder Engagement in Technology Development: Challenges and Best Practices

This discussion focused on the critical importance of stakeholder engagement in developing responsible and effective technology products and policies. Participants from various sectors, including industry, civil society, and former government officials, shared insights on effective engagement strategies and ongoing challenges in the rapidly evolving tech sector.

Importance of Stakeholder Engagement

All speakers emphasised the fundamental importance of stakeholder engagement in technology development and policy-making. Richard Wingfield from BSR highlighted that stakeholder engagement is critical for responsible business practices, referencing the UN Guiding Principles on Business and Human Rights as a key framework. He also stressed the importance of prioritizing engagement with communities most likely to be at risk. Charles Bradley of Google noted that proactive stakeholder engagement builds trust and improves products, while Thobekile Matimbe stressed the importance of meeting stakeholders where they are, especially in Africa. Fiona Alexander, drawing from her government experience, argued that despite taking more time, stakeholder engagement ultimately leads to better policies and products.

Challenges in Implementing Effective Stakeholder Engagement

The discussion acknowledged various challenges in implementing effective stakeholder engagement, particularly in the fast-paced technology sector. Richard Wingfield pointed out the unique challenges faced by the tech industry due to the rapid pace of development. Charles Bradley highlighted the difficulty of engaging stakeholders early in the product development process when many products may not make it to market. He also noted internal pressures and incentives that can work against meaningful engagement. Thobekile Matimbe called for more proactive and meaningful engagement from companies, especially in Africa, noting a lack of willpower from some companies to engage effectively. Fiona Alexander added that cultural differences impact approaches to engagement and regulation across different regions.

Best Practices for Stakeholder Engagement

Speakers shared several best practices for effective stakeholder engagement:

1. Richard Wingfield outlined a five-step approach toolkit developed by BSR to help companies implement stakeholder engagement.

2. Charles Bradley described Google’s “External Expert Research Program,” which integrates stakeholder input into product development through regular engagement with a panel of experts.

3. Thobekile Matimbe suggested leveraging multi-stakeholder platforms like the Digital Rights and Inclusion Forum (DRIF) for companies to connect with stakeholders in Africa.

4. Fiona Alexander emphasised the importance of setting clear goals and deadlines for engagement processes, as well as transparency, including discussing when products are not released due to stakeholder feedback.

Specific Examples of Stakeholder Engagement

Charles Bradley provided concrete examples of Google’s stakeholder engagement efforts:

1. The development of the Circle to Search feature, which involved extensive consultation with accessibility experts.

2. AI overviews for various products, created in response to stakeholder feedback requesting more transparency about AI use in Google’s services.

Role of Regulation and External Pressure

The discussion touched on the role of regulation and external pressure in driving stakeholder engagement. Richard Wingfield noted that regulation, particularly in the EU, is requiring companies to engage with stakeholders as part of their risk assessment processes. Charles Bradley explained that Google views regulation as a way to level the playing field and ensure all companies are held to high standards. He argued that proactive engagement can help companies get ahead of regulatory pressures. Fiona Alexander expressed uncertainty about the impact of recent regulations like GDPR, suggesting their effectiveness is still unclear.

Critique of Current Practices

Lena Slachmuijlder raised important questions about the effectiveness of current stakeholder engagement practices in big tech. She challenged the notion that existing approaches are sufficient, highlighting the need for more upstream testing and transparency in product development. This critique sparked a discussion about how to make stakeholder engagement more meaningful and impactful.

Unresolved Issues and Future Considerations

The discussion identified several unresolved issues and areas for future consideration:

1. Balancing the need for stakeholder engagement with the fast pace of technology development and market pressures.

2. Ensuring stakeholder engagement is truly meaningful and not just a ‘tick-box’ exercise.

3. Addressing the ‘de-incentives’ that work against thorough stakeholder engagement in some companies.

4. Assessing the effectiveness of recent regulations like GDPR and the EU AI Act in driving responsible technology development.

5. Exploring ways to combine frameworks with impact assessments, as suggested by Avri Doria.

Conclusion

The discussion underscored the critical importance of stakeholder engagement in responsible technology development while acknowledging the complex challenges involved in its implementation. While progress has been made in recognizing the value of stakeholder engagement, there remains significant room for improvement in creating more transparent, proactive, and meaningful engagement practices across the technology sector. The conversation highlighted the need for continued dialogue, innovation in engagement strategies, and a commitment to ethical product development that respects human rights and meets diverse community needs.

Session Transcript

Richard Wingfield: …and rights, and lead our work with technology companies on how to act responsibly as a company and to build products and services that align with international human rights standards. So really pleased to be part of this conversation because stakeholder engagement is a critical part of the way we work with companies at BSR. And the approach that we take is very much in line with a number of existing frameworks and standards that exist in relation to stakeholder engagement. And for companies, whether in the technology sector or any other sector, who are looking to be responsible businesses, to build trust and confidence, to align with what being a responsible business means, one of the most critical frameworks that is used and where we draw our inspiration from is the UN Guiding Principles on Business and Human Rights. So the idea that states have human rights responsibilities is one that is well-established in international law, in various international treaties. But in the last few decades, there was increasing concern from a number of external stakeholders that businesses also should be taking a role in making sure that the human rights of people affected by their businesses were respected as well. And that resulted about 15 years ago in the endorsement by the United Nations Human Rights Council of this framework called the UN Guiding Principles on Business and Human Rights. So this is the framework that sets out what business and human rights looks like. It has various obligations that are imposed on states in terms of regulation of businesses. It imposes responsibilities on businesses to respect human rights. And it also imposes expectations as to how individuals who have been adversely affected can seek a remedy for any harm that has been suffered.
And the most critical part of the UN guiding principles as a framework when it comes to businesses is that pillar that is specifically about how businesses should respect human rights. Now, the reason why I’m sort of mentioning this framework in a conversation around stakeholder engagement is because the UN guiding principles on business and human rights make regular and explicit recognition of the importance of stakeholder engagement and meaningful stakeholder engagement when it comes to companies behaving responsibly. And this exists in a number of different aspects. One of the things that we do a lot of work with companies at BSR is to try to think through the way that companies have human rights impacts at all. And those can be risks to human rights. So for example, the way the use or misuse of a particular technology might cause harm to somebody, perhaps restrictions on their freedom of expression or impacts upon their rights to privacy. We also look at the way that companies can advance human rights, the way that different technologies can be developed and used in ways which advances societal goals, for example, supporting freedom of expression or enabling education or improving healthcare. So looking at the way that the actual technological products and services can sort of both improve but also create risks to human rights. But we also look at the way that company’s own policies are also relevant here. And obviously company’s policies are pretty instrumental in the way that technologies are designed, the way they are used. And these can be everything from a company’s AI principles, which might govern the way that it develops and uses AI. If we were looking at social media or online platforms, the rules that they impose as to how people can and cannot use the platform in different ways. So there are a range of different ways that companies can sort of have impacts upon human rights. 
And what the guiding principles say is that in trying to understand those impacts, you need to speak to the people who are actually ultimately affected. So when we’re working with a company at BSR we put a huge amount of effort into, alongside the company, talking to and working with stakeholders to understand the risks that might be connected to a particular company, so what’s happening in practice in different parts of the world and with different communities, the opportunities that can be provided, so the way that different technologies are being used in ways that can create benefit for communities, but also the company’s own policies, so the way that the rules that the company sets relating to how they develop and use technology or the way that users can and cannot use those technologies, how they might also themselves be having impacts upon human rights. So what does this look like in practice? So one of the complicating aspects of technology as a sector is that so many people are affected and often the impacts of technology are global in nature, so if you’re looking at a large online platform that might be used by people across the world, potentially hundreds of millions or even billions of people across the world, that’s a huge number of people who might potentially be affected by the way that that service operates or by the rules that that company imposes, and so we really try to prioritise our stakeholder engagement with the communities that are most likely to be at risk. So we know, for example, that there are certain groups around the world who are particularly vulnerable to human rights harms. We know, for example, that persons with disabilities have historically been marginalised, may not be able to access or use technologies in the same way. We know that certain groups are vulnerable to things like hate speech or other types of harmful content online, particularly minority groups.
We know that there are certain groups who might be vulnerable to discriminatory bias when it comes to AI systems because of the lack of data that was used connected to that group when those AI systems were created. So we try to prioritise our stakeholder engagement by working with those communities and groups that are going to be particularly affected by the risk or particularly vulnerable to that risk. And so that might mean working with women’s rights organisations, it might mean working with organisations that support persons with disabilities, it might mean working with groups representing those who are vulnerable to discrimination within different societies. But we also know that the ways that technologies are used and the ways that the rules that companies impose can be felt very differently in different parts of the world. There are different cultural contexts, there are different language issues depending on the company and the primary language that it uses, there are different levels of digital literacy in different parts of the world and so familiarity with technology and the way that it can be used or misused. So we also try to make sure that we take a global approach to our engagement with stakeholders and that we talk with groups that can either speak to the experience of people in different parts of the world or in some cases we talk directly to groups in certain parts of the world where their experiences may be different from elsewhere. So that’s the kind of approach that we take to stakeholder engagement: really trying to, particularly when you have potentially hundreds of millions or billions of people who are using a platform or affected by a technology, prioritise our engagement with those groups who are going to be most vulnerable or most at risk but also make sure that we are geographically and culturally diverse so that we hear the full range of experiences and can provide recommendations that are nuanced appropriately.
What the UN Guiding Principles don’t give a lot of detail on, however, is the actual mechanics of stakeholder engagement. So yes, they talk about the importance of talking to a diverse range of stakeholders and meaningfully using what they tell you in the way that the company develops its technology or creates or modifies its rules. But it doesn’t really tell you how to do it in practice. And so we use a range of different ways, depending on the company we’re working with and the issue in question. So it can be something like organizing one-on-one interviews. So we might just simply organize a number of sort of one-on-one interviews with different organizations around the world. We ask them questions. We talk to them about their concerns. We might get into some specificity about a particular rule or a particular product, depending on the work that we’re doing in question. We might also organize workshops where we bring together a broader range of people. And sometimes that can be helpful because then you have a diversity of opinions within a room and people are able to counter each other or raise different perspectives or push back. And so you get much more of a dialogue. So sometimes we’ll use workshops or those broader interviews as a way of seeking engagement as well. We also know that there is a lot of stakeholder fatigue as an issue. So a lot of stakeholders are constantly being asked to participate in interviews and meetings. So we also try to use existing spaces where people are talking about the issues that concern them. And the IGF is a great example of that. There are other conferences around the world like RightsCon, TrustCon, the UN Forum on Business and Human Rights. So there are many existing spaces where NGOs and other stakeholders come and talk about the issues that are most important to them, including the impacts of different technologies and different technology companies.
And so quite often we will come to these events, run sessions ourselves, participate in other sessions, and use that as an opportunity of hearing directly from people on the ground. So we use a variety of different tactics and techniques to ensure that we are not only talking to a broad range of people, but also not adding additional burden and time to them, but using existing spaces wherever possible. What the UN Guiding Principles also don’t give a lot of guidance on is how you then incorporate that feedback back into company decision-making. So, and I’m sure perhaps some of our company colleagues on this call will speak to this, decision-making at a company is not a straightforward exercise. The design and the creation of new technology products, the way they’re launched, the way the policies are created and modified, these are complex processes. And so it’s not always straightforward simply to take the results of one interview or one workshop back and then to very quickly and easily make changes as a result of it. Stakeholder engagement and the feedback received will be one of a number of different sources of input into the ultimate decision-making of a company. So what we at BSR try to do is to make the feedback that we get from stakeholders as practical as possible. We try to make sure that it’s very clear exactly what stakeholders would like to see from the company and how that can be measured and assessed in time. One of the things that we also try to do is to build long-lasting relationships between companies and stakeholders as well. So simply bringing in one organisation for one interview at a point in time and then never speaking to them again does not encourage a sort of a long-standing and trusted relationship. So we often try to make sure that companies are providing updates to the stakeholders on what’s happened, involving them in later decision-making and trying to create relationships rather than something which is merely transactional.
So the UN Guiding Principles as a framework is a really helpful starting point in setting out that companies should engage with stakeholders. This should be used to understand the company’s risk profile but also where there might be opportunities as well, and ensuring that there is a diversity of opinions in terms of the range of stakeholders that you speak to and the nuance that you get from those engagements. And then at BSR we’ve tried to add a bit more practicality to that framework in terms of what those engagements look like in practice and how we make sure that they are meaningful and impactful rather than transactional. So that’s the approach that we take. Jim, maybe I’ll pass back to you for our next speaker at this point.

Jim Prendergast: Yeah, great. Thanks, Richard. So yeah, turning to our next speaker, Thobekile, I hope I’m pronouncing that correctly. You know, one of the things that Richard talked about was going to various fora where stakeholders are, and I know you’re heavily involved with DRIF, which is the Digital Rights and Inclusion Forum. Could you share with us sort of your take on this concept of stakeholder engagement and maybe any outcomes from that conference that may be relevant to our discussion today?

Thobekile Matimbe: Thank you so much. So hi, everyone. I’m Thobekile Matimbe, and I work for Paradigm Initiative, which is an organization that promotes digital rights and digital inclusion across the African continent and within the global South. I work at PIN as Senior Manager, Partnerships and Engagements. So this is a very important conversation around stakeholder engagement. And I’m happy that Richard was able to unpack the UN Guiding Principles on Business and Human Rights and what they say with regards to corporate responsibility, which is something that is critical. And one of the key things, you know, that points towards adherence with corporate responsibility is obviously stakeholder engagement, which is critical to ensuring that we have better products that are out there and that also take into consideration human rights. So as I reflect on, you know, that topic on stakeholder engagements and better products, it’s important to articulate that it is important, I think, for the private sector, within their quest for due diligence, to be able to think about what stakeholder engagement looks like. And from where I’m sitting, I think for me, one thing that I’ll echo is the importance of meaningful stakeholder engagement and not just tokenistic stakeholder engagement where, you know, it’s just sort of like ticking the boxes, but how can engagements become more and more meaningful? And thinking about it, it’s so important for companies to think about how they can meet the community where they are, as opposed to perhaps, you know, scheduled meetings that come in once off, probably as a way of transactions. I think Richard just mentioned transactional engagements, but more proactive, you know, strategies to actually meet the community where they are. And that’s what DRIF is. The Digital Rights and Inclusion Forum is a platform that Paradigm Initiative hosts annually.
And we’re looking forward to hosting the 12th edition in Lusaka, Zambia next year, from 29 April to 1 May. But what happens at DRIF, which is the acronym for the Digital Rights and Inclusion Forum, is that we have multi-stakeholder engagements where we have different actors coming into the room to discuss trends and developments in the digital rights space. And we have governments come in, we have civil society organizations, the media and, you know, technology companies as well. But we’ve not seen as many companies coming on board to engage with the community. This year alone, we held the Digital Rights and Inclusion Forum in Accra, Ghana, and we had almost 600 participants who were there, and from not just Africa, but around 40 countries that were at DRIF. And we had attendees from Africa and global South spaces as well. So it’s a really rich platform where any product designer would want to be there to be able to engage with the community, interface and discuss products. But thinking about it, the key players really, with regards to better products, these would be those who use the products, and that’s why I’m saying that it’s important for companies to think proactively about engaging with those who use their products to be able to flag what they think about when they are designing products and how they can even improve those products better, and what better place than to be in platforms where there are different stakeholders that can be able to input into the design process of technology. I would also highlight that one critical thing in the design process is obviously the do no harm principle, especially in the context of human rights and who are those who are bearing the brunt of bad products that are unleashed on the market.
It’s the users of those technologies, often marginalized groups and minority groups, whose voices can only be heard in spaces where human rights are discussed, even discussing as well persons with disabilities and what their challenges are. And this is why I think it’s important for more and more companies to find themselves in platforms where there are such conversations happening, and DRIF is one such platform. And I think one key thing when we’re looking at policies themselves, maybe community standards, for instance, if we’re looking at social media platforms, we’ll find that they come up with community standards and it’s always important to circle back to the community and say, this is what we have, is this still fit for purpose, because technology does not wait, it’s fast evolving. So it’s important as well to always have that interface with the community proactively. And I’ll give an example that for us as Paradigm Initiative just recently, we had a very interesting engagement with one telecommunications company that reached out to us after seeing one of our reports that we had done on surveillance in Africa, and they were so keen and they laid their hands on this research and they literally reached out and requested a meeting with us, which was a proactive action as opposed to a probably reactionary stakeholder engagement process where, perhaps, we would have gone to them and said, look, there’s this challenge that we’ve seen, that this has happened in this country based on your product. But they were proactive in terms of reaching out to us and saying, let’s have a conversation. And it was one of, I would say, one of our best engagements with the private sector, especially around community standards. And I think, as policies are being developed by different private sector actors, it’s important to always figure out where is the community that uses our product? Where can we get to?
How can we reach the community to get feedback on what we are churning out, so that we strengthen what we develop and put out something that is good and robust and rights-respecting, mitigating human rights impacts as well? It’s also important to reflect on some of the outputs that have come from the Digital Rights and Inclusion Forum. Every year we come up with community recommendations, and we gather these from the people who attend the Forum, from underserved communities across the global South, and they give input. And I think for the private sector, what has been clear is that need for engagement around policies and how they are developed to better strengthen security and safety. Trust and safety as well is something that is critical in the context of products and how they can better serve the users themselves. One thing I would highlight that has also come up is the importance of having policies that ensure that vulnerable groups are not left behind. So you have your human rights defenders or your media who feel that sometimes, when policies are being developed, they are not really addressing some of the lived realities that they face. So I think reflecting more on the do no harm principle is something that I really want to echo. It’s really important, and it’s actually something that should be embedded at every point of the product design process. So it’s really critical that we continue to have this conversation and also hear from colleagues within the private sector with regards to their views around proactive stakeholder engagement, as opposed to stakeholder engagements that are reactive or just a ticking of the boxes, and what they are doing to ensure that they are able to meet the community where the community is.
And something I’ll highlight even as I conclude my reflections is that due diligence, which demonstrates corporate responsibility, is important primarily as a corporate practice towards better products that in themselves respect human rights and echo the importance of mitigating human rights impacts. So I think I’ll leave it at that for now, and I’ll post in the chat the link to more about the Digital Rights and Inclusion Forum. And currently we actually have a call out for session proposals. So that’s a good opportunity for those in the private sector who want to engage around their products, or discuss more about them with the community, to consider being at the Digital Rights and Inclusion Forum in Lusaka, Zambia next year, so that we continue to have meaningful, proactive stakeholder engagements.

Jim Prendergast: Great, thank you very much. Thanks to both of you. Now we’re going to turn the perspective a little bit away from product development to policy development. You know, Fiona, who’s sitting across the table from me here in the room, you spent a long time at the Department of Commerce. I’ve known you for a long time, and you were heavily engaged in stakeholder engagement. I think the U.S. government has been a leader in that aspect. So can you share with us sort of what you found worked with stakeholder engagement when it comes to developing government policies, and maybe what didn’t?

Fiona Alexander: Sure, happy to. And let me turn one ear off. Maybe I’ll take both off. It’s hard to hear yourself when you’re talking. So thanks, Jim, for inviting me, and to everyone remotely: you’re missing a beautiful venue, so sorry you’re not here to join us in person for today’s conversation. But as Jim mentioned, I was at the Department of Commerce in the US government for about 20 years. And in terms of the conversation for today about better policy through stakeholder engagement, from my perspective, I think it’s important to note that, at least in the US government system, there are a couple of different ways and processes that are used. So for regulation, under our regulatory regimes, our legislature will pass a law, but our independent regulators or other parts of the government will actually do a lot of stakeholder engagement to produce the specifics of how a law is implemented through regulation. And we actually have a pretty prescribed process for that through the Administrative Procedure Act, where a particular agency will get an assignment, they’ll have to put draft rules out, or they’ll do a notice of proposed rulemaking. And there are pretty formulaic 45-day or 90-day stakeholder feedback periods, that kind of stuff. And that’s on the sort of regulatory side in the United States, and that’s across sectors. So it’s not just technology sectors; all of our sector regulatory approaches work that way. Where it becomes a little bit more flexible and a little bit different is with respect to broader policy setting. And the agency that I worked at in the Department of Commerce, NTIA, is a big proponent, and has been historically, of the multi-stakeholder model in places like the IGF. But we also talk about government as a convener. So there’s the idea of similarly seeking public input or stakeholder engagement on what should be the priorities and policies of your office or your administration.
Sometimes you do a public meeting, sometimes you do a notice of inquiry and ask for written feedback. And the outcome of these efforts is really government policy setting or government priority setting, and it impacts what the team does and how advocacy happens across different parts of the world, or bilateral engagement. But then there’s also government as a convener in terms of actually trying to set policy or participate in policy. And I had the great experience of being involved in and responsible for the US government’s relationship with ICANN. So I was very much involved in the IANA Stewardship Transition, which is probably one of the largest examples of a multi-stakeholder decision-making process, versus a multi-stakeholder consultation process. And in that regard, we were sort of instrumental in setting some of the key foundational principles, participating in the process, and actually evaluating it. But something that’s probably not as well known in this environment is that at the time, NTIA actually tried to deploy a sort of multi-stakeholder decision-making process domestically, and it was much more challenging, actually, than it was globally. And the example I give is that we were trying to implement some sort of baseline privacy rights without congressional legislation. And in the absence of that, we tried to convene stakeholders and actually said, okay, what should we be talking about? What do you all want to talk about? What policies do you all want to set? And I will say the very first meeting of that was very strange for a lot of people, because they were much more used to what I described at the outset, the Administrative Procedure Act, where government comes in and says, here’s the particular problem we’re trying to solve, here’s some of our initial thinking, what do you think? In this case, we were like, nope, nope. My boss at the time said we’re going to let the stakeholders decide: what did they want to focus on?
What rules did they want to set? And those processes were much more uneven. Some yielded specific, you know, voluntary codes around mobile app transparency, but a couple of those stakeholder processes actually fell apart. And, you know, it was a learning experience, I think, for the team, but it didn’t yield any actual policy outcomes, because the stakeholders themselves didn’t have a particular focus that they wanted to talk about. So again, I think when we’re talking about better policies through stakeholder engagement, some of the lessons learned, at least from my experience: depending on how you’re handling it, and setting aside again our required regulatory approach, if you’re going to try to deploy a multi-stakeholder process, or if you’re going to try to do stakeholder engagement, it’s better when you have a targeted question you’re asking people, just like when you’re developing a particular product. If you’re developing a particular policy, it helps people focus; that tends to be a little bit more useful. At least in the governmental sense, there’s got to be political will to actually want to follow this approach, because there are a lot of people who will challenge the approach, and a lot of people who, when they don’t get exactly what they want from it, will try to go around you or go to other parts of the government to get what they want. So strong commitment and political will is an important thing. You’ve also got to, as someone else mentioned as well, make sure it’s not just a check-the-box exercise. You actually have to always be talking to people. It can’t just be, okay, I have this particular problem, I’m going to talk to you now. You’ve got to build relationships, and you’ve got to sustain the relationships, and you’ve got to actually keep working with people so that you understand each other and can talk. There have also got to be enough resources.
Not just in the sense of stakeholders being able to participate, which can be a challenge if you want a broad range of stakeholders, because not everybody is resourced the same. The same is true of governments: you’ve got to actually have enough staff and enough people and resources to run these processes. And then again, I go back to, at least in my experience, where better policy through stakeholder engagement has occurred: when the questions have been a little bit more focused and the problem set has been a little bit more narrow. With a broad problem set, I think it lends itself to inertia sometimes, and it’s hard to get past some of the different competing perspectives. The other thing that helps is having a deadline. A clear deadline drives people to particular outcomes. And that was kind of my takeaway from my experiences. Maybe I’ll end there and keep the conversation going.

Jim Prendergast: Yeah. Thanks, Fiona. And, you know, as you were speaking and giving some of your best practices, I could see Charles reacting on screen, you know, deadlines and political will. And, you know, so let’s flip it back to product development. I see you reacting to a lot of what Fiona says. Why don’t you share with us some of your experiences at Google in the product development lifecycle and this engagement process that you’ve undertaken?

Charles Bradley: Absolutely, yeah. So, hi everyone. I’m Charles. I’m the manager for trust strategy on knowledge and information products here at Google. So, just a bit of context on what that means. Knowledge and information products are our Search, Maps, News, and Gemini products: anything that connects people with information, rather than our hardware or cloud work. And manager of trust strategy: well, our role in our team is to shape our products and our product strategy so that we continue to build trust with users. It’s a department that was built about three or four years ago. And a fundamental part of that is our stakeholder engagement. We built a program at Google called our external expert research program, which is all about ensuring that we get meaningful expertise into the product development lifecycle in a company that’s moving at a million miles an hour at all times. Having been on the other side of this conversation for many years now, I totally understand some of the challenges that have been raised by my fellow panelists. I was one of the stakeholders who was fatigued about being asked the same questions by different companies over and over. I was also one of the stakeholders who would come to consultations and be like, I have absolutely no idea what you’re talking about. You’ve been thinking about this question for three years, and you’re asking me to split a hair on something in 15 minutes. Maybe a bit of context would have been helpful, and you could have helped me understand the problem space a little bit more. So, I think that might be why I was hired in the first place: to try and bring a bit of that understanding from the stakeholder perspective into the product development lifecycle, which at Google is run by product managers and engineers who are trying to build and ship products to millions and billions of users.
So when we come along and say, hey, we need to be speaking to a wider range of expertise, often we get flags raised of: that’s going to slow us down, how do we get to product-market fit faster, etc. So the program was built as a way of showing that if we do this right at the beginning, our products will be more successful, and we’ll build greater trust when they launch rather than having to build that over time. And I want to talk about two examples from 2024, which has been quite an exciting year for us in this space. Firstly, Circle to Search. Circle to Search is a new feature available on Android, where on any surface on an Android device, you can long-hold the bar at the bottom and circle a bit of your screen, and that will send a search up to Google Search. Why is this useful? Well, people are finding information in many different ways. They’re looking for access to information not just by coming to Search directly anymore, but coming from different platforms. And we thought it was a great way of meeting users where they’re at. So if you’re on a video somewhere, or you’re on some other piece of content, and you want to know a bit more about that, why can’t you just circle it and off you go? Well, there were a number of key risks to launching this product, including some of the privacy risks you could imagine associated with it. So our product manager, who was leading this, was very familiar with some of these risks and forced an opening in the product development lifecycle to ensure that we went out and got expert feedback. And we got expert feedback through a number of one-on-one consultations to start with, so thinking through what Richard was talking about in terms of formats. The format of engagement has been very important to ensure that we can get direct and specific feedback from individuals as well as group feedback.
So we went out and spoke to dozens of experts in human-computer interaction, as well as privacy and human rights experts. And then, after a few one-on-one engagements with these experts, we also brought them together in a group setting. And we came back with five key themes, which actually led to amendments to the product. So the first issue that we heard was: how do you prevent unintentional feature activation or sharing of data? If you don’t want to use this feature on your phone, how can we stop it from unintentionally activating and sharing data? So we ensured that there was explicit user action to launch and actually activate the feature, rather than it being on by default. And we also provided access in the search itself to delete that search, because we wanted to make sure that people had the closest possible control over deletion. We also heard: how do we ensure that users can access the controls over this information as well? So we integrated a delete-the-last-15-minutes-of-search control, which is something that we’re trying to do more broadly across a number of our products. We understand that deleting your whole search history might not be what you want to do, but you may have searched for something that might be a present for someone, or it might be a more sensitive query, and you want to be able to quickly delete your last 15 minutes of searches. So we integrated that as a feature. Meaningful disclosure: so, what on earth is going on? How do we ensure that there’s meaningful consent to what’s happening with this product, and how do we educate users? So in the first launch of the product, we provide much more clear-language explanation of what this product is and how it works, and we provide much clearer control and consent over how we’re using your data. One risk that also came up, the fourth of the five points, was around facial recognition technology. We use visual search a lot in this. This is like our Lens product, which you may have come across before.
And people were very worried that we were going to be using biometric technologies for this. We don’t use biometric technologies. We’re using a similar-image-to-image matching service. So if a picture is available on the open web, it’s indexed, and then we’ll be able to find you a similar copy of it. But we don’t know who that person is, and we’re not taking a photo of Charles Bradley and saying, oh, I know that’s Charles Bradley, let me return you other photos of him. It’s purely on a visual match-to-match basis, and we’re explaining that to users. And then, what information are we using and what data are we storing when this feature is invoked was one of the key points as well. So the whole point of the circling part is that users can precisely select the part of their screen that they want to search for. Nothing outside of that selection is used or collected in the process. And what we’re doing here is, if it’s an image, turning that image into text, or using the text to create a search, and that text is stored as part of your search history, but no other information is stored. So we’re not taking the photo and storing it against your account or anything else; we’re just using the text that we’ve generated from it. So these were five really critical things that came up through these engagements, and I think the team had a good sense of some of these issues, but not the level of priority of some of them. And through the stakeholder engagement, we were able to more clearly develop escape hatches or solutions for users which met users’ needs: providing control front and center in the product as you’re using it, rather than buried in a setting or some account profile, which is often how products provide you with control over search history and everything else.
So it really fundamentally changed the way in which we launched this product, and it has resulted in a really good launch for us and a product that’s been used quite a lot since. That’s just one example where we’ve done a very specific product development engagement, and I think, to some of Fiona’s points, we had a very clear deadline and a very clear problem statement and scope: we were looking to launch this product, and we were looking for ways in which we could build it more sustainably, more suitably for users. Another example, which I’ll use just to show some of the other strategies that we have, is our work on AI Overviews. So now if you go to Search, you may see an AI Overview, where we generate a response to your query using generative AI, and then below that provide you with ten blue links. This is something that we launched about 14 months ago in beta in Labs, which is our beta opt-in service on Search, and have recently rolled out to over 100 markets. But when doing so, we knew that there were going to be a number of broader challenges around sensitive queries: things where there may not be a straightforward, factual answer. There, obviously, we apply our product policies to ensure that we don’t trigger an AI Overview on something that is prohibited by policy, so things like illegal activity or hate speech, etc. But there are obviously a number of gray-area queries where we could, with Google’s voice and our point of view, provide a less than suitable answer. And to do this, we didn’t really have a very clear sense of what the product strategy should be and how we should do this, because it’s such a new and evolving space. So we built a panel of experts that we are now in the second year of engaging.
And we work with them on a monthly basis, either through one-on-ones, online virtual calls, or in-person meetings, and they’re giving us much higher-level advice around the product strategy and direction, as well as providing clear guidance when we have quite specific questions to ask them about whether we should respond in a particular way, or what frameworks we should be using to train our models to respond. I think the benefit of this has been that it’s such a complicated space that, by asking an expert in one or two one-hour calls, we would be really under-utilizing their expertise. There was quite a large ramp-up to build a clear and consistent understanding amongst our experts of what our ultimate challenges were with this: how is the model actually being trained, and what different strategies do we have at our disposal within our model and product launch strategy? And then going iteratively across a number of these, looking at different verticals of sensitive queries, stack-ranking them, and working through some of those strategies has been very, very fruitful. And I know that the experts that we’ve worked with in this program have found it very rewarding, because not only can they see some of their work being directly integrated into the product and launched, and we’ve now had many billions of queries trigger Overviews, but they also get to learn about the different strategies that we’re focusing on. Some of their expert work is actually the basis of this, but they’ve never had the opportunity to integrate it within a business context. So those are two things within the program. We’ve now done about 30 studies this year, and we’re focused on a number of areas for next year. We can always do a better job, but we think we’re moving in the right direction in providing clarity over how we integrate experts into product development. I’ll pass it back to you, Jim.

Jim Prendergast: Great, Charles. Thanks a lot. You know, it’s really interesting to see how the product development lifecycle did take into account the outside expertise and feedback. I can only imagine your engineers looking at you saying, are you kidding me? You want to do this on the front end? But as you said, it probably saved time and a lot of aggravation in the long run. So for those in the room, we’re going to be moving to discussion and question and answer. We do have a couple of microphones up here. I can play Phil Donahue, for the Americans who understand that reference, and move the microphone around. There’s one up here on the table, but I’ll sort of get the conversation going. I know, Richard, you engaged with Avri Doria in the chat about a five-step approach toolkit that you’ve developed. For those who aren’t in the chat, do you want to give a quick overview of what that is and how folks might be able to access it?

Richard Wingfield: Yes, absolutely. So the approach is linked to in the chat, but you can also find it by using the search engine of your choice and looking up BSR stakeholder engagement five-step approach. In short, it’s a toolkit that we’ve developed which helps companies think about how to approach stakeholder engagement. The steps are, first of all, developing a strategy: basically setting out what you want to do as a company in terms of your vision for stakeholder engagement and your level of ambition, maybe reflecting on existing stakeholder engagement. This is obviously something that will vary depending on the resources of the company and what it wants to achieve. Secondly, stakeholder mapping. When I was talking earlier, I mentioned the breadth and diversity of stakeholders that exist, or people who might be affected by a company’s products or policies. So this means undertaking a mapping of which groups or organizations or individuals you need to speak to, and maybe where those relationships already exist. Third, preparation. And this is coming back to some of the points that Charles and others have made around making sure that stakeholders are able to engage in that process with confidence and with an understanding of what’s happening. So that’s everything from building those relationships, to thinking about the logistics for those meetings, to preparing and capacity building beforehand so that people can come to them and genuinely participate in a helpful way. The fourth stage is the actual engagement itself. And we provide some guidance there on how to manage difficult situations: for example, making sure that all voices are heard, and dealing with some of the barriers that might exist to stakeholder engagement relating to language or accessibility.
And then fifth, setting out an action plan as to how you’re going to use the inputs from that engagement, either to make changes or just to make sure that people are kept in the loop about what’s happening. So those are the five steps of the approach, and the toolkit is available via the link or just by searching BSR Stakeholder Engagement Five-Step Approach.

Jim Prendergast: Great, thank you, Richard. Thobekile, a question for you. You know, for many companies, Africa is an opportunistic market. It’s a growing market. It’s a place where they want to do business, but there are unique challenges to it as well. What would you say are some of the challenges that companies wanting to engage in stakeholder engagement across the continent might face, and what are your recommendations for how they might overcome those?

Thobekile Matimbe: Thanks. Africa is a great place where there is room to engage with civil society actors on what they’re facing and what the challenges are. But I think the challenge has been really, you know, having more willpower from private sector actors to actually want to meet with the community on the ground and engage on key challenges. Like I mentioned, we host the Digital Rights and Inclusion Forum, and there are definitely companies that we know will be there, will be in the room: we’ll have Google in the room, we’ll have Meta there to engage. But we feel that there’s a whole lot of other private sector actors who we would want to be in the room. We really have had several actors that have been able to come through and engage. But I think what is important to highlight is that the environment that we operate in on the African continent is marked by repressive governments that obviously have their own calls on companies, and they might want to make certain orders on companies as well. And that’s the kind of challenging atmosphere and environment that companies face when they come to the African continent willing to engage. But I think there’s a way around it. I think that proactiveness in terms of stakeholder engagement will ensure that, even when there has been a challenge and companies have been forced to do certain things, or have not been able to respond effectively according to their policies in certain situations, they can still have a space to engage with actors on the African continent to say, what else can we do, and to support other forms of strategies that civil society actors might be using to address some of the challenges that we face. So with regards to stakeholder engagement, there’s a willing civil society space, and it’s open, because I think the ways we’ve been engaging, the formats of engagement, can always be adapted to context.
So there is room to actually engage. I think what we need to see is more willpower from private sector actors to actually meet the community where the community is.

Jim Prendergast: Great. Thank you very much. And that’s good insight and good advice. I’m going to look at Fiona, and Charles, virtually I’m going to look at you. Fiona, from your perspective as a former government official speaking to other governments, what one key piece of advice would you give governments who are looking to engage in stakeholder engagement? And Charles, yours would be: what piece of advice would you give to other private sector entities about going down this path?

Fiona Alexander: So I think I might just say that, as I listen to others speak, and Charles in particular, I think it’s easy, or even natural, right? If you’re the decision maker, whether you’re a company making a product or a government making a policy, it’s kind of natural to be like, I know best. I’m going to sit in my office and talk to my team, and I’m going to decide. And that’s just a natural, I think, human way of thinking. It’s really important, though, to take a step back and realize that even though talking to people might take more time, and if you’re doing a multi-stakeholder process it probably is a little bit messy, at the end of the day you’re going to get a better product or a better policy, and you’re going to have buy-in, if you actually take the time to talk to people in a meaningful way. And I think that’s my advice to people: actually take that step back. And I don’t have my computer in front of me, so I don’t know who’s online, but you mentioned that Avri was on, and I don’t know why, but it makes me think that maybe stakeholder engagement or participation almost needs ambassadors to make the case as to why this is actually a better way to do policy, and the better way to make a product, to actually convince people that’s the best way to do it. But I think it’s natural to be like, you know, I know best, I’m just going to make my own choice. And I think we realize that the outcome of that isn’t always the best.

Jim Prendergast: Great, Charles?

Charles Bradley: Yeah, I mean, I sort of agree with all of that. I’d go further on the knowing-best point. I think sometimes people do know that they need to do it, but there are just so many other pressures on time, and some of the skills needed to do this are missing. I think there’s a confidence issue as well with some people who are very familiar with engaging with different internal stakeholders, but not external stakeholders, and a concern about what they might hear or how they might get that feedback. I totally agree with Fiona’s point around champions. I think the smartest thing that my boss did was turn this into a formal program at Google with high visibility and structure to it, so that we could build champions underneath that program. And champions not just who are staffed to this program, but also champions in different parts of the business who have utilized the program and delivered greater products. We get all sorts of challenges from other product areas, as Jim alluded to, of: you’re going to make me do this beforehand? Why aren’t we doing this down the line, to see what actual risks or harms there are, rather than foreseeing them? And I think that once we’d got a bunch of case studies within a formal program, with a bunch of ambassadors, the inbound requests for this really started to appear. And we have a number of expert engagements at the moment that are underway, which came to us saying, oh, I really want to make sure that my product lands in the right way, and I know that you’re a team that can do that, but can also do it at pace and within the infrastructure of the business. So: internalizing it, creating a formal program, and then building champions who can drive up demand would be my advice.

Jim Prendergast: Great. Thank you. So you’ve created almost like a little cottage industry within Google on how to engage on this, so maybe a profit center someday. So turning to the audience, I know we have a question here. If anybody else has a question, let me know. I don’t have eyes behind my head, so I don’t think we have any there. To be fair, just please identify who you are. And if you have a question directed to one of our panelists, just let them know. Thanks.

Lina Slakmolder: Thank you, everybody. My name is Lina Slakmolder. I work with Search for Common Ground, an international peace building organization, but I also co-chair the Council on Tech and Social Cohesion, which brings together technologists, peace builders, academics, and policy influencers to influence tech design for social cohesion. And listening to this panel, each of you is saying things that are true, but I feel like there are some other truths that also need to be put on the table, and I’m curious to hear your thoughts about those, right? And I want to say, Charles, just dovetailing from where you left it: the rest of the industry has completely depleted trust teams. It’s extraordinary that you’ve built yours up, and that you just said that your senior leadership is actually trying to incentivize this, because the first thing I want to say is that there is actually a disincentive for this kind of engagement, even when organizations like Tobaz and others are bringing forth the harms to these companies, right? They’re basically saying it’s not going to be prioritized over profit. We’re looking for growth. We’re looking for engagement.
And to you, Richard, I wonder if you also feel like there is a real changing narrative in the sort of business and human rights space when it comes to big tech today, and that the things that are really leading to most of the changes, again, not in any way excluding, Charles, those excellent examples of how you’ve made change, but in most cases, the changes that the tech companies are making in their products are due to litigation, fear of fines, reputational damage, and things like that. And somehow, even with really good multi-stakeholder-ness, the companies are not necessarily interested in making these changes. And I’ll go one step further: in Africa, there are places where these companies are even trying to damage the reputations of organizations that are pointing out the harms of these products, right? They’re using money to fund other groups that may be saying what they want to hear, and they’re actually damaging the other organizations that are being more critical. So even with multi-stakeholder engagement, there’s something that’s going really wrong when we look at big tech. And it’s why, and I’ll end with this, we still see a number of products, whether it’s the chatbots, whether it’s the notify things, there’s a whole range of products coming out on the market each week that are not doing an upstream test on safety, that are not being transparent. And without the transparency, again, what kind of stakeholder engagement are you really looking at, right? When you ask people for consultations, you’re not subsidizing them to give you all those consultations, right? So again, I’d just love to hear from the panelists: are we recognizing that we’re at a different time here? And even with all the good five-step skills in multi-stakeholder consultations, there’s still a real issue on the table here.

Jim Prendergast: All right, who wants to go first on that one? Maybe Richard, do you want to take it from the high level?

Richard Wingfield: Yeah, I’m happy to. I’m hesitant to generalize too much by saying that all technology companies do or don’t do something. I think there is huge variation in terms of maturity and attitude towards the importance of being a responsible business, with some taking that responsibility a lot more seriously than others, for sure. But I can understand why there is a feeling that, overall, the sector still hasn’t done enough on this and still isn’t doing enough. And I think one of the really challenging things, and I don’t have a solution to this, is that meaningful stakeholder engagement takes a long time, and it requires organizations to be brought in at a very early stage. If you’re a company with potentially thousands of different products that might be developed, some of which will never make it to market, you often don’t know until a relatively late stage which ones are ultimately likely to launch or not. And by that point, it’s very difficult to then bring stakeholders in, unless you want to exhaust them by constantly asking them about all of the different options that there might be at every single stage. And of course, technology moves so fast. When we think about generative AI and the rush for companies to make sure that they are leading on this new technology, bringing in stakeholder engagement slows the process down. That’s not to say that we shouldn’t do it. But I’m just saying that the technology sector faces unique challenges when it comes to meaningful stakeholder engagement, because it does run contrary to a number of other business interests, potentially. So I think there are solutions to that, though none of them are silver bullets. One is regulation.
And we’re seeing more regulation, particularly in the EU, which requires companies to engage with stakeholders as part of their risk assessment processes: things like the EU’s Digital Services Act, the AI Act, and the Corporate Sustainability Due Diligence Directive. Second is to make it easier for stakeholders to become engaged. And that might mean more sectorally focused engagement. So, for example, at BSR, we’re now doing a human rights impact assessment into generative AI which covers the sector in its entirety. So not just individual companies, but working collectively to try to reduce the demands on stakeholders. But there are some of those, what you call disincentives, that are really hard to work around, for sure. So I’m not going to pretend there isn’t a problem there. But I do think that there is still huge variation in terms of the approach that different companies take, with some doing it better than others, for sure.

Jim Prendergast: Thobekile, anything to add?

Thobekile Matimbe: Thanks. I would just say that, from my earlier reflections, what I mentioned about willpower on the part of companies to engage, and not just engage but meaningfully engage, is something that we as an organization are still looking forward to experiencing more of. And more specifically, engaging where the community is, especially at community convenings, is something that we would love to see gain more traction. One thing I would say is that what we have experienced, especially with engagements with private sector companies, is that with those few that are willing to engage with us and that we engage with, it’s usually engagements that happen in side meetings, closed meetings. It’s not out there where we are engaging with the broader communities that we represent, or that we support and stand for. It’s more of: OK, who are we going to engage with on the continent? OK, there’s this organization and that organization. But proactive stakeholder engagement is looking further than that and saying, hey, if you’re reaching out to Paradigm Initiative: we are working on the African continent, we would like to meet the community, where can we meet the community? And we open it up to broader actors on the continent. I think there would be much more enriching conversations around the challenges that the communities are facing with regards to products, as well as better inputs into how to shape policies, even for big tech companies. So what we would definitely love to see is more interest in engaging, especially where the community is, meeting the broader community where it is, and not just cherry-picking those organizations so that a company can say: we were in a room, we engaged with Paradigm, tick, we’ve done our part.
But we need it to be more meaningful, and to address the concerns of the broader community on the African continent as well.

Jim Prendergast: Thank you. Charles, obviously, you can’t speak for the industry, but what’s your take from the Google standpoint?

Charles Bradley: Yeah, I mean, I’m glad that it’s been raised. And if it was very easy, and if it was all going swimmingly, we probably wouldn’t have our jobs trying to do this. I think there are two ways of seeing some of the harsher government actions over the last few years. The increase in regulation has been welcomed and is important to ensure that decision-making about the way in which products are developed and deployed to users is much more democratic and organized by nation states and regional bodies. It’s been really important to see, and we’re really encouraged by, the stable engagement that the different national governments and regional bodies have done there, as well as, as you mentioned, some of the fines. Our view of this has been that we can either continue to wait for these fines and more regulation that may or may not be fit for purpose, or we can engage more with stakeholders to ensure that our products are more aligned with expectations and the values that we’re trying to embody. And that’s actually been a strategy that’s got a lot of traction at a leadership level. And as you say, that’s not the case for every company, and we are very fortunate to be able to take a long-term view on this. But it has been part of the business’s DNA for a long time to engage with stakeholders and bring that expertise into product development. I think we’re just getting much sharper at doing so in a more meaningful way, i.e. actually showing impact at the product level. And there are hundreds of success stories where products have never, ever been launched because we spoke to external stakeholders and experts who gave us very clear guidance on what the risks would be, risks way beyond thresholds that we would be able to accept. But internally, we didn’t see those issues; we didn’t pattern-match them or understand the trends there.
So there are different viewpoints from different businesses. Obviously, our viewpoint is that, with the increase in harder government action in this space, we’re going to end up with a greater need for stakeholder engagement and for building trust and safety into our products, rather than the opposite, where people are racing to get things out the door to try and find product-market fit.

Jim Prendergast: Thanks to all three of you. I’m looking around the room just to see if there are any other questions. I’m not seeing any hands. Oh, Fiona.

Fiona Alexander: I might just respond a little bit to this one as well, because I think it’s important. I get the perspective that you’re bringing, but there’s no universal solution, and there’s not a single path that will fix all of these things. All companies are slightly different. All products are very different. And not all products or policies are equal in terms of their purpose and their impact. And this is why frameworks, including the one that I think Richard mentioned, can be useful, because you can talk about how to implement those frameworks, how you can incentivize action, and how to get to that. I will just say that the idea that regulation is going to solve all these problems is, I think, slightly misguided. We’ve seen a lot of regulation emanate from Brussels in the last five years, and I think it’s unclear yet what the implication of that regulation is going to be: how damaging it’s going to be, how effective it’s going to be, or if it’s going to be good. So the jury’s out on all of that. If GDPR is any example, it’s probably not going to help on the innovation side, at least coming from Europe. But again, we’ll wait and see. And I think a lot of this culturally depends on where you come from; that’s my perspective, a policy and regulatory perspective. But even, I guess, from a company engagement perspective, this gets back to ex ante versus ex post, right? Do you deal with something once it’s out and there’s a proven problem? Or do you try to map out all the possible potential issues and then decide whether to do it or not? And I think that probably goes for products as well. You have to decide what your risk factor is and what you’re willing to do and not do. And a lot of that, I think, comes from culturally where you are. Western philosophy, a U.S. approach versus a European approach, they’re very different.
They’re not the same. And I think companies have that same dynamic in where they fit in that. Thanks.

Jim Prendergast: I’ll just read out a comment that Avri put into the chat: frameworks plus impact assessments. So probably the combination of the two will yield some effective outcomes, for sure. I don’t see any questions online or in the room, so maybe just a quick sentence or two as a wrap-up to bring us to a close. I know everybody’s got busy schedules, and if I can give you 10 minutes of your time back, I’m sure you’d appreciate it. You could probably use it to get in line for the restrooms before the rest of the sessions wrap up. So let’s go to you, Charles. Why don’t you kick us off?

Charles Bradley: Yeah. I mean, this is such an important topic, and one where I think we need to move to very specific good practices and frameworks that can be used across industry. I’m really glad that Richard and the BSR team are doing that for the industry, and that we try to bring the whole industry along on this journey. It’s not going to go away. It’s going to get more and more complicated. There are going to be more unintended consequences or unforeseen utilizations of new technologies as the pace increases. And I’m excited that this continues to be a space where we can learn from each other and build some sort of common understanding of how this is done well, so that our colleagues from civil society and academia are not being asked to provide input into things that don’t go anywhere.

Jim Prendergast: Thobekile, please.

Thobekile Matimbe: Thanks. Thanks a lot. I think my last reflection is really that, going forward, we are open as Paradigm Initiative to engage, as well as to connect any product designers to the broader community on the African continent who are within our networks. And, of course, the Digital Rights and Inclusion Forum is going to be hosted in 2025, from 29 April to 1 May, in Lusaka, Zambia. It’s a multi-stakeholder platform, and we’re looking forward to having at least 800 stakeholders from the Global South in attendance. So it would be a good platform to continue those conversations, and obviously a perfect platform as well for any policy consultations or any other product launches. It’s something we look forward to, and we look forward to building lasting relationships with the private sector around human rights as well. So thank you so much for the opportunity.

Jim Prendergast: Great. Thank you. Fiona.

Fiona Alexander: I think my takeaway from all of this is that it’s important to talk to anyone and everyone about the importance of getting stakeholder feedback. We talk a lot about the successes of these processes, but the fact that sometimes it doesn’t work, or that the outcome is that you don’t release a product, we don’t talk about that, right? So I think it’s equally important to talk about why things don’t work, or, if the outcome of the stakeholder feedback is not to release the product, to make that known. And the more transparent we can all be in all of this, the better it will be for everyone.

Jim Prendergast: Thank you. And Richard, do you want to finish it off for us?

Richard Wingfield: Yeah, I just want to say as well that, although there is still so much more to do, and it’s right that expectations increase and that the demands on companies continue to push them to do better, we are a lot further advanced than we were 10, 20 years ago in terms of this issue being on the radar of companies and in terms of the sophistication of existing efforts. There’s huge variation still. There’s an awful lot more to be done. And I think some of the criticisms have been rightly called out today. But I do think it is something that companies are aware of and thinking about in a way that they weren’t 10-plus years ago. And there are opportunities there to use that, and to use other tools, to increase what we do. So I hope that things will continue to improve. But there is, as you say, still a lot more to be done.

Jim Prendergast: Great. Thank you very much. I’d like to thank everybody who found our workshop room tucked over here in the corner, and also those who joined online. To our speakers who couldn’t be here in person: you were here in spirit and on screen. Once again, thanks, everybody, for joining us, and enjoy the rest of your week.

Richard Wingfield

Speech speed

174 words per minute

Speech length

3111 words

Speech time

1071 seconds

Stakeholder engagement is critical for responsible business practices

Explanation

Richard Wingfield emphasizes the importance of stakeholder engagement for companies to act responsibly and align with international human rights standards. He highlights that the UN Guiding Principles on Business and Human Rights explicitly recognize the importance of meaningful stakeholder engagement.

Evidence

UN Guiding Principles on Business and Human Rights framework

Major Discussion Point

Importance of stakeholder engagement in technology development

Agreed with

Charles Bradley

Thobekile Matimbe

Fiona Alexander

Agreed on

Importance of stakeholder engagement

Technology sector faces unique challenges due to fast pace of development

Explanation

Richard Wingfield points out that the technology sector faces unique challenges in stakeholder engagement due to the rapid pace of development. He notes that it’s difficult to bring in stakeholders at an early stage for all potential products, especially when it’s unclear which will make it to market.

Evidence

Example of generative AI and the rush for companies to lead in new technologies

Major Discussion Point

Challenges in implementing effective stakeholder engagement

Agreed with

Charles Bradley

Thobekile Matimbe

Fiona Alexander

Agreed on

Challenges in implementing effective stakeholder engagement

Charles Bradley

Speech speed

153 words per minute

Speech length

2981 words

Speech time

1163 seconds

Proactive stakeholder engagement builds trust and improves products

Explanation

Charles Bradley argues that proactive stakeholder engagement leads to more successful products and builds greater trust upon launch. He emphasizes the importance of integrating stakeholder feedback into the product development lifecycle.

Evidence

Example of Circle2Search feature development and modifications based on stakeholder feedback

Major Discussion Point

Importance of stakeholder engagement in technology development

Agreed with

Richard Wingfield

Thobekile Matimbe

Fiona Alexander

Agreed on

Importance of stakeholder engagement

Internal pressures and incentives can work against stakeholder engagement

Explanation

Charles Bradley acknowledges that there are internal pressures and incentives within companies that can work against stakeholder engagement. He notes that product managers and engineers often prioritize getting products to market quickly.

Evidence

Mention of challenges in convincing teams to slow down for stakeholder engagement

Major Discussion Point

Challenges in implementing effective stakeholder engagement

Agreed with

Richard Wingfield

Thobekile Matimbe

Fiona Alexander

Agreed on

Challenges in implementing effective stakeholder engagement

Create formal programs with champions to drive engagement

Explanation

Charles Bradley recommends creating formal stakeholder engagement programs within companies. He suggests building champions across different parts of the business who have utilized the program and delivered better products.

Evidence

Google’s external expert research program

Major Discussion Point

Best practices for stakeholder engagement

Proactive engagement can help companies get ahead of regulatory pressures

Explanation

Charles Bradley argues that proactive stakeholder engagement can help companies anticipate and address potential regulatory issues. He suggests that this approach is preferable to waiting for fines or regulations that may not be fit for purpose.

Evidence

Mention of increasing government action and regulation in the technology sector

Major Discussion Point

Role of regulation and external pressure

Thobekile Matimbe

Speech speed

163 words per minute

Speech length

2324 words

Speech time

854 seconds

Meeting stakeholders where they are is crucial, especially in Africa

Explanation

Thobekile Matimbe emphasizes the importance of companies engaging with stakeholders in their own communities, particularly in Africa. She argues that meaningful engagement involves reaching out to broader communities rather than just select organizations.

Evidence

Digital Rights and Inclusion Forum (DRIF) as a platform for engagement

Major Discussion Point

Importance of stakeholder engagement in technology development

Agreed with

Richard Wingfield

Charles Bradley

Fiona Alexander

Agreed on

Importance of stakeholder engagement

Lack of willpower from some companies to engage meaningfully

Explanation

Thobekile Matimbe points out that there is often a lack of willpower from private sector actors to engage meaningfully with communities on the ground. She notes that many companies prefer closed meetings with select organizations rather than broader community engagement.

Evidence

Observation of limited participation from private sector at community convenings

Major Discussion Point

Challenges in implementing effective stakeholder engagement

Agreed with

Richard Wingfield

Charles Bradley

Fiona Alexander

Agreed on

Challenges in implementing effective stakeholder engagement

Differed with

Richard Wingfield

Differed on

Effectiveness of current stakeholder engagement practices

Leverage multi-stakeholder platforms like the Digital Rights and Inclusion Forum

Explanation

Thobekile Matimbe recommends using multi-stakeholder platforms like the Digital Rights and Inclusion Forum for engagement. She highlights that these platforms bring together diverse stakeholders and provide opportunities for meaningful dialogue.

Evidence

Details about the upcoming DRIF in Lusaka, Zambia

Major Discussion Point

Best practices for stakeholder engagement

Civil society continues to push for more meaningful engagement

Explanation

Thobekile Matimbe indicates that civil society organizations continue to advocate for more meaningful engagement from companies. She expresses openness to connecting product designers with broader communities in Africa.

Evidence

Mention of Paradigm Initiative’s willingness to facilitate connections

Major Discussion Point

Role of regulation and external pressure

Fiona Alexander

Speech speed

211 words per minute

Speech length

1937 words

Speech time

549 seconds

Stakeholder engagement leads to better policies and products despite taking more time

Explanation

Fiona Alexander argues that while stakeholder engagement may take more time, it ultimately leads to better policies and products. She emphasizes the importance of taking a step back and realizing the value of talking to people in a meaningful way.

Major Discussion Point

Importance of stakeholder engagement in technology development

Agreed with

Richard Wingfield

Charles Bradley

Thobekile Matimbe

Agreed on

Importance of stakeholder engagement

Cultural differences impact approaches to engagement and regulation

Explanation

Fiona Alexander points out that cultural differences influence approaches to stakeholder engagement and regulation. She notes that Western philosophy, U.S. approaches, and European approaches can differ significantly.

Evidence

Mention of differences between U.S. and European regulatory approaches

Major Discussion Point

Challenges in implementing effective stakeholder engagement

Agreed with

Richard Wingfield

Charles Bradley

Thobekile Matimbe

Agreed on

Challenges in implementing effective stakeholder engagement

Set clear goals and deadlines for engagement processes

Explanation

Fiona Alexander suggests setting clear goals and deadlines for stakeholder engagement processes. She notes that having a clear deadline drives people towards particular outcomes.

Major Discussion Point

Best practices for stakeholder engagement

Impact of recent regulations like GDPR is still unclear

Explanation

Fiona Alexander expresses uncertainty about the impact of recent regulations like GDPR. She suggests that it’s unclear whether these regulations will be damaging, effective, or beneficial for innovation.

Evidence

Reference to GDPR and recent regulations from Brussels

Major Discussion Point

Role of regulation and external pressure

Differed with

Richard Wingfield

Differed on

Role of regulation in driving stakeholder engagement

Agreements

Agreement Points

Importance of stakeholder engagement

Richard Wingfield

Charles Bradley

Thobekile Matimbe

Fiona Alexander

Stakeholder engagement is critical for responsible business practices

Proactive stakeholder engagement builds trust and improves products

Meeting stakeholders where they are is crucial, especially in Africa

Stakeholder engagement leads to better policies and products despite taking more time

All speakers emphasized the critical importance of stakeholder engagement in developing responsible and effective technology products and policies.

Challenges in implementing effective stakeholder engagement

Richard Wingfield

Charles Bradley

Thobekile Matimbe

Fiona Alexander

Technology sector faces unique challenges due to fast pace of development

Internal pressures and incentives can work against stakeholder engagement

Lack of willpower from some companies to engage meaningfully

Cultural differences impact approaches to engagement and regulation

Speakers acknowledged various challenges in implementing effective stakeholder engagement, including technological pace, internal pressures, lack of willpower, and cultural differences.

Similar Viewpoints

Both speakers emphasized the importance of structured approaches to stakeholder engagement, including formal programs, champions, clear goals, and deadlines.

Charles Bradley

Fiona Alexander

Create formal programs with champions to drive engagement

Set clear goals and deadlines for engagement processes

Both speakers stressed the importance of engaging with stakeholders in their own communities and contexts for responsible business practices.

Richard Wingfield

Thobekile Matimbe

Stakeholder engagement is critical for responsible business practices

Meeting stakeholders where they are is crucial, especially in Africa

Unexpected Consensus

Proactive engagement to address regulatory pressures

Charles Bradley

Fiona Alexander

Proactive engagement can help companies get ahead of regulatory pressures

Impact of recent regulations like GDPR is still unclear

Despite coming from different perspectives (industry and former government), both speakers agreed on the importance of proactive engagement to address regulatory pressures, while acknowledging uncertainties in the regulatory landscape.

Overall Assessment

Summary

The speakers generally agreed on the importance of stakeholder engagement in technology development and policy-making, while acknowledging various challenges in implementation. They also emphasized the need for structured approaches and proactive engagement to address regulatory pressures.

Consensus level

There was a high level of consensus among the speakers on the fundamental importance of stakeholder engagement. This consensus implies a growing recognition across sectors of the need for collaborative approaches in technology development and policy-making. However, the speakers also highlighted various challenges and nuances in implementation, suggesting that while the principle is widely accepted, practical application remains complex and context-dependent.

Differences

Different Viewpoints

Effectiveness of current stakeholder engagement practices

Richard Wingfield

Thobekile Matimbe

We are a lot further advanced than we were 10, 20 years ago in terms of this issue being one that’s on the radar of companies and on the sophistication of existing efforts.

Lack of willpower from some companies to engage meaningfully

Richard Wingfield sees progress in stakeholder engagement practices, while Thobekile Matimbe emphasizes a lack of willpower from companies to engage meaningfully, especially in Africa.

Role of regulation in driving stakeholder engagement

Richard Wingfield

Fiona Alexander

One is regulation. And we’re seeing more regulation, particularly in the EU, which requires companies to engage with stakeholders as part of their risk assessment processes

Impact of recent regulations like GDPR is still unclear

Richard Wingfield sees regulation as a potential solution to drive stakeholder engagement, while Fiona Alexander expresses uncertainty about the impact of recent regulations.

Unexpected Differences

Cultural differences in stakeholder engagement approaches

Fiona Alexander

Thobekile Matimbe

Cultural differences impact approaches to engagement and regulation

Meeting stakeholders where they are is crucial, especially in Africa

While not a direct disagreement, it’s unexpected that Fiona Alexander highlights cultural differences between Western approaches, while Thobekile Matimbe focuses specifically on the African context. This suggests a potential gap in understanding or addressing regional differences in stakeholder engagement practices.

Overall Assessment

Summary

The main areas of disagreement revolve around the effectiveness of current stakeholder engagement practices, the role of regulation in driving engagement, and the specific approaches to implementing stakeholder engagement across different cultural contexts.

Difference level

The level of disagreement among the speakers is moderate. While there is general agreement on the importance of stakeholder engagement, there are significant differences in perspectives on its current state, effectiveness, and implementation. These differences highlight the complexity of the issue and the need for continued dialogue and improvement in stakeholder engagement practices, especially in the rapidly evolving technology sector.

Partial Agreements

Both speakers agree on the importance of proactive stakeholder engagement, but differ on the specific approaches. Charles Bradley focuses on internal company processes, while Thobekile Matimbe emphasizes the need for companies to engage with broader communities in their local contexts.

Charles Bradley

Thobekile Matimbe

Proactive stakeholder engagement builds trust and improves products

Meeting stakeholders where they are is crucial, especially in Africa

Similar Viewpoints

Both speakers emphasized the importance of structured approaches to stakeholder engagement, including formal programs, champions, clear goals, and deadlines.

Charles Bradley

Fiona Alexander

Create formal programs with champions to drive engagement

Set clear goals and deadlines for engagement processes

Both speakers stressed the importance of engaging with stakeholders in their own communities and contexts for responsible business practices.

Richard Wingfield

Thobekile Matimbe

Stakeholder engagement is critical for responsible business practices

Meeting stakeholders where they are is crucial, especially in Africa

Takeaways

Key Takeaways

Stakeholder engagement is critical for responsible technology development and business practices

Proactive and meaningful stakeholder engagement leads to better products and policies, despite taking more time

The technology sector faces unique challenges in stakeholder engagement due to the fast pace of development

There is significant variation in how different companies approach and prioritize stakeholder engagement

Regulation is driving more stakeholder engagement, but is not a complete solution to addressing concerns

Creating formal programs and champions within companies can help drive more effective stakeholder engagement

Leveraging existing multi-stakeholder platforms and meeting stakeholders where they are is important, especially in regions like Africa

Resolutions and Action Items

BSR has developed a five-step approach toolkit to help companies implement stakeholder engagement

Google has created a formal program called the ‘external expert research program’ to integrate stakeholder input into product development

Paradigm Initiative invited companies to participate in the upcoming Digital Rights and Inclusion Forum in Zambia in 2025

Unresolved Issues

How to balance the need for stakeholder engagement with the fast pace of technology development and market pressures

How to ensure stakeholder engagement is truly meaningful and not just a ‘tick-box’ exercise

How to address the ‘disincentives’ that work against thorough stakeholder engagement in some companies

The effectiveness of recent regulations like GDPR and EU AI Act in driving responsible technology development

Suggested Compromises

Using sector-wide engagement processes to reduce the burden on individual stakeholders and companies

Focusing engagement efforts on the most vulnerable or at-risk communities to prioritize limited resources

Balancing proactive engagement early in product development with targeted engagement on specific issues later in the process

Thought Provoking Comments

The UN guiding principles on business and human rights make regular and explicit recognition of the importance of stakeholder engagement and meaningful stakeholder engagement when it comes to companies behaving responsibly.

speaker

Richard Wingfield

reason

This comment introduces a key framework for understanding stakeholder engagement in the context of business and human rights.

impact

It set the stage for the rest of the discussion by grounding it in an established international framework. This led to further exploration of how companies can implement stakeholder engagement in practice.

We try to prioritise our stakeholder engagement with the communities that are most likely to be at risk.

speaker

Richard Wingfield

reason

This insight highlights a strategic approach to stakeholder engagement that focuses on the most vulnerable groups.

impact

It shifted the conversation to consider how companies can identify and engage with the most relevant stakeholders, rather than trying to engage everyone equally.

I think reflecting more on the do no harm principle is something that I really want to echo. It’s something that is really important and it’s actually something that should be embedded at every point of the product design process.

speaker

Thobekile Matimbe

reason

This comment emphasizes a core ethical principle for technology companies to consider throughout product development.

impact

It broadened the discussion from just stakeholder engagement to the broader ethical considerations companies should keep in mind, leading to more discussion of responsible product development.

Where it becomes a little bit more flexible and a little bit different is with respect to broader policy setting.

speaker

Fiona Alexander

reason

This insight highlights the differences between regulatory processes and broader policy development in terms of stakeholder engagement.

impact

It added nuance to the discussion by distinguishing between different types of stakeholder engagement processes, leading to more specific examples and recommendations.

We built a panel of experts that we are now in the second year of engaging, and that we work with on a monthly basis, either through one-on-ones, online virtual calls, or in-person meetings. They give us much higher-level advice around the product strategy and direction, as well as clear guidance when we have quite specific questions to ask them, about whether we should respond in this way, or what frameworks we should be using to train our models to respond here.

speaker

Charles Bradley

reason

This comment provides a concrete example of how a major tech company is implementing ongoing stakeholder engagement in practice.

impact

It moved the discussion from theoretical frameworks to practical implementation, sparking more conversation about best practices and challenges in real-world stakeholder engagement.

Even with multi-stakeholder engagement, there’s something going really wrong when we look at big tech. And it’s why, and I’ll end with this, that we still see a number of products, whether it’s the chatbots, whether it’s the notify things, a whole range of products coming out on the market each week that are not doing an upstream test on safety, that are not being transparent.

speaker

Lena Slachmuijlder

reason

This comment challenges the effectiveness of current stakeholder engagement practices and raises important critiques of the tech industry’s approach.

impact

It significantly shifted the tone of the discussion, prompting the panelists to address criticisms and limitations of current stakeholder engagement practices in tech.

Overall Assessment

These key comments shaped the discussion by moving it from theoretical frameworks to practical implementation, highlighting the challenges and limitations of current practices, and emphasizing the importance of ethical considerations throughout the product development process. The discussion evolved from a general overview of stakeholder engagement principles to a more nuanced exploration of how these principles are (or aren’t) being applied in the tech industry, with a particular focus on the challenges faced in different global contexts and the need for more meaningful, proactive engagement with diverse stakeholders.

Follow-up Questions

How can companies better meet communities where they are for stakeholder engagement, especially in Africa?

speaker

Thobekile Matimbe

explanation

This is important to ensure more meaningful and inclusive engagement with a broader range of stakeholders, rather than just select organizations.

How can the disincentives for meaningful stakeholder engagement in the tech industry be addressed?

speaker

Lena Slachmuijlder

explanation

This is crucial to understand why some companies may not prioritize stakeholder engagement over profit and growth, and how to change this dynamic.

How can stakeholder fatigue be mitigated when companies seek input?

speaker

Richard Wingfield

explanation

Addressing this issue is important to ensure continued meaningful participation from stakeholders without overburdening them.

How can companies better incorporate stakeholder feedback into decision-making processes?

speaker

Richard Wingfield

explanation

This is critical to ensure that stakeholder engagement leads to tangible changes in products and policies.

What are effective ways to build long-lasting relationships between companies and stakeholders?

speaker

Richard Wingfield

explanation

This is important for creating trust and ensuring ongoing, meaningful engagement rather than transactional interactions.

How can companies balance the need for stakeholder engagement with the fast-paced nature of technology development?

speaker

Richard Wingfield

explanation

This is crucial for finding ways to incorporate meaningful engagement without significantly slowing down product development.

How can the impact of recent tech regulations, particularly from the EU, be assessed?

speaker

Fiona Alexander

explanation

This is important to understand the effectiveness and potential consequences of new regulations on innovation and stakeholder engagement.

How can companies be more transparent about stakeholder engagement processes and outcomes, including when products are not released?

speaker

Fiona Alexander

explanation

This transparency is crucial for building trust and demonstrating the value of stakeholder engagement.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #254 The Human Rights Impact of Underrepresented Languages in AI

Session at a Glance

Summary

This panel discussion focused on the impact of underrepresented languages in AI, particularly in large language models. The speakers highlighted how the dominance of English and Western languages in AI training data leads to bias and exclusion of other languages and cultures. They discussed how this affects human rights, socioeconomic opportunities, and cultural preservation for speakers of underrepresented languages.

Key issues raised included the poor performance of AI models in non-dominant languages, the risk of further marginalizing minority languages, and the ethical and legal implications of using AI trained on limited language data for critical applications like asylum processing. The speakers emphasized the need for more diverse, high-quality datasets and greater transparency in AI development.

Legal and policy solutions were explored, including copyright law adaptations, personality rights considerations, and international cooperation for knowledge sharing and capacity building. The panelists noted the challenges of creating universal data platforms due to commercial interests but highlighted some promising initiatives for language inclusion in AI.

The discussion also touched on the role of governments in supporting local language AI development and the complex interplay between education systems, economic incentives, and language preservation efforts. Overall, the panelists stressed the importance of inclusivity in AI as a human rights issue and called for more holistic approaches to address language representation in AI technologies.

Keypoints

Major discussion points:

– The impact of underrepresentation of languages in AI from human rights and socioeconomic perspectives

– The role of legal and ethical frameworks in enhancing AI inclusivity and language-based inclusion

– Challenges and potential solutions for creating more diverse and inclusive AI on an international level

– Government support and incentives for developing AI in local languages

Overall purpose:

The discussion aimed to explore the challenges of language underrepresentation in AI systems and datasets, and to consider potential solutions for creating more linguistically diverse and inclusive AI technologies on both national and international levels.

Tone:

The tone of the discussion was largely analytical and informative, with speakers providing in-depth explanations of complex issues. There was also an undercurrent of concern about the societal impacts of language exclusion in AI. The tone remained consistent throughout, maintaining a balance between highlighting problems and proposing potential solutions.

Speakers

– Moderator: Luis Dehnert, fellow with International Digital Policy at the German Ministry for Digital and Transport

– Nidhi Singh: Project manager at the Center for Communication Governance in the National Law University Delhi, India. Works in information technology law and policy, AI governance and ethics.

– Gustavo Fonseca Ribeiro: Lawyer from Brazil, holds a Master’s of Public Policy from Sciences Po. Specialist consultant for AI and digital transformation at UNESCO. Youth ambassador for the Internet Society in 2024.

– Kathleen Scoggin: Online moderator

Additional speakers:

– Audience member: Asked questions about government support for local language initiatives and language requirements in education

Full session report

Language Underrepresentation in AI: Impacts, Challenges, and Potential Solutions

This panel discussion, moderated by Luis Dehnert from the German Ministry for Digital and Transport, explored the critical issue of language underrepresentation in artificial intelligence (AI), particularly in large language models. The speakers, Nidhi Singh from the Center for Communication Governance in India and Gustavo Fonseca Ribeiro, a specialist consultant for AI and digital transformation at UNESCO, provided in-depth insights into the multifaceted challenges and potential solutions surrounding this topic.

Impact of Language Underrepresentation in AI

The panelists agreed that the dominance of English and Western languages in AI training data leads to significant bias and exclusion of other languages and cultures. This underrepresentation has far-reaching consequences:

1. Human Rights and Cultural Preservation: Both speakers emphasized that the exclusion of non-dominant dialects and languages affects cultural rights and threatens cultural identity preservation.

2. Socioeconomic Implications: The panelists concurred that language exclusion in AI exacerbates the digital divide, potentially limiting economic opportunities for speakers of underrepresented languages.

3. Nuanced Exclusion: Nidhi Singh highlighted that even within English, only the most common internet dialect is represented, effectively excluding many English speakers as well.

4. Educational Discrimination: Singh provided a concrete example of how language bias in AI can lead to discrimination in education, noting that non-native English speakers are more likely to be flagged for plagiarism by AI-powered detection systems.

5. Legal Implications: Ribeiro mentioned the use of AI in Afghan asylum cases, highlighting the potential for bias in critical legal decisions.

Legal and Ethical Frameworks for AI Inclusivity

The discussion emphasized the crucial role of legal and ethical frameworks in enhancing AI inclusivity:

1. Detailed Implementation: Singh stressed the need for detailed implementation guidelines beyond broad inclusivity frameworks.

2. Copyright Law Adaptations: Ribeiro discussed the potential for copyright exceptions for data mining to facilitate AI development while protecting intellectual property rights.

3. Transparency and Accountability: Both speakers agreed on the importance of transparency and accountability mechanisms in AI development.

4. Traditional Knowledge Protection: Ribeiro introduced the concept of traditional knowledge protection under international intellectual property law as a potential framework for addressing data rights of underrepresented communities.

5. Personality Rights: Ribeiro highlighted the importance of considering personality rights in AI development and data usage.

International Efforts and Challenges

The panelists explored various international efforts and challenges in building more diverse and inclusive AI:

1. State-Driven Initiatives: Singh highlighted the importance of state-driven initiatives for language inclusion in AI, such as the Karya initiative in India.

2. Knowledge Sharing: Ribeiro emphasized the role of international organizations in facilitating knowledge sharing and capacity building.

3. Accessibility Framing: Singh suggested framing language inclusion as an accessibility issue to leverage existing legal frameworks.

4. Power Balancing: Both speakers stressed the need for balancing power in international policy conversations, particularly between the global north and global majority.

5. EU Data Governance Act: Ribeiro mentioned the Data Governance Act in the European Union as an attempt to create data commons and address challenges in data sharing.

Challenges in Creating Universal Data Platforms

The discussion revealed unexpected consensus on the challenges of creating universal data platforms for AI:

1. Economic Disincentives: Singh pointed out the strong economic incentives against data sharing for private companies.

2. Financial Sustainability: Ribeiro highlighted the tradeoffs between openness and financial sustainability in data sharing initiatives.

3. Data Protection Concerns: Both speakers acknowledged personal data protection as a significant obstacle to universal data platforms.

Role of International Organizations

Ribeiro elaborated on the role of international organizations, particularly UNESCO, in addressing language underrepresentation:

1. Capacity Building: Supporting member states in developing AI strategies and policies.

2. Standard Setting: Developing ethical guidelines and recommendations for AI development.

3. Facilitating Dialogue: Creating platforms for knowledge sharing and discussion among diverse stakeholders.

Government Support and Challenges

In response to an audience question, the panelists discussed government support for local language initiatives:

1. African Initiatives: Ribeiro provided examples of government support for AI development in African countries.

2. Inclusive Decision-Making: Singh emphasized the need for more inclusive decision-making processes in government initiatives.

3. Market Forces: Both speakers acknowledged the significant role of market forces in driving AI development.

4. Educational Challenges: Singh highlighted the complex interplay between education systems, economic incentives, and language preservation efforts, noting the difficulty in balancing English proficiency with local language preservation.

Conclusion

The panel discussion underscored the critical importance of addressing language underrepresentation in AI as both a human rights issue and a key factor in equitable technological development. The speakers emphasized the need for holistic approaches that combine legal and ethical frameworks, international cooperation, government support, and innovative technical solutions. While significant challenges remain, particularly in balancing economic incentives with inclusivity goals, the discussion highlighted promising initiatives and potential pathways for creating more linguistically diverse and inclusive AI technologies on both national and international levels.

Session Transcript

Moderator: Hello, can you guys hear me? We will start with the session now, so please go to channel 1, and if you can give me a thumbs up if it’s working, that would be great. Cool. Okay. Thank you for joining our panel on the impact of underrepresented languages in AI. My name is Luis Dehnert, and I’ll be moderating today; sorry for the slight delay, we’re trying to make up for that now. I’m a fellow in International Digital Policy at the German Ministry for Digital and Transport. We heard about the AI divide already early on today, so the issue of representation and diversity is a critical subject for inclusivity in the digital age. AI is a technology that is increasingly attempting to model our reality based on training data, but that data often fails to capture reality. This is especially the case for languages. For example, there is the commonly used Common Crawl, a dataset made of nearly everything on the internet, which is often used to train large language models; yet nearly half of the data in it is in English, and it leaves out more than 8,000 languages documented by UNESCO worldwide. We are going to discuss this topic today. We are joined, as far as I know, by two speakers, whom I will now let introduce themselves. We start with Nidhi Singh. I give the floor to you to introduce yourself, please. Yeah. OK. Hi.

Nidhi Singh: Thank you so much for inviting me here today. My name is Nidhi Singh. I am a project manager at the Center for Communication Governance at the National Law University Delhi in India. I work primarily in information technology law and policy, and for about the last five years I’ve been working in AI governance and AI ethics, focusing on a global majority approach to how AI is being developed, how norms are being formed, and how it is being regulated and governed at the international stage.

Moderator: Thank you so much. Now I would like to give the floor to Gustavo Ribeiro. I think he has joined us online. Could you please introduce yourself?

Gustavo Fonseca Ribeiro: Hello, good afternoon to everyone in Riyadh. I apologize because I think my camera is malfunctioning. We were trying to fix it before we started, but I was not able. I will introduce myself and then I’ll try to leave and rejoin so I can see if the camera is working. But thank you all for joining, really appreciate your presence here. So my name is Gustavo Fonseca Ribeiro. I’m a lawyer from Brazil. I hold a Master’s of Public Policy from Sciences Po, a university based in France. And I’m also a specialist consultant for AI and digital transformation at UNESCO. Here at the Global IGF, I’m speaking in my capacity as one of the youth ambassadors for the Internet Society in the year 2024. So I’m very happy to join this meeting. Yes. Thank you, Luis.

Moderator: Thank you so much, Gustavo, for joining. Yes, you can try to rejoin with video. We’d love that. But Nidhi, maybe I start with you with the first question. So can you tell us a bit… What are the impacts of under-representation of languages in AI from a human rights and also socioeconomic perspective?

Nidhi Singh: Yeah, thank you so much for the question. I think we broadly touched on this in the introduction as well, but there are a lot of concerns around bias and inclusivity. Before we get into that, I just want to talk about which languages AI models are being trained on: it’s not even just that resource-heavy languages like English are being adopted; only specific dialects of English are. So it’s not even the English that we all speak, especially for non-native speakers, that goes into the model. We’re not part of the majority in either case. Even if you do speak English, it’s not your dialect of English that goes in; only the version of English most commonly present on the internet is being trained on. So in a sense, everybody is being excluded. As for what follows from this, there are a couple of use cases I wanted to bring up before we get into deeper discussion, because there are very real-world consequences. As generative AI has taken off, universities have started using models to check if generative AI is being used, if students are cheating, if they’re using generative AI to turn in their homework or to write their papers. As a non-native speaker of English, even if you speak English with a high degree of proficiency, you are far more likely to be flagged for plagiarism, because these tools are actually developed only for native speakers of a certain dialect of English. So that’s one thing. The other is AI-driven translation software, which is increasingly being used by the state in welfare contexts as well. It is also not working well for low-resource languages. So that’s another concern.
So what happens is that, as the part of the internet which does not speak the majority language or the majority dialect, you are already not part of the majority, and as the digital divide increases, language becomes a further barrier. If generative AI also only generates in the predominant dialect, there is a very decent chance that in a few years the internet will be filled with just this one dialect of English and a few resource-heavy languages, and all of the other languages will increasingly be removed from the internet. Another thing you can see is that more and more of the content coming up on the internet is generative AI content, so as that gets collected for further training, it will just be the same one language getting repeated, and your native dialects, the way that you speak, and your cultural identity on the internet will slowly be lost. So I think there are a lot of implications when generative AI models, typically ones focusing on prompt-based answers, have such a big problem with how they have been trained in terms of language.

Moderator: OK, thank you very much for the answer. I would like to give the same answer to the same question to Gustavo. Could you please share your view on that, please?

Gustavo Fonseca Ribeiro: Luis, can you repeat the question, please? Because I was resetting my camera while you asked it.

Moderator: I was asking, what are the impacts of under-representation of languages in AI from a human rights and also a socioeconomic perspective?

Gustavo Fonseca Ribeiro: Of course. That is quite interesting. When you think of languages in artificial intelligence, the first thought that comes to mind in terms of human rights is cultural rights. If we look at the international human rights covenants, particularly the International Covenant on Economic, Social and Cultural Rights, there are, broadly speaking, three cultural rights protected under international human rights law: the right of access to culture; the right of a society or people to steer their own scientific progress; and a third that relates to intellectual property. So in terms of human rights, to understand the impact of underrepresentation of languages, we have to understand that these technologies, artificial intelligence and the datasets fueling it, are being, as you mentioned, primarily developed in a Western setting, let’s say, for instance, the United States, or with European languages, with the exception perhaps of China in Asia. So when these tools are translated into other contexts, contexts that speak different languages, they are not going to perform as well. And this does affect those communities and the rights that I have just mentioned: it is going to affect how the scientific community explores this new technology, and it is also going to affect how everyday users of artificial intelligence relate culturally to the outputs of the technology. In terms of socioeconomic impacts, I would say you can think of this through supply and demand. Demand for AI technology usually arises because it can bring a lot of productivity. But again, if there is underrepresentation of a language, the people that speak that language are not going to reap the same benefits.
If you look at the major language models out there, such as ChatGPT, they perform very well in English, but they perform very poorly, for example, in African languages. So this is one socioeconomic benefit that is not going to be reaped. On the side of supply, though, we can think of opportunities, because if there is demand from local communities, there is also an opportunity for local companies to come up. And we do have some examples of this in Africa: in Ghana, you have Ghana NLP, a startup producing language models in Ghanaian languages, and by the way, there are over 50 languages in Ghana alone. In South Africa, there is Lelapa AI. And another example is Masakhane, a pan-African organization also working to advance language inclusion. So I would say those are the main impacts. Thank you.

Moderator: Gustavo, so I want to turn now to another aspect of this. So Nidhi, what is the role of legal or ethical frameworks in enhancing AI inclusivity? How can this further language-based inclusion work in your opinion? Thank you.

Nidhi Singh: So I think when it comes to AI governance, there are of course now some legal instruments coming up. Even without those, countries have their own constitutional protections and human rights protections that still apply. But primarily, a lot of AI deployment is still being governed through AI ethics: broader frameworks around which you can have AI deployment. Inclusivity is a key principle in almost all of them; the UNESCO AI ethics recommendation, even the OECD principles, all have something on inclusivity. Now, how that is to be implemented is actually a really interesting question. When you look at something like training large language models, inclusivity just dictates that you should make the model available in all languages and train it on all languages. But that’s not very helpful, because you actually need very high-quality datasets to train the models, which requires a significant amount of time and investment. Just to give you an example, we tried for fun to use ChatGPT in some of the Indian languages, maybe not the bigger ones, but some of the smaller ones like Assamese or Kannada. At some point, it will start repeating Bollywood dialogues to you, because they ran out of things to train it on, and the easiest thing they could find was Bollywood movies, so they started training large language models on that. So you’ve met the quota, basically; you’ve ticked the box that it is inclusive. But that doesn’t make any sense, because the model doesn’t actually work. So I think AI ethics makes a good framework for inclusivity, but to implement that framework you need a lot more detail attached.
Another case study I want to talk about, one with very real legal concerns, is that in the US, four in ten Afghan asylum cases relied on an AI-driven translation algorithm, a generative AI-based translation tool, and not a lot of effort had been put into how it translated Afghan languages into English. Because of that, asylum applications were being derailed. So if you are going to use these systems, that is a legal consideration. If you are going to be using generative AI in such specific but also important and critical aspects of public welfare, whether in healthcare, asylum, security, or any sort of public welfare benefits, ideally you want to make sure that it works really well across all languages. This is especially true in countries in the global majority, which generally have a large diversity of languages. Like Gustavo said, these models are typically made and trained in the global north, where they don’t have so much trouble with drastically different languages and so many dialects. So what will happen in these countries is that the people who know English, and specifically the dominant dialect of English, will probably be able to access these tools, and the people who don’t will not. That further widens the digital divide. Even in English-speaking countries, the models really only recognize the dominant dialect, so all of the other dialects, typically those used by immigrant communities in these countries, don’t get recognized either. Which means that, when you look specifically at asking questions of ChatGPT, there is a whole field now of prompt engineering, where if you phrase the questions just right, you’ll get a much better answer.
That is, again, very dependent on your knowing the dominant language and understanding how to phrase things in a very specific way. A large part of LLMs are designed to give you good answers only if you ask properly in the dominant language, so the setup does work against you. Changing that requires, at a fundamental level, a lot of investment and effort. That might sound like a private concern, but it's really not, considering that universities are using generative AI to check for cheating, and on that basis potentially ending somebody's education or branding them a plagiarist. These are uses that should have an actual framework with a much higher threshold. This isn't about using ChatGPT to ask for jokes or check the weather; these are things with very real-world consequences. So when you're deploying AI in these contexts, especially generative AI, which is so dependent on language and culture, it needs additional safeguards in place. As of right now, I think people are really only looking at technical solutions to these problems, and I don't think social and legal problems can be solved by technical solutions alone. There needs to be a more holistic approach, but there does need to be some framework in place.

Moderator: If I can come back on this: you said, of course, that legal and governance solutions should be at the forefront. But from a technical perspective, since with many languages you also have the problem of data availability, do you think technical solutions such as synthetic data generation could be a potential way to address this?
Nidhi Singh: So I think synthetic data generation is something that's gotten a lot of traction over the last couple of years, and it's gone one step further now, with super-synthetic data: data synthesized from synthetic data. All of those technical solutions work well within the areas where they've been researched. But at the end of the day, if your base dataset isn't of high quality (this is the difference between low-resource languages and what we call resource-rich languages) and you try to generate synthetic data out of it, maybe the technology will catch up eventually, but right now it just exacerbates the problems. Another problem is that languages don't typically translate directly: the way concepts are expressed in Asian languages versus European languages does tend to differ. In a lot of these LLMs, the model has some basic idea of what the words mean and just tries to translate directly, and that doesn't really work. And if you're using synthetic data and your base data wasn't good quality, you'll just end up with a lot of synthetic data replicating the same problems, which can compound further down the line. To catch that, you'd need a lot of impact assessment and a lot of transparency in the model, which is of course something we're struggling with right now, because an LLM needs such a large amount of data that if you sat down and tried to individually check every single piece of it, we'd never be able to build an LLM. So yes, you'd need technical solutions, and synthetic data could be one of them, but there's no way to be sure right now that it won't just exacerbate the problem.
So unless there's some way of being sure how this works, let's at least not use it in our justice system, our welfare delivery system, our asylum system. Until you at least put that pause in place, I think it's just going to keep worsening the problem.
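The amplification dynamic Nidhi describes, where training on synthetic data generated from a weak base corpus entrenches its gaps, can be illustrated with a toy sketch. The word-bigram counter below is a deliberately crude stand-in for a language model, and the corpus is invented for illustration; the point is that greedy generation from a small model, fed back as training data, strictly shrinks the vocabulary the retrained model knows.

```python
import collections

BASE = [
    "the river flows east",
    "the river flows west",
    "the mountain stands tall",
    "a bird sings early",
    "a bird flies south",
]


def train_bigram(corpus):
    """Count word-bigram frequencies (a toy stand-in for a language model)."""
    counts = collections.defaultdict(collections.Counter)
    for sent in corpus:
        words = sent.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts


def generate(model, start, max_len=10):
    """Greedy decoding: always emit the most frequent next word.

    This is the collapse-prone step: rarer continuations are never produced,
    so they vanish from any dataset built out of these generations.
    """
    out = [start]
    for _ in range(max_len):
        nxt = model.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)


def vocab(model):
    """All words the model knows, on either side of a bigram."""
    v = set(model)
    for counter in model.values():
        v |= set(counter)
    return v


base_model = train_bigram(BASE)
# One "synthetic" sentence per original, generated from the same start words
synthetic = [generate(base_model, s.split()[0]) for s in BASE]
synthetic_model = train_bigram(synthetic)

# The model retrained on its own generations has strictly fewer words
assert vocab(synthetic_model) < vocab(base_model)
```

Real LLM pipelines sample rather than decode greedily, but the same pressure toward the distribution's mode is what makes unaudited synthetic data risky for already under-resourced languages.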

Moderator: Thank you for that well-thought-out response. Gustavo, I'd like to come back to you. From a legal perspective, what could be other levers, so to speak, to enable more diverse datasets in AI, perhaps also thinking about copyright?

Gustavo Fonseca Ribeiro: So yeah, thank you for the question. One of the things I mentioned earlier was that one of the three international human rights affected is intellectual property. I thought this was going to become a longer conversation, which is why I didn't go into it in my first answer. So yes, one key area of law that calibrates access to data, not only language data but all types of data, is copyright law. But before going there, what is copyright, so everybody's on the same page? Copyright is basically how we assign intellectual property rights to authors. And this applies to datasets as well: when you build a dataset, you can protect the data with copyright, you then have exclusive use of it, and only you can license it. The same happens with the source material used to create datasets: creative works. Whenever we're using, for instance, newspapers, or, as Nidhi said, Bollywood movies, those are protected by copyright. So the way we handle the copyright governance of these two types of resources, the datasets and the raw materials for the datasets, is going to affect access to data. Right now we do have an opportunity for expansion and innovation, but our copyright laws are not necessarily adapted to it yet. We've seen a handful of lawsuits in the United States, for example between OpenAI and the New York Times, because OpenAI used New York Times content without authorization. So this is the context, this is the problem. But there are some solutions already out there in copyright law. For example, if you look at the European Union and the AI Act, they have what they call the data mining exception.
So originally it would not be permissible for an AI company to mine data from source materials, like Bollywood movies or content on the internet, without authorization from the copyright owner. But in the EU framework there is an exception to that if you are doing it for non-commercial purposes. So that's one way to allow for the progress of science while still limiting the shared exploitation of these resources. Second, a development we might see soon: in the US they have an exception to copyright called the fair use doctrine, which is not so binary; it applies on a case-by-case basis, based on a set of criteria, such as whether the copyrighted material is being used for educational purposes, non-commercial purposes, or research, for example. Out of some of the lawsuits we see in the US, we might see something like that emerge, but we have to wait and see. Another area, and in my opinion this one is under-explored, I would love to see a larger discussion on it: there exists, under international intellectual property law, the concept of traditional knowledge. It protects knowledge that has been passed from generation to generation, for example in traditional and indigenous communities. In the late 90s and early 2000s we saw a lot of debate on that when it came to healthcare and traditional medicines, because you would see big companies using traditional medicines that were developed by communities, with those communities not benefiting from it. Yet we have yet to see this debate in the context of artificial intelligence and data, and data has become, as they say, the new oil.
But we haven't really seen stakeholders talking very strongly about how this idea of traditional knowledge relates to this new, very valuable resource. And just to conclude, another challenge we have, which is not copyright law but is associated with it, is personality rights. Personality rights are, for example, the rights to your own likeness, your voice, your image, your face; other people can only use them if you have given consent. This is not an economic right, it's a moral right, attached to personality. It belongs to you because you are human, not because you own any property, which is, generally speaking, what happens with copyright. What that means is that it cannot easily be given away or sold in a market, for example. We've actually seen some court decisions; for example, the Bombay High Court in India has found that when an AI replicates the voice of someone who exists, that is a violation of personality rights. But what we don't know is whether taking someone's voice and using it to train an AI is in itself a violation, that is, if the AI is trained on it but doesn't necessarily replicate it. That is an area we don't know, and deciding on it would also require the law to adapt, to either allow for the expansion of data or narrow it. With that I conclude my remarks. Thank you.

Moderator: Very insightful, thank you very much. I would like to pose one last question to both of you. In my opinion, we can observe that this problem of limited language inclusion in AI leads to efforts on a national or regional level, where countries and regions build their own local datasets and models. Thinking more from the international perspective, what steps can we take together, within international organizations or beyond, to build more diverse AI? Nidhi, maybe you want to start?

Nidhi Singh: Thank you. And you're right, a lot of the efforts for something like this do come domestically, partly because that's maybe a better place to start, but also because states and governments have a far better incentive. You don't make AI accessible because it's financially lucrative; it may not be, depending on the community. And if it's not financially lucrative, why would somebody like OpenAI be doing it? So it's usually up to states to make this happen. Just to give an example: the large language models that have been run in regional languages in India, like Jugalbandi, were also done in partnership with the state. I think it's important to stress that this isn't a favor being done. Inclusivity is essential; access to the internet is, in fact, now considered an international human right. So if you are launching something like an LLM, which is fast trying to replace large parts of the internet, you are required to make it accessible to everybody. And like Gustavo said, if you are using public data, there is an implied expectation that you will use it for the public good. It's not that you may also do public good with it; you are training it on everybody's data, so you are required to do good with it. So I think it would be good, in international organizations, to bring this under the heading of accessibility and inclusivity, and in that sense apply the same protections we apply when we're trying to spread the internet and bring everybody online. And to some extent, even for private players, depending on how an initiative is structured, there might need to be requirements to include at least some percentage of languages within their training datasets.
But all of this will really only be possible once you have transparency and accountability mechanisms in place, because we don't actually know what data is being collected or exactly how they're training these models. Unless you have very solid transparency and accountability mechanisms to see what they're training all of these models with, I think it will be hard to push anything through. It's a very interconnected problem: you want inclusivity, and in order to have that, you need accountability and transparency. So yes, I think once we get from ethical AI principles to actually implementing them, a lot of these things will get sorted.

Moderator: Gustavo, would you like to add to that?

Gustavo Fonseca Ribeiro: Yeah, of course. So what can we do at the international level, and in international organizations in particular? Three things come to mind. The first is sharing knowledge and strategies to enhance language inclusion; countries can learn from one another. For example, in India there is a great initiative called Karya, a nonprofit with a platform that allows data workers and data annotators to work on providing information and curating datasets, and once a dataset is sold by Karya, the revenue associated with it is distributed to the workers. So here we have a case where, first, we get increased inclusion of Indian languages, and second, we manage to direct the profits to a socially beneficial purpose: helping data workers, who often don't have the best working conditions in the artificial intelligence value chain. And this platform, for example, is open source. So international organizations have the ability to bring knowledge that exists in India to other countries in similar situations, for example in Africa, or with indigenous languages in Latin America. The second is capacity building. Whatever we do with artificial intelligence, we need data, technology, and models; we often need governance as well, laws that enable development; but we always need human talent. Even the Karya example: a platform doesn't exist by itself, it needs trained people around it. So international organizations can work on capacity building, which is actually one of the priorities outlined in the Global Digital Compact when it comes to artificial intelligence. And the third, I would say, is bringing a better balance of power to conversations on international policy.
When we're speaking of digital divides in particular, there is a clear imbalance of power and of the ability to shape this conversation, for example between the global north and the global majority: wealthier countries have more resources to participate in this conversation than low-income countries. So international organizations also have the ability to provide a forum in which these different actors can talk face-to-face. Those are three potential roles. Thank you.

Moderator: Thank you. With that, I would like to open up this session to questions, either from the in-person audience or from online. I think we have an online moderator, so please let us know if there are any questions online. And please also state to whom you want to direct your question. We have a mic right here, so if you have a question, please step forward. Any questions from the online audience?

Kathleen Scoggin: We don't have any as of now, but since there aren't any, I have one to throw out to the group. There has been lots of talk about the way we use platforms to gather this data. Either in a technical sense or more of a user-interface sense, are there things you all think would be beneficial in creating a universal platform for data collection? There have been lots of different ones. How would you see that going? Thanks.

Nidhi Singh: Should I start? A universal platform for data collection is a very interesting idea. I've also heard a lot of conversations about AI commons and data commons, and the principle behind it is quite sound: you put all the data in one place so that everybody can benefit from it. In principle, that sounds good. However, and this might be a bit pessimistic of me, realistically data is the new oil. It's very unlikely that anybody who holds a lot of data would want to share it in a way that lets other people profit from it as well. Well-labeled data, good clean data for training these models, is currently one of the biggest resources we have, and there are whole economies now built around this kind of data, with data brokers. So I do think actually putting that into place would be difficult. It's also very interesting, and this is something I thought of while Gustavo was speaking: we've talked a lot about copyright, but it's quite impressive how much acceptance things like large language models have in our society right now. When something trawls the internet to collect information to train a large language model, there is a very good chance it will pick up a lot of personal data as well. And personal data protection used to be very stringent about what you can train models on. Now we're seeing an increasing trend where, because of the lucrative promise of LLMs, countries are simply saying that if you posted it on the internet and a large language model caught it, then that's just fine, as long as the output isn't directly harming your privacy, as in the Bombay High Court case, where a person's voice was directly being replicated. A lot of these protections are diluted when it comes to training the LLM itself.
In this era of people prioritizing the economic incentives you can potentially get from generative AI, I think it's very unlikely that they would agree to pool all of their data in a uniform platform. I hope that would happen, but I do think it's unlikely that people would agree to it.

Gustavo Fonseca Ribeiro: Yeah, and if I may jump in on the question as well. I find it a very interesting proposal to think about data commons. I will first speak of a challenge and then of someone who is trying to do this. The biggest challenge I see is how bringing data into the open source world, opening data up in a market of technology that is so highly competitive, affects comparative advantages, that is, market advantages. It is true that openness, intuitively speaking, leads to more access to data, and you often see this advocacy, also a lot in the development context: open the data so everybody can have access to it. But openness can also come with a trade-off in financial sustainability. If you open the data without any restrictions whatsoever, it's not that easy to have a revenue from it, and in developing contexts the socioeconomic security of companies and the people working in them is a very big priority. More importantly, if you manage to get people on board with this idea, you have to get either everyone or no one to create a data commons. Because from the moment a certain number of companies join the commons and share their data for free, the companies that stay private, keep their data closed, and already have the size of a big tech company are going to have a financial advantage over whoever opened their data. And because their model will be more profitable, it will also grow more, which could actually crowd out the common data pool. That is the challenge I see; it works a lot like a negative externality. You either get everyone on board or it's hard to implement.
The second point is that there is an attempt to implement this kind of thinking in the European Union, through the Data Governance Act. I wish I were the type of lawyer who is an expert in it; I am not. But there is this regulation at the European Union level, the Data Governance Act, coupled with the Data Act, in which they try to create pan-European pools of datasets in certain fields that are valuable, like agriculture, healthcare, and mobility data. And the way they're trying to build that is by creating something like a compulsory licensing arrangement, so the public sector can buy datasets that are of public value from private entities, and the private entities are mandated to sell them for a reasonable price. That's literally what the law says, a reasonable price. So it's somewhat of an attempt to do this while still incorporating the cost structure of developing datasets. Whether it works or not, well, we'll see. Yeah, thank you.

Moderator: Thank you. So I think we have two questions from the audience. Maybe we start with the lady in the front. Would you come to the microphone, please? And then let's quickly check whether it's on.

Audience: Can you hear me? Okay, thank you so much for the great session and presentations. I guess my question goes to both of you. In your recommendation, you spoke about the incentive coming domestically from government. And Gustavo, online, also mentioned some of the large language model initiatives in Africa, I think Lelapa AI in South Africa and Masakhane. So my first question is: do you see a lot of appetite, especially from governments, to actually support these initiatives to develop our own local languages and have them included in these models? And the second is something I've noticed from a very practical perspective. I'm originally from Zimbabwe, and for a learner to graduate and go to university, you must have passed mathematics and English; not even our local languages are included. So from an incentive perspective, why would I, as a software developer, invest in local languages when they are not useful to the students and learners who are supposed to be using these technologies? You need English to go to university, all your products should be in English, and you are making use of AI products in English. So I don't know how you see it. Maybe you have practical examples from your regions where there is direct financial incentive from government to support these initiatives, and, on the school system, whether there are going to be changes where more and more local languages are actually integrated, so that to move to the next level you must have passed at least a local language, not just English. Those are my two questions, thank you.

Nidhi Singh: …generative AI, and then eventually get to building translation software. As for the second question, that's a very complicated one, which maybe doesn't have a legal answer; it's more of a sociological one. I know that some states in my country have a three-language model, where you must study three languages in school, just because of how it works. I think this is a general problem that many multilingual countries have: the internet infrastructure, and now increasingly AI, is all really built on one common thing, which happens to be a specific dialect of English. Until you have a lot more inclusivity in the rooms where these decisions are being made, that is unlikely to change. But that is something we're trying to do through these conversations: to say that this actually doesn't make any sense. If you have a chatbot that really only recognizes English, and only one specific way of accessing a service, then that's not an accessible service; you need more human intervention, and all of these problems follow. But yes, I think that's a more sociological problem: the world generally takes a majority perspective on things, and that will really only get fixed when you have more voices in the room talking about what the experience is like.

Moderator: Okay, with the time in mind, we have roughly one minute left. Gustavo, would you like to add to that quickly?

Gustavo Fonseca Ribeiro: I think Nidhi's answer was very good. So, very quickly, on government support: yes, you can see examples. In Rwanda, the government has been quite supportive of developing datasets in Kinyarwanda, in partnership with academia and startups. And in Nigeria, you can also see the government supporting the development of a large language model in Nigerian languages. As for the second question, on education, I would say Nidhi's answer was spot on. I would add that the market is a powerful tool to drive the development of AI, and in many countries that speak many, many languages this market operates in English; in Kenya that is the case, in Uganda as well. But you also have other purposes: public services, for example, citizens' access to welfare, and research, which doesn't necessarily have to have a commercial purpose. In that context, I would say local languages are quite relevant. I hope I'm touching on the question. There was also a question in the chat about the opportunities and risks of localization, which goes beyond language to contextualizing the model to the local culture as well. Very quickly: on opportunities, first, it's useful; people have a demand for solutions in their local language, and those tools work better for them. Second, cultural preservation. On risks, I would point to the risks associated with AI at large: even if you're working in a local language with the local culture embedded, you still have privacy risks, you still have bias risks. Thank you, I will give back the floor.

Moderator: So I saw that there was at least one in-person question left; if Nidhi stays here, you can also ask it after the session. I would like to thank our speakers for the very interesting insights, and the audience for the good questions. I wish you a few more good sessions today, and enjoy the IGF. Thank you.


Nidhi Singh

Speech speed

180 words per minute

Speech length

3083 words

Speech time

1022 seconds

Exclusion of non-dominant dialects and languages

Explanation

AI models are primarily trained on specific dialects of English, excluding other languages and dialects. This leads to a lack of representation for non-native speakers and minority languages in AI systems.

Evidence

Example of universities using AI models to check for plagiarism, which are more likely to flag non-native English speakers.

Major Discussion Point

Impact of underrepresentation of languages in AI

Agreed with

Gustavo Fonseca Ribeiro

Agreed on

Underrepresentation of languages in AI has significant impacts

Exacerbates digital divide and loss of cultural identity

Explanation

The focus on dominant languages in AI systems widens the digital divide. It may lead to the loss of cultural identity as minority languages are increasingly removed from the internet and AI-generated content.

Evidence

Prediction that the internet may be filled with only one dialect of English in the future, crowding out other languages and dialects.

Major Discussion Point

Impact of underrepresentation of languages in AI

Need for detailed implementation of inclusivity frameworks

Explanation

While inclusivity is a key framework in AI ethics, its implementation requires more detailed guidelines. Simply making AI available in all languages is not sufficient without high-quality datasets for training.

Evidence

Example of ChatGPT repeating Bollywood dialogues when used in smaller Indian languages due to lack of proper training data.

Major Discussion Point

Legal and ethical frameworks for AI inclusivity

Agreed with

Gustavo Fonseca Ribeiro

Agreed on

Need for legal and ethical frameworks to enhance AI inclusivity

Importance of transparency and accountability mechanisms

Explanation

Implementing inclusivity in AI requires strong transparency and accountability mechanisms. Without knowing how data is collected and models are trained, it’s difficult to push for meaningful inclusivity.

Major Discussion Point

Legal and ethical frameworks for AI inclusivity

State-driven initiatives for language inclusion

Explanation

Efforts for language inclusion in AI often come from domestic or state-driven initiatives. This is because states have better incentives to make AI accessible in local languages, even when it’s not financially lucrative.

Evidence

Example of Jugalbandi, a large language model for regional languages in India, developed in partnership with the state.

Major Discussion Point

International efforts to build more diverse AI

Agreed with

Gustavo Fonseca Ribeiro

Agreed on

Government support is crucial for local language AI development

Framing language inclusion as an accessibility issue

Explanation

Language inclusion in AI should be framed as an accessibility issue, similar to internet access. This approach would apply the same protections and requirements to AI as those used to spread internet access globally.

Major Discussion Point

International efforts to build more diverse AI

Economic incentives against data sharing

Explanation

The idea of a universal platform for data collection faces challenges due to economic incentives. Data is valuable, and those who possess it are unlikely to share it freely for others to profit from.

Evidence

Comparison of data to ‘the new oil’ and mention of economies built around data and data brokers.

Major Discussion Point

Challenges in creating universal data platforms

Differed with

Gustavo Fonseca Ribeiro

Differed on

Approach to data sharing and universal platforms

Personal data protection concerns

Explanation

The development of large language models raises concerns about personal data protection. There’s an increasing trend of accepting the use of personal data for training AI models, potentially diluting privacy protections.

Major Discussion Point

Challenges in creating universal data platforms

Need for more inclusive decision-making

Explanation

The lack of inclusivity in AI and internet infrastructure stems from a lack of diversity in decision-making processes. More diverse voices are needed to address the challenges of language inclusion in AI.

Major Discussion Point

Government support and incentives for local language AI

Sociological challenges in prioritizing local languages

Explanation

The prioritization of English in education and technology is a complex sociological issue. Changing this requires addressing broader societal norms and practices beyond just technological solutions.

Evidence

Mention of the three-language model in some Indian states as an attempt to address language diversity in education.

Major Discussion Point

Government support and incentives for local language AI


Gustavo Fonseca Ribeiro

Speech speed

142 words per minute

Speech length

2756 words

Speech time

1159 seconds

Affects cultural rights and socioeconomic benefits

Explanation

Underrepresentation of languages in AI impacts cultural rights protected under international law. It also affects the socioeconomic benefits that communities can derive from AI technologies.

Evidence

Reference to the International Covenant on Economic, Social, and Cultural Rights, mentioning three protected cultural rights.

Major Discussion Point

Impact of underrepresentation of languages in AI

Agreed with

Nidhi Singh

Agreed on

Underrepresentation of languages in AI has significant impacts

Creates opportunities for local AI companies

Explanation

The lack of language representation in AI creates opportunities for local companies to develop solutions. This can lead to the emergence of startups focusing on underrepresented languages.

Evidence

Examples of Ghana NLP, Lelapa AI, and the Masakhane Foundation working on language inclusion in Africa.

Major Discussion Point

Impact of underrepresentation of languages in AI

Copyright law and exceptions for data mining

Explanation

Copyright law plays a crucial role in regulating access to data for AI training. Exceptions to copyright for data mining, such as those in the EU AI Act, can facilitate the development of more inclusive AI systems.

Evidence

Mention of the data mining exception in the EU AI Act for non-commercial purposes.

Major Discussion Point

Legal and ethical frameworks for AI inclusivity

Agreed with

Nidhi Singh

Agreed on

Need for legal and ethical frameworks to enhance AI inclusivity

Potential role of traditional knowledge protection

Explanation

The concept of traditional knowledge protection in international intellectual property law could be applied to AI and data. This could help address issues of data ownership and benefit-sharing for communities.

Evidence

Reference to debates on traditional medicines and healthcare in the late 90s and early 2000s.

Major Discussion Point

Legal and ethical frameworks for AI inclusivity

Knowledge sharing and capacity building by international organizations

Explanation

International organizations can play a role in sharing knowledge and strategies for language inclusion in AI. They can also focus on capacity building to develop the necessary human talent for AI development.

Evidence

Example of the Karya platform in India, which could be shared with other countries facing similar challenges.

Major Discussion Point

International efforts to build more diverse AI

Balancing power in international policy conversations

Explanation

International organizations can help balance power dynamics in discussions about AI and language inclusion. They can provide forums for different actors to engage in face-to-face conversations, particularly between the global north and the global majority.

Major Discussion Point

International efforts to build more diverse AI

Tradeoffs between openness and financial sustainability

Explanation

Creating open data commons for AI faces challenges related to financial sustainability. Companies may be reluctant to open their data due to competitive advantages and the need for revenue streams.

Evidence

Discussion of the European Union’s Data Governance Act and Data Act as attempts to create pan-European pools of datasets while considering cost structures.

Major Discussion Point

Challenges in creating universal data platforms

Differed with

Nidhi Singh

Differed on

Approach to data sharing and universal platforms

Examples of government support in African countries

Explanation

Some African governments are actively supporting the development of AI models in local languages. This demonstrates growing recognition of the importance of language inclusion in AI.

Evidence

Examples of government support for AI language models in Rwanda and Nigeria.

Major Discussion Point

Government support and incentives for local language AI

Agreed with

Nidhi Singh

Agreed on

Government support is crucial for local language AI development

Market forces driving AI development

Explanation

Market forces play a significant role in driving AI development, often favoring dominant languages like English. However, there are other purposes for language inclusion, such as public services and research, which may not have commercial motivations.

Evidence

Examples of English dominance in markets in countries like Kenya and Uganda.

Major Discussion Point

Government support and incentives for local language AI

Agreements

Agreement Points

Underrepresentation of languages in AI has significant impacts

Nidhi Singh

Gustavo Fonseca Ribeiro

Exclusion of non-dominant dialects and languages

Affects cultural rights and socioeconomic benefits

Both speakers agree that the underrepresentation of languages in AI has substantial negative impacts on cultural rights, socioeconomic benefits, and digital inclusion.

Need for legal and ethical frameworks to enhance AI inclusivity

Nidhi Singh

Gustavo Fonseca Ribeiro

Need for detailed implementation of inclusivity frameworks

Copyright law and exceptions for data mining

Both speakers emphasize the importance of developing and implementing legal and ethical frameworks to promote language inclusion in AI, including copyright exceptions and detailed inclusivity guidelines.

Government support is crucial for local language AI development

Nidhi Singh

Gustavo Fonseca Ribeiro

State-driven initiatives for language inclusion

Examples of government support in African countries

Both speakers highlight the importance of government support in developing AI models for local languages, citing examples from India and African countries.

Similar Viewpoints

Both speakers recognize that while the underrepresentation of languages in AI can exacerbate the digital divide, it also creates opportunities for local companies to develop solutions for underrepresented languages.

Nidhi Singh

Gustavo Fonseca Ribeiro

Exacerbates digital divide and loss of cultural identity

Creates opportunities for local AI companies

Both speakers emphasize the need for transparency, accountability, and balanced representation in AI development and policy discussions, particularly between the global north and global majority.

Nidhi Singh

Gustavo Fonseca Ribeiro

Importance of transparency and accountability mechanisms

Balancing power in international policy conversations

Unexpected Consensus

Challenges in creating universal data platforms

Nidhi Singh

Gustavo Fonseca Ribeiro

Economic incentives against data sharing

Tradeoffs between openness and financial sustainability

Both speakers unexpectedly agree on the challenges of creating universal data platforms for AI, citing economic incentives and financial sustainability as major obstacles. This consensus is significant as it highlights the complexity of balancing open data initiatives with commercial interests in AI development.

Overall Assessment

Summary

The speakers show strong agreement on the impacts of language underrepresentation in AI, the need for legal and ethical frameworks, the importance of government support, and the challenges in creating universal data platforms. They also share similar viewpoints on the digital divide, opportunities for local AI companies, and the need for transparency and balanced representation in AI development.

Consensus level

The level of consensus between the speakers is high, with agreement on most major points discussed. This high level of consensus implies a shared understanding of the challenges and potential solutions in addressing language inclusion in AI. It suggests that there is a common ground for developing policies and initiatives to promote more inclusive AI systems across different regions and languages.

Differences

Different Viewpoints

Approach to data sharing and universal platforms

Nidhi Singh

Gustavo Fonseca Ribeiro

Economic incentives against data sharing

Tradeoffs between openness and financial sustainability

While both speakers acknowledge challenges in data sharing, Nidhi Singh emphasizes the economic disincentives for companies to share valuable data, whereas Gustavo Fonseca Ribeiro focuses more on the balance between openness and financial sustainability, citing attempts like the EU’s Data Governance Act.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement were subtle and primarily focused on the approach to data sharing and the specifics of legal and regulatory frameworks for AI inclusivity.

Difference level

The level of disagreement between the speakers was relatively low. Both speakers generally agreed on the importance of language inclusion in AI and the need for legal and regulatory frameworks to support it. Their differences were mainly in the emphasis and specific approaches they suggested, rather than fundamental disagreements. This low level of disagreement suggests a general consensus on the importance of the issue and the need for action, which could be beneficial for advancing policies and initiatives in this area.

Partial Agreements

Both speakers agree on the need for legal and regulatory frameworks to enhance AI inclusivity, but they focus on different aspects. Nidhi Singh emphasizes the need for detailed implementation guidelines beyond broad inclusivity frameworks, while Gustavo Fonseca Ribeiro discusses specific legal mechanisms like copyright exceptions for data mining.

Nidhi Singh

Gustavo Fonseca Ribeiro

Need for detailed implementation of inclusivity frameworks

Copyright law and exceptions for data mining

Takeaways

Key Takeaways

Underrepresentation of languages in AI has significant impacts on cultural rights, socioeconomic benefits, and the digital divide

Legal and ethical frameworks for AI inclusivity need more detailed implementation guidelines and transparency mechanisms

International efforts are needed to build more diverse AI, including knowledge sharing, capacity building, and balancing power in policy discussions

Creating universal data platforms faces challenges due to economic incentives and data protection concerns

Government support and incentives are crucial for developing AI in local languages, but sociological challenges persist

Resolutions and Action Items

International organizations should facilitate sharing of knowledge and strategies to enhance language inclusion across countries

Capacity building initiatives should be implemented to develop human talent for AI in diverse languages

Efforts should be made to bring better balance of power to conversations on international AI policy

Unresolved Issues

How to effectively implement inclusivity frameworks in AI development

Balancing openness of data with financial sustainability and market competitiveness

Addressing the lack of incentives for developers to invest in local language AI when education systems prioritize dominant languages

How to create universal data platforms that overcome economic and privacy challenges

Suggested Compromises

Using synthetic data generation to address data availability issues for underrepresented languages, while acknowledging potential limitations

Implementing copyright exceptions for non-commercial data mining to allow AI development while protecting intellectual property rights

Creating compulsory licensing arrangements for the public sector to buy valuable datasets from private entities at reasonable prices

Thought Provoking Comments

Even if you do speak English, it’s not your dialect of English that goes in. So even within that, it’s only the version of English that’s most commonly present on the internet is something that’s being trained on. So in a sense, everybody’s sort of being excluded.

speaker

Nidhi Singh

reason

This comment highlights the nuanced issue of language bias in AI beyond just non-English languages, pointing out that even English speakers may be excluded if they don’t use the dominant dialect.

impact

It broadened the discussion from just underrepresented languages to issues of dialect and cultural expression within languages, leading to deeper exploration of inclusivity challenges.

As generative AI has gone up, universities have started using models to check if generative AI is being used, if students are cheating, if they’re using generative AI to turn in their homework or to write their papers. As a non-native speaker of English, even if you speak English with a high degree of proficiency, you are far more likely to be flagged for plagiarism.

speaker

Nidhi Singh

reason

This comment provides a concrete, real-world example of how language bias in AI can have serious consequences, especially in education.

impact

It shifted the conversation from abstract concepts to tangible impacts, prompting discussion on the ethical implications and potential discriminatory effects of AI in various sectors.

There exists this concept under international intellectual property law of traditional knowledge. It protects traditional knowledge and traditional knowledge in the sense of knowledge that has been passed from generation to generation, for example, in traditional and indigenous communities.

speaker

Gustavo Fonseca Ribeiro

reason

This comment introduces a legal concept that could potentially be applied to AI and data rights, particularly for underrepresented communities.

impact

It opened up a new avenue of discussion on how existing legal frameworks might be adapted or applied to address issues of data ownership and cultural preservation in AI development.

Universal platform for data collection is a very interesting idea. I think that I’ve also heard a lot of conversations about AI Commons and data Commons. And yeah, the principle behind it is quite sound. Because basically what you’re saying is that you put all the data in one place so that everybody can benefit from it.

speaker

Nidhi Singh

reason

This comment addresses a potential solution to the problem of language bias in AI, while also acknowledging its challenges.

impact

It prompted a deeper discussion on the practical and economic challenges of creating inclusive AI systems, balancing idealism with realism.

Overall Assessment

These key comments shaped the discussion by expanding the scope of the conversation from simply underrepresented languages to issues of dialect, cultural expression, and real-world impacts. They introduced legal and ethical considerations, highlighted practical challenges, and prompted exploration of potential solutions. The discussion evolved from identifying problems to considering complex, multifaceted approaches to addressing language bias and inclusivity in AI development.

Follow-up Questions

How can synthetic data generation be used to address the problem of limited data availability for underrepresented languages?

speaker

Moderator

explanation

This is important to explore potential technical solutions for increasing language diversity in AI training data.

How can copyright laws be adapted to balance access to data for AI development with protection of intellectual property?

speaker

Gustavo Fonseca Ribeiro

explanation

This is crucial for determining how data can be legally used to train AI models while respecting copyright.

How does the concept of traditional knowledge under international intellectual property law apply to AI and data?

speaker

Gustavo Fonseca Ribeiro

explanation

This is an underexplored area that could have implications for how indigenous and traditional knowledge is protected and used in AI development.

How can personality rights be applied to AI training data, particularly when an AI is trained on someone’s voice or likeness but doesn’t directly replicate it?

speaker

Gustavo Fonseca Ribeiro

explanation

This is an unresolved legal question that has implications for data collection and AI training practices.

What are the benefits and challenges of creating a universal platform for data collection?

speaker

Kathleen Scoggin (online moderator)

explanation

This explores potential solutions for improving data diversity and accessibility for AI development.

How much appetite is there from governments to support initiatives developing local language AI models?

speaker

Audience member

explanation

This is important for understanding the potential for government-backed efforts to increase language diversity in AI.

How can education systems be changed to incentivize the development and use of AI in local languages?

speaker

Audience member

explanation

This addresses the systemic factors that influence language representation in AI development and use.

What are the opportunities and risks of localizing AI models to specific cultures and languages?

speaker

Audience member (via chat)

explanation

This explores the broader implications of developing AI models tailored to specific linguistic and cultural contexts.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Day 0 Event #12 Tackling Misinformation with Information Literacy

Session at a Glance

Summary

This discussion focused on tackling misinformation and promoting information literacy in the digital age. The speakers, including representatives from Google and other tech policy experts, explored various strategies and challenges in addressing online misinformation.

The conversation began by highlighting how misinformation is a persistent and complex problem that has evolved, especially since the 2016 US election. Platforms have implemented various approaches, including labeling, fact-checking partnerships, and reducing the reach of potentially false content. However, these methods face challenges, as misinformation is often not entirely false and can be difficult to definitively label.

The speakers emphasized the importance of information literacy skills for users. Google has developed tools like “About this result” to help users evaluate sources and claims more easily. The discussion also touched on the impact of AI-generated content, noting that while it presents new challenges, many core information literacy principles still apply.

A key point was the need for a holistic, multi-stakeholder approach to combat misinformation. This includes efforts from platforms, governments, civil society, and users themselves. The speakers stressed that user preferences and behaviors play a significant role in exposure to misinformation, highlighting the importance of individual media consumption habits.

The discussion also covered specific issues like gender-based online violence and the effectiveness of information literacy efforts. The speakers acknowledged that while progress has been made, misinformation remains a complex issue requiring ongoing research, collaboration, and adaptation of strategies.

In conclusion, the speakers emphasized the need for continued experimentation, cross-sector collaboration, and a focus on empowering users with information literacy skills to navigate the evolving digital information landscape.

Keypoints

Major discussion points:

– Challenges of addressing misinformation, including its “sticky” nature and the difficulty of labeling content

– Google’s approach to information literacy, including tools like “About this result” and watermarking for AI-generated images

– The role of user preferences and behavior in encountering misinformation

– The need for a multi-stakeholder approach to information literacy that includes users, not just governments and tech companies

– Evolving challenges with AI-generated content and deepfakes

Overall purpose:

The goal of this discussion was to explore approaches to tackling misinformation through information literacy, focusing on strategies used by tech platforms and the challenges involved.

Tone:

The tone was informative and collaborative throughout. The speakers shared insights from their professional experiences in a constructive manner, acknowledging the complexity of the issues. There was an emphasis on the need for continued experimentation and multi-stakeholder cooperation to address misinformation challenges.

Speakers

– Jim Prendergast: Moderator, with the Galway Strategy Group

– Sarah Al-Husseini: Head of Government Affairs and Public Policy for Saudi Arabia for Google

– Katie Harbath: Founder and CEO of Anchor Change

– Zoe Darmé: Director of Trust Strategy at Google

– Audience: Various audience members who asked questions

Additional speakers:

– Lina: From Search for Common Ground and the Council on Tech and Social Cohesion

– Ian: From the Brazilian Association of Internet Service Providers

Full session report

Tackling Misinformation and Promoting Information Literacy in the Digital Age

This comprehensive discussion, opened by Jim Prendergast of the Galway Strategy Group and moderated by Sarah Al-Husseini, Head of Government Affairs and Public Policy for Saudi Arabia at Google, brought together experts from Google and the tech policy sector to explore strategies for addressing online misinformation and enhancing information literacy. The panel included Katie Harbath, Founder and CEO of Anchor Change, and Zoe Darmé, Director of Trust Strategy at Google.

Evolving Challenges of Misinformation

The session began with an interactive quiz, highlighting the difficulty in identifying misinformation. Zoe Darmé shared research from Australia suggesting people’s accuracy in spotting false information is only slightly better than a coin toss, emphasizing the need for more sophisticated approaches to combating misinformation.

Darmé aptly quoted Kate Klonick, stating, “You can’t bring logic to a feelings fight,” underscoring the emotional aspect of misinformation consumption. The discussion highlighted the persistent and complex nature of misinformation, which has evolved significantly since the 2016 US election.

Platform Strategies and Their Limitations

The speakers discussed various approaches implemented by platforms to address misinformation, including labelling, fact-checking partnerships, and reducing the reach of potentially false content. They also mentioned pre-bunking as a strategy used by platforms. However, they acknowledged the limitations of these methods, particularly given that misinformation is often not entirely false and can be difficult to definitively label.

Katie Harbath pointed out that people interpret labels and warnings differently, which can sometimes lead to unintended consequences. She also highlighted the importance of understanding the different modes people are in when consuming content online, which affects how they interact with information.

Google’s Approach to Information Literacy

Zoe Darmé detailed Google’s efforts to promote information literacy, focusing on tools like “About this result” that aim to help users evaluate sources and claims more easily. She also mentioned a new Google feature that shows whether search results are personalized or not, empowering users with more context about their search experience.

The discussion touched on Google’s development of SynthID watermarking technology for AI-generated images, though Darmé acknowledged that such technical solutions are not foolproof, especially when considering issues like screenshots that could potentially circumvent watermarking.

The Role of User Preferences and Behaviour

A key insight from the discussion was the significant role that user preferences and behaviours play in exposure to misinformation. Darmé revealed findings from a recent study on search engines showing that users often encounter misinformation when they are explicitly searching for unreliable sources, challenging assumptions about algorithmic responsibility and highlighting the need for approaches that address individual media consumption habits.

Emerging Challenges: AI-Generated Content and Deepfakes

The speakers acknowledged the evolving landscape of misinformation, particularly with the rise of AI-generated content and deepfakes. While these technologies present new challenges, the panel emphasised that many core information literacy principles still apply. They stressed the need for ongoing research and adaptation of strategies to address these emerging issues.

Darmé also discussed Google’s measures to combat involuntary synthetic pornographic imagery, including efforts to remove such content and provide support for victims.

Multi-stakeholder Approach and Collaboration

A recurring theme throughout the discussion was the necessity of a holistic, multi-stakeholder approach to combat misinformation. The speakers emphasised that effective solutions require efforts from platforms, governments, civil society organisations, and users themselves. They highlighted the importance of cross-industry collaboration, especially as AI technology continues to evolve.

However, both Harbath and Darmé addressed the challenges of implementing multi-stakeholder approaches to information literacy, including issues of coordination, resource allocation, and measuring effectiveness.

Addressing Specific Concerns

The discussion also touched on specific issues, such as health and election misinformation, with Al-Husseini mentioning targeted approaches being developed by platforms. In her closing remarks, she emphasised the importance of addressing youth populations in information literacy efforts.

Unresolved Issues and Future Directions

While the discussion provided valuable insights into current strategies and challenges, several unresolved issues emerged. These included questions about how to effectively address user preferences for unreliable sources, balancing platform intervention with concerns about censorship and free expression, and measuring the long-term effectiveness of information literacy efforts.

Conclusion

The discussion concluded with a call for ongoing collaboration and experimentation in addressing the challenges of misinformation. The speakers emphasised the importance of balancing technological solutions with user education and critical thinking skills. They acknowledged the complexity of the issue but expressed optimism that multi-stakeholder approaches and continued innovation could lead to more effective strategies for promoting information literacy and combating misinformation in the digital age.

Session Transcript

Jim Prendergast: Thank you for coming. This is the session, Tackling Misinformation with Information Literacy. My name is Jim Prendergast with the Galway Strategy Group. I’m going to help kick it off and I’m also going to help monitor the online activity. Fair warning, we want this to be a highly interactive session. You will be taking a quiz at some point. No pressure, it’s pretty easy and there are no wrong answers. We really want people to learn from this session, walk away with some new ideas, some new thinking, and by all means, we welcome your questions and interaction. I’m going to now kick it off to Sarah Al-Husseini, who’s the Head of Government Affairs and Public Policy for Saudi Arabia for Google.

Sarah Al-Husseini: Great. Thank you, Jim, and thank you everybody for joining us this afternoon. I know it’s day zero and it’s the end of the day, so those of you who have made it this far, thank you for being here. As Jim mentioned, I’m Sarah Al-Husseini, I lead Government Affairs and Public Policy for Google in Saudi and Egypt. I think this session is hugely timely with everything that’s happening on a global scale and the amount of information that is present online. I hope that you will be very engaged with our session today. We have a wonderful speaker lineup. With that being said, we’ll start with a presentation by Katie Harbath, Founder and CEO of Anchor Change, on the challenges of addressing misinformation. We’ll go over to Zoe Darmé, who is the Director of Trust Strategy at Google, who will present on Google’s approach to information literacy, and then we’ll go into a Q&A session. I’ll take the prerogative of being the moderator and the first few questions, and then hand it over to those of you in the audience who would like to engage as well. With that, I will hand it over to Katie. Katie, thank you so much for joining us, Zoe. Thank you as well. Happy to have you here.

Katie Harbath: Yeah. Thank you so much for having me, and I’m sorry I can’t be there in person. But I wanted to start today by just sharing a little bit of the history of companies working around misinformation and some of the things that they have tried and how they approach this problem. Just to sort of ground ourselves, the current iteration of working on misinformation really started after the 2016 election. Misinformation has been around for quite some time in various forms and companies have been working with it and trying to combat it for quite some time. But after the 2016 election in the United States, there was a lot of stories about initially Macedonian teenagers spreading fake news to make money. And it wasn’t until later in 2017 that we also started to realize and find the Russian Internet Research Agency ads that were on, I worked at Facebook, so not only on the Facebook platform, but many other platforms. And this is what really spurred a lot of the companies, I can speak on behalf of the work at Facebook in particular, to start adding labels and working with fact checkers around trying to combat some of this. And misinformation is not just stuff that is around elections and hot button issues. Some of the earlier stuff too is things like a celebrity died in your hometown or those type of clickbaity headlines that the companies were trying to fight. The initial focus was also very much on foreign activity, so foreign adversaries trying to influence different elections around the world. The other important thing to remember about a lot of this is that much of this work is very much focused sometimes on the behavior of these actors, not necessarily the content as well. So this means that pretending to be somebody that they’re not on platforms that require people to use their real names, if they’re coordinating with other types of accounts that they’ve created to try to amplify things.
So as we’re talking about this, it’s not just what they’re saying and if it’s false, but it’s also how they might be trying to amplify it in inauthentic ways. You can go to the next slide. There we go. So I think a couple of other things to remember too as we think about this. Misinformation is sticky and fast, which means that it can spread very, very quickly. and it’s something that very much sticks with people and it’s very hard and can be very hard to change their minds. Things can also, we find that most of the times, things are not completely false or completely true. There’s usually a kernel of truth in this with a lot of misinformation around it, which makes it a lot trickier for trying to figure out what to do because you can’t just fully label it false or true. You also have things like satire, parody, hyperbole that exist in many places that are perfectly legal types of speech and understanding the intention of the poster and what they mean for it to be and doing that at scale is something that is incredibly tricky for many companies in which to do. And overall, these platforms very much do not wanna be the arbiters of truth. They do not wanna be the ones that are making decisions of whether or not something is true or false or what the facts are because they have seen and they have been accused of the risk of censorship, whether that’s true or just perceived, but that has become a huge political problem for them, particularly in the United States, but also around the world. And also trying to, and sometimes defining subcategories of misinfo and dealing with these specifically can be a better way for platforms to develop and enforce policy. So rather than having a blanket one, you might prioritize it because health misinformation, for instance, you may have more facts and authoritative information that you can refer to on that. 
The same thing with elections: where, when, and how to vote is something election authorities have that is easier to point to than something more amorphous, where there are disagreeing opinions about what is happening. And so sometimes you’ll see companies start to parse this out by the type and topic of the content they’re seeing, in order to better figure out how to combat it and mitigate the risks. Sorry, I haven’t had enough coffee. Jim, if you can go to the next slide. So, a couple of strategies that we’ve seen companies use. Most companies do not take down fake information unless, again, it’s about some very, very specific topics; health and elections are two that I can think of. But among the other strategies we have seen companies take, one is pre-bunking: giving people a warning about the types of information they might see, the types of content that could potentially be false, or even directing them to authoritative information on sensitive topics. A lot of you may have seen that during COVID, platforms would put up a label about where you could get more information about COVID, and during election season they might point to authoritative information there. A lot of them, as I mentioned earlier, work with fact-checkers all around the world, many through the Poynter fact-checking consortium. What that means is that the platforms aren’t making the decision; they’re working with these fact-checkers, giving them a dashboard, and those fact-checkers can decide what stories they want to fact-check. They can write their piece, and if they determine that a story is false or partially false, a label will be applied to it. And then that label will take the person to that fact-check.
The other thing it will do is reduce the reach of this content, so fewer people can see it, but it doesn’t fully remove it. And then, as I’ve been mentioning, there’s the labeling component, so people can see what these fact-checkers are saying while they’re consuming these different types of content. And Jim, if you can go to the next slide. A couple of notes about labels. A lot of this work continues to involve trial and error and experimentation as platforms have been implementing it. I know it’s really easy to just say, put a label on it; that’ll help people understand that it’s not fully true, or give them more context. But unfortunately, how people interpret that, as we’ve been seeing with research, is a lot murkier. For some people, when it says altered content, does that mean it was made with AI? Edited with AI? Was it Photoshopped? There are a lot of different ways of manipulating or altering content, and not all are bad. Many people use editing software in perfectly legitimate ways. So how do you help a user distinguish between that and content that might have been nefariously edited? We find that people have many interpretations of what labels mean, and the platform might not even have the proper label or enough information to label it at all. The other thing is that when some content is labeled as false and other content is unlabeled, people will sometimes infer that the unlabeled content is true, even though that may not be the case; it may be that a fact-checker hasn’t gotten to it. And so what are we training users to think in ways that were unintended?
And so platforms are very much trying to experiment with different ways of helping to build up users’ information literacy, and I know Zoe is going to go into this, because it is not as simple as just putting a label on it; how people interpret that is very different across generations, across cultures, across many different factors. If you want to go to the next slide. The one other thing I wanted to mention is a study that Google’s Jigsaw division, along with GEMIC, did earlier this year looking at how Gen Z in particular, though I do think you can apply this more broadly, approaches information when they go online. What this study found was that there are seven different modes that people are in when they go online, and they plotted this on two axes. On the far right, you have heavy content, stuff like news and politics that weighs heavily on your mind; on the far left, more lighthearted content, think cats on Zumba, stuff like that. On the vertical axis, at the bottom, you have things that have social consequences, that affect others, where people think they need to do something; at the top, things that only affect them, so not necessarily something they feel they have to act on. What they found is that most people are in that upper left quadrant, which is the time pass and lifestyle aspiration modes. This is where, at the end of the day, they’re trying to reach emotional equilibrium; they’re just trying to zone out a little bit and relax. And when they’re in these modes, they don’t care if stuff is true or not. However, what the study found is that as they were absorbing this content over time, they did start to believe some of the things they were reading and consuming.
And what they also found is that people do still want that heavier news and information, but they want to be intentional about it. They want to know when they’re going to get it, and when they go in to get that information, they want to get it quickly, they want a summary, and then they want to get out. And so something to keep in mind as we continue this conversation over the coming years is: how can we reach people where they’re at? We also have to recognize that feelings play a huge role in trying to combat misinformation. As a mutual friend of Zoe’s and mine, Kate Klonick, has said in a recent paper, you can’t bring logic to a feelings fight. And this is something that we’re very much trying to think through and figure out when it comes to combating misinformation, because logically we think, just label it, just tell them, and what we have found is that that is not actually how the human psyche works. So I can’t remember if I’ve got one more slide or if we’re going over to Zoe.

Sarah Al-Husseini: I think that’s the final slide for you, Katie. And thank you so much for that insight. I think there are tons of approaches and strategies that can be used to safeguard users, both proactive and reactive, and we’ll get into information literacy with Zoe in a second. Just for everybody in the room, if you haven’t had a chance to get a headset, they’re on the table over here. We also have the captioning behind us, so please feel free. Great to see that a few more people have joined the room. Wonderful. And so with that, Zoe, I’ll hand over to you for Google’s approach to information literacy.

Zoe Darma: Great. Thanks so much, Sarah. And thanks, everybody. Jim did mention that we were going to start with a quiz and that there are no wrong answers, but there actually are. There are right and wrong answers for this next quiz. There are three simple questions, and I want you to basically keep track for yourself. The first question here, and folks on the chat are free to put their answers in the chat: which one of these has not been created with AI? Is it the photo on the left or the photo on the right, A or B? Not everybody has microphones in the audience, so maybe we’ll take a show of hands. You can also just keep track for yourself, because we’ll reveal at the end. And we’re getting some answers in the chat on Zoom, so thank you. Great. Now the next one, I see Jim struggling with the clicker. Which photo is more or less as it is described? Is it the photo on the left from WWF, where the claim seems to be about deforestation, or the photo on the right, which also seems to be somewhat climate related, with a warship in the Danube finally being revealed because of low water levels? Which one is more or less as it is described? Great. Next one. Now, which one of these is a real product? Is it cheeseburger Oreo or is it spicy chicken wing flavor Oreo? Hopefully neither. Yeah, that was my answer, Sarah: hopefully neither. Okay, Jim, we can advance. So, the house on the left is a real place in Poland. The post about the sunken ship is unaltered, I would say, and accurate. The post on the left from the WWF is actually a cropped photo; it’s the same photo taken on the same day, not from 2009 and 2019. And I hate to say it, but spicy chicken wing Oreos were unfortunately a real product. I think it was a marketing stunt; even still, I’d be so scared to eat it. A reviewer said the worst part of the experience was the greasy film it left behind in my mouth, that still haunts me.
So I’d love a show of hands, and maybe in the chat, to see how many people got all three correct. Anybody? I’m not seeing too many hands. And don’t feel bad, because, next slide, we are actually pretty bad at identifying misinfo when it’s presented in this way. A group of researchers from Australia found that our accuracy is only a little bit better than a coin toss. We’re not able to very easily identify whether an image is misleading or not, and we’re also not able to identify very effectively what it is about that image that’s wrong, what the salient part of the image is. This group of researchers actually tracked people’s eye movements to see if they were focusing on the part of the image that had been altered. We’re just not trained visual photo authenticity experts. And if it’s hard for us to do this even in a setting like this one, think about what Katie mentioned: when folks are just in time pass mode, they’re not always going to do a great job at this. Next slide, please. I think also in this day and age, when there’s a lot of synthetic or generated content, we’re getting caught up in the wrong question as well. As Katie mentioned, a lot of people just want us to label things: label things misinfo or not misinfo, label things generated or not generated. But “is this generated?” does not always mean the same thing as “is this trustworthy?” It really depends on the context. So on the left here, you see a quote-unquote real photo of trash in Hyde Park, and the claim is that this trash was left by climate protesters. But actually this is a very common tactic for misinfo: a real image taken out of context. This was actually a marijuana celebration for 420 day, and that makes a lot more sense of why there would be a lot of trash left over. Now, the photo on the right is just something that I created. I said, create me
AI art of Hyde Park, London, with trash. So it really depends not only on how something was created, but on how it’s being used and the context it’s being used in, with the caption and label and everything like that. Next slide, please. So we’ll still need our plain old vanilla information literacy tools. These will need to evolve, given that there is more generated and synthetic content out there, but there’s not going to be a magical technical silver bullet for generated content, just like there’s not a magic silver bullet for mis- and disinformation overall. The way that we’re thinking about these things at Google is inferred provenance, or inferred context, over here: your classic information literacy techniques, training users to think about when the image or claim first appeared, where it came from, who’s behind it, what claim they’re making, and what other sources say about that same claim. And then the tools on the right are assertive provenance tools. These can be user visible or not; they’re explicit disclosures for AI-generated content like watermarking, fingerprinting, markup metadata, and labels. Next slide, please, Jim. Thank you. So, we have set this out in a new white paper. You can scan to read it here, or if you give your email to Sarah, I can connect with you and we’ll make sure that we send you a copy. This white paper sets out how we’re thinking about both inferred context and assertive provenance, and how they both play a role in meeting the current moment around generated content, trustworthiness, and misinformation. Next slide, please. Now, what Katie talked about were tools that exist across many different platforms. I’m going to focus on some of the tools and features that we brought directly into Google Search. So first, we have About This Result.
Next to any blue link or web result on Google Search, there is a set of three dots. If you click those three dots, you get this tool, which is designed to encourage easier information literacy practices like lateral reading, basically doing more research on a given topic. It will tell you what the source says about itself, what other people say about the source, and what other people are saying about the same topic that you searched for. So let’s say there was a piece of misinfo about the King of Bahrain having a robot bodyguard. When you click on the three dots next to that, you’ll see not only information about the source, but also web results about that topic. Spoiler alert: the King of Bahrain did not have a robot bodyguard, just in case you were wondering. Next slide, please. This is just another layer of About This Result. It brings all of this information into one page, and it helps users carry out the CORE and SIFT methods. SIFT stands for Stop, Investigate the Source, Find Other Sources, and Trace the Claim. That’s really hard to expect people to do when they’re in time pass mode, so we wanted to put a tool directly into Search, just to make this as easy as possible for folks, because one of the criticisms of inferred provenance or inferred context is that it puts a lot of responsibility onto the user. Thinking about all those other modes, let’s say where it might be more important for users, like making a big financial decision, for example, we want to make sure that users have the tools they need when they really feel motivated to go that extra mile. We’ve also built a similar feature into image results. In the image viewer, you can click on a result and you’ll see three dots. This is like a supercharged reverse image search directly in the image viewer. It will tell you an image’s history, like when we, Google, first indexed the image.
Because sometimes an old image, again, will be taken out of context and go viral for a completely different reason. It will also show you an image’s metadata. That brings us, next slide, to assertive provenance. For Google’s consumer AI products, for example the models that power Gemini, or Vertex AI in Cloud, we are providing a durable watermark for content. Oftentimes that’s not user visible, but in About This Image, if that SynthID watermark is present, we’ll provide this “generated with Google AI” notice directly in the image viewer, so that you can see it was produced by one of our image generation products. Now, the reason it’s hard for Google to just do this for every image out there in the universe is the reason Katie mentioned earlier. Take the example of the Macedonian teens: they’re probably not using tools that apply watermarking. If they’re running a derivative of an open model, for example, there’s no way to force those other providers to watermark their content; there’s no motivation for the content creator in that example to use a watermark or a label. And we’re never going to have 100 percent accurate AI detectors that could take all of the information on the Internet, send it through a detector, and spit out a label or a watermark that’s accurate 100 percent of the time. So really, we need a holistic approach that involves inferred and assertive provenance, and a whole-of-society solution. Next slide, please. The last thing I’ll say is that there is a lot of talk about the role of recommendations and algorithms and how they’re designed, and whether that is what is creating or promoting or giving more reach to this misinformation that is sticky and fast, as Katie mentioned.
But a recent study, at least looking at search, and this is looking at Bing actually, shows consistent evidence that user preferences play an important role in both exposure to and engagement with unreliable sites from search. So what does this mean? Searchers are coming across misinformation when there is high user intent to find it, meaning they are searching explicitly for unreliable sources. It’s not a query for Taylor Swift that brings up misinfo about Taylor Swift; it’s when you’re searching for Taylor Swift plus the site that you like to go to, and that site may not be a reliable news site. That is really when folks are most likely to encounter misinformation in the wild on a search engine. And so we really have to focus on individual choices with these so-called navigational queries, because that’s what’s driving engagement, and it really has to do with what users are actively seeking out. That’s a bit of an uncomfortable conversation, because it goes to the question of how you get users onto a more nutritional or healthy media diet, rather than how we just label something or fact-check something. And that’s a much harder problem to solve. So I’ll stop there and turn back to Sarah.

Sarah Al-Husseini: Thank you very much, Zoe. And it’s great to see the Google tools that are helping empower people to make the best decisions and really find the best information for their decision-making and consumption. So thank you. With that, I’m going to turn over to the Q&A portion of the session. Oh, maybe Jim first. I have one online. Oh, great. Fantastic.

Zoe Darma: Wonderful. I see the question, so I’ll read it out. Does Google’s watermarking for AI-generated images, like those created with Imagen, rely on metadata? If so, can it be removed by clearing the metadata, or is it embedded directly into the image itself? That slide that y’all were on just now, with the chart, might be a helpful one to go to, but essentially the answer is no, it doesn’t rely on metadata. It does produce metadata that shows the image has been created with a Google generative AI product, but the watermark itself is embedded in the image. It is tamper resistant, I will say, rather than tamper-proof; nothing is 100% tamper-proof, but SynthID is tamper resistant. It is hard to remove from the image, and doing something like clearing the image’s metadata is not going to remove the watermark. Now, this is a little bit different from other types of provenance solutions in the past. Some other types of metadata are easy to edit using very common image editing software. IPTC metadata, for example, you can edit, and it was not designed to be a provenance tool the way we’re thinking about it now, but there are ongoing conversations happening with both C2PA and IPTC about how durable that metadata should be. Where we have metadata from SynthID or from IPTC, for example, we are including it in the image viewer the way that I just showed you in About This Image. Thank you for the question.
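To make the metadata-versus-pixels distinction concrete, here is a minimal sketch. SynthID’s actual scheme is proprietary and far more robust than this; the toy below just hides bits in the least significant bits of pixel values, to show that clearing metadata fields, the EXIF/IPTC-style tags stored alongside an image, leaves a pixel-embedded signal untouched.

```python
import numpy as np

def embed_lsb_watermark(pixels: np.ndarray, bits: list[int]) -> np.ndarray:
    """Toy watermark: write bits into the least-significant bit of the first pixels."""
    marked = pixels.copy()
    flat = marked.ravel()  # view into the copy, so writes land in `marked`
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b
    return marked

def read_lsb_watermark(pixels: np.ndarray, n: int) -> list[int]:
    return [int(v & 1) for v in pixels.ravel()[:n]]

# An "image": a pixel array plus a separate metadata dict (like EXIF/IPTC tags).
pixels = np.random.default_rng(0).integers(0, 256, size=(8, 8), dtype=np.uint8)
metadata = {"Software": "SomeImageGen", "ai_generated": "true"}

watermark = [1, 0, 1, 1, 0, 1, 0, 1]
marked = embed_lsb_watermark(pixels, watermark)

# "Clearing the metadata" removes the tag dictionary...
metadata.clear()

# ...but the signal embedded in the pixels themselves survives.
assert read_lsb_watermark(marked, len(watermark)) == watermark
```

Unlike this fragile LSB example, which any re-encode would destroy, a production watermark spreads its signal across the image so that it survives common edits; the shared property is only that it lives in the pixels, not in strippable metadata.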

Sarah Al-Husseini: Thanks so much. And maybe Katie, back over to you. So misinformation and disinformation, you mentioned are very sticky problems with big impacts on society. So there’s a lot of external pressure for platforms to do something about these issues, but also a lot of concern about platforms overreaching, potential freedom of expression issues, and so on. Can you talk about how platforms think about these challenges of misinformation and what kind of tools and approaches they have to address them?

Katie Harbeth: Yeah, absolutely. And I think this is why you’ve seen a variety of approaches for platforms to try. One question in particular is about people’s right to say something versus the right for it to be amplified. So you oftentimes see platforms that are, again, not taking content down but adding labels and trying to reduce its reach; that has also brought criticism on them, and a question around the principles of that. I think pre-bunking is really important as well, in trying to give people other information and context that they can see and be able to understand when they’re consuming this content. You’re starting to see new approaches from places like Bluesky, which are more decentralized, where not only the platform but anybody can label content, and then the user can decide what sort of content they do or do not want to see in their feed. So that puts much more power back in the hands of the users, versus the platform itself making some of those decisions. I think a lot of this will continue to evolve and change as AI plays a bigger role in how the information we get is summarized, and people are also thinking about what types of information are pulled into that. But this is an ever-evolving thing, with pressure from, as you mentioned, people who say platforms should do more, and others who say they’re taking down too much. At the moment, you’re seeing more platforms take less of a strong leave-it-up-or-take-it-down approach, and instead try to find some of these other ways. Another one I should mention is X, formerly Twitter. This started before Musk took over, but it still exists: they have Community Notes.
And so they have a larger number of people who can help add a community note to something, to give it more context, to say whether something might be true or partially true. And they have some really great mechanisms in place to make sure that that cannot be gamed. But I think we’ll continue to see a lot of experimentation on this as platforms try to balance freedom of expression with the safety of the people who are using these platforms.

Sarah Al-Husseini: Fantastic. Thank you for that, Katie. That was really helpful. And maybe shifting gears a little bit, because I see a few questions coming up on AI in the chat, maybe I’ll tee one up for Zoe: can you talk about how the proliferation of AI-generated content changes the conversation around information literacy? And then we’ll take a question from the chat.

Zoe Darma: Yeah, I think it’s evolutionary, not revolutionary. It’s more a change in the volume of content that people are seeing, but edited content is an age-old problem. The very first photographic hoax was of a ghost, made using a daguerreotype, for example. So as long as images have been created, there has been this issue of what we do when something has been edited or altered. And that’s why I’m a strong believer that our information literacy muscle really needs to grow as a society, because whether something is generated or not doesn’t necessarily change the question: is this trustworthy or not? That’s the key question that we have to remind people of. Whether it’s generated or not is just one element, and it really depends on the context. So what needs to change is that we need to ask, yes, is this generated or not, but still ask all of those other questions that we’ve always been asking ourselves when we encounter content that could potentially be suspect. Hope that answers your question, Sarah.

Sarah Al-Husseini: Yeah, great. Thank you. And I know, Zoe, you answered directly in the chat, but maybe for those in the room: does Google’s watermarking for AI-generated images, like those created with Imagen, rely on metadata? If so, can it be removed by clearing the metadata, or is it embedded directly into the image itself?

Zoe Darma: Yeah, so it provides metadata, but that metadata cannot be stripped easily. I say easily because nothing is 100% tamper-proof, but SynthID is very tamper-resistant, so it’s not editable the way that other metadata is. And I think that’s a critical piece of what we as Google are doing. Again, neither Google, nor OpenAI, nor Anthropic, nor Meta, none of us control all of the generation tools that are out there. And it’s very difficult with folks who are using a derivative of an open-source model, for example, maybe smaller models being run by other companies; there’s no way for us to force other companies to watermark. This is where it becomes really difficult, because we’ll never have 100% coverage on the open web, even if the biggest players are all in C2PA and all responsibly watermarking and/or labeling where appropriate. There’s always going to be some proportion of generated content out there on the open web that does not include a watermark.

Sarah Al-Husseini: Fantastic. Thank you, Zoe and Iberal, for those questions and answers. And maybe to a question in the room.

Audience: Yeah, thanks so much. This is Lina from Search for Common Ground and the Council on Tech and Social Cohesion. So it’s powerful; you said it again, Zoe, that you can’t actually force others to do certain things, which then in some ways pokes holes in your valiant efforts. There is growing evidence about really harmful tech-facilitated gender-based violence, and I’m just curious whether we’re seeing attention on this grow, because we do hear that there are specific things you’ve put in place for health and elections, and a lot of that’s because of the excellent work of the two of you, right? So what would it take for us to also begin to think differently about the harms around TFGBV? Do we need to rally other companies so that there is standardization of the watermarking of that kind of harmful content? Just where do you think the conversation is at right now?

Zoe Darma: Thanks. That’s a fantastic question. Image-based sexual abuse, in terms of quote-unquote deepfake pornography (that’s not what we call it internally; we call it involuntary synthetic pornographic imagery, or ISPI), is a great example of a problem that Google didn’t necessarily create, right? We are not allowing our models to be used for image generation of deepfake pornography or ISPI. However, it is an issue that we’re deeply grappling with, especially on the product that I work on, Google Search, because a lot of that material is now out there on the open web. So what we’ve done, and I can only speak for ourselves, is we’ve taken an approach that relies on both technical and multi-stakeholder and industry solutions. One of the things we’ve done is implement new ranking protections so that we can better recognize that type of content, and not necessarily by recognizing whether it is AI generated or not; there are other signals we can use as well. For example, if the page itself is advertising it as, say, deepfake celebrity pornography, we can detect that, and we’re applying ranking treatments so that it does not rank highly or well. We have also long had a content policy to remove that type of imagery. The other thing we’re doing is providing more automated tools to victim-survivors. When you report even regular, non-synthetic NCII to us, there are a couple of things we do on the back end. One is that if the image is found to be violative, we not only remove the image, we also dedupe using hashing technology. Now, hashing can be evaded with some alterations to the image itself. We also give reporting users an option to check a box to say that they want explicit images, to the best of our ability, removed for queries about them.
If the query that I’m reporting, for example, is Zoe Darmay leaked nudes, I can check a box saying I also want explicit imagery filtered for results about my name: Zoe Darmay leaked nudes, Zoe Darmay, et cetera. That’s another way we’re addressing the problem through automation that doesn’t rely on finding all of the generated imagery, but attacks the problem through another dimension. Those are a couple of the ways. I’ll throw in the chat our most recent blog post on involuntary synthetic pornographic imagery and all of the ranking protections we’re applying to demote such content, and also to raise up authoritative content on queries like, for example, deepfake pornography, where we’re really trying to return authoritative, trusted sources about that particular topic and issue, rather than problematic sites that are promoting such content.
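The hash-based dedupe and its evasion that Zoe mentions can be illustrated with a toy comparison. Production systems use robust perceptual hashes, not the simplistic average hash below; the sketch only shows why an exact cryptographic hash breaks under any pixel change, while a perceptual hash tolerates small edits and can therefore match near-duplicates (and why larger alterations can still evade matching).

```python
import hashlib
import numpy as np

def average_hash(pixels: np.ndarray) -> int:
    """Tiny perceptual hash: downscale to 8x8 by block-averaging, then
    set one bit per cell depending on whether it is above the mean."""
    h, w = pixels.shape
    small = pixels.reshape(8, h // 8, 8, w // 8).mean(axis=(1, 3))
    bits = (small > small.mean()).astype(int).ravel()
    return int("".join(map(str, bits)), 2)

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# A lightly altered copy: brighten a small corner region.
altered = img.copy()
altered[:4, :4] = np.minimum(altered[:4, :4].astype(int) + 10, 255).astype(np.uint8)

# Cryptographic hashes differ completely after any change...
assert hashlib.sha256(img.tobytes()).hexdigest() != hashlib.sha256(altered.tobytes()).hexdigest()

# ...while the perceptual hashes stay close, so near-duplicates can be matched
# by a small Hamming-distance threshold.
assert hamming(average_hash(img), average_hash(altered)) <= 8
```

The trade-off is exactly what the panel describes: the looser the match threshold, the more evasive re-edits get caught, but the higher the risk of false matches, so motivated actors can still slip past any fixed technical filter.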

Sarah Al-Husseini: Fantastic. Thank you. And I think we have another question in the room before bouncing back to online.

Audience: Hello. Hi. Okay. Katie and Zoe, thank you for the presentations and the wonderful answers as well. My name is Ian. I’m from the Brazilian Association of Internet Service Providers. I have more of a doubt than a question. Do we have any studies already showing the effectiveness of information literacy in actually identifying and combating misinformation? I mean, does it have an actual impact, and how much can we measure it already?

Katie Harbeth: I think, yeah, I was going to say, isn’t there a Jigsaw one? I feel like you all have done quite a bit of research on this, but there’s been some very preliminary work on pre-bunking. Where I first saw it really be effective, and folks starting to realize its effectiveness, was particularly when Russia invaded Ukraine, with pre-bunking ahead of that actually happening. But I’ll toss it to Zoe; I don’t want to steal Google’s thunder for some of the great research that they’ve done on this too.

Zoe Darma: Oh, and it wasn’t my research either, so big shout-out to our colleague Beth Goldberg, who’s not here but has led a lot of our pre-bunking work. We can try to dig that up and throw it in the chat. Katie covered pre-bunking, so I’ll just cover information literacy: there is a lot of evidence that the SIFT and CORE methods work, and so we searched for evidence-based practices that we could make easier in the product itself. So the first way I’ll answer your question is, yes, these are evidence-based practices. SIFT, for example, was developed by Mike Caulfield, a misinformation researcher who was most recently with the University of Washington; he’s since moved on, so I don’t know his affiliation right now. CORE was by Sam Wineburg, another misinformation and information literacy researcher. When we’ve done user research on About This Result, for example, we’ve actually seen folk theories decrease. I’ll caveat this by saying small sample size, more research needs to be done, but internal indications for us are that consistent use of About This Result reduced folk theories about how somebody was being shown a certain result. For us, that was a really positive indicator that people had a better understanding of not only the results they were seeing, but how those results were chosen for them. And what I’ll say is that a lot of people have folk theories about why we’re showing certain results, you know, are we snooping in your Gmail and then giving you results based on things that you’re emailing, all those types of folk theories. And when you actually just say it really has to do with the keywords that you’re putting in the search box, people understand that that’s why they’re seeing those types of results.
A lot of folks think that the results are so relevant to them that we must know something about them, when oftentimes we’re just using what the user puts in the box. People are, unfortunately, not as unique as they think they are. And so we know a lot about what people want when they’re searching for, gosh, Katie and I were talking about beach umbrellas yesterday. So for people searching for the best beach umbrellas, we know a lot about what they want and are serving great, relevant results based on that, and people think, oh, this is exactly what I need, it must have to do with something about me. The other thing I’ll mention, a new feature that you can check for yourself, is that we’re rolling out a notice at the footer of the search results page that will say these results are personalized, or these results are not personalized, and if they are personalized, you can try without personalization. I would encourage everybody to check that out, because a great many of the search results pages you’ll find are not personalized. The ones that are right now are on things like what to watch on Netflix, for example. It’s right at the bottom of your page, and you can even click out of personalization to see how those results would change and check that you’re not in, like, an echo chamber filter bubble.

Sarah Al-Husseini: Thank you for the great question, and for the answers from both our panelists. I’m being told we have five more minutes left, so maybe one quick follow-up from the room, and I think this one is probably aimed at Zoe: if a screenshot is taken, would there be any way of tracking it, in the context of watermarks? Can watermarks be removed easily?

Zoe Darma: Yeah, that’s a great question. For SynthID, I hate to do this weird Google thing where we’re 200,000 people and it was a different product area that created it, but for screenshots and SynthID I don’t know the answer directly; I will certainly follow up and put it in the chat while Sarah is wrapping up. I’ll say that, generally, yes, taking a screenshot is one way to strip metadata off an image; that was the classic example of evasion for certain other metadata techniques that we talked about. There are other ways to evade: we talked about evasion of hashing, for example, which can be done by adding a watermark or slightly modifying the image. There are always ways for really motivated actors to get around technical tools, and so we have made it as difficult as possible to strip that metadata. But that’s why we’re saying, in a presentation like this, that we cannot always rely on technical solutions alone; we have to think about these other ecosystem solutions as well, and that’s why I come to these presentations and always talk about inferred provenance and inferred context too. However, I will say that we’ve made it tamper-resistant, so you can’t go into photo-editing software and remove it, for example. But I’ll get you an answer; it’s a good question about screenshotting in particular.

Sarah Al-Husseini: Fantastic. Thank you so much, Zoe. Any other questions from the room? If not, maybe I’ll wrap up with one quick question. Yep. Fantastic. Katie, I’d love to start with you. Given the platform of the IGF, and that we’re all here in Riyadh this week for the event: what are some of the challenges of implementing a multi-stakeholder approach to information literacy, and how can these challenges be overcome, especially at a platform like the IGF?

Katie Harbeth: Yeah, I think this work is absolutely multi-stakeholder and needs to be done from multiple different approaches; it’s not enough to just ask the platforms to do this. And I think Taiwan is frankly a really great example of a multi-stakeholder approach to mis- and disinformation in a country. One of the biggest challenges that I’ve seen, and that I continue to want to work on, is trying to help those who have not been inside a company understand the scale and the operational challenges of some of the solutions, and to think and brainstorm better about how we might do that. And then on the platform side, helping them understand the approaches that civil society and others are finding when they’re trying to combat all of this in their countries and regions. So I think continued cross-collaboration is really important. The other thing, too, is that this needs to continue to be experimental, because if there were a silver bullet, we would all have figured this out a long time ago; this is a really hard and tricky problem. And I think having open dialogue and conversations like this will continue to be important, particularly as we go into this new era of AI, which is very much going to change how we generate and consume information. Now is really the time to be thinking about how we shape what those models are going to look like for at least the next five to ten years.

Sarah Al-Husseini: Fantastic, thank you so much, Katie and Zoe.

Zoe Darma: The same question, yeah. Before I answer, I actually just wanted to go back to the watermarking question, because I found the answer through the best product ever, Google Search, with just a quick search. SynthID uses two neural networks. One takes the original image and produces another image almost identical to it, but embeds a pattern that is invisible to the human eye; that’s the watermark. The second neural network can spot that pattern and tell users whether it detects the watermark and suspects the image has one, or finds that it doesn’t have a watermark. So SynthID is designed in a way that means the watermark can still be detected even if the image is screenshotted or edited in another way, like rotating or resizing it. So I was looking that up. Can you repeat your final wrap-up question to me, Sarah?

Sarah Al-Husseini: Of course, and thank you for taking the time. I think we’re a little biased towards Google Search being the best, which I absolutely love; we drink the Kool-Aid for sure. The question is: what are some of the challenges in implementing a multi-stakeholder approach to information literacy, and how can those challenges be overcome?

Zoe Darma: Yeah, I think one of the biggest challenges is just the idea that it takes a lot of time and it’s too much to expect of users. Even though these approaches are called multi-stakeholder, they’re often really focused on governments, on what governments can do, and on what tech companies can do. And I think one of the things you’ve heard from us consistently throughout this really fascinating talk, and thank you so much, Sarah, for a great job facilitating it, is that the third leg of this stool is what users contribute, what users do, what users seek out, what users are consuming, and how they’re consuming it.
And so I think that’s the biggest challenge. One of the studies I mentioned earlier really focused on how much users’ expressed preferences play into whether they find this information, like users actively seeking out unreliable sources. That’s a hard problem to solve. And there’s a reason that multi-stakeholder approaches really want to focus on governments or technology companies: they’re at the table, and we’re the ones doing the talking. But we really are missing a huge piece of the puzzle if we’re not talking about users’ expressed preferences, what they want to find, what they’re seeking out, how they’re consuming it, and how we can get them to be stronger and more reliable consumers and creators in the information ecosystem. And that’s a tall order.

Sarah Al-Husseini: Thank you for that, Zoe. I think we’re at time, so maybe just to wrap up: a big, huge thank you to our panelists, and to Jim and Zena, our clicker, for stepping in with the internet connection. From today’s conversation it’s very apparent that we need a holistic approach to the shared responsibility of information literacy, protecting our users, and education across our stakeholders, governments, and users, especially youth. As somebody who works with one of the largest youth populations in the world, I know this is something that sometimes gets overlooked but is really important: start them young. And bringing civil society into the conversation is always really important. So maybe I’ll hand back over to Jim.

Jim Prendergast: Yeah, no, thanks, everybody. Especially thanks for the great questions, both in person and online; the interaction with everybody, instead of just talking amongst ourselves, is what we really look forward to. So appreciate it. Everybody have a good evening, and we’ll see you back here tomorrow. Can the speakers stay online for two seconds for a quick photo? Great. Thanks, everyone. Thank you.

Katie Harbeth

Speech speed: 185 words per minute
Speech length: 2677 words
Speech time: 867 seconds

Misinformation is sticky and spreads quickly

Explanation

Katie Harbeth explains that misinformation can spread rapidly across platforms and is difficult to counteract once it has taken hold. This stickiness makes it challenging for platforms to effectively combat false information.

Evidence

No specific evidence provided in the transcript.

Major Discussion Point

Challenges of Misinformation

Agreed with

Zoe Darma

Agreed on

Misinformation is a complex and challenging problem

People interpret labels and warnings differently

Explanation

Harbeth points out that users may have varying interpretations of content labels and warnings. This variability makes it difficult for platforms to effectively communicate the reliability or potential issues with certain content.

Evidence

Example given of how an ‘altered content’ label could be interpreted as AI-generated, Photoshopped, or another form of manipulation.

Major Discussion Point

Challenges of Misinformation

Differed with

Zoe Darma

Differed on

Effectiveness of labeling and warnings

Users’ emotional state affects how they consume information

Explanation

Harbeth discusses how users’ emotional states and intentions when using platforms influence their information consumption. She notes that people in different modes (e.g., relaxation vs. intentional information seeking) interact with content differently.

Evidence

Reference to a study by Google’s Jigsaw division and GEMIC on how Gen Z approaches online information.

Major Discussion Point

Challenges of Misinformation

Agreed with

Zoe Darma

Agreed on

User behavior and preferences play a significant role

Platforms struggle to balance free expression and content moderation

Explanation

Harbeth highlights the challenge platforms face in moderating content while respecting freedom of expression. She notes that platforms are often criticized for both over-censorship and under-moderation.

Major Discussion Point

Challenges of Misinformation

Platforms use fact-checking, labeling, and reducing reach of suspect content

Explanation

Harbeth outlines various strategies platforms employ to combat misinformation. These include partnering with fact-checkers, applying labels to questionable content, and reducing the visibility of potentially false information.

Evidence

Mention of platforms working with fact-checkers through the Poynter fact-checking consortium.

Major Discussion Point

Approaches to Combating Misinformation

Pre-bunking can be an effective strategy

Explanation

Harbeth suggests that pre-bunking, or providing users with warnings about potential misinformation before they encounter it, can be an effective approach. This strategy aims to prepare users to critically evaluate information they may come across.

Evidence

Reference to pre-bunking being effective during Russia’s invasion of Ukraine.

Major Discussion Point

Approaches to Combating Misinformation

Cross-industry collaboration is needed as AI evolves

Explanation

Harbeth emphasizes the importance of collaboration across industries to address the challenges posed by evolving AI technologies. She suggests that as AI changes how information is generated and consumed, stakeholders need to work together to shape future models and approaches.

Major Discussion Point

Evolving Landscape of Information and AI

Agreed with

Zoe Darma

Agreed on

Multi-stakeholder approaches are necessary

Zoe Darma

Speech speed: 142 words per minute
Speech length: 4387 words
Speech time: 1850 seconds

Accuracy in identifying misinformation is only slightly better than chance

Explanation

Darma cites research showing that people’s ability to identify misinformation, particularly in images, is not much better than random guessing. This highlights the difficulty individuals face in distinguishing between authentic and manipulated content.

Evidence

Reference to a study by Australian researchers finding that accuracy in identifying misinformation is only slightly better than a coin toss.

Major Discussion Point

Challenges of Misinformation

Agreed with

Katie Harbeth

Agreed on

Misinformation is a complex and challenging problem

User preferences play a role in exposure to unreliable sources

Explanation

Darma discusses how users’ own search behaviors and preferences contribute to their exposure to unreliable information. She notes that people often actively seek out specific unreliable sources, rather than stumbling upon them randomly.

Evidence

Reference to a study on Bing search results showing that user preferences play an important role in exposure to and engagement with unreliable sites.

Major Discussion Point

Challenges of Misinformation

Agreed with

Katie Harbeth

Agreed on

User behavior and preferences play a significant role

Google implements tools like “About this result” to encourage information literacy

Explanation

Darma explains Google’s approach to promoting information literacy through features like “About this result”. This tool provides users with context about search results, encouraging critical evaluation of sources.

Evidence

Detailed description of the “About this result” feature in Google Search, including its functionality and purpose.

Major Discussion Point

Approaches to Combating Misinformation

Differed with

Katie Harbeth

Differed on

Effectiveness of labeling and warnings

Watermarking and metadata for AI-generated content can help, but are not foolproof

Explanation

Darma discusses Google’s use of watermarking and metadata for AI-generated content as a means of identification. However, she notes that these methods are not perfect solutions, as they can potentially be circumvented or may not be universally adopted.

Evidence

Description of Google’s SynthID watermarking technology and its resistance to tampering.

Major Discussion Point

Approaches to Combating Misinformation

Multi-stakeholder approaches are needed, including user education

Explanation

Darma emphasizes the importance of involving multiple stakeholders in addressing misinformation, including users themselves. She argues that user education and improving information literacy are crucial components of any comprehensive strategy.

Major Discussion Point

Approaches to Combating Misinformation

Agreed with

Katie Harbeth

Agreed on

Multi-stakeholder approaches are necessary

Information literacy skills remain crucial even with new AI tools

Explanation

Darma stresses that traditional information literacy skills are still important, even as new AI tools emerge. She argues that these skills need to evolve to address the challenges posed by AI-generated content, but remain fundamental to navigating the information landscape.

Evidence

Reference to SIFT and CORE methods as evidence-based practices for information literacy.

Major Discussion Point

Approaches to Combating Misinformation

AI-generated content adds new challenges to information literacy

Explanation

Darma discusses how the proliferation of AI-generated content creates additional complexities for information literacy. She notes that distinguishing between human-created and AI-generated content is becoming increasingly difficult.

Major Discussion Point

Evolving Landscape of Information and AI

Distinguishing between generated and trustworthy content is complex

Explanation

Darma explains that the trustworthiness of content is not solely determined by whether it is AI-generated or not. She argues that context and use of the content are crucial factors in assessing its reliability.

Evidence

Example of a real photo taken out of context versus an AI-generated image used appropriately.

Major Discussion Point

Evolving Landscape of Information and AI

Platforms are developing new tools to address AI-generated misinformation

Explanation

Darma outlines Google’s efforts to develop tools for identifying and managing AI-generated content. She discusses both technical solutions like watermarking and user-facing features to promote critical evaluation of information.

Evidence

Description of Google’s SynthID watermarking technology and the “About this result” feature.

Major Discussion Point

Evolving Landscape of Information and AI

Google is implementing measures to combat involuntary synthetic pornographic imagery

Explanation

Darma discusses Google’s approach to addressing the issue of involuntary synthetic pornographic imagery (ISPI). She outlines various technical and policy measures implemented to detect, demote, and remove such content from search results.

Evidence

Mention of ranking protections, automated tools for victim-survivors, and content removal policies.

Major Discussion Point

Addressing Specific Misinformation Concerns

Sarah Al-Husseini

Speech speed: 193 words per minute
Speech length: 932 words
Speech time: 289 seconds

Platforms are developing targeted approaches for health and election misinformation

Explanation

Al-Husseini notes that platforms are creating specific strategies to combat misinformation in critical areas such as health and elections. This suggests a recognition of the particular importance and potential impact of misinformation in these domains.

Major Discussion Point

Addressing Specific Misinformation Concerns

Audience

Speech speed: 137 words per minute
Speech length: 233 words
Speech time: 101 seconds

Tech-facilitated gender-based violence is an emerging concern

Explanation

An audience member raises the issue of technology-facilitated gender-based violence as a growing problem. This highlights the need for platforms to address specific forms of harmful content and behavior beyond general misinformation.

Major Discussion Point

Addressing Specific Misinformation Concerns

Agreements

Agreement Points

Misinformation is a complex and challenging problem

Katie Harbeth

Zoe Darma

Misinformation is sticky and spreads quickly

Accuracy in identifying misinformation is only slightly better than chance

Both speakers emphasize the difficulty in combating misinformation due to its rapid spread and the challenges users face in identifying it accurately.

Multi-stakeholder approaches are necessary

Katie Harbeth

Zoe Darma

Cross-industry collaboration is needed as AI evolves

Multi-stakeholder approaches are needed, including user education

Both speakers stress the importance of collaboration across industries and involving multiple stakeholders, including users, in addressing misinformation.

User behavior and preferences play a significant role

Katie Harbeth

Zoe Darma

Users’ emotional state affects how they consume information

User preferences play a role in exposure to unreliable sources

Both speakers highlight the importance of user behavior and preferences in the spread and consumption of misinformation.

Similar Viewpoints

Both speakers discuss the implementation of various tools and strategies by platforms to combat misinformation and promote information literacy.

Katie Harbeth

Zoe Darma

Platforms use fact-checking, labeling, and reducing reach of suspect content

Google implements tools like “About this result” to encourage information literacy

Both speakers acknowledge the complexity of content evaluation and the challenges in effectively communicating content reliability to users.

Katie Harbeth

Zoe Darma

People interpret labels and warnings differently

Distinguishing between generated and trustworthy content is complex

Unexpected Consensus

Importance of traditional information literacy skills

Katie Harbeth

Zoe Darma

Pre-bunking can be an effective strategy

Information literacy skills remain crucial even with new AI tools

Despite discussing advanced technological solutions, both speakers unexpectedly emphasize the continued importance of traditional information literacy skills and strategies like pre-bunking.

Overall Assessment

Summary

The speakers largely agree on the complexity of misinformation, the need for multi-stakeholder approaches, the importance of user behavior and education, and the ongoing relevance of traditional information literacy skills alongside new technological solutions.

Consensus level

High level of consensus on the main challenges and general approaches to combating misinformation. This agreement suggests a shared understanding of the problem and potential solutions, which could facilitate more coordinated efforts across platforms and stakeholders in addressing misinformation.

Differences

Different Viewpoints

Effectiveness of labeling and warnings

Katie Harbeth

Zoe Darma

People interpret labels and warnings differently

Google implements tools like “About this result” to encourage information literacy

While Harbeth emphasizes the challenges of user interpretation of labels, Darma presents Google’s approach as a potential solution, highlighting a difference in perspective on the effectiveness of such tools.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement revolve around the effectiveness of specific strategies to combat misinformation, such as labeling and user education.

Difference level

The level of disagreement among the speakers is relatively low. They largely agree on the challenges posed by misinformation and the need for multi-faceted approaches. The differences are primarily in emphasis and specific strategies, rather than fundamental disagreements. This suggests a general consensus on the importance of addressing misinformation, which could facilitate collaborative efforts in developing comprehensive solutions.

Partial Agreements


Both speakers agree on the need for technical solutions to combat misinformation, but differ in their emphasis on specific approaches. Harbeth focuses on fact-checking and labeling, while Darma highlights watermarking and metadata for AI-generated content.

Katie Harbeth

Zoe Darma

Platforms use fact-checking, labeling, and reducing reach of suspect content

Watermarking and metadata for AI-generated content can help, but are not foolproof

Both speakers agree on the need for collaborative approaches, but Darma places more emphasis on user education as a crucial component.

Katie Harbeth

Zoe Darma

Cross-industry collaboration is needed as AI evolves

Multi-stakeholder approaches are needed, including user education

Takeaways

Key Takeaways

Misinformation is a complex, evolving challenge that requires multi-stakeholder approaches

Technical solutions alone are insufficient; user education and information literacy remain crucial

Platforms are developing new tools and strategies to combat misinformation, including AI-generated content

User preferences and behaviors play a significant role in exposure to misinformation

Balancing free expression with content moderation remains an ongoing challenge for platforms

Resolutions and Action Items

Google to continue developing and improving tools like ‘About this result’ and SynthID watermarking

Platforms to explore pre-bunking as an effective strategy against misinformation

Stakeholders to focus on user education and improving information literacy skills

Unresolved Issues

How to effectively address user preferences for unreliable sources

Balancing platform intervention with concerns about censorship and free expression

Addressing misinformation from sources not using responsible AI practices or watermarking

Measuring long-term effectiveness of information literacy efforts

Suggested Compromises

Platforms focusing on reducing reach and amplification of misinformation rather than outright removal

Implementing user-choice features like community notes or decentralized labeling systems

Balancing automated detection with human review and fact-checking partnerships

Thought Provoking Comments

Misinformation is sticky and fast, which means that it can spread very, very quickly and it’s something that very much sticks with people and it can be very hard to change their minds.

speaker

Katie Harbeth

reason

This comment succinctly captures a key challenge in combating misinformation – its rapid spread and persistence in people’s minds.

impact

It set the stage for discussing the complexities of addressing misinformation and why simple solutions like labeling may not be sufficient.

You can’t bring logic to a feelings fight.

speaker

Katie Harbeth (quoting Kate Klonick)

reason

This pithy statement encapsulates a crucial insight about the emotional nature of misinformation and why purely factual approaches often fail.

impact

It shifted the conversation towards considering the psychological aspects of misinformation and how to address them.

We are actually pretty bad at identifying this info when presented in this way. And a group of researchers from Australia found that our accuracy is a little bit better than a coin toss.

speaker

Zoe Darma

reason

This comment, backed by research, challenges assumptions about people’s ability to identify misinformation and manipulated content.

impact

It highlighted the need for better tools and education to help people identify misinformation, leading to discussion of Google’s approaches.

“Is this generated?” does not always mean the same thing as “Is this trustworthy?” It really depends on the context.

speaker

Zoe Darma

reason

This insight shifts the focus from simply identifying AI-generated content to evaluating its trustworthiness in context.

impact

It broadened the discussion beyond technical solutions to include the importance of context and critical thinking skills.

Searchers are coming across misinformation, when there is high user intent for them to find it. That means they are searching explicitly for unreliable sources.

speaker

Zoe Darma

reason

This comment reveals a counterintuitive finding about how users encounter misinformation, challenging assumptions about algorithmic responsibility.

impact

It introduced the idea that user behavior and preferences play a significant role in misinformation exposure, shifting the conversation towards individual responsibility and media literacy.

Overall Assessment

These key comments shaped the discussion by highlighting the complex, multifaceted nature of the misinformation problem. They moved the conversation beyond simplistic technical solutions to consider psychological factors, user behavior, and the importance of context. The discussion evolved from focusing solely on platform responsibilities to emphasizing the need for a holistic approach involving user education, critical thinking skills, and understanding the limitations of both human perception and technical solutions in identifying and combating misinformation.

Follow-up Questions

How effective are information literacy efforts in actually identifying and combating misinformation?

speaker

Ian from the Brazilian Association of Internet Service Providers

explanation

Understanding the measurable impact of information literacy initiatives is crucial for evaluating and improving strategies to combat misinformation.

How can SynthID watermarking be affected by screenshots?

speaker

Audience member (unnamed)

explanation

This explores potential limitations of watermarking technology, which is important for understanding its effectiveness in identifying AI-generated content.

What would it take to begin thinking differently about the harms around tech-facilitated gender-based violence?

speaker

Lina from Search for Common Ground and the Council on Tech and Social Cohesion

explanation

This highlights the need to address specific types of harmful content and suggests exploring standardization of watermarking across companies for such content.

How can we better incorporate user preferences and behaviors into multi-stakeholder approaches to information literacy?

speaker

Zoe Darma

explanation

This area of research emphasizes the importance of understanding and addressing user behavior in consuming and seeking out information, which is often overlooked in current approaches.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Open Forum #26 High-level review of AI governance from Inter-governmental P

Session at a Glance

Summary

This discussion focused on global AI governance from various perspectives, including government, industry, civil society, and youth. Participants explored the current status, challenges, and priorities in AI governance across different regions.

Key themes included balancing innovation with security and equality, addressing the needs of emerging economies, and ensuring cultural and linguistic diversity in AI development. The importance of open-source AI and its implications for economic development and flexibility were highlighted. Challenges such as data privacy, copyright issues, and the governance of both advanced and traditional AI applications were discussed.

Participants emphasized the need for a harmonized global approach to AI governance, recognizing the geopolitical and economic competition surrounding AI development. The discussion touched on the importance of infrastructure, particularly in Africa, where limited data centers and skills gaps pose significant challenges.

The role of youth in AI development was highlighted, along with concerns about data ownership and localization. The need for inclusive governance frameworks that involve multiple stakeholders, including youth, was stressed. Participants also discussed the importance of enforcement mechanisms and the potential for tax incentives to encourage compliance with AI governance policies.

The discussion concluded with a call for collaborative efforts in AI governance, emphasizing the need for transparency, partnerships between public and private sectors, and the implementation of voluntary reporting frameworks to inform policy decisions. Overall, the participants agreed on the necessity of a unified, inclusive approach to AI governance to ensure its responsible and beneficial development globally.

Keypoints

Major discussion points:

– Current challenges and risks of AI, including security, bias, diversity, and environmental impact

– The need for inclusive, global AI governance frameworks and standards

– Data and infrastructure gaps between developed and developing regions, particularly Africa

– Balancing innovation with regulation and safety considerations

– The roles and responsibilities of different stakeholders in AI development and governance

The overall purpose of the discussion was to explore different perspectives on the current state of AI governance from government, industry, civil society and youth representatives. The goal was to identify key challenges, priorities and potential paths forward for developing effective global AI governance.

The tone of the discussion was largely collaborative and solution-oriented. Speakers acknowledged both the opportunities and risks of AI, and emphasized the need for cooperation across sectors and regions. There was a sense of urgency about addressing governance gaps, but also optimism about the potential for AI to drive progress if managed responsibly. The tone became slightly more critical when discussing inequalities in AI development and data ownership between Global North and South.

Speakers

– Yoichi Iida: Assistant Vice Minister at the Japanese Ministry of Internal Affairs and Communications, moderator of the session

– Audrey Plonk: Deputy Director of Science and Technology and Innovation Directorate of OECD

– Thelma Quaye: Director of Infrastructure Skills and Empowerment from Smart Africa

– Leydon Shantseko: Representative from Zambia Youth IGF

– Henri Verdier: French Ambassador

Additional speakers:

– Melinda Claybaugh: Privacy policy director from META

– Levi: Youth representative (full name not provided)

Full session report

AI Governance: A Global Perspective

This comprehensive discussion on global AI governance brought together diverse perspectives from government, industry, civil society, and youth representatives. The session, moderated by Yoichi Iida from the Japanese Ministry of Internal Affairs and Communications, explored the current status, challenges, and priorities in AI governance across different regions.

Current State and Challenges of AI Governance

The discussion began with a thought-provoking comment from Henri Verdier, the French Ambassador, who questioned whether the AI revolution would truly represent progress for humankind. This framing set the tone for considering the broader implications and ethics of AI development, beyond mere technological advancement.

Speakers highlighted several key challenges in the current AI landscape:

1. Balancing Innovation and Security: Governments face the task of fostering innovation while addressing potential risks and security concerns.

2. Infrastructure and Skills Gap: Thelma Quaye, representing Smart Africa, a pan-African organization, emphasised the lack of necessary infrastructure and skills in Africa to fully leverage AI. She likened AI to water, highlighting its potential to nourish and help societies grow, but also underscoring the need for proper governance.

3. Data Sovereignty and Localisation: Leydon Shantseko, a youth representative from Zambia, raised concerns about data sovereignty and localisation issues, particularly in Africa. He pointed out the practical challenges of implementing data localisation policies when most platforms used are not hosted in Africa.

4. Geopolitical Competition: Henri Verdier highlighted the geopolitical aspects of AI development, framing it as a source of power and intense competition between companies, countries, and international organisations.

5. Open Source AI: The discussion touched on the role of open source AI models in promoting innovation and economic development, while also presenting challenges for governance. Melinda Claybaugh, a policy director from META, elaborated on their open-source AI approach and its implications.

Priorities for AI Governance Frameworks

The speakers agreed on the need for a holistic, international approach to AI governance, but differed in their specific focuses:

1. Inclusive Approach: There was consensus on the importance of multi-stakeholder cooperation in developing AI governance frameworks. This includes involving youth throughout the process, as emphasised by Leydon Shantseko.

2. Reflecting AI Ecosystem Realities: Melinda Claybaugh stressed that governance should reflect the realities of the AI value chain and ecosystem, particularly for open source AI.

3. Building Local Capabilities: Thelma Quaye highlighted the importance of building local African datasets and AI capabilities to ensure relevance and reduce bias.

4. Interoperable Governance Tools: Audrey Plonk from the OECD emphasised the development of interoperable governance tools and reporting frameworks.

5. Balancing Global and Local Needs: The discussion highlighted the need to balance global standards with local needs and perspectives, particularly in developing regions.

6. Enforcement Mechanisms: Thelma Quaye stressed the importance of developing effective enforcement mechanisms for AI governance policies, especially in regions like Africa.

Roles and Responsibilities in AI Development

The speakers outlined various roles and responsibilities for different stakeholders:

1. Governments: Responsible for balancing innovation and security, and creating appropriate regulatory frameworks.

2. Companies: Should be transparent about AI development and associated risks.

3. International Organisations: The OECD is working to provide data and harmonisation across AI approaches, including through its AI observatory and integration of the Global Partnership on AI.

4. Youth: Should be involved in policy-making and allowed to innovate while addressing potential risks.

5. African Nations: Need to increase data infrastructure and sovereignty.

6. Public-Private Partnerships: Collaboration between public and private sectors is crucial for advancing common AI goals, such as developing representative global datasets.

Proposed Solutions and Action Items

Several concrete actions and proposals emerged from the discussion:

1. The OECD is finalising a reporting framework to implement the Hiroshima AI Code of Conduct, aiming to provide interoperability and harmonisation across different AI approaches.

2. France is organising an international AI summit in February 2025 to discuss global AI governance, as announced by Ambassador Verdier.

3. Efforts are being made to increase the interoperability of AI governance tools and frameworks across different initiatives.

4. Proposals for using AI and virtual reality to scale up technical and vocational education and training (TVET) skills in Africa.

5. Suggestions for developing partnerships between public and private sectors to advance common AI goals, such as developing representative global datasets.

Unresolved Issues and Future Considerations

Despite the productive discussion, several issues remained unresolved:

1. Effectively balancing innovation with regulation and risk mitigation.

2. Addressing data localisation and sovereignty concerns, particularly for developing regions.

3. Ensuring fair representation and inclusion of diverse perspectives in AI governance discussions.

4. Developing effective enforcement mechanisms for AI governance policies, especially in regions like Africa.

5. Increasing AI adoption across industries, particularly in smaller companies, as the current diffusion rate remains low.

6. Bridging the growing gap between public and private AI research capabilities, including the need for public research to reproduce private sector AI development results.

Conclusion

The discussion highlighted the complex and multifaceted nature of global AI governance. While there was general agreement on the need for comprehensive, inclusive approaches, the speakers’ diverse perspectives underscored the challenges in developing universally applicable frameworks. The conversation emphasised the importance of considering ethical, geopolitical, and developmental aspects alongside technical considerations in shaping the future of AI governance.

As AI continues to evolve rapidly, ongoing dialogue and collaboration between stakeholders will be crucial in addressing emerging challenges and ensuring that AI development truly represents progress for humankind. The upcoming international AI summit in France and the OECD’s work on reporting frameworks represent important steps towards more coordinated global efforts in AI governance. The inclusion of youth perspectives and the focus on addressing regional disparities, particularly in Africa, highlight the importance of a truly global and inclusive approach to AI governance.

Session Transcript

Yoichi Iida: AI governance. So, my name is Yoichi Iida, the Assistant Vice Minister at the Japanese Ministry of Internal Affairs and Communications. I’m the moderator of this session. We’ll talk about global AI governance from different perspectives and from different communities, such as government, industry, and civil society, including youth. So, first of all, we have Ambassador Henri Verdier from the government of France, and Melinda Claybaugh, a privacy policy director from META. And we have one of the speakers from an international organization, the OECD, online: Audrey Plonk, Deputy Director of the Science, Technology and Innovation Directorate of the OECD. From civil society, we have two speakers. One is Thelma Quaye, if I pronounce correctly. She’s Director of Infrastructure, Skills and Empowerment from Smart Africa. And we have one more speaker from civil society, in particular the youth community, Mr. Leydon Shantseko. He’s a representative from the Zambia Youth IGF. So, thank you very much to all of you for joining us, and we will have a very productive session. In the beginning, I’d like to invite all five speakers to speak about your views on the general current status and also the challenges in AI governance, whether in your domestic AI governance situation or in the global AI governance situation. You can also touch on your priorities, what you are most expecting from AI, and what you are doing now. So, I’d like to start with Ambassador Henri Verdier.

Henri Verdier: Good afternoon, everyone. So, you did ask us a very important question, and you did ask us privately to answer it in four minutes, which is impossible. The main challenge is quite simple to say, not to do. Are we sure that this impressive revolution will be a progress? Not just innovation, not just power, but a progress for humankind. And that’s probably the main responsibility of governments: to be sure that we balance innovation and security, economic growth and equality, efficiency and diversity, and to find a good balance. And that’s why we are part of, of course, a global movement with important companies and a multi-stakeholder community, but probably as governments, we have at least a responsibility in front of our citizens to pay attention to this. And as diplomats, I am a diplomat, we have a responsibility to do it within an international framework and in conversation with the important multi-stakeholder community. And if we start with the idea of progress, and then I will pass the mic to other speakers, I just want to emphasize that since the beginning of the story of AI, we have had different conversations. Last year, for example, the main topic was existential risk. Now we are speaking more about, for example, equal development, and are we addressing the needs of emerging economies? I think that the most important thing is to start by recognizing that we will have to face a lot of challenges and to try to have a broad vision of those challenges. So security matters. Security is not just AI becoming crazy and attacking humanity. Security is also cybersecurity. Security is also bias. Are we sure that we trained the model with good data and that we are not just reproducing current inequalities? And security is not everything. We also need to think about cultural and linguistic diversity.
I believe that if you don’t have a large language model for your language, your language will disappear as a working language, and so as an economic language. So we need to be sure that everyone will have the possibility to enter this revolution. Diversity doesn’t mean just linguistic diversity, because you also need to train the model with the knowledge of different cultures and to be sure that your point of view, your history, your perception of the world will be taken into account. We will soon face the questions of environmental impact and of intellectual property. Are we sure that we still know how to repay creators and creations and build a good framework? Maybe you will talk about this. Maybe we have to find new concepts to protect privacy, because maybe just protecting my personal data is not enough to protect my privacy. And we can continue. Maybe we need a specific policy to be sure that we train enough skills and competencies in emerging economies, and that they will take the driver’s seat, not just be consumers. Maybe we have to rethink education, and not just train engineers, but be sure that the future citizens will be ready for this new world, and that there will be free minds and free citizens in the new world. I could continue. I won’t. But the idea is that a holistic approach and a perception of the global questions we have to face is, from my perspective, very important. Thank you.

Yoichi Iida: Thank you very much, Ambassador. You talked about various risks and challenges, in particular security and diversity. Diversity will be very important as AI continues to develop. And also, we need to recognize the importance of the project in the next generation. A lot of risks and challenges were talked about, but at the same time, we also recognize the importance of innovation. So, what about the industry perspective? I would like to now invite Melinda to share your view.

Melinda Claybaugh: Yes, thank you so much. Can everyone hear me? Yes. Okay, great. So, just a bit of context to set the stage for my perspective from META. Two things. One is that in the AI space, we are very much an open AI company. Not OpenAI, but an open source AI company. And what that means is that we are all in on providing our AI technology on an open source basis. So, our large language model, Llama, we have different versions of it, but it’s made available to anyone to download for free. And this, in our view, is the best way forward in terms of approaching AI innovation for a few reasons. One being that for developers, it is the most valuable and flexible option for them to build on and be able to customize applications to their local needs, fine-tune the way they would like to with the data that they want to. It also is the best, we think, from an economic development perspective. Being able to provide a really diverse ecosystem of AI tools to developers and to countries is going to have the most benefits from an economic perspective, because countries won’t be locked into a few companies that are providing closed models. And then finally, it benefits us as a company because we won’t be beholden to other operating systems, so to speak, as people will be building on our technology, which is a benefit to us. And we also come at it from the perspective of a company that has supported and signed on to various global AI frameworks in the last few years.
This includes the Seoul Frontier AI commitments, which will require us to publish an AI safety framework before the AI summit in France next year. We were also an early adopter of, and are really supportive of, the G7 code of conduct. And so that’s our perspective. And in what I’ve seen happening in the AI governance landscape, there are some positives and some challenges. I think the positives are that we’ve seen a real harmonization in the AI safety conversation at the global level. So there’s an increased understanding of the safety risks. There’s an increased understanding of the steps that we need to take to mitigate those risks. And more importantly, I think, a firm understanding that we need to have a harmonized global approach to this global technology. I think some of the challenges, however, that we’re seeing are that there are a lot of conversations happening that are not necessarily relating to each other. So while we have international agreement on the safety conversation, as the ambassador pointed out, there are other conversations happening. There’s the data conversation around data privacy and the use of data. There’s the copyright conversation that’s happening. There’s the conversation around the governance of all AI, not just advanced AI but our classic AI as well, and how we guard against the risks and harms from regular AI when it’s used to make decisions that affect people’s lives. And of course, there are a lot of industry standards being developed that are important in different ways. And then there’s the conversation around the AI safety institutes which, in a positive development, are being stood up around the world and will help with the science of AI and the evaluations and benchmarks that should be looked to for AI governance.
So I think the question is going to be how to tie a lot of these things together as they deepen. As the science deepens, as the industry standards deepen, as the global frameworks deepen, how do we connect these pieces to make sure that they talk to each other? And then finally, the point I want to make about one of our priorities in the AI governance conversation is: how do we reflect in our governance frameworks the realities of the AI value chain, and particularly open source AI? So what do I mean by that? Well, what we need to do is reflect the different roles that the actors in the AI value chain and ecosystem have to play, and those are unique and different roles. So what does the model developer have within its responsibility in terms of safety mitigation and risk mitigation? Then what role does the deployer of the model play? And then the downstream developers. All of these players have unique roles and responsibilities, and I think as we look at a comprehensive governance framework for the ecosystem, we need to take that into account. So speaking from the open source perspective, we don’t have the control and visibility into the downstream uses of the model that a closed model provider might, simply because anyone can use our model for any purpose. In that case, then, what are the responsibilities of the developers who are developing the applications for very specific use cases? And so I think we need to bring that complexity to the conversation to make sure that we’re using the right tools in the toolbox to address the harms that might arise. Thank you.

Yoichi Iida: Thank you very much, Melinda, for covering a lot of things. Actually, you know, in divergence or maybe in parallel, things are going on, such as the international discussions on governance frameworks and risk assessment. And this, I thought, will be the second topic in our discussion.
But before that, I would like to invite the other speakers to share your overviews on the general, fundamental understanding of the current situation. The previous two speakers covered a lot of elements, like risks and challenges and also the opportunities, and diversity and inclusiveness will also be a very important part. And now I’d like to invite Thelma for an African view: when we talk about AI governance, what do we need to prioritize, and how do you regard the current situation?

Thelma Quaye: Thank you very much. Good evening, everybody. So I’d like to clarify: Smart Africa is not a multinational organization; rather, we are a pan-African organization working across Africa, supporting governments together with the private sector in the digital transformation. Now, for my perspective on AI from the African context, I would liken AI to water. Water nourishes us, it helps us, it helps us to grow our crops. In the same way, AI also helps us to be more efficient. It helps us to digest a lot of information and helps us to leapfrog. I’d like to emphasize the leapfrogging in Africa. You would know that, for instance, a number of Africans are left behind in terms of education, in terms of health, in terms of transport, for instance. But if I take countries like Rwanda, Rwanda is using AI-enabled drones to deliver blood supplies, for instance, to rural Rwanda. These are locations that are very hard to reach by car. And we are seeing an improvement in mortality rates, for instance, and this is based on AI. We are also using AI in precision agriculture in Rwanda, and we have new use cases. And these are things that we would not have been able to do leveraging only our current infrastructure. So for us, indeed, AI is a way to leapfrog. But just like the duality of water, if we don’t govern it, then what happens? If we don’t, it’s going to be disastrous. That said, we are not approaching AI only from a consuming point of view. The African Union has come up with the continental AI strategy. We’ve also got the blueprint, and some countries, like Ghana, have national strategies. So there are efforts towards that.
The question still remains whether the approach we are taking is going to be used. Are we going to use it? Is it a multi-stakeholder approach? Is the strategy that the African Union has developed actually being implemented? It’s very important that we come together, first of all, around an approach that is harmonized, but also fair and ethical, and, as my other colleagues have spoken about, inclusive. For us, if AI is going to help us to leapfrog, we need to make it inclusive as well. In terms of the challenges, from the African perspective, we know that the bedrock of AI is data, right? You need a lot of data to be able to properly utilize AI. But the number of data centers in Africa equals the number of data centers in Ireland. Look at the population of Africa, 1.4 billion, compared to the population of Ireland. So we need to increase the infrastructure. We could say that we will leverage other people’s infrastructure, which is what we are doing now, but I believe that takes some sort of sovereignty from us. And if we are talking about ethical AI, fair AI, using AI within your context, it’s important that data is also within our jurisdiction, so we are able to properly leverage it and make sure that it’s sovereign. The second challenge for us is the skills gap. I think five years ago, there was a craze about creating a lot of coders, but now AI can also code. So what do we do now? I believe, for instance, that what we need in Africa is to use AI to increase and scale TVET skills. For instance, we are talking about electric cars. We are talking about assembling phones in Africa. Why can’t we use virtual reality and generative AI to create classrooms for people to learn these skills? We are not able to scale this up otherwise, but we can leverage AI to scale it up. But also, even within the space of software development, we still need people who can develop this AI and the robots and the rest.
And the last one is the data sets, and I think it speaks to the biases that my colleagues have spoken about. Most of the AI tools we have use data sets that are from other regions; we do not have the data sets in Africa. So that’s one key thing we also need to focus on: building data sets so that the AI is African, true to our story, like we did with M-PESA. We can leverage our own story to tell our story, because AI is only as intelligent as the data you feed it. So if you feed it other cultures’ data sets, it will always go against us. And we’ve seen some cases where we’ve applied AI tools, let’s say AI banking tools from other regions, that are rejecting people applying for loans just because they are of a certain race or a certain gender. So it’s important for Africa within that perspective. I know it sounds very nascent, but it is what it is. We need to be able to create those data sets. Thank you.

Yoichi Iida: Okay, thank you very much. Very deep thoughts about the current situation of Africa and also the challenges of Africa. I think many of those are shared by other people from around the world. Your talk also reminded me of the speech by Minister Al-Swaha in the opening session, and also other speakers. Actually, it was very impressive to hear so many speakers talk about AI in the opening of the Internet Governance Forum. So everything is emerging together, and everything is related and reflected in each other. But we need to take them all, one after another, to make the best benefit from technologies. And we need to future-proof. So from that perspective, I would like to ask for the viewpoint of the youth generation from Levi. Thank you, Aida.

Speaker 2: I’m sure you can hear me, right? So, maybe in addition, because some of the things that the previous speakers have raised are valid and great concerns. Of course, there’s a dual aspect to it, the good and the bad. Talking about the youth perspective, I think a number of youths, give or take, have contributed largely to the development of artificial intelligence, be it at the innovation stage or in how some of the systems are working. And the duality of it is that others have used it to, for lack of a better term, cheat their way through, to leapfrog, while others are using it holistically, based on honesty and truth, as well as transparency and accountability. Now, touching back on Africa, the majority of the youths that have used AI have, among other things, been a source of data. We talk about data mining, for instance; most of the AI tools have been trained by a number of youths. Let me give a recent example: I think some Kenyan youths were protesting because the amount they were getting to feed data systems was way lower compared to the amount of money that these data systems are going to generate from it. So it’s an issue of the balance: you have the Global North developing most of the AI systems, but the Global South is being used to train them, and less investment in the Global South is coming in. Thelma raised the issue of data centers versus the size of the population, in terms of how many people are actually using local data centers for these advanced AI systems. A good example, and one of the arguments that we’ve had, is that most of the platforms, systems, and technologies being used by the majority of Africans are not developed in Africa, which means data localization doesn’t really exist in context. It can exist on paper, but in reality, data localization is not working.
So inasmuch as we can talk about data governance and data regulation, we’re building data acts and data protection laws based on data that we actually don’t host, which I think creates a disparity, because you have a data act supposed to govern databases and localized data, but most of the data that we’re actually working and operating on is not being hosted on local platforms. So you have a bigger gap in terms of governance of data in Africa, based on data that we are not hosting in the first place. So it comes back to: who then hosts most of the data, and how do we create local acts or laws to govern the data that we’re using when it’s actually not being hosted in Africa? So when you talk about what should be the priorities and the expectations from the youth perspective, it is: how much influence do we have over the data that we are producing at a local level when it’s being hosted outside of Africa? Because, give or take, I can use this as an example: the majority of governments and civil society organizations that are based in Africa are not using local tools. For example, I think Microsoft is one of the big techs whose platforms most African countries, governments, and civil society are using. Microsoft 365 is a good case in point, right? We use it for most of our data, and it’s not stored in Africa at all. Most of the data centers are not based in Africa, yet the data that is being received from Africa is something that we expect to input into AI, and we’re talking about governance. So I think, from the youth perspective, looking forward: how then do we create a balance between globalized data versus governing data from the local perspective, and how much benefit do African countries get from the data they are giving AI systems which are not being hosted and leveraged from the African perspective? I think that’s pretty much it from the youth perspective, aside from just the adoption of AI by the majority of youth in Africa.
I’ll end here for now.

Yoichi Iida: Okay, thank you very much. A very interesting perspective, and it covers a lot of things. It’s very interesting to see: we gather for the internet, we talk about AI, and now we are talking about data. So everything is related, of course, to each other, and probably we also need to talk about infrastructure, such as data centers or computing power. Melinda also talked about the supply chain, and business is trying to reflect the governance discussion onto the reality of the supply chain. It’s very interesting, because from the government perspective, we are trying to reflect the reality of the supply chain in the discussion of policy making. So it can be a kind of very healthy mutual interaction, but if it fails, the future will not be very productive. It’s very interesting. And as governments, we have been making a lot of efforts in developing governance frameworks, such as the G7 Hiroshima Code of Conduct that Melinda talked about, on which we spent a lot of time with Ambassador Verdier and other colleagues from G7 countries. Also, over the last few years, we had the Council of Europe AI Convention, we had the EU AI Act put in place, and the Global Partnership on AI was integrated with the OECD AI community. A lot of things happened over the last one or two years, and everything was connected, actually, to the OECD, and my partner Audrey Plonk has been looking after probably everything, and she knows everything, I believe. I would like to ask for her comment, an overview of the current situation and also the points that previous speakers touched upon. So Audrey, please.

Audrey Plonk: Thank you, Yoichi, for the kind introduction, and hi to everyone. Sorry to not be with you in Riyadh today, but thank you so much for having me and the OECD on the panel. I’ll be brief so that we can have more discussion. But just to say, we’ve had, you know, of course, in the last couple of years…

Yoichi Iida: I’m sorry, we cannot hear you at the moment. Can you try again? Does this work? No? No, I’m sorry. Okay, the technical people are working on this, so please wait, and I will speak to you once again later. And now, so we talked about a lot of things, and we have touched upon data and infrastructure, and, you know, we talked about the challenges and risks, of course, as well as opportunities. And also, we heard the views from different communities. So now, we do not have enough time, but I would like to invite all the speakers to make a comment on your perspective

Yoichi Iida: about the responsibilities or roles, or what you are planning to do, or as your own community, such as business, government, industry. Okay, so I would like to invite for the last comment from all speakers, but before that, I would like to invite Audrey to try again.

Audrey Plonk: Does it work now? Okay, you can hear me now. Oh, wonderful. Thank you. I think maybe I was in the observer room and not the speaker room. Anyway, thank you for the kind introduction, Yoichi, and hi, everyone. Sorry not to be with you in person, but I'm sure you're having a great IGF in Riyadh. On behalf of the OECD: we have obviously seen a lot of changes in the global internet governance space in the past five years, since we initially adopted our first AI recommendation, which we revised earlier this year. And of course, as Yoichi mentioned, we have had the emergence of safety institutes, we have had changes just recently here at the OECD with the integration of the Global Partnership on AI into our work programme, and the emergence of many different policy topics, many of which other speakers talked about. To give a couple of examples: we know that issues of data and AI are critical; everybody has said that in their own way. Our expert community here at the OECD has more than 400 people, and one of the more recent things we did in the last six months was create a group focused on privacy, data, and AI. So the topics on the table, and the issues you are coping with in different regions around the world, we see very much across the community we work in, which is a broad global community. I would just say that if you are not familiar with the OECD's AI observatory at oecd.ai, it is really the place where we are trying to put as much data and evidence as possible behind trends that are happening in AI, trends like which languages language models are being built on, so that policymakers can look at that data and start to shape a policy environment that implements the broad principles I think we all agree on: things that have been mentioned already, like quality and fairness, and bridging divides.
So if you haven't checked out the observatory, it is a great place to look at things like where research cooperation is happening across countries, where patents are being filed, and where investment is going into AI. As we build it out, we now have 70 different jurisdictions participating in the observatory, and we invite many more to come join us; I have a colleague in the room that you can talk to since I am not there. A big part of what we are trying to do at the OECD is to provide as much interoperability and harmonization as possible across different approaches, whether technical standardization approaches or policy approaches. Many people have said how important it is that we operate in a global way, and we are trying to bring our analytical, data-driven approach to AI. As we move forward into 2025: 2024 has been a very jam-packed year for AI, with a lot of focus on important issues of safety and frontier models, but I also note the importance others have placed not just on frontier models but on the day-to-day integration of AI. I will close by saying that some of the data we released earlier this year shows just how much runway there is for AI to diffuse, or be adopted, across industries, with a lot of potential benefit. At best we see about an 8% diffusion rate of AI technologies, and mostly in large companies. So to make AI both more accessible and more widely adopted, we know it has to be trustworthy, but we also have to put some of these other framework conditions in place around safety, security, and fairness to make those diffusion numbers go up. I look forward to the rest of the discussion. Thank you, Yoichi.

Yoichi Iida: Thank you very much, Audrey, for the comment. Our remaining time is very limited, but I would like to hear one more voice from each of the speakers. We always hear about inclusivity when we talk about AI governance, and with the GDC a lot of people are talking about AI gaps and the AI divide. The OECD is a relatively small group, with 38 member countries, but after integrating with GPAI it has more than 40 members and is welcoming more, so there will be a more inclusive group for AI discussion. I think France has a similar perspective on AI governance as you organize the AI Action Summit next year. So I would like to ask the Ambassador: what will be the objectives and goals of the AI Action Summit? You have just two minutes.

Henri Verdier: Hello, hello. Yes, in two minutes. First, maybe there is something we have not said enough in this room, and maybe not within the IGF itself. AI is not just a very promising technology. It is also a source of power and of intense competition: between companies that want to take the lead, a geopolitical competition between models, and a competition between international organizations to take the lead. We have to face this; if we do not recognize it, we will not do a good job. For us, for France, and for many others, one of the big threats to the future of AI and of AI governance would be a fragmentation of global governance. If we let fragmentation happen, we will start a race to the bottom, a race to the worst: everyone, to remain strong in the competition, will propose the weakest regulation or governance. So we have to stick together, we have to exchange. Yes, of course, we think the political framework of the OECD is of the utmost importance. We discuss, we integrate, we agree, and we are doing a great job within GPAI and with the OECD. But we think we also need a universal conversation. So the Paris Summit, on February 10 and 11, very soon now, in two months, will probably be the biggest international summit so far. We invited 110 heads of state and government, and we expect around 80 of them, as well as most heads of international organizations. We will propose an agenda around broad, good governance, reflecting the holistic approach I mentioned earlier. It will also be a very intense multi-stakeholder conversation: we expect some 1,000 to 2,000 delegates from research, from the industrial and private sectors, and from civil society. We will organize the conversation around three main topics, or maybe four, because risk, security, safety institutes, and even the question of catastrophic or existential risk still matter. But we will add three layers.
A conversation regarding sustainable AI, because let's try not to break the planet again with this new, promising technology. A conversation about broad governance, addressing all the concerns I mentioned earlier. And a conversation about something we have not addressed so far: big public investment and digital public infrastructure, because frankly, we do not want this movement to be completely privatized; we also need public resources. Just to conclude with this: I travel a lot, and I meet a lot of researchers everywhere, including at the biggest American universities. Today, let's face it, public research cannot reproduce the results of the biggest companies. That is not your fault; you are not responsible. But we want, we need, a world where there is a common knowledge and where public research can at least reproduce, or maybe even preempt, but at least reproduce, the results of the private sector. To address this problem, we need more money in the public sector. Okay. Thank you very much.

Yoichi Iida: We have five minutes left, but I would like to hear one more voice from the other speakers. Melinda, what is your expectation, and what do you think you can do in developing AI governance and the AI ecosystem?

Speaker 1: Thank you. Just a couple of things I want to touch on. Companies clearly have significant responsibility here, particularly around participating in the international frameworks and initiatives being developed and adhering to them, and around working with safety institutes in their home countries to develop research and evaluations of advanced models. Being transparent with everyone about how large models are developed, what they are capable of, what the risks are, and how those risks are being addressed: all of those things are really important, and they sit squarely on the shoulders of developers. And then there are a lot of public-private partnerships that need to be developed, whether on research capabilities or on developing data that is representative of the entire world; for example, we are working with the Gates Foundation, which is not a government, to develop African data for training. So how can we partner together to advance some of the common goals? That is going to be really important going forward, and I think we are starting to get a sense of what the real needs and the real opportunities are. No one can do this alone. Everyone has their own interests to further, but we need to be doing this together. So getting a clear understanding of where we can partner to advance governance is, I think, the next phase.

Yoichi Iida: Thank you very much. That will be very important. And now I will turn it over to you.

Thelma Quaye: Thank you. I will speak from the governance perspective. One of the key gaps we have seen is the disparity between governance and enforcement. From the African perspective, we need to come up with ways of enforcing so that governance is effective; there is no point in spending so much time developing policies and governance without enforcing them. On that front, we are looking at measures such as giving tax breaks to companies that are compliant and setting up authorities that are autonomous. So enforcement is a key thing to consider. The other point is the multi-stakeholder approach. I believe AI brings the world together even more than the internet, so there is a need for a universal approach to AI governance, and we fully support what was said about bringing everybody in. That is what we do at Smart Africa: bringing the private sector and civil society organizations together with governments. But that is only Africa, and we also need to go beyond Africa toward a universal model. Thank you.

Yoichi Iida: Thank you very much. Enforcement is indeed very important, and it is actually what we are working very hard on with Audrey. Before I invite Audrey to comment on this, let me invite Leydon to share his expectations. What do you think you can do?

Leydon Shantseko: The first point is that youth are often not included in the conversation, especially when it comes to governance. I cannot speak very confidently about how other continents or regions have handled youth involvement, but with regard to Africa, there is a feeling among youth that governance mostly comes in to restrict or regulate innovative ideas before they can actually thrive. So the call from youth is for governments to engage them with an open mind, so that innovation is allowed to thrive, and the safeguards, in terms of what the risks are, can then be discussed with youth in the meetings, in the room. When we talk about restricting certain access, it ends up looking as if regulation is being brought in against innovation, and there is also a tendency to regulate something before fully understanding how big it is. That, I think, is one of the challenges. Secondly, youth should be allowed to keep innovating, even when we are talking about emerging technology. Youth have been at the heart of most innovative ideas, so when we talk about AI and emerging technologies, we should not think of youth in terms of not knowing something, but give them the benefit of the doubt to take risks, thrive, and grow. Most of the technical ideas, and much of the infrastructure and developments we have as foundations, were actually developed by young people. So involving them from the start all the way to the end of the process is something I would recommend. And, let me say this in front of the Ambassador: considering the number of heads of state being invited, the question is how that is balanced against the youth. If it is mostly governments making these policies, there is a high chance that youth perspectives are left out of the room, and youth are involved only at a later stage, even though they are behind much of the innovation. So trying to create a balance between the government perspective and the youth perspective in most governance processes becomes very critical. Thank you.

Yoichi Iida: Thank you, that sounds good, and thank you very much for the comment. We have now heard about enforcement and innovation. I would like to invite Audrey to comment before I conclude.

Audrey Plonk: Thanks, Yoichi. I just want to say that governance is a lot more than regulation. Regulation is really important, but governance can include other tools, and I do not want to miss the opportunity to talk about one that we hope to finalize very soon this week: a reporting framework to implement the Hiroshima AI Code of Conduct. The purpose of this framework is to allow companies, institutions, and organizations to report publicly on their activities related to the code of conduct, so that we can take voluntary tools and move them from nice words on paper to an ecosystem of information that can inform policy decisions. A prior speaker rightly said that we often operate in a vacuum. Those of us who work on AI day in and day out may know a lot, but there is a lot we do not know about AI, and it is very hard to implement good governance or regulation in that vacuum, so filling that void is an important step. As part of the process of developing this reporting framework, we have also mapped the code of conduct against many other codes of conduct. For the organizations we will be asking, and hoping, to adopt it and engage in this governance dialogue, the aim is to go beyond talking, which is also really important, to information sharing in a concrete way that can help inform researchers and the public, and to make these tools as interoperable as possible. That is not a negative thing for the world; instead, it advances a system whereby we can compare, as we say, apples to apples, and see what is happening at a global scale. So I hope to be able to announce good news there.
I know you hope so too, Yoichi, as we really work to implement some of these extremely important governance activities of the last couple of years and try to move them out of the negotiating rooms and into the practical implementation phase. So thank you so much.

Yoichi Iida: Okay. So, if the Hiroshima Code of Conduct is put into action together with a monitoring mechanism, that will be a very experimental mechanism in which the private sector and governments work together, on a voluntary basis, to ensure the safety, security, and trustworthiness of AI systems. We know that is not the only answer, but we are making a lot of effort to build up a governance framework that is inclusive and trustworthy. We understand, of course, that this effort must be carried out in collaboration among the different stakeholders: not only governments, but also industry, civil society, academia, youth, and others. I hope the discussion was productive and helpful to the audience, and I hope we continue working together toward an open, free, non-fragmented, and trustworthy AI ecosystem globally. Thank you very much to all the speakers, and thank you to the audience. I am very sorry about the audio system, but I hope you enjoyed the discussion. Thank you very much.

Yoichi Iida: Thank you.

H

Henri Verdier

Speech speed

128 words per minute

Speech length

1085 words

Speech time

505 seconds

AI as a source of progress and power requires balanced governance

Explanation

AI is not just a promising technology but also a source of power and intense competition between companies and countries. There is a need for balanced governance to ensure AI benefits humanity while addressing potential risks.

Evidence

The speaker mentions the upcoming Paris Summit in February, which will be a large international summit with 80+ heads of state to discuss AI governance.

Major Discussion Point

Current State and Challenges of AI Governance

Need for holistic, international approach to address diverse challenges

Explanation

A holistic approach to AI governance is necessary to address various challenges including security, cultural diversity, environmental impact, and intellectual property. The speaker emphasizes the importance of a universal conversation on AI governance.

Evidence

The Paris Summit will propose an agenda around broad governance, addressing topics like sustainable AI, digital public infrastructure, and the needs of emerging economies.

Major Discussion Point

Priorities for AI Governance Frameworks

Agreed with

Speaker 1

Thelma Quaye

Yoichi Iida

Agreed on

Need for inclusive and international approach to AI governance

Differed with

Speaker 1

Differed on

Approach to AI governance

Governments responsible for balancing innovation and security

Explanation

Governments have a responsibility to balance innovation and security, economic growth and equality, and efficiency and diversity in AI development. There is a need to ensure that AI progress benefits humankind as a whole.

Major Discussion Point

Roles and Responsibilities in AI Development

Agreed with

Speaker 1

Leydon Shantseko

Agreed on

Importance of balancing innovation and regulation

S

Speaker 1

Speech speed

145 words per minute

Speech length

461 words

Speech time

189 seconds

Open source AI models promote innovation and economic development

Explanation

Open source AI models provide flexibility for developers to customize applications to local needs. This approach is seen as beneficial for economic development by creating a diverse ecosystem of AI tools.

Evidence

The speaker mentions their company’s large language model, Llama, which is made available for free download.

Major Discussion Point

Current State and Challenges of AI Governance

Agreed with

Henri Verdier

Leydon Shantseko

Agreed on

Importance of balancing innovation and regulation

Governance should reflect realities of AI value chain and ecosystem

Explanation

AI governance frameworks need to reflect the different roles and responsibilities of actors in the AI value chain. This includes model developers, deployers, and downstream developers, each with unique responsibilities in risk mitigation.

Evidence

The speaker discusses the differences between open source and closed model providers in terms of control and visibility into downstream uses.

Major Discussion Point

Priorities for AI Governance Frameworks

Agreed with

Henri Verdier

Thelma Quaye

Yoichi Iida

Agreed on

Need for inclusive and international approach to AI governance

Differed with

Henri Verdier

Differed on

Approach to AI governance

Companies should be transparent about AI development and risks

Explanation

Companies have significant responsibility in AI governance. They should participate in international frameworks, work with safety institutes, and be transparent about their large models’ development, capabilities, and risks.

Evidence

The speaker mentions their company’s support for various AI global frameworks and commitment to publish an AI safety framework.

Major Discussion Point

Roles and Responsibilities in AI Development

T

Thelma Quaye

Speech speed

150 words per minute

Speech length

924 words

Speech time

368 seconds

Africa lacks necessary infrastructure and skills to fully leverage AI

Explanation

Africa faces challenges in leveraging AI due to a lack of data infrastructure and skills. The continent needs to increase its data center capacity and develop AI-related skills to fully benefit from the technology.

Evidence

The speaker mentions that the number of data centers in Africa equals the number in Ireland, despite the vast population difference.

Major Discussion Point

Current State and Challenges of AI Governance

Importance of building local African datasets and AI capabilities

Explanation

There is a need to create African datasets and develop local AI capabilities to ensure AI is relevant and beneficial to the African context. This is crucial for addressing biases and ensuring AI tools are appropriate for African needs.

Evidence

The speaker gives an example of AI banking tools from other regions rejecting loan applications based on race or gender.

Major Discussion Point

Priorities for AI Governance Frameworks

Differed with

Leydon Shantseko

Differed on

Focus of AI development in Africa

Africa needs to increase data infrastructure and sovereignty

Explanation

Africa needs to increase its data infrastructure and ensure data sovereignty. This is crucial for leveraging AI effectively and ensuring that AI governance is relevant to the African context.

Evidence

The speaker mentions the need for effective enforcement of AI governance policies in Africa, suggesting measures like tax breaks for compliant companies.

Major Discussion Point

Roles and Responsibilities in AI Development

Agreed with

Henri Verdier

Speaker 1

Yoichi Iida

Agreed on

Need for inclusive and international approach to AI governance

L

Leydon Shantseko

Speech speed

172 words per minute

Speech length

405 words

Speech time

141 seconds

Youth perspective highlights data sovereignty and localization issues

Explanation

The youth perspective emphasizes issues of data sovereignty and localization in Africa. There is a concern about the lack of local data centers and the implications for data governance when most data is hosted outside Africa.

Evidence

The speaker mentions that most African governments and organizations use platforms like Microsoft 365, with data stored outside Africa.

Major Discussion Point

Current State and Challenges of AI Governance

Youth involvement needed throughout AI governance process

Explanation

There is a need for greater youth involvement in AI governance processes. The speaker argues that youths should be engaged with an open mind, allowing for innovation to thrive while addressing potential risks.

Evidence

The speaker mentions the perception among youth that governance often comes in to regulate innovative ideas before they can thrive.

Major Discussion Point

Priorities for AI Governance Frameworks

Youth should be allowed to innovate while addressing risks

Explanation

The speaker advocates for allowing youth to continue innovating in emerging technologies like AI. There’s a call to give youth the benefit of the doubt to take risks and grow, while still addressing potential risks.

Evidence

The speaker points out that youths have been at the heart of most innovative ideas and have developed many of the foundational technologies we use today.

Major Discussion Point

Roles and Responsibilities in AI Development

Agreed with

Henri Verdier

Speaker 1

Agreed on

Importance of balancing innovation and regulation

Differed with

Thelma Quaye

Differed on

Focus of AI development in Africa

A

Audrey Plonk

Speech speed

139 words per minute

Speech length

1433 words

Speech time

616 seconds

OECD working to provide data and harmonization across AI approaches

Explanation

The OECD is working to provide data and evidence on AI trends through its AI observatory. This effort aims to help policymakers shape policy environments that implement broad principles like quality, fairness, and bridging divides.

Evidence

The speaker mentions the OECD AI observatory at oecd.ai, which provides data on trends like language model development, research cooperation, and investment in AI.

Major Discussion Point

Current State and Challenges of AI Governance

Developing interoperable governance tools and reporting frameworks

Explanation

The OECD is working on developing interoperable governance tools and reporting frameworks. This includes an implementation framework for the Hiroshima AI Code of Conduct, aimed at allowing organizations to report on their activities related to the code.

Evidence

The speaker mentions the upcoming finalization of a reporting framework to implement the Hiroshima AI Code of Conduct.

Major Discussion Point

Priorities for AI Governance Frameworks

Y

Yoichi Iida

Speech speed

121 words per minute

Speech length

2134 words

Speech time

1055 seconds

Multi-stakeholder cooperation needed to advance governance

Explanation

The speaker emphasizes the need for collaboration among different stakeholders to build an inclusive and trustworthy AI governance framework. This includes cooperation between government, industry, civil society, academia, youth, and others.

Major Discussion Point

Roles and Responsibilities in AI Development

Agreed with

Henri Verdier

Speaker 1

Thelma Quaye

Agreed on

Need for inclusive and international approach to AI governance

Agreements

Agreement Points

Need for inclusive and international approach to AI governance

Henri Verdier

Speaker 1

Thelma Quaye

Yoichi Iida

Need for holistic, international approach to address diverse challenges

Governance should reflect realities of AI value chain and ecosystem

Africa needs to increase data infrastructure and sovereignty

Multi-stakeholder cooperation needed to advance governance

Speakers agree on the importance of a comprehensive, global approach to AI governance that involves multiple stakeholders and addresses various challenges across different regions.

Importance of balancing innovation and regulation

Henri Verdier

Speaker 1

Leydon Shantseko

Governments responsible for balancing innovation and security

Open source AI models promote innovation and economic development

Youth should be allowed to innovate while addressing risks

Speakers emphasize the need to foster innovation in AI while also addressing potential risks and security concerns through appropriate governance measures.

Similar Viewpoints

Both speakers highlight the challenges faced by Africa in terms of AI infrastructure, data sovereignty, and the need for local development of AI capabilities.

Thelma Quaye

Leydon Shantseko

Africa lacks necessary infrastructure and skills to fully leverage AI

Youth perspective highlights data sovereignty and localization issues

Both speakers emphasize the importance of transparency and responsibility in AI development, whether from companies or governments.

Henri Verdier

Speaker 1

Companies should be transparent about AI development and risks

Governments responsible for balancing innovation and security

Unexpected Consensus

Importance of youth involvement in AI governance

Leydon Shantseko

Yoichi Iida

Youth involvement needed throughout AI governance process

Multi-stakeholder cooperation needed to advance governance

While youth involvement might not typically be a primary focus in AI governance discussions, both speakers emphasized its importance, suggesting a growing recognition of the need for diverse perspectives in shaping AI policies.

Overall Assessment

Summary

The speakers generally agree on the need for a comprehensive, inclusive approach to AI governance that balances innovation with risk mitigation. There is consensus on the importance of international cooperation, addressing regional challenges, and involving diverse stakeholders, including youth.

Consensus level

Moderate to high consensus on broad principles, with some variation in specific focus areas. This level of agreement suggests potential for collaborative efforts in developing global AI governance frameworks, but also highlights the need for tailored approaches to address region-specific challenges, particularly in developing areas like Africa.

Differences

Different Viewpoints

Approach to AI governance

Henri Verdier

Speaker 1

Need for holistic, international approach to address diverse challenges

Governance should reflect realities of AI value chain and ecosystem

While Henri Verdier emphasizes a holistic, international approach to AI governance, Speaker 1 focuses on reflecting the realities of the AI value chain and ecosystem in governance frameworks.

Focus of AI development in Africa

Thelma Quaye

Leydon Shantseko

Importance of building local African datasets and AI capabilities

Youth should be allowed to innovate while addressing risks

Thelma Quaye emphasizes the need for building local African datasets and AI capabilities, while Leydon Shantseko advocates for allowing youth to innovate freely in AI development.

Unexpected Differences

Data localization and sovereignty in Africa

Thelma Quaye

Leydon Shantseko

Africa needs to increase data infrastructure and sovereignty

Youth perspective highlights data sovereignty and localization issues

While both speakers address data sovereignty in Africa, Leydon Shantseko unexpectedly highlights the youth perspective on this issue, emphasizing the practical challenges of data localization when most platforms used are not hosted in Africa.

Overall Assessment

summary

The main areas of disagreement revolve around the approach to AI governance, the focus of AI development in Africa, and the practical implementation of data sovereignty.

difference_level

The level of disagreement among speakers is moderate. While there is general agreement on the importance of AI governance and development, speakers differ in their specific approaches and priorities. These differences reflect the complexity of AI governance and the need for tailored approaches to different contexts, particularly in developing regions like Africa. The implications suggest that a one-size-fits-all approach to AI governance may not be effective, and that balancing global standards with local needs and perspectives will be crucial.

Partial Agreements

Partial Agreements

All speakers agree on the need for comprehensive AI governance, but they differ in their approaches. Henri Verdier emphasizes a holistic international approach, Speaker 1 focuses on reflecting the AI value chain realities, and Audrey Plonk highlights the OECD’s role in providing data and harmonization.

Henri Verdier

Speaker 1

Audrey Plonk

Need for holistic, international approach to address diverse challenges

Governance should reflect realities of AI value chain and ecosystem

OECD working to provide data and harmonization across AI approaches

Similar Viewpoints

Both speakers highlight the challenges faced by Africa in terms of AI infrastructure, data sovereignty, and the need for local development of AI capabilities.

Thelma Quaye

Leydon Shantseko

Africa lacks necessary infrastructure and skills to fully leverage AI

Youth perspective highlights data sovereignty and localization issues

Both speakers emphasize the importance of transparency and responsibility in AI development, whether from companies or governments.

Henri Verdier

Speaker 1

Companies should be transparent about AI development and risks

Governments responsible for balancing innovation and security

Takeaways

Key Takeaways

AI governance requires a balanced, holistic international approach to address diverse challenges and opportunities

There is a need for inclusive, multi-stakeholder cooperation in developing AI governance frameworks

Infrastructure, skills, and data sovereignty gaps exist, particularly in Africa and developing regions

Open source and transparent AI development can promote innovation and economic development

Youth involvement throughout the AI governance process is crucial

Resolutions and Action Items

OECD working on finalizing a reporting framework to implement the Hiroshima AI Code of Conduct

France organizing an international AI summit in February 2025 to discuss global AI governance

Efforts to make AI governance tools and frameworks more interoperable across different initiatives

Unresolved Issues

How to effectively balance innovation with regulation and risk mitigation

Addressing data localization and sovereignty concerns, particularly for developing regions

Ensuring fair representation and inclusion of diverse perspectives in AI governance discussions

How to enforce AI governance policies effectively, especially in regions like Africa

Suggested Compromises

Using voluntary reporting frameworks and codes of conduct as a middle ground between strict regulation and no oversight

Partnering between public and private sectors to develop representative data and research capabilities

Allowing for innovation while simultaneously developing safeguards and addressing potential risks

Thought Provoking Comments

Are we sure that this impressive revolution will be a progress? Not just innovation, not just power, but a progress for humankind.

speaker

Henri Verdier

reason

This comment frames AI development not just as technological advancement, but raises the crucial question of whether it will truly benefit humanity as a whole. It sets the tone for considering the broader implications and ethics of AI.

impact

This framing shifted the discussion towards considering the holistic impacts of AI on society, culture, and human progress, rather than just focusing on technical capabilities.

I’m likening AI to water. Water sort of nourishes us, it helps us, it helps us to grow our crops. In the same way, AI also helps us to be more efficient. It helps us to digest a lot of information and helps us to leapfrog.

speaker

Thelma Quaye

reason

This analogy provides a powerful and accessible way to understand both the potential benefits and risks of AI, especially from an African perspective. It highlights AI’s transformative potential while also acknowledging the need for proper governance.

impact

This comment brought attention to the specific context and needs of developing regions, leading to a discussion on inclusivity, infrastructure challenges, and the potential for AI to address developmental gaps.

We need to increase the infrastructure. We could say that we will leverage on other people’s infrastructure, which is what we are doing now, but I believe that it takes some sort of sovereignty from us.

speaker

Thelma Quaye

reason

This insight highlights the critical issue of digital sovereignty and the importance of local infrastructure for AI development, especially in the context of developing nations.

impact

It sparked a deeper conversation about data localization, infrastructure gaps, and the geopolitical implications of AI development and deployment.

We have most of the platforms and systems and technologies being used by majority of Africans not developed in Africa, which means data localization doesn’t really exist in context. It can exist on paper, but in reality, data localization is not working.

speaker

Leydon Shantseko

reason

This comment exposes a critical gap between policy intentions and practical realities in data governance, particularly in the African context.

impact

It led to a discussion on the challenges of implementing effective data governance and AI policies in regions that lack control over their technological infrastructure.

Of course, AI is not just a very promising technology. That’s also a source of power and an intense competition between companies, they want to take the lead. That’s also a geopolitical competition between models and that’s a competition between international organizations to take the lead.

speaker

Henri Verdier

reason

This comment brings attention to the often overlooked geopolitical and competitive aspects of AI development, framing it as not just a technological issue but a matter of global power dynamics.

impact

It broadened the discussion to include considerations of international relations, economic competition, and the potential for fragmentation in global AI governance.

Overall Assessment

These key comments shaped the discussion by broadening its scope from purely technical considerations to encompass ethical, geopolitical, and developmental aspects of AI. They highlighted the need for inclusive, globally coordinated governance that addresses the specific challenges of developing regions while also considering the competitive and power dynamics at play. The discussion evolved from abstract principles to concrete challenges in implementation, particularly around data sovereignty and infrastructure development. This multifaceted approach underscored the complexity of AI governance and the necessity for diverse perspectives in shaping global policies.

Follow-up Questions

How can we connect the various AI governance conversations and frameworks to ensure they work together coherently?

speaker

Melinda Claybaugh

explanation

There are many parallel conversations happening around AI safety, data privacy, copyright, and governance of different types of AI. Connecting these is important for developing comprehensive and effective AI governance.

How can AI governance frameworks reflect the realities of the AI value chain, particularly for open source AI?

speaker

Melinda Claybaugh

explanation

Different actors in the AI ecosystem have unique roles and responsibilities. Governance frameworks need to account for these differences, especially considering open source AI models.

How can we increase AI infrastructure, particularly data centers, in Africa?

speaker

Thelma Quaye

explanation

The lack of data centers in Africa compared to its population size limits the continent’s ability to leverage AI effectively and maintain data sovereignty.

How can we use AI to scale up technical and vocational education and training (TVET) skills in Africa?

speaker

Thelma Quaye

explanation

Using AI and virtual reality to create classrooms for learning technical skills could help address the skills gap in Africa.

How can we develop more African-specific AI datasets?

speaker

Thelma Quaye

explanation

Having AI trained on African datasets is crucial for ensuring AI tools are relevant and unbiased for African contexts.

How can African countries effectively govern data that is not hosted locally?

speaker

Leydon Shantseko

explanation

Many African governments and organizations use platforms hosted outside Africa, creating challenges for local data governance and regulation.

How can we ensure a fair balance between global data use and local data governance?

speaker

Leydon Shantseko

explanation

There’s a need to balance the benefits of global AI systems with local control and governance of data.

How can we increase AI adoption across industries, particularly in smaller companies?

speaker

Audrey Plonk

explanation

Current AI adoption rates are low, especially outside of large companies. Increasing adoption could lead to significant benefits but requires addressing issues of trust, safety, and accessibility.

How can we ensure public research can reproduce or preempt private sector AI developments?

speaker

Henri Verdier

explanation

There’s a growing gap between public and private AI research capabilities, which could limit common knowledge and public oversight of AI developments.

How can we develop partnerships between public and private sectors to advance common AI goals?

speaker

Melinda Claybaugh

explanation

Collaboration is needed in areas such as research capabilities and developing representative global datasets for AI training.

How can we improve the enforcement of AI governance policies, particularly in Africa?

speaker

Thelma Quaye

explanation

There’s a need for effective enforcement mechanisms to ensure AI governance policies have real impact.

How can we better involve youth in AI governance discussions and decision-making processes?

speaker

Leydon Shantseko

explanation

Youth perspectives are often left out of governance processes, despite young people being at the forefront of many AI innovations.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #14 Children in the Metaverse

Session at a Glance

Summary

This discussion focused on children’s rights and safety in the metaverse and other virtual environments. Experts from various fields explored the challenges and opportunities presented by these emerging technologies. The conversation highlighted that children are already active users of virtual worlds, with over 50% of metaverse users being under 16 years old. Participants discussed the need for age verification and data protection measures, balancing these with data minimization principles.

The discussion touched on existing regulations such as the GDPR and their applicability to virtual environments, while noting the need for more specific governance frameworks for the metaverse. Experts emphasized the importance of involving children in the development of policies and technologies that affect them, as well as the need for child-friendly reporting mechanisms and effective remedies in virtual spaces.

The potential benefits of the metaverse for children’s education, creativity, and advocacy were highlighted, alongside concerns about privacy, safety, and potential exploitation. Participants stressed the importance of digital literacy for both children and adults. The discussion also covered the role of parents and educators in supporting children’s safe engagement with virtual environments.

The Global Digital Compact was presented as a framework for shaping the digital environment, with a focus on protecting children’s rights online. Overall, the discussion emphasized the need for a balanced approach that protects children while also empowering them to benefit from the opportunities presented by virtual worlds and emerging technologies.

Keypoints

Major discussion points:

– The metaverse and virtual worlds are already inhabited by many children, raising concerns about safety and rights

– There are challenges around age verification, data collection, and protecting children while allowing participation

– Existing regulations such as the GDPR provide some guidance, but there are gaps in regulating virtual reality environments

– Children want to be empowered and involved in shaping policies for the metaverse, not just protected

– The metaverse offers opportunities for education and child advocacy if developed responsibly

The overall purpose of the discussion was to explore how children’s rights and safety can be ensured in virtual reality and metaverse environments, while also leveraging the opportunities these technologies offer for children’s development and participation.

The tone of the discussion was primarily serious and concerned about child protection, but became more optimistic towards the end when discussing the potential benefits and children’s desire to be involved. There was a mix of caution about risks and enthusiasm about possibilities throughout.

Speakers

– Jutta Croll, Chairwoman of the Digital Opportunities Foundation

– Michael Barngrover, Managing Director of XR4Europe

– Deepak Tewari, Founder of Privately, a technology company focused on online child safety

– Sophie Pohle, Advisor at the German Children’s Fund (Deutsches Kinderhilfswerk e.V.)

– Lhajoui Maryem, Projects and Network Lead at the Digital Child Rights Foundation

– Emma Day, Nonresident fellow at the Atlantic Council’s Digital Forensic Research Lab and a human rights lawyer

– Deepali Liberhan, Global Director of Safety Policy, Meta

– Torsten Krause, Project Consultant at the Children’s Rights in the Digital World

– Hazel Bitana, Deputy Regional Executive Director at the Child Rights Coalition Asia

Full session report

The Nature and Scope of Virtual Worlds and the Metaverse

The discussion began with Michael Barngrover, Managing Director of XR4Europe, exploring the broad nature of virtual worlds. He highlighted at least four relevant concepts: virtual reality (VR), mixed reality, 3D gaming, and social media platforms. Barngrover also noted that social media platforms are already functioning as virtual worlds based on user behavior, and discussed the cognitive load of multi-presence in these environments.

Deepak Tewari, founder of Privately, provided concrete statistics, noting that the metaverse currently has 600 million monthly active users, with 51% under the age of 16 and 31% under 13. This revelation underscored the urgency of addressing children’s rights and safety in virtual environments.

Children’s Rights and Safety in Virtual Environments

Sophie Pohle, representing the German Children’s Fund, referenced General Comment 25 as a framework for children’s rights in digital environments, establishing a legal and ethical foundation for the conversation.

Lhajoui Maryem from the Digital Child Rights Foundation emphasized the need for age-appropriate content and safer digital experiences for children. Hazel Bitana from the Child Rights Coalition Asia presented children’s perspectives from the Asia-Pacific region, stressing the importance of child-friendly reporting mechanisms and effective remedies in virtual spaces. She also highlighted children’s desire to be involved in policymaking processes.

Deepali Liberhan outlined Meta’s approach, explaining that the company implements default privacy settings, parental supervision tools, and third-party age verification through Yoti for teens. However, this platform-specific approach was contrasted with calls for broader, more comprehensive safeguards across all virtual environments.

Data Collection, Privacy Concerns, and Age Verification

Emma Day, a human rights lawyer and artificial intelligence ethics specialist, highlighted the unprecedented amount of sensitive data collected in the metaverse as a significant concern. The discussion revealed tensions between effective age verification and data privacy.

Deepak Tewari presented on privacy-preserving age detection technology, arguing that such solutions already exist but face pushback. He suggested implementing age verification at the device or operating system level to minimize data collection. This contrasted with Liberhan’s emphasis on balancing age verification with data minimization principles, highlighting the complexity of implementing safety measures without compromising user privacy.

Regulation and Governance of Virtual Environments

The applicability of existing regulations like GDPR to virtual environments was discussed, with Emma Day noting that while these regulations apply, new challenges emerge in the metaverse context. She advocated for a multi-stakeholder approach to developing governance frameworks.

Torsten Krause mentioned the Global Digital Compact as a framework for protecting children’s rights in digital spaces. He elaborated on its principles and potential impact on international cooperation for digital governance. Krause also emphasized the responsibility of states in implementing child safety policies and standards.

The discussion touched on the Australian law preventing children under 16 from using social media, sparking debate about the effectiveness and implications of such stringent measures.

Opportunities and Risks of the Metaverse for Children

The potential benefits and risks of the metaverse for children were explored. While some speakers highlighted educational opportunities through immersive experiences, audience members raised concerns about addiction and disconnection from reality. Hazel Bitana provided a balanced perspective, acknowledging the potential for creative expression while also warning about the risk of deepening discrimination in virtual spaces.

The importance of avatars in virtual worlds was discussed, highlighting their role in self-expression and identity formation for children and teens.

Unresolved Issues and Future Directions

Several key issues remained unresolved, including effective age verification without excessive data collection, appropriate consequences for virtual harms, and closing digital divides in access to metaverse technologies. The ongoing challenge of balancing innovation with protection in metaverse regulation was emphasized.

Jutta Croll mentioned an upcoming session on age-aware internet of things, while Emma Day noted a panel on governance of edtech, neurotech, and fintech, indicating the breadth of related topics to be explored.

Conclusion

The discussion underscored the complex nature of children’s rights and safety in virtual environments. While there was broad agreement on the importance of protecting children, significant differences emerged in approaches to implementation. The conversation highlighted the need for continued dialogue, research, and multi-stakeholder collaboration to develop effective governance frameworks that balance safety, privacy, and children’s right to participation in the evolving digital landscape.

Session Transcript

Jutta Croll: Welcome to our workshop, Children in the Metaverse. My name is Jutta Croll, I am the chairwoman of the Digital Opportunities Foundation. Welcome to those people who are on site in Riyadh, and also to those who are taking part in our session online. As I said before, my name is Jutta Croll, I’m chairwoman of the German Digital Opportunities Foundation, and I prepared the workshop proposal together with my colleague, Torsten Krause, and with Peter Josius from the Netherlands. Thank you for being here. Yes, just to set the scene: with the availability of artificial intelligence and the creation of virtual worlds, the immersion of digital media in everyday life has reached a new level of development, although the concept of virtual reality dates 35 years back, like the World Wide Web and like the Children’s Convention. So we have kind of a coincidence of certain developments, but nonetheless, virtual reality has only come into our lives during the last several years. And what we already know from 20 years of children using digital devices and being online on the internet: they are the early adopters, and they are those who will also be the first inhabitants of the virtual environment. So that is why we have come together to consider how children’s rights can be ensured in virtual reality and in the metaverse, and I’m really happy to have those esteemed speakers around me and also online. I will introduce the speakers to you once it’s their turn to speak, and we will begin with Michael Barngrover, who can already be seen in the Zoom. Michael is the Managing Director of XR4Europe, which is an industrial association with the mission to strengthen collaboration and integration between XR industry stakeholders across the European continent.
So he’s coming from the technical part, but I know that he has children’s rights on his mind. I will hand over to you, Michael; your slides are already to be seen, so just start your presentation. Thank you.

Michael Barngrover: Excellent. Thank you very much. Just to confirm that everyone hears me all right. Good. Yep. Okay. Then yes, I’m the Managing Director of XR4Europe, and we are an association that supports those who work with XR all over the continent of Europe, both researchers as well as companies and policymakers, helping to make the future of virtual worlds in Europe one in which future generations can be healthy and productive. And so we can go to the next slide, I’d like to focus on the scope, so scoping virtual worlds. So the next slide, is that something that I don’t have control over doing? No, I do not. Can you move to the next slide? Yeah, thank you. Okay. So with virtual worlds, it is a challenging term, because it’s very broad and very encompassing. There are at least four broad concepts of virtual worlds that I think are very relevant. The first one being VR, virtual reality: you’re completely immersed, taking place and sharing place, often with others, in a fully digital environment. But of course, it’s not fully digital, because you are there, and you have both a digital and a non-digital existence. And we have mixed reality, which is becoming more prominent, starting last year with new devices, new headsets that can do this. Almost every device going forward is going to be increasingly capable of mixed reality, which brings the digital and the non-digital together in an integrated experience, which means that in the same way that you are present in this digital environment, this digital medium, you are still present in your physical environment. So you’re having to interact with both. But both of these are built off of another kind of virtual world, which is much more mature and much more common, which is basically traditional 3D gaming, and even 2D gaming.
So these worlds, like this image here of Fortnite, are places where hundreds of millions of users, billions of users, are actively engaged, including a lot of children. So this is where the virtual world starts to extend beyond what we think about the future, to what we’re actually concerned about today, right now. And from what makes these environments safe and healthy today, we should be able to draw lessons for future virtual worlds that would be mixed reality and virtual reality. But even these 3D worlds follow a lot of the same experiences as 2D and non-visually represented or non-immersive, non-3D environments, like even social media platforms. Social media platforms are virtual worlds, in that they are populated, they are active, they are interactive, but they are virtual. They don’t have a direct correlate with the physical world, except through us, the users. Can I go to the next slide, please? Because in the next slide, we’re going to talk about, sorry, it’s back a couple of slides. It’s the cognitive load of multi-presence. So a couple of slides before this one. There is a cognitive load to thinking about and managing your activities and your presence, and thus the activities and presence of others, in multiple worlds: the non-digital world, but also the digital virtual worlds. And when we have many virtual worlds that we are active in, then that is an additional load. Already this is, again, something we’re doing right now with social media platforms, but it’s also true of 3D platforms that are persistent. The traditional gaming platforms are always there. Our friends may be there. When we’re not there, these worlds are still active, and we are still maybe thinking about them, particularly children, who generate a lot of their social equity through their activities in these platforms. As these become 3D immersive spaces, it’s actually even more challenging.
There’s more that we have to think about. We have to think about, as we move around in this space, the gist of the actual physical space as it maps and correlates to a virtual space, for example in a mixed reality environment. The table or the chair, those things are in the virtual environment, whether they are represented there or not. They’re in my physical space, so they’re also in my virtual space, even if they’re not virtually represented there. So even from a safety perspective, you have to manage these things. But when we start putting more people in there with us, that becomes more complicated as well. So cognitive load for everyone, but especially children, is something really to be concerned about. Can we go to the next slide, please? The avatar slide. So when we’re talking about being in these worlds with each other, yes, we have this question about how we’re represented, how we’re viewed there. So avatars are a really important topic. I’ll go a little quicker just because we’re getting, I think, a little over time for my part. But the avatar slide is just to say that avatars come in many different forms, and it’s not possible for any one company to provide you all the ranges of representation here, whether cultural or even age and ideology. So there’s a big question of whether people should be able to make their own avatars and bring them into these virtual worlds, or whether they should limit themselves to the options that are provided to them. And these, of course, have tremendous consequences, particularly for younger people, as they start to spend more time building social equity while using avatars to represent themselves with others. And it’s not just their choice. When they choose an avatar, it does not mean that that’s what they are. They have to then still negotiate with others, with peers, to establish what their identity is in that social environment. So avatars are just a tool towards establishing identity in a virtual world.
And then the next slide, please. Just very quickly, once we’re in there and we’re represented, sorry, a couple slides back. There’s a slide about policing, but basically, once we’re in this environment, it is still something of a free-for-all environment in virtual worlds, because we do not have something akin to policing or criminal justice. So when crimes happen in these virtual environments, right now it’s very hard to arrive at an acceptable consequence for them, but it’s also very difficult to know what a suitable consequence is. A virtual crime is a crime by definition, but we don’t know how equivalent it is to the similar or same crime that may take place outside of the virtual realm. And this is, again, not new to virtual worlds. If we consider social media platforms to be a form of virtual world, then this is something that we’re already trying to legislate and understand. So I’ll close it there, so we can move on to the next speaker, but thank you very much.
Jutta Croll: Thank you, Michael, for giving us a first insight into what virtual worlds will be, can be, or are already. We will now go to Deepak Tewari from Privately, who will tell us a little bit about technology to guide us through the metaverse. Deepak, I can see, sorry, but I can see everyone. Okay, we can hear you, and now we can also see you. Would you please introduce yourself, and then we will have a look whether your slides will work, because there are some technical issues. It’s the first day of the Internet Governance Forum, and we are in a really busy, busy room, and the technicians are doing a great job, but sometimes things go wrong. We will try. It’s all right. Even if there are no slides, I will be able to conduct myself.

Deepak Tewari: I’m very happy to be here addressing all of you. I run a company based out of Lausanne in Switzerland, and the company is called Privately. Privately is a technology company, and for the last decade, we have been developing and deploying various technologies to keep miners safe online, and this includes technology that identifies threats and risks online, but also technology the technology of age assurance and age assurance is essentially the, the technology behind being able to identify whether someone is a child online. And hopefully I could get my slides up and I would, I would have really wanted to show you how it’s working and, and what it is like but going back to the subject at hand the metaverse. So my previous speaker he was mentioning about avatars and the state of the metaverse or virtual worlds in general. And that’s essentially where I’m calling in from I’m sitting very close to that lake that you see here and it’s very beautiful. So, very happy to be here. And if you could kindly go to the next slide. So, this summarizes what privately does so we have various kinds of technology, safeguarding privacy enhancing technology but something which is very pertinent to today’s discussion is the, the technology behind being able to identify who is a participant online is a child, and especially in the metaverse is a child. And can we do this. So that’s, that’s something I’m going to talk about but as I said let’s back up about the metaverse a couple of notes that I took. So, as of the latest statistics today. What roughly categorizes as the metaverse has about 600 monthly active users 600 million. So that’s roughly 600 million monthly active users. And according to some reports that I have seen, particularly from wired and another agency called data reg 51% of the active users in the metaverse are children. of the users of the metaverse, so-called metaverse, are under the age of 13. And 84% are under the age of 18. 
And you might ask me why that is the case, because most of what we know as the metaverse or virtual world is made of gaming environments, Roblox, Minecraft, Fortnite, which explains why such a large number of, you know, the participants in there are actually miners. One more interesting piece of data, 31% of US adults have never heard the term metaverse. So that just tells you how much is the divide, and essentially this virtual world that we are talking about is actually full of children. And this has been our experience as well, developing technology, which takes me to the next slide, please. If, yes, I don’t know if you can get this, yes, if you can get this working, then I’ll let you see it, and then I’ll comment after. There is some, there is a volume behind it, if you can see it. Well, if the volume is not playing, I’ll tell you, first of all, can we get the audio playing on that one? It’s playing, but it’s very faint. Okay, so let me tell you what’s happening. So first an adult, so this is actually, this is our technology at work inside a, it’s a VR headset, and first an adult is speaking. And when the VR headset detects that it’s the adult which is speaking, it gives them access to adult content. As you saw that there was, the advertisements were meant for adults. But after that, if the audio was playing, you would now hear a child speaking. And as soon as the technology detects that it’s a child which is speaking, the environment has changed. And the only content, and in this case, it is advertising. The advertising that is, that is. shown to the participant is only related to children. So what I wanted to illustrate out of this, you will see this time it is the child. So it goes inside, you’ll see all child-related content. The point we are trying to make here, well, there is a QR code here, so you could actually watch this video on YouTube directly as well. 
The point we are trying to make here is that technology exists today from companies like Privately, which have developed fully privacy-preserving, GDPR-compliant solutions for detecting the age of users. And as you can imagine, detecting the age of users is the cornerstone of keeping users safe online. The demo that you just saw, we actually had it implemented in a trial for a very, very large social media network. They decided not to pursue it, for reasons best known to them. But as you can imagine, the technology exists today, and we are seeing more and more of these technologies which are privacy-preserving, run in the background, and are able to differentiate in real time, just as we as human beings recognize that this is a child and this is an adult. And in doing so, they can ensure that the platform, service, or virtual environment is able to deliver an age-appropriate experience. So this is essentially my message. If you need to know more about the tech, those are the QR codes that I have left behind. You could very happily either contact me or look at the codes here at the showroom. You can even test the technologies. So thank you for your time. That’s me, that’s over now.

Jutta Croll: Thank you, Deepak. Thank you so much for giving us an insight into how children, or maybe all users, might be protected in virtual environments. We now have about seven to eight minutes for questions, either from the floor here or from our online participants. Do we have anyone who wants to come in with a question? Michael and Deepak will be there to answer your technology-related questions, or any further questions you have so far. Yes, we have a hand raised. You will need to take a microphone.

Audience: Hi, my name is Martina from Slovakia. I’d like to ask about this solution for the privacy of children. Isn’t it true that a child’s voice will change into an adult’s voice?

Jutta Croll: Yes, please Deepak, go ahead.

Deepak Tewari: So, look, this is real time. Maybe to clarify, this is not happening on a one-time basis; it becomes a feature of the microphone. Each time you’re speaking, the microphone can detect whether it’s a child or an adult. So it’s not as if there is a one-time check after which you are categorized as an adult or a child. This is real time, this is continuous, and this is a feature of the device itself. In a very similar way, we’ve also created age-aware cameras: if you go into a shop, the camera looks at you and detects if you are an underage person, and you will not be served alcohol or any other restricted item. So you have to think of it as continuous, not done once and kept forever or for a long time.

Jutta Croll: I see she’s nodding, so the answer was accepted. Since you’ve been speaking about age awareness, it gives me the opportunity to mention that we will have another session on Wednesday at 10.30 on the age-aware Internet of Things. So perhaps you’ll put that in your schedule. We have another question from the floor. The microphone, please.

Audience: What if the adult is using an AI to convert his voice from an adult to a child, to deceive the program and get into the child’s room?

Deepak Tewari: Yes, that is correct. You’re right, there is a threat, because these days there are artificial programs and generative AI that can be used for such attacks. You have to go into a little bit of depth on this. There are two kinds of attacks. One is called a presentation attack, where you use an external program and, for example, play a child’s voice. The other is where you actually inject a child’s voice into the program, which is a little more difficult; it’s called an injection attack. There are obviously technologies to detect both, and you could always argue that it’s a battle between how good the detection technology is versus the technology that is trying to fool or spoof it. But an interesting thing here is that because this is continuous, in some use cases we’ve seen the technology being used to detect anomalies. So the same person is talking like a child to one person and talking like an adult to someone else; that produces an anomaly in the system, which could be used to flag that there is probably a malicious person in that group. So there are ways and means to detect these things. But like you rightly said, there’s a contest between technologies. Solutions do exist, by the way, to detect spoofed voices and injection attacks.

Jutta Croll: Thank you, Deepak. We have time for one more question, and then we go to our colleague, Sophie.

Audience: This is very important work, especially in a world where governments are now contemplating whether children under 16 should even have access to platforms at all. You talked about working with some big platforms, but is there any ownership of this kind of work at a more strategic level? Because this means fewer consumers of their content. And also, because of the way you register for these big tech platforms, you can simply declare that you are over 16; they don’t capture your voice or anything that you could detect from. So is there any more work by Privately that you can talk about on how to stop the exploitation of children by big tech platforms?

Deepak Tewari: Look, thank you for your question. It is true, I have to say, that we are seeing a big pushback from big platforms, big tech. And I hate to say this, but it is because age assurance comes as a threat to their business model. Imagine you’re advertising to everyone and saying, I’m only showing these ads to adults, but a part of your users are kids. If you just follow the money, this is a misappropriation of advertising revenues, quite apart from the morality and the child safety aspects. So it is not in the interest of big tech to support age verification, which is why you see stalling, and fear, uncertainty and doubt being sown by pretty much all the big tech platforms. We are actually even fighting a case in the Supreme Court in the U.S. right now. So we are active everywhere. The technology exists, but big tech does not want to adopt it directly. And it’s always about liability: who has the liability for this? As long as that is not settled, and as long as there are no strict fines, unfortunately, big tech will keep pushing this away. So we as a small company are trying to do our bit: challenging in court, doing thought leadership, going into games, building the technology and showing people that it’s privacy-preserving, that it works, that it’s functional. We have certified all of the technology publicly. But I have to say that there is a business model contradiction with big tech. So, yeah, there is a problem.

Jutta Croll: We already have big tech in the room as well, so we’ll come to that question afterwards. But first, I would like to refer to the lady from the floor who referenced children’s rights and their right of access to digital media and digital communication. And that’s the point where Sophie Pohle from the German Children’s Fund comes into play, and she will give us a short introduction to General Comment No. 25. For those of you who have not heard about it, we have the document printed out here; you can take it when you leave the room, and you will also find it on the website www.childrens-rights.digital. Yes, thank you. Sophie, over to you, please.

Sophie Pohle: Thank you, Jutta. I hope everyone can hear me. So, hello and welcome from cold and rainy Germany, from Berlin. I’m Sophie Pohle. I’m from the German Children’s Fund, which is a children’s rights organization here in Germany, and we have been collaborating with the Digital Opportunities Foundation for years, including the joint coordination of Germany’s consultation process for the GC25, General Comment No. 25. And now I’ll set aside my role as today’s online moderator of our session to give you a brief overview of General Comment No. 25 as a framework for our discussion. So, let’s start. The UN Committee on the Rights of the Child published General Comment No. 25 in 2021, three and a half years ago, and it focuses on children’s rights in the digital environment. This document guides states parties on implementing the Convention on the Rights of the Child in digital contexts. It was developed with input from governments, from experts, and also from children, and it offers a practical framework to ensure comprehensive and effective measures are in place. Our session today explores the metaverse, a topic which is not directly named in General Comment No. 25. However, the GC highlights that digital innovations significantly influence children’s lives and rights, even when children do not directly engage with the Internet, and by ensuring meaningful, safe and equitable access to digital technologies, GC25 aims to empower children to fully realize their civil, political, cultural, economic and social rights in an evolving digital landscape. Let me briefly introduce the four general principles of the GC25, starting with non-discrimination, which emphasizes ensuring equal access and protection for all children. Second, we have the best interests of the child, prioritizing children’s well-being in the design, regulation and governance of digital environments, including of course virtual worlds or the metaverse.
Thirdly, we have the right to life and development, which means ensuring that digital spaces support children’s holistic growth and development. And last but not least, we have the principle of respect for the views of the child, which means considering children’s perspectives in digital policymaking and platform design. What are the central statements or key emphases of the GC25? The General Comment recognizes children’s evolving capacities and how user behavior varies with age. It highlights opportunities and different levels of protection needs based on age, and also stresses the responsibility of platforms to offer age-appropriate services for children. The GC25 calls on states parties to support children’s rights to access information and education using digital technologies. It also urges them to ensure that digital technologies enable children’s participation at local, national and international levels, and highlights the importance of child-centric design, which means integrating children’s rights into the development of digital environments, for example with age-appropriate features, content moderation, easily accessible reporting or blocking functions, and so on. On the one hand we have those opportunities and calls for participation, and on the other hand the GC25 also calls on states parties to identify risks and threats for children, incorporating children’s own perspectives, and it calls for solid regulatory frameworks, to establish clear standards and international norms on the one hand, and to implement legal and administrative measures to protect children from digital violence, exploitation and abuse on the other. The GC25 also encourages cooperation among stakeholders like governments, industry and civil society to tackle dynamic challenges, and last but not least it underlines the need to promote digital literacy and safety awareness for children, parents and educators to ensure informed participation.
In my last minute I’d like to give a very brief insight into the children’s perspectives that were collected during the consultation process on GC25. The principles I laid out are directly informed by the needs and expectations children voiced globally during the consultation process; I think more than 700 children worldwide were consulted. And to conclude, I brought some key insights, on a very general level, from young people on how we can better support them in the digital world, before Mariam, after me, takes us through children’s perspectives in much more detail. So what do children want? They want equitable, reliable access to digital technology and connectivity. They also wish for age-appropriate content and safer digital experiences, where platforms protect them against harassment, discrimination and aggression, and instead enable them to participate and express themselves freely. Children themselves demand greater privacy and transparency about data collection practices. They also want more digital literacy education, also for their parents, by the way. And they want recognition of their right to play and leisure, which is also crucial when we talk about the metaverse. So much for now. I think my time is up. That was on a very general level. Thanks a lot for your attention, and I’m happy to answer questions.

Jutta Croll: Thank you, Sophie. Thank you so much. Welcome. Can you hear me? Okay, it’s working. Thank you so much for already touching upon digital literacy, because we will also come to the point of discussing how the metaverse will open up huge opportunities for digital literacy training for children as well as for their parents and other adults. But first, we go to Mariam. Mariam is also a very young person; she holds a bachelor’s degree and a master’s degree, so I’m really impressed by you. You’re representing youth on our panel. Please go ahead; your slide is already on the screen.

Lhajoui Maryem: Thank you. Can everyone hear me? Yes. So before I start, thank you, Sophie. And I’m very happy to see so many people in the room, actually. Thank you for being here. My name is Mariam, and I’m here on behalf of the Digital Child Rights Foundation. We are a youth platform and expertise center based in the Netherlands, but we focus internationally. Our main goals at Digital Child Rights are to underline the importance of digital child rights and to include children’s opinions on what is safe for them. We do this in different ways: we create playful tools for children, and we gather most of our information on how they view things through our child rights panel. And for youth, we have many youth ambassadors in the Netherlands, but also in other countries, and we really want to give youth a platform to connect through connection challenges. So Sophie, thank you again for a very clear overview of General Comment No. 25; you might see some overlap here. These are our 10 themes at Digital Child Rights, and they are based on General Comment No. 25. So while the metaverse that we’re talking about offers great opportunities for children and youth to actively participate, with many chances in ways we didn’t know before, at the same time we also have to remain critical and acknowledge that there are challenges. We have to prioritize safety, privacy and fairness in the best interests of children, and that’s what we really focus on. When it comes to privacy, it was already mentioned before: who is collecting my data, where is my data going, how old am I? Safety was also addressed very clearly by my colleague Deepak. When it comes to age verification, can I pretend to be way older than I am, or way younger than I actually am?
And so we know the dangers that come along with it, but what actual action should be taken there, and how do children even view this? Do they even know that it’s possible to meet other people online who, unfortunately, might not have their best interests at heart? This is where rules are very important, rules as outlined in the General Comment, like Sophie explained. So how do children view them, and what are we going to do about this? Also really important at Digital Child Rights, and I heard it from many others too, is digital inclusion: can everyone participate? We’re based in the Netherlands and we speak with a lot of youth there, but it’s different when you look at other countries. At Digital Child Rights, we actually conducted some interviews at Gaza Camp in Jordan this year, where we spoke to Palestinian refugees about access to the internet, access to the metaverse, what the opportunities in the metaverse are for them, and what’s important to them. So this digital inclusion is very important, especially since it’s so closely interlinked with education, where education plays a very important role. Also very important, and we frame it as a theme, is opinion: can everyone give their opinion online? Are children free to say what they want, within frameworks of no bullying and non-discrimination, as outlined in the General Comment? So it’s very important that it’s equal and that they can give their opinion, and we gather a lot of their opinions. At Digital Child Rights, we’re always looking to connect with other youth platforms, so I invite all of you sitting here to talk to me afterwards about opportunities to strengthen the voice of youth and children, not only in this room and not only in the Netherlands or in Europe.
And that’s also why we are here, and I would love to meet many of you. So the question for us is: what can we do in the best interests of children, and how do we really include them? I hope to talk to many of you after this.

Jutta Croll: Yes, and we now have about 10 minutes for questions for you as well as for Sophie. So do we have any questions here in the room, or otherwise in the online chat? Anyone who wants to raise their hand? Yes, okay, Emma, please. It should work.

Audience: Thank you for the presentation. So I think the question was for me, right?

Lhajoui Maryem: Yes, thank you, Emma, for your question. That’s a really good question; we should always ask ourselves how much knowledge is already present in the room when speaking about these important issues. When it comes to children, we create playful tools to connect with them. There is, what do you call it, Kofta? A card game, which my colleague is holding. Perfect. So there is a card game. It’s a playful way to talk about questions like: can everybody participate, and what does that even mean? And with safety: what does safety even mean? And, as we also wrote here, you can take a look at it, so that you’re aware that you can ask for help when you’re in danger and that there are many opportunities and chances for you. Besides the card game, we also offer different kinds of workshops. There’s a workshop with masks, where children make their own mask. That really portrays this question of age verification and, as outlined in the previous presentation by Michael, I think, the avatar: who am I even in the online world? Am I a different person than I am in the real world? We really encourage them to think about this before they give their opinions. And still, there’s so much to learn. Through these playful tools and the connection challenges they do with other platforms, we encourage them to learn more; then we can learn more from them, and they can also give better opinions. So I hope that answered your question, Emma.

Jutta Croll: We have another question here, and then I would also turn to Sophie to tell us a little, if you’re able to do so, about children’s participation in General Comment No. 25, which pretty much relates to what you have said. So we take those two questions. Yes.

Audience: I’m a youth ambassador from Hong Kong, from the One Path Foundation. And I want to ask a question: there are lots of attractions in the metaverse, so how can you prevent children from getting addicted to those attractions and cut off from reality?

Jutta Croll: Okay, it’s a question with regard to addiction to the metaverse. You’re talking about addiction to the metaverse, right? Yes. Okay, I’m not sure whether this question should go to the panel or whether we can go to it afterwards. Who would you want to pose the question to, Mariam? Okay, go ahead.

Audience: Because just now she said that there are lots of playful tools.

Lhajoui Maryem: That’s a good question, actually. There are many things in the online world, and I must admit that I also get maybe a bit addicted sometimes, and maybe I can get a bit lost, you know, when I’m scrolling. So what can we do? This is actually a question for you and for me at the same time: what can we do to help each other and help our friends of the same age not to get lost in that? Like I said in the beginning, in the metaverse we can get a bit lost, but we’re also moving with the times, and it is also a place with many great opportunities, as long as we know how to handle it. So at Digital Child Rights we really try to make young people aware of the dangers, the challenges, and also the nice things. We really try to tell you: if anything happens, there is always some kind of help offered, and you should be aware that you have the right to be safe. Because, Winston, if I tell you that you should not spend any time on your phone, that’s a bit crazy, right? Maybe. I don’t know. If you want. But we need to regulate it; we can’t spend too much time. And it’s also interlinked: you can learn a lot, and it can help you with your education. So, to answer your question, I’m sorry.

Audience: Hi. I’m from India. I just wanted to ask Mariam and Sophie both. She talked about digital literacy. You’re talking about the programs with children. How do you involve parents and educators when it comes to children in metaverse? Because that’s crucial to have them on board when we talk about children’s engagement in the online spaces.

Jutta Croll: I’ll give that question to Sophie. But only a short answer, Sophie, please. Because we are running out of time.

Sophie Pohle: Yeah, that’s a question we have also been discussing a lot in Germany for a few years, and I think the key is to involve the schools more in order to reach every child, because every child goes to school, and there we have the parents too. So that’s key, I think. We also need to think about how we can reach those parents who are not already sensitive to these topics, because we often see that parents inform themselves when they see there’s a problem, but there are also a lot of parents who do not have access to this information and who do not have the resources, and we need to think more about how to involve them and reach them directly. It’s a very complex question, to be honest, very difficult for a short answer. Okay, we will follow up with that as well.

Audience: Hello, it’s not a question, it’s a proposal to the Digital Child Rights Foundation. I am Sadat Rahman from Bangladesh. We are also working for teenagers in Bangladesh, where we have a helpline, 13 to 19, the Cyber Teens helpline: if any Bangladeshi teenager is facing cyberbullying or cyber harassment, they can call us. We are also working with the Netherlands through Child Helpline International, and I received the International Children’s Peace Prize 2020 from KidsRights. So I would like to work with the Digital Child Rights Foundation. We need mentors. Thank you.

Jutta Croll: Thank you so much. We’ll leave it at that for now, because we already have our next speaker on my left side, and Hazel is also in the Zoom room, but we will start with Emma Day. She is a human rights lawyer and an artificial intelligence ethics specialist, and the founder of a consulting company specializing in human rights and technology for UN agencies. Emma, over to you, because when we drafted the proposal for this session, the Australian law keeping children under the age of 16 off social media had not been enacted and was not even under debate. So now we have a different situation, but I’m pretty sure you will be able to address it.

Emma Day: We specialize in human rights and technology, and I’m going to talk to you a bit about the existing legal and regulatory landscape in the metaverse, and maybe where some of the gaps might be. The metaverse is an evolving space, and some of what we’re talking about is a little bit futuristic and may not actually exist yet, but we’re talking about a kind of virtual space where unprecedented amounts of data are collected from child users, and adult users as well. Already today we have apps and websites which collect lots of data about users, about where they’re going online and how they’re navigating their apps, but in the metaverse companies can collect much larger volumes of data and much more sensitive data: things like users’ physiological responses, their movements, and potentially even their brainwave patterns, which may give companies a much deeper insight into users’ thought patterns and behaviors. These can then be used to target people with marketing, to track people for commercial surveillance, or even be shared with governments for government surveillance. So it’s something that takes us to another level when we’re thinking about data governance. Then we have people’s behavior in the metaverse, both children and adults. If someone says something in the metaverse space, is it the same as posting content online? What laws should apply there? Who is liable for content posted online? If I use an avatar online, am I responsible for its speech? And if my avatar abuses somebody who is wearing some kind of haptic device, so that they can feel the touch, then how do we deal with that? So there are some questions that we don’t really have answers to from regulators, but there are some regulations that we know apply. So for example, the GDPR would still apply in the metaverse.
This is the European regulation around data protection, and there are many laws around the world now modeled on the GDPR, which governs data protection for businesses. That means the children’s codes which have been developed, like the UK Age Appropriate Design Code and similar codes in other countries, which are guidance on how the GDPR applies to children, would also apply. But it may be difficult within data protection law to determine who is the data controller and who is the data processor. A data controller is the entity responsible for deciding how the data is going to be used and for what purposes, and they are then ultimately the most liable and accountable for that. But if you have lots of different actors in the metaverse space who are sharing data between them, it may become quite confusing. And then the data controller, before they process data, particularly from children, should be giving them a privacy notice. So how many privacy notices can you have in different parts of the metaverse? Say you’re in an experience where a child is walking along a street in a virtual town, and they stop in front of a bakery and look in the bakery window. Then maybe one of the companies involved can infer that they are hungry and target them with food advertising because they stopped in front of that bakery. So it’s a rather different scenario from the way we use websites and apps currently. And then, of course, there’s the question of how to determine which users are children, and sometimes not even which users are children and which are adults, but precisely how old a child is for data processing purposes. Then you also have a body of regulations about online safety. We have online safety acts, and we have this law in Australia preventing children under 16 from using social media, which maybe will also apply to the metaverse.
But then you have in the US, there is a section 230 of the Communications Decency Act, which you may have heard of, which really provides online platforms with immunity from third-party content, from liability for that. So that may change in the US as the metaverse develops. This is a very political topic in the US. We don’t really know what direction that will go in. So we have a very diverse global regulatory framework and, in fact, not a lot of very specific regulations and not a lot of enforcement currently. But I think that the main takeaway for me would be that the common thread globally is human rights and children’s rights. And what we do have is the UN Guiding Principles on Business and Human Rights, which were endorsed by the Human Rights Council in 2011. And these are the global authoritative standard for preventing and addressing human rights harms connected to business activity. And they’re aimed at both states and business enterprises. So they call on states to put in measures to protect children’s rights, including in the digital environment, and also for businesses to respect children’s rights. And this includes tech companies. And when we think about tech companies, we need to think about also the safety tech and the age assurance tech. These are also tech companies. And so both the platform who is providing the metaverse and also any technologies used within that platform need to carry out risk assessments, where they look at all of the different rights that could be impacted for both children and adults. And in a mixed audience, you need to look at both children and adults and make sure that all of the stakeholders are engaged with and consulted with. 
And so that something introduced to protect children doesn’t then have an adverse impact on other rights; we need to make sure that all rights online are protected, and the UN Guiding Principles provide a methodology for stakeholder engagement, for risk assessment, and then for identifying how to mitigate those risks in accordance with children’s rights and human rights. So I think that if tech companies carry out their human rights due diligence and do their child rights and human rights impact assessments, then they should in fact be in good shape to comply with regulations that may be coming down the line. I will leave it there.

Jutta Croll: Thank you so much, Emma, for giving us that insight into the situation of regulation so far. We know that it does not address the metaverse at this time, but it will definitely be going that way. And now I’m handing over to Deepali, who was already somehow addressed because she’s coming from Meta. I don’t think you will be able to react to all the things that have already been said about service providers like Meta, but I would like to refer to something that Michael said at the beginning: that social media platforms are already virtual worlds, and they come at us via our own behavior. That is something your company is working on. Could you explain your position a little?

Deepali Liberhan: I can. I think it’d be useful to talk a little bit about what the objective is here. Our objective is to make sure we have safe and age-appropriate experiences for young people on our platforms, and irrespective of regulation, we’ve been working to do that across our apps, whether that’s Facebook, Instagram, or the VR and AR products that we offer. We’ve actually adopted a best interests of the child framework that our teams use to develop products and features for young people, and while I won’t go into all of those considerations, there are two important ones that I do want to talk about. The first, and it was lovely to hear from you, Mariam, is exactly engagement with young people and the families who are using our products; it’s really important to engage not just teens but also parents. We’ve done that: in the last couple of years, we’ve rolled out parental supervision tools across our products, including MetaQuest, which is available in the metaverse. Why parental supervision tools are really important is exactly the point that somebody made: parents don’t necessarily know how to talk to their young kids about the metaverse or virtual reality. We held consultations and engagement sessions with parents and teens sitting in the same room to help us design our parental supervision tools, and we’ve designed them in a way that respects privacy and promotes autonomy for young people, but also gives parents some amount of oversight and insight into their teen’s activities, especially in the VR space. So, for example, you can see how much time your teen is spending in VR. You can set daily limits. You can schedule breaks. You can also approve and disallow apps.
So, these are some of the things that are built into our parental supervision tools. Along with these tools, and there was a mention of digital literacy as well, we've worked with experts to make resources available in our digital hub so that parents can get guidance on how to talk to their kids about virtual reality and AR, and about how to stay safe online. That is one consideration we have when we're building for young people. The second is building safe and age-appropriate experiences. Irrespective of whether you choose to have parental supervision or not, and I think it's really important to have it, what we're doing to make sure young people who use the metaverse products we offer are safe is essentially a set of built-in protections. For example, 13 to 17-year-olds have default private profiles, and if you've used Instagram or MetaQuest, you know a private profile is very different from a public profile: you control who can follow you, and not everybody can see what you're doing. The second is the personal boundary, which is on by default; I don't know if any of you who've used MetaQuest have used it. The personal boundary is an invisible bubble around you, on by default, to prevent unwanted interactions when you're in avatar space engaging with other avatars.
The third thing we've done is limit interactions between teens and adults they are not connected with on the platform. You might want to use the metaverse to connect with your aunt in a different country, but you don't necessarily want your teen to be able to engage with strangers, and we have built-in protections in place to limit those interactions. It's also really important to have clear policies, so all the apps listed on the MetaHorizon store, for example, have an age and content rating, like a film rating, and teens are not even able to download apps that are inappropriate for their age. And there's a lot more we are doing to build those safe and age-appropriate experiences on our platform. The other thing I do want to point out, and I know somebody said earlier that a majority of teens online are right now using it for gaming and entertainment, is that at Meta we also feel the potential of the metaverse for immersive learning is really immense. I remember when I was in school, we used to read about interesting places like the Pyramids of Giza or the Eiffel Tower. In virtual reality, young people can actually visit those places, and young people from different countries and different economic backgrounds can actually study together. Any regulatory or legislative regime should also protect and promote that kind of innovation. Meta has actually set aside a $150 million fund just for immersive learning, and there are many projects that I could talk about. I don't know how much time we have.

Jutta Croll: Time is up, but we can take some more questions. I'm looking around in the room; there we have a question, and we have another question over there. The question is for you. Can you hear me now? Yes, we can hear you.

Audience: So, we heard from Privately, I believe the company was called, and we know there's other technology out there to verify children's age on platforms. What is Meta doing about age verification on its platform?

Deepali Liberhan: So, we have a number of ways we use to assure age on our platforms, and I think one of the important things to understand is that it's a fairly complicated area, because we want to make sure we're balancing different principles. A couple of principles I'll talk about. The first is data minimization. Many years ago, people said it would be easy for everybody to collect digital IDs at the point of verification, but you don't want to do that, for multiple reasons. I don't think anybody, least of all regulators or legislators, wants companies like us to collect that data. So, what is an effective way to assure age that balances data minimization with effectiveness and proportionality? We have a number of ways that we do age assurance. For example, we have people trained on our platforms, such as Facebook and Instagram, to identify underage users and remove them. We have also invested in proactive technology that looks at certain kinds of signals, for example the kind of accounts you're following or the kind of content you're posting. Proactive technology is something that keeps developing, and we're using it to identify and remove underage accounts. We're also working with YOTI, I don't know if you've heard of it, a third-party organization, like Privately, that has come up with a privacy-protective way to identify an age range based just on a selfie.
So, what we've done, and you'll see it if you use Instagram, is that if we find someone lying about their age, say a 12-year-old, that person is not allowed on the platform. But if, for example, a 15-year-old wants to change their age to 18, one of the options to verify that person's age is to provide an ID, but you can also use YOTI: you take a video selfie, and that selfie is kept for a short amount of time and then deleted, in order to estimate your age. So, there are a variety of ways that we're working to assure age on the platform.

Jutta Croll: We will be around later to explain more about YOTI, maybe. We have another question there. Please be brief.

Audience: Hi, just briefly, I'll also pick up on age verification. You talked about data minimization, but there are plenty of already available options that do age estimation and age verification without gathering any data. None of the social media platforms employ those properly, and I think META reported several billion dollars of revenue from under-13s in its latest results. So it's clear that, whether it's META or any of your competitors, none are really taking this seriously. Some of the limits you've talked about, restricting access to different content by age, are lovely, but without effective age verification they're also meaningless, sadly. So, it's a comment rather than a question. I think all the social media platforms could do far better; at the moment, they do the bare minimum, in my opinion, and there's a lot more that you could do. Thank you.

Deepali Liberhan: I’ll quickly respond to your question. I’m available after this if you want to have a broader discussion. We are working on exploring options to do age assurance in a more effective way. YOTI is one such organization that we work with. It’s been recognized by the German regulators. We’re looking at ways on how to use it at more points of friction on our platforms. The other thing that I would say is that it’s also a broader ecosystem approach. One of the legislative solutions that we’ve talked about that we think is a fine balance between data minimization and also making it really simple for parents is to have age verification or age assurance at the app level or the OS level, which allows parents to oversee approval of their apps in one particular place and also minimizes the collection of data at one place. A lot of third parties have also talked about how this is one of the ways where we can approach this in a broader ecosystem perspective. While those discussions are happening, at the same time, we are working to ensure that we are building more effective ways to do age assurance on our platforms. We’ve also recently launched teen accounts in certain countries, and we’re going to roll out in the rest of the world. We’re also investing in proactive technology to give us the right signals to make sure that the teens are not lying about their age, because, as you know, they will lie about their age.

Jutta Croll: They already are. Yes, thank you so much. We are a bit under time pressure, so I'm moving now to the last block of our session: what can internet governance do to support a common approach to keeping children safe in digital and virtual environments? I'm handing over to my colleague Torsten, who will give us a short reflection on the Global Digital Compact and the responses the compact gives to children's rights and the metaverse. Over to you, Torsten.

Torsten Krause: Thank you very much, Jutta. With all the thoughts and information shared in mind, we want to take a closer look at the Global Digital Compact, which was adopted this September at the United Nations Summit of the Future; you may recognize it. It is part of Our Common Agenda of the United Nations and sets out basic principles for shaping the digital environment. It is not a legally binding document like the Convention on the Rights of the Child, but the states express their commitment to aligning the further development of the digital environment with it. What does that mean? The GDC describes several objectives, and to name just some of them: to close all digital divides and accelerate progress across the Sustainable Development Goals; to expand inclusion and reap the benefits of digitalization for all; and to foster an inclusive, open, safe and secure digital space that respects, protects and promotes human rights, and children's rights are part of human rights, as you all know. The states also declare their aim to create safe, secure and trustworthy emerging technologies, including AI, with a transparent and human-centric approach and effective human oversight, so that everything that is done is in the end controlled by humans. Taking a closer look at what the GDC says about children's rights: I don't know how closely you followed the process, but in the first draft children's rights were not directly mentioned, and several organizations took a stand and gave their perspectives and comments to get children's rights into the compact. In the end, there are several points we can touch on, and the biggest field of child rights is around protection rights. The states are asked to strengthen their legal systems and policy frameworks to protect the rights of the child in the digital space.
So every one of you can ask your government how they do it, how they put their policy frameworks in place. The states should also develop and implement national online child safety policies and standards, and they call on digital technology companies and developers to respect international human rights and principles. We heard some of that in this session: all the companies, developers, and social media platforms should respect human rights online and implement measures to mitigate and prevent abuses, including effective remedy procedures, as Sophie also mentioned, in line with the general comment. A broader part concerns countering and addressing all forms of violence, including sexual and gender-based violence. Hate speech is also mentioned, as are discrimination, misinformation and disinformation, but also cyberbullying and child sexual exploitation and abuse. It is therefore necessary to monitor and review digital platform policies and practices on countering child sexual exploitation and abuse, and to implement accessible reporting mechanisms for users. Looking at the provisions with regard to child rights, the GDC states that governments are responsible for connecting all schools to the internet, referring to the GIGA initiative of the ITU and the United Nations Children's Fund. With regard to digital literacy, it says that children, and of course all users, should be able to use the internet meaningfully and securely and navigate the digital space safely. For this, digital skills and lifelong access to digital learning opportunities are very important, so the states are responsible for establishing and supporting national digital skills strategies and adapting teacher training programs as well as adult training programs.
That relates to your question, Vincent: they have in mind that it is not just necessary to teach the children, the users, but also the responsible persons around them, so that those persons can provide support and protection. When we look at participation, it is very brief: the GDC mentions that meaningful participation of all stakeholders is required, but it does not explicitly say that children are part of that. In that regard, it is all the more important that children and young people also take part in the Internet Governance Forum at the multi-stakeholder level, to bring in their voices and perspectives, so that they become part of this meaningful participation process of all stakeholders. That was a short overview of the GDC, and I hope it was meaningful.

Jutta Croll: Yes, thank you so much, Torsten. We would also like to remind everybody: if you haven't had a look at the Global Digital Compact, please do so. It is open for endorsement by individuals as well as organizations, companies, and so on, and the more people endorse the Global Digital Compact, the more weight all these recommendations will carry. So, finally, Hazel, thank you for your patience waiting in the Zoom room. We are happy to have you here to give us the perspective of children, with a special focus on the Asia-Pacific region. Over to you.

Hazel Bitana: Thank you, Jutta. I'm Hazel from Child Rights Coalition Asia, a network of organizations working for and with children. I would say that to keep children safe and help them reap the benefits of virtual environments, internet governance should be anchored in child rights principles, as set out in General Comment No. 25, which Sophie presented: the best interests of the child; child participation that takes into consideration children's evolving capacities; non-discrimination and inclusion; and children's overall development and full enjoyment of all their rights. These are the underlying messages from children, such as when my organization held our 2024 Regional Children's Meeting in August in Thailand, with 35 child delegates representing their own national or local child-led groups based in 16 countries across Asia. Although we focused our discussions mainly on emerging practices such as sharenting, kidfluencers and generative AI in relation to civil and political rights, the recommendations we gathered could be applied to emerging technologies like the metaverse. Summarizing their inputs: one recommendation is an approach that recognizes children as rights holders and not just as passive recipients of care and protection. Children want to be involved and empowered to be part of the discussions and policymaking processes. They want child-friendly spaces and information, including child-friendly versions of the terms and conditions or privacy policies they agree to. Another key recommendation is the involvement of children in decision-making processes, which allows us to have a holistic perspective. We get to learn how children are leveraging these emerging technologies to create positive change, which is not usually highlighted in discussions.
When we talked to children about generative AI, they said that it is beneficial for their advocacy work as child rights advocates or child human rights defenders; for their recreation, the enjoyment of their right to play and leisure and to take part in creative activities; and for their education. I think these points are echoed in the metaverse as well. By getting children's perspectives, you also get to see the impact of these emerging technologies on a number of children's rights. There is already evidence of sexual exploitation and abuse in the metaverse. And at our regional children's meeting, the child delegates raised concerns about the impact of generative AI on their right to a healthy environment, in the context of climate change, due to the energy consumption of generative AI; this could be a concern for the metaverse as well. Another concern relates to the right to privacy and informed consent, especially considering the unique privacy and data protection issues posed both by generative AI and by the metaverse, as a number of our speakers described today. Additionally, echoing one of Miriam's points from earlier, a key approach to internet governance is ensuring non-discrimination and inclusion in a number of respects. For one, due to the price of virtual reality hardware and the internet bandwidth required, the metaverse is widening the digital divide. In terms of freedom of expression, including gender expression, the metaverse has the potential to give children a platform to enjoy this freedom, with avatars serving as a creative tool for expression, as Michael expounded earlier; but at the same time, without safeguards, this positive potential could instead deepen discrimination.
Cultural diversity should also be taken into consideration to keep children safe in virtual environments. In the metaverse, harmful body language, gestures and non-verbal communication are additional aspects that should be included in the list of violations that children can submit through reporting mechanisms. This brings me to my next point: the importance of having child-friendly reporting mechanisms and effective remedies in the metaverse, across a diversity of languages, contexts and social norms, especially in the Asian region, which always feels left behind because our languages are not the dominant ones in the digital environment, all the more so now that body language and gestures come into play in the metaverse. Given this diversity, specialized regional or local safety teams should be part of the internet governance system; this facilitates timely and effective response and prevention mechanisms. And lastly,

Jutta Croll: I'm sorry, I need to step in because of the time; we now have two interventions from the online participants, and I want to give them a bit of time as well. Maybe they have questions for you or for anyone else. Thank you so much. Sophie, will you hand over to the online participants, or will you read out their questions from the chat?

Audience: I had one raised hand when we were talking about children's rights and the children's rights block. It was from Andrew De Alvis, but I'm not sure if they're still with us, because I cannot see them in the participants list anymore; maybe the participant with a question has already left. If not, feel free to raise your hand again. But I think they're gone. And I have another question from Marie Yves Nadeau, which I can read out. She has a question for the META speaker: the OECD report highlights the impact of virtual reality on children's development. What prompted META to lower the age requirements, and how does the company address the potential risks to children? Over to you, please. Thanks for the question.

Deepali Liberhan: So, as I said before, we've worked with parents and young people as part of our best interests of the child framework, and a lot of parents themselves want their kids to start experiencing META products in a very protected way. So, we've done two things. The first concerns 13 to 17-year-olds, who are allowed on our platform and who have certain default protections as well as the ability to have parental supervision. Below that age, accounts are actually managed by the parents. We did something similar for Facebook as well: we have Messenger Kids, which is managed by parents and gives them an opportunity to really effectively manage their child's presence online and deliver all its benefits in a very protected way. So, we've done this in consultation with parents as well as with experts.

Jutta Croll: Thank you, Deepali. We now have Ansu here in the room, not online. So, please go ahead. I can give you only one minute, and one minute for the response.

Audience: Yeah. So, the question is, have you considered, anyone can answer this, have you considered the design principles for a governance framework? Because I’m a researcher in this area. Thank you.

Jutta Croll: So, you’re asking for the design principles of regulation. Yeah.

Audience: Design principles when developing a governance framework, has anyone considered that? And if so, what?

Jutta Croll: Okay. Will you be able to answer that? I think she's talking about privacy by design, safety by design, child rights by design.

Emma Day: Do you mean that kind of design principle, or design in general? Maybe one thing: there are safety-by-design and privacy-by-design regulations, but the theme of the Internet Governance Forum is multi-stakeholder governance, right? Tech Legality has been working with UNICEF on data governance for edtech, and some of that relates to this through the immersive learning type of work. As part of this, we have been looking at the use of regulatory sandboxes. There is an organization called the Datasphere Initiative, and they have been trying to build a multi-stakeholder regulatory sandbox: bringing together, even across borders, regulators, the private sector, and also civil society, which is often the missing piece, to look at these frontier technologies and think about how to regulate them. I don't know if that answers your question. If you want to hear more, we have a panel at 4.30 p.m. today local time, where we're looking at the governance of edtech, neurotech, and fintech, and we'll look at that multi-stakeholder approach. Yeah. Thank you.

Jutta Croll: Yes. Thank you so much. Thanks to all of you who have been in the room or taken part online. I now have only three minutes to wrap up, but I will try to do my very best. I think we've learned that, as we said at the beginning, virtual environments, virtual worlds, are already inhabited by children. We heard from Deepak that 51% of the 600 million users are under the age of 13, even though there are age restrictions, and that is mostly because we find virtual worlds in the gaming area. Nonetheless, we also heard that social media platforms are already virtual worlds, because we are inhabitants of these social media environments and we give them our data, our profiles, various pieces of information about our identity. That led us to the question of data minimization, which stands in some tension with knowing the age of users, whether by age estimation or by age verification, which always goes hand in hand with collecting data about users. That was also something Meta's representative Deepali told us: that they are trying to balance this and to minimize the data. We have some regulations, especially the GDPR, which was mentioned; it is applicable only in Europe, but it has been copied in several areas of the world, and it gives us an orientation on the principle of data minimization and how it could also be applied to the metaverse. But then we also learned that larger amounts of data, and more sensitive data, are collected in virtual environments. That would be the reason why we need another level of data governance for the metaverse, considering that we already have a regulatory gap when it comes to virtual reality. Finally, I would like to come back to children's rights. We have learned that there are protection rights, provision rights, and participation rights.
When we talk about virtual environments, we tend to focus a bit more on children's safety. But we have also seen that there is a huge opportunity to build on the evolving capacities of children, to provide them with education, including peer education, in virtual environments, and that will also ensure their right to participation. Finally, we heard that children want to be empowered and involved, and that they see generative AI as an instrument for children's advocacy. I think that is a very important and future-oriented message to receive at the end of our session, and it is the message I will conclude with: let's focus on the opportunities the metaverse will provide children, without neglecting their right to be protected. Thank you so much.

M

Michael Barngrover

Speech speed

163 words per minute

Speech length

1215 words

Speech time

444 seconds

Virtual worlds encompass VR, mixed reality, 3D gaming, and social media platforms

Explanation

Michael Barngrover explains that virtual worlds are a broad concept including various digital environments. He highlights that these range from fully immersive VR experiences to traditional 3D gaming and even social media platforms.

Evidence

He mentions specific examples like Fortnite as a 3D gaming world and notes that social media platforms can also be considered virtual worlds.

Major Discussion Point

The nature and scope of virtual worlds/metaverse

Agreed with

Deepak Tewari

Jutta Croll

Agreed on

Virtual environments are already heavily populated by children

D

Deepak Tewari

Speech speed

159 words per minute

Speech length

1634 words

Speech time

616 seconds

Metaverse has 600 million monthly active users, with 51% under age 13

Explanation

Deepak Tewari provides statistics on metaverse usage, highlighting the significant presence of children. He emphasizes that a majority of users in these virtual environments are minors.

Evidence

He cites specific statistics: 600 million monthly active users, 51% under age 13, and 84% under age 18.

Major Discussion Point

The nature and scope of virtual worlds/metaverse

Agreed with

Michael Barngrover

Jutta Croll

Agreed on

Virtual environments are already heavily populated by children

Privacy-preserving age detection technology exists but faces pushback

Explanation

Deepak Tewari argues that technology for privacy-preserving age detection is available but not widely adopted. He suggests that big tech companies are resistant to implementing such technologies due to potential impacts on their business models.

Evidence

He mentions his company’s development of age-aware cameras and microphones that can detect age in real-time without storing personal data.

Major Discussion Point

Data collection and privacy concerns

Differed with

Deepali Liberhan

Differed on

Effectiveness of age verification methods

J

Jutta Croll

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Social media platforms are already virtual worlds based on user behavior

Explanation

Jutta Croll suggests that social media platforms can be considered virtual worlds due to user behavior. She argues that users inhabit these platforms by providing personal data and creating digital identities.

Major Discussion Point

The nature and scope of virtual worlds/metaverse

Agreed with

Michael Barngrover

Deepak Tewari

Agreed on

Virtual environments are already heavily populated by children

Generative AI seen as tool for children’s advocacy

Explanation

Jutta Croll highlights that children view generative AI as a potential tool for advocacy. This perspective emphasizes the positive potential of emerging technologies for empowering children.

Major Discussion Point

Opportunities and risks of metaverse for children

D

Deepali Liberhan

Speech speed

167 words per minute

Speech length

1881 words

Speech time

675 seconds

Metaverse offers immersive learning opportunities beyond gaming

Explanation

Deepali Liberhan emphasizes the educational potential of the metaverse beyond entertainment. She argues that immersive virtual environments can provide unique learning experiences for children.

Evidence

She mentions the possibility of virtually visiting historical sites like the Pyramids of Giza or the Eiffel Tower, and collaborative learning across geographical and economic boundaries.

Major Discussion Point

Opportunities and risks of metaverse for children

Meta implements default privacy settings and parental controls for teens

Explanation

Deepali Liberhan describes Meta’s approach to child safety in virtual environments. She highlights the implementation of default privacy settings for teens and parental supervision tools across Meta’s products.

Evidence

She mentions specific features like default private profiles for 13-17 year olds, personal boundary settings in VR, and parental controls for time limits and app approvals.

Major Discussion Point

Children’s rights and safety in virtual environments

Agreed with

Sophie Pohle

Lhajoui Maryem

Hazel Bitana

Agreed on

Need for age-appropriate content and safety measures in virtual environments

Differed with

Hazel Bitana

Differed on

Approach to child safety in virtual environments

Need to balance age verification with data minimization principles

Explanation

Deepali Liberhan discusses the challenge of verifying users’ ages while adhering to data minimization principles. She emphasizes Meta’s efforts to find effective age assurance methods that don’t compromise user privacy.

Evidence

She mentions the use of trained personnel to identify underage users, proactive technology using behavioral signals, and third-party solutions like YOTI for age estimation.

Major Discussion Point

Data collection and privacy concerns

Differed with

Deepak Tewari

Differed on

Effectiveness of age verification methods

S

Sophie Pohle

Speech speed

114 words per minute

Speech length

961 words

Speech time

501 seconds

General Comment 25 provides framework for children’s rights in digital environments

Explanation

Sophie Pohle introduces General Comment 25 as a guiding document for children’s rights in digital contexts. She explains that it offers a practical framework for implementing the Convention on the Rights of the Child in digital environments.

Evidence

She outlines the four general principles of GC25: non-discrimination, best interests of the child, right to life and development, and respect for the views of the child.

Major Discussion Point

Children’s rights and safety in virtual environments

Agreed with

Deepali Liberhan

Lhajoui Maryem

Hazel Bitana

Agreed on

Need for age-appropriate content and safety measures in virtual environments

L

Lhajoui Maryem

Speech speed

154 words per minute

Speech length

1313 words

Speech time

510 seconds

Need for age-appropriate content and safer digital experiences for children

Explanation

Lhajoui Maryem emphasizes the importance of creating safe and age-appropriate digital experiences for children. She argues for the need to involve children in the design and policy-making processes of digital platforms.

Evidence

She mentions the use of playful tools and workshops to engage children in discussions about online safety and digital identity.

Major Discussion Point

Children’s rights and safety in virtual environments

Agreed with

Deepali Liberhan

Sophie Pohle

Hazel Bitana

Agreed on

Need for age-appropriate content and safety measures in virtual environments

H

Hazel Bitana

Speech speed

140 words per minute

Speech length

724 words

Speech time

309 seconds

Importance of child-friendly reporting mechanisms and effective remedies

Explanation

Hazel Bitana stresses the need for child-friendly reporting mechanisms and effective remedies in virtual environments. She argues that these systems should consider cultural diversity and language differences, especially in the Asian region.

Evidence

She mentions the need for specialized regional or local safety teams to facilitate timely and effective response and prevention mechanisms.

Major Discussion Point

Children’s rights and safety in virtual environments

Agreed with

Deepali Liberhan

Sophie Pohle

Lhajoui Maryem

Agreed on

Need for age-appropriate content and safety measures in virtual environments

Differed with

Deepali Liberhan

Differed on

Approach to child safety in virtual environments

Children want to be involved in policymaking processes

Explanation

Hazel Bitana emphasizes children’s desire to be actively involved in discussions and policymaking related to digital environments. She argues for an approach that recognizes children as rights holders rather than passive recipients of protection.

Evidence

She cites inputs from the 2024 Regional Children’s Meeting in Thailand, where child delegates expressed their desire for involvement and child-friendly information.

Major Discussion Point

Regulation and governance of virtual environments

Potential for creative expression but also deepening discrimination

Explanation

Hazel Bitana discusses the dual nature of the metaverse for children’s expression. While it offers opportunities for creative expression through avatars, she warns that without proper safeguards, it could deepen existing discrimination.

Major Discussion Point

Opportunities and risks of metaverse for children

E

Emma Day

Speech speed

151 words per minute

Speech length

1267 words

Speech time

500 seconds

Unprecedented amounts of sensitive data collected in metaverse

Explanation

Emma Day highlights the increased data collection in metaverse environments. She explains that this data can be more sensitive and voluminous than traditional online platforms, potentially including physiological responses and brainwave patterns.

Evidence

She mentions potential uses of this data for targeted marketing, commercial surveillance, or government surveillance.

Major Discussion Point

Data collection and privacy concerns

Existing regulations like GDPR apply but new challenges emerge in metaverse

Explanation

Emma Day discusses the applicability of existing regulations like GDPR to the metaverse. She points out that while these regulations still apply, the metaverse presents new challenges in areas like determining data controllers and processors.

Evidence

She gives an example of the complexity of providing privacy notices in different parts of a virtual environment.

Major Discussion Point

Regulation and governance of virtual environments

Multi-stakeholder approach needed for governance frameworks

Explanation

Emma Day advocates for a multi-stakeholder approach to developing governance frameworks for new technologies. She suggests using regulatory sandboxes to bring together regulators, private sector, and civil society to address frontier technologies.

Evidence

She mentions work with UNICEF on data governance for edtech and the Data Sphere Initiative’s efforts to create multi-stakeholder regulatory sandboxes.

Major Discussion Point

Regulation and governance of virtual environments

T

Torsten Krause

Speech speed

123 words per minute

Speech length

743 words

Speech time

361 seconds

Global Digital Compact calls for protecting children’s rights in digital spaces

Explanation

Torsten Krause introduces the Global Digital Compact as a framework for shaping the digital environment. He highlights its emphasis on protecting children’s rights in digital spaces.

Evidence

He cites specific objectives from the GDC, including closing digital divides, fostering inclusive digital spaces, and creating safe and trustworthy emerging technologies.

Major Discussion Point

Data collection and privacy concerns

States responsible for implementing child safety policies and standards

Explanation

Torsten Krause emphasizes the responsibility of states in implementing child safety measures in digital environments. He argues that states should develop and enforce national online child safety policies and standards.

Evidence

He references the GDC’s call for states to strengthen legal systems and policy frameworks to protect children’s rights in digital spaces.

Major Discussion Point

Regulation and governance of virtual environments

A

Audience

Speech speed

140 words per minute

Speech length

781 words

Speech time

333 seconds

Risk of addiction and disconnection from reality

Explanation

An audience member raises concerns about the potential for addiction to virtual environments. They question how to prevent children from becoming overly immersed in the metaverse and disconnecting from reality.

Major Discussion Point

Opportunities and risks of metaverse for children

Agreements

Agreement Points

Virtual environments are already heavily populated by children

Michael Barngrover

Deepak Tewari

Jutta Croll

Virtual worlds encompass VR, mixed reality, 3D gaming, and social media platforms

Metaverse has 600 million monthly active users, with 51% under age 13

Social media platforms are already virtual worlds based on user behavior

Speakers agree that virtual environments, including social media platforms, are already widely used by children and constitute a significant part of their digital experience.

Need for age-appropriate content and safety measures in virtual environments

Deepali Liberhan

Sophie Pohle

Lhajoui Maryem

Hazel Bitana

Meta implements default privacy settings and parental controls for teens

General Comment 25 provides framework for children’s rights in digital environments

Need for age-appropriate content and safer digital experiences for children

Importance of child-friendly reporting mechanisms and effective remedies

Multiple speakers emphasize the importance of creating safe, age-appropriate digital experiences for children with proper safeguards and reporting mechanisms.

Similar Viewpoints

These speakers highlight the tension between effective age verification and data privacy concerns in virtual environments, acknowledging the need for balance and the challenges posed by extensive data collection.

Deepak Tewari

Deepali Liberhan

Emma Day

Privacy-preserving age detection technology exists but faces pushback

Need to balance age verification with data minimization principles

Unprecedented amounts of sensitive data collected in metaverse

Both speakers advocate for involving children in the design and policy-making processes of digital platforms, emphasizing the importance of children’s perspectives in creating safe and appropriate digital experiences.

Lhajoui Maryem

Hazel Bitana

Need for age-appropriate content and safer digital experiences for children

Children want to be involved in policymaking processes

Unexpected Consensus

Potential of virtual environments for education and advocacy

Deepali Liberhan

Jutta Croll

Hazel Bitana

Metaverse offers immersive learning opportunities beyond gaming

Generative AI seen as tool for children’s advocacy

Potential for creative expression but also deepening discrimination

Despite discussions largely focusing on risks and safety concerns, there was unexpected consensus on the positive potential of virtual environments for education and children’s advocacy. This highlights a balanced view of the metaverse’s impact on children.

Overall Assessment

Summary

The main areas of agreement include the widespread use of virtual environments by children, the need for age-appropriate content and safety measures, and the challenges of balancing age verification with data privacy. There was also recognition of the potential benefits of virtual environments for education and advocacy.

Consensus level

Moderate consensus was observed among speakers on key issues. While there were differing perspectives on implementation details, there was general agreement on the importance of protecting children’s rights in virtual environments. This consensus suggests a shared foundation for developing policies and regulations for children in the metaverse, but also highlights the need for continued dialogue to address complex challenges like data privacy and age verification.

Differences

Different Viewpoints

Effectiveness of age verification methods

Deepak Tewari

Deepali Liberhan

Privacy-preserving age detection technology exists but faces pushback

Need to balance age verification with data minimization principles

While Deepak Tewari argues that effective privacy-preserving age detection technology exists and should be implemented, Deepali Liberhan emphasizes the need to balance age verification with data minimization, suggesting current solutions may not fully address privacy concerns.

Approach to child safety in virtual environments

Deepali Liberhan

Hazel Bitana

Meta implements default privacy settings and parental controls for teens

Importance of child-friendly reporting mechanisms and effective remedies

Deepali Liberhan focuses on platform-specific safety measures implemented by Meta, while Hazel Bitana emphasizes the need for broader, culturally sensitive reporting mechanisms and remedies across virtual environments.

Unexpected Differences

Perception of generative AI by children

Jutta Croll

Hazel Bitana

Generative AI seen as tool for children’s advocacy

Potential for creative expression but also deepening discrimination

While Jutta Croll presents a positive view of children using generative AI for advocacy, Hazel Bitana highlights both the creative potential and the risk of deepening discrimination. This unexpected difference shows the complexity of emerging technologies’ impact on children’s rights.

Overall Assessment

Summary

The main areas of disagreement revolve around the implementation of age verification technologies, the approach to child safety in virtual environments, and the involvement of children in policymaking processes.

Difference level

The level of disagreement among speakers is moderate. While there is general consensus on the importance of protecting children’s rights in virtual environments, speakers differ significantly in their proposed approaches and solutions. These differences highlight the complexity of balancing privacy, safety, and children’s participation in the rapidly evolving digital landscape, particularly in the context of the metaverse. The implications of these disagreements suggest that a multi-stakeholder approach, as proposed by Emma Day, may be necessary to develop comprehensive and effective governance frameworks for children’s safety and rights in virtual environments.

Partial Agreements

All speakers agree on the importance of child safety in virtual environments, but differ in their approaches. Deepali Liberhan focuses on platform-specific measures, while Hazel Bitana and Lhajoui Maryem advocate for more direct involvement of children in policymaking and design processes.

Deepali Liberhan

Hazel Bitana

Lhajoui Maryem

Meta implements default privacy settings and parental controls for teens

Children want to be involved in policymaking processes

Need for age-appropriate content and safer digital experiences for children

Takeaways

Key Takeaways

Virtual worlds/metaverse are already widely used by children, with 51% of users under age 13

Existing regulations like GDPR apply to metaverse but new challenges emerge around data collection and privacy

Children’s rights frameworks like General Comment 25 should guide metaverse governance

Age verification and parental controls are important but must be balanced with data minimization

Metaverse offers opportunities for education and creative expression but also risks like addiction and discrimination

Multi-stakeholder approach involving children is needed for effective governance

Resolutions and Action Items

States should implement national online child safety policies and standards as per Global Digital Compact

Companies should conduct child rights impact assessments for metaverse products

More research needed on impacts of virtual reality on child development

Develop child-friendly reporting mechanisms for metaverse environments

Unresolved Issues

How to effectively verify age in metaverse without excessive data collection

Appropriate consequences for virtual crimes or harms

How to close digital divides in access to metaverse technologies

Balancing innovation with protection in metaverse regulation

Suggested Compromises

Age verification at device/OS level rather than by individual platforms to minimize data collection

Use of privacy-preserving age estimation technologies instead of strict verification

Allowing children access to metaverse under parental supervision with strong default protections

Thought Provoking Comments

With virtual worlds, it is a challenging term, because it’s very broad and very encompassing. So there are at least four broad concepts of virtual worlds that I think are very relevant.

speaker

Michael Barngrover

reason

This comment provided a framework for understanding the complexity and breadth of virtual worlds, setting the stage for a more nuanced discussion.

impact

It led to a deeper exploration of different types of virtual environments and their implications for children, moving the conversation beyond simplistic notions of the metaverse.

There is a cognitive load to be thinking about and managing your activities and your presence, and thus the activities and presence of others in multiple worlds, the non-digital world, but also the digital virtual worlds.

speaker

Michael Barngrover

reason

This insight highlighted an often overlooked aspect of virtual worlds – the cognitive demands they place on users, especially children.

impact

It shifted the discussion to consider the psychological and developmental impacts of virtual worlds on children, beyond just safety concerns.

Technology exists today from companies like Privately, which have developed fully privacy-preserving, GDPR-compliant solutions for detecting age of users.

speaker

Deepak Tewari

reason

This comment introduced concrete technological solutions to age verification, a key challenge in protecting children online.

impact

It moved the conversation from theoretical concerns to practical solutions, sparking discussion about implementation and effectiveness of such technologies.

The metaverse is an evolving space and some of it we’re talking about is a little bit futuristic and may not actually exist yet, but we’re talking about a kind of virtual space where unprecedented amounts of data are collected from child users and adult users as well.

speaker

Emma Day

reason

This comment grounded the discussion in reality while highlighting the potential future challenges, particularly around data collection.

impact

It refocused the discussion on the need for proactive regulation and governance to address future challenges in virtual environments.

Children want to be involved and be empowered to be part of the discussions and policymaking processes. They want child-friendly spaces and information, having child-friendly versions of the terms and conditions or privacy policies that they agree to.

speaker

Hazel Bitana

reason

This comment brought the crucial perspective of children themselves into the discussion, emphasizing their desire for agency and understanding.

impact

It shifted the conversation from a protective stance to one that also considered children’s rights to participation and information, leading to a more balanced discussion of safety and empowerment.

Overall Assessment

These key comments shaped the discussion by broadening its scope from a narrow focus on safety to a more comprehensive consideration of children’s experiences in virtual worlds. They introduced technical, legal, and child-centric perspectives, leading to a richer, more nuanced dialogue about the challenges and opportunities of the metaverse for children. The discussion evolved from defining virtual worlds to exploring their cognitive impacts, from theoretical concerns to practical solutions, and from adult-centric protectionism to child-empowering approaches.

Follow-up Questions

How can addiction to the metaverse be prevented in children?

speaker

Youth ambassador from Hong Kong

explanation

This is important to address potential negative impacts of immersive virtual environments on children’s wellbeing and development.

How can parents and educators be effectively involved when it comes to children in the metaverse?

speaker

Audience member from India

explanation

Parental and educator involvement is crucial for ensuring children’s safety and positive experiences in virtual environments.

What are the design principles for developing a governance framework for the metaverse?

speaker

Audience member Ansu

explanation

Establishing clear design principles is important for creating effective and ethical governance structures for virtual environments.

How can age verification be implemented more effectively on social media platforms without compromising data minimization principles?

speaker

Audience member (unnamed)

explanation

Balancing effective age verification with data protection is crucial for ensuring child safety while respecting privacy rights.

What prompted Meta to lower the age requirement for the Quest, and how does the company address the potential risks to children?

speaker

Marie Yves Nadeau (online participant)

explanation

Understanding the rationale behind age requirement changes and associated risk mitigation strategies is important for assessing the impact on child safety.

How can cultural diversity be effectively incorporated into safeguarding measures in the metaverse?

speaker

Hazel Bitana

explanation

Ensuring cultural sensitivity in safety mechanisms is crucial for creating inclusive and effective protection for children from diverse backgrounds.

How can child-friendly reporting mechanisms and effective remedies be implemented in the metaverse?

speaker

Hazel Bitana

explanation

Developing accessible and effective reporting and remedy systems is essential for protecting children in virtual environments.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

IGF 2024 Newcomers Session

Session at a Glance

Summary

This discussion focused on the Internet Governance Forum (IGF), its history, structure, and role in shaping internet policy. The IGF emerged from the World Summit on the Information Society in the early 2000s as a multi-stakeholder platform for dialogue on internet issues. It brings together governments, civil society, the private sector, and technical communities to discuss emerging internet topics and best practices.

Key features of the IGF include its bottom-up agenda setting, inclusivity, and non-commercial nature. The forum has grown significantly since its first meeting in 2006, now attracting thousands of participants. It produces outputs like best practice forums and policy networks on topics such as cybersecurity, AI, and internet fragmentation. The IGF also supports national and regional initiatives to foster local internet governance discussions.

The speakers emphasized the IGF’s commitment to multi-stakeholder engagement, capacity building, and addressing the UN Sustainable Development Goals. They highlighted the forum’s role in facilitating knowledge exchange, influencing policy processes, and promoting an ethical, rights-based approach to internet governance. The discussion also touched on the upcoming renewal of the IGF’s mandate and its potential role in implementing the Global Digital Compact.

Audience questions addressed topics such as including ethics in IGF discussions, engaging media stakeholders, and incorporating emerging technologies like blockchain. The speakers encouraged active participation from newcomers and emphasized the IGF’s openness to community-driven improvements and new ideas.

Keypoints

Major discussion points:

– History and purpose of the Internet Governance Forum (IGF)

– Structure and components of the IGF, including stakeholder groups, best practice forums, policy networks, etc.

– Growth and evolution of the IGF over time

– Importance of multi-stakeholder participation and bottom-up agenda setting

– Connections between the IGF and other UN processes/SDGs

Overall purpose:

The purpose of this discussion was to provide an overview and introduction to the Internet Governance Forum for newcomers and first-time attendees. The speakers aimed to explain the IGF’s history, structure, and importance, while encouraging active participation from all stakeholders.

Tone:

The overall tone was informative and welcoming. The speakers were enthusiastic about the IGF and eager to share information. There was an emphasis on inclusivity and openness throughout. Toward the end, the tone became more interactive as audience members asked questions and shared their perspectives, creating a collaborative atmosphere.

Speakers

– Chengetai Masango: Head of the Secretariat of the Internet Governance Forum

– Nnenna Nwakanma: Digital Policy, Advocacy and Cooperation Strategist

– Audience: Various audience members asking questions

Full session report

The Internet Governance Forum (IGF) Discussion: An In-Depth Overview

This comprehensive summary provides an extensive look at a discussion centered on the Internet Governance Forum (IGF), its history, structure, and role in shaping internet policy. The discussion featured Chengetai Masango, an IGF Secretariat representative, as the primary speaker, with contributions from audience members.

1. History and Purpose of the IGF

The IGF emerged from the World Summit on the Information Society in the early 2000s as a multi-stakeholder platform for dialogue on internet issues. Chengetai Masango emphasized that the IGF was established to address internet governance issues through inclusive, multi-stakeholder dialogue. This approach brings together governments, civil society, the private sector, and technical communities to discuss emerging internet topics and best practices.

Since its inception, the IGF has experienced significant growth. What began as a modest forum has now evolved into a global event attracting thousands of participants. The current event had 8,800 registrations, while the previous event in Japan saw over 10,000 participants. This growth underscores the increasing importance of internet governance discussions on the world stage.

2. Structure and Activities of the IGF

A key feature of the IGF is its bottom-up agenda setting process. Masango highlighted that the forum’s themes and topics are determined through community input, ensuring that discussions remain relevant and responsive to current issues in the internet governance landscape.

The IGF produces several tangible outputs, including:

– Best practice forums

– Policy networks

– Dynamic coalitions

These outputs cover a wide range of topics such as cybersecurity, artificial intelligence, and internet fragmentation. The forum also supports national and regional initiatives to foster local internet governance discussions, allowing for more granular, context-specific dialogues.

Masango noted that the IGF has various engagement tracks, including business, parliamentary, and youth tracks. This structure ensures that diverse perspectives are represented and that the forum remains inclusive of different stakeholder groups.

3. Impact and Partnerships

The discussion highlighted that the IGF has led to concrete outcomes in some regions, such as the establishment of Internet Exchange Points. This demonstrates the forum’s potential to influence real-world internet infrastructure development.

Masango emphasized that the IGF aligns its work with the UN Sustainable Development Goals, showcasing its commitment to broader global development objectives. The forum also partners with various UN agencies, regional bodies, and private companies to extend its reach and impact. A specific partnership with ICANN was mentioned, highlighting collaborative efforts in capacity building and outreach.

4. Participation in the IGF

The speakers encouraged active participation from newcomers and emphasized the IGF’s openness to community-driven improvements and new ideas. Masango explained that individuals can join IGF working groups and mailing lists to get involved. Additionally, national and regional IGFs allow for local participation, while online participation enables remote engagement for those unable to attend in person.

The IGF offers a travel support program to facilitate participation from developing countries and underrepresented groups. Masango also highlighted the IGF’s website as a valuable resource for accessing reports, documents, and other materials related to internet governance discussions.

5. Inclusivity and Accessibility

In response to an audience question, Masango addressed the IGF’s commitment to inclusivity, particularly regarding people with disabilities. He mentioned ongoing efforts to improve accessibility at IGF events and encouraged continued feedback and suggestions from the community to enhance these efforts.

6. Media Engagement

An audience member from Kenya highlighted the existence of an African media caucus and suggested formalizing media participation in the IGF by creating a dedicated caucus. This proposal underscores the importance of media engagement in internet governance discussions.

7. Non-Commercial Nature and Community Support

Masango emphasized the IGF’s non-commercial nature and its dependence on community support. He encouraged participants to contribute their time and expertise to various IGF initiatives and working groups.

8. Future Directions and Challenges

The discussion touched on several potential future directions for the IGF:

– Ethics: An audience member suggested formally including ethics as part of the IGF’s overall aspirations, particularly in relation to AI and human rights discussions.

– Emerging Technologies: An online participant proposed reintroducing distributed ledger technology (blockchain) as a major discussion topic, given its growing importance in the digital economy.

– Organizational Meetings: A long-time participant suggested using the IGF as a platform for organizational meetings and partner gatherings to maximize the value of in-person events.

9. IGF Renewal and WSIS+20 Review

Towards the end of the discussion, Masango mentioned the upcoming renewal of the IGF mandate and the WSIS+20 review. These processes will be crucial in shaping the future of the IGF and its role in global internet governance. The Global Digital Compact, which references the IGF, was also mentioned as an important development in this context.

In conclusion, this discussion provided a comprehensive overview of the Internet Governance Forum, its history, structure, and ongoing evolution. It highlighted the IGF’s commitment to multi-stakeholder engagement, capacity building, and addressing global internet governance challenges. The discussion also emphasized the forum’s openness to new ideas and community-driven improvements, encouraging active participation from all stakeholders in shaping the future of internet governance.

Session Transcript

Chengetai Masango: who don’t. So, and the internet at that time, around the turn of the century, 2000, was increasing in relevance, economically, socially. It was no longer just an academic network. And we had our first, you know, MySpaces and et cetera. I don’t know if those who are old enough to remember those kind of things. So, and this was when Kofi Annan was Secretary General. And so the Secretary General decided that yes, there will form a World Summit on the Information Society. And ITU was one of the lead organizations with UNDP, UNESCO, DESA, et cetera. And as Omer said, there were two phases. Phase one was in Geneva. And when they were discussing, they found out that, okay, we’re discussing something called internet governance. But what exactly is internet governance? Nobody really knew what internet governance was. So what they did, as in all good UN processes, they formed a working group on internet governance. And this working group was actually made up of multi-stakeholders. So governments, civil society, the private sector, as well as IGOs. And they came up with a report, which was presented in the second phase in Tunis, which also formed part of the Tunis agenda. And the mandate of the IGF is written in paragraph 72 of the Tunis agenda, which called upon the United Nations Secretary General to convene a meeting by the end of 2006, which would discuss, oh yes, we asked the United Nations Secretary General in an open and inclusive process to convene by the second quarter of 2006. 2006, a meeting of a new forum of multi-stakeholder policy dialogue. So this was the most important thing, multi-stakeholder policy dialogue. And at that time, this concept of multi-stakeholder policy dialogue, which was envisioned here, was new, especially to the UN system. And at that time, multilateral meant just governments speaking together, and civil society, and the companies, et cetera, would be observers, if that. 
But now we realize that for the Internet, most of the knowledge, of course, was in the technical community, and also civil society had to have a say. So the main thing about the IGF is that everybody has a say, everybody participates on an equal footing at the IGF. And part of it was to, of course, engage stakeholders, identify emerging issues, build capacity as well, which was also very important, especially for the global South, especially for the youth, elderly as well, people with disabilities. We had to make sure that all these people were involved and nobody was left behind. So the first meeting of the IGF and OMA, slight correction, was in Athens. And there in Athens I think we had over, I don’t know, about 800 people coming together. Today we have a registration of over 8,800 people. So we’ve grown over the years, which is very good. So the key points of the IGF, it’s bottom-up agenda setting. When we set our agenda, we do send out a call for themes, and anybody can participate and say what is the theme. the theme of the year. Last year, it was most definitely AI. The year before that, it was internet fragmentation, et cetera. So each year, there’s a new hot topic there. And we also do emerging issues, which is also very important. And we have capacity building. I don’t know, I’ve been talking a lot. Would you want to say a little bit of what we do with… This is actually the most exciting part about the IGF, I think, the intersessional work that we do during the year. So it’s just not planning the global forum at the, which are here now. We have the best practice forums and the policy networks. And what we do is we look at hot topics, things that we think that you and the stakeholders, our community would be interested in, and the mag decides on which types of best practice forums and policies. 
So what we do is we get together, we talk, and we come up with what’s the best practice; we look at standards, do comparisons of them, and then we produce a report. So I encourage you to go to the IGF’s website, and you’ll see the different work that we’ve done over the past few years. They change after a while, but the best practice forum that we have today is on cybersecurity. For policy networks, we have three: AI, internet fragmentation, and meaningful access. Then we have the cooperation of the NRIs and dynamic coalitions. I encourage everybody, again, to go to the website and see if your country or your region has a national or regional IGF, and participate. You don’t just have to participate at the end of the year; you can be active during the entire year. We’ve moved on. Okay. So then we have the two major groups: the IGF leadership panel and the Multistakeholder Advisory Group. For the MAG, as we call it, I’m the current chair, and the chair is selected on a rotating cycle among the stakeholder groups that we have. Currently, I’m actually from the government stakeholder group, and that was for this year. And then we have the leadership panel, which we collaborate with in order to come up with the great stuff that you see here today. And as you can see, the chair of the leadership panel is Vint Cerf, one of the founders of the internet’s underlying architecture. And the vice chair is Maria Ressa; she’s a Nobel Peace Prize winner. And of course, we also have a selection of people who represent each stakeholder group. For Africa, we’ve got Gbenga Sesan, and for the business community, we’ve got Maria Fernanda Garza from ICC BASIS, and she’s also based in Mexico; she’s a small business owner in Mexico. 
We also have the Secretary General’s Envoy on Technology as an ex officio member, and the three hosts: the past host, Japan, which has currently changed a bit; the present host; and today we will also have the next host, which is Norway, joining the leadership panel. And as it says there, the IGF Secretariat is based in Geneva. We’re a very small secretariat, only seven people full-time, and then we do have consultants, but we depend on community support. That’s one of the things: we are like the internet, open-sourced. There are, of course, some people who are full-time, but most of the people are volunteers, like Omer was, and that has also helped. And again, the key principles: an open, multi-stakeholder, bottom-up, inclusive, transparent, non-commercial, and community-centered process. We do not charge anything for the IGF, and the national and regional IGFs don’t charge anything either. These are where we started. We started off in Athens, and then we went to Brazil. Last year, we were in Kyoto, and the year before that, we were in Addis Ababa. We do try to move the IGF meeting from region to region, just to make sure that people have access, because if we have it here, for instance, it’s very difficult for people from South America to come. But when we have it in Brazil, it’s the other way around. So we do make sure that people can come and participate physically, if they want to, more easily. And over the past years, there was COVID, so we had an online meeting, and since then we have actually increased our online and remote participation so that it’s seamless between the two. Next. Yes. So over the years, the Secretary General has attended. This was in Paris, where we had the President of France there, and of course Angela Merkel of Germany as well. And as I noted, remote participation is also very important. We’ll go to the next. Right. So that brings us to where we are today. 
The MAG looked at the topics that the community sent in. And what we thought this year was that, since there are so many cross-cutting topics, we would take a holistic view. So we ended up with building a multi-stakeholder digital future. And as Chengetai was saying, the multi-stakeholder part, that’s you, me, everybody, is extremely important to the entire process and to moving forward. So we have: enhancing the digital contribution to peace, development and sustainability; advancing human rights and inclusion in the digital age; harnessing innovation and balancing risks in the digital space; and improving digital governance for the internet we want. And we thought that these four would help us to build our multi-stakeholder digital future. So, there has been, at the beginning at least, this criticism of, you know, oh, we’re just a talk shop, et cetera. Well, we’re not just a talk shop. But also, there’s nothing wrong with being a talk shop; it is very important to have stakeholders who normally don’t meet come to discuss issues and learn from each other, and then go home, discuss what they have learned here, and implement it at their home station. And this is the third mandate of the IGF. And we’re due for renewal next year. And we have had studies to see what the actual impact of the IGF is. One famous example, which I always quote, is the establishment of the Internet Exchange Point in East Africa. The regulatory authority came to the IGF, met people from Packet Clearing House, et cetera, had a discussion, went home, continued that contact, and an IXP was established. Before the IXP, Internet traffic would route through England, et cetera. Even if it was local, if I wanted to send an email to someone next door in East Africa, it would first go to England and then come back. 
So with the establishment of the IXP, all the local traffic stayed local, which, of course, brought down the cost of the Internet because there’s no intercontinental traffic, et cetera. And that’s also something that we are proud of, or I’ll say I’m proud, that the IGF has managed. Can we just go back? Sorry. And yes, I’m taking too long. And also, we do have outputs, as Carol just said. We have best practice forums where we discuss what the best practices, or actually good practices, are, and we also highlight bad practices, because it’s very important to see those. And the world is not a uniform world. There are places where economies of scale can happen, but then we also have small island developing states where it’s very difficult to have that connection. But people have faced those challenges, and it’s very good to have that knowledge exchange on how they have solved those problems, so people don’t have to reinvent the wheel. And as part of our outputs, we do have the IGF messages, and we try to tailor our outputs so that they can be inputs into other processes. And we’ve also seen that the IGF has been mentioned in G7 processes, and in African Union processes as well. So we do see the effects. And that’s why I say the biggest effect of the IGF is the second-order effects that we have. And you want to take this one? No? Okay. Yes. As we said, these are the different parts of the IGF. And it has grown over the years. As I said, in Athens, we had about 800. Now we have over 8,000, so we’ve grown by a factor of 10. In Japan, we had over 10,000 people. So yes. So we have business engagement. We have sessions for business because it is very important. We have a parliamentary and judiciary track. Parliamentarians are the ones who set those public policy instruments, and it’s very important that they have an understanding of how their laws can affect normal internet life. For instance, how do you set laws to combat counterfeiting, et cetera? 
If you just block IP addresses on a server, you might block not just the server that’s selling fake Gucci handbags, but a whole lot more. The same server may be hosting a hotline that helps people with disabilities, et cetera. So how to combat those, and also how to enable knowledge sharing amongst parliamentarians. Some parliamentarians are more advanced; the European Parliament is quite well known for having good policy instruments, the GDPR, et cetera, and they can share how they do things. Of course, there are newcomers here, and we also try to encourage newcomers to go around. Remote hubs are for people who cannot make it. Instead of watching the IGF by themselves, they can gather at universities or businesses and actually participate as a group, which we find actually encourages more debate and more interaction. We do have travel support. I know some of you are here through the travel support. It’s not as much as we would like, but at least it’s something to bring people in. Youth engagement: youth are our future, of course, and we really do want them to engage. There’s a summer school, there’s the South School, which is South American, there’s one for Asia-Pacific, and there’s also EuroDIG for Europe. And inclusion is, of course, something that we actively look at. After every single IGF, we look at the statistics, we see who has participated, we see who we are missing, and then we look at how we can include them in the next IGF. It’s very important for us. So if you’re wondering how you could continue your experience from today, you can join any one of the groupings that you see here, and we look forward to your input because, again, it’s multi-stakeholder, and that means you. So you could join a best practice forum; there are mailing lists, and you can help to formulate the outcomes. There are also the policy networks, as we said, on internet fragmentation, meaningful access, and artificial intelligence. 
So if you want to be a part of it, if you have something to offer, if you want to learn something, then you should join one of these groups and participate. And then we have the dynamic coalitions; there are 32 of those, and they cover a range of topics: blockchain, child rights online, connectivity, internet values, data, health, DNS, gender. So there’s something here for everyone. So we really encourage you to please join the mailing lists and let your voices be heard. So here we have the multi-stakeholder groups: civil society, academia and research organizations, tech companies, intergovernmental and international organizations, governments, industries, and technical communities. So we’re all part of the IGF. What we also try to do when we form the program is to look at the SDGs, in order to help facilitate meeting the goals for 2030. So we discuss issues that shape the digital future: accountability and trust, and trust is very important these days; innovation, because even though we talk about issues, we can’t let the issues overshadow the innovation that technology allows, so we try to strike a balance between the two; and standards and values, which is very important, what’s the norm of a country, of a member state, what’s the culture, we can’t leave those things out. Then we brainstorm with decision makers and users so that we can form today a digital future that future generations can enjoy. So we have partners and corporations that help us in this endeavor. We have over 174 national and regional IGFs, as you can see, from all over the world. Saudi Arabia did join a couple of weeks ago, so there is a Saudi IGF, and one of the latest ones is Ireland; the Irish IGF was also formed. Is there a Norwegian IGF? So, please, when you go back to your countries, see where the national IGF is. If there’s no national IGF, you’ve got two choices. Join the regional or sub-regional IGF. 
For instance, in Africa, there’s a Southern African IGF, there’s a North African IGF, et cetera. Please do join and participate, or just come to our website and see where you can join. And if there isn’t one, make one. Because the Internet is for everybody, and we will help you form that IGF. Youth? Yes, and we also have youth IGFs, at the national and sub-regional levels as well. So yes, please join us. I know it may be a little bit intimidating; we have over 300 sessions. Just take it bit by bit, and maybe not this year, but in a year or two, you’ll be able to navigate seamlessly throughout. And talk to people, like Omer here, or anybody else. We are very approachable. Myself, even Carol, the chair, we’re very approachable, and we will talk to you about things, yes. IGF and other processes? Okay. As Carol has said, we try to align with the UN 2030 Agenda and the SDGs. Each of our sessions is aligned with one or more of the SDGs. We’re currently behind; COVID did set us back a little bit, and there is a question whether or not we will make those goals. But as you know, ICTs are seen as an accelerator, so we’ll see. And as far as our partners are concerned, internally, basically the whole of the UN is our partner. We work very well with the ITU, UNESCO, UNDP, UN-Habitat, and UN Environment as well. And also outside, we work with the African Union, as I said, the EU, and the regional commissions, like the Economic Commission for Africa, the Economic Commission for Asia, the ECE, the Economic Commission for Europe, et cetera, and companies. We do work with Google, Meta, Disney, and even small companies as well. 
As I said, the person who is on our leadership panel is actually in Mexico, and also the ICC. The Global Digital Compact has been passed, and the GDC did mention the IGF as one of the mechanisms that can help with the implementation of the GDC, due to our strong multi-stakeholder component as well, and we look forward to seeing how we can do that, and we also call upon you to help us with that. And next year, or starting now actually, there is the WSIS+20 review, which will be the main focus for next year, and it’s also coming up to the renewal of the IGF next year. We don’t know how long it’s going to be renewed for, but hopefully we will see. I will not even guess. I’ll not speculate, but we do need your support for that to happen. So if you find the IGF useful, please help in supporting us for the renewal during the course of next year. Thank you. And with that, we’ve got 13 minutes for questions. Anything you’d want to talk about, we’re here. That was very interesting.

Audience: I’m a first-timer at the IGF, and I was looking at the SDGs and some of the values you had, for example, standards and values. I was wondering whether it’s time to include ethics as part of, you know, the overall aspirations of the IGF.

Chengetai Masango: Yes, okay. Ethics is very important, and our policy network for AI is also talking about it, because we need an ethical internet. We need ethical AI. And we are looking at ways to integrate that into all our processes so that it’s designed that way. So ethics, we are human rights based as well. Everything that we do, everything the UN does, well, one of our foundational documents is, of course, the Universal Declaration of Human Rights. And that also seeps its way into what the IGF does, and that should include ethics. Anja, if you could go to slide 9. Sorry. No, keep going. Oh, that’s the one with the topics, right here. So you’ll see here things like advancing human rights and inclusion in the digital age. In that part, you’ll see a lot about ethics. Also, in digital governance, ethics is covered there. We do have an online question. Mohibi, can we get the question? While they’re figuring that one out, yes, please.

Audience: Good morning. We appreciate your efforts. We are considered the sons of the IGF: we have consultative status, and we have participated in the IGF for five years. I take this opportunity to invite you to our booth, because we have launched a platform for protecting intellectual property in the digital era, aligning with the IGF recommendations, and I invite you to come and learn about this platform. And it’s our pleasure to volunteer for the IGF leadership panel. Thank you.

Chengetai Masango: All right, thank you very much. We have one hand up there.

Audience: Thank you. Hi, can you hear me?

Chengetai Masango: At the back, you see there? Yeah.

Audience: Hi. Yeah, can you hear me? Yes. Well, I’m a first-time IGF attendee, and I want to find out if you would be including distributed ledger technology as part of this. I’m from Nairobi, Kenya. And one thing I’d like to do is say thanks: I come from the Association of Freelance Journalists, and I want to appreciate Carol, Vint, and the team at the IGF for having the media caucus come and join the IGF for the first time. Thank you for the support that you’ve given the journalists. We really appreciate it, because I guess it’s time that the IGF stories are able to be told from the inside, not from the outside, where everyone is just saying things. Now the journalists will have the experience, and I hope this will be the beginning of many more, and that we’ll even be able to create a caucus for the media. I’d like to let you know that we currently have the African media caucus, which we started and have kept going; we have several journalists in that group, we meet on WhatsApp, and we get to share what’s happening across the IGF fraternity and in the different NRIs. So I really appreciate it. One thing, I don’t know if it’s too early to ask: of the stakeholder groups that you’ve already mentioned, civil society, private sector, and all the others, let’s have the media also be part of that, if it’s possible. Thank you.

Chengetai Masango: Thank you very much. And also, is it possible to have any of the online people speak?

Audience: Hi, can you hear me?

Chengetai Masango: Is it possible to have the online people speak?

Audience: Can you guys hear me? Thank you. Hello. Hi. Thank you for your time. My name is Ismail Sif from Guinea. As a first-timer at the IGF, I have two questions, please. First of all, for countries that don’t yet have an IGF, I would like information on how to join the IGF. That is my first question. And the second one is: is it possible to share this brief history of the IGF? How can I get it? Do you have a website, or can you share a PDF or something like that? Thank you.

Chengetai Masango: No, thank you very much for your question. We will put it on the website so you’ll be able to find it and download it. And if you want to join a national and regional initiative, the lady sitting in front there, please just approach her and she’ll show you how. Is it possible for Nnenna to speak?

Audience: It is, I believe. Can you hear me?

Chengetai Masango: Yes. Yes, we can.

Audience: I can’t seem to activate my video, but that’s fine. Hello, everyone. My name is Nnenna. I come from the Internet. I had to wake up early in my current location to join you online, first because I’m a resource person, but also because I wanted to show you that you can fully participate in the IGF series from anywhere. I wanted to share two thoughts with you, especially for those who are joining recently. I have been participating in the IGF for the past 19 years; actually, I was there before the IGF started. So it is a good thing for us to listen to our incoming colleagues. I want to raise one thing. Thank you to the one who raised ethics. Over the 20 years, every improvement in the IGF has been community motivated. What we have as best practice forums today was not called best practice forums earlier on; they used to be called birds of a feather. These are people who think that an issue is important. They caucus, just like our media colleague, and they bring it up to the secretariat, and we integrate it. So please, if there is a new idea, if there is a new dimension that you want to see in the IGF, please do not be shy. Speak up. Find birds of a feather, familiar spirits or kindred spirits. Caucus together. Bring it to the fore, and you will see it materialize in the IGF. There is something else. On the IGF website, you have a lot of reports. One of the things you want to do is to read a lot. Read up on who is doing what. Look at intgovforum.org; it has a lot of resources, reports and people. Past IGF sessions are also there. You want to find out who is doing what where and get connected to them. Most of us are online. Most of us are approachable. Most of us are resource people. Please make use of that. Finally, if you happen to be at any IGF, it is a great place to bring media, to bring attention to what you are already working on. You are meeting new people; you are meeting the people you’ve always wanted to meet. 
And if you are launching a book or a report, that will be a good place. And there are a lot of things that do not show up on the official IGF agenda. So your organization may use this to have a partners meeting; you can have one-on-ones; there are a lot of other things you can do on the margins of an IGF. And that explains why most people really use that opportunity, sometimes even to have a team meeting. If your organization is working in the digital space, it would be a good time to have your partners meeting, even your board meeting, a good time to combine the IGF with your team retreat, because it cuts down your costs. So please, if you can’t do the global one, do the regional one. If you can do neither of those, do the national one, but please participate. And if you can’t do it in person, you can just do it online, like myself. Thank you very much.

Chengetai Masango: Thank you very much, Nnenna. And then now we have, if I can read, Mo Hippel. Sorry if I’m messing your name up, but you’re second in line online after Nnenna, please speak. If not, then we’ll go to Anja Albers.

Audience: Hi. I’m a first-time attendee at the IGF. It’s fantastic. Right now I’m online, but I will be in Riyadh in about a few hours’ time. And I want to find out if we can include distributed ledger technology as part of a forum, because that has a massive parallel revenue stream, or economy, going on, and it certainly requires this kind of multi-stakeholder engagement. Thank you.

Chengetai Masango: Can you just repeat if we can include one?

Audience: If we could include distributed ledger technologies, such as blockchain as well as others, into this forum as a discussion point. I’m also a PhD researcher, hence the interest in this area.

Chengetai Masango: Yes, blockchain was a big issue at the IGF a couple of years ago, I think about five years ago, and we had many workshop sessions on blockchain and distributed ledgers. As we mentioned before, it is quite easy to reintroduce that; we do need to have a critical mass of people who are interested in it. So, for instance, for next year’s IGF, we’ll be doing a call for issues, and if you put blockchain and distributed ledgers there, and other people do so, and it rises to the top or near the top, then yes, we can introduce it as a theme. But also, you yourself can have a workshop session on blockchain, you just have to…

Audience: Sorry, I can’t hear you. Okay, there is a disturbance, please.

Chengetai Masango: I’ll try again, and I think the technical people will just close off the microphones. So, we will have a call for sessions as well, and you, in your individual capacity, can apply for a workshop session. You would have to bring two other stakeholders with you, from different stakeholder groups, to apply for a workshop session and have a session next year. And the final thing you can do is to have a day zero session, where the requirements aren’t as stringent, to discuss distributed ledgers. And from that, it can help build the momentum for the topic.

Audience: Okay, that’s great. Thank you. Hello. We will hear. Yeah.

Chengetai Masango: Okay, we’ll have you and then we’ll have somebody from the. And then, unfortunately, we’ll have to close, please. If you can.

Audience: Yeah. Actually, you called on me earlier, but at the time my microphone was muted. So, just.

Chengetai Masango: That’s fine. Please state your question. Yeah.

Audience: Okay, so, yeah, I’m from Montreal, and it’s almost 2 a.m. here. So my question is: I’m from the technical community, and I would like to know how ICANN and the IGF can work together to make the Internet’s core systems, like domain names, more secure and accessible as public infrastructure. How could the IGF and ICANN work together on the technical issues? Yeah, that’s my question.

Chengetai Masango: Thank you very much. Actually, we work very closely with ICANN. At this meeting, in the opening session just now, we will have the incoming CEO and President of ICANN making a speech. There’s a large contingent of ICANN staff and also board directors here at the meeting, and the IGF, myself included, goes to ICANN meetings as well, and to their meetings online, not just to discuss technical issues, but, since we are somewhat similar in the way we organize our conferences, we also share with each other best practices on how to do things better, from both our perspectives. So to answer your specific question, we will just continue to work with them, and not only in our meetings but in the summer schools as well, in our intersessional activities. Our last question, and then we have to go, sorry. Okay.

Audience: Yeah, so may I ask something else? Hi everybody, my name is Sanjay Jackson, Jr. I’m from Colombia, and I would like to find out if the IGF has plans to include people with disabilities, or is there a special section for them?

Chengetai Masango: All right, thank you very much. Yes, we actually do have a dynamic coalition on accessibility and disability, and we work closely with them. They have actually given us a document, which we attach to our host country agreement, to make sure that all our meeting venues are accessible to people with disabilities. And you will also see, in the opening ceremony and session and in the plenary hall, that we have sign language provided as well. And it’s because of them. As Nnenna was saying, we didn’t have it at the beginning, but because the community asked for it, we’ve had it, and we’ll continue to have it. And of course, if there’s more to be done, please feel free to approach any of us or to send us an email with your suggestions. As we said, it’s a community effort; it’s a bottom-up process. And with that, I’m sorry we have to go, but every single one of us is very approachable. We can have discussions in the corridors, et cetera. You can come to our offices, which are at the front there, and we’ll be happy to discuss anything you’d want. Thank you.

C

Chengetai Masango

Speech speed

127 words per minute

Speech length

4185 words

Speech time

1971 seconds

IGF established to address internet governance issues through multi-stakeholder dialogue

Explanation

The IGF was created to provide a platform for discussing internet governance issues. It was designed to be a multi-stakeholder forum where all parties could participate on equal footing.

Evidence

Kofi Annan as Secretary General initiated the World Summit on the Information Society, which led to the creation of the IGF

Major Discussion Point

History and Purpose of the Internet Governance Forum (IGF)

Agreed with

Audience

Agreed on

Multi-stakeholder approach of IGF

IGF provides a platform for stakeholders to discuss emerging internet issues and build capacity

Explanation

The IGF serves as a forum for identifying and discussing new internet-related issues. It also focuses on capacity building, especially for the global South, youth, elderly, and people with disabilities.

Evidence

The IGF has grown from 800 participants in Athens to over 8,800 registered participants currently

Major Discussion Point

History and Purpose of the Internet Governance Forum (IGF)

Agreed with

Audience

Agreed on

Multi-stakeholder approach of IGF

IGF has grown significantly in participation since its inception

Explanation

The IGF has seen substantial growth in participation over the years. This growth indicates increasing interest and relevance of the forum.

Evidence

First IGF meeting in Athens had about 800 people, while current registration is over 8,800 people

Major Discussion Point

History and Purpose of the Internet Governance Forum (IGF)

Agreed with

Audience

Agreed on

Growth and importance of IGF

IGF uses bottom-up agenda setting with community input on themes

Explanation

The IGF employs a bottom-up approach to setting its agenda. The community is invited to suggest themes for each year’s forum, ensuring relevance and inclusivity.

Evidence

Examples of past themes include AI and internet fragmentation

Major Discussion Point

Structure and Activities of the IGF

Agreed with

Audience

Agreed on

Growth and importance of IGF

IGF produces outputs like best practice forums and policy networks

Explanation

The IGF generates tangible outputs through its best practice forums and policy networks. These outputs focus on current hot topics and aim to provide practical insights and recommendations.

Evidence

Current best practice forum is on cybersecurity, and policy networks include AI, internet fragmentation, and meaningful access

Major Discussion Point

Structure and Activities of the IGF

IGF has various engagement tracks including business, parliamentary, and youth

Explanation

The IGF has developed specific tracks to engage different stakeholder groups. These tracks ensure that diverse perspectives are included in the discussions and outcomes.

Evidence

Mentions of business engagement sessions, parliamentary and judiciary tracks, and youth engagement initiatives

Major Discussion Point

Structure and Activities of the IGF

Agreed with

Audience

Agreed on

Inclusivity and accessibility of IGF

IGF has led to concrete outcomes like establishment of Internet Exchange Points

Explanation

The IGF has facilitated tangible outcomes beyond just discussions. These outcomes have had real-world impacts on internet infrastructure and accessibility.

Evidence

Example of the establishment of an Internet Exchange Point in East Africa, which reduced costs and improved local internet traffic

Major Discussion Point

Impact and Partnerships of the IGF

IGF aligns with UN Sustainable Development Goals

Explanation

The IGF’s work is aligned with the UN’s Sustainable Development Goals (SDGs). This alignment ensures that the forum’s activities contribute to broader global development objectives.

Evidence

Each IGF session is aligned with one or more SDGs

Major Discussion Point

Impact and Partnerships of the IGF

IGF partners with UN agencies, regional bodies, and private companies

Explanation

The IGF collaborates with a wide range of partners, including UN agencies, regional organizations, and private sector companies. These partnerships enhance the forum’s reach and impact.

Evidence

Mentions of partnerships with ITU, UNESCO, UNDP, African Union, EU, and companies like Google and Meta

Major Discussion Point

Impact and Partnerships of the IGF

Individuals can join IGF working groups and mailing lists

Explanation

The IGF encourages individual participation through various working groups and mailing lists. This allows for broader engagement and input from the community.

Evidence

Mentions of best practice forums, policy networks, and dynamic coalitions that individuals can join

Major Discussion Point

Participation in the IGF

National and regional IGFs allow for local participation

Explanation

The IGF has established national and regional forums to facilitate local participation. This structure allows for more context-specific discussions and engagement.

Evidence

Mention of 174 national and regional IGFs worldwide

Major Discussion Point

Participation in the IGF

Online participation enables remote engagement

Explanation

The IGF provides options for online participation, allowing individuals to engage remotely. This increases accessibility and inclusivity of the forum.

Evidence

Example of Nnenna Nwakanma participating online and encouraging others to do so

Major Discussion Point

Participation in the IGF

Agreed with

Chengetai Masango

Agreed on

Inclusivity and accessibility of IGF

Ethics should be included in IGF discussions

Explanation

An audience member suggested that ethics should be explicitly included in IGF discussions. This reflects a growing concern about ethical considerations in internet governance.

Major Discussion Point

Future Directions for the IGF

Distributed ledger technology could be a future IGF topic

Explanation

An audience member proposed including distributed ledger technology (like blockchain) as a topic for future IGF discussions. This suggestion reflects the growing importance of these technologies in the digital economy.

Major Discussion Point

Future Directions for the IGF

IGF should continue improving accessibility for people with disabilities

Explanation

An audience member inquired about IGF’s plans for including people with disabilities. This highlights the ongoing need for improving accessibility in internet governance discussions.

Major Discussion Point

Future Directions for the IGF

Agreed with

Chengetai Masango

Agreed on

Inclusivity and accessibility of IGF

Agreements

Agreement Points

Multi-stakeholder approach of IGF

Chengetai Masango

Audience

IGF established to address internet governance issues through multi-stakeholder dialogue

IGF provides a platform for stakeholders to discuss emerging internet issues and build capacity

There is a consensus on the importance of the multi-stakeholder approach in the IGF, allowing diverse groups to participate and contribute to internet governance discussions.

Growth and importance of IGF

Chengetai Masango

Audience

IGF has grown significantly in participation since its inception

IGF uses bottom-up agenda setting with community input on themes

There is agreement on the significant growth of IGF participation and its importance as a platform for discussing internet governance issues.

Inclusivity and accessibility of IGF

Chengetai Masango

Audience

IGF has various engagement tracks including business, parliamentary, and youth

Online participation enables remote engagement

IGF should continue improving accessibility for people with disabilities

There is a shared view on the importance of making IGF inclusive and accessible to various groups, including remote participants and people with disabilities.

Similar Viewpoints

Both the speaker and audience members emphasize the importance of active participation in IGF activities and working groups to contribute to internet governance discussions.

Chengetai Masango

Audience

IGF produces outputs like best practice forums and policy networks

Individuals can join IGF working groups and mailing lists

Unexpected Consensus

Importance of local and regional IGF initiatives

Chengetai Masango

Audience

National and regional IGFs allow for local participation

Online participation enables remote engagement

There was an unexpected consensus on the importance of both local/regional IGF initiatives and online participation, showing a shared understanding of the need for diverse engagement methods.

Overall Assessment

Summary

The main areas of agreement include the multi-stakeholder approach of IGF, its growth and importance, inclusivity and accessibility, active participation in IGF activities, and the value of both local/regional initiatives and online engagement.

Consensus level

There is a high level of consensus among the speakers and audience members on the core principles and functions of the IGF. This strong agreement implies a shared vision for the future of internet governance and the role of IGF in facilitating discussions and solutions to emerging issues.

Differences

Different Viewpoints

Unexpected Differences

Overall Assessment

Summary

No significant areas of disagreement were identified in the discussion

Difference level

Low level of disagreement. The discussion was primarily informative about the IGF’s structure, activities, and goals, with speakers largely agreeing on the presented information. This implies a unified understanding and presentation of the IGF’s role and functions among the speakers.

Partial Agreements

No partial agreements were identified in the discussion


Takeaways

Key Takeaways

The Internet Governance Forum (IGF) was established to address internet governance issues through multi-stakeholder dialogue

IGF has grown significantly in participation since its inception, now attracting thousands of participants

IGF uses a bottom-up approach for agenda setting and produces outputs like best practice forums and policy networks

IGF aligns its work with the UN Sustainable Development Goals and partners with various UN agencies, regional bodies, and private companies

IGF has led to concrete outcomes like the establishment of Internet Exchange Points in some regions

There are multiple ways to participate in IGF, including joining working groups, attending national/regional IGFs, and engaging online

Resolutions and Action Items

IGF will post the presentation slides on its website for download

IGF will continue to work closely with ICANN on technical issues and best practices

IGF will seek support for its renewal in the coming year

Unresolved Issues

How to more formally include ethics discussions in IGF processes

Whether to reintroduce distributed ledger technology/blockchain as a major discussion topic

How to further improve accessibility for people with disabilities at IGF events

Suggested Compromises

For those unable to attend global IGF events, participating in regional or national IGFs was suggested as an alternative

Online participation was highlighted as an option for those who cannot attend in person

Thought Provoking Comments

I was wondering whether it’s time to include ethics as part of, you know, the overall sort of aspirations of the IGF.

speaker

Audience member

reason

This comment introduced an important new dimension – ethics – that had not been explicitly mentioned before. It challenged the IGF to consider expanding its focus areas.

impact

It prompted Chengetai Masango to explain how ethics is already implicitly part of IGF’s work, especially in areas like AI and human rights. This led to a deeper discussion of IGF’s values and priorities.

I’d like to let you know that we currently have the African media caucus that we started and we’ve been going on and we have several journalists in that group and we meet on WhatsApp and we get to share what’s happening in the different IGF fraternity and in the different NRIs that are happening.

speaker

Audience member from Kenya

reason

This comment highlighted grassroots initiatives and self-organization within the IGF community, showcasing how participants are taking ownership of the process.

impact

It drew attention to the role of media and journalists in the IGF process, leading to a suggestion to include media as an official caucus. This expanded the conversation about stakeholder groups and representation.

Over the 20 years, every improvement in IGF has been community motivated. What we have as best practice forums today was not called best practice forums earlier on. They used to call them birds of a feather. These are people who think that an issue is important. They caucus, just like our media colleague. And they bring it up to the secretariat and we integrate it.

speaker

Nnenna

reason

This comment provided valuable historical context and emphasized the bottom-up, community-driven nature of IGF’s evolution.

impact

It encouraged new participants to actively contribute ideas and shape the future of IGF. This shifted the tone of the discussion from informational to more participatory and empowering.

I want to find out if we can include distributed ledger technology as part of a forum, because that has a massive parallel revenue stream or economy that’s going on and certainly requires this kind of multi-stakeholder engagement.

speaker

Online audience member

reason

This comment brought up a specific emerging technology area (blockchain/DLT) and made a case for its inclusion in IGF discussions.

impact

It led to an explanation from Chengetai Masango about how new topics can be introduced to IGF, including through workshop proposals. This provided practical information for participants wanting to shape the agenda.

I would like to find out if there are plans IGF has to include the Handicap Society, or is there a special section for the Handicap Society?

speaker

Sanjay Jackson, Jr.

reason

This question raised an important issue of inclusivity and accessibility, which are crucial for a truly global and representative forum.

impact

It prompted a discussion of IGF’s efforts to include people with disabilities, both in terms of physical accessibility at events and representation in discussions. This highlighted IGF’s commitment to inclusivity and responsiveness to community needs.

Overall Assessment

These key comments shaped the discussion by broadening its scope beyond the initial presentation of IGF’s history and structure. They introduced new topics (ethics, blockchain), highlighted grassroots initiatives, emphasized the community-driven nature of IGF, and raised important questions about inclusivity. This led to a more interactive and dynamic conversation that showcased IGF’s adaptability and responsiveness to participant input. The discussion evolved from a one-way informational session to a more collaborative exploration of IGF’s present and future directions.

Follow-up Questions

How to include ethics as part of the overall aspirations of the IGF

speaker

Audience member (first-timer at IGF)

explanation

The speaker suggested it may be time to explicitly include ethics in IGF’s goals, given its importance for issues like AI and internet governance.

Possibility of creating a media caucus within IGF

speaker

Audience member from Association of Freelance Journalists

explanation

The speaker suggested formalizing media participation in IGF by creating a dedicated caucus, similar to other stakeholder groups.

How countries without an IGF can join

speaker

Ismail Sif from Guinea

explanation

As a first-time attendee from a country without an IGF, the speaker wanted information on how to establish participation.

Including distributed ledger technology (blockchain) as a discussion topic

speaker

Online audience member (PhD researcher)

explanation

The speaker suggested reintroducing blockchain and distributed ledger technologies as a major topic, given their growing importance in the digital economy.

How ICANN and IGF can work together on technical issues

speaker

Online audience member from Montreal

explanation

The speaker from the technical community wanted to know how IGF and ICANN could collaborate to improve security and accessibility of core internet systems.

Plans for including people with disabilities in IGF

speaker

Sanjay Jackson, Jr. from Columbia

explanation

The speaker inquired about specific initiatives or sections dedicated to including people with disabilities in IGF activities.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.