WS #150 Language and inclusion – multilingual names


Session at a Glance

Summary

This panel discussion focused on the challenges and opportunities of implementing Internationalized Domain Names (IDNs) and promoting universal acceptance to foster a more inclusive and multilingual internet. Experts from organizations like ICANN, UNESCO, and national telecommunications regulators shared insights on the progress made and obstacles faced in this area.


The panelists highlighted that while IDNs were introduced over a decade ago, their adoption remains low, at only 1.2% of global domain registrations. Key challenges include technical issues with universal acceptance across applications, lack of awareness among users and decision-makers, and the need for more robust policies and standards. The discussion emphasized that achieving truly multilingual internet access requires coordinated efforts across stakeholders, including governments, businesses, and technical communities.


Several initiatives were discussed, such as ICANN’s work on universal acceptance, UNESCO’s promotion of digital inclusion, and national efforts to implement Arabic domain names. Panelists stressed the importance of raising awareness about IDNs and their benefits, particularly in preserving local languages and cultures online. They also noted the need for more empirical studies on the economic benefits of multilingual internet access to encourage adoption.


The experts agreed that while progress has been made, significant work remains to be done in areas like improving user experience, addressing security concerns, and ensuring consistent support across platforms and applications. The discussion concluded that achieving a truly multilingual and inclusive internet requires ongoing collaboration, technical innovation, and policy development at both national and international levels.


Key points

Major discussion points:


– The importance of internationalized domain names (IDNs) and universal acceptance for digital inclusion and a multilingual internet


– Technical and awareness challenges hindering widespread adoption of IDNs


– The role of governments, regulators, and other stakeholders in promoting IDNs and universal acceptance


– The need for a holistic, global approach to creating a truly multilingual internet


– Efforts to increase awareness and technical readiness for IDNs and universal acceptance


The overall purpose of the discussion was to explore the current state of internationalized domain names and universal acceptance, including progress made, ongoing challenges, and potential solutions to create a more inclusive and multilingual internet.


The tone of the discussion was generally informative and collaborative, with panelists sharing insights from their various perspectives and experiences. There was a sense of both optimism about progress made and recognition of significant work still needed. The tone became slightly more urgent towards the end, with calls for greater prioritization and strategic planning around multilingual internet initiatives.


Speakers

– Moderator: Panel moderator


– Bhanu Neupane: Program manager for ICT and sciences and open access to scientific research at UNESCO


– Theresa Swinehart: Senior vice president global domains and strategy at ICANN


– Walter Wu: President of Internet DotTrademark Organisation Limited, Universal Acceptance ambassador


– Hesham M. AL-Hammad: Domain Names Director at the Communications, Space, and Technology Commission of Saudi Arabia


– Manal Ismail: Chief expert internet policies at the National Telecommunication Regulatory Authority of Egypt, Egyptian government representative in ICANN’s governmental advisory committee


– Sarmad Hussain: ICANN representative (specific role not mentioned)


Additional speakers:


– Fouad Bajwa: Audience member asking question (role/expertise not specified)


– Abdulmenem: Works for Telecom Regulator of Egypt


– Jamal Shaheen: Audience member asking questions (role/expertise not specified)


Full session report

Expanded Summary: Panel Discussion on Internationalized Domain Names and Universal Acceptance


This panel discussion brought together experts from UNESCO, ICANN, and national telecommunications regulators to explore the challenges and opportunities surrounding Internationalized Domain Names (IDNs) and universal acceptance. The discussion aimed to address the current state of IDNs, progress made, ongoing challenges, and potential solutions to create a more inclusive and multilingual internet.


Importance of IDNs for Digital Inclusion


Panelists unanimously agreed on the crucial role of IDNs in fostering digital inclusion and preserving cultural diversity online. Bhanu Neupane from UNESCO emphasized that IDNs are essential for cultural diversity and digital inclusion. Manal Ismail stated that IDNs are vital for continued internet expansion and reaching the next billion users. Hesham M. AL-Hammad highlighted the importance of IDNs in preserving local languages and culture online, while Theresa Swinehart from ICANN noted that IDNs allow users to engage online in their own languages and scripts.


Current State and Challenges in IDN Adoption and Implementation


Despite their recognized importance, IDNs face several significant challenges:


1. Lack of awareness: Walter Wu pointed out the limited knowledge about IDN availability and benefits.


2. Technical issues: Sarmad Hussain highlighted problems with universal acceptance of IDNs across systems.


3. Script complexity: Hesham M. AL-Hammad discussed the challenges of managing variants in scripts like Arabic, where a single word can have multiple valid representations.


4. Low demand: Manal Ismail noted the slow uptake of IDNs in many regions.


5. Infrastructure readiness: Hesham M. AL-Hammad used an analogy to illustrate the complex interdependencies in implementing IDNs: “Sometimes we have the planes, but the airport is not ready. But still, this is a fact, and we need to work with it. So we need to work in parallel.”
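The "master key" approach Hesham M. AL-Hammad describes for Arabic variants (item 3 above) can be sketched in a few lines. This is a toy illustration under stated assumptions, not CST's actual algorithm or tables: the variant mappings below are a small hypothetical subset chosen for demonstration, and real registries use far richer label generation rules.

```python
# Toy sketch of the "master key" idea for Arabic variant management:
# every registered label is reduced to a canonical form, and any new
# label that reduces to an already-taken key is treated as a variant
# of the existing registration. The mappings below are an illustrative
# subset only, not an actual registry policy table.

# Map visually or linguistically confusable Arabic letters to one canonical letter.
VARIANT_MAP = str.maketrans({
    "\u0623": "\u0627",  # alef with hamza above -> bare alef
    "\u0625": "\u0627",  # alef with hamza below -> bare alef
    "\u0622": "\u0627",  # alef with madda       -> bare alef
    "\u0649": "\u064A",  # alef maqsura          -> yeh
    "\u0629": "\u0647",  # teh marbuta           -> heh
})

def master_key(label: str) -> str:
    """Reduce a label to its canonical 'master key' form."""
    return label.translate(VARIANT_MAP)

registry: dict[str, str] = {}  # master key -> first registered label

def register(label: str) -> str:
    """Register a label, or report it as a variant of an existing one."""
    key = master_key(label)
    if key in registry and registry[key] != label:
        return f"variant of existing registration {registry[key]!r}"
    registry[key] = registry.get(key, label)
    return "registered"

print(register("أمل"))   # registered
print(register("امل"))   # variant of existing registration 'أمل'
```

Registering one label effectively blocks its whole variant set, which is the space-saving property Hesham describes: the millions of theoretical variants never need individual registrations.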


Walter Wu provided specific insights into the Chinese market, noting that while IDNs have a 20% market share in China, challenges remain, such as the need for better email address support and improved user experiences.


Efforts to Promote IDNs and Universal Acceptance


Panelists shared various initiatives to promote IDNs and universal acceptance:


1. UNESCO’s efforts to advance digital inclusion and multilingualism.


2. ICANN programs to enable IDN awareness and universal acceptance, including Universal Acceptance Day, whose most recent edition involved over 52 events in 47 countries.


3. Saudi efforts to develop tools and applications for Arabic IDNs.


4. Chinese registrar community’s push for IDN usage and promotion.


Sarmad Hussain provided a technical explanation of how multilingual email addresses work, emphasizing the progress made in this area. He also noted growing awareness and adoption of IDNs and email addresses in local languages over time.
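The mechanics behind multilingual email addresses can be illustrated briefly. Under SMTPUTF8 (RFC 6531) the local part of an address travels as UTF-8, while the domain half of the address can always be rendered in an ASCII-compatible Punycode form for DNS lookup. The sketch below uses Python's built-in `idna` codec, which implements the older IDNA 2003 rules; production systems should use an IDNA 2008 library, but the Punycode mechanics shown are the same.

```python
# Minimal sketch of how an internationalized email address is handled:
# the local part stays UTF-8 on the wire (RFC 6531 SMTPUTF8), while the
# domain can be converted to an ASCII-compatible (xn--) form for DNS.

def ascii_form(address: str) -> str:
    """Return the address with its domain converted to Punycode form."""
    local, _, domain = address.rpartition("@")
    return local + "@" + domain.encode("idna").decode("ascii")

# The Unicode and Punycode domains name the same DNS entry.
addr = "info@bücher.example"
print(ascii_form(addr))  # info@xn--bcher-kva.example

# Round trip: decoding the Punycode domain restores the Unicode label.
punycode_domain = ascii_form(addr).rpartition("@")[2]
assert punycode_domain.encode("ascii").decode("idna") == "bücher.example"
```

The key point Sarmad makes follows from this: the conversion only solves the DNS half of the problem, and every application along the delivery path must still accept and preserve the UTF-8 local part for the address to work end to end.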


Role of Governments and Stakeholders


The discussion emphasized the critical role of governments and multi-stakeholder collaboration in promoting IDNs and universal acceptance:


1. Manal Ismail argued that governments should lead by example in promoting IDNs.


2. Hesham M. AL-Hammad stressed the need for collaboration between regulators, industry, and academia.


3. Bhanu Neupane highlighted the importance of including universal acceptance in national internet policies, noting that many governments still lack this context in their policies.


4. Sarmad Hussain advocated for a multi-stakeholder approach to address technical and policy issues.


Unresolved Issues and Future Directions


Several unresolved issues were identified:


1. Increasing demand and uptake of IDNs in many regions.


2. Addressing ongoing technical challenges with universal acceptance across systems.


3. Managing the complexity of script variants, particularly for languages like Arabic.


4. Lack of empirical studies on the economic benefits of a multilingual internet. Bhanu Neupane mentioned a single report suggesting $9 billion in potential economic benefits but emphasized the need for more substantiated research.
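The universal-acceptance gaps in item 2 frequently come down to application-level input validation. A common failure mode, sketched here with a hypothetical legacy validator (the function names and the short TLD list are illustrative assumptions, not any real product's code), is hard-coding a "known TLDs" list or an ASCII-only pattern, which silently rejects valid IDN and new gTLD domains:

```python
# Illustration of a classic universal-acceptance bug: hard-coded TLD
# lists and ASCII-only patterns reject valid IDN and new gTLD names.
import re

# Hypothetical legacy check (buggy): ASCII labels only, short "known" TLD list.
LEGACY = re.compile(r"^[a-z0-9-]+(\.[a-z0-9-]+)*\.(com|org|net|edu)$")

def legacy_ok(domain: str) -> bool:
    return LEGACY.match(domain) is not None

# UA-minded check (sketch): accept any Unicode labels whose IDNA-encoded
# form is syntactically valid, and do not enumerate TLDs at all.
def ua_ok(domain: str) -> bool:
    try:
        ascii_name = domain.encode("idna").decode("ascii")
    except UnicodeError:
        return False
    labels = ascii_name.split(".")
    return len(labels) >= 2 and all(0 < len(l) <= 63 for l in labels)

for d in ["example.com", "example.shop", "مثال.السعودية"]:
    print(d, legacy_ok(d), ua_ok(d))
```

The legacy pattern accepts only the first of the three test names, while the UA-minded check accepts all three; this is the "treat all top-level domains consistently" principle the panel returns to throughout.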


The panel suggested several action items:


1. UNESCO and ICANN partnering to prepare policy briefs for member states on universal acceptance.


2. Continuing efforts to raise awareness about IDNs through events like Universal Acceptance Day.


3. Working on improving technical solutions for IDN implementation and universal acceptance.


4. Encouraging governments to include universal acceptance in national internet policies.


5. Using IDNs as a criterion for measuring digital transformation readiness, as suggested by Hesham M. AL-Hammad.


An audience member raised a question about country codes in IDNs and how they are chosen, highlighting the need for awareness at different levels of implementation.


Conclusion


The panel discussion provided a comprehensive overview of the current state of IDNs and universal acceptance. While progress has been made, significant work remains to achieve a truly multilingual and inclusive internet. Manal Ismail’s statement, “We owe it to those who need it so we should continue pursuing this forward slowly but surely,” underscored the moral imperative and long-term commitment required. The discussion emphasized the need for ongoing collaboration, technical innovation, and policy development at both national and international levels to overcome the challenges and realize the full potential of IDNs.


Session Transcript

Moderator: with another mic, I hope it’s better. OK, so language and inclusion. As many of you know, multilingual internet was one of the Tunis agenda items back in 2005. Language plays a crucial role in fostering cultural diversity and promoting digital inclusion. And to address language inclusion in the digital age, we should consider how individuals and communities can have equal opportunities to benefit from digital technologies, regardless of their language or how and where they access these technologies. Today’s panel will focus on domain names, which are crucial to the functioning of the internet. I’m joined by a panel of experts to discuss ongoing collaboration and efforts aimed at ensuring all domain names, including internationalized domain names and email addresses, are treated equally and can be used seamlessly across all internet-enabled applications, systems, and devices. We will also explore the roles of various stakeholders in advancing universal acceptance of domain names and promoting internationalized domain names to support a more inclusive and accessible internet. So let me take a couple of minutes to introduce our speakers. Here in the room, I’m joined by Hesham Al-Hammad, Domain Names Director at the Communications, Space, and Technology Commission of Saudi Arabia. CST is the organization that operates Saudi’s top-level domains: the ASCII .sa and the Arabic .السعودية. Also in the room with me is Theresa Swinehart, Senior Vice President, Global Domains and Strategy at ICANN, and many of the key programs and initiatives we’re going to touch upon today are actually being run and spearheaded by Theresa’s team. Joining remotely we have Manal Ismail. Manal is Chief Expert, Internet Policies at the National Telecommunication Regulatory Authority of Egypt. Manal is also the representative of the Egyptian government in the Governmental Advisory Committee of ICANN. Also remotely we have Walter Wu, President of Internet DotTrademark Organisation Limited.
Walter is also a Universal Acceptance ambassador, and joining shortly, or maybe he has joined already, is Bhanu Neupane, Program Manager for ICT and Sciences and Open Access to Scientific Research at UNESCO. Bhanu has been with UNESCO for over 23 years and brings significant experience with internet multilingualism and language-related technologies and initiatives. So, all right, I think Bhanu has already joined, so let me start with Bhanu. Bhanu, UNESCO plays an essential role in promoting digital inclusion. You yourself have been involved in many multilingualism and language-related initiatives. So maybe you can tell us a little more about those initiatives and how they contribute to the fulfillment of the WSIS agenda and the sustainable development goals. Can we unmute Bhanu? Bhanu, are you able to speak? Bhanu, I think you can speak now.


Bhanu Neupane: Yes, can you hear me now? Yes, we can. Okay, sorry about that. It’s okay. Sorry, Chair, I think you’ll have to repeat your question one more time so that I can start from the beginning. Sorry about that. No, that’s fine. So it was about UNESCO’s role in promoting digital inclusion through the various initiatives you’ve been involved in. If you can tell us a little more about those initiatives and how they contribute to the fulfillment of WSIS goals and agenda. Thank you so much, Chair. An extremely pertinent question. As you know, UNESCO is an intergovernmental organization, and we’ve been playing a very decisive role in advancing digital inclusion through targeted initiatives that align with the WSIS agenda and support the advancement of the Sustainable Development Goals at large. For WSIS, UNESCO leads Action Lines C3 and C8, and I think this session will pretty much fall in the bracket of C8, C9, and C10. And we are also responsible for the e-science component of C7, which is one of the ICT applications. For the SDGs, along the same lines, we are working on SDG 4, SDG 9, and SDG 16, which primarily looks at peace, justice, and strong institutions. And access to information also falls within this particular SDG. So that goal is pretty much advancing something that we are working on: digital inclusion. One of the key focuses of UNESCO is universal access to information and knowledge. And this primarily ensures that digital transformation promotes equity, inclusion, and multilingualism. So these are the three areas that we are focusing on. The commitment dates back to the 2003 recommendation. There was a recommendation that member states had in fact agreed on in 2003, just about the time when WSIS was also starting: the Recommendation concerning the Promotion and Use of Multilingualism and Universal Access to Cyberspace, which emphasizes the importance of linguistic diversity and inclusivity in the digital world.
Quite interestingly, in 2023 UNESCO recognized something that I think this audience primarily understands, and it co-opted universal acceptance as one of the pillars that it would accomplish while implementing this recommendation. So since then, we are pretty much focusing on and working primarily with ICANN to see how domain names can be internationalized and how universal acceptance can be brought to the doorstep of member states. How can they start making their internet universal-acceptance ready? That’s one. The other thing that we are also working on is building digital and media literacy, where we are primarily targeting empowering and enhancing the capacity of member states to understand that the internet must be utilized in more than one language. And we are in fact working on that. Of course, also in UNESCO’s cradle is ethical AI and its inclusivity. This is where the Recommendation on the Ethics of Artificial Intelligence comes to the fore. We are also advancing ICTs for education and primarily working in the area of open solutions, which broadly takes on board three things: open educational resources, openness of data, and, more recently, free and open source software. We are working more and more in language technologies and trying to see whether or not these language technologies will also become common fare for member states and our stakeholders at large. And of course, we are also custodian of SDG indicator 16.10.2, which primarily empowers member states to make information the cradle on which development processes move forward. So I’ll stop there, and perhaps I’ll come back to these points in a bit. Over to you, Chair.


Moderator: Thank you. Thank you, Bhanu. That was very comprehensive. So let me move to Theresa. Theresa, the ICANN community introduced internationalized domain names at the top level of the root of the DNS more than a decade ago. So how do you think IDNs have helped foster a more inclusive digital landscape, and what are the key challenges that continue to hinder their spread?


Theresa Swinehart: Thanks, Baher, and thanks for organizing the panel and for everybody who’s participating. As the IGF knows, for a truly connected world, you need to have a unified, interoperable system that’s accessible to all. But as users of the internet, we all want to be able to engage and communicate in the languages that we speak, in the scripts that we use, and in the methodology that we want to use, the length of something, a domain name, right to the dot or left to the dot, around all that. We want to do that in our day-to-day lives, and there’s no reason that we shouldn’t be able to do that while we’re online in the virtual world. Audio from the room is not clear, so let’s, maybe you can try this mic. Is that better? Yes, excellent, excellent. Oh, hold it like this, okay. We got it? Okay, excellent. By the end of this meeting, we’ll all be very trained in how to do this. So in this regard, ICANN has a role, we have a limited role within our mission with relation to internationalized domain names and tables and a lot of the work that is, no, not working. Thank you. Is that better? Is that working? Excellent. Okay. Okay. So where I was was that we have a limited mission and a responsibility around the space and we’re working with the community on it. But we also work with many of the partners and other entities on how to enable both awareness around internationalized domain names and universal acceptance and then those that play a role in enabling universal acceptance more generally. And many of you are in the room here as well and participating. So as you know, internationalized domain names are important in the way that we help make the Internet multilingual and inclusive. For those of you who are not familiar, it essentially means it enables somebody to use a domain name in the local language or script. For example, Arabic, Chinese, or Cyrillic.
And with this introduction more than ten years ago, we’ve seen many, many language communities around the world coming online. Part of the challenge, though, is an awareness that they can communicate in their own language or in their own script. And so with that, we’re still in the early days of multilingual DNS and, as you’ve noted, it’s been about a decade since IDN top-level domains were first introduced. Some of the challenges that exist around this are overcoming the geographic or linguistic barriers, the generation of local content, and the enabling of technical skills and capacity development. These are all critical for enabling the usage around this, and that in turn has an impact on economic growth, independence, and the minimizing of reliance on others. So a key goal in enabling this is actually what’s referred to as universal acceptance. That is the ability for the technology, the platform, to accept the script or the length of what is to the right or the left of the dot in the communication methodology. And it means that all Internet applications and systems treat all top-level domains in a consistent manner. This is essential for the continued expansion of the Internet and it provides a gateway to the next billion Internet users in a way that will make it more meaningful for them. But we’re not there yet, and progress has been made, but there are still gaps. For instance, testing shows that only about 11% of the top 1,000 global websites can accept internationalized email addresses, and just 22.2% of email servers support them. So these challenges highlight the urgent need to look at how to address universal acceptance in the partnerships that we can enable with governments, businesses, any entity around that. All the actors developing applications and operating services online are encouraged to ensure their systems are updated to support all domain names and email addresses.
And this alignment will not only promote a digital landscape that’s unified, but it will have a ripple effect on encouraging people to utilize local content and engage in their own languages. So within ICANN, as Baher noted, we have a team that’s dedicated to identifying and making important software fixes to allow for universal acceptance. My colleagues online, Sarmad, Seda, Pitanan, are experts in this, and as we go into more detail, more than happy to engage in those conversations. As noted, we’re also working with the broader ICANN community to ensure awareness around this. And with that, we are now in our third year of, I believe it’s the third year, yes, of the Universal Acceptance Day, which was held on or around the 28th of March in both 2023 and 2024. And in this year, we had over 52 events in 47 countries. We will be hosting it again, but in partnership with our colleagues from UNESCO, which we’re looking forward to very much in 2025, and have already received many applications from interested parties to participate from around the world. So as we move closer to enabling the ability to engage in your own language, we’re also moving closer to opening up our next round of new top-level domains in 2026. This will be an important opportunity for universities, governments, and businesses that wish to apply for a top-level domain in their own language or their own script to participate and to apply for that, and thus enabling even more inclusivity in the world. So with that, Baher, I’ll turn it back over to you, but thanks for the opportunity.


Moderator: Thank you, Teresa. And on the UA Day, it may be worth noting that several UA Day events will be held in this region as well, including one in Saudi Arabia in partnership with the CST. So, okay, let’s move to Walter. I’ll come to you now with, you know, your significant experience with the implementation of IDNs, specifically the Chinese names. What do you see as the potential value of IDNs in promoting local businesses and communities? And if you can tell us more about this experience and the challenges you’ve encountered. I hope you can unmute yourself. Got it.


Walter Wu: Thank you. Actually, I’m very glad to get this opportunity to share the development status of the IDN market in China. And I will give a short self-introduction at the very beginning for this part. Actually, I’m from the registry of a Chinese IDN, Dot Shangbiao (.商标), which means trademark in Chinese. As a co-founder of the Dot Shangbiao registry, personally I’m very grateful for the new gTLD program of ICANN, because it gave us an opportunity to make an innovative attempt at launching a dedicated TLD for trademark holders. Shangbiao is a two-character Chinese word that means trademark in English. Compared with the traditional TM or R mark, it creates the best awareness for the customer that this is a trademark. So the key value for us in choosing Shangbiao as the TLD name to operate is the trademark-identifying function of these two characters. .Shangbiao not only limits registration to registrants holding a trademark right; the Dot Shangbiao registry also verifies the trademark right of the registrant, and the registered name has to match their trademark and brand name. This service focuses on the Chinese-speaking region. It provides an opportunity for brand owners to create a domain name that exactly matches their brand name in Chinese. In the second part, I will share some market status of Chinese internationalized domain names overall. By the end of Q2 2024, the number of IDN gTLD registrations globally was around 408,000. The total number of Chinese IDNs was 347,000, so Chinese IDNs hold a market share of 85% of the global IDN market. .Wangzhi (网址) means web address, .zaixian (在线) means online, .shangcheng (商城) means shop, .gongsi (公司) means company, .shangbiao (商标) means trademark, and .shouji (手机) means mobile phone; these are the top Chinese IDN TLDs. Those six Chinese IDN TLDs get around 64% of the global IDN market. In the next part, I will talk about the value of IDNs for Chinese customers and the Chinese communities.
The new gTLD program and overall IDN development gave us the opportunity to implement a trusted domain solution for end users like enterprises and organizations. Most of those organizations use a local Chinese name. In the ASCII domain name system, the registrant must use a translation, an abbreviation, or the romanization we call pinyin, which spells out the pronunciation of the Chinese characters. That actually makes the domain names not easily remembered and recognized. It also makes phishing websites easier to create, since the domain names cannot easily be distinguished by internet users. For brand owners in the Chinese-speaking community, a domain name exactly the same as the brand name in Chinese brings high value for online direct-to-customer marketing. Not only local brands need IDNs; multinationals also need IDNs in China. There are several examples of Chinese IDNs registered by multinationals: Starbucks registered its Chinese brand name, and Tesla registered its Chinese brand name, Tesila (特斯拉), under Dot Shangbiao. Because in China, a lot of customers may not easily remember the English name. I don’t have the exact statistic, but I guess over 90% of Chinese consumers cannot spell Tesla or Starbucks correctly. So a lot of multinationals will localize their brand when they enter the China market. All of those companies have a Chinese local name. Before the Chinese IDN program launched, they could only use their English brand name to register their domains. Since customers may not remember and spell the domain names correctly, it was a big challenge for those brand customers to launch their official websites. And last, I will also share some of the difficulties of IDN implementation in Chinese. Overall speaking, lacking overall awareness of IDNs is the biggest challenge for internationalized domain names.
Without that awareness, not a lot of registrants actively use and promote them. Part of the reason is that they have met the UA challenges. Browsers have always been the most important UA issue. Although in the last couple of years the browser issue has been dramatically solved, unfortunately we recently met another UA issue: Safari, after the iOS upgrade, does not support Chinese IDNs right now. So, as always, we meet all kinds of challenges with UA. Besides browsers, search engines, EAI, and hyperlinking on social media are also several key issues. So those are the major difficulties that we face, I think, in the IDN development in China. Thank you very much.


Moderator: Thank you, Walter. And perhaps we’ll come back to you in the next round of questions. Maybe you have some solutions to share as well. So let’s come back to the room. Hesham, Saudi Arabia, of course, and SaudiNIC and the team at the CST have been at the forefront of advocating for the introduction of IDNs, specifically the Arabic domain names, and promoting their use in web, email, and various other applications. So could you tell us more about this journey, what were the biggest challenges, where you are now, and so on?


Hesham M. AL-Hammad: Thank you, Baher, for inviting me to this session. I am happy to be with the distinguished guests and speakers, and thank you all for attending this session. As you mentioned, Saudi Arabia, represented by SaudiNIC, started early to introduce Arabic domain names. The first start was in maybe 2004, in cooperation with the GCC, the Gulf countries, and also with the Arab League. And based on this, they developed a pilot project, and this pilot project was used to identify the aspects that needed to be developed to be ready for the introduction of IDNs when they were introduced by ICANN. The Saudi domain names and the Arabic IDN for the Saudi domain launched in 2010, with all thanks to ICANN for their support. And the first Arabic domain name on the internet was a Saudi domain name. After that, we started the journey to make sure that these domains would work as expected, and to avoid any challenges that could face the users, and even the registrants. One of the major challenges that faced us: maybe you know that the Arabic language is part of the Arabic script, and the Arabic script contains many languages. It is used by more than 43 countries. The shapes of some Arabic letters are the same between different languages, and even within Arabic there are also some kinds of variants for the same letter. And this was a very challenging thing. We focused on the part related to the registry. We wanted to make the registration straightforward. We wanted to avoid any confusion that could occur from registering specific domain names. So we focused on this kind of variant. We can see variants, for example, in English too, but there they are solved by the protocol, in the protocol itself. For example, Google has variants regarding capital and small letters, but that is solved by the protocol. In Arabic, we have many problems with the variants.
For example, if we take the first two words from our organization’s name in Arabic, Hay’at al-Ittisalat, the Telecommunications Commission, this has more than 2 million variants. And this should be considered. So what we did, and our valuable colleagues and pioneers did, was to develop algorithms which use something called a master key. So we have one master key for all the variants. By registering one domain, we can block all the variants and enable the user to activate any variant of it. This master-key algorithm also saves space: instead of registering the 2 million variants to prevent anyone from registering such a domain and making it confusing for the users, only the master key is used. Again, we faced another problem: these 2 million variants are not based on the language itself; they are not practically used. For this, we went to another stage and built a filter. This filter has four levels. The first level is must-be-allocated, for reachability. The second one is desired variants. Then we have not-desired variants, and the fourth category is blocked. This minimized the number of variants by maybe 98%. For example, a name like the holy city, Makkah al-Mukarramah, has maybe around 3,200 variants. The desired variants are only four. So this minimized the number of variants that need to be enabled. In this situation, we also provide our registrants with the ability to specify the desired variant, and they can register it. There is no need to register another domain; it is considered a variant, and it has the same specifications and configuration needed for the same domain. Before 2021, we were providing the registration directly.
It was direct registration without fees. After 2021, we now have registrars; we transitioned to the registry-registrar model. In this situation, we also engage our registrars with us. We now provide them with an API to define the variants, define the desired variants, and give them the ability to enable any variant of the Arabic domain. We also went a step further and worked on email. We launched, in the early stages, Rasil, the first version of Rasil, in the period between 2010 and 2013. This was launched before the standardization of email address internationalization. We implemented it as a kind of hack for Outlook and some clients, and we succeeded in this part. After the introduction of EAI, we also developed the second phase of Rasil, and we tested it with the international email providers, like Google and Microsoft, and it worked with success. Our implementation was to find out what the difficulties and challenges are; it was like a proof of concept for this part. Again, we faced some challenges with the domain, which we solved with the variants. For example, we have the user part before the domain; we have the same variant situation there as well, and it should be considered, to avoid having the same shape for the username under different code points. This is also one of the challenges that we faced. As one of our initiatives, even before the universal acceptance work started, we also published some reports about the readiness of browsers and other systems for IDNs. We published them in 2010 and also 2014, to make sure and to show the case that there is a problem with specific providers. This is maybe our journey, in short summary. I can talk about the challenges now, or leave it for the second round. OK.


Moderator: Thank you, Hesham. This is obviously a very rich journey, and I'm sure there are lots of lessons learned. We'll come back to you to talk about the challenges, and also about some of the solutions you have built to deal with them in applications, email, and so on. Now I'd like to go to Manal. Manal, you've been part of this journey for a long time, since ICANN started working on the policies that allowed IDN ccTLDs at the top level, the process known as the Fast Track, and so on. You're part of the government, part of the regulator in Egypt, which is also the manager of Egypt's IDN ccTLD, .masr. How do you think national regulators can help foster digital inclusion at the local level, particularly in the DNS area and in collaboration with other players in the ecosystem? What role can regulators play in that regard? And if you also want to touch on universal acceptance, you can do that, or we can come to it later. Manal, over to you.


Manal Ismail: Thank you. Baher, can you hear me? Yes, we can. Perfect. Thank you very much, Baher, for the opportunity, and thanks to Saudi Arabia for hosting this year's IGF and enabling remote participation. I'm sorry I was not able to join you in person, but thank you for the opportunity. A multilingual internet is crucial for digital inclusion, as other speakers have already mentioned, and a truly multilingual internet is essential for the continued expansion of the internet and the growth of the online population. It is necessary to keep the gate open for the next billion internet users to connect meaningfully to the internet. ICANN has already introduced more than 1,200 new gTLDs, 100 of which are IDNs. We have around 60 IDN ccTLDs, we have started to see mailboxes that are no longer just in ASCII, and another round of new gTLDs is on its way. Yet the actual number of IDN registrations remains relatively low, at 1.2 percent of the global domain name market, and its uptake is very slow. Per the EURid IDN World Report 2024, only three ccTLDs witnessed notable growth in their IDN registrations over the past year, while the majority of ccTLDs experienced minimal or no growth at all, with 19 ccTLDs reporting contraction in their total IDN registrations. This makes universal acceptance a fundamental requirement for unleashing the full potential of IDNs and internationalized email addresses and for providing a truly multilingual and digitally inclusive internet. Internet multilingualism is also of strategic importance to governments, in accordance with their digital transformation plans and the promise to leave no one behind. They have a distinct role to play in that respect in order to reduce inequalities and bridge digital gaps.
So deploying universal acceptance on a wide scale would surely benefit government efforts around digital transformation and digital and social inclusion, where the pandemic served as a wake-up call to everyone that the internet is not a luxury but a basic need. It would also help preserve culture and advance digital identity by preserving local languages and encouraging their use both on and off the internet; ensure that government online services reach citizens nationwide, in the official language of the country; and stimulate the growth of the local IDN market and multilingual online content by increasing competition and innovation, increasing customer choice, making internationalized email addresses available (their absence has been hindering the uptake of IDNs, as we have seen from our experience in Egypt), and driving the use of local IDNs and email addresses as opposed to Latin-based ones hosted across borders. This also helps increase internet penetration, bridge the digital divide, promote digital literacy, and facilitate meaningful access to the internet. Lastly, it means acquiring future-proof systems and applications; an analogy with IPv6 could be drawn here. To the second part of your question, regarding the government's or NTRA's role in making government systems universal acceptance ready: at NTRA we are working closely with the Ministry of Communications and Information Technology, being the technical arm of the government, to ensure awareness of the issue and provide consultancy through subject matter experts; establish pilots for proof of concept and stress testing; assist in stocktaking of the government's current systems and applications to ensure they are UA ready; take into consideration, of course, the UA readiness of online government services; and include universal acceptance as a requirement in procurement processes, tenders, and purchase orders.
We also assist in wide universal acceptance deployment in cooperation with licensed operators, vendors, and other stakeholders; work with the academic sector to add universal acceptance to the curricula of relevant students; and collaborate at a regional level to drive adoption and universal acceptance readiness in cross-border platforms. So there is so much to be done. I cannot claim we have achieved everything yet; we have a long list of things to do. But again, it's a win-win for all, and I'll leave this till later. Thank you, Baher. Back to you.


Moderator: Thank you, Manal. So we've heard, I think, from everyone on the panel. Maybe it's time to pause and see if there are any questions from the floor, here in the room or online. Any questions? Fouad.


Audience: Hello, this is Fouad Bajwa. IDN growth hasn't been as expected, and that's been discussed. But what steps need to be taken before the new round opens up? IDNs haven't been as successful as the English gTLDs, so what do we need to do? What are the concrete steps we need to take? Now that we are near the next round, the applicant support program is already open, and there could be requests in it from the community as well. So what are we going to improve going forward in terms of the adoption of IDNs? And within market competition, do we expect IDN gTLDs to increase by a significant number, or is there a lack of confidence because of all the gTLDs that have already been issued? What are we foreseeing in the near future? Thank you.


Theresa Swinehart: Perhaps I could start and then ask my colleague Sarmad, who's online. I think there are two aspects. One is awareness that it is possible to use one's own script, or a different length or form, to the right or the left of the dot. I recall being at events where you discuss this, and there is awareness that you can currently do it to the left of the dot or to the right of the dot, but not of having the full end-to-end experience of using it. I would equate it with the address on an envelope: can I write the address entirely in my language, or do I have to put some parts in another language? So a lot of awareness needs to occur around that, from an information aspect and in terms of what it could imply. But there are also some very solid technical aspects that need to occur, with trainings at universities and so on. And that's where I might ask my colleague Sarmad to share a little bit, if I may, Baher.


Moderator: Sarmad, could you try to unmute yourself?


Sarmad Hussain: Okay. Hi. Can you hear me? This is Sarmad. Yeah. Go ahead. Okay. Thank you, Baher, and hello, everyone. Just to follow up on what Theresa was saying: the start of the journey is creating awareness. Many people around the world still do not know that these options exist in their own languages, so making them aware is key. As a next step, when people become aware that they can actually have a domain name for their own business, for example, or an email address for personal use, in their own language and script, they need to acquire it and use it. And we see some challenges here, which have been shared. First of all, there is the registrant journey: as a registrant, you want to be able to register the domain name of your choice in your local language, and then not just register it but take it further, host it, deploy it, and have an email address against it. What we see is that this is not always an easy set of steps. In the industry working with domain names, and in the tools it uses, there is sometimes a lack of capacity or a lack of support, which needs to be addressed. In addition, even when somebody is able to get their domain name and email address functional, some of the applications downstream, for example social media and e-commerce websites, are not aware of domain names and email addresses in local languages either. With the work being done by the community, by ICANN, and by others, there is now growing awareness, and not just awareness but growing adoption of domain names and email addresses in local languages over time.
For example, many large email providers, globally and in different countries, now at least allow email addresses in different local languages to send and receive through their platforms. There is also more maturation in browser technology to support these domain names. So the effort the community has been putting in over the last decade or so is bearing fruit. There is growing support; we cannot say that we are there yet, but there is certainly momentum building, and we are making our way. We do need to keep working in this area. So, to respond to Fouad's question: we are certainly well on our way, and we see good support, but we need to keep working to create more awareness and more support. Thank you.
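What "allowing email addresses in local languages" means at the protocol level is EAI (RFC 6531/6532): message headers may carry raw UTF-8 addresses instead of ASCII-only ones. A minimal sketch with Python's standard library follows; the addresses are invented for illustration.

```python
from email.headerregistry import Address
from email.message import EmailMessage
from email.policy import SMTPUTF8

# Build a message whose To: address is fully Arabic. The SMTPUTF8
# policy serializes headers as raw UTF-8 (RFC 6532) rather than
# forcing everything into ASCII.
msg = EmailMessage(policy=SMTPUTF8)
msg["From"] = Address(username="info", domain="example.com")
msg["To"] = Address(username="معلومات", domain="مثال.example")
msg["Subject"] = "EAI demo"
msg.set_content("hello")

raw = msg.as_bytes()  # wire form for an SMTPUTF8-capable server
assert "معلومات@مثال.example".encode("utf-8") in raw
```

When actually sending, the receiving server has to advertise the SMTPUTF8 extension; with `smtplib`, that is requested via `send_message(msg, mail_options=["SMTPUTF8"])`.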


Moderator: Thanks so much. Manal.


Manal Ismail: Thank you, Baher. I fully agree with what Theresa and Sarmad already said. It's surely a matter of awareness and of wide deployment of universal acceptance. On awareness, there is an interesting situation with universal acceptance: we need to convince both sides at the same time, the supply and the demand. There is no appealing product in the market yet to attract demand, nor pressing demand from the community to trigger supply. So we end up in a unique situation where we need to work on the awareness of both sides at once. Another challenge is that universal acceptance needs to be widely deployed before it bears fruit; there is no use in being UA ready alone. So we need concerted efforts to push for wide deployment. Also, as Sarmad mentioned, some may not even know that this option exists, and this again relates to awareness. Users who don't have a language barrier are already online and feel that everything is working fine. Those who are offline because of a language barrier take that for granted, not knowing that there is a solution to their problem. The business model may not seem very appealing or pressing at the moment, but it is definitely future-proof and opens a new market. So again, this is a multi-stakeholder issue with technical, strategic, commercial, and cultural dimensions, and it needs the buy-in of everyone and collaborative efforts. Thank you, Baher. Back to you.


Moderator: Thank you, Manal. Perhaps I'd like to go back to Walter, because we're talking about challenges and awareness. Walter, you mentioned that right now 85% of IDN domains are Chinese, so it seems to me there is demand in China. At the same time, you also spoke about some challenges. How do you see the scene in China in terms of both challenges and opportunities?


Walter Wu: Yes, actually the Chinese IDN community, especially on the registrar side, is currently trying to push the usage of IDNs, because we think that's a key part of increasing awareness. Promotion of IDNs needs to start from the registrar side; that's my personal opinion, because internet users can only become aware of IDNs when they see a lot of IDN domain names in daily life: in advertisements, in enterprise brochures, in service settings. For example, a restaurant can publish the IDN name of the restaurant inside it, and then every customer in the restaurant can see it and become aware of IDNs. The whole industry in China, generally speaking, has begun to re-examine its promotion strategy. At the very beginning, when we launched IDNs, a lot of customers registered them for domain name investment or brand protection purposes. But now we see more and more registrants realizing the value of IDNs. An IDN is an important tool for a direct-to-customer (DTC) marketing strategy, providing a direct link between brands and customers. Moreover, it can save cost and enhance efficiency for online and offline promotion. For our own Daoshan Bell Registry, we are trying to penetrate different industries, and we have a good connection with the trademark and brand industry. The major purpose of connecting with the brand industry is to create motivation among brand customers. We not only want customers to register IDNs; we need them to proactively publish and use their IDNs and provide more real cases for registrars. Only when more and more brand customers publish their IDNs can internet users see real IDN cases and gradually build the habit of using them.


Walter Wu: So I think that's what the Chinese community is focusing on. Thank you.


Moderator: Thank you, Walter. Okay, Hesham, back to you, because you also spoke to the challenges. At the same time, Saudi NIC and CST have made significant efforts in developing tools, applications, and so on. So you have provided some of the ingredients on the supply side, but how about the demand side? Do you see demand coming from the local community? I'm talking about Arabic domain names in this case.


Hesham M. AL-Hammad: Thank you, Baher, and thank you to all the colleagues. As you said, we built the supply part, and we provided the things that can minimize the risks that can arise from issuing Arabic domains. But still, the demand is very low, and I think there is something we first need to understand: the DNS and domain names are a component of the infrastructure, and changing infrastructure standards takes time. For example, DNSSEC was introduced maybe in 1997, and adoption is still only around 25% to 30%, even now, after more than 20 years, and there is continuous improvement of the DNSSEC standards to reduce the complexity. For IDNs, we have very good and valuable efforts from universal acceptance: first the problems were identified, then a technical solution was provided that can address all the levels affected by domain names. But the user experience is still very difficult. If I register a domain in Arabic, I need hosting, and most hosting providers need me to enter the Punycode form, not the Arabic form, which is difficult for the registrant. If I have variants, I need to add each variant and enable it in the name server. If I want to send my domain name through one of the platforms, like WhatsApp, and I send the domain without adding the protocol, it will not be identified as a URL; I need to add the protocol, for example http://, before the domain. And when I add the http://, there is a bidirectionality problem: the http:// is left-to-right and the Arabic is right-to-left. So the user experience still gives people no incentive to go with these domain names.
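The Punycode hurdle described above looks like this in practice. Python's built-in `idna` codec (which implements the older IDNA2003 rules; modern registries use IDNA2008) performs the conversion that many hosting panels currently force on registrants:

```python
# The registrant types the Arabic label, but many hosting panels only
# accept the ASCII-compatible (Punycode) form, so a manual conversion
# step creeps into the journey. "مثال" is Arabic for "example".
ace = "مثال".encode("idna")          # ToASCII: Unicode label -> ACE label
assert ace == b"xn--mgbh0fb"
assert ace.decode("idna") == "مثال"  # ToUnicode round-trips
```

A UA-ready hosting panel would do this conversion behind the scenes, accepting the Arabic form directly and storing the ACE label in the zone.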
As my colleague Walter mentioned, sometimes support flaps: Safari was supporting Chinese IDNs, and then after an upgrade the support was no longer there. You can find a provider supporting Arabic domains now, and then after a specific upgrade it no longer does. So there is no stability in this situation, and I think that is one of the difficulties facing Arabic domains. On our colleague's question about whether we should wait to solve the problems before we open another round, this is the chicken-and-egg question. Sometimes we have the planes but the airport is not ready. Still, this is a fact, and we need to work with it, so we need to work in parallel. We have high-level efforts from the universal acceptance point of view; we need to think about the platforms and how to make them ready and supportive of these ideas. And I think we need to go a step further and think about the protocol itself: can we have a solution at the protocol level? I know it's very difficult, but we can have a solution. The DNS was invented maybe 40 years ago, and it's based on ASCII; everything is based on ASCII. Computers also used to rely on ASCII, but that is solved now and we use Unicode by nature. What about the DNS? The DNS was built without any security measures, but now we have DNSSEC. DNSSEC is built, but it doesn't cover all the issues; we have DoT and DoH, and we are still trying to solve all the problems. Why not look at this from the protocol point of view as well and see what kind of solution we can have? There are a lot of records and flags that we can use and reuse to solve the problem.
There are other levels we can address: for example, CDNs, load balancers, and antiviruses, and how they will deal with IDNs, and reachability. For example, if I have a keyboard for the Urdu language and I want to write an Arabic domain with specific words whose shapes are the same in the two languages but whose code points differ, we should provide this kind of reachability, and I think we should solve it at the second level and even at the top level. So again, there is an effort and it is moving on, and we appreciate the universal acceptance work; we have Universal Acceptance Day, which started maybe two years ago, and we see the potential. One thing about awareness: the difficulty is that even for the DNS itself, graduates and students are not aware of it. Everyone now talks about AI; infrastructure-related topics are not seen as that important at university. So we need to step in on the DNS, and also on IDNs and other aspects, including the new gTLDs. One of my colleagues thought that a new gTLD was a phishing site; he only recognized .com and the two-letter ccTLDs, and when he saw, for example, .services, he thought it was not a correct domain name. He is, by the way, a technical guy, but he was not aware of this kind of domain name. So we have a gap in this part, and with collaboration we can cover it. Thank you.
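The cross-keyboard problem described above, where the same visual shape typed on Urdu and Arabic keyboards yields different code points, can be illustrated like this. The folding table is a tiny hypothetical subset for the sake of the example, not a full Unicode confusables list.

```python
# Urdu keyboards produce KEHEH (U+06A9) and FARSI YEH (U+06CC) where
# Arabic keyboards produce KAF (U+0643) and YEH (U+064A). The joined
# glyphs look alike, but a byte-wise DNS lookup treats them as
# different labels. Folding known look-alikes to one form before
# lookup keeps the name reachable from both keyboards.
CONFUSABLES = {
    "\u06A9": "\u0643",  # ARABIC LETTER KEHEH -> ARABIC LETTER KAF
    "\u06CC": "\u064A",  # ARABIC LETTER FARSI YEH -> ARABIC LETTER YEH
}

def fold(label: str) -> str:
    return "".join(CONFUSABLES.get(ch, ch) for ch in label)

urdu_typed = "\u06A9\u062A\u0627\u0628"    # "book" from an Urdu keyboard
arabic_typed = "\u0643\u062A\u0627\u0628"  # same word, Arabic keyboard
assert urdu_typed != arabic_typed          # raw code points differ
assert fold(urdu_typed) == fold(arabic_typed)  # one lookup key
```

In a registry, this same folding is one way to feed the variant tables mentioned earlier, so that look-alike labels resolve to, or are blocked against, a single registration.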


Moderator: Thank you, Hesham. So obviously more awareness is needed; I think everyone agrees on this. Significant efforts are being made, yet more remains to be done. Now, are there any questions? Yes, Abdulmenem. Thank you.


Audience: Thank you, Baher. This is Abdulmenem, for the record, working for the telecom regulator of Egypt. I would like to add something to what my colleague, my brother Hesham, said about the challenges regarding the use of IDNs. The first thing is that we don't have many services built upon IDNs; that is one of the challenges. The other challenge relates to email addresses. Assume that I have my Arabic email address, and I send an email to a Chinese email address. How could the receiver be sure that I am Abdulmenem, the correct person who sent this email? We need a workaround for this, because it will affect the use of email addresses that use IDNs, the EAI mailboxes. That is the first point. The second point is just a comment rather than a question. Awareness is a broad word. There is one kind of awareness for the mail administrator inside your organization, and another for the software companies who make the software used by that organization. The difference is this: the provider who makes the email software needs awareness from day one, while those consuming the software, whether an entity, an organization, or a person, need release notes saying that the new version of the email software supports EAI and is universal acceptance ready. At that point, as the email administrator, I will ask: what does EAI mean? What does UA mean? And then I can conduct an awareness session for them. It is different; we need to go both ways. Thank you.


Moderator: Thank you. You asked a question at the beginning about email, and I want to check if Sarmad could answer it. Can you repeat the question about email and the identity behind the email address?


Audience: I said that the source email address is a pure Arabic email address, and I send an email to a Chinese mail server, to someone with a Chinese email address. How could the receiver who owns this Chinese email address be sure that I am the correct one, that I am Abdulmenem who sent this email? It's one of the issues. Maybe the answer is that if you are an Arabic speaker, you are assumed to send Arabic email to other Arabic-speaking entities, but that's not always the case. Maybe I am a businessman and I want to send emails to a partner in China, something like that.


Moderator: Thank you. Sarmad? Can you unmute yourself?


Sarmad Hussain: The way I would understand the question, and the possible answer, is that our experience in the online world is not very different from the real world. In the real world we communicate through many different channels in many different languages: when I'm speaking to a friend who is local, I talk to them in the local language, but when I'm talking to colleagues who are not local, I choose a language we mutually understand. That is the real-world experience, and of course it would be the online experience as well. We see this in social media too: people write in local languages because they are part of a community that understands that language, but when communicating across languages, for example for business, they switch to a language the other side understands, say English. I have a mailbox set up where I have an Arabic email address and also an ASCII, or English, email address. I can use the same mailbox to send email from either of those addresses, and if I get a response to either of them, it comes to the same single mailbox. So I can have multiple email addresses all pointing to the same mailbox, and that allows me to switch between languages with reasonable ease. That is one solution, for example. Thank you.


Moderator: Thank you, Sarmad. We have another question here in the room. Thank you.


Audience: My name is Jamal Shaheen. Thank you very much for the presentations; that was very insightful. I have a few naive questions, or I hope they're not too naive. I'm new to this field, so please bear with me. One question I wanted to address is: how will country codes, how will countries, be identified in the different scripts? Will you still go for two characters, or how will that sit? How will you be able to distinguish between country websites and other gTLDs, and how will that play out in the ICANN system? Another thing that has been really enlightening from this conversation: in previous discussions with technical community experts, I had the feeling that everything was sorted technically, that it was very easy, just a question of rollout, that we just need people to be aware and then accept. Now, as I hear the conversation here, it seems there are big technical issues we still need to address. So I'm trying to sort out the stages of awareness, technical issues, and policy issues, and what the priorities are right now. That leads to a third question, sorry. In terms of public administrations and governments, is it sufficient to mandate that IDNs should be the baseline in any services that public administrations and governments offer, and then that's the way we go? Then we have to have a standard that we agree to on the technical side. So I'm wondering which comes first. You talked about chickens and eggs; there's a whole batch of chickens here, right? That's the question, sorry.


Moderator: Thank you, Jamal. So there are three questions. The last one I'm going to give to Hesham, about governments and how they might mandate the use of IDNs. But before that, two questions, one being about country names in IDNs and how those names were chosen. I can start, and then Sarmad can continue and also tackle the second question about the technical issues. Back in the days when country names were introduced as IDNs, the ICANN community came together and agreed on a process for selecting the country names. It is based on the language scripts and on how the country is listed in the various UN lists and so on. One of the rules in this process was that only countries with official languages that use scripts other than the Latin script could apply to obtain their IDN country name. So Arabic-speaking countries got their names in Arabic, Russia in Cyrillic, and so on. That is the first question. Now I'm going to hand it to Sarmad to correct me and then to add more.


Sarmad Hussain: Thank you, Baher. Could you please also repeat the second question, just to make sure I answer it accurately?


Moderator: Yeah, I think the second question was that Jamal said he was under the impression that the technical issues pertaining to IDNs had already been sorted out, but from what he's hearing in this session, there seem to be some serious technical issues yet to be resolved.


Sarmad Hussain: Thank you, Baher, this is Sarmad. On the first question, I think you provided good details. Basically, the criterion used for country codes is the two-letter code available through the ISO 3166 standard. If a country or territory that already has an ISO 3166 code wants to apply for an internationalized country code top-level domain, an IDN ccTLD, they can apply, as Baher said, for any language that is an administrative language of that particular country or territory, in a script used by the local community. There is a process to apply for it, and they can choose their string: some countries use a short name for the country, while others have decided to use an abbreviation. That is really up to the particular territory or country and their community to decide. There's a link in the chat; please go there for more details. As far as the technical ability and adoption of universal acceptance are concerned, Jamal, you're actually quite right in assessing that we are well on our way, but we are certainly not yet at our destination. We are in the process of creating more awareness, more adoption, and more technical solutions. We certainly already have many solutions available, but there's more to do for all of us. Thank you.


Moderator: Thank you, Sarmad. And the third question was: is it sufficient for the local government to mandate the use of IDNs, so that everything will be fine and everyone will follow and use IDNs, or is it a little more complicated?


Hesham M. AL-Hammad: I think it will be complicated, yes. In general, I think mandating should always be the last resort, especially when the solution is not yet mature. But we have had a good experience with another approach. We tried it this year with DNSSEC, for example. We didn't mandate it for the government; we cooperated with the Digital Government Authority, the DGA, which runs a measurement every year of the digital transformation readiness of government entities against specific criteria. Last year, in coordination with them, we added two new criteria, one about IPv6 and one about DNSSEC, and we saw the effect. It's not a must; it only measures maturity. If an entity doesn't do it, only its maturity score decreases. This kind of technique, I think, is more valuable than a mandate. For IDNs, I think we are still at too early a stage to mandate anything, but we can promote them using this kind of criteria to make accessibility for users more solid. One point about IDNs, for example, is that they help people identify whether a site is phishing or not. We have a national service called Absher.sa, a centralized identification service provided by the Ministry of Interior. The English spelling of Absher can vary (you can write it with two E's or one E), but in Arabic it is very clear. If we use the Arabic name, it will be much clearer to the user that this really is Absher and not another site. So from this point, we can achieve something. Thank you.


Moderator: Thank you, Hesham. I see Manal's hand is up. Manal, go ahead.


Manal Ismail: Thank you, Baher. I would like to start from the intervention from the floor. It's very good to know you're new to the topic; this is excellent. It means we're reaching beyond ourselves, and it's always good to have people who are new to the topic. I think governments should lead by example, so they should start promoting and using IDNs and resolving universal acceptance issues. But still, I think we need to strategize the issue more. It's not just that we need to implement or deploy universal acceptance; rather, we need a multilingual internet, a holistic view of what we want to achieve, and a global plan, because it was very surprising to me to hear that with upgrades we lose the progress we've already made. We need things to be prioritized, institutionalized, and maintained, so that we continue to progress and do not go backwards. So I think this needs to be a global initiative for a multilingual internet, rather than just talk about the technical part of it. Again, we will not know all the problems until we start using IDNs and universal acceptance extensively. Like anything else, it can be as easy as enabling some libraries and doing the technical part, but then it comes with a long list of other issues, as Hesham mentioned: security, variants, and the list goes on. So in terms of direct implementation, it's not that tough, but as with anything else, it comes with a long list of challenges. The whole thing was built with ASCII in mind, and what we're trying to do now is work around that; we need to institutionalize it and make it transparent to the end user. Anyway, we owe it to those who need it, so we should continue pursuing this, slowly but surely. Back to you, Baher, sorry.


Moderator: Thank you Manal. I think we have like a couple of more minutes. Are there any questions or maybe if there are any closing remarks by any of the panelists. So I think from what I’m hearing, it’s a long journey and it’s been a long way. A lot has been achieved, more to be done. And I see Bhanu’s hand is up. Bhanu, I think you will have the final or the last comment.


Bhanu Neupane: Thank you very much, Bahar. Just a few very important things I want to raise here. We are following and tracking the recommendation that was agreed by the member states in 2003, and perhaps that was the first time ever that an intergovernmental process recognized the importance of multilingual IDNs and, of course, top-level domain names. The internet was just starting at that time; the world had very few internet users, and now that has become a very large number. But one of the things we have identified is that every four years we go back and ask governments to report on this, and most of the time they say, okay, we have allowed different languages onto the internet, but they never talk about IDNs or gTLDs. So there is extremely poor awareness on the part of decision-makers. We have also started to realize that many governments, or most governments, still do not have universal acceptance recognized as part of their internet policy. This is a major drawback for us in moving forward. Perhaps there will be a time when reaching everyone using the internet becomes a mainstay for governments, and universal acceptance will be recognized, exactly as the data breaches that happened several years ago made the GDPR mainstream. Something similar must come forward, so that governments will say that unless a new service provider, or internet regulation around the world, is universal acceptance ready, something should be done about it; perhaps it wouldn’t be part of the global internet fraternity, in some sense. So this is one thing that we have been very observant about, and we are now partnering with ICANN to do exactly that. Primarily, three things that we want to do.
One, we are trying to prepare a policy brief targeting member states around the world to bring them a different understanding of universal acceptance. There are not many examples out there. We have been asking them to make their internet multilingual, and first they will say, what is the benefit of that? It will cost us a lot of money. I think there is just one report out there which actually puts a figure on it: if you make the internet multilingual, there is $9 billion that you can benefit from, or get a chunk of. Those types of empirical studies we do not have, so I think we need many more of them, as empirical evidence of the benefit of multilingualism in every sense of the word. The other thing is that technical capacities around the world are extremely poor. So I’ll just stop.


Moderator: Yeah, sorry, we’re running over time, and I’ve been informed that we need to wrap this up. I’d like to thank everyone for joining today, both here in Riyadh and remotely. And please join me in thanking our panelists for this very informative discussion. Thank you, everyone.



Bhanu Neupane

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

IDNs crucial for fostering cultural diversity and digital inclusion

Explanation

UNESCO plays a decisive role in advancing digital inclusion through initiatives aligned with the WSIS agenda and sustainable development goals. These efforts focus on universal access to information and knowledge, promoting equity, inclusion, and multilingualism in digital transformation.


Evidence

UNESCO leads WSIS action lines C3, C8, C9, and C10, and is responsible for e-science in C7. They also work on SDGs 4, 9, and 16.


Major Discussion Point

Importance of Internationalized Domain Names (IDNs) for Digital Inclusion


Agreed with

Manal Ismail


Hesham M. AL-Hammad


Theresa Swinehart


Agreed on

Importance of IDNs for digital inclusion


Importance of including universal acceptance in national internet policies

Explanation

Many governments do not have universal acceptance recognized as part of their internet policies. This lack of policy recognition is a major drawback in advancing the adoption of IDNs and achieving true multilingual internet access.


Evidence

UNESCO’s observations from member state reports on the implementation of the 2003 recommendation on multilingualism and universal access to cyberspace.


Major Discussion Point

Role of Governments and Stakeholders



Manal Ismail

Speech speed

105 words per minute

Speech length

1249 words

Speech time

710 seconds

IDNs essential for continued internet expansion and reaching next billion users

Explanation

Multilingual internet is crucial for digital inclusion and the continued expansion of the internet. It is necessary to provide a gateway for the next billion internet users to connect meaningfully to the internet.


Evidence

ICANN has introduced over 1,200 new gTLDs, with 100 being IDNs, and around 60 IDN ccTLDs.


Major Discussion Point

Importance of Internationalized Domain Names (IDNs) for Digital Inclusion


Agreed with

Bhanu Neupane


Hesham M. AL-Hammad


Theresa Swinehart


Agreed on

Importance of IDNs for digital inclusion


Low demand and slow uptake of IDNs in many regions

Explanation

Despite the introduction of IDNs, their actual number of registrations remains relatively low at 1.2 percent of the global domain name market. The uptake is very slow, with only three ccTLDs witnessing notable growth in IDN registrations over the past year.


Evidence

Per the EURid IDN World Report 2024, the majority of ccTLDs experienced minimal or no growth, with 19 ccTLDs reporting contraction in their total IDN registrations.


Major Discussion Point

Challenges in IDN Adoption and Implementation


Agreed with

Hesham M. AL-Hammad


Walter Wu


Sarmad Hussain


Agreed on

Challenges in IDN adoption and implementation


Governments should lead by example in promoting IDNs

Explanation

Governments should take the initiative in promoting and using IDNs, as well as resolving universal acceptance issues. However, a more strategic approach is needed, focusing on creating a multilingual internet rather than just implementing universal acceptance.


Major Discussion Point

Role of Governments and Stakeholders


Differed with

Hesham M. AL-Hammad


Differed on

Approach to promoting IDN adoption



Hesham M. AL-Hammad

Speech speed

122 words per minute

Speech length

2324 words

Speech time

1139 seconds

IDNs help preserve local languages and culture online

Explanation

IDNs allow users to register domain names in their local languages and scripts, which helps preserve and promote local languages and cultures online. This is particularly important for languages using non-Latin scripts, such as Arabic.


Evidence

Saudi Arabia’s experience with implementing Arabic IDNs and developing algorithms to handle language variants.


Major Discussion Point

Importance of Internationalized Domain Names (IDNs) for Digital Inclusion


Agreed with

Bhanu Neupane


Manal Ismail


Theresa Swinehart


Agreed on

Importance of IDNs for digital inclusion


Differed with

Manal Ismail


Differed on

Approach to promoting IDN adoption


Complexity of variant management for scripts like Arabic

Explanation

Managing variants in Arabic script IDNs is complex due to the large number of possible variants for each domain name. This complexity can lead to confusion and potential security issues if not properly managed.


Evidence

Example of ‘Hayat al-Atsalat’ having over 2 million variants, which was reduced to just four desired variants using specialized algorithms and filters.
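The combinatorial blow-up described here follows directly from per-character variant sets: the number of candidate labels is the product of each character’s alternatives, which a registry policy then filters down to a small allow-list. A toy sketch (the variant table and allow-list below are illustrative samples, not the real registry data or algorithm):

```python
# Sketch of Arabic variant explosion and registry-style filtering.
# The per-character variant table here is a small hypothetical sample.
from itertools import product

variants = {
    "ا": ["ا", "أ", "إ", "آ"],   # alef and its hamza forms
    "ه": ["ه", "ة"],             # heh vs. teh marbuta
    "ي": ["ي", "ى", "ئ"],        # yeh forms
}

def all_variants(label):
    """Generate every variant label (Cartesian product of per-character sets)."""
    choices = [variants.get(ch, [ch]) for ch in label]
    return ["".join(combo) for combo in product(*choices)]

label = "هاي"                    # 3 characters: 2 * 4 * 3 = 24 candidates
combos = all_variants(label)
print(len(combos))               # 24

# Registry policy: keep only the handful of desired variants
# (a hypothetical allow-list standing in for the real filters).
allowed = {"هاي", "هاى"}
kept = [v for v in combos if v in allowed]
```

With real variant tables, a longer label’s product easily reaches the millions Hesham mentions, which is why the filtering step, not the generation step, is where the engineering effort goes.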


Major Discussion Point

Challenges in IDN Adoption and Implementation


Agreed with

Manal Ismail


Walter Wu


Sarmad Hussain


Agreed on

Challenges in IDN adoption and implementation


Need for collaboration between regulators, industry and academia

Explanation

Addressing IDN challenges requires collaboration between various stakeholders, including regulators, industry, and academia. This collaboration is necessary to develop technical solutions, raise awareness, and implement policies that promote IDN adoption.


Evidence

Example of collaboration with the Digital Government Authority in Saudi Arabia to include DNSSEC and IPv6 criteria in digital transformation readiness assessments for government entities.


Major Discussion Point

Role of Governments and Stakeholders



Theresa Swinehart

Speech speed

154 words per minute

Speech length

1121 words

Speech time

435 seconds

IDNs allow users to engage online in their own languages and scripts

Explanation

IDNs enable users to communicate and engage online using their native languages and scripts. This is essential for creating a truly inclusive and accessible internet that reflects the linguistic diversity of its users.


Evidence

Introduction of IDN top-level domains more than ten years ago, allowing many language communities to come online.


Major Discussion Point

Importance of Internationalized Domain Names (IDNs) for Digital Inclusion


Agreed with

Bhanu Neupane


Manal Ismail


Hesham M. AL-Hammad


Agreed on

Importance of IDNs for digital inclusion


ICANN programs to enable IDN awareness and universal acceptance

Explanation

ICANN is working on various programs to raise awareness about IDNs and promote universal acceptance. These efforts aim to ensure that all internet applications and systems treat all top-level domains consistently, regardless of script or length.


Evidence

ICANN’s Universal Acceptance Day initiative, which held over 52 events in 47 countries in its third year.
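The consistency requirement behind this work, treating every top-level domain the same regardless of script or length, is exactly what legacy validators break. A hypothetical sketch of the failure mode: a validator that hard-codes short ASCII TLDs rejects both long new gTLDs and IDN TLDs in their punycode form.

```python
# A legacy domain check that assumes TLDs are 2-4 ASCII letters.
# It rejects long new gTLDs and punycode IDN TLDs alike, which is the
# kind of bug Universal Acceptance work aims to eliminate.
import re

legacy_tld = re.compile(r"\.[A-Za-z]{2,4}$")

print(bool(legacy_tld.search("example.com")))           # True
print(bool(legacy_tld.search("example.photography")))   # False: 11-letter TLD
print(bool(legacy_tld.search("example.xn--kgbechtv")))  # False: IDN TLD in ACE form
```

A UA-ready validator instead checks labels against the live IANA root-zone list (or simply attempts DNS resolution) rather than pattern-matching the TLD’s shape.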


Major Discussion Point

Efforts to Promote IDNs and Universal Acceptance



Walter Wu

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Lack of awareness about IDN availability and benefits

Explanation

There is a general lack of awareness among internet users about the availability and benefits of IDNs. This lack of awareness is a significant challenge in promoting the adoption and use of IDNs.


Major Discussion Point

Challenges in IDN Adoption and Implementation


Agreed with

Manal Ismail


Hesham M. AL-Hammad


Sarmad Hussain


Agreed on

Challenges in IDN adoption and implementation


Chinese registrar community pushing IDN usage and promotion

Explanation

The Chinese registrar community is actively working to promote the usage of IDNs. They believe that increased visibility of IDNs in daily life will help raise awareness and encourage adoption among internet users.


Evidence

Examples of Chinese IDNs being used by multinational companies like Starbucks and Tesla for their localized brand names in China.


Major Discussion Point

Efforts to Promote IDNs and Universal Acceptance



Sarmad Hussain

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Technical issues with universal acceptance of IDNs across systems

Explanation

There are ongoing technical challenges in achieving universal acceptance of IDNs across all internet applications and systems. These issues need to be addressed to ensure seamless use of IDNs and internationalized email addresses.


Evidence

Only about 11% of the top 1,000 global websites can accept internationalized email addresses, and just 22.2% of email servers support them.
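Much of that 89% failure rate comes from form validation written for ASCII only. A minimal sketch of the contrast (the `ua_friendly` helper is an illustrative assumption, not a standard API; full EAI support is specified in RFC 6531):

```python
# Why many sites reject internationalized email addresses (EAI):
# a typical ASCII-only regex fails on any non-Latin local part or domain.
import re

ascii_only = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def ua_friendly(addr):
    """Hypothetical UA-ready check: permissive local part, IDNA-valid domain."""
    try:
        local, domain = addr.rsplit("@", 1)
    except ValueError:
        return False                 # no "@" at all
    if not local or " " in local or not domain:
        return False
    try:
        domain.encode("idna")        # stdlib codec (IDNA2003) validates labels
        return True
    except UnicodeError:
        return False

addr = "اختبار@مثال.إختبار"          # Arabic mailbox on an Arabic IDN
print(bool(ascii_only.match(addr)))  # False: rejected by the ASCII regex
print(ua_friendly(addr))             # True
```

Server-side support is a separate hurdle: even a correctly validated EAI address needs the receiving MTA to advertise the SMTPUTF8 extension, which is the 22.2% figure cited above.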


Major Discussion Point

Challenges in IDN Adoption and Implementation


Agreed with

Manal Ismail


Hesham M. AL-Hammad


Walter Wu


Agreed on

Challenges in IDN adoption and implementation


Multi-stakeholder approach needed to address technical and policy issues

Explanation

Addressing the challenges of IDN implementation and universal acceptance requires a multi-stakeholder approach. This involves collaboration between technical experts, policymakers, and other stakeholders to create awareness, develop solutions, and promote adoption.


Major Discussion Point

Role of Governments and Stakeholders


Agreements

Agreement Points

Importance of IDNs for digital inclusion

speakers

Bhanu Neupane


Manal Ismail


Hesham M. AL-Hammad


Theresa Swinehart


arguments

IDNs crucial for fostering cultural diversity and digital inclusion


IDNs essential for continued internet expansion and reaching next billion users


IDNs help preserve local languages and culture online


IDNs allow users to engage online in their own languages and scripts


summary

Speakers agree that IDNs are crucial for digital inclusion, preserving cultural diversity, and expanding internet access to new users in their native languages.


Challenges in IDN adoption and implementation

speakers

Manal Ismail


Hesham M. AL-Hammad


Walter Wu


Sarmad Hussain


arguments

Low demand and slow uptake of IDNs in many regions


Complexity of variant management for scripts like Arabic


Lack of awareness about IDN availability and benefits


Technical issues with universal acceptance of IDNs across systems


summary

Speakers highlight various challenges in IDN adoption, including low demand, technical complexities, lack of awareness, and issues with universal acceptance.


Similar Viewpoints

These speakers emphasize the crucial role of governments and multi-stakeholder collaboration in promoting IDNs and universal acceptance through policy, leadership, and partnerships.

speakers

Manal Ismail


Hesham M. AL-Hammad


Bhanu Neupane


arguments

Governments should lead by example in promoting IDNs


Need for collaboration between regulators, industry and academia


Importance of including universal acceptance in national internet policies


Unexpected Consensus

Need for empirical evidence on IDN benefits

speakers

Bhanu Neupane


Walter Wu


arguments

Chinese registrar community pushing IDN usage and promotion


Importance of including universal acceptance in national internet policies


explanation

While not directly stated, both speakers indirectly point to the need for more concrete evidence of IDN benefits. Bhanu Neupane mentions the lack of empirical studies on the economic benefits of multilingual internet, while Walter Wu’s emphasis on Chinese registrars’ promotion efforts suggests a need for demonstrable benefits to drive adoption.


Overall Assessment

Summary

The speakers generally agree on the importance of IDNs for digital inclusion and cultural preservation, the challenges in IDN adoption and implementation, and the need for multi-stakeholder collaboration, especially government involvement, in promoting IDNs.


Consensus level

There is a high level of consensus on the fundamental importance and challenges of IDNs. This consensus implies a shared understanding of the issues, which could facilitate coordinated efforts to address challenges and promote IDN adoption. However, the diversity of specific challenges mentioned suggests that solutions may need to be tailored to different contexts and languages.


Differences

Different Viewpoints

Approach to promoting IDN adoption

speakers

Hesham M. AL-Hammad


Manal Ismail


arguments

IDNs help preserve local languages and culture online


Governments should lead by example in promoting IDNs


summary

While Hesham emphasizes technical solutions and collaboration with industry, Manal argues for a more proactive government-led approach to promoting IDNs.


Unexpected Differences

Perception of technical readiness for IDNs

speakers

Theresa Swinehart


Hesham M. AL-Hammad


arguments

ICANN programs to enable IDN awareness and universal acceptance


Complexity of variant management for scripts like Arabic


explanation

While Theresa’s argument suggests significant progress in IDN implementation, Hesham’s focus on the complexity of Arabic script variants reveals unexpected technical challenges that are still unresolved.


Overall Assessment

summary

The main areas of disagreement revolve around the approach to promoting IDN adoption, the readiness of technical infrastructure, and the role of different stakeholders in advancing IDNs.


difference_level

The level of disagreement is moderate. While all speakers agree on the importance of IDNs for digital inclusion, they have different perspectives on implementation strategies and priorities. These differences could impact the coordination of efforts to promote IDN adoption and universal acceptance globally.


Partial Agreements


All speakers agree on the importance of promoting IDN awareness and adoption, but they differ in their focus areas. Theresa emphasizes ICANN’s programs, Walter highlights industry efforts in China, while Sarmad points out the need to address technical challenges.

speakers

Theresa Swinehart


Walter Wu


Sarmad Hussain


arguments

ICANN programs to enable IDN awareness and universal acceptance


Chinese registrar community pushing IDN usage and promotion


Technical issues with universal acceptance of IDNs across systems




Takeaways

Key Takeaways

Internationalized Domain Names (IDNs) are crucial for digital inclusion and fostering cultural diversity online


There are significant technical and awareness challenges hindering widespread IDN adoption


Multi-stakeholder collaboration is needed to promote IDNs and universal acceptance


Governments have an important role to play in leading IDN adoption and policy development


Progress has been made on IDNs but there is still a long way to go to achieve a truly multilingual internet


Resolutions and Action Items

UNESCO and ICANN to partner on preparing policy briefs for member states on universal acceptance


Continue efforts to raise awareness about IDNs through events like Universal Acceptance Day


Work on improving technical solutions for IDN implementation and universal acceptance


Encourage governments to include universal acceptance in national internet policies


Unresolved Issues

How to increase demand and uptake of IDNs in many regions


Addressing ongoing technical challenges with universal acceptance across systems


Managing the complexity of script variants, particularly for languages like Arabic


Lack of empirical studies on the economic benefits of multilingual internet


How to maintain progress on IDN support through software/system upgrades


Suggested Compromises

Using both IDN and ASCII domain names/email addresses to allow flexibility in communication


Implementing IDN support gradually through incentives rather than strict mandates


Focusing initial IDN promotion efforts on specific use cases like government services


Thought Provoking Comments

We see that with the work which is being done by community, by ICANN, by others, there is now a growing awareness. We see more and more not just awareness, but adoption of domain names in local languages and e-mail addresses in local languages over time.

speaker

Sarmad Hussain


reason

This comment provides a balanced perspective on the progress of IDN adoption, acknowledging both achievements and ongoing challenges.


impact

It shifted the conversation from focusing solely on challenges to recognizing progress, while still maintaining a realistic view of the work ahead.


Sometimes we have the planes, but the airport is not ready. But still, this is a fact, and we need to work with it. So we need to work in parallel.

speaker

Hesham M. AL-Hammad


reason

This analogy effectively illustrates the complex interdependencies in implementing IDNs and the need for simultaneous progress on multiple fronts.


impact

It prompted a more nuanced discussion about the timing and coordination of various aspects of IDN implementation, rather than a simple linear approach.


We owe it to those who need it so we should continue pursuing this forward slowly but surely.

speaker

Manal Ismail


reason

This comment reframes the discussion in terms of ethical responsibility and long-term commitment, rather than just technical or business considerations.


impact

It added a moral dimension to the conversation and reinforced the importance of perseverance in the face of challenges.


Many governments or most governments still do not have the context of universal acceptance recognized as part of their internet policy. So this is a major drawback for us to move forward.

speaker

Bhanu Neupane


reason

This insight highlights a critical gap in policy-making that is hindering progress on IDN adoption.


impact

It shifted the focus of the discussion towards the role of government policy in promoting IDNs and universal acceptance, suggesting a new area for advocacy and action.


Overall Assessment

These key comments shaped the discussion by broadening its scope from purely technical considerations to include policy, ethical, and strategic dimensions. They helped create a more comprehensive understanding of the challenges and opportunities in IDN adoption, emphasizing the need for coordinated efforts across multiple stakeholders and a long-term commitment to progress. The discussion evolved from focusing on specific technical challenges to considering broader systemic issues and the importance of government involvement and policy changes.


Follow-up Questions

What concrete steps need to be taken before the new round of gTLDs opens to improve IDN adoption?

speaker

Fouad Bajwa (audience member)


explanation

Important to address low IDN growth and ensure better success in the next round of gTLDs


How can we solve the problem of IDN support inconsistency across software updates?

speaker

Hesham M. AL-Hammad


explanation

Inconsistent support hinders user adoption and trust in IDNs


Can we develop solutions at the protocol level to address IDN challenges?

speaker

Hesham M. AL-Hammad


explanation

Fundamental protocol changes may be needed to fully support IDNs


How can we improve IDN support in CDNs, load balancers, and antiviruses?

speaker

Hesham M. AL-Hammad


explanation

These infrastructure components are critical for IDN functionality


How can we ensure secure identification of senders using IDN email addresses across different scripts?

speaker

Abdulmenem (audience member)


explanation

Important for trust and adoption of IDN email addresses in cross-cultural communication


How can we create more empirical studies on the economic benefits of multilingual internet?

speaker

Bhanu Neupane


explanation

Evidence needed to convince governments and stakeholders of IDN value


How can we improve technical capacities worldwide to support IDNs and universal acceptance?

speaker

Bhanu Neupane


explanation

Lack of technical capacity is hindering IDN adoption and implementation


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Open Forum #8 AFRICAN UNION OPEN FORUM 2024


Session at a Glance

Summary

This discussion focused on the African Internet Governance Forum (IGF) and digital initiatives across Africa. The African IGF 2024 was held in Addis Ababa, Ethiopia, with over 430 delegates from 43 member states discussing topics like artificial intelligence, cybersecurity, and digital inclusion. Youth and parliamentarian tracks were highlighted as important components. Plans for the 2025 African IGF in Tanzania were announced.

Representatives from the African Union Commission and UN Economic Commission for Africa presented upcoming initiatives. These include the development of a continental AI strategy, revision of the Malabo Convention on cybersecurity, and efforts to expand digital infrastructure and skills across Africa. Particular emphasis was placed on bridging the digital divide in rural areas and empowering youth and women in the digital sphere.

Participants raised questions about strategies for rural connectivity, capacity building, and increasing ratification of the Malabo Convention. The need for greater inclusion of diverse stakeholders in policy development was stressed. Speakers acknowledged challenges but highlighted ongoing efforts to collaborate with various partners, including universities and the private sector, to advance Africa’s digital transformation.

The discussion concluded with calls for better storytelling to showcase Africa’s digital progress and more targeted initiatives for women in technology. Overall, the session demonstrated Africa’s commitment to leveraging digital technologies for development while addressing persistent challenges in connectivity and inclusion.

Keypoints

Major discussion points:

– Updates on recent and upcoming African IGF activities, including youth and parliamentary tracks

– New initiatives from the African Union Commission and UN Economic Commission for Africa, such as AI strategies, cybersecurity efforts, and digital inclusion programs

– Challenges in bridging the digital divide, especially for rural communities in Africa

– The need for capacity building, training, and empowering youth and women in digital technologies

– Improving storytelling and communication about Africa’s digital initiatives and successes

Overall purpose:

The purpose of this discussion was to provide updates on African digital initiatives, share plans for upcoming projects and strategies, and gather feedback from stakeholders on priorities and challenges in advancing digital development across Africa.

Overall tone:

The tone was largely informative and collaborative, with speakers sharing updates and plans in a positive manner. There was also an undercurrent of urgency around addressing challenges like the digital divide. The Q&A portion introduced a more critical tone as participants raised concerns and pushed for more concrete actions on issues like rural connectivity and women’s inclusion. Overall, the discussion maintained a constructive tone focused on working together to advance Africa’s digital future.

Speakers

– Adil Sulieman: African Union Commission representative, moderator

– Speaker 1: Provided report on African IGF

– Amina Ramalan: Youth representative, reported on Africa Youth IGF

– Speaker 2: Parliamentarian representative from Malawi

– Lilian Nalwoga: MAG chair

– Waleed Hamdi: Head of Information Society Division, African Union Commission

– Makhtar Sheikh: Representative from UN Economic Commission for Africa

Additional speakers:

– Jingbo Huang: Director of United Nations University Research Institute in Macau

– Wisdom Donkor: Executive Director for Africa Open Data and Internet Research Foundation

– Levy Syanseke: Leader of Internet Society Zambia chapter and youth IGF

– Abdi Jalil Bashar: Partnership advisor of National Cyber Security Agency in Chad, member of Economic Council of African Union

– Ahmed Farak: Chair of the North African IGF

– Martin: Representative from GFC Africa

– James: Representative from Zimbabwe

– Winnie Kamau: Association of Freelance Journalists

– Aicha Jeridi: Vice Chair of the North African IGF

Full session report

Expanded Summary of African Internet Governance Forum Discussion

Introduction

This summary provides a comprehensive overview of a discussion focused on the African Internet Governance Forum (IGF) and digital initiatives across Africa. The session, moderated by Adil Sulieman from the African Union Commission, brought together representatives from various organisations and countries to discuss recent developments, upcoming initiatives, and persistent challenges in Africa’s digital landscape.

African IGF 2024 Outcomes

The African IGF 2024, held in Addis Ababa, Ethiopia, was reported as a success, with delegates from numerous member states participating. The forum covered a range of topics including artificial intelligence, cybersecurity, and digital inclusion. Two key components of the event were highlighted:

1. Youth Track: Amina Ramalan, a youth representative, reported that the Africa Youth IGF focused on digital governance and emerging technologies, with over 100 participants on-site and 100 online. The main theme emphasized the importance of engaging young people in these discussions.

2. Parliamentarian Track: A representative from Malawi discussed how parliamentarians explored legislation and stakeholder collaboration related to internet governance.

The MAG chair also noted that a new charter was developed to guide the organisation of future African IGFs, demonstrating a commitment to improving the forum’s structure and effectiveness.

Upcoming Initiatives and Strategies

Representatives from the African Union Commission (AUC) and the UN Economic Commission for Africa (UNECA) presented several upcoming initiatives:

1. Continental AI Strategy and African Digital Compact: Waleed Hamdi, Head of the Information Society Division at the AUC, announced the development of these comprehensive frameworks to guide Africa’s digital future.

2. PRIDA Phase 2: Set to launch in early 2025, this initiative will focus on internet governance across the continent.

3. Continental Cybersecurity Strategy: Hamdi reported that this strategy is under development and will be presented to AU organs for consideration in the second half of 2025.

4. Digital Strategy Support: Dr. Makhtar Sheikh from UNECA outlined plans to support 10 countries in developing their national digital technology strategies.

5. African Summit on AI: An event planned to take place in Mozambique before June 2025.

6. Global Digital Compact: Dr. Sheikh mentioned its adoption by the UN General Assembly in September 2024.

7. Continental CERT Initiative: Hamdi discussed plans for this cybersecurity measure.

8. Common Position on International Law in Cyberspace: Hamdi noted ongoing work in this area.

9. School Connectivity and Capacity Building: UNECA plans to connect schools and build capacity for 100,000 young students across 14 countries.

10. Digital ID Program: Ethiopia has registered 1 million people, with plans for 24 million next year.

11. STEAM Center in Rwanda and WSIS Plus 20 meeting in Benin: Both initiatives mentioned by Dr. Sheikh.

12. African Center of Cybersecurity in Togo: Plans for this center were discussed.

13. STI Forum: Planned for April in Uganda.

14. Data Governance Working Group: Plans to support six member states in building capacity on data governance.

These initiatives demonstrate a concerted effort to advance Africa’s digital transformation at both continental and national levels.

Challenges and Areas for Improvement

Throughout the discussion, several challenges and areas for improvement were identified:

1. Digital Divide: Audience members emphasized the need to address the digital divide in rural communities, calling for concrete strategies to bridge this gap. Dr. Sheikh noted that connectivity in Africa increased by only one percentage point (from 37% to 38%) in the past year.

2. Capacity Building: Multiple speakers stressed the importance of capacity building for youth, women, and government officials in digital skills and technologies.

3. Malabo Convention: There were calls to revise the Malabo Convention on cybersecurity and increase ratifications among member states.

4. Communication and Storytelling: Winnie Kamau from the Association of Freelance Journalists highlighted the need for better storytelling to showcase Africa’s digital progress and IGF activities.

5. Women’s Inclusion: Aicha Jeridi, Vice Chair of the North African IGF, called for more targeted initiatives for women in technology. Dr. Sheikh mentioned that the digital gender gap has increased to 13.13 points.

6. Identity and Employment: Dr. Sheikh noted that 500 million people in Africa lack legal ID, and there’s a need to create 625 million jobs by 2030.

Collaboration and Support

The discussion emphasized the importance of collaboration and support in advancing Africa’s digital agenda, including invitations for collaboration from the United Nations University and requests for support for sub-regional IGF activities.

Agreements and Consensus

There was broad agreement on several key points, including the importance of capacity building, youth engagement, comprehensive digital strategies, and improved storytelling about IGF activities in Africa.

Conclusion and Next Steps

The discussion concluded with several key takeaways and action items, including Tanzania hosting the next African IGF and a call for new MAG members. While the discussion demonstrated Africa’s commitment to leveraging digital technologies for development, several issues remained unresolved, particularly strategies for effectively bridging the rural digital divide and increasing women’s participation in ICT and cybersecurity.

Session Transcript

Adil Sulieman: Please put your earpieces on, please. Good afternoon. We will start momentarily; we are just waiting for the panelists to come to the podium. Good afternoon. My name is Adil Sulieman. I'm with the African Union Commission. It gives me great pleasure and privilege to be here with you, the African IGF family, at the IGF 2024, under the main theme of leveraging digital for peace, development, and stability. So we are very happy and pleased to be with you here. This is our annual gathering as Africans. It's a family gathering, and we really feel like this is a talk between brothers and sisters. The objective of this annual open forum is to share what the African institutions are doing in terms of new initiatives, and to share some good news with you on the new initiatives and projects in Africa. There are also representatives from African institutions in this gathering who are going to share with us what they are doing. And we also want to have a dialogue: we want to hear from you about your challenges and the progress you are making on your side, either on an individual basis or on an institutional basis. So we stand here with mixed emotions. I start with the good news. One of our colleagues who was prominent in these discussions, Dr. Nyambura, I'm glad to announce, is now a minister in Kenya. We are very proud and happy to see one of the family members take this high position in her country. You should see yourselves as future leaders in Africa too, because it's not far from you; you can achieve everything that you want. The sad news is that, of course, we lost Makan Faye. Makan was instrumental in the African IGF, as well as in the global IGF. If you can stand up, we will observe a minute of silence. Thank you very much. 
So let me just walk through the program for today. We will have feedback on the African IGF, presented by our colleagues. We had two other tracks during the African IGF, the youth track and the parliamentarian track, so we are also going to get reporting on those. After that, we will hear from the MAG chair, and then a couple of African institutions, the African Union Commission and the UN Economic Commission for Africa, will share with us their new initiatives for 2025 and moving forward. Without further ado, let me give the floor to Sorine to give a high-level report on the African IGF. Sorine, the floor is yours.

Speaker 1: Thank you, Adil, and good afternoon, everyone. I just want to give a brief background, and we have all the focal points for the different aspects of the African IGF here, who will provide detailed background information. The African IGF that just concluded, from the 20th to the 22nd of November in Addis Ababa, Ethiopia, had over 430 delegates; all those delegates received badges. Over 43 member states were represented, and the stakeholders were diverse: UN representatives, academia, civil society, parliamentarians, youth, and partner agencies, including research institutes, member state representatives, and also the private sector. Just to give you the demographics and gender balance: of the delegates present, 154 were female and 276 were male. We started the African IGF with the African School of Internet Governance (AfriSIG), which was led by APC; the focal point was UNRWA in Peace. It ran one week long and covered close to 55 participants, including the resource personnel and parliamentarians for the intergenerational dialogue. Unlike in other years, we started quite late this year, but with the support of the MAG and the MAG chair, who will provide detailed information on our process, and with the support of the host country, the Ethiopian government, we tried to buy back some of the time. As I said, we started with AfriSIG, from the 14th to the 19th of November. The topic covered in the practicum was data governance and cross-border data flows, that is, the implementation of the African Union Data Policy Framework. We also had a parliamentarian session, which was facilitated by Celine, the focal point from the IGF Secretariat. 
They had a full-day parliamentarian track with three or four subtopics, and the parliamentarian session included an intergenerational dialogue with the youth. On day zero we also had a youth track, or youth forum, which was a full day; the young ladies Aminata, Mariam Jope, and Lilian were the focal points. They will talk about the thematic areas, the messages, and the calls for action. We also had partner sessions. The GFCE is, I think, present; Martin Koyabe is there. They organized more or less three sessions: they held the regional GFCE forum on day zero, a session on the coordination committee across the partners, and also one on implementation of the Togo cybersecurity session. Martin will be a good person to provide more detail. The main session of the African IGF, as I said, ran three days including day zero, across nine sub-thematic areas: artificial intelligence and emerging technologies, cybersecurity and cybercrime, local content and multilingualism, technical infrastructure, and many other sessions, including technical infrastructure sessions led by ICANN and ISOC. We also had a plenary session where we tried to gather all the parallel workshops, around 44 workshops in total; plenary sessions; and one high-level session on building our multi-stakeholder digital future for Africa, which was the main thematic area for the African IGF this year. The main sessions were: one on misinformation, disinformation, and internet shutdowns; the second around cybercrime and cybersecurity; the third on advancing cybersecurity in the digital economy; and the last around e-government. During the high-level session we tried to revisit the Global Digital Compact and the WSIS annual review in the upcoming WSIS Plus 20 review process. 
At the end we concluded with an open mic, where we heard from the community what went right, what went wrong, and how we can improve the African IGF process to make it inclusive and more representative of the stakeholders. We also had a closing ceremony, at which we announced next year's host for the African IGF: Tanzania has accepted. Considering that the global IGF will be in June, the host government is thinking of holding it just before the global IGF, but once the date is agreed with them it will be communicated. The last session that I don't want to forget was hosted by the Global Digital Inclusion Partnership: a breakfast session on how women's initiatives, and initiatives aimed at bridging the digital gap, can get funding from existing funding agencies. So I will give the mic back to the moderator.

Adil Sulieman: Thank you very much. Let's give her a round of applause. One key message is that the African IGF is going to be in Tanzania, most likely early in the year, because the global IGF is going to be in June, so just be ready for that so that we can attend, and hopefully some representatives here will volunteer for 2026. I want the presenters to be brief, because one of the essences of this session is to have dialogue. So, with your indulgence, please hold your questions until after we finish all the presentations. I will now give the floor to the youth representative to give us a briefing on what transpired in Addis Ababa when they met in the youth track. Amina, please go ahead.

Amina Ramalan: Good afternoon everyone, my name is Amina Ramalan. The 2024 Africa Youth Internet Governance Forum was held on November 20th, on day zero of the Africa IGF. The theme of the 2024 Africa Youth IGF was digital governance and emerging tech: amplifying youth voices in multi-stakeholder dialogue. During the Africa Youth IGF we had over a hundred participants on site and about a hundred participants online from all over the world. We had four panel sessions and two workshops. The panel sessions covered the main theme of the Africa Youth IGF. We also spoke to the Global Digital Compact, understanding its opportunities and the perspectives of youth. We had a multi-stakeholder dialogue with policymakers, and opportunities for engagement by our sponsors and the stakeholders on our panels, such as ICANN, UNECA, the Tony Blair Institute, and GIZ. We also had a closing session by representatives from the UN IGF Secretariat. Some resolutions from the 2024 Africa Youth IGF include stakeholders committing to taking constructive criticism from the forum. Agreements were also made to increase youth participation and representation in the ecosystem. Governments, through policymakers, agreed to prioritize integrating digital literacy into academic curricula. There was also strong agreement among all participants on the need to develop Africa-specific solutions to Africa-specific problems, such as in digital governance, agriculture, health, and education, including integrating our local languages into AI and digital tools. Some of the key outcomes from the Africa Youth IGF include improving digital literacy among youth, advocating for representation of youth in the ecosystem, and launching initiatives that will strengthen cross-border collaborations on digital rights. Stakeholders such as ICANN and UNECA also spoke to expanding their capacity-building initiatives and prioritizing infrastructure development. 
There were also discussions on promoting lifelong learning and mentorship opportunities to foster youth leadership in Africa's digital economy, and there was consensus on striking a balance between digital innovation and human rights. Thank you.

Adil Sulieman: Thank you very much, Amina. I think you did a wonderful job summarizing what transpired in Addis on the youth track. I hope the following presenters will also try to keep it short and precise. Let me take this opportunity to welcome Honourable Susan from Malawi to give us a briefing on what transpired during the parliamentarian symposium. Honourable, the floor is yours.

Speaker 2: Briefly, at the Africa IGF in November, we had a parliamentary track where we laid out the issues we have worked on so far in our various countries. We had members of parliament from different countries as well as members who represent the African Parliamentary Network on Internet Governance. We discussed issues of legislation: how can members of parliament work together with other stakeholders to make sure that we work together? We shouldn't leave anyone behind. We have to work together as Africa; whatever issues we have, we need to put them together so that we can face the challenges we are facing as one continent. So it was a very good session, and through it we were able to come up with strategies and plans that can enable us parliamentarians to carry out our duties. As you are aware, as members of parliament we have a key role to play in our different countries. Coming up with legislation is not easy, so we want every stakeholder to work with us and be with us; legislators should not be left behind, but we have to work together to make sure that Africa is connected, Africa is one, and we are doing the right things together. So as parliamentarians we assured everyone at that session that we are ready to work and ready to come up with legislation, maybe on the issues of AI and on AI frameworks, and on anything that comes up, even amendments to the laws that we already have in our countries. We are ready to do that. In brief, I think that's what I can talk about. Thank you.

Adil Sulieman: I think we are stronger because in the African IGF family we have the parliamentarians, and we look forward to having other stakeholders, like diplomats, so that the family can grow and become even stronger. The parliamentarians, since they joined in 2022, have had an impact on the ground, and we are really happy to have them along on this journey with us. Thank you very much. Let me now give the floor to the MAG chair, Lilian, to give us a briefing on the MAG activities and what they have done during the last year, and also part of this year. Thank you.

Speaker 3: From the MAG, I would like to thank my fellow MAG members and past members. Perhaps I should recognize the current ones: TiJaan, you served last year; Aisha is around, he joined this year. These, plus a number of our colleagues who are not in the room today, were very helpful in shaping the program last year and this year. Part of our task when we joined the MAG towards the end of 2022 was to revise the current charter, and I would like to inform the community that we have a new charter that guides the organizing and convening of the Africa IGF. This charter started as a journey led by Dr. Maktar Sek in Choto last year. It constituted a task force of 12 members, and some of them are in the room: TiJaan, and, I don't see the rest, Mama Mary and Harriet. It was led by Honourable Gavenger from the Gambia, and they did quite a wonderful job in advising, guiding, and revising so that the charter could fit within the current settings of the Africa IGF. The charter went live this year during the IGF in Ethiopia last month, so it is still very fresh. We will shortly be opening a call for new MAG members, and this should come out towards the end of next month. I'm looking at Serena and Dr. Maktar to be sure this is going to happen, because we have a very short time frame, and the current MAG, I think, is used to working under tight deadlines: we had a very short time frame this year to organize the Africa IGF, and it was a good success. So we believe that if we start as early as next year in constituting a new MAG, then the community can be given time to organize and submit workshops, the MAG will have ample time to select the workshops, and, before we even hear from Tanzania on the month or date they will host, we should be able to have our program ready. 
So I think those are the updates we have from the MAG. Members, just look out for the call announcing nominations for MAG members, and we'll take it from there. Thank you.

Adil Sulieman: Thank you. Thank you very much, Lilian. So you heard the news: there is a charter, and there is also a MAG election coming. If you are interested, especially the youth, please be on the alert; something is going to come up, and then you can put your name forward for some of these positions. Even though they are voluntary, this is about serving Africa, and we are really looking for the youth and their spirit and hard work, so that they can make a difference. Thank you very much to the panel. Please give them a round of applause. Let me ask Mr. Walid Hamdi and Dr. Maktar Sek to come to the high table so that they can give presentations on some of the initiatives from the African Union Commission and the United Nations Economic Commission for Africa. This will be very interesting, because you are going to hear some good news about the development of new policies and some new projects that are going to be launched in Africa. So let me give the floor to Mr. Walid Hamdi. Mr. Walid Hamdi is the new head of the Information Society Division within the African Union Commission. Mr. Walid, please go ahead. Thank you. I think there was a mic here.

Waleed Hamdi: Hello. Very good evening, brothers and sisters. Very good evening, everyone. All protocols observed. I'm really proud to see this African representation in the room. It shows how keen we are for African voices to reach these global platforms and to be very effective in shaping global policies. 2024 was very intense, as globally the strategic direction was about how to move from strategies to actions. So at the African Union Commission, particularly the Information Society Division, we were thinking about how to move from strategies to actions, and we also wanted to promote African efforts; as I mentioned, we are very keen for African voices to be heard on these global platforms. We started with the Global Digital Compact. We had a very long session of thinking about how to submit an African common position to the Global Digital Compact, so that our voice, and our goals and objectives, are united, and so that we can accomplish these tasks all together as Africans. So, in preparation for the Summit of the Future, under the African Union Commission's stewardship, we produced a common position towards an open, free, and secure digital future for all. These positions were articulated in an African Digital Compact, which was submitted during the Summit of the Future as, as I mentioned, a common position for Africa on the Global Digital Compact. That was during the second half of the year, and I'm very proud also to say that we managed to develop the continental AI strategy at the same time. 
We have been working very hard, utilizing our resources, to deliver the African Digital Compact as well as the continental AI strategy, because, as we all know, AI is a very hot topic and we don't want to be late. Emerging technologies, and specifically AI, are now in every conversation around achieving the Sustainable Development Goals and addressing African challenges, and we have a lot of unique challenges in Africa, as you may know. The continental AI strategy was endorsed by the AU Executive Council back in July 2024, with support from UNESCO, and the African Digital Compact was supported by our colleagues at GIZ. We provided these two frameworks, developed by multi-stakeholder groups representing African actors in the digital sphere. They are action-oriented, with specific commitments that are measurable and time-bound, and with ownership of objectives clearly assigned. I'm pleased to announce that both frameworks have now been adopted by the AU Summit and the AU Executive Council, and they are available. Both are accompanied by implementation plans, so that Member States can follow them to start implementing the frameworks with ease. I'm very pleased to share with you all that the second phase of PRIDA, PRIDA Phase 2, will formally launch in early 2025; we are speaking about the second half of January. Internet governance will be a focus area under the new initiative. I would also like to announce that the AU Commission will launch, with the support of LuxDev, a continental CERT initiative, following the completion of the project formalization phase which is currently underway. The AU developed a common position on the application of international law in cyberspace this year. This common position was adopted by AU organs early this year, and the Commission is currently assisting Member States to develop their own national positions. 
As I mentioned, we are moving from strategies and policies to actions, so this means applying international law to cyberspace and assisting Member States to do so. Next, I'm also pleased to report that a continental cybersecurity strategy is under development. Once the draft is finalized, the AU Commission will carry out validation workshops with different African stakeholders, involving various groups on the continent. Ideally, the final draft should be presented to the AU organs for consideration and possible adoption within the second half of 2025. We started 2024 very excited, and 2025 will continue with these big milestones, where we will have the African cybersecurity strategy. As a cybersecurity flagship project, the AUC will be launching cybersecurity initiatives in close collaboration with the World Bank and the German Foreign Affairs Ministry. The key areas of this collaboration will be domestication of the Malabo Convention on cybersecurity and the cyber strategy, and we can't talk about this without mentioning child online safety and digital ID. With this, I'll stop here. Thank you so much.

Adil Sulieman: Thank you very much, Walid, for the presentation. Now I'll give the mic to my good friend, Dr. Makhtar Sheikh, to give us some insights into the activities carried out, or planned to be carried out, by ECA in 2024 and 2025. Dr. Sheikh, please. I think so. You can use yours.

Makhtar Sheikh: Good afternoon, everyone. Thank you. We are going to go fast, because we have another meeting starting soon. For UNECA, we had a lot of accomplishments during 2024, but 2025 will be busier. Just to highlight from 2024: the adoption of the Global Digital Compact by the UN General Assembly in New York in September, and also the launch of the AI Working Group and the Data Governance Working Group at the UN level. We work closely with the AUC, and we support a lot of member states in several areas. Now we are going to align our work in 2025 with the five objectives of the Global Digital Compact. Let me give you a summary of what we plan to do in 2025. Regarding the first objective of the Global Digital Compact, closing the digital divide and achieving sustainability: we support member states to develop their national strategies on digital technology. This year we focus on 10 countries; 10 countries will benefit from this support. We will also continue to work on digital public infrastructure (DPI), to support some member states and to develop African guidelines on DPI, and to focus more on climate change and the impact of the climate on our continent. We will also continue to support African members in expanding their broadband infrastructure by bringing in investment companies to support them in 2025. Also, in terms of connectivity, we are going to continue our work to promote connectivity across schools; as a joint program, we have UNICEF, ITU, and ECA connecting a lot of schools across the continent this year. And also capacity building: we are going to build the capacity of young students, targeting 100,000 this year across 14 countries. Now, on the second objective, regarding digital inclusion and making digital technology beneficial for all, we have several activities, like the digital ID program. We will continue our digital ID program in member states. 
In 2025 we focus on seven countries, and we will add four more countries during 2025. We now have a great result in Ethiopia: in the database, as of the end of last week, 1 million people have registered to get their digital ID, and next year we plan to go to 24 million. We have also already finalized a strategy for Malawi, and we are going to start implementation in the Gambia, in Malawi, and in other African countries. We are also going to launch the STEAM Center in Rwanda in June. On inclusion, there is also a meeting we are going to organize: WSIS Plus 20 will be discussed in May in Benin, on how we can align WSIS Plus 20 and the Global Digital Compact for African countries, before the global WSIS in July in Geneva and before the General Assembly in September, where we are going to organize the continuation of the IGF and WSIS Plus 20 in line with the Global Digital Compact. The third objective concerns cybersecurity, making our cyberspace secure. We are going to continue our work on cybersecurity. We focus more on capacity building this year, because last year we did a lot on policy, and the AUC will develop the continental strategy. We are going to revise the Malabo Convention and see how to assist member states to implement the new revision of the Malabo Convention. We will also continue the capacity-building program targeting member states, parliamentarians, and the private sector, as well as the judiciary. On cybersecurity also, hopefully, we are going to launch this year the African Center of Cybersecurity in Togo. We are working closely with the GFCE and the World Bank on that, and we think that by the end of this year we are going to launch this cybersecurity center. On objective four, regarding data governance and data sharing, we have the implementation of the African Union Data Governance Framework. We are going to support six member states to build their capacity on data governance and to develop their national strategies. The countries have already been identified, and the work will start as of the 1st or 2nd of January next year. Also on data governance, we are going to develop data governance guidelines for Africa, and we are going to set up a working group on data governance for the continent. At the global level, there is already a data governance working group, with only four member states per continent; Africa will be represented by four member states. At UNECA, we are going to put in place a larger data governance group, and its outcomes will feed into the global data governance discussions, to make sure all parts of Africa are taken into consideration. The last one is AI, very important: AI governance. We will continue our work on artificial intelligence policy, supporting member states to develop their national policies. We already supported the AUC to develop the African continental strategy on AI; now we are implementing it at the member-state level. We will organize next year's African Summit on AI in Mozambique, before June 2025; that has now been agreed with the government. We are also going to do a lot of capacity building on AI, because the issue is that we need to build the capacity of policymakers and the young generation on AI, and we are going to use the artificial intelligence center we have created in Congo to build more capacity for member states and for African youth in AI. We are also going to organize the STI Forum in April in Uganda, where we are going to showcase a lot of AI innovation at the continental level. We have already developed a website showing more than 300 AI innovations in several areas, from agriculture and health to the business sector and industry, and we are going to launch this platform during the African STI Forum that we are organizing in April with the government of Uganda. On AI also, there is the issue of the ethics of AI. 
On regulation, we will work with UNESCO to have one ethics guideline for the continent, as well as on how to regulate this emerging technology; we have a sandbox approach for Africa, and I will provide the results as soon as possible. As you see, I'm going to stop there, because we have a lot of activities in 2025, spanning policy development, capacity building, projects, knowledge sharing, as well as platforms for dialogue like this one, the African IGF, and the think tank for Africa to discuss how to fit its priorities into the global agenda. Because we have a big issue in Africa: if you look at the statistics, in 2023 connectivity was at 37%, and the latest statistic we got two weeks ago from the ITU puts it at 38%. We have had only a 1% increase in connectivity in one year. It is a big issue, because we need to make progress. Also, when we look at the digital gender gap between men and women, it has increased; it is now 13.13 points, and we need to find a solution so that everybody can access the digital space. We also have to curb the negative impact of the lack of digital ID, because we still have 500 million people without any form of legal ID. We have challenges like that. And also, as you know, by 2050 African youth will represent 40% of the world's youth population; we need to build their capacity. And by 2030, we need to create 625 million jobs in Africa focused on digital skills. We have to find a way to bridge the gap and get more people connected and more people skilled in the digital era, to make sure we leave no one offline. Thank you. Thank you very much, Dr. Sek, for the presentation. As you can see from the two presentations, there is a lot to be done, and if you are representing an institution, or even participating as an individual, we will seek your help with all this, because we cannot do it alone. 
We need the support of everybody, each in their own capacity, so that we can accomplish all these aspirations. With that, we finish our presentations.

Adil Sulieman: And we come to the real work of receiving comments from the floor. And please keep it to one question or one intervention comments, because there are so many people in the room. And we want to give opportunity to everybody who wants to speak, who wish to speak, so that we can learn from you. And also, maybe you can ask question not to only these two gentlemen, but also to the people who were here before on the African IGF, the MAG, and all that. So without further ado, let me open the floor. I see one hand here. Let’s take maybe four at a time. One, two, three, where is number four? And OK, four. Start with the lady. By the way, I am very proud that we have a very good gender balance in the room and also on the panel. I think this is good. And I think this is a testimony of the African spirit. I think this is great. Please go ahead. Thank you. Good afternoon, ladies and gentlemen.

Audience: My name is Jingbo Huang. I’m the director of the United Nations University Research Institute in Macau, so I’m very enthusiastic about the plans for the African continent, because we have a lot to offer. UNU, as you know, is a UN organization and also a think tank: we do teaching, training, capacity building specifically, and education. The institute I am heading in Macau specializes in digital technology and the SDGs, in research and capacity building, and UNU has 13 institutes in 12 countries. I heard from both of the distinguished speakers, and from the previous panel, the need for strategy development related to digital technology, and for capacity building for youth and for policymakers. That is exactly what we can do at UNU Macau, so I will be happy to connect offline to tell you more. For example, we have a training catalog on demystifying AI, and we have conducted the UNESCO AI ethics readiness assessments in some other countries. So we have a lot to offer, and our institute consists of former university professors from different parts of the world with different backgrounds related to digital technology. This is just to say that we have the implementation power to support your visions. The second point is that we hold a UNU AI conference; this year we held the first one, and we invited some African ministers of ICT. In collaboration with UN DESA, we put together a data governance and digital transformation workshop, and many of the ministers have already participated in it. So this year, 2025, we would like to invite our guests here to join our AI conference. October 24, UN Day 2025, is going to mark UNU’s 50th anniversary and also the UN’s 80th, so it will be a big year; I hope you will come join us. Third, for the youth:
During this conference, we will also hold an AI social innovation hackathon and awards competition, where young people present AI social innovation solutions to local problems. So I also invite the youth from Africa to take part. Thank you.

Adil Sulieman: Thank you very much. Thank you. Thank you. Is there a mic? Thank you very much.

Audience: My name is Wisdom Donkor, executive director of the Africa Open Data and Internet Research Foundation. My concern goes to Waleed. You talked about the strategies to help Africa, but my worry is that we are still not talking about how we are going to get the rural communities out of the state they are in now. So I want to understand from you: what concrete strategies do you have for us to bridge the digital divide in the rural communities? Because in our part of the continent, that is where most of our food basket comes from, and that is where most of the employment comes from, in the agriculture sector and in trade. So I would like to know. And then also to Dr. Makhtar: now we are talking about data governance and the working group and all of that; AI is also coming up; other programs are coming up. We still have the rural communities to connect, and it looks like we are overloading ourselves. I just want to understand how we can move toward these rural communities and help them, because everybody in this room seems to come from the urban cities. The rural communities are still in the dark: when you talk about electricity, they are in the dark; when you talk about connectivity, they are in the dark; and the same for other components. Take the educational sector as well. You mentioned UNESCO, but I didn’t hear about local content; I was expecting to hear about local content and what we can do to bridge that gap. If you look at our educational institutions, the rural communities learn the same programs as the urban cities, but the urban cities have the privilege of having everything digitally and the rural communities do not. And yet the rural communities sit in the same exam room as the urban cities; we write the same exam.
So what do we have, concretely, to bridge that gap? Because that is where I think those problems are coming from; if we are able to bridge those gaps, then we will be there.

Audience: Good afternoon. I’m Levi Siansege from Zambia. I lead the Internet Society Zambia chapter and also the youth IGF. Mine is two comments in one. My question goes to Dr. Makhtar: you mentioned a number of projects supporting a limited number of countries in Africa. I would like to find out which project Zambia would fall under, and I would like to follow up on that, since I am from Zambia. And earlier, in the opening, there was a concern about the Africa IGF 2026. If there is room to volunteer for 2027, I’d like to pledge my country; we can work on a number of things to ensure that 2027 can be hosted in Zambia. With that said, those are my questions and contributions. Thank you.

Audience: Thank you so much. This is Abdi Jalil Bashar from Chad. I’m the partnership advisor of the National Cyber Security Agency in Chad and also a member of the Economic Council of the African Union. I need to thank Waleed and Makhtar for their interventions, because there is a lot to do in the next year, on the artificial intelligence strategy in Africa and on the revision of the Malabo Convention. My question is about capacity building, which I think is key: we need to train young people, women, officials, and parliamentarians. So how can we contact you? Sometimes you send the official invitation through foreign affairs, but it is not forwarded to the key ministry of ICT or the National Cyber Security Agency. Is there any opportunity to send requests directly to you so that we can work together on that? Thank you so much.

Adil Sulieman: Am I audible? Can you hear me, guys? You can hear me? Okay. Sorry for that.

Speaker 4: Yes. So I would like to start by thanking the director of UNU Macau. At the African Union we definitely value this collaboration, this cooperation, and these partnerships, because we cannot do it all by ourselves; Africa is part of the world, and we need to position ourselves as partners to other organizations, to exchange our African experience and knowledge, and also to learn what others are doing. We already have some collaboration with UNU, and we are of course happy to receive your invitation and to participate in the UNU Macau event. So thank you so much for your comments; we appreciate them, and we can always discuss future collaboration. My brother, that is a great question regarding the strategies to bridge the digital divide, especially focusing on rural areas and communities. You might not have heard it today, but we all know it is a real challenge; I talked about the unique challenges we have in Africa. The reason we mention AI and other frameworks is not to neglect this gap, but rather to look for innovative ways to bridge the connectivity gap. It has been on our agenda and among our priorities. The strategy we have put in place is to work very closely with the ATU, the African Telecommunication Union. We are exploring new technologies that can help, because on the connectivity issue we have had many conversations with the ITU, for example; we look at the ITU fact sheets, and we monitor how the gap is closing. So we are working very closely with the African Telecommunication Union, and we are also looking at innovative, less traditional ways of bridging the gap, such as satellite internet, maybe using low-Earth-orbit connectivity.
So all these things are things we are thinking about. We work together, we push to bridge the gap with the rural areas, and we thank you for being the voice in the room for these communities that, as you mentioned, might not be able to come all the way here. We thank you for your interest, and we work tirelessly to see improvement on this challenge. You mentioned electricity: earlier this month we had our STC for energy, and we tabled the Africa energy efficiency strategy. We have been given very clear directions to use AI in these areas, including with renewable energy, to preserve African resources and to make the best use of them. So we are working on that; our strategy is to take a multi-stakeholder and collaborative approach with our colleagues at the ATU. On the capacity building Abdi Jalil mentioned: as I said earlier, we have implementation plans for both the African Digital Compact and the AI strategy, and in those strategies there is a pillar for capacity building and a pillar about advocating for and promoting AI application development. All of this needs capacity building and training, and, as the director of UNU Macau mentioned, we are doing this either with our own resources or by collaborating with other international organizations; this is a good example in the room. How to approach us? Everyone knows that in the AU my door is always open. You can have my card, you can have my email; there is also an email, isd@africa-union.org. You can send us anything; we keep monitoring, and we do not ignore any email, any idea, any suggestion, any communication. You can approach me directly, and if not, all my team are there,

Adil Sulieman: and also through the official channels, of course; through the AU official channels we are approachable, and we are happy to work peer-to-peer and together with anyone who is interested. For the gentleman, I will give the floor to Dr. Makhtar, because the question was addressed to him.

Makhtar Sheikh: Thank you. I think you have a very interesting question. First, we work together: we have an MOU, and we have developed some activities at the regional level on e-government. I think we took note of your idea, and we will see how we can involve you in our activities on capacity building and AI. We also work on climate change: how to use digital technology to mitigate climate impact on the continent. On the digital gap, you are right. In the rural areas of Africa, we have 25 percent productivity compared to the digital areas, and our objective is to bridge this digital gap; when you look at the first objective of the United Nations Global Digital Compact, it is to bridge this digital divide. At UNECA, our strategy is to involve the private sector in the development of infrastructure in rural areas. We are starting with some countries; we are going to start with Ethiopia, bringing the private sector to develop rural areas through PPPs with the government. It can work very well, because we did something similar in Guinea to develop broadband infrastructure. We also have a universal service strategy we are going to launch next year, which I think can help bridge this digital divide in rural areas. We also have our program on digital ID, providing digital ID to all people across a country. Now, for Zambia: we work on request. For advisory services, we have to receive a letter from the government, if the government wants to develop a national strategy or a policy, or needs support for a project such as a digital ID program, cybersecurity, e-commerce, or AI policy. We have to receive the letter from the government; it is a process, and I think all African governments know how to work with UNECA. If we have a capacity-building activity, we send invitations directly; we target parliamentarians, policymakers, and youth to be part of that policy work and capacity building. For Zambia, it will be digital ID.
We support Zambia in digital ID, but we do not have the capacity to support every African country on a given project. We build the capacity of one or two countries, and those countries serve as a model for the others; other countries can go there to learn and to develop their own policies. I will give you an example: we built the capacity of Rwanda in digital technology. I spent seven years in Rwanda working on digital projects, and now Rwanda is a model. Countries can go there to learn what Rwanda has done on digital policy and digital finance and can reuse those systems, and a lot of countries now go to Rwanda for this kind of collaboration: how to use AI in the agriculture sector, how to transform rural areas by creating more activity. We have the example of Botswana, where innovative technology was developed 700 kilometers from the capital. When we first went there, there was no network, there was nothing; now there is everything, and the farmers are very happy, because digital technology brings a lot of new revenue to their farms. Thank you very much for these very good questions.

Adil Sulieman: If I may complement the panel: Jalil, I think the request has to be sent through the formal channels, and you can also contact the individuals who are here. And on the point about rural connectivity, maybe we need to change the mindset: not to talk about connectivity anymore, but rather to talk about solutions. We will provide solutions to the communities, and part of the solution will be connectivity. But that is a different discussion. So we open the floor for the next round of questions. Okay, you have one here, two, three, and four. Yeah, you are the first. Thank you. We can hear you.

Audience: Technology is failing us. Thank you. My name is Ahmed Farak. I’m the chair of the North African IGF, and I would like to share with you some activities we organized in 2024, followed by a question for the panel. We organized our eighth annual meeting in October in Mauritania, with more than 10 sessions, 11 in fact, where we discussed important topics including data governance, broadband, cybersecurity, the GDC, and women’s inclusion. In the North African IGF we are also committed to organizing a specialized session for the youth IGF initiative in North Africa. Before the forum, we organized the school of internet governance; the program served more than 34 students from the seven countries of the North African IGF. As you see, and as I mentioned before, we are focusing on the youth: we dedicate our efforts to helping them and to offering them a capacity-building program that allows them to become the next leaders of the region. My question is whether there is any channel that can support the sub-regional IGFs on our continent, particularly in the capacity-building tracks we are determined to promote, and help enhance the outcomes of these tracks. Thank you.

Adil Sulieman: Thank you very much.

Audience: Thank you for this opportunity. First of all, let me take this opportunity to thank the AUC and of course UNECA for the cooperation that we’ve had. My name again is Martin from GFC Africa. I just want to give you four areas that came out of our regional meeting which require some interventions, and you may want to respond to them. During the regional meeting that we held in the margins of the Africa IGF, we had a session covering four areas. The first looked at the protocols and strategies that have been developed within the continent, especially the Malabo Convention, the AI strategy, and so forth. The key question that came out was how we allow and expand the inclusion of specific interest groups in building understanding of the implementation. For example, we have different sectors, health, transport, and so forth, and those sectors need to be included; the question is how we do that. There was also an intervention in the area of diversity, in terms of the inclusivity of women and girls, and the key issue is to strengthen interventions in policy, especially to make sure that policies are gender-sensitive at either the regional or the national level, and, when we build capacity in those sectors, to ensure that the people who are trained are absorbed. The third area we looked at was cyber diplomacy, norms, and confidence-building measures, which is a new area; I know we had a session today on cyber diplomacy, but the key point there was building capacity at the regional level. What can we do, beyond the ECOWAS region, where we know ECOWAS is doing a good job, in the SADC and East African regions, to build capacity to understand those particular areas? Finally, on enhancing resilience, there is agreement that regional dynamics are important. We are seeing SATs being looked at, and I am glad the AUC has formally announced that it is going to look at the formulation of the Africa SAT at a continental level; the key question is the understanding of the services and other areas that need to be absorbed. And lastly, a quick one on the Togo Center, which Dr. Makhtar talked about: this has progressed significantly, and two areas are being looked at, one being the interest within the region and investment from the private sector. Thank you very much.

Audience: Thank you. Okay, so, just to know: are there any MOUs between the UN, the African Union, and the universities in Africa, so that the knowledge you are providing is available to everyone in Africa? Is there something like that?

Audience: My name is James from Zimbabwe. Mine is on the revision of the Malabo Convention. I think this is long overdue, but I really welcome hearing that the African Union is going to look at it. If you look at the fate of Budapest, there is now a new UN treaty on cybersecurity, and I think what was observed today should have long been observed for us under Malabo. My hope is for the African Union to devise a strategy to push for more ratifications of the Malabo Convention. There is one hurdle to ratification of Malabo, where a member state is supposed to have data protection legislation in place. In Southern Africa, we have tried to get around this by having model laws to assist member states in drafting their data protection legislation. So perhaps the African Union can consider something similar, to help member states put in place the data protection legislation that is a condition for ratifying Malabo; then we can have more ratifications. It is very possible to have 100 percent ratification of Malabo; we have already seen it happening with the African Continental Free Trade Area protocol, where a massive number of member states have ratified. So it is an appeal to the African Union to consider devising a strategy to increase the ratifications of Malabo. Otherwise it is a document which won’t be useful; let it be useful.

Speaker 4: We can avail our AU network and collaboration arrangements to, well, I don’t want to say empower more, because we are already doing our best to empower the youth, but to continue empowering them to the full limits, so that they can assist us in our mission to reach the Africa we want. So I am very glad to continue this discussion, maybe offline, and to put our resources behind supporting the regional initiatives. Dr. Martin, your question was about the four main areas and the strategies: how to include different players. The African Union Commission serves all our member states equally, and to be able to zoom in and integrate specific stakeholders in different fields, health, education, all our strategies come with a pillar around partnerships: how can we achieve the goals of the strategy? We are taking the same approach with the cybersecurity strategy, trying to have a strategy that is inclusive and through which we can reach our strategic objectives by integrating everyone. However, it is challenging to do this completely at the continental level, so there is work to be done at the country level; we always try to set that direction in the strategies, but it is also a shared responsibility between us and the member states. On gender inclusivity and gender sensitivity: all our initiatives, meetings, and groups are gender-sensitive and inclusive, and we always try to take initiatives that encourage African women to join us in the cybersecurity sphere, to participate more, and also to have
opportunities where they can show their abilities and learn from others. We started with ourselves in the division, where we always advocate for women in cybersecurity. On cyber diplomacy at the regional level, we tend to take a peer-to-peer regional approach, meaning we advocate for regional experience sharing and peer-to-peer learning between all our regions. Makhtar is not here to answer the Togo part, so maybe you can take it offline with Dr. Seck. Yes, James: regarding the Malabo Convention, I am going to divert this question to Mr. Adil, because he is one of the pioneers; he worked closely with the team and spearheaded the Malabo Convention work, so I think he will do more justice to your question than I would. It also lets me share the floor, and some of the questions, with him, so I don’t feel I am in the spotlight by myself. So, Mr. Adil, if you would like to address it. Yeah, thank you

Adil Sulieman: very much. Thank you for the questions. Let me go through them, just to enrich the answers to the questions that were raised. So the first question was about the regional initiatives. PRIDA 2 is going to focus, as mentioned by Waleed, on IG as one of its pillars, and as you know, in PRIDA 2, IG covers the national, regional, and continental levels, so there will be support for the regional initiatives. Martin, we have been working with the UK government on CBMs and training for the RECs. We started with ECOWAS and did SADC, and hopefully next year we will do Central Africa, East Africa, and North Africa, and then one for the whole continent, so that we can go bottom-up and arrive at an African common position on CBMs and norms. On the Malabo Convention, you are right. As Waleed announced in his initial remarks, this is something we are going to be working on: ratification, and also model laws so that countries can accelerate ratification. There is support from the German government and the World Bank on that, and we will be working on it. For the question on PRIDA 2, lessons learned, and scope: AI is going to be featured in PRIDA 2, in IG and the DTS sectoral strategies, so that will be covered, and data will also be featured. As for lessons learned, we had COVID and then had to work around it, and that is something we need to account for. Gender was missing in PRIDA 1, so a lesson learned is that we need more focus on gender in PRIDA 2. But we are very excited about the initiative, and it will also be reflected in the global IGF; we may have more Africans participating in the global IGF. I was told that we have five minutes, if there is one more intervention.
She wants to ask a question, and I don’t know if we can cover more than one. It’s okay.

Audience: Hi, my name is Winnie Kamau, from the Association of Freelance Journalists. Can you hear me now? Thank you. For me, I’m just wondering where the role of storytelling is in telling the good work being done by the IGF sector, especially in Africa. We are not telling our stories enough. It is the first time I am hearing about North Africa and what they did; I would like to hear more stories from the North Africa team about what they are doing there, and also from the AU and everyone. So I don’t know what the role of the media is in your storytelling. Thank you.

Audience: Can you hear me? Yes. This is Aisha Jaridi. I’m the Vice Chair of the North African IGF. I am from North Africa, and I just wanted to complement what my colleague Ahmed said: during the last North African IGF, we launched a new initiative especially for women, capacity building for women in North Africa, a series of webinars that we will conduct to address the needs of women in the region. That being said, my question, to keep it short, and we can connect afterwards so I can tell you more about North Africa, is to you, Dr. Waleed or Dr. Adil: in your agenda, where are women? Do you have specific objectives and specific inclusion measures for women, or specific projects targeting women directly? Thank you. Yes, I think I will agree with my sister that storytelling is a very powerful tool. We are always using storytelling, and we are always engaging with the media: we share media briefs with them, we share what we are doing, what we have achieved, our successes. But I think there is room

Adil Sulieman: for improvement. We have a team of communication experts; maybe if they were here, they would tell you more. But we also need to do more storytelling, more sharing of our successes, more sharing of our experiences. So I do agree with you: storytelling is a very powerful tool to show what we do, to learn from these stories, and to inspire people to come and work with us. Because, again and again, we will keep mentioning that we cannot do it alone; we need all Africans to come and work with us, in different capacities, to achieve these goals for the Africa we want. Thank you so much for the question. Yeah, let me take the women question. PRIDA 2 is going to feature this: we did a study under PRIDA 1 on how to improve women’s involvement in ICT, and hopefully that will be reflected in PRIDA 2. And as mentioned in the opening statement by Waleed, women in cyber is something that is going to take off in 2025. So these are a couple of things we can point to. But I think we have come to the end of this session. Thank you very much for being here, attending the session, and actively participating.

S

Speaker 1

Speech speed

120 words per minute

Speech length

813 words

Speech time

403 seconds

African IGF 2024 had over 430 delegates from 43 member states

Explanation

The African Internet Governance Forum 2024 was well-attended, with over 430 delegates representing 43 member states. This indicates a high level of engagement and participation from across the African continent.

Evidence

Over 430 delegates got badges, and more than 43 member states were represented.

Major Discussion Point

African Internet Governance Forum (IGF) Activities and Outcomes

A

Amina Ramalan

Speech speed

112 words per minute

Speech length

332 words

Speech time

177 seconds

Youth track focused on digital governance and emerging tech

Explanation

The youth track of the African IGF centered on digital governance and emerging technologies. This focus aimed to amplify youth voices in multi-stakeholder dialogues on these important topics.

Evidence

Over 100 participants on site and about 100 participants online from all over the world attended the youth track.

Major Discussion Point

African Internet Governance Forum (IGF) Activities and Outcomes

Agreed with

Speaker 3

Agreed on

Focus on youth engagement in internet governance

S

Speaker 2

Speech speed

137 words per minute

Speech length

292 words

Speech time

127 seconds

Parliamentarian track discussed legislation and stakeholder collaboration

Explanation

The parliamentarian track at the African IGF focused on legislative issues and collaboration between parliamentarians and other stakeholders. The aim was to ensure that legislators work together with other actors in the internet governance ecosystem.

Evidence

Members of Parliament from different countries and representatives from the Africa Parliamentary Network on Internet Governance participated in the discussions.

Major Discussion Point

African Internet Governance Forum (IGF) Activities and Outcomes

S

Lilian Nalwoga

Speech speed

137 words per minute

Speech length

428 words

Speech time

186 seconds

New charter developed to guide organizing of African IGF

Explanation

A new charter has been created to guide the organization and convening of the African IGF. This charter aims to improve the structure and processes of the forum to better serve the African internet governance community.

Evidence

The charter was developed by a task force of 12 members and came into effect during the IGF in Ethiopia last month.

Major Discussion Point

African Internet Governance Forum (IGF) Activities and Outcomes

Agreed with

Amina Ramalan

Agreed on

Focus on youth engagement in internet governance

W

Waleed Hamdi

Speech speed

99 words per minute

Speech length

782 words

Speech time

472 seconds

Continental AI strategy and African Digital Compact developed

Explanation

The African Union Commission has developed a continental AI strategy and an African Digital Compact. These frameworks aim to guide the development and implementation of AI and digital technologies across the continent.

Evidence

The continental AI strategy was endorsed by the AU Council in July 2024, and the African Digital Compact was supported by GIZ.

Major Discussion Point

African Union Commission and UNECA Initiatives

PRIDA Phase 2 to launch in early 2025 focusing on internet governance

Explanation

The second phase of the Policy and Regulation Initiative for Digital Africa (PRIDA) is set to launch in early 2025. This phase will have a specific focus on internet governance issues in Africa.

Evidence

The launch is planned for the second half of January 2025.

Major Discussion Point

African Union Commission and UNECA Initiatives

Continental cybersecurity strategy under development

Explanation

The African Union Commission is developing a continental cybersecurity strategy. This strategy aims to address cybersecurity challenges and enhance digital resilience across the continent.

Evidence

Validation workshops with different African stakeholders are planned, with the final draft expected to be presented to AU organs in the second half of 2025.

Major Discussion Point

African Union Commission and UNECA Initiatives

M

Makhtar Sheikh

Speech speed

136 words per minute

Speech length

2088 words

Speech time

920 seconds

Supporting member states on digital strategies, ID programs, and capacity building

Explanation

UNECA is providing support to African member states in developing digital strategies, implementing digital ID programs, and building capacity in various digital areas. This support aims to accelerate digital transformation across the continent.

Evidence

UNECA is supporting 10 countries in developing national digital strategies, implementing digital ID programs in 7 countries, and planning to build capacity for 100,000 young students across 14 countries.

Major Discussion Point

African Union Commission and UNECA Initiatives

Agreed with

Audience

Agreed on

Need for capacity building and digital skills development

A

Audience

Speech speed

124 words per minute

Speech length

2855 words

Speech time

1374 seconds

Need to address digital divide in rural communities

Explanation

There is a pressing need to address the digital divide in rural African communities. The lack of connectivity and digital access in these areas is hindering development and economic opportunities.

Evidence

Rural communities often lack electricity and internet connectivity, which impacts education and economic activities.

Major Discussion Point

Challenges and Areas for Improvement

Importance of capacity building for youth, women, and officials

Explanation

Capacity building for youth, women, and government officials is crucial for the development of Africa’s digital ecosystem. This includes training in various digital skills and technologies.

Major Discussion Point

Challenges and Areas for Improvement

Agreed with

Makhtar Sheikh

Agreed on

Need for capacity building and digital skills development

Revision of Malabo Convention and increasing ratifications

Explanation

There is a need to revise the Malabo Convention and increase its ratifications among African countries. This would strengthen the legal framework for cybersecurity and data protection across the continent.

Evidence

The speaker suggested developing model laws to help member states create data protection legislation, which is a condition for ratifying the Malabo Convention.

Major Discussion Point

Challenges and Areas for Improvement

Enhancing storytelling and media engagement about IGF activities

Explanation

There is a need to improve storytelling and media engagement about IGF activities in Africa. Better communication of successes and initiatives can inspire more participation and support for internet governance efforts.

Major Discussion Point

Challenges and Areas for Improvement

Invitation to collaborate with United Nations University

Explanation

The United Nations University (UNU) extended an invitation for collaboration with African institutions. UNU offers resources and expertise in digital technology research and capacity building.

Evidence

UNU has a training catalog on demystifying AI and has conducted UNESCO AI ethics readiness assessments in some countries.

Major Discussion Point

Collaboration and Support

Request for support of sub-regional IGF initiatives

Explanation

There was a request for support of sub-regional Internet Governance Forum initiatives in Africa. This support could help enhance capacity building efforts at the regional level.

Evidence

The North African IGF organized its 8th annual meeting in October, discussing topics such as data governance, broadband, and cybersecurity.

Major Discussion Point

Collaboration and Support

Suggestion to involve diverse sectors in strategy implementation

Explanation

There was a suggestion to involve diverse sectors such as health and transport in the implementation of digital strategies. This would ensure that digital policies are more inclusive and responsive to various sectoral needs.

Major Discussion Point

Collaboration and Support

Call for specific initiatives targeting women’s inclusion

Explanation

There was a call for specific initiatives targeting women’s inclusion in digital and internet governance activities. This aims to address gender disparities in the digital sphere and ensure women’s voices are heard in policy-making processes.

Evidence

The North African IGF launched a new initiative for women’s capacity building, consisting of a series of webinars addressing the needs of women in North Africa.

Major Discussion Point

Collaboration and Support

Agreements

Agreement Points

Need for capacity building and digital skills development

Makhtar Sheikh

Audience

Supporting member states on digital strategies, ID programs, and capacity building

Importance of capacity building for youth, women, and officials

Multiple speakers emphasized the importance of capacity building and developing digital skills across various groups in Africa, including youth, women, and government officials.

Focus on youth engagement in internet governance

Amina Ramalan

Lilian Nalwoga

Youth track focused on digital governance and emerging tech

New charter developed to guide organizing of African IGF

There was a shared emphasis on engaging youth in internet governance discussions and processes, both through dedicated youth tracks and in the overall organization of the African IGF.

Similar Viewpoints

Both speakers highlighted initiatives aimed at developing comprehensive digital strategies and frameworks at the continental and national levels in Africa.

Waleed Hamdi

Makhtar Sheikh

Continental AI strategy and African Digital Compact developed

Supporting member states on digital strategies, ID programs, and capacity building

Unexpected Consensus

Importance of storytelling and media engagement

Audience

Adil Sulieman

Enhancing storytelling and media engagement about IGF activities

I think we need also to do more about storytelling, do more about sharing our success, do more about sharing our experiences.

There was an unexpected consensus on the need to improve storytelling and media engagement about IGF activities in Africa, with both the audience and organizers recognizing its importance for increasing awareness and participation.

Overall Assessment

Summary

The main areas of agreement centered around the need for capacity building, youth engagement, development of continental digital strategies, and improving communication about IGF activities.

Consensus level

There was a moderate level of consensus among speakers on key issues, particularly on the importance of capacity building and youth engagement. This consensus suggests a shared vision for developing Africa’s digital ecosystem, which could lead to more coordinated efforts in implementing digital strategies and policies across the continent.

Differences

Different Viewpoints

Unexpected Differences

Overall Assessment

Summary

No significant areas of disagreement were identified in the discussion.

Difference level

The level of disagreement among speakers was minimal to non-existent. This implies a high level of alignment and cooperation among African stakeholders in addressing internet governance and digital development challenges. The lack of disagreement suggests a unified approach to tackling issues such as digital divide, capacity building, and policy development across the continent.

Partial Agreements


Takeaways

Key Takeaways

The African IGF 2024 was successful, with over 430 delegates from 43 member states participating

The African Union Commission and UNECA have developed several new initiatives, including a continental AI strategy and African Digital Compact

PRIDA Phase 2 will launch in early 2025, focusing on internet governance

There is a need to address the digital divide in rural African communities

Capacity building for youth, women, and officials remains a priority

Collaboration with international organizations and support for sub-regional IGF initiatives are important

Resolutions and Action Items

Tanzania will host the next African IGF, likely early in the year before the global IGF in June

A call for new MAG members will be opened towards the end of next month

The continental cybersecurity strategy draft will be presented to AU organs for consideration in the second half of 2025

UNECA will support 10 countries to develop their national digital technology strategies

An African Summit on AI will be organized in Mozambique before June 2025

Unresolved Issues

How to effectively bridge the digital divide in rural communities

Specific strategies for increasing women’s participation in ICT and cybersecurity

Methods to accelerate ratification of the Malabo Convention

How to improve storytelling and media engagement about IGF activities

Suggested Compromises

Exploring innovative technologies such as satellite internet and low-Earth-orbit connectivity to address rural connectivity issues

Developing model laws to assist member states in creating data protection legislation to facilitate Malabo Convention ratification

Integrating gender sensitivity and inclusivity measures into all strategies and initiatives

Thought Provoking Comments

We still not talking about how we are going to get the rural communities out of the state in which they are now. So I want to understand from you what concrete strategies do you have in order for us to be able to bridge that digital divide gap within the rural communities

speaker

Wisdom Donkor

reason

This comment challenged the presenters to address a critical gap in their strategies – the digital divide in rural areas. It highlighted an important issue that had not been adequately addressed.

impact

This led to more focused discussion on rural connectivity challenges and prompted the speakers to elaborate on strategies for bridging the digital divide.

How we can contact you because sometimes you send the official invitation through the foreign affair, but sometimes are not sent to the key minister of ICT or National Cyber Security Agency. So is that any opportunity to send the request directly to you and we can work together on that?

speaker

Abdi Jalil Bashar

reason

This comment highlighted practical challenges in communication and coordination between African Union initiatives and national agencies. It raised an important operational issue.

impact

It prompted discussion on improving communication channels and processes for engagement between continental bodies and national agencies.

My hope is for African Union to devise a strategy to push for more ratifications of the Malabo Convention there is one hurdle in terms of ratification with the Malabo where a member state is supposed to have in place a data protection legislation

speaker

James from Zimbabwe

reason

This comment provided specific, actionable suggestions for addressing challenges with an important continental policy framework.

impact

It shifted the discussion towards concrete strategies for increasing ratification of the Malabo Convention and highlighted the need for supporting legislation.

For me, I’m just wondering where is the role of storytelling in telling the good work that is being done by the IGF sector, especially in Africa. We are not telling our stories more.

speaker

Winnie Kamau

reason

This comment introduced a new perspective on communication and highlighted the importance of narrative in promoting African digital initiatives.

impact

It broadened the discussion to include communication strategies and the importance of sharing African success stories in the digital space.

Overall Assessment

These key comments shaped the discussion by highlighting critical gaps in current strategies, particularly around rural connectivity and communication. They prompted more concrete discussion of implementation challenges and strategies, while also broadening the conversation to include important aspects like storytelling and regional cooperation. The comments helped shift the dialogue from high-level policy announcements to more practical considerations of how to effectively implement and communicate about digital initiatives across Africa.

Follow-up Questions

What concrete strategies are there to bridge the digital divide gap within rural communities?

speaker

Wisdom Donkor

explanation

Rural communities are still lacking in connectivity and digital resources, which impacts education and economic opportunities.

How can local content be developed and integrated to address educational disparities between urban and rural areas?

speaker

Wisdom Donkor

explanation

Rural students lack access to digital resources but are expected to compete with urban students in the same exams.

Which specific projects and countries is UNECA supporting in Africa?

speaker

Levi Siansege

explanation

Understanding which countries are receiving support can help others learn about potential opportunities for collaboration or assistance.

How can countries directly contact AUC and UNECA for capacity building and training opportunities?

speaker

Abdi Jalil Bashar

explanation

Improving communication channels could help ensure key ministries and agencies receive information about training opportunities.

Is there a channel to support sub-regional IGFs in Africa, particularly for capacity building tracks?

speaker

Ahmed Farak

explanation

Sub-regional IGFs are focusing on youth development and could benefit from additional support to enhance their programs.

How can specific interest groups and sectors (e.g., health, transport) be included in expanding the understanding and implementation of continental strategies?

speaker

Martin Koyabe

explanation

Ensuring diverse sector representation is crucial for comprehensive strategy implementation.

How can policies be made more gender-sensitive at regional and national levels?

speaker

Martin Koyabe

explanation

Gender-sensitive policies are important for promoting inclusivity in the digital sphere.

What can be done to build capacity for cyber diplomacy and norms at the regional level, particularly in SADC and East African regions?

speaker

Martin Koyabe

explanation

Regional capacity building in cyber diplomacy is crucial for addressing cybersecurity challenges.

Are there MOUs between the UN, African Union, and African universities to make knowledge more widely available?

speaker

Unnamed audience member

explanation

Formal agreements could help disseminate knowledge and resources more effectively across the continent.

Can the African Union devise a strategy to increase ratifications of the Malabo Convention, possibly including model laws for data protection?

speaker

James

explanation

Increased ratification is necessary for the Malabo Convention to be effective across the continent.

What is the role of storytelling in sharing the work being done by the IGF sector in Africa?

speaker

Winnie Kamau

explanation

Better communication of initiatives and successes could increase awareness and engagement across the continent.

What specific objectives or projects does the AUC have targeting women’s inclusion in digital initiatives?

speaker

Aicha Jeridi

explanation

Understanding specific measures for women’s inclusion is important for addressing gender disparities in the digital sphere.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #236 Ensuring Human Rights and Inclusion: An Algorithmic Strategy


Session at a Glance

Summary

This discussion focused on the challenges and opportunities of artificial intelligence (AI) in relation to human rights, inclusivity, and responsible development. Panelists from diverse backgrounds explored how AI systems can perpetuate societal biases and inequalities, particularly affecting marginalized communities and individuals with disabilities. They emphasized the need for a human-centered approach to AI development, incorporating diversity in teams, rigorous auditing, and transparency in algorithmic processes.

The conversation highlighted the importance of comprehensive regulatory frameworks and standardization to ensure AI accountability and fairness. Panelists stressed the critical role of governments in establishing clear guidelines and independent oversight mechanisms. They also discussed the significance of public awareness and education about AI systems to empower users and drive demand for responsible AI practices.

The discussion touched on specific examples of AI applications in healthcare, welfare systems, and assistive technologies, illustrating both the potential benefits and risks of these systems. Panelists agreed on the need for multi-stakeholder collaboration, including youth engagement, to address AI-related challenges effectively.

The importance of data quality, representation, and transparency in AI development was a recurring theme. Panelists advocated for proactive bias mitigation techniques and the establishment of clear mechanisms for individuals to challenge algorithmic decisions.

While acknowledging the complexities of making AI responsible and transparent, the participants concluded that pausing AI development is not a viable option. Instead, they called for continued efforts to improve AI explainability, enhance public understanding, and foster collaboration among all stakeholders to shape a more inclusive and ethical AI-driven future.

Keypoints

Major discussion points:

– The potential for algorithmic bias and discrimination in AI systems, especially impacting marginalized groups

– The need for human-centered approaches, diversity, and inclusion in AI development

– The importance of transparency, explainability, and accountability in AI systems

– The role of governments in regulating AI and establishing frameworks for responsible development

– The need for public awareness, education, and AI literacy

The overall purpose of the discussion was to explore the challenges and potential solutions for developing responsible, ethical, and inclusive AI systems that respect human rights and do not perpetuate or amplify existing societal biases and inequalities.

The tone of the discussion was largely serious and concerned, given the gravity of the issues being discussed. However, there were also notes of optimism, especially towards the end, as speakers emphasized the potential for positive change through collaboration, education, and proactive policymaking. The tone shifted slightly from highlighting problems to focusing on potential solutions and calls to action.

Speakers

– Monica Lopez: CEO and co-founder of Cognitive Insights for Artificial Intelligence, expert in human intelligence, machine intelligence, human factors, and system safety

– Paola Galvez: Worked on AI readiness assessment and national AI strategy for Peru

– Ananda Gautam: Moderator

– Yonah Welker: Visiting lecturer at Massachusetts Institute of Technology, ambassador of EU projects to the MENA region

– Abeer Alsumait: Public policy expert with experience in cybersecurity, ICT regulation and data governance in the Saudi government

Additional speakers:

– Meznah Alturaiki: Representative of the Saudi Green Building Forum

– Aaron Promise Mbah: No specific role mentioned

Full session report

Expanded Summary of AI and Human Rights Discussion

Introduction

This discussion brought together experts from diverse backgrounds to explore the challenges and opportunities presented by artificial intelligence (AI) in relation to human rights, inclusivity, and responsible development. The panel included Monica Lopez, CEO and co-founder of Cognitive Insights for Artificial Intelligence and a member of the Global Partnership on AI; Paola Galvez, who worked on the national AI strategy for Peru and has experience with Microsoft; Yonah Welker, a visiting lecturer at MIT; and Abeer Alsumait, a public policy expert with experience in cybersecurity, ICT regulation, and data governance in the Saudi government. The conversation was moderated by Ananda Gautam.

Key Themes and Discussion Points

1. Algorithmic Bias and Its Impact

A central theme of the discussion was the potential for algorithmic bias and discrimination in AI systems, particularly affecting marginalised communities and individuals with disabilities. Monica Lopez emphasised that algorithms are not neutral tools but powerful social mechanisms that can perpetuate or challenge existing power structures. This sentiment was echoed by Abeer Alsumait, who noted that AI systems have demonstrated bias against marginalised groups in various domains, including healthcare, as evidenced by the Pennsylvania University study she mentioned.

2. Addressing Algorithmic Bias and Promoting Responsible AI

To combat algorithmic bias, the speakers proposed several strategies:

a) Diversity in AI development teams: Monica Lopez stressed the crucial importance of diverse teams in AI development to mitigate bias. Paola Galvez emphasized the need for gender equity in AI.

b) Rigorous algorithmic auditing and transparency: Lopez advocated for comprehensive auditing processes and increased transparency in algorithmic decision-making, calling for standardization in AI audit documentation and metrics.

c) Proactive bias mitigation techniques: The implementation of proactive measures to identify and address bias before AI systems are deployed was recommended.

d) Comprehensive regulatory frameworks: There was consensus on the need for robust regulatory frameworks to guide responsible AI development, with multiple speakers referencing the EU AI Act as a potential model.

e) Ongoing community engagement: The speakers emphasised the importance of continuous dialogue with affected communities throughout the AI development process.
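The auditing and bias-mitigation strategies above can be made concrete with a simple group-level check. The sketch below is an illustrative Python example, not a method discussed in the session: the data, function names, and the 0.8 "four-fifths" screening threshold are all assumptions. It computes per-group selection rates from a model's logged decisions and the disparate impact ratio sometimes used as a first screening metric in fairness audits.

```python
# Minimal sketch of one step in an algorithmic audit: comparing a model's
# favorable-outcome rates across demographic groups. Data and the 0.8
# "four-fifths" threshold are illustrative, not from the session.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, outcome) pairs, outcome 1 = favorable."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical hiring-model outputs: (group, 1 = recommended for interview)
audit_log = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 30 + [("B", 0)] * 70

rates = selection_rates(audit_log)   # group A: 0.6, group B: 0.3
ratio = disparate_impact(rates)      # 0.3 / 0.6 = 0.5
flagged = ratio < 0.8                # fails the four-fifths screen
```

A check like this is only a screening signal, which is why the panelists' fuller program (representative training data, fairness constraints, continuous monitoring as models drift) goes well beyond any single metric.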

3. AI in Assistive Technologies

Yonah Welker highlighted the potential of AI to support people with disabilities through assistive technologies. He emphasized that creating disability-centric AI is a complex process requiring a multi-modal and multi-sensory approach. Welker also stressed the need for legal frameworks to complement assistive technologies and called for dedicated safety models and regulatory sandboxes for AI testing.

4. Government Role in Responsible AI Development

The panellists agreed on the critical role of governments in regulating AI adoption and ensuring responsible development. Abeer Alsumait discussed the Saudi Data and Artificial Intelligence Authority’s role in advancing AI governance. Paola Galvez advocated for structured public participation in AI policy development and stressed the importance of investment in AI skills development.

5. Education and Public Awareness on AI

Monica Lopez argued that increased public knowledge would drive demand for more responsible AI practices and create more engaged and critical AI users. An audience member raised the point that even professionals like judges require specialised understanding of AI systems, underscoring the breadth of educational needs across society.

6. Localization and Language in AI

Yonah Welker emphasized the need for localized AI solutions, especially for non-English languages, to ensure inclusivity and effectiveness across diverse populations.

7. Environmental Impact of AI

Paola Galvez raised concerns about the environmental impact of AI, highlighting the need to consider sustainability in AI development and deployment.

8. AI in Healthcare

Abeer Alsumait discussed challenges in healthcare AI, referencing the Pennsylvania University study that revealed biases in healthcare algorithms.

9. Content Recommendation and User Safety

An audience question addressed the issue of AI recommending potentially harmful content to vulnerable users, highlighting the need for responsible content curation and user protection.

10. Balancing AI Progress and Responsible Development

While acknowledging the complexities of making AI responsible and transparent, the participants concluded that pausing AI development is not a viable option. They advocated for continued efforts to improve AI explainability, enhance public understanding, and foster collaboration among all stakeholders to shape a more inclusive and ethical AI-driven future.

Conclusion and Future Directions

The discussion concluded with several key takeaways and action items, including the need for diverse development teams, rigorous auditing, proactive bias mitigation, comprehensive regulatory frameworks, ongoing community engagement, investment in education, and the development of national AI strategies.

Unresolved issues included how to effectively standardise AI audit documentation and metrics, balance rapid AI development with responsible implementation, make complex AI systems easily explainable to the general public, and ensure AI policies are effectively implemented and enforced.

The panelists, particularly Yonah Welker, expressed optimism about stakeholders working together to address these challenges and shape a more inclusive, ethical, and human-centred approach to AI development and governance.

Session Transcript

Monica Lopez: Okay, yes. So, can you hear me okay? Yes? All right. Well, first of all, thank you for the forum organizers for continuing to put together this summit on really such critical issues related to digital governance. I’m really excited to be here, at least online. And I also want to thank Paola Galvez for really bringing all of us from across the world together, whether virtually or in person. So, as a brief introduction, I’m Dr. Monica Lopez and I come from a technical background. So, I’m trained in the cognitive and brain sciences and I’ve been in the intersecting fields of human intelligence, machine intelligence, human factors, and system safety now for 20 years. I’m an entrepreneur and the CEO, co-founder of Cognitive Insights for Artificial Intelligence. And I essentially work with product developers and organizational leadership at large to develop robust risk management framework from a human centered perspective. I’m also an AI expert on scaling responsible AI for the global partnership on AI. So, I certainly do recognize many, many individuals. So, as for my contribution, I really do hope to complement the group here. I’m coming from the private sector perspective. So, certainly as we all know, today’s rapidly evolving digital landscape, we know that algorithms have essentially become the invisible architects, perhaps we can call that of our social, economic, and political experiences. And so, what we have are very complex mathematical models designed to process information and make decisions many times fully automated that now essentially underpin every aspect of our lives. And as we all well know at this point, from job recruitment to financial services, to criminal justice and social media interactions. And so, this promise of technological neutrality essentially masks a reality. 
One where algorithmic systems are not objective, but instead they are essentially reflections of the biases, the historical inequities and the systemic prejudices that are across our societies. And so they are essentially embedded in the design and training of data. And so this, as we all know, as well as this has direct human rights implications of algorithmic bias that are profound at this point and really far reaching. And these systems essentially perpetuate and amplify these existing inequalities and are creating digital mechanisms of exclusion that are systematically disadvantaging marginalized communities. And so just very quick before I enter into why we need a human centered perspective on this, but I’m sure very clear examples that you may be familiar with already are with facial recognition technology or FRT and that they have demonstrated significantly higher error rates for women and people of color. We continue to see that problem. AI driven hiring algorithms that have shown to discriminate against candidates based on gender, on race and other protected characteristics. And AI enabled criminal justice risk assessment tools. And we’ve certainly seen this, I’m based in the United States. And so we’ve shown that it has continued to perpetuate racial biases leading to more severe sentencing recommendations for black defendants compared to white defendants with similar backgrounds. So essentially, why do we have this? And the root of these challenges really lies in the fundamental nature of algorithmic development. And we know that machine learning models are trained on historical data that inherently reflect, as I mentioned earlier, these societal biases, power structures and systemic inequalities. And I want you to take a moment right now to consider what a data point even means. How a single- data point has limits. 
As those of you know, for those who work closely with data on a daily basis, and by that I mean whether you’re collecting it, whether you’re cleaning it, analyzing it, making conclusions from it, you know that the basic methodology of data is such that it systematically leaves out all kinds of information. And why? Because data collection techniques, they have to be repeatable across vast scales, and they require standardized categories, and while repeatability and standardization make database methods powerful, we have to acknowledge that they have power as a price. So it limits the kind of information we can collect. So when these models are then deployed, without any sort of critical examination, they don’t just reproduce existing inequities, they actually normalize and scale them. And so here is where I would argue, and I know the rest of the panel will continue to discuss this, but why a human-centered approach to algorithmic development offers essentially a critical, at this point, pathway to addressing these systemic challenges. And essentially what this means is that we need to reimagine technology as a tool for empowerment and well-being, instead of a tool for exclusion. And so in this regard, prioritizing human rights, equity, and meaningful inclusion, every single step of technological design through implementation. And by that, I mean across the entire AI lifecycle becomes essential. And so I work with a lot of clients, as I mentioned earlier, I am in the private sector, and there are key strategies right now that are very clear that we know that we can advance this human-centered approach. And I’ll just briefly mention five of them real quick. So first is we need comprehensive diversity across algorithmic development. I’m sure you’ve been hearing that a lot, but the problem is that the change has not, the transformative change has not really begun. And we know that if we diversify teams, we do get more responsible development of algorithmic systems. 
We do get new perspectives at the table. And so I would say that’s absolutely essential no matter what moving forward. The second element is rigorous algorithmic auditing and transparency. Again, that is another element that we have seen. It is now part, in fact, in part related to the European Union’s AI Act requirement. But what we need is we need to see this across the three perspectives of equality, equity, and justice. And this is not just for big tech companies to be engaging in. This is truly for everyone. And we know that irrespective of emerging legal requirements in some jurisdictions and some where there isn’t much work happening on the legal side, all organizations must implement mandatory algorithmic impact assessment to thoroughly examine the potential discriminatory outcomes before deployment. And then not just that, but continuously monitor those outcomes as more data are collected and models drift. And I have noticed that when companies do that, whether they’re small or medium-sized or large, we do see better outcomes. A third element is the establishment of proactive bias mitigation techniques. Now, there are all sorts of technical strategies for that. Some of them are based essentially on what I was mentioning in regards to, you really need to think about what data means. So careful curation of the trading data. We need to make sure it truly is representative and balanced across the data sets. It does matter and it does change outcomes. Implementation of fairness constraints. Also the development of testing protocols that actually specifically examine the potential for discriminatory outcomes. We know that when you identify that beforehand and you actually look for that, you will see it and you can actually mitigate change. and actually improve on the issue. The fourth element is of course, the classic need for legal and regulatory frameworks. 
And here, I can’t stress enough at this point that governments and international bodies have to come up with comprehensive regulatory frameworks that treat algorithmic discrimination as a fundamental human rights issue. From a business perspective, this means there need to be clear legal standards for algorithmic accountability. There also need to be very clear mechanisms for individuals to be able to challenge algorithmic decisions; there certainly are not enough. Even in some cases where companies are required to publish their auditing results on their websites, that is still not enough. And then, of course, we need significant penalties for noncompliant systems. The last issue, the fifth, is that we need ongoing community engagement. I cannot stress enough that inclusion does matter, and it requires continuous dialogue with the communities most likely to be impacted by algorithmic systems. This is not an easy task, and it’s a lot to ask for, but we know it works; I’ve seen it with companies that make concerted efforts to create participatory design processes across the AI lifecycle. That essentially means establishing relevant feedback and communication mechanisms as you create and design these systems: you pilot them and you work with those individuals. You end up empowering marginalized communities to actively want to provide their input, because it is of value. So what I’m calling for here, to conclude, is a fundamental re-imagining of technological innovation. We know at this point that algorithms are not neutral tools; they’re very powerful social mechanisms that either perpetuate or challenge existing power structures.
So if we change our methods now, every single one of us, in the design and deployment choices of today, then I think we will profoundly shape the future of human rights in the digital age in a very positive way. I look forward to your questions, and I know we’re going to discuss this in more detail. Thank you for listening.
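The testing protocols Dr. Lopez describes, checking a system for discriminatory outcomes before deployment, can be sketched in a few lines. Everything below is hypothetical and for illustration only (the group names, decisions, and threshold are invented, and selection-rate comparison is just one of many possible audit metrics):

```python
# Minimal sketch of a pre-deployment bias check: compare selection
# (approval) rates across groups, i.e. a demographic parity gap.
# All data here is hypothetical, purely to show the mechanics.

def selection_rate(decisions):
    """Fraction of positive (approved) decisions."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Return (max gap between any two groups, per-group rates)."""
    rates = {g: selection_rate(d) for g, d in decisions_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model outputs (1 = approved) for two groups.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 6/8 approved
    "group_b": [0, 1, 0, 0, 1, 0, 0, 1],   # 3/8 approved
}

gap, rates = demographic_parity_gap(outcomes)
print(rates)  # {'group_a': 0.75, 'group_b': 0.375}
print(gap)    # 0.375 -- a gap this large should trigger review
```

In a real audit this would sit alongside other metrics (error-rate and calibration comparisons per group), and the acceptable gap would be set by policy rather than hard-coded.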

Ananda Gautam: Thank you, Monica, for all your thoughts. I think you have also covered the second part of the questions already. My apologies, I should have mentioned the time before. So I’ll go to Paola to give a short introduction; for the first round, let’s wrap within five minutes, and then we’ll go to the second round of questions. For Monica, I think we’ll go a bit short on the second round, since we’ve covered almost everything. So, Paola, over to you.

Paola Galvez: Thank you, Ananda. Hello, everyone. Thank you so much for joining us for this very critical conversation. I’d like to start by posing a question: what does it take to make society more inclusive? My interest in social impact began early, inspired in part by my grandfather, a judge in the Central Highlands of Peru, who often spoke about the societal disparities he witnessed. I went to law school believing it would equip me with the tools to drive meaningful change in a country with high levels of inequality and social disparity, like the one I’m from. But my first year lacked inspiration; my courses were disconnected from real-world problems. My perspective really changed in 2013, when I began an internship at Microsoft. I was looking at a demonstration video of the Seeing AI prototype, ten years ago now. It was this project, which used artificial intelligence to help the visually impaired perceive their surroundings, that opened my eyes and showed me the profound potential of this technology as a catalyst for social change. So I said: as a lawyer, I can help leverage this technology as a force for inclusion, and I can use public policy to help drive human-centric and evidence-based policy. That’s when my commitment started: to transform Peru into a more inclusive and digital society. And I think that’s the path that led me to what I’m doing now, and I hope will lead beyond. I worked in the private sector for a long time; I was in the position Dr. Monica Lopez was describing of how the private sector operates. Then I received a proposition from the government to work there, to help with the national AI strategy and the digital assistance strategy. Most of my friends told me: you’re going to be so frustrated, the bureaucracy is going to kill you, come on, you’re used to Microsoft, big tech.
But I said, no, I can actually shed light on disruptive ways to govern. So I decided to do it. I’m a firm believer in participatory, bottom-up processes, so the first thing I did was form a multi-stakeholder committee to develop this policy. We’re here at IGF, a global forum, talking about AI and data at a global level, and I have seen firsthand, in a local experience, what it means to bring civil society, academia, and the private sector together to find solutions to challenges. And one of the most challenging things is AI policy. I do believe that protecting democracy, human rights, and the rule of law, and establishing clear guidelines on AI, is a shared responsibility that a government cannot carry alone, nor can a private sector company, nor academia; it is an endeavor that must be undertaken with a multi-stakeholder approach. But I do think one stakeholder is crucial in this pursuit, and that is youth civil society. The youth must be included, and youth engagement is a critical area we need to protect now. That’s what I believe, and what I wanted to mention in these first remarks. Because I do see generative AI producing fake and biased synthetic content, large language models reinforcing polarization, and poorly designed AI-powered applications that are not compatible with assistive technologies, leading to discrimination against youth with disabilities. And I have the expert here; Yonah will say more about that. But apart from that, I sincerely believe AI holds immense potential as a technology if we use it wisely. AI systems can break down language barriers. I mean, the IGF is as powerful as it is, and the Youth IGF and the Youth SIG of the Internet Society are powerful: we’re a community of more than 2,000 youth connected, and sometimes we use translation that is powered by AI. That’s powerful. Or, of course, making resources more accessible to diverse youth populations. Sadly, AI has yet to live up to its potential. Dr.
Monica Lopez mentioned most of its challenges, which I absolutely agree with. AI is reproducing society’s biases; it is deepening inequalities. I heard someone saying: but that’s just the way the world is. The world is biased, Paola; what do you think? That’s what AI is going to do. And yes, that’s true to a point, but it depends on us how we want to develop this technology. It depends on us what results this technology is going to give us as output. Because data is the oxygen of AI, and transparency should be at its core. So it’s up to us to shape the future of AI now, and to make the data more representative. And the focus of the IGF on bringing youth into the discussion is something to really congratulate, because we have a big youth community at this IGF. So I’m really looking forward to this discussion. Over to you, Ananda.

Ananda Gautam: Thank you so much, Paola, for touching on how powerful AI can be. And the work of your government in bringing together a multi-stakeholder committee was really commendable. I’d like to go to Yonah Welker next. I’ll also give you five minutes to briefly introduce yourself and build on the base that Dr. Monica and Paola have set up. Over to you.

Yonah Welker: Yes, thank you so much. It’s a pleasure to be back in Riyadh. Three years ago, I had the opportunity to curate the Global AI Summit’s AI for the Good of Humanity, and we continued this movement. I’m a visiting lecturer at the Massachusetts Institute of Technology, and I’m also an ambassador of EU projects to the MENA region. My goal is to bring all of these voices and ideas into actual policies, say the EU AI Act or the Code of Practice. Today, I would specifically love to address how these may affect the most vulnerable groups and, as Paola mentioned, individuals with disabilities. That’s why I would love to quickly share my screen; hopefully you can see it. So, 28 countries signed the agreement on AI safety, including not only Western countries but also countries of the Global South, Nigeria, Kenya, and countries of the Middle East, Saudi Arabia, and the UAE. And the big question is how these frameworks can actually address designated and vulnerable groups. For instance, currently one billion people, 15% of the world, live with disabilities, according to the World Health Organization. And it’s important to understand that sometimes these disabilities are invisible; take neurodisabilities: at least one in six people live with one or more neurological conditions. And it’s actually a very complex task to bring all of these things into the frameworks. That’s why, for the EU, we have a whole combination of laws and frameworks: we address classifications and taxonomies in the Accessibility Act and the standardization directive; we try to address manipulation and addictive design at the level of the AI Act, the Digital Services Act, and the GDPR; we try to understand and identify higher risks for systems related to certain critical infrastructure, transparency risks, and prohibited uses of affective computing. But it’s still not enough, because we need to understand how many systems we actually have and how many cases we have.
And for instance, for assistive technologies, we have over 120 technologies per the recent OECD report, and I had the opportunity to contribute to this report. We use AI to augment smart wheelchairs, walking sticks, and geolocation and city tools. We use AI to support people with hearing impairments, using computer vision to turn sign language into text. We support cognitive accessibility, including ADHD, dyslexia, and autism. But we should also understand all the challenges that come with AI, including recognition errors, where individuals with facial differences or asymmetry, such as craniofacial syndromes, are simply not properly identified by facial recognition systems, as my colleague mentioned; or interaction errors, where individuals cannot understand an AI interface or cannot hear or see its signals; or exclusion by generative AI and language-based models through excluding patterns and errors. We also have all the complexity driven by different machine learning techniques: supervised learning, which is connected to errors induced by humans; unsupervised learning, which brings in all the errors and social disparities from history; and reinforcement learning, which is limited by its training environments, including in robotics and assistive technologies. And finally, we should understand that AI is not limited to software. It’s also about hardware: the human centricity of physical devices, safety, motion and sensing component safety, power components and environmental safety, and the production and training cycle. So overall, working on disability-centric AI is not just about words; it’s an extremely complex process of building environments with a multi-modal and multi-sensory approach, where we deal with families, caregivers, patients, and different types of users, and try to understand and identify scenarios of misuse, actions and non-actions (so-called omission), potential manipulation, and addictive design.
That’s why the next level of AI safety institutes, offices, and oversight will include all these comprehensive parameters: we talk not only about a risk-based approach but about understanding different scenarios (workplaces, education, law enforcement, immigration); we think about taxonomies, frameworks, and accident repositories, working with the UN, the World Health Organization, UNESCO, and the OECD; and finally, we try to understand the intersectionality of disabilities, thinking about children and minors, women and girls, and all the complexity of the history and context behind these systems. Thank you.

Ananda Gautam: Thank you, Yonah, for your wonderful points on how AI could be used in assistive technologies, and on the challenges: even a very minor issue matters, since we cannot accept a minimal level of error in the use of AI in healthcare systems. We’ll come back to you on these questions. So I’ll ask Abeer to talk about herself; I’ll give you five minutes as well. Please introduce yourself, then give your opening remarks. Thank you.

Abeer Alsumait: Thank you. Hello everyone. It’s a privilege to be part of this discussion, and I would also like to thank Paola for initiating and kick-starting it. I’d like to thank the rest of the panel and the moderators, as well as the event organizers. To introduce myself briefly: I’m Abeer Alsumait, a public policy expert with a little over a decade of experience in cybersecurity, ICT regulation, and data governance in the Saudi government. I hold a master’s degree in public policy from Oxford University and a bachelor of science in computer and information sciences. My interest lies in shaping inclusive and sustainable digital policies that drive innovation and advance the digital economy. I would like to start this session’s conversation by mentioning examples that show that while algorithms and AI promise efficiency and innovation, they have the power to replicate and amplify societal inequalities when not governed responsibly. The first example I would like to mention is from France, where a welfare agency used an algorithm to detect fraud and errors in welfare payments. While on paper this was a wonderful idea, in practice it ended up impacting specific segments of the population and marginalized groups, specifically single parents and individuals with disabilities, far more than others: it tagged them as high risk more frequently than the rest of the system’s beneficiaries. The impact on those individuals was profound. It led to more investigations, a lot more stress, and in some cases even suspension of benefits.
So in October of this year, a coalition of human rights organizations launched legal action against the French government over this algorithm used by the welfare agency, arguing that it violates privacy laws and anti-discrimination regulations. This case is a reminder of the risks inherent in opaque and poorly governed AI tools. Another example I would like to quickly highlight is in the healthcare sector, where a 2019 study from the University of Pennsylvania highlighted an AI-driven healthcare system that was used to allocate medical resources for a little over 200 million patients. That system relied on historical healthcare expenditure as a proxy for healthcare needs. The algorithm did not consider the systematic disparities in healthcare access and spending in society at that time, and it resulted in Black patients being less likely to be flagged as needing enhanced care than their white counterparts, by a margin that reached 50 percent. So even though this system was intended to streamline healthcare delivery, it ended up perpetuating inequality and deepening distrust in AI systems and in technology overall. This example underscores one undeniable truth: algorithms are not neutral. When built on biased data or flawed assumptions, they can amplify existing injustices and exacerbate exclusion, often impacting the most vulnerable populations. These challenges generated actions from governments at an international level, one of which, as mentioned by Dr. Lopez, is the EU AI Act that entered into force this year. It classifies AI systems based on risk, designating areas such as welfare, employment, and healthcare as high risk, where very high standards of transparency, quality, and human intervention are required. A lot of nations and governments followed suit, I believe.
One example of that is here in my country, Saudi Arabia, where the Saudi Data and Artificial Intelligence Authority, established a few years ago, recently adopted AI ethics principles that emphasize transparency, fairness, and accountability. Therefore, I believe governments play a very important role. While every actor and every player is important in these discussions and conversations, governments have a critical role in regulating, establishing responsibility, and advancing the way forward for AI adoption in an equitable and fair way. Thank you.
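The healthcare example Abeer describes turns on a subtle design choice: ranking patients by a proxy variable (past spending) rather than by actual need. A minimal sketch of that mechanism, with invented numbers purely for illustration, not the actual system:

```python
# Sketch of the proxy-label failure: prioritizing patients by past
# spending (a proxy) instead of true need. All values are invented.

patients = [
    # (id, true_need_score, past_spending) -- spending loosely tracks
    # need, but is systematically lower for patients with less access
    # to care (here, p1).
    ("p1", 9, 4000), ("p2", 9, 9000),
    ("p3", 5, 5500), ("p4", 3, 3000),
]

def flag_top_k(patients, key, k=2):
    """Flag the k patients ranked highest by the given key."""
    ranked = sorted(patients, key=key, reverse=True)
    return {p[0] for p in ranked[:k]}

by_spending = flag_top_k(patients, key=lambda p: p[2])  # deployed proxy
by_need     = flag_top_k(patients, key=lambda p: p[1])  # ground truth

print(sorted(by_spending))  # ['p2', 'p3'] -- p1, equally sick, is missed
print(sorted(by_need))      # ['p1', 'p2']
```

The ranking logic itself is correct; the harm comes entirely from what the chosen label measures, which is why label choice belongs in any impact assessment.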

Ananda Gautam: Thank you, Abeer. So, I’ll come back to Dr. Monica. You touched on how algorithmic biases arise and what the role of the private sector could be. I’d like to ask what measures the private sector could take to overcome those biases, along with the role of other stakeholders. If there are any best practices that could be shared, kindly share them, and I’ll ask you to wrap up soon. Thank you.

Monica Lopez: Absolutely, thank you. Thank you for that question. I know I briefly mentioned some of them, but I’ll now highlight some that many of our fellow colleagues have in fact already been mentioning. The first one, which is starting to happen but not to the extent I believe it should, is the whole question of diversity in teams. Again, we hear this a lot: we hear that we need to bring different perspectives to the table, but at the end of the day, unfortunately, I have seen even startups, small and medium-sized enterprises, make the argument that they don’t have enough resources, that they can’t. And they actually do. Sometimes it’s as simple as bringing the very customers and clients they intend their product or service to serve into the discussion. So I would say that is one very key element, and we need to make it a requirement at this point; it needs to be a best practice, frankly. The other one is bias audits. We are certainly seeing across legislation the requirement to provide audits for these systems, particularly on the topic of bias, to ensure that they are non-discriminatory and non-biased. That is a good thing. The problem, however, is that we haven’t yet standardized the type of documentation, the metrics, and the benchmarks. That is the conversation right now, not just in the private sector but certainly also in academia. I am also in communication with individuals from IEEE and ISO, who set the industry standards, and this is a very big topic of debate right now: how do we standardize what these audits should look like, and how do we make sure that not only do we standardize that, but we actually have the right committees in place, experts who can then review this documentation? So I would say that, while extremely important, this sometimes becomes a barrier of sorts, precisely because organizations and companies don’t know exactly what needs to go into these audits. That’s the second element. The third and final point is the whole issue of transparency and explainability of these systems. We’ve heard many, many times about the black-box nature of these systems. But to be quite honest, we know much more about these systems than that. Developers do know the data that is involved; we do make mathematical assumptions. So there is a lot of information at the very beginning, at the stage of data collection and system creation, about which we know a great deal, and we’re not necessarily being very transparent about it in the first place. That in and of itself is extremely important, but it is also becoming a type of best practice, because if you can establish that transparency from the beginning, it has downstream effects across the entire AI lifecycle. That becomes extremely important when you integrate a system and, let’s say, you have a problem, a negative outcome, where someone ends up being harmed: you can then essentially reverse-engineer back again if you have that very clear transparency established at the beginning. We are starting to see some good practices around this, particularly around model cards and nutrition-style labels, especially in healthcare; there are examples in healthcare, and I do a lot of work with the healthcare industry. So there is a very big push right now to standardize and normalize nutrition-style labels for AI model transparency, which I think should then be utilized across all systems, frankly, at this point, in all contexts and domains. Thank you.
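The model cards and nutrition-style labels Dr. Lopez mentions are, at their core, structured disclosures that travel with a model. A minimal, hypothetical sketch follows; the field names and numbers are illustrative only, not any published standard:

```python
# A hypothetical "model card" as structured metadata. A machine-readable
# card lets audits be automated, e.g. flagging subgroup performance gaps.

model_card = {
    "model": "triage-risk-v2",  # hypothetical system name
    "intended_use": "prioritize follow-up care; not for denial of care",
    "training_data": {
        "source": "2018-2022 claims records (illustrative)",
        "known_gaps": ["under-represents uninsured patients"],
    },
    "evaluation": {
        "overall_auc": 0.81,
        # performance reported per group, not just in aggregate
        "subgroup_auc": {"group_a": 0.83, "group_b": 0.74},
    },
    "limitations": ["spending used as a partial proxy for need"],
    "contact": "audit-team@example.org",
}

def flag_performance_gaps(card, max_gap=0.05):
    """True if any subgroup lags the best subgroup by more than max_gap."""
    scores = card["evaluation"]["subgroup_auc"].values()
    return (max(scores) - min(scores)) > max_gap

print(flag_performance_gaps(model_card))  # True -> the card surfaces the disparity
```

The point of the format is exactly the downstream traceability Dr. Lopez describes: if a harm appears after deployment, the card records what data, assumptions, and known gaps went in at the start.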

Ananda Gautam: Thank you, Dr. Monica. So I’ll go to Paola. You have already worked on the AI readiness assessment for your country, and countries and regions are making declarations; based on your experience, how can these be translated into action? Can you share, please?

Paola Galvez: Sure, Ananda. And what you say is key, right? How to move from declarations to actions; we’ve seen so many commitments already. So, great call, and thank you for the question. I’d say, first of all, we need to start by adopting the international frameworks on AI. Countries that have not adopted them will be left out. That way, we ensure alignment with global standards and best practices, and it also helps local businesses join in and makes it easier for them to operate across borders. This is first. Second, when formulating national AI policies, governments need to develop a structured and meaningful public participation process. This means receiving comments from all stakeholders, but it’s not only that, because that already happens a lot in my country, I can tell you: by law, any regulation needs to be published for 30 days. Actually, it just happened; the second draft of the AI regulation was published. But what we need for meaningful participation is the government saying how it took each comment into account, and if it is not considering a comment, why. I believe that citizens, civil society organizations, and the private sector, those who commented, need to know what happened after they commented on any bill. Third, enhance transparency and accessibility: any AI policy material must be readily accessible, complete, and accurate to the public. Then, independent oversight. I think it’s a must, Ananda: creating or designating an independent agency. Abeer mentioned the Saudi Data and Artificial Intelligence Authority; I think that is a very good example. Sometimes governments have a challenge with this, because they say, oh, it’s a huge amount of effort, people, resources, right? But if creating a new one is not possible, then let’s think: maybe the Data Protection Authority can take over AI. Also, and I think this cannot be left behind, investment in AI policy and AI skills development is a must.
We can have the best AI law, but if we don’t help our people understand what AI is, and how to read its output and know that AI can hallucinate, we will be lost. So AI skills for the people are a must. And just to finish, as I’ve always said, with a gender lens, because gender equity and diversity in AI are a must, something that is not being looked at as it should be. You mentioned I conducted UNESCO’s AI readiness assessment methodology, and I’m proud to say that the UNESCO Recommendation on the Ethics of AI is the only document at the moment that has a chapter on gender. It should be reviewed, because it’s very comprehensive and it has practical policies that should be taken into consideration and put into practice. And of course, environmental sustainability should be considered in AI policy; it is often overlooked. What is the impact on energy? Should we promote energy-efficient AI solutions? Definitely. Minimizing the carbon footprint, of course, and fostering sustainable practices. I will finish with this data point: when you send a question to a large language model, as we all know them, ChatGPT, Claude, Gemini, et cetera, it has the same consumption as an airplane flying from Tokyo to New York. So we should be thoughtful about what we are sending to AI, or maybe Google can do it for us, too. Thank you.

Ananda Gautam: Thank you, Paola, for your strong thoughts. I’ll come back to Yonah. You mentioned AI in assistive technologies; now let’s turn to how legal frameworks can complement assistive technologies while protecting the vulnerable populations that use them. We briefly underlined that a minor error might be a major one in the case of assistive technologies. Over to you, Yonah.

Yonah Welker: Yes. So first of all, we have a few main elements of these frameworks. The first is related to taxonomies, repositories, and cases. Here I would love to echo my colleagues, Dr. Monica and Paola: we actually need to involve all of the stakeholders. For instance, cooperating with the OECD, we involved over 80 organizations to understand the existing barriers to access to these technologies: affordability, accessibility, energy consumption, safety, and adoption techniques. That’s the first thing. The second thing is accuracy in regional solutions. One of the lessons we learned working in both the EU and the MENA region is that we can’t localize OpenAI, we can’t localize Microsoft solutions, but we can build our own solutions, sometimes not large language models but small language models, not with 400 billion parameters but maybe with 5, 10, or 15 billion parameters, for more specific purposes or languages. For instance, when we did the research for the Hungarian language, we had 1,000 times fewer training sources for ChatGPT in comparison to English. We have a similar situation for many other non-English languages. It just doesn’t work, not only from a regional perspective but from a scientific research and development perspective. Another thing is dedicated safety models. Sometimes we can’t fix all of the issues within the model, but we can build dedicated agents or additional solutions that track or improve our existing systems. For instance, currently, for the Commission, I am evaluating a few companies and technologies that address privacy concerns, compliance with the GDPR, data leakages and breaches, and also online harassment, hate speech, and other parameters. This is also complemented by safety environments and oversight. So it’s the job of the government to create so-called testbeds and regulatory sandboxes.
These are specialized centers where startups can come to test their AI models, to make sure that on the one hand they are compliant, and that on the other they actually build safe systems. This specifically relates to the areas of so-called critical infrastructure: health, education, and smart cities, and, for instance, Saudi Arabia is known for so-called cognitive cities. All these areas are part of our work as we try to build efficient, resilient, and sustainable solutions. And finally, there is cooperation with intergovernmental organizations. For instance, we work on a framework called Digital Solutions for Girls with Disabilities with UNICEF, and we work with UNESCO on AI for children. We’re trying to reflect more specific scenarios and adoption techniques related to specific ages, say from eight to 12 years old, or to specific regions or genders, including both the specifics of adoption and safety considerations, and even unique conditions or illnesses that are very specific to a particular region. For instance, we see very different statistics for diabetes, craniofacial syndromes, and different types of cognitive and sensory disabilities if we compare the MENA region with the EU. So it’s a very complex process. And as I’ve mentioned, our policies are now becoming overlapping: even for privacy, manipulation, and addictive design, we have overlap not only in the AI Act but also in other frameworks, the Digital Services Act and data regulation. So some essential pieces of our vision exist across different frameworks, and not even governmental employees are always aware of it. The final thing is AI literacy adoption: we are working to improve the literacy of the governmental workers and governors who will implement these policies and bring them to life.

Ananda Gautam: Thank you, Yonah, so much. So I’ll come back to Abeer. We have been talking about the complexity of making AI responsible, and responsible AI demands accountability and transparency. With so many automated AI systems, who will be responsible if an automated car kills someone in the street? This has been a serious question, and there are other consequences. In this context, how can governments ensure responsible AI while ensuring accountability and transparency? Over to you. Thank you.

Abeer Alsumait: Thank you. I think this question actually relates to what Dr. Lopez mentioned: the keywords here are transparency and explainability. Of course, regulations and laws establish responsibilities and make sure every actor involved in any event knows their role and knows when they are responsible. But the fact that actors can explain and be transparent about how their systems work, how they operate, and how they might impact others, individuals, and specifically vulnerable populations, is really key. And as Dr. Lopez mentioned, the private sector knows more than we perhaps realize, but we are not very clear on how we want transparency and explainability to work. My thought on that is that governments should work hand in hand, push for standardization to happen as soon as possible, be clear in establishing responsibility, and be clear about what it means to have transparency for AI and algorithms. One extra thing I think governments should also focus on is establishing a right, and a way, for individuals to challenge such systems and the impactful algorithms in their lives. So my idea is that there should be continuous evaluation and risk assessment of how a system is actually working in real life, and in case any incident of bias or discrimination happens, there should be a clear procedure for individuals and for governments to start auditing and reviewing any system that is in operation and impacting the lives of individuals.

Ananda Gautam: Thank you, Abeer. Maybe we’ll come back to you in the Q&A session. There is one contributor in our audience; I’ll ask her to provide her contribution, and then Martilda will bring in what we have from the discussions in the chat and any questions online, and then we’ll go to the question and answer. Over to you, please.

Audience: Thank you so much. My name is Zemizna Atareki, and I’m representing the Saudi Green Building Forum, a non-governmental, non-profit organization that supports and promotes green practices as well as decreasing carbon emissions and energy consumption. Of course, it contributes to the digital transformation the world is now witnessing. I would like to contribute a critical perspective: while algorithms offer immense potential to enhance our daily lives, we face fundamental challenges relating to bias and exclusion. Many of these systems function opaquely, as Dr. Monica said, lacking transparency, which of course perpetuates social disparities and exacerbates discrimination against marginalized communities. In the absence of proper scrutiny and accountability, algorithms sometimes contribute to human rights violations instead of addressing them. What should we do about that as civil society? We need to take action and call for greater transparency and accountability, to ensure algorithms are open to scrutiny and include clear mechanisms for identifying and addressing biases. We need to integrate human rights into algorithm design, which means focusing on developing human-centered algorithms that prioritize the needs of marginalized groups. And finally, we need to foster multilateral collaboration to engage all stakeholders, as you all mentioned, to ensure algorithms are fair and inclusive, considering diverse cultural and social dimensions. Now, we recommend the following. First, launch a global algorithmic transparency initiative that establishes an international platform to set standards for evaluating the impact of algorithms on human rights and promoting transparency.
Second, design inclusion-oriented algorithms: develop algorithmic tools that prioritize accessibility, improve service delivery for people with disabilities, and ensure greater inclusivity. And last but not least, implement training programs that build the capacity of developers and decision-makers to understand the risks of algorithmic bias and address them effectively. Thank you.

Ananda Gautam: Thank you so much. So, do we have any questions on site? There are no online questions, I believe. While asking questions, please also mention whom you are asking so that it is easier to answer, or if it is a common question, let us know as well. Please. OK. Thank you.

Audience: My name is Aaron Promise Mbah, and I worked on this with Paola and all of you here, so I’m very excited because of the insights we’ve been sharing. I have a question, and I would like Dr. Monica to help me address it. I understand what you said about algorithms helping with marketing and other business, and the kind of divide and risk that comes with it, that it can actually amplify the digital divide, especially for persons with disabilities. I’ve worked with some persons with disabilities using social media, and there’s a particular case, I think Abeer also mentioned something about depression and suicide. So you have someone click on Spotify to listen to music, maybe he’s feeling down, and after that you see Spotify recommending that kind of suicide-related music. How do we address this? Paola also mentioned something about standardization, having a policy, and countries making declarations, and how to take action on this. Then she talked about ownership and public participation. Now, I’m from Nigeria, and you talked about a particular policy that Nigeria has adopted. Nigeria has a lot of policies, even an AI policy; we are always at the forefront of adopting, when we look at other countries doing a lot of things and then we start doing our own. We have a lot of these documents, but then there’s no implementation and enforcement. So how do we ensure that it’s not just paperwork, that we don’t just produce all of these policies and leave them on paper, but that they are actually enforced and followed through into implementation? If you can share some of your insights about that. Thank you very much.

Monica Lopez: Thank you for that question. It’s very complex; you really touched upon many, many aspects, but something really stands out here, and perhaps Paola had also mentioned this at one point, so let me backtrack a second. Yes, everybody’s talking about regulation. Everybody’s talking about standards, normalization. Everybody’s talking about needing implementation, about how we do enforcement. But I think part of the problem lies in the fact that we simply do not have enough public awareness and understanding. Because if we actually did have more of that, there would be more of a demand. And I see this in terms of, yes, we hear some very tragic examples. You mentioned someone who has depression and may use Spotify and then get recommended new types of music to, quote unquote, improve or fix, one has to be careful with the words one uses here, deal with that situation. And we’ve even seen two recent suicides as a result of chatbot use, because of an anthropomorphization of these systems. I think it really goes back to the fact that many users, unfortunately maybe most users, do not understand these systems fundamentally. That’s an education issue. That’s an education question. Because if you know and understand, then you can critically evaluate these systems. You can be more proactive because you know what’s wrong, or you see the gap, you see what needs to be improved. And I say this because, I didn’t mention this, but I’m also in academia: I teach in the School of Engineering at Johns Hopkins University in the Washington DC, Maryland region in the United States, and I teach courses on AI ethics, policy, and governance to computer scientists and engineers. And I love when they come to the beginning of class with no awareness.
And at the end, they are absolutely more engaged, and they all say they want to go and be those engineers who can talk to policymakers. To me, that is very clear evidence; whether they’re high schoolers, undergraduate students, working professionals going back to school, graduate students, whatever it is, I see this change. And it’s change because of the power of knowledge. So my main call here is that we need far more incentivization to create much more educated users of all ages. Then we’re going to see that demand on companies: we want to ensure that our data is private, we want to ensure that we’re not being harmed, we want to ensure that we actually benefit from these technologies. I’ll stop there. I think others can add to it, I’m sure.

Ananda Gautam: Hello. Thank you, Monica, for your wonderful response. We have only five minutes left, and I have already taken one question. So Martilda, is there any online discussion, question, or contribution? No? If there is any question, please feel free; contributions are also welcome. We have five minutes. Please keep the time in mind, both speakers and questioners.

Audience: Thank you, I’ll be quick. It’s been a great discussion. The point on education is very well made, and we’ve realized in our work, I work in New Delhi in India, that even with a very specialized sector of the population, like judges and lawyers, it takes a lot of conversation, a lot of detailing, to get to a point where, for something like bias, which judges work with daily, they start to understand what bias in an AI system might look like. So I guess what I’m trying to ask is: when something requires such specialized and detailed understanding, then clearly the problem isn’t with people not being able to understand; maybe it’s with the technology not being at a stage where it’s readily explainable, easily explainable, for societal use. So is there any merit to, we frequently get these discussions on whether there’s a need to pause, especially with technologies like deepfakes, which everyone who does research in this area knows are going to be used massively for harmful ends. Is there any credence or currency to pushing for a pause at certain levels, or are we way past that point already, and we just have to mitigate now? That’s a small question. Sorry if it’s a little depressing. Yeah.

Ananda Gautam: Thank you so much for the question. If there are any other questions, let’s take them, and then I’ll give each speaker one minute to wrap up. Any questions or contributions from the floor? No. None from online. So each speaker can have one minute to respond; if not, they can proceed. Yeah. OK. Just a one-liner that you want to give for the wrap-up. Thank you. You can start with Abeer, maybe.

Abeer Alsumait: A quick question to end with; I think we’ve all just pondered it, and I don’t think there is a real answer. Are we beyond that point? I don’t think so. But should we pause? I also don’t think so, to be honest. I think we can put more effort into making technology more explainable and bridging the gap little by little. And that’s, I think, what every actor and every player should work towards. Those are my thoughts on that.

Paola Galvez: Totally agree. Absolutely, we cannot pause, because if some group decides to do it, then others will continue; it would be like just putting a blanket over your eyes. So we cannot do it. But we can use what we have. And if our countries don’t have a data protection law or a national AI strategy, we need to push for it to happen, because if a country does not have an idea of how it wants this technology to develop, what is the future of us as citizens? So I just leave this question for us, and let’s reflect on how we can contribute to the future of AI.

Ananda Gautam: Thank you, Paola. Now, Monica and Yonah, please.

Monica Lopez: Yeah, I would absolutely agree with both comments. We can’t pause, we can’t ban; that’s not going to work, absolutely. We’re moving far too fast anyway at this point. But I would say that where there’s a will, there’s a way. So if we all come to the agreement and acknowledgement, and I mean all of us, not just those of us here right now and our colleagues, everyone, that we need to do this, then I think it’s possible, and we need to act.

Ananda Gautam: Yonah, please.

Yonah Welker: Yes, I’m always on the positive side, because finally we have all the stakeholders together, and that includes the European Commission. I would love to quickly respond to Aaron’s question about the keywords and suicide, because it’s actually about awareness. If you know that recommendation engines use so-called stop words, if you know how the history of these engines works, you can easily fix it through regulatory sandboxes. Emerging companies and start-ups just come into the centers, and you can provide the oversight to fix these issues. The same with bias. Then you know that bias is not an abstract category but just a problem of under- or over-representation, just a bigger error rate for smaller groups; it is purely a data and mathematical matter coming from society. You can clearly identify the issue; it can be a technical issue, it can be a social issue, and once you see it, you can fix it. That’s why we now have these tools, testbeds, regulatory sandboxes, policy frameworks, and all the stakeholders working together to come up with real-life terms and understanding, and finally we can fix it together. Thank you.

Ananda Gautam: Thank you, Yonah. Thank you, all of our panelists, and thank you, Paola, for organizing this. To all of our on-site audience and audiences online: this is not the end of the conversation; we have just begun it. You can connect with our speakers on LinkedIn or wherever you are. Thank you so much, everyone. Have a good rest of the day. Thank you all. Could our panelists please stay? We can take a picture with you on the screen. Thank you.

M

Monica Lopez

Speech speed

158 words per minute

Speech length

2839 words

Speech time

1075 seconds

Algorithms reflect societal biases and perpetuate inequalities

Explanation

Algorithms are not neutral tools but reflections of existing biases and systemic prejudices in society. These biases are embedded in the design and training data of algorithmic systems.

Evidence

Examples include facial recognition technology showing higher error rates for women and people of color, AI-driven hiring algorithms discriminating based on gender and race, and criminal justice risk assessment tools perpetuating racial biases.

Major Discussion Point

Algorithmic Bias and Its Impact

Agreed with

Abeer Alsumait

Paola Galvez

Agreed on

Algorithmic bias perpetuates inequalities

Algorithmic bias has profound human rights implications

Explanation

The biases in algorithmic systems have far-reaching consequences for human rights. These systems can systematically disadvantage marginalized communities and create digital mechanisms of exclusion.

Major Discussion Point

Algorithmic Bias and Its Impact

Diversity in AI development teams is crucial

Explanation

Having diverse teams in algorithmic development is essential for responsible AI. This diversity brings new perspectives to the table and helps in creating more inclusive and unbiased systems.

Major Discussion Point

Addressing Algorithmic Bias and Promoting Responsible AI

Rigorous algorithmic auditing and transparency are necessary

Explanation

There is a need for comprehensive auditing of algorithmic systems to ensure they are non-discriminatory and unbiased. Transparency in these audits and in the overall functioning of AI systems is crucial.

Evidence

The European Union’s AI Act requirement for algorithmic auditing was mentioned.

Major Discussion Point

Addressing Algorithmic Bias and Promoting Responsible AI

Proactive bias mitigation techniques should be implemented

Explanation

Organizations must implement proactive measures to mitigate bias in AI systems. This includes careful curation of training data, implementation of fairness constraints, and development of testing protocols to examine potential discriminatory outcomes.

Major Discussion Point

Addressing Algorithmic Bias and Promoting Responsible AI

Comprehensive regulatory frameworks are needed

Explanation

Governments and international bodies need to develop comprehensive regulatory frameworks that treat algorithmic discrimination as a fundamental human rights issue. These frameworks should include clear legal standards for algorithmic accountability and mechanisms for individuals to challenge algorithmic decisions.

Major Discussion Point

Addressing Algorithmic Bias and Promoting Responsible AI

Agreed with

Paola Galvez

Agreed on

Need for comprehensive regulatory frameworks

Ongoing community engagement is essential

Explanation

Continuous dialogue with communities most likely to be impacted by algorithmic systems is crucial. This involves creating participatory design processes across the AI lifecycle and establishing relevant feedback mechanisms.

Evidence

Companies that make concerted efforts to engage with affected communities have seen better outcomes.

Major Discussion Point

Addressing Algorithmic Bias and Promoting Responsible AI

Agreed with

Paola Galvez

Agreed on

Importance of public participation and awareness

Public awareness and understanding of AI systems is lacking

Explanation

There is a general lack of public awareness and understanding about how AI systems work. This lack of knowledge makes it difficult for users to critically evaluate these systems and be proactive in identifying issues.

Evidence

Recent suicides as a result of chatbot use were mentioned, highlighting the dangers of anthropomorphizing AI systems.

Major Discussion Point

Education and Public Awareness on AI

Education is key to creating more engaged and critical AI users

Explanation

Educating users about AI systems is crucial for creating a more engaged and critical user base. This education can lead to more demand for responsible AI practices from companies and policymakers.

Evidence

The speaker’s experience teaching AI ethics and policy to computer scientists and engineers, who become more engaged and want to bridge the gap between technology and policy after learning about these issues.

Major Discussion Point

Education and Public Awareness on AI

A

Abeer Alsumait

Speech speed

134 words per minute

Speech length

1054 words

Speech time

471 seconds

AI systems have demonstrated bias against marginalized groups in various domains

Explanation

AI systems have shown biases that disproportionately affect marginalized groups. These biases have been observed in various sectors including welfare, healthcare, and criminal justice.

Evidence

Examples include a French welfare agency’s algorithm that disproportionately flagged single parents and individuals with disabilities as high risk for fraud, and a healthcare algorithm that underestimated the healthcare needs of black patients compared to white patients.

Major Discussion Point

Algorithmic Bias and Its Impact

Agreed with

Monica Lopez

Paola Galvez

Agreed on

Algorithmic bias perpetuates inequalities

Pausing AI development is not a viable option

Explanation

Despite the challenges and risks associated with AI, pausing its development is not considered a viable solution. The focus should be on addressing issues and improving the technology rather than halting progress.

Major Discussion Point

Balancing AI Progress and Responsible Development

Differed with

Paola Galvez

Differed on

Pausing AI development

Focus should be on making AI more explainable and bridging knowledge gaps

Explanation

Instead of pausing AI development, efforts should be directed towards making AI systems more explainable and understandable. This involves bridging knowledge gaps between AI developers and users.

Major Discussion Point

Balancing AI Progress and Responsible Development

P

Paola Galvez

Speech speed

152 words per minute

Speech length

1513 words

Speech time

597 seconds

AI can deepen inequalities if not developed responsibly

Explanation

While AI has the potential to be a catalyst for social change, it can also exacerbate existing inequalities if not developed and implemented responsibly. The output of AI systems depends on how we choose to develop this technology.

Major Discussion Point

Algorithmic Bias and Its Impact

Agreed with

Monica Lopez

Abeer Alsumait

Agreed on

Algorithmic bias perpetuates inequalities

Structured public participation is needed in AI policy development

Explanation

Governments need to develop a structured and meaningful public participation process when formulating national AI policies. This involves not only receiving comments from all stakeholders but also providing feedback on how these comments were considered.

Major Discussion Point

Government Role in Responsible AI Development

Agreed with

Monica Lopez

Agreed on

Importance of public participation and awareness

Investment in AI skills development is crucial

Explanation

There is a need for investment in AI skills development for the general public. Understanding AI is crucial for people to critically engage with these technologies and make informed decisions.

Major Discussion Point

Government Role in Responsible AI Development

Agreed with

Monica Lopez

Agreed on

Importance of public participation and awareness

Independent oversight of AI systems is necessary

Explanation

An independent agency or body should be established to oversee AI development and implementation. This oversight is crucial for ensuring responsible AI practices.

Evidence

The example of Saudi Data and AI Agency was mentioned as a good practice.

Major Discussion Point

Government Role in Responsible AI Development

Countries need to develop national AI strategies

Explanation

It is crucial for countries to develop national AI strategies to guide the development and use of AI technologies. Without such strategies, the future of citizens in relation to AI remains uncertain.

Major Discussion Point

Balancing AI Progress and Responsible Development

Agreed with

Monica Lopez

Agreed on

Need for comprehensive regulatory frameworks

Y

Yonah Welker

Speech speed

125 words per minute

Speech length

1479 words

Speech time

706 seconds

AI has potential to support people with disabilities

Explanation

AI technologies have significant potential in supporting people with disabilities. Various assistive technologies powered by AI can help improve the lives of individuals with different types of disabilities.

Evidence

Examples include AI-augmented smart wheelchairs, walking sticks, geolocation tools, and technologies that support hearing impairment and cognitive accessibility.

Major Discussion Point

AI in Assistive Technologies

Challenges exist in developing inclusive AI for assistive technologies

Explanation

Developing inclusive AI for assistive technologies comes with various challenges. These include recognition errors for individuals with facial differences, exclusion by generative AI, and issues related to different machine learning techniques.

Evidence

Examples of challenges include facial recognition systems not properly identifying individuals with facial differences or asymmetry, and errors in AI interfaces that some individuals cannot understand, hear, or see.

Major Discussion Point

AI in Assistive Technologies

Legal frameworks need to complement assistive technologies

Explanation

Legal frameworks should be developed to complement and support the use of AI in assistive technologies. These frameworks need to consider various aspects including accessibility, safety, and potential misuse scenarios.

Major Discussion Point

AI in Assistive Technologies

Collaborative efforts are needed to address AI challenges

Explanation

Addressing the challenges in AI development and implementation requires collaborative efforts from all stakeholders. This includes the use of tools like testbeds, regulatory sandboxes, and policy frameworks.

Evidence

The speaker mentioned the existence of tools like testbeds and regulatory sandboxes where emerging companies and startups can come to fix issues related to AI systems.

Major Discussion Point

Balancing AI Progress and Responsible Development

A

Audience

Speech speed

153 words per minute

Speech length

905 words

Speech time

353 seconds

Specialized understanding is needed even for professionals like judges

Explanation

Even specialized professionals like judges and lawyers require detailed conversations and explanations to understand concepts like bias in AI systems. This highlights the complexity of AI technologies and the challenges in making them easily explainable for societal use.

Evidence

The speaker’s experience working with judges and lawyers in New Delhi, India, to help them understand AI bias.

Major Discussion Point

Education and Public Awareness on AI

Agreements

Agreement Points

Algorithmic bias perpetuates inequalities

Monica Lopez

Abeer Alsumait

Paola Galvez

Algorithms reflect societal biases and perpetuate inequalities

AI systems have demonstrated bias against marginalized groups in various domains

AI can deepen inequalities if not developed responsibly

The speakers agree that AI systems and algorithms can reflect and amplify existing societal biases, potentially deepening inequalities if not developed and implemented responsibly.

Need for comprehensive regulatory frameworks

Monica Lopez

Paola Galvez

Comprehensive regulatory frameworks are needed

Countries need to develop national AI strategies

Both speakers emphasize the importance of developing comprehensive regulatory frameworks and national AI strategies to guide responsible AI development and implementation.

Importance of public participation and awareness

Monica Lopez

Paola Galvez

Ongoing community engagement is essential

Structured public participation is needed in AI policy development

Investment in AI skills development is crucial

The speakers agree on the need for public engagement, participation in AI policy development, and investment in AI skills development to create a more informed and engaged public.

Similar Viewpoints

All speakers agree that pausing AI development is not the solution. Instead, they advocate for continued development with a focus on making AI more explainable, addressing challenges collaboratively, and implementing national strategies and frameworks.

Monica Lopez

Abeer Alsumait

Paola Galvez

Yonah Welker

Pausing AI development is not a viable option

Focus should be on making AI more explainable and bridging knowledge gaps

Countries need to develop national AI strategies

Collaborative efforts are needed to address AI challenges

Unexpected Consensus

Importance of diversity in AI development

Monica Lopez

Yonah Welker

Diversity in AI development teams is crucial

Challenges exist in developing inclusive AI for assistive technologies

While coming from different perspectives (general AI development and assistive technologies), both speakers emphasize the importance of diversity and inclusivity in AI development, highlighting an unexpected area of consensus.

Overall Assessment

Summary

The speakers generally agree on the existence of algorithmic bias, the need for comprehensive regulatory frameworks, the importance of public participation and awareness, and the necessity of continued AI development with a focus on responsible practices.

Consensus level

There is a high level of consensus among the speakers on the main challenges and necessary actions for responsible AI development. This consensus suggests a shared understanding of the critical issues in AI governance and ethics, which could facilitate more coordinated efforts in addressing these challenges across different sectors and stakeholders.

Differences

Different Viewpoints

Pausing AI development

Abeer Alsumait

Paola Galvez

Pausing AI development is not a viable option

We cannot pause. Because if some group decides to do it, then some others will continue. And it’s like just put a blanket in your eyes. So we cannot do it.

While both speakers agree that pausing AI development is not feasible, they have slightly different reasons. Abeer Alsumait focuses on the need to improve the technology, while Paola Galvez emphasizes the risk of falling behind if some groups continue development.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement were minimal, with speakers largely agreeing on the importance of responsible AI development, the need for regulatory frameworks, and the challenges of algorithmic bias. The primary differences were in emphasis and specific approaches rather than fundamental disagreements.

Difference level

The level of disagreement among the speakers was low. This general consensus implies a shared understanding of the challenges and potential solutions in AI development and governance, which could facilitate more unified approaches to addressing these issues.

Partial Agreements

All speakers agree on the need for regulatory frameworks and strategies for AI development, but they emphasize different aspects. Monica Lopez focuses on human rights and algorithmic accountability, Paola Galvez stresses the importance of national strategies, and Yonah Welker highlights the need for frameworks specific to assistive technologies.

Monica Lopez

Paola Galvez

Yonah Welker

Comprehensive regulatory frameworks are needed

Countries need to develop national AI strategies

Legal frameworks need to complement assistive technologies


Takeaways

Key Takeaways

Algorithmic bias is a significant issue that perpetuates and amplifies societal inequalities

A human-centered approach is crucial for developing responsible AI

Diversity in AI development teams is essential to mitigate bias

Governments play a critical role in regulating AI and ensuring responsible development

Public awareness and education about AI systems is lacking but necessary

AI has potential benefits for assistive technologies but also poses challenges

Transparency and explainability of AI systems are crucial for accountability

Resolutions and Action Items

Implement comprehensive diversity across algorithmic development teams

Conduct rigorous algorithmic auditing and increase transparency

Establish proactive bias mitigation techniques

Develop comprehensive legal and regulatory frameworks for AI

Engage in ongoing community engagement, especially with marginalized groups

Invest in AI skills development and public education

Create independent oversight mechanisms for AI systems

Develop standardized documentation and metrics for AI audits

Unresolved Issues

How to effectively standardize AI audit documentation and metrics

How to balance rapid AI development with responsible implementation

How to make complex AI systems easily explainable to the general public

How to ensure AI policies are effectively implemented and enforced, not just created

Suggested Compromises

Instead of pausing AI development, focus on making technology more explainable and bridging knowledge gaps

Use existing regulatory frameworks and adapt them for AI governance

Develop smaller, more specific language models for regional needs instead of relying solely on large, generalized models

Thought Provoking Comments

Algorithms are not neutral tools, but they’re very powerful social mechanisms that either perpetuate or challenge existing power structures.

speaker

Monica Lopez

reason

This comment reframes algorithms from neutral technical tools to powerful shapers of society, highlighting their profound social impact.

impact

Set the tone for discussing the ethical implications and societal effects of AI throughout the conversation.

AI holds immense potential as a technology if we use it wisely. AI systems can break down language barriers.

speaker

Paola Galvez

reason

Provides a balanced perspective by highlighting AI’s positive potential alongside its risks.

impact

Shifted the discussion to consider both opportunities and challenges of AI, leading to more nuanced analysis.

Working on disability-centric AI is not just about words, it’s an extremely complex process of building environments where we have a multi-model and multi-sensory approach.

speaker

Yonah Welker

reason

Highlights the complexity of developing truly inclusive AI systems, especially for those with disabilities.

impact

Deepened the conversation around inclusivity in AI, prompting discussion of specific challenges and approaches.

Governments play a very important role. While every actor and every player is really important in discussions and conversations, governments have critical roles in regulating and establishing responsibility and advancing the way forward for AI adoption in an equitable and fair way.

speaker

Abeer Alsumait

reason

Emphasizes the crucial role of government regulation in ensuring responsible AI development.

impact

Shifted focus to policy and regulatory aspects of AI governance.

We simply do not have enough public awareness and understanding. Because I think if we actually did have more of that, there would be more of a demand.

speaker

Monica Lopez

reason

Identifies lack of public understanding as a key barrier to effective AI governance.

impact

Prompted discussion on the importance of AI literacy and education for the general public.

Overall Assessment

These key comments shaped the discussion by highlighting the complex societal impacts of AI, the need for balanced consideration of risks and opportunities, the importance of inclusivity, the role of government regulation, and the critical need for public education on AI. The conversation evolved from identifying problems to exploring multifaceted solutions involving various stakeholders, emphasizing a holistic approach to responsible AI development and governance.

Follow-up Questions

How to standardize the type of documentation, metrics, and benchmarks for AI bias audits?

speaker

Dr. Monica Lopez

explanation

Standardization is crucial for effective bias audits across the industry, but there’s currently a lack of consensus on how these audits should be conducted and documented.

How can we improve AI literacy and adoption among government workers and governors who will implement AI policies?

speaker

Yonah Welker

explanation

Improving AI literacy among policymakers is essential for effective implementation and enforcement of AI regulations.

How can we ensure that AI policies and regulations are not just paperwork but are actually enforced and implemented?

speaker

Audience member (Aaron Promise Mbah)

explanation

Many countries adopt AI policies, but there’s often a gap between policy creation and actual implementation, which needs to be addressed.

How can we address the issue of AI systems (like music recommendation algorithms) potentially exacerbating mental health issues?

speaker

Audience member (Aaron Promise Mbah)

explanation

This raises concerns about the unintended consequences of AI systems on vulnerable individuals and the need for safeguards.

Is there merit to pushing for a pause in the development of certain AI technologies, particularly those with high potential for harm like deepfakes?

speaker

Audience member (unnamed)

explanation

This question addresses the ethical dilemma of whether to slow down AI development in potentially harmful areas to allow for better safeguards and regulations.

How can we make AI technology more readily explainable for societal use?

speaker

Audience member (unnamed)

explanation

The complexity of AI systems makes it difficult for the general public to understand and critically evaluate them, which is crucial for responsible AI adoption.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #78 Intelligent machines and society: An open-ended conversation

Session at a Glance

Summary

This discussion focused on philosophical and ethical questions surrounding artificial intelligence (AI) and its impact on humanity. The speakers, Jovan Kurbalija and Sorina Teleanu from the Diplo Foundation, emphasized the need to move beyond surface-level discussions of AI ethics and biases to address more fundamental questions about human identity and agency in an AI-driven world. They raised concerns about the centralization of knowledge by large tech companies and advocated for bottom-up AI development to preserve diverse knowledge sources.

The speakers questioned how AI might affect human communication and creativity, and whether humans should compete with machines for efficiency. They introduced the concept of a “right to be humanly imperfect” in contrast to AI’s pursuit of optimization. The discussion touched on the anthropomorphization of AI and the need to consider other forms of intelligence beyond human-like AI. Practical examples of AI tools for knowledge management and analysis were presented, demonstrating how AI can be used responsibly with proper attribution.

Audience questions addressed topics such as AI ethics education, the potential personhood of advanced AI, and open-source approaches to AI development. The speakers concluded by proposing further philosophical discussions on AI’s impact across various cultural traditions, emphasizing the importance of examining what it means to be human in an AI era.

Keypoints

Major discussion points:

– The need to focus on immediate and practical impacts of AI rather than long-term hypotheticals

– Concerns about AI’s impact on human knowledge, communication, and identity

– The importance of maintaining human agency and imperfection in an AI-driven world

– Questions about the nature of AI intelligence compared to human intelligence

– The need for more philosophical and ethical considerations in AI governance discussions

Overall purpose:

The goal was to raise deeper philosophical questions about AI’s impact on humanity and encourage more critical thinking about AI beyond surface-level discussions of ethics and bias. The speakers aimed to challenge common AI narratives and highlight overlooked issues.

Tone:

The tone was thoughtful and somewhat skeptical of current AI hype and narratives. The speakers took a critical stance toward simplistic AI discussions while maintaining curiosity and openness to AI’s potential. There was an underlying sense of concern about AI’s societal impacts, balanced with calls for practical engagement rather than alarmism. The tone became more interactive and solution-oriented during the Q&A portion.

Speakers

– Jovan Kurbalija: Director of the Diplo Foundation and Head of Geneva Internet Platform

– Sorina Teleanu: Director of Knowledge at Diplo Foundation

– Mohammad Abdul Haque Anu: Secretary-General of Bangladesh Internet Governance Forum

– Tapani Tarvainen: Electronic Frontier Finland, Natta Foundation

– Henri-Jean Pollet: ISPA in Belgium

Additional speakers:

– Andrej Skrinjaric: Head of linguists at Diplo Foundation

Full session report

The Philosophical and Ethical Implications of AI: A Critical Examination

This discussion, featuring experts from the Diplo Foundation, delved into the profound philosophical and ethical questions surrounding artificial intelligence (AI) and its impact on humanity. The speakers emphasised the need to move beyond surface-level discussions of AI ethics and biases to address more fundamental questions about human identity and agency in an AI-driven world.

Key Themes and Arguments:

1. Introduction and Framing

Jovan Kurbalija, Director of the Diplo Foundation, opened the discussion by stressing the importance of understanding basic AI concepts to engage in meaningful discussions beyond dominant narratives of bias and ethics. He argued for a more critical examination of AI’s impact on human knowledge and identity, proposing the concept of a “right to be humanly imperfect” in contrast to AI’s pursuit of optimisation.

Sorina Teleanu, Director of Knowledge at Diplo Foundation, presented a series of thought-provoking questions to frame the discussion. She questioned the anthropomorphisation of AI and the tendency to assign human attributes to machines. Teleanu raised concerns about how AI might affect human communication and creativity, encouraging consideration of other forms of intelligence beyond human-like AI.

2. Philosophical Considerations of AI

The discussion touched on various philosophical aspects of AI. Kurbalija introduced the concept of a “right to be humanly imperfect,” arguing for the preservation of human agency and imperfection in an AI-driven world. This idea resonated with other speakers, who expressed concern about the potential loss of human elements in pursuit of AI-driven efficiency.

Teleanu expanded on her concerns regarding the anthropomorphisation of AI, highlighting the potential risks of attributing human characteristics to machines. She also raised important questions about the interplay between AI and neurotechnology, emphasising the lack of privacy policies for brain data processing.

A thought-provoking perspective on the potential personhood of advanced AI was introduced. The idea that if Artificial General Intelligence (AGI) becomes indistinguishable from humans in capability, it might deserve human rights, challenged conventional notions of humanity and consciousness.

3. AI Governance and Development

The speakers agreed on the need to focus on immediate and practical impacts of AI rather than long-term hypotheticals. Kurbalija criticised ideological narratives that postpone addressing current issues in education, jobs, and daily life. He advocated for bottom-up AI development to preserve diverse knowledge sources and prevent the centralisation of knowledge by large tech companies.

Kurbalija also stressed the importance of defining accountability in AI development and deployment, arguing that legal principles regarding AI responsibility are fundamentally simple and should be applied accordingly.

Henri-Jean Pollet emphasised the importance of open-source models and data licensing for AI development. He proposed systems to validate and test AI outputs, similar to human education processes, to ensure reliability and prevent “hallucinations” in AI-generated content.

4. Human-AI Interaction and Ethics

The discussion touched on various aspects of human-AI interaction. Teleanu raised questions about the impact of AI on human-to-human communication, wondering how increased reliance on AI-generated text might change how humans interact with each other in the future. This point highlighted the need to consider the long-term sociocultural implications of AI integration.

Mohammad Abdul Haque Anu, Secretary-General of Bangladesh Internet Governance Forum, expressed concerns about AI ethics education and implementation. He questioned who should be responsible for teaching AI ethics and how to ensure adherence to ethical guidelines in AI development and deployment, particularly in the context of developing countries.

Kurbalija shared an anecdote about AI-generated speeches at conferences, illustrating the potential for AI to influence human communication in professional settings.

5. Diplo Foundation’s Approach to AI Development

Towards the end of the discussion, Kurbalija elaborated on Diplo Foundation’s approach to AI development. He explained their focus on creating AI tools that preserve and enhance human knowledge, particularly in the field of diplomacy. These tools aim to assist diplomats and policymakers by providing quick access to relevant information and analysis, while maintaining human oversight and decision-making.

Conclusion and Practical Demonstration:

The discussion concluded with a practical demonstration of AI tools developed by the Diplo Foundation. Kurbalija showcased how these tools can be used to analyse complex diplomatic texts and generate summaries, emphasising the potential of AI to augment human capabilities in specialised fields.

The speakers emphasised the importance of continuing these philosophical discussions to examine what it means to be human in an AI era. Key unresolved issues included the effective implementation of AI ethics education, the long-term impacts of AI on human identity and interaction, and the ethical implications of AGI potentially becoming indistinguishable from humans.

This thought-provoking discussion challenged common AI narratives and highlighted overlooked issues, encouraging a more critical and philosophical approach to understanding AI’s role in shaping the future of humanity. The session ended with an invitation for continued dialogue and exploration of these complex issues.

Session Transcript

Jovan Kurbalija: No? No. Solution is very binary. Yes or no? Good. Well, welcome to our session and sorry for a slight delay for our colleagues online and those in the room. You can hear me? You made some expression, no? Waiting in and out. Okay. But we have to, because of people outside, we have to speak. No? I know, I can tell you, I was attending, I’ve been attending the last one, IGF since the first IGF. The biggest challenge, two biggest challenges are food sometimes and the second is sound. Therefore, it seems the sound works now? Yes. It’s okay. Good. Thank you for coming today. I’m Jovan Kurbalija, Director of the Diplo Foundation and Head of Geneva Internet Platform. My colleague Sorina Teleanu is Director of Knowledge at Diplo. Therefore, Sorina is working a lot on the intersection between artificial and human intelligence. Human, artificial, artificial, human. One minute, one minute and a half, two? Yeah, the screen is gone. Okay. I think you go ahead a little bit. I have permission. I think audience’s permission. Where are you coming from? I’m from Bangladesh. Bangladesh. Okay. You guys in Indian subcontinent, you invented number zero and put us in trouble. Otherwise, we were using Roman numbers and it was so easy. But you invented one and zero and digital world and all trouble starts, you know, including sound in this room. I’m always teasing my colleagues and friends from India, Bangladesh, because number zero, Shunya, was invented in the continent and it came through Al-Khwarizmi, who invented the algorithm, first to Arab traders, and then it came to North Africa and Mr. Fibonacci. Music is coming. It’s better than me. Me. Speaking of digital challenges. Those are challenges. Oh, there was a proposal that I start singing, which is a terrible idea. I can tell you. Okay. Some people are giving up. I’m sorry. I’m sorry. Okay. I’ve been working on AI myself personally since 1992, when I did my master’s thesis on international law and expert systems.
At that time, AI was about expert systems, rule-based AI. You basically try to cover, in this case international law, and do, if there is a breach of this, then, and the other thing. Now it’s been five years since neural networks started, and especially after the arrival of ChatGPT, we have been focusing on AI, but in a slightly different way than other systems. We have been developing AI, and in addition we have been trying to see what are the philosophical, governance, political aspects of AI. The principle is that you don’t need to be a programmer, although we have quite a few programmers, but you have to understand basic concepts in order to discuss AI, otherwise the discussion ends up with unfortunately dominant narratives that we have here at IGF, and not only IGF, many meetings, bias, ethics, and we can generate typical AI speech. Mix of bias, mix of ethics, and what we notice is that there is a narration which is not going into the critical issues. What we basically did in this context was the following. I’m sorry. It should work. We were discussing, are we holding the globe together, or are we fighting machines and people, and most of our discussions, especially when the hype came in 2023 with ChatGPT, we were really concerned because being involved in AI, we were concerned that there was… almost surrealistic discussion. And those of you who are in this field, you know that there was a long-termism movement of effective altruism, basically sending the message, you are, no? Like this? Upside, upside. Upside? Yes. Okay. So many details. That was basically the long-termism movement, which was saying, don’t discuss AI. I’m simplifying. Don’t discuss AI currently. Let’s see what will happen in 20, 30 years, and let’s prevent that AI destroys humanity. I’m caricaturing a bit, but it was the narrative. You can recall the letter signed by leading scientists, stop AI, and these things.
One worrying signal, which I noticed, was it became a bit of an ideological narrative. And when there is ideology, then there is something fishy. I was born in a socialist country, and I can detect in no time an ideological narrative. And it was an ideological narrative. More or less similar to the communist narrative, which says, don’t ask any question now. Just enjoy, just follow the orders of the party, and in 30 or 40 years, we will be in the ideal society. If you complain, you go to gulag and you get in trouble. Now, that was a bit of the initial narrative, and we said, okay, it’s a bit tricky. We have to make a more serious discussion about it. Therefore, we started pushing for immediate risks of AI in education, in jobs, in day-to-day life, in use of AI by content moderation platforms, by Google, in any walk of life. And we were particularly concerned about one aspect, which you can consider as a mid-term aspect, which was a massive codification of human knowledge by a few platforms. Mainly platforms in the United States and China. Therefore, when you interact with ChatGPT, you basically, as we know, you’re training the model. And that was, or Google, or Alibaba, it doesn’t matter where it is coming from, but the idea that knowledge is centralized and captured was very problematic. Therefore, we started developing bottom-up AI, trying to show that with relatively little resources, you can develop your own AI, and you can preserve your knowledge. And we say that it is technically feasible, financially affordable, and ethically desirable. Because knowledge, which is codified by AI, is what defines our identity, our dignity. And that’s extremely important, that we know what is happening with our knowledge, or the knowledge of the generations before us. This is more or less the context in which we have been doing that, but I’ll ask Sorina now to build more on these discussions.

Sorina Teleanu: Let’s see if I can do this right. Good afternoon, everyone. Thank you for joining our session. I think what we wanted to do today is to have a bit of a dialogue around some of the more philosophical issues surrounding AI, because at least I personally feel we talk a lot about, you know, challenges and opportunities of AI, and we need to govern it, because how do we deal with the challenges, and how do we leverage opportunities? But what about the more human side of all of this? And I do have a few questions that I’m hoping we can go through quickly, and I’ll just, yeah, no, I’ll actually do it myself, if we can just maybe switch. I do like slides quite a bit. quite a lot, and at Diplo we do have quite a few very nice illustrations that I couldn’t miss an opportunity to share with you. So I’m going to do a bit of that, bear with me. And then hoping to have more questions from you as well. What you will see on the slide is mostly questions. Things that I hope we would feature more in our discussions on AI governance, on the impact of AI on economy, society, humanity and whatnot, and which I feel we miss more often than not. So starting with this, we talk a lot about large language models, right? And generative AI and chat GPT and all of these things. But what are some of these challenges in knowledge creation? How do we look at large language models and at generative AI tools? Are they a tool? Are they a shortcut? Are they our assistant, new coworker? How do we relate ourself with these tools? Are we even making conscious choices when we interact with them? Or are we exercising our agency as humans? To what extent? And yeah, the broader question, what roles do we imagine large language models playing vis-a-vis us humans as we interact with them? Are we missing the forest for the trees? 
We talk a lot about generative AI, but are there other forms of intelligent machines, agents, whatever you want to call them, that we might need to focus a bit more in our discussions on governance, policy, gain implications, and what does it all mean? And if we do need to do that, how do we actually get there? And then something that really, really bothers me is this way through which we do assign human attributes to AI. We do talk a lot about AI understanding, AI reasoning, and using these sorts of words that are much more adequate to use for human intelligence, right? But does AI actually reason? Does AI actually understand? And when we use this word, do we actually understand what we mean by them? And yeah, there is a bit of hype around anthropomorphizing. I spoke too much today. AI, and I’ll just give you one example. I’m not sure how many of you here might have attended the big event in Geneva, which happens on a yearly basis. This is the Global AI for Good Summit. Yes. I see at least one person who’s around Geneva, but he’s busy typing. Thank you for that. So last year, there was a lot of focus on humanoid robots. So you would walk to the conference venue and see a human-like looking robot here, another one there, another one there. But what I didn’t see was people actually questioning, okay, what does that robot actually mean? Does it understand? Does it reason? Does it think? Or is it just another way for us to hype technology in some way? You have them here as well. Yeah, we have robots here as well. And then a bit more on the interplay between AI and other technologies that tend to also join a bit the hype. Another example from exactly same conference. Last year, the focus was on neurotechnologies and how the interplay between AI and neurotechnology might be impacting the way we relate to these technologies in the future. 
There were many companies who are coming at the summit and presenting their neurotechnology devices, applications, and what not, and we had this random curiosity, okay, let’s see what kind of privacy policies these companies have when it comes to their processing of brain data. You know, if you use a neurodevice, there is some processing of brain data. What do you think we found out? Any guess?

Jovan Kurbalija: Out of how many, eight or nine?

Sorina Teleanu: I think eight or nine. Those are the companies that we looked at. One had a line in the privacy policy saying, well, we might be processing your brain data, but because you agreed to use the service, you also agreed to us processing the data, and then all others, their privacy policies were mainly related to cookies, how you interact with the website, but nothing about the technology itself. And then the question is, if you as an international organization invite these organizations, companies, to speak about how amazing neurotechnology can actually be at the end of the day, and I don’t know, help solve whatever problems, shouldn’t you also be a bit more careful about how they also deal with the human implications of this, with human rights and what not? I think sometimes we talk, but we don’t also walk the talk in the policy space, and I’m hoping we will see a bit more of that going forward. More questions. When words lose their meaning, how many of you here use tools like ChatGPT? At least, okay, I’m seeing a lot of hands. I’m a bit worried, to be honest. Jovan and I are also teaching at the College of Europe, and you know how it is. When you have an assignment which is write an essay, you just go to ChatGPT, you put it there, and you get your essay. We’re also seeing this in the policy space quite a lot. There was a funny anecdote from the head of an international organization in Geneva. Would you like to tell that shortly, about going to a conference and hearing the same?

Jovan Kurbalija: Yes, they went for a conference, and they were hearing all the same speeches in all opening statements, and that was it. And we had one organization that created their strategy on 120 pages, and we initially, it’s an important organization, and we said let us read it, and then somebody even didn’t dare to remove the ChatGPT references, and then we said, oh my god, what’s going on. That was a funny anecdote with eight speeches or nine speeches basically. We then analyzed them and we found the patterns that were basically generated by. They don’t even make an effort to go to Gemini or other platforms, but everything was generated by ChatGPT, okay. Okay, just for colleagues online, there was a comment which is a good comment, and we often discuss that. We always ask how AI should be perfect, but at the same time we are who we are, and the speeches, as you know, even written by humans, are not that exciting, and it was a good point. But what worried us was this huge document on AI strategy, and we were thinking many countries will read it as a strategy for AI for that organization. First, can they read a hundred ten or twenty pages? Second, is it really an expression of the policy interest of that organization? It’s not. It’s on a very common sense level, and that’s it. Sorina.

Sorina Teleanu: Thank you, Jovan. And speaking more of concerns, I think the first question right there is the one expressing best what we have been discussing for quite a while, at least at Diplo, but I don’t see it so much in the broader space. If we rely so much on AI tools to write our text and communicate in our emails or whatever else, right now it’s kind of easy to spot what is AI-generated text and what at least has some sort of human intervention. But if we end up relying so much on ChatGPT and similar tools, will we still sound like humans five, ten, fifteen, twenty years from now? And also, what happens with all this, how do I call it, self-perpetuation of AI? If AI comes up with new text based on data already available now, but five years from now all of the new data will still be AI generated, what does it also mean for broader issues of human knowledge, and also for how we as humans actually relate to each other at the end of the day and how we communicate with each other? This is one of my favorite books. I’m not sure if someone in the room has actually read it. I think we also have this kind of obsession as humans to try to develop AI which is really like us. We want artificial general intelligence to be as good as humans at every single task because, I don’t know, we want that to happen. But what about other forms of intelligence out there? Can we develop intelligent machines that act more like octopuses, which we have discovered recently are quite smart and intelligent? More like fungi, more like forests? What about other forms of intelligence around us that we might want to borrow a bit from as we develop whatever we mean by intelligent machines? We tend to be so focused on us humans.
We are at the center of the earth, we have the best, we know the best, but maybe it’s not exactly that, also as we look into developing technology. Yes, and we’re having more trouble with technology, because why not. And I’ll end with a few more questions. This is probably the overarching one: what does it mean to still be human in an increasingly AI era? This is more on the interplay I was mentioning earlier between neurotechnology, more invasive technologies, and AI, and Jovan can cover that a bit later as well. Another example of this interplay, and what we have been trying to advocate for in the policy space in Geneva, is this right to be humanly imperfect. Jovan, would you like to reflect on this a bit?

Jovan Kurbalija: Well, the idea is counterfactual, and when I go to the Human Rights Council and the human rights community, they look at me as if I am from another planet. But I have been arguing for quite some time, three to five years, and I even proposed the workshop at the IGF, but probably the MAG dismissed that. I have been arguing for the idea that we have a right to be imperfect. But our civilization, centered on optimizing efficiency, is basically making it unthinkable that you have a right to be lazy. You have a right to be lazy, you have a right to make mistakes, you have a right to ask, you have a right to this. But if you really consider carefully, humanity made its breakthroughs when we were lazy. In ancient Greece, these people had plenty of time to walk through the parks and to think about philosophy. Or in Britain, all this tennis, football was invented in the British time, obviously. Some other people were working for the elite, that’s another story. But they were lazy and they were inventing things. Therefore, my argument is that we have to fight for the right to be imperfect: the right to remain natural, not to be hacked biologically, the right to disconnect, the right to be anonymous, and the right to be employed over machines. I have my bet that in five years’ time, I will be already retired, but some of you are younger, we will have at least one workshop at the IGF asking, do we have a right to be imperfect? And I can offer it as a bet that we do it. But it’s a very serious question going beyond, let’s say, a catchy title. It’s going into what Sorina said, the critical question: what does it mean to be human today? And what will our humanity be in relation to machines in the coming years?

Sorina Teleanu: Thank you. And it’s not so much about talking about robots coming and taking over and this Terminator business that used to be in focus for quite a while. It’s more about this very human-to-human interaction and how AI comes into play in this human-to-human interaction. We’ll end with a few more questions, and then we’re hoping you will be adding more questions at the bottom of our slide. So trying to wrap up, what do we actually want from AI? There is, yeah, maybe I shouldn’t. There is a quote from a company, maybe I shouldn’t name the company at least, developing artificial… It’s a prominent company. Yeah. It’s developing or trying to develop artificial general intelligence, again, the type of AI that would be at least as good as humans at doing everything and anything. So the quote, I’m going to paraphrase it probably, goes a bit like this. Our aim as a company is to develop artificial general intelligence, to figure out how to make it safe, and then to figure out its benefits. So when I saw that statement, I was thinking, okay, but shouldn’t it be a bit the other way around? Like, sort of figure out what are the benefits of AGI, then see how to make it safe, and then actually develop it? Isn’t it a bit of a wrong narrative there? And I think in our policy discussions at the IGF and elsewhere, we should be questioning these companies a bit more. I feel they’re just going around the place saying, hey, AI is going to solve all of our problems and we’re going to sleep happily every night because AI will do the work for us. Okay, but have we actually thought carefully about this? And again, it’s not about Terminator and these kinds of robots killing humanity, but what does it mean to still be human? When you said sleep, we are sleepwalking. We kind of are, exactly. Because again, we don’t see many of these questions, unfortunately, in the policy debates.
And we’re coming from Geneva, where every other day, at least, you do have a discussion on AI governance. You can confirm. Thank you for nodding. How many of these questions do you actually see in those debates? Yeah. Yeah. So I’m hoping we’ll be seeing more of them. Again, how do we interact with AI? To what extent are we even aware of these interactions? And how much of these interactions involve informed choices? How about our human agency? As I was saying, is AI having an impact on how we interact and how we relate with each other as human beings? Is AI making choices for us? Should it be making choices for us? Again, the notion of human agency. I wanted to go to Jovan’s point about the right to be humanly imperfect and this focus on efficiency. In a world driven so much by economic growth and GDP and this way of measuring progress, can humans compete with machines? Should humans compete with machines? Should we just do what we’re good at and let machines do repetitive tasks? And finally, is there a line to be drawn? Can we still draw this line at this point? Is it too late? Can we be asking more questions? Over to you, I would say. And I’m hoping you will have some reflections on some of these questions and ideally more questions because I think questions are important and we should be asking more of them.

Jovan Kurbalija: Internally, I’m enthusiastic on the AI side because I have the geeky approach, and Sorina is not enthusiastic. And for those of you who consulted Sorina’s book, I will probably repeat myself: she wrote a book on the Global Digital Compact. When it was adopted on 27th of September at the UN, Sorina had been following the Compact for the last 18 months. Her slides are shared by many governments. And I said, Sorina, let’s use AI and convert your slides into the book. And she said, OK, you know how Sorina is kind. She said, OK, maybe. And the next day I see Sorina typing. I said, Sorina, come on, let’s put in the slides and we convert them into the book and you have a book. Sorina wrote the book herself in 47 days, on 200 pages. Here is the book. And I lost my battle that AI can help us in writing the… This is a book with very solid analysis, written in 47 days. Therefore, we have an internal battle, which is a healthy debate, with me being more optimistic and Sorina being more careful and pessimistic. But we do reporting from the IGF through the website with the use of AI. And we are going to ask at the end of the IGF, in a whole analysis of the IGF, how many, in our view, realistic questions about AI were asked over the five days, and how many were discussions on ethics, biases, and these things. Mind you, that’s an important discussion. But we are very often not seeing the forest for the trees. Critical issues about the future of knowledge. Therefore, we start with the questions or comments. Introduce yourself, please.

Mohammad Abdul Haque Anu: My name is Anu. I’m the Secretary-General of the Bangladesh Internet Governance Forum. My question is: which institution teaches AI ethics? Everybody knows, everybody says, that we should follow AI ethics. But who delivers the teaching process, the curriculum, that we should follow? That is my question.

Jovan Kurbalija: Okay. I’m first a bit cautious about the term AI ethics. I think there is human ethics, and I’m cautious about biases. For example, I’m full of biases. You are full of biases. We are biased, historically and culturally. Therefore, there is a very important discussion about how to organize that and how to make a common-sense curriculum, not too hyped. And I’m afraid that a lot of energy in AI debates is going into, let’s say, ethics. You now have more than 1,200 guidelines and standards on AI and ethics; that is losing any context. I would be careful with that. I wouldn’t focus on AI and ethics; I would focus on how AI functions and what the direct implications are for society. We run an AI apprenticeship: people create their chatbot, they interact, and we tell them, this makes sense, this doesn’t make sense. Is it going to offend somebody? What are the biases that can be tolerated? What are the biases which are illegal or can harm people? Therefore, you bring the general discussion on ethics down to a very practical level. I love philosophy, but philosophy should be practical. This is what is missing, and that, in our view, should be developed through apprenticeship programs. Yep.

Mohammad Abdul Haque Anu: Absolutely, I agree with you. Already we are suffering from misinformation, disinformation, and malinformation thrown at us by social media through so many channels. There are no ethics; nobody follows the ethics. What we are facing in the coming days is AI ethics: how will we suffer, and how will we manage this kind of thing? Nobody maintains the rules, nobody maintains the ethics, but everybody says that we should follow the ethics.

Jovan Kurbalija: Let me say, it’s very simple. Diplo has AI systems. I’m the director of Diplo. If you go to our website, ask a question, and feel insulted, you won’t, but if you feel insulted, I’m responsible. I mean, law has been very simple since Hammurabi, who invented the first legal rules. You start a business or a non-profit, you start something, nobody forced you to start it, you use it, and you’re responsible if damage is created. I’m sorry to say, but you have laws on AI of close to 200 pages, and the principles are legal. I’m a lawyer by training. Legal principles are very simple. Diplo does a chatbot. Okay. Has anyone forced you to do it? No. Is your chatbot harming somebody? So far no, but if it harms, I’m responsible, and that’s the end of the story. There are legal rules that apply to it. And frankly speaking, I’ve been in this field, in AI, for many years, but I’m always amazed when I go to these AI events: they go into algorithms, models, the machine will take over, and then it becomes… But these issues are, at their core, rather simple, and the law is a codification of ethics. Don’t kill somebody. Don’t insult somebody. Don’t steal somebody’s property. It’s simple. We are trying to simplify the discussion, but not oversimplify, because we found that a lot of energy, including, I’m sorry, in this space, is focused on issues which are nice to talk about, but which are sometimes not even good philosophy. If we discuss good philosophy, it’s great. But sometimes it’s basically repeating notions about ethics, bias, and that’s a bit of my concern. Therefore: practical apprenticeship, responsibility, legal responsibility, focus on concrete issues, train people to understand them. That would be my advice. We have a question over there. You were here last year as well; you’re one of our followers.

Tapani Tarvainen: Okay, I’m Tapani Tarvainen from Electronic Frontier Finland, the Natta Foundation. Now I want to jump straight to the deep end of the philosophical questions here. Assuming that AGI becomes actually possible, which I’m not sure of, and it’s as good as people in every way, why isn’t it human at that point? You could argue that those machines, if they are our peers in every way, are our offspring, basically, and they should have human rights at that point as well. So how do I know that you’re not a machine, and why should I care in the end? Because I don’t know what goes on inside your brain. We don’t understand how we think, for that matter. Now, I don’t think we are LLMs, these large language models, nor any other type of present artificial intelligence. There are other kinds, you know, AlphaGo or Stockfish, as somebody here might know. They are not LLMs, but they are still artificial intelligence. But perhaps we can come up with true AGI. And I also have to point out a historical point here: neural networks have been studied since the 1980s, by Teuvo Kohonen, as you presumably know. So it’s not a new thing; it’s only now that the available data has made it more practical. But the philosophical point, reading from the screen: is there a line to be drawn? If so, by whom? And at what point does it become actually acceptable to treat robots as if they were human? Or, I suggest, as soon as they start to behave human-like enough that we can’t tell them apart, talking to them, pinching them, whatever, then they will be fighting for their human rights.

Jovan Kurbalija: Fantastic question. Let me say how I see it, and then we’ll hear from Sorina. I see it as: basically, we are powerful and we decided to run this world, not the chimpanzees, not fungi, not the octopus, and we said, okay, we’re in charge. It’s a human-centred world; everything else, natural or robotic, is going to follow our rules. I’m simplifying now, probably oversimplifying. But to your points, and I will make a few links: there is Ibn Sina, the philosopher, who told the flying man story, where he basically discusses what you said: are we real? He argued about a flying man, floating, maybe in this room, it’s a philosophical exercise, removed from the body, from sensory experiences: is there a free will, is there a consciousness of the flying man? For me it is still one of the most fascinating lines of thinking about virtuality, and it speaks to your point. You raised many issues, but that’s one. The other issue which you raised is: so what? It is sort of echoing: so what, why should we be worried? My colleague Andrej, who is the head of our linguists, raise your hand, Andrej. Yeah. He recently raised one interesting aspect, which he can narrate later on. There was this type of discussion at one event in Abu Dhabi, and he asked: why are we afraid of AI? Because it is the first machine which is not doing exactly what we expect. When you press the light switch, you get the light. When you press the accelerator in your car, you accelerate. When you write text in the word processor, you have the text which you wrote. The reaction is always expected, predictable. For the first time, we have a machine which is hallucinating and which looks like us. Therefore, we started being worried about AI, because it is no longer a precise machine like everything technology invented before. Suddenly, it’s like us. It hallucinates.
It has the right to be imperfect. Therefore, it was an interesting insight by Andrej. Now, we have a question from you. Introduce yourself.

Henri-Jean Pollet: I’m Henri Pollet from ISPA in Belgium. I wanted to jump on what you said there, because then we should submit AI to exams and graduation, with smart questions, like we do for humans, to select them and bring them to something because… Sorry, I don’t know if it works. Okay. I’m saying that because AI can generate a lot right now, but just jumping on that answer: if you don’t know what it will generate, then you must know how we can have a system to validate what the AI, what this engine, is doing. Like a human who goes through a graduation process and, at the end of their education, graduates, the AI should also be testable, so you can make sure that what it says is not hallucination. That’s what I wanted to say about that. And if I may, two other points. How would you see the AI… because the information is coming from so many sources. Is there a way you can conceive of an AI system where you would have an open, I wouldn’t say open interface, to the data that is populating these AI processing engines, but in a common way, so that they can integrate? Because otherwise you have a kind of single model according to the topic map of some group of people, but not integrated. It would be great to have a common effort so that everybody could work towards an AI of value, because this kind of, I wouldn’t say standard, but model, could be integrated afterwards. Today it’s like a big vacuum cleaner that is sucking up all the possible data for different purposes, sometimes maybe commercial purposes more than intelligent purposes, but it is something that we should tackle. And my last comment: what about using models like open source? Who does the data belong to? The one that produced it, or the one that collects it?
And the open-source model of software could offer some interesting aspects, because you have different kinds of licenses. You have licenses that say: my data is public domain, do whatever you want with it. Others would keep it: no, this is a proprietary domain; if I give it to your model, you need to give me something in exchange. Or: I give it to you for free, but then you give it away for free. That’s the difference between all these license models in open source. Could it be interesting to treat data like software, in a way? That was my question. Thank you.
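Pollet’s “graduation” idea, testing an AI system the way we examine students, can be sketched as a minimal evaluation harness: a fixed question set with expected answers and a pass threshold. The sketch below is purely illustrative; `ask_model` is a hypothetical stand-in for whatever AI system is under examination, here replaced by canned answers so the harness is runnable.

```python
def ask_model(question: str) -> str:
    # Hypothetical stand-in for the AI system being examined;
    # a real harness would call the model's API here.
    canned = {
        "capital of France?": "Paris",
        "2 + 2?": "4",
        "first person on Mars?": "unknown",
    }
    return canned.get(question, "unknown")

def graduate(exam: dict[str, str], pass_mark: float = 0.8) -> bool:
    """Run the exam and 'graduate' the model only if enough answers match."""
    correct = sum(ask_model(q) == answer for q, answer in exam.items())
    return correct / len(exam) >= pass_mark

exam = {
    "capital of France?": "Paris",
    "2 + 2?": "4",
    "first person on Mars?": "unknown",
}
print(graduate(exam))  # all three answers match, so this prints True
```

In practice such harnesses use much larger question sets and fuzzier matching than exact string equality, but the principle is the one Pollet describes: a model is accepted only after passing a defined examination.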

Jovan Kurbalija: An excellent question. And while we are preparing the answer, it will be very concrete, an answer with practical tools, not just narration, but how it can be done. Sorina, can you reflect a bit on the evaluation and validation of knowledge?

Sorina Teleanu: Unfortunately, I kind of missed your question, so I’m not sure I’m in a good position to try to answer. So we’ll probably have to wait for Jovan to show us the practical things.

Jovan Kurbalija: Okay. Well, I was fast in finding the answer to your questions, which are critical questions. Yes, knowledge can be developed bottom-up, by us, and here is a very concrete example. Everything we use is open source, from language models to all the applications. And we are now transcribing and reporting from the IGF. Our session will be transcribed. And here is the key point. Take any session from yesterday: you can click on it, and you have the session at a glance, a report, a full transcript. And, very critically, you also have the speakers and their knowledge, their input into the AI model. You have the speech length, and you have a knowledge graph for the overall session. You have in-depth analysis: did people agree? Today we had many agreements, not disagreements, but different views, partial agreements, and other elements. And not only per session: for what was discussed during the whole day, here is a knowledge graph of the discussion. It’s very busy. You can find your discussion and how it relates to all the other discussions. Now, the key question: okay, we created an AI chatbot, and we can ask it a question. Let me ask: should there be a human right to be imperfect? Probably nobody referred to it, but let’s see, maybe. Yes, five minutes. We basically process all the transcripts and we get the answer. Common knowledge, but then, what is key: the AI identifies on what basis the answer was given. Therefore, it makes a reference to what you said: if you spoke at a session and you said this, it should be attributed to you, not to some abstract chatbot. Let me ask another question, which is more common: what should AI governance be? Everybody is talking about this nowadays. I’m sure there will be some answer. This is just to explain.
Therefore, we transcribe the public sessions, the transcripts are analysed by the AI system, and here we have an answer. Now, the answer is based on sources, where you can see exactly who said it, at what time, and what the pointers were. And this is our principle: whenever we generate any answer, it must be attributed to somebody or something, for the sake of precision, for the sake of fairness. If it is a book written by somebody, or an article, this is the major problem, and your question pointed in this direction: can we do it? Yes, it’s doable. And the big companies, OpenAI and Google and others, can do it. Why don’t they do it? That is another question, but they cannot tell us that it is technically impossible. It is technically possible. And that is, I would say, critical for the future of serious discussion about AI. Or here, you have video summaries; you can go to our website, everything is public. And here you have the answer, with details based on the logic and knowledge which was delivered yesterday and today, here in this space. You find the answer, and then here are the sources: the specific session, and the paragraphs the AI decided to rely on, from Wojci Aida, Abdulla Alshmarani, Ala Zahir, Rupeng. That is critical. And when you click, you go to the website, to the page with the transcript from that session, and you can go through it. Therefore, to your point: it is technically possible, financially affordable, and ethically desirable. I tried to answer with a concrete example of what we are experiencing today in this building. We got an indication that we have five minutes left. What I suggest: this session started with the many questions which Sorina listed, which we have to discuss. You can drop us an email; we are creating a small group of philosophers.
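The attribution principle described here, every generated answer must point back to the transcript passages it drew on, can be sketched as a minimal retrieval step. This is not Diplo’s actual implementation; the class names and the word-overlap scoring are illustrative assumptions (a real system would use embedding-based search), but the sketch shows the core idea: the answer cites who said what, and in which session.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """One attributable chunk of a session transcript."""
    session: str
    speaker: str
    text: str

def retrieve_sources(question: str, segments: list[Segment], top_k: int = 2) -> list[Segment]:
    """Rank transcript segments by word overlap with the question.

    The returned segments carry speaker and session metadata, so any
    answer built from them can be attributed to a person, not to an
    abstract chatbot.
    """
    q_words = set(question.lower().split())
    scored = sorted(
        segments,
        key=lambda s: len(q_words & set(s.text.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

segments = [
    Segment("Day 1", "Speaker A", "Humans should keep the right to be imperfect."),
    Segment("Day 2", "Speaker B", "Open data licensing matters for AI models."),
]
for src in retrieve_sources("right to be imperfect", segments, top_k=1):
    print(f"{src.speaker} ({src.session}): {src.text}")
```

A generation step would then compose its answer only from the retrieved segments and emit the speaker/session pairs as citations, which is exactly the technically-possible attribution Kurbalija argues the large providers could also offer.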
We are proposing one idea to the Norwegian hosts of the IGF: to have a Sophie’s World for AI, for those of you who have read Sophie’s World, and to have a session with the author of Sophie’s World in Oslo, and ask him how he would consider writing Sophie’s World today. In the process, we plan to engage discussion on Arab philosophy, Asian philosophy, Confucianism, Buddhism, Christian philosophy, Ubuntu, African philosophy. Those are traditions that have to feed into this serious discussion. Ethics is an important part, but I would say knowledge, education: what does it mean to be human? What does it mean to interact with each other? Is it going to be refined, or remain the same? Those are critical issues, in addition to ethics, bias, and other things. Now, without risking becoming persona non grata with our generous hosts, I would like to thank you for your patience and excellent questions. We will leave our cards; if you’re interested in this possible Sophie’s World, I don’t know if the Norwegians are going to click on that, I would like to invite you, with or without Sophie’s World, to continue this discussion and see how far we can move and provide more in-depth answers to your excellent questions. Thank you very much. Thank you. Thanks. Thanks for having me. Nice to meet you, Eric.

Jovan Kurbalija

Speech speed

136 words per minute

Speech length

3755 words

Speech time

1653 seconds

AI’s impact on human knowledge and identity

Explanation

Kurbalija expresses concern about the centralization and capture of knowledge by a few platforms through AI. He emphasizes the importance of preserving knowledge and developing bottom-up AI to maintain control over our knowledge and identity.

Evidence

Diplo Foundation’s efforts to develop bottom-up AI and show that it is technically feasible, financially affordable, and ethically desirable.

Major Discussion Point

Philosophical and ethical implications of AI

Agreed with

Sorina Teleanu

Agreed on

Need for critical examination of AI’s impact on society

Focus on immediate risks of AI rather than long-term hypotheticals

Explanation

Kurbalija argues for focusing on the immediate risks of AI in education, jobs, and day-to-day life, rather than long-term hypothetical scenarios. He criticizes the ideological narrative surrounding AI that postpones addressing current issues.

Evidence

Comparison to communist narrative that promised an ideal society in the future while ignoring present concerns.

Major Discussion Point

AI governance and development

Right to be humanly imperfect in an AI-driven world

Explanation

Kurbalija proposes the idea of a right to be imperfect in an increasingly efficiency-driven world. He argues that human breakthroughs often come from periods of leisure and imperfection, which may be threatened by AI-driven optimization.

Evidence

Historical examples of inventions and philosophical developments during periods of leisure in ancient Greece and Britain.

Major Discussion Point

Human-AI interaction

Agreed with

Sorina Teleanu

Agreed on

Importance of preserving human agency and identity in AI development

Potential for bottom-up AI development to preserve human knowledge

Explanation

Kurbalija demonstrates the possibility of developing AI systems that preserve and attribute knowledge to its original sources. He argues that this approach is technically possible and ethically desirable for maintaining the integrity of human knowledge.

Evidence

Demonstration of Diplo Foundation’s AI system that transcribes IGF sessions, analyzes content, and provides attributed answers based on the discussions.

Major Discussion Point

Human-AI interaction

Sorina Teleanu

Speech speed

180 words per minute

Speech length

2129 words

Speech time

705 seconds

Need to question anthropomorphizing AI and assigning human attributes

Explanation

Teleanu expresses concern about the tendency to assign human attributes to AI, such as understanding and reasoning. She argues that this anthropomorphization may lead to misunderstandings about AI’s capabilities and nature.

Evidence

Example of humanoid robots at the AI for Good Global Summit and the lack of critical questioning about their actual capabilities.

Major Discussion Point

Philosophical and ethical implications of AI

Agreed with

Jovan Kurbalija

Agreed on

Need for critical examination of AI’s impact on society

Lack of privacy policies for brain data processing in neurotechnology

Explanation

Teleanu highlights the lack of adequate privacy policies for brain data processing in neurotechnology companies. She argues that this raises concerns about human rights and data protection in the context of AI and neurotechnology integration.

Evidence

Survey of privacy policies of neurotechnology companies presenting at the AI for Good Global Summit, finding only one with a relevant mention of brain data processing.

Major Discussion Point

AI governance and development

Impact of AI on human-to-human communication

Explanation

Teleanu raises concerns about the potential impact of AI on human communication and relationships. She questions whether reliance on AI-generated text will change how humans sound and interact with each other in the future.

Major Discussion Point

Human-AI interaction

Agreed with

Jovan Kurbalija

Agreed on

Importance of preserving human agency and identity in AI development

Mohammad Abdul Haque Anu

Speech speed

115 words per minute

Speech length

131 words

Speech time

68 seconds

Concerns about AI ethics education and implementation

Explanation

Anu questions who is responsible for teaching AI ethics and how it should be implemented. He expresses concern about the lack of adherence to ethical guidelines in current technologies and worries about similar issues arising with AI.

Evidence

Reference to existing problems with misinformation and disinformation in social media as an example of ethical challenges in technology.

Major Discussion Point

Human-AI interaction

Tapani Tarvainen

Speech speed

179 words per minute

Speech length

304 words

Speech time

101 seconds

Possibility of AGI becoming indistinguishable from humans

Explanation

Tarvainen raises philosophical questions about the nature of AGI if it becomes as capable as humans in every way. He suggests that at some point, highly advanced AI might be considered human and deserve human rights.

Major Discussion Point

Philosophical and ethical implications of AI

Henri-Jean Pollet

Speech speed

170 words per minute

Speech length

457 words

Speech time

161 seconds

Importance of validating AI outputs through testing

Explanation

Pollet suggests that AI systems should undergo testing and validation processes similar to human education and graduation. He argues that this would help ensure the reliability and accuracy of AI-generated outputs.

Major Discussion Point

AI governance and development

Importance of open source models and data licensing for AI development

Explanation

Pollet proposes the use of open source models and various data licensing approaches for AI development. He suggests that this could help address issues of data ownership and promote more collaborative and transparent AI development.

Evidence

Reference to different types of open source software licenses as a potential model for AI data licensing.

Major Discussion Point

AI governance and development

Agreements

Agreement Points

Need for critical examination of AI’s impact on society

Jovan Kurbalija

Sorina Teleanu

AI’s impact on human knowledge and identity

Need to question anthropomorphizing AI and assigning human attributes

Both speakers emphasize the importance of critically examining AI’s impact on human knowledge, identity, and societal interactions, rather than accepting prevalent narratives.

Importance of preserving human agency and identity in AI development

Jovan Kurbalija

Sorina Teleanu

Right to be humanly imperfect in an AI-driven world

Impact of AI on human-to-human communication

The speakers agree on the need to preserve human agency, imperfection, and authentic communication in the face of increasing AI integration in society.

Similar Viewpoints

Both argue for more transparent and collaborative approaches to AI development that preserve human knowledge and promote open access.

Jovan Kurbalija

Henri-Jean Pollet

Potential for bottom-up AI development to preserve human knowledge

Importance of open source models and data licensing for AI development

Unexpected Consensus

Concern about oversimplification of AI ethics discussions

Jovan Kurbalija

Mohammad Abdul Haque Anu

Focus on immediate risks of AI rather than long-term hypotheticals

Concerns about AI ethics education and implementation

Despite coming from different perspectives, both speakers express concern about the current state of AI ethics discussions, suggesting a need for more practical and immediate approaches.

Overall Assessment

Summary

The main areas of agreement revolve around the need for critical examination of AI’s societal impact, preservation of human agency and knowledge, and more practical approaches to AI ethics and development.

Consensus level

Moderate consensus on the importance of addressing immediate AI challenges and preserving human elements in AI development. This implies a shared recognition of the need for more nuanced and practical approaches to AI governance and ethics.

Differences

Different Viewpoints

Approach to AI ethics and governance

Jovan Kurbalija

Mohammad Abdul Haque Anu

Kurbalija expresses concern about the centralization and capture of knowledge by a few platforms through AI. He emphasizes the importance of preserving knowledge and developing bottom-up AI to maintain control over our knowledge and identity.

Anu questions who is responsible for teaching AI ethics and how it should be implemented. He expresses concern about the lack of adherence to ethical guidelines in current technologies and worries about similar issues arising with AI.

While both speakers are concerned about AI ethics, Kurbalija focuses on bottom-up AI development and knowledge preservation, while Anu emphasizes the need for clear ethical guidelines and implementation.

Unexpected Differences

Anthropomorphization of AI

Sorina Teleanu

Tapani Tarvainen

Teleanu expresses concern about the tendency to assign human attributes to AI, such as understanding and reasoning. She argues that this anthropomorphization may lead to misunderstandings about AI’s capabilities and nature.

Tarvainen raises philosophical questions about the nature of AGI if it becomes as capable as humans in every way. He suggests that at some point, highly advanced AI might be considered human and deserve human rights.

While Teleanu cautions against anthropomorphizing AI, Tarvainen unexpectedly considers the possibility of highly advanced AI being indistinguishable from humans and potentially deserving human rights. This difference highlights the complexity of defining the boundaries between human and artificial intelligence.

Overall Assessment

Summary

The main areas of disagreement revolve around the approach to AI ethics and governance, the focus on immediate vs. long-term AI impacts, and the philosophical implications of advanced AI.

Difference level

The level of disagreement among the speakers is moderate. While there are differing perspectives on specific aspects of AI development and its implications, there is a general consensus on the importance of addressing AI’s impact on society. These differences highlight the complexity of AI governance and the need for multifaceted approaches to address various concerns.

Partial Agreements

Both speakers agree on the importance of addressing immediate AI impacts, but Kurbalija focuses more on practical risks in various sectors, while Teleanu emphasizes the potential changes in human communication and relationships.

Jovan Kurbalija

Sorina Teleanu

Kurbalija argues for focusing on the immediate risks of AI in education, jobs, and day-to-day life, rather than long-term hypothetical scenarios. He criticizes the ideological narrative surrounding AI that postpones addressing current issues.

Teleanu raises concerns about the potential impact of AI on human communication and relationships. She questions whether reliance on AI-generated text will change how humans sound and interact with each other in the future.


Takeaways

Key Takeaways

There is a need for more philosophical and ethical discussions around AI beyond just bias and ethics

Current AI development and governance discussions often lack critical questioning of long-term implications

Bottom-up AI development is technically feasible and ethically desirable to preserve human knowledge

AI’s impact on human-to-human interaction and communication needs more consideration

There are concerns about anthropomorphizing AI and assigning human attributes to it inappropriately

Legal and ethical responsibility for AI systems needs to be clearly defined

Resolutions and Action Items

Proposal to create a ‘Sophie’s World for AI’ session at the next IGF in Norway to discuss AI from various philosophical traditions

Plan to continue the discussion on philosophical implications of AI among interested participants

Unresolved Issues

How to effectively implement AI ethics education and guidelines

The potential long-term impacts of AI on human identity and interaction

How to balance AI efficiency with preserving human imperfection and agency

The ethical implications of AGI potentially becoming indistinguishable from humans

How to ensure transparency and attribution in AI-generated content and knowledge

Suggested Compromises

Developing AI through apprenticeship programs to balance efficiency with human oversight

Using open source models and varied data licensing to allow for more inclusive AI development

Implementing systems to validate and test AI outputs similar to human education processes

Thought Provoking Comments

We have been developing AI, and in addition we have been trying to see what are the philosophical, governance, political aspects of AI. The principle is that you don’t need to be a programmer, although we have quite a few programmers, but you have to understand basic concepts in order to discuss AI, otherwise the discussion ends up with unfortunately dominant narratives that we have here at IGF, and not only IGF, many meetings, bias, ethics, and we can generate typical AI speech.

speaker

Jovan Kurbalija

reason

This comment challenges the typical AI discourse and emphasizes the need to understand fundamental concepts beyond just technical aspects.

impact

It set the tone for a more philosophical and governance-focused discussion, moving away from purely technical or ethical considerations.

What roles do we imagine large language models playing vis-a-vis us humans as we interact with them? Are we missing the forest for the trees? We talk a lot about generative AI, but are there other forms of intelligent machines, agents, whatever you want to call them, that we might need to focus a bit more in our discussions on governance, policy, gain implications, and what does it all mean?

speaker

Sorina Teleanu

reason

This series of questions broadens the scope of the AI discussion beyond just large language models and encourages thinking about diverse forms of AI.

impact

It prompted participants to consider a wider range of AI applications and their implications, moving the conversation beyond popular topics like ChatGPT.

Therefore, my argument is that we have to fight to the right to be imperfect, to refuse to remain natural, not to be hacked biologically, the right to disconnect, the right to be anonymous, and the right to be employed over machines.

speaker

Jovan Kurbalija

reason

This comment introduces a novel concept of human rights in the AI era, challenging the focus on efficiency and perfection.

impact

It sparked a new line of thinking about human values and rights in an AI-dominated world, shifting the discussion towards more philosophical considerations.

Assuming that AGI becomes actually possible, which I’m not sure of, and it’s as good as people in every way, so why isn’t it human at that point? You could argue that those machines, if they are our peers in every way, they are our offspring, basically, and they should have human rights at that point as well.

speaker

Tapani Tarvainen

reason

This comment raises profound questions about the nature of intelligence, consciousness, and rights in relation to AGI.

impact

It deepened the philosophical aspect of the discussion, prompting consideration of the ethical and legal implications of highly advanced AI.

How would you see the AI… because the information is coming from so many sources. So is there a way that you can conceive of an AI system where you would have an open, I wouldn’t say open interface, to the data that is populating these AI processing engines, but in a common way, so that they can integrate? Because otherwise you have a kind of single model according to the topic map of some group of people, but not integrated.

speaker

Henri-Jean Pollet

reason

This comment raises important questions about AI data sources, integration, and the potential for more open and collaborative AI development.

impact

It shifted the discussion towards practical considerations of AI development and data management, prompting thoughts on open-source approaches to AI.

Overall Assessment

These key comments shaped the discussion by moving it beyond surface-level considerations of AI ethics and bias, delving into deeper philosophical questions about the nature of intelligence, human rights in an AI era, and the practical challenges of AI development and governance. The conversation evolved from initial critiques of typical AI narratives to exploring novel concepts like the right to be imperfect and the potential personhood of advanced AI. This progression led to a rich, multifaceted dialogue that touched on technical, ethical, philosophical, and practical aspects of AI’s impact on society and human identity.

Follow-up Questions

How can we develop bottom-up AI to preserve knowledge and prevent centralization?

speaker

Jovan Kurbalija

explanation

This is important to address concerns about knowledge being centralized and captured by a few platforms, potentially impacting identity and dignity.

How will our communication and relationship with each other as humans change due to increased reliance on AI-generated text?

speaker

Sorina Teleanu

explanation

This explores the long-term implications of AI on human-to-human interactions and communication styles.

Can we develop intelligent machines that mimic other forms of intelligence found in nature, like octopuses or fungi?

speaker

Sorina Teleanu

explanation

This questions our human-centric approach to AI and suggests exploring alternative models of intelligence.

What does it mean to still be human in an increasingly AI-driven era?

speaker

Sorina Teleanu

explanation

This philosophical question is crucial for understanding the evolving relationship between humans and AI.

How can we implement a ‘right to be humanly imperfect’ in an efficiency-driven world?

speaker

Jovan Kurbalija

explanation

This explores the tension between human nature and the push for AI-driven efficiency.

Who should be responsible for teaching AI ethics?

speaker

Mohammad Abdul Haque Anu

explanation

This addresses the need for clear guidelines and responsibility in AI development and deployment.

At what point should we consider highly advanced AI as deserving of human rights?

speaker

Tapani Tarvainen

explanation

This philosophical question challenges our definitions of humanity and rights in relation to AI.

How can we develop a system to validate and test AI outputs to ensure they are not hallucinations?

speaker

Henri-Jean Pollet

explanation

This addresses the need for quality control and reliability in AI-generated content.

How can we create a common, integrated data model for AI that allows for collaboration while respecting data ownership?

speaker

Henri-Jean Pollet

explanation

This explores the potential for standardization and open collaboration in AI development.

Could open-source software licensing models be applied to AI training data to address issues of ownership and usage rights?

speaker

Henri-Jean Pollet

explanation

This proposes a potential solution to data ownership and usage concerns in AI development.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Open Forum #1 Challenges of cyberdefense in developing economies


Session at a Glance

Summary

This panel discussion focused on cybersecurity and cyber defense challenges facing developing economies. Experts from various fields shared insights on key issues and potential solutions.

The panelists emphasized that while cyber threats are similar for developed and developing nations, the latter often lack adequate preparation, skilled personnel, and effective policies to respond. They highlighted the importance of capacity building, noting the significant skills gap in cybersecurity professionals in developing countries. The need for critical thinking, effective communication, and promoting collaboration were identified as crucial skills for Chief Information Security Officers (CISOs) in these regions.

Several speakers stressed the importance of international cooperation and trust-building between nations to combat cyber threats effectively. They discussed the role of artificial intelligence in both offensive and defensive cybersecurity measures, as well as the increasing sophistication of attacks targeting critical infrastructure and supply chains.

The discussion also touched on the challenges of participating in numerous international cybersecurity forums, with limited resources available to developing nations. Panelists suggested focusing on demand-driven approaches to capacity building and leveraging existing frameworks and resources rather than reinventing the wheel.

Legal frameworks were addressed, with emphasis on the need for well-trained law enforcement personnel rather than simply creating new laws. The panelists concluded that effective implementation of existing tools and laws, coupled with sustained capacity building efforts, is crucial for improving cybersecurity in developing economies.

Keypoints

Major discussion points:

– The importance of preparation, people, and policy for effective cybersecurity in developing economies

– The need for capacity building and skills development to address gaps in cybersecurity capabilities

– The challenges of limited resources and expertise in developing countries for cybersecurity

– The role of international cooperation and information sharing in improving cybersecurity

– The importance of implementing existing frameworks rather than creating new laws/regulations

Overall purpose:

The goal of this discussion was to explore cybersecurity challenges and strategies for developing economies, with a focus on practical steps these countries can take to improve their cyber defenses despite limited resources.

Tone:

The tone was collaborative and solution-oriented throughout. Speakers built on each other’s points and emphasized the need for practical, implementable approaches rather than just theoretical frameworks. There was a sense of urgency about the importance of cybersecurity for developing nations, but also optimism about existing resources and frameworks that can be leveraged.

Speakers

– Olga Cavalli: Moderator

– José Cepeda: European parliamentarian from Spain

– Merike Kaeo: CISO, board member and technical advisor

– Ram Mohan: Chief Strategy Officer of Identity Digital, former ICANN board member

– Christopher Painter: Director of Global Forum on Cyber Expertise, first cyber diplomat in the world

– Wolfgang Kleinwächter: Professor emeritus of University of Aarhus, former commissioner of the Global Commission of Stability and Cyberspace

– Philipp Grabensee: Defense counsel and former chairman of Afilias

Full session report

Cybersecurity Challenges and Strategies for Developing Economies: A Comprehensive Panel Discussion

This panel discussion brought together experts from various fields to explore the cybersecurity and cyber defence challenges facing developing economies. The conversation was solution-oriented, emphasising practical approaches to improve cyber defences in countries with limited resources.

Key Challenges for Developing Economies

The panellists agreed that while cyber threats are similar for developed and developing nations, the latter often lack adequate preparation, skilled personnel, and effective policies to respond. Merike Kaeo highlighted the significant skills gap in cybersecurity professionals in developing countries, while Ram Mohan stressed that the first point of failure in cyber incidents is often the lack of preparation among systems and people.

The discussion revealed a consensus on the critical importance of capacity building and skills development. Christopher Painter emphasised the need for technical assistance, while Wolfgang Kleinwächter argued that developing countries should define their own cybersecurity needs rather than relying solely on exported models from developed nations.

Essential Skills and Strategies

Merike Kaeo identified critical thinking, effective communication, and promoting collaboration as crucial skills for Chief Information Security Officers (CISOs) in developing regions. She also emphasised the importance of CISOs being stakeholders in developing national cybersecurity laws and regulations. Ram Mohan emphasised the importance of preparation, people, and policy as key factors in cybersecurity readiness.

Several speakers, including José Cepeda and Christopher Painter, stressed the importance of international cooperation and trust-building between nations to combat cyber threats effectively. Merike Kaeo echoed this sentiment, highlighting the value of collaboration and information sharing between countries.

Importance of Preparation and Drills

Ram Mohan and other speakers emphasised the critical role of preparation and regular drills in enhancing cybersecurity readiness. They stressed that organisations and nations should conduct frequent exercises to test their response capabilities and identify areas for improvement.

Future Cyber Threats

José Cepeda provided a forecast for cyber threats in 2025, highlighting the increasing sophistication of attacks targeting critical infrastructure and supply chains. He also discussed the potential role of artificial intelligence in both offensive and defensive cybersecurity measures.

International Forums and Frameworks

The panel discussed the challenges developing nations face in participating in numerous international cybersecurity forums due to limited resources. Christopher Painter highlighted several important forums, including the UN Open-Ended Working Group, the Global Forum on Cyber Expertise, and the upcoming WSIS+20 event. Wolfgang Kleinwächter pointed to the African Digital Compact as a model for regional strategies.

Olga Cavalli raised the question of how developing countries can find the time and resources to prepare information for sharing with colleagues, highlighting the practical challenges of international cooperation. She also noted the language barriers in accessing cybersecurity information, a point echoed by Ram Mohan, who stressed the importance of accessibility of information in the right language and at the right level.

Legal and Policy Considerations

Philipp Grabensee cautioned against hastily creating new laws in response to cybercrime, emphasising instead the importance of enforcing existing laws and building capacity. He also discussed content-related crimes and the potential negative consequences of rapidly implemented legislation. This view aligned with Ram Mohan’s focus on preparation and policy implementation rather than constant policy changes.

José Cepeda discussed the development of common certification systems in the EU, while Christopher Painter stressed the need for political will to prioritise cybersecurity.

Practical Approaches and Resources

The panel suggested several practical steps for improving cybersecurity in developing economies:

1. Utilise existing resources like the Global Forum on Cyber Expertise (GFCE) framework and materials.

2. Implement established guidelines such as Australia’s Essential Eight principles and the Center for Internet Security’s 10 essential controls.

3. Focus on practical, small steps in building cyber defence rather than overwhelming large-scale changes.

4. Encourage developing nations to set up national CSIRTs (Computer Security Incident Response Teams).

Ram Mohan emphasised the importance of taking small, practical steps in building cyber defence for developing economies, rather than attempting comprehensive changes all at once.

Changing Nature of Cybersecurity Personnel

Wolfgang Kleinwächter highlighted the evolving role of military personnel in the context of cybersecurity, noting that future conflicts may require different skill sets and approaches compared to traditional warfare.

Conclusion

The discussion highlighted the complex challenges facing developing economies in cybersecurity, emphasising the need for capacity building, international cooperation, and strategic resource allocation. While there was broad consensus on the importance of these issues, the panel also recognised the need for tailored approaches that consider the specific contexts and needs of developing nations. Moving forward, the focus should be on implementing existing frameworks, building human capacity, and fostering sustainable, locally-driven cybersecurity strategies that prioritise preparation, skill development, and practical, incremental improvements.

Session Transcript

Olga Cavalli: It’s Chris, I can hear you. Wolfgang and Ram, they have their own conversation, I could tell from here, so… Hello, hello. Okay, perfect. Okay, thank you for being… Hola. Thank you. Let’s start, because we have only one hour. Thank you. Thank you very much for being with us. Thank you, Philipp. Thank you, Chris. Thank you, Merike, for being with us remotely. And finally, we have another big audience, but here are the good ones. More people are coming. But as we only have one hour, and we have a lot to talk about, I would like to start. First, thank you to all of you. Thank you, Jose. Thank you, Wolfgang. Thank you, Ram, Philipp, Merike, Chris, and those of you who are here with us. We have this space to talk and exchange some ideas about cyber security and cyber defense in developing economies. We have some issues here. So, I would like to start by presenting our distinguished panelists. We have Mr. José Cepeda. He’s a European parliamentarian. He’s from Spain. We have Merike Kaeo, joining remotely. She’s a CISO, board member and technical advisor. Hi, Merike. We have Ram Mohan here with us. He’s chief strategy officer of Identity Digital and a former ICANN board member. We have Chris Painter, our dear friend Chris, remotely from the United States. He is the director of the Global Forum on Cyber Expertise, and Chris was the first cyber diplomat in the world, so he’s very well known for that. We have Professor Wolfgang Kleinwächter, also a very good friend of ours, professor emeritus of the University of Aarhus and former commissioner of the Global Commission on the Stability of Cyberspace, the GCSC. And we have our dear friend Philipp Grabensee from Germany; he is a defense counsel and former chairman of Afilias, which is a company devoted to DNS and internet services. So thank you all for being with us. And I would like to start with a statement from José Cepeda from the European Parliament.
Jose will make some remarks in Spanish and I will translate into English. And if you want to practice your Spanish, it’s a good moment to listen to Jose. Jose, the floor is yours. Thank you.

José Cepeda: It’s okay. Good. I will. Okay. Well, thank you. Thank you all. Thank you for your invitation to this panel. It’s very important for Europe, for the European Parliament, to debate cyber security and cyber defense. But we think it is very important to speak in Spanish for the Latin American area; it is very important for us. Yes. Well, I want to speak a little bit in Spanish, especially for that area so important to the world, which is Latin America, with our colleagues who are doing an immense job, a great job in recent years to promote cyber security policy and cyber defense policy in all their countries. From the European Parliament, in my introduction, what I wanted to contribute is a bit of the projection work that we are doing, a prospective work in a very complex world context, based on multiple military conflicts that we are also having at the border of Europe, with Ukraine, for example, and the whole Middle East, which is affecting us and pushing Europe to become aware of the importance of future forecasts and of where we have to direct our cybersecurity policies to protect ourselves. In this sense, I would like to share with all colleagues the work that is being directed towards hybrid cyber-threats, starting, on the one hand, with the multi-channel attacks, as we call them, which are, in the end, the famous state actors, sponsored by states, that are working in a direct way, in a multiple way. First, they generate disinformation structures, and then what they do is spread them. In this way, they also unbalance countries in some way. Of course, we are working on this analysis. We are also talking about technical cyber defense, which is a very important element where artificial intelligence is opening a new path, let’s say, for the bad guys.
And what we also have to do in the field of defense is work to protect ourselves, generating cyber-shields also based on artificial intelligence that make forecasts of possible cyber-attacks and, above all, high-level structures to respond in real time. I mean that artificial intelligence, as a technology, is not only going to serve the bad guys; all European structures, for example, are already working in that direction as well. There is also a very important element that is linked to mass espionage. I am talking about critical platforms based on the satellite network, encrypted communications, and data processing centers, where, for example, according to the European Union Agency for Cybersecurity, ENISA, attackers will use methods involving advanced, stealthy, deep malware, for example using infiltration techniques in firmware, and this will be very common. Personally, I am very concerned about the preparedness of countries to put themselves at the forefront, for example, in quantum computing. That is a question where we are falling far behind. I mean, there are many private companies that have it, and yet there are many governments that do not, because large investments are needed, and that is also one of the very important issues that we are going to work on. The forecast for next year is also based on the automation of cyberattacks around polymorphic malware, based on artificial intelligence, with malicious mutating code, that is, code that is inserted in critical infrastructures and that, possibly thanks to technology based on artificial intelligence, keeps mutating as cyber defense structures are developed, in this case, by institutions or governments. That is going to be very complicated, but precisely because of that, it is also very important to know how it will evolve in the coming years.
Regarding autonomous cyberattacks, talking about botnet systems and distributed denial-of-service (DDoS) attacks, for example, without a doubt, they will reach new scales; they will be programmed to adapt precisely and dynamically, based on technology built on artificial intelligence. Perfect. Very good, very good. Thank you, José. I will translate. José is explaining to us all the preventions and activities that they are doing for the cyber security and cyber defense forecast for 2025. So first he spoke about the escalation of hybrid threats through integrated multi-channel attacks from states and state-sponsored actors that combine cyber attacks with disinformation, digital sabotage and kinetic activities. As an example, he explained the manipulation of ICS networks to disrupt power, followed by disinformation campaigns to maximize social impact. Then he explained to us about technical cyber defense: early detection systems with machine learning algorithms will be necessary to identify patterns in these hybrid actions. And then he explained targeted mass espionage: critical platforms such as satellites, critical communications and distributed data centers will be prime targets. According to ENISA, that’s the European Union Agency for Cybersecurity, methods will include the use of advanced, stealthy, deep malware and firmware infiltration techniques. Then he explained to us the automation of cyber attacks and artificial intelligence based threats. Not only artificial intelligence used by bad actors, but also by good actors. Polymorphic malware with artificial intelligence: attackers will use artificial intelligence to generate malicious code in real time that evades traditional detection solutions. This type of malware will be especially problematic for environments that are not updated or do not implement artificial intelligence based adaptive systems. Then he explained to us about autonomous cyber
attacks, about botnets, systems that launch distributed attacks such as denial of service, which will reach new scales by being programmed to dynamically adapt to defense responses. And then finally, he explained about technical cyber defense: SIEM, security information and event management, and SOAR, security orchestration, automation, and response platforms, will be essential for managing automated responses. All these are issues about the cyber security and cyber defense forecast that they are

Olga Cavalli: preparing for the next year. It seems like we have a round of comments, if that’s possible. OK. I would suggest, after these very interesting comments that Jose made about the forecast for cyber defense in 2025, I would like to go to the questions for our panelists. Allow me to find my script. So Merike, are you there? Now I can hear you. Merike, you’re an experienced CISO, a woman devoted to cyber security. Based on your experience, which are the skills that a CISO must have, especially in a developing country, to deal with the challenges that developing economies usually face in relation to cyber security and cyber defense, and also with what Jose shared with us, the threats they forecast for next year? And welcome. Thank you very much for joining us.

Merike Kaeo: Yeah. Thank you very much for the question. Being a chief information security officer is a position that has evolved over the years, and it can mean different things to different people. However, to me, the role has always meant that you are the person responsible for developing and implementing the strategy to provide resilience and trustworthiness in our digital environments. And in developing countries, which are sometimes still evolving to create effective regulation and national cyber security laws, you are most often also a stakeholder and should be in the room to be a voice, especially so if you’re a CISO in critical infrastructure. So I’m going to list three primary skills. One of them is that you absolutely must have critical and strategic thinking. And part of that is because in developing economies, you’re often faced with challenges that include lack of resources. And it’s not always financial. Really, I think the biggest challenge is a skills gap, where you just don’t know or you don’t have the people that can help with overall cyber security roles. And this lack of resources and of an effective team means that sometimes the CISO has to be the security architect, the security operations team, the security operations center, the incident response team, and the threat intelligence team. They have to do everything while they’re trying to prioritize what needs to be done and how to actually get it done. So by utilizing strategic thinking, a CISO in developing economies can determine when to outsource and which tasks need to be prioritized. Most companies that provide cyber security help will usually have a list of the top five or ten items to do, and they think, oh, that’s not so much. Well, in many developing economies, with the lack of resources, you might only be able to do one or at most two of these items. So which ones do you choose?
And when outsourcing, it is extremely important to be strategic and ensure that capacity building training is included so that developing economies can build internal knowledge and expertise to provide for future opportunities within their own countries. It can also be beneficial because sometimes in developing countries, there are language constraints. So being able to communicate. in your own language. The second skill is having effective communication skills because you must be able to communicate critical risks that are relevant to your organization, industry, or nation state. And as I previously mentioned, you are a stakeholder typically in perhaps developing legal constructs and also regulations within your developing country. And effective communication can also help build trust and collaboration, which is what brings me to the third extremely important skill of being able to promote collaboration and information-sharing. This is absolutely critical in developing economies. We all learn from each other. I’ve had the privilege to work in a very global environment, and I know that the Pacific Island nation-states, Southeast Asia, Latin America, Africa, the Balkans, I mean there are many, many information-sharing groups that are region-specific. And this very much helps developing economies because within a region you usually have different levels of maturity when it comes to cybersecurity, either defense or understanding or skills. And so it’s not even that you just build up sharing groups within your own sector, being financial or health care or what have you, but sometimes also you have similar issues based on geographic region. So that information-sharing and collaboration as to which threats you’re most vulnerable to, right, what is actually happening in your region is extremely important. 
And also what is critical regarding collaboration is that you must know who to escalate when, as a CISO, when you see that there’s nation-state relevant information that is specific and that can target your specific nation-state or region. So to sum it up, I think the three skills that are really important are one, critical and strategic thinking, two, effective communication, which means with the technical sector, with policymakers and regulators, and also three, which is extremely important to me, is promoting collaboration and information-sharing. So thank you for that, for giving me a chance to enumerate on those aspects.

Olga Cavalli: We always come back to the things we always talk about: capacity building, learning and exchanging information. I think this is so important, but sometimes, and I have worked in several technological environments, we don’t have that much time, and we have very few resources. So sometimes it’s not that you don’t want to share the information, it’s that you don’t have the time to prepare the information to be shared with other colleagues, because sometimes you have to reshape it or prepare it to be easily exchanged among colleagues. That is something that has happened to me, and maybe what we lack is the time, or somehow a resource that could help us make the communication easier. So, Ram, you are an expert in critical infrastructure, and I consider the DNS critical infrastructure. And you are the chief strategy officer of a very big company that has infrastructure spread all over the world, and for which security is a main issue, because if the DNS doesn’t work, most of the activities that we do on the internet won’t be possible to perform. So how do you think this critical infrastructure is being protected in a developing economy? Which measures and actions can local people take to protect this national and critical infrastructure from cyber attacks?

Ram Mohan: Thank you. Can you hear me? Okay, great. Thank you. We have the privilege of serving both developed nations as well as nations that are developing. So we run the critical infrastructure of Australia’s .au; we’re the designated service provider, and we’re actually designated a critical infrastructure provider for Australia, a developed nation. But we also do this for many other smaller countries, countries in the Caribbean. We do this for Belize, and we’re going to be doing this shortly for Anguilla’s .ai, in just a little while. And what you find is that the nature of the threat is not any different. The kinds of threats that developed nations encounter and the kinds of threats that developing nations encounter are no different. The scale and size of the threats are also often not much different. What is different is preparation, people and policy. Those are the three things that distinguish the responses of a developed nation from a developing one. Merike already spoke about resources, and you spoke, Olga, about accessibility of information. It’s not enough to just have the data on what the threats are and how to respond. It is important to have it accessible in the right language, at the right level; you have to calibrate it. But in reality, in my perspective, when the problem actually happens, when you have a nation under attack, a nation’s critical assets under attack, when you have the banking system that is crucial being targeted, when you have telecommunications networks that are in trouble, the very first thing that fails are the systems and the people who are unprepared. And it doesn’t matter if you have great resources, great knowledge, great education; you will find, and this is true even in developed nations, but it’s especially true in developing countries, that there is no preparation for it.
They have read the paper, they have seen the website, they have even had a discussion at the cabinet level on the DDoS attack that had happened or the fact that you need to secure your routers, right? So they have the theoretical knowledge, but when the attack happens, they’ve never drilled for it before. So what you find is that preparation is the crucial difference between a developed nation’s response to a cyber attack and a developing nation’s response. The second thing is people. Often you will find in developing countries that only a few people have the knowledge to distinguish whether a problem that is occurring is an attack or merely an error. And if those people are not available, or on vacation, right? I mean, I can tell you a story: in one of the countries that we serve, there was one primary person responsible for cyber defense, and his wife was giving birth. He was in the hospital, the country came under attack, and the systems went down, because he had to choose between being there for the baby or being there for the country, and he chose the baby, right? But it’s a real life issue, right? So people, the second thing: you just don’t have enough resources in that area. The third part is policy. You find in developing nations, governments look at, say, the UN SDGs. They look at the various protocols or capability and maturity models. The GCSC had a bunch of norms that they developed on stability in cyberspace. They are excellent frameworks, but you need governments to actually take those frameworks and implement policy so that it gets into curriculums, it gets into training systems, it gets into other governmental departments. It becomes a priority for those departments. An example there: if you look at Australia, for instance, several years ago they got really concerned about cyber defense, and the government came up with what they called the Essential Eight.
These are eight essential principles for cyber defense. They include well-known things like two-factor authentication, et cetera. But what they did was implement policy. They said every government department within 12 months must implement the Essential Eight. And then two years later, they said every critical infrastructure provider must certify implementation of the Essential Eight, right? So I think what you need in developing economies for success here, for a proper cyber defense strategy, is preparation, people, and policy.

Olga Cavalli: And then the policy, the eight things, very interesting. Although I think that developed companies and countries are also attacked. That caught my attention, because there are nations and companies that have a lot of resources to build very secure infrastructure, and even so they come under attack. So developing economies are in a much more vulnerable situation, yes. So preparation is the issue, okay. Now I would like to go to José. José, can you share more with us, can you continue commenting?

José Cepeda: Well, the next point I want to make about cyber defense, and it's very, very important, is collaboration at the international level. Sorry, I will try in Spanish and in English. It is very important to convey, especially for our listeners and collaborators in the Latin American space, that there is a unique structure that can unite everything: trust between countries, trust between governments, trust to develop international cooperation policies. That is what I have seen in all the experience I have had over the years working in Latin America, and it is something we are also starting to develop in a very important way in the European context. Just a few weeks ago, the former Finnish President, Sauli Niinistö, presented a report talking about cooperation and European intelligence to unite the 27 countries of the European Union. Well, in that context, there is joint work of NATO, the North Atlantic Treaty Organization, with the European Union, to promote a great cyber coalition, precisely based on trust, to be able to work in a single European common intelligence system. Here I have some colleagues from the European Parliament, Gálvez and some others, who have been working in a very important way on NIS2, which is a cooperative environment, also a series of rules that are setting the pace for the 27 countries, a series of standards based on cybersecurity, which will undoubtedly be the environment of the future, such as the EU cybersecurity certification framework, which is very important because it implements common certification systems throughout the European Union, precisely for the critical infrastructures of the 27 countries. Well, in short, I don't want to extend much more.
I think that, especially thinking about next year, 2025, the main cyber threats will be based on a greater sophistication of cyberattacks, the use of artificial intelligence as a weapon, both offensive and defensive, and an increase in the risks associated with technology, based on the Internet of Things and, above all, the supply chains. And cyber defense strategies must include active defense measures, must include predictive intelligence, based above all on artificial intelligence, and a series of solid regulatory frameworks based on that international cooperation that I mentioned, and, above all, protection of the physical, critical infrastructures, also based on new technologies. I believe the bet on the future is going to become a reality in the coming years, and it has a lot to do with quantum computing. All countries are taking quantum computing seriously, precisely to develop the entire structure of artificial intelligence, and, of course, the bad guys, to name them somehow, who are already using it, will have many more resources within their reach. And a lot is at stake here, not only for government structures at the civil level but, without a doubt, for all the armed forces and all the structures at the military level and at the level of the defense of all our countries.

Olga Cavalli: What José explained is a very interesting issue: there is a thing, which is trust, trust among different countries. And he mentioned a report presented by the former Finnish President to Ursula von der Leyen, proposing to create a European intelligence cooperation agency, something like that. It is a project based on trust, and there is joint work with NATO, a coalition based on trust, a unique way of harnessing all these threats.
We think that for next year we expect more sophistication in all the attacks, and that artificial intelligence will be used not only for defense but also for offensive attacks. There will also be attacks done through the Internet of Things, things connected to the Internet, and through the supply chains using the Internet of Things. So all cyber defense strategies must use predictive intelligence, the regulatory frameworks must be based on trust, and quantum computing must be considered, because the big capacity that computers will have in the future will have a very big impact on cybersecurity and cyber defense, not only for the countries but also for the military forces. Okay, thank you very much, José.

Olga Cavalli: We mentioned this gap that exists in human resources in cyber defense. Just to give you an idea, at my university we have opened a new degree program in cyber defense, and we had one applicant in one month. People are in such high demand that they don't have the time. So Wolfgang, what do you think?

Wolfgang Kleinwachter: It's indeed a difficult question, and it was already mentioned by previous speakers. We have this gap, the skills gap, which is clear. We have the resource gap. And then what Ram said: preparation, people, and policy make the difference. But it's a complex problem which you cannot settle with one hit. So that means you need a number of different initiatives, which pull together into a stream that will enable the developing countries, or the Global South, to close the gap step by step. And certainly it also needs help from the Global North. Help, I would say, quote unquote, in quotation marks, because the best help is if you just provide resources which enable those countries to find their own way, because otherwise they just become a target for the export of models. And I see a problem here, because the whole world agrees that capacity building in AI, in every domain, is extremely important. There is no disagreement. But what we have seen in the General Assembly of the United Nations this year: we had two resolutions, one sponsored by the United States and one sponsored by China, and the Chinese resolution in particular is about AI capacity building in third-world countries. And it got overwhelming support. The Americans even supported the Chinese resolution, and China supported the U.S. resolution. So that's fine; there is no reason to be against it. But there is a risk in it: capacity building organized by China will include the export of the Chinese model, and the capacity building offered by the United States will include the export of the American model. So I think the challenge, and this goes to policy and people, is that developing countries and the Global South have to develop their own strategy and define exactly what they need.
And if they have a list which specifies exactly their needs, then they can ask who can help to meet those needs, so that you are not dependent on the big brother or the big sister or the big uncle, but you start from your own needs. And I think this is an important point that has to guide the long-term strategy for the Global South. That's why the African Digital Compact is so important: they had their own digital compact, which emerged in the cradle of the Global Digital Compact in the United Nations, but the African Digital Compact has specified the specific needs of Africa. And what is relevant for the general digital strategy of those countries is also important for the defense sector, for AI, because, as Ram has said, when it comes to attacks there is no difference between developed countries and developing countries. It's the same. The question is how you react, how you prepare the people, and what policy you have in place, and in particular this preparation aspect, so that you have not only one person but a backup, so that when one person is on vacation, the other one is in the office. So I think that's a problem. But there is another aspect, which includes also the preparation of military personnel. I was in another workshop a couple of months ago where they discussed what the soldier of the future is, how to train the soldiers of the future. In the 20th century, you needed strong young men who could move very fast. But today, if you have a young man who is overweight and sits and is very capable with the computer and the keyboard, he is probably the better soldier. So the whole AI revolution in the military field will also change, or it's a challenge to understand, how to train the soldiers of the future. So the best thing is that you do not need them.
So that means peace is always the better option. But you have to be prepared, and if you have the wrong soldiers, then you will lose the war in the 21st century.

Olga Cavalli: I like this soldier-of-the-future concept.

Olga Cavalli: I think it's very interesting to think about. But regarding this independence that countries should have in capacity building, there is a big challenge, because the technology is developed in a few countries, I would say mainly in two countries, and all the rest of the world is using that technology. So capacity building is also based on which technology we are using. I would like to ask Chris. Chris, are you there? Thank you for being with us; it's a pity not to have you here at the IGF. You are an expert in cyber diplomacy and international relations, the first cyber diplomat in the world. Which international and regional debate spaces should our developing economies focus on, especially considering that sometimes they don't have the resources to follow all the spaces and go to all the meetings? Which ones would you say are the most important?

Christopher Painter: Olga, thanks. Good to be with you all. Sorry I can't be there in person; I wish I was, but sadly I'm not. I'd say a couple of things. First of all, it is a real problem that there is a myriad of different forums that people, particularly developing-world countries, the Global South, should be participating in, not just to gain knowledge but also to share their experience, because that's critically important. And to the point that others have made, when we talk about capacity building, I view that as a foundational element for everything else in cyberspace. If you don't actually have the capability, both the policy capability and the technical capability, you can't really participate well in these forums, or secure your own systems, or respond to attacks. So that really is, I think, the connective point. And as Ram noted, you need a number of different factors, including the political will in the country too: not just the technical people saying they want the training, but also the political commitment that this is a real priority for them. And that's becoming more real, but that's been hard. That's why the Global Forum on Cyber Expertise (GFCE), for instance, one of the groups that I do a lot of work with and have for years, which is a capacity-building platform that has 60 countries, a couple dozen companies, civil society, and academia, has moved very much to a demand-driven approach, as was said: ask the Global South what they want, rather than saying, here's what you get. And that's a much more sustainable model. But in terms of the forums you mentioned, there is obviously the UN, the Open-Ended Working Group in particular, which is dealing with cyber.
I'd say that when that first met, now about eight years ago, I was struck by the number of developing-world countries who came and said: look, we'd love to debate things like norms and these esoteric concepts, but what we really need is help. We need help with our own capacity. We need help in building our institutions and our technical capabilities. So that's a critical one. I'd say there is some good news on trying to get more Global South participation, particularly a women-in-cyber program that's been administered by the GFCE, with many countries behind it. Lots of women from the developing world have been going to that session, participating and making interventions, so that's important. There has been training that UNODA has done for cyber diplomats, especially from developing-world countries. And of course a number of others, Diplo and others, have done that, and that's important. Coming up this year is WSIS+20, which will be a big deal, I think. It's hard to believe that we're already at plus 20; I remember plus 10. But that's an important one, I think, for developing-world countries to be at. The GFCE, our platform, has 220 members and partners, and we have created regional hubs to allow these countries to more easily interact, including the Pacific Islands, ASEAN, the Americas region with the OAS, the African Union, and the Baltics. So that's one way these institutions are trying to come to the community. The ITU, I think, is important, more on the technical side. FIRST, for first responders and CERTs. The Council of Europe and the UNODC for cybercrime issues. And so, as you noted, the problem is there are so many different forums that these folks could attend. You need the right people to attend; you can't just have your representative in New York, for instance, at the UN, go to all these meetings unless they have the expertise.
And you need the ability to attend them. Even for big countries, trying to attend this plethora of meetings is hard. For small countries, it's one person. I'll give you an example from a Pacific Island country: a wonderful person from Fiji who is doing their cybersecurity there, but is also traveling to New York and these other forums to try to participate. It's very difficult for her to split that time. And so we need to figure out how we can more constructively engage with the developing world and allow them to be part of these forums, because it's critically important. And obviously the IGF is another forum as well. This is a real challenge, because we can't simply have the developing world at one level and the rest of the world at another level. For practical reasons too, even from the developing world's standpoint, they need a lot of countries working with them to be able to go after cyber threats, which are often routed through these countries if they don't have strong laws or capabilities.

Olga Cavalli: Thank you, Chris. And the good thing is that now most of the events are hybrid and many things get recorded. I know it's not the same; it's lovely to be here and interact with people, share a coffee, share a sandwich. But if you want to do research or you want to know about something, you can find information online, and luckily in several languages. And language is also a big barrier, at least for Latin America. Not everyone speaks English well enough to understand foreign speakers or read the documents clearly. And now I would like to go to my dear friend, Philipp Grabensee. Philipp is a defense counsel and also an expert in DNS infrastructure, which is a critical infrastructure, because he was formerly chairman of Afilias, a global company that managed DNS infrastructure and has now merged with another company. Philipp, how do you think developing economies can find guidance or reference in order to update their legal frameworks to fight against cybercrime, with prevention as an important element, together with regulations and policy? How can agility in developing these regulations really be achieved so they stay up to date? Welcome, and it's a pity we don't have you here, but we are seeing you online.

Philipp Grabensee: Thanks for having me. This is a tough question to answer in the remaining two, three, four minutes I have, but let me try to summarize it a little bit and make a few points. So far we have basically talked about crimes against computer systems, but another big part of cybercrime, and of fighting crime, is the content-related crimes. And I think we can learn from the mistakes which have been made within legal frameworks, because every time something horrible happens, and I will give you an example of that, society and the people call for new laws, new tools for enforcement agencies, and tougher laws, and a lot of times this crying out for new laws has very negative side effects. I think the problems are really somewhere else, or a lot of the problems are somewhere else. In the discussion we had, it has become very clear that the problems are not so much the laws or the framework or the technology; it's really that people are unprepared. The problem is what Merike calls the skills gap: as much as you have the skills gap among people in the technology field, you also have a skills gap, people unprepared, in law enforcement. So it's not really the technology, it's not so much the framework, it's really about the missing skill set and the preparation of the people. The example I'm giving you here is the horrible cases of possession of child sexual abuse material in Germany. There were horrible cases in the news, a lot of big outrage in society, and penalties were increased and new laws were passed, a reform of the German criminal law, which in the end led to unintended consequences for teenage sexual expression in digital media, because suddenly all kinds of
exchange of information and exchange of pictures by teenagers on media were suddenly criminalized, a crime. So that was a really negative side effect of that call for new regulations. And the real problem was: why was law enforcement not effective before against these horrible crimes, or where are the weaknesses that still remain in law enforcement fighting them? The problem is that you lack the capacity of well-trained people in law enforcement. That was really the problem. And this problem has not been solved by increasing fines or introducing new paragraphs criminalizing certain behavior. It is the same as what Ram said and what Merike said: it's really the capacity of the people, the capacity building, the training of the people, the skills gap. In the specific example of sexual abuse material, in Germany we were lacking the law enforcement officers who were prepared for exposure to traumatic material, who had the psychological training to deal with it. We did not have enough people to do it, nor prepared people to go through the internet and look at this content. So that's why it is still a problem. So what you need here: of course frameworks help, and you can have a framework for capacity building. But it's not so much the frameworks, and it's not so much the laws; it really comes down to the people. Of course, artificial intelligence helps you to go through the internet and identify potential crimes. But in the end, it has to be people who look at it and bring it to prosecution. And that is what really helps protect the victims of those horrible crimes.
So I can only echo what my colleagues said: also in regard to content-related and cyber-enabled crimes, it's the same as what counts for crimes against the computer system itself.

Olga Cavalli: It's not only having the policy, it's making it work, making it relevant. So we have five minutes. Is there anyone in the audience who would like to add something or make a comment? Otherwise we do a final minute per speaker, and then we have to leave the room. I will start with Ram, who's looking at me directly.

Ram Mohan: Thank you, Olga. This is a really terrific set of comments that have come through from everybody. What strikes me as a useful next step is to think about collating the information from a session like this, going back to the developing countries, at least those we know, and checking whether this will actually work. You see what the GFCE is doing, what Chris was talking about. They already have a framework. They already have material that is available. And I think that we ought to look at what's already done, not reinvent the wheel, in building cyber defense. Effective cyber defense is not new cyber defense. Effective cyber defense is cyber defense that has already worked, and more importantly, defense that has already failed and therefore been patched. So that, I think, is the way to look at it. Let's be practical. Let's take small steps, because the large steps will overwhelm developing economies.

Olga Cavalli: Thank you. Wolfgang?

Wolfgang Kleinwachter: It’s not a theoretical question. It’s a very practical question. It’s just implementation. You have to do it.

José Cepeda: Thank you. Well, I see three pillars. People, policies and preparation. It’s very, very important. Three pillars are necessary to all countries. This is not the future. It’s the present. It’s necessary now.

Olga Cavalli: Thank you, José. Marika, your final comments?

Merike Kaeo: Chris had mentioned FIRST, and I had the privilege of helping a developing nation set up their national CSIRT. I think that is critical, and there are many, many guidelines that exist that also talk about the legal constructs and the regulatory frameworks that countries should have, within their own culture and their own legal systems, to build up a national CSIRT. I think that will greatly help developing nations.

Olga Cavalli: Thank you very much. Chris, your last comments?

Christopher Painter: I think it goes back to what we were saying before. This is a critically important area. You need the political will in countries. You need sustainability and continuity, which is always difficult, and you need non-duplication as we try to match resources with the needs, and the needs are great. I totally agree: don't recreate the wheel. There's lots of stuff out there. The Cybil portal that the GFCE runs has hundreds of projects, calendars, things that I think are helpful, and it's publicly available; it's not limited just to the members of the GFCE. It's been cross-linked with the UNIDIR Cyber Policy Portal at the UN, and so that's, I think, a really good cross-linking resource. And the last thing I'd say, on the topic of things that are out there: I'm also on the board of a nonprofit called the Center for Internet Security, which, like the Essential Eight, has its set of essential security controls, so a lot on cyber hygiene is available. So I agree: collating what's there rather than recreating things is critical.

Olga Cavalli: Thank you. Philipp, your last comment.

Philipp Grabensee: I think I can really just echo the last four comments. We all came to the same opinion in the end. Not recreating the wheel also means not always making new laws; in law enforcement, not recreating the wheel means enforcing existing laws and building the capacity of people to enforce those laws. That's the way to go forward. Also because existing laws have already passed the critical test of how they relate to human rights and constitutional rights. Always creating new laws, and I'm talking as a defense counsel here, carries a lot of danger, because then those laws have to be examined from all kinds of perspectives, and a lot of things go wrong when laws are passed, especially in a hurry. So do not recreate the wheel, do not just write new words; implementation and enforcement of existing tools and laws, I think that's the way to go ahead.

Olga Cavalli: Okay, please help me applaud our dear friends and colleagues for a very interesting session. And for the remote participants, don't go away, we will take a picture. Don't go away. We take a picture with you. Oh, thanks to you. Did you take a picture of me? Yes. I hope they can be seen. I'm here in the middle, right? Yes, please. All right, thanks, have a great day and see you wherever, maybe next year in Oslo, who knows. All right, see you guys. Take it easy. Happy Christmas. Happy season. See you soon.


Merike Kaeo

Speech speed

127 words per minute

Speech length

775 words

Speech time

363 seconds

Skills gap in cybersecurity workforce

Explanation

Merike Kaeo highlights the significant challenge of the skills gap in the cybersecurity workforce, particularly in developing economies. This gap refers to the lack of trained professionals who can effectively handle cybersecurity tasks and responsibilities.

Evidence

Kaeo mentions that in developing economies, a CISO might have to perform multiple roles due to lack of skilled personnel, including being the security architect, security operations team, incident response team, and threat intelligence team.

Major Discussion Point

Cybersecurity challenges for developing economies

Agreed with

Ram Mohan

Christopher Painter

Wolfgang Kleinwachter

Agreed on

Importance of capacity building and skills development

Critical thinking and strategic prioritization of tasks

Explanation

Kaeo emphasizes the importance of critical thinking and strategic prioritization of tasks for cybersecurity professionals, especially in resource-constrained environments. This skill allows professionals to determine which tasks are most crucial and how to allocate limited resources effectively.

Evidence

She mentions that in developing economies, a CISO might only be able to implement one or two out of the top five or ten recommended cybersecurity measures due to resource constraints.

Major Discussion Point

Key skills and strategies for cybersecurity

Collaboration and information sharing between countries

Explanation

Kaeo stresses the importance of collaboration and information sharing between countries in cybersecurity efforts. This approach allows countries to learn from each other’s experiences and share best practices, particularly beneficial for developing economies.

Evidence

She mentions the existence of various region-specific information-sharing groups in areas such as the Pacific Island nation-states, Southeast Asia, Latin America, Africa, and the Balkans.

Major Discussion Point

Key skills and strategies for cybersecurity

Agreed with

José Cepeda

Christopher Painter

Agreed on

Need for international cooperation and trust


Ram Mohan

Speech speed

125 words per minute

Speech length

909 words

Speech time

436 seconds

Lack of resources and preparation for cyber attacks

Explanation

Ram Mohan highlights that developing economies often lack the necessary resources and preparation to effectively respond to cyber attacks. This includes not only financial resources but also human resources and established protocols.

Evidence

Mohan provides an example of a country where there was only one primary person responsible for cyber defense, and when that person was unavailable due to personal reasons, the country’s systems were vulnerable to attack.

Major Discussion Point

Cybersecurity challenges for developing economies

Agreed with

Merike Kaeo

Christopher Painter

Wolfgang Kleinwachter

Agreed on

Importance of capacity building and skills development

Preparation, people, and policy as crucial factors

Explanation

Mohan emphasizes that preparation, people, and policy are the three crucial factors that distinguish the responses of developed nations from developing ones in cybersecurity. He argues that these factors are more important than the nature or scale of the threats themselves.

Evidence

He mentions Australia’s ‘Essential Eight’ principles as an example of effective policy implementation in cybersecurity.

Major Discussion Point

Key skills and strategies for cybersecurity


José Cepeda

Speech speed

123 words per minute

Speech length

1765 words

Speech time

859 seconds

Need for trust and international cooperation

Explanation

José Cepeda emphasizes the critical importance of trust between countries and governments in developing international cooperation policies for cybersecurity. He argues that this trust is fundamental to creating a unified structure for cyber defense.

Evidence

Cepeda mentions the recent presentation by the Finnish Prime Minister about cooperation and European intelligence to unite the 27 countries of the European Union, and the joint work of NATO with the EU to promote a great cyber coalition based on trust.

Major Discussion Point

Cybersecurity challenges for developing economies

Agreed with

Christopher Painter

Merike Kaeo

Agreed on

Need for international cooperation and trust

Development of common certification systems in the EU

Explanation

Cepeda discusses the development of common certification systems for cybersecurity in the European Union. These systems aim to implement standardized certification across all 27 EU countries, particularly for critical infrastructures.

Evidence

He mentions the Cyber Security Matrix Certification as an important initiative in this direction.

Major Discussion Point

Legal and policy considerations for cybersecurity


Christopher Painter

Speech speed

180 words per minute

Speech length

1050 words

Speech time

349 seconds

Importance of capacity building and technical assistance

Explanation

Christopher Painter emphasizes the critical importance of capacity building and technical assistance in cybersecurity, particularly for developing countries. He views this as a foundational element for everything else in cyberspace, enabling countries to participate effectively in international forums and secure their own systems.

Evidence

Painter mentions the Global Forum on Cyber Expertise, which has 60 countries, companies, civil society, and academia as members, and uses a demand-driven approach to capacity building.

Major Discussion Point

Cybersecurity challenges for developing economies

Agreed with

Merike Kaeo

Ram Mohan

Wolfgang Kleinwachter

Agreed on

Importance of capacity building and skills development

Differed with

Wolfgang Kleinwachter

Differed on

Approach to cybersecurity capacity building

UN Open-Ended Working Group as an important forum

Explanation

Painter highlights the UN Open-Ended Working Group as a crucial forum for discussing cybersecurity issues, particularly for developing countries. He notes that many developing countries have used this forum to express their need for capacity building assistance.

Evidence

He mentions that when the Open-Ended Working Group first met about eight years ago, many developing world countries expressed their need for help in building their institutions and technical capabilities.

Major Discussion Point

International forums and frameworks for cybersecurity

Agreed with

José Cepeda

Merike Kaeo

Agreed on

Need for international cooperation and trust

Need for political will to prioritize cybersecurity

Explanation

Painter stresses the importance of political will in countries to prioritize cybersecurity. He argues that without this commitment from political leadership, efforts to improve cybersecurity capabilities may not be successful.

Major Discussion Point

Legal and policy considerations for cybersecurity

W

Wolfgang Kleinwachter

Speech speed

128 words per minute

Speech length

739 words

Speech time

344 seconds

Developing countries should define their own cybersecurity needs

Explanation

Wolfgang Kleinwachter argues that developing countries should define their own cybersecurity needs and strategies, rather than relying solely on models exported from developed countries. This approach ensures that the strategies are tailored to the specific context and requirements of each country.

Evidence

He cites the African Digital Compact as an example of a region-specific strategy that specifies the particular needs of Africa in the digital realm.

Major Discussion Point

Cybersecurity challenges for developing economies

Agreed with

Merike Kaeo

Ram Mohan

Christopher Painter

Agreed on

Importance of capacity building and skills development

Differed with

Christopher Painter

Differed on

Approach to cybersecurity capacity building

African Digital Compact as a model for regional strategies

Explanation

Kleinwachter highlights the African Digital Compact as a positive example of a region-specific digital strategy. He suggests that this model could be useful for other developing regions in crafting their own cybersecurity strategies.

Evidence

He mentions that the African Digital Compact was developed in the context of the global digital compact in the United Nations, but specifically addresses the needs of Africa.

Major Discussion Point

International forums and frameworks for cybersecurity

P

Philipp Grabensee

Speech speed

154 words per minute

Speech length

943 words

Speech time

366 seconds

Caution against hastily creating new laws in response to cybercrime

Explanation

Philipp Grabensee warns against the hasty creation of new laws in response to cybercrime incidents. He argues that this approach can lead to unintended consequences and may not address the root causes of the problem.

Evidence

Grabensee provides an example from Germany where new laws passed in response to child sexual abuse material cases had unintended consequences for teenage sexual expression in digital media.

Major Discussion Point

Legal and policy considerations for cybersecurity

Importance of enforcing existing laws and building capacity

Explanation

Grabensee emphasizes the importance of enforcing existing laws and building capacity in law enforcement, rather than constantly creating new laws. He argues that the real problem often lies in the lack of well-trained personnel to enforce existing laws.

Evidence

He mentions the example of Germany lacking police enforcement officers who were prepared for exposure to traumatic material and had the psychological training to deal with it in cases of sexual abuse material.

Major Discussion Point

Legal and policy considerations for cybersecurity

O

Olga Cavalli

Speech speed

134 words per minute

Speech length

1475 words

Speech time

657 seconds

Need for developing countries to participate in multiple forums

Explanation

Olga Cavalli highlights the challenge for developing countries to participate in multiple international cybersecurity forums. She notes that while participation is important, it can be difficult due to resource constraints.

Major Discussion Point

International forums and frameworks for cybersecurity

Agreements

Agreement Points

Importance of capacity building and skills development

Merike Kaeo

Ram Mohan

Christopher Painter

Wolfgang Kleinwachter

Skills gap in cybersecurity workforce

Lack of resources and preparation for cyber attacks

Importance of capacity building and technical assistance

Developing countries should define their own cybersecurity needs

Multiple speakers emphasized the critical need for capacity building and skills development in cybersecurity, particularly for developing economies. They agreed that addressing the skills gap and providing technical assistance are fundamental to improving cybersecurity capabilities.

Need for international cooperation and trust

José Cepeda

Christopher Painter

Merike Kaeo

Need for trust and international cooperation

UN Open-Ended Working Group as an important forum

Collaboration and information sharing between countries

Speakers agreed on the importance of international cooperation and trust-building in addressing cybersecurity challenges. They highlighted various forums and initiatives that facilitate such cooperation.

Similar Viewpoints

Both speakers emphasized the importance of focusing on implementation and capacity building rather than creating new laws or frameworks. They argue that effective enforcement of existing measures is more critical than constantly developing new ones.

Ram Mohan

Philipp Grabensee

Preparation, people, and policy as crucial factors

Importance of enforcing existing laws and building capacity

Unexpected Consensus

Caution against hasty creation of new laws

Philipp Grabensee

Ram Mohan

Caution against hastily creating new laws in response to cybercrime

Preparation, people, and policy as crucial factors

While most discussions focused on building capacity and implementing new measures, there was an unexpected consensus on the need for caution in creating new laws. Both speakers, from different perspectives (legal and technical), agreed that hasty creation of new laws or constant policy changes might not be the most effective approach to cybersecurity.

Overall Assessment

Summary

The main areas of agreement centered around the importance of capacity building, skills development, and international cooperation in addressing cybersecurity challenges, particularly for developing economies. There was also consensus on the need for strategic thinking and prioritization of resources.

Consensus level

There was a high level of consensus among the speakers on the fundamental challenges and approaches to cybersecurity in developing economies. This consensus suggests a clear direction for future efforts in this area, focusing on capacity building, international cooperation, and strategic resource allocation. The implications of this consensus are that international initiatives and policy-making bodies may find broad support for programs that address these agreed-upon priorities.

Differences

Different Viewpoints

Approach to cybersecurity capacity building

Wolfgang Kleinwachter

Christopher Painter

Developing countries should define their own cybersecurity needs

Importance of capacity building and technical assistance

While both speakers emphasize the importance of capacity building, Kleinwachter stresses the need for developing countries to define their own needs, while Painter focuses on the importance of external assistance and international cooperation.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement revolve around the approach to capacity building and the role of international assistance versus self-reliance for developing countries in cybersecurity.

Difference level

The level of disagreement among the speakers is relatively low, with most differences being in emphasis rather than fundamental approach. This suggests a general consensus on the importance of capacity building and international cooperation in cybersecurity, particularly for developing economies.

Partial Agreements

Both speakers agree on the importance of preparation and capacity building, but Mohan emphasizes policy development while Grabensee focuses on enforcing existing laws rather than creating new ones.

Ram Mohan

Philipp Grabensee

Preparation, people, and policy as crucial factors

Importance of enforcing existing laws and building capacity

Takeaways

Key Takeaways

Developing economies face significant cybersecurity challenges, including skills gaps, lack of resources, and inadequate preparation

Critical skills for cybersecurity in developing countries include strategic thinking, effective communication, and promoting collaboration

Preparation, people, and policy are crucial factors in cybersecurity readiness

International cooperation and trust between countries is essential for effective cybersecurity

Capacity building and technical assistance are vital for improving cybersecurity in developing economies

Developing countries should define their own cybersecurity needs rather than relying solely on models from developed nations

Implementing existing cybersecurity frameworks is often more effective than creating new laws or regulations

Resolutions and Action Items

Collate information from discussions like this and check its applicability with developing countries

Utilize existing resources like the Global Forum on Cyber Expertise (GFCE) framework and materials

Focus on practical, small steps in building cyber defense rather than overwhelming large-scale changes

Encourage developing nations to set up national CSIRTs (Computer Security Incident Response Teams)

Unresolved Issues

How to effectively address the cybersecurity skills gap in developing countries

Balancing the need for international cooperation with maintaining independence in cybersecurity strategies

How to ensure sustainable and continuous improvement in cybersecurity capabilities despite limited resources

Addressing language barriers in accessing cybersecurity information and participating in international forums

Suggested Compromises

Using hybrid or online formats for international meetings to increase participation from developing countries

Creating regional hubs for cybersecurity cooperation to make participation more accessible for smaller countries

Focusing on enforcing existing laws and building capacity rather than constantly creating new cybersecurity legislation

Balancing the adoption of international best practices with developing country-specific strategies that fit local contexts

Thought Provoking Comments

The very first thing that fails are the systems and the people who are unprepared. And it doesn’t matter if you have great resources, great knowledge, great education, but you will find, and this is true even in developed nations, but it’s especially true in developing countries, there is no preparation for it.

speaker

Ram Mohan

reason

This comment highlights the critical importance of preparation and readiness, beyond just having resources or knowledge. It challenges the assumption that simply having advanced technology or information is sufficient for cybersecurity.

impact

This shifted the discussion towards the practical aspects of cybersecurity implementation, especially in developing countries. It led to further exploration of the gaps between theoretical knowledge and practical readiness.

The best help is if you just provide resources which enable those countries to find their own way, because otherwise they are just a target on the export of models.

speaker

Wolfgang Kleinwächter

reason

This insight emphasizes the importance of empowering developing countries to create their own cybersecurity strategies rather than simply adopting models from other nations. It introduces a nuanced perspective on international cooperation and capacity building.

impact

This comment sparked a discussion about the balance between international assistance and local autonomy in cybersecurity. It led to considerations of how to provide support without imposing external models.

It’s not so much a frameworks, and it’s not so much a laws. It’s really about, you know, it comes down, so, you know, still it comes down to the people.

speaker

Philipp Grabensee

reason

This comment cuts through the focus on legal frameworks and technology to emphasize the human element in cybersecurity. It challenges the notion that solutions are primarily about laws or technical systems.

impact

This insight refocused the discussion on the importance of human capacity and training in cybersecurity efforts. It led to further exploration of how to address skills gaps and prepare people effectively.

Overall Assessment

These key comments shaped the discussion by shifting focus from theoretical frameworks and technological solutions to practical implementation challenges, especially in developing countries. They highlighted the importance of preparation, local autonomy in strategy development, and human capacity building. The conversation evolved from discussing broad international policies to exploring specific ways to empower and prepare individuals and institutions for cybersecurity challenges.

Follow-up Questions

How can developing economies find the time and resources to prepare information for sharing with colleagues?

speaker

Olga Cavalli

explanation

This is important because information sharing is crucial for cybersecurity, but developing economies often lack the time and resources to prepare and share information effectively.

How can developing countries build their own strategies for capacity building in AI and cybersecurity, rather than relying on models exported from other countries?

speaker

Wolfgang Kleinwächter

explanation

This is important to ensure that developing countries can address their specific needs and avoid becoming dependent on models from other countries that may not be suitable for their context.

How can we make international cybersecurity forums more accessible and relevant for developing world countries?

speaker

Christopher Painter

explanation

This is crucial because developing countries often struggle to participate in multiple international forums due to resource constraints, yet their participation is essential for global cybersecurity efforts.

How can we address the skills gap in law enforcement for dealing with cybercrime, particularly in developing countries?

speaker

Philipp Grabensee

explanation

This is important because effective law enforcement is crucial for combating cybercrime, but many countries lack the necessary trained personnel and resources.

How can we collate existing cybersecurity frameworks and resources to make them more accessible and applicable for developing economies?

speaker

Ram Mohan

explanation

This is important to avoid reinventing the wheel and to help developing economies implement effective cybersecurity measures based on existing, proven frameworks.

How can developing nations set up effective national CSIRTs (Computer Security Incident Response Teams)?

speaker

Merike Kaeo

explanation

This is critical for developing nations to build their cybersecurity capacity and respond effectively to cyber incidents.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Main Session | Policy Network on Artificial Intelligence

Session at a Glance

Summary

This discussion focused on the outcomes of the IGF Policy Network on Artificial Intelligence’s year-long work, covering key areas of AI governance. The panel explored four main topics: liability in AI governance, environmental sustainability in the generative AI value chain, interoperability, and labor implications of AI. Speakers emphasized the need for a global governance framework for AI, highlighting challenges such as the digital divide, environmental impacts, and the potential for misuse in areas like peace and security.

Key points included the importance of accountability and transparency in AI systems, with suggestions for expanding liability to both producers and operators. The environmental impact of AI was discussed, particularly regarding resource extraction and energy consumption. Speakers stressed the need for interoperability while cautioning against potential exploitation of creators’ labor. The discussion touched on AI’s impact on the job market and the need for upskilling and reskilling workforces.

Panelists debated the feasibility of a global AI governance regime, acknowledging the challenges of multilateralism but emphasizing its necessity. The importance of common standards and definitions was highlighted, along with the need for both global cooperation and domestic policy implementation. Speakers also addressed the need for a clear definition of AI and the challenges of regulating a rapidly evolving technology.

The discussion concluded with calls for responsible collaboration, prioritizing sustainability in AI development, and the importance of considering AI’s impact on humanity as a whole. Participants emphasized the need for a comprehensive approach to AI governance that addresses technical, ethical, and societal implications.

Keypoints

Major discussion points:

– The need for global AI governance and cooperation, while balancing domestic policies

– Liability and accountability issues related to AI systems and their impacts

– Environmental and sustainability concerns around AI development and deployment

– Addressing inequality and capacity building to ensure equitable AI access and benefits

– Defining AI and its applications to enable effective governance

The overall purpose of the discussion was to examine key issues and recommendations around AI governance, based on a report by the IGF Policy Network on AI. The speakers aimed to explore how to implement responsible AI development and deployment on a global scale.

The tone of the discussion was largely serious and concerned, reflecting the gravity of the challenges posed by AI. However, there were also notes of cautious optimism about AI’s potential benefits if governed properly. The tone became more urgent towards the end as speakers emphasized the need for swift action on global AI governance.

Speakers

– Sorina Teleanu: Moderator

– Amrita Choudhury: Policy Network on AI coordinator

– Jimena Viveros: Managing Director and CEO of Equilibrium AI, member of the UN Secretary General’s high-level advisory body on AI

– Anita Gurumurthy: Executive Director of IT4Change

– Yves Iradukunda: Permanent Secretary, Ministry of ICT and Innovation of Rwanda

– Brando Benifei: Member of the European Parliament and co-rapporteur for the EU AI Act

– Meena Lysko: Founder and Director of Move Beyond Consulting and Co-Director of Merit Empower Her

– Muta Asguni: Assistant Deputy Minister for Digital Enablement, Ministry of Communication and Information Technology of Saudi Arabia

Additional speakers:

– Ansgar Kuhne: Affiliated with EUI

– Riyad Najm: From the media and communications sector

– Mohd: Online moderator

Full session report

Expanded Summary of IGF Policy Network on Artificial Intelligence Discussion

Introduction

This summary reports on a discussion focused on the outcomes of the IGF Policy Network on Artificial Intelligence’s year-long work, covering key areas of AI governance. The panel explored four main topics: liability in AI governance, environmental sustainability in the generative AI value chain, interoperability, and labour implications of AI. The discussion brought together experts from various sectors, including government officials, policymakers, and industry representatives, to address the complex challenges posed by AI development and deployment on a global scale.

Key Discussion Points

1. Liability and Accountability in AI Systems

The discussion emphasized the importance of establishing clear liability and accountability mechanisms for AI systems. Anita Gurumurthy, Executive Director of IT4Change, called for liability rules that encompass both producers and operators of AI systems. Jimena Viveros, Managing Director and CEO of Equilibrium AI and member of the UN Secretary General’s High-Level Advisory Body on AI, stressed the importance of state responsibility throughout the AI lifecycle, while also noting the challenge of allocating responsibility given the opacity of AI systems.

Brando Benifei, Member of the European Parliament and co-rapporteur for the EU AI Act, highlighted the need for transparency to address liability issues in the AI value chain. The speakers agreed on the importance of accountability and transparency in AI systems, emphasizing the need for explainability to ensure proper governance.

2. Environmental Sustainability in the AI Value Chain

The environmental impact of AI was a significant point of discussion. Meena Lysko, Founder and Director of Move Beyond Consulting, highlighted the environmental impacts of AI infrastructure and resource extraction. Muta Asguni, Assistant Deputy Minister for Digital Enablement, Ministry of Communication and Information Technology of Saudi Arabia, provided concrete data on the projected increase in electricity consumption by data centres, emphasizing the urgency of addressing AI’s environmental impact.

The discussion also touched on the potential of AI to support sustainable development goals, highlighting the complex nature of AI’s effects on society and the environment.

3. Interoperability and Global Cooperation

The importance of interoperability and global cooperation in AI development and governance was a recurring theme. Yves Iradukunda, Permanent Secretary at the Ministry of ICT and Innovation of Rwanda, stressed the importance of partnerships to bridge divides in AI development, particularly between developed and developing nations. Benifei emphasized the need for common standards and definitions for AI globally.

Lysko called for sincere collaboration on responsible AI development, while Asguni highlighted the challenge of regulatory arbitrage between countries. This underscored the need for a coordinated global approach to AI governance while recognizing the difficulties in achieving this given varying national interests and capabilities.

4. Labour Implications and Social Impact of AI

The social implications of AI, particularly its impact on labour markets, were discussed. Benifei stressed the need to consider AI’s impact on labour markets and the importance of upskilling and reskilling workforces. Iradukunda emphasized the importance of addressing inequalities in AI adoption and access, particularly in the Global South.

The discussion highlighted the need for a whole-of-government and whole-of-society approach to AI governance, as well as the importance of capacity building and awareness to ensure equitable AI development and deployment.

Additional Key Points

1. Global AI Governance and Regulation: The need for a global governance framework for AI emerged as a central theme, with speakers emphasizing the importance of international cooperation while acknowledging the challenges of multilateralism.

2. Defining AI: The challenge of defining AI for governance purposes was raised by audience member Riyad Najm, highlighting a fundamental issue in AI regulation efforts.

3. AI’s Impact on Peace and Security: Multiple speakers raised concerns about the potential misuse of AI in military applications and its implications for global peace and security.

4. AI Chatbot for Report Interaction: The creation of an AI chatbot to allow interaction with the report’s contents was noted as a practical tool for disseminating the findings.

Unresolved Issues and Future Directions

Several key issues remained unresolved, including:

1. How to achieve a binding global treaty or governance framework for AI

2. Balancing proactive and reactive approaches to AI regulation

3. Addressing regulatory arbitrage between countries, especially between the Global North and South

4. Defining AI in a way that allows for effective governance

5. Ensuring transparency and explainability of AI systems for accountability purposes

6. Protecting against misuse of AI, especially in military applications

Conclusion

The discussion highlighted the complex and multifaceted nature of AI governance challenges. While there was broad agreement on the need for global cooperation and comprehensive governance frameworks, differences in emphasis and approach underscored the difficulties in achieving a unified global strategy. The conversation emphasized the importance of considering AI’s impact on humanity as a whole, balancing its potential benefits with the need to mitigate risks and ensure equitable access and development.

As summed up by a quote from the UN Secretary General, shared by moderator Sorina Teleanu, “Digital technology must serve humanity, not the other way around.” This encapsulates the overarching goal of AI governance efforts discussed in this panel: to harness the potential of AI for sustainable development and societal benefit while addressing the significant challenges it poses to global governance, environmental sustainability, and social equity.

Session Transcript

Sorina Teleanu: Welcome to the main session of the IGF policy network on artificial intelligence. We have about one hour and 15 minutes to go through the main outcomes of a work that has been happening for about a year. And before I go and introduce our guests, I would like to invite you to join me in welcoming Amrita and tell us a bit about the work that has been going on for this year behind the policy network on artificial intelligence. Amrita?

Amrita Choudhury: Good afternoon, everyone, and thank you for coming. I agree with Sorina, the room is too large, the audience is too far away for all of us, but thank you for coming to this policy network on AI’s main session. Just to give you a background, the policy network on AI originated from the 2022 IGF, which was held in Addis Ababa, where people, the community thought that there should be a particular policy network which works on AI issues, especially related to governance, with a focus on global south. And so the first year, that is last year, we produced a report, and you can go and see it in the PNAI website, and this is a multi-stakeholder group which actually decides what is going to be discussed, how it is going to happen, and how the report is formed. We have a few of the community members also sitting here, and this year, we had four subgroups, interoperability, sustainability, liability, and labour-related issues. Some of the community members who have been very active, of course, all community members have worked in their capacities, and they are all volunteers, but some names which I would like to mention is Caroline, Shamira, Ashraf, who is also the online moderator, Yik Chang Chin, Olga, Shafiq, and Herman, Olga Kavali, they were, you know, great leaders of the various subgroups. We also thank all the members, volunteers, proofreaders, and our consultant, Mikey, who is working behind the screen, and I think she’s sitting there, and our MAG coordinator, who is also sitting there for all the hard work which has been put in. If you want to see the report, it is there online, and if you are in the Zoom room, it would be put into the chat, and I think Sorina also has something planned. With that, I will pass it on to Sorina and our panellists.

Sorina Teleanu: Thank you so much, Amrita. I’m going to try to get closer to you as well, because that feels a bit odd, and the light is exactly on me. So we heard a bit about the work of the Policy Network on Artificial Intelligence, and I’m sure you have heard lots of talks about AI these days. We are deploying, in fact, reporting, and you can probably also guess it, the main word over the past two days at the IGF has been, obviously, AI. So it’s obviously talked about quite a lot, and it is in this context that we will be trying to unpack some of the discussions around artificial intelligence, more specifically around AI governance with our esteemed guests that I’m going to introduce briefly. But before that, let me tell you also a few words about the report. Amrita mentioned it’s available online, and it is the result of a one-year long process, so I do kindly encourage you to take a look at this, at the summary, executive summary. So the report covers four main areas. One is liability as a policy level in AI governance. The other is environmental sustainability within the generative AI value chain. Then the third area is around interoperability, legal, technical, and data-related. And the final area covered in the report is on labor implications of AI. So now I’m going to ask the obvious question. Has anyone here even tried to open the report before joining this session? Am I seeing a hand? Oh, I’m seeing a few hands. Excellent. Thank you for doing that. I also have some, I hope, good news for you here and also for our colleagues who have been working so long on this report. Our attention spans, you know, it’s kind of limited these days, and reading a 100-something page report might not be the first thing we want to do. But I have a gift for you, and that’s an AI assistant. We talk about AI. Let’s also walk the talk a bit. So what my colleagues at Diplo Foundation have been doing is to build an AI assistant based solely on the report. 
So you can go online and actually interact with the AI assistant and ask questions about the report and its recommendations. We’re going to share the link, and you can access it during and after the session as well. So, and I’m pretty sure colleagues who have been working on the report would be looking forward to hearing your feedback as well on what is written there. Let me turn back to our guests. I’ll introduce them briefly. And then the plan for the session is to hear a bit from them about the four main areas of the report. Also hear from them about how they see the recommendations of the report and where do they think this recommendation could be going moving forward so they actually have an impact in the real world. And then we do hope to have a dialogue. Although this room might not be very inviting for the kind of dialogue we’re hoping, I will be looking at you, and I hope there will be a few raised hands in the room. So let me do what I have been promising for quite a while. In no particular order, we have Jimena Viveros. Thank you, Jimena. Managing Director and CEO of Equilibrium AI, and also member of the UN Secretary General’s high-level advisory body on AI, which produced another excellent report that I do encourage you to take a look at if you haven’t yet. Then we have online with us Meena Lysko. Thank you, Meena, for joining us. Founder and Director of Move Beyond Consulting and Co-Director of Merit Empower Her. Also online, Anita Gurumurthy, Executive Director of IT4Change. Thank you, Anita, for joining. Back to the room, we have Yves Iradukunda, Permanent Secretary, Ministry of ICT and Innovation of Rwanda. Thank you for joining. Brando Benifei, Member of the European Parliament and co-rapporteur for probably the most famous piece of legislation on AI at the moment, the EU AI Act. And Muta Asguni, Assistant Deputy Minister for Digital Enablement, Ministry of Communication and Information Technology of Saudi Arabia.
Thank you so much for hosting us this year. And we also have an online moderator for our participants online. Mocht will be giving us feedback and input from the online room. So again, in no particular order, I’m going to invite our guests to reflect on a section of the report and try to look also at the recommendations, if possible, and tell us how you see these recommendations moving forward. And I’m going to do an absolutely random pick. Anita, would you like to start?

Anita Gurumurthy: Sure, I can do that. Am I audible? Okay. Thank you. I just wanted to commend the report, especially for the four focus areas. Those really come with a lot of insights and also reflect state-of-the-art analysis, especially on the crucial but often neglected areas of environment and labor. It also takes up two very, very difficult areas. One is the whole idea of liability, and the other is the idea of interoperability. I’ll focus on these two, because I’d like to really zoom in on what I think we should be looking at in this domain. What would be interesting and useful is for the report to enlarge its remit in terms of liability rules, which should apply to both producers and operators of systems, because a fairly invested level of care is needed in designing, testing, and deploying AI-based solutions. And we need to understand that while producers control the product’s safety features and look at how interfaces between the product and its operator can be improved, let’s take the whole context of social welfare systems, or governments deploying systems. In that case, the operator of the system is also implicated in decision making around the circumstances in which systems will be put to use. And these are real-world situations, and here I think it’s really important that operators also become liable and bear some of the associated costs when risks become actual harms. So that’s one thing. The second is a particular thought that I have around the training that we do of the judiciary, the training that is needed for lawmakers, policy makers, et cetera. Here I think that the elephant in the room cannot be disregarded, and that is really the whole absence of a global space to make certain decisions. 
We are particularly concerned not only about the opacity of algorithms, but about the fact that such opacity, often in cross-border value chains, you know, in trade, in services, for instance, gets compounded because of trade secret protections. So trade secret claims over the technical details of AI can really become an obfuscating mechanism and limit the disclosure of information about such systems. I want to draw your attention to a recent paper from CIGI in Canada, where a landmark case about Lyft and Uber came before the Washington Supreme Court, which ruled that the reports in question, maintained as trade secrets by Lyft and Uber, actually qualify as public records. And in the public interest, they have to really be put out. So we have to look at this very carefully. The other thing I want to say is that the recommendations in the environment section could also draw on very useful concepts coming from international environmental law, you know, the Biodiversity Convention, common but differentiated responsibilities, because the financing that is needed for AI infrastructures will require us to adopt a gradient approach. Some countries are already powerful and some not. So that’s very important. I’d also like to just focus a little bit, maybe one minute or one and a half minutes, on the vital distinction between interoperability that’s a technical idea and interoperability that’s a legal idea. If you look at interoperability, sometimes while calling for this important principle, you know, it’s like openness. We have to be careful about who we are making something open for, or whether there is public interest underlying such openness. So interoperability often enables systemic exploitation of creators’ labor, right? So oftentimes, if we don’t have guardrails, the largest firms tend to cannibalize innovation. 
So I would like to conclude by saying that we should look at technical interoperability and policy sovereignty not as things that are polarized, but work towards a framework in which many countries can participate in global AI standards. My last comment would be a fleeting remark about the wonderful chapter on labor, which could perhaps do with one addition about the idea of cross-border supply chains in which labor in the global south is implicated. While guaranteeing labor rights, we really need to understand that working conditions in the global south include subcontracting, and therefore transnational corporations must also be responsible in some way when they outsource, you know, in AI labor chains, to third parties or subcontractors, so that we are actually looking at responsibility in the truest sense of the term. I’ll stop here, thank you.

Sorina Teleanu: Thank you, Anita. We’re already adding more keywords to the ones that we already have in the four main sections of the report. And I have two on my list right now, that’s transparency and responsibility. I’ll be adding more during the discussions and at the end, we’ll see what were the keywords in this debate. So we’re moving from the global south to the global north. And I’m going to invite Brando to provide his reflections, also because they relate to what Anita has been talking about interoperability and cross-border supply chains. So, Brando, you have the floor.

Brando Benifei: Thank you. Thank you very much. First of all, I’m really happy to be able to talk in this very important panel, because clearly on AI we need to build global cooperation, global governance, and we need to examine together what the challenges are. And in fact, the impact on labor and the opportunities of having an interoperable technological development around AI are some of those challenges. I think that the fact that we have chosen, which was a debated choice, it was not obvious, to identify the use of artificial intelligence in the workplace as one of the sensitive use cases that the AI Act regulates, to try to build safety and safeguards for workers, for those that are impacted by AI in the place where they work, is one important direction. And also, from a larger policy point of view, clearly the impact of AI on the labor market is already very significant. So we need to build common strategies to manage the change in how the workforce will be composed. In fact, I think we could compare AI with the impact it is already showing, considering that we are only two years into the generative AI revolution, to some extent we can call it that, it’s only two years since it reached the general public. And we will see what will happen in a short time after. So the impact is already strong. We need to consider the change that is happening, like when electricity was introduced. Sometimes I hear it’s like with the internet. No, because the internet is not as pervasive as AI can be. AI can change every workplace, every dynamic of labor. It’s like the invention of electricity. It’s like the use of steam in the development of pre-industrial automatic processes. We can look at it with that eye, I would say. And that’s why we need global governance. We need rules, because the impact on our societies is in fact even larger, not just on labor. 
But obviously, I say that as one who negotiated a regulation that dealt with market rules: we need to build a set of policies, fiscal policies, budgetary policies, permanent lifelong learning policies, that are able to deal with these changes. And I really believe that we need to build common standards, common definitions. We are working on that in various international fora so that we can have more interoperability. In fact, you know that the EU has been leading on pushing against those that limit interoperability. One other legislative act of the EU, the Digital Markets Act, is also targeted at increasing interoperability. And we think that this is crucial if we want our different parts of the world to work together and to find solutions between our different businesses, so that our AI can cooperate and work together, not sit in different silos, separated. I don’t think that would be good for our economies, but also for global understanding. We need AI to be also respectful of different traditions, different histories. I say that because we risk instead, because of the dynamic of how the training of AI happens, having a very limited cultural scope. And I say that from Europe. So it could apply even more to other parts of the world, I would say. So these are some of the challenges we are facing. I strongly believe that we need to combine, and I conclude on this point, two different efforts. On one hand, domestic policy, in the sense that we need to have our own rules on how we deal with AI entering into our society. And there can be different models, there will be different models, but we can build some common understanding. For example, and this applies again also to the labor topic, we have built some common language, also looking at the work of the UN, on the issue of risk categorization. 
The idea of finding different levels of risk attached to different ways of using AI is a common way of looking at how we use AI. And on the other hand, I think we also need to concentrate on where we need to work at a supranational level, because there are issues where we cannot find solutions without working across borders. I will mention one thing that is outside the two topics of labor and liability, but I think it’s especially important to mention it to conclude: the issue of the security and military use of AI. I think it’s very important that we work on that, because all the other actions are not effective if we are not able to control AI used as a form of weapon or a form of security in all its implications. So these are some of my reflections on the topic. Thank you very much.

Sorina Teleanu: Thank you also for covering quite a few topics. The good news on your final point about the discussions on the security and military implications of AI is that there is a debate at the UN General Assembly on a potential resolution on that. So for anyone in the room who belongs to a government, do encourage your Ministry of Foreign Affairs to be part of this discussion, because, as Brando was saying, it is important to have this as some sort of universal agreement at the UN level. On the interplay between global governance and rules and domestic policies, I hope we can get back to that a little later in the session, and I’m hoping to also hear reflections from the room, because that’s a very important point. Okay, if we agree on something at an international level, what next, and how can we implement those policies locally, at the national and regional level as well? And I also liked your point about common standards and definitions. It’s not easy to agree on these things at the regional level. At an international level, it’s a bit more complex as well, but it would help when we discuss, again, the interoperability and liability issues and all these things that have been raised so far. Let me move on to Jimena, because she’ll also speak about liability.

Jimena Viveros: Hello, thank you very much. It’s a pleasure to be here with all of these distinguished speakers and the audience. So, first of all, I would like to highlight what Brando was saying before about peace and security, because I think that is key. And as a commissioner for RE-AIM, which is the global commission for the responsible use of AI in the military domain, we like to expand this into the broader peace and security domains, because the implications of AI, obviously we know them in the civilian space, in all types of different forms, but in the peace and security domains it’s not just limited to the military. So we can see it in civilian actors that are state actors, such as law enforcement and border control, and we can also see it in non-state actors that are also civilian, which can range from terrorism, organized crime, and mercenary groups to just rogue actors. So it’s very important to also look at it from all of these dimensions, because they do have a very destabilizing effect internationally, regionally, and at every type of level, because of the scalability, the easy access, and the proliferation of it all. So that’s why accountability and liability are so important. So the report is great, and it really tackles a lot of the good topics about liability. However, the report only focuses on liability in terms of administrative, civil, and/or product liability. It was a deliberate choice to exclude criminal responsibility, but I would go a little bit further and say that we need to look at state responsibility as well, for the production, the deployment, the use, in the entire life cycle, basically, of any of these AI systems. I think it’s very fitting that this liability part is the first section of this report, because it’s extremely important. Why? 
Because, in the current landscape that we’re living in, where international law is pretty much blatantly violated with complete impunity all the time, talking about accountability seems like fairy tales, but it’s really important to uphold the rule of law, to rebuild the trust in the international system, which is at a critical moment right now. Also for the protection of all types of human rights for all types of people, especially those in the Global South. I am Mexican, and in the Global South we are disproportionately affected by these technologies, both by the digital divide and by the deployment of them, and the fact that we are basically consumers, not developers, also greatly influences how we are affected by the technology. It also matters because there is a deterrent effect when we’re talking about accountability, in the criminal domain especially, and this deterrent effect helps promote safe, ethical, and responsible use, development, and deployment of AI. And it also allows for remedies for harm. These mechanisms are very important, and should be included

Jimena Viveros: in every type of accountability framework, because we do have a lot of problems that stem from AI in terms of liability, accountability, and I prefer the term accountability because it’s more encompassing. So we have the atomization of responsibility. Obviously, there are so many actors involved throughout the entire life cycle of these technologies: enterprises, people, and also states as a whole. That’s why I brought in state responsibility. I identify three categories, the users, the creators, and the authorizers, but they’re not mutually exclusive. Each type of responsibility can be allocated on its own, and should be allocated on its own. Obviously, what was mentioned, the opacity, the black box, also affects the proper allocation of responsibilities to each one of the actors, and so does the fragmentation of governance regimes, because what we’re witnessing now is just kind of forum shopping to whichever jurisdiction is more amenable to your purposes, and that’s where you set up, or that’s where you operate, and so on. So that’s why a global governance regime is extremely important, because these technologies are transboundary, as has already been said. So having a patchwork of initiatives is completely insufficient, and also the regimes that we have right now for reporting, for monitoring, and for verifying everything that could eventually lead to some type of accountability are all based on voluntarism, and in my opinion, that’s absolutely insufficient. It’s ineffective. At the OECD, we have this monitoring of incidents framework, which is obviously based on self-reporting, and we have witnessed there that the lack of transparency and the lack of accuracy in this type of system of voluntarism is just not gonna work, and it’s absolutely unsustainable. 
Also, the type of self-regulation that is being used, or self-imposed by the industry sector, is also not gonna work if we don’t have actual enforcement mechanisms,

Jimena Viveros: and a centralized authority to do so, because if we go, again, state by state, it’s really not gonna be very efficient. So I think we all have the general notion of what accountability is, what it means, and why it matters. We just need to find the solutions, and the willingness to do so, because everyone should be accountable throughout the entire life cycle of AI. And I’ll leave it here, but I’m happy to expand on some issues later. Thank you.

Sorina Teleanu: Thank you, Jimena. I think we’re already collecting suggestions for the policy network to continue working on these issues next year, and I’m taking notes of some of the areas that could be in focus. You mentioned the impact of AI on peace and security more broadly, going beyond the military domain, this notion of state responsibility and liability, and then the fragmentation of AI governance. And I’m going to put a question out there that I hope we can explore a little later with everyone in the room as well. The idea of a global governance regime. Is that feasible? How feasible is it, actually? And what can be done concretely to get there? We all know that the appetite for multilateralism these days is not as strong as we might want it to be, but maybe it’s not all lost. All right, let me continue with our speakers, and I’m going to invite Yves to continue, please.

Yves Iradukunda: Thank you, and good afternoon. It’s great to be here in this critical conversation, and thanks to the Internet Governance Forum for inviting us, and particularly for the commendable work that the Policy Network on AI has done, and the report that offers really good recommendations that, if implemented and if they guide our engagements going forward, should make a significant impact. This conversation is very critical, and as I hear my fellow panelists share their reflections on the report, as well as insights from their respective contexts and work, it challenges me to think that we can’t talk about responsible AI as if AI were isolated from everything else that we do in our lives. I think when you think about AI as a technology, we also need to reflect on why AI to begin with, why technology to begin with, and what the impact of technology has been all along, before AI even came in. I think if we reflect on that, then AI is just not a new concept from the perspective of its impact on our day-to-day lives. I say this because technology has been able to help advance innovation, solve different challenges, and help tackle some of the issues that we have. But at the same time, technology has driven some of the inequity issues. So as we reflect particularly on AI today, we also need to really acknowledge that if the foundational values of why we do technology are not revisited, it’s not just about AI, it’s about the values of our society altogether. But since we are focusing on AI, allow me to reflect. From the perspective of Rwanda, we always ask ourselves: to what extent, to what end, what is the end goal? And we focus primarily on the impact we want to have on our citizens. 
And whether it’s AI or any other emerging technology, we really want to see it as a tool, a tool that we use to improve the lives of our citizens, whether in healthcare, education, or agriculture, where we are really prioritizing our investments to leverage AI in addressing the gaps that we have. And what we’re seeing as an outcome is that leveraging technology investments, and also early successes of AI, helps bridge the gaps we see in equity and inclusion, but most importantly, improves the lives of our citizens. So the themes for today’s discussion, whether it’s focusing on interoperability in governance, looking at environmental sustainability, or the issues around accountability and the impact AI is having on labor, all of this can be addressed if we again zero in on the impact we want to have on our society. As for unlocking AI’s full potential, I would agree with what has been said before: the values and ethical guidelines and the principles that guide how it’s implemented really have to be agreed on. There has to be consensus and dialogue on how we deploy the different solutions ethically. And as was said earlier, responsible approaches have to really understand the different players that are accessing the AI tools. And the standards should follow those values and should protect against ill-intentioned use of AI. So when I look at the report and the different recommendations, I find confidence in this global community within the policy network. But again, for this session, I really want to call upon the leaders in the room, the technology specialists, and the corporate companies that are deploying these tools to really follow these recommendations, but most importantly, to figure out what it is that we want to do for our society. And when it comes to building capacity, I think it’s something that we need to double down on. Right now, there is inequity in terms of how different countries are adopting AI. 
The talent is probably available in all countries, but in terms of access to the tools and in terms of awareness, there is a big disparity. So even as we speak, most people across the globe may have a limited understanding and appreciation of the impact AI is going to have on their lives. So building capacity should start with awareness, and then the deployment of AI tools should really be focused on improving people’s lives at all levels, whether it’s the highest advancements in security, as was just said, or in medicine and other applications. We also need to think about how it affects farmers in their respective societies at different levels. I think we should foster partnership as we follow the implementation of this report. Like I said, it’s not just for governments or corporates alone or international organizations. I think we need to really bring partnerships forward to make sure that we bridge the divide and accelerate innovation across all levels. And finally, I think we should really commit to the adoption of these policies within our respective jurisdictions. The impact of AI knows no boundaries. So even if you look at the environmental impact of AI, it has no boundaries. So we need innovation around the manufacturing of equipment used for AI solutions, and energy solutions that are renewable, and applications of AI that really work against environmental goals should be limited. So to conclude, I think the work on the report and the recommendations is really commendable. A lot of insights have already been shared here on the panel, but this is really a call on all the leaders present to put at the center the impact on the people, on the citizens, and to really think about how AI serves that purpose of improving their lives.

Sorina Teleanu: Thank you so much, Yves, also for bringing the focus back to issues of inequality, access, and capacity building, and how we bridge the divides that we actually see growing instead of shrinking. And I very much like your question: to what end? Where are we going with this, whatever we call it, technological progress? While listening to you, I was reminded of two quotes we came across the other day while we were going through the discussions of the many sessions. I just want to read them quickly, hoping we can reflect on them a little later, again building on your point about, okay, to what end? One came from the Secretary General of the UN, who is actually convening this forum, and it was very simple but also very powerful: digital technology must serve humanity, not the other way around. We might want to think about this a bit more as we develop and deploy AI technologies. And the second one is a bit more elaborate, but along the same lines: are we sure that the AI revolution will be progress? Not just innovation, not just power, but progress for humankind. I’m hoping we can have a bit more reflection on this here, but also beyond, in our broader debates on AI governance. I’m going to move online and invite Meena to provide her intervention. Meena, over to you.

Meena Lysko: Thank you. Thank you very much, Sorina. Maybe I could start by first thanking the Internet Governance Forum’s Policy Network on Artificial Intelligence for organizing this very important discussion and for inviting me to be part of it. I appreciate the chapter on environmental sustainability and generative AI. I’d like to first paint a vivid picture, one which, I firmly believe, along with other scenarios, has been the premise for the Internet Governance Policy Network on AI. So, as it stands, the global south is disproportionately impacted by generative AI and its associated technologies. The global north economies are strengthened largely by providing technologically advanced solutions which are taken up worldwide, and at the same time, the global north has the resources and time to implement and enforce policies which will protect local environments. The entire day is not necessarily spent on hard and hazardous labor just to get food into the mouths of the poor. This may not be the same in poorer and developing countries. Just as with plastic pollution, we will see greater disparities in the impact of non-green industries on the environments of the most vulnerable. To illustrate this view, I will use the example of generative AI. The automotive industry is being transformed by the integration of electric vehicles, software-defined vehicles, smart factories, and generative AI. Identifying red flags related to environmental harm across the entire value chain of electric vehicles is crucial to sustainable development. So, permit me: a key red flag is the biodiversity loss from mining raw materials. Generative AI relies on large-scale data centers, GPUs, and other computational hardware, as does, for example, each of our smartphones, all of which require metals and minerals like lithium, cobalt, nickel, rare earth metals, and copper. 
Extracting these materials impacts local ecosystems, wildlife, and the broader environment. Let’s look at this from the perspective of deforestation and habitat destruction. Cobalt is mined in the forests of a global south country, the Democratic Republic of Congo. Cobalt is a chemical element used to produce lithium-ion batteries, for example. The country has seen genocide and exploitative work practices, and the cutting down of millions of trees, in turn negatively impacting air quality around mines. More so, cobalt is toxic. The expanded mining operations result in people being forced from their homes and farmland. According to a 2023 report, the forests, highlands, and lakeshores of the eastern DRC are guarded by armed militias that enslave hundreds of thousands of men, women, and children. The destruction of forests due to cobalt mining reduces the earth’s natural carbon sinks, which are crucial for mitigating climate change. Let’s also be reminded of the negative impacts of copper and nickel mined in the Amazon rainforests, the escalating nickel extraction in Indonesia, as well as lithium mined in Chile’s Atacama Desert. So, besides the biodiversity loss from mining raw materials, we can also explore water pollution from material extraction, processing, and battery disposal; the carbon footprint from energy-intensive production, assembly, and charging; waste generation at every stage, including battery disposal and component manufacturing; and the social and ethical issues like child labor in mining, hazardous working conditions, and greenwashing. Addressing these red flags requires stricter regulation, sustainable sourcing, clean energy use, and investments in circular economy practices. We need to be extra mindful of the impact of batteries on the environment in the longer term. We are presently having to manage the disposal of electronic waste, including the plastic. 
These impregnate our vital land and waters, though still at a micro and nano level. If we fast forward a few decades from now, the battery waste promises to be far more unmanageable, as we will then be looking at seepage of fluids into our ecosystems. So, the Policy Network on Artificial Intelligence policy brief report provides seven multi-stakeholder recommendations for policy action. I’d like to emphasize that in developing a comprehensive sustainability metric for generative AI, that’s recommendation one, the standardized metrics must have leeway to adapt, to take into consideration our rapidly evolving digital space. Today, we are having to look at the repercussions of elements such as cobalt, nickel, and lithium. We are having to consider greener technologies to meet a nominal energy demand relating to generative AI. A decade, or even a few years from now, our targets will likely be completely different. Also, if I can add one more, I suggest that we have, in addition to the seven recommendations, an outlook on the impact on the environment, because we have moved beyond just the terrestrial. We are mining outer space. So the global space race for mining resources to quench our generative AI thirst also needs consideration. I’d like to pause there for now. Thank you very much.

Sorina Teleanu: Thank you also, Meena. Thank you for making us think of issues right in front of us that we sometimes tend not to see, just because they’re right in front of us. Thank you for raising more awareness about the use and misuse of natural resources here on earth, but also in outer space. That’s not something we talk about so much in AI governance discussions. But it is a very important point, also because we don’t necessarily have a global framework for the exploitation of space resources, and it would probably be better to start thinking about that sooner rather than later. Because, as Meena was saying, we do see a lot of competition for the use of resources for the development of AI and other technologies. So, thank you so much for bringing that up as well. Moving on to Muta for your reflections, please.

Muta Asguni: Thank you so much, Sorina. Really happy to be here with you on this session at the IGF. I think there is a lot of ambiguity and uncertainty, as you mentioned, Sorina, surrounding AI. This is not just the talk of the hour, and not just in Saudi Arabia; this is the talk of the minute and the second everywhere in the world. And before I talk about the paper and the report, I want to take a step back and look at history, because history is not just the greatest teacher, it’s also the greatest predictor of the future. We as human beings, as a global society, as united nations, have been here before. Four times. We’ve been here before in the first industrial revolution, with the transition from agriculture to industrialization. Then again with the introduction of electricity. And again with the introduction of computers. And then in the fourth industrial revolution, with the introduction of the Internet. And finally now, we’re on the cusp of the fifth industrial revolution, with the transition from the digital age to the intelligence age. Each one of the previous four industrial revolutions had a profound impact on three specific aspects: on infrastructure, on society, mainly on labor, and on policy. Let’s take electricity as an example, as Brando just mentioned. When electricity was introduced, we had to develop a lot of new infrastructure to deliver electricity to every home, to give everyone a chance to harness its power and use it in a safe and robust manner. When we talk about electricity and its impact on society and jobs, the jobs market was never the same before and after electricity. It changed forever, and we adopted, adapted, and prospered together with electricity. We upskilled and reskilled our economies and our people to be able to leverage that technology for the greater good. In terms of policy, we developed standards, we developed frameworks. 
We as an international community came together in order to build a robust and meaningful framework that we can all work on together for the greater good in the use of electricity. AI is not going to be different. If we look through the same three lenses from the AI perspective, let’s take infrastructure as an example. Today, we’re using about 7 gigawatts of electrical power in data centers in the world. This is projected to grow to 63 gigawatts by 2030. In just five years, we’re expected to consume roughly ten times the electricity that we consume today for data centers. This will have a profound impact on the environment. But the good news is, and this is a funny anecdote, 30% of the 7 gigawatts that we use today is actually being used to predict the weather, using very old machine learning technologies, just to predict 7 days of weather. Now we can use generative AI to predict not just 7 but 12 days of weather with much less power, and reutilize that excess power for new uses of AI. Now in terms of society, yes, AI is going to have a profound impact on jobs. Jobs are, again, not going to be the same before and after we fully adopt AI. But we as a global society need to, again, upskill and reskill our economies in order to adopt, adapt, and prosper together with AI. Finally, in terms of policy, this is the main topic of discussion in this session. Like every technology, there are two aspects when it comes to policy for AI, and this is true for every technology: there is a local aspect and a global aspect. In terms of the local aspect, we can look at the collection, the use, the utilization and the access of data and AI technologies within specific geographies, according to local priorities and agendas.
In terms of the global aspect, where we’ve already done amazing work with the establishment of the report, we actually need to work as global bodies with local governments, with the private sector and the public sector. And the good news is that everyone is willing to put their hands together and leverage whatever we have today for the good of humanity and to ease the adoption of AI. And with that, I look forward to the rest of the session and the discussion. Thank you so much.

Sorina Teleanu: For the good of humanity is a very good way to end this section of the discussion. I did promise we’ll have a dialogue and we only have 19 more minutes of this session. So I’m going to try to do that. I look in the room and I also count on my colleague online to tell us what’s happening there. Any hands, anyone would like to… Do you have a mic there or how does it work? Okay, I’ll come to you. Probably easier. Let’s try. Please also introduce yourself. Okay.

Audience: Thank you so much for giving me the opportunity to ask some questions. First, I’d like to congratulate you on your hard work to release this report. I know it is very hard work to address such complicated issues, and I’m eager to read this report. But back to the main theme, the AI governance we want: I want to ask a fundamental question about what the overarching goal of AI governance is. Is it acceptable to use the title of the very first United Nations resolution on AI adopted by the General Assembly in March, Seizing the Opportunities of Safe, Secure and Trustworthy Artificial Intelligence Systems for Sustainable Development? If not, what is a better articulation of the overarching goal of AI governance? That’s my first question. My second question is that I believe governance is beyond regulation. Governance deals with technical innovations, because we do need technical innovation, but we also need governance to guide these innovations for the greater good of the people and the globe, the planet. So if sustainable development is the overarching goal of AI governance, how can we guide AI innovation in line with the sustainable development goals, and even accelerate the implementation of the sustainable development goals? And the third question is back to regulation. The common concern about AI applications is disinformation. But disinformation comes from the misuse of AI tools. Take traffic safety as an example: for traffic safety we need safe cars, we need safe roads, but more importantly we need people, the drivers, to obey the rules. So how can we have a comprehensive governance framework that regulates the behavior of AI users? I’ll stop here. Thank you so much.

Sorina Teleanu: Thank you also. I think we’ll try to get a few more questions and then provide reflections. Any more points from the room? I don’t see any hand. And we’ve covered quite a few topics, so I’m pretty sure you have at least a small reflection in mind, thinking about all of them. I’m seeing a hand there. Could you please come? There are only mics here, unfortunately. Meanwhile, I do like your question: what do we want from AI governance, and what is the AI governance we want? And while we’re waiting, Jimena, would you like to provide some reflections?

Jimena Viveros: Yes. So obviously there have been around four important resolutions this year regarding AI. One was promoted by China, the one you mentioned, which is fantastic. So all of these resolutions are steps forward, and they are also leading up to the global governance that we want and that we expect. That’s why we had the Summit of the Future this September, and we had the Global Digital Compact, and we had the Pact for the Future. And all of the documents that were adopted therein are a monumental step, because we are now guiding the path of where AI is going to be governed by and for humanity. That’s actually the title of the Secretary-General’s High-Level Advisory Body report, Governing AI for Humanity. And I’d like to say also for the benefit and protection of humanity, because, as you mentioned, AI has enormous potential and can be harnessed for good, can be harnessed for all types of enhancement of all of the sustainable development goals. However, as Amina Mohammed said at the Arab Forum this past March in Beirut, there can be no sustainable development without peace. So going back to the point of peace and security and the importance of AI and its dual-use nature, what we want to create is global governance that encompasses this dual-use nature, repurposability, all of that. And it’s important to have it because, again, if not, we’re just going to have fragmented approaches that are not interoperable, that are not correlated or coordinated. So we all need to work towards this. And the only way we can do it is by the adoption of a binding treaty. That’s going to be hard, but we need to be ambitious in order to have this technology governed by us, so that we are not eventually surpassed by it.

Audience: Hi, my name is Ansgar Kuhne, I’m with EUI. Of course, I’d like to congratulate the Policy Network on AI for this very important report. I’d like to invite the panel to reflect on the interaction between the liability and interoperability aspect, specifically if we have interoperating AI systems, how best to identify where liability would lie in case issues do arise. Is there a role for contractual agreements in this? And if so, how to deal with the imbalances in both informational and economic power that various actors within that network of interoperating players may have? Thank you.

Sorina Teleanu: Thank you as well. Do we have more hands in the room? Yes, we do. Try the mic over there. If not, I’ll come your way. Not so much. Okay, it’s going to take me a while. If anyone would like to provide any reflection while I do the walk, please go ahead. Thank you.

Audience: Thank you. Riyad Najm. I’m from the media and communications sector. Now, in order for us to govern something, don’t we have to define it first? I mean, we all talk about artificial intelligence and what it is and what good or bad it can do to us. But until now, I cannot see a correct and definite definition of AI. By definition, does it mean the speed at which we can execute our computations, or is it the amount of data that we can access and manipulate at the same time? We all know that artificial intelligence was established a long time ago. The only reason it is becoming relevant now is that we are able to access a great amount of data at the same time and we have excessive, high-speed computation. So we need to define it first before we try to govern it. And maybe my other comment: for the past almost 20 years, we have not been able to govern the internet itself on a global level. All we get are sometimes guidelines, some initiatives and so on, and there was never a treaty that could cover this. Are we able to do that for artificial intelligence? I leave that to the panel to answer. Thank you.

Sorina Teleanu: Thank you as well for taking us many, many steps back and asking the questions of what exactly do we talk about when we talk about AI. I’m going to turn online and see if we have any questions and reflections from there briefly. From our online participants, not our online speakers. If our online moderator can just go ahead and unmute.

Audience: Okay, thank you very much. But actually we have a problem with our audio, so I’m not quite sure whether you can hear us well or not. (Sorina Teleanu: Please go ahead, we can hear you well.) Okay, there are a few questions from the online audience. The first one, I think, is more or less a repetition of the last question: how do we address the regulatory arbitrage between countries, especially between the global north and the global south? Because the situations are very different. Even for the internet, for the past 20 years, it has been hard for us to regulate it, right? So how do we address this arbitrage? The second question is about the wrong usage of AI, which is worst in the case of military applications. How do we safeguard ourselves to ensure that our AI is not being wrongly used for military purposes? And then we have an interesting question from Omar. The third question is: AI is producing a lot of harms in terms of online interactions. There are several bullying cases, and there are a lot of things being generated falsely by generative AI. So how do we protect ourselves, especially the young generations? Any panelists that can hopefully answer these questions? And then maybe, how do we foster collaboration when we encounter these problems? I think those questions are enough, because we have about seven minutes more. Thank you very much.

Sorina Teleanu: Thank you also, Mohd, sorry, and to everyone else online and here. We have seven minutes to answer quite many questions and I’m going to turn to you. Please go ahead.

Brando Benifei: Yes, well, a lot of different things have been asked; I’ll try to answer a few. In fact, on the issue of the definition that was touched on, it was a big issue for us too. In the end, I think it’s very important that we concentrate as much as possible on defining the concrete applications of AI, so that we define the systems, we define what we want to regulate for regulation’s sake, because we are not talking about philosophy or other sciences that should analyze AI in its different aspects. So we have been working on that as the EU, and there are important processes ongoing at the UN and at the OECD, and I think we need to stick to the minimum that we can so that we can find more agreement. Otherwise, we will lose sight. On the sustainable development goals issue that was mentioned, I think it’s important to also mention the risk of excessive wealth concentration that will limit access to services, and the same issues we mentioned of lifelong learning, etc. So in fact there is also the issue of how we distribute the added value created by AI-driven increases in productivity. If we look at the policy side, it cannot be avoided; I think we need to bring that to the table too. So it’s a fiscal policy, it’s a budgetary policy, it’s a new welfare system, because in fact with the revolutions we talked about, the industrial revolution, electricity, the digital space, we have seen changes in how we organized our safety nets and our state support systems. So we need to work on that as well. And finally, on liability, I want to say that it’s very important that we work on finding more transparency. This is what we have been working on with the AI Act, because if there is no downstream transparency between the various operators in the AI value chain, then the risk of asymmetry and the transfer of responsibilities downstream will be damaging to the weaker actors. It will strengthen the incumbents, and we will not have a healthy market for AI.
So liability, transparency: yes, we can have contractual agreements, but only if we have strong safeguards against information asymmetries. Otherwise we will just entrench market advantages and, I think, suppress innovation. So we need to find a good way. In Europe we are now working on new AI liability legislation that complements the AI Act, and we will surely be discussing this in the future in this kind of context as well. Thank you very much.

Sorina Teleanu: Thank you also, Brando, for highlighting the need for actually whole-of-government and whole-of-society approaches to dealing with the challenges of AI. Any more reflections from our speakers? We have three more minutes.

Muta Asguni: So, I think there were a lot of questions regarding regulation, governance, and the definition of AI. I just want to take a step back and highlight an amazing approach that has been taken in the report when it comes to forms of regulation: focusing on the value chain of AI. Because you cannot govern the whole of AI at once; you need to break it down into components and look at each component in isolation from the others. I also want to mention that we still don’t fully understand AI, right? This is perhaps not the first such technology, we still don’t fully understand electricity, for example, but this is a technology that is giving us answers in a way that is not very transparent. We don’t know why the model gave us that answer. So, I think a change in how we regulate and how we govern such a technology is very much needed. We cannot take only a reactive approach, especially when it comes to liability. We also need a proactive approach in the appropriate components within the value chain. In the data layer, for example, the collection of data, we’re going to take a reactive approach. But for access to AI, for example, maybe we need to consider a more proactive approach when it comes to governance and regulation. And from that, I want to also talk a little bit about interoperability. Because, you know, one of the biggest questions that we get from investors when it comes to investment in Saudi, for example, is: if I’m compliant with the laws and regulations in country X, am I going to be compliant with the laws and regulations in your country? And this is a very important question, especially when it comes to the GDPR and the differences between the GDPR and the PDPL in KSA.
So, having frameworks and interoperability is essential, especially when it comes to data, because data is currently the clearest layer, and from there we can move upstream into the rest of the AI value chain. Thank you.

Meena Lysko: Thank you. Thank you very much. Perhaps just from my side, I’d like to again emphasize that in order for us to have a future world and an equal future, sincere and responsible collaboration is crucial. And, as put in the report, we need to prioritize sustainability in the design, deployment and governance of generative AI technologies. And maybe a last point: without an environment, there’s no point in collaborating to boost economies or develop societies. We need to move off our path of total global destruction. Thank you.

Sorina Teleanu: Thank you also. We have quite a few powerful messages out of this session. I hope someone will be taking due notes. If not, we have AI-enabled reporting. Jimena?

Jimena Viveros: Very quickly, I just wanted to say that even though there’s no one single definition of AI, the technology has been here for over 70 years, so we have some understanding of it. And what we’re trying to do now is to whiten the black box, in terms of explainable AI and so on. We’re trying to do forensics, for example, on the models, to see how they came up with their outputs. This is a very important thing that’s going to help us make AI more accountable. And the global governance framework, I think, should be overarching across all of the topics. Obviously, there are going to be a lot of subset regimes, but they should all depend on the umbrella of governance. And just to finish on liability, I think the one conclusion we can come to is that if you cannot fully control the effects of a technology, you should accept, by the mere fact that you’re using it, that you will be responsible for whatever happens as a result. So I think that should be the general rule we keep in mind for now, especially when it comes to the peace and security domain or when there are human rights violations involved: high-scale or high-risk frontier models and all the other types of decision support systems and autonomous weapons systems. Thank you.

Sorina Teleanu: Thank you. Yves, Anita, any final reflections from you before we wrap up?

Yves Iradukunda: Just to, again, agree with the comment around liability. I think it goes back also to the emphasis that has been placed on awareness and capacity building, because some of the liability may come from the most vulnerable link within our ecosystem. That then means that we need to emphasize partnership, because if responsible use of these methods is applied in only one jurisdiction, it will not leave the rest of the countries or organizations safe. So, again, an emphasis on building the partnerships that enforce collaboration to advance some of these values that have been discussed.

Sorina Teleanu: Thank you. Anita, if you’re still with us and would like to add something? Okay, perhaps not. We are out of time. I’m not even going to try to summarize the many points that have been touched on today, but I’m sure there will be a very comprehensive report by the Policy Network facilitators, and there will also be an AI-enabled one, as I was saying. I do, again, encourage everyone to take a look at the report, maybe even just the recommendations. There is a chatbot that will allow you to interact with it directly. I’m looking forward to seeing how the Policy Network will continue its work, building on some of the very useful and thought-provoking reflections from today. Many thanks to our speakers here and online, many thanks to you in the room for your contributions, and to our online participants as well. Enjoy the rest of the IGF, and let’s see where we get with AI, humanity, governance, society, and all the implications around them. Thank you so much.

Jimena Viveros

Speech speed: 147 words per minute
Speech length: 842 words
Speech time: 342 seconds

Need for global governance framework for AI

Explanation: Jimena Viveros argues for the necessity of a global governance framework for AI. She emphasizes that this framework should be overarching and encompass all topics related to AI governance.

Evidence: She mentions that there will be subset regimes, but they should all fall under the umbrella of global governance.

Major Discussion Point: AI Governance and Regulation
Agreed with: Brando Benifei, Anita Gurumurthy
Agreed on: Need for global governance framework for AI

Importance of state responsibility for AI systems

Explanation: Jimena Viveros emphasizes the need for state responsibility in the production, deployment, and use of AI systems throughout their entire lifecycle. She argues that this is crucial for rebuilding trust in the international system.

Major Discussion Point: Liability and Accountability for AI
Agreed with: Brando Benifei, Anita Gurumurthy
Agreed on: Importance of addressing liability and accountability in AI systems
Differed with: Anita Gurumurthy
Differed on: Scope of liability for AI systems

Challenge of allocating responsibility given opacity of AI systems

Explanation: Viveros highlights the difficulty in allocating responsibility due to the opacity of AI systems. She points out that the ‘black box’ nature of AI makes it challenging to determine how decisions are made.

Evidence: She mentions ongoing efforts to develop explainable AI and forensic techniques to understand how AI models produce their outputs.

Major Discussion Point: Liability and Accountability for AI

Brando Benifei

Speech speed: 131 words per minute
Speech length: 1909 words
Speech time: 872 seconds

Importance of domestic policies alongside global governance

Explanation: Brando Benifei emphasizes the need for both domestic policies and global governance for AI. He suggests that while global cooperation is necessary, countries also need to develop their own rules for dealing with AI in their societies.

Major Discussion Point: AI Governance and Regulation
Agreed with: Jimena Viveros, Anita Gurumurthy
Agreed on: Need for global governance framework for AI

Need to focus on concrete AI applications in regulation

Explanation: Benifei argues for focusing on defining and regulating concrete applications of AI rather than getting bogged down in philosophical definitions. He suggests this approach is more practical for regulatory purposes.

Evidence: He mentions that the EU has been working on this approach, and there are ongoing processes at the UN and OECD.

Major Discussion Point: AI Governance and Regulation
Differed with: Muta Asguni
Differed on: Approach to AI regulation

Need for transparency to address liability issues in AI value chain

Explanation: Benifei emphasizes the importance of transparency in the AI value chain to address liability issues. He argues that without downstream transparency, there is a risk of asymmetry and unfair distribution of responsibilities.

Evidence: He mentions that the EU is working on new AI liability legislation to complement the AI Act.

Major Discussion Point: Liability and Accountability for AI
Agreed with: Jimena Viveros, Anita Gurumurthy
Agreed on: Importance of addressing liability and accountability in AI systems

Need to consider AI’s impact on labor markets and upskilling

Explanation: Benifei emphasizes the need to consider AI’s impact on labor markets and the importance of upskilling. He argues that AI will significantly change the job market, requiring adaptation and new skills.

Evidence: He compares the impact of AI to previous industrial revolutions, suggesting it could be as transformative as the introduction of electricity.

Major Discussion Point: Environmental and Social Impacts of AI

Need for common standards and definitions for AI globally

Explanation: Benifei argues for the importance of developing common standards and definitions for AI at a global level. He suggests this is crucial for enabling interoperability and cooperation between different parts of the world.

Evidence: He mentions ongoing work in various international fora to develop these common standards.

Major Discussion Point: AI and Global Cooperation
Agreed with: Meena Lysko, Yves Iradukunda
Agreed on: Need for collaboration and partnerships in AI development and governance

Anita Gurumurthy

Speech speed: 140 words per minute
Speech length: 752 words
Speech time: 320 seconds

Challenges of regulating AI given its transboundary nature

Explanation: Anita Gurumurthy highlights the difficulties in regulating AI due to its cross-border nature. She points out that the opacity of algorithms in cross-border value chains, combined with trade secret protections, can hinder effective regulation.

Evidence: She cites a recent case involving Lyft and Uber where the Washington Supreme Court ruled that reports maintained as trade secrets should be made public in the public interest.

Major Discussion Point: AI Governance and Regulation
Agreed with: Jimena Viveros, Brando Benifei
Agreed on: Need for global governance framework for AI

Need for liability rules for both producers and operators of AI systems

Explanation: Gurumurthy argues for the necessity of liability rules that apply to both producers and operators of AI systems. She emphasizes that a high level of care is needed in designing, testing, and employing AI-based solutions.

Evidence: She provides examples of social welfare systems and government employment of AI systems to illustrate the importance of operator liability.

Major Discussion Point: Liability and Accountability for AI
Agreed with: Jimena Viveros, Brando Benifei
Agreed on: Importance of addressing liability and accountability in AI systems
Differed with: Jimena Viveros
Differed on: Scope of liability for AI systems

Muta Asguni

Speech speed: 137 words per minute
Speech length: 1135 words
Speech time: 495 seconds

Importance of proactive and reactive approaches to AI governance

Explanation: Muta Asguni argues for a combination of proactive and reactive approaches to AI governance. He suggests that different components of the AI value chain may require different regulatory approaches.

Evidence: He gives examples of taking a reactive approach to data collection and a more proactive approach to AI access.

Major Discussion Point: AI Governance and Regulation
Differed with: Brando Benifei
Differed on: Approach to AI regulation

Potential of AI to support sustainable development goals

Explanation: Asguni highlights the potential of AI to contribute to sustainable development goals. He suggests that AI can be a tool to improve citizens’ lives in areas such as healthcare, education, and agriculture.

Major Discussion Point: Environmental and Social Impacts of AI

Challenge of regulatory arbitrage between countries

Explanation: Asguni highlights the challenge of regulatory arbitrage between countries in AI governance. He points out that differences in regulations between countries can create complications for businesses and investors.

Evidence: He gives an example of investors asking about compliance with regulations in different countries, specifically mentioning differences between the GDPR and the PDPL in Saudi Arabia.

Major Discussion Point: AI and Global Cooperation

Meena Lysko

Speech speed: 124 words per minute
Speech length: 968 words
Speech time: 467 seconds

Environmental impacts of AI infrastructure and resource extraction

Explanation: Meena Lysko highlights the significant environmental impacts of AI infrastructure and resource extraction. She emphasizes the need to consider these impacts in the development and deployment of AI technologies.

Evidence: She provides detailed examples of environmental damage from mining activities for materials used in AI hardware, such as cobalt mining in the Democratic Republic of Congo and lithium mining in Chile’s Atacama Desert.

Major Discussion Point: Environmental and Social Impacts of AI

Need for sincere collaboration on responsible AI development

Explanation: Lysko emphasizes the importance of sincere and responsible collaboration in the development and governance of AI technologies. She argues that this is crucial for creating an equal future and addressing global challenges.

Major Discussion Point: AI and Global Cooperation
Agreed with: Yves Iradukunda, Brando Benifei
Agreed on: Need for collaboration and partnerships in AI development and governance

Yves Iradukunda

Speech speed: 0 words per minute
Speech length: 0 words
Speech time: 1 second

Importance of addressing inequalities in AI adoption and access

Explanation: Yves Iradukunda highlights the need to address inequalities in AI adoption and access. He emphasizes the importance of building capacity and awareness to ensure equitable development and use of AI technologies.

Major Discussion Point: Environmental and Social Impacts of AI

Importance of partnerships to bridge divides in AI development

Explanation: Iradukunda stresses the importance of partnerships in bridging divides in AI development. He argues that collaboration is essential to enforce responsible use of AI across different jurisdictions.

Major Discussion Point: AI and Global Cooperation
Agreed with: Meena Lysko, Brando Benifei
Agreed on: Need for collaboration and partnerships in AI development and governance

Agreements

Agreement Points

Need for global governance framework for AI

Speakers: Jimena Viveros, Brando Benifei, Anita Gurumurthy
Arguments: Need for global governance framework for AI; Importance of domestic policies alongside global governance; Challenges of regulating AI given its transboundary nature

The speakers agree on the necessity of a comprehensive global governance framework for AI, while acknowledging the need for domestic policies and the challenges posed by AI’s transboundary nature.

Importance of addressing liability and accountability in AI systems

Speakers: Jimena Viveros, Brando Benifei, Anita Gurumurthy
Arguments: Importance of state responsibility for AI systems; Need for transparency to address liability issues in AI value chain; Need for liability rules for both producers and operators of AI systems

The speakers emphasize the importance of establishing clear liability and accountability mechanisms for AI systems, including state responsibility and transparency in the AI value chain.

Need for collaboration and partnerships in AI development and governance

Speakers: Meena Lysko, Yves Iradukunda, Brando Benifei
Arguments: Need for sincere collaboration on responsible AI development; Importance of partnerships to bridge divides in AI development; Need for common standards and definitions for AI globally

The speakers agree on the importance of collaboration, partnerships, and common standards in AI development and governance to address global challenges and bridge divides.

Similar Viewpoints

Both speakers emphasize the need for practical approaches to AI governance, focusing on specific applications and combining proactive and reactive regulatory strategies.

Speakers: Muta Asguni, Brando Benifei
Arguments: Importance of proactive and reactive approaches to AI governance; Need to focus on concrete AI applications in regulation

Both speakers highlight the environmental and developmental aspects of AI, recognizing its potential for sustainable development while also acknowledging its environmental impacts.

Speakers: Meena Lysko, Muta Asguni
Arguments: Environmental impacts of AI infrastructure and resource extraction; Potential of AI to support sustainable development goals

Unexpected Consensus

Importance of addressing inequalities in AI adoption and access

Speakers: Yves Iradukunda, Anita Gurumurthy, Brando Benifei
Arguments: Importance of addressing inequalities in AI adoption and access; Challenges of regulating AI given its transboundary nature; Need to consider AI’s impact on labor markets and upskilling

Despite representing different regions and perspectives, these speakers unexpectedly converged on the importance of addressing inequalities in AI adoption and access and AI’s impact on labor markets, highlighting a shared concern for equitable AI development.

Overall Assessment

Summary: The main areas of agreement include the need for a global governance framework for AI, the importance of addressing liability and accountability, the necessity of collaboration and partnerships in AI development, and the recognition of AI’s environmental and social impacts.

Consensus level: There is a moderate to high level of consensus among the speakers on the key issues surrounding AI governance. This consensus suggests a growing recognition of the complex challenges posed by AI and the need for coordinated global action. However, differences in emphasis and approach indicate that achieving a unified global framework for AI governance may still face significant challenges.

Differences

Different Viewpoints

Approach to AI regulation

Brando Benifei

Muta Asguni

Need to focus on concrete AI applications in regulation

Importance of proactive and reactive approaches to AI governance

Benifei advocates for focusing on concrete AI applications in regulation, while Asguni suggests a combination of proactive and reactive approaches depending on the component of the AI value chain.

Scope of liability for AI systems

Anita Gurumurthy

Jimena Viveros

Need for liability rules for both producers and operators of AI systems

Importance of state responsibility for AI systems

Gurumurthy emphasizes liability for both producers and operators of AI systems, while Viveros focuses more on state responsibility throughout the AI lifecycle.

Unexpected Differences

Emphasis on different aspects of AI governance

Jimena Viveros

Yves Iradukunda

Importance of state responsibility for AI systems

Importance of partnerships to bridge divides in AI development

While both speakers discuss AI governance, their focus is unexpectedly different. Viveros emphasizes state responsibility, while Iradukunda stresses the importance of partnerships and collaboration. This highlights the complexity of AI governance and the various approaches that can be taken.

Overall Assessment

Summary

The main areas of disagreement revolve around the approach to AI regulation, the scope of liability for AI systems, and the balance between global governance and domestic policies.

Difference level

The level of disagreement among the speakers is moderate. While there are differences in emphasis and approach, there is a general consensus on the need for AI governance and regulation. These differences reflect the complexity of AI governance and the various perspectives that need to be considered in developing effective policies and frameworks. The implications of these disagreements suggest that a multifaceted approach to AI governance may be necessary, incorporating elements from various viewpoints to create a comprehensive and effective regulatory framework.

Partial Agreements

Both speakers agree on the need for global governance of AI, but they differ in their emphasis. Benifei stresses the importance of domestic policies alongside global governance, while Asguni highlights the challenges of regulatory arbitrage between countries.

Brando Benifei

Muta Asguni

Importance of domestic policies alongside global governance

Challenge of regulatory arbitrage between countries

Both speakers address the environmental and developmental aspects of AI, but from different angles. Lysko focuses on the negative environmental impacts of AI infrastructure, while Asguni emphasizes the potential of AI to support sustainable development goals.

Meena Lysko

Muta Asguni

Environmental impacts of AI infrastructure and resource extraction

Potential of AI to support sustainable development goals

Takeaways

Key Takeaways

There is a need for global governance and regulation of AI, balanced with domestic policies

Liability and accountability frameworks for AI need to address both producers and operators

The environmental and social impacts of AI, including on labor markets and inequality, must be considered

Global cooperation, common standards, and partnerships are crucial for responsible AI development

AI governance should aim to harness AI’s potential for sustainable development while mitigating risks

Resolutions and Action Items

Continue work of the Policy Network on AI to build on insights from this discussion

Encourage stakeholders to review the full Policy Network on AI report and its recommendations

Explore use of the AI chatbot created to allow interaction with the report’s contents

Unresolved Issues

How to achieve a binding global treaty or governance framework for AI

How to balance proactive and reactive approaches to AI regulation

How to address regulatory arbitrage between countries, especially Global North and South

How to define AI in a way that allows for effective governance

How to ensure transparency and explainability of AI systems for accountability purposes

How to protect against misuse of AI, especially in military applications

Suggested Compromises

Focus on regulating concrete AI applications rather than trying to define AI as a whole

Adopt a value chain approach to AI governance, addressing different components separately

Balance global governance frameworks with flexibility for domestic implementation

Combine proactive and reactive regulatory approaches for different aspects of AI

Thought Provoking Comments

Digital technology must serve humanity, not the other way around.

speaker

UN Secretary General (quoted by Sorina Teleanu)

reason

This concise statement cuts to the heart of the ethical considerations around AI governance, framing the discussion in terms of human-centric values.

impact

It refocused the conversation on the fundamental purpose and ethics of AI development, beyond just technical or policy considerations.

We need to look at this very carefully. The other thing I want to say is that the recommendations in the environment section could also look at very useful concepts coming from international environmental law, you know, the Biodiversity Convention, common but differentiated responsibilities because the financing that is needed for AI infrastructures will require us to adopt a gradient approach.

speaker

Anita Gurumurthy

reason

This comment introduced important environmental and legal perspectives that had not been previously discussed, highlighting the need for a nuanced, global approach.

impact

It broadened the scope of the discussion to include environmental concerns and international legal frameworks, leading to more holistic consideration of AI governance.

We can’t talk about responsible AI as AI isolated from everything else that we do in our lives. I think when you think about AI as a technology, we also need to reflect about why AI to begin with, why technology to begin with, and what has been the impact of technology all along, before even the AI came in.

speaker

Yves Iradukunda

reason

This comment challenged participants to consider AI in a broader historical and societal context, rather than as an isolated phenomenon.

impact

It shifted the discussion towards a more holistic view of AI’s role in society and its relationship to other technologies and social issues.

Today, we’re using about 7 gigawatts of electrical power in data centers in the world today. This is projected to grow to 63 gigawatts by 2030. In just five years, we’re expected to grow and consume 10 times the electricity that we consume today for the use of data centers.

speaker

Muta Asguni

reason

This comment provided concrete data on the environmental impact of AI, bringing a tangible dimension to the discussion of sustainability.

impact

It grounded the conversation in real-world implications and highlighted the urgency of addressing AI’s environmental impact.

Now in order for us to govern something, don’t we have to define it first? I mean, we all talk about artificial intelligence and what it is and what it can do good or bad to us. But until now, I cannot see a correct and definite definition for AI.

speaker

Riyad Najm (audience member)

reason

This question challenged a fundamental assumption of the discussion, pointing out the lack of a clear, agreed-upon definition of AI.

impact

It prompted speakers to address the challenge of defining AI for governance purposes, leading to a more nuanced discussion of how to approach regulation.

Overall Assessment

These key comments shaped the discussion by broadening its scope beyond technical and policy considerations to include ethical, environmental, historical, and definitional challenges. They pushed participants to consider AI governance in a more holistic, global context, while also highlighting the urgency of addressing concrete impacts. The discussion evolved from specific policy recommendations to grappling with fundamental questions about the nature of AI and its role in society.

Follow-up Questions

How feasible is a global governance regime for AI?

speaker

Sorina Teleanu

explanation

This is important to explore concrete steps for establishing international cooperation on AI governance, given current challenges with multilateralism.

What is the overarching goal for AI governance?

speaker

Audience member

explanation

Defining a clear goal is crucial for aligning global efforts on AI governance and guiding policy development.

How can we guide AI innovation to accelerate implementation of sustainable development goals?

speaker

Audience member

explanation

This explores how to harness AI’s potential for addressing global challenges while mitigating risks.

How can we develop a comprehensive governance framework to regulate the behavior of AI users?

speaker

Audience member

explanation

This addresses the need to govern not just AI systems, but also how humans interact with and use AI technologies.

How to best identify where liability would lie in case issues arise with interoperating AI systems?

speaker

Ansgar Kuhne

explanation

This explores the complex legal challenges of assigning responsibility in interconnected AI systems.

What is the correct and definite definition of AI?

speaker

Riyad Najm

explanation

A clear definition is necessary to properly scope and implement AI governance efforts.

How do we address the regulatory arbitrage between countries, especially between the global north and south?

speaker

Online audience member

explanation

This explores how to create equitable AI governance given different national contexts and capabilities.

How do we safeguard against the wrong usage of AI for military purposes?

speaker

Online audience member

explanation

This addresses critical concerns about AI’s dual-use nature and potential misuse in warfare.

How do we protect ourselves, especially young generations, from harms produced by AI in online interactions?

speaker

Online audience member (Omar)

explanation

This explores safeguards needed to protect vulnerable groups from AI-enabled online harms.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

DC-CIV & DC-NN: From Internet Openness to AI Openness

Session at a Glance

Summary

This discussion focused on exploring the potential application of core Internet values and principles to AI governance and openness. Participants debated whether concepts like openness, interoperability, and non-discrimination that have shaped Internet development could be extended to AI systems. Many speakers emphasized that while AI and the Internet are distinct, there are important interconnections to consider as AI increasingly becomes an intermediary layer between users and online content.


Key points of discussion included the need for transparency and accountability in AI systems, concerns about AI amplifying existing biases and power imbalances, and debates over appropriate regulatory approaches. Some argued for applying Internet openness principles to AI, while others cautioned against direct equivalence given AI’s distinct nature. The importance of human rights frameworks in AI governance was highlighted, as was the need to consider societal and collective rights alongside individual protections.


Participants explored tensions between permissionless innovation and precautionary regulation for AI. There were differing views on the degree of standardization and interoperability needed for AI systems compared to Internet infrastructure. The discussion touched on challenges around AI safety, liability, and the concentration of AI development among a few large companies.


Overall, the session illuminated the complex considerations involved in developing governance frameworks for AI that balance innovation with responsibility. While no clear consensus emerged, the dialogue highlighted important areas for further exploration as AI governance evolves, including potential lessons from Internet governance experiences.


Keypoints

Major discussion points:


– The relationship and differences between AI and the internet


– Applying internet governance principles like openness and interoperability to AI


– The need for AI regulation and accountability, especially regarding risks and harms


– Balancing innovation with safety and human rights considerations in AI development


– The impact of AI on internet usage and information access


The overall purpose of the discussion was to explore whether and how core internet governance principles and values could be applied to AI governance and regulation. The participants aimed to identify lessons learned from internet governance that could inform approaches to AI.


The tone of the discussion was thoughtful and analytical, with participants offering different perspectives and occasionally disagreeing. There was a sense of grappling with complex issues without clear solutions. The tone became slightly more urgent near the end when discussing concrete next steps and the need for action on AI governance.


Speakers

– Luca Belli: Co-moderator


– Olivier Crépin-Leblond: Co-moderator


– Renata Mielli: Advisor at the Ministry of Science and Technology of Brazil, Chairwoman of CGI (Brazilian Internet Steering Committee)


– Anita Gurumurthy: IT4Change


– Sandrine Elmi Hersi: Leads ARCEP’s (French Regulator) Unit on Internet Openness


– Vint Cerf: Internet pioneer, Chief Internet Evangelist and Vice President at Google


– Yik Chan Chin: Professor at the University of Beijing, leads work of the PNAI (Policy Network on AI)


– Sandra Mahannan: Data scientist analyst at Unicorn Group of Companies


– Alejandro Pisanty:


– Wanda Muñoz: Member of the feminist AI research network in UNESCO’s Women4Ethical AI platform


Additional speakers:


– Desiree: Audience member


Full session report

Expanded Summary of AI Governance Discussion


This discussion explored the potential application of core Internet values and principles to AI governance and openness. Participants debated whether concepts like openness, interoperability, and non-discrimination that have shaped Internet development could be extended to AI systems. The dialogue illuminated the complex considerations involved in developing governance frameworks for AI that balance innovation with responsibility.


Key Themes and Debates


1. Relationship between AI and Internet Governance


A central point of discussion was the relationship between AI and Internet governance principles. Vint Cerf, an Internet pioneer, emphasised that AI and the Internet are fundamentally different technologies requiring distinct governance approaches. However, other speakers like Luca Belli and Renata Mielli argued that some core Internet values, such as transparency and accountability, could apply to AI governance. Mielli specifically mentioned the Brazilian Internet Steering Committee’s principles as potentially applicable to AI governance.


There was general agreement that AI is creating a new intermediary layer between users and Internet content, which could significantly change how users access and interact with online information. Sandrine Elmi Hersi noted that this could potentially restrict user agency and transparency in accessing online information. She highlighted predictions that search engine traffic could decline by 25% by 2026 due to AI chatbots, emphasizing the impact of generative AI on internet openness.


2. Openness and Interoperability in AI Systems


The concept of openness in AI systems sparked debate. Anita Gurumurthy argued that openness in AI is complex and doesn’t necessarily lead to transparency or democratisation. She critiqued the term “open” as potentially misleading in the context of AI. Vint Cerf pointed out that AI systems are mostly proprietary and not interoperable, unlike Internet protocols. Yik Chan Chin added that standardisation and interoperability of AI systems are extremely difficult currently.


3. Human Rights and AI Governance


Wanda Muñoz strongly advocated for a human rights-based approach to AI governance, beyond just ethics and principles. She emphasized the need for accountability, remedy, and reparation when violations of human rights result from AI use. This perspective shifted the discussion towards considering AI governance in terms of concrete human rights obligations and mechanisms for redress.


4. Regulation and Governance Approaches


Participants offered various perspectives on how to approach AI regulation:


– Vint Cerf suggested that accountability is important in the AI world, and parties offering AI-based applications should be held accountable for any risks these systems pose.


– Sandra Mahannan proposed that regulation should focus more on AI developers and models rather than users, highlighting the importance of data quality and challenges faced by smaller players in the AI industry.


– Yik Chan Chin emphasised the need for global coordination on AI risk categorisation, liability frameworks, and training data standards.


– Alejandro Pisanty cautioned against trying to regulate AI in general, suggesting instead to focus on specific applications. He also stressed the importance of separating the effects of human agency from the technology itself in AI governance.


5. Impacts and Risks of AI Systems


Several speakers highlighted potential risks and impacts of AI systems:


– Wanda Muñoz warned that AI systems can perpetuate and amplify existing societal biases and discrimination.


– Yik Chan Chin noted that AI poses new cybersecurity risks to Internet infrastructure.


– Alejandro Pisanty raised concerns that generative AI could lead to loss of information detail and accuracy.


Thought-Provoking Insights


Several comments shifted the discussion in notable ways:


1. Anita Gurumurthy challenged the current paradigm of data and wealth concentration in the tech industry, suggesting alternative paths for more distributed value creation.


2. Vint Cerf’s distinction between AI and the Internet prompted more careful consideration of which Internet governance principles may be applicable to AI. He also highlighted potential benefits of AI in improving our ability to ingest, analyze, and summarize information.


3. Sandrine Elmi Hersi’s insights on how AI is fundamentally changing user interaction with Internet content prompted discussion about implications for governance.


Unresolved Issues and Future Directions


The discussion left several key issues unresolved:


1. Balancing innovation and risk mitigation in AI regulation


2. Extent to which Internet governance principles can or should be applied to AI governance


3. Ensuring AI systems enhance rather than restrict access to diverse online information


4. Approaches for global coordination on AI governance given differing national/regional priorities


Participants suggested developing a joint report for the next Internet Governance Forum on elements that can enable an open AI environment. They also proposed continuing collaboration between the Dynamic Coalition on Core Internet Values and Dynamic Coalition on Net Neutrality on AI governance issues.


Follow-up questions raised by participants highlighted areas for further exploration, including:


– Balancing regional diversity and harmonisation needs in AI governance


– Strengthening multi-stakeholder involvement in AI governance


– Regulating AI from the developer angle


– Incorporating feminist and diverse perspectives into core values for AI governance


– Developing international norms for liability and accountability in AI


– Regulating AI in specific verticals or sectors


– Ensuring transparency in complex AI systems like large language models


– Addressing potential loss of specific details in AI-generated content


– Approaching AI regulation in relation to internet infrastructure


In conclusion, the discussion highlighted the complex and multifaceted nature of AI governance, involving technical, legal, and human rights considerations. While there was agreement on the need for AI governance, developing a unified approach will require balancing multiple perspectives and priorities, with a strong emphasis on human rights, accountability, and the unique challenges posed by AI as a new intermediary layer in internet interactions.


Session Transcript

Luca Belli: So, let me just start to introduce the panelists and introduce a little bit of the team, and then I will give the floor to my friend and co-moderator, Olivier Crépin-Leblond. So, our panelists of today, you can already see them on the screen, the remote panelists, and here with us, the on-site panelists. We will start with Renata Mielli, who is advisor at the Ministry of Science and Technology of Brazil, and also the chairwoman of the CGI, the Comitê Gestor da Internet, the Brazilian Internet Steering Committee. Then we will have Sandrine Elmi Hersi, who leads ARCEP’s, the French regulator’s, Unit on Internet Openness. Then we will have Sandra Mahannan. Is Sandra with us or not? Yes, she works for the Unicorn Group and for Omefe Technologies. Then we will have Mr. Vint Cerf, who needs few introductions. He is an internet pioneer and also Chief Internet Evangelist and, if I’m not mistaken, Vice President at Google. And then we will have Yik Chan Chin, who is professor at the University of Beijing, and also leads the work of the PNAI, the Policy Network on AI. And last but not least, we should have our friend Alejandro Pisanty somewhere. Sorry, there is also Wanda Muñoz. So, do we have Alejandro Pisanty online? I don’t see him. He’s online, but he’s not visible yet; we hope he will be soon. And then we will have also Wanda Muñoz, she’s already online of course, and then last but not least, Anita Gurumurthy from IT4Change.
All right, now let me just introduce a little bit of the topic and why we have combined these two sessions of the Dynamic Coalition on Core Internet Values and on Net Neutrality and Internet Openness. We had very similar proposals for this year’s IGF sessions: to bring together individuals that have been working, in some cases for decades, on core internet values, and for at least one decade on Internet Openness, Net Neutrality and similar issues, to discuss which lessons, if any, can be learned from Internet Openness and Core Internet Values and transposed to the current discussions on AI Governance and AI Openness. We know very well that the Internet and AI are two different beasts, but there is a lot that overlaps: to do AI you somehow need Internet connections and the Internet as a whole, and a lot of things that happen on the Internet, especially most applications nowadays, rely on some sort of AI. So there is a deep connection between the two, but they are not exactly the same thing, and many of the core Internet values or Internet Openness principles that we have been discussing for the past decade or so may apply or not. Some things may be more intuitive, like transparency. Transparency of Internet traffic management, which we have discussed on Net Neutrality issues for a decade, is essential to understand to what extent the ISP is managing traffic: is this reasonable traffic management, or is this unduly blocking or throttling some specific traffic? This is also essential to understand which kind of decisions are taken by the AI systems we may rely upon for enormously important parts of our lives, from getting loans and credit in banks, to being identified and maybe arrested by police as criminals, or using services with face recognition.
So this kind of transparency, although different, be it in a rule-based system or in a very intensive system like LLMs that rely on a lot of computational capacity and are more predictive and probabilistic than deterministic, in both cases, we need to understand how they function. And this is not something that, from a rule of law and due process perspective, we can accept to simply say, you know, we don’t know how it works. I understand that in many cases we don’t know how it works, but this kind of transparency is essential for its counterpart, accountability: accountability with users, with regulators, with society at large. And then we have a lot of debates about interoperability and permissionless innovation, which are at the core of the internet’s functioning, but most AI systems are not interoperable. And actually, the most advanced are also developed by a very small number of large corporations, which may lead us to the kind of concentration that non-discrimination and decentralization, which are at the core of the internet and at the core of net neutrality and at the core of internet openness, aim at avoiding. And to conclude, a very concrete example of how this concentration and the lack of net neutrality can even be put on steroids by AI, to some extent. We have very good examples now. We have been debating zero rating and its compatibility or incompatibility with net neutrality for almost the past decade. And we know that in most of the Global South, people access the internet primarily through the Meta family of apps, especially WhatsApp. And the fact that WhatsApp now includes Meta AI at the very top of its homepage means that de facto most of the people in the Global South have as an internet experience primarily WhatsApp, and will have as an AI experience primarily Meta AI, period. That is the reality of most of the people, who are sadly poor. And we only have that as an introduction to AI.
And we also work to train for free that specific AI. So those are a lot of considerations we have to have in our mind to understand to what extent we can transpose internet openness principles, to what extent we can learn lessons from regulation that already exists, and to some extent may have failed over the past decades in terms of internet openness, and also which kind of governance solutions we can put forward in order to shape the evolving AI governance and the hopeful, maybe, AI openness. At this point, this would be the moment where my co-moderator, Olivier, would provide his introductory remarks. But I’m seeing him intensely speaking with our remote moderation team. So as the show must go on, I think we can start with the first speaker that we have on our agenda, that if I’m not mistaken, is Renata Mielli. I think, Olivier, do you want to give us your introductory story? Sorry, Renata. Olivier arrived again, and he is going to provide


Olivier Crépin-Leblond: his introductory remarks, and then we will pass the floor to Renata. Yes, apologies for this. Blanc speaking and I’ve been running back and forth trying to get all of our panelists remote panelists because we’ve got quite a few to have camera access, be able to be recognized by us and so on. So sorry for the running in and out but thank you for the introduction and it’s really great to have a meeting of both organizations, both dynamic coalitions together for a topic which is of such interest. I’m going to give just a quick few words about the core internet values because I’m not quite sure that everyone in the room knows about those. I see some new faces as well and this dynamic coalition started quite a while ago based on the premise that the internet works due to a certain number of internet fundamentals that allowed the internet to thrive and to become what it is today. And those are quite basic actually, they’re all technical in nature and so if I just look at them, the first one is the point that the internet is a global resource, it’s an open medium, open to all, it’s interoperable which means that every machine around the network is able to talk to other machines and I’m saying machines when we started it was every computer but now it is of course you’re speaking about all sorts of devices on this. It’s decentralized, there’s no overall central control of the internet, short of the single naming system, the DNS, apart from this there’s so many organizations that are involved in its coordination. It’s also end-to-end so it’s not application specific, you can put any type of application at the end and make it work with something at the other end so the actual features reside in the end nodes and not with a centralized control of a network and that makes it user-centric. 
End users have the choice of what services they want to access and how they want to access them, and these days of course, using mobile devices, they’re able to download into their devices any type of application that they want to run. And they don’t really think about the internet running behind the scenes. And of course, the most important thing: it’s robust and reliable. It’s remarkable to think that there are so many people now on a network which started with only a few thousand people, then a few hundred thousand, then a million and a few million. And some people back in the day thought this was going to collapse. Well, it’s still working. And it’s still doing very well and very reliably, considering the number of people on it and the number of people that are trying to break it, the amount of malware and everything else that is on this. So it’s pretty robust, pretty reliable. A few years ago, we also added another core value, which was that of safety. The right to be safe was one of the things that we felt was important to add as a core value. In other words, being able to allow for cyber measures to make sure the network itself doesn’t collapse, and all sorts of ways, not content control as such, but to make sure that you are safe when you use the internet and you’re not going to be completely overwhelmed by the amount of malware and everything that’s out there. And that’s something which I think we were quite successful in doing: all the antivirus software, all of the devices, all of the things that we now have on the net to make it work. These are very open values. And of course, they’re open for people to adopt. And of course, we have seen erosion of these over the past years. The openness of the network has been put to the test on many occasions.
There has also certainly been, as far as network neutrality is concerned, some traffic shaping and some things affecting it. But on the whole, it’s still global, it’s still interoperable, it’s still got the basic values that we’ve just spoken about now. And whilst we are seeing an erosion, we’re also seeing that it’s quite well understood by players out there. And we’re looking at various different levels, so the telecommunication companies, governments, the operators, the content providers and so on, such that we have this equilibrium, if you want. I can’t say a sweet spot because it keeps on moving forward, but we have this equilibrium today, and we hope that we will be able to continue having this equilibrium tomorrow, one that will keep this internet innovative, but at the same time also make it as safe as possible and as stable as possible. Because that’s really something that now, with a network that is so important in everyone’s lives, we need to make sure we have for the future. The economic implications and societal implications of having a broken internet are too big for us not to do this. So hopefully that’s a message that’s been well understood. But now we have AI. AI has come up and seems to have made an absolute revolution. Forget everything else that’s happened before, we need to regulate, regulate, regulate. That’s what some are saying. I think today’s session is going to be looking at this. Do we need to regulate, regulate, regulate, or can we learn some lessons from the core internet values and how the internet has thrived to be what it is today, and apply them to artificial intelligence? Well, let’s find out.


Luca Belli: Let’s start with our first panelist, Renata Mielli from the Ministry of Science and Technology. Please, Renata, the floor is yours.


Renata Mielli: Hello. Thank you, Luca. Thank you, Olivier. It is a pleasure to be here discussing this interesting approach about AI and the Internet. I am very happy with this bridge that you bring to us, to reflect on AI and the Internet and the core values that we have to keep in mind when we talk about this new and pervasive technology. So, thank you very much, my colleagues. I am going to bring some historical perspective and try to make this bridge between the core values that we have in Brazil for the Internet and AI. Well, long-term analysis of Internet development shows that the Internet mostly benefited from a core set of original values that drove its creation, such as openness, permissionless innovation, interoperability and others. The Internet and its technologies historically fostered an interoperable environment guided by solid and universally accepted standards that enabled pervasiveness, shared best practices, collaboration and collective benefit from a unique network deployed worldwide. Just as with other types of technology, the development of the Internet was also based on academic collaboration networks, with researchers who worked together to deploy the initial stages of the global network. Artificial intelligence has been seen as a game-changer for a broad range of fields, from data science and news media to agriculture and related industries. In this sense, it is safe to start from the assumption that AI will greatly impact society in several terms: economically, politically, environmentally, socially and many others. The harder challenge we have is to drive this evolution in such a way that the positive impacts surpass the negative ones, with AI being used to empower people and society for a more inclusive and fair future for all, and the first step for that is to have a clear consensus on fundamental principles for AI development and governance. 
The Brazilian Internet Steering Committee, CGI.br, outlined a set of ten principles for the governance and use of the internet in Brazil. Our so-called Internet Decalogue provides core foundations for internet governance and is very much in line with the core internet values proposed by the IGF’s dynamic coalition, in a way that we believe can be leveraged to also meet expectations for the governance and development of AI systems. Principles such as standardization and interoperability are important for opening up development processes, allowing for exchange and joint collaboration among various global stakeholders and strengthening research and development in the field. In the same sense, AI governance must be founded on human rights provisions, taking into account its multipurpose and cross-border applications. Principles such as innovation and democratic, collaborative governance can also be considered as foundations for artificial intelligence, in order to encourage the production of new technologies and to promote multistakeholder governance, with more transparency and inclusion throughout every related process. The same goes for transparency, diversity, multilingualism and inclusion, which can be interpreted in the context of AI systems development from the perspective that these technologies should be developed using technically and ethically safe, secure and trustworthy standards, curbing discriminatory purposes. At the same time, the legal and regulatory environments hold particular relevance for the interpretation that the development and use of artificial intelligence systems should be based on effective laws and regulations, in order to foster the collaborative nature and benefits of this technology while safeguarding basic rights. It should also be noted that adopting a principles-based approach ends up generating more general guidelines, which can lead to implementation difficulties. 
However, this should be balanced with the need for each country to adapt AI governance to its local realities and peculiarities, in addition to granting greater sovereignty over how this governance should take place in terms of politics, economics, and social development. As a bottom line, we could think of AI development and governance as a priority topic for a more intense south-to-south collaboration, fostering the creation and expansion of research and development, as well as open and responsible innovation networks with long-term cooperation agreements and technology transfer, in order to corroborate sovereignty and solid development frameworks for the global south. It is important not to try to reinvent the wheel, and to draw upon good practices that already exist, such as the global articulations across the IGF and WSIS processes, or even more stable sets of proposed frameworks, such as the NETmundial principles and guidelines, that can orient the evolution of the ecosystem to be even more inclusive and results-oriented. Last but not least, existing coalitions, political groups, and other stakeholders should be included in this process and could be leveraged as platforms for collaboration within digital governance and cooperation as a whole, including in traditional multilateral spaces such as the BRICS or the G20. Brazil, for example, held the presidency of the G20 in 2024, and will do the same with BRICS in 2025. We believe that, in both cases, there will be good opportunities for fostering best practices in digital governance collaboration across different countries. Thank you very much.


Olivier Crépin-Leblond: Thank you very much, Renata. Wow, what a start. A lot of points being made here. I’ll ask that we all try to stick to our five minutes, because otherwise we’ll run over — we could speak for hours on these topics. But next is Anita Gurumurthy, from IT for Change, and Abdelkader. Are they ready?


Anita Gurumurthy: Yes, I’m here. Anita? Can you hear me?


Olivier Crépin-Leblond: She’s there, and she works, she can speak, yes? Perfect. Go ahead, Anita. Fantastic, we can hear you, yes.


Anita Gurumurthy: You can hear me, I hope. Yeah. All right. So, thank you very much. I just heard that from Renata, and also note this wonderful point that Mr. Vint Cerf has made, that the Internet is used to access AI applications, but operationally AI systems don’t need the Internet to function. I mean, I think we are making reference to the fact that, in many ways, algorithms predated the Internet, or the Internet-based revolution. However, the fact of the matter is, just like the intimate relationship between time and space, or space and time, we have a relationship between the Internet and contemporary AI, which Mr. Cerf calls agentic AI. Allow me to be a little bit critical of openness itself, because I think when I open up my house, what I mean is everyone is welcome. But the ideas of the open Internet and open AI do not necessarily, you know, map on to this kind of sentiment. So the term open Internet is used very frequently, but it doesn’t have a universally accepted definition. And that is because, as all of us know — and none of us here needs an introduction to the geoeconomics of the data paradigm — data collection has become pivotal when we talk about the Internet paradigm. And it’s used either to target ads in large proportions or to build products. And only a handful of players have the scale to meaningfully pull this off. So the result is a series of competing walled gardens, if you will. And they, of course, don’t look like the idealized Internet we started with. And today’s technology runs on a string, I would say, of closed networks: app stores, social networks, algorithmic feeds. And those networks have become far more powerful than the web, in large part by limiting what you can see and what you can distribute. So the basic promise of the Internet revolution, the scale, the possibility — well, I would say it’s not plausible at this conjuncture. 
Alongside all of this, the possibilities of community and solidarity haven’t died. Thank God for that. Because we have the open source communities, there are open knowledge communities. And of course, all of these remain open and vulnerable, unfortunately, to capitalist cannibalization and to state authoritarianism. So that is a bit of bemoaning the state of the Internet. And all of this points to an important thing, which is that instead of an economic order that could have leveraged the global Internet for a global commons of the data paradigm, we now have centralized data value creation by a handful of transnational platform companies, when we could actually have had, as Benkler pointed out long ago, a different form of wealth creation. Now, I come to openness in AI, and I’m cognizant of the five minutes that I have. I think it’s worthwhile to look at Irene Solaiman’s analysis of AI labeled open. What is open AI? We’re actually talking about a long gradient, right? You can talk about open as if it’s one thing, but you could actually have something with very minimal transparency and reusability attributes. That could also be called open, and therefore open is not necessarily open to scrutiny. And the critique that other scholars, like Meredith Whittaker, mount against this paradigm is that we don’t necessarily democratize or extend access when we talk about openness. Openness doesn’t necessarily lower costs for large AI systems at scale. Openness doesn’t necessarily contribute to scrutability, and it often allows for systematic exploitation of creators’ labor. So where do we go from here? I mean, it’s very sobering that with GPT-4, for instance, when it was published by OpenAI, they explicitly declined to release details about its architecture, including model size, hardware, training compute, data set construction, and training methods. So here we are. 
What we need to do is restore the ideas of transparency, reusability, extensibility, and the idea of access to training data, in politicized form. If we don’t do this, then we will be lost. And my last submission here is that to be able to politicize each of these notions and make them part of ex-ante public participation, we need to turn to environmental law and look at the Aarhus Convention, for instance. We need a societal and collective rights approach to openness, whether it’s the open internet or open AI. And collective rights, societal rights, do not preclude individual rights or liability for harms caused to individuals. I’m not precluding that, but we still need to understand what will benefit society and what will harm society. So we are really looking at a societal framework for rights, one that goes beyond the liberal frame, that doesn’t just always come back to “your product caused me harm”, but really looks at the ethics and values of societies, at the sovereignty of the people, you know, as a collective. And here, I think we should understand that there are three cornerstones in substantive equality: the right to dignity and freedom from misrecognition, the right to meaningful participation in the AI paradigm, not just in a model, and the right to effective inclusion in the gains of AI innovation, which is for all countries and not just a couple. Thank you.


Luca Belli: Thank you very much, Anita, for this reality check and for reminding us that, behind the label of openness or open, one has to look at the substance of things. And the very good example of OpenAI, whose practice and architecture are really antithetical to openness despite having open in its very name, allows us to think about the fact that if we want market players, very large ones, including multi-billion-dollar corporations, to stick to their promises, maybe some type of regulation is actually essential. And here it’s very good to then pass the floor to Sandrine Elmi Hersi, because Arcep has been very vocal and leading on internet openness over the past years of implementation, since 2015, of the Open Internet Regulation in Europe. So it’s very good, based on the experience you have had over the past decade, to understand which kinds of mistakes might have been made, which kinds of limits may exist, and which kinds of lessons we can learn to better shape the openness of AI. Please, Sandrine, the floor is yours.


Sandrine Elmi Hersi: Thank you. First of all, let me thank the organizers of this session for this important conversation on how to incorporate the internet’s fundamental values into the development of AI. I will focus this introduction on the impact of generative AI on the concept of internet openness. As we know, generative AI is a versatile innovation with vast potential across many sectors and, more broadly, for the economy and society. These technologies also raise several legal, societal, and technical issues that are progressively being tackled. But we can see that policymakers, notably at the European Union level, have primarily focused their action and initiatives on the risks of these systems in terms of security and data protection, as seen in the EU AI Act. The impact of these technologies on internet openness, and the potential restrictions these applications could bring on users’ capacity to access, share, and configure the content they have access to through the internet, have only now started to become a topic of attention in the public debate. And yet, generative AI applications are becoming a new, increasingly unavoidable intermediary layer between users and Internet content. For example, a study published by Gartner this year predicts that search engine traffic could decline by 25% by 2026 due to the rise of AI chatbots. And aside from these conversational tools, generative AI systems are increasingly being adopted by traditional digital service providers, including through well-established platforms like search engines, social media, and connected devices. From this perspective, we can say that generative AI could soon become an integral part of most users’ digital activities, potentially serving as a primary gateway for accessing content and information online. And thanks to their user-friendly interfaces, generative AI tools open up new possibilities to a wider range of users. 
With generative AI, it has never been easier to create text, images, or even code. However, we must also consider the challenges and risks in terms of Internet openness and users’ empowerment. At Arcep, we have long emphasized that Internet service providers, the main focus of EU open Internet regulation, are not the only players that may negatively impact Internet openness, understood as a right for end-users, and users in general, to access and share freely the content of their choice on the Internet. In 2018, we published a report highlighting that, complementary to net neutrality obligations, the impact of search engines, operating systems, and other structuring platforms on devices and Internet openness should be tackled with appropriate regulatory responses. At EU level, the Digital Markets Act, adopted in 2022, has introduced new tools for promoting non-discrimination, transparency, and interoperability measures, addressing some of the problems raised by gatekeepers. But for us, now is the time to assess the impact of generative AI on Internet openness and users’ empowerment. This is why we have started to work on the issue and have already sent first observations to the European Commission, drawing on our experience with net neutrality. And we can already see effects of generative AI applications on how users access and share content online. To give just one example, the transition from search engines to answer engines is not a neutral evolution, and it could restrict the user experience: the interface of generative AI tools offers users little control and agency over the content they access, providing an ad hoc single response, often with a lack of transparency, no clear sources, and no ability for users to adjust the settings. We must also take into account the inherent technical limitations of AI, including biases, lack of explainability, and the risk of hallucinations, that are now becoming part of the digital landscape. 
And generative AI development also brings fundamental changes to content creation, which could impact how information is shared and the diversity and richness of content available online, because as generative AI tools become primary gateways for content access, AI providers could also capture the essential part of the economic and symbolic value of content dissemination. This could threaten the capacity and willingness of traditional content providers, such as media or digital commons, to produce and share original content to the benefit of the economy and society. While these developments are concerning, at Arcep we are convinced, as are others around the table today, that we can create the conditions necessary to apply the principles of the open internet to artificial intelligence, in terms of transparency, user empowerment, and open innovation. For information, we will publish next year a set of recommendations in that perspective, and we are looking for partners to work on this task. And to conclude, a final word to say that we believe we have a collective responsibility to shape the future of artificial intelligence governance in a way that secures its development as a common good. This notably means the adoption of high standards in terms of openness, but also pro-innovation, sustainability, and safety. So thank you again for the opportunity to be here, and I look forward to the discussion ahead.


Olivier Crépin-Leblond: Thank you very much, Sandrine, and thank you for sharing the perspective of a regulator. We now have a perspective from the business community, and that’s Sandra, data scientist and analyst at Unicorn Group of Companies. Sandra, you should be able to unmute and take over. Sandra? We cannot hear Sandra. Can we check why we cannot hear Sandra? Sandra, can you try to speak again? No, we can’t. We’re not hearing Sandra online either. We’re not hearing her online. Should we maybe… Sandra, can you make a last attempt? Yes, can you try to speak? There’s a problem with her mic. Yes, there is a problem with… While we try to solve this, in the interest of time, let’s move ahead to the next speaker, and then we will have Sandra maybe later.


Olivier Crépin-Leblond: So, Vint Cerf, please, the floor is yours.


Vint Cerf: Well, thank you very much for asking me to join you today. This is a very, very interesting topic. I will say that AI and the Internet are not the same thing. And I think that the standardization which has made the Internet so useful may not be applicable to artificial intelligence, at least not yet. What we see are extremely complex and large models that operate very differently from each other. And the real intellectual property is a combination of the training material and the architecture and weights of each of the models that are being generated, and those are being treated largely as proprietary. So open access to an AI system is not the same as access to its insides, its training material, and its detailed weights and structure. So we should be a little careful not to take the things that make the internet useful and try to force them onto artificial intelligence implementations. I don’t think we’re at a place where standardization is our friend yet. The one place where standardization might help a lot for generative AI and agentic AI would be standard, semantic ways of interacting between these models. Humans have enough trouble speaking to each other in language, which turns out to be ambiguous. I do worry about agents using natural language as a way of communicating and running into the same problem that humans have, which is ambiguity and confusion and possibly bad outcomes. Generally speaking, if we’re going to ask ourselves whether we should regulate AI in some way, I would suggest, at least in the early days, that we look at applications and the risks that they pose for the users. And so the focus of attention should be on safety, and on those who are providing AI applications showing that they have protected the users from potential hazard. I also feel strongly that there is a subtle risk in our use of generative AI. 
Those of you who know how these things work know that the large language models essentially compress large quantities of text into a complex statistical model. The consequence of that is that some details often get lost. We have a subtle risk in our use of large language models where we may lose specific details, even though the generative output looks extremely convincing and persuasive because of the way it’s produced. I wonder whether we will end up with blurry details as a result of filtering our access to knowledge through these large language models. I would worry about that. I guess the last thing I would say is that accountability is as important in the AI world as I think it is in the internet world. We need to hold parties accountable for potentially hazardous behavior. The same is true for parties offering AI-based applications. Everybody should be accountable for any risks that these systems pose. My guess also is that we should introduce better provenance into the training material, so that we know where the material came from. This would be beneficial for websites in general — knowing where the content came from, so that we can assess its utility and accuracy. I’ll stop there, and thank you for the opportunity to intervene.
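[Editor’s note] Vint Cerf’s compression point can be made concrete with a toy sketch: a tiny bigram model — a hypothetical, vastly simplified stand-in for an LLM, trained on two invented example sentences — compresses a corpus into word-pair statistics. Its output can look locally fluent while scrambling the specific facts, which is exactly the “blurry details” risk described above.

```python
import random
from collections import defaultdict

# Invented two-sentence "corpus"; the facts (year/place pairings) are
# what the compression may lose.
corpus = ("the treaty was signed in 1648 in westphalia . "
          "the accord was signed in 1998 in belfast .").split()

# "Compress" the corpus into bigram statistics: for each word, the list
# of words that may follow it. All longer-range structure is discarded.
model = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    model[a].append(b)

# Generate from the compressed model. Each individual transition is
# plausible, but the model can pair the wrong year with the wrong place,
# or drop the year entirely: a detail lost to compression.
random.seed(0)
word, out = "the", ["the"]
for _ in range(8):
    word = random.choice(model[word])
    out.append(word)
print(" ".join(out))
```

Every word the model emits appeared in the corpus, and every adjacent pair is one it has seen, yet nothing guarantees the reconstructed sentence states a true fact.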


Olivier Crépin-Leblond: Thank you very much, Vint. Since we are a bit pressed for time, we’ll go straight over to Yik Chan Chin while we work out with Sandra how to get her mic working. Yik Chan, you have the floor.


Yik Chan Chin: Thank you for inviting me. I will speak from the PNAI perspective, because, as you may know, at the IGF we have a policy network called the Policy Network on Artificial Intelligence. We did a report on interoperability, liability, sustainability, and labor issues. First of all, I do agree with Vint Cerf in terms of the infrastructure of AI: it’s actually quite different from the internet. AI systems and users are supported and connected via the internet, but they’re quite different, because for AI — including algorithms, data, and computing power — technically, none of these must be unified or standardized. Also, according to our past experience, the interoperability of AI is an extremely complex issue; we have been working on the topic for the last two years, and it is a really, extremely difficult issue. So, before I go into detail about interoperability, I think there are two principles worth paying special attention to, because when I look at the question, you talk about permissionless innovation versus the precautionary principle. When we talk about the open internet, basically, we talk about permissionless innovation, and I’m not sure whether this principle should be applied to AI regulation, because AI is extremely complex, and there are particular features of AI — for example, its complexity and unpredictability, and its kind of autonomous behavior — that can make AI quite harmful when there is a risk. So should we allow this permissionless innovation approach, which was applied to the open internet — because we know the history of the internet was, in the beginning, self-governance — to apply to AI systems? I think we should be very cautious about that. But on the other hand, we do see there are some overlapping principles 
between AI regulation and internet regulation, and I think those values should be applicable to both systems: for example, being human-centered, for both AI and the internet, inclusivity, universality, transparency, accountability, robustness, safety and neutrality, and capacity building. All these values are actually applicable to both systems. So in terms of interoperability, which is my area, I would like to say something particularly focused on interoperability. First of all, what is interoperability? Basically, we’re talking about the capacity of AI systems, machine to machine, to talk to each other, to communicate with each other smoothly — including not only machines but also regulatory policies, so that they can communicate and work together smoothly. But this doesn’t mean the regulation or the standard has to be harmonized, because we can have different mechanisms to accommodate interoperability; for example, we can have compatibility mechanisms. So therefore, first of all, I think interoperability is crucial for the openness of the internet and of AI systems, but AI systems can be divergent as well as convergent, because, as I just explained, the systems are quite different and don’t necessarily have to be unified. So we need to figure out what areas of AI systems, or even of AI regulation or governance, have to be interoperable, and what areas can be allowed to diverge, respecting regional diversity. So, from the PNAI perspective, in our report we identified several areas which could be addressed at a global level and which need an interoperable framework. For example, AI models’ risk categorization and evaluation, because we see there are different approaches to categorizing and evaluating AI risk. 
So you have an EU approach, and China has a Chinese approach. Even the US has just released its global standardization framework mechanisms. So we need to have a kind of interoperable framework in terms of AI risk categorization and evaluation. The second area is liability. We have a huge debate about the liability of AI systems: who should take responsibility, and what kind of responsibility, criminal or civil. We don’t yet have a global framework, or even national frameworks, because there is still debate at the EU level and at national levels. So the liability of AI models is another area that could be addressed at the global level. The third one, which we just mentioned, is about training data sets. All these issues can be addressed at the global level. And the last thing I want to mention is how we balance regional diversity and the need for harmonization. We need to respect regional diversity in AI governance, but at the same time we can establish compatibility mechanisms to reconcile divergences in regulations. There are different mechanisms we can use, but it’s context-dependent, case by case. I think — is my time up? So the last thing I want to say is about an area we have to improve: the weak regime capacity of international institutions to coordinate and align. That’s a concept we call the weak regime capacity of international institutions, which means we have a lot of international institutions, like the ITU, IEEE, and the UN, but how can we?


Luca Belli: Can I ask you just to wrap up in one minute so that the others also have time to speak? Thank you.


Yik Chan Chin: Yeah, so the last thing is that we need to have some kind of global institution which can coordinate different initiatives at the national, regional, and international levels in terms of the openness of AI and the openness of the internet. The GDC has not provided a concrete solution in terms of how to strengthen multi-stakeholder participation in AI governance, but leaves it to the WSIS+20 process to decide. So I think we should address this in the WSIS+20 debate. I will stop there, thank you.
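[Editor’s note] The “compatibility mechanism” Yik Chan Chin describes — reconciling divergent risk-categorization regimes without harmonizing them — could be sketched roughly as follows. All regime names, labels, and mappings below are invented for illustration; they do not correspond to the actual EU, Chinese, or US frameworks.

```python
# Hypothetical sketch: instead of forcing every jurisdiction to adopt
# one risk taxonomy, each regime maps its own labels onto a shared
# ordinal scale, so assessments made under one regime become
# comparable under another.

COMMON_SCALE = ["minimal", "limited", "high", "unacceptable"]

# Invented per-regime vocabularies, each mapped onto the common scale.
REGIME_MAPS = {
    "regime_a": {"low": "minimal", "transparency": "limited",
                 "high_risk": "high", "prohibited": "unacceptable"},
    "regime_b": {"ordinary": "minimal", "sensitive": "high",
                 "banned": "unacceptable"},
}

def comparable_risk(regime: str, label: str) -> int:
    """Translate a regime-specific label into a rank on the common scale."""
    return COMMON_SCALE.index(REGIME_MAPS[regime][label])

# Two systems classified under different regimes become comparable
# without either regime changing its own rules:
a = comparable_risk("regime_a", "high_risk")
b = comparable_risk("regime_b", "sensitive")
print(a == b)  # both rank as "high" on the shared scale
```

The design point is the one made in the talk: interoperability here lives in the mapping layer, not in a single harmonized rulebook, so regional diversity is preserved while divergence is reconciled.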


Luca Belli: Fantastic, let’s try to see if now Sandra can be audible. Sandra, can you try to speak so that we can check? We are not hearing you, can you try again?


Sandra Mahannan: How about now?


Luca Belli: Yes, now, yes, perfect. Keep the mic close to your mouth, please. Thank you very much. Thank you.


Sandra Mahannan: Thank you. Sorry for the whole mix up, and once again, thank you so much for the opportunity to be here.


Luca Belli: I’ll be very short. Please keep the mic very close to your mouth, because we can literally hear very well when it’s close to your mouth, and not at all when it is five centimeters away. Thank you very much. We cannot hear you, Sandra, I’m sorry.


Sandra Mahannan: What’s the issue?


Luca Belli: Sandra, unfortunately, we still cannot hear you; I think it’s a mic problem. So, if you have another microphone where you are, I suggest you try to change it while we go to the next speaker, Alejandro Pisanty, because we are not able to hear you at this moment, Sandra. So, Alejandro Pisanty, a very old friend — not because he is old at all, but because we have known each other for many years. So, please, Alejandro, the floor is yours.


Alejandro Pisanty: Thank you. Can you hear me well?


Luca Belli: Yes, very well.


Alejandro Pisanty: Thank you. Thank you, Luca and Olivier, for the yeoman’s work you did to put this session together, and to all members of the Dynamic Coalition for the exchange of ideas. I also salute friends I see on screen. I think I see Martin Botterman, I think I see Desiree Miloshevic and Harald Alvestrand — probably getting that right. So, briefly, because I’m going to be mostly responding as well as putting forward what I have prepared: the Dynamic Coalition was created to try to follow on these core values, which, if you take them away, you don’t have the Internet anymore. If you take away openness, for example, you have an intranet. If you take away interoperability, you have a single-vendor network, and so forth. And that’s what we are now trying to extend — or, I’d say, to challenge how much we can extend — to AI. We have to be very careful about what we call AI. In people’s minds are the generative AI systems that start from text and can give you either more text in a conversational interface, or images, video and audio. But artificial intelligence is a lot more things. It’s molecular modeling, it’s weather forecasting, it’s every use of artificial intelligence for basically three purposes: finding patterns in otherwise apparently chaotic information, finding exceptions in information that appears to be completely patterned, and extrapolating from these. And we know that extrapolating — extrapolating from things that you have only calibrated for interpolation — is always going to be risky. So that’s, you know, our basic explanation of, and concern for, hallucinations in LLM systems. Second, one of the lessons we’ve learned over many years in this dynamic coalition is to separate the effects of the human and the technology. To separate the effects of human agency, human intention. Cybercrime doesn’t happen and wasn’t invented by the internet. 
It happens because people want to take your money or want to hurt you in some way, and they now use the internet as they previously used fax, post, or just tried to cheat you face to face. The same goes for many other undesirable conducts. So we have to separate what people want to do from how technology modifies it, for example by amplifying it, by enabling anonymity, by crossing borders, and so forth. And the same goes for AI. It’s not AI that is doing misinformation. We have had misinformation, I think, since at least the Babylonians, and probably even before we had written language. But now we have very subtle and easy-to-apply ways to spread misinformation at a large scale. We still have to look at the source and the intention of the people who are creating and providing this misinformation, and not try to regulate the technology instead of regulating the behavior, or educating the users so they don’t fall for misinformation. The third large point here is not trying to regulate artificial intelligence in general, in total, but being sure that, by trying to regulate what you don’t like about LLMs doing misinformation, you don’t kill your country’s ability to join the molecular modeling revolution for pharmaceutics, for example. There’s a recent paper by Akash Kapur, which I think is very valuable for this session, which speaks of leaving behind the concept of digital sovereignty and replacing it with digital agency. 
Luca and I were at a meeting two weeks ago at New America, in D.C., where this concept was put forward, and it's a very powerful one. What I extract from it is that instead of trying to be sovereign by closing borders and putting up tons of rules, which are basically copied from the countries that actually developed the technology, and based on fears, you try to be powerful, even if you have to sacrifice some sovereignty in the sense that you have to collaborate with other countries, with other academic institutions and so forth, which, by the way, has always been the way of developing technology and academic research. There's a recent French paper, which came to my attention only yesterday, which speaks about de-demonizing artificial intelligence, stopping the demonization, without of course becoming overconfident, but trying to regulate and to promote AI. If your country's legislators are looking to regulate AI and are not putting a lot of money into research and development, and into, say, as Denmark or Japan or Italy have done recently, putting together a major computing facility for everybody to use to develop AI, they are lying to you, they are cheating you, because they are actually closing the door on the benefits of innovation and condemning you to getting this only from outside the country in the end, in subtle and uncontrolled ways. How do we bring multi-stakeholder governance, which is another lesson from our dynamic coalition, to artificial intelligence? We have to find a way, maybe to scare the companies with the fear of harder regulation, to come together with other stakeholders like academia, the users, rights-holding organizations, and so forth, as we did with, for example, the domain name market with ICANN 25 years ago.
It's not necessarily doing an ICANN again, but it's extracting the lessons of how you bring these very diverse stakeholders together to a kind of core regulation designed for this type of system, and for the risks that are present in reality, not only the imaginary ones. There has also been some talk about open sourcing, which is very valuable. The risks have already been mentioned, and one risk that has not been mentioned, which we learn from the history of open source software, is derelicts: software that is abandoned, systems that are not maintained anymore, which are very risky because defects can creep in and never be fixed. And then these things become part of the infrastructure of the internet. We have already seen some major security events happen because of unmaintained open source software that was at the core of different systems. So the challenge here will be to avoid the delusion of a one-world government. We don't need the GDC. We don't need a UN artificial intelligence agency. We need to look more at the federated approach. And I think this will be more approachable, more available; there's a better path to it if we do as, for example, the UK has been doing: go by the verticals, go by the sector-specific type of regulation, which we already have; use all the tools society already has, like liability for commercial products, liability for public officials who purchase systems that work badly. It's as bad to purchase a system that does biased or discriminatory assignment of funds in a social security system as it is to purchase cars that end up killing people in crashes because you don't have airbags, and that would be


Luca Belli: Thank you, fantastic. Thank you very much, Alejandro. I was going to remind you to wrap up, but you already did it yourself, fantastic. So let's see if Sandra now has a new mic that works; let's give a last shot at Sandra's presentation. Sandra, can you hear us? Yes, we can hear you, we can hear you very well. Excellent, please go ahead. Thank you.


Sandra Mahannan: I'm so sorry about the mix-up with my mic. I'm going to try to keep this very short. I would want to come in from the business angle, so to speak. I work with Unicorn Group of Companies, an AI and robotics company. I read one time that AI often reflects the biases of its creators, right? We all know that AI response quality is a very huge concern, because we have cultural biases, religious biases. Recently I was in a religious gathering where religious leaders were trying to discuss the adoption of AI and the concerning responses that AI gives, the erroneous responses and all of that. And we all know that AI responses are heavily dependent on the quality of the data fed into the model, right? And the acquisition of such data is usually not cheap; it's very expensive. We talk about computing power, we talk about acquisition of data; these are very expensive processes. So my tip would be to regulate AI not really from the user angle; the openness should come from the side of the developer, the development angle, where we talk about data quality, data privacy, security, data sharing protocols, operating in the market as an entity, interoperability, and all of that. Yeah, that would be my take on it. Thank you.


Luca Belli: Fantastic. Thank you very much, Sandra, for your perspective and for being so fast. Shall we now have our final speaker? Last but not least, of course.


Olivier Crépin-Leblond: I think we'll now have Wanda Muñoz, who is a member of the Feminist AI Research Network and of UNESCO's Women for Ethical AI platform. So over to you, Wanda.


Wanda Muñoz: Thank you so much. Can you hear me?


Olivier Crépin-Leblond: Absolutely.


Wanda Muñoz: Okay, okay. Well, thank you so much. I'm delighted to be here. Thanks to the organizers for having me, and thanks, Alejandro, for recommending me to be here. I would like to take a somewhat different perspective from what has been shared so far, because what I'd like to put on the table today is my perspective as someone who comes from policymaking and from human rights implementation. So my contributions come from this perspective. And I will also build on the results of a report from the Global Partnership on AI called Towards Substantive Equality in AI, which I invite you all to review, and for which we could count on the amazing leadership of Anita. So I take the opportunity to thank her again for her contributions and to say that I fully agree with her intervention. I will start by sharing a few thoughts on the issue of values itself, and then I will move to human rights. First, I'd like to share that I think the core values of Internet governance have been very useful for a common understanding of the Internet that we want and that serves the majority. But, having arrived at the discussion when these values had already been adopted and implemented for a while, I want to put on the table that maybe we could benefit from analyzing these values from a gender and diversity perspective. And I think there's already a wealth of research from feminist AI scholarship in this regard. For instance, just to mention an example, the six core principles of feminist data visualization, I don't know if you are familiar with them, but I invite you to look for them, propose values such as rethinking binaries, embracing pluralism, examining power, and aspiring to empowerment. These are quite different from the Internet's core set of values today, but also complementary. And what I like from this other set of values is that they question social constructs, power, and the distribution of resources.
And these issues to me are inextricably linked both to Internet and to AI governance, but they are still often left out of mainstream discussions. That being said, I'll move to human rights. Here, what I'd like to do first is give you a couple of ideas of why a few of us insist that human rights should be front and center in any discussion on AI governance, at least on the same standing as ethics, principles and values. Maybe some of you see it differently, but human rights are not just words. Human rights are actions, policies, budgets, indicators and accountability mechanisms, which were already mentioned by Renata and Anita before. So, in the context of artificial intelligence, human rights allow us to reframe the discussion on AI in different terms and to ask different questions. Let me give you three examples. Instead of saying that we must mitigate the risks of AI, what we would say from a human rights perspective is that when AI harm occurs, it systematically results in violations of human rights that disproportionately affect women, racialized persons, indigenous groups, and migrants, among others. And I'm sure you know of the many dozens or more documented examples of these that have affected the right to employment, to health insurance, to social services, and many others, which you can find, for instance, in the OECD AI Incidents Monitor. Another example: instead of saying that in AI governance we should balance risk and innovation, if we acknowledge that the benefits of innovation generally accrue to a privileged few, and the brunt of the harm falls primarily on those already marginalized, from a human rights perspective we would talk about the need for AI regulation to ensure accountability, remedy, and reparation when violations of human rights result from the use of AI.
And I want to tell you that in this research we carried out for GPAI, where we consulted more than 200 people from all walks of life and backgrounds on five continents, this is possibly the number one demand documented in the report. I also want to say I appreciated Jake's perspective on the need for international norms, specifically regarding liability and accountability. Another example is regarding non-discrimination. I think that, generally speaking, people understand non-discrimination as saying: I don't go out and shout slurs at people in the street, right? So I don't discriminate. But from a human rights perspective, this is far from enough. What non-discrimination means is that you must take positive actions to avoid and to redress discrimination that already systematically exists in our organizations, in our data, in our policies. And this is particularly the case on the internet and in artificial intelligence. So, in a human rights framework, unless we take action, we are effectively perpetuating discrimination. Similarly, we could have a discussion about what a general notion of safety means, but unless we adopt specific actions to ensure safety in the context of the vulnerabilities of specific groups, in each specific context, we will keep excluding those who are already most marginalized. And here, Alejandro, as often, I want to respectfully disagree with you when you talk about the need not to demonize technology, because I hear this again and again, and I really don't think it's a helpful term; it is often thrown at those of us who are pointing out the risks and harms of AI. We are doing this from the documented impacts and from the evidence, trying to raise the alarm and bring the discussion back to the reality of what AI is causing, in contrast with what we see most of the time, which is AI hype.
And of course, I think that, for all the problems that the UN has, we do need it to lead an effort on AI regulation if we want to have at least some equality in terms of negotiation. This leads to larger issues with multilateralism that I hope we can discuss another time. So, just to conclude, I think that when we speak about AI governance, what is at stake has the potential to change the core of how our societies function. I fully agree with Anita on the need for a societal and collective rights approach, in addition to a human rights one. And to me, this cannot happen without regulation. So thank you.


Luca Belli: Fantastic. Thank you very much, Wanda, for bringing these really very good and thought-provoking points. I think this is a very good way to open our discussion with the floor. We have a good 20 minutes to speak. Also, let me share with the floor that one of the intentions we had when we started designing this session was to try to distill some core elements that we could put into a joint report for the next IGF. We know very well that in six months, before the next IGF, there are very few things we could do in terms of outcome, but perhaps a joint paper on what could be the elements of an open AI ecosystem, or something like that. So if you have any ideas, if you can help us and guide us in identifying what these core elements could be, or if you have any other reflections on what has been said, we have 20 good minutes to discuss. Feel free to be punchy and provocative, while of course diplomatic and respectful. Just raise your hand and our mic will be brought to you.


Olivier Crépin-Leblond: And Luca, if I could add, there are sometimes panels at IGFs where everyone agrees with each other, and I was really pleased to see various viewpoints and some panelists not agreeing with each other. So that's really good. And by the way, if you, as panelists, also have points to make about each other's interventions, please go ahead. Now, if you're online, you can put your hand up in the Zoom and we'll see it. And of course, if you're in the room, put your hand up and a mic will fly in your direction, or maybe be brought over to you. Does anyone wish to fire off?


Luca Belli: Who wants to start our collective exercise?


Olivier Crépin-Leblond: It’s a lot to digest.


Luca Belli: Yes, I see. Desiree, are you stretching or raising your hand? Okay, let's break the silence with Desiree.


Audience: Hi, it's Desiree. Thank you all for your very rich interventions; there's a lot to take in. Although I don't know whether the title of the session is really focusing on the core principles of AI, this is the dynamic coalition working group on core principles of the internet. So I'm really, really glad to see the differentiation, that AI is not the internet, confirmed by some of our panelists. And we also heard that AI is building this intermediary layer, like a user interface, between these structures. So I think it's important to see AI as something being built on top of the existing infrastructure. And my concern is really that we will end up with an internet that is even fuller of deepfakes and disinformation than at this current stage. And, trying to have a sustainable internet, where we need to be really careful about the capacity we have in society for running the Internet and getting bits of information through the network, should the layers of the network be looked at separately and protected? What I think I'm hearing, and I'd like to have confirmation, is that AI, being built on top, should really be regulated as the AI layer, and regulation should not go deep down into the Internet infrastructure as such. But then there are arguments that some networking parts will be using AI as well. So how do we see this regulation playing out? And what is the core principle here? Is it still net neutrality, where we are stupid about the bits that go through the network? It just raised a lot of questions in my mind.


Luca Belli: Yeah, thank you, Desiree. We have Anita online who wants to react. And then we’ve also got Vint Cerf having put his hand up. So let’s take Anita and Vint’s reactions and then we go around the floor again.


Anita Gurumurthy: I must apologize: I won't be responding to the point from the floor, because I had wanted to come in earlier. Is it okay if I go in now?


Luca Belli: Yes, please go ahead. And then we have Vint and then Renate.


Anita Gurumurthy: Yeah, it's a minor point. It may be linked to what was just observed. I think that when you talk about the Internet and the innumerable struggles in our own regulatory landscape, and I recall my organization and the good fight we put up for net neutrality in relation to our telecommunications authority, the way the idea of non-discrimination inheres in the network is very, very different from when it comes to artificial intelligence. I think AI is primarily linked to the truth conditions of a society, and you're really not necessarily prioritizing non-discrimination; I think that's a somewhat technicalized representation of the data and AI debates. What we're actually doing is using discrimination and social cognition in a manner such that you can use data for social transformation. So there is a certain slippage there very often, and in fact, in our joint work with Wanda, we actually said that we might sometimes have to do affirmative action through data. So we really have to be cautious about conflating non-discrimination on the internet with principles for responsible AI.


Olivier Crépin-Leblond: Thank you. Vint?


Vint Cerf: I'm literally just thinking on the fly here about AI as another layer, as the interface into this vast information space we call the internet. First of all, Alejandro's point that machine learning covers a great deal more than large language models (his mention of weather prediction, for example) resonates with me, because we recently discovered at Google that we can do a better job predicting weather using machine learning models than using the Navier-Stokes equations. But I think that we should be thoughtful about the role that machine learning and large language models might play. One possibility is that they filter information in a way that gives us less value. That would be terrible. But another alternative is that they help us ask better questions of the search engines than we can compose ourselves. We have a little experience with this through the use of what's called a knowledge graph, which helps expand queries into the index of the World Wide Web and then pull data back. Summarization could lose information, that's a potential hazard, but I think we should be careful not to discard the utility that these large language models might have in improving our ability to ingest, analyze, and summarize information. So, this is an enormous canvas, which is mostly blank right now, and we're going to be exploring it, I'm sure, for the next several decades.


Olivier Crépin-Leblond: Renata?


Renata Mielli: Just a point. Of course, AI and the Internet are not the same thing; they are different. But, in my view, some of the challenges we face in addressing the risks of AI and its impacts on society are pretty much the same, in terms of the need for more transparency, accountability, diversity, and more decentralized and democratic arrangements, not only for the Internet but for AI. And we need to focus also on how AI is impacting the Internet and how people interact with the Internet. Now we are in a situation where, for example, when you do a search on Google, you no longer have a lot of links to click and interact with the content about something, how to cook an orange cake, for example, because the artificial intelligence brings the results and you don't need to click anymore. And a lot of the time the results are not accurate and have bias, and this is impacting the internet and how we experience it. So they are not the same thing, but each impacts the other, and we have to keep in mind that the core values needed to regulate AI at this moment must take into account the transparency, accountability, liability and so on, the neutrality and the other core values that we have for the internet.


Yik Chan Chin: Yes, I think in terms of internet governance, we have these core infrastructures, which we all recognize have to be a public good, even a global public good. So this is no problem; we continue to regulate them separately as core infrastructures. But the AI system, I think, is more on the application side. There's an issue I think many people have already touched on, which is cybersecurity, because AI actually causes a lot of problems in terms of cybersecurity: it makes the internet more vulnerable to cyber attacks. So that is one area, just as my colleague said, where they have a mutual impact, especially in terms of cybersecurity, where AI may enlarge the dangerous harms to the internet's stability. I think that is one area we have to focus on, but there are other impacts that need long observation: the impact of AI on the internet and on the core infrastructure of the internet.


Luca Belli: Let’s get to Sandra and unless there is anyone else with an urgent comment or question, we can then wrap up. Sandra, please. Can you speak again? Yes, very well.


Sandra Mahannan: I just wanted to quickly react to what, I think two speakers ago, she mentioned: a concern about erroneous responses, because somehow AI just summarizes the feedback from searches. I totally agree with her; it was one of the points I made earlier. Whether we like it or not, AI is here to stay, and these biases are really concerning, because we would agree that there are really big wigs in the business, and they get to, I don't want to say control the narrative, but for lack of a better way of expressing it. Then the small players, no matter how accurate they are, don't really get reach; their access is really low, because the big wigs have occupied the market, which means that people automatically go to them. And what happens when decentralization is not really happening, when the market is not really decentralized, when people are not really getting access to the other players in the industry? That is why it is really important, I think, that regulation should come down heavily on the side of development, the developers, or the models themselves.


Luca Belli: I think that now, as we will be kicked out of the room in two minutes, it's time to wrap up and to thank the participants for their very thought-provoking comments and presentations. I think we have illustrated very well the complexity of this issue, and also the interest in keeping up this very productive joint venture, so as to present at the next IGF the result of what could be a very brief report on the elements that can enable an open AI environment, as was also suggested in the chat during the session: our best effort to distill the knowledge shared over an hour and a half. I think the mics are giving up on us. Try this one. It's a sign that we have to wrap up. Thank you very much, everyone, and we will do our best to consolidate everything we have learned today into this report. Thank you very much.


Olivier Crépin-Leblond: And I should just add, if anybody is interested in joining both the Dynamic Coalition on Network Neutrality and the one on Core Internet Values, then come and talk to us and we'll take your email address and name, and we'll be very happy to have you on board. And thanks again to all of our panelists; really great job. So thank you so much.


Alejandro Pisanty: Thank you again and congratulations for the session.


Wanda Muñoz: Thank you.


Vint Cerf

Speech speed: 139 words per minute
Speech length: 790 words
Speech time: 338 seconds

AI and Internet are fundamentally different technologies requiring distinct governance approaches

Explanation: Vint Cerf emphasizes that AI and the Internet are not the same thing and should not be governed in the same way. He suggests that the standardization which has made the Internet useful may not be applicable to artificial intelligence, at least not yet.

Evidence: Cerf points out that AI systems are extremely complex and large models that operate very differently from each other, with proprietary training data and architectures.

Major Discussion Point: Differences and similarities between Internet and AI governance

Agreed with: Luca Belli, Renata Mielli (agreed on: AI and Internet are distinct technologies requiring different governance approaches)

Differed with: Luca Belli, Renata Mielli (differed on: Applicability of Internet governance principles to AI)

AI systems are mostly proprietary and not interoperable, unlike Internet protocols

Explanation: Vint Cerf highlights that AI systems, unlike Internet protocols, are largely proprietary and lack interoperability. He points out that the intellectual property in AI systems lies in their training data, architecture, and weights, which are often kept secret.

Major Discussion Point: Openness and interoperability in AI systems

Differed with: Anita Gurumurthy (differed on: Openness in AI systems)

AI governance should focus on regulating applications and risks, not the technology itself

Explanation: Vint Cerf suggests that AI governance should concentrate on regulating specific applications and their associated risks, rather than attempting to regulate AI technology as a whole. He emphasizes the importance of focusing on safety and protecting users from potential hazards.

Major Discussion Point: Regulation and governance approaches for AI


Luca Belli

Speech speed: 144 words per minute
Speech length: 2057 words
Speech time: 854 seconds

Some core Internet values like transparency and accountability can apply to AI governance

Explanation: Luca Belli suggests that certain principles from Internet governance, such as transparency and accountability, could be relevant to AI governance. He argues that these principles are essential for understanding how AI systems function and for ensuring accountability to users, regulators, and society.

Evidence: Belli gives examples of how transparency is crucial in both Internet traffic management and AI decision-making processes, such as in credit scoring or facial recognition systems.

Major Discussion Point: Differences and similarities between Internet and AI governance

Agreed with: Vint Cerf, Renata Mielli (agreed on: AI and Internet are distinct technologies requiring different governance approaches)

Differed with: Vint Cerf, Renata Mielli (differed on: Applicability of Internet governance principles to AI)


Unknown speaker

Speech speed: 0 words per minute
Speech length: 0 words
Speech time: 1 second

AI is building an intermediary layer on top of Internet infrastructure

Explanation: The speaker suggests that AI is creating a new layer between users and Internet content. This intermediary layer is becoming increasingly unavoidable and could significantly change how users access and interact with online information.

Evidence: The speaker cites a Gartner study predicting that search engine traffic could decline by 25% by 2026 due to the rise of AI chatbots.

Major Discussion Point: Differences and similarities between Internet and AI governance

Agreed with: Sandrine Elmi Hersi (agreed on: AI is creating a new layer between users and Internet content)


Renata Mielli

Speech speed: 116 words per minute
Speech length: 1058 words
Speech time: 544 seconds

AI and Internet governance face similar challenges around transparency, accountability and decentralization

Explanation: Renata Mielli argues that while AI and the Internet are different, they face similar governance challenges. She emphasizes the need for transparency, accountability, and decentralization in both domains.

Evidence: Mielli gives an example of how AI is impacting Internet search results, where users now often get direct answers from AI instead of links to various sources, potentially reducing transparency and diversity of information.

Major Discussion Point: Differences and similarities between Internet and AI governance

Agreed with: Vint Cerf, Luca Belli (agreed on: AI and Internet are distinct technologies requiring different governance approaches)

Differed with: Vint Cerf, Luca Belli (differed on: Applicability of Internet governance principles to AI)


Anita Gurumurthy

Speech speed: 144 words per minute
Speech length: 1108 words
Speech time: 460 seconds

Openness in AI is complex and doesn’t necessarily lead to transparency or democratization

Explanation: Anita Gurumurthy argues that the concept of openness in AI is not straightforward and does not automatically result in transparency or democratization. She suggests that openness in AI can have a wide range of meanings and implementations, not all of which contribute to scrutability or accessibility.

Evidence: Gurumurthy cites the example of GPT-4, where OpenAI declined to release details about its architecture, training data, and methods, despite being labeled as ‘open’.

Major Discussion Point: Openness and interoperability in AI systems

Differed with: Vint Cerf (differed on: Openness in AI systems)


Yik Chan Chin

Speech speed: 138 words per minute
Speech length: 1174 words
Speech time: 510 seconds

Standardization and interoperability of AI systems is extremely difficult currently

Explanation: Yik Chan Chin points out that achieving standardization and interoperability in AI systems is currently very challenging. This is due to the complex and diverse nature of AI technologies and applications.

Major Discussion Point: Openness and interoperability in AI systems

Global coordination is needed on AI risk categorization, liability frameworks, and training data standards

Explanation: Yik Chan Chin argues for the need for global coordination in several key areas of AI governance. This includes developing consistent approaches to categorizing AI risks, establishing liability frameworks, and setting standards for training data.

Major Discussion Point: Regulation and governance approaches for AI

AI poses new cybersecurity risks to Internet infrastructure

Explanation: Yik Chan Chin points out that AI technologies introduce new cybersecurity challenges to Internet infrastructure. She argues that AI can make the Internet more vulnerable to cyber attacks.

Major Discussion Point: Impacts and risks of AI systems


Wanda Muñoz

Speech speed: 173 words per minute
Speech length: 1106 words
Speech time: 383 seconds

A human rights-based approach is needed for AI governance, beyond just ethics and principles

Explanation: Wanda Muñoz argues for a human rights-based approach to AI governance, going beyond ethics and principles. She emphasizes that human rights provide a framework for concrete actions, policies, budgets, indicators, and accountability mechanisms.

Evidence: Muñoz gives examples of how a human rights perspective can reframe AI governance discussions, such as focusing on systematic human rights violations resulting from AI harms and the need for accountability, remedy, and reparation.

Major Discussion Point: Regulation and governance approaches for AI

AI systems can perpetuate and amplify existing societal biases and discrimination

Explanation: Wanda Muñoz points out that AI systems can reinforce and exacerbate existing societal biases and discrimination. She argues that unless specific actions are taken, AI will continue to perpetuate these issues.

Evidence: Muñoz mentions documented examples of AI harms disproportionately affecting women, racialized persons, indigenous groups, and migrants in areas such as employment, health insurance, and social services.

Major Discussion Point: Impacts and risks of AI systems


Sandra Mahannan

Speech speed: 124 words per minute
Speech length: 517 words
Speech time: 249 seconds

Regulation should focus more on AI developers and models rather than users

Explanation: Sandra Mahannan suggests that AI regulation should primarily target developers and models rather than users. She argues that this approach would be more effective in addressing issues such as data quality, privacy, security, and interoperability.

Major Discussion Point: Regulation and governance approaches for AI



Sandrine Elmi Hersi

Speech speed

111 words per minute

Speech length

843 words

Speech time

451 seconds

AI could restrict user agency and transparency in accessing online information

Explanation

Sandrine Elmi Hersi expresses concern that AI systems, particularly generative AI, could limit users’ ability to access and control the content they see online. She suggests that AI is becoming an unavoidable intermediary layer between users and Internet content.


Evidence

Hersi cites a Gartner study predicting a 25% decline in search engine traffic by 2026 due to the rise of AI chatbots.


Major Discussion Point

Impacts and risks of AI systems


Agreed with

Unknown speaker


Agreed on

AI is creating a new layer between users and Internet content



Alejandro Pisanty

Speech speed

154 words per minute

Speech length

1191 words

Speech time

461 seconds

Generative AI could lead to loss of information detail and accuracy

Explanation

Alejandro Pisanty expresses concern that generative AI, particularly large language models, might result in the loss of specific details and accuracy in information. He suggests that the compression of large amounts of text into statistical models could lead to the omission of important details.


Major Discussion Point

Impacts and risks of AI systems


Agreements

Agreement Points

AI and Internet are distinct technologies requiring different governance approaches

speakers

Vint Cerf


Luca Belli


Renata Mielli


arguments

AI and Internet are fundamentally different technologies requiring distinct governance approaches


Some core Internet values like transparency and accountability can apply to AI governance


AI and Internet governance face similar challenges around transparency, accountability and decentralization


summary

While AI and the Internet are distinct technologies, there are some shared governance challenges and principles that can be applied to both, particularly around transparency and accountability.


AI is creating a new layer between users and Internet content

speakers

Unknown speaker


Sandrine Elmi Hersi


arguments

AI is building an intermediary layer on top of Internet infrastructure


AI could restrict user agency and transparency in accessing online information


summary

AI is becoming an intermediary layer between users and Internet content, which could significantly change how users access and interact with online information.


Similar Viewpoints

The concept of openness in AI is not straightforward and does not automatically result in transparency or interoperability, unlike in Internet protocols.

speakers

Anita Gurumurthy


Vint Cerf


arguments

Openness in AI is complex and doesn’t necessarily lead to transparency or democratization


AI systems are mostly proprietary and not interoperable, unlike Internet protocols


AI regulation should focus on specific applications, risks, and developers rather than attempting to regulate the technology as a whole or targeting users.

speakers

Vint Cerf


Sandra Mahannan


arguments

AI governance should focus on regulating applications and risks, not the technology itself


Regulation should focus more on AI developers and models rather than users


Unexpected Consensus

Need for global coordination in AI governance

speakers

Yik Chan Chin


Wanda Muñoz


arguments

Global coordination is needed on AI risk categorization, liability frameworks, and training data standards


A human rights-based approach is needed for AI governance, beyond just ethics and principles


explanation

Despite coming from different perspectives (technical and human rights), both speakers emphasize the need for global coordination in AI governance, suggesting a broader consensus on the international nature of AI challenges.


Overall Assessment

Summary

The main areas of agreement include recognizing AI and the Internet as distinct technologies with some shared governance challenges, acknowledging AI’s role as a new intermediary layer in accessing online content, and the need for focused regulation on AI applications and developers.


Consensus level

There is a moderate level of consensus among the speakers on the fundamental challenges and approaches to AI governance. However, there are varying perspectives on the specific methods and focus areas for regulation. This suggests that while there is a shared understanding of the importance of AI governance, there is still a need for further discussion and refinement of specific governance strategies.


Differences

Different Viewpoints

Applicability of Internet governance principles to AI

speakers

Vint Cerf


Luca Belli


Renata Mielli


arguments

AI and Internet are fundamentally different technologies requiring distinct governance approaches


Some core Internet values like transparency and accountability can apply to AI governance


AI and Internet governance face similar challenges around transparency, accountability and decentralization


summary

While Vint Cerf emphasizes the fundamental differences between AI and the Internet, suggesting distinct governance approaches, Luca Belli and Renata Mielli argue that some core Internet governance principles can be applied to AI governance.


Openness in AI systems

speakers

Anita Gurumurthy


Vint Cerf


arguments

Openness in AI is complex and doesn’t necessarily lead to transparency or democratization


AI systems are mostly proprietary and not interoperable, unlike Internet protocols


summary

Anita Gurumurthy argues that openness in AI is complex and doesn’t automatically lead to transparency, while Vint Cerf focuses on the proprietary nature of AI systems, highlighting their lack of interoperability compared to Internet protocols.


Unexpected Differences

Perception of AI risks

speakers

Vint Cerf


Wanda Muñoz


arguments

AI governance should focus on regulating applications and risks, not the technology itself


AI systems can perpetuate and amplify existing societal biases and discrimination


explanation

While Vint Cerf, as a technology pioneer, seems to take a more neutral stance on AI risks, focusing on application-specific regulation, Wanda Muñoz unexpectedly emphasizes the systemic risks of AI in perpetuating societal biases. This difference highlights the gap between technical and human rights perspectives on AI governance.


Overall Assessment

summary

The main areas of disagreement revolve around the applicability of Internet governance principles to AI, the nature of openness in AI systems, and the appropriate focus and approach for AI regulation.


difference_level

The level of disagreement among speakers is moderate to high. While there is some consensus on the need for AI governance, there are significant differences in perspectives on how to approach it. These differences reflect the complex and multifaceted nature of AI governance, involving technical, legal, and human rights considerations. The implications of these disagreements suggest that developing a unified approach to AI governance will be challenging and may require balancing multiple perspectives and priorities.


Partial Agreements

Partial Agreements

All three speakers agree on the need for AI regulation, but they differ in their approaches. Vint Cerf suggests focusing on applications and risks, Sandra Mahannan emphasizes regulating developers and models, while Wanda Muñoz advocates for a human rights-based approach.

speakers

Vint Cerf


Sandra Mahannan


Wanda Muñoz


arguments

AI governance should focus on regulating applications and risks, not the technology itself


Regulation should focus more on AI developers and models rather than users


A human rights-based approach is needed for AI governance, beyond just ethics and principles



Takeaways

Key Takeaways

AI and Internet are fundamentally different technologies requiring distinct governance approaches, though some core Internet values like transparency and accountability can apply to AI governance.


Openness in AI is complex and doesn’t necessarily lead to transparency or democratization. AI systems are mostly proprietary and not interoperable, unlike Internet protocols.


AI governance should focus on regulating applications and risks, with emphasis on human rights, developer accountability, and global coordination on key issues like risk categorization and liability.


AI poses new risks around restricted user agency, perpetuation of biases, cybersecurity vulnerabilities, and potential loss of information detail and accuracy.


Resolutions and Action Items

Develop a joint report for next IGF on elements that can enable an open AI environment


Continue collaboration between the Dynamic Coalition on Core Internet Values and Dynamic Coalition on Net Neutrality on AI governance issues


Unresolved Issues

How to balance innovation and risk mitigation in AI regulation


Extent to which Internet governance principles can or should be applied to AI governance


How to ensure AI systems enhance rather than restrict access to diverse online information


Approaches for global coordination on AI governance given differing national/regional priorities


Suggested Compromises

Focus AI regulation on applications and risks rather than the underlying technology


Adopt a human rights-based approach to AI governance while still allowing for innovation


Develop compatibility mechanisms to reconcile divergent regional AI regulations while respecting diversity


Thought Provoking Comments

We now have centralized data value creation by a handful of transnational platform companies, and we could actually have had, as Benkler pointed out long ago, different form of wealth creation.

speaker

Anita Gurumurthy


reason

This comment challenges the current paradigm of data and wealth concentration in the tech industry, suggesting there were alternative paths for more distributed value creation.


impact

It shifted the discussion to consider the economic implications and power dynamics of AI and internet governance, rather than just technical aspects.


AI and Internet are not the same thing. And I think that the standardization which has made the Internet so useful may not be applicable to artificial intelligence, at least not yet.

speaker

Vint Cerf


reason

This comment importantly distinguishes AI from the internet and questions whether internet governance principles can be directly applied to AI.


impact

It prompted participants to more carefully consider which internet governance principles may or may not be applicable to AI, rather than assuming direct transferability.


From a human rights perspective, we would talk about the need for AI regulation to ensure accountability, remedy, and reparation when human, when violations of human rights results from the use of AI.

speaker

Wanda Muñoz


reason

This comment reframes the discussion of AI governance in terms of human rights, emphasizing accountability and remediation.


impact

It broadened the conversation beyond technical and economic considerations to include a human rights perspective on AI governance.


Generative AI applications are becoming a new intermediary layer between users and Internet content, increasingly unavoidable.

speaker

Sandrine Elmi Hersi


reason

This insight highlights how AI is fundamentally changing how users interact with internet content.


impact

It prompted discussion about the implications of AI as a new layer of internet infrastructure and how this might require new approaches to governance.


Overall Assessment

These key comments shaped the discussion by broadening its scope beyond technical internet governance principles to include economic, human rights, and structural considerations specific to AI. They challenged assumptions about the direct applicability of internet governance to AI and prompted a more nuanced exploration of how AI governance might need to differ. The discussion evolved from comparing internet and AI governance to considering AI’s unique challenges and impacts on internet use and society more broadly.


Follow-up Questions

How can we balance regional diversity and harmonization needs in AI governance?

speaker

Yik Chan Chin


explanation

This is important to respect different regional approaches while still establishing compatible mechanisms for global AI governance.


How can we strengthen multi-stakeholder involvement in AI governance?

speaker

Yik Chan Chin


explanation

This is crucial for ensuring diverse perspectives are included in shaping AI policies and regulations.


How can we regulate AI from the developer angle, focusing on data quality, privacy, security, and interoperability?

speaker

Sandra Mahannan


explanation

This approach could address issues at the source of AI development rather than just regulating end-user interactions.


How can we incorporate feminist and diverse perspectives into core values for AI governance?

speaker

Wanda Muñoz


explanation

This could lead to more inclusive and equitable AI systems by questioning social constructs, power dynamics, and resource distribution.


How can we ensure accountability, remedy, and reparation when human rights violations result from AI use?

speaker

Wanda Muñoz


explanation

This is critical for addressing the disproportionate harm AI can cause to marginalized groups.


How can we develop international norms specifically regarding liability and accountability in AI?

speaker

Alejandro Pisanty


explanation

This is important for establishing consistent global standards for responsible AI development and use.


How can we separate the effects of human agency and intention from the technology itself in AI governance?

speaker

Alejandro Pisanty


explanation

This distinction is crucial for appropriately addressing issues like misinformation and cybercrime in the context of AI.


How can we regulate AI in specific verticals or sectors rather than attempting to create one-size-fits-all regulations?

speaker

Alejandro Pisanty


explanation

This approach could lead to more effective and tailored regulations for different AI applications.


How can we ensure transparency in AI systems, particularly in complex models like large language models?

speaker

Luca Belli


explanation

Transparency is essential for accountability and understanding how AI systems make decisions.


How can we address the potential loss of specific details in AI-generated content?

speaker

Vint Cerf


explanation

This is important to maintain the accuracy and richness of information as AI systems become more prevalent in content creation and summarization.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #153 Internet Governance and the Global Majority: What’s Next


Session at a Glance

Summary

This discussion focused on strategies for improving global majority representation and participation in internet governance forums. Panelists from Nepal, Peru, and Uganda shared insights on challenges and opportunities for meaningful engagement. Key issues highlighted included the need for better access to internet infrastructure in developing regions, capacity building for diverse stakeholders, and more inclusive funding mechanisms to enable participation in global forums.


The panelists emphasized the importance of a bottom-up, multi-stakeholder approach to internet governance. They advocated for engaging youth, parliamentarians, and other underrepresented groups through targeted outreach and education. Successful examples were shared, such as youth-led national Internet Governance Forums and capacity-building programs for legislators.


Challenges discussed included power asymmetries, fragmented efforts among civil society groups, and difficulties in translating global discussions to local contexts. The panelists stressed the need for harmonized agendas, collaborative research, and leveraging existing networks to amplify global majority voices. They also highlighted the importance of localizing narratives and ensuring meaningful representation beyond token participation.


Recommendations for global partners included providing more inclusive funding opportunities, aligning existing resources more effectively, and ensuring global majority representatives have “seats at the table” in key discussions. The panelists concluded that while progress has been made, continued efforts are needed to create a truly inclusive and representative system of global internet governance.


Keypoints

Major discussion points:


– Challenges in achieving meaningful participation from global majority stakeholders in internet governance forums, including funding limitations, capacity gaps, and power imbalances


– The importance of localizing global internet governance issues and narratives to make them relevant at national/regional levels


– Strategies for engaging policymakers and parliamentarians on internet governance topics, such as targeted capacity building programs


– The need for better coordination and alignment of efforts among different stakeholders and initiatives


– Ways to improve multi-stakeholder collaboration, especially bringing in private sector and government voices


The overall purpose of the discussion was to explore how to maximize the impact and representation of global majority voices in key internet governance forums and processes, with a focus on practical strategies and lessons learned.


The tone of the discussion was constructive and solution-oriented. Panelists spoke candidly about challenges but maintained an optimistic outlook, offering concrete examples of successful approaches. The conversation had a collaborative feel, with panelists building on each other’s points. There was a sense of shared purpose in working to improve global internet governance processes.


Speakers

– Amara Shaker-Brown: Moderator


– Ananda Gautam: Youth leader focused on democratizing human rights in the digital age, advocate for inclusive technology policies, co-founder of Open Internet of Paul


– Paola Galvez: Tech policy consultant, founding director of IDON AI Lab, UNESCO’s lead AI national expert in Peru, team leader for the Center for AI and Digital Policy


– Peace Oliver Amuge: Africa Regional Strategy Lead with the Association for Progressive Communications, member of the UN Multistakeholder Advisory Group with the Internet Governance Forum


Additional speakers:


– Audience member: Academic researcher from Australia interested in youth IGF initiatives


Full session report

Revised Summary of Internet Governance Forum Discussion on Global Majority Participation


This discussion focused on strategies for improving global majority representation and participation in internet governance forums, with particular emphasis on the Internet Governance Forum (IGF), the WSIS+20 process, and the Global Digital Compact. Panellists from Nepal, Peru, and Uganda shared insights on challenges and opportunities for meaningful engagement, emphasising the need for a bottom-up, multi-stakeholder approach to internet governance.


Speakers:


– Ananda Gautam: Youth activist and organizer from Nepal


– Paola Galvez: Digital rights advocate from Peru


– Peace Oliver Amuge: Technology policy expert from Uganda


Key Challenges:


1. Internet Access and Infrastructure


Peace Oliver Amuge highlighted the lack of internet access in many regions, particularly in Africa, as a fundamental challenge. This issue was seen as a prerequisite to addressing other governance concerns.


2. Capacity Building and Funding Limitations


All panellists agreed on the critical need for capacity building among diverse stakeholders, including civil society, policymakers, and parliamentarians. Funding challenges were identified as a significant barrier to participation in global forums and capacity-building efforts.


3. Power Asymmetries


Paola Galvez drew attention to the power imbalances present in global forums, which can hinder meaningful participation from global majority representatives.


4. Fragmented Efforts


Peace Oliver Amuge pointed out the lack of a harmonised agenda among civil society groups, leading to fragmented efforts that dilute the impact of global majority voices in governance discussions.


5. Inclusive Timing and Venues


The importance of considering timing and venue accessibility for global forums was emphasized to ensure broader participation from diverse regions.


Strategies for Improvement:


1. Multi-stakeholder Engagement


The panellists unanimously supported a multi-stakeholder approach to internet governance:


– Peace Oliver Amuge advocated for a bottom-up approach starting from the grassroots level.


– Ananda Gautam emphasised the importance of engaging government and private sector support.


– Paola Galvez focused on leveraging existing networks and alliances.


2. Localisation of Global Issues


Speakers stressed the importance of translating global discussions to local contexts, making abstract global discussions relevant to national and local realities.


3. Targeted Outreach and Education


Successful examples were shared of engaging underrepresented groups:


– Ananda Gautam highlighted the Youth IGF Nepal initiative.


– Peace Oliver Amuge mentioned APC’s African School on Internet Governance.


4. Improving Funding Mechanisms


Panellists agreed on the need for more inclusive and creative funding opportunities to ensure meaningful participation from the global majority.


5. Leveraging Existing Structures


Peace Oliver Amuge suggested utilising existing structures like national and regional IGFs to foster multi-stakeholder dialogue and build capacity.


6. Ensuring Meaningful Representation


Paola Galvez emphasised the importance of direct representation from the Global South in governance discussions.


7. Research and Stakeholder Mapping


The need for comprehensive research and mapping of stakeholders and resources was highlighted as crucial for effective engagement.


8. Youth Initiatives


The role of youth-led initiatives in internet governance was emphasized, with examples of successful youth engagement shared by the panellists.


Conclusion:


The discussion revealed a high level of consensus among speakers on the key challenges and potential solutions for improving global internet governance. While progress has been made, continued efforts are needed to create a truly inclusive and representative system. The panellists maintained an optimistic outlook, offering concrete examples of successful approaches and emphasising the shared purpose of working towards more equitable and effective internet governance processes.


Moving forward, the key areas for focus include improving internet access and infrastructure, enhancing capacity building efforts, developing more inclusive funding mechanisms, and ensuring meaningful representation from diverse regions in governance discussions. By addressing these issues and leveraging initiatives like the Global Digital Compact and WSIS+20 process, stakeholders can work towards a more inclusive and effective global internet governance framework that truly represents the needs and perspectives of the global majority.


Session Transcript

Amara Shaker-Brown: International Media Assistance, and the Center for International Private Enterprise have been running the Open Internet for Democracy Initiative, a program to build a network of open internet advocates who champion the democratic values and principles to guide the future development of the internet and how it works. Our Open Internet Leaders Program, two of whom are with us, along with their mentor today, have been a key part of our work. Our leaders are emerging experts in digital rights and open internet issues from the global majority, representing civil society, the media, and the local private sector. So I will quickly introduce our panelists, and then we can jump right in. So we’ll start with Ananda. Ananda Gautam is a passionate youth leader focused on democratizing human rights in the digital age, advocating for inclusive technology policies since 2018. His commitments include leading global initiatives like the Internet Society Youth Standing Group and co-founding Open Internet of Paul, while also promoting digital freedom and cybersecurity. Through research, advocacy, and capacity building initiatives, he strives to empower young people in shaping the future of internet governance. And he was one of our leaders. Paola Galvez is a tech policy consultant dedicated to advancing ethical AI and human-centric digital regulation globally. She holds a Master of Public Policy from the University of Oxford, congratulations, recently, and serves as the founding director of IDON AI Lab, UNESCO’s lead AI national expert in Peru, and as a team leader for the Center for AI and Digital Policy. And Peace Amuge is the Africa Regional Strategy Lead with the Association for Progressive Communications, where she works on the intersection of technology, human rights, and gender. She’s a member of the UN Multistakeholder Advisory Group with the Internet Governance Forum. So she has been very busy. 
And she is also a member of the advisor group for our Open Internet for Democracy Initiative. So thank you, all three of you, for taking the time and for coming for this. And we will hop right in with, you know, just an easy, simple question on Internet governance. So, Peace, I guess we’ll start with you and move across. How can global majority advocates maximize the impact of key themes emerging from these 2024 fora, such as NetMundial, G20, WSIS, to advance a free, open and interoperable Internet in their regions? So taking some of these global themes and applying or advocating for them at a regional level.


Peace Oliver Amuge: Thank you very much, Amara and everyone else who has joined us in this room. I’m privileged to be part of this panel and have these very key discussions. I think to me, what would be important is to, first of all, unpack what we mean by free, open, you know, Internet, you know, what does that mean to us? What does that mean to the different stakeholder groups that we have, to the different communities that we are talking about, the different context? I think when we access, when we unpack that, then we will know what we exactly want, because we need to have some clarity when we talk about the open, free Internet that we want to advance. And so then when we have this clarity, then we will know, set our priorities and know if we’re talking about affordable access, if we’re talking about local content, you know, I think it would be important when we have that clarity or unpack, know what we mean. And then we start to have more details. and create these priorities that I’ve just mentioned, and then have our agenda set. And then we go into having our very harmonized strategy for us to benefit or leverage from all these processes that are happening. And I think then more and more we need to establish and we need to engage and embrace collaborations and synergies across the different stakeholders and embrace the multi-stakeholder approach. And also when we still talk about internet, for me as someone that comes from the African region, I think a very important, crucial aspect of the internet to talk about is the access. We cannot talk about any other things, any other issues if we are not connected, meaningful connectivity, having communities, local communities connected, because we still have very many people who are not connected. So I think for me, that is something that I would want to say as we start this conversation, setting our priorities. And for me, as someone that comes from the African, I think access is so important.


Paola Galvez: Thank you. Paola. Thank you, Amara. Hello, everyone. Thank you for joining us. Let me give you a perspective from someone that is from Latin America, I’m from Peru. It’s a developing country that had participated actively in these discussions, but it’s not well, okay. Can you hear me well now? Yes, thank you. So let me provide my perspective as someone that comes from Peru, a developing country that has participated in this process and is very well aware of what’s happening, but at the same time has critical challenges happening on the ground. I can say political crises that sometimes make these very critical digital topics get overlooked. I do believe that we need to maximize the impact of all these fora to look ahead at what’s happening after 2025. And for that I have three ideas I would like to share. On one hand, strategic alignment. I believe we are several actors and we need to work as a community with coalition building to bring the voices to the ones that are making the decisions. They are on the table. Second, try to bring localized narratives. It can feel abstract sometimes when we think about these global discussions, right? But they are absolutely important for our national and local realities. So how can we localize these topics? By thinking of local examples, right? How to bring them to our national context. This is very important. And third but not least, I believe monitoring implementation is very, very important: advocating for clear mechanisms that track commitments made during these fora. This is usually the government implementing them, but there are mechanisms in place, so civil society, media, and also the private sector can play a vital role in this. I’m going to be brief so we can discuss among others. So I’ll keep it there.


Amara Shaker-Brown: Go ahead, Ananda.


Ananda Gautam: Thank you. So I think, as Peace started with the context of Africa, I come from the global South as well, with certain issues that are still there. If we look at the global context, we have more than, I think, about 35 percent of people still not connected to the internet. And then there is another issue: the people who are recently connected to the internet don’t have enough capacity to take full leverage of this platform, or of the opportunities that come with access to the internet. So in my context the two major issues will be access and empowerment, I call it. So the first thing is people should have access regardless of any barrier, and the barrier could be either access to infrastructure, or there might be other barriers like language, and then affordability and other accessibility issues, like how persons with disabilities can have access to the internet. We can call it, on a broad topic, how we can ensure meaningful access to the internet. Another thing is, after having the access, we will be discussing about human rights in the digital perspective. Are our human rights protected when we are online or we are leveraging different technologies? Now we are talking about AI governance, and we are also now talking about the AI gap: along with the digital divide, the AI divide is also a very concerning topic today. So how do we ensure these policies in this kind of forums? We have national forums, regional forums, and we are just ahead of the WSIS+20 review process. We just passed the Global Digital Compact, which has a very positive message, but we are yet to see the WSIS+20 process. So one of the major challenges is that these kinds of forums are not binding; they don’t have a kind of ripple effect, you know. People are not bound, any stakeholder is not bound to implement the takeaways, but if we had any mechanism by which governments could uphold the values that are taken away from this forum, that would be very important.
And, actually focusing on the perspective of young people, I believe young people have a very big stake because they are, I believe, the biggest stakeholder of the internet, and they are the future internet governance leaders who need to be equipped with enough capacity and knowledge so that they can decide what future of the internet they want. So the three things I want to take away from this first round of our sharing are: we need to complement access with empowerment, and we should not forget meaningful access; we need capacity building of young people and marginalized communities so that they can leverage these forums and actually make their voices heard globally; and the deliberations of this forum should somehow be taken back to the communities so that we uphold human rights. I will break here and go into another round of discussion. Thank you.


Amara Shaker-Brown: Thank you all. Yeah, I think we have all heard sort of the issues around accessibility being, you know, the first barrier before you can even get into how the internet works. So the IGF is a key multi-stakeholder forum, and there have been a couple of other ones this year, and, you know, multi-stakeholder is the key word of the year. But in your view, what are the key challenges and opportunities for effectively implementing a multi-stakeholder approach to internet governance, especially ensuring, and I think this is the key word, meaningful representation? Not just having people in the room as a standard bearer for the global majority or for their country, but having meaningful representation in fora like the IGF, WSIS, or the past Summit of the Future. And I guess we can go the other direction. So Ananda, we'll start with you, then Paola, then Peace, unless someone has a burning desire to start.


Ananda Gautam: Okay, then. I think this is a very challenging question, but from my point of view, having been engaged with national and regional initiatives for a while, the biggest asset that internet governance has is its, I think, 270-plus national and regional initiatives. That being said, these are United Nations-backed initiatives, and I see that the United Nations should also have some form of cooperation with national governments and other international agencies so that they adhere to these initiatives, so that we can implement the multistakeholder discussions not only at the global level but also at the regional and local level. We have many challenges. I also chair the Youth IGF Nepal, and many UN agencies themselves still don't know about the Internet Governance Forum, so there is a huge gap. They are leading many digital initiatives, but they are not aware that these deliberations are happening and that they are led by the Internet Governance Forum. So I see it as a kind of structural coordination gap. If this coordination could be made, the United Nations is one of the major global actors that can foster partnership and develop a coordination mechanism with all stakeholders, including governments. I think this year the IGF spent many funds on bringing in the parliamentarians, and if we can make those parliamentarians take these initiatives back to their countries and support the national and regional initiatives, I see that in some time, not by tomorrow but in a couple of years, we can make them realize this. We need to build those kinds of mechanisms, and I see them lacking. At the global level, we come here, and it is so good. We go to a regional IGF, and it is a bit less: the private sector is not very interested, government participation is always minimal, and it is overcrowded by civil society.
So when we call for equal footing on multistakeholder bodies, there should be equal participation as well. The IGF needs to have initiatives and a coordination mechanism with all UN specialized agencies and all the projects or initiatives that are being done, whether on eliminating the digital divide, on digital safety, or on AI, whichever UN agency is working on them. If they align their efforts with the Global Digital Compact and the deliberations of the Internet Governance Forum, I think this is the best way we can get this done through a bottom-up approach. Thank you.


Paola Galvez: So I'll continue, as you said, Amara. I'm just taking this off so you don't overhear me, but let me know if you cannot hear me well, thank you. I see different challenges. First of all, sometimes these pivotal discussions are happening in places that are far away, and attending requires lots of resources, first of all monetary. So funding is one big barrier that we have, and then let me go to the opportunity: that's why sometimes we need to look out for funding support, or for organizations that are able to support civil society, media, and academics to come and join, all the stakeholders, because small organizations cannot join on their own. If I was able to come to an IGF for the first time in 2019, it was thanks to the Internet Society and a fellowship called the ISOC Youth Ambassadors. So these are great opportunities that we can look out for, to come and have meaningful representation. This is one thing. Second, and this is tied to another challenge for developing countries such as ours, is the lack of knowledge and experience. Sometimes the UN Internet Governance Forum, like NetMundial, sounds like one of these big events where only experts come, right? And that's far from the truth. If we really understand this process, actually what we want is people that are really passionate and want to have a stake in the future of the Internet, right? But it takes, and this is the opportunity, bringing more information and making it accessible for everyone. I think the three of us at the table try to do this in our localities, informing people what an Internet Governance Forum is, right, at a local level, at a regional level, to try to motivate other organizations to come and join. That's why the newcomer sessions are very important, because it can be a monster.
If you see the app schedule, there are so many sessions happening at the same time, so having mentors, somebody that can pair with you and follow you during the IGF, could be great, and I think that's a great example of best practice. I remember that my first time it was somebody from ISOC who joined me and guided me a bit on how to make the most out of this forum. A third challenge, I'd say, is power asymmetries, because even when we are here at the same table, let's say, and then we will have, I think, an open Q&A so everyone can comment and ask a question, there are still power asymmetries. You know, this global forum can reflect agendas shaped by wealthier nations, more developed countries, richer or bigger corporations that can hold bilateral meetings that sometimes a small CSO doesn't know are happening or could happen, right? And for the opportunity, I would say creating more inclusive mechanisms to push for everybody to hold the exact same power in the conversation, so that we can all have meaningful participation and our opinions can get heard as they should be. Thank you.


Peace Oliver Amuge: Thanks, Paola and Ananda, and I totally agree with what you said, so I will just add a few things. One challenge that I see, and I want to repeat it, is capacity building, and the lack of capacity that exists among the different stakeholders. But something that we must agree on is that, over time, at least some steps have been made in regards to civil society. You know, lots of initiatives have been going on to build the capacity of civil society organizations or participants. Not to say that they are already there, no, but at least I want to acknowledge that some strides have been made. Where we also need to put our focus is the judiciary, the parliament, the law enforcers, the government, the private sector. We need to look at these other stakeholder groups and build their capacity. If I look at Africa, at the region, we've had organizations like Equality Now, APC, IT for Change, KICTANet, NDI, a couple of organizations that have really been trying to build capacity. So I think we need to again map out and look at where the gaps are and who the people are that we need to focus on in terms of capacity building. And I also really agree on the funding opportunity, because this will facilitate the capacity building, bridging the skills gap that I am mentioning, and also ensure that we have meaningful participation from these stakeholder groups when we are at the IGF, and in the different conversations happening at WSIS+20 and the GDC process that just ended. And then one challenge that we see is a limited harmonized agenda.
As civil society, do we have an agenda, you know, starting at the national level, going to the sub-regional, pushing forward to the regional, and arriving at the global level? As Paola was saying, we will all have different agendas, so I think we need a kind of harmonized agenda or strategy. There is also still a problem of fragmented efforts. We are doing so much; even the capacity building that I just talked about is very fragmented. We need to harmonize our efforts as we address these challenges. I think we also need to be very inclusive when we make programs. We need to be very flexible, acknowledge the different contexts, and look at our participants as people with different challenges and abilities: looking at the women, looking at persons with disability, and looking at the timing when these conversations are happening. I think you all remember when the GDC consultations were happening online. It was not very inclusive for some people. I think, Ananda, for you it was happening when it was very late; it was happening in my afternoons. And also, let's look at the IGF now. It's happening around Christmas time, when some people are already off work. Some people are working until the 15th and then taking their break. So this kind of timing should be very inclusive, and we need to look at it as well. And there is the challenge of venues. If these conversations are happening in Geneva or New York, that's not inclusive. Even when you have funding to travel there, you might be limited in terms of visas; you will not get visas. So these are some of the little things that might be ignored, but they are a big challenge to our participation and engagement, and to having the impact that we desire.
So, again, to echo what Ananda mentioned: having these other stakeholders in the room, having a multi-stakeholder conversation, and not having mostly civil society but everybody in the room, is one of the challenges that we continue to have. Thank you.


Amara Shaker-Brown: Thank you. Yeah, building off that a little bit: have you seen in your work in the past year, or do you have ideas for the future, of more effective ways that civil society, media, and the private sector can collaborate to engage with these multilateral processes, or ways civil society can help bridge that gap and pull other stakeholders in? I know some of those divides are hard to bridge, but is there any work that you have seen, either at the local level or at the global level, that really brings those non-governmental stakeholders together, especially in trying to engage the private sector?


Peace Oliver Amuge: Thank you, Amara. So Paola was pointing at me, so, yes, okay. One thing that I would suggest is definitely the bottom-up approach. When we come here, we meet, and it's a very common habit that probably many people experience: we meet our governments here, we meet our members of parliament here, but when we go back to our different countries or regions, we then don't have any conversations happening. I think we should leverage structures like the IGF that start from the grassroots. We need to embrace and leverage them in order to have meaningful conversations, have everybody participating, harmonize our efforts, avoid fragmented efforts, and put together our agenda. But one thing that I also want to mention is research. We need to have research done. We need to map the stakeholders that are putting different efforts in place, map the existing knowledge, and map the resources that we have, at all levels really, starting from the national, from the grassroots, and building all the way up to the global level. Putting a funding mechanism in place is a very key thing that we need to focus on as well. And then, after doing that research and all this mapping, we need to leverage the power of the collective: the collaborations and synergies that we are building. We need to leverage the synergies that we build and come up with a strategy all together, because I think we really need to emphasize collaboration, synergy, and a multi-stakeholder approach. I think that's what I want to say now, thank you.


Paola Galvez: Okay, let me just build on what Peace just said. The power of networks and alliances is huge, literally. I can name two examples. For the presence of Latin America at the Summit of the Future, for instance, I saw Al Sur, the coalition of several digital rights NGOs in Latin America, and they all went to New York and participated. I see this as a fantastic example of how Latin American organizations can come with a unified voice, right? Another example: this is my fifth IGF, and most of the time this is the space where I see Ananda every year; if there were no other meeting we would not meet. But there is immense potential in bringing new people to the conversation, new organizations, or somebody that is working on a specific topic. It doesn't have to be an expert on the Internet; if you're talking about children or financial services, it's good to have their opinion too, right? And often they don't know about the IGF. For instance, I built on the Center for AI and Digital Policy community to work on a proposal, this is an example, on algorithmic transparency, and when letting the community know what was going to happen in December at the IGF, many people reached out asking, what is the IGF, right? So this is a good example, and there are newcomers participating in this IGF who will provide expertise and evidence for new regulation and the future of AI; this is an example once again. I do think good practices are happening. So, if you're thinking about civil society and media, we need to be very creative, right? Most of us know about the UN call for travel support that exists, but unfortunately resources are limited; they cannot fund everyone that applies. So let's be creative and try to find other governments that can provide funding, or universities.
Also, academics are very good allies; they want us to continue our research and make reports on the discussions that are happening. So I think these are good examples and use cases, and maybe people joining online who have other ideas, it would be great to share them in the chat or here in the room, because we need to act as a community to start changing this, bringing the voice of the global majority to this discussion and making it very meaningful.


Ananda Gautam: Thank you, Paola and Peace. Being the last speaker, I have the privilege of building on both of your points, and the challenge of bringing something new. My perspective is that it is a collaborative approach. As I mentioned before, we have challenges: I gave the invitation from the IGF to my Minister of IT, but he didn't care, because he doesn't know the value of this kind of meeting, and that is why we don't have much support in many countries. If we could establish the importance of this kind of event, of multi-stakeholder forums, and governments started taking it seriously, this would create a different kind of environment: they would want to send their young people to attend these events, and maybe those people could secure funding from government as well. That is one option; we need more government support, which is very much lacking. And then, if governments start supporting these kinds of initiatives, another collaborative approach is equal collaboration between civil society and the private sector. The private sector works on its business activities, which are very much related to governance: if they are a tech company, the governance of technology is going to affect their business as well. So they need the help of civil society for responsible use of technologies and for awareness and capacity building on these kinds of issues. I think the private sector can help civil society through some kind of CSR, corporate social responsibility, so that civil society can represent these issues in these kinds of forums. Tech companies like Meta, TikTok, or whatever tech companies are there can help civil society send their representatives to these forums. They can help them create capacity building initiatives and awareness programs for responsible use of technology.
That is the kind of collaboration that is required, and this kind of forum should foster the environment for those collaborations. It should foster the environment to bring government on board so that they actually take this seriously. If governments take this seriously, the private sector would also definitely look at these things in a more responsible way, and this is what we ideally believe multi-stakeholder collaboration to be. So I call for this kind of multi-stakeholder collaboration. Another part: I started Youth IGF Nepal back in, I think, 2022, and within three years of establishment we have been able to make an impact; there is a Youth IGF in Nepal, and we need to hear them. Our ministries often call us for consultation meetings, and our minister was so happy that I delivered him the letter from the IGF. Although he didn't come, this is an impact. By the third year he might come; we never know. People are conscious that there is an Internet Governance Forum, that there is a Youth Internet Governance Forum in Nepal, that we should hear them, that they are young voices and we should include them. So I've been taking part in different consultations, and our community is growing. In the first year, we trained 100 people, and coming to the third year, we have trained more than 300 people. A few people are now part of the Internet Society Youth Ambassadors Program; a few people are going to the APrIGF; a few people are going to regional IGFs, and they will take these deliberations back to their communities. So, while we come and join this kind of forum and get the opportunity, we should give something back to the community, so that this community thrives and creates an environment that will have an impact. It is not an overnight change, but we need to make our efforts. And the UN IGF should make a lot of effort, because this time, as Paola mentioned, only 10 people got travel support; all of the fund was actually invested in the parliamentarians.
So if governments could have sent their parliamentarians with their own funding, those funds could have been utilized to bring more stakeholders, and that would have been wonderful. There are so many other opportunities that the UN can pull in to support bringing more young people. One of the best examples I can give is from Brazil: there is CGI.br, which supports more than 15 people every year, bringing them to the IGF and other regional events. That's why you might be seeing many young Brazilians at this IGF: because there is a support mechanism in place. We need those kinds of support mechanisms. I think that's it from my side. Thank you so much.


Amara Shaker-Brown: Great. Thank you all. We are now going to open it up to questions, either in the room or online. Please feel free to put your question in the chat or to raise your hand. I see we have a question in the room. Yes, we have someone in the room, and she will take the mic.


Audience: Thank you. It's great to hear about the youth IGF; I'm really interested to know more about it. I'm from Australia, and I see that there is an IGF in Australia. I'm an academic in the research space as well, so I'm really interested to know more about that. Thank you.


Ananda Gautam: Okay, so in regards to a youth IGF in Australia, I don't know. I was just having this thought: I was having a conversation with Jordan from auDA, who established the Australian IGF, and I was about to ask him, why don't you guys start a youth IGF? But unfortunately I couldn't. It is a good thing, when there is a national IGF, to also initiate a youth initiative; it will help bring in more young people. At least they will start looking into what the IGF is. Most people contact me when they get fellowships: they go to the global forum and the regional forum and then realize there is a national forum in their own country. So this is a good one; they are gaining awareness. A few people want to go to the IGF and contact me about how to get there; even some people from the ministry contacted me this time, saying, we want to go to the IGF, is there any funding available? And, like I said, we don't have any funding available. There are some very basic principles: the UN IGF Secretariat has created a toolkit to establish a youth IGF. If you want support, we at the Youth Coalition on Internet Governance and the Internet Society Youth Standing Group have helped establish a lot of youth initiatives, and we also help fund a few of them. I'm not sure how much it would be for Australia, but in developing nations we are supporting each youth initiative with a thousand dollars as they start. I think we jump-started five youth initiatives, supporting them through the Internet Society Youth Standing Group. We have a very limited budget, but in my tenure I want to jump-start a few more youth initiatives. So we can help you out. Maybe, if you know Jordan already, we can sit with Jordan as well, because auDA is the entity that hosted the Asia-Pacific Regional IGF, and they have started the Australian IGF. If you find someone from the Australian IGF, we can sit together, discuss this, and help with how you can start your own youth initiative. I think that is it.


Amara Shaker-Brown: Thank you, great, thank you for your question. Any other questions in the room? Okay, I have one from Priyal online. The question is: what are some strategies that you have seen that are successful for civil society advocates to engage with their local or national policymakers, so parliamentarians, the judiciary, anything like that, on these issues? Because we know getting the multi-stakeholder voice into the multilateral system can be difficult. So, any past success there?


Ananda Gautam: Can you put the question in text? Maybe it was so long, you know.


Paola Galvez: I can start in the meantime, Ananda, and then you can jump in. Actually, I have a good example, because, as you said, Amara, it's hard to bring the government into multi-stakeholder discussions. A while ago in Peru, we started a capacity building program for congressmen on the digital economy, and that was a multi-stakeholder effort, because the NGO Hiperderecho participated, along with COMEX, which is a chamber uniting the private sector in Peru that has a committee on the digital economy and groups different big technology companies. We developed this, and I remember it was in the context of a period when the Peruvian congress wanted to regulate the sharing economy and platforms like Uber, Cabify, et cetera. So we started with these sessions so that they could understand a bit about the technology, the telecoms, et cetera. It was a couple of weeks before the Peru IGF, and we said it would be a great opportunity for all of us to have a discussion where they could listen to what civil society, media, academics, and other stakeholders have to say on this and other topics related to internet governance. And they were interested, a bit sceptical at the moment, but we had some participants from the parliament. This is one thing. On the other hand, and sometimes this happens: I was working on a public consultation in Peru as part of the UNESCO AI Readiness Assessment Methodology that I conducted there, and we had to do this. Some congressmen went, but others did not go and only sent their advisors. This is a very personal opinion, but I would say let's not take that the wrong way. Having their advisors is also good, because they are the ones speaking in their ears, and actually they are the ones writing the bills, so that's good too. As long as there is a commitment from the congressmen to send their team, that is a good step.
So I can tell you these good examples that I have, and I hope they can be replicated. Showing them that it will benefit their work is a good way to speak to them, so that they are interested in joining these discussions. Thank you.


Peace Oliver Amuge: So I want to give an example of what we have been doing to engage legislators or members of parliament. APC convenes the African School on Internet Governance, and since last year we have been able to have about 16 members of parliament coming from different African countries. Having these members of parliament in the room with the other stakeholders, like civil society and the technical community, as fellows or participants of the school was very important and very key, for them to really learn and understand the issues that we talk about. Because when we go back, usually we want to engage them, but they do not understand the issues that we talk about. Even the reports that we publish, they don't consume, because they are long reports, they do not have the time, and the rest. So we need to pull them in and bring them to this conversation that we always have. I think having the members of parliament join the school was really key. And I want to give another example, still with APC, because APC is a network of other civil society organisations, and across Africa APC works with other civil society organisations, for instance KICTANet, and for instance WOUGNET, which is based in Uganda. I remember that in 2022, WOUGNET was able to support members of parliament and someone from the judiciary to attend the IGF. And they did not just stop at supporting their participation at the forum; they continued to engage them back home. What next? You know, having the other stakeholders in the room and saying, okay, we were together at the global forum; what next for us? What are some of the things that we need to talk about at the national level? So I think this kind of very strategised engagement needs to happen, and it should not stop with them coming to the conversation or the meeting once.
We need to continuously engage them to get their buy-in and their understanding, and also to ensure that we have people on the floor of parliament who understand the issues that we talk about. So I thought I should just share that strategy that we use. Again, even this year, we had over six members of parliament join, and while I'm at the forum here, I've seen some of them again join the parliamentary track. So you can be sure that continuous engagement of these members of parliament will bring some change: when we go and knock on the doors to talk about the policies and the gaps that we see, and share with them the policy briefs that we come up with from our different countries or regions, they will be able to understand the issues that we are talking about. Thank you.


Ananda Gautam: Thank you, Peace, for taking this up, about the learnings of parliamentarians. When the UN IGF Secretariat started the parliamentary track, I had proposed that they bring parliamentarians and young people together in a setting where they would afterwards contribute to getting back to their local communities together with the young people, so that there could be some takeaways, but it never happened. I think there is one parliamentary track youth leaders dialogue today, but I'm not sure whether it is framed that way or not. Another example: we have a Digital Freedom Coalition in Nepal, and we have been working with parliamentarians on different bills that have been proposed in Nepal. So at least we have to start at some point. That being said, everything we advocate for might not be reflected in the outcome, but they will start listening. And I have found that many parliamentarians are very keen to learn about these new issues and to build their understanding of the emerging issues, at least. So this will help in the longer run. The immediate effect might not be there, but if we advocate persistently and continuously, they will start listening. And when they feel its importance, they are the ones who are writing the bills; they are the ones who will be making the laws, if we make them understand that this is doable. That is it from my side. Thank you.


Peace Oliver Amuge: Amara, let me just add something that I remembered. Just yesterday we were having a conversation with one of APC's members, called Rudi International, which comes from DRC Congo, and he was very happy that from DRC Congo they have brought, I think, five members of parliament; I may have forgotten the number. But again, I liked what he was talking about: okay, yes, they have come, and he's very happy that they have come, but he was thinking about what kind of sessions to guide them to, because coming to the IGF is one thing, like I just mentioned, but afterwards, are we just going to come, move around, and that's it? Laying a strategy for what engagements they should be part of, who the people are that they should meet, and trying to organize some people to give them some insights. So these are some of the things that we need to do as civil society, as different stakeholders. If we have these opportunities to have members of parliament or policy makers in the room, let's go an extra step, have the agenda set and the strategies laid out well, so that we can have some impact. Thank you.


Amara Shaker-Brown: Thank you. We have time for one more question, if anyone on the line has one; otherwise I have a final one that I can put. All right. So we are trying to get the global majority to be meaningfully involved. As your global minority partners, partners from the Global North, either funders or other civil society: are there specific things, other than the ones you've mentioned, funding, more funding opportunities, more inclusive timing and location, that we can be carrying forward if we are in the room and you are not, other than advocating for you to be in the room? Are there any specific actions that your Global North partners can take or support to help bring your messages into the conversation?


Paola Galvez: I can start. First, this reminds me of one invitation I received, but from day one, because I’m based in Paris, but I’m Peruvian. Thinking I was in Paris, they invited me to a global forum, let’s say. And I mentioned I was in Lima doing this project with UNESCO. And they said, ah, then you can join online, because we don’t have much funds. At the moment it was a tough decision, but I was very sure of what I believe, and I said: I’m really sorry, but I don’t think it’s the same engagement when you join online and deliver your speech. I’m very passionate when I speak, and I said it would not be the same. So please look for someone else who may be in Europe and who can come. In the end, they made the effort and they found the funds. So this is one thing to say: I appreciate it when the Global North wants to help us, but I don’t think there is a replacement for our own voices; a Global North representative cannot speak for us. So I would like to reiterate the importance of being more creative and getting the funds to bring the voices of Global South representatives, because there is nothing more important than having them. We all know that the main sessions are important, but what really, really matters are the discussions we have during the coffee breaks, or over lunch, the spaces where we can really mention our needs, our pains, the challenges that we are living. So, yes, advocate for having more funds, because we may have the funds and then still not have a seat at the table. And try to look further, because if we are in a conversation about education in rural areas, it’s nice to see experts on education, but we must have the teachers who are suffering, who are living the challenges of digital technology’s impact on education, right?
I myself found it challenging to find these real actors who would meaningfully engage in the public consultation that I mentioned, but I know my faults, right? So I asked an organization for help: do you know of people or communities of teachers who can come? And then an Awajún community from Peru came all the way to Lima and participated. That’s meaningful. It’s a lot of work, but we need to do it if we want discussions to be valuable and to really have the impact on the Internet that we want.


Peace Oliver Amuge: Okay, thank you. I want to add on the funding mechanisms. I think we need to also acknowledge the different contexts and embrace local content. But like I mentioned at the beginning, in the African region, for instance, the issue of access is really very, very important. So we need to look at mechanisms that work: just picking on access, mechanisms like community networks, you know. Sometimes we are using the same approach everywhere, but we need to tailor our approaches to fit the context of our targets. So as we work on our different strategies, we need to ensure that we are aware of the different contexts and what can work. And like Paola said, someone else cannot come and speak for us, for the local community. So I want to just emphasize that we need to always acknowledge that. And in general, we need to have gender-responsive approaches and embrace multistakeholderism. Thank you.


Ananda Gautam: Thank you, Paola and Peace. Complementing them, my main reflection is that it is not about finding new funding; I think it is about the alignment of the existing funding. Are we giving the funds to those who need them most, or are we just distributing them? The alignment of the available funds is very important. We have so many UN agencies; will the efforts of all the specialized agencies be aligned with the Global Digital Compact, or with what WSIS Plus 20 will deliver? And another thing is collaboration. Have we sought a collaborative approach from all the stakeholders? I think these two things need to be addressed so that we can have the kind of multistakeholder engagement that we are seeking. Thank you.


Amara Shaker-Brown: Great. Thank you all. And thank you all for joining us. I think we can, sorry, excuse me, wrap it up. Thank you to our panelists. Thank you for sharing your expertise, your experiences, and for giving us some ideas of what we can be doing in this next year to ensure full, meaningful participation of all stakeholders. And with that, I will say have a lovely evening to those in the room, and have a lovely day for the rest of you. Thanks so much, everyone. Thanks so much. Thank you. Thank you.



Peace Oliver Amuge

Speech speed

156 words per minute

Speech length

2260 words

Speech time

864 seconds

Lack of internet access in many regions

Explanation

Peace Oliver Amuge emphasizes the importance of internet access, particularly in the African region. She argues that meaningful connectivity and access for local communities should be a priority before addressing other internet governance issues.


Evidence

Peace mentions that many people in Africa are still not connected to the internet.


Major Discussion Point

Challenges in Internet Governance and Accessibility


Differed with

Ananda Gautam


Differed on

Prioritization of internet access vs. other governance issues


Fragmented efforts and lack of harmonized agenda

Explanation

Peace Oliver Amuge points out the problem of fragmented efforts in addressing internet governance issues. She argues for the need to have a harmonized agenda or strategy across different levels, from national to global.


Evidence

She mentions the need to map out stakeholders, existing knowledge, and resources at all levels.


Major Discussion Point

Challenges in Internet Governance and Accessibility


Bottom-up approach starting from grassroots level

Explanation

Peace Oliver Amuge advocates for a bottom-up approach to internet governance. She suggests leveraging structures like the IGF that start from the grassroots to have meaningful conversations and harmonize efforts.


Evidence

She mentions the need to embrace and leverage IGF structures for meaningful conversations and participation.


Major Discussion Point

Strategies for Effective Multi-stakeholder Engagement


Agreed with

Ananda Gautam


Paola Galvez


Agreed on

Need for multi-stakeholder collaboration


Continuous engagement with policymakers

Explanation

Peace Oliver Amuge emphasizes the importance of ongoing engagement with policymakers, particularly members of parliament. She argues that this continuous engagement is crucial for building understanding and support for internet governance issues.


Evidence

She provides an example of APC’s African School on Internet Governance Forum, which included 16 members of parliament from different African countries.


Major Discussion Point

Strategies for Effective Multi-stakeholder Engagement



Ananda Gautam

Speech speed

144 words per minute

Speech length

2528 words

Speech time

1050 seconds

Need for capacity building among stakeholders

Explanation

Ananda Gautam emphasizes the importance of capacity building for stakeholders in internet governance. He argues that young people and marginalized communities need to be equipped with knowledge to participate effectively in shaping the future of internet governance.


Evidence

Ananda mentions the success of Youth IGF Nepal, which has trained over 300 people in three years.


Major Discussion Point

Challenges in Internet Governance and Accessibility


Agreed with

Peace Oliver Amuge


Paola Galvez


Agreed on

Importance of capacity building and education


Differed with

Peace Oliver Amuge


Differed on

Prioritization of internet access vs. other governance issues


Engaging government and private sector support

Explanation

Ananda Gautam argues for the need to engage both government and private sector support in internet governance initiatives. He suggests that private sector companies can help civil society through corporate social responsibility initiatives.


Evidence

He mentions the potential for tech companies to help civil society send representatives to forums and create capacity building initiatives.


Major Discussion Point

Strategies for Effective Multi-stakeholder Engagement


Agreed with

Peace Oliver Amuge


Paola Galvez


Agreed on

Need for multi-stakeholder collaboration


Better alignment of existing funding

Explanation

Ananda Gautam argues that the issue is not about finding new funding, but better aligning existing funds. He suggests that funds should be given to those who need them most, rather than just distributed broadly.


Evidence

He mentions the need to align efforts of UN agencies with the Global Digital Compact or WSIS Plus 20 outcomes.


Major Discussion Point

Improving Global Majority Representation


Agreed with

Peace Oliver Amuge


Paola Galvez


Agreed on

Funding challenges for participation in global forums



Paola Galvez

Speech speed

144 words per minute

Speech length

2090 words

Speech time

869 seconds

Power asymmetries in global forums

Explanation

Paola Galvez highlights the issue of power asymmetries in global internet governance forums. She argues that even when all stakeholders are present, agendas can be shaped by wealthier nations or larger corporations.


Evidence

She mentions that smaller civil society organizations may not be aware of or able to participate in bilateral meetings that occur during these forums.


Major Discussion Point

Challenges in Internet Governance and Accessibility


Leveraging networks and alliances

Explanation

Paola Galvez emphasizes the importance of networks and alliances in improving participation in internet governance forums. She argues that these collaborations can help bring unified voices and new perspectives to the discussions.


Evidence

She provides examples of Al Sur, a coalition of digital rights NGOs in Latin America, and her work with the Center for AI and Digital Policy Community.


Major Discussion Point

Strategies for Effective Multi-stakeholder Engagement


Agreed with

Peace Oliver Amuge


Ananda Gautam


Agreed on

Need for multi-stakeholder collaboration


Providing more funding opportunities for participation

Explanation

Paola Galvez argues for the need to provide more funding opportunities for participation in internet governance forums. She emphasizes the importance of in-person participation for meaningful engagement.


Evidence

She shares a personal experience where she advocated for funding to attend a forum in person rather than participating online.


Major Discussion Point

Improving Global Majority Representation


Agreed with

Peace Oliver Amuge


Ananda Gautam


Agreed on

Funding challenges for participation in global forums


Ensuring meaningful in-person participation

Explanation

Paola Galvez stresses the importance of ensuring meaningful in-person participation from Global South representatives. She argues that there is no substitute for having voices from the Global South present in person at these forums.


Evidence

She mentions the importance of informal discussions during coffee breaks and lunches for sharing needs and challenges.


Major Discussion Point

Improving Global Majority Representation


Agreements

Agreement Points

Importance of capacity building and education

speakers

Peace Oliver Amuge


Ananda Gautam


Paola Galvez


arguments

Lack of capacity that exists among the different stakeholders


Need for capacity building among stakeholders


Lack of knowledge and experience


summary

All speakers emphasized the need for capacity building and education among various stakeholders to improve participation in internet governance processes.


Funding challenges for participation in global forums

speakers

Peace Oliver Amuge


Ananda Gautam


Paola Galvez


arguments

Funding opportunity because this will also facilitate the capacity building


Better alignment of existing funding


Providing more funding opportunities for participation


summary

All speakers highlighted the importance of addressing funding challenges to ensure meaningful participation from diverse stakeholders in global internet governance forums.


Need for multi-stakeholder collaboration

speakers

Peace Oliver Amuge


Ananda Gautam


Paola Galvez


arguments

Bottom-up approach starting from grassroots level


Engaging government and private sector support


Leveraging networks and alliances


summary

All speakers agreed on the importance of multi-stakeholder collaboration and engagement in internet governance processes, emphasizing the need for inclusive participation from various sectors.


Similar Viewpoints

Both speakers emphasized the importance of ongoing, meaningful engagement with policymakers and ensuring in-person participation from Global South representatives in internet governance forums.

speakers

Peace Oliver Amuge


Paola Galvez


arguments

Continuous engagement with policymakers


Ensuring meaningful in-person participation


Both speakers highlighted the challenges of internet access and capacity building in developing regions, particularly in Africa and Nepal.

speakers

Peace Oliver Amuge


Ananda Gautam


arguments

Lack of internet access in many regions


Need for capacity building among stakeholders


Unexpected Consensus

Importance of local context and representation

speakers

Peace Oliver Amuge


Paola Galvez


Ananda Gautam


arguments

Fragmented efforts and lack of harmonized agenda


Ensuring meaningful in-person participation


Engaging government and private sector support


explanation

All speakers unexpectedly agreed on the critical importance of considering local context and ensuring genuine representation from diverse regions in internet governance processes, despite their different geographical backgrounds and areas of expertise.


Overall Assessment

Summary

The speakers showed strong agreement on the need for capacity building, improved funding mechanisms, multi-stakeholder collaboration, and meaningful representation from diverse regions in internet governance processes.


Consensus level

High level of consensus among the speakers, implying a shared understanding of key challenges and potential solutions in improving global internet governance. This consensus suggests that these areas could be focal points for future initiatives and policy development in the field of internet governance.


Differences

Different Viewpoints

Prioritization of internet access vs. other governance issues

speakers

Peace Oliver Amuge


Ananda Gautam


arguments

Lack of internet access in many regions


Need for capacity building among stakeholders


summary

Peace emphasizes the primary importance of internet access, particularly in Africa, before addressing other governance issues. Ananda, while acknowledging access, places equal emphasis on capacity building for effective participation in governance.


Unexpected Differences

Overall Assessment

summary

The main areas of disagreement revolve around prioritization of issues (access vs. capacity building) and strategies for funding and stakeholder engagement in internet governance.


difference_level

The level of disagreement among the speakers is relatively low. Their perspectives are largely complementary, focusing on different aspects of the same overarching goals. These minor differences in approach could actually lead to a more comprehensive strategy for improving internet governance and accessibility in the Global South if integrated effectively.


Partial Agreements


Both Paola and Ananda agree on the need for improved funding for participation in internet governance forums. However, Paola emphasizes creating new funding opportunities, while Ananda argues for better alignment of existing funds.

speakers

Paola Galvez


Ananda Gautam


arguments

Providing more funding opportunities for participation


Better alignment of existing funding


All speakers agree on the need for broader stakeholder engagement, but propose different strategies: Peace advocates for a bottom-up approach, Ananda emphasizes government and private sector involvement, and Paola focuses on leveraging existing networks and alliances.

speakers

Peace Oliver Amuge


Ananda Gautam


Paola Galvez


arguments

Bottom-up approach starting from grassroots level


Engaging government and private sector support


Leveraging networks and alliances



Takeaways

Key Takeaways

Internet accessibility remains a major challenge, especially in developing regions


Capacity building is needed across all stakeholder groups, not just civil society


Multi-stakeholder collaboration and a bottom-up approach are crucial for effective internet governance


More inclusive and creative funding mechanisms are needed to ensure meaningful participation from the global majority


Local context and tailored approaches are important when addressing internet governance issues


Resolutions and Action Items

Leverage existing structures like national and regional IGFs to foster multi-stakeholder dialogue


Create more inclusive mechanisms to equalize power dynamics in global forums


Develop strategies to engage parliamentarians and policymakers in internet governance discussions


Seek out and support youth initiatives in internet governance


Unresolved Issues

How to effectively engage private sector stakeholders in internet governance processes


Ways to ensure government support and participation in multi-stakeholder forums


Methods to harmonize fragmented efforts across different stakeholder groups


Strategies to address power asymmetries in global internet governance discussions


Suggested Compromises

Accepting advisor participation when parliamentarians cannot attend in person


Balancing online and in-person participation to increase inclusivity while recognizing the importance of face-to-face interactions


Reallocating existing funding rather than seeking new sources to support global majority participation


Thought Provoking Comments

I think to me, what would be important is to, first of all, unpack what we mean by free, open, you know, Internet, you know, what does that mean to us? What does that mean to the different stakeholder groups that we have, to the different communities that we are talking about, the different context?

speaker

Peace Oliver Amuge


reason

This comment highlights the importance of clearly defining terms and considering different perspectives before diving into solutions. It sets the stage for a more nuanced discussion.


impact

This shifted the conversation to focus more on the specific needs and contexts of different regions and stakeholders throughout the rest of the discussion.


I believe we are several actors and we need to work as a community with coalition building to bring the voices to the ones that are making the decision. They are on the table. Second, try to bring localized narratives. It can feel abstract sometimes when we think about these global discussions, right? But they are absolutely important for our national and local realities.

speaker

Paola Galvez


reason

This comment provides concrete strategies for improving engagement and impact in global internet governance discussions. It emphasizes the importance of both coalition-building and localizing global issues.


impact

This comment sparked more discussion about specific ways to engage local stakeholders and translate global issues to local contexts throughout the rest of the conversation.


One of the major challenges this kind of forums are not binding, it doesn’t have a kind of ripple effect, you know, people are not bound, people are like any stakeholder is not bound to implement the takeaways, but if we have any mechanism that we could, that governments could uphold the values that are taken away from this forum is very important.

speaker

Ananda Gautam


reason

This comment identifies a key challenge in translating forum discussions into real-world impact. It raises important questions about accountability and implementation.


impact

This led to further discussion about ways to increase the impact and accountability of global internet governance forums.


I appreciate when the Global North want to help us. But there is I don’t think there’s a replacement of Global North representative speaking for us. So I would like to reiterate the importance of being more creative and getting the funds to bring the voices of Global South representative, because there’s nothing more important than having them.

speaker

Paola Galvez


reason

This comment directly addresses power dynamics in global discussions and emphasizes the importance of direct representation from the Global South.


impact

This comment shifted the discussion to focus more on specific ways to increase meaningful participation from Global South representatives, rather than just having others speak on their behalf.


Overall Assessment

These key comments shaped the discussion by consistently bringing the focus back to practical, actionable strategies for improving global internet governance. They emphasized the importance of clear definitions, local context, accountability, and direct representation from the Global South. This led to a rich discussion that balanced high-level principles with specific, on-the-ground realities and challenges.


Follow-up Questions

How can we unpack and clarify what is meant by a free, open Internet in different contexts and for different stakeholder groups?

speaker

Peace Oliver Amuge


explanation

This is important to establish clear priorities and agendas for advancing Internet governance in different regions.


How can we improve access to meaningful connectivity, especially in regions like Africa where many are still unconnected?

speaker

Peace Oliver Amuge


explanation

This is crucial for ensuring equitable participation in Internet governance discussions and benefits.


What mechanisms can be developed to track and monitor implementation of commitments made during global Internet governance fora?

speaker

Paola Galvez


explanation

This would help ensure accountability and progress on agreed-upon goals.


How can we address the ‘AI divide’ alongside the digital divide?

speaker

Ananda Gautam


explanation

This is an emerging concern as AI becomes more prevalent in technology and governance.


What strategies can be employed to make global Internet governance discussions more binding or impactful at national levels?

speaker

Ananda Gautam


explanation

This would help translate global discussions into concrete actions and policies.


How can we improve coordination between UN agencies and national/regional Internet governance initiatives?

speaker

Ananda Gautam


explanation

Better coordination could lead to more effective implementation of Internet governance principles.


What are effective ways to build capacity among different stakeholder groups, including judiciary, parliament, and law enforcers?

speaker

Peace Oliver Amuge


explanation

This is necessary to ensure all relevant parties can meaningfully participate in Internet governance discussions.


How can we create more harmonized agendas and strategies across different levels (national, regional, global) of civil society engagement?

speaker

Peace Oliver Amuge


explanation

This would help create a more unified and impactful civil society voice in Internet governance.


What research is needed to map existing stakeholders, knowledge, and resources in Internet governance across different levels?

speaker

Peace Oliver Amuge


explanation

This would help identify gaps and opportunities for more effective collaboration and resource allocation.


How can we better align existing funding in Internet governance to ensure it reaches those who need it most?

speaker

Ananda Gautam


explanation

This could lead to more effective use of limited resources and better representation in Internet governance discussions.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.