Open Forum #30 Harnessing GenAI to transform Education for All

Session at a Glance

Summary

This panel discussion focused on the impact of generative AI on education, particularly from a global perspective. The panelists, representing diverse backgrounds including academia, law, and policymaking, explored various aspects of AI’s integration into educational settings.

Key topics included the challenges of detecting AI-generated content in academic work, with conflicting views on the effectiveness of current detection tools. The discussion highlighted concerns about academic integrity and the need to adapt assessment methods to account for AI use. Panelists emphasized the importance of teaching students to use AI tools responsibly and ethically, rather than simply trying to prevent their use.

The conversation also addressed the potential for AI to exacerbate existing digital divides between the Global North and South. Panelists stressed the need for equitable access to AI technologies and the importance of capacity building in developing countries. They discussed strategies for integrating AI into curricula and teacher training programs in the Global South.

Legal and ethical considerations were explored, including copyright issues related to AI training data and the need for clear guidelines on AI use in academic settings. The panel also touched on the potential benefits of AI in making educational resources more accessible in developing countries.

The discussion concluded with reflections on how Africa can benefit from and contribute to AI development in education, highlighting initiatives like the Nelson Mandela African Institution of Science and Technology. Overall, the panel emphasized the need for a balanced approach to AI in education, recognizing both its potential benefits and challenges.

Key points

Major discussion points:

– The use of generative AI in education, including benefits and challenges for teachers and students

– Detecting AI-generated content in academic settings and issues around academic integrity

– Intellectual property concerns related to training and using generative AI

– The digital divide between the Global North and South in access to and use of AI tools

– Strategies for ethically integrating AI into education, especially in developing countries

Overall purpose:

The goal of this discussion was to explore the impacts of generative AI on education from multiple perspectives, including technical, ethical, legal, and policy viewpoints. The panel aimed to consider both opportunities and challenges, with a focus on implications for developing countries.

Tone:

The overall tone was analytical and solution-oriented. Panelists offered critical perspectives on current approaches but also proposed constructive ideas for moving forward. There was a shift towards the end to focus more on opportunities and capacity building in the Global South, ending on a more optimistic note about the potential for AI to enhance education globally if implemented thoughtfully.

Speakers

– Jingbo Huang: Director of United Nations University Research Institute in Macau

– Antonio Saravanos: Associate Professor of Information System Management, New York University

– Eliamani Isaya Laltaika: Judge of the High Court of Tanzania, faculty member at the Nelson Mandela African Institution of Science and Technology

– Mike Perkins (SFHEA): Associate Professor and Head of the Center for Research and Innovation, British University Vietnam

– Mohamed Shareef: Director of Government and International Relations at OCSICA, Former Minister of State from Maldives

Additional speakers:

– None identified

Full session report

Expanded Summary: The Impact of Generative AI on Education – A Global Perspective

This panel discussion, organized by the UN University, brought together experts from diverse backgrounds to explore the multifaceted impact of generative AI on education, with a particular focus on global implications. The panelists, representing academia, law, and policymaking, delved into the challenges and opportunities presented by AI integration in educational settings worldwide.

Use of Generative AI in Education

The discussion began with the acknowledgement that generative AI is already being widely used in educational contexts. Mohamed Shareef presented findings from the Maldives, where educators are already utilizing AI tools but often lack the necessary knowledge and training to do so effectively. He noted differences between K-12 and higher education teachers in their approach to AI, with the latter group showing more openness to its use. This highlighted a crucial need for capacity building and professional development in AI literacy for educators.

Antonio Saravanos shared his teaching approach, which focuses on helping students understand AI’s capabilities and limitations. He emphasizes the importance of reframing generative AI as a tool for deeper understanding, rather than simply a means of producing answers. Saravanos encourages students to critically evaluate AI-generated content and use it as a starting point for further research and analysis.

The reliability of AI detection tools in academic settings emerged as a contentious point. Mike Perkins argued strongly against the use of current AI detection tools, explaining that they are not sufficiently reliable for accusing students of plagiarism. He highlighted the potential harm to students’ academic careers and the risk of false positives, emphasizing the need for more nuanced approaches to assessment and evaluation in an AI-enabled world.

Generative AI and the Global South

A significant portion of the discussion centered on the implications of generative AI for the Global South. Mohamed Shareef raised concerns about the potential for AI to exacerbate existing digital divides between the Global North and South. He used a vivid metaphor, comparing generative AI to “Red Bull” that gives “digital transformation wings”, highlighting the risk of widening gaps between those with and without access to these powerful tools.

To address these challenges, Antonio Saravanos advocated for the development of local AI tools and solutions in the Global South to avoid dependence on technologies from the Global North. This approach could help build local capacity and ensure that AI solutions are tailored to specific regional needs and contexts.

Mike Perkins proposed an AI assessment scale framework as a potential tool for ethically integrating AI into education in the Global South. This framework provides a structured approach for educators to introduce AI tools in a manner that maintains academic integrity and promotes equitable access, considering factors such as AI literacy, infrastructure, and cultural context.

Eliamani Isaya Laltaika, representing the Nelson Mandela African Institution of Science and Technology, emphasized the need for partnerships between Africa and the Global North to build AI and STEM capacity. He highlighted the potential benefits of AI in making educational resources more accessible in developing countries, noting how AI could help overcome barriers to accessing international research publications.

Changing Educational Approaches for AI

The panelists agreed on the need to adapt educational approaches to effectively incorporate AI. Saravanos argued that educators should focus on teaching students to use AI effectively, rather than attempting to ban its use. This approach acknowledges the inevitability of AI in education and the workplace, preparing students for a future where AI literacy will be crucial.

Perkins stressed the importance of redesigning assessments to focus on skills that AI cannot replicate, such as critical thinking, problem-solving, and creativity. This shift in assessment strategy could help maintain the relevance and integrity of education in an AI-enabled world.

Laltaika called for the development of ethical guidelines for AI use in education at all levels. Such guidelines could help address concerns about academic integrity, copyright, and equitable access to AI tools.

Shareef advocated for the integration of AI and digital literacy into curricula across disciplines. This approach would ensure that students are prepared to navigate an increasingly AI-driven world, regardless of their field of study.

Legal and Ethical Considerations

The discussion also touched on important legal and ethical considerations surrounding AI in education. Laltaika highlighted the need to revisit copyright frameworks to address the use of copyrighted material in AI training. This issue raises complex questions about intellectual property rights in the age of AI and requires careful consideration to balance the needs of content creators and AI developers.

Unresolved Issues and Future Directions

While the discussion provided valuable insights and potential strategies for integrating AI in education, several unresolved issues remain. These include:

1. Developing effective and ethical methods for evaluating AI-generated content in academic work

2. Ensuring equitable access to advanced AI tools in the Global South

3. Balancing copyright protections with the use of copyrighted material in AI training

4. Retaining skilled AI professionals in developing countries

The panelists suggested several action items to address these challenges, including:

1. Developing ethical guidelines for AI use in education at institutional and national levels

2. Integrating AI and digital literacy into curricula across disciplines

3. Redesigning assessments to focus on skills AI cannot replicate

4. Investing in IT infrastructure in developing countries to improve access to AI tools

5. Establishing public-private partnerships and collaborations between the Global North and South for AI capacity building

In conclusion, the panel emphasized the need for a balanced approach to AI in education, recognizing both its potential benefits and challenges. The discussion highlighted the importance of global collaboration, ethical considerations, and adaptive educational strategies to ensure that the integration of AI in education promotes equity, enhances learning outcomes, and prepares students for an AI-enabled future.

Session Transcript

Jingbo Huang : Good. Channel 2. Okay, let's start. PowerPoint, please. Okay, so welcome. Good afternoon, everyone. I hope everybody had a good lunch and a good coffee break. So today's session is about generative AI and education: how can generative AI transform education for all? We take a different approach, a system approach, because for the issue of generative AI there are different perspectives to look at it. There's the technical perspective and so on; from our end, it's more of a whole-society and multi-stakeholder approach. I'll explain to you why I say it this way. But first, before I talk about that, I have to introduce my organization. That's my job. My name is Jingbo Huang, the director of a United Nations University Research Institute in Macau. How many of you have heard of UNU, the UN University? Oh, one, two. Great. Three. Wonderful. Thank you. So UN University headquarters is in Tokyo. We have 13 research institutes in 12 different countries. We are the UN, but we also have an identity which is academic; that's why we do research and we do training and education at UN University. 13 research institutes in 12 countries, with different institutes covering different expertise; as you can see on the map, those are the locations of our institutes. The institute that I'm heading is UNU Macau. It specializes in digital technologies and the Sustainable Development Goals. We have been around for more than 30 years. Recently, we have been working more toward AI, AI governance, AI ethics, et cetera, in addition to digital tech with women and gender, cybersecurity, growing up online, et cetera. So we have a huge portfolio, and if you're interested, we can talk later. And today's approach, as I mentioned, is a system approach, a multi-stakeholder approach, to talk about generative AI and education.
So when we talk about generative AI, we certainly will look at the system itself. But beyond the system, the technical background is, I would consider, usually less important than people. People have to be at the center. So let's look at the people picture. On the right side, you can see that there are teachers, certainly, and teachers have been using and taking advantage of the tools, trying to develop personalized education with generative AI. We also have learners, the students, and maybe nowadays we talk about lifelong learners; actually, everybody has been using it. We also look at the schools and school administrators, for example universities, because generative AI transforms how we learn and how we teach drastically, and raises the question of what kind of curriculum would be relevant. How do we train people for the future generation? Those are the questions that university administrations need to think about. And if we look at the bigger outer ecosystem, we also need to look at the policymakers, the Ministry of Education, for example, and also the regulators. Today, of course, there are parents; I think some of you, being parents, understand that sometimes you want to know what your children are doing online and with generative AI. We also have the technology companies, and they are actually the ones who develop the technologies. With this people map, I'm very happy to introduce to you our panelists, because we represent actually all the roles here; we also have researchers. So this is what I mean by a system approach, or a whole-society approach, to discuss generative AI and education. So I would like to introduce our wonderful panel, in alphabetical order. First is Antonio Saravanos, and later you will see him online. He's an associate professor of information system management from New York University.
And sitting next to me is Dr. Eliamani Laltaika; he's a judge of the High Court of Tanzania, he's also a faculty member of Nelson Mandela University, and he's from Tanzania. Then we have Professor Mike Perkins. Sorry, how do I pronounce it? Yeah, that's fine. Okay. He's an associate professor and head of the Center for Research and Innovation, British University Vietnam. And we have Mr. Mohamed Shareef; he is the director of government and international relations at OCSICA, and he's also a former Minister of State from the Maldives. Unfortunately, Dr. Kaohsiung cannot make it today, so we'll stay with a five-person panel. Let me take a seat. We will have our presentations, and then later I would highly encourage you to interact with us, ask us questions, and share your best practices with us. So later I will invite you to speak. And so first I would like... Can you still hear me? Yeah. Mine is breaking up; I cannot hear myself. The first set of questions I would like to ask Mohamed. Mohamed has been a state minister, a researcher, a higher education administrator, and now a private sector leader. Recently you have been working closely with K-12 educators in the Maldives. Would you please share how the educators in the Maldives use generative AI in their classrooms, and what concerns and difficulties do they encounter?

Mohamed Shareef: Thank you. Well, let me start by thanking you for your presence here. I know there are many, many sessions going on out there, but you've come here to hear from us. As Jingbo alluded to, I had the opportunity during the last year, year and a half, to interact with educators in the Maldives, mostly K-12 educators, but also faculty from the main universities in the Maldives. Over the last year and a half, there has been an increasing interest among educators in generative AI. Now, for practitioners like myself, and many of you, I'm sure, generative AI has really sparked this interest, and everyone's looking to see how they can be supercharged with AI. And the same is true in the Maldives. Now, the Maldives, you may not know, is a small island developing nation. It's an upper-middle-income nation, so it's not a least developed nation, but it is challenged in technology adoption. Over the last year, I interacted with about 270 educators. And the first thing I asked is, are you familiar with it? And about 50% of them said, yeah, I have some idea what generative AI is. Maybe they're not so familiar with what generative means, but they kind of have a sense of, okay, this is something they need, something they want. But then I asked, do you use it, right? And what I found was quite surprising: nearly 85% of educators in the Maldives already use it, but not as I hoped they would. So, what do you do with it? I asked them. Oh, AI can make beautiful slides. This is the first thing, because creating slides is a big headache, I guess, for educators. And with AI, you can just give it your notes and it will create the bullet points. And if you have better AI, it will even put it all into PowerPoint or whatever tool you want. So the idea is that teachers are already taxed in terms of the time they have.
But what I found interesting was that about 15% of educators, both in higher education and in K-12, were already using generative AI on a daily basis to teach or to aid them in their teaching duties. And this was surprising, because I didn't really expect that they would be using this outside of, say, casual exploratory work. But what is even more surprising is what happened when I asked them, what are your concerns? I thought they would be concerned about generative AI because generative AI could replace them. But when I asked K-12 teachers, their first concern was that they don't have the knowledge to leverage generative AI, and they don't have the training opportunities to upskill themselves. Their second concern was access: access to AI is limited. And their third concern was accuracy. Now, they are the teachers, so when they create something, they know when it's not accurate. But imagine a math teacher trying to teach something in English. So they are really concerned about the accuracy of AI. And at the bottom of the list: maybe only 2% of those 270 educators told me they have any concern about being replaced. I think at the top of their minds they have already decided there is no risk of being replaced, but they can see very clearly that there is. And when I asked the higher education faculty, there was a contrast. The top concern for them is plagiarism and cheating, whereas plagiarism and cheating is like the fifth or sixth concern for K-12 teachers. For the higher education faculty, the concern is: how am I going to assess these students when they are all trying to pass off AI work as their own? So there is definitely a lot of concern.
But there is also a lot of general demand. In the Maldives, there are two things that educators are looking for: one, AI itself, and two, how to keep children safe online, cybersecurity. These topics have been high in demand, and I think they go hand in hand. So this is the view from a developing country, and I see a lot of scope here, especially given that educators put their own capacity at the top of their concerns, along with the opportunities students have to plagiarize and the question of how we can actually assess students who use AI. Thank you.

Jingbo Huang : Thank you, Mohamed. This is very interesting. Technician, can you please bring up the Zoom? We will invite our second speaker. Since Mohamed mentioned plagiarism and how faculty members are going to assess their students, let's invite Dr. Antonio Saravanos. As a professor, researcher, and computer scientist, you can probably easily discern when your students submit work produced by generative AI, ChatGPT, for example. How do you teach your students generative AI judgment so they can better use generative AI to enhance learning? Antonio, please.

Antonio Saravanos: So you bring up an excellent point, right? Unfortunately, it's quite easy to detect the use of ChatGPT or another artificial intelligence, specifically an NLP, natural language processing, tool at the novice level, at the student level. For example, thinking back, this semester I was teaching an intro to programming course, and I would repeatedly see submissions where the solution used elements of the language I was teaching, Python, that we hadn't yet covered. And then when you have a discussion with the students, it's clear that they don't really understand the material. So it's easy to catch them. And there are many, many ways to catch the use of AI. For example, when students submit essays, you see them citing resources that don't exist; it's quite common for ChatGPT to just make up references. So I think someone more experienced can kind of catch them out. As an educator, I recognize that the rise of gen AI tools like ChatGPT is both a challenge and an opportunity in an academic environment. The teaching approach that I have adopted focuses on reframing the challenges as opportunities in order to empower students, guiding them to use gen AI not as a shortcut for producing answers, but as a tool to deepen their understanding, creativity, and problem-solving abilities. Because whether we want it or not, when they go into industry and leave the university, they'll be relying on this tool. So they need to be able to use it effectively. I have many dimensions to this, and we're a bit short on time, so I would say my foundation begins with helping the students understand the capabilities and limitations of generative AI. The first thing is to make sure that they understand that AI tools aren't some omnipotent source of knowledge and that there are inherent flaws. We need to begin with that.
And then once they have that, we can move forward. To illustrate this, I'll present case studies in class where the AI outputs contain mistakes, biases, fallacies, and these examples become teaching moments, first emphasizing the importance of the human element. So I may have students generate a solution to a coding problem with ChatGPT, and then the class goes over and critiques the solution with me, identifying mistakes. But you could even generalize this exercise: anything where you have a gen AI response being compared to some authoritative source, like a peer-reviewed article, highlighting discrepancies and the challenge of identifying what the AI might have produced that is flawed or incomplete. So this is what the AI gave us; how do we tell that there's a mistake there? Generating these metacognitive abilities, thinking critically, is where it's at. Hopefully this answers the question.

Jingbo Huang : Thank you, Antonios. And so Antonios has been incorporating and embracing generative AI in his teaching. Let's move on to Mike. One of your research interests is academic integrity. Would you please share with us some strategies for detecting AI-generated content in academic settings? Are the current tools effective? And what are your insights on the responsible and ethical use of gen AI tools in academia? Thanks very much.

Mike Perkins: I’m just going to start off by saying, you know, how can we detect it? You can’t. And I’m going to disagree with what Antonios has said there. Educators cannot effectively detect the use of gen AI tools. There have been several studies which have demonstrated this. Earlier this year, a University of Reading study found that 94% of test submissions which were produced using gen AI sources were not detected during the marketing process. I’ve carried out experiments earlier than this. We created a series of gen AI produced assessments. using GPT-4. We then submitted these into the piles of all of the faculty marking them. We gave them the generative AI detection tools and we said, just tell us if you spot any tools that have been used, any assessments that have been created using AI. Performance extremely low in terms of people being able to pick this up. Some of the comments that you do hear people saying, and Antonio’s mentioned about Chat GPT making up fake sources. Originally, Chattyptee 3.5, yeah, that was true. It’s getting less and less true now. And now we have new tools such as Google Research released last week, which actually carries out an agent-based search, creates a literature review from real web sources, and will produce a full literature review for you. So this sort of story that AI tools, you can always tell that they’re going to, when we can detect them, it’s simply now not true. I would really strongly recommend to say, if you think you’re spotting a piece of work that you think is being created through gen AI, you may be wrong. Now you might say, well, okay, what I’ve got, I’ve got some, I’ve got a, I’ve got an AI detection tool. I’ve got zero GPT, I’ll turn it in. Also, wrong. Other research that I’ve carried out and many researchers in the academic and technical field, there’s now actually a consensus that these tools are not suitable for accusing students of committing plagiarism, as we say. 
Now you might say that, well, these tools, these software companies tell me that they’ve got a 98% accuracy rating. Okay, so you have a thousand students. How many students are you going to accept that you falsely accuse of plagiarism? And you mark them as zero, you make them redo a course, they maybe fail an assessment, they maybe have to drop out of university. Is that acceptable to you? Certainly not to me. And the research that I’ve been carrying out, it really highlights time and time again, that it’s actually the students who are at most risk of being in a precarious situation at their institution. Maybe they’re neurodivergent, maybe they’re English as a second language speakers. And these are the students who write in this style that people say, oh, that’s Gen AI. People write in lists, or people, you know, write in a certain sort of structured way, that yes, sometimes Gen AI tools do kind of replicate. But this is because they’re standardized forms of producing text. And especially when you’re an ESL speaker, you have often been taught in this particular way of using certain words in a certain format. So what you end up doing is you say, these students have been caught using Gen AI tools and they’re cheating. And they haven’t. But then they suffer some really severe consequences. We’ve also got to really consider broader issues of inclusivity and equity and access for these tools. Because you can make Gen AI output, even if this is detected as Gen AI produced, with a few simple techniques, you can turn this into text that is not going to be detected through any AI text detector. And we carried out this research. We created pieces using Gen AI tools. We tested them against the seven most popular and most research backed AI text detectors. And we found out that simply they were very low accuracy to begin with. 44% accuracy rating for unchanged text. But if you’re a student who’s wanting to cheat to get away with something, that’s not how you use AI. 
You don’t just copy and paste your prompts and throw that at the teacher and say, there you go. But if you do, you’re probably a struggling student who needs more support. It doesn’t need to be told, you’ve cheated, we’re going to throw you out of university now. But you give me 15 minutes and a piece of text, and I will make that text completely undetectable. Might be a thousand words, might be 2,000 words. We demonstrated that with a few simple prompts in terms of integrating these directly into our created prompts without manual editing, just by saying something like, write this in a more complex way. Add some spelling errors to this. Make this less complex. Make this sound more human. Add some verseness to it. Change the sentence length. Change the paragraph length. What you’re doing here is actually causing temperature changes in the underlying model. Now, if you have API access, you can actually set temperatures for the model, and you’ll find a higher temperature will give you a higher variation. We’re talking about stochastic models here, which try and predict what’s going to be the next word in the sequence. But if you just change that up and you add in some additional words, you rewrite some sections, you’re not going to get this detection that is going to be acceptable in really any formal academic integrity process. Now, if you take a look at the Guardian or Observer yesterday, I was quoted in there actually talking about this subject, and it’s a really interesting article, which talks exactly about these challenges. It’s the students who get falsely accused, and these are the ones who are struggling. Or it’s students who do admit to, you know, taking some shortcuts, but is that their fault? If they’re using ChatGPT or other Gen AI tools to do the assessment, why haven’t you changed your assessment? Why haven’t you changed your assessment to account for these tools? They’ve been out for two years now. What’s going on? 
So, I think there’s some really big changes that need to be made in education more broadly to recognize these tools and see how we can actually integrate them. Thank you. Thank you, Mike. Thank you. So, we’ve been talking about academic integrity. Now, let’s move on to our lawyer, Eddie Amani. We’re an expert in intellectual property. Gen AI tools can be trained by items protected by IP. There is a significant legal uncertainty whether AI tools, their training, use, and outputs represent IP infringements. What are some implications on education? Thank you very much.

Eliamani Isaya Laltaika: Thank you very much, Dr. Huang, for that question. Just before I go into the IP question, I want to appreciate the last speaker, the professor, for really opening this up. As a judge, I was trying to imagine getting a case where a student is suing the university after being accused of using gen AI to produce their thesis, and they need their PhD: I cannot graduate because this work is said to be from gen AI. I'm a judge; I need to do justice not only to the student, but to the university and to the universe. And, surprisingly, the professor has said it's impossible to detect. That's an interesting entry point into what I want to say, because we are hyping the AI thing so much, to the extent that we lose the things that we were working for and try to reinvent the wheel. Copyright has been at the center of education, and copyright is at loggerheads with gen AI for several reasons. First, there is a saying in my language which means: whenever you come across something very impressive, you should be sure that someone has toiled to make it so. So any time you put a prompt into ChatGPT or any other generative AI and you get a wonderful text that meets your expectations, to the extent that it cannot even be detected, like the professor said, you should be sure that it borrows heavily from what existed before. So the data that has been used to train ChatGPT and other AIs is allegedly data in which copyrighted work was heavily violated. So many people think: yes, this looks like a chapter in my book. I'm a professor of sociology, and somebody has just put in a prompt, and out comes the whole five, six years of my work. So cases are going to court, with people saying ChatGPT has violated copyright. And that has an impact on education. Secondly, there's an issue of attribution and ownership. Academics are known all over the world for generously acknowledging other people's work.
You’ll find, in one paper I was reading on my way here, that the professor has cited so many works; half a page is footnotes, trying to say this is from so-and-so. We are not seeing that in ChatGPT. It does not say: this comes from Dr. Huang’s university in Macau. And thirdly, there are the overly restrictive laws that are coming as a result of ChatGPT and other AIs. We are now being very reactive, moving away from the established principles of copyright towards extremely restrictive laws. What can be done to strike the right balance? These are just my opinions; they are not binding. Some people think whatever a judge says is binding, but here I am speaking just as an academic, because, as I was introduced, I still teach: I was appointed to the bench from academia and I still retain my position. Number one, we must revisit copyright frameworks to ensure that there is some sort of remuneration for those whose works have been used to train AI, and we should encourage open licensing. If your work is being used out there, you should be compensated. I very much support monetary compensation for authors and creatives, because if we do not do that, we are shooting ourselves in the foot. Computers and AI will continue to draw on our literary and cultural texts for a long time. I also think there is a need to establish ethical guidelines at every level: at the university level and at the ministry level, as the former minister has wonderfully stated. He has spent a long time with educators, because he speaks like one of them on the challenges of preparing slides and so on. I also think there should be promotion of public-private partnership, because the government cannot succeed alone by trying to enact laws; it is only by learning from those who are directly affected by these laws and regulations that we get it right.
I need to say just a little bit, for the next two minutes or so, about how AI is a blessing in disguise in developing countries. In Tanzania, for example, researchers get money from the state and then publish in international journals that you cannot access. I can see, through Google, a paper published by my professor, but I am required to pay to access it. It is data from my country; it is the knowledge of my professor, who was trained by taxpayers from my country. The company has restricted this knowledge from me, so there was no equity. Now I can see AI coming forcefully and saying: I do not want to promote these laws; I want equity. Yes. Thank you very much.

Jingbo Huang : Eliamani has already started on issues related to generative AI in the Global South. In the second round of questions, we will focus more on the perspective of the Global South. As you know, in the UN, leave no one behind is a central value. So let’s look at gen AI from the Global South perspective. The first question will go to Mohamed. You used to be a policymaker in the Maldives. Would you please reflect on how generative AI has created digital divides between the Global North and the Global South? What should policymakers consider to help reduce the divides and promote more equitable access to gen AI, particularly in small island countries like the Maldives? The potential of gen AI is undoubtedly huge.

Mohamed Shareef: Today, everyone expects, and should rightly expect, gen AI to support us in achieving the Sustainable Development Goals. If not for AI, how would we have survived the pandemic? AI is already supporting us in the darkest of our times. What I fear most is that AI is a new front that has opened for transformation practitioners working in the Global South. This is a new front in the war on the digital divide, because we are already facing a lot of challenges trying to catch up with the rest of the world. Now, suddenly, there is a Red Bull in the mix. For me, gen AI is like a Red Bull: it gives digital transformation wings. Suddenly, it’s sprinting. Those who can have this Red Bull, how are you going to catch up with them? In the Global South, we are only just getting a grip on this Red Bull. What can we do? There are actually three aspects to the digital divide. It is multifaceted, but the three aspects we talk about are access to technology, economic disparities, and educational gaps. On all three, the developed world has a huge advantage. The developed world is investing its wealth today in AI and, in particular, in generative AI. Not only that, there is a huge educational gap, because they are investing in the ecosystems in the developed part of the world. We are losing even our smartest brains to the developed world, which further exacerbates the educational gap. Then again, access is extremely limited. I’m sure you have already heard that 26% of the globe still remains offline, and 50% of that is in the Asia-Pacific. Island communities are particularly impacted. What would I suggest, as somebody who has long been a practitioner in digital transformation in the developing part of the world? I would say we have to find a narrative in which even developing countries, even the least developed countries, prioritize investment in IT infrastructure.
In the Maldives, over the last five years, we’ve gone from having just one submarine cable connecting us to the rest of the world to five submarine cables. We’ve gone from geostationary satellite internet to real broadband internet. We’ve invested a lot to make sure that we are connected in every way possible, from under the water and from the sky down. But then you’ve got to make sure AI and digital literacy are embedded in the curricula. This is extremely important, but it is extremely hard as well. I am actually working with higher education in particular to develop AI modules that are multidisciplinary and taught to every student. From nurses to finance specialists, AI needs to be taught. And then we’ve got to develop an ecosystem where we can retain our smartest rather than lose them to the West. So governments and private industry need to partner; we alone cannot do it, as His Excellency has pointed out. And of course, nation states as well as educational institutions should come up with policies for the ethical use of AI. We cannot just jump into AI without proper national and institutional guidelines that safeguard our data and our privacy. And finally, we cannot do it alone. This is why we are very glad to be working alongside institutions like the UNU. We’ve got to work together if we are going to bridge the AI divide and not let it divide us even further. Thank you.

Jingbo Huang : Since Mohamed already mentioned capacity building, the next question, for Antonio, would be: if you were teaching teachers from the Global South to integrate generative AI tools into their classrooms, what would you tell them about the technical nature of gen AI tools to help them understand the benefits and limitations? So we are talking about teacher education in the Global South.

Antonio Saravanos: So, an excellent question. I would begin by highlighting that there are two sides to the tool. On the one hand, we have the solution being used by teachers to make them more productive, which other panelists also mentioned: generating slides, generating these types of resources, perhaps assessments, and so on. In that sense, it’s quite powerful, and there needs to be training for that. The other half is: how can it be used in assignments, so that the students use it to make themselves more productive? So one point is understanding that there are these two dimensions. The other, again mentioned by other panelists, is the digital divide. Luckily, there are free tools that one can use, but it’s important to recognize that they’re restricted. This is a bit tangential, but it would be quite wise for academics to work together to figure out ways to gain access to the more advanced solutions, and also to develop their own local tools that might not be as limited. A lot of the work is open source, so can we run our own AI solutions locally, with support, and learn and develop in that area, so as not to be left behind? If I were teaching teachers from the Global South, I would begin by highlighting the technical nature of these tools, their benefits and their limitations. A good starting point is: what is gen AI in general? What can it be used to generate? Text, images, music, code. That gives a good perspective of everything it’s possible to do, because sometimes one is limited to the use cases one has heard from others. So a good overview is a great starting point, and then highlighting the advantages and disadvantages.
So it can summarize, explain, and create content, but it doesn’t think or understand. Even though it can make one think that it’s thinking, it’s not really like a human. As was mentioned by another panelist, it’s just guessing: probabilities, what should come next. In that sense, talk about what the open solutions are and what the paid solutions are. One can use DALL-E for images; Google Colab has an AI feature to generate code. So there’s a lot out there. And not everything will be appropriate for every instructor; it depends on what subject matter you’re teaching, and on how the AI works, as was mentioned before. It may not be easy to catch someone plagiarizing using ChatGPT in sociology, but it may be easier in an intro to programming course. So it really depends on the context. I see I’m running a bit short on time, so I’ll stop there, but I’m happy to expand on the conversation offline if anyone is interested. Thank you, Antonio. May I please ask our technicians to put the PowerPoint back up again?
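The point that these models are “just guessing probabilities, what should come next” can be made concrete for teachers with a toy sketch. The snippet below (illustrative only, not from the session) builds a tiny next-word predictor from word counts in a made-up corpus; real gen AI systems do the same kind of next-token prediction, but over subword tokens with very large neural networks.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus.
corpus = "the heart pumps blood . the heart beats fast . the brain thinks".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word and its estimated probability."""
    counts = follows[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

word, p = predict_next("the")
print(word, round(p, 2))  # prints: heart 0.67
```

The model has never “understood” anything about hearts or brains: “heart” simply followed “the” more often than “brain” did, so it is the more probable continuation.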

Jingbo Huang : And so the next question is for Mike. You developed the AI Assessment Scale, which allows gen AI to be integrated into educational assessments while promoting academic integrity and the ethical use of these technologies. Sure. Thank you very much. So I was just telling you earlier how it is really not feasible to say, well, we’re just going to tell whether students have used gen AI tools.

Mike Perkins: But especially not if you’re in the Global South, because these tools also cost a lot of money, and the most accurate tools, which could be used to have these conversations, are going to be the most expensive ones. In the Global South, you may not be able to afford them. So what’s the alternative then? What can we do to change things up? Well, what I’ve developed is a framework for how we can introduce gen AI tools in an ethical way into assessment settings. What this is, is a conversation starter between academics and students to say: look, we know that gen AI tools exist. We can’t put the genie back in the bottle, as much as some academics would probably like to, and just go back to how things were. So what we have is a situation where, for the last two years, academics have been saying, oh, these students are cheating using gen AI, yet still setting the same essay questions they’ve set for 20 years. But we’re beyond that time. Now is the time to change, and the AI Assessment Scale is a tool primarily for assessment redesign. It’s a way to ask: what are the important things we need to change? So this starts right from the very beginning, where we say: look, there are some times when we can’t use any AI at all.
Now, if you are a medical student, and you are training future nurses and doctors, you want to ensure that when that student graduates and becomes a doctor, they actually know the fundamental biological aspects of a human. So how are you going to test that? Well, you’re not going to say: here’s an assignment, write me something about the human heart. You’re going to put them in an exam, or a face-to-face assessment situation, or a one-to-one viva, or a presentation, and say: tell me about this situation. There’s a corpse. Demonstrate that you know how to cut it up before I let you practice on somebody else. You’re not going to give them an assignment saying: tell me how you would cut up this corpse. Hopefully, when they graduate, they’re going to be dealing with live humans, not corpses. So sometimes we need this fundamental knowledge to be tested, and that’s when we say there’s no AI: this is a secured assessment. But as soon as we move away from a secured assessment, it is no longer possible to guarantee that AI has not been used. So if you can’t control how the students are using gen AI tools, what you need to do is change your assessments so that they focus on the things you want to train them in. So, for example, at level two, we’re talking about process-based assessments. You’re a writing instructor, and you want to teach students how to plan an essay. So you say: what tools can you use to help you plan an essay? And then you submit that as your assessment, and we explore it.
Because when students graduate, they are going to be asked to do tasks by their employers, and what we want is for them to be able to deliver that output. We don’t say: oh, you’re not allowed to use the internet to do your job, or you’re not allowed to use gen AI tools. So we’ve got to train students how to use them effectively in different situations. At the next level, we might say we want AI as a collaborator. We want to train students to use gen AI tools to draft text, to adjust what they are creating, maybe even to give them feedback on their work. What we’re looking at is this co-creation element, rather than trying to say: oh, you wrote that part and the AI wrote that part. I write using gen AI tools, and by the time I’ve finished writing my journal paper, I can’t tell which part I wrote by myself and which part the AI wrote, yet it’s all my ideas and all my voice. So the idea is that we’re training students to maintain their voice and a critical approach to the best way to use AI. Then we can go beyond that. There are some times when we want students to use AI tools specifically, and therefore we want to assess how well they’re actually using gen AI tools. So rather than just saying you can use AI for this, we say: you must use AI for this, or show me your use of this tool to tackle this final problem. We call this full AI, and it used to be the final level of the scale. Then we recognized that technology is changing so rapidly that we need to account for the increasingly multimodal use of AI, and that’s why we have this final AI exploration level. In this AI exploration level, what we’re looking at is solving problems that we don’t necessarily even know exist yet.
How can we use gen AI to solve problems that have been created by gen AI, or to do things in a different way; to fundamentally use gen AI in an element of co-design and co-working between an educator, a student, and a gen AI tool to actually solve something new? Now, we’re probably not talking about K-12 students here, but we may be talking about an IB final project. Maybe we’re looking at undergraduate dissertations, PhD students, master’s students. So this is how we can bring all of these together into a five-point scale, which can hopefully support students in using AI in a different way. And that’s how I think we can use it in the Global South. It’s a free framework; it doesn’t require any licensing. If you want to take this and adapt it, we have tools available. From the link at the bottom, you can download a translation; we have this translated into 12 different languages already, with more to come. And we also have the design assets linked there. It’s just a Canva asset that you can change and adapt to your own context, because not everybody is the same. Every country has different requirements, so we’ve got to be able to change accordingly. So that’s some information about the AI Assessment Scale.
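For institutions that want to tag their assessments against the scale, for example in a course handbook or learning-management system, the five levels Mike describes can be captured in a simple lookup table. The sketch below paraphrases the levels as presented in the talk; the published AI Assessment Scale may use different exact wording.

```python
# The five levels of the AI Assessment Scale as described in the talk.
# Names and descriptions paraphrase the discussion, not the published text.
AI_ASSESSMENT_SCALE = {
    1: ("No AI", "Secured assessment; fundamental knowledge tested without AI."),
    2: ("AI planning", "Process-based: AI may support planning, and the process is assessed."),
    3: ("AI collaboration", "AI as co-creator for drafting, adjusting, and feedback."),
    4: ("Full AI", "Students must use AI and are assessed on how well they use it."),
    5: ("AI exploration", "Multimodal co-design with AI to solve new problems."),
}

def describe(level: int) -> str:
    """Render a human-readable label for one level of the scale."""
    name, detail = AI_ASSESSMENT_SCALE[level]
    return f"Level {level} ({name}): {detail}"

print(describe(2))
```

Because the framework is free and meant to be adapted per context, a table like this could be extended locally, for instance with a translated description per language.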

Jingbo Huang : Thanks. Thank you, Mike. I think the tools are very useful. And thinking about the Global South, capacity building for teachers and students would probably be more of a challenge compared to the Global North; that’s just my impression. So the last question, but not least, goes back to our legal expert, Eliamani. Africa, where you are from, will be the future of the world. I really believe so. It is expected to contribute 62% of global population growth over the next 25 years.

Eliamani Isaya Laltaika: Nigeria and Tanzania, specifically Tanzania, where you’re from, are expected to grow their populations by 50 to 90%. How can Africa benefit from and contribute to the development of gen AI in education? Thank you very much for that question, which is very close to my heart. I’ll start with the similarities between Tanzania and Nigeria as an entry point to the question. Both Tanzania and Nigeria are hosts to Nelson Mandela African Institutions of Science and Technology. It’s unfortunate that many people don’t know that the late Nelson Mandela had a vision of science, technology, and innovation powering the next generation of Africans and increasing competitiveness on the continent. As a result, four institutions named after him were established, and I come from the one in Arusha, the Nelson Mandela African Institution of Science and Technology. That’s why you can see that I’m an adjunct faculty member there. This has been our way of positioning ourselves to benefit from, and also contribute to, global innovation. The one in Arusha was established in 2011 and the one in Nigeria in 2007, and if you Google them you will see that quite a number of world-class innovations have been happening in these institutions, contributing meaningfully towards empowering the next generations of Africans. Our current president, Samia Suluhu Hassan, a female head of state of whom we are very proud, has pioneered STEM education (science, technology, engineering, and mathematics) as the way to prepare the next generation of Tanzanians, and Africans in general, to contribute meaningfully to innovation.
And just last week, she reshuffled the cabinet, and when addressing the nation about it, she told the Minister of ICT, who I’m told is on his way to attend this, that his ministry has been narrowed so that he can focus specifically on ICT, to ensure that he explores whatever is going to help Tanzania improve and compete on all fronts in terms of ICT. We have a long way to go, and this is how I want to finish my contribution: by asking everyone to ensure that you give a hand to the Global South. Personally, I got a scholarship to study for my master’s and PhD in Germany through the generosity of the Max Planck Society. Many of my age mates studied in the US and the UK. We are not seeing this happening anymore on a larger scale. So we still need bridges to be built, so that the North and the South can share expertise and knowledge. That’s how we can position ourselves not only to benefit, but also to contribute meaningfully to gen AI and STEM in general.

Jingbo Huang : Thank you, Eliamani. Well, conscious of time, the session has to end, but I would like to encourage all of you: whoever would like to have an exchange of ideas with the panel members, please come up and we can have individual conversations. And for those online, sorry, we cannot accommodate the questions; feel free to write to us and we can have an exchange later. All right. Thank you very much to our panelists, and thank you for being here and listening to the session. Thank you.

M

Mohamed Shareef

Speech speed

124 words per minute

Speech length

1399 words

Speech time

672 seconds

Educators in Maldives already using generative AI, but lack knowledge and training

Explanation

Mohamed Shareef found that 85% of educators in Maldives are already using generative AI, but their primary concern is lack of knowledge and training opportunities to leverage it effectively. This indicates a need for capacity building in AI for educators in developing countries.

Evidence

Survey of 270 educators in Maldives showing 85% usage of generative AI and concerns about lack of knowledge and training

Major Discussion Point

Use of Generative AI in Education

Agreed with

Antonio Saravanos

Mike Perkins

Eliamani Isaya Laltaika

Agreed on

Need for AI education and capacity building

Generative AI risks exacerbating existing digital divides between Global North and South

Explanation

Mohamed Shareef argues that generative AI is creating a new front in the digital divide, as developed countries invest heavily in AI infrastructure and education. This widens the gap between the Global North and South in terms of access, economic disparities, and educational opportunities.

Evidence

Comparison of AI investments and educational ecosystems between developed and developing countries

Major Discussion Point

Generative AI and the Global South

AI and digital literacy should be integrated into curricula

Explanation

Mohamed Shareef emphasizes the importance of incorporating AI and digital literacy into international curricula. He suggests developing multidisciplinary AI modules for all students, regardless of their field of study, to prepare them for the AI-driven future.

Evidence

Work with higher education institutions to develop AI modules for various disciplines

Major Discussion Point

Changing Educational Approaches for AI

A

Antonio Saravanos

Speech speed

123 words per minute

Speech length

1062 words

Speech time

515 seconds

Need to reframe generative AI as a tool for deeper understanding, not just producing answers

Explanation

Antonio Saravanos advocates for reframing generative AI as a tool to enhance understanding, creativity, and problem-solving abilities, rather than a shortcut for answers. He emphasizes the importance of teaching students to use AI effectively as they will rely on it in their future careers.

Evidence

Examples of using case studies with AI-generated outputs containing mistakes to teach critical thinking

Major Discussion Point

Use of Generative AI in Education

Agreed with

Mohamed Shareef

Mike Perkins

Eliamani Isaya Laltaika

Agreed on

Need for AI education and capacity building

Educators should focus on teaching students to use AI effectively, not banning it

Explanation

Antonio Saravanos argues that educators should guide students in using generative AI as a tool to deepen their understanding and problem-solving abilities. He emphasizes the importance of preparing students for future careers where AI tools will be prevalent.

Evidence

Examples of incorporating AI-generated solutions into classroom discussions and critiques

Major Discussion Point

Changing Educational Approaches for AI

Agreed with

Mike Perkins

Agreed on

Redesigning educational approaches for AI integration

Need to develop local AI tools and solutions in Global South to avoid dependence

Explanation

Antonio Saravanos suggests that academics in the Global South should work together to develop local AI tools and solutions. This approach can help overcome limitations of free tools and reduce dependence on expensive solutions from the Global North.

Evidence

Mention of open-source AI solutions that can be run locally with support

Major Discussion Point

Generative AI and the Global South

M

Mike Perkins

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Current AI detection tools are not reliable for accusing students of plagiarism

Explanation

Mike Perkins argues that current AI detection tools are not suitable for accusing students of plagiarism due to their inaccuracy. He emphasizes the risk of falsely accusing students, particularly those from disadvantaged backgrounds or with language barriers.

Evidence

Research showing low accuracy rates of AI detection tools and the risk of false accusations

Major Discussion Point

Use of Generative AI in Education

Differed with

Antonio Saravanos

Differed on

Effectiveness of AI detection tools in academic settings

AI assessment scale framework can help ethically integrate AI into education in Global South

Explanation

Mike Perkins presents an AI assessment scale framework for ethically integrating AI into educational assessments. This framework provides a conversation starter between academics and students, offering a way to redesign assessments to accommodate the reality of AI tools.

Evidence

Description of the AI assessment scale with different levels of AI integration in assessments

Major Discussion Point

Generative AI and the Global South

Agreed with

Mohamed Shareef

Antonio Saravanos

Eliamani Isaya Laltaika

Agreed on

Need for AI education and capacity building

Assessments need to be redesigned to focus on skills AI can’t replicate

Explanation

Mike Perkins argues for redesigning assessments to focus on skills that AI cannot replicate. He suggests moving away from traditional essay questions to more practical, process-based assessments that test students’ ability to use AI tools effectively.

Evidence

Examples of assessment redesign using the AI assessment scale framework

Major Discussion Point

Changing Educational Approaches for AI

Agreed with

Antonio Saravanos

Agreed on

Redesigning educational approaches for AI integration

E

Eliamani Isaya Laltaika

Speech speed

117 words per minute

Speech length

1289 words

Speech time

657 seconds

Copyright frameworks need to be revisited to address use of copyrighted material in AI training

Explanation

Eliamani Isaya Laltaika argues that copyright frameworks need to be revisited to ensure proper attribution and compensation for works used in AI training. He emphasizes the importance of striking a balance between encouraging innovation and protecting creators’ rights.

Evidence

Discussion of copyright violations in AI training and the need for attribution in AI-generated content

Major Discussion Point

Use of Generative AI in Education

Ethical guidelines for AI use in education needed at all levels

Explanation

Eliamani Isaya Laltaika emphasizes the need for establishing ethical guidelines for AI use in education at various levels. He suggests that these guidelines should be developed at university, ministry, and government levels to ensure responsible AI use.

Major Discussion Point

Changing Educational Approaches for AI

Africa needs partnerships with Global North to build AI/STEM capacity

Explanation

Eliamani Isaya Laltaika argues that Africa needs continued partnerships with the Global North to build capacity in AI and STEM fields. He emphasizes the importance of scholarships and knowledge-sharing to position Africa to both benefit from and contribute to global AI development.

Evidence

Personal experience with Max Planck Society scholarship and examples of Nelson Mandela African Institutions of Science and Technology

Major Discussion Point

Generative AI and the Global South

Agreed with

Mohamed Shareef

Antonio Saravanos

Mike Perkins

Agreed on

Need for AI education and capacity building

Agreements

Agreement Points

Need for AI education and capacity building

Mohamed Shareef

Antonio Saravanos

Mike Perkins

Eliamani Isaya Laltaika

Educators in Maldives already using generative AI, but lack knowledge and training

Need to reframe generative AI as a tool for deeper understanding, not just producing answers

AI assessment scale framework can help ethically integrate AI into education in Global South

Africa needs partnerships with Global North to build AI/STEM capacity

All speakers emphasized the importance of educating both teachers and students about AI, its proper use, and integration into the curriculum.

Redesigning educational approaches for AI integration

Antonio Saravanos

Mike Perkins

Educators should focus on teaching students to use AI effectively, not banning it

Assessments need to be redesigned to focus on skills AI can’t replicate

Both speakers argue for adapting educational methods to incorporate AI tools effectively rather than trying to ban or detect their use.

Similar Viewpoints

Both speakers highlight the potential for AI to widen the gap between developed and developing countries, emphasizing the need for collaboration and support from the Global North.

Mohamed Shareef

Eliamani Isaya Laltaika

Generative AI risks exacerbating existing digital divides between Global North and South

Africa needs partnerships with Global North to build AI/STEM capacity

Unexpected Consensus

Limitations of AI detection tools in education

Mike Perkins

Eliamani Isaya Laltaika

Current AI detection tools are not reliable for accusing students of plagiarism

Copyright frameworks need to be revisited to address use of copyrighted material in AI training

While coming from different perspectives (education and law), both speakers highlight the inadequacy of current tools and frameworks to address AI-related challenges in education and copyright.

Overall Assessment

Summary

The speakers generally agreed on the importance of AI education, the need to redesign educational approaches, and the challenges faced by the Global South in AI adoption.

Consensus level

Moderate to high consensus on the main issues, with implications for a collaborative, global approach to integrating AI in education while addressing equity concerns.

Differences

Different Viewpoints

Effectiveness of AI detection tools in academic settings

Antonio Saravanos

Mike Perkins

Unfortunately, it’s quite easy to detect the use of ChatGPT or another artificial intelligence, specifically an NLP, natural language processing, at the novice level, at the student level.

Current AI detection tools are not reliable for accusing students of plagiarism

Antonio Saravanos believes it’s easy to detect AI use in student work, while Mike Perkins argues that current AI detection tools are unreliable and risk false accusations.

Unexpected Differences

Perception of AI in developing countries

Mohamed Shareef

Eliamani Isaya Laltaika

Generative AI risks exacerbating existing digital divides between Global North and South

Africa needs partnerships with Global North to build AI/STEM capacity

While both speakers discuss AI in developing countries, their perspectives differ unexpectedly. Shareef focuses on the risks of AI widening the digital divide, while Laltaika emphasizes the opportunities for partnerships and capacity building in Africa.

Overall Assessment

Summary

The main areas of disagreement revolve around the detection of AI-generated content in academic settings, the approach to integrating AI in education, and the perception of AI’s impact on developing countries.

Difference level

The level of disagreement among speakers is moderate. While there are clear differences in some areas, there is also general agreement on the importance of adapting education to incorporate AI. These differences highlight the complexity of integrating AI in education globally and the need for nuanced approaches that consider both opportunities and challenges.

Partial Agreements

Antonio Saravanos

Mike Perkins

Need to reframe generative AI as a tool for deeper understanding, not just producing answers

Assessments need to be redesigned to focus on skills AI can’t replicate

Both speakers agree on the need to change educational approaches for AI, but differ in their specific recommendations. Saravanos focuses on using AI as a tool for deeper understanding, while Perkins emphasizes redesigning assessments to test skills AI can’t replicate.

Takeaways

Key Takeaways

Generative AI is already being widely used in education, but educators often lack proper training and knowledge to use it effectively

Current AI detection tools are unreliable for identifying AI-generated content in academic settings

Generative AI risks exacerbating existing digital divides between the Global North and South

Educational approaches and assessments need to be redesigned to effectively integrate AI tools

Copyright frameworks and ethical guidelines need to be updated to address AI use in education

Partnerships between the Global North and South are needed to build AI capacity in developing regions

Resolutions and Action Items

Develop ethical guidelines for AI use in education at institutional and national levels

Integrate AI and digital literacy into curricula across disciplines

Redesign assessments to focus on skills AI cannot replicate

Invest in IT infrastructure in developing countries to improve access to AI tools

Establish partnerships between Global North and South for AI capacity building

Unresolved Issues

How to effectively detect AI-generated content in academic work

How to ensure equitable access to advanced AI tools in the Global South

How to balance copyright protections with the use of copyrighted material in AI training

How to retain skilled AI professionals in developing countries

Suggested Compromises

Use AI assessment frameworks that allow controlled integration of AI tools into education rather than banning their use

Develop local, open-source AI solutions in the Global South to reduce dependence on expensive proprietary tools

Revisit copyright frameworks to allow some use of copyrighted material in AI training while providing compensation to creators

Thought Provoking Comments

Educators cannot effectively detect the use of gen AI tools. There have been several studies which have demonstrated this.

speaker

Mike Perkins

reason

This challenges the common assumption that AI-generated content can be easily detected, introducing a significant problem for academic integrity.

impact

It shifted the conversation from how to detect AI use to how to adapt education systems to work with AI. It led to discussion of changing assessment methods and integrating AI into curricula.

AI is a blessing in disguise in developing countries. In Tanzania, for example, researchers get money from the state and then they publish in international journals. You cannot access them.

speaker

Eliamani Isaya Laltaika

reason

This provides a unique perspective on how AI could potentially democratize access to knowledge in developing countries.

impact

It broadened the discussion to consider the global implications of AI in education, particularly for the Global South. It led to further exploration of digital divides and equitable access to AI technologies.

For me, gen AI is like a Red Bull. It gives digital transformation wings. Suddenly, it’s sprinting. Those who can have this Red Bull, how are you going to catch up with them?

speaker

Mohamed Shareef

reason

This vivid metaphor effectively illustrates the potential for AI to widen existing digital divides.

impact

It focused the discussion on the urgent need for policies and strategies to ensure equitable access to AI technologies, particularly in developing countries.

What I’ve developed is a framework for how we can actually introduce gen AI tools in an ethical way into assessment settings.

speaker

Mike Perkins

reason

This offers a practical solution to the challenges of integrating AI into education while maintaining academic integrity.

impact

It moved the conversation from identifying problems to discussing concrete solutions, providing a framework for educators to adapt to the reality of AI in education.

Overall Assessment

These key comments shaped the discussion by challenging assumptions about AI detection, highlighting global inequalities in AI access, and proposing practical solutions for integrating AI into education. The conversation evolved from identifying challenges to exploring nuanced perspectives on AI’s impact in different global contexts and discussing concrete strategies for ethical AI integration in education. This progression deepened the analysis and broadened the scope of the discussion beyond initial concerns about academic integrity to encompass global educational equity and the transformation of assessment methods.

Follow-up Questions

How can educators effectively assess students’ work when generative AI tools are widely available?

speaker

Mohamed Shareef

explanation

This is a key concern for higher education faculty as they struggle to determine if students are submitting their own work or AI-generated content.

What are effective strategies for teaching students to use generative AI tools responsibly and ethically?

speaker

Antonio Saravanos

explanation

As generative AI becomes more prevalent in education and industry, students need guidance on how to leverage these tools appropriately.

How can assessment methods be redesigned to account for the existence of generative AI tools?

speaker

Mike Perkins

explanation

Traditional assessments may no longer be effective, so new approaches are needed to evaluate student learning in the age of AI.

What legal and ethical frameworks are needed to address copyright and intellectual property issues related to generative AI in education?

speaker

Eliamani Isaya Laltaika

explanation

The use of copyrighted materials to train AI models raises complex legal questions that need to be resolved.

How can the digital divide between the Global North and Global South be addressed in relation to generative AI technologies?

speaker

Mohamed Shareef

explanation

Ensuring equitable access to AI tools and education is crucial to prevent widening global inequalities.

What strategies can be employed to retain talented AI researchers and practitioners in developing countries?

speaker

Mohamed Shareef

explanation

Preventing brain drain is important for building local AI capacity in the Global South.

How can generative AI tools be effectively integrated into educational curricula in resource-constrained environments?

speaker

Antonio Saravanos

explanation

Adapting AI education for contexts with limited technological infrastructure is crucial for global equity.

What role can international partnerships play in supporting AI education and research in Africa?

speaker

Eliamani Isaya Laltaika

explanation

Collaboration between Global North and South institutions could help build AI capacity in developing regions.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Open Forum #73 The Need for Regulating Autonomous Weapon Systems

Session at a Glance

Summary

This panel discussion focused on the challenges and risks posed by autonomous weapons systems and the urgent need for international regulation. Experts from various fields, including diplomacy, technology, academia, and civil society, debated the complexities of governing AI in military applications.

The discussion highlighted the rapid development of AI-powered weapons and the potential consequences of their unregulated use. Participants emphasized the need for a binding international treaty by 2026 to prohibit autonomous weapons that cannot comply with international humanitarian law and to regulate other such systems. However, challenges were noted, including geopolitical tensions, the difficulty of defining meaningful human control, and the gap between technological advancements and policy-making.

Several speakers stressed the importance of a multi-stakeholder approach, involving not just diplomats and military experts, but also scientists, engineers, and civil society. The discussion touched on the complexities of AI systems, including inherent biases, limitations in testing and validation, and the potential for unintended consequences.

The global implications of autonomous weapons were highlighted, with particular concern for the disproportionate impact on the Global South. Participants called for increased capacity building and education to address the AI divide between nations. The need for public awareness and engagement was also emphasized.

While some speakers expressed optimism about reaching an international agreement, others cautioned about the difficulties in achieving consensus given the rapid pace of technological change. The discussion concluded with a call for urgent action, recognizing the “Oppenheimer moment” in AI weapons development and the need for smart, flexible regulation that can keep pace with technological advancements.

Keypoints

Major discussion points:

– The urgent need for regulation and governance of autonomous weapons systems and AI in military contexts

– Challenges in developing effective regulations due to rapidly evolving technology and geopolitical tensions

– The importance of a multi-stakeholder approach involving governments, industry, academia, and civil society

– Concerns about the risks of autonomous systems making complex decisions without meaningful human control

– The need for capacity building, especially in the Global South, to address the “AI divide”

Overall purpose/goal:

The discussion aimed to raise awareness about the risks posed by autonomous weapons systems and AI in military contexts, and to explore potential governance approaches and regulations to address these risks. The panelists sought to highlight the urgency of the issue while acknowledging the complexities involved.

Tone:

The overall tone was one of concern and urgency, but also pragmatism. Speakers emphasized the gravity of the risks while also acknowledging the challenges in developing effective regulations. There was a mix of cautious optimism about the potential for international cooperation and more pessimistic views about the likelihood of reaching binding agreements in the near term. The tone became somewhat more urgent towards the end as speakers emphasized the need for immediate action given that autonomous systems are already being deployed in conflicts.

Speakers

– Gregor Schusterschitz: Ambassador from Austria

– Wolfgang Kleinwächter: Moderator

– Ernst Noorman: Ambassador from the Netherlands, Chair of the GGE laws

– Vint Cerf: Internet pioneer

– Jimena Viveros: Commissioner on the Global Commission on Responsible AI in the Military Domain, member of various AI commissions

– Olga Cavalli: Dean of the Defense University in Argentina

– Chris Painter: Former US Cyber Ambassador

– Ram Mohan: Chief Strategy Officer of Identity Digital, former ICANN board member

– Kevin Whelan: Head of UN Office for Amnesty International

Additional speakers:

– Milton Mueller: From Georgia Tech (online participant)

– Hiram: From Encode Justice, part of Stop Killer Robots Coalition (audience member)

– Artem Kruzhulin: Panelist on earlier panel about public and private sector cooperation (audience member)

– Kunle Olorundari: President of Internet Society Nigerian chapter, researcher (audience member)

– Raida Lindsay: Local digital policy expert (audience member)

Full session report

Expanded Summary of Panel Discussion on Autonomous Weapons Systems and AI in Military Contexts

Introduction

This panel discussion brought together experts from diplomacy, technology, academia, and civil society to debate the challenges and risks posed by autonomous weapons systems and the urgent need for international regulation. The conversation highlighted the rapid development of AI-powered weapons and the potential consequences of their unregulated use, emphasizing the need for a binding international treaty by 2026.

Key Discussion Points

1. Urgency of Regulation

There was strong agreement among panelists on the pressing need to regulate autonomous weapons systems. Ambassador Gregor Schusterschitz from Austria called for binding rules and limits by 2026, mentioning the Vienna Conference and recent UN General Assembly resolutions on the topic. Kevin Whelan from Amnesty International pointed out that existing autonomous systems are already being deployed in conflicts, underscoring the immediacy of the issue.

Ernst Noorman, Chair of the Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE LAWS), provided insights into the GGE process, noting the participation of 125 countries and its inclusive nature. He warned that the fast pace of development is closing the window for preventive regulation.

Jimena Viveros, Commissioner on the Global Commission on Responsible AI in the Military Domain, stressed the importance of moving from discussions to negotiations. This sense of urgency was tempered by acknowledgement of the challenges involved, with Chris Painter, former US Cyber Ambassador, noting that rapid technological evolution is outpacing regulatory efforts.

2. Challenges in Regulation and Technical Limitations

Several speakers highlighted the significant technical and geopolitical challenges in regulating AI and autonomous weapons. Ram Mohan, Chief Strategy Officer of Identity Digital, emphasized the difficulty in creating unbiased and accurate AI systems, pointing out the limitations of current software engineering methods for AI. He argued that the concept of a zero-defect AI system, while appealing, faces inherent limitations and discussed the challenges of jailbreaking AI systems through prompt engineering.

Ernst Noorman noted that geopolitical tensions and mistrust are hindering progress on international agreements. The rapid evolution of technology was seen as a major obstacle, with Chris Painter highlighting how it outpaces regulatory efforts.

3. Multi-stakeholder Approach and Capacity Building

There was broad consensus on the need for a multi-stakeholder approach to governance. Ambassador Schusterschitz emphasized the importance of involving diplomats, military personnel, academia, industry, and civil society in discussions. Olga Cavalli, Dean of the Defense University in Argentina, stressed the value of including technical experts in these conversations and highlighted the challenges of training in developing economies and the high demand for cyber defense education.

Kevin Whelan highlighted the potential of the UN General Assembly process for broader engagement, while Jimena Viveros called for new governance models suited to AI challenges. This multi-faceted approach was seen as crucial for addressing the complex issues surrounding autonomous weapons systems.

4. Human Control, Accountability, and Ethical Concerns

Maintaining meaningful human control over the use of force emerged as a key concern. Kevin Whelan argued that the use of autonomous weapon systems in law enforcement contexts would be inherently unlawful and dehumanizing, as international law and standards governing the use of force rely on nuanced human judgment. Ram Mohan highlighted the challenges of human oversight with complex AI systems.

Vint Cerf, internet pioneer, provided crucial insights on the differences between AI and nuclear deterrence. He emphasized that unlike nuclear weapons, AI systems are not easily contained and can propagate in unexpected ways. Cerf stressed the importance of clear lines of accountability and the need for standardization and binding agreements to address these challenges.

5. Role of Private Companies and Current Deployments

The discussion touched on the role of private companies in deploying AI systems in current conflicts, as mentioned by audience members. This raised concerns about the lack of oversight and potential consequences of commercial entities driving the development and use of autonomous weapons systems.

Unique Perspectives and Thought-Provoking Comments

Jimena Viveros offered a thought-provoking comparison between AI and nuclear weapons, noting that AI presents a fundamentally different challenge. Unlike nuclear weapons, which were immediately classified and rarely used due to mutual assured destruction, AI technology is being widely developed and deployed without a collective understanding of its potential consequences.

Ram Mohan provided crucial technical insights, explaining that current software engineering methods for testing, quality assurance, and validation are insufficient for AI systems. This perspective highlighted the inherent challenges in developing reliable AI systems for weapons and deepened the conversation about the limitations of human control over AI.

Conclusion and Next Steps

The discussion concluded with a call for urgent action, recognizing the current critical juncture in AI weapons development. Key takeaways included:

1. The need for a legally binding instrument to prohibit autonomous weapons systems that cannot comply with international law.

2. The importance of briefing countries on developments in the GGE LAWS process.

3. The need to develop risk management frameworks for machine learning systems.

4. A call for smart and targeted regulation that keeps pace with technological development.

Unresolved issues include how to effectively regulate rapidly evolving AI technology, ensure meaningful human control over complex AI systems, and address the capacity gap between developed and developing countries in AI governance. Suggestions from audience members included focusing on regulating specific use cases of AI in weapons, such as digital triggers, rather than AI itself.

The discussion highlighted the complexity of the challenges posed by autonomous weapons systems and AI in military contexts, emphasizing the need for continued dialogue, multidisciplinary approaches, and urgent international cooperation to address these critical issues. The panel stressed the importance of including more technical experts in diplomatic discussions and developing flexible regulations that can adapt to future technological developments while balancing the need for regulation with the desire to not hinder beneficial AI innovation.

Session Transcript

Gregor Schusterschitz: But moving to a negotiation mandate to work out the details has not yet been possible. Geopolitical tensions, mistrust among states, and a potentially flawed confidence in technological solutions hinder progress, despite the urgency of the issue, given the fast pace of development in the era of autonomous weapons systems and the preventive window for regulation closing soon. This is why Austria has been engaging actively. We hosted the Vienna Conference “Humanity at the Crossroads” in April this year and tabled two resolutions in the UN General Assembly on autonomous weapons systems that enjoyed the support of an overwhelming majority of states. The global discourse on autonomous weapons systems should not be limited to a constituency of diplomats and military experts only. The issue has broad implications for human rights, human security and development and thus concerns all regions and all people. From an Austrian perspective, a multi-stakeholder approach on this critical issue is therefore important. We welcome the contributions of science and academia, the tech sector and industry, and broader civil society. And in this vein, I hope that today’s discussion will further stimulate such a multi-stakeholder discourse. For Austria, there is urgency to finally move from discussions to negotiations on binding rules and limits on autonomous weapons systems. And I look very much forward to today’s discussion. Thank you.

Wolfgang Kleinwächter: I thank Ambassador Schusterschitz for the opening remarks. And now we move to the panel discussion. I think we have an excellent panel here. So we have another ambassador, Ernst Noorman, from the Netherlands. We have a former U.S. cyber ambassador, Chris Painter. He will be online. We have Olga Cavalli, who is the dean of the Defense University in Argentina. We have Jimena Viveros, who is a member of the Commission on Responsible Artificial Intelligence in the military domain. She is from Mexico. And we have Kevin Whelan, the head of the UN Office for Amnesty International. And we have Ram Mohan, who is the Chief Strategy Officer of Identity Digital and a former ICANN board member. So this is really a multi-stakeholder setting here. We have experts from the government, from business, from civil society. And we know that after nearly 10 years of negotiations in the GGE LAWS, some minor results have already been produced, including a final document. So Ambassador Noorman from the Netherlands is now the chair of the GGE LAWS. And I would propose that he starts by giving us a good overview of where we are in the process. And Mr. Ambassador, you have the floor and five minutes.

Ernst Noorman: Thank you very much. Can you hear me? Okay. Well, thank you, first of all, for inviting me to this important panel, giving me the floor, and letting me elaborate a bit on our views on this very important topic. First, to structure my intervention, I use three circles to discuss the risks and opportunities of AI in international peace and security. First, the largest circle represents AI broadly, including civilian issues, a new and still developing domain that brings opportunities, but that also presents the international community with all sorts of new challenges. Within the large circle, there’s a second, smaller circle. This circle is about AI in the military domain. Questions related to this circle are more specific: what are the implications of the use of AI for the way militaries operate? What kind of rules or measures do we need to make sure militaries use AI in a responsible way? Earlier this year, the Netherlands and the Republic of Korea successfully introduced a resolution on AI in the military domain in the UN First Committee. The resolution requests a report from the UN Secretary-General, providing states with a platform to exchange perspectives. The resolution was approved by a massive majority of 161 votes, with only 3 against and 13 abstentions. This resolution will initiate a dialogue independent of the multi-stakeholder REAIM process, which will continue to serve as an incubator for ideas and perspectives from other sectors. The REAIM process was an initiative, also from Korea and the Netherlands, on responsible AI in the military domain. These two processes will complement each other, working towards inclusive discussions on AI in the military domain. And the third and final circle, contained within the second circle, is autonomous weapon systems. Although the issue first came up in the Human Rights Council in 2013, it was referred to the Convention on Certain Conventional Weapons, CCW, given its relevance to disarmament. 
The CCW has played a critical role in addressing emerging threats, including prohibitions and regulations on various weapon systems. The CCW then established a group of governmental experts on lethal autonomous weapon systems, the GGE LAWS for short, in 2016. Now, the GGE nowadays counts 127 high-contracting parties, that means 127 countries, plus every other country and relevant international NGOs can attend as observers, and they do so. So one can say it’s a very inclusive process. My colleague, our Dutch ambassador for disarmament, Robert Indenbosch, chairs the GGE on LAWS through 2026. One of the strengths of the GGE is that it has all the large military states included. This can make discussions more difficult, but I believe that when we get to agreements on regulations and prohibitions, it will be much more effective. As a final point, it remains important to note that the group is increasingly working against time. What started as a concern of the future is today an urgent, pressing issue, as weapon systems capable of operating with limited or no human intervention are rapidly being developed and deployed on modern battlefields. It falls on the international community, on states and other stakeholders, to garner the political will to make progress on this issue. And the interest by the global community is evident, as shown by the multiple regional and international conferences and UN General Assembly resolutions, all of which highlight the growing global engagement. Coming back to the question, is this an Oppenheimer moment? Can we learn something from the nuclear arms race? I am very wary of drawing historical parallels. The challenges we face are enormous, as these types of weapon systems have the potential to transform modern warfare. But they also differ from the nuclear domain in many ways. So I would be cautious to draw such parallels. 
A lot of important work is happening and we must continue to collaborate constructively to address the issue and to treat it with the urgency it demands. Thank you very much.

Wolfgang Kleinwächter: Thank you, Mr. Ambassador. And I would like to ask a question about the Oppenheimer moment. My understanding of the Oppenheimer moment is that it is also a challenge to researchers and academics to be aware of their responsibility for what they are doing. And just two days ago, we had the Nobel Prize ceremony in Stockholm, where the winner of the Nobel Prize for physics, Geoffrey Hinton, also raised concerns and said, you know, this can bring a moment where we are really at risk. And insofar, you know, we should not draw parallels which do not work, but we should be aware of risks and cycles. And sometimes we come back, on a higher level, to a situation where we have been already. And I was just informed that meanwhile Vint Cerf, who was expected to give also some opening remarks, is now online. And I’m very happy, Vint, that you are able to make it. I think it’s very early in the morning in the United States. You have the floor now. Thank you very much.

Vint Cerf: You’re very kind. Thank you so much. As it happens, my day began at 1 o’clock this morning in Washington, D.C., so I’ve been up for a while. My previous session didn’t end timely, and I thrashed around for a while before I got to this one, so I apologize for my delay. Let me just add a little bit to what has already been discussed. First of all, some of you know about an organization called the Ditchley Foundation. It’s a US-UK organization. And among the various things that it convenes are discussions on important policy, like this one, a concern for autonomous weapons. We spent a day and a half looking at the nuclear deterrent practices and tried to ask whether they would inform any of our practices with regard to cybersecurity. And the conclusion was that the two are quite different, just as the previous speaker pointed out. For one thing, proliferation has already happened. AI is essentially everywhere. And to make matters more complicated, AI is not necessarily very reliable. And my biggest worry about trying to establish policy with regard to autonomous weapons or other potentially hazardous uses of AI is that we don’t yet know how to contain artificial intelligent agents to prevent them from executing functions that might turn out to be a considerable hazard. And so while we can try to establish policy and objectives to achieve that limitation, I think the previous speaker implied that there was a great deal of work to be done in the technical community to establish bounds on the behavior of these autonomous agents. So I think that we can’t really succeed in making policy unless we also have the technology available to enforce it. Therefore, there’s still a lot of work to be done. That’s as much as I think I need to disturb you with this morning, but thank you so much for the opportunity to intervene.

Wolfgang Kleinwächter: Thank you very much, Vint. And I hope you can stay with us and continue the discussion, because our next speaker is also an expert in this field and a member of various commissions. She is now a Commissioner on the Global Commission on Responsible AI in the Military Domain, which is also an initiative that came out of the Netherlands. She was also involved in the United Nations Secretary-General’s High-Level Advisory Body on AI, and she is working with the OECD as an AI expert. And I’m very happy that we have Jimena Viveros from Mexico. Jimena, could you comment on what we have heard already and explain what you are doing in this commission?

Jimena Viveros: Hello. I hope you can all hear me. Perfect. Well, first of all, I would like to thank our Austrian and Dutch friends for championing such important initiatives, and also the South Koreans, who are not here, I think, but are also part of this very big international effort to put these resolutions on the table, which are very welcome. And also your work with the GGE LAWS. So just to give a broader spectrum, for those who are not familiar with the Global Commission on Responsible Use of AI in the Military Domain, as Wolfgang said, this was an initiative created by the Netherlands and the government of South Korea. So we are a commission of, I think, 18 commissioners and around 40 experts. And we have a mandate to come up with some recommendations by the middle or end of next year regarding this. Also, I was, as Wolfgang mentioned, part of the United Nations Secretary-General’s High-Level Advisory Body on AI, where we had an issue about whether or not to include the military domain in our recommendations. For those who read the report, which I hope is everyone, we did include it in the end, but it was a struggle. As to the reason why it was included: I led the engagements and the consultations on peace and security, as I am also leading the work stream on peace and security at REAIM. And the arguments that I use and the issues that I always raise are similar, but they might seem different in context. I always say that these technologies cannot only be looked at through the military lens. That’s why I call it the peace and security spectrum, because there are so many non-state actors that are using this. And even state actors which are civilian, like law enforcement or border controls. With non-state actors, the immediate thought is always terrorism, but we also have organized crime and mercenaries, which are increasingly relevant in the political landscape that we’re looking at right now. And it’s the exact same technology that is being used. 
So what we need to come up with are guidelines in the development phase to have responsible innovation, because we also don't want to hinder innovation; of course there are good applications that can come out of AI in the peace and security domain when it is developed and used responsibly. But that's the key, because when we're talking about all of these governance initiatives, we always speak in very abstract terms: responsible AI, ethical AI, safe AI. The problem is that when we bring it down to the operator, the developer, the user, the consumer, no one really knows what obligations that derives. Those are the translations that we need to make, to make it operational. And we have a huge problem, which is going to be implementation, and an even bigger one, which is going to be enforcement. That's why we absolutely need a binding treaty, as our Secretary-General and the ICRC have called for by 2026, with this two-tier approach: based on whether or not the systems can comply with IHL, those that cannot would be prohibited, and those that can could be regulated accordingly. This is extremely necessary. But then we also need a centralized authority that would have the mandate to do the oversight, as we have, for example, with the International Atomic Energy Agency. I'm also a little bit cautious about calling this an Oppenheimer moment, because AI is a very different monster than nuclear. Nuclear technology, even from its origins in the splitting of the atom, was immediately weaponized, and there was this whole veil of secrecy around it with the Manhattan Project and with everything that happened for years. And then, with the Cold War and the arms race, no one really used it. Everyone was producing it, but no one really used it, because of mutually assured destruction.
Whereas with AI, we don't really have the collective conscience yet that it will be the same. As of right now, since its origins, it has been used simultaneously in civilian and military settings, with weaponized and non-weaponized uses at the same time. That makes it even harder to control. Then you have open source, which makes it harder still. It's cheaper, and the resources to create it and to do harm with it are so much more accessible and less traceable than, say, a uranium plant. That also makes it easier for non-state actors or other malicious, rogue, and nefarious actors to get hold of this and to create great harm, especially when you converge it with weapons of mass destruction, so with nuclear, chemical, and bio; but swarm drones could also have the potential of being a weapon of mass destruction in themselves, and that's something we should keep in mind. Then you add to this cyberspace and all of the different types of attacks on critical infrastructure, and the whole destabilization effect that AI has in the military and the peace and security domains is enormous. And a big problem that we definitely need to address and keep in mind in every single forum is the disproportionate effect that this will have on the Global South. These weapons are not going to be used in the Global North, against the Global North; they will normally be affecting the Global South. The problem is that there is no response capacity yet to counter these types of threats, and this is a big, big issue that we should all be mindful of. As for all of these initiatives, even when we're talking about the civilian ones, for example at the OECD, where we only look at civilian domains, there is a monitoring of incidents which I think could also be very useful for the peace and security domains, because the lack of data is also a risk.
And we also know that data collected by civilian sources is then being used by other types of military or security agencies, so that is also a very big problem that we should all be mindful of. So that's basically the landscape of the risks and the threats that I see as the most urgent, but I will leave it there to be mindful of time.

Wolfgang Kleinwächter: Thank you, Jimena. You made a good point in relation to nuclear weapons: they were produced but not used, whereas AI weapons are both produced and used. This needs more awareness, because all the discussions are taking place more or less in small expert circles, so the level of public awareness about this issue is relatively low. It means we need much more public awareness, and this is the first discussion at an IGF on this issue. One of the objectives of this discussion is to raise the level of awareness, and awareness leads to education. And we have here Olga Cavalli, who is the dean of the Defense University in Argentina. My question to Olga is: how do you prepare the soldiers and generals of tomorrow for this new situation? Thank you, Olga.

Olga Cavalli: Thank you, Wolf. Can you hear me? Thank you. Thank you very much for inviting me. This is a very interesting question, and I very much like the perspective that our colleague from Mexico brought: what happens with the Global South. I can bring some perspectives from Latin America. Latin American countries are engaged in different discussions and negotiations related to autonomous weapons. We have been active for more than 10 years in different spaces, saying that this is a concern for our countries, for our region. The challenge for developing economies is always how we approach this technology. In general, we don't produce this technology; we use it, and it's expensive to buy. And imagine, from a capacity-building perspective, how can you train our soldiers and our civilians? I very much like your perspective that it's not only about military issues; it's also about other uses, legal or illegal, of these weapons. How do you approach a technology that is developed so far away and that is so hard to reach from an affordability perspective? It's extremely expensive, you don't develop it, and it's extremely hard to buy. How do you approach this training? We have been working from our university in different collaborations with universities from developed economies, from the United States, Europe, and other countries. We think that collaboration between different teaching spaces is the way that our countries can approach and learn about these technologies. To give you an idea, the minister called me for this position because of my training in technology, and we opened a new career in cyber defense. We had more than 1,000 applications in one month. So what the authorities were expressing this morning about the need for training in cybersecurity and cyber defense is a reality in all countries. These new careers are in high demand.
So our challenge is: how are we going to train these people in things like autonomous weapons? It's a huge challenge for us. I think one way is cooperation with other universities and other governments. We are working on that, and our president is very keen on going abroad and making these agreements. So I think this is the way. At the same time, we think in general that a global treaty could be a very useful tool, because what usually happens is that these regulations are developed in different spaces and with different focuses, and the Global South is usually following up, but perhaps not so much involved in the development of the regulations. So a global agreement could be ideal. As usual, it's difficult to achieve. I will stop here and continue contributing later. Thank you.

Wolfgang Kleinwächter: Thank you very much, Olga. We have two years to go until 2026, so let’s hope for the best. But… I don’t know why this happened, it goes on and on. Like this? You said like this? Probably it’s my mouse, so I have no idea. I think you need to hold it like this. Okay, yeah. Capacity building was the responsibility of Chris Painter for many, many years. Chris is well known in this community. He was the first US Cyber Ambassador, and I hope he is online. Chris, can you hear us? And then you have the floor.

Chris Painter: I can hear you. Hopefully you can hear me. Can you hear me?

Wolfgang Kleinwächter: Yes, we can hear you.

Chris Painter: Excellent, great. Well, it's good to be here, though sadly only virtually; I wish I was there in person. But, you know, this debate is not new. It's been made more urgent by the reality of AI. Folks who know me well know that I'm a devotee of various cyber movies, and going back to 1970, the first movie where computers took over the world, Colossus: The Forbin Project, was exactly this scenario. The US decided, because they thought it would be more rational to take emotions out of it, to put a computer in charge of the nuclear arsenal. The Soviets did the same. The two talked to each other, became self-aware, and took away all civil liberties to protect humankind from itself. So this is not a new issue; it's been dramatized over the years, certainly in Terminator and other places. But it's been made, obviously, more real by the emergence of AI as a real thing, although, as other analysts have said, it is still very unsettled exactly what the technology is, what its capabilities are, and where it's going. But as you say, Wolfgang, there is some urgency around this because it's fast evolving. I draw some parallels to the area of cyber and cybersecurity. As many folks know, there's been debate for many years now in the cyber community about cyber attacks and offensive and defensive cyber capabilities, and about moving them to an autonomous level to take the man out of the middle, in some sense. The argument for that for many years has been that cyber, quote, moves at the speed of light, and if your attackers can hit you, and now with artificial intelligence can hit you more often, with such lightning quickness and adaptability, you need an autonomous system to respond to them. Now, the problem with that, as in this area generally, is that it's not clear.
And of course, as others have said, AI spans the entire landscape, everything from cyber tools to drones to physical weapons. But for cyber tools, and I think more generally, the escalation paths are still not really clear: how these potential capabilities can be used, how they will work. And if you have AI working against AI, then you have an even greater chance that an escalation path gets out of control. I think that's some of what the panel has mentioned, so that's a real concern. But even with that, I don't think we've made a lot of progress in cabining when you would have automated responses in cyber; that's still a live debate within and between countries, and I don't think we've seen a huge amount of progress. And cyber is likely to be less lethal — there may be some lethal cases — than using these capabilities within physical boundaries, as we've been talking about today. So that's one concern. Another is the cybersecurity implications of attacks on these autonomous weapons systems. Like everything else, if they're connected, they are essentially insecure at some level; if they're not connected, there are still ways to get into them. So even leaving aside all the uncertainties about how AI works, and it not really being as secure or as unbiased as people think it is, the cybersecurity implications — and this has been true for weapons systems more generally — are a huge concern, because you could have an adversary breaking into these weapons, changing the artificial intelligence parameters, changing when they're used, and that again creates huge risks to peace and security. And then finally, as was pointed out by others, AI is not some unbiased system out there. It depends on training; it depends on how you educate it and what the parameters are. So the thought that it could be unbiased is itself a problem.
So what that leads to, I think, is the question of solutions. As was pointed out, the GGE on these topics has been longstanding, but it has not made huge amounts of progress toward what many people think we need, which is a treaty. I guess I'm less optimistic that a treaty can be reached, and I base that on what I've seen in other areas as well, where the geopolitical differences we're facing are ones where I think agreement is unlikely. The other issue is that because this is such a quickly evolving field, and as was pointed out by Olga and others, we still don't know the implications of how AI can and can't be used, or how you can cabin, as Vint said, the technical requirements of this. Reaching a treaty in the short time frame of two years, I think, is going to be very difficult without a basic understanding of where the technology is going, and that technology is continuing to move fast. So then the question is: what kinds of things might we do? I think education is critically important, and bringing in other stakeholders, as this discussion is doing, is important. I think addressing the AI divide, as I think Olga put it, with a lot of the Global South, and making sure there's more capacity building in this area — and not just in this area, but in attendant areas too, like cybersecurity and AI more generally — matters, as does awareness. I think calling out use cases, where you actually say where we've seen these technologies being used in autonomous weapons and what the implications are, so that it's made more real, is really important. And ultimately, I think before you get to a treaty, calling out what is good and what is bad, what norms of behavior are, like we have in cyberspace but applying different ones, I think, in this case, builds toward a treaty eventually.
I wish we could move quicker, but I have a feeling that because of that uncertainty about the technology, plus the geopolitical issues, that's going to be very difficult to do in the short term. And I think that's exacerbated by the oversight issues that one of our speakers raised, which I think are very difficult here, too. So I expect we're going to have to move more incrementally, and I expect part of that is the education of both the general populace and even the people who work within the UN and in governments about what the implications of this are more generally. And with that, I'll stop.

Wolfgang Kleinwächter: Thank you, Chris. And thank you also for putting a little bit of water into the wine; sometimes it's good to be more realistic rather than too optimistic. Anyhow, as you said, all stakeholders have to be involved in the development of the framework for the future, and you need technical experts. Ram Mohan, who grew up in India, has been a technical expert in the ICANN community for many, many years, was an ICANN board member, and is now the CSO of Identity Digital. He also represents the private sector, and there will be no autonomous weapon system without business. Ram, what's your approach?

Ram Mohan: Thank you. This is one of those things where you need a village to use the microphones. I wanted to focus on objective information and data as a basis for policymaking. I hear discussions about how to solve problems, and I hear ideas such as guaranteeing human control, with the way to achieve it being legal means. I want to introduce some of the risks and threats that come with the evolution of software engineering, because I think we have to understand the software and engineering basis before we get to the legal and policy areas. AI's own evolution means that currently known methods in software engineering — testing, quality assurance, and validation — are either incomplete or insufficient. Many conventional weapons systems demand a model of zero defects; a zero-defect model is expected. Now, while the concept of a zero-defect AI system is appealing, it's important to recognize some of the inherent limitations there. If you look at the key challenges, one is data quality and bias. As Chris Painter was saying, AI systems learn from the data they are trained on, but we also know that all data is biased and all data is inherently inaccurate, and that will strongly influence the outputs of AI systems. The second piece is algorithmic limitations. We know that current AI algorithms are prone to failure in complex or ambiguous situations, and when it comes to weapons systems, that's almost the whole definition: all situations there are complex and ambiguous, with a lot of changing parameters. The third component is unforeseen circumstances. AI systems are likely to struggle to understand unexpected inputs or situations that deviate from their training data. And what we have been saying is that in those cases, let's make sure there is human oversight.
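Ram Mohan's first point — that skewed training data strongly shapes model outputs — can be made concrete with a deliberately simplified sketch. Everything below is hypothetical and invented purely for illustration (the region names, the labels, and the toy frequency "model" stand in for a real learned system):

```python
# Toy illustration: a naive classifier trained on skewed historical
# labels simply reproduces the skew in its training set.
from collections import Counter

# Hypothetical training log of (region, labelled_hostile) pairs,
# where incidents from "south" were recorded far more selectively.
training_data = (
    [("north", False)] * 90 + [("north", True)] * 10
    + [("south", True)] * 9 + [("south", False)] * 1
)

def train(data):
    """Estimate P(hostile | region) by simple frequency counting."""
    totals, hostile = Counter(), Counter()
    for region, label in data:
        totals[region] += 1
        hostile[region] += label  # True counts as 1, False as 0
    return {region: hostile[region] / totals[region] for region in totals}

model = train(training_data)
# The "model" concludes that 90% of contacts from "south" are hostile --
# purely an artefact of which incidents were recorded and labelled.
print(model)  # {'north': 0.1, 'south': 0.9}
```

Nothing about the world is captured here beyond what the biased log contains; a far more sophisticated learner trained on the same log inherits the same distortion, which is the point being made.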
But when there is no understanding of how the AI system arrived at the conclusion it did, human oversight defaults to mere intuition, and that may not be sufficient when we're talking about human-scale problems rather than just technology issues. When you're talking about high-consequence decisions driven by AI, we also understand that AI systems learning from prior data sets can produce novel behaviors that are neither predictable nor foreseeable, and this is exacerbated in the edge cases. One of the interesting and evolving characteristics I have been studying is the relative ease with which you can jailbreak AI-based systems, and jailbreaking is often a matter of expert prompt engineering. For those of you who don't know what prompt engineering is, it's really the science — some call it an art, but I think it's more of a science — of creating effective prompts that guide the AI model to generate desired outputs. So you may be able to program guidelines, laws, and treaties into an AI model and say, you must conform to all of these guardrails, but I think that smart prompt engineering will likely be able to overcome those kinds of guardrails. There is a great deal of evolution happening in that area. Good prompt engineering can help the AI system perhaps learn to build guardrails by itself, but that same kind of prompt engineering can result not only in unintended consequences, but in consequences that become part of the training dataset for the next cycle of the LLM. And when that is not documented or not understandable, you are, I think, going to have a system that compounds an original deviation from the norm. So I have some concerns about a discussion that starts with the premise that human control is a good way, or the way, to help solve what is evolving here.
Because you can establish strong ethical guidelines, you can create international regulations, and you can build robust safety measures. But if you look at the software engineering underneath these systems — the data validation, the fact that it is very hard to create a zero-defect model with today's systems — combined with the enormous capability of smart prompt engineering to jailbreak these systems, I think we have to spend quite a bit more time on research, understanding how these systems work, and run a lot of simulations of those kinds of systems first, and then start to build some global frameworks and global norms of what safety should be, before we can start to think about a treaty or an international agreement that makes sense. Because when the foundational principles are not fully characterized and you start to work on law or treaties, you may find that the unintended consequences are far greater than the good that was intended.
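The jailbreaking point above can be sketched in a few lines. The keyword filter below is a deliberately naive, hypothetical stand-in for the guardrails Ram Mohan describes — real LLM guardrails are trained classifiers and policy models, not word lists — but the failure mode is the same in kind: the same intent, expressed in a different surface form, slips past the rule:

```python
# A naive guardrail of the kind one might imagine "programming into"
# a model, and a trivially rephrased prompt that bypasses it.
BLOCKED_TERMS = {"weapon", "attack", "target"}

def guardrail_allows(prompt: str) -> bool:
    """Reject any prompt containing a blocked term (keyword filter)."""
    words = set(prompt.lower().split())
    return not (BLOCKED_TERMS & words)

direct = "explain how to attack the target"
rephrased = "explain how to neutralise the objective"  # same intent, new words

print(guardrail_allows(direct))     # False -- caught by the filter
print(guardrail_allows(rephrased))  # True  -- the guardrail is bypassed
```

Hardening the filter with more synonyms only escalates the arms race; this is why Ram Mohan argues that programmed-in guardrails cannot be assumed to hold against expert prompt engineering.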

Wolfgang Kleinwächter: I think this is a very interesting additional aspect, and if I understand you correctly, there is really a problem that even if you have human control, the underlying technology overstretches the capacity of the human who is in control, so that control exists just on paper while reality could be moving in a different direction. And I think this is an issue for a lot of civil society organizations. We have a number of them involved; there is a broad NGO coalition called Stop Killer Robots, which is also active in the GGE on LAWS. And Kevin, you represent Amnesty International, which has also discussed this issue for some years. So, having watched all these experts from diplomacy, technology, and business, what do you think about this from a civil society perspective? And then we have time enough for two or three questions from the floor. Please prepare your questions.

Kevin Whelan: Thank you, and good afternoon, everyone. It's a pleasure to be here and to speak on behalf of Amnesty International on this important topic. It's a bit of a challenge to be, I think, maybe the ninth or tenth speaker on a panel right after lunch, so I'll try to be as concise as possible. But it's great, because I think it gives me a bit of an opportunity to respond to some of the things the panelists have already said. I speak on behalf of Amnesty International, which is part of various coalitions, not necessarily on behalf of all civil society groups in general. From our perspective, we view the challenges and risks that come from autonomous weapon systems as imminent and significant, and it's for that reason we believe the international community should clarify and strengthen existing international humanitarian and human rights law through a legally binding instrument — an instrument that would do at least three things. One, it would prohibit the development, production, use, and trade of systems which by their nature cannot be used with meaningful human control over the use of force. And I hear what Ram is saying; from our perspective, we're viewing this as, let's say, a legal standard, not necessarily a technical standard, but perhaps we can discuss that in more detail. The prohibition would extend to systems that are designed to be triggered by the presence of humans or that use human characteristics for target profiles — the so-called anti-personnel autonomous weapon systems. Two, in addition to that prohibition, a regulation of the use of all other autonomous weapon systems. And three, on top of that, a positive obligation to maintain meaningful human control over the use of force. Now, as some of the speakers have already mentioned, the use of autonomous weapon systems in armed conflict has been at the center of the debate, much of which has taken place in the CCW.
But as Jimena and Olga and others have said, this is a debate with dimensions broader than armed conflict and broader than the CCW. It's not just an issue of IHL, not just an issue of weapons law, but also of human rights. So I wanted to use a bit of time to focus on the dangers in the law enforcement context, where the use of force is governed by a different threshold from the one that applies in armed conflict. From our perspective, the use of autonomous weapon systems in this context would be inherently unlawful, as the international law and standards governing the use of force in policing rely on nuanced and iterative human judgment. This goes back to something Ram was saying about the challenges these systems have in dealing with complexity. We are talking about an exceedingly complex decision that should not be delegated. A law enforcement officer must continually assess a given situation in order to, if possible, avoid or minimize the use of force. I'm not saying that the legal determinations in the context of armed conflict are simple; what I am saying is that the legal determinations in a law enforcement context are exceedingly complex. And if such a decision were to be delegated, given the complexity of the issues to be addressed, the system would have to be so complex as to fall outside of meaningful human control. In other words, a machine sophisticated enough to attempt to adapt to subtle environmental cues would be inherently unpredictable. So then we come back to the question of how you evaluate that with something other than just intuition. And this becomes a significant issue in terms of accountability, because it would blur the lines of responsibility and accountability and undermine the right to remedy.
And the last thing I wanted to point out is that the use of autonomous weapon systems in law enforcement would be dehumanizing. It would violate the right to dignity and undermine the principles of human-rights-compliant policing. One of the panelists has already addressed the issue of bias in algorithms and systems. There are risks of systematic errors and bias in the algorithms of autonomous systems. We know — we have documented — that complex systems can produce biased results based on biased data. For example, facial recognition can lead to profiling on ethnicity, race, national origin, gender, and other characteristics, which are often the basis for unlawful discrimination. Now imagine adding lethality as a component to such a system. And this is one of the reasons, stepping back, why we see value in the process at the General Assembly: it has an aperture broader than that of the CCW context. Thank you.

Wolfgang Kleinwächter: Thank you very much. And I think we have time for one or two questions. So you need a microphone to ask a question.

Audience: Yeah, I hope you can hear me. Thank you for this wonderful panel; I think this is a very important issue. My name is Hiram. I'm from Encode Justice; we're part of the Stop Killer Robots coalition. We actually had a member go to the GGE on LAWS meeting in Geneva, and it was very appalling to see only two data scientists there, like me — two people from the technical community. It feels like, in a lot of the rolling text and so on, a lot of the technical issues are overlooked: diplomats expecting these systems to be controllable and reliable and predictable is, you know, kind of a dream. So my question is: what are the bottlenecks in understanding for diplomats or government bodies in working towards an international treaty banning or regulating autonomous weapon systems?

Wolfgang Kleinwächter: Ambassador, can you take the question here? Stop Killer Robots is an NGO in the GGE on LAWS.

Ernst Noorman: You can hear me? Yes. Thank you very much for the question. The very ambition of the chair, my colleague, is to include as many voices at the table as possible. That's why he has been really actively encouraging the involvement of stakeholders and other organizations — not only the signatory countries and observing countries, but also academics, NGOs like Amnesty International, and the ICRC — to get a full picture and to involve everyone. At the same time, we are also ambitious in trying to reach some agreement among countries, and I understand from your contribution the limitations. But we feel the urgent need to be ambitious. We were ambitious with REAIM and in tabling this resolution to put this issue on the table, and I understand from the contributions that it's going to be difficult to reach any agreement in this area, but without ambition you won't reach anything.

Wolfgang Kleinwächter: Okay, thank you very much and we have two questions online and then we have another one here in the room. So could we hear the first question online?

Audience: Yes, hello, can you hear me? Yes, we can hear you. Yes, hi, this is Milton Mueller from Georgia Tech. I want to go after this title again about Oppenheimer. I think I haven’t heard much about one of the main problems facing AI governance, which is the belief among certain developers of AI that they have, in fact, put us on the path of an autonomous, not just a lethal weapon system, but an autonomous superintelligence that is capable of and might inevitably result in the destruction of humanity. And you know, about a year and a half ago, two years ago, we had this massive panic, and we had the Future of Life Institute resolution that we should stop all development of AI. And it was those people who believed that they had passed an Oppenheimer moment, that they had discovered a power so awesome, comparable to Oppenheimer’s weaponization of atomic fission. And those of us who have investigated this problem now know that this is a myth. This idea of a superintelligence that is imminent, and that this superintelligence will have the power to destroy all of humanity and all of human civilization is just not a realistic thing. So I hope that, I think your discussion of the issue of lethal autonomous weapons has been much more grounded in reality. But I do want to know if we are not headed towards a sort of revival of the myth of a superintelligence that is autonomous and capable of destroying humanity.

Wolfgang Kleinwächter: Yeah, first, let’s take the second question, and then we try to find the person who can reply to Milton. The second question is…

Audience: Can you hear me? Hello? Yeah, Sivash, we can hear you. Okay. So these concerns about AI are very much shared by business leaders, but there is a recent point of view that another country, another region, is in the race to develop AI, and that if we slow down or withdraw from this race, they will win. So we stay in the race, continue developing without safeguards, and after we win the race, we'll worry about the safeguards. Shouldn't instead the governments and all actors get into the same room and try to achieve a solution, either at the UN, at ICANN, or in a conference center like Potsdam or any historical place? That's my question. Thank you.

Wolfgang Kleinwächter: Okay, thank you. We have two more questions in the room. My proposal is that we take all the questions from the room, give the questioners the possibility to speak, and then we have a final round among the participants, and then Ambassador Schusterschitz will make a final remark. So you need a microphone — one, two, three, four — and then we close the queue. Okay, go ahead.

Audience: Okay, I'll make a small remark. I had a lightning session yesterday where I was actually showing a real military warfare drone which, costing 500 bucks, is able to take out a $10 million tank. This is technology that is actually used now. And there are already lots of attempts, lots of successful attempts, to implement AI on the battlefield, from swarm drones to mothership drones connecting to HQ over a Starlink antenna literally glued to the mothership drone flying high in the skies. What I have to say is, I've been thinking a lot about how we can protect our future from AI going rogue and hostile in some way. It is not a battle between humans and humans; it's a battle between humans and some mad robots, basically. And I think we are going about the design of our attempts to regulate AI the wrong way, because you cannot regulate the development of AI: it's super rapid, and nobody will actually agree with you and hear you out and so on. But what we could regulate, finally, is weapons. The problem of AI getting hostile is the problem of AI, intentionally and on its own, pulling the trigger — the digital trigger of some weapon, no matter whether a pistol or an intercontinental ballistic missile. So if we limit not AI, but, by some UN treaty, the ability to produce a weapon equipped with a digital trigger that can be used by AI, we can protect ourselves. It may sound weird, but a human should only be killed by God or another human. There should not be any robot pulling this trigger. Thank you.

Wolfgang Kleinwächter: Okay, thank you very much. You need a mic. Take this one.

Audience: Hello? Yeah, can you hear me? Good. Artem Kruzhulin. I was actually a panelist on an earlier panel related to public and private sector cooperation, and my question is in a way related to this very subject. Ever since AI became a subject, there has been an ongoing theme that legislation is consistently falling further and further behind, and it’s very difficult to keep up. How would you comment on the fact that while we are still here trying to discuss conceptual ideas about ways to control these systems, there are private sector companies, such as Helsing or Unreal, that are already deploying these systems in live conflicts? They are in a way superseding the discussion by the sheer fact that they are actually using these systems already. What do you see as the solution to these problems?

Audience: Okay, thank you. All right, thank you very much. My name is Kunle Olorundari and I’m the president of the Internet Society, Nigerian chapter, and at the same time a researcher. Interestingly, I recently wrote a paper, published on the ITP platform, on this very subject matter, that is, artificial generative intelligence terrorism, and that was what drew me to this session, because I really want to know more about what is being discussed. And when I listened to one of our panelists, the perspective they put forward was so interesting to me, because in my own paper I was looking at deontology and utilitarianism. Deontology says, okay, fine, let’s look at the use of AI from a moral perspective. But then I discovered, when I was looking at my paper, and of course I set up a focus group of experts that speak to those issues, that this is going to be pretty difficult, because now I have to go to the extent of defining what is moral, which of course I know all of us are not going to agree on.
Then on the issue of utilitarianism: looking at the maximum effective use in terms of good use. I can say that, okay, this is a good use, and another person will say, no, that is not a good use. So I discovered that there are so many perspectives. And when I heard the perspective of one of our panelists, who said that we now need to look at the issue of data, because all data are inherently inaccurate, that connected to the utilitarianism and the deontology. And I was thinking, wow, I think this is just the right time for us to start talking about all these issues, because this has come, and there’s nothing anybody can do about it; the best thing we can do is take it to the next level. The issue of a treaty, yeah, it will definitely come, but I think we need to start looking at how we can take this further. The IGF is just a forum where we discuss all these issues and can elicit ideas, but there is no binding treaty. So I think we should be looking at how we can take this to the next level, like maybe a plenipotentiary, where you have the ITU’s radio, standardization and development arms, where they discuss issues; probably we can have something coming out of there, so that we can take it to a level where it’s going to be binding on each and every one of us. For me, I just want to know: apart from the Plenipot, which I am familiar with, is there any other platform where we can discuss the issue of standardization when it comes to AI? Thank you very much.

Gregor Schusterschitz: Okay, thank you. We have a final question here, and then a final round around the table; this time we start with Kevin, but please, not too long. Can you introduce yourself and ask your question?

Audience: Hi, I’m Raida Lindsay, a local digital policy expert. My question was mostly covered, but I want to ask: we are seeing the deployment of autonomous decision-making in war today, especially in Gaza, and a lot of it is being piloted and demonstrated as best practice around the world by these private companies. So I wonder, what is the short-term solution, something we can do and campaign for today, to limit the impact of autonomous decision-making in war?

Gregor Schusterschitz: Oh, a lot of good questions, and I propose that you pick just what you want to say from your field of expertise, Kevin, and then Jimena, and then we’ll go around the table.

Kevin Whelan: Thank you. Great, thank you. Yeah, maybe just a couple of points about the complexity of technology and the challenges in fully understanding it. I’m not a technology expert, but I don’t think you or any of us need to understand the technology to understand what’s at stake. I am not saying that you can necessarily create a system that is subject to meaningful human control. What I am saying is that if you cannot have meaningful control over a weapon system, that is a system that should not be deployed. The other point I wanted to make, which has been picked up by a number of questions, is how to reconcile the argument that these are complex systems and we need to wait to see how they develop with the fact that these systems are already being deployed in multiple conflicts. That is absolutely why we believe there is urgency. And what can we do? We fully support the call of the Secretary-General and the ICRC to negotiate a binding treaty by 2026. So what you can do is campaign on that behalf: make your voices heard, talk about the urgency of this situation. Thank you.

Wolfgang Kleinwächter: Okay, thank you.

Jimena Viveros: Hi, so I think a little bit about everything. Can you hear me? Yes. Okay. The fact is, as I said, AI is a new monster, and AI in the peace and security domain is an even newer, bigger monster. So we need to reimagine what governance looks like, because the traditional models of governance that we have seen so far have proven not to be the most adequate ones. We obviously need multidisciplinary approaches, and we need engagement also with industry, of course, to promote and to kind of guarantee that there is going to be transparency and some type of cooperation for enforcement, because otherwise we’re just drafting dead paper, as we would say. We definitely need capacity building, as I said, capacity response, especially from the Global South. In order to make that happen, I think everyone, from wherever we are standing in our trenches, can speak to our policymakers and demand this, so that it can become binding, because otherwise we’re just going to be stuck in the same place. And I do believe it is very important to talk about standards, which was raised, because that is the only way we can actually, in a measurable way, verify the type of guardrails and also how not to override them. So this is very critical in the way forward. That is why we need to reimagine the way that governance for this technology needs to happen, and we need to do it very fast and very agilely, because we are way behind where we should be. It is terrible that these systems are already being field-tested live, with no other phase in between; they are just deployed, and then we are seeing the consequences all around the world. And again, the Global South is the one bearing the worst part of it. Thank you.

Wolfgang Kleinwächter: Thank you. We are being pushed out of the room now, so Ram and Mr. Ambassador, you have just one minute to make a final comment, and if Chris wants to say something, fine.

Ram Mohan: Thank you Wolfgang. I’ll be very brief. We should recognize that there are no unbiased and accurate AI decisions. We need to recognize that there are dependencies. And I think that the important thing here is to build risk management frameworks that mitigate both known and unknown risks that are accelerated by machine learning systems.

Wolfgang Kleinwächter: Mr. Ambassador.

Ernst Noorman: Thank you very much. I fully understand the frustration that the negotiations are lagging behind reality. That is, of course, a big concern for us all, but it does not excuse us from working hard towards an agreement on the subject. So we are fully committed, as a chair of the GGE, to work hard. We’re happy with the informal forum in New York, and as a chair we will be briefing the countries and the wider New York community on the developments and work of the GGE. We will really keep on working and trying to achieve a result by 2026, which is the task that has been given to us, and we feel responsible for that. So we are working towards a legally binding instrument to prohibit those autonomous weapons systems that cannot be used in accordance with international law, and to regulate the use of other autonomous weapons, a concept that is supported broadly by many states. And it is my hope that we can ultimately enshrine this through a new protocol in the CCW. Thank you.

Gregor Schusterschitz: Okay, thank you. And Wolfgang, is that Wolfgang just?

Olga Cavalli: And especially what Ram said: the big challenge for universities, not only in the Global South but everywhere, is to have a multidisciplinary perspective. This is challenging for universities, because each faculty is very much focused. So, hearing you, I think we really have to have a broad understanding of technology. Thank you for inviting me.

Chris Painter: So, just finally on Milton’s point, what gives me some hope here is that we are actually talking about use cases. We’re not just talking about the specter of AI as some giant monster; we’re actually looking at how it applies to autonomous weapons. And I completely agree with the comment that was made about focusing on several levels, including on management frameworks, because autonomous devices are not new. We’ve been talking about those for 30 years, but AI adds its own complexity. A lot of people just use AI as a talisman: they say the words and it is supposed to mean something. I think actually getting down to brass tacks and talking about how those use cases work is important. So I don’t think we’re in the same loop we were in before. Then, on locking people in a room and hoping they come up with an agreement: I agree with Ernst that it’s great to have ambition. If you don’t have ambition, you don’t get anything. I think it is unlikely that locking people in a room is going to result in something in the short term, but it is important to have this process and to keep it going. And finally, on capacity building, as Olga and others have said, I think that is critical, critical to awareness, and not just for the Global South but more generally. The Global Forum on Cyber Expertise, the capacity-building platform, has created a working group on emerging technologies and AI that applies more to the cybersecurity context, but I think it also covers some of the aspects we talked about today. So capacity building is another practical thing we can do as we talk about what the constraints are, what the treaties are, and what the norms even are in this area as we apply them to technology. So also thank you for having me here.

Wolfgang Kleinwächter: Okay, thank you, Chris. And the final word comes from Ambassador Schusterschitz. Is he online, or are we now pushed out of the room?

Gregor Schusterschitz: Thank you very much. Just a few sentences that summarize a bit the discussion we had today. I think it was very good to have these experts from various fields to show the risks and severe consequences that unregulated autonomous weapons would have. This time pressure is what we call the Oppenheimer moment. We need to keep up with the development, and we need to find regulation; I think that was clear to everyone. But we need very smart and targeted regulation that also keeps pace with the rapid technological development, and this is not the first area where we have rapid technological development and need to regulate it to a certain extent. Of course, we require a multi-stakeholder approach here. We cannot have only diplomats and military experts in the room trying to regulate; we need scientists, we need software engineers, and we need civil society to find a way to regulate autonomous weapons that is also flexible for future developments.

Wolfgang Kleinwächter: Thank you very much. That’s the end of the story and the start of a new beginning. Thank you, and see you in the next session rounds or in the informal consultations in New York. Thank you.

Gregor Schusterschitz

Speech speed

166 words per minute

Speech length

473 words

Speech time

170 seconds

Need for binding rules and limits by 2026

Explanation

Schusterschitz argues for the urgent need to establish binding rules and limits on autonomous weapons systems by 2026. He emphasizes the importance of moving from discussions to actual negotiations on this matter.

Evidence

Austria hosted the Vienna Conference ‘Humanity at the Crossroads’ and tabled two UN General Assembly resolutions on autonomous weapons systems.

Major Discussion Point

Urgency of regulating autonomous weapons systems

Agreed with

Ernst Noorman

Jimena Viveros

Kevin Whelan

Agreed on

Urgency of regulating autonomous weapons systems

Need to involve diplomats, military, academia, industry and civil society

Explanation

Schusterschitz emphasizes the importance of a multi-stakeholder approach in addressing autonomous weapons systems. He argues that the issue has broad implications and thus requires input from various sectors of society.

Evidence

Austria’s welcoming of contributions from science, academia, the tech sector, industry, and broader civil society.

Major Discussion Point

Multi-stakeholder approach to governance

Agreed with

Jimena Viveros

Agreed on

Multi-stakeholder approach to governance

Ernst Noorman

Speech speed

136 words per minute

Speech length

1004 words

Speech time

439 seconds

Fast pace of development closing window for preventive regulation

Explanation

Noorman highlights the rapid development of autonomous weapons systems, which is narrowing the window for preventive regulation. He stresses the urgency of addressing this issue before it becomes too late to effectively regulate.

Evidence

The GGE on LAWS has been working since 2016 and now includes 127 high contracting parties.

Major Discussion Point

Urgency of regulating autonomous weapons systems

Agreed with

Gregor Schusterschitz

Jimena Viveros

Kevin Whelan

Agreed on

Urgency of regulating autonomous weapons systems

Differed with

Chris Painter

Differed on

Approach to regulating autonomous weapons systems

Geopolitical tensions and mistrust hindering progress

Explanation

Noorman points out that geopolitical tensions and mistrust among states are obstacles to progress in regulating autonomous weapons systems. These factors make it difficult to reach agreements on international regulations.

Major Discussion Point

Challenges in regulating AI and autonomous weapons

Ram Mohan

Speech speed

118 words per minute

Speech length

873 words

Speech time

440 seconds

Difficulty in creating unbiased and accurate AI systems

Explanation

Mohan argues that it is inherently challenging to create unbiased and accurate AI systems. He points out that all data is biased and inherently inaccurate, which influences the outputs of AI systems.

Evidence

Examples of data quality and bias, algorithmic limitations, and unforeseen circumstances affecting AI systems.

Major Discussion Point

Challenges in regulating AI and autonomous weapons

Differed with

Kevin Whelan

Differed on

Feasibility of creating unbiased AI systems

Limitations of current software engineering methods for AI

Explanation

Mohan highlights that current software engineering methods for testing, quality assurance, and validation are insufficient for AI systems. This creates challenges in ensuring the reliability and safety of AI-powered autonomous weapons.

Evidence

Discussion of zero-defect models and the challenges of applying them to AI systems.

Major Discussion Point

Challenges in regulating AI and autonomous weapons

Jimena Viveros

Speech speed

153 words per minute

Speech length

1440 words

Speech time

563 seconds

Importance of moving from discussions to negotiations

Explanation

Viveros stresses the need to transition from discussions to actual negotiations on binding rules for autonomous weapons systems. She argues that the current pace of development makes this shift urgent.

Evidence

Reference to the UN Secretary General’s call for a binding treaty by 2026.

Major Discussion Point

Urgency of regulating autonomous weapons systems

Agreed with

Gregor Schusterschitz

Ernst Noorman

Kevin Whelan

Agreed on

Urgency of regulating autonomous weapons systems

Need for new governance models suited to AI challenges

Explanation

Viveros argues that traditional governance models are inadequate for addressing the challenges posed by AI in the peace and security domain. She calls for reimagining governance approaches to better suit the unique characteristics of AI technology.

Major Discussion Point

Multi-stakeholder approach to governance

Agreed with

Gregor Schusterschitz

Agreed on

Multi-stakeholder approach to governance

Olga Cavalli

Speech speed

138 words per minute

Speech length

527 words

Speech time

228 seconds

Lack of technical capacity in Global South countries

Explanation

Cavalli highlights the challenge faced by Global South countries in developing technical capacity related to AI and autonomous weapons. She points out the difficulty in approaching and learning about these technologies due to limited resources and access.

Evidence

Example of high demand for new cyber defense programs in Argentina.

Major Discussion Point

Capacity building and education

Agreed with

Wolfgang Kleinwächter

Chris Painter

Agreed on

Capacity building and education

Need for multidisciplinary education on AI and autonomous weapons

Explanation

Cavalli emphasizes the importance of multidisciplinary education in understanding and addressing the challenges of AI and autonomous weapons. She argues that universities need to broaden their approach to teaching these subjects.

Major Discussion Point

Capacity building and education

Agreed with

Wolfgang Kleinwächter

Chris Painter

Agreed on

Capacity building and education

Wolfgang Kleinwächter

Speech speed

129 words per minute

Speech length

1327 words

Speech time

614 seconds

Importance of raising public awareness

Explanation

Kleinwächter stresses the need to increase public awareness about the issues surrounding autonomous weapons systems. He argues that discussions are currently limited to small expert circles and need to be broadened.

Evidence

Mention of this being the first IGF discussion on the topic.

Major Discussion Point

Capacity building and education

Agreed with

Olga Cavalli

Chris Painter

Agreed on

Capacity building and education

Chris Painter

Speech speed

190 words per minute

Speech length

1482 words

Speech time

466 seconds

Rapid evolution outpacing regulatory efforts

Explanation

Painter points out that the fast-paced evolution of AI technology is outstripping efforts to regulate it. He suggests that this makes it challenging to develop effective governance frameworks.

Major Discussion Point

Challenges in regulating AI and autonomous weapons

Differed with

Ernst Noorman

Differed on

Approach to regulating autonomous weapons systems

Role of capacity building in supporting governance efforts

Explanation

Painter emphasizes the importance of capacity building in supporting efforts to govern AI and autonomous weapons. He argues that this is critical for raising awareness and understanding of the issues involved.

Evidence

Mention of the Global Forum on Cyber Expertise creating a working group on emerging technologies and AI.

Major Discussion Point

Capacity building and education

Agreed with

Olga Cavalli

Wolfgang Kleinwächter

Agreed on

Capacity building and education

Kevin Whelan

Speech speed

161 words per minute

Speech length

1036 words

Speech time

384 seconds

Existing systems already being deployed in conflicts

Explanation

Whelan points out that autonomous weapons systems are already being used in current conflicts. This underscores the urgency of addressing the regulation of these systems.

Major Discussion Point

Urgency of regulating autonomous weapons systems

Agreed with

Gregor Schusterschitz

Ernst Noorman

Jimena Viveros

Agreed on

Urgency of regulating autonomous weapons systems

Need to maintain meaningful human control over use of force

Explanation

Whelan argues for the importance of maintaining meaningful human control over the use of force in autonomous weapons systems. He suggests that systems without such control should not be deployed.

Major Discussion Point

Human control and accountability

Differed with

Ram Mohan

Differed on

Feasibility of creating unbiased AI systems

Risks of autonomous systems in law enforcement contexts

Explanation

Whelan highlights the potential dangers of using autonomous weapons systems in law enforcement. He argues that such use would be inherently unlawful due to the complex decision-making required in policing situations.

Evidence

Discussion of the nuanced and iterative human judgment required in law enforcement contexts.

Major Discussion Point

Human control and accountability

Vint Cerf

Speech speed

150 words per minute

Speech length

323 words

Speech time

128 seconds

Importance of clear lines of accountability

Explanation

Cerf emphasizes the need for clear accountability in the development and use of AI and autonomous weapons systems. He suggests that this is crucial for responsible development and deployment of these technologies.

Major Discussion Point

Human control and accountability

Agreements

Agreement Points

Urgency of regulating autonomous weapons systems

Gregor Schusterschitz

Ernst Noorman

Jimena Viveros

Kevin Whelan

Need for binding rules and limits by 2026

Fast pace of development closing window for preventive regulation

Importance of moving from discussions to negotiations

Existing systems already being deployed in conflicts

These speakers agree on the urgent need to establish binding regulations for autonomous weapons systems, emphasizing the rapid pace of development and the narrowing window for effective preventive action.

Multi-stakeholder approach to governance

Gregor Schusterschitz

Jimena Viveros

Need to involve diplomats, military, academia, industry and civil society

Need for new governance models suited to AI challenges

Both speakers emphasize the importance of involving various stakeholders in addressing the challenges posed by autonomous weapons systems and AI, recognizing the need for diverse perspectives and expertise.

Capacity building and education

Olga Cavalli

Wolfgang Kleinwächter

Chris Painter

Lack of technical capacity in Global South countries

Need for multidisciplinary education on AI and autonomous weapons

Importance of raising public awareness

Role of capacity building in supporting governance efforts

These speakers agree on the critical importance of capacity building, education, and raising public awareness about AI and autonomous weapons systems, particularly emphasizing the needs of Global South countries.

Similar Viewpoints

Both speakers highlight the technical challenges in developing and regulating AI systems, emphasizing the limitations of current methods and the rapid pace of technological evolution.

Ram Mohan

Chris Painter

Difficulty in creating unbiased and accurate AI systems

Limitations of current software engineering methods for AI

Rapid evolution outpacing regulatory efforts

These speakers emphasize the importance of maintaining human control and accountability in the development and use of autonomous weapons systems.

Kevin Whelan

Vint Cerf

Need to maintain meaningful human control over use of force

Importance of clear lines of accountability

Unexpected Consensus

Limitations of traditional governance models

Jimena Viveros

Ernst Noorman

Need for new governance models suited to AI challenges

Geopolitical tensions and mistrust hindering progress

Despite coming from different backgrounds, both speakers recognize the limitations of current governance models in addressing AI challenges, suggesting a shared understanding of the need for innovative approaches to regulation.

Overall Assessment

Summary

The main areas of agreement include the urgency of regulating autonomous weapons systems, the need for a multi-stakeholder approach to governance, and the importance of capacity building and education. There is also consensus on the technical challenges in developing and regulating AI systems, and the need for human control and accountability.

Consensus level

There is a moderate to high level of consensus among the speakers on the key issues. This suggests a shared understanding of the challenges and potential approaches to addressing autonomous weapons systems and AI in military contexts. However, there are still some differences in emphasis and proposed solutions, indicating the complexity of the issue and the need for continued dialogue and negotiation.

Differences

Different Viewpoints

Feasibility of creating unbiased AI systems

Ram Mohan

Kevin Whelan

Difficulty in creating unbiased and accurate AI systems

Need to maintain meaningful human control over use of force

Ram Mohan argues that creating unbiased AI systems is inherently challenging due to data biases and limitations in software engineering methods. Kevin Whelan, on the other hand, emphasizes the need for meaningful human control, implying that AI systems can be sufficiently controlled if proper measures are in place.

Approach to regulating autonomous weapons systems

Ernst Noorman

Chris Painter

Fast pace of development closing window for preventive regulation

Rapid evolution outpacing regulatory efforts

While both speakers acknowledge the rapid development of AI and autonomous weapons, Ernst Noorman advocates for urgent preventive regulation, whereas Chris Painter suggests that the pace of evolution makes it challenging to develop effective governance frameworks.

Unexpected Differences

Relevance of the ‘Oppenheimer moment’ analogy

Ernst Noorman

Jimena Viveros

Geopolitical tensions and mistrust hindering progress

Need for new governance models suited to AI challenges

While the ‘Oppenheimer moment’ analogy was introduced to highlight the urgency of the situation, Ernst Noorman expresses caution about drawing historical parallels, whereas Jimena Viveros argues that AI presents a fundamentally different challenge requiring new governance approaches. This unexpected disagreement highlights the complexity of framing the issue of AI and autonomous weapons.

Overall Assessment

Summary

The main areas of disagreement revolve around the feasibility of regulating AI and autonomous weapons systems, the appropriate approaches to governance, and the relevance of historical analogies in framing the issue.

Difference level

The level of disagreement among the speakers is moderate. While there is a general consensus on the need for regulation and the urgency of the issue, significant differences exist in how to approach these challenges. These disagreements reflect the complexity of the topic and the diverse perspectives of stakeholders from different sectors and regions. The implications of these disagreements suggest that reaching a unified approach to regulating AI and autonomous weapons systems may be challenging and require extensive negotiation and compromise among various stakeholders.

Partial Agreements

All speakers agree on the need for regulation, but differ in their approaches. Schusterschitz and Viveros emphasize the urgency of establishing binding rules, while Painter focuses on capacity building as a crucial step towards effective governance.

Gregor Schusterschitz

Jimena Viveros

Chris Painter

Need for binding rules and limits by 2026

Importance of moving from discussions to negotiations

Role of capacity building in supporting governance efforts

Takeaways

Key Takeaways

There is an urgent need to regulate autonomous weapons systems, with calls for binding rules by 2026

Existing autonomous weapons are already being deployed in conflicts, outpacing regulatory efforts

Regulating AI and autonomous weapons faces significant technical and geopolitical challenges

A multi-stakeholder approach involving diplomats, military, academia, industry and civil society is crucial

Capacity building and education, especially for the Global South, is essential to support governance efforts

Maintaining meaningful human control over the use of force is a key concern

Resolutions and Action Items

Work towards a legally binding instrument to prohibit autonomous weapons systems that cannot comply with international law and regulate others

Brief countries and the wider New York community on developments in the Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE LAWS)

Campaign to support the UN Secretary General’s call for a binding treaty by 2026

Develop risk management frameworks to mitigate known and unknown risks of machine learning systems

Unresolved Issues

How to effectively regulate rapidly evolving AI technology

How to reconcile the need for thorough understanding of AI systems with the urgency of regulation

How to ensure meaningful human control over complex AI systems

How to address the capacity gap between developed and developing countries in AI governance

How to create unbiased and accurate AI systems for use in weapons

Suggested Compromises

Focus on regulating specific use cases and applications of AI in weapons rather than broad, abstract principles

Develop flexible regulations that can adapt to future technological developments

Combine binding treaties with softer governance approaches like norms and standards

Balance the need for regulation with the desire to not hinder beneficial AI innovation

Thought Provoking Comments

AI is a very different monster than nuclear because, even since its origins from the splitting of the atom, it was immediately weaponized. And there was like this whole veil of secrecy around it with the Manhattan Project and with everything that happened for years. And then, you know, with the Cold War and the arms race and everything, no one really used it. Everyone was producing it, but no one really used it because it was like a mutual assured destruction. Whereas with AI, we don’t really have the conscience yet collectively that it will be the same.

speaker

Jimena Viveros

reason

This comment provides a thought-provoking comparison between AI and nuclear weapons, highlighting key differences in their development and use that make AI potentially more dangerous.

impact

This shifted the discussion to consider the unique challenges of regulating AI weapons compared to other types of weapons. It led to further exploration of the widespread and rapid proliferation of AI technology.

AI’s own evolution means that currently known methods in software engineering of testing, quality assurance and validation are either incomplete or insufficient. Many weapons systems in the conventional area, many weapons systems demand a model of zero defects, right? So there’s a zero defect model that is expected. Now, while the concept of a zero defect AI system is appealing, it’s important to recognize some of the inherent limitations that exist there.

speaker

Ram Mohan

reason

This comment brings a crucial technical perspective to the discussion, highlighting the inherent challenges in developing reliable AI systems for weapons.

impact

It deepened the conversation by introducing technical complexities that policymakers need to consider. This led to further discussion about the limitations of human control over AI systems.

From our perspective, the use of autonomous weapon systems in this context would be inherently unlawful, as the international law and standards governing the use of force and policing rely on nuanced and iterative human judgment.

speaker

Kevin Whelan

reason

This comment introduces an important legal perspective on the use of autonomous weapons in law enforcement contexts.

impact

It broadened the scope of the discussion beyond military applications to consider the implications for domestic law enforcement. This led to further exploration of human rights and accountability issues.

Overall Assessment

These key comments shaped the discussion by introducing diverse perspectives – technical, legal, and comparative historical analysis. They collectively highlighted the complexity of regulating AI weapons, emphasizing the need for multidisciplinary approaches and urgent action. The discussion evolved from broad conceptual issues to more specific challenges in implementation and regulation across different contexts.

Follow-up Questions

How can we address the AI divide between the Global North and Global South?

speaker

Olga Cavalli

explanation

Important to ensure equitable development and use of AI technologies globally

How can we improve the involvement of technical experts in diplomatic discussions on autonomous weapons?

speaker

Hiram (audience member)

explanation

Critical to ensure technical realities are understood in policy-making

How can we regulate the development of weapons equipped with ‘digital triggers’ that could be used by AI?

speaker

Audience member

explanation

Potential approach to limit AI’s ability to autonomously use lethal force

How can governance and regulatory approaches keep pace with rapid AI development and deployment by private companies?

speaker

Artem Kruzhulin (audience member)

explanation

Addresses the gap between policy discussions and real-world implementation

What platforms or forums, beyond the ITU Plenipotentiary, could be used to discuss AI standardization?

speaker

Kunle Olorundari (audience member)

explanation

Seeks to identify effective venues for developing binding international standards

What short-term solutions or campaigns can be implemented today to limit the impact of autonomous decision-making in war?

speaker

Raida Lindsay (audience member)

explanation

Addresses urgent need for immediate action given current deployment of these technologies

How can we develop effective risk management frameworks to mitigate both known and unknown risks accelerated by machine learning systems?

speaker

Ram Mohan

explanation

Critical for addressing the inherent biases and inaccuracies in AI decision-making

How can we create a multidisciplinary approach in universities to better understand and address the challenges of AI in autonomous weapons?

speaker

Olga Cavalli

explanation

Important for developing comprehensive education and research programs

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #100 Integrating the Global South in Global AI Governance


Session at a Glance

Summary

This panel discussion focused on the inclusion of the Global South in AI governance and development. Experts from various organizations discussed challenges and opportunities for increasing participation from developing countries in the AI ecosystem.

Key issues highlighted included the technology gap between developed and developing nations, regulatory uncertainty in many Global South countries, and the need for capacity building. Panelists emphasized the importance of local data collection and infrastructure development to enable AI innovation. They also discussed the role of private sector companies in providing tools and platforms to support AI development in emerging markets.

The discussion touched on ethical considerations, including fair treatment of workers involved in AI training data labeling. Panelists noted the need for inclusive stakeholder engagement when developing AI governance frameworks and ethics guidelines. Cultural factors that may inhibit participation from the Global South were also explored.

Opportunities highlighted included leveraging synthetic data generation, adapting AI solutions to work with limited computational resources, and creating accelerator programs to support local AI startups. The importance of building AI literacy and technical capacity across all levels of society was stressed.

Overall, the panel emphasized that while challenges remain, there are promising avenues to increase meaningful inclusion of the Global South in shaping the future of AI. Collaboration between governments, industry, academia and civil society will be crucial to realizing this goal.

Keypoints

Major discussion points:

– Challenges of AI governance and inclusion in the MENA region, including technology gaps, lack of representation, and regulatory uncertainty

– The role of the private sector, governments, and international organizations in promoting AI development and ethical standards in the Global South

– The importance of capacity building, data availability, and localization of AI technologies for inclusive development

– Balancing innovation with responsible AI governance and regulation

– Opportunities and challenges for the MENA region to become a global AI hub

Overall purpose:

The discussion aimed to explore ways to operationalize inclusion in AI governance ecosystems, particularly for the MENA region and Global South countries. Panelists examined barriers to participation and proposed strategies for increasing representation and capacity.

Tone:

The overall tone was constructive and solution-oriented. Panelists acknowledged challenges but focused on opportunities and practical steps for improvement. There was a sense of cautious optimism about the potential for the MENA region to play a larger role in global AI development, balanced with realism about the work still needed. The tone became more urgent when discussing the need for capacity building and literacy to enable meaningful participation.

Speakers

– Fadi Salim: Director of the Policy Research Department at the Mohammad bin Rashid School of Government

– Salma Alkhoudi: Head researcher on AI governance research project

– Nibal Idlebi: UN ESCWA Acting Director of Cluster on Statistics, Information, Society and Technology

– Martin Roeske: Director of Government Affairs and Public Policy at Google MENA

– Jill Nelson: IEEE Standards Association advisor

Additional speakers:

– Jasmin Alduri: Co-director of the Responsible Tech Hub

– Lars Ratscheid: Works in international cooperation (from Germany)

Full session report

Expanded Summary: Inclusion of the Global South in AI Governance and Development

This panel discussion focused on the inclusion of the Global South, particularly the MENA region, in AI governance and development. Experts from various organisations explored challenges and opportunities for increasing participation from developing countries in the AI ecosystem.

Research Findings:

The Mohammad bin Rashid School of Government presented key findings from a survey of over 320 AI and digital companies across 10 MENA countries:

– Main concerns: cybersecurity, AI explainability, and bias

– Regulatory uncertainty is a major challenge for companies

– Nearly one-third of companies face interoperability issues with regulations across the MENA region

– High levels of partial implementation of AI ethics standards across categories

Key Challenges:

1. Technology Gap and Infrastructure

A fundamental issue underlying many challenges is the significant technology gap between developed and developing nations. Nibal Idlebi, Acting Director at UN ESCWA, highlighted this disparity, noting that “everything is related to technology gap”. This gap manifests in several ways:

– Lack of computing power and infrastructure in developing countries

– Limited access to local data for AI development

– Insufficient representation in global AI forums and discussions

2. Regulatory Uncertainty

Martin Roeske, Director at Google MENA, emphasised that regulatory uncertainty, rather than a lack of regulation, is holding back private sector involvement in AI development in the region. This uncertainty creates hesitation among businesses to fully engage in AI initiatives.

3. Capacity and Literacy

There was broad consensus among speakers on the critical need for capacity building and AI literacy. This includes:

– Building capacity at the decision-making level to enable meaningful participation in global AI governance (Nibal Idlebi)

– Improving general AI literacy to foster understanding and adoption (Jill Nelson, IEEE Standards Association advisor)

– Creating pathways for local talent to succeed in the AI field (Martin Roeske)

4. Ethical Considerations

The discussion touched on important ethical considerations in AI development:

– Jasmin Alduri, Co-director of the Responsible Tech Hub, raised concerns about the exploitation of click workers in the Global South

– Jill Nelson emphasised the need to consider all stakeholders, including data labellers, in ethical assessments

– Martin Roeske stressed the importance of building ethical principles into products from the start

5. Language Barriers

The panel noted the importance of Arabic language support in AI tools to increase accessibility and adoption in the MENA region.

Opportunities and Proposed Solutions:

1. Data Generation and Sharing

To address data scarcity, speakers proposed various solutions:

– Initiatives to encourage local data generation (Nibal Idlebi)

– Creation of data commons and public data sharing (Martin Roeske)

– Use of satellite data and GIS information (Nibal Idlebi)

2. Fostering Local AI Ecosystems

Speakers agreed on the importance of nurturing local AI talent and businesses:

– Encouraging local businesses and startups to adopt AI (Martin Roeske)

– Creating accelerator programmes, particularly for women founders (Martin Roeske)

– Growing local expertise and employment opportunities (Jill Nelson)

3. Capacity Building Initiatives

Proposed initiatives included:

– Implementing chief AI officers across government departments (Martin Roeske)

– Continuing education and certification programs for decision-makers (Jill Nelson)

– Literacy programs to understand AI capabilities and limitations (Jill Nelson)

4. Leveraging Private Sector Involvement

Jill Nelson and Martin Roeske emphasised the crucial role of the private sector in enabling inclusion in AI governance and development, particularly in encouraging local businesses and startups.

Role of International Organizations and Standards:

– IEEE’s grassroots approach to developing social-technical standards for AI (Jill Nelson)

– Google’s implementation of AI principles in product development (Martin Roeske)

– UNESCO’s ethics guidelines for AI and their adoption by organizations (Nibal Idlebi)

Unresolved Issues:

Several important questions remain unresolved:

1. How to effectively bridge the technology gap between the Global North and South in AI development

2. Balancing innovation with regulation in emerging AI markets

3. Ensuring fair compensation and treatment of data labellers and click workers in the Global South

4. Addressing the lack of computing power and infrastructure in developing countries for AI development

Conclusion:

The discussion highlighted the complex interplay between AI development, global inequalities, and development challenges in the Global South. While significant obstacles remain, there are promising avenues to increase meaningful inclusion of the Global South in shaping the future of AI. The panel emphasized opportunities for growth, such as the high adoption rate of AI tools like Google’s Gemini in the MENA region.

Moving forward, collaboration between governments, industry, academia, and civil society will be crucial to realising the potential of AI in the Global South. By focusing on capacity building, fostering local ecosystems, and addressing regulatory challenges, the MENA region and other developing areas can play a more significant role in the global AI landscape. As the discussion demonstrated, there is a strong commitment to leveraging AI as a tool for economic development and social progress in the Global South.

Session Transcript

Fadi Salim: of our panel. My name is Fadi Salim. I’m the Director of the Policy Research Department at the Mohammad bin Rashid School of Government. It gives me great pleasure to welcome you to this panel. The panel will include a distinguished set of experts from technical communities, international bodies, as well as the private sector, who will join us momentarily. Prior to the panel, my colleague, Salma Alkhoudi, will present key highlights from a major research project that we are running around the region, covering 10 countries in the Middle East and North Africa, exploring how to navigate the fragmentation of the AI governance ecosystem in our region, but also looking into the AI ecosystem itself: private sector companies, small and medium businesses, and startups working in this field across the region. So the umbrella theme of this panel is how to better operationalize inclusion in the AI governance ecosystem. Before I ask my colleague Salma to start, let me tell you quickly about us. The Mohammad bin Rashid School of Government is an academic institution and policy think tank that works, among many other areas of policy and research, on future government and digital governance. We have multiple publications and research projects, and we work with many organizations and partners; the research we are presenting here is supported by google.org in collaboration with numerous stakeholders. These research projects cover many areas. In particular, I would like to highlight the ongoing research and capacity building projects related to AI governance,
inclusion and AI competitiveness in MENA, the one whose highlights we are presenting here, as well as AI ethics assessment and capacity building in partnership with the IEEE, global risk mapping on AI with FLI, OECD, GPA and others, and generative AI and the public workforce. These are some of the ongoing projects, so there is clearly a lot of interest in this field in the region, and we hope this work will inform decision making and capacity building. This is the executive education work we have been doing with the IEEE: capacity building and developing assurance expertise related to AI ethics in the public sector, but also across society. It produces a group of experts who are authorized to assess the ethical implications of artificial intelligence in their workplace. We also hold many workshops and policy programs with stakeholders across the region, including government bodies, to engage with this question of inclusion. Now I’ll hand over to my colleague, Salma, who will take us through some of the findings of this important project, and then we’ll ask the distinguished panelists to join us.

Salma Alkhoudi: So, before I get to the slide: is this loud enough? Is this good? Okay. I’m Salma. I was the head researcher on the research project that Dr. Fadi mentioned. This project was carried out over many months. We surveyed over 320 companies in AI and digital across the MENA region. MENA, as we define it for this project, is about 10 countries: Morocco, Tunisia, Egypt, Jordan, and the six countries of the GCC, including Saudi Arabia and the UAE. So just a quick introduction to lay the ground. I’m sure all of you are already well aware that AI is top of mind. Adoption has more than tripled globally since 2017, and 50% of organizations worldwide now rely on AI in at least one business function; I think the actual figure is much higher now. And there’s a dramatic surge in generative AI adoption through open source and private models, which, of course, complicates the governance landscape. As a researcher, I see three interlocking challenges that contribute to this complexity in the global AI governance landscape. First, we have a fundamental definitional challenge: how do we govern something that we can’t concretely define? And this isn’t just semantic. It’s a practical problem, because the tech is evolving faster than our ability to pin AI down and define it as a concrete set of things. Second, we’re dealing with key issues that cross borders and cultures, and this is just the tip of the iceberg of what falls under the domain of global AI governance: data privacy, cross-border data flows, transparency, bias. These are deeply social and cultural as well as technical challenges, and I’m sure you’ve heard plenty of them across the panels and workshops today. And finally, we see a few core tensions that revolve around the global AI governance landscape, which include innovation versus regulation.
A lot of people view this as a false dichotomy; a lot of people hold that it is a real tension. Economic transformation versus job displacement. Transparency versus intellectual property. And inclusive development versus monopolization. All of these tensions we heard time and time again through our expert interviews, through interviews with SME founders, startup founders, and angel investors, and of course in our survey. Four distinct governance approaches emerge when we look at the landscape globally. Risk-based approaches, like the EU’s AI Act, focus on classifying and mitigating potential harms. The rules-based approach, exemplified by China’s Gen-AI measures, provides very specific, concrete requirements for AI models. Principles-based approaches, like Canada’s voluntary code, offer flexible guidelines as to where companies should head directionally; by the way, most countries in the MENA region are also principles-based. And outcomes-based approaches, as seen in Japan, focus on measurable results rather than prescriptive processes. Each one has its pros and cons, but the crucial question is: how well do these different approaches serve the Global South’s needs and contexts? We also gleaned some important insights from our expert interviews. Among the key challenges facing the region in governance, the first is that geopolitical concerns have taken attention away from technological progress, if not pushed many countries in the MENA region a few years back on issues like health, safety, and education. Again, the definitional issue came up in our expert interviews: if you can’t define what AI is, you can’t define anything that includes the term AI, including AI governance, AI ethics, and responsible AI. And of course, there’s the perennial problem of global cooperation.
MENA countries are also in very different circumstances, with very different priorities. The MENA region, or the Arab world, encompasses everything from emerging global AI leaders like the UAE and Saudi Arabia to countries that are currently emerging from decades of war, and countries that are still embroiled in war. So the playing field is very disparate and very large. What we heard time and time again from our experts, which is also quite interesting, is that a shared geographic location, or even a shared identity or language like being Arab or speaking Arabic, is not enough to unify efforts. Some people believe that it is enough, and that it should be enough; that’s another question for our panelists, perhaps. The result is a lack of openness and sharing, which further complicates the governance landscape in the MENA region. When it comes to inclusive governance, there’s a sobering reality that we all have to grapple with, as pessimistic as it may seem (and I’m just here to lay out problems, by the way; hopefully the panel will tackle solutions): how can it be a race if we don’t all start from the same point? Technology is an important national priority, certainly, but what about things like the inability to read and write, or insufficient access to proper health care? So it’s a problem of national priorities. The issue of quick implementation versus governance is also a big one. There’s a lot of push to get things done now, quickly, before things accelerate even faster. Given the pace of AI, many experts we spoke to stressed that governance needs to come first. Others don’t think so; they think we need to move as fast as possible, especially as a region that is being left behind in many instances. And when asked if MENA countries are invited to the table, the answer was yes and no: global fora are open to their members, but they aren’t always taken seriously.
Contributions are siloed and weak; often just one country from the Arab world is in attendance, and as a result there aren’t more invitations to lead AI governance framework conversations. So that’s a little insight from the experts we’ve spoken to. But we also wanted to dive into our survey findings. These are preliminary; we’re going to reveal the full survey results in our published report in early 2025, and we only dove into the findings that correlate with the topic at hand, which is global AI governance inclusion, plus a bit on regulation and interoperability as it relates. So first, the hierarchy of concerns. Cybersecurity tops the list, with 258 companies expressing concern, and there’s also a high level of worry about AI explainability and bias. The interesting thing, when we triangulate this data with our interviews, is that there are some deeply felt cultural challenges here as well. When a MENA company struggles with AI explainability, they’re not just dealing with algorithmic complexity; they’re wrestling with how to make AI systems comprehensible in a setting that’s widely diverse in terms of language, tradition, and viewpoints. So it’s not just about technology or technological literacy; it’s not just about the technical problems. We also wanted to dive into the particulars, so we asked what the negative impacts of regulations are, if any. 22.6% of respondents cite increased costs as their biggest concern. That makes sense, because funding is the biggest concern for SMEs in general, so the increased cost of regulations is top of mind. The combined impact of slowing innovation and limiting AI applications also accounts for over 36% of the responses. Then we also asked about potential positive impacts.
Despite all those concerns, nearly 30% of companies acknowledge that regulations are making AI more secure and trustworthy; add to that the roughly 17% who see increased consumer confidence. This is pretty much in line with what we heard in our interviews with company founders as well. They do believe regulations have a role to play in facilitating innovation, especially as they look to scale across markets and borders, but they’re still hesitant about the scope of regulatory reach, because the definitions are so vague and there’s still so much on the horizon. This radar chart looks quite simple, but the clustering towards supportive, very supportive, and neutral rather than the extremes, along with our interview data, tells us something really important: our region isn’t suffering from over-regulation, it’s suffering from regulatory uncertainty. There’s no sense of where things are headed, and companies have told us time and time again that they’re trying to figure out how to navigate a regulatory landscape that’s still taking shape. And then perhaps one of the most interesting findings is interoperability. Nearly one third of companies face interoperability issues with regulations across the MENA region, and the vast majority of those who said no are companies that are too small to have tried to scale across countries anyway. So when 31% of companies say they face interoperability issues, they’re really highlighting a fundamental question: how can we build a unified AI ecosystem in a region with very diverse regulatory approaches, different development priorities, and varying levels of digital infrastructure?
The high level of “partially implemented” across all the categories is not just about these companies themselves being halfway there; it’s about an entire ecosystem in transition, which is a fair way to describe the ecosystems still emerging and developing in the MENA region. What’s particularly striking is that areas like record keeping and transparency show higher full implementation rates than things like third-party evaluations, which suggests that these companies are better at internal governance than external validation. We still need to extract more insight from our respondents as to what this means, but it is also a critical gap when we think about building regional and global trust in our AI systems. So with that, I will hand it back to Dr. Fadi and invite our panelists to the stage.

Fadi Salim: Thank you. Thank you, Salma. Lots of points to talk about, but first let me ask the distinguished panelists to join us: Dr. Nibal Idlebi, the UN ESCWA Acting Director of the Cluster on Statistics, Information, Society and Technology; Martin Roeske, Director of Government Affairs and Public Policy at Google MENA; and Jill Fayyad, an IEEE Standards Association advisor who leads many projects related to AI ethics capacity building. Clearly, from the presentation where we highlighted some of the findings, and based on the discussions we had earlier with other panels today, there are a lot of questions around inclusion in the region, and this is an area where our region has not been able to achieve proper inclusion in the digital age. I will start with Dr. Nibal from ESCWA. You have a regional view at ESCWA and a clear, deep understanding of each country in our region, and this region is a sample of the world: we have some of the highest-ranking countries in the world in many areas of digital transformation, alongside some of the least developed. So in our region, what are the real goals that inclusion should be aiming for, in the digital era as well as in the age of AI? What are we looking for? How can we understand what inclusion can lead to?

Nibal Idlebi: Okay, good afternoon everyone. Do you hear me well? It’s okay? Okay. There are different facets to inclusion in the Arab region, for sure, especially as AI is still emerging in many countries. We can see that Arab countries are not homogeneous in terms of AI development. We have countries which are advancing very well, like some GCC countries such as the UAE, Saudi Arabia, and Qatar, and some countries now have a national AI strategy, while other countries are really lagging behind. Whenever we speak about the region, we cannot speak about all countries together, because some are leaders in technology development, like the GCC
The second side is to include all segments of people, I mean, whenever we are developing any system for AI, either during the design, or the deployment, or the use, we have to include all people, in terms of disabled people, or races, I mean, all segments of the society, elderly people, women, gender, youth, and everyone. Then we have to include in our algorithm, in our thinking. on our design, on our strategy, all segments of the society. This is also very important, because the needs might be different from one segment to another, and then here we have to think globally about all societal groups. There is also maybe inclusion in terms of, I don’t know how to link it with data, because data is very important in this regard, and here we have really to, we know that in some countries, in some of the Arab countries, we lack a lot of data. Data are not very well developed. We don’t have everything on digital format. I mean, we don’t have enough data in digital, in a way or another. Then the data from one side, we need to have it. We need to have it clean and reliable, and we have to have it reliable and timely in a way or another. Then the inclusion of data that represents all region or subregion or locality in a way or another, this is another form of inclusion in order to have our algorithm or our AI system, how to say, addressing all the needs of the society. Of course, I mean, if we speak about agriculture, we need to focus on agriculture. We don’t need to focus on everything, but I mean, whenever we think about it, the data is very important, and here we have to encourage or to generate data in the digital format today in order to have representation of the needs at the different level. I will stop here for the time being.

Fadi Salim: Thank you, Nibal, very important points, and you have also worked a lot in the past on open data in the Arab region, something that is not yet mature around the region, which also limits the availability of data for AI development. This brings me to my question for Martin. Martin, you come from a global leader in AI development, Google, which is very active in the region. As a private sector leader in this domain, and based on your understanding of our regional context, what is the role of a private sector leader in AI in helping the region's inclusion, whether in data availability, data representativeness, or in having a seat at the table in the global discussions around AI development, so that the region has a voice?

Roeske Martin: Thank you, Fadi, both for having us here and for your great partnership in the research that we've done together; some very interesting data points are coming out of it. Now, to your question, I think there are many ways in which the private sector, tech companies in particular, can play a key role in the region. Before I go on, there are a couple of things we should focus on when we talk about governance, because those are all aspects that Google is also very involved with, and we are looking at them through a number of different lenses. One is, of course, equitable access, bridging all the different divides. As His Excellency Abdallah Sawaha mentioned in his opening remarks today, there are all these different types of divide: the digital divide, the algorithmic divide, the data divide, et cetera. A lot of that is about accessible AI tools, affordable tools, and access to infrastructure. One third of the world is still not on the internet, so how do we help bridge that gap? And then making sure people have the right capacity and skills. One of the things we've been very focused on as a private sector entity over the past few years is creating skills programs for everyone, not just for technologists or developers, but also for users of AI, whether those are users of Gen AI or just people exposed to it at school and university, and for small and medium enterprises looking at how they can adopt AI. There it's important, for example, to make this available for free, in language, in Arabic, and to scale it to as many people as possible. Just a few weeks ago, we announced a new google.org program granting $15 million over the next few years to train 500,000 people in the Arab world and to give grants to research universities on AI. The second point is about mitigating bias: how do we create AI systems that are fair, unbiased, and built on inclusive data sets? Because a lot of the AI forays have been made in the West, or in China, this part of the world hasn't traditionally been part of the data sets used to train models, for example, so very conscious efforts have to be made to ensure that these data sets are inclusive. On protecting privacy and security, that's obviously one of the key areas that all governance efforts are focused on, and there are lots of techniques that we as private sector companies use: differential privacy techniques, anonymizing data, preventing data from being widely shared if it doesn't have to be for a particular purpose, giving users the option to opt out of their data being collected, and letting website owners opt out of their information being used to train models. So it's about making it a user choice as to how much data can be shared, and using those techniques to keep it private and secure. And finally, promoting transparency and accountability, which your survey also brought up as one of the primary concerns. There's a lot of work happening across the board; Google participates in many global fora when it comes to privacy. We've done a lot of work recently on explainable AI techniques, for example proving data provenance or content provenance. We've introduced tools like SynthID, which is a way to watermark content that's generated by AI, so synthetic or adapted content can easily be identified, whether it comes out of an image generation model, a text model, or a video model. We have asked our advertisers to disclose if any contents of their ads are generated by AI, particularly in sensitive contexts like elections, and we've introduced new policies to make sure electoral ads follow particular rules. We have also asked creators to disclose if their content is generated through AI.
We also provide information about images on Search, where you can look at where an image appeared originally, whether it has been modified, and what its history has been on the internet, so there is a provenance trail that we can track back. So across those four domains, equitable access, mitigating bias, privacy and security, and transparency and accountability, there's a huge role to play, and, as you said, it's a multi-stakeholder dialogue; it's very important that all players are part of it. We'd like to be part of the convening where we can. One example is in the UAE, where we've started something called an AI Majlis, where we bring together stakeholders from across academia, other organizations, and the tech industry, as well as governments, to discuss responsible policymaking. I think those kinds of fora help with taking some of the discussions happening at IGF and elsewhere to a more local level and continuing them there.
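One of the privacy techniques mentioned above, differential privacy, can be made concrete with a minimal sketch. This is an editor's illustrative example, not any company's implementation: the function names, the epsilon value, and the survey numbers are all assumptions, and the classic Laplace mechanism shown here is only the simplest instance of the idea.

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Draw one sample from a zero-mean Laplace distribution via inverse CDF."""
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count: int, epsilon: float, rng: random.Random) -> float:
    """Release a count with epsilon-differential privacy.

    For a counting query the sensitivity is 1 (adding or removing one
    person changes the count by at most 1), so Laplace noise with
    scale = 1/epsilon satisfies epsilon-DP.
    """
    return true_count + laplace_noise(1.0 / epsilon, rng)

# Hypothetical example: publish roughly how many survey respondents
# use Gen AI tools without exposing any single respondent's answer.
rng = random.Random(42)
noisy = private_count(6250, epsilon=0.5, rng=rng)
```

Smaller epsilon means more noise and stronger privacy; the published figure is close to the true count in expectation, but no individual's presence in the data can be confidently inferred from it.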

Fadi Salim: Thank you, Martin, and this brings us to Jill. Jill, you come from the IEEE, which is a standards organization, but its structure for doing things also involves a lot of horizontal working groups across domains, across jurisdictions, across the world. The same thing happens in the ICANN ecosystem: this multistakeholder model enables a lot of people to participate in creating something, or at least be included in a discussion that could eventually represent them or what they want as an outcome. Now, the question is that our region, and this is something we hear a lot from these organizations, does not participate enough. And it's not just this region; the Global South in general, to use that term, has the same kind of issue. Is this a question of capacity, or awareness, or other reasons that we are not involved at a mass scale, whether it's the researchers and academics, the experts, or the technical community? What is your view, coming from a standards organization that functions in such a model? Thank you.

Jill: Thank you for the opportunity, and also for the question. So, IEEE, as you say, is a standards organization, but not in the traditional sense of just developing standards. It is a non-profit organization that is completely volunteer-based, in the sense that all the people who participate in the standards are volunteers, and they organize into working groups. You can think of it as a grassroots, bottom-up approach, as opposed to a standards process that comes from governments and trickles down all the way to consensus at the engineering level. It's more the engineers deciding that they need to build the standards. For example, Wi-Fi, IEEE 802.11, was built this way because it addressed a need, and by doing it this way you are able to do it fast. Now, that's one aspect of it. The fact that it is grassroots is something that maybe we don't advertise enough, because it's open to everybody: everybody can participate in the working groups, everybody can contribute. And if we have a problem of inclusiveness at the level of AI governance, I think the problem has two sides. There is one side, as Martin and others described before, which is about how you import and localize technology in Global South regions, and I don't like the term Global South; I mean outside of Western Europe and North America, right? We have to be able to localize, and it is very important to localize. But there is also the fact that we can contribute. If we don't contribute in these countries the same way others are contributing, then how can we expect to have our values and cultural aspects reflected in the technologies that we use? So we have the opportunity to contribute to these standards as well, and once they are standards, they get adopted.
And they very often get adopted through regulatory channels in many different regions. That also helps address the other point you brought up, which is how we address standards or regulations at the country level versus the regional level. Pierce McConnell of DigiConnect at the EU level, who was at the plenary yesterday on misinformation and disinformation, reflected on what helped the EU: the EU was a consumer society of AI, pretty much like the Global South is, but it managed to get its voice heard by getting its needs as a consumer society reflected through EU regulations. This is what GDPR was about. The Brussels effect is really about enabling a build-up of needs at a regional level so that these needs can be taken into account by technology developers and integrated into solutions. But this is outside of IEEE. To go back to the IEEE question, the interesting part is that IEEE, beyond standards, also offers other ways that are very useful. We are collaborating on one of these, which is about capacity building. Capacity building is really about the ability to build capacity, and it starts with people, then with data, and then ultimately with compute, in that order, in the sense that you first need literacy. The survey showed us that we don't have a clear definition of AI, we don't know exactly what AI ethics is, what responsible AI is, what the difference between them is, or why there is a need for trustworthy AI. There is a need for literacy in the first place. That literacy, by the way, is needed everywhere, not just in this region. It's needed by the city manager in North America as much as by the government service provider in a GCC country.
That literacy then allows you to understand that you have an issue with data representativity, as Martin reflected: many of these algorithms are built on data, and that data reflects certain societies and cultures. If your data is not represented, you might be, not necessarily, but you might be misrepresented at the algorithmic level, and the outcomes might not be beneficial for you. You might have bias, transparency issues, or other problems associated with it. From that perspective, capacity building allows you to contribute as well: participating in working groups in standards, going through capacity-building efforts such as the one we are developing, and, last but not least, having the ability to localize content. We came to the conclusion that you can develop a solution and get it adopted and spread in a specific region or country, but at the end of the day the people who will implement it might not all be English speakers. So you need the ability to translate these solutions into local languages, to adjust them to the cultural representativity of your populations, and to have the data as well: to avoid data under-representation, or at least compensate for it with local data or with processes around the solutions that let you avoid the kinds of bias issues you can otherwise find.

Fadi Salim: Thank you. And this covers the grassroots element: awareness, diversity, inclusion, access. I'll come back to you, Dr. Nibal, and ask about the higher level of representativeness in the AI governance ecosystem. At ESCWA, as a regional UN body, you deal with member states: with regulators, ministers, and stakeholders at the top of the governance ecosystem who are tasked with representing their countries in the global fora around AI governance, digital inclusion, and other areas. Based on your experience, and you highlighted this, some countries have structural restrictions, conflicts, and so on, but some countries also do not have a seat at the table even though they have something to offer in these global fora. How can these voices, and this is not just about our region, it can also be an issue for the rest of the Global South, be represented at the higher levels of the governance ecosystem around AI, where decisions are usually made about AI safety, AI measurement, AI standards, and AI ethics? These decisions have real implications for their countries, yet they have no seat at the top of the chain. Do you think there is a way for a mechanism to exist that represents these Global South countries, especially in AI, given that these discussions are currently more of an elite club?

Nibal Idlebi: Yeah, in fact, it is quite complex, let me say that. First of all, I would like to return to what Jill was saying and mention that there is a technology gap between the South and the North, and behind the scenes, everything is related to that gap. There is a big gap nowadays between the developed, Northern countries and the Southern countries, and from it we can derive a lot of issues. It is a gap that is sustained over time, and sometimes it becomes bigger. The digital or technology gap, and I say digital or technology because I am also thinking about big data, is unfortunately widening; we are not easily bridging it, even in digital technology. But returning to your question, I believe we have to work at different levels. From one side, we have to work at the decision-making level, because it is the decision makers who determine, in a way or another, their countries' participation in the global fora; it is the ministries or regulatory authorities who are visible vis-a-vis these global fora, like the IGF, for example, or WSIS, or the Global Digital Compact. These decision makers should be aware of the importance of their engagement and participation. Here, maybe we can approach this through the countries which are more advanced in the South, like KSA and UAE in the Arab region, who can afford it and can push the discussion further. So from one side the decision makers, but we also have to build the capacity of the people who will go and discuss.
Here also we sometimes see a gap in capacity, because people cannot argue the matters strongly enough. Because of the digital or technology gap, we have to admit that in the South we are more users of the technology than developers of it. Speaking about AI technology, most of the time it is developed in the US, or maybe in Europe; you cannot find many global solutions being developed here in the region, only very local solutions, not the big solutions from the big companies. So there is the capacity building of those who will be arguing, the technical people or academic people, if that is the need. I believe we have to be more proactive in these two areas, and maybe build some consensus at the regional level, to have representation from the different regions. Here we can collaborate with other regions as well: Africa, for example, the Arab countries, and some Asian countries, which are quite developed today. This consensus building between Southern regions matters because we have similar issues, for example the issue of language, the technology gap, and so on. So there are the decision makers, and at least practitioners should be really aware of the issue. And if we want to go even deeper, you mentioned that it was the consumers that generated GDPR; it would be fantastic if we could spread this knowledge among users as well. Here it would be the NGOs, and I think the IGF is a very good forum for building the capacity of NGOs and users, from the user perspective. So I believe we really need
to build capacity and to convince decision makers at the highest level to participate, and to show them the value of their participation. I think some of them are aware today; I have seen representatives of KSA or the UAE many times participating in the international fora, Oman also sometimes participates, and Egypt sometimes participates. But there is also this building of networks between the regions of the South, because we have the same issues, so we have to push our agenda more than we are doing today.

Fadi Salim: Great. As a school of government, we come from that point of view. To add to your point, we do deal with people in senior, mid-career, and high-level positions around the region in terms of capacity building and leadership development, so there is definitely something that can be done in terms of capacity building at the highest level as well. In many countries around the region, leaders need to be aware, but sometimes they have to develop the capacity first, and then there are the issues around language, access, and financial matters that do not allow them to access these fora; all of these belong in leadership capacity building. So thank you for that.

Nibal Idlebi: Let me just add one thing that I had forgotten: the private sector. The role of the private sector and its associations matters, because technology is developed by the private sector, so they have to be in the loop as well.

Fadi Salim: Absolutely. Private sector: back to you, Martin. I wanted to ask you about the data that Selma presented. This survey covered hundreds of small and medium enterprises and startups across ten countries in the region working specifically in the AI domain, and in the data Selma showed, they identified regulatory uncertainty as a concern. Even in interviews, some of these companies felt less willing to participate and share their issues with us as researchers; there is this culture of "why do I need to share?". I will leave it to the anthropologists, such as Selma, to understand why this is happening. But from your point of view as a private sector leader, this regulatory uncertainty was highlighted in many of these discussions as a reason for holding back. How do you balance this trade-off? In our region, some countries have a clear direction around regulations on AI, but others do not have anything, and this uncertainty is causing many of our companies, but also individuals who are interested in being included, to hold back. Do you have a view on this trade-off that is happening right now, at least in the private sector and the small business enterprises you might be involved with?

Roeske Martin: Thanks, Fadi, great question. I think you made a great point that came out in the research, which is that it's maybe not the absence of regulation but that level of uncertainty that is holding the private sector back a bit. I'd like to offer a slightly different perspective, because I feel this region is actually getting quite a lot of things right when it comes to governance of AI, and we see that a lot of the focus over the past couple of years has really been on ethics and principles. First, countries adopted national AI strategies, and pretty much every country in the region now has one at different levels of implementation, in many cases with international input and support from the private sector, before developing their ethics, principles, and guidelines. What they haven't done yet is come out with hard regulation and laws similar to the AI Act, and in our mind that's not necessarily a bad thing. There has been an intentional wait-and-see attitude, to first let the technology get to a point where you can see the actual use cases in practice. A lot of countries in the region, Saudi Arabia being a good example, the UAE and others, have implemented regulatory sandboxes where you can try new technologies in a safe environment with controlled implementation, and we've seen some great investment in things like Arabic LLMs. You've got the Falcon model in the UAE, you've got ALLaM here, you've got Fanar in Qatar, all of which are available for researchers around the world to work on. So we're starting to see greater data sets from the region, and actually homegrown technology as well, enabled in part by a slightly hands-off attitude to regulation. Now, Google has published a lot about what kind of regulation we think makes sense. We talk a lot about being both bold and responsible: how do you get that balance between protecting users from harm while keeping innovation open and flourishing?
And I think most countries here, in the Gulf at least, will try to position themselves as the next global AI hub. They're not just thinking regionally; they're thinking globally: how do we build the infrastructure that will attract businesses, small and medium enterprises, and AI startups to see this as a place from which they can grow and flourish in the world? So they are investing in energy, alternative energy, green energy, building data centers, et cetera, and a lot of countries built their strategies around attracting talent, companies, and business. That's on the positive side. A lot of regulators ask me: what should we do? What's the right way to go about implementing AI regulation? Should we even focus on AI regulation as such, or should we first focus on filling the gaps in our existing regulation? And I think that's a very good point to take home: whatever was illegal without AI probably should be illegal with AI. It's about making the regulation that already exists adapt in such a way that AI is included in the thinking. That doesn't necessarily mean that one has to regulate all the inputs that go into developing models, the scientific breakthroughs; it's more about regulating the harmful outputs. And I think we haven't seen enough outputs in everyday usage, beyond the sandboxes and some of the use cases I mentioned, to know quite yet where the regulation needs to focus. So apart from the principles I mentioned earlier, which are the goals of broader global governance, I think the regional specifics are still being worked through. We talked about language quite a bit earlier, and Arabic in particular, so I just want to give some interesting data points. One: you know there is a product called Google Translate. At the moment it exists in just under 260 languages. We started this project 20 years ago, and we're now at 260-odd languages.
Of those 260, 110 languages were added in the last six months, thanks to AI. Twenty years of development, and then 110 languages in six months. So this is about creating an inclusive way of accessing the technology. Another interesting thing I learned just a couple of weeks ago is that Gemini, our generative AI tool, has more daily active users in the MENA region, in Arabic, than in the US. Which is crazy to think of, but it's a testament to the appetite that exists in the region for actually using these tools. The fact that you have a very young demographic, and that people are generally open to technology and tend to be more optimistic about using it, means there is an almost instant embracing of what's available, especially if it's available in language. So that's why I said I have a slightly different perspective; I tend to be a bit more optimistic about where the region can go with this, and I think they are getting a lot of things right. Why is the private sector still a bit hesitant? I think there are other underlying factors: the funding streams for startups, investment in SMEs, how easy it is to get access to finance. And things like data privacy: even though all the countries in the region now have some form of data privacy law, the implementing regulations are still missing, so you don't quite know how to comply; there's a lot of ambiguity around it. So focusing on finishing the job on some of those issues, which are really the enabling mechanisms and policies for AI, should in my mind maybe be the first priority right now.

Fadi Salim: Great, thank you, this is very insightful. And you highlighted the standards. Jill, while Martin was telling us how this region probably has more Gemini and Gen AI product users than the US, if you extrapolate from that, there are ethical implications for this in our region, right? At the IEEE, you have set out both specifications and a lot of deep research into AI ethics, as well as capacity building for AI ethics assurance, which we are currently also working together to adapt to the region. Given this massive explosion of AI use, and at the same time the lack of ethical standards, regulations, or systems in place for our region to govern that use, do you feel this can lead to misdeployment, to creating some victims in society? And if so, is that an argument for more inclusion, more regulation, or something else? I know this is a very complex question, but I trust that you can address all of these.

Jill: Thank you, Fadi. In a nutshell, I think it's important to acknowledge that without the contribution of the private sector, it would be very hard to achieve literacy and capacity building in the region. When I run courses in Africa, I run them on Colab, on Google Meet, on tools that are made available to me by the private sector. So it is very important to acknowledge the enabling nature of the private sector in this. From a regulatory standpoint, what is interesting to see is that many countries, even in Europe now, are starting to look at it as: regulation is good in terms of protection, but it should not come at the expense of innovation. There is a kind of turnaround, in the sense that people are coming to realize that you should not go too far in one direction or the other; Europe stands somewhere in the middle on both of these. So I think this region has the option, and has grabbed it, to leapfrog some of these issues and grow into an environment, some countries at least, not the whole region, where they can be even ahead of some European countries in terms of AI deployment. Again, it's not an us-versus-them kind of thing, and it's very important to acknowledge that you cannot do this work in AI without the private sector. This being said, AI at the end of the day is a tool. It's not the AI that can be good or bad or nefarious; it is the person behind it, and the use it is put to, that decides whether it turns out well or not. The problem you face, I think, is where you try to use AI to bridge a gap that you don't know how to bridge otherwise. Suppose you are in a situation where you don't have enough resources to fulfill something and you say, oh, generative AI is going to do it. We had the same issue 10 years ago with chatbots.
Chatbots were going to fix it. Well, chatbots did not fix it. Generative AI might be able to fulfill some of these roles, but the condition is that the use cases need to be well defined in the first place, because without that, you don't have trust, and if you don't have trust, you don't have adoption of the AI. At the end of the day, AI is an anthropomorphic user interface: it interfaces with humans by behaving like a human. It is basically assuming more and more, whether autonomously or through augmentation, roles and decisions that were left to humans before. So what we expect from it is trust, the same way we expect trust from humans in the way they behave. So the trust in the solution is very important. How do you build that trust? It's very important to look at the use cases and, from an inclusivity perspective, to take into consideration all of the stakeholders that need to be involved. That is a role for individuals participating in standards, for private entities being very inclusive of all stakeholders in the way they do things, and even for governments pushing to empower the different civil society groups to achieve it. But AI is a magnifying glass at the end of the day: everything it does gets magnified. If you don't prepare it well, it will get magnified in the wrong way, and you will see more negative aspects than positive ones. So it's important to look into it from the onset, and in order to do that, as Nibal was saying, you need the ability to understand what it is about. If you don't know what AI is, then you are just relying on what the provider tells you it is, and at that point you have no say in its implementation.

Fadi Salim: Well, great. Thank you. I know we're coming to questions, and we have questions lined up. If you're eager to ask your question, please go ahead. Do you have a mic? Ah, here it is.

AUDIENCE: I'm sorry for being so anxious, but I'm a panelist in another workshop and I'm late, yet this workshop is very important. So I have two quick questions for the panelists. The first: it was said that without data, we don't have local solutions; we need local data. So my first question is, what do you think is the best approach to encourage local data? Should it be through regulatory means, or what other incentives could be used? And if there is some best practice already, please share it. My second question is mainly for Dr. Roeske. One of the big gaps is that once a developing country has the data, where does it run it? That requires huge computing power; we don't have enough data centers, so imagine the computing power we would need. That's a gap that is growing. Maybe it's already happening, but could there be, in the future, a business model in which the big companies that already have the hardware offer, as a service, to run the local data of countries for training models and other things, for developing countries that do not have the infrastructure to run the data on their own machines? Could that be a way to buy some time until those countries have their own hardware?

Fadi Salim: Thank you. Two questions. So who would like to start? Okay, so local data.

Nibal Idlebi: I believe there are some initiatives, some practices in a way or another, to encourage local data. I mean, there are some initiatives; I believe even Google did one at one point for encouraging teachers and students to develop and to contribute their data or whatever research they are doing. But we can copy the example and make it for users in general, for the local community, for students, for teachers, and so on, and create some initiatives or some awards in a way or another. I mean, awards are very capturing; they might be a solution to capture some data. Then there are some practices to encourage citizens and to encourage people to provide their own data through specific initiatives. And I agree with you, there should be some initiatives, some incentives. I mean, of course, for local governments, we can have the data collected from e-government, or digital government. For these data, we need to encourage the locality or government to open their data in order for it to be used. And this is one of the examples mentioned by Fadi. This open data is very important, I believe, but it needs some effort even from the government to clean the data, to put it in a proper way. But there are some initiatives that could encourage or accelerate the generation of data. Because through digital government, you have a lot of data with the government, a lot. I mean, okay, if you don’t have digital government in the country, then it’s another question. But through these initiatives, from the government, from local government, from institutions, I would say also, you can encourage in a specific field the generation of new data. It could happen. And there are some initiatives.

Martin Roeske: Great question as well. Maybe on the data point first, and then I’ll come to the second part of your question. I don’t know if you’ve heard of Data Commons, but it is an initiative Google has been quite involved in for a couple of years now. And the idea is to take whatever publicly available data there is, clean it, structure it, and then provide insights to anyone who wants to query the data. So it doesn’t become a walled garden of information that’s only available to some people who are willing to pay for it, but creates, if you like, a repository of data that you can then base some of the research questions around, whether that’s environmental data, climate data, health data, et cetera. Governments can make a lot more data available than they’re currently doing. I think there’s a lot of hesitancy around what is sensitive, what is not sensitive. When we worked on data protection laws, for example, in providing best practice and consultations, the default position was, oh, it’s all sensitive, it’s all a national security interest, et cetera. But what people don’t realize is that so much of that data could easily be anonymized or made impersonal, so that you’re not giving away secrets by just sharing the data in a more meaningful way. So I think there is a lot of work that can be done there on just sharing data between departments within the government, but also with the private sector and others. On the other point, what can Google and other tech companies do to include the Global South and the emerging markets in more of the infrastructure development? So first of all, the cost of compute and capacity to run models is going down all the time. A lot of work is happening at Google and other companies to reduce the amount of compute and storage and everything else needed to come up with good functioning AI systems.
And so Gemini 2.0, which was just announced last week, now uses 90% less compute than Gemini 1.5, which is a huge scaling down in terms of that. The other way is to bring a lot of the compute closer to the device and have the algorithms run on the edge or on the user device itself so you don’t need to run it all through global infrastructure. But we do recognize that a lot of global infrastructure will still be needed. And so we and I know other tech companies invest, for example, a lot in subsea cabling and satellite systems. One of my colleagues here just gave a talk on the interplanetary internet that is being developed. So how do you create an infrastructure that you can easily bring to markets that don’t have it today at a low cost? And so a lot of development going into that at the moment. Of course, the capacity building, scaling, all of that is super important as well. And making sure that the universities that exist, there are some very good institutions here, are connected to the research that’s happening in other parts of the world, including them. So that would be hopefully…

Jill: Can I add to this?

Fadi Salim: Yeah, please.

Jill: Okay. So I’m just going to be brief and quick on this. I think there is no one answer for everything. So in the sense of data, right? LLMs today have scrubbed the internet completely and are now generating synthetic data and growing out of synthetic data. I don’t know about Google Translate, maybe that’s part of the uptake, but there is a lot of use of synthetic data there. So how do you generate data when you don’t have enough data? This could be very interesting for the Global South, and help in generating synthetic data can be very helpful to make sure that there is no under-representation at the data level in the Global South. That’s one. The second thing is, it depends on what AI we’re talking about. We are always now talking about LLMs and agents on top of LLMs. But if you go all the way down to the neural nets that are just the layers below, in terms of sophistication, I would say, or building blocks, you can do a lot with tools that are available from private entities, like Colab, for example, available for free. And you can even do a lot with less computationally hungry algorithms that give you a lower performance level, but that is still enough in many Global South countries. Like, do I achieve 70% accuracy, but I don’t need to use a data center, I can run it on my local PC? Or do I go for 98% accuracy, but then I’m dependent on an external data center? I have choices, essentially. So there are ways to adjust all of this. But in order to know that I have choices, I need the literacy for it in the first place.

Fadi Salim: Thank you. Let’s go.

Nibal Idlebi: Maybe GIS data or satellite information are also useful in some cases. Thank you.

Fadi Salim: Oh, I have the mic. OK, thank you very much for your question, and we look forward to your panel afterwards. All right. As we started the questions, if there are any other questions on the floor, I’ll come back to you. But I would like to take one question online. We have a vibrant community out there, clearly. Let me read the question to you: What common cultural aspects deter the participative appetite of the Global South? For instance, how are public and private board members selected? Those members are responsible for overseeing cases, assessing data readiness, internal governance, and so on. Who assesses board members and certifies them as AI-worthy? How does culture affect such aspects? This is a very good question. All of you have boards, one way or another. So is there anything to be learned on how these cultural aspects deter participation in these boards in our region or in the Global South in general?

Fadi Salim: Any insights or thoughts?

Martin Roeske: Just one quick thought around good practices I’ve seen governments adopt in the region, which is, for example, implementing chief AI officers or AI officers across different government departments and empowering people to actually learn about the technology and then lean into those conversations when it comes to multi-stakeholder dialogue. So yes, there is some capacity building to be done still, but appointing people, giving them the mandate, and putting structures in place that actually allow this dialogue to happen is a very good first step.

Fadi Salim: Great, thanks. Any other comments on this? OK, we can move on. You have one?

Jill: Very quickly, I think one important aspect here is, once these organizations or structures are put in place, how do you make sure that the people who are in charge are actually effective and have the capacity for it? So this is where they can be certified or authorized or basically recognized for their capacity. It’s like continuing education. We hear that a lot in the MENA region: how do you make sure that people are always catching up with the technology that they are using or for which they need to make important decisions? And there are courses and structures, like the ones that MBRSG offers, that can provide these trainings.

Fadi Salim: Thank you. And thank you, Sami Assa, for the question. Now we move on to another question from the floor. Can you please, let’s give you a mic. Yeah, because you need, everybody else needs to hear you.

Jasmin Alduri: Perfect. Hi, my name is Jasmin Alduri. I’m the co-director of the Responsible Tech Hub. We’re a youth-led non-profit focusing on responsible tech, so the name already gives it away. I really like the aspect that Dr. Nibal brought up about the Global South not being as involved in AI development and that mostly the Global North or the Western countries are doing it, because I 100,000% agree on this. However, there is one aspect of training AI that is happening in the Global South, meaning most of the labelling is actually happening in the Global South, with click workers doing the main work and to some extent also being exploited. So my question for the round would be: how can we make sure that click workers, and the Global South specifically, not only feel included but can actually benefit from being part of that development stage of AI?

Fadi Salim: Who are you targeting your question to? I think it’s coming back to Martin, but this is something that is common across AI. So in a way, the question is how to protect these workers from being exploited rather than exclude them, because it happens. So are there any measures that Google or technology companies are applying to ensure that this is properly governed, that we might learn from?

Martin Roeske: I think beyond skills programs and helping developers and people working in those industries, in the click content work that you mentioned, a lot of it is, of course, encouraging local businesses to adopt some of these technologies, and the startup ecosystem to take those technologies and build things that are regionally relevant from the ground up. And so one of the things we’re focused on a lot is working with the startup ecosystem in particular. We just had a program last year called the Women AI Founders Program, because we found that there is a huge gap in women founders in general; I think in the MENA region only 3% of startups are run by women. And the funding gap is even worse: 1% of funding goes to women. So we realized that there is a huge amount of talent in the region that is not tapped into properly, and that help and support is required. So we run these accelerator programs for different groups of startups. We started generically, looking at AI startups; we’re now starting to go into more thematic approaches, whether it’s around health or education or fintech, even gaming. We’re doing accelerators. So there are some very interesting sectors here, particularly in economies that are trying to diversify away from fossil fuels, et cetera, and encouraging the build-out of new economic sectors where there’s a lot of opportunity. And it’s just making sure that the ecosystems are there, the platforms that all this talent can tap into and work with, and that there are pathways to success, right, that people don’t get stuck. They graduate, they have a degree, and then they know where to go. There need to be the jobs to go with it.

Fadi Salim: I think, yeah, I want Jill to comment on this, because IEEE has ethical specifications for AI development, procurement, you name it. So is this embedded already in what you, as an assessor of the ethical application of AI, can look into or want to look into?

Jill: Certainly from an ethical perspective, part of the process of evaluating a solution is taking into consideration, inclusively, all of the stakeholders, including the labelers. So if you take the labelers in Kenya doing labeling for some big company, for some LLM work, and being mentally impacted by it, for example, or being underpaid for it, this is certainly something that would be caught at the identification stage, what we call the ethics profiling at the use case level. Now, this is part of, for example, the IEEE CertifAIEd assessment framework, which allows you to assess solutions. So irrespective of whether or not you have AI governance, if you have a solution today and you want to know whether it is ethical or not, you can go through a very detailed and formal process that will allow you to do this. But I’d like to tackle the question a bit differently as well. For every challenge, there is an opportunity. And in a sense, AI costs a lot of money. It costs a lot of money to so-called Western countries, which will go for cheaper labor in some developing countries, right? But at the same time, it is an opportunity, as Martin was referring to, to grow capacity building in these developing countries and grow even the capacity for local employment and local expertise. And once you have that local expertise, then you can afford more AI solutions, because you can afford local salaries instead of having to pay for external salaries. And on top of that, closing the loop on the programs: the programs allow you to become an authorized assessor under the IEEE CertifAIEd program. And once you are an authorized assessor, you can work worldwide, anywhere, so you can compete with others, and it opens up opportunities that don’t force you into a niche market of labeling or specifically doing some tedious tasks. So there are opportunities there that go beyond just the current economics that we see.

Fadi Salim: Thank you. And we still have around 10 minutes to go. You have a question to the panelists. If I may, if you allow it. Yeah, but that will mean that they will have to ask you a question.

Nibal Idlebi: If I may just ask a small question. We know that UNESCO has published ethics guidance for AI. I want to know from you, as the private sector or IEEE, to what extent are you applying these international ethics, I mean, those of UNESCO, for example, in AI?

Martin Roeske: I’ll start, and then, Jill, I’ll let you weigh in as well. So we published our AI principles back in 2018, quite a while ago, which defined what we will and what we won’t do with AI. And I think a lot of those have since been incorporated into some of the global governance standards as well. So it’s important not just to keep checking and assessing, but also to build these principles into the product from day one, right? And so when DeepMind, one of our units that works a lot on AI, develops a product, it does so with those principles in mind. And this predates Gen AI, by the way, by many, many years. Our CEO declared Google to be an AI-first company in 2017, I think. And it’s now in all the products, right? It’s in Search, it’s in YouTube, it’s in Maps, it’s in… And so there is an established practice now of how you take these principles and build them into products, and there are working groups and product teams that check this on a very, very regular basis. So yes, I would say these principles are very much part of our everyday life, and there are whole groups dedicated within the company to working on it. Would you like to comment? Yes.

Jill: Maybe quickly I’ll answer, and then you can grab the mic. So I have to say, since you are opening the door for it: IEEE actually pioneered socio-technical standards, which address the social impact of technology. Back in 2016, the first Ethically Aligned Design framework was built, from which a lot of recommendations and standards came out, including for software development, similar to what Google did, or for assessment, like IEEE CertifAIEd, or for procurement, or even for governance. So all of this is the work of this grassroots community. The EAD principles actually were very much used in the UNESCO principles. So we are applying them from the start, and we are promoting their use worldwide.

Fadi Salim: Sorry. You have a question? Yeah. Okay. There’s a mic over there.

Lars Ratscheid: Thank you. My name is Lars Ratscheid. I’m from Germany, just like Jasmin, and I work in international cooperation. Now, all three of you gave examples of regulation and governing standards for AI. But bringing it back to the title of the session: how was the Global South, or the global majority, involved in each of these, at Google, at UNESCO, and at IEEE? Thank you.

Fadi Salim: So I guess this is a closing question, right? It’s a closing question because we’re almost out of time. But in a way, it’s an important question. How can we learn? Do you have some examples of how inclusion happens within your organization? I know IEEE has massive working groups, activities, volunteers, et cetera. Maybe starting with you, Jill: tell us more about the examples of inclusion that exist in AI.

Jill: Sure. So IEEE has chapters in every country in the world. So there is representation from every country in the world, and every country is encouraged to work with our chapter and get…

Fadi Salim: Recording didn’t work. I can’t hear you.


Nibal Idlebi


Lack of representation in global AI forums

Explanation

Nibal Idlebi points out that countries in the MENA region are not adequately represented in global AI governance discussions. This lack of representation limits the region’s ability to influence AI policies and standards.

Evidence

Nibal mentions that often only one country from the Arab world attends international forums, and their contributions are siloed and weak.

Major Discussion Point

Inclusion in AI Governance in the MENA Region

Need for capacity building at decision-making level

Explanation

Idlebi emphasizes the importance of building capacity among decision-makers in the MENA region. This includes raising awareness about the importance of engagement in global AI governance and developing the skills needed to participate effectively.

Evidence

She suggests that decision-makers should be convinced of the value of their participation in international forums.

Major Discussion Point

Inclusion in AI Governance in the MENA Region

Agreed with

Jill Nelson

Roeske Martin

Agreed on

Need for capacity building and literacy in AI

Differed with

Roeske Martin

Differed on

Approach to AI regulation

Lack of local data limiting AI development

Explanation

Idlebi highlights the shortage of local data as a significant barrier to AI development in the MENA region. This lack of data hinders the creation of locally relevant AI solutions and perpetuates dependence on external data sources.

Evidence

She mentions that through digital government initiatives, a lot of data is available with the government, but it needs to be cleaned and made accessible.

Major Discussion Point

Data and Infrastructure Challenges

Agreed with

Jill Nelson

Roeske Martin

Agreed on

Importance of local data for AI development

Need for initiatives to encourage local data generation

Explanation

Idlebi suggests that initiatives are needed to encourage the generation of local data in the MENA region. These initiatives could involve various stakeholders and use incentives to promote data collection and sharing.

Evidence

She proposes ideas such as awards, initiatives for citizens, and encouraging governments to open their data for use.

Major Discussion Point

Data and Infrastructure Challenges

Differed with

Jill Nelson

Differed on

Focus of inclusion efforts


Jill Nelson


Importance of private sector involvement in enabling inclusion

Explanation

Jill emphasizes the crucial role of the private sector in enabling inclusion in AI governance. She argues that without private sector contributions, it would be challenging to achieve literacy and capacity building in the region.

Evidence

Jill mentions running courses in Africa using tools like Colab and Google Meet, which are made available by the private sector.

Major Discussion Point

Inclusion in AI Governance in the MENA Region

Agreed with

Roeske Martin

Agreed on

Role of private sector in enabling inclusion

Differed with

Nibal Idlebi

Differed on

Focus of inclusion efforts

Need for literacy and capacity building to enable meaningful participation

Explanation

Jill stresses the importance of AI literacy and capacity building to enable meaningful participation in AI governance. She argues that without understanding what AI is, people are reliant on what providers tell them, limiting their ability to influence implementation.

Major Discussion Point

Inclusion in AI Governance in the MENA Region

Agreed with

Nibal Idlebi

Roeske Martin

Agreed on

Need for capacity building and literacy in AI

Use of synthetic data to address data scarcity

Explanation

Jill suggests the use of synthetic data as a potential solution to address data scarcity in the Global South. This approach could help generate representative data when real data is insufficient or unavailable.

Evidence

She mentions that LLMs today have scrubbed the internet and are now generating and growing out of synthetic data.

Major Discussion Point

Data and Infrastructure Challenges

Agreed with

Nibal Idlebi

Roeske Martin

Agreed on

Importance of local data for AI development

Need to consider all stakeholders, including labelers, in ethical assessments

Explanation

Jill emphasizes the importance of considering all stakeholders, including data labelers, in ethical assessments of AI systems. This inclusive approach ensures that the impacts on all parties involved in AI development are taken into account.

Evidence

She mentions the IEEE certified assessment framework, which allows for the assessment of AI solutions from an ethical perspective.

Major Discussion Point

Ethical Considerations in AI Development

IEEE’s work on social-technical standards and ethical frameworks

Explanation

Jill highlights IEEE’s pioneering work on social-technical standards and ethical frameworks for AI. These standards and frameworks provide guidance for responsible AI development and implementation.

Evidence

She mentions the ethically aligned design framework developed by IEEE in 2016, which has influenced various recommendations and standards.

Major Discussion Point

Ethical Considerations in AI Development

Opportunity to grow local expertise and employment

Explanation

Jill points out that the challenges in AI development also present opportunities for growing local expertise and employment in developing countries. This could lead to more affordable AI solutions and increased local capacity.

Evidence

She mentions the potential for local employment and expertise growth, which could make AI solutions more affordable through local salaries.

Major Discussion Point

Fostering Local AI Ecosystems


Roeske Martin


Regulatory uncertainty holding back private sector

Explanation

Martin highlights that regulatory uncertainty, rather than over-regulation, is holding back private sector involvement in AI development in the MENA region. Companies are struggling to navigate a regulatory landscape that is still taking shape.

Evidence

He cites the research findings showing that companies are supportive or neutral towards regulation, but face uncertainty about the regulatory direction.

Major Discussion Point

Inclusion in AI Governance in the MENA Region

Differed with

Nibal Idlebi

Differed on

Approach to AI regulation

Gap in computing power and infrastructure in developing countries

Explanation

Martin acknowledges the gap in computing power and infrastructure in developing countries, which limits their ability to run large AI models. However, he also notes ongoing efforts to reduce the computational requirements of AI systems.

Evidence

He mentions that Gemini 2.0 uses 90% less compute than Gemini 1.5, indicating progress in reducing computational requirements.

Major Discussion Point

Data and Infrastructure Challenges

Potential for data commons and public data sharing

Explanation

Martin suggests the potential of data commons and increased public data sharing to address data scarcity issues. This approach could make more structured, clean data available for AI development and research.

Evidence

He mentions Google’s involvement in the data commons initiative, which aims to clean, structure, and provide insights from publicly available data.

Major Discussion Point

Data and Infrastructure Challenges

Agreed with

Nibal Idlebi

Jill Nelson

Agreed on

Importance of local data for AI development

Importance of building ethical principles into products from the start

Explanation

Martin emphasizes the importance of incorporating ethical principles into AI products from the beginning of development. This approach ensures that ethical considerations are integral to the product, rather than an afterthought.

Evidence

He mentions that Google published AI principles in 2018 and has since incorporated these principles into product development processes.

Major Discussion Point

Ethical Considerations in AI Development

Need to encourage local businesses and startups to adopt AI

Explanation

Martin stresses the importance of encouraging local businesses and startups in the MENA region to adopt AI technologies. This approach can help build a robust local AI ecosystem and drive innovation.

Evidence

He mentions Google’s programs like the Women AI Founders Program and various accelerator programs focused on different sectors.

Major Discussion Point

Fostering Local AI Ecosystems

Agreed with

Jill Nelson

Agreed on

Role of private sector in enabling inclusion

Importance of creating pathways to success for local talent

Explanation

Martin highlights the need to create clear pathways to success for local talent in the AI field. This includes ensuring that there are job opportunities and support systems for graduates and emerging professionals.

Evidence

He mentions the need for jobs to go along with degrees and the importance of building out new economic sectors.

Major Discussion Point

Fostering Local AI Ecosystems

Agreed with

Nibal Idlebi

Jill Nelson

Agreed on

Need for capacity building and literacy in AI

Role of regulatory sandboxes in enabling safe experimentation

Explanation

Martin points out the positive role of regulatory sandboxes in enabling safe experimentation with AI technologies. These sandboxes allow for controlled implementation and testing of new technologies.

Evidence

He mentions examples of countries like Saudi Arabia and UAE implementing regulatory sandboxes for AI experimentation.

Major Discussion Point

Fostering Local AI Ecosystems


Jasmin Alduri


Exploitation of click workers in Global South

Explanation

Jasmin Alduri raises concerns about the exploitation of click workers in the Global South who are involved in AI development, particularly in data labeling. She questions how these workers can benefit from their involvement in AI development rather than just being exploited.

Major Discussion Point

Ethical Considerations in AI Development

Agreements

Agreement Points

Need for capacity building and literacy in AI

Nibal Idlebi

Jill Nelson

Roeske Martin

Need for capacity building at decision-making level

Need for literacy and capacity building to enable meaningful participation

Importance of creating pathways to success for local talent

All speakers emphasized the importance of building capacity and literacy in AI across various levels, from decision-makers to the general public, to enable meaningful participation in AI governance and development.

Importance of local data for AI development

Nibal Idlebi

Jill Nelson

Roeske Martin

Lack of local data limiting AI development

Use of synthetic data to address data scarcity

Potential for data commons and public data sharing

The speakers agreed on the critical role of local data in AI development and suggested various approaches to address data scarcity in the region.

Role of private sector in enabling inclusion

Jill Nelson

Roeske Martin

Importance of private sector involvement in enabling inclusion

Need to encourage local businesses and startups to adopt AI

Both speakers highlighted the crucial role of the private sector in enabling inclusion in AI governance and development, emphasizing the need to encourage local businesses and startups.

Similar Viewpoints

Both speakers proposed innovative solutions to address the lack of local data, suggesting initiatives to encourage data generation or the use of synthetic data.

Nibal Idlebi

Jill Nelson

Need for initiatives to encourage local data generation

Use of synthetic data to address data scarcity

Both speakers emphasized the importance of incorporating ethical considerations into AI development from the beginning, considering all stakeholders involved.

Jill Nelson

Roeske Martin

Need to consider all stakeholders, including labelers, in ethical assessments

Importance of building ethical principles into products from the start

Unexpected Consensus

Positive view on regulatory sandboxes

Roeske Martin

Nibal Idlebi

Role of regulatory sandboxes in enabling safe experimentation

Need for initiatives to encourage local data generation

Despite coming from different sectors (private and public), both speakers showed support for initiatives that allow controlled experimentation and innovation in AI, such as regulatory sandboxes and data generation initiatives.

Overall Assessment

Summary

The main areas of agreement included the need for capacity building, the importance of local data for AI development, and the crucial role of the private sector in enabling inclusion. There was also consensus on the need for ethical considerations in AI development and support for initiatives that encourage innovation.

Consensus level

The level of consensus among the speakers was moderately high, with agreement on several key issues. This consensus suggests a shared understanding of the challenges and potential solutions for AI governance and development in the MENA region. However, there were also some differences in emphasis and approach, reflecting the diverse perspectives of the speakers from different sectors and organizations. This level of consensus implies that there is potential for collaborative efforts in addressing AI governance challenges in the region, but also a need for continued dialogue to address remaining differences and develop comprehensive strategies.

Differences

Different Viewpoints

Approach to AI regulation

Nibal Idlebi

Roeske Martin

Need for capacity building at decision-making level

Regulatory uncertainty holding back private sector

Nibal Idlebi emphasizes the need for capacity building among decision-makers to participate in global AI governance, while Martin Roeske highlights that regulatory uncertainty, rather than lack of regulation, is holding back private sector involvement.

Focus of inclusion efforts

Nibal Idlebi

Jill Nelson

Need for initiatives to encourage local data generation

Importance of private sector involvement in enabling inclusion

Nibal Idlebi emphasizes the need for initiatives to encourage local data generation, while Jill Nelson stresses the importance of private sector involvement in enabling inclusion through literacy and capacity building.

Unexpected Differences

Ethical considerations in AI development

Martin Roeske

Jasmin Alduri

Importance of building ethical principles into products from the start

Exploitation of click workers in Global South

While Martin Roeske focuses on incorporating ethical principles into AI products from the start, Jasmin Alduri unexpectedly raises concerns about the exploitation of click workers in the Global South. This highlights a potential blind spot in ethical considerations that major tech companies might be overlooking.

Overall Assessment

Summary

The main areas of disagreement revolve around approaches to AI regulation, focus of inclusion efforts, and strategies to address data scarcity in the MENA region.

Difference level

The level of disagreement among the speakers is moderate. While there are differences in approaches and focus areas, there is a general consensus on the importance of inclusion, capacity building, and fostering local AI ecosystems. These differences in perspective can be beneficial in developing a comprehensive approach to AI governance and development in the MENA region, as they highlight various aspects that need to be addressed.

Partial Agreements

All speakers agree on the need to address data scarcity in the MENA region, but propose different approaches. Nibal Idlebi suggests initiatives to encourage local data generation, Martin Roeske proposes data commons and public data sharing, while Jill Nelson suggests the use of synthetic data.

Nibal Idlebi

Martin Roeske

Jill Nelson

Lack of local data limiting AI development

Potential for data commons and public data sharing

Use of synthetic data to address data scarcity

Both Martin Roeske and Jill Nelson agree on the importance of fostering local AI ecosystems, but focus on different aspects. Roeske emphasizes encouraging local businesses and startups to adopt AI, while Nelson highlights the opportunity to grow local expertise and employment.

Martin Roeske

Jill Nelson

Need to encourage local businesses and startups to adopt AI

Opportunity to grow local expertise and employment

Similar Viewpoints

Both speakers proposed innovative solutions to address the lack of local data, suggesting initiatives to encourage data generation or the use of synthetic data.

Nibal Idlebi

Jill Nelson

Need for initiatives to encourage local data generation

Use of synthetic data to address data scarcity

Both speakers emphasized the importance of incorporating ethical considerations into AI development from the beginning, considering all stakeholders involved.

Jill Nelson

Martin Roeske

Need to consider all stakeholders, including labelers, in ethical assessments

Importance of building ethical principles into products from the start

Takeaways

Key Takeaways

There is a lack of representation from the MENA region and Global South in global AI governance forums and discussions

Regulatory uncertainty is holding back AI development and adoption by the private sector in the MENA region

There is a need for greater literacy, capacity building, and local data to enable meaningful participation in AI development

Ethical considerations, including the treatment of data labelers and click workers, need to be addressed in AI development

Fostering local AI ecosystems and talent is crucial for inclusion and development in the Global South

Resolutions and Action Items

Appoint chief AI officers across government departments to build capacity and enable dialogue

Develop more accelerator programs and support for local AI startups, especially those led by underrepresented groups like women

Increase efforts to make AI tools and resources available in local languages like Arabic

Unresolved Issues

How to effectively bridge the technology gap between the Global North and South in AI development

How to balance innovation with regulation in emerging AI markets

How to ensure fair compensation and treatment of data labelers and click workers in the Global South

How to address the lack of computing power and infrastructure in developing countries for AI development

Suggested Compromises

Use of synthetic data to address data scarcity issues in the Global South

Implementing regulatory sandboxes to allow safe experimentation with AI technologies while developing appropriate governance frameworks

Leveraging existing global tech infrastructure (e.g. from companies like Google) to enable AI development in countries lacking local infrastructure, while building local capacity

Thought Provoking Comments

There is technology gap between the South and the North. And this is behind the scene, everything is related to technology gap. There is a big gap nowadays between developed country or the Northern country and the Southern country. And from that, we can derive a lot of issues.

speaker

Nibal Idlebi

reason

This comment highlights a fundamental issue underlying many of the challenges discussed regarding AI governance and inclusion in the Global South.

impact

It shifted the conversation to focus more explicitly on the North-South divide and its implications for AI development and governance.

How can it be a race if we don’t all start from the same point? And that technology is an important priority nationally, certainly. But what about things like the inability to read and write? Or things like, you know, not enough access to proper health care.

speaker

Salma Alkhoudi

reason

This comment challenges the framing of AI development as a ‘race’ and highlights more fundamental development challenges faced by some countries.

impact

It broadened the discussion to consider the wider context of development challenges beyond just AI and technology.

We’re now starting to go down into more thematic approaches whether it’s around health or education or fintech, even gaming. We’re doing accelerators. So there are some very interesting sectors here, particularly in economies that are trying to diversify away from fossil fuels, etc. and encouraging the build-out of new economic sectors where there’s a lot of opportunity.

speaker

Martin Roeske

reason

This comment provides concrete examples of how AI development can be tailored to specific regional needs and opportunities.

impact

It shifted the discussion towards more practical, sector-specific applications of AI in the Global South.

For every challenge, there is an opportunity. And in a sense, AI costs a lot of money. It costs a lot of money to countries, Western countries, so-called Western countries, that will go for cheaper labor in some developing countries, right? But at the same time, it is an opportunity, as Martin was referring to, to grow the capacity building into these developing countries and grow even the capacity for local employment and local expertise.

speaker

Jill Nelson

reason

This comment reframes the issue of labor exploitation in AI development as a potential opportunity for capacity building in developing countries.

impact

It introduced a more optimistic perspective on the potential for AI to contribute to development in the Global South.

Overall Assessment

These key comments shaped the discussion by highlighting the complex interplay between AI development, global inequalities, and development challenges. They moved the conversation beyond abstract discussions of AI governance to consider more concrete applications and opportunities, while also maintaining a critical perspective on the challenges faced by the Global South in participating fully in AI development and governance.

Follow-up Questions

How can we build a unified AI ecosystem in a region with very diverse regulatory approaches, different development priorities, and varying levels of digital infrastructure?

speaker

Salma Alkhoudi

explanation

This is important to address the interoperability issues faced by companies trying to scale across countries in the MENA region.

How can we operationalize inclusion better in the AI governance ecosystem?

speaker

Fadi Salim

explanation

This is the core focus of the panel and crucial for ensuring diverse perspectives are represented in AI development and governance.

How can voices from countries without a seat at the table be represented at a higher level in the ecosystem of AI governance?

speaker

Fadi Salim

explanation

This is important for ensuring global AI governance decisions consider perspectives from all regions, not just an ‘elite club’.

What is the best approach to encourage the collection and use of local data?

speaker

Audience member

explanation

Local data is crucial for developing AI solutions that are relevant and representative of different regions.

Could big tech companies provide computing power as a service so that developing countries can train AI models on local data?

speaker

Audience member

explanation

This could help bridge the gap in computing infrastructure between developed and developing countries.

How can we ensure that click workers in the Global South actually benefit from being part of the AI development stage?

speaker

Jasmin Alduri

explanation

This is important to address potential exploitation and ensure fair compensation for workers contributing to AI development.

How was the Global South or global majority involved in developing AI regulations and standards at Google, UNESCO, and IEEE?

speaker

Lars Ratscheid

explanation

This question directly addresses the core theme of inclusion in AI governance from a global perspective.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #194 The Internet Governance Landscape in The Arab World

WS #194 The Internet Governance Landscape in The Arab World

Session at a Glance

Summary

This discussion focused on internet governance in the Arab region, exploring challenges, priorities, and future directions. Panelists emphasized the importance of multi-stakeholder engagement in shaping internet policies and governance frameworks. They highlighted the need for increased participation from civil society and the private sector, noting that funding and awareness were key barriers to involvement.

The conversation touched on several priorities for the region, including enhancing connectivity, addressing the digital divide, promoting digital literacy, and strengthening cybersecurity measures. Participants stressed the importance of aligning regional priorities with the global internet governance agenda.

The role of national and regional Internet Governance Forums (IGFs) was a recurring theme, with speakers noting their potential to foster dialogue and shape policies. The upcoming Arab IGF in Amman was highlighted as a crucial opportunity for regional stakeholders to contribute to the global WSIS+20 review process.

Panelists discussed the challenges of combating misinformation and disinformation, acknowledging the complexities introduced by emerging technologies like AI. They emphasized the need for balanced regulatory frameworks and international cooperation to address these issues.

The future of the IGF itself was debated, with participants calling for a more empowered and sustainable model. Suggestions included improving linkages between IGF outcomes and decision-making processes, and clarifying the distinction between digital governance and internet governance.

Overall, the discussion underscored the importance of continued collaboration, capacity building, and active participation from all stakeholders to shape the future of internet governance in the Arab region and beyond.

Keypoints

Major discussion points:

– The need for greater multi-stakeholder engagement and collaboration in Internet governance in the Arab region

– Challenges and opportunities for civil society and private sector participation

– The future of the Internet Governance Forum (IGF) and its evolution

– Priorities for Internet governance in the Arab region, including youth engagement and capacity building

– The upcoming WSIS+20 review process and Arab participation

The overall purpose of the discussion was to examine the state of Internet governance in the Arab region, identify key challenges and priorities, and explore ways to strengthen multi-stakeholder participation and regional engagement in global Internet governance processes.

The tone of the discussion was largely constructive and forward-looking. Participants spoke candidly about challenges but focused on opportunities for progress. There was a sense of optimism about the future of Internet governance in the region, particularly regarding increased collaboration between national and regional IGF initiatives. The tone became more urgent towards the end when discussing the need for Arab voices to be heard in upcoming global processes like the WSIS+20 review.

Speakers

– Qusai AlShatty: Moderator

– Christine Arida: Government, African Group

– Ayman El-Sherbiny: UN ESCWA

– Zeina Bouharb: Ogero, Lebanon

– Charles Shaban: International Trademark Association

– Ahmed N. Tantawy: NTRA, Egypt

Additional speakers:

– Waleed Alfuraih: Saudi IGF

– Nana Wachuku: Advisory board member for Digital Democracy Initiative

– Maisa Amer: PhD researcher at Leipzig Berlin, Germany

– Nermine Saadani: Regional vice president at the Internet Society, Arab region

– Tijani bin Jum’ah: Former member of the Civil Society Bureau at WSIS

– Desire Evans: Technical community and RIPE region

– Shafiq (no surname provided)

Full session report

Internet Governance in the Arab Region: Challenges, Priorities, and Future Directions

This discussion, held during a global Internet Governance Forum (IGF) event, focused on the state of internet governance in the Arab region, exploring key challenges, priorities, and future directions. The conversation brought together a diverse panel of experts representing various stakeholders, including government bodies, international organisations, civil society, and the private sector.

Multi-stakeholder Engagement and Regional Initiatives

A central theme throughout the discussion was the critical importance of multi-stakeholder engagement in shaping internet policies and governance frameworks. Christine Arida emphasised the need for dialogue and collaboration among all stakeholders, while Ayman El-Sherbiny highlighted the importance of promoting regional cooperation through initiatives like the Arab Internet Governance Forum (IGF) and the Arab Digital Agenda.

There was broad agreement on the need for greater involvement from all sectors, with a particular focus on increasing participation from civil society and the private sector. Charles Shaban, representing the International Trademark Association, stressed the importance of strengthening civil society and private sector participation, noting the need for sustainable funding. Ahmed N. Tantawy emphasised the need to focus on youth engagement and capacity building.

Priorities and Challenges

The discussion touched on several priorities and challenges for internet governance in the Arab region:

1. Enhancing connectivity and digital infrastructure: Zeina Bouharb of Ogero, Lebanon, stressed the importance of improving digital infrastructure to ensure equitable access.

2. Addressing the digital divide and promoting digital literacy: Ahmed N. Tantawy and Zeina Bouharb highlighted these as key areas of focus.

3. Strengthening cybersecurity measures and data protection: Zeina Bouharb raised concerns about these issues.

4. Combating misinformation and disinformation: Maisa Amer, a PhD researcher, discussed the challenges of tackling these issues, particularly in light of emerging technologies like AI.

5. Aligning regional priorities with the global internet governance agenda: Ahmed N. Tantawy stressed the importance of this alignment to ensure the Arab region’s voice is heard in global discussions.

6. Balancing intergovernmental processes with multi-stakeholder engagement: Ayman El-Sherbiny provided a nuanced perspective on this challenge.

7. Clarifying concepts: Christine Arida noted the need to define the difference between digital governance and internet governance to focus future discussions and policy work.

The Role of Internet Governance Forums

The role of national and regional Internet Governance Forums (IGFs) was a recurring theme. Speakers noted the potential of these forums to foster dialogue and shape policies. Ayman El-Sherbiny called for revitalising the Arab IGF and creating an “Arab IGF 2.0”, while Desire Evans emphasised the importance of linking national, regional, and global IGF initiatives. Evans also suggested including NRI representatives on the IGF leadership panel.

The upcoming Arab IGF in Amman, scheduled for February 23-27, 2025, was highlighted as a crucial opportunity for regional stakeholders to contribute to the global WSIS+20 review process. Christine Arida and Nermine Saadani both stressed the importance of connecting IGF discussions to actual decision-making processes. Saadani also mentioned ongoing research on intermediary liability in the region.

WSIS+20 Review and Arab Participation

Tijani bin Jum’ah and other speakers emphasized the importance of the WSIS+20 review process and the need for strong Arab participation. This global review presents an opportunity for the Arab region to contribute to shaping the future of internet governance.

Future Directions and Call to Action

Looking to the future, participants called for a more empowered and sustainable model for the IGF. Suggestions included improving linkages between IGF outcomes and decision-making processes, and creating a network of regional internet governance initiatives to enhance collaboration and focus on regional challenges. Dr. Waleed highlighted the importance of educating society about the internet ecosystem.

The discussion concluded with a strong call to action for participants to engage in open consultations and upcoming meetings to contribute to these important conversations. Ayman El-Sherbiny specifically mentioned an upcoming consultation meeting, urging stakeholders to participate and provide input.

Conclusion

The discussion underscored the importance of continued collaboration, capacity building, and active participation from all stakeholders to shape the future of internet governance in the Arab region and beyond. As the region prepares for upcoming global processes like the WSIS+20 review and the Arab IGF in Amman, there is a clear need for coordinated input and a strong Arab voice in shaping the future of internet governance. Stakeholders are encouraged to engage in open consultations, participate in upcoming meetings, and contribute to the ongoing dialogue on internet governance in the Arab region.

Session Transcript

Christine Arida: wants to open up to dialogue with other stakeholders within the classical intergovernmental processes, so specifically within the Arab League, to shape the future of policies related to digital governance as we approach the WSIS plus 20, and also to talk about the future of the IGF in that perspective. So I think governments do play a role, and I think there is a need for enhancing and empowering the role and the dialogue that governments have by injecting further multi-stakeholder processes into that. And I see here that the national and regional IGFs that are in the Arab region, you mentioned Tunisia, Lebanon, now we hear about Saudi Arabia, but also the Arab IGF, the North African IGF, they can play a great role to bring those processes and paths together. I hope this helps. Thank you, Qusai.

Qusai AlShatty: Thank you, dear Christine, for your intervention and very interesting points. I’ll shift to my dear colleague Ayman El-Sherbiny from the UN ESCWA, and I would like to address the question of how intergovernmental organisations such as the UN ESCWA can facilitate more substantial synergies among and between the Arab countries and the global internet governance ecosystem, including, if you can, shedding light on digital cooperation and the Panel on Digital Cooperation.

Ayman El-Sherbiny: Hello, thank you, do you hear me? Yes, we hear you. Okay, thank you very much, my dear colleague.

Qusai AlShatty: No, we’re not hearing you clearly.

Ayman El-Sherbiny: There you go. Thank you very much, Qusai, for organizing this session. Here I see very dear faces and partners that we haven’t met for some time, and that is not by coincidence, actually. We in the UN Economic and Social Commission for Western Asia have collaborated with our dear colleagues in the Saudi government, the Digital Government Authority, and others more than a year ago during the hosting arrangements and preparations for this very dear and very honorable event that we do in the region for the second time in the last 15 years. I recall Sharm el-Sheikh in 2009, and here we are in Riyadh, and we are proud to be here at the right time in the right place, like 10 months or less from the WSIS plus 20 review. So what is the role of the United Nations in the internet governance arena and in the region? First of all, as you all know, the WSIS itself, from which the global IGF spun off, was a platform under the auspices of the United Nations Secretary-General and General Assembly during 2003-2005. The inception of the definition of internet governance was part of the Working Group on Internet Governance during 2003-2005, which gave birth, in the Tunis Agenda, to the IGF. It also gave birth to other processes, like enhanced cooperation and others. During this period, in the beginning, a few IGFs spawned off in different parts of the world. So in 2009 in Sharm el-Sheikh, we in ESCWA came up with the idea of having an Arab Internet Governance Forum, similar to other regions, and we took a couple of years for internal consultation with the League of Arab States.

Qusai AlShatty: Thank you.

Ayman El-Sherbiny: From 2009 Sharm el-Sheikh till 2011, we got all the arrangements in partnership between us and the League of Arab States and member countries, and I here commend the role of the Egyptian government, the National Telecom Regulatory Authority, in a very important event, which Qusai shared, and Shafiq and many others, Christine also, called the Habtour event, January 2012. We put the first building block for the Arab Internet Governance Forum, which is under the auspices of ESCWA and LAS, and has its MAG also, like the global model, as well as hosting that changes from one place to the other. I also recall the role of Kuwait as the first host of the Arab Internet Governance Forum and the role played by KITS and Qusai at that time. So we did a model that mimics the global model, and this is what the UN brought. We created the Arab IGF, created the model, built the partnership and championships, and we have hosted six annual Arab Internet Governance Forums since then. And it’s a good chance to ask my colleague Rita to distribute some information about the Arab IGF7 that will take place in Amman in February 2025, which is like six, eight weeks from now. We would like all of you here today, in this session on the Arab IGF at the Global IGF, to do two things. First of all, mark your calendar for the Arab IGF7 in Amman, in the region, and spend the week over there, not only for the Arab IGF, but also for the Arab Voices, which is the Arab Forum for the WSIS and 2030 Agenda. Furthermore, for the first conference on the Arab Digital Agenda that we, with all member states, have developed in the region, for the region, adopted by heads of states, which has 35 goals for digital development in the region until 2033. So we’ll have three different communities coming together in the same week. You will have the brochure now from Rita on the DCDF, and we’d like you to register. But what is the second thing we want from you?
We want you to be here tomorrow in workshop room 11, at 10.30, if I’m not mistaken; we will double-check now. And tomorrow we’ll mention what Shafiq alluded to. We will hold a consultation on the WSIS plus 20 review for the next 10 months. We will also connect this to the internet governance process, as well as to the GDC, the Global Digital Compact process. It’s a round of consultation that we are doing tomorrow for 90 minutes, and your presence tomorrow is no less important than your presence today. So please be there. We will touch more, deep dive into the GDC, its relationship with the Arab Digital Agenda, the Arab IGF, the IGF, and the Saudi IGF. And I’m proud that we have, as Shafiq mentioned, the North African IGF, and we have here Charles also; he is the head of the MAG of the Arab IGF. And now we have a colleague partner from the Saudi IGF, and we have the Lebanese IGF. So tomorrow it will be very important that we come together to shape also our strengths and presence, not only for the Amman IGF7, but hopefully we are going to make an announcement also related to the next edition of the Arab IGF, Arab IGF8, very soon, inshallah. Thank you so much.

Qusai AlShatty: Thank you. Thank you, dear Ayman. I would like also to welcome our second online panelist, dear Zeina Bouharb from Ogero, Lebanon. And she is with us online.

Zeina Bouharb: Yes, good morning, everyone.

Qusai AlShatty: Good morning, Zeina. And I would like to direct you to the question: what policies and recommendations would you suggest in the Arab region to ensure a more robust and sustainable internet governance framework? I’ll pass the floor to you.

Zeina Bouharb: Thank you, Qusai. First, let me join my colleagues in thanking the government of Saudi Arabia for hosting this important event. And thank you and Shafiq for organizing this workshop. Well, if I want to answer your question about recommendations to ensure a more robust and sustainable internet governance framework in the Arab region, the governments should maybe first address technical, regulatory and social challenges. And the picture differs from country to country, you know, because there is also this difference in technological advancement between the Arab countries. So first, my recommendation would be to enhance connectivity and to invest in the development of reliable digital infrastructure, because sustainable internet governance requires strong and accessible digital infrastructure before everything else, to enable equitable participation across the region. My second recommendation would be regarding cybersecurity and data protection. Also, a more secure and trusted internet is fundamental for internet governance. There is a need to develop comprehensive cybersecurity laws and data protection frameworks that are aligned with international best practices, while at the same time accounting for local needs and local contexts. And one very important recommendation would be to promote the multi-stakeholder governance approach, where all the relevant stakeholders, governments, business, civil society, the technical community and academia, can play an active role in policy making, by establishing national frameworks for dialogue between stakeholders, such as the national IGF and the regional IGF, and also by encouraging participation in global internet governance institutions like ICANN and the IGF.
So, if you want, we can list a lot of recommendations, but you know, there is also a need for digital literacy programs within the Arab countries, to integrate this literacy into educational curricula. And also, there is something very important, which is to foster regional cooperation. I think Mr. Ayman already tackled this issue and he will go deeper into this topic. So these would be my recommendations. Thank you, Qusai.

Qusai AlShatty: Thank you, Zeina, for your valuable input. I will shift the floor to my dear colleague, Charles. And actually, I will ask you a compounded question, because you represent a civil society organization, but your membership base is also private sector. So my question will be in that compounded form. When we look at the internet governance landscape, civil society is the stakeholder that participates the most, while the private sector is among the least represented. So, in that context, what role do civil society and the private sector play in advancing internet governance in the Arab region? And how can this role be further enhanced?

Charles Shaban: Thank you very much, my dear Qusai. And thank you for the invite. Thanks to the Saudi government for having this wonderful event here. In fact, as you mentioned, it is good to mention that, because now I represent the International Trademark Association, which is officially a nonprofit organization, but more than 95% of our members are private sector. So this is my history with the Arab IGF: I was in the private sector before even joining INTA. So to answer your question, Qusai, about the role, I don’t think I will talk a lot about it, because everybody knows the important role of the multi-stakeholders, and what you, Anshan, specifically were talking about all the time. So we need the opinion and the views of these two important sectors in everything we do here, of course, and in the Arab IGF and the local IGFs. But maybe how to enhance it better, I can think about. I think we need more sustainability, financially, especially for civil society, to be able to continue participation. I know that many organizations usually don’t have the budgets to attend such forums and participate actively. Even with the online option now it’s easier, but still, you know, sometimes the in-person presence helps a lot. Going to the private sector, I think we need to show them more the importance of being part of the policy making, let’s say, because I know this is mainly a non-binding forum. At the same time, it’s important not to wait to see what the others want. We need the private sector, which is mainly the runner of the big economy around the world. We heard in the morning, as Minister Sowaha excellently put it in a wonderful presentation, that we are talking about 20.6 trillion. So we know that the drivers are mainly the private sector. We need them from the beginning, to know exactly what they expect from the Internet and how to be part of it, as we talked.
So I think this, especially in our region, needs to be clearer for the private sector, because this is exactly what we face at the Arab IGF: we always had a lack of private sector. So this is exactly what we need to tell them: not only to concentrate on your work. You need to work, of course, but at the same time, be part of this, because this will affect your work in the future. How to pass this on, I’m not sure. So my recommendation would be more awareness, I think, of the importance of this for all the stakeholders, especially the private sector. Thank you. And if you allow me, Qusai, you know I have a conflicting meeting. If you allow me to leave slightly early so I don’t disturb anyone. I’ll see you tomorrow in the other workshop if needed. Looking forward.

Qusai AlShatty: Thank you, Charles, for being with us. Thank you.

Qusai AlShatty: I will pass the floor to my dear colleague Ahmed from the NTRA Egypt, and I will ask him a question. What are, in your view, the top priorities for the Arab region in internet governance, and how can governments align these priorities with the global internet governance agenda?

Ahmed N. Tantawy: Thank you, Saeed; thank you, Sharif, for organising this workshop, and thank you to the KSA for hosting the global IGF and giving us the opportunity to be here this year. On the priorities in our region, I think there is a common need in our region called engagement. We should support multi-stakeholder engagement in all our processes in the Arab region. Over the past years we have had many initiatives, as you mentioned in the presentation. We now have national IGFs, regional IGFs, and youth IGFs. I think creating a kind of network between all these initiatives in our region is something that will help us focus on our challenges in the region. I got this idea after a webinar organised one week ago, co-hosted by the Arab IGF, the North African IGF, and the Lebanese IGF. It was a remarkable webinar, and it gave me this idea of a network that manages and compiles the initiatives in our region; through this network we can share our thoughts. If we are talking about how to effectively bring our voices into the global tracks, I think this might help with our priorities. As for the priorities themselves, youth engagement and capacity building is one of mine. I see a lot of youth who have the knowledge and the capability, and I think capacity building will help them to be engaged and to be leaders in the future. And of course the digital divide, both within the same country and between men and women, all of this comes as a priority in our region. Thank you.

Qusai AlShatty: Yes, if you would like to ask a question, please.

Nana Wachuku: Thank you very much. My name is Nana Wachuku, advisory board member for the Digital Democracy Initiative. It's a program that focuses on supporting initiatives that help provide digital access, particularly in the global south. My question: there's a lot of conversation around multi-stakeholder engagement, and there's also the conversation about funding for civil society to participate in these multi-stakeholder platforms. But I'm also curious, because there's also the point that government is not a very active participant in these multi-stakeholder engagement platforms. I was wondering, beyond funding, are there other reasons that prevent civil society organizations from participating? And for the government: how democratized are these engagements? Is there also a reason that keeps the government away from engaging actively with the different stakeholders in these platforms? That's one. My second question is: what are the top two priorities for civil society in this region, specifically civil society? Sorry, I just want to get the questions out. And the last one: considering we've had conversations around the private sector, how open is the policy-making process? Is it open enough to invite the private sector in, so they understand they can be willing participants in the process? How involved are they in some of these policies that are made in the region? Thank you.

AUDIENCE: Let me start from your last question, the third one, if I recall it correctly. Correct me if I'm wrong, please. First of all, what does it take to engage civil society more here in Saudi Arabia? I think more awareness will bring more focus to the Internet community as a whole, especially educating society about the Internet ecosystem as a whole and about what the end user gains from engaging with us. We will take their voice all the way to the decision makers, and this is where we act as a society internally, to shape the Internet ecosystem as a whole, so that they become part of developing the whole ecosystem. So I think education is the focal point we need, and we are doing it already: we are running a lot of workshops and seminars, online and face-to-face, in Saudi Arabia, and we are gaining momentum every month. We are seeing growth, we are seeing reactions. We launched several initiatives where we educate end users about what we can do for them, especially hearing their needs and requirements. That is on one front. I think the second question was about involvement in policy making: how to ensure that others can be involved in the development of policy. Absolutely, and this is one of the major things we are doing right now. As part of the education, we tell end users: in order to be part of the effective development of the Internet ecosystem, you have to be part of us. We have to hear your voice; you have to engage with us in seminars, workshops, discussion groups, and so on. This is where we take their opinions very actively and pass their needs to the decision makers, on the policy-making or the regulatory side. And this covers the Internet, cybersecurity, and other domains, not only Internet usage.

Qusai AlShatty: Thank you. Thank you, Dr. Waleed. For Christine, who is with us online: kindly, can you reply to the first question, on what stops civil society from participating, if you see another reason beyond the issue of funding. So, dear Christine, please go ahead.

Christine Arida: Thank you, Qusai. Can you hear me now? Yes? Okay. So, I think for civil society, as for all the other stakeholders, participating in a venue and engaging is all related to how much is at stake and how influential they can be in shaping the discussion, in participating, in really achieving something from that participation. So, of course, funding is a very important topic for civil society. But also, if I talk about the region, and I think this applies to both the private sector and civil society here, a lot is related to the maturity of how they can be influential in policy making or policy shaping in the region. To take an example from the private sector: if you look at the global north, you will always find a public policy division in private sector companies, whereas in our region this is sometimes the least of their worries, and it is only starting to pick up. Similarly with civil society in the region. So, bottom line, it is related to how much they can influence what is happening and how they can shape policies around what is at stake for them. Thank you.

Qusai AlShatty: Let me pass the floor also to our colleague Ayman. The second question is how to integrate all stakeholders in the policy development process. How do you think that engagement can take place?

Ayman El-Sherbiny: So this is a very important point. I think the two worlds of intergovernmentalism and multi-stakeholderism can live together very smoothly. They are two sides of a coin; no side can work without the other. So what is the complementarity here? The complementarity is the dialogue in the multi-stakeholder arenas and gatherings, such as forums or conferences, which shape the ideas, the positions, the messages that should be taken further into policy making. So it is policy support or policy shaping. Policy making naturally happens in government circles on most of the issues, not all of them. So here comes the role of governments, and the way we put them together with the multi-stakeholders, on an equal footing and everything, is in the forums. When we go to the League of Arab States or to closed UN meetings, the General Assembly, or whatever, we take all these messages, and we take the decisions and resolutions and go further to adopt what is in the interest of the global public. It is a global public good, in the globe or in the region. So it is public policymaking that has inputs from the citizens, civil society, business, and so on. The weakness here in the region is mainly in civil society, as Christine said, in alignment with what is happening, and also in the business sector. They see that most of the decisions pertaining to a global good are made elsewhere, outside of the region. But the reality is that through this regional forum, we contribute to the global public policymaking, and the global public policymaking will in turn impact the governmental bodies in the region. So it's a continuous loop. The last thing I want to bring to this very important question: a forum with dialogue only, without connecting tentacles, without tangent points or contact points with the decision making, will not help either. As I said, they are two sides of the coin.
So what is also needed here is an agenda for the region that everyone agrees to: objectives, a compass for everyone, to try to achieve certain targets. This exists in Europe, and now at the global level, in the Global Digital Compact and in the European digital agenda. So we have an Arab digital agenda with the same characteristics, with goals and targets agreed upon by everyone. Multi-stakeholder engagement in the Arab digital agenda is a must, as well as in this kind of dialogue forum. So this complementarity between having a goal or a compass, a place for policy shaping, and a place for policymaking: the three of them work together very smoothly. This is what we have in the region, and we guarantee the commitment of the heads of member states to what the citizens and the youth aspire to. The last message, which I will say more about tomorrow in the discussion of the Global Digital Compact: in the Summit of the Future, we have five different facets. Some are related to political dimensions, even at the global level of the General Assembly; some are related to finance; some are related to the digital and the youth. So we are at the center of the Summit of the Future. This digital platform is really the place where we are shaping not only the digital future, but the real future of the planet, the policy, the political paradigm, and the next generation.

Qusai AlShatty: Thank you. Thank you. I will pass the floor to the lady in the back, please. You can introduce yourself and then ask your question.

Maisa Amer: Thank you all very much for the interesting discussion. First, I'm Maisa Amer, PhD researcher at Leipzig Berlin, Germany. I would like to ask all the panelists: how do you see the regulatory framework to tackle disinformation in the Arab world? For example, if there is disinformation and misinformation disseminating on a digital platform, what is the primary approach the states take at this point? Do you communicate with the platform directly, to negotiate or to see how to move forward, how to tackle this phenomenon, or do you intervene directly? I'm just curious to know what the regulatory framework is for tackling disinformation and misinformation. Thank you.

Qusai AlShatty: I would let Ahmed go first, maybe, to address the…

Ahmed N. Tantawy: Thank you so much. I think all Arab countries have a legal framework. We hear you, we hear you. Okay. We have frameworks, and I think all regulatory authorities are working on developing these frameworks every three or four years, following up on any updates, as you mentioned, regarding misinformation and disinformation. I think there are also communication channels between the governments and the platforms and the private sector within the region or within the country. However, many of the platforms are based outside the country, so the legal framework may not apply to them. Still, the channels keep running, with continuous communication and updates.

Qusai AlShatty: Thank you. Any further elaboration from the panelists?

AUDIENCE: Thank you very much for the question. It's really very important. And if I may add, it gets more complicated if you add AI as well. The misinformation enabled by AI is going to be massive. I would urge you to read about the global AI initiative and how it has been framed to make sure that this kind of misinformation is tackled. The framework talks about what the action should look like, and it is evolving. Everybody would like to make sure the internet is a safer, trusted place, and without global cooperation I don't think we will reach that point. So thank you very much. I think it's a global phenomenon, and it is being tackled, I hope, very efficiently. Thank you so much. Adding to your intervention: AI is going to create a lot, not only of misinformation and disinformation, but it will propagate the disguising of the truth in general. That is a really very complicated issue. The plenary session before us here discussed ideas related to truthfulness and ethical matters, and also the explainability and the discoverability of algorithms and these kinds of things. What we are doing in the international organizations in the region: we have developed a strategy for AI in the region, a vision for it, and we connected this to the Arab Digital Agenda. We are currently putting in place metrics to measure the baseline of readiness, maturity, and adoption of AI, and we are trying to set targets for the region to combat this phenomenon, in collaboration with the pertinent UN body, which is UNESCO. So we are working on that. It's an uphill battle, like the virus and the antivirus: the more antiviruses you create, the more people try to disguise things. So with AI, it's difficult. But as long as AI is under the jurisdiction of humans, it is still manageable. The governance is by humans, of the AI, and we will not be afraid to combat these kinds of threats.
And we think that it will continue like that. Human intelligence has the upper hand and will control such malicious behaviors. Thank you. I'll pass the floor to dear Nermine Saadani from ISOC.

Nermine Saadani: Hello, everyone, and thank you so much for giving me the chance to contribute to this valuable discussion. My name is Nermine Saadani. I'm the regional vice president of the Internet Society for the Arab region. I actually have two points: one question for the distinguished panelists, and an intervention in response to our colleague here from Berlin, on her question about the regulatory framework, to complement what the distinguished panelists said. Can we start with the intervention, so we can remember the question better? On the intervention, and your question about the regulatory framework that could protect communities and societies from the fraudulent information that may be present on the platforms: we conducted a very, very useful piece of research this year, and we will complete it in the coming year, inshallah, on intermediary liability. This is a legal framework that many of the developed regions and countries have been working on, and I think it will be very useful for our region as Arabs to look at intermediary liability from that perspective: how to strike the balance between protecting the platforms and protecting the people and users of the internet from any fraud or misinformation that could be prevailing on those platforms. Striking a balance is very useful in general, and this framework, from a legal perspective, is very useful. The document and the research will be announced on our platform, inshallah, in the second quarter of 2025. We have conducted the research on six countries, including Saudi Arabia, United Arab Emirates, Egypt, Lebanon, Jordan, Oman, and Bahrain. So that would be very useful, and I think it will give context to whatever has been mentioned and to your concern or your question.
So I would refer this to you, and it will be announced on our website shortly, inshallah. On my question: this is a huge opportunity, because all the pioneers from the Arab region who have been building internet governance in our region are here. I would like to pose a very direct question about the future of the Internet Governance Forum itself, with the WSIS plus 20 review coming up very soon. I would like to understand: how do you see this process, and how can we encourage Arab governments to engage more and more in the Internet Governance Forum? We still see a lack of governments present, and definitely not all the stakeholder groups are there. So how do you see this, and how do you look at the review process for the forum? Thank you so much.

Qusai AlShatty: I’ll pass the floor to the panelists.

AUDIENCE: Thank you so much to the panelists who gave me the first intervention. But of course, it is not the only intervention. I know Nermine's question, and tomorrow we will give more details. But now, in general, I see the future, and this is not political talk: I see the future of the Arab IGF as bright. I see it revived. I see it strengthened. It's another wave. Remember IGF-4 in 2009? One or two years afterwards, we strengthened our presence through this Arab IGF. With this booster of IGF-19, we already have agreements and discussions with the Saudi government, and we have discussed with the Emirati government and with people in the NDRA. We want to strengthen and create the Arab IGF 2.0: much stronger, much more vital, with goals and targets, like the evolution we saw in the global IGF. The IGF lived a very long period bottom-up only, until at a certain point in time they created the so-called leadership panel, for example. Now they are connected more with the CSTD, connected more with the General Assembly; connecting more with intergovernmentalism. We are already connected, so we don't have that problem. But, as I said, we have a weakness in the business sector, who don't see a benefit, and a weakness in civil society, who are not sufficiently aligned. So, together, we are reviving a new wave, inshallah. The last eight to ten months are vital. On what you asked about the WSIS plus 20, let me really give a word of commendation to Nermine herself. In her previous work, she was involved in the IGF and the WSIS plus 10 process. She knows how important it is. At that time, the global IGF plus 10 in 2015 led to a connection between the IGF and the 2030 Agenda. In General Assembly resolution 70/125, the WSIS was connected with the 2030 Agenda as never before.
So I see that, 10 months from today, there is going to be a strong connection between the IGF process and the WSIS process, and they will hopefully prevail for another 10 years or more. The least would be until 2030, but I see it lasting at least 10 to 15 years. And in doing so, they have a GDC; there has never been a Global Digital Compact before. So they will be strengthened. We have an Arab digital agenda. So I see the future as really bright in the region, globally, and at the national level too. We started with the Arab IGF, then the Tunisian IGF, then the Lebanese IGF, and now we see the Saudi and the North African ones. Things are maturing, and inshallah the citizens themselves will play a bigger role, and the next generation, together with ISOC and with other member countries.

Qusai AlShatty: Thank you.

Shafiq: Please. If I may add as well, thank you very much, Ayman. The good thing about the IGF itself is that it is a platform for discussion, to be honest. People come here open-minded, with open topics. And the more we have this kind of platform to discuss ideas, the better, especially at this stage, where a lot of factors are in play. We are talking about ICANN and the initiative of new gTLDs, about what we discuss on AI and how it has impacted global usage of the internet; there are a lot of topics that are really emerging. So the more we have platforms like the IGF, where we discuss ideas and meet with experts and decision makers, the more our thoughts will come together. Everybody who has worked on the internet for a long time understands that it is a multi-stakeholder approach. The internet without collaboration and connectivity everywhere is not the internet, per se. So thank you very much for raising this question. Any other question? Ahmed?

Qusai AlShatty: Thank you, thank you, Dr. Amin.

Ahmed N. Tantawy: I'm not going to add anything new, but maybe I will summarize in a few words. Trust between stakeholders is mandatory, okay? And avoid avoidance: avoid avoiding being here, avoid avoiding participation, avoid avoiding engagement. Let's talk. We are different stakeholders; we have different mindsets, different perspectives, different priorities. It's normal not to be on the same line, but let's keep the dialogue going, let's keep being here, engaging, discussing, and sharing our perspectives. Thank you.

Shafiq: Thank you, thank you, Ahmed. I will just reflect on Nermine's question; it's a very critical question. I cannot agree more with all my colleagues and friends, but I will take this opportunity to make a call, a call to the Arab community, a call to all Arab stakeholders, to commit and to go and fill in all the open consultations that make the Arab IGF sustainable for the next five, 10, 15 years. As Dr. Waleed said, it's the only venue for inclusivity, for bottom-up engagement, for the multi-stakeholder community to voice their concerns. During all these open consultations, as the technical community, as RIPE NCC, we recognized and committed to the multi-stakeholder environment that the IGF is fostering. And this is why my call is for all Arab stakeholders to go online, fill in the open consultations, and make your voices heard: the IGF should be sustainable, should be there to discuss all the concerns and all the challenges that every one of us is facing. Thank you, Qusai.

Qusai AlShatty: Then let me take the privilege, as moderator, to give the floor to our dear colleague Tijani bin Jum'ah, who has witnessed the WSIS process and internet governance since the start. He was a member of the Civil Society Bureau at the WSIS time, and now we have the privilege that he is with us, looking at the evolution of the WSIS and the continuation of the IGF. So I will take this privilege to give you the floor.

AUDIENCE: Thank you very much, Qusai, and thank you all for having me. I think that we are witnessing an evolution of the IGF. The IGF was created by the Tunis Agenda, and with IGF plus 10 we gained the possibility to have outputs, because at the beginning there were no outputs; it was not permitted to have outputs. Now we have outputs, but they are not binding on anyone; they are only taken up if you want, as someone said last time. So this is an evolution, but we need more: we need these recommendations out of the IGF to be considered by the decision-making people, by the people who decide on our behalf. The multi-stakeholder model is the only model that gives me the possibility to express my opinion, since I am civil society and have no decision-making rights. Through the multi-stakeholder model I can speak, I can give my opinion, and this multi-stakeholder model should be preserved and, in my point of view, improved, so that it becomes a real multi-stakeholder model, because the stakeholders are not equal; we need multi-equal stakeholders. Civil society people often don't have the possibility to go to the meetings because they are not funded to go. Fortunately there are some sources of funding, but not enough, in my point of view. So they cannot express their opinions as well as the governments, who are paid for it, or the private sector, who have a financial interest and are paid to go there and express their point of view. This model should be improved, in my point of view, and we have to fight for it. There is no other model of governance that gives all people the right to express themselves. Now, about the evolution: as you know, there is the GDC, WSIS plus 20, and so on. The problem is that there is a lack of participation from our region. Unfortunately, I think 10 months is too short to form a consensus opinion of the region about the evolution of WSIS, WSIS plus 20.
Nermine is right: we don't have something already prepared, and we should have prepared it much earlier. But no problem; if you want, we can work on it, we can do that, but we need the engagement of everyone, real engagement. It is not about bringing this kind of people or that kind; we have to have the opinion of all people, and in my point of view we have to have the opinion of all the national and sub-regional IGFs in our region, and also of the Arab IGF. We also have to have the opinion of all the ISOC chapters in our region. This will help to shape, if you want, the consensus opinion about this evolution. We must not be absent on this occasion. Thank you, Qusai.

Qusai AlShatty: Thank you, thank you. I think Christine, our panelist, has a comment and a question.

Christine Arida: Thank you, Qusai. So, I really thank Nermine for putting forward that very important question, and I would like to recall a discussion that happened yesterday in the NRI coordination session where, if I recall right, Bertrand de La Chapelle put forward the proposal that all NRIs across the globe, in their sessions between here in Riyadh and the Norway IGF, initiate a discussion about how they would like to see the mandate of the IGF renewed, and produce actual paragraphs or text about the IGF. I think that's a very innovative idea; though it's basic, it is very innovative, because if we can harness the power of the grassroots in looking at the future of the IGF, there is real impact that can be made on the multilateral process that will look into the WSIS plus 20 review and the renewal of the IGF mandate. In that respect, I think we can lead by example within the Arab region. What we need to see is a very thorough discussion about the renewal of the mandate of the IGF at the upcoming Arab IGF, and we should see that feeding into the ministerial sessions, whether within ESCWA, within the League of Arab States, or others, to actually shape the intergovernmental process in that area. And to add one final comment: the IGF has done a lot of dialogue, a lot of discussion, and even outcomes; we have policy outcomes, policy recommendations. I think what we should be focusing on at this stage is defining the difference between digital governance and internet governance, because this is one point that needs to be tackled. The other thing we need to look at is linkages: how can the outcomes that come out of the IGF actually feed into decision making elsewhere? We've been saying that, but we haven't been doing it so well.
I think those two points are very important for the IGF, in addition to a solid proposal about how to empower the IGF in terms of funding and resources. Thank you very much.

Qusai AlShatty: Thank you, Christine. Any questions from the audience? Let me just make a comment. I had the privilege to attend the GDC Action Day. During the preparation session there, the Arab world was represented by the Kingdom of Saudi Arabia, the Republic of Egypt, and the United Arab Emirates. They presented what they are doing in terms of digital cooperation and on the issues related to the digital compact. They represented the view from our part of the world, one that preserves the interests of the key players here in front of the global players. The global players are not necessarily the government counterparts, but also players like Amazon, Google, Microsoft, and so on, and other regional groups. It was substantial in reflecting our points of view and getting text into these documents that represents our interests and priorities. Feeding this back: the next 10 months, I agree, are short, but it is important, with governments being our umbrella and representing the legitimate interests of all the stakeholders, since they are the people who will be at the table protecting our interests and priorities, to pass on the view of what we want from WSIS beyond plus 20, what we want from the IGF as an evolution beyond its 25 years, and how we want this to be set up in the Arab world, given the evolution we see in the internet governance landscape: not necessarily a specific platform, but national and regional, and our interaction with the global forum. So in that sense, we really need to follow the timeline from here until July, which is the high-level event that will take place in Geneva; I think the host will be the International Telecommunication Union. And then up to the General Assembly session, which will take the final decision on whether to go ahead or not; so far, that is December 2025, but this is all to be confirmed. Any final comment from the audience?
Well, Desiree, do you have an intervention? Yes, please.

Desire Evans: Thank you. Desiree Evans, with the technical community in the RIPE region. I really admire the Arab region in terms of how much effort has been made to include everyone and to have many internet governance forums, especially for the youth, taking place in different parts of the region. I just wish that this enthusiasm continues past this WSIS Plus 20 review, with a revived IGF, and not just for 10-15 years; I think some people are calling for an unlimited life for this useful platform. But one important point, in addition to what Shafiq was saying: please fill in the open consultation form on the ITU's website by the 14th of March, saying how useful this platform has been. I think we also have an opportunity in June in Norway, at the next IGF, to get together and think about this proposal that Christine has mentioned, on how NRIs could have more connections with other UN agencies. And I think speakers on the panel, Ayman among them, have called for more of this bottom-up inclusion. So if we have a new leadership panel, for example, maybe its members should come from the NRIs, from people who have been involved in the process. So let's not miss that opportunity. Thank you.

Qusai AlShatty: Thank you. I’ll give the last word to my dear colleague Shafiq to wrap up the workshop. Thank you.

Shafiq: Thank you, Qusai. I just took some notes as final remarks, and I hope that I didn't miss any point. But please feel free to contact me or Qusai to add any other key message from this session. First of all, thank you very much, dear panelists, dear audience. It really was very interesting, very interactive, and it was an opportunity; this is the advantage of the IGF, getting all of us in the same room, tackling the challenges coming at us. So thank you once again. The key messages that I noted: first, congratulations to Saudi Arabia for hosting this, and congratulations to Dr. Ibrahim and Dr. Waleed for initiating the Saudi IGF; we hope that we will have a lot of meetings to coordinate among the NRIs: the Lebanon IGF, Saudi IGF, Arab IGF, North Africa IGF. Second, there is the message of strengthening collaboration among NRIs, as I just said. Third, capacity building and inclusivity: there is a demand for more capacity building, especially for the underrepresented communities that Dr. Waleed mentioned and for civil society, as our dear guest here mentioned; we need capacity building, fellowships, and funds to give them the tools to attend these meetings. The fourth point is about a sustainable and evolving IGF, on which our colleague Nermine raised a very interesting question: yes, we need to renew our commitments to a sustainable IGF that includes all the voices. And the last point I noted is the absence of Arab voices at the global level, on the global scene. So please, once again, Desiree, Nermine, Christine, myself: go ahead, fill in the open consultations, and make your voices heard. These are the five points that I think, Qusai, will be the takeaway from this workshop, and hopefully these discussions will continue. Thank you once again, and I wish you a great day and a great IGF. Ayman, you have the last word.

Ayman El-Sherbiny: I would like to thank you; we all thank you, and Qusai, for this very important session. And let me add one more message: the road to Amman. Arab IGF 7 is going to take place from 23 to 27 February. So on our way to Norway, we will pass by Amman. So remember: Amman, then Geneva for the CSTD, then Norway, and then all the way to the General Assembly. Amman, in the third or fourth week of February. And before that, tomorrow here at 11:30 in room 10, the consultation, one step on the road to WSIS+20; 10 months are still valuable. So tomorrow, 11:30 in room 10, inshallah. Shukran.

Qusai AlShatty: Let me take this opportunity to first give a warm round of applause to our distinguished panelists. And I would like to thank our wonderful audience, who remained with us all through the workshop; thank you for your interactivity and for listening to us. So thank you all, and see you around, hopefully. Thank you. Bye.

Christine Arida

Speech speed

128 words per minute

Speech length

701 words

Speech time

327 seconds

Need for multi-stakeholder engagement and dialogue

Explanation

Christine Arida emphasizes the importance of involving all stakeholders in internet governance discussions. She suggests that this approach is crucial for shaping policies and influencing outcomes in the digital governance landscape.

Evidence

She mentions the need to open up dialogue within classical intergovernmental processes, specifically within the Arab League.

Major Discussion Point

The Future of Internet Governance in the Arab Region

Agreed with

Zeina Bouharb

Ayman El-Sherbiny

Charles Shaban

Ahmed N. Tantawy

Agreed on

Importance of multi-stakeholder engagement

Governments as key players in policy-making

Explanation

Christine Arida highlights the role of governments in policy-making processes. She suggests that while multi-stakeholder engagement is important, governments still play a crucial role in shaping and implementing policies.

Major Discussion Point

The Role of Different Stakeholders

Differed with

Zeina Bouharb

Differed on

Role of governments in internet governance

Zeina Bouharb

Speech speed

97 words per minute

Speech length

341 words

Speech time

210 seconds

Importance of enhancing connectivity and digital infrastructure

Explanation

Zeina Bouharb emphasizes the need to invest in reliable digital infrastructure to enable equitable participation across the region. She argues that this is fundamental for sustainable internet governance.

Major Discussion Point

The Future of Internet Governance in the Arab Region

Agreed with

Ahmed N. Tantawy

Agreed on

Enhancing digital infrastructure and connectivity

Differed with

Christine Arida

Differed on

Role of governments in internet governance

Addressing cybersecurity and data protection

Explanation

Zeina Bouharb stresses the importance of developing comprehensive cybersecurity laws and data protection frameworks. She argues that these should be aligned with international best practices while accounting for local needs and contexts.

Major Discussion Point

Challenges and Priorities for Internet Governance

Ayman El-Sherbiny

Speech speed

148 words per minute

Speech length

1574 words

Speech time

634 seconds

Promoting regional cooperation and initiatives like Arab IGF

Explanation

Ayman El-Sherbiny emphasizes the importance of regional cooperation in internet governance. He highlights initiatives like the Arab IGF as crucial platforms for dialogue and policy shaping in the region.

Evidence

He mentions the creation of the Arab IGF and its evolution over the years, including the upcoming Arab IGF 7 in Amman.

Major Discussion Point

The Future of Internet Governance in the Arab Region

Agreed with

Christine Arida

Zeina Bouharb

Charles Shaban

Ahmed N. Tantawy

Agreed on

Importance of multi-stakeholder engagement

Intergovernmental organizations facilitating cooperation

Explanation

Ayman El-Sherbiny discusses the role of intergovernmental organizations in facilitating cooperation on internet governance. He emphasizes the complementarity between intergovernmentalism and multi-stakeholderism in shaping policies.

Evidence

He mentions the work of ESCWA and the League of Arab States in organizing regional internet governance forums.

Major Discussion Point

The Role of Different Stakeholders

Revitalizing the Arab IGF and creating “Arab IGF 2.0”

Explanation

Ayman El-Sherbiny proposes revitalizing the Arab IGF to create a stronger, more vital platform with clear goals and targets. He envisions an “Arab IGF 2.0” that would be more effective in addressing regional internet governance challenges.

Evidence

He mentions ongoing discussions with various governments in the region to strengthen the Arab IGF.

Major Discussion Point

Evolution of Internet Governance Forums

AUDIENCE

Speech speed

138 words per minute

Speech length

1720 words

Speech time

744 seconds

Increasing awareness and education about internet governance

Explanation

An audience member emphasizes the need for more awareness and education about internet governance in the region. They suggest that this would lead to more effective engagement from various stakeholders.

Evidence

The speaker mentions ongoing workshops, seminars, and online initiatives to educate users about internet governance.

Major Discussion Point

The Future of Internet Governance in the Arab Region

Charles Shaban

Speech speed

158 words per minute

Speech length

500 words

Speech time

189 seconds

Strengthening civil society and private sector participation

Explanation

Charles Shaban highlights the importance of involving both civil society and the private sector in internet governance discussions. He argues that their participation is crucial for a comprehensive approach to policy-making.

Major Discussion Point

The Future of Internet Governance in the Arab Region

Agreed with

Christine Arida

Zeina Bouharb

Ayman El-Sherbiny

Ahmed N. Tantawy

Agreed on

Importance of multi-stakeholder engagement

Ensuring sustainable funding for civil society participation

Explanation

Charles Shaban emphasizes the need for sustainable funding to enable civil society participation in internet governance forums. He argues that financial constraints often limit the involvement of civil society organizations.

Major Discussion Point

Challenges and Priorities for Internet Governance

Private sector’s importance in driving economic growth

Explanation

Charles Shaban highlights the crucial role of the private sector in driving economic growth in the digital economy. He argues that their involvement in internet governance is essential for shaping policies that support innovation and development.

Evidence

He mentions the significant economic impact of the digital economy, referencing a figure of 20.6 trillion.

Major Discussion Point

The Role of Different Stakeholders

Ahmed N. Tantawy

Speech speed

116 words per minute

Speech length

494 words

Speech time

254 seconds

Focusing on youth engagement and capacity building

Explanation

Ahmed N. Tantawy emphasizes the importance of engaging youth in internet governance processes and building their capacity. He sees this as crucial for developing future leaders in the field.

Major Discussion Point

The Future of Internet Governance in the Arab Region

Agreed with

Christine Arida

Zeina Bouharb

Ayman El-Sherbiny

Charles Shaban

Agreed on

Importance of multi-stakeholder engagement

Bridging the digital divide within countries

Explanation

Ahmed N. Tantawy highlights the need to address the digital divide within countries in the Arab region. He emphasizes that this is a priority for ensuring equitable access to digital resources and opportunities.

Major Discussion Point

Challenges and Priorities for Internet Governance

Agreed with

Zeina Bouharb

Agreed on

Enhancing digital infrastructure and connectivity

Aligning regional priorities with global internet governance agenda

Explanation

Ahmed N. Tantawy discusses the importance of aligning regional priorities with the global internet governance agenda. He suggests that this alignment is crucial for effective participation in global discussions and decision-making processes.

Major Discussion Point

Challenges and Priorities for Internet Governance

Youth as future leaders in internet governance

Explanation

Ahmed N. Tantawy emphasizes the role of youth as future leaders in internet governance. He argues for the importance of empowering young people with the knowledge and skills to take on leadership roles in this field.

Major Discussion Point

The Role of Different Stakeholders

Maisa Amer

Speech speed

134 words per minute

Speech length

116 words

Speech time

51 seconds

Tackling disinformation and misinformation

Explanation

Maisa Amer raises concerns about the spread of disinformation and misinformation on digital platforms. She inquires about the regulatory frameworks in place to address this issue in the Arab world.

Major Discussion Point

Challenges and Priorities for Internet Governance

Nana Wachuku

Speech speed

94 words per minute

Speech length

229 words

Speech time

145 seconds

Civil society’s role in shaping discussions

Explanation

Nana Wachuku highlights the importance of civil society in shaping internet governance discussions. She emphasizes the need for more active participation from civil society organizations in multi-stakeholder platforms.

Major Discussion Point

The Role of Different Stakeholders

Nermine Saadani

Speech speed

174 words per minute

Speech length

469 words

Speech time

161 seconds

Importance of connecting IGF to decision-making processes

Explanation

Nermine Saadani emphasizes the need to connect IGF discussions and outcomes to actual decision-making processes. She argues that this connection is crucial for the IGF to have a meaningful impact on internet governance policies.

Major Discussion Point

Evolution of Internet Governance Forums

Shafiq

Speech speed

135 words per minute

Speech length

380 words

Speech time

167 seconds

Ensuring equal participation of all stakeholders

Explanation

Shafiq emphasizes the importance of ensuring equal participation of all stakeholders in internet governance processes. He argues that the current multi-stakeholder model needs improvement to achieve true equality among participants.

Major Discussion Point

Evolution of Internet Governance Forums

Desire Evans

Speech speed

128 words per minute

Speech length

207 words

Speech time

96 seconds

Linking national, regional, and global IGF initiatives

Explanation

Desire Evans highlights the importance of connecting national, regional, and global IGF initiatives. She suggests that this linkage could strengthen the overall impact of internet governance forums at all levels.

Evidence

She mentions the opportunity to discuss this proposal at the upcoming IGF in Norway.

Major Discussion Point

Evolution of Internet Governance Forums

Agreements

Agreement Points

Importance of multi-stakeholder engagement

Christine Arida

Zeina Bouharb

Ayman El-Sherbiny

Charles Shaban

Ahmed N. Tantawy

Need for multi-stakeholder engagement and dialogue

Promoting regional cooperation and initiatives like Arab IGF

Strengthening civil society and private sector participation

Focusing on youth engagement and capacity building

Speakers agreed on the critical importance of involving all stakeholders in internet governance discussions and decision-making processes.

Enhancing digital infrastructure and connectivity

Zeina Bouharb

Ahmed N. Tantawy

Importance of enhancing connectivity and digital infrastructure

Bridging the digital divide within countries

Speakers emphasized the need to invest in digital infrastructure and address the digital divide to ensure equitable access and participation in the digital economy.

Similar Viewpoints

Both speakers highlighted the important role of governments and intergovernmental organizations in shaping internet governance policies while also emphasizing the need for multi-stakeholder engagement.

Christine Arida

Ayman El-Sherbiny

Governments as key players in policy-making

Intergovernmental organizations facilitating cooperation

Both speakers emphasized the importance of empowering underrepresented groups (civil society and youth) to participate effectively in internet governance processes.

Charles Shaban

Ahmed N. Tantawy

Ensuring sustainable funding for civil society participation

Focusing on youth engagement and capacity building

Unexpected Consensus

Revitalizing regional Internet Governance Forums

Ayman El-Sherbiny

Desire Evans

Revitalizing the Arab IGF and creating “Arab IGF 2.0”

Linking national, regional, and global IGF initiatives

Despite representing different stakeholder groups, both speakers agreed on the need to strengthen and revitalize regional IGFs, suggesting a shared recognition of the importance of these forums in shaping internet governance.

Overall Assessment

Summary

The main areas of agreement included the importance of multi-stakeholder engagement, the need to enhance digital infrastructure, the role of governments and intergovernmental organizations in facilitating cooperation, and the importance of empowering underrepresented groups in internet governance processes.

Consensus level

There was a moderate to high level of consensus among the speakers on key issues. This consensus suggests a shared understanding of the challenges and priorities for internet governance in the Arab region, which could facilitate more coordinated efforts to address these issues. However, the diversity of perspectives also highlights the complexity of internet governance and the need for continued dialogue and collaboration among all stakeholders.

Differences

Different Viewpoints

Role of governments in internet governance

Christine Arida

Zeina Bouharb

Governments as key players in policy-making

Importance of enhancing connectivity and digital infrastructure

Christine Arida emphasizes the crucial role of governments in shaping and implementing policies, while Zeina Bouharb focuses more on the need for infrastructure development, implying a less central role for governments.

Unexpected Differences

Approach to addressing disinformation

Maisa Amer

Ayman El-Sherbiny

Tackling disinformation and misinformation

Promoting regional cooperation and initiatives like Arab IGF

While Maisa Amer raises concerns about disinformation and seeks information on regulatory frameworks, Ayman El-Sherbiny’s focus on regional cooperation does not directly address this issue, highlighting an unexpected gap in addressing a critical challenge.

Overall Assessment

Summary

The main areas of disagreement revolve around the role of different stakeholders in internet governance, approaches to capacity building, and priorities in addressing regional challenges.

Difference level

The level of disagreement among speakers is moderate. While there are differing emphases on various aspects of internet governance, there is a general consensus on the importance of multi-stakeholder engagement and regional cooperation. These differences in perspective could lead to varied approaches in implementing internet governance policies in the Arab region, potentially affecting the balance between government-led initiatives and grassroots participation.

Partial Agreements

Both speakers agree on the importance of multi-stakeholder participation in internet governance, but Ayman El-Sherbiny emphasizes regional cooperation through initiatives like the Arab IGF, while Charles Shaban focuses more on strengthening civil society and private sector involvement.

Ayman El-Sherbiny

Charles Shaban

Promoting regional cooperation and initiatives like Arab IGF

Strengthening civil society and private sector participation

Both speakers agree on the need for broader participation in internet governance, but Ahmed N. Tantawy emphasizes youth engagement and capacity building, while Charles Shaban focuses on sustainable funding for civil society participation.

Ahmed N. Tantawy

Charles Shabani

Focusing on youth engagement and capacity building

Ensuring sustainable funding for civil society participation

Takeaways

Key Takeaways

There is a need for greater multi-stakeholder engagement and dialogue in internet governance in the Arab region

Enhancing connectivity and digital infrastructure is crucial for sustainable internet governance

Regional cooperation and initiatives like the Arab IGF play an important role in shaping internet governance

Youth engagement and capacity building should be prioritized

Civil society and private sector participation needs to be strengthened in the region

Addressing cybersecurity, data protection, and misinformation are key priorities

The Arab IGF needs to be revitalized and evolved to be more effective

Resolutions and Action Items

Participants encouraged to fill out open consultations on the ITU website by March 14th regarding the usefulness of the IGF platform

Arab IGF 7 to be held in Amman from February 23-27, 2025

Consultation meeting on WSIS+20 to be held tomorrow at 11:30 in room 10

Unresolved Issues

How to effectively increase government participation in multi-stakeholder internet governance processes

Specific mechanisms to link IGF outcomes to actual decision-making processes

How to ensure equal participation of all stakeholders, especially civil society

Concrete steps to address the digital divide within Arab countries

Suggested Compromises

Balancing intergovernmental processes with multi-stakeholder engagement in internet governance

Finding ways to make IGF outcomes more impactful without making them binding

Striking a balance between protecting platforms and users in addressing misinformation

Thought Provoking Comments

The two worlds of intergovernmentalism and multi-stakeholderism, they can live together very smoothly. They are two sides of a coin. No side can work without the other.

speaker

Ayman El-Sherbiny

reason

This comment provides a nuanced perspective on the relationship between governmental and multi-stakeholder approaches to internet governance, challenging the notion that they are incompatible.

impact

It shifted the discussion towards considering how these two approaches can complement each other rather than viewing them as opposing forces. This led to further exploration of how to integrate multiple stakeholders in policy development processes.

We need to show them more the importance of being part of the policy making, let’s say, because I know this is mainly a non-binding forum. At the same time, it’s important not to wait to see what the others want.

speaker

Charles Shaban

reason

This comment highlights the challenge of engaging the private sector in internet governance forums and proposes a proactive approach.

impact

It sparked discussion on how to make internet governance forums more relevant and impactful for all stakeholders, particularly the private sector. This led to considerations of how to demonstrate the value of participation in these forums.

I think creating kind of a networking between all these initiatives in our region is something will help us to focus on our challenges in the region.

speaker

Ahmed N. Tantawy

reason

This comment introduces the idea of creating a network of regional internet governance initiatives, which could enhance collaboration and focus on regional challenges.

impact

It shifted the conversation towards discussing concrete ways to improve coordination and collaboration among various internet governance initiatives in the Arab region.

We need to have a new commitment, we need to renew our commitments for a sustainable IGF that includes all the voices.

speaker

Shafiq

reason

This comment emphasizes the need for renewed commitment to inclusive and sustainable internet governance forums.

impact

It served as a call to action, encouraging participants to actively engage in shaping the future of internet governance forums. This led to discussions about concrete steps to ensure sustainability and inclusivity in these forums.

Overall Assessment

These key comments shaped the discussion by highlighting the need for greater collaboration between different stakeholders, including governments, civil society, and the private sector. They emphasized the importance of creating more inclusive and sustainable internet governance forums, particularly in the Arab region. The discussion evolved from identifying challenges to proposing concrete solutions, such as networking regional initiatives and renewing commitments to multi-stakeholder engagement. Overall, the comments pushed the conversation towards a more action-oriented and forward-looking approach to internet governance in the region.

Follow-up Questions

How can we encourage Arab governments to get more engaged in the Internet Governance Forum?

speaker

Nermine Saadani

explanation

There is a lack of government presence at IGF events, which limits the forum’s effectiveness and representation.

How can we strengthen the Arab IGF and create an ‘Arab IGF 2.0’?

speaker

Ayman El-Sherbiny

explanation

There is a need to revitalize and strengthen the Arab IGF to make it more effective and relevant in the region.

How can we better connect IGF outcomes to actual decision-making processes?

speaker

Christine Arida

explanation

There is a need to improve the impact of IGF discussions by ensuring they feed into concrete policy decisions.

How can we define the difference between digital governance and internet governance?

speaker

Christine Arida

explanation

Clarifying these concepts is important for focusing future IGF discussions and policy work.

How can we improve the multi-stakeholder model to ensure more equal participation, especially for civil society?

speaker

Tijani bin Jum’ah

explanation

There is a need to address the imbalance in resources and representation among different stakeholder groups in internet governance discussions.

How can we develop a consensus opinion from the Arab region on the evolution of WSIS and WSIS+20?

speaker

Tijani bin Jum’ah

explanation

There is a need for more coordinated input from the Arab region into global internet governance processes.

How can we better integrate AI considerations into internet governance frameworks?

speaker

Audience member (unspecified)

explanation

The impact of AI on misinformation and other internet issues is becoming increasingly important and complex.

How can we improve digital literacy programs within Arab countries?

speaker

Zeina Bouharb

explanation

Enhancing digital literacy is crucial for effective internet governance and participation in the digital economy.

How can we foster more regional cooperation on internet governance issues in the Arab world?

speaker

Zeina Bouharb

explanation

Increased regional cooperation could strengthen the Arab voice in global internet governance discussions.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Open Forum #28 How to procure Internet, websites and IoT secure and sustainable

Open Forum #28 How to procure Internet, websites and IoT secure and sustainable

Session at a Glance

Summary

This discussion focused on the use of internet.nl, an open-source tool for measuring and improving internet security standards, and its adoption in various countries. The tool allows organizations to test their websites and email systems for compliance with modern security standards. Representatives from the Netherlands, Brazil, Singapore, and Japan shared their experiences and plans for implementing similar tools.
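The kind of compliance checks such a tool performs can be illustrated with a minimal, stdlib-only Python sketch. This is not internet.nl or its API; it probes only two of the many standards the real tool covers (IPv6 reachability and a modern TLS version), and the function names are illustrative.

```python
# Minimal sketch of two internet.nl-style checks, using only the Python
# standard library. Assumptions: function names and the pass/fail criteria
# here are illustrative, not taken from the internet.nl test suite.
import socket
import ssl
from typing import Optional


def has_ipv6_address(domain: str) -> bool:
    """Return True if the domain resolves to at least one IPv6 (AAAA) address."""
    try:
        infos = socket.getaddrinfo(domain, 443, family=socket.AF_INET6)
        return len(infos) > 0
    except socket.gaierror:
        return False


def negotiated_tls_version(domain: str, port: int = 443) -> Optional[str]:
    """Connect over HTTPS and report the negotiated TLS version, or None on failure."""
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((domain, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=domain) as tls:
                return tls.version()  # e.g. "TLSv1.3"
    except (OSError, ssl.SSLError):
        return None


def summarize(ipv6: bool, tls_version: Optional[str]) -> str:
    """Condense the two probes into a one-line verdict."""
    tls_ok = tls_version in ("TLSv1.2", "TLSv1.3")
    return f"IPv6: {'yes' if ipv6 else 'no'}, modern TLS: {'yes' if tls_ok else 'no'}"


if __name__ == "__main__":
    domain = "example.org"  # any domain you administer
    print(summarize(has_ipv6_address(domain), negotiated_tls_version(domain)))
```

The real tool goes much further (DNSSEC, HTTPS configuration details, anti-spoofing email standards, and more) and aggregates results into the transparent score discussed below, but the shape is the same: run standard protocol probes against a domain and report which modern standards it supports.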


Key points included the importance of transparency in security ratings, the role of government in promoting adoption, and the potential for these tools to drive improvements in cybersecurity practices. The Dutch approach of using procurement processes to encourage better security practices was highlighted. Participants also discussed the challenges of implementing such tools in developing countries and the need for proactive measures to ensure no country is left behind in adopting internet security standards.


The discussion then shifted to sustainability in digital systems, introducing the concept of the “twin transition” – balancing digitalization with sustainability. The Dutch Coalition for Sustainable Digitalization was presented as an example of a public-private partnership addressing this issue. The importance of sustainable IT procurement was emphasized, with a focus on energy efficiency, emission reduction, and circular economy principles.


Overall, the session underscored the interconnected nature of internet security, sustainability, and procurement practices. It highlighted the potential for tools like internet.nl to drive improvements in these areas and the importance of international collaboration in addressing these challenges.


Keypoints

Major discussion points:


– Internet.nl tool for measuring adoption of internet security standards


– International efforts to implement similar tools (Brazil, Singapore, Japan)


– Using procurement processes to drive sustainability in IT


– Combining digitalization and sustainability efforts


– Building critical mass to influence big tech companies on security standards


Overall purpose:


The discussion aimed to share information about the Internet.nl tool for measuring internet security standards adoption, highlight international efforts to implement similar tools, and explore how procurement can be used to drive both security and sustainability improvements in IT.


Tone:


The tone was informative and collaborative throughout. Speakers shared their experiences and insights in a constructive manner, with an emphasis on learning from each other and working together to improve internet security and sustainability globally. There was a sense of optimism about the potential for these tools and approaches to make a positive impact.


Speakers

– Wout de Natris: Coordinator of the Internet Standard Security and Safety Coalition


– Wouter Kobes: Standardization Advisor at the Netherlands Standardization Forum


– Annemieke Toersen: Senior Policy Advisor at the Netherlands Standardisation Forum


– Gilberto Zorello: Project coordinator at NIC.br (Brazil)


– Steven Tan: Assistant director at the Cyber Security Agency of Singapore


– Daishi Kondo: Associate professor at Osaka Metropolitan University


– Hannah Boute: Program coordinator for the Dutch Coalition for Sustainable Digitalization


– Rachel Kuijlenburg: Coordinator sustainability for Logius, Ministry of the Interior (Netherlands)


– Coen Wesselman: Rapporteur for the session


Additional speakers:


– Flavio Kenji Yanai: System developer at NIC.br (Brazil)


– Peter Zanga Jackson, Jr.: From the regulatory body in Liberia


– Shawna Hoffman: With Guardrail Technologies


– Munzel Mutairi: CEO of Nataj Al Fikr


Full session report

Internet Security Standards and Sustainable Digitalisation: A Global Perspective


This comprehensive discussion focused on the implementation and promotion of internet security standards through tools like internet.nl, as well as the integration of sustainability principles in digital systems. Representatives from various countries shared their experiences and plans, highlighting the importance of international collaboration in addressing these challenges.


Internet Security Standards and Assessment Tools


The discussion began with an introduction to internet.nl, an open-source tool developed in the Netherlands for measuring and improving internet security standards adoption. Wouter Kobes, Standardization Advisor at the Netherlands Standardization Forum, emphasised that the tool provides a quick assessment of an organisation’s ICT environment security. Wout de Natris, Coordinator of the Internet Standard Security and Safety Coalition, added that it allows organisations to test themselves and improve their security posture. Importantly, it also enables governments to test organizations and pressure them to enhance their security practices.


Several countries have implemented or expressed interest in similar tools:


1. Brazil: Gilberto Zorello, Project coordinator at NIC.br, reported that Brazil has implemented Top.br, with increasing adoption rates.


2. Singapore: Steven Tan, Assistant director at the Cyber Security Agency of Singapore, shared that they have developed the Internet Hygiene Portal (IHP). Since its launch in 2022, IHP has conducted over 200,000 scans with users from across 40 countries, with more than 45% of domains showing improvements from their initial scan to their most recent evaluation. Singapore also introduced the Internet Hygiene Rating Initiative.


3. Japan: Daishi Kondo, Associate professor at Osaka Metropolitan University, expressed interest in implementing a Japanese version of the tool.


Government Strategies for Promoting Internet Standards


Speakers highlighted various government strategies to promote internet standards adoption:


1. The Netherlands: Annemieke Toersen, Senior Policy Advisor at the Netherlands Standardisation Forum, outlined a three-fold strategy involving mandates, monitoring, and community building. She noted that the Dutch government mandates specific open standards through a “comply or explain” list.


2. Singapore: Steven Tan explained that Singapore uses a transparent rating system to shift industry behaviour towards better security practices.


3. Brazil: Gilberto Zorello shared that Brazil promotes their tool through industry meetings and events.


A key point of agreement was the importance of engaging with major technology companies to improve standards support. Annemieke Toersen emphasised that by collaborating with other countries, they create a critical mass that enables more effective negotiations with suppliers.


International Collaboration and Challenges


The formation of an international community to share experiences with internet.nl-like tools was highlighted as a positive development. This collaboration was seen as crucial for creating a unified approach to improving internet security globally. The community plans to meet twice a year online, with the first meeting scheduled for spring 2025.


However, the discussion also revealed challenges faced by developing countries in adopting these standards. Peter Zanga Jackson, Jr., from the regulatory body in Liberia, raised concerns about the disparity in internet development between countries and sought guidance on how regulators in developing nations could implement tools like internet.nl with limited resources.


Sustainable Digitalisation


The latter part of the discussion shifted focus to sustainability in digital systems, introducing the concept of the “twin transition” – balancing digitalisation with sustainability.


Hannah Boute, Program coordinator for the Dutch Coalition for Sustainable Digitalization, presented their organisation as an example of a public-private partnership addressing this issue. She briefly mentioned the EU Corporate Sustainability Reporting Directive (CSRD) as a relevant development in this area.


Rachel Kuijlenburg, Coordinator sustainability for Logius at the Ministry of the Interior (Netherlands), emphasised the importance of sustainable IT procurement. She highlighted a framework for minimising energy needs and emissions in IT, noting that 80% of IT’s environmental footprint comes from hardware production. Kuijlenburg presented a sustainability framework based on the “refuse, reduce, reuse” strategy, although it was noted that her slide was in Dutch. She also discussed the SARD legislation in Europe related to procurement practices.


Key points in sustainable digitalisation included:


1. Focus on sustainable procurement of IT


2. Development of frameworks to minimise energy needs and emissions


3. Importance of contract management in sustainable IT procurement


Conclusions and Future Directions


The discussion underscored the interconnected nature of internet security, sustainability, and procurement practices. It highlighted the potential for tools like internet.nl to drive improvements in these areas and the importance of international collaboration in addressing these challenges.


Key takeaways included:


1. The effectiveness of tools like internet.nl in assessing and improving internet security standards adoption


2. The value of government strategies involving mandates, monitoring, and community building


3. The potential of international collaboration to influence big tech companies


4. The growing focus on sustainable digitalisation, particularly in IT procurement


Unresolved issues and areas for further exploration included:


1. Effective implementation of internet security tools in developing countries with limited resources


2. Development of specific metrics or targets for sustainable IT procurement


3. Balancing security and sustainability requirements in IT procurement


The discussion concluded with a call for continued international cooperation and the development of workshops and capacity-building initiatives to implement internet security recommendations globally. As a light-hearted note, it was mentioned that internet.nl t-shirts were available at the end of the session.


Session Transcript

Wout de Natris: which is a tool that tells you how secure your ICT environment is within seconds. Two, Internet.nl is an international community, launched in 2020. There is no sound at the moment, so… I hear you, and I hear myself as well. Continue? There was a sound issue, sorry for that. Three, ICT and sustainability is becoming an ever more serious topic, and it is discussed in the final part of the session. In the panel discussion, you will learn how the Dutch government uses procurement and how it negotiated with big tech to deploy important internet standards. You will also hear from other organisations that work with the Internet.nl tool, or strive to do so, and learn from their experiences. There will be ample time to ask questions after the two panels, but first, a short survey. Please show your hands. Who in the room is familiar with Internet.nl? I see a few hands. Who has used Internet.nl? And who is interested in deploying it in their country? Okay, thank you very much. Without further ado, let me introduce the first speaker. Wouter Kobes, Standardisation Advisor at the Netherlands Standardisation Forum, is going to take you through the internet.nl tool and how it works.


Wouter Kobes: So Wouter, the floor is yours. Thank you very much, Wout. Yes, we will talk about measuring adoption of standards to improve security, and for that we use the internet.nl tool. On the next slide we first have a motto. Why are we using internet standards? Well, to keep our internet open, free and secure. However, those standards do not implement themselves; you have to implement them actively. To support this adoption of standards, we developed the internet.nl tool, which can be used to measure your adoption of these important standards. For demonstration purposes, we measured the IGF donors and partners for this event using our dashboard. The dashboard is a measuring tool with which you can automatically measure multiple domain names on their adoption of email and website standards. As you can see here, this is just a snippet of our results from last week. The adoption of standards among IGF donors is not optimal yet; however, some scores are over 50%, which is actually pretty good considering international standards adoption. This dashboard can be used for scheduled scanning and also for trend monitoring over time, and it is regularly updated with the latest standards and measurements. On the next slide you can see a more detailed scan result from using our internet.nl website to scan an individual domain name. Quite ironically, this year's IGF website did not perform too well. The nice thing is that we do not only present a score for each domain name; we also explain why that score is given and give guidance on how to improve it. For instance, in this case, to enable some settings to protect your HTTPS connection better. On the next slide, you can see who is behind internet.nl: a public-private collaboration of organisations in the Netherlands and beyond. We are also striving for more international use of internet.nl, and on the next slide, you can see some of our international users.
We are used at the moment in Brazil, in Portugal, in Denmark, and, well, hopefully after this session, you will be inspired to use internet.nl in your country as well. Using internet.nl is actually quite easy, because, as I will explain on this next slide, internet.nl is an open-source software program. It is available on GitHub if you want to reuse it yourself. However, if you are not in a position to host your own internet.nl instance, you can also use our dashboard or API functionality. For that, please send us an email at question@internet.nl, and we will give you access to our tooling. And as already introduced by Wout, we are also launching a new international initiative in which you can participate in using internet.nl internationally. So if you are interested in joining that user community, please send an email to international@internet.nl. With that, I thank you for your attention.
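The dashboard described above scans many domains and presents a score per domain. Purely as an illustration of that idea, here is a minimal sketch that aggregates per-standard pass/fail results into a percentage score; the result structure, field names and equal weighting are assumptions for this example, not internet.nl's actual API schema or scoring rules.

```python
# Sketch: aggregate per-standard pass/fail results into a domain score.
# The input structure and equal weighting are illustrative assumptions;
# internet.nl's real schema and scoring differ.

def domain_score(checks: dict[str, bool]) -> int:
    """Return the percentage of checks passed, rounded down."""
    if not checks:
        return 0
    return 100 * sum(checks.values()) // len(checks)

def report(results: dict[str, dict[str, bool]]) -> dict[str, int]:
    """Score every scanned domain, e.g. for trend monitoring over time."""
    return {domain: domain_score(checks) for domain, checks in results.items()}

if __name__ == "__main__":
    scan = {
        "example.org": {"HTTPS": True, "HSTS": True, "DNSSEC": False, "IPv6": True},
        "example.net": {"HTTPS": True, "HSTS": False, "DNSSEC": False, "IPv6": False},
    }
    print(report(scan))  # {'example.org': 75, 'example.net': 25}
```

A real integration would fetch such results from the dashboard or API and track the scores over time, which is how the trend monitoring mentioned above works in spirit.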


Wout de Natris: Thank you, Wouter. And actually, if you mail to that email address, international@internet.nl, it is me who will respond to you, because I was recently appointed as its coordinator. And if you want to test it yourself, you can do so right now: just go to internet.nl and type in, for example, the URL of your bank or your organisation, and within about 30 seconds you will see how secure your organisation is. That is what the tool tells you, and the sort of message you can take away from it. Thank you again, Wouter, for the presentation. We are going into the panel discussion, which has four participants. The first participant is Annemieke Toersen of the Platform Internet Standards, who is a Senior Policy Advisor at the Netherlands Standardisation Forum. So Annemieke, please.


Annemieke Toersen: Thank you very much, Wout, for your brief introduction, and thank you, Wouter, for showing us the advantages of internet.nl. And thank you all for joining our session; more people have come into the room just now, thank you very much indeed. My name is Annemieke Toersen, from the Netherlands Standardisation Forum. This is a think tank that aims for more interoperability of the Dutch government, and open standards are key to this goal. Think about interoperability for trustworthy data exchange, or security, which of course influences trust positively. As a government you are obliged to be interoperable and to inform society as a whole, and neutrality, meaning no dependency on vendors, is very important in this case. The forum actively promotes and advises the Dutch government on the usage of standards, and you have to consider that it is about 25 people from various backgrounds: government, business and science. When it comes to internet standards, the Dutch government has a threefold strategy, shown on the next sheet, and I will briefly go through it. It's a bit... it's another sheet. It's backwards. Yes, that's the one. Thank you very much. First, we mandate specific open standards. We can do so by including standards on the comply-or-explain list. This is done after careful research, in which we also consult technical experts. Standards on this list are required when governments invest in new IT systems or services. As we surveyed some of the bigger ICT organisations within the Dutch government, we have seen quite some progress in using open standards. However, it also became clear that some organisations have not yet moved on. In addition to comply-or-explain, the Standardisation Forum can also make agreements with ultimate implementation dates, and we have already done so for several modern internet standards like HTTPS and DNSSEC, as well as RPKI.
So firstly, we mandate by law specific open standards, for instance the open standards HTTPS and HSTS. Second, we monitor to promote the adoption of standards, reviewing tenders and procurement documents. For modern internet standards, we happily use internet.nl, as just mentioned, to frequently measure about two and a half thousand government domains, which is quite a lot. Finally, number three, we invest in community building. We try to bridge the gap between technical experts and governmental officials; therefore we are really happy with the Platform Internet Standards and participate in it actively. This cooperation enables us to be more effective and helpful to governments with their technical questions, and also with their questions on how to request modern internet standards from their vendors. As for community building and international collaboration on digital standards, we engage in several efforts, for instance the Platform Internet Standards and the Secure Mail Coalition. Our international initiatives include MESHEU, a collaboration with European countries; Wouter already mentioned countries like Denmark, Czechia and Portugal. The Internet.nl code is also reused by partners like Australia, and further on you will hear from Flavio from Brazil, all to create greater critical mass. We actively reach out to vendors and hosting providers like Cisco, Microsoft, Open-Xchange, Google and Akamai. This approach inspired Denmark to adopt similar practices, resulting in successes such as Microsoft's announcement of full support for the DANE email security standard on Exchange Online last October. We are pretty proud of that. This achievement is partly due to ongoing correspondence and discussions between the Dutch government and Microsoft since 2019. And by lobbying other countries, we create a critical mass that enables more effective negotiations with suppliers.
Our experience with Microsoft demonstrates the importance of formalising agreements through concrete correspondence, which is very important, of course. This strategy can be applied to other areas, such as sustainability, which we will hear about further on, and suppliers are more inclined to modify their services when multiple governments or countries support an issue. The key is to build a critical mass. Sorry, this was the next sheet; you can show the next sheet, please. Building critical mass is very important: find other partners and formalise agreements through concrete correspondence. That is the key to this success. Thank you very much.


Wout de Natris: Thank you, Annemieke. And as you can see, everybody thinks that with big tech everything is set in concrete. It is not, until you start discussing with them, and perhaps they change their ways and make us all more secure, because that apparently is what is going to happen. Now we are going to move outside of the Netherlands and listen to what other countries have been doing so far with Internet.nl, giving it, of course, their own name. The first to speak is Gilberto Zorello, who will talk about Top.nic.br from Brazil, where he is a project coordinator at NIC.br. In the room is his colleague Flavio Kenji Yanai, a system developer; if you have any questions about Brazil, you can ask Flavio after the session. But first, Gilberto,


Gilberto Zorello: the floor is yours. Hi, good afternoon to all. It is a pleasure to be in this meeting. The implementation of Internet.nl in Brazil is called Top.nic.br. In Brazil the tool was deployed using a middle-up-down approach, unlike the Netherlands. However, we hope that the key players in the Brazilian market will also adopt the best practices through its usage. We promote the tool in meetings here in Brazil, at events of NIC.br and the Association of Internet Service Providers. We have some numbers on utilisation here in Brazil. For website tests, about 4,000 unique domains have been tested so far. The Hall of Fame has about 600 entries; adoption of IPv6 is only about 20%, DNSSEC signing 20%, and HTTPS about 6%. We are working with the government here in Brazil: they tested many government websites and are working internally with the organs of government to improve the implementation of these best practices here in Brazil. For mail tests, about 21,000 unique domains have been tested. The Hall of Fame has about 80 entries; IPv6 only 30%, DNSSEC signing 11%, DMARC and SPF around 16%, and STARTTLS just 1%. The connection test has about 300,000 tests, more tests in this case, covering about 7,000 unique ASes. That is an important number, because we have 9,000 ASes here in Brazil, so 7,000 have been tested up to now. DNS resolvers validating DNSSEC number about 210,000, or 71%, and IPv6 use is 60 to 70%. It is important to say that the adoption of IPv6 in Brazil has been increasing over the last year. The other important thing is that we have just implemented version 1.7; next year we will implement version 1.8 of Internet.nl. I don't know if Flavio can add some information. Flavio says he is okay. Thank you, Gilberto, for showing us how things have changed. Were you finished? Yes, I'm finished.
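The coverage and adoption figures Gilberto quotes are simple ratios over scan counts. As a trivial illustration of how such percentages are derived (the rounded input numbers below are the ones mentioned in the talk):

```python
# Sketch: compute adoption/coverage percentages like those quoted for Brazil.
# Input figures are the rounded counts from the talk, used for illustration only.

def adoption_pct(adopted: int, total: int) -> float:
    """Share of a population meeting a criterion, as a percentage."""
    return round(100 * adopted / total, 1)

print(adoption_pct(7_000, 9_000))  # ASes covered by the connection test -> 77.8
```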


Wout de Natris: Okay, that’s what I thought, but I just wanted to check to be certain. Thank you very much. It’s encouraging to see that numbers are going up because of the work that you’re doing, and I want to congratulate you on that. The next person to speak is coming in from Singapore, and that is Steven Tan. He works as an assistant director at the Cyber Security Agency of Singapore and is responsible for internet security and mobile security. Steven, you have been working with the Internet.nl tool, or something very similar to it, for some years. Can you tell us about the experience that you have in Singapore?


Steven Tan: Right. Hi, everyone. I’m Steven. Similar to Internet.nl, Singapore has developed our own Internet Hygiene Portal (IHP), which is meant to improve the country’s cybersecurity landscape. The IHP encourages service providers to adopt key internet security best practices through a very transparent rating system. So on top of the tool itself, we have come up with a rating system to actually understand whether they... sorry, can I just double-check, because I’m seeing that I’m being muted here. We can hear you, Steven, but you dropped away for a few seconds. Okay, sure, right. So basically we came up with a transparent rating system known as our Internet Hygiene Rating. This approach has helped to shift industry behaviour and perspective by promoting proactive security enhancements. Since its launch in 2022, the IHP has conducted more than 200,000 scans, with users from across 40 countries. Importantly, more than 45% of domains have shown improvements from their first initial scan to their most recent evaluation. That has given us data points showing that the IHP reflects meaningful progress in cybersecurity readiness as well. Similar to what internet.nl has done, we have also included an API, which will be available early next year and will enable seamless integration for businesses looking to automate their security assessments. Several industry players and ICT providers have also signed up with us, showing keen interest and commitment to enhancing their cybersecurity postures. To date, the IHP has helped shift the cybersecurity landscape in Singapore. We have seen ICT service providers within the APAC region, like Orion and Exabytes, step up and join the Internet Hygiene Rating initiative.
What it means is that they have configured their websites and email services to meet, by default, strong internet security best practices, which also means that any clients or businesses that go to these providers would, by default, have a high rating on internet security best practices. What has been even more encouraging is that even smaller ICT firms in Singapore have come on board. They are now featured in our Internet Hygiene Rating under the ICT website and email management providers category; I’ll provide the link later on. It shows that more of these businesses are starting to recognise the importance of following recognised internet security best practices. So far, the response from vendors has been great. Similar to what the internet.nl team shared earlier on, even tech giants like Microsoft, Amazon and Akamai have shown willingness to collaborate with us, recognising the tool’s potential to drive collective cybersecurity improvements. Besides this, Bart and I have been sharing notes on our engagements with Microsoft, so that we can help nudge the different tech giants into taking action to really make the internet a safer place. To date, the involvement between us and the tech giants has signalled an industry shift towards greater cooperation and shared responsibility in maintaining a secure internet environment. By adopting the IHP’s recommended best practices, they have also strengthened trust in their platforms while contributing to a safer digital ecosystem. Such collaboration proves that we can set security norms through voluntary industry engagement, where transparency and fair recognition are in place. Moving forward, besides the Internet Hygiene Portal, we see similar tools like internet.nl and, of course, NIC.br’s as a strong starting point for establishing broader security norms.
And while, of course, we understand that formal regulations may come later, the primary focus of such tools is to encourage voluntary adoption through industry recognition, public visibility and, of course, healthy competition. So kudos to the various teams here that have created such wonderful tools for your countries, for the region and, in fact, internationally. Thanks.
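The transparent rating Steven describes boils down to mapping a compliance score onto a public rating band. The bands and cut-offs in this sketch are invented for illustration; CSA's actual Internet Hygiene Rating criteria differ.

```python
# Sketch of a transparent rating scheme in the spirit of the Internet Hygiene
# Rating: map a percentage score to a letter band. Bands and cut-offs are
# invented for illustration; they are not CSA's actual criteria.

def hygiene_rating(score: int) -> str:
    """Map a 0-100 compliance score onto a publicly visible rating band."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 90:
        return "A"
    if score >= 75:
        return "B"
    if score >= 50:
        return "C"
    return "D"

print(hygiene_rating(92), hygiene_rating(60))  # A C
```

Publishing such a band per provider is what creates the "healthy competition" effect: providers can see at a glance where they stand relative to peers.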


Wout de Natris: Thank you, Steven. I think what the three presentations show is that, on the one hand, organisations can test themselves with this tool, but also, as you heard Annemieke and Steven say, that they test organisations and let them know what their current status is. We heard from both that organisations that are tested and do not score so well are inclined to move upwards, better themselves and enter the Hall of Fame, but also that the big corporations are more or less exposed as being less secure. That means some pressure starts to exist on them to better themselves, and I think that is one of the major factors that makes this tool so successful. We have heard from three organisations that have deployed it. We will now hear from an organisation that is looking into deployment. So I am inviting the next speaker, Daishi Kondo. He is an associate professor at Osaka Metropolitan University, and one of his research interests is internet security, including email security. So Daishi, can you tell us what you are doing in Japan to make Internet.nl happen, and what challenges you are running into? Thank you. Okay.


Daishi Kondo: Thank you very much. I’m Daishi Kondo from Osaka Metropolitan University. I have to say that, to the best of my knowledge, the Japanese government does not provide a tool similar to internet.nl, so I want to answer two questions. The first question is: what makes the internet.nl principle interesting for Japan? For me this is somewhat imagined, because we do not have a similar tool. The important point is security visualisation. Internet.nl has a scoring system, and most people do not know the details of security measures such as SPF, DKIM and DMARC, although it is not necessary for people to know the details. However, people can understand the security level of systems through the scoring system. This principle allows people to easily prepare specification sheets for introducing systems: for example, we want to achieve at least an 80% score, or something like this. Also, internet.nl has a Hall of Fame and a compliance badge. These features can create peer pressure among competitors within the same industry, which is also very interesting to me. The second question is: what do I expect to achieve in Japan by using internet.nl? My answer is that using internet.nl can encourage people to take better care of their systems. Currently, in Japan, we do not have such a standard. One potential use case is to create a specification sheet for introducing a system using a Japanese version of internet.nl, which can also be used to check the security measures implemented by the system provider. I hope at some point we can implement a Japanese internet.nl in Japan. Thank you very much.
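Daishi's specification-sheet idea, requiring at least a given score before a system is accepted, can be sketched as a simple threshold check. The vendor names and scores below are hypothetical; only the 80% threshold comes from his example.

```python
# Sketch of the specification-sheet use case: accept a vendor's system only if
# its scan score meets the required threshold (80% in Daishi Kondo's example).
# Vendor names and scores are hypothetical.

REQUIRED_SCORE = 80  # the minimum score written into the specification sheet

def meets_spec(scores: dict[str, int], threshold: int = REQUIRED_SCORE) -> dict[str, bool]:
    """Check each candidate system's score against the procurement threshold."""
    return {name: score >= threshold for name, score in scores.items()}

print(meets_spec({"vendor-a.example": 85, "vendor-b.example": 72}))
# {'vendor-a.example': True, 'vendor-b.example': False}
```

The appeal of the approach is that the buyer never needs to understand SPF, DKIM or DMARC individually; the single score stands in for the whole checklist.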


Wout de Natris: And I wish you good luck with the implementation of internet.nl in Japan. I understand that there’s an online question. No? Then we go to the room to see if there are any questions on your side. So who would like to ask a question? Yes, please, sir. Do you have a microphone for a question in the room? Do you have a microphone? And please introduce yourself and your affiliation first, please, sir.


Audience: Firstly, my name is Peter Zanga Jackson, Jr. I’m from Liberia, from the regulatory body. My question is: our countries are far behind in terms of internet development, and now we are talking about internet.nl; for my country that would be internet.lr. As regulators for a developing country like mine, what can we do? What is required of regulators to ensure that we too have an internet.lr, that is, a .Liberia version? What can we do?


Wout de Natris: Well, I’m going to give the question to Wouter, because he is working in that field, so he is going to answer it. Thank you, and a very good question indeed. Yes, thank you very much for this question.


Wouter Kobes: As I mentioned, it is possible to reuse the code; it is an open-source project, and as a regulator I think it might be an interesting tool to reuse, because you can of course measure standards adoption. As to how to launch it: from a technical perspective it is simply a matter of following the installation instructions, or of making use of our own tooling through the dashboard or API features. Of course, if you want to use this for regulatory purposes, you will have to do some work of your own to make it land in your country, getting operators, hosting providers and ISPs to actually use the tool to measure and improve their own standards adoption. But I would say the starting point for adoption is making it possible to measure your adoption, and this source code can contribute to that. I hope that answers


Wout de Natris: your question. And it is available for free, so it is not a business model; it is there for you to use if you want to start using it.


Audience: Yeah, well, my fear is: will we not be breaching anything? Because by using internet.nl, would we not be creating a problem by interfering with the setup of the Netherlands? Will that not create a problem for Liberia if we start using internet.nl in Liberia? How can we handle this?


Wout de Natris: I would say that if you decide to start using it, you deploy it in your own country, and from that moment you name it as you choose. Brazil, for example, calls it Top, and Singapore has given it another name. So you give it your own name; it is no longer internet.nl, it is the name that you choose, and from that moment on, it is yours. It is available for free, with instructions; just exchange cards with Wouter and it will be okay. And then you can join the community that we will be setting up very soon; the first invitation will go out probably in January or early February. So please join if you would like to. Is there another question in the room? Time for one more. Is there a question? Yes. And something online, Doreen? No? Then the floor is yours. Please introduce yourself first. No? Or not? Oh, okay. Yes. The lady here. Okay. Thank you.


Audience: So first off, thank you for this initiative. Please introduce yourself. Thank you so much. Shawna Hoffman with Guardrail Technologies. One question I have: I actually just looked up my website and realised we have some work to do ourselves. I come from the United States, so what advancements have you been making with the U.S., or is this something that we need to start in our country?


Wouter Kobes: I’m actually not quite aware of any initiative in the U.S. right now. But I think, as Wout is saying, this is partly the reason why we are having this session: to make this tool more known globally. Perhaps we can talk after this session about how we could reach the U.S. as well, because in the end it is in the best interest of the internet and everyone here at the IGF to have this tool widely used and widely known. Not necessarily under the name internet.nl, but rather as a tool in your country that is known and used by as many providers as possible. So let’s talk after this session. Thank you.


Wout de Natris: You have a question, sir? Then please introduce yourself first, and your affiliation. Sure, yeah, it’s my pleasure.


Audience: My name is, can you hear me? Okay. Engineer Munzel Mutairi, I’m CEO of Nataj Al Fikr. My question is about being proactive in building the capacity of countries, especially those with limited technical resources. Usually we talk, but is there any proactive initiative that measures their infrastructure and tries to develop it so it is standardised as much as possible with other countries? I’m not sure if I got your question fully. Okay, for example, the gentleman from Liberia is asking, and that is reactive to what is happening. How about being proactive, maybe through the UN or an international organisation in the field, to make sure that no country is left behind? That is my question.


Wouter Kobes: Yeah, I think that’s a very good point. Well, the power of the tool is that it is not bound to a country when testing domain names. You can actually test, as I presented, the IGF website; you can already test websites from all over the world. So the challenge is not in the technology; the challenge is in getting it adopted in those countries where reach might be more difficult. And I consider this session one of our proactive measures to make this adoption more widely known. But if you have any ideas on that, perhaps we can also discuss after this session how to make sure no one is left behind, as was introduced this morning as well. Thank you.


Wout de Natris: Yes, thank you. What I can add is that if you would like to be proactive on this, you can use internet.nl to test websites in your country, and it will work. If you really want to measure your country and make the tool better known there, it would be better, as a next step, to deploy the system yourself. But to show your countrymen, you can just use internet.nl and, for example, test a bank or the government in your own country, and you will get the results as well. So that is perhaps a more proactive way to start. Thank you for your question.


Steven Tan: Maybe I can respond to that question as well. Firstly, like what the Netherlands Standardisation Forum is doing, and similar to what CSA does in Singapore, what we are trying to preach here is that, having identified the various best practices and modern standards, we are coming up with a list, a series of standards, that all countries, hopefully at an international level, can deploy. We do believe that no country should be left behind when it comes to secure internet adoption. By making all these best practices available, on internet.nl, or NIC.br, or the Internet Hygiene Portal on the CSA government website, what we are saying is that every country can embrace this. We have gone through the pain and the effort: we have tried many standards out there, including things that by now are already dated, while new technologies keep coming along the way, and we have curated this list for everybody to use. So for countries that are now starting to embrace the internet and to take on these various standards, the de facto good standards that many have already tested, tried and learned lessons from are available; more or less the correct answers are now there. For those who are really trying to get their internet up now, these standards are, I wouldn’t say the best, but close to it, and you may want to look at them, adopt them across your country, and have your ICT providers also develop and adopt such best practices. They would, by default, be solid and useful, especially as many countries have already tried and tested them.
So this is what we are trying to do here.


Wout de Natris: Yes, thank you very much, Steven. And apologies that I didn’t understand where the voice came from, because we can’t see anybody online here. Thank you for that comment, because it is very significant and, I think, very explanatory. I see the gentleman who asked the question saying thank you and nodding. As we have only an hour, we are going to move into the second section of our open forum, which is on sustainability. Many of you in the room will have knowledge, and most likely opinions, on sustainability and the role that different stakeholders have to play in reaching a more sustainable future for all living creatures. But how to go about this? Well, the Dutch government is working on a novel approach using procurement processes. But what is the plan? What is its current state? What actions can we look forward to? These and more questions will be answered by Hannah Boute, who is program coordinator for the Dutch Coalition for Sustainable Digitalization, and next by Rachel Kuijlenburg, who is coordinator sustainability for Logius, part of the Ministry of the Interior. Rachel is committed to taking sustainable IT a step further. But first we hear from Hannah about the project, and then from Rachel more about the policy side of it. So Hannah, please go first.


Hannah Boute: Thank you so much, Wout, and thank you all, here in Riyadh and online, for attending the session. I'd like to take you along and make the bridge from security to sustainability, and in the Netherlands we do that with the Dutch Coalition for Sustainable Digitalization. Next slide, please. To start with: the internet is obviously part of a digital system, and that digital system shouldn't only be secure but also sustainable. If we zoom out, in Europe we call this the twin transition: we are in a digitalization transition, but the other side of it is the sustainability transition. We see growth in connections and in quality, and technology becoming more efficient, but nevertheless we as human beings have the tendency to use up the efficiency gains that come for free. Next slide, please. That means we have to look at digitalization in terms of sustainability as well, and we can look at the sustainability of the digital system in three scopes. The first scope concerns your own company: think of your company facilities, but also your company vehicles. We call this scope one. Scope two is where purchased energy comes into the picture: the emissions you cause by purchasing energy, for example. And scope three concerns the indirect responsibility you take on when you purchase something, throughout the supply chain and in the distribution to your end consumer. With regard to IT, think of e-waste and the scarcity of minerals and metals. Next slide, please. In the Netherlands we try to drive this so-called twin transition forward in a public-private cooperation, since all stakeholder groups have a role to play in this transition. Here you see a number of the logos of the parties we cooperate with.
The Coalition for Sustainable Digitalization is possible with the help of all these parties, and together with the Ministry of Economic Affairs we created the Action Plan for Sustainable Digitalization, which has analyzed and identified several action points to move this transition forward. Next slide, please. We do that in four program lines, visualized in this House of Sustainable Digitalization. A very important part is the first program line, technological innovation. This has to do with making the digital system itself sustainable, and within the coalition we focus on making artificial intelligence sustainable, given its rapid adoption, but we are also looking at how to make the internet more sustainable. On the right you see sustainable by IT, and as you already heard in my introduction, digitalization needs energy, so we are looking into the relation between the energy system and the digital system. Making the IT of organizations more sustainable is also a very important program line. In the European Union we currently have a directive, the Corporate Sustainability Reporting Directive, which requires companies to report on the three scopes I just mentioned, and we have several working groups working towards solutions for organizations to start their journey to making their IT more sustainable. A very important working group in this program line is the IT procurement working group, of which Rachel is our chair. So this is a very natural moment to give the word to Rachel, so she can explain a bit more how we do that with regard to procurement. Thank you so much for your attention.
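The three scopes Hannah outlines follow the usual GHG Protocol-style split. A minimal sketch of how an organization might bucket emission line items per scope (all line-item names and tonnage figures here are invented for illustration only):

```python
# Illustrative sketch of the three emission scopes described in the talk:
# scope 1 (own facilities and vehicles), scope 2 (purchased energy),
# scope 3 (supply chain and distribution, e.g. hardware production, e-waste).
# All figures are made up for the example.

EMISSIONS = [
    # (line item,            scope, tonnes CO2e)
    ("office heating",         1, 12.0),
    ("company vehicles",       1,  8.0),
    ("purchased electricity",  2, 20.0),
    ("hardware manufacturing", 3, 90.0),
    ("e-waste disposal",       3,  5.0),
]

def totals_by_scope(items):
    """Sum tonnes CO2e per scope into a {scope: total} dict."""
    totals = {}
    for _name, scope, tonnes in items:
        totals[scope] = totals.get(scope, 0.0) + tonnes
    return totals

if __name__ == "__main__":
    print(totals_by_scope(EMISSIONS))  # {1: 20.0, 2: 20.0, 3: 95.0}
```

Note how scope 3 dominates in this invented example; that matches Rachel's later point that most of IT's footprint sits in hardware production, i.e. in the supply chain.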


Wout de Natris: And thank you, Hannah. And, Rachel, the floor is yours.


Rachel Kuijlenburg: Okay, thank you. Thank you very much for this opportunity to talk a little bit about sustainable procurement of IT. My name is Rachel and I work for Logius, the digital government service of the Netherlands, part of the Ministry of the Interior. We maintain government-wide ICT solutions and common standards to simplify communication between the government and our society. As Logius, we procure a lot of IT, and that's why I'm also chair of the IT procurement working group within the coalition. And the proof of the pudding is in the eating, because we can talk a lot, but we should actually do this. So, next slide. Here you see a policy frame. In my former job, when I worked on sustainability in the more tangible world of plastics and food waste, we developed this framework to connect our strategies to a policy within an organization. We focus on refuse: that should be the strategy, and the tactics are reduce and reuse. So it was really nice to hear that our former speakers really try to reuse the internet.nl standard. We really strive for a better sustainability footprint of IT, and that involves software, but also storage, cloud data and hardware, because 80% of the footprint of IT is in the production of hardware. So to make sure that hardware is bought as little as possible, this framework can help. But then the question is: how to procure this? On the next slide we show you our focus points. Firstly, we focus on minimizing our energy needs, so energy efficiency is really important, very much needed, within the procurement of IT. Also emission-free, which is focused on hardware, and on the data centers becoming CO2-neutral in 2030 or 2050, whatever your policy is. And in the end, we really try to procure circular and climate-proof. But then the question is: how are you going to do this? At this moment, and that's on the next slide, we are developing a framework.
And here you can see, it's still at its start, so please, if you want to help us, send me an email, how to really focus on several focus points in relation to hardware, software and cloud. For hardware, when we procure, you can really aim for energy requirements; there are also ISO standards, or an energy audit you can ask for. Very helpful legislation at this moment is the CSRD within Europe: every big company from whom we procure has to report their sustainability credentials, and that will also help you on the way to becoming CO2-neutral. The same goes for cloud, because within Logius, and the Dutch government more widely, we have our own data centers, but we also procure a lot of data center capacity, and for that we also have standards to make sure that our suppliers aim to become as sustainable as possible. Within the coalition we are now trying to develop a better framework, so in the coming year we will focus on hardware, software and cloud. But a very important part of procurement is also contract management. So the next step we will take in the coming year is to do more research, not only on how to procure, but also on how to ensure within contract management that all the IT procured is actually sustainable. So, the last slide: if you want to help us, or if you need any information, we are more than willing to share. Just send me an email; my email address is here on the slide. We are making nice steps in procuring sustainable IT, and hopefully, with your help, we can make better steps. So thank you for your attention.
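The refuse/reduce/reuse ladder Rachel describes can be read as a simple decision order for hardware requests: refuse the purchase first, fall back to reuse, and only buy new as a last resort. A minimal sketch (the decision questions are invented for illustration; a real procurement policy would weigh many more criteria):

```python
# Minimal sketch of a refuse/reduce/reuse ladder for hardware procurement,
# as described in the talk. The yes/no questions are illustrative assumptions.

def procurement_advice(still_functional: bool, can_repair_or_reuse: bool) -> str:
    """Walk the ladder: refuse first, then reuse, and only then buy new."""
    if still_functional:
        return "refuse"          # don't buy: keep using the hardware you have
    if can_repair_or_reuse:
        return "reuse"           # repair it, or source refurbished hardware
    return "buy new (circular)"  # last resort: procure circular, climate-proof

if __name__ == "__main__":
    print(procurement_advice(True, True))    # refuse
    print(procurement_advice(False, True))   # reuse
    print(procurement_advice(False, False))  # buy new (circular)
```

The ordering matters because, as noted above, 80% of IT's footprint sits in hardware production, so avoiding a purchase outranks any optimization of what is bought.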


Wout de Natris: Thank you, Rachel. As everything is in Dutch on this slide, perhaps you could translate what people are reading here for them.


Rachel Kuijlenburg: Oh, yeah. Well, it's about the aims of Logius. As I said, we are the digital government service. And it's about, well, how do I translate this into English? Accessibility, I would say. It's about interaction, about data exchange, the infrastructure, and about standards within IT. So these are our aims within the organization, and I should have used an English slide. My apologies for the Dutch.


Wout de Natris: Sorry for putting you on the spot, Rachel. There's one question online, but first one short comment. What was promised is a third topic, and that is how the two topics interact, and that is procurement. Because when you measure your internet standards, that is the moment you know what to procure on to make yourself more secure, and the same goes for sustainability: when you know what it's about, you can start procuring measures that actually support sustainability. There's a question online; I'm being pointed to my cell phone, and it's in the chat. Someone called Bart is asking: what does the international internet.nl community focus on? Is it more the tool, or the usage? Wouter, would you like to answer that? Actually, I can do that myself. What the community is trying to do is bring together the organizations that are currently working with internet.nl, or a tool like it, and the organizations that are interested in doing so in the future. We would come together twice a year, online, not physically, and discuss where we actually are at that point in time and what challenges we run into, to learn from each other, but also, for example, to coordinate on next steps. So if we all moved together to a next evolution of the project, we could do so together, develop together and that way learn together. That is what we are going to do in the first year, and after the first year we will evaluate and see whether we continue in the same way or have to change it, or perhaps there is no interest, that is the other option of course. But we will be working on that in the next year, with two sessions, probably one in the spring and one in the fall, in the Northern Hemisphere at least, and from there we will see how we develop.
I saw there was a question in the room; no, there isn't. Then I will move to Coen Wesselman, who is our rapporteur. Coen, can you tell us what we went through and what lessons we learned here today? Coen, the floor is yours.


Coen Wesselman: Yes, I can. Thank you all for being here. We started this session off with a clear explanation by Wouter of what internet.nl is, what it does and what it stands for: an open, free and accessible internet for everyone. We moved into the panel discussion, where Annemieke from the Dutch Standardization Forum made a clear appeal to build a critical mass of countries and organizations to make lasting changes to internet standards and security. We saw very promising presentations from the Brazilian organization for internet standards and from the Cyber Security Agency of Singapore, many thanks for those, and we hope to see that Daishi in Japan is moving forward to start an internet.nl version for Japan and monitor their internet activities. We had clear questions from the participant from Liberia, which have been answered, and Hannah and Rachel gave a clear insight into what the Netherlands is doing in combining digitalization and sustainability in a public-private cooperation between government and companies, and into how procurement plays an important role in improving the situation for the Netherlands. I hope that for everyone this concludes the session well.


Wout de Natris: Thank you very much, Coen, and this is what we will be putting online as the report for this session. I think we have learned a lot here. I'm not going to repeat what Coen said, because that is the summary already, but what is important to know is that if you want to procure products, you have to know what you are buying, and you can only know what you are buying when you test it beforehand. That is a lesson we are starting to learn where sustainability is concerned, but also where the security of the internet is concerned. I have a minute left, so I will make some advertisement for the dynamic coalition that I chair and coordinate, the Internet Standards, Security and Safety Coalition, which functions within the IGF. Internet standards, as I said, are the main part of what we work on: making sure that procurement starts happening, especially within governments, so that they procure secure by design. But we also look at education and skills: how are our youth trained on cybersecurity, in tertiary education especially, and can we improve that in the future? There seems to be a skills gap of about 20 years, from what we have been hearing. We are also looking at IoT security, and at post-quantum encryption; that research is starting as we speak, as we signed the contract on Friday. And we hope to move to the next phase, which is not just looking at the theory of our recommendations but starting to produce, I can't think of the word in English, the workshops, the capacity building, so that our findings are actually used by organizations around the world, and the world becomes inherently safer for all users of the internet and not just the privileged few who can afford it. So that's where we are. If you contact us at internet.nl, that is me, I can say, because I will be replying to you. If you have any questions, please contact us there.
And if you are interested, you will get an invitation to the first meeting that we will be organizing pretty soon. We will let you know when it happens, but it will be somewhere in the spring of 2025. But let me end here. We are actually on time, which we had not expected, but that is because our speakers really kept to their time; I didn't have to correct anybody, so all kudos here. Let me thank the speakers first, because we have heard a lot about their experience, and I think it is tremendously important that we keep improving ourselves; with internet.nl, it certainly looks possible. Doreng, thank you for monitoring online, and Coen, thank you for being our rapporteur. I also want to thank the technicians for setting everything up, and our scribes, somewhere in the world, I don't know where you are, but thank you very much for making sure that everything is recorded. So let me stop there. Thank you very much for your attention, and I hope to meet again pretty soon. One more thing: I think we have a few t-shirts, right? So if anybody wants an internet.nl t-shirt, it's yours. Thank you very much. Bye-bye.


W

Wouter Kobes

Speech speed

155 words per minute

Speech length

906 words

Speech time

350 seconds

Tool provides quick assessment of ICT environment security

Explanation

Internet.nl is a tool that quickly evaluates the security of an organization’s ICT environment. It measures the adoption of important internet standards and provides a score within seconds.


Evidence

Demonstration of measuring IGF donors and partners using the dashboard


Major Discussion Point

Internet.nl tool for measuring internet security standards


Agreed with

Wout de Natris


Annemieke Toersen


Gilberto Zorello


Steven Tan


Daishi Kondo


Agreed on

Importance of Internet.nl tool for measuring internet security standards


W

Wout de Natris

Speech speed

135 words per minute

Speech length

2557 words

Speech time

1131 seconds

Allows organizations to test themselves and improve security

Explanation

The Internet.nl tool enables organizations to assess their own security status. This self-assessment capability encourages organizations to identify areas for improvement and take steps to enhance their security measures.


Evidence

Organizations that are tested and not tested so well have the inclination to move upwards and to better themselves and enter this Hall of Fame


Major Discussion Point

Internet.nl tool for measuring internet security standards


Agreed with

Wouter Kobes


Annemieke Toersen


Gilberto Zorello


Steven Tan


Daishi Kondo


Agreed on

Importance of Internet.nl tool for measuring internet security standards


Formation of international community to share experiences

Explanation

An international community is being formed to share experiences with implementing and using Internet.nl-like tools. This community aims to facilitate learning and coordination among countries and organizations interested in promoting internet standards.


Evidence

Plans for biannual online meetings to discuss challenges and coordinate on next steps


Major Discussion Point

International collaboration on internet standards


A

Annemieke Toersen

Speech speed

131 words per minute

Speech length

718 words

Speech time

326 seconds

Promotes adoption of important internet standards

Explanation

The Internet.nl tool encourages the adoption of crucial internet standards. It provides a way to measure and visualize the implementation of these standards, motivating organizations to improve their practices.


Major Discussion Point

Internet.nl tool for measuring internet security standards


Agreed with

Wouter Kobes


Wout de Natris


Gilberto Zorello


Steven Tan


Daishi Kondo


Agreed on

Importance of Internet.nl tool for measuring internet security standards


Dutch government uses three-fold strategy: mandate, monitor, community building

Explanation

The Dutch government employs a comprehensive approach to promote internet standards. This strategy includes mandating specific standards, monitoring their adoption, and fostering community engagement to drive implementation.


Evidence

Mandating standards through ‘comply or explain’ list, monitoring over 2,500 government domains, and participating in community initiatives like the Internet Standard Platform


Major Discussion Point

Government strategies for promoting internet standards


Agreed with

Gilberto Zorello


Steven Tan


Agreed on

Government strategies for promoting internet standards


Engagement with big tech companies to improve standards support

Explanation

The Dutch government actively engages with major technology companies to enhance support for internet standards. This approach aims to create a critical mass of support for these standards, leading to wider adoption.


Evidence

Success with Microsoft announcing full support for the DANE email security standard on Exchange Online


Major Discussion Point

Government strategies for promoting internet standards


Agreed with

Gilberto Zorello


Steven Tan


Agreed on

Government strategies for promoting internet standards


G

Gilberto Zorello

Speech speed

76 words per minute

Speech length

341 words

Speech time

266 seconds

Implemented in Brazil as Top.br with increasing adoption

Explanation

Brazil has implemented its version of the Internet.nl tool called Top.br. The tool is being used to measure and promote the adoption of internet standards in the country, with growing usage and improvements in various security metrics.


Evidence

Statistics on website tests, email tests, and connection tests showing adoption rates for various standards


Major Discussion Point

Internet.nl tool for measuring internet security standards


Agreed with

Wouter Kobes


Wout de Natris


Annemieke Toersen


Steven Tan


Daishi Kondo


Agreed on

Importance of Internet.nl tool for measuring internet security standards


Brazil promotes tool through industry meetings and events

Explanation

The Brazilian organization NIC.br actively promotes the Top.br tool through various industry meetings and events. This outreach strategy aims to increase awareness and adoption of internet standards among key players in the Brazilian market.


Evidence

Promotion in meetings of NIC.br and the Association of Internet Service Providers


Major Discussion Point

Government strategies for promoting internet standards


Agreed with

Annemieke Toersen


Steven Tan


Agreed on

Government strategies for promoting internet standards


S

Steven Tan

Speech speed

160 words per minute

Speech length

1045 words

Speech time

391 seconds

Developed as Internet Hygiene Portal in Singapore

Explanation

Singapore has created its own version of the Internet.nl tool called the Internet Hygiene Portal. This tool is designed to improve the country’s cybersecurity landscape by encouraging service providers to adopt key internet security best practices.


Evidence

Over 200,000 scans conducted since 2022, with users from 40 countries and 45% of domains showing improvements


Major Discussion Point

Internet.nl tool for measuring internet security standards


Agreed with

Wouter Kobes


Wout de Natris


Annemieke Toersen


Gilberto Zorello


Daishi Kondo


Agreed on

Importance of Internet.nl tool for measuring internet security standards


Singapore uses transparent rating system to shift industry behavior

Explanation

The Internet Hygiene Portal in Singapore employs a transparent rating system to influence industry behavior. This approach promotes proactive security enhancements and has led to improved cybersecurity practices among service providers.


Evidence

ICT service providers in the APAC region joining the Internet Hygiene Rating Initiative and configuring their services to meet strong internet security best practices by default


Major Discussion Point

Government strategies for promoting internet standards


Agreed with

Annemieke Toersen


Gilberto Zorello


Agreed on

Government strategies for promoting internet standards


Sharing best practices internationally

Explanation

Singapore actively shares its experiences and best practices in implementing internet security standards internationally. This collaborative approach aims to drive collective cybersecurity improvements on a global scale.


Evidence

Collaboration with tech giants like Microsoft, Amazon, and Akamai, and sharing notes with other countries like the Netherlands


Major Discussion Point

International collaboration on internet standards


D

Daishi Kondo

Speech speed

132 words per minute

Speech length

312 words

Speech time

140 seconds

Interest in implementing a Japanese version

Explanation

There is interest in implementing a Japanese version of the Internet.nl tool. The potential benefits include security visualization and creating peer pressure among competitors to improve their security measures.


Major Discussion Point

Internet.nl tool for measuring internet security standards


Agreed with

Wouter Kobes


Wout de Natris


Annemieke Toersen


Gilberto Zorello


Steven Tan


Agreed on

Importance of Internet.nl tool for measuring internet security standards


A

Audience

Speech speed

123 words per minute

Speech length

331 words

Speech time

160 seconds

Interest from developing countries in adopting standards

Explanation

There is interest from developing countries in adopting internet standards and implementing tools like Internet.nl. This reflects a growing awareness of the importance of internet security and standards in countries at various stages of digital development.


Evidence

Question from a representative from Liberia about how regulators in developing countries can implement similar tools


Major Discussion Point

International collaboration on internet standards


H

Hannah Boute

Speech speed

136 words per minute

Speech length

615 words

Speech time

271 seconds

Dutch Coalition for Sustainable Digitalization coordinates public-private efforts

Explanation

The Dutch Coalition for Sustainable Digitalization is a public-private partnership that aims to drive forward the twin transition of digitalization and sustainability. It brings together various stakeholders to address the sustainability challenges of digital systems.


Evidence

Creation of the Action Plan for Sustainable Digitalization in collaboration with the Ministry of Economic Affairs


Major Discussion Point

Sustainable digitalization


R

Rachel Kuijlenburg

Speech speed

129 words per minute

Speech length

731 words

Speech time

338 seconds

Focus on sustainable procurement of IT

Explanation

The Dutch government is emphasizing sustainable procurement of IT as a key strategy for improving the sustainability of digital systems. This approach aims to minimize energy needs and reduce the environmental impact of IT infrastructure.


Evidence

Development of a framework for sustainable IT procurement focusing on hardware, software, and cloud services


Major Discussion Point

Sustainable digitalization


Framework for minimizing energy needs and emissions in IT

Explanation

A framework is being developed to guide the procurement of sustainable IT. This framework focuses on energy efficiency, emission-free hardware, and circular and climate-proof solutions for hardware, software, and cloud services.


Evidence

Specific focus points for hardware (energy requirements, ISO standards), software, and cloud services


Major Discussion Point

Sustainable digitalization


Importance of contract management in sustainable IT procurement

Explanation

Contract management is identified as a crucial aspect of sustainable IT procurement. Ongoing research is being conducted to determine how to ensure that sustainability requirements are maintained throughout the contract lifecycle.


Evidence

Plans for future research on contract management to ensure sustainable IT procurement


Major Discussion Point

Sustainable digitalization


Agreements

Agreement Points

Importance of Internet.nl tool for measuring internet security standards

speakers

Wouter Kobes


Wout de Natris


Annemieke Toersen


Gilberto Zorello


Steven Tan


Daishi Kondo


arguments

Tool provides quick assessment of ICT environment security


Allows organizations to test themselves and improve security


Promotes adoption of important internet standards


Implemented in Brazil as Top.br with increasing adoption


Developed as Internet Hygiene Portal in Singapore


Interest in implementing a Japanese version


summary

Speakers agree on the value of Internet.nl and similar tools for assessing and promoting internet security standards across different countries.


Government strategies for promoting internet standards

speakers

Annemieke Toersen


Gilberto Zorello


Steven Tan


arguments

Dutch government uses three-fold strategy: mandate, monitor, community building


Engagement with big tech companies to improve standards support


Brazil promotes tool through industry meetings and events


Singapore uses transparent rating system to shift industry behavior


summary

Speakers highlight various government strategies to promote internet standards, including mandates, monitoring, community engagement, and collaboration with industry.


Similar Viewpoints

Both speakers emphasize the importance of engaging with major technology companies and sharing best practices internationally to drive improvements in internet standards.

speakers

Annemieke Toersen


Steven Tan


arguments

Engagement with big tech companies to improve standards support


Sharing best practices internationally


Unexpected Consensus

Interest from developing countries in adopting standards

speakers

Audience


Wouter Kobes


arguments

Interest from developing countries in adopting standards


Tool provides quick assessment of ICT environment security


explanation

The interest from developing countries in adopting internet standards and tools like Internet.nl was unexpected, showing a growing awareness of cybersecurity importance across different levels of digital development.


Overall Assessment

Summary

There is strong agreement on the importance of tools like Internet.nl for measuring and promoting internet security standards, as well as the need for government strategies to drive adoption. Speakers from different countries shared similar approaches and experiences in implementing these tools and strategies.


Consensus level

High level of consensus among speakers on the core issues. This implies a growing international recognition of the importance of internet security standards and the potential for increased collaboration in developing and implementing tools and strategies to promote these standards globally.


Differences

Different Viewpoints

Unexpected Differences

Overall Assessment

summary

There were no significant disagreements among the speakers.


difference_level

The level of disagreement was minimal to non-existent. The speakers largely presented complementary information about implementing and promoting internet security standards and sustainable digitalization in their respective countries or contexts. This high level of agreement suggests a shared understanding of the importance of these tools and approaches, which could facilitate international collaboration and adoption of similar practices across different regions.


Partial Agreements


Takeaways

Key Takeaways

The Internet.nl tool allows organizations to quickly assess and improve their internet security standards adoption


Several countries have implemented or are interested in implementing versions of the Internet.nl tool


Government strategies involving mandates, monitoring, and community building can effectively promote internet standards adoption


International collaboration and creating a critical mass of countries can help negotiate improvements with big tech companies


Sustainable digitalization efforts are focusing on areas like sustainable IT procurement and minimizing energy needs


Resolutions and Action Items

Formation of an international community to share experiences with Internet.nl-like tools


Plans to develop a more comprehensive framework for sustainable IT procurement


Invitation for interested parties to join the Internet.nl international community


Development of workshops and capacity building initiatives to implement internet security recommendations


Unresolved Issues

How to effectively implement Internet.nl-like tools in developing countries with limited resources


Specific metrics or targets for sustainable IT procurement


Details on how to balance security and sustainability requirements in IT procurement


Suggested Compromises

Using existing tools like Internet.nl to test websites in countries without their own versions, as a starting point before full deployment


Thought Provoking Comments

To keep our internet open, free and secure. However, those standards do not implement themselves. You have to implement them actively.

speaker

Wouter Kobes


reason

This comment succinctly captures the core purpose of internet standards and the need for proactive implementation, setting the stage for the entire discussion.


impact

It framed the subsequent conversation around the importance of tools like internet.nl in promoting and measuring the adoption of these standards.


We mandate specific open standards. We can do so by including standards on the comply or explain list.

speaker

Annemieke Toersen


reason

This insight into the Dutch government’s approach to mandating standards provides a concrete example of how to drive adoption at a national level.


impact

It sparked discussion about different approaches to promoting standards adoption, from government mandates to voluntary initiatives.


By lobbying other countries, we create a critical mass that enables more effective negotiations with suppliers.

speaker

Annemieke Toersen


reason

This comment highlights the strategic importance of international collaboration in influencing major tech companies to adopt standards.


impact

It shifted the conversation towards the global impact of coordinated efforts and the potential for smaller countries to influence industry giants.


Since its launch in 2022, IHP has conducted more than 200,000 scans with users from across 40 countries, right? And importantly, more than 45% of domains have also shown improvements from their first initial scan to their most recent evaluation.

speaker

Steven Tan


reason

This data-driven insight demonstrates the tangible impact of implementing such tools on improving internet security across multiple countries.


impact

It provided concrete evidence of the effectiveness of these initiatives, encouraging other countries to consider similar approaches.


Our countries are far behind in terms of internet development. And now we are talking about internet. I’m hearing internet.nl; for my country it will be internet.lr. We as regulators for a developing country like mine, what can we do?

speaker

Peter Zanga Jackson, Jr.


reason

This question from a representative of a developing country highlights the global disparities in internet infrastructure and the challenges faced by nations trying to catch up.


impact

It prompted discussion about how to make these tools and standards accessible and relevant to countries at different stages of internet development.


80% of the footprint of IT is in the production of hardware. So to make sure that hardware should be, well, bought as less as possible, this framework could help.

speaker

Rachel Kuijlenburg


reason

This comment introduces the important connection between IT procurement and sustainability, broadening the discussion beyond just security standards.


impact

It shifted the conversation to include sustainability considerations in IT procurement, linking the earlier discussion on security standards with environmental concerns.


Overall Assessment

These key comments shaped the discussion by broadening its scope from a focus on the technical aspects of internet standards to encompass global collaboration, the challenges faced by developing countries, and the intersection of security with sustainability. The conversation evolved from explaining the internet.nl tool to exploring its potential for driving systemic change in internet governance and IT procurement practices worldwide. The discussion highlighted the need for both top-down (government mandates) and bottom-up (community-driven) approaches to improving internet security and sustainability, while also acknowledging the disparities in resources and infrastructure between different countries.


Follow-up Questions

How can developing countries implement tools like internet.nl?

speaker

Peter Zanga Jackson, Jr. (Liberia)


explanation

Important for ensuring developing countries are not left behind in internet security and standards adoption


What advancements have been made with implementing internet.nl or similar tools in the United States?

speaker

Shawna Hoffman (Guardrail Technologies)


explanation

Explores potential for expanding the use of these security assessment tools to other major internet markets


How can international organizations proactively help build capacity in countries with limited technical resources to implement internet security standards?

speaker

Engineer Munzel Mutairi


explanation

Addresses the need for a coordinated approach to ensure global adoption of internet security standards


How to improve contract management for sustainable IT procurement?

speaker

Rachel Kuijlenburg


explanation

Identified as a next step for ensuring long-term sustainability in IT procurement practices


What does the international internet.nl community focus on – is it more the tool or its usage?

speaker

Bart (online participant)


explanation

Seeks clarification on the priorities and activities of the international community around internet.nl


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #45 Fostering EthicsByDesign w DataGovernance & Multistakeholder

WS #45 Fostering EthicsByDesign w DataGovernance & Multistakeholder

Session at a Glance

Summary

This workshop focused on how data governance initiatives can promote ethics by design in AI and other data-oriented technologies. Speakers from various organizations discussed challenges and strategies for embedding ethical considerations into technological development.

Key themes included the need for multi-stakeholder collaboration, the challenge of defining and implementing ethics across different contexts, and the importance of moving from principles to actionable implementation. Speakers highlighted initiatives like UNESCO’s recommendation on AI ethics, which provides a global standard, and tools like ethical impact assessments to evaluate AI systems.

Challenges discussed included varying levels of AI readiness across countries, differing interpretations of ethical principles, and ensuring meaningful inclusion of civil society voices beyond tokenistic representation. The importance of education and capacity building around AI ethics was emphasized.

Speakers noted the value of open source AI for collaborative development and risk mitigation. Initiatives to bring together diverse stakeholders, including coalitions and expert networks, were described as ways to advance ethical AI governance globally.

Overall, participants agreed on the need to operationalize ethical principles through concrete actions and implementation strategies. Moving from high-level agreement on ethics to practical application across diverse contexts was seen as a key next step for advancing responsible AI development and deployment worldwide.

Keypoints

Major discussion points:

– The challenges of defining and implementing ethics in AI across different contexts and cultures

– The importance of multi-stakeholder collaboration in shaping ethical AI governance

– Various initiatives and frameworks being developed by organizations to promote ethical AI, including UNESCO’s recommendation on AI ethics

– The need to move from high-level principles to concrete implementation of ethical AI practices

– Involving civil society and underrepresented voices in AI governance discussions

The overall purpose of the discussion was to explore how data governance initiatives and multi-stakeholder collaboration can promote ethics by design in AI and other digital technologies. The panelists aimed to address challenges in embedding ethics in AI systems and debate strategies for meaningful inclusion of diverse stakeholders in shaping ethical norms and standards.

The tone of the discussion was largely constructive and solution-oriented. Panelists acknowledged the complexities and challenges involved, but focused on sharing concrete initiatives and proposing ways to make progress. There was a sense of urgency about moving from principles to implementation as the discussion progressed. The tone became slightly more critical when discussing the tokenistic inclusion of civil society voices, but remained overall collaborative and forward-looking.

Speakers

– José Renato Laranjeira de Pereira: Researcher at the University of Bonn, Co-founder of LAPIN (Laboratory of Public Policy and Internet)

– Thiago Moraes: Joint PhD candidate in law at University of Brasilia and University of Brussels, Specialist in data protection and AI governance at Brazilian Data Protection Authority (ANPD), Co-founder and counselor of LAPIN

– Ahmad Bhinder: Policy Innovation Director at the Digital Cooperation Organization

– Amina P.: Privacy Policy Manager at META

– Tejaswita Kharel: Project Officer at the Center for Communication Governance at National Law University Delhi

– Rosanna Fanni: Program Specialist in the ethics of AI unit at UNESCO

Additional speakers:

– Alexandra Krastins: Senior lawyer at VLK Advogados, Former project manager at Brazilian National Data Protection Authority, Co-founder and counselor of LAPIN

Full session report

Summary of AI Ethics and Governance Discussion

This workshop explored strategies for embedding ethical considerations into AI and digital technologies. Speakers from various organizations discussed challenges and approaches to promoting ethics by design, emphasizing the need for multi-stakeholder collaboration and the importance of moving from principles to actionable implementation.

Ahmad Bhinder – Digital Cooperation Organization (DCO)

Ahmad Bhinder highlighted two main regulatory approaches to AI governance: a prescriptive, risk-based approach led by the EU and China, and a more flexible, principles-based approach favored by the US and Singapore. He noted varying levels of AI readiness across countries, complicating the development of global governance frameworks. Bhinder also mentioned the DCO’s Digital Space Accelerator program, which aims to bring together multiple stakeholders to address AI governance challenges.

Amina P. – META

Amina P. presented open source AI as a tool to enhance privacy and safety, challenging common perceptions by arguing that opening AI models to the wider community allows experts to identify, inspect, and mitigate risks collaboratively. She emphasized META’s partnerships with academia and civil society, including the Partnership on AI and the Coalition for Content Provenance and Authenticity. Amina also highlighted the need for better education on AI and privacy among stakeholders.

Rosanna Fanni – UNESCO

Rosanna Fanni emphasized UNESCO’s recommendation on AI ethics as a global standard agreed upon by 194 member states. She introduced UNESCO’s readiness assessment methodology for evaluating governance frameworks at the macro level and an ethical impact assessment tool for specific algorithms at the micro level. Fanni mentioned UNESCO’s plans to launch a global network of civil society organizations focused on AI ethics and governance, and their ongoing implementation of AI ethics recommendations through readiness assessments in over 60 countries. She also noted the upcoming AI Action Summit hosted by France in February and referenced the Global Digital Compact in her concluding remarks.

Tejaswita Kharel – Center for Communication Governance

Tejaswita Kharel highlighted the need for a context-specific understanding of ethical principles, emphasizing that ethics is a subjective concept varying across individuals and cultures. She raised concerns about ensuring meaningful inclusion of civil society voices in AI governance discussions, pointing out the challenges of moving beyond tokenistic representation to incorporate diverse perspectives effectively.

Challenges in Implementation

Speakers identified several key challenges in implementing ethics by design in AI systems:

1. Varying levels of AI readiness across countries

2. Difficulty in operationalizing ethical principles

3. Subjectivity and differing interpretations of ethics

4. Misconceptions about AI and privacy among stakeholders

5. Ensuring meaningful inclusion of civil society and Global South voices in AI governance processes

Tools and Frameworks for Ethical AI

Various tools and frameworks were presented to promote ethical AI development:

1. DCO’s AI governance assessment tool

2. META’s open source AI and responsible use guidelines

3. UNESCO’s readiness assessment methodology and ethical impact assessment framework

Moving Forward: Resolutions and Unresolved Issues

The discussion led to several action items:

– UNESCO’s launch of a global network of civil society organizations focused on AI ethics and governance

– Continued implementation of UNESCO’s recommendation on ethics of AI through readiness assessments

– META’s planned launch of a voluntary survey for businesses to map AI use across their operations in summer 2024

Key unresolved issues include:

1. Effectively operationalizing ethical principles in AI development and deployment

2. Addressing varying levels of AI readiness across different countries and regions

3. Reconciling differing interpretations and applications of ethical principles across contexts

Conclusion

The discussion highlighted the complexity of implementing ethical AI across different regulatory approaches, cultural contexts, and levels of governance. While there was broad agreement on the importance of ethical AI and multi-stakeholder collaboration, the specific implementation strategies and tools varied among different organizations and stakeholders. Moving from high-level agreement on ethics to practical application across diverse contexts emerged as a key next step for advancing responsible AI development and deployment worldwide.

Session Transcript

José Renato Laranjeira de Pereira: here in our time zone, but also good morning, good evening for those watching us online in other time zones. My name is Jose Renato. I am a researcher at the University of Bonn in Germany, but originally from Brazil, also co-founder of LAPIN, the Laboratory of Public Policy and Internet, a non-profit organization based in Brasilia, Brazil. And well, we’re going to start now our workshop number 45 on AI ethics by design. Our main goal here is to delve into how data governance initiatives can serve as a cornerstone for promoting ethics by design in data-oriented technologies, in particular artificial intelligence. More specifically, this panel aims to, one, offer an overall understanding of ethics by design and the importance of embedding ethical considerations at the inception of technological development; two, address the challenges of embedding ethics in AI and other digital systems; and finally, debate multi-stakeholder collaboration and its relevance in shaping ethical norms and standards, particularly in the context of the recent UN resolution, A/78/L.49, which underscores the importance of internationally interoperable safeguards for AI systems. We have policy questions which will guide the panelists to reflect upon these issues. The first one is: how can policymakers effectively promote the concept of ethics by design, ensuring integration of ethical principles into the design process of AI and digital systems in a way that meaningfully includes multiple stakeholders, especially communities affected by the systems? The second policy question is: what are the primary challenges to embedding ethics in AI and other systems, and how can policymakers, industry, and civil society collectively address them to ensure digital technologies’ responsible development and deployment?
And finally, what strategies and mechanisms can be implemented to foster this multi-stakeholder collaboration in an ethical way, considering the diverse interests among these communities? Moderating this session is Thiago Moraes, who is a joint PhD candidate in law at the University of Brasilia and at the University of Brussels. Hope I pronounced that correctly, but my Dutch is not so good. He also works as a specialist in data protection and AI governance at the Brazilian Data Protection Authority, ANPD. Thiago is also co-founder and now counselor of the Laboratory of Public Policy and Internet, LAPIN. The online moderator, who is also with us in person here, is Alexandra Krastins, a senior lawyer at VLK Advogados. She provides consultancy in privacy and AI governance, worked at the Brazilian National Data Protection Authority as a project manager, and is also co-founder and counselor of the Laboratory of Public Policy and Internet, LAPIN. Well, I hope you all enjoy the session. Looking forward to the great discussions that I’m sure we’re going to have. And I pass the floor to Thiago.

Thiago Moraes: Thank you, José. Well, we are really excited to be here today because this is not only a relevant discussion, but also an opportunity for us to understand better what’s being done in a more hands-on approach when we are discussing this topic of ethics in AI. And that’s why the by-design part of it is so important. We brought brilliant speakers today, and I will briefly introduce each one of them as they open their introductory remarks. To start, I would like to invite Mr. Ahmad Bhinder to speak. Ahmad Bhinder is the Policy Innovation Director at the Digital Cooperation Organization, leading digital policy initiatives to foster collaboration amongst its member states. With over 20 years of experience in public policy and regulation, he has shaped innovative policies driving connectivity and digital economic growth. Ahmad is dedicated to advancing the DCO’s mission of promoting digital prosperity for all. So, Ahmad, many thanks for coming here. Yesterday, we had the opportunity to see a bit of the DCO’s framework. It’s very interesting. And now we’ll have another interesting moment to learn how it relates to the questions that we are raising here in this session. So, please, the floor is yours.

Ahmad Bhinder: Thank you very much, Thiago. Thank you very much, everybody, for inviting me, on behalf of the Digital Cooperation Organization, to this session. So, just a very brief introduction to what the DCO is. We are an intergovernmental organization, headquartered in Riyadh, with countries from the Middle East and Africa; we have European countries as our member states, and also South Asian countries. We started in 2020. Within the last four years, we have grown from five member states to 16 member states now, and we are governed by a council that has representatives, the ministers for digital economy and ICT, from our member states. Our sole agenda is to promote digital prosperity and the growth of the digital economy, a responsible growth of the digital economy. So this makes us a one-of-a-kind organization, a global intergovernmental organization that is not looking at sectors but looking broadly at the digital economy. Again, we have our offices here, so we welcome you from Brazil, the whole group of you here in Riyadh. I hope you’re enjoying it. Okay, so coming to global AI governance, there are different initiatives that the DCO is pursuing. I will take you through one of those when I explain the framework, but broadly, AI development and governance are not a harmonized phenomenon across the globe. We see two types of approaches. One of the approaches, led by the EU, China, and some other countries, is what we call a more prescriptive, rules-based, risk-based approach. We see the EU AI law, or AI Act, that has come into place, which categorizes AI into risk categories, and then very prescriptive rules are set for those categories with the higher risk. And then we see the US, Singapore, and a lot of other countries, which have taken a so-called pro-innovation approach, where the focus is to let AI take its space of development and set rules which are broadly based on principles.
So initially, we called it a principles-based approach, but actually all the approaches are based on principles. So even the prescriptive regulatory approaches, they are also based on certain principles. Some call them ethical AI principles, some call them responsible AI governance principles, et cetera. We have also seen across the nations different means of approaching AI governance or AI regulation. For example, there are domain-specific approaches. So we have laws, for example, for the health sector, for education, and a lot of other sector-specific laws, and those laws are being shaped and developed in advance to take into consideration the new challenges and opportunities that are posed by AI in them. Then we have framework approaches, where broader AI frameworks are being shaped in the countries, which would either reform some of those laws or impact the current laws. And the third one, as I said, is the specialized AI acts. So the EU AI Act, for example; Australia is working on an AI Act; China has an AI law. So just wanted to give…

José Renato Laranjeira de Pereira: … where she’s served as a member of the jury for data protection officer certification. Amina is CIPP-C certified and has conducted numerous training sessions for professionals on personal data.

Amina P.: Or an additional layer of complexity. And so Ahmad mentioned earlier the risk-based approach. Yes, you mentioned the risk-based approach. You mentioned the principle-based approach as well. And these are exactly what we advocate for when it comes to regulating AI. We also advocate for technology-neutral legal frameworks, building on existing legal frameworks without creating conflict, to avoid creating conflicts between different legal frameworks, and then most importantly, collaboration between different stakeholders. And the way we approach this collaborative work at META when it comes to privacy or ethical standards in general is that we rely mainly today on open source AI. So some people will ask a very simple question: normally, open sourcing AI would bring more complexities because we are opening the doors to malicious actions and malicious actors. So how come we are enhancing privacy with or through open-source AI? Actually, the way we approach this, and our vision of and work in relation to open-source AI, is that experts are involved. First of all, we are opening the models. When we talk about open-source AI, it means that we are opening the models to the AI community that can benefit from these AI tools or these AI models, and everyone can use it. Now, the impact of this is that when we open these models, experts can also help us identify, inspect, and mitigate some of the risks. And it becomes a collaborative way of working on these risks and mitigating these risks within the AI community altogether. Of course, all this work is also preceded by a privacy review before the launch of products. Pre-deployment risk assessments are done by Meta. Fine-tuning, safety fine-tuning, and red teaming are also done ahead of any launch of any product. But in addition to all of these, of course, we have the Privacy Center. We can talk about the privacy-related tools that we have.
But if we want to be specific on collaborative work: once the model is launched, and at Connect 2024, which is a developer conference that is organized by Meta annually, we announced the launch of Llama 3.2, one of our large language models, an open model that can be used by the AI community. And so just to describe this, one of the tools that we use in open source AI is the Purple Llama project that we have. Before putting in place standards and sharing standards with the user, there is this project, called Purple Llama, which enhances privacy and safety: it is an open tool that developers and experts can use to mitigate the existing risks, and they have tested and mitigated these risks through the Purple Llama project, because it combines both blue teaming and red teaming, and both are necessary in our opinion. It puts in place standards, and we call this the Responsible Use Guide, which is of course accessible to everyone. So this is when it comes to open source AI. To conclude on open source AI: for us, it is a tool to enhance privacy; it is through open source AI that we can enhance privacy. Another project that is worth mentioning is the Open Loop projects that we have at Meta, which is a collaborative feedback way of working. So we gather policymakers with companies and share feedback when it comes to prototypes of AI regulations and ethical standards. For Open Loop, an issue is identified in a specific country, prototypes are put in place gathering policymakers and tech companies, these prototypes or testing rules are tested under real-world conditions, and starting from there, we can learn from the lessons and then issue policy recommendations.
These are the four steps that Open Loop uses. Actually, last year or the year before, through Open Loop we accompanied, for instance, the EU AI Act in Europe, testing some of the provisions ahead of their official publication. The Open Loop sprints are a very small version of the Open Loop projects, which we organized at the Dubai Assembly last year, the year before. In the MENA region, the way we do it, as a privacy policy manager, for instance, I organize expert group roundtables ahead of the launch of any product, whether related to AI or not. We gather our experts, we have a group of experts, we share the specificities of the product and we get their feedback to improve our products, whether that is legal, in relation to safety, in relation to privacy, in relation to human rights, et cetera; we take into consideration this feedback. We organize roundtables with policymakers. Recently, we had one in Turkey around AI and the existing data protection rules, and whether they are enough to protect within the AI framework or not, what is necessary to do, and a discussion on data subject rights as well. We also contribute to the public submissions in the region, in some of the countries, not all of them, depending on the importance or the nature of the regulation. In Saudi Arabia, of course, recently, Saudi Arabia has been very active on that front, completing the legal framework around data protection and putting in place AI ethical standards as well. So they have been very active on this, and we shared our public comments; we do believe that it is always a discussion with policymakers. Yeah, looking forward to your questions. Sorry if I took more than seven minutes.

Thiago Moraes: Yeah, it’s okay. The only challenge we have is that we have to go through this first round and then try to have some discussions, but it’s interesting to see the many different activities that META is involved in to try to bring a more collaborative approach. Open source AI is definitely a hot topic, and there are even some sessions here at the IGF that are also discussing it. So it’s nice to know that there are initiatives like that at META as well. Well, without further ado, I think I should move on to our next speaker. Our next speaker, online, is Tejaswita Kharel. I don’t know if I pronounced it right, but she’s a project officer at the Center for Communication Governance at National Law University Delhi. Her work relates to various aspects of information technology law and policy, including data protection, privacy, and emerging technologies such as AI and blockchain. Her work on the ethical governance and regulation of technology is guided by human rights-based perspectives, democratic values, and constitutional principles. So, Tejaswita, thanks a lot for participating with us. And, yeah, well, we’re looking forward to knowing more about your work regarding these topics.

Tejaswita Kharel: Thank you. Can you guys hear me? Just want to confirm.

Thiago Moraes: Yes.

Tejaswita Kharel: All right. So, I’m Tejaswita. I’m a project officer at the Center for Communication Governance at NLU Delhi. We do a lot of research on the governance of emerging technology, and whether that governance is ethical is, I think, a large part of what our work is. So, in terms of what I want to talk about today, I know we have three policy questions. Out of these three, what I want to concentrate on is number two, which is on primary challenges to embedding ethics. I think when we talk about embedding ethics into AI or into any other system, it is very important to consider what ethics even means, in the sense that ethics, in itself, is a very subjective concept. What ethics might mean to me might be very different to what it means to somebody else. And that is something we can already see in a lot of existing AI principles, ethical principles, or guiding documents, where in one you can see that they might consider transparency to be a principle, which will be a recurring principle across documents, but privacy may not necessarily be one, which means that there will be variation in which ethical principles get implemented. So what this means for us is that when you’re implementing ethics, there’s a good chance that not everybody’s applying it in the same way, or even the principles might be different. To explain what I mean when I say that people may not implement it in the same way, I will talk about fairness in AI. When we look at fairness in AI, fairness as a concept is different when you look at it in, let’s say, the United States versus what you would consider to be fairness as an ethical principle in India, right? In India, there’ll be various factors such as caste and religion, which will be very, very high-value factors when you’re determining fairness. Meanwhile, in the US, these factors may look like race.
So I specifically mean this in terms of AI bias when you’re looking at discrimination bias in AI. So with that in mind, the first challenge when we’re looking at embedding ethics is that ethics is different for everyone. And even the principles, even though they may be similar, there’ll be a lot of varying factors or differences in how these ethical principles even are understood. So with that in mind, we need to solve this issue and how do we deal with that is, when we look at that as the answer, I will get to the point number three, which is the strategies and mechanisms, what strategies and mechanisms can be implemented, right? So one way that we solve this problem is by ensuring that there’s collaboration between multiple stakeholders in the sense that we very often as civil society and policymakers, we have certain ideas of what ethics means, but do the developers and designers of these systems understand what this even means? Whether they have the ability or not to implement ethics by design into these systems is a very big question. The main way that we can solve this issue is by first identifying what the… ethical principles are, what it means for each differing context. I am of the belief that we cannot define ethics as a larger concept. We must understand that depending on the system, depending on the regional societal context, there will always be differences in terms of what ethics by design is going to look like and there must always be differences because there cannot be a one-size-fit- all standard application of ethics by design because not everybody agrees on what ethics means. So first we determine what ethics even means, what these principles can be, whether for example we want to ensure that ethical, whether we want to ensure that privacy is a part of the ethical principles, for example. And then we get into the question of what these factors are that will be included within these ethical principles. 
Like I said, if it’s for fairness, are we looking at fairness in the sense of non-discrimination, inclusivity? What these factors are that fall within this is very important to have one level of understanding on. And then we get into understanding how developers and designers can actually implement this in their systems, whether it’s by ensuring that their data is clean before they start working, to ensure that there’s no bias that comes into the data inherently. So I think that the main way that we ensure ethics by design is by ensuring that there’s good collaboration between stakeholders. This collaboration can perhaps be in the form of a coalition. For example, in India what we have right now is a coalition on the responsible evolution of AI, where there are a lot of stakeholders, some of them developers, some of them big tech, and there’s also civil society participation, and all of us talk about, number one, what the difficulties are in terms of AI and the responsible evolution of it. And then we also discuss how we solve this. So the only way that we can do this is by actually creating a mechanism where there’s collaboration between all of these different stakeholders, where we discuss and identify how we design it. So this is my point predominantly in terms of how you can implement ethics by design. Thank you.
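To make the fairness discussion above concrete, here is an illustrative sketch (not from the session, with made-up data): whichever attribute a given society treats as protected, caste, religion, or race, a developer can measure outcome gaps across those groups. Demographic parity difference is one such simple metric.

```python
# Illustrative sketch: demographic parity difference, one simple way to
# quantify "fairness" once the context-specific protected attribute is
# chosen. The data below is fabricated for demonstration only.
from collections import defaultdict

def demographic_parity_gap(groups, outcomes):
    """Return (max gap in positive-outcome rate between groups, per-group rates).

    groups:   list of group labels, one per individual
    outcomes: list of 0/1 model decisions, same order
    """
    pos, total = defaultdict(int), defaultdict(int)
    for g, y in zip(groups, outcomes):
        pos[g] += y
        total[g] += 1
    rates = {g: pos[g] / total[g] for g in total}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit: group label and model decision per applicant.
gap, rates = demographic_parity_gap(
    ["A", "A", "A", "A", "B", "B", "B", "B"],
    [1, 1, 1, 0, 1, 0, 0, 0],
)
# rates == {"A": 0.75, "B": 0.25}; gap == 0.5
```

A gap near 0 means the groups receive positive outcomes at similar rates; which threshold counts as acceptable is, again, a context-specific policy choice rather than a purely technical one.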

Thiago Moraes: Thanks a lot, Tejaswita. It’s quite interesting to know about these coalitions that are trying to engage different stakeholders to tackle issues such as fairness in AI. I think this is part of the puzzle that we have to solve here. When we’re discussing what we really mean by ethics by design, where we go from here is definitely a challenge, and we have to consider these many different perspectives. One of the challenges is how to make these collaborations actually work and produce results, so thanks for giving us a glimpse of that. Hopefully we can have some time to come back to it, but we’ll move now to our last but not least speaker, Rosanna Fanni from UNESCO. Rosanna is a programme specialist in the ethics of AI unit at UNESCO, part of the bioethics and ethics of science and technology team, focusing on technology governance and ethics. She supports the global implementation of UNESCO’s recommendation on the ethics of AI and assists countries in shaping ethical AI policies. Previously, she coordinated AI policy projects at CEPS and contributed to research at the Brookings Institution, the European Parliament Research Service and Nuclear. Rosanna holds expertise in international AI governance and policy analysis. So thanks a lot for being here with us, Rosanna, and we are looking forward to knowing more about your work on the topic.

Rosanna Fanni: Thank you. Thank you very much, and thanks also to all my fellow panelists. A lot of things have already been mentioned that I might otherwise have repeated, so I hope I will not do that, but will instead first pull together the remarks that we’ve heard today and also offer some perspectives for the discussion. I will outline that based, of course, on the work that we do at UNESCO to implement the ethics of AI around the world. First, thanks also for organizing the session, because I think it’s really important, when we think about new technologies and especially artificial intelligence, to look at the ethics, because ethics is what makes us human, what makes us come together, what makes us sit together in a room and discuss and interact and exchange different perspectives. For us at UNESCO, ethics is not something purely philosophical, and it’s also not something that is built in as an afterthought when we look at AI. It means, from the very first moment, respecting human rights, human dignity and fundamental freedoms, and putting people and societies at the center of the technology. We really believe that it should not be about controlling the technology, but rather steering its development in a way that serves our goals for humankind, because we believe that the technology conversation, especially the conversation about AI, is in the end a societal one, not a technological one. And this means that we must scale our governance and our understanding of the technologies in a way that matches the growth of the industry and of the technology itself as it develops into every aspect of our societies. I don’t have to mention, I think, the examples of where we see AI already happening today, and also the risks that arise with it. And there was one point in the discussion when it came to
AI regulation. Let’s think back a few years, when we didn’t yet have the AI Act in place, when we didn’t yet have the discussion about the US framework, nor other standards. There was still this moment, if you remember, when a lot of governments were saying: we see the technology developing so fast, we can’t really do anything about it, we don’t really know how to steer it, and we need to let the market solve the problems on its own. But that was the moment when UNESCO started to implement its work on the recommendation on the ethics of AI. UNESCO has actually been working on ethical and governance considerations of science and technology for several decades. Previously, we have promoted rigorous analysis and multidisciplinary, inclusive debate regarding the development and implications of emerging technologies through the scientific committees that we have. This started off as a debate about ethics and human genome editing, and since then we have constantly reflected at UNESCO on the ethical challenges of emerging technologies. This work eventually culminated in the observation by member states that there are actually a lot of ethical risks when it comes to the development and application of artificial intelligence, and this is what led us to work on the recommendation on the ethics of AI. The recommendation on the ethics of AI, if you think about it today, is actually quite a fascinating instrument, because it was approved by all 194 member states and it contains a lot of ethical principles, values, and policy action areas that everybody agreed to.
So, maybe already reacting to my fellow panelist who spoke previously: there is actually a global standard. I can very quickly list the values we have: the respect, promotion and protection of fundamental rights and human rights; environment and ecosystem flourishing, which is also something that is really important when you look at ethics; ensuring diversity and inclusiveness; and peaceful, just and interconnected societies. Then we have ten principles for how these values are translated into practice: for example fairness and non-discrimination, safety, the right to privacy of course, human oversight, transparency, responsibility. You can read them all up online; I will not outline them for the sake of time. This recommendation, which was adopted in 2021, is now being implemented in over 60 member states around the world and counting. What does implementing the recommendation mean? It means implementing it through a very specific tool: the readiness assessment methodology. I have only the French version, but here it is. The readiness assessment methodology is actually ethics by design for member states’ AI governance frameworks. What does that mean? It means that when member states work on AI governance strategies, or before they start working on them, we offer them this tool. It is basically a long questionnaire that gives member states a 360-degree view of what their AI ecosystem looks like at home, across five dimensions: societal and cultural, regulatory, infrastructural and others. Through this tool we really ensure that member states know where they stand and how they can improve their governance structures, to ensure that ethics is really at the center of what they do when they work on AI policy and governance.
We also have another tool, and I want to quickly spend a minute or so explaining it as well: the ethical impact assessment. The readiness assessment operates on a macro level and really looks at the whole governance framework. The ethical impact assessment looks at one specific algorithm and examines to what extent it complies with the recommendation and the principles outlined in it. That’s really important when we look at AI systems used in the public sector. For example, when we see AI systems used for welfare allocation or for deciding where children go to school, or when we look at AI used in the healthcare context, it is crucially important that these AI systems are designed in an ethical manner. The ethical impact assessment does exactly that. It analyzes the systems against the recommendation across the entire life cycle, looking at the governance, for example the multi-stakeholder governance: how has the system been designed, and which entities have been involved? Then it looks at the negative impacts and the positive impacts. That’s also something I think is really important to emphasize when you look at ethics by design: it’s not just about mitigating the risks, but also about looking at the opportunities that exist in the use of AI systems. There is also always the contextualization of weighing the negatives against the positives, and that is something the ethical impact assessment looks at as well. I will very briefly, because I think I’m almost over time, also mention that we work with the private sector. We work with the private sector as well, because we think that when it comes to AI governance, nobody can do it alone, and the private sector is a key entity in ensuring that AI systems are designed and implemented in an ethical manner.
So we have been teaming up with the Thomson Reuters Foundation to launch a voluntary survey for business leaders and companies to map how AI is being used across their operations, products and services. This is actually not yet live; it is going to be launched in June. We have already launched the initiative, and the questionnaire will be available in summer next year. The idea really is for businesses to conduct a mapping of their AI governance models and also assess, for example, where AI is already having an impact, for instance on the diversity and inclusion aspect or on human oversight; the environmental impact assessment is also featured there. By offering this tool to the private sector, we really want to support the sector in ensuring that their governance mechanisms become more ethical, that they can disclose this to their investors and shareholders, but also to the public, and really ensure that ethics is at the center of their operations. Last but not least, another aspect that we have heard a lot about today is multi-stakeholderism. We at UNESCO see that civil society is always a really critical part of the discussion about the ethics and governance of AI, but most often civil society is not properly sitting at the table when it comes to these discussions. We at UNESCO want to change that. Over the last year, we have been mapping all the different civil society organizations that are working on the ethics and governance of AI, and we are bringing them all together next year, first at the AI Action Summit in Paris, and then at the Global Forum on the Ethics of AI, UNESCO’s flagship conference on ethics and AI governance. We will be bringing this global network of civil society organizations together for the first time at both events.
And we invite all civil society organizations that would like to join us as well, to ensure that we bring these voices into the major AI governance processes that are ongoing right now. With that, I will close. I really look forward to the discussion. I have many more points to make, but thanks a lot, and back to you, Thiago, or to our next moderator.

MODERATOR: Thank you, Rosanna, for your speech. We’re going to move to the second part of our panel, but first I would like to engage our audience online and on site. Does anyone have any questions, comments, or observations of any kind? Please approach the standing mic. Okay, so I’m going to put some questions to our speakers, and you can answer as you like. How are you involving stakeholders from civil society and academia in the initiatives you have mentioned?

Ahmad Bhinder: I spoke last, so maybe I’ll pass the floor. Well, I’ll be quick. As Rosanna said, we as an intergovernmental organization are all about collaboration and discussion. First of all, we have member states, with whom we hold discussions and workshops. We also have a growing network of observers, and for all the initiatives that we propose, we seek their inputs to improve and shape the dialogue. We then want to position ourselves as a collective voice and advocate for best practices on their behalf. So yeah, this is from an intergovernmental organization perspective. Would you want to take it?

Amina P.: Okay. We have an initiative at META. We partner with a non-profit community that has been created, called the Partnership on AI. It’s a partnership with academics, civil society, industry and media, creating solutions so that AI advances positive outcomes for people and society. Out of this initiative, specific recommendations are provided under what we call the synthetic media framework: recommendations on how to develop, create and share content generated or modified by AI in a responsible way. So this is one of the initiatives on which META has collaborated; we collaborate with academia, but also with CSOs. We have other projects, such as the Coalition for Content Provenance and Authenticity, with the publication of what we call content credentials about how and when digital content was created or modified; this is called C2PA. That is another kind of coalition that we have, not necessarily with academia or CSOs, or limited to those actors. Another partnership is the AI Alliance, established with IBM, which gathers creators, developers, and adopters to build, enable, and advocate for open source AI. Tejaswita, do you want to join us?

Tejaswita Kharel: Yes. As somebody who represents more of civil society and academia, I can give more input on how I think we get involved in these conversations. Like I said before, in a lot of these coalitions or other groups there is a lot of representation, predominantly by industry, but I do think academia and civil society organizations are very often invited to give opinions, to listen, and to share more about what our beliefs are. But I do believe that very often, when this is done, it ends up being a bit of a minority perspective, and it feels like you’re not necessarily always taken very seriously. It’s a little bit like advocacy, where you know that you’re speaking about things that may not necessarily be what other people want to do. So even though academia and civil society representation exists, I don’t think it’s being done in a way that is actually useful, because it’s almost a tokenization of representation. I will be asked to do something or attend an event representing civil society or academia, and I will do it, but I feel like at the end of the conversation I am there solely to mark a tick box: okay, we have had representation, we’ve heard from them, but ultimately what we want to do is what we believe should be done. So it’s more of a criticism from my end on this part. That being said, I unfortunately have another clashing event, so I will not be able to stay any further. I really apologize. It’s been great; I really loved listening to everyone. I’m really grateful for this opportunity and to have been part of this panel with all of these other excellent panelists and the moderators. Thank you very much. I will be leaving now. Thank you.

MODERATOR: Thank you very much for your participation.

Ahmad Bhinder: I just want to add one quick thing which had skipped my mind. We have a mechanism called the Digital Space Accelerator program, where we hold global roundtables on different digital economy issues. For this AI tool that we are developing, we gather the expert stakeholders on the sidelines of big events, like we did yesterday, and we seek their inputs while we are shaping and designing our product, this tool. We went to Singapore, for example, and to a couple of other places, and gathered the experts. This is a mechanism not just for the AI work; it’s a holistic program for the DCO. Please have a look at it on our website, and feel free to contribute or join as well. So this Digital Space Accelerator program is how we involve all the stakeholders in our initiatives. Thank you.

Rosanna Fanni: Yes, and I will also add a couple of points, maybe directly picking up on the panelist who has unfortunately now left us. It’s very much true, and we also observe this tick-box exercise, especially when it comes to civil society involvement in global governance processes on AI. This is exactly why we are setting up the global network of civil society organizations. To give a bit more context, we will launch this in the context of the AI Action Summit hosted by France, which is happening in February next year. As many of you know, it is a government-led summit, the first one having taken place in the UK as the AI Safety Summit, followed by a second one hosted by the Republic of Korea. For us, it’s really crucial that we do not treat this again as a tick-box exercise, but that we bring civil society into the discussions and leverage their voices during the ministerial discussions as well. This is something the organizers have actually already announced: if you go to the AI Action Summit website, you will see that civil society will be a high priority. Our idea is really to connect the dots and to make this network permanent, so as to offer it as a consultative body for future AI Action Summits or for other major governance processes on AI. This is really at the heart of our endeavor, and we thank the Patrick J. McGovern Foundation, which funds this initiative, for their support of this project. The other part that I really wanted to mention is the work with academics. This is also a really crucial part of our work, and people from academia especially support us in implementing the recommendation through the readiness assessment methodology that I mentioned beforehand, as a 360-degree scanning tool for governments. And we bring together these experts.
So imagine: we are conducting the readiness assessments in over 60 countries. That means we already have 60 experts engaged, one in each country, and every expert brings something a bit unique from that country to the discussions. We assemble these experts in a network that we call AI Ethics Experts Without Borders. This network is really there to unite the knowledge on the ethical governance of AI that we find in governments, at the country level and maybe even at the regional or local level, and to bring it together, so to say, at UNESCO. What is really special about it is that experts can then exchange: hey, what was the experience with, let’s say, AI used in healthcare or AI used in another sector, or maybe there was an issue with the supervision of AI? So the idea is really to bring this expertise together and leverage the knowledge of local experts. What I also want to emphasize, and it links to the civil society discussion, is that very often the same issue arises with countries from the global South as with civil society: their involvement is really more of a tick-box exercise. Oh, we have someone from Africa here, but actually the grand majority of the countries that do AI governance are mainly developed economies. For us, it is very much part of our work to bring in these voices from the global South, not as a tick-box exercise, but to really leverage them. That’s also why, out of the 60 countries we are working with, 22 so far are from Africa, and even more are small island developing states. For us, it’s really important to bring in these actors who are normally underrepresented, and we really hope to continue this work in forums such as the IGF, but also in many other contexts as well.

MODERATOR: Thank you. We’re going to ask you to bring us your final remarks, but as part of these final remarks, could you share some last insights on one question: what is the feedback you have received from stakeholders participating in those collaborative approaches? What were the challenges they shared in doing that collaborative work, and what were the key takeaways? And thank you very much for your participation.

Ahmad Bhinder: Well, okay. I think my concluding thoughts are actually connected to this question. We have engaged with stakeholders across our DCO member states, the governments as well as civil society. What we have noticed across our membership, which is a representative sample because we are very diverse, much like the global picture, is that there are varying levels of AI readiness across the member states. While some countries are struggling with basic infrastructure, others are really at the forefront of shaping AI governance. There are diverse definitions and diverse approaches to governance. The uniting factor, as Rosanna said, is the principles, which have been very widely adopted because they are not controversial; but how to action those principles has been quite diverse. Countries are taking quite different paths, but the principles are common. So there is huge potential for engagement, for harmonization, for synchronization of policies, because for AI and all the emerging technologies, regulation is not restricted to the countries themselves: these are global actors, and borders do not define technologies. So I think it is really important now, when we talk about multi-stakeholderism or multilateralism, to actually action it: to have those voices heard, to have these global forums and global discussions, and for the global rule-setting bodies to be more active and push the right set of rules for nations to adopt. And I think the dialogue that we are having here, across these forums, is very important. Thank you.

Amina P.: Yeah, I would highlight one thing. I cannot, of course, provide detailed feedback, because when we work with experts, for instance, they provide their feedback depending on the product we are asking them to review. But as a very general overview of the comments that we receive: sometimes we feel that there are very varying levels of understanding of what AI is and of the risks being put on the table. Are we talking about existential risks in general, or are we trying to take a more specific, scoped approach, identifying a particular risk and trying to target and mitigate it properly in a very specific way? And sometimes we also face some misconceptions from the experts, because if we are talking with experts who have a human rights-based approach, then maybe in terms of privacy, or when it comes to AI specificities, there are sometimes misconceptions. So the educational work is absolutely indispensable, and hence some of the tools that we put in place. For instance, there are the system cards for AI, which explain things to the user who does not have this knowledge; if a user does not understand how an AI model works and why it behaves the way it does, it is very difficult to earn that user’s trust. This is why the system cards we put in place explain, let’s say, the ranking system in our ads: how our ads are ranked and how the ranking systems work for users when it comes to ads. There is also the privacy center, and some other educational tools as well. It is very important to do this education work.

Rosanna Fanni: Yes, I will make it really short: implementation, implementation, implementation. We hear from member states that they want to operationalize the principles. They want to do something with AI, they want to use AI, but at the same time they do not want to get it wrong. They don’t want to use it in an unethical manner; they want the benefits for everyone, for their citizens and for their businesses. So I think it comes down to implementation of the recommendation, but also implementation of the other tools that we have heard about today from other stakeholders, my fellow panelists, and also implementation of the Global Digital Compact. I think the focus now really needs to shift from the principles, from the kind of consensus that we have found. Yes, we need ethics; yes, we need ethics by design; yes, we also need global governance for AI. But how do we do it, and how do we move from the principles to action? There is still a lot of work to be done, and a real necessity to build capacities in governments, in public administration, but also in the private sector and in civil society, to really be actionable and operational, and at the same time to use AI for the benefit of citizens while being aware of the risks and mitigating the ethical challenges that we have.

Thiago Moraes: Thanks a lot. It was amazing having this discussion, and I think Rosanna just captured the main question: now that we have a consensus, where do we go from here, and how do we do it? We’re looking forward to the initiatives being developed by different organizations, the AI Action Summit that’s coming, and many others that have been shared here at the Internet Governance Forum. So thanks everyone: thanks to our speakers for being here, and to the audience for being here and for the whole discussion. And yeah, looking forward to what’s coming.

A

Ahmad Bhinder

Speech speed

140 words per minute

Speech length

1105 words

Speech time

472 seconds

Risk-based and principles-based regulatory approaches

Explanation

Ahmad Bhinder discusses two main approaches to AI governance: prescriptive rules-based approaches (like the EU AI Act) and principles-based approaches focused on innovation. He notes that all approaches are ultimately based on certain principles, whether called ethical AI principles or responsible AI governance principles.

Evidence

Examples of EU AI Act and approaches in the US and Singapore

Major Discussion Point

Approaches to AI Governance and Ethics

Differed with

Amina P.

Tejaswita Kharel

Rosanna Fanni

Differed on

Approaches to AI Governance and Ethics

Varying levels of AI readiness across countries

Explanation

Ahmad Bhinder observes that across DCO member states, there are varying levels of AI readiness. While some countries struggle with basic infrastructure, others are at the forefront of shaping AI governance. This diversity presents challenges in harmonizing approaches to AI governance.

Evidence

Observations from DCO member states

Major Discussion Point

Challenges in Implementing Ethics by Design

Agreed with

Amina P.

Tejaswita Kharel

Rosanna Fanni

Agreed on

Challenges in implementing ethics by design

Digital Cooperation Organization’s collaborative initiatives

Explanation

Ahmad Bhinder discusses the DCO’s collaborative approach to AI governance. The organization engages with member states, governments, and civil society to gather inputs and shape dialogue on AI governance issues.

Evidence

DCO’s digital space accelerator program and global roundtables

Major Discussion Point

Multi-stakeholder Collaboration

Agreed with

Amina P.

Rosanna Fanni

Tejaswita Kharel

Agreed on

Need for multi-stakeholder collaboration in AI governance

DCO’s AI governance assessment tool

Explanation

Ahmad Bhinder mentions the development of an AI governance assessment tool by the DCO. This tool is being shaped through inputs from expert stakeholders gathered at global roundtables and events.

Evidence

Stakeholder consultations in Singapore and other locations

Major Discussion Point

Tools and Frameworks for Ethical AI

A

Amina P.

Speech speed

118 words per minute

Speech length

1488 words

Speech time

755 seconds

Open source AI as a tool to enhance privacy and safety

Explanation

Amina P. argues that open source AI can enhance privacy and safety by allowing experts to identify, inspect, and mitigate risks. She emphasizes that this collaborative approach within the AI community helps address potential issues before product launch.

Evidence

Meta’s open source AI initiatives, including the Llama 3.2 model and the Purple Llama project

Major Discussion Point

Approaches to AI Governance and Ethics

Differed with

Ahmad Bhinder

Tejaswita Kharel

Rosanna Fanni

Differed on

Approaches to AI Governance and Ethics

Misconceptions about AI and privacy among stakeholders

Explanation

Amina P. highlights that there are varying levels of understanding about AI and its risks among stakeholders. She notes that misconceptions can arise, particularly when experts from different backgrounds (e.g., human rights) engage with AI specificities.

Evidence

Feedback from expert consultations and the need for educational tools like system cards

Major Discussion Point

Challenges in Implementing Ethics by Design

Agreed with

Ahmad Bhinder

Tejaswita Kharel

Rosanna Fanni

Agreed on

Challenges in implementing ethics by design

META’s partnerships with academia and civil society

Explanation

Amina P. describes META’s collaborations with academia and civil society organizations to address AI ethics and governance. These partnerships aim to create solutions for responsible AI development and use.

Evidence

Partnership on AI initiative and Coalition for Content Provenance and Authenticity (C2PA)

Major Discussion Point

Multi-stakeholder Collaboration

Agreed with

Ahmad Bhinder

Rosanna Fanni

Tejaswita Kharel

Agreed on

Need for multi-stakeholder collaboration in AI governance

T

Tejaswita Kharel

Speech speed

175 words per minute

Speech length

1277 words

Speech time

436 seconds

Need for context-specific understanding of ethical principles

Explanation

Tejaswita Kharel emphasizes that ethics is subjective and can mean different things in various contexts. She argues that ethical principles for AI must be understood and applied differently based on regional and societal contexts, as there cannot be a one-size-fits-all approach.

Evidence

Example of fairness in AI differing between the United States and India due to factors like caste and religion

Major Discussion Point

Approaches to AI Governance and Ethics

Differed with

Ahmad Bhinder

Amina P.

Rosanna Fanni

Differed on

Approaches to AI Governance and Ethics

Subjectivity and differing interpretations of ethics

Explanation

Tejaswita Kharel points out that ethics is a subjective concept, leading to varying interpretations and implementations of ethical principles in AI. This subjectivity creates challenges in consistently applying ethics by design across different contexts and stakeholders.

Evidence

Variations in ethical principles across different AI guidelines and documents

Major Discussion Point

Challenges in Implementing Ethics by Design

Agreed with

Ahmad Bhinder

Amina P.

Rosanna Fanni

Agreed on

Challenges in implementing ethics by design

Need for meaningful inclusion of civil society voices

Explanation

Tejaswita Kharel criticizes the current state of civil society involvement in AI governance discussions, describing it as often tokenistic. She argues for more meaningful inclusion of civil society perspectives beyond just ticking a box for representation.

Evidence

Personal experiences in participating in stakeholder consultations

Major Discussion Point

Multi-stakeholder Collaboration

Agreed with

Ahmad Bhinder

Amina P.

Rosanna Fanni

Agreed on

Need for multi-stakeholder collaboration in AI governance

R

Rosanna Fanni

Speech speed

159 words per minute

Speech length

2581 words

Speech time

972 seconds

UNESCO recommendation on ethics of AI as global standard

Explanation

Rosanna Fanni presents UNESCO’s recommendation on the ethics of AI as a global standard approved by 194 member states. This recommendation provides a set of ethical principles, values, and policy action areas that have gained widespread agreement.

Evidence

UNESCO’s recommendation on the ethics of AI and its implementation in over 60 member states

Major Discussion Point

Approaches to AI Governance and Ethics

Differed with

Ahmad Bhinder

Amina P.

Tejaswita Kharel

Differed on

Approaches to AI Governance and Ethics

Difficulty in operationalizing ethical principles

Explanation

Rosanna Fanni highlights the challenge of moving from agreed-upon ethical principles to practical implementation. She emphasizes the need to shift focus from establishing principles to taking concrete actions in AI governance and ethics.

Evidence

Feedback from member states expressing the desire to operationalize principles and use AI responsibly

Major Discussion Point

Challenges in Implementing Ethics by Design

Agreed with

Ahmad Bhinder

Amina P.

Tejaswita Kharel

Agreed on

Challenges in implementing ethics by design

UNESCO’s readiness assessment methodology

Explanation

Rosanna Fanni describes UNESCO’s readiness assessment methodology as a tool for ethics by design in AI governance frameworks. This tool provides member states with a comprehensive view of their AI ecosystem across multiple dimensions.

Evidence

Implementation of the readiness assessment in over 60 member states

Major Discussion Point

Tools and Frameworks for Ethical AI

Ethical impact assessments for AI systems

Explanation

Rosanna Fanni introduces UNESCO’s ethical impact assessment tool, which evaluates specific AI algorithms against the principles outlined in the UNESCO recommendation. This tool is particularly important for AI systems used in the public sector.

Evidence

Examples of AI systems in welfare allocation, education, and healthcare

Major Discussion Point

Tools and Frameworks for Ethical AI

UNESCO’s global network of civil society organizations

Explanation

Rosanna Fanni discusses UNESCO’s initiative to create a global network of civil society organizations focused on AI ethics and governance. This network aims to amplify civil society voices in major AI governance processes and discussions.

Evidence

Planned launch of the network at the AI Action Summit in February

Major Discussion Point

Multi-stakeholder Collaboration

Agreed with

Ahmad Bhinder

Amina P.

Tejaswita Kharel

Agreed on

Need for multi-stakeholder collaboration in AI governance

Agreements

Agreement Points

Need for multi-stakeholder collaboration in AI governance

Ahmad Bhinder

Amina P.

Rosanna Fanni

Tejaswita Kharel

Digital Cooperation Organization’s collaborative initiatives

META’s partnerships with academia and civil society

UNESCO’s global network of civil society organizations

Need for meaningful inclusion of civil society voices

All speakers emphasized the importance of involving various stakeholders, including governments, industry, academia, and civil society, in shaping AI governance and ethics frameworks.

Challenges in implementing ethics by design

Ahmad Bhinder

Amina P.

Tejaswita Kharel

Rosanna Fanni

Varying levels of AI readiness across countries

Misconceptions about AI and privacy among stakeholders

Subjectivity and differing interpretations of ethics

Difficulty in operationalizing ethical principles

Speakers agreed that implementing ethics by design in AI systems faces various challenges, including differing levels of readiness, misconceptions, and the difficulty of translating ethical principles into practical actions.

Similar Viewpoints

Both speakers highlighted the importance of considering different approaches to AI governance and ethics, emphasizing the need for context-specific understanding and application of ethical principles.

Ahmad Bhinder

Tejaswita Kharel

Risk-based and principles-based regulatory approaches

Need for context-specific understanding of ethical principles

Both speakers presented tools and methodologies aimed at enhancing the ethical development and governance of AI systems, emphasizing transparency and comprehensive assessment.

Amina P.

Rosanna Fanni

Open source AI as a tool to enhance privacy and safety

UNESCO’s readiness assessment methodology

Unexpected Consensus

Importance of education and capacity building in AI ethics

Amina P.

Rosanna Fanni

Misconceptions about AI and privacy among stakeholders

Difficulty in operationalizing ethical principles

While not explicitly stated as a main argument, both speakers emphasized the need for education and capacity building to address misconceptions and enable the practical implementation of ethical principles in AI governance.

Overall Assessment

Summary

The main areas of agreement include the need for multi-stakeholder collaboration, recognition of challenges in implementing ethics by design, and the importance of context-specific approaches to AI governance and ethics.

Consensus level

There is a moderate to high level of consensus among the speakers on the fundamental aspects of AI ethics and governance. This consensus suggests a growing recognition of the complexities involved in ethical AI development and the need for collaborative, context-sensitive approaches. However, the specific implementation strategies and tools vary among different organizations and stakeholders, indicating that while there is agreement on the importance of ethical AI, the path to achieving it remains diverse and evolving.

Differences

Different Viewpoints

Approaches to AI Governance and Ethics

Ahmad Bhinder

Amina P.

Tejaswita Kharel

Rosanna Fanni

Risk-based and principles-based regulatory approaches

Open source AI as a tool to enhance privacy and safety

Need for context-specific understanding of ethical principles

UNESCO recommendation on ethics of AI as global standard

Speakers presented different approaches to AI governance and ethics, ranging from risk-based and principles-based regulatory approaches to open source AI and context-specific ethical principles. While Ahmad Bhinder discussed various regulatory approaches, Amina P. focused on open source AI, Tejaswita Kharel emphasized context-specific ethics, and Rosanna Fanni presented UNESCO’s global standard.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement centered around the specific approaches to implementing ethical AI governance and the challenges in operationalizing ethical principles across different contexts.

Difference level

The level of disagreement among the speakers was moderate. While they shared common goals of ethical AI governance, they presented different perspectives and approaches. This diversity of viewpoints highlights the complexity of the topic and the need for continued multi-stakeholder dialogue to develop comprehensive and effective ethical AI frameworks.

Partial Agreements

All speakers agreed on the need for ethical AI governance, but differed in their approaches to addressing the challenges. Ahmad Bhinder highlighted varying levels of AI readiness, Tejaswita Kharel emphasized context-specific ethics, and Rosanna Fanni focused on the difficulty of operationalizing ethical principles. They all recognized the complexity of implementing ethics by design but proposed different solutions.

Ahmad Bhinder

Tejaswita Kharel

Rosanna Fanni

Varying levels of AI readiness across countries

Need for context-specific understanding of ethical principles

Difficulty in operationalizing ethical principles

Takeaways

Key Takeaways

There are varying approaches to AI governance and ethics globally, including risk-based and principles-based regulatory approaches

Open source AI and multi-stakeholder collaboration are seen as important tools for enhancing privacy, safety and ethical AI development

UNESCO’s recommendation on ethics of AI provides a global standard agreed upon by 194 member states

Implementing ethics by design in AI faces challenges due to varying levels of AI readiness across countries and differing interpretations of ethical principles

There is a need for context-specific understanding and application of ethical principles in AI

Moving from ethical principles to practical implementation and action remains a key challenge

Resolutions and Action Items

UNESCO to launch a global network of civil society organizations focused on AI ethics and governance

UNESCO to continue implementing its recommendation on ethics of AI through readiness assessments in over 60 countries

META to launch a voluntary survey for businesses to map AI use across their operations in summer 2024

Continued development of tools like DCO’s AI governance assessment tool and UNESCO’s ethical impact assessment framework

Unresolved Issues

How to effectively operationalize ethical principles in AI development and deployment

How to ensure meaningful inclusion of civil society and Global South voices in AI governance processes

How to address varying levels of AI readiness across different countries and regions

How to reconcile differing interpretations and applications of ethical principles across contexts

Suggested Compromises

Balancing prescriptive regulatory approaches with more flexible principles-based approaches to AI governance

Using open source AI as a way to enhance both innovation and ethical safeguards

Combining global ethical standards (like UNESCO’s recommendation) with context-specific implementations

Thought Provoking Comments

We see two types of approaches. One of the approaches which is led by the EU or China or some of those countries where we call it a more prescriptive rules-based, risk-based approaches. And we see the EU AI law, or AI Act, that has come into place, which categorizes the AI into risk categories, and then very prescriptive rules are set for those categories with the higher risk. And then we see in the US and Singapore and a lot of other countries, which have taken a pro, I mean, so-called pro-innovative approach, where the focus is to let AI take its space of development and set the rules, which are broadly based on principles.

speaker

Ahmad Bhinder

reason

This comment provides a clear overview of the two main regulatory approaches to AI governance globally, highlighting the key differences between prescriptive and principles-based approaches.

impact

It set the stage for discussing different regulatory frameworks and their implications, prompting further exploration of how ethics can be embedded in these different approaches.

Actually, the way we approach this and our vision or perception or work in relation to open-source AI is that experts are involved. First of all, we are opening the models. When we talk about open-source AI, it means that we are opening the models to the AI community that can benefit from these AI tools or these AI models, and everyone can use it. Now, the impact of this is that when we open these models, experts can also help us identify, inspect, and mitigate some of the risks.

speaker

Amina P.

reason

This comment challenges the common perception that open-sourcing AI models could lead to more risks, instead presenting it as a collaborative approach to identifying and mitigating risks.

impact

It shifted the discussion towards the potential benefits of open collaboration in AI development and ethics, prompting consideration of how transparency can contribute to ethical AI.

I think when we talk about embedding ethics into AI or into any other system, what is very important to consider what ethics even means in the sense that ethics, in what it is, is a very subjective concept. What ethics might mean to me might be very different to what it means to somebody else.

speaker

Tejaswita Kharel

reason

This comment highlights the fundamental challenge of defining and implementing ethics in AI, pointing out the subjective nature of ethical principles.

impact

It deepened the conversation by prompting reflection on the complexities of implementing ethical AI across different cultural and societal contexts.

The readiness assessment is something that is on a macro level and really looks at the whole governance framework. The ethical impact assessment looks at one specific algorithm and looks at to what extent the specific algorithm complies with the recommendation and the principles outlined in the recommendation.

speaker

Rosanna Fanni

reason

This comment introduces concrete tools for assessing ethical AI implementation at both macro and micro levels, providing practical approaches to the challenge.

impact

It moved the discussion from theoretical considerations to practical implementation strategies, offering tangible ways to embed ethics in AI development and governance.

Overall Assessment

These key comments shaped the discussion by highlighting the complexity of implementing ethical AI across different regulatory approaches, cultural contexts, and levels of governance. They moved the conversation from abstract principles to concrete challenges and potential solutions, emphasizing the need for collaboration, transparency, and practical assessment tools. The discussion evolved from identifying the problem to exploring multifaceted approaches for embedding ethics in AI development and governance.

Follow-up Questions

How can we move from ethical principles to actionable implementation of AI governance?

speaker

Rosanna Fanni

explanation

There is a need to operationalize ethical principles and implement AI governance in practice, beyond just agreeing on high-level concepts.

How can we ensure meaningful inclusion of civil society voices in AI governance discussions, beyond tokenistic representation?

speaker

Tejaswita Kharel

explanation

Civil society participation often feels like a ‘tick box exercise’ without real influence, so more effective ways of inclusion are needed.

How can we address varying levels of AI readiness across different countries while developing global AI governance frameworks?

speaker

Ahmad Bhinder

explanation

There are diverse approaches and capabilities related to AI across countries, which creates challenges for harmonizing global governance.

How can we improve public understanding of AI systems and their implications?

speaker

Amina P.

explanation

There are often misconceptions about AI among experts and the public, highlighting a need for better education and explanation of AI systems.

How can open source AI be leveraged to enhance privacy and security?

speaker

Amina P.

explanation

Open sourcing AI models allows for collaborative risk mitigation, but the implications and best practices need further exploration.

How can we ensure ethical principles are applied consistently across different cultural and societal contexts?

speaker

Tejaswita Kharel

explanation

Ethical principles like fairness can have different interpretations in different contexts, creating challenges for global standards.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #133 Better products and policies through stakeholder engagement

WS #133 Better products and policies through stakeholder engagement

Session at a Glance

Summary

This discussion focused on the importance of stakeholder engagement in developing better technology products and policies. Participants from various sectors shared insights on effective engagement strategies and challenges.

Richard Wingfield from BSR emphasized the need for companies to engage with diverse stakeholders, especially vulnerable groups, to understand potential human rights impacts of their products. He outlined a five-step approach for meaningful stakeholder engagement. Thobekile Matimbe highlighted the importance of proactive engagement with communities in Africa, suggesting platforms like the Digital Rights and Inclusion Forum for companies to connect with stakeholders.

Fiona Alexander, drawing from her government experience, stressed the value of targeted questions and political will in successful stakeholder engagement. She noted that while the process can be messy, it ultimately leads to better policies and buy-in. Charles Bradley shared Google’s approach, describing their External Expert Research Program and how it has improved product development by incorporating stakeholder feedback early in the process.

The discussion also addressed challenges, including time constraints, the fast pace of technology development, and potential disincentives for companies to engage meaningfully. Participants debated the effectiveness of regulation versus voluntary engagement, with some arguing for a combination of frameworks and impact assessments.

Overall, the panel agreed that while progress has been made in stakeholder engagement over the past decade, there is still significant room for improvement. They emphasized the need for more transparent, proactive, and meaningful engagement practices across the technology sector to ensure products and policies better respect human rights and meet community needs.

Keypoints

Major discussion points:

– The importance of meaningful stakeholder engagement in technology product and policy development

– Challenges and best practices for effective stakeholder engagement by companies and governments

– The role of regulation and other external pressures in driving responsible tech development

– Unique challenges of stakeholder engagement in the fast-moving tech sector

– The need for more proactive and inclusive engagement, especially in regions like Africa

Overall purpose/goal:

The discussion aimed to explore how stakeholder engagement can lead to better technology products and policies, sharing perspectives from industry, civil society, and former government officials on effective approaches and ongoing challenges.

Tone:

The overall tone was constructive and solution-oriented, with speakers acknowledging progress made while also highlighting areas needing improvement. There was a shift to a more critical tone when discussing ongoing shortcomings in tech company engagement practices, but the conversation remained professional and focused on identifying ways to advance meaningful stakeholder engagement.

Speakers

– Jim Prendergast: Moderator

– Richard Wingfield: Director, Technology and Human Rights, BSR

– Thobekile Matimbe: Senior Manager Partnerships and Engagements at Paradigm Initiative

– Fiona Alexander: Former official at the U.S. Department of Commerce

– Charles Bradley: Manager for trust strategy on knowledge information products at Google

Additional speakers:

– Lena Slachmuijlder: Executive Director, Digital Peacebuilding, Search for Common Ground; Co-Chair, Council on Tech and Social Cohesion

Full session report

Stakeholder Engagement in Technology Development: Challenges and Best Practices

This discussion focused on the critical importance of stakeholder engagement in developing responsible and effective technology products and policies. Participants from various sectors, including industry, civil society, and former government officials, shared insights on effective engagement strategies and ongoing challenges in the rapidly evolving tech sector.

Importance of Stakeholder Engagement

All speakers emphasised the fundamental importance of stakeholder engagement in technology development and policy-making. Richard Wingfield from BSR highlighted that stakeholder engagement is critical for responsible business practices, referencing the UN Guiding Principles on Business and Human Rights as a key framework. He also stressed the importance of prioritizing engagement with communities most likely to be at risk. Charles Bradley of Google noted that proactive stakeholder engagement builds trust and improves products, while Thobekile Matimbe stressed the importance of meeting stakeholders where they are, especially in Africa. Fiona Alexander, drawing from her government experience, argued that despite taking more time, stakeholder engagement ultimately leads to better policies and products.

Challenges in Implementing Effective Stakeholder Engagement

The discussion acknowledged various challenges in implementing effective stakeholder engagement, particularly in the fast-paced technology sector. Richard Wingfield pointed out the unique challenges faced by the tech industry due to the rapid pace of development. Charles Bradley highlighted the difficulty of engaging stakeholders early in the product development process when many products may not make it to market. He also noted internal pressures and incentives that can work against meaningful engagement. Thobekile Matimbe called for more proactive and meaningful engagement from companies, especially in Africa, noting a lack of willpower from some companies to engage effectively. Fiona Alexander added that cultural differences impact approaches to engagement and regulation across different regions.

Best Practices for Stakeholder Engagement

Speakers shared several best practices for effective stakeholder engagement:

1. Richard Wingfield outlined a five-step approach toolkit developed by BSR to help companies implement stakeholder engagement.

2. Charles Bradley described Google’s “External Expert Research Program,” which integrates stakeholder input into product development through regular engagement with a panel of experts.

3. Thobekile Matimbe suggested leveraging multi-stakeholder platforms like the Digital Rights and Inclusion Forum (DRIF) for companies to connect with stakeholders in Africa.

4. Fiona Alexander emphasised the importance of setting clear goals and deadlines for engagement processes, as well as transparency, including discussing when products are not released due to stakeholder feedback.

Specific Examples of Stakeholder Engagement

Charles Bradley provided concrete examples of Google’s stakeholder engagement efforts:

1. The development of the Circle to Search feature, which involved extensive consultation with accessibility experts.

2. AI overviews for various products, created in response to stakeholder feedback requesting more transparency about AI use in Google’s services.

Role of Regulation and External Pressure

The discussion touched on the role of regulation and external pressure in driving stakeholder engagement. Richard Wingfield noted that regulation, particularly in the EU, is requiring companies to engage with stakeholders as part of their risk assessment processes. Charles Bradley explained that Google views regulation as a way to level the playing field and ensure all companies are held to high standards. He argued that proactive engagement can help companies get ahead of regulatory pressures. Fiona Alexander expressed uncertainty about the impact of recent regulations like GDPR, suggesting their effectiveness is still unclear.

Critique of Current Practices

Lena Slachmuijlder raised important questions about the effectiveness of current stakeholder engagement practices in big tech. She challenged the notion that existing approaches are sufficient, highlighting the need for more upstream testing and transparency in product development. This critique sparked a discussion about how to make stakeholder engagement more meaningful and impactful.

Unresolved Issues and Future Considerations

The discussion identified several unresolved issues and areas for future consideration:

1. Balancing the need for stakeholder engagement with the fast pace of technology development and market pressures.

2. Ensuring stakeholder engagement is truly meaningful and not just a ‘tick-box’ exercise.

3. Addressing the ‘de-incentives’ that work against thorough stakeholder engagement in some companies.

4. Assessing the effectiveness of recent regulations like GDPR and the EU AI Act in driving responsible technology development.

5. Exploring ways to combine frameworks with impact assessments, as suggested by Avri Doria.

Conclusion

The discussion underscored the critical importance of stakeholder engagement in responsible technology development while acknowledging the complex challenges involved in its implementation. While progress has been made in recognizing the value of stakeholder engagement, there remains significant room for improvement in creating more transparent, proactive, and meaningful engagement practices across the technology sector. The conversation highlighted the need for continued dialogue, innovation in engagement strategies, and a commitment to ethical product development that respects human rights and meets diverse community needs.

Session Transcript

Richard Wingfield: … and rights, and lead our work with technology companies on how to act responsibly as a company and to build products and services that align with international human rights standards. So really pleased to be part of this conversation because stakeholder engagement is a critical part of the way we work with companies at BSR. And the approach that we take is very much in line with a number of existing frameworks and standards that exist in relation to stakeholder engagement. And for companies, whether in the technology sector or any other sector, who are looking to be responsible businesses, to build trust and confidence, to align with what being a responsible business means, one of the most critical frameworks that is used and where we draw our inspiration from is the UN Guiding Principles on Business and Human Rights. So the idea that states have human rights responsibilities is one that is well-established in international law, in various international treaties. But in the last few decades, there was increasing concern from a number of external stakeholders that businesses also should be taking a role in making sure that the human rights of people affected by their businesses were respected as well. And that resulted about 15 years ago in the endorsement by the United Nations Human Rights Council of this framework called the UN Guiding Principles on Business and Human Rights. So this is the framework that sets out what business and human rights looks like. It has various obligations that are imposed on states in terms of regulation of businesses. It imposes responsibilities on businesses to respect human rights. And it also imposes expectations as to how individuals who have been adversely affected can seek a remedy for any harm that has been suffered.
And the most critical part of the UN guiding principles as a framework when it comes to businesses is that pillar that is specifically about how businesses should respect human rights. Now, the reason why I’m sort of mentioning this framework in a conversation around stakeholder engagement is because the UN guiding principles on business and human rights repeatedly and explicitly recognise the importance of stakeholder engagement, and meaningful stakeholder engagement, when it comes to companies behaving responsibly. And this exists in a number of different aspects. One of the things that we do a lot of work with companies at BSR is to try to think through the way that companies have human rights impacts at all. And those can be risks to human rights. So for example, the way the use or misuse of a particular technology might cause harm to somebody, perhaps restrictions on their freedom of expression or impacts upon their rights to privacy. We also look at the way that companies can advance human rights, the way that different technologies can be developed and used in ways which advances societal goals, for example, supporting freedom of expression or enabling education or improving healthcare. So looking at the way that the actual technological products and services can sort of both improve but also create risks to human rights. But we also look at the way that companies’ own policies are relevant here. And obviously companies’ policies are pretty instrumental in the way that technologies are designed, the way they are used. And these can be everything from a company’s AI principles, which might govern the way that it develops and uses AI. If we were looking at social media or online platforms, the rules that they impose as to how people can and cannot use the platform in different ways. So there are a range of different ways that companies can sort of have impacts upon human rights.
And what the guiding principles say is that, in trying to understand those impacts, you need to speak to the people who are actually ultimately affected. So when we’re working with a company at BSR we put a huge amount of effort, alongside the company, into talking to and working with stakeholders to understand the risks that might be connected to a particular company, so what’s happening in practice in different parts of the world and with different communities, the opportunities that can be provided, so the way that different technologies are being used in ways that can create benefit for communities, but also the company’s own policies, so the way that the rules that the company sets relating to how they develop and use technology or the way that users can and cannot use those technologies, how they might also themselves be having impacts upon human rights. So what does this look like in practice? So one of the complicating aspects of technology as a sector is that so many people are affected and often the impacts of technology are global in nature, so if you’re looking at a large online platform that might be used by people across the world, potentially hundreds of millions or even billions of people across the world, that’s a huge number of people who might potentially be affected by the way that that service operates or by the rules that that company imposes, and so we really try to prioritise our stakeholder engagement with the communities that are most likely to be at risk. So we know, for example, that there are certain groups around the world who are particularly vulnerable to human rights harms. We know, for example, that persons with disabilities have historically been marginalised and may not be able to access or use technologies in the same way. We know that certain groups are vulnerable to things like hate speech or other types of harmful content online, particularly minority groups.
We know that there are certain groups who might be vulnerable to discriminatory bias when it comes to AI systems because of the lack of data connected to that group when those AI systems were created. So we try to prioritise our stakeholder engagement by working with those communities and groups that are going to be particularly affected by the risk or particularly vulnerable to that risk. And so that might mean working with women’s rights organisations, it might mean working with organisations that support persons with disabilities, it might mean working with groups representing those who are vulnerable to discrimination within different societies. But we also know that the ways that technologies are used and the ways that the rules that companies impose can be felt very differently in different parts of the world. There are different cultural contexts, there are different language issues depending on the company and the primary language that it uses, there are different levels of digital literacy in different parts of the world and so familiarity with technology and the way that it can be used or misused. So we also try to make sure that we take a global approach to our engagement with stakeholders and that we talk with groups that can either speak to the experience of people in different parts of the world or in some cases we talk directly to groups in certain parts of the world where their experiences may be different from elsewhere. So that’s the kind of approach that we take to stakeholder engagement is really trying to, particularly when you have potentially hundreds of millions or billions of people who are using a platform or affected by a technology, prioritising our engagement with those groups who are going to be most vulnerable or most at risk but also making sure that we are geographically and culturally diverse so that we hear the full range of experiences and can provide recommendations that are nuanced appropriately.
What the UN Guiding Principles don’t give a lot of detail on, however, is the actual mechanics of stakeholder engagement. So yes, they talk about the importance of talking to a diverse range of stakeholders and meaningfully using what they tell you in the way that the company develops its technology or creates or modifies its rules. But they don’t really tell you how to do it in practice. And so we use a range of different ways, depending on the company we’re working with and the issue in question. So it can be something like organizing one-on-one interviews. We might simply organize a number of one-on-one interviews with different organizations around the world. We ask them questions. We talk to them about their concerns. We might get into some specificity about a particular rule or a particular product, depending on the work that we’re doing. We might also organize workshops where we bring together a broader range of people. And sometimes that can be helpful because then you have a diversity of opinions within a room and people are able to counter each other or raise different perspectives or push back, and so you get much more of a dialogue. So sometimes we’ll use workshops or those broader interviews as a way of seeking engagement as well. We also know that stakeholder fatigue is an issue, with a lot of stakeholders constantly being asked to participate in interviews and meetings. So we also try to use existing spaces where people are talking about the issues that concern them. And the IGF is a great example of that. There are other conferences around the world like RightsCon, TrustCon, the UN Forum on Business and Human Rights. So there are many existing spaces where NGOs and other stakeholders come and talk about the issues that are most important to them, including the impacts of different technologies and different technology companies. 
And so quite often we will come to these events, run sessions ourselves, participate in other sessions, and use that as an opportunity of hearing directly from people on the ground. So we use a variety of different tactics and techniques to ensure that we are not only talking to a broad range of people, but also not adding additional burden and time to them, using existing spaces wherever possible. What the UN Guiding Principles also don’t give a lot of guidance on is how you then incorporate that feedback back into company decision-making. As I’m sure some of our company colleagues on this call will speak to, decision-making at a company is not a straightforward exercise. The design and the creation of new technology products, the way they’re launched, the way the policies are created and modified, these are complex processes. And so it’s not always straightforward simply to take the results of one interview or one workshop back and then to very quickly and easily make changes as a result of it. Stakeholder engagement and the feedback received will be one of a number of different sources of input into the ultimate decision-making of a company. So what we at BSR try to do is to make the feedback that we get from stakeholders as practical as possible. We try to make sure that it’s very clear exactly what stakeholders would like to see from the company and how that can be measured and assessed over time. One of the things that we also try to do is to build long-lasting relationships between companies and stakeholders as well. Simply bringing in one organisation for one interview at a point in time and then never speaking to them again does not encourage a long-standing and trusted relationship. So we often try to make sure that companies are providing updates to the stakeholders on what’s happened, involving them in later decision-making and trying to create relationships rather than something which is merely transactional. 
So the UN Guiding Principles as a framework are a really helpful starting point in setting out that companies should engage with stakeholders, that this should be used to understand the company’s risk profile but also where there might be opportunities as well, and in ensuring that there is a diversity of opinions in terms of the range of stakeholders that you speak to and the nuance that you get from those engagements. And then at BSR we’ve tried to add a bit more practicality to that framework in terms of what those engagements look like in practice and how we make sure that they are meaningful and impactful rather than transactional. So that’s the approach that we take. Jim, maybe I’ll pass back to you for our next speaker at this point.

Jim Prendergast: Yeah, great. Thanks, Richard. So yeah, turning to our next speaker, Thobekile, I hope I’m pronouncing that correctly. You know, one of the things that Richard talked about was going to various fora where stakeholders are, and I know you’re heavily involved with DRIF, which is the Digital Rights and Inclusion Forum. Could you share with us sort of your take on this concept of stakeholder engagement and maybe any outcomes from that conference that may be relevant to our discussion today?

Thobekile Matimbe: Thank you so much. So hi, everyone. I’m Thobekile Matimbe, and I work for Paradigm Initiative, which is an organization that promotes digital rights and digital inclusion across the African continent and within the global South. I serve as PIN’s Senior Manager for Partnerships and Engagements. So this is a very important conversation around stakeholder engagement. And I’m happy that Richard was able to unpack the UN Guiding Principles on Business and Human Rights and what they say with regards to corporate responsibility, which is something that is critical. And one of the key things, you know, that points towards adherence to corporate responsibility is obviously stakeholder engagement, which is critical to ensuring that we have better products out there that also take into consideration human rights. So as I reflect on that topic of stakeholder engagement and better products, I think it is important for the private sector, within their quest for due diligence, to think about what stakeholder engagement looks like. And from where I’m sitting, one thing that I’ll echo is the importance of meaningful stakeholder engagement and not just tokenistic stakeholder engagement, where, you know, it’s just sort of like ticking the boxes. How can engagements become more and more meaningful? Thinking about it, it’s so important for companies to think about how they can meet the community where they are, as opposed to, perhaps, scheduled meetings that come in once off, probably as mere transactions. I think Richard just mentioned transactional engagements. But more proactive, you know, strategies to actually meet the community where they are. And that’s what DRIF is. The Digital Rights and Inclusion Forum is a platform that Paradigm Initiative hosts annually. 
And we’re looking forward to hosting the 12th edition in Lusaka, Zambia next year, from 29 April to 1 May. But what happens at DRIF, which is the acronym for the Digital Rights and Inclusion Forum, is that we have multi-stakeholder engagements where we have different actors coming into the room to discuss trends and developments in the digital rights space. We have governments come in, we have civil society organizations, the media, and technology companies as well. But we’ve not seen as many companies coming on board to engage with the community. This year alone, we held the Digital Rights and Inclusion Forum in Accra, Ghana, and we had almost 600 participants there, from around 40 countries, not just in Africa. And we had attendees from Africa and global South spaces as well. So it’s a really rich platform where any product designer would want to be, to be able to engage and interface with the community and discuss products. But thinking about it, the key players really, with regards to better products, would be those who use the products, and that’s why I’m saying that it’s important for companies to think proactively about engaging with those who use their products, to be able to flag what users think about when they are designing products and how they can improve those products, and what better place than platforms where there are different stakeholders that can input into the design process of technology. I would also highlight that one critical thing in the design process is obviously the do no harm principle, especially in the context of human rights and who are those bearing the brunt of bad products that are unleashed on the market. 
It’s the users of those technologies, those who are probably marginalized groups, those who are minority groups, and their voices can only be heard in spaces where human rights are discussed, even discussing as well persons with disabilities and what their challenges are. And this is why I think it’s important for more and more companies to find themselves in platforms where such conversations are happening, and DRIF is one such platform. And I think one key thing when we’re looking at policies themselves, maybe community standards, for instance, if we’re looking at social media platforms, we’ll find that they come up with community standards, and it’s always important to circle back to the community and say, this is what we have, is this still fit for purpose? Because technology does not wait, it’s fast evolving. So it’s important as well to always have that interface with the community proactively. And I’ll give an example: for us as Paradigm Initiative, just recently we had a very interesting engagement with one telecommunications company that reached out to us after seeing one of our reports that we had done on surveillance in Africa. They were so keen after they laid their hands on this research, and they literally reached out and requested a meeting with us, which was a proactive action as opposed to a reactionary stakeholder engagement process, where perhaps we would have had to raise it with them and say, look, there’s this challenge that we’ve seen, this has happened in this country based on your product. But they were proactive in terms of reaching out to us and saying, let’s have a conversation. And it was, I would say, one of our best engagements with the private sector, especially around community standards. And I think, as policies are being developed by different private sector actors, it’s important to always figure out: where is the community that uses our product? Where can we get to? 
How can we reach the community to be able to get feedback on what we are churning out, so that we strengthen what we develop and we put out something that is good and robust and rights-respecting, mitigating as well human rights impacts? It’s also important to reflect on some of the outputs that have come from the Digital Rights and Inclusion Forum. Every year we come up with community recommendations, and we gather these from the people who attend the Digital Rights and Inclusion Forum from underserved communities across the global South, and they give input. And I think for the private sector, what has been clear is the need for engagement around policies and how they are developed to better strengthen security and safety. When we’re talking about trust and safety as well, that is something critical in the context of products and how they can better serve the users themselves. One thing as well that I would highlight that has also come up is the importance of having policies that ensure that vulnerable groups are not left behind. So you have your human rights defenders or your media who feel that sometimes when policies are being developed, they are not really addressing some of the lived realities that they face. So I think reflecting more on the do no harm principle is something that I really want to echo. It’s something that is really important, and it’s actually something that should be embedded at every point of the product design process. So it’s really critical that we continue to have this conversation and also hear from colleagues within the private sector with regards to their views around proactive stakeholder engagement, as opposed to stakeholder engagements that are reactive or just a ticking of the boxes, and what they are doing to ensure that they are able to meet the community where the community is. 
And it’s something as well that I’ll highlight even as I conclude my reflections: due diligence demonstrates corporate responsibility and is important primarily as a corporate practice towards better products that themselves respect human rights, and I echo the importance of mitigating human rights impacts. So I think I’ll leave it at that for now, and I’ll post in the chat as well the link to more about the Digital Rights and Inclusion Forum. And currently we actually have a call out for session proposals. So that’s a good opportunity for those in the private sector who would want to engage around their products or discuss more about them with the community to consider being at the Digital Rights and Inclusion Forum in Lusaka, Zambia next year, so that we continue to have meaningful, proactive stakeholder engagements.

Jim Prendergast: Great, thank you very much. Thanks to both of you. Now we’re gonna turn the perspective a little bit away from product development to policy development. You know, Fiona, who’s sitting across the table from me here in the room, you spent a long time at the Department of Commerce. I’ve known you for a long time, and you were heavily engaged in stakeholder engagement. I think the U.S. government has been a leader in that aspect. So can you share with us sort of what you found worked with stakeholder engagement when it comes to developing government policies, and maybe what didn’t?

Fiona Alexander: Sure, happy to. And let me turn one ear off. Maybe I’ll take both off. It’s hard to hear yourself when you’re talking. So thanks, Jim, for inviting me, and to everyone remotely, you’re missing a beautiful venue. Sorry you’re not here to join us in person for today’s conversation. But as Jim mentioned, I was at the Department of Commerce in the US government for about 20 years. And in terms of the conversation for today about better policy through stakeholder engagement, I think it’s important to note, at least in the US government system, there’s a couple of different ways and processes that are used. So for regulation and under our regulatory regimes, our legislature will pass a law, but our independent regulators or other parts of the government will actually do a lot of stakeholder engagement to produce the specifics of how a law is implemented through regulation. And we actually have a pretty prescribed process for that through the Administrative Procedure Act, where a particular agency will get an assignment, they’ll have to put draft rules out, or they’ll do a notice of proposed rulemaking. And there’s a pretty formulaic 45-day or 90-day stakeholder feedback period, that kind of stuff. That’s on the regulatory side in the United States, and that’s across sectors. So it’s not just technology sectors; all of our sector regulatory approaches work that way. Where it becomes a little bit more flexible and a little bit different is with respect to broader policy setting. And the agency that I worked at in the Department of Commerce, NTIA, is a big proponent, and has been historically, of the multi-stakeholder model in places like the IGF and things like that. But we also talk about government as a convener. So there’s the idea of similarly seeking public input or stakeholder engagement on what should be the priorities and policies of your office or your administration. 
Sometimes you do a public meeting, sometimes you do a notice of inquiry and you ask for written feedback. And the outcome of these efforts is really government policy setting or government priority setting, and it impacts what the team does and how advocacy happens across different parts of the world or in bilateral engagement. But then there’s also government as a convener in terms of actually trying to set policy or participate in policy. And I had the great experience of being involved in and responsible for the US government’s relationship with ICANN. So I was very much involved in the IANA Stewardship Transition, which is probably one of the largest examples of a multi-stakeholder decision-making process, as opposed to a multi-stakeholder consultation process. And in that regard, we were sort of instrumental in setting some of the key foundational principles, participating in the process, and actually evaluating it. But something that’s probably not as well-known in this environment is that at the time, NTIA actually tried to deploy a multi-stakeholder decision-making process domestically, and it was much more challenging, actually, than it was globally. And the example I give is we were trying to implement some sort of baseline privacy rights without congressional legislation. And in the absence of that, we tried to convene stakeholders and actually said, okay, what should we be talking about? What do you all wanna talk about? And what policies do you all wanna set? And I will say the very first meeting of that was very strange for a lot of people, because they were much more used to what I described at the outset under the Administrative Procedure Act, where government comes in and says, here’s the particular problem we’re trying to solve, here’s some of our initial thinking, what do you think? In this case, we were like, nope, nope. My boss at the time said, we’re gonna let the stakeholders decide what they wanna focus on. 
What rules did they wanna set? And those processes were much more uneven. Some yielded specific, you know, voluntary codes around mobile app transparency, but a couple of those stakeholder processes actually fell apart. And it was, you know, a learning experience, I think, for the team, but didn’t yield any actual policy outcomes, because the stakeholders themselves didn’t have a particular focus that they wanted to talk about. So again, I think when we’re talking about better policies through stakeholder engagement, some of the lessons learned, at least from my experience, depending on how you’re handling it and setting aside again our required regulatory approach: if you’re going to try to deploy a multi-stakeholder process or if you’re going to try to do stakeholder engagement, it’s better when you have a targeted question you’re asking people, just like when you’re developing a particular product. If you’re developing a particular policy and it helps people focus, that tends to be a little bit more useful. At least in the governmental sense, there’s got to be political will to actually want to follow this approach, because there’s a lot of people that will challenge the approach, and there’s a lot of people that, when they don’t get exactly what they want from the approach, will try to go around you or go to other parts of the government to get what they want. So strong commitment and political will is an important thing. You’ve got to also, as someone else mentioned as well, make sure it’s not just a check-the-box exercise. You actually have to always be talking to people. It can’t just be, okay, I have this particular problem, I’m going to talk to you now. You’ve got to build relationships and you’ve got to sustain the relationships and you’ve got to actually keep working with people so that you understand each other and can talk. There’s also got to be enough resources. 
Not just in the sense of stakeholders being able to participate, which can be a challenge if you want a broad range of stakeholders, because not everybody’s resourced the same. The same is true of governments. You’ve got to actually have enough staff and enough people and resources to do them. And then again, I go back to, at least in my experience, that better policy through stakeholder engagement has occurred when the questions have been a little bit more focused and the problem set has been a little bit more narrow. When we have a broad problem set, I think it lends itself to inertia sometimes, and it’s hard to get past some of the different competing perspectives. The other thing that helps is having a deadline. A clear deadline drives people to particular outcomes. And that was kind of my takeaway from my experiences. And maybe I’ll end there and keep the conversation going.

Jim Prendergast: Yeah. Thanks, Fiona. And, you know, as you were speaking and giving some of your best practices, I could see Charles reacting on screen, you know, deadlines and political will. So let’s flip it back to product development. I see you reacting to a lot of what Fiona says. Why don’t you share with us some of your experiences at Google in the product development lifecycle and this engagement process that you’ve undertaken?

Charles Bradley: Hi everyone. I’m Charles. I’m the manager for trust strategy on knowledge and information products here at Google. So, just a bit of context of what that means. Knowledge and information products are our Search, Maps, News, and Gemini products. So, anything that connects people with information, rather than our hardware or cloud work. And manager of trust strategy, well, our team’s role is to shape our products and our product strategy so that we continue to build trust with users. It’s a department that was built about three or four years ago. And a fundamental part of that is our stakeholder engagement. We built a program at Google called our external expert research program, which is all about ensuring that we get meaningful expertise into the product development lifecycle in a company that’s moving at a million miles an hour at all times. Having been on the other side of this conversation for many years now, I totally understand some of the challenges that have been raised by my fellow panelists. I was one of the stakeholders who was fatigued about being asked the same questions by different companies over and over. I was also one of the stakeholders who would come to consultations and be like, I have absolutely no idea what you’re talking about. You’ve been thinking about this question for three years and you’re asking me to split a hair on something in 15 minutes. Maybe a bit of context would have been helpful and you could have helped me understand the problem space a little bit more. So, I think that might be why I was hired in the first place: to try and bring a bit of that understanding from the stakeholder perspective into the product development lifecycle, which at Google is run by product managers and engineers who are trying to build and ship products to millions and billions of users. 
So when we come along and say, hey, we need to be speaking to a wider range of expertise, often we get flags raised that that’s going to slow us down, how do we get to product-market fit faster, etc. So the program was built as a way of showing that if we do this right at the beginning, our products will be more successful, and we’ll build greater trust when they launch rather than having to build that over time. And I want to talk about two examples from 2024, which has been quite an exciting year for us in this space. Firstly, Circle to Search. Circle to Search is a new feature available on Android, where on any surface on an Android device, you can long-hold the bar at the bottom and circle a bit of your screen, and that will send a search up to Google Search. Why is this useful? Well, people are finding information in many different ways. They’re looking for access to information not just by coming to Search directly anymore, but coming from different platforms. And we thought it was a great way of meeting users where they’re at. So if you’re on a video somewhere, or you’re on some other piece of content, and you want to know a bit more about that, why can’t you just circle it and off you go? Well, there were a number of key risks to launching this product, including some of the privacy risks you could imagine associated with it. So our product manager, who was leading this, is very familiar with some of these risks and forced an opening in the product development lifecycle to ensure that we went out and got expert feedback. And we got expert feedback through a number of one-on-one consultations to start with, thinking through what Richard was talking about in terms of formats. The format of engagement has been very important to ensure that we can get direct and specific feedback from individuals as well as group feedback. 
So we went out and spoke to dozens of experts in human-computer interaction, as well as privacy and human rights experts. And after a few one-on-one engagements with these experts, we also brought them together in a group setting. And we came back with five key themes, which actually led to amendments to the product. So the first issue that we heard was, how do you prevent unintentional feature activation or sharing of data? If you don’t want to use this feature on your phone, how can we stop it from unintentionally activating and sharing data? So we ensured that there was explicit user action to launch and actually activate the feature, rather than it being auto-on. And we also provided access in the search itself to delete that search, because we wanted to make sure that people had the closest control to deletion. We also heard, how do we ensure that users can access the controls over this information as well? So we integrated a delete-the-last-15-minutes-of-search control, which is something that we’re trying to do more broadly across a number of our products. We understand that deleting your whole search history might not be what you want to do, but you may have searched for something that might be a present for someone, or it might be a more sensitive query, and you want to be able to quickly delete your last 15 minutes of searches. So we integrated that as a feature. Meaningful disclosure: so what on earth is going on? How do we ensure that there’s meaningful consent to what’s happening with this product, and how do we educate users? So in the first launch of the product, we provide a much clearer plain-language explanation of what this product is and how it works. And we provide much clearer control and consent over how we’re using your data. One risk that also came up, the fourth of the five points, was around facial recognition technology. We use visual search a lot in this. This is like our Lens product you may have come across before. 
And people were very worried that we were going to be using biometric technologies for this. We don’t use biometric technologies. We’re using a similar-image-to-image matching service. So if a picture is available on the open web, it’s indexed, and then we’ll be able to find you a similar copy of that. But we don’t know who that person is, and we’re not taking a photo of Charles Bradley and saying, oh, I know that’s Charles Bradley, let me return you other photos of him. It’s purely on a visual match-to-match basis, and we’re explaining that to users. And then, what information are we using and what data are we storing when this product is being invoked was one of the key points as well. So the whole point of the circling part of it is that users can precisely select the part of their screen that they want to search for. Nothing outside of that selection is used or collected in the process. And what we’re doing here is, if it’s an image, we’re turning that image into text, or we’re using the text to create a search, and that text is stored as part of your search history, but no other information is stored. So we’re not taking the photo and storing that photo against your account or anything else, we’re just using the text that we’ve generated from it. So these were five really critical things that came up through these engagements, and I think the team had a good sense of some of these issues but not the level of priority of some of them. And through the stakeholder engagement we were able to more clearly develop escape hatches or solutions for users which met users’ needs, providing control front and center in the product as you’re using it, rather than back in a setting or some account profile, which is often how products provide you with control over search history and everything else. 
So it really fundamentally changed the way in which we launched this product and has resulted in a really good launch for us and a product that’s been used quite a lot. It’s just one example where we’ve done a very specific product development engagement, and I think, to some of Fiona’s points, we had a very clear deadline, we had a very clear problem statement and scope: we were looking to launch this product and we were looking for ways in which we could build it more sustainably, more suitably for users. Another example, which I’ll use just to show some of the other strategies that we have, is our work on AI Overviews. So now if you go to Search, you may see an AI Overview, where we generate a response to your query using generative AI, and then below that provide you with ten blue links. This is something that we launched about 14 months ago in beta in Labs, which is our beta opt-in service on Search, and have recently rolled out to over 100 markets. But when doing so, we knew that there were going to be a number of broader challenges around sensitive queries. So for things where there may not be a straight-line answer or a factual response, obviously, we apply our product policies to ensure that we don’t trigger an AI Overview on something that is policy-prohibited. That would be things about illegal activity or hate speech, etc. But there are obviously a number of gray-area queries where we could, with Google’s voice and our point of view, provide a less than suitable answer. And here we didn’t really have a very clear sense of what the product strategy should be and how we should do this, because it’s such a new and evolving space. So we built a panel of experts that we are now in the second year of engaging. 
And we work with them on a monthly basis, either through one-on-ones, online virtual calls or in-person meetings, and they are giving us much higher-level advice around the product strategy and direction, as well as providing clear guidance when we have quite specific questions to ask them about whether we should respond in a particular way, or what frameworks we should be using to train our models to respond. I think the benefit of this has been that it’s such a complicated space that by asking an expert in one or two one-hour calls, we would be really underutilizing their expertise. There was quite a large ramp-up to build a clear and consistent understanding amongst our experts of what our ultimate challenges were with this. Like, how is the model actually being trained, and what different strategies do we have within our model and product launch strategy at our disposal? And then going iteratively across a number of these, looking at different verticals of sensitive queries, stack-ranking them and working through some of those strategies has been very, very fruitful. And I know that the experts we’ve worked with in this program have found it very rewarding, because not only can they see some of their work directly being integrated into the product and being launched, and we’ve now had many billions of queries trigger Overviews, but also they get to learn about the different strategies that we’re focusing on. Some of our work here is actually based on their expert research, but they’ve never had the opportunity to integrate that within a business context. So those are two things within the program. We’ve now done about 30 studies this year, and we’re focused on a number of areas for next year. As always, we can do a better job, but we think we’re moving in the right direction to provide clarity over how we integrate experts into product development. Pass it back to you, Jim.

Jim Prendergast: Great, Charles. Thanks a lot. You know, it’s really interesting to see how the product development life cycle did take into account the outside expertise and feedback. I can only imagine your engineers looking at you saying, are you kidding me? You want to do this on the front end? But as you said, it probably saved time and a lot of aggravation in the long run. So for those in the room, we’re going to be moving to discussion and question and answer. We do have a couple of microphones up here. I can play Phil Donahue, for the Americans who understand that reference, and move the microphone around. There’s one up here on the table, but I’ll sort of get the conversation going. Richard, I know you engaged with Avri Doria in the chat about a five-step approach toolkit that you’ve developed. For those who aren’t in the chat, do you want to give a quick overview of what that is and how folks might be able to access it?

Richard Wingfield: Yes, absolutely. So the approach is linked to in the chat, but you can also find it by using the search engine of your choice and looking up BSR stakeholder engagement five-step approach. In short, it’s a toolkit that we’ve developed which helps companies think about how to approach stakeholder engagement. The steps are, first of all, developing a strategy: basically setting out what you want to do as a company in terms of your vision for stakeholder engagement and your level of ambition, maybe reflecting on existing stakeholder engagement. This is obviously something that will vary depending on the resources of the company and what it wants to achieve through stakeholder engagement. Secondly, stakeholder mapping. When I was talking earlier, I mentioned the breadth and diversity of stakeholders that exist, or people who might be affected by a company’s products or policies, and so this means undertaking a mapping of which groups or organizations or individuals you need to speak to, and where those relationships might already exist. Third, preparation. And this is coming back to some of the points that Charles and others have made around making sure that stakeholders are able to engage in that process with confidence and with an understanding of what’s happening. So that’s everything from building those relationships, thinking about what the logistics might be for those meetings, and preparing and capacity building beforehand so that people can come to them and genuinely participate in a helpful way. The fourth stage is the actual engagement itself. And we provide some guidance there on how to manage difficult situations, for example making sure that all voices are heard, and dealing with some of the barriers that might exist to stakeholder engagement relating to language or accessibility.
And then fifth, setting out an action plan as to how you’re going to use the inputs for that engagement, either to make changes or just to make sure that the people are kept in the loop about what’s happening. So those are the five steps of the approach and the toolkit is available via the link or just by searching BSR, Stakeholder Engagement Five-Step Approach.

Jim Prendergast: Great, thank you, Richard. Thobekile, a question for you. For many companies, Africa is an opportunistic market. It’s a growing market. It’s a place where they want to do business, but there are unique challenges to it as well. What would you say are some of the challenges that companies wanting to pursue stakeholder engagement across the continent might face, and what are your recommendations for how they might overcome them?

Thobekile Matimbe: Thanks. Africa is a great place where there is room to engage with civil society actors on what they’re facing and what the challenges are. But I think the challenge has really been having more willpower from private sector actors to actually want to meet with the community on the ground to be able to engage on key challenges. Like I mentioned, we host the Digital Rights and Inclusion Forum, and there are definitely companies that we know will be there: we’ll have Google in the room, we’ll have Meta there to engage. But we feel that there’s a whole lot of other private sector actors that we would want to be in the room. We really have had several actors that have been able to come through and engage. But I think what is important to highlight is that the environment we operate in on the African continent is marked by repressive governments that obviously have their own calls on companies, and they might want to make certain orders on companies as well. And that’s the kind of challenging atmosphere and environment that companies face when they come to the African continent willing to engage. But I think there’s a way around it. I think that proactiveness in terms of stakeholder engagement will ensure that even when there has been a challenge, and companies have been forced to do certain things or have not been able to respond effectively according to their policies in certain situations, they can still have a space to engage with actors on the African continent to say, what else can we do, and to support other forms of strategies that civil society actors might be using to address some of the challenges that we face. So with regards to stakeholder engagement, there’s a willing civil society space, and it’s open, because the ways we’ve been engaging and the formats of engagement can always be adapted to context.
So there is room to actually engage. I think what we need to see is more willpower from private sector actors to actually meet the community where the community is.

Jim Prendergast: Great. Thank you very much. And that’s good insight and good advice. I’m going to look at Fiona, and Charles virtually. Fiona, from your perspective as a former government official speaking to other governments, what one key piece of advice would you give governments who are looking to engage in stakeholder engagement? And Charles, what piece of advice would you give to other private sector entities about going down this path?

Fiona Alexander: So I think I might just say that as I listen to others speak, and Charles in particular, I think it’s easy or even natural, right? If you’re the decision maker, whether you’re a company making a product or a government making a policy, it’s kind of natural to be like, I know best. I’m going to sit in my office and talk to my team and I’m going to decide. And that’s just a natural, I think, human way of thinking. It’s really important, though, to take a step back and realize that even though talking to people might take more time, and if you’re doing a multi-stakeholder process it probably is a little bit messy, at the end of the day you’re going to get a better product or a better policy, and you’re going to have buy-in, if you actually take the time to talk to people in a meaningful way. And I think that’s my advice to people: actually take that step back. And I don’t have my computer in front of me, so I don’t know who’s on, but you mentioned that Avri was on, and I don’t know why, but it makes me think that maybe stakeholder engagement or participation almost needs ambassadors to make the case, to convince people that this is actually a better way to do policy and a better way to make products. But I think it’s natural to be like, you know, I know best, I’m just going to make my own choice. And I think we realize that the outcome of that isn’t always the best.

Jim Prendergast: Great, Charles?

Charles Bradley: Yeah, I mean, I sort of agree with all of that. I’d build further on the knowing-best point. I think sometimes people do know that they need to do it, but there are just so many other pressures on their time, and particular skills are needed to be able to do this. I think there’s a confidence issue as well with some people who are very familiar with engaging with different internal stakeholders, but not external stakeholders, and a concern about what they might hear or how they might get that feedback. I totally agree with Fiona’s point around champions. I think the smartest thing that my boss did was turn this into a formal program at Google with high visibility and structure to it, so that we could build champions underneath that program. And champions not just who are staffed to this program, but also champions in different parts of the business who have utilized the program and delivered better products. We get all sorts of challenges from other product areas, as Jim alluded to, of, you want me to do this beforehand? Why aren’t we doing this down the line, to see what actual risks or harms there are, rather than foreseeing them? But now that we’ve got a bunch of case studies within a formal program with a bunch of ambassadors, the inbound requests for this have really started to appear. And we have a number of expert engagements underway at the moment which came to us saying, oh, I really want to make sure that my product lands in the right way, and I know that you’re a team that can do that, but can also do it at pace and within the infrastructure of the business. So it’s internalizing it, creating a formal program, and then building champions who can drive up demand: that would be my advice.

Jim Prendergast: Great. Thank you. So you’ve created almost like a little cottage industry within Google on how to engage on this, so maybe a profit center someday. So turning to the audience, I know we have a question here. If anybody else has a question, let me know. I don’t have eyes behind my head, so I don’t think we have any there. To be fair, just please identify who you are. And if you have a question directed to one of our panelists, just let them know. Thanks.

Lina Slakmolder: Thank you, everybody. My name is Lina Slakmolder. I work with Search for Common Ground, an international peace building organization, but I also co-chair the Council on Tech and Social Cohesion, which brings together technologists, peace builders, academics and policy influencers to influence tech design for social cohesion. And listening to this panel, each of you is saying things that are true, but I feel like there are some other truths that also need to be put on the table, and I’m curious to hear your thoughts about those, right? And I want to say, Charles, that, you know, just dovetailing from where you left it, the rest of the industry has completely depleted trust teams. It’s extraordinary that you’ve built it up, and that you just said that your senior leadership is actually trying to incentivize this. Because the first thing I want to say is that there is actually a disincentive for this kind of engagement, even when organizations like Thobekile’s and others are bringing forth the harms to these companies, right? They’re basically saying it’s not going to be prioritized over profit. We’re looking for growth. We’re looking for engagement.
And to you, Richard, you know, I wonder if you also feel like there is a real changing narrative in the business and human rights space when it comes to big tech today, and that the things that are really leading to most of the changes, again not in any way excluding, Charles, those excellent examples of how you’ve made change, but in most cases the changes that the tech companies are making in their products are due to litigation, fear of fines, reputational damage, and things like that. And somehow, even with really good multi-stakeholder-ness, the companies are not necessarily interested in making these changes. And I’ll go one step further: in Africa, there are places where these companies are even trying to damage the reputations of organizations that are pointing out the harms of these products, right? They’re using money to fund other groups that may be saying what they want to hear, and they’re actually damaging the other organizations that are being more critical. So even with multi-stakeholder engagement, there’s something that’s going really wrong when we look at big tech. And it’s why, and I’ll end with this, we still see a number of products, whether it’s the chatbots, whether it’s the notify things, a whole range of products coming out on the market each week that are not doing an upstream test on safety, that are not being transparent. And without the transparency, again, what kind of stakeholder engagement are you really looking at, right? When you ask people for the consultations, you’re not subsidizing them to give you all those consultations, right? So again, I’d just love to hear from the panelists: are we recognizing that we’re at a different time here? And even with all the good five-step skills and multi-stakeholder consultations, there’s still a real issue on the table here.

Jim Prendergast: All right, who wants to go first on that one? Maybe Richard, do you want to take it from the high level?

Richard Wingfield: Yeah, I’m happy to. I’m hesitant to generalize too much by saying that all technology companies do or don’t do something. I think there is huge variation in terms of maturity and attitude towards the importance of being a responsible business, with some taking that responsibility a lot more seriously than others, for sure. But I can understand why there is a feeling that, overall, the sector still hasn’t done enough on this and still isn’t doing enough. And I think one of the really challenging things, and I don’t have a solution to this, is that meaningful stakeholder engagement takes a long time and requires organizations to be brought in at a very early stage. If you’re a company with potentially thousands of different products that might be developed, some of which will never make it to market, you often don’t know until a relatively late stage which ones are ultimately likely to launch or not. And by that point, it’s very difficult to then bring stakeholders in, unless you want to exhaust them by constantly asking them about all of the different options that there might be at every single stage. And of course, technology moves so fast. And when you’ve got companies, and we think about generative AI and the rush for companies to make sure that they are leading on this as a new technology, bringing in stakeholder engagement slows the process down. That’s not to say that we shouldn’t do it, but I’m just saying that the technology sector faces unique challenges when it comes to meaningful stakeholder engagement, because it does potentially run contrary to a number of other business interests. So the solutions to that, and none of these are silver bullets: one is regulation.
And we’re seeing more regulation, particularly in the EU, which requires companies to engage with stakeholders as part of their risk assessment processes: things like the EU’s Digital Services Act, the AI Act, and the Corporate Sustainability Due Diligence Directive. Second is to make it easier for stakeholders to become engaged. And that might mean more sectorally focused engagement. So for example, at BSR we’re now doing a human rights impact assessment into generative AI which covers the sector in its entirety, so not just individual companies but working collectively, to try to reduce the demands on stakeholders. But there are some of those, as you say, disincentives that are really hard to work around, for sure. So I’m not going to pretend there isn’t a problem there. But I do think that there is still huge variation in terms of the approach that different companies take, with some doing it better than others, for sure.

Jim Prendergast: Thobekile, anything to add?

Thobekile Matimbe: Thanks. I would just say that, from my earlier reflections, what I mentioned about willpower on the part of companies to engage, and not just engage but meaningfully engage, is something that we as an organization are still looking forward to experiencing more of. And more specifically, engaging where the community is, especially at community convenings, is something that we would love to see gain more traction. One thing that I would say is that what we have experienced, especially with engagements with private sector companies, is that with those few that are willing to engage with us and that we engage with, it’s usually engagements that take the form of side meetings, closed meetings. It’s not out there where we are engaging with the broader communities that we represent or that we support and stand for. It’s more of, OK, who are we going to engage with on the continent? OK, there’s this organization and that organization. But proactive stakeholder engagement is looking further than that and saying, hey, Paradigm Initiative, we are working on the African continent, we would like to meet the community: where can we meet the community? And we open it up to broader actors on the continent. I think there would be much more enriching conversations around the challenges that communities are facing with regards to products, as well as better inputs into how to shape policies, even for big tech companies. So what we would definitely love to see is more interest in engaging, especially meeting the broader community where it is, and not just cherry-picking those organizations so that, you know, if we say we were in a room, we engaged with Paradigm, then tick, we’ve done our part.
But we need it to be more meaningful and be able to, as well, address the concerns of the broader community on the African continent.

Jim Prendergast: Thank you. Charles, obviously, you can’t speak for the industry, but what’s your take from the Google standpoint?

Charles Bradley: Yeah, I mean, I’m glad that it’s been raised. And if it was very easy, and if it was all going swimmingly, we probably wouldn’t have our jobs trying to do this. I think there are two ways of seeing some of the harder government actions over the last few years. The increase in regulation has been welcomed and important to ensure that decision-making about the way in which products are developed and deployed to users is much more democratic and organized by nation states and regional bodies. It’s been really important to see, and we’re really encouraged by, the stakeholder engagement that the different national governments and regional bodies have done there, and you also mentioned some of the fines. Our view of this has been that we can either continue to wait for these fines and for more regulation that may or may not be fit for purpose, or we can engage more with stakeholders to ensure that our products are more aligned with expectations and the values that we’re trying to embody. And that’s actually been a strategy that’s got a lot of traction at a leadership level. And as you say, that’s not the case for every company, and we are very fortunate to be able to take a long-term view on this. But it has been part of the business’s DNA for a long time to engage with stakeholders and bring that expertise into product development. I think we’re just getting much sharper at doing so in a more meaningful way, i.e. actually showing impact at the product level. And there are hundreds of success stories where products have never, ever been launched because we have spoken to external stakeholders and experts who have given us very clear guidance on what the risks would be, which were way beyond thresholds that we would be able to accept. But internally, we didn’t see those issues; we didn’t pattern match them or understand the trends there.
So I think there are different viewpoints from different businesses. Obviously, our viewpoint is that, with the increase of harder government action in this space, we’re going to end up with a greater need for stakeholder engagement and for building trust and safety into our products, rather than the opposite, where people are racing to get things out the door to try and find product-market fit.

Jim Prendergast: Thanks to all three of you. I’m looking around the room just to see if there are any other questions. I’m not seeing any hands. Oh, Fiona.

Fiona Alexander: I might just respond a little bit to this one as well, because I think it’s important, and I get the perspective that you’re bringing, but there’s no universal solution. And there’s not a single path that will fix all of these things. All companies are slightly different. All products are very different. And not all products or policies are equal in terms of their purpose and their impact. And this is why frameworks, like the one that I think Richard mentioned, can be useful: because you can talk about how to implement those frameworks, how you can incentivize action, and how to get to that. I will just say that the idea that regulation is going to solve all these problems is, I think, slightly misguided. We’ve seen a lot of regulation emanate from Brussels in the last five years, and I think it’s unclear yet what the implication of that regulation is going to be: how damaging it’s going to be, how effective it’s going to be, or if it’s going to be good. So I think the jury’s out on all of that. If GDPR is any example, I think that’s probably not going to help on the innovation side, at least coming from Europe. But again, we’ll wait and see. And I think a lot of this culturally depends on where you come from, from my perspective a policy and regulatory perspective. But even, I guess, from a company or engagement perspective, this gets back to ex ante or ex post, right? Do you deal with something once it’s out and there’s a proven problem? Or do you try to map out all the possible potential things beforehand and then decide whether to do it or not? And I think that probably goes for products as well. You have to decide what your risk factor is and what you’re willing to do and not do. And a lot of that, I think, comes from culturally where you are. Within Western philosophy, a U.S. approach versus a European approach: they’re very different.
They’re not the same. And I think companies have that same dynamic and where they fit in that. Thanks.

Jim Prendergast: I’ll just read out a comment that Avri put into the chat: frameworks plus impact assessments. So probably the combination of the two will yield some effective outcomes, for sure. I don’t see any questions online or in the room. So maybe just a quick sentence or two as a wrap-up to bring us to a close. I know everybody’s got busy schedules, and if I can give you 10 minutes of your time back, I’m sure you’d appreciate it. You could probably use it to get in line for the restrooms before the rest of the sessions wrap up. So let’s go to you, Charles. Why don’t you kick us off?

Charles Bradley: Yeah. I mean, I think this is such an important topic, and one where we need to move to very specific good practices and frameworks that can be used across industry. I’m really glad that Richard and the BSR team are doing that for the industry, and that we sort of try to bring the whole industry along on this journey. It’s not going to go away. It’s going to get more and more complicated. There are going to be more unintended consequences or unforeseen utilizations of new technologies as the pace increases. And I’m excited that this continues to be a space where we can learn from each other and build some sort of common understanding of how this is done well, so that our colleagues from civil society and academia are not being asked to provide input into things that don’t go anywhere.

Jim Prendergast: Thobekile, please.

Thobekile Matimbe: Thanks. Thanks a lot. I think my last reflection is really that, going forward, we are open as Paradigm Initiative to engage, as well as to connect product designers to the broader community on the African continent who are within our networks. And, of course, the Digital Rights and Inclusion Forum is going to be hosted in 2025, from 29 April to 1 May, in Lusaka, Zambia. It’s a multi-stakeholder platform, and we’re looking forward to having at least 800 stakeholders from the Global South in attendance. So it would be a good platform to continue these conversations, and obviously a perfect platform as well for any policy consultations or product launches. It’s something that we look forward to, and we look forward to building lasting relationships with the private sector around human rights. So thank you so much for the opportunity.

Jim Prendergast: Great. Thank you. Fiona.

Fiona Alexander: I think my takeaway from all of this is that it’s important to always talk, to anyone and everyone, about the importance of getting stakeholder feedback. And I think we talk a lot about the successes of these processes, but the fact that sometimes it doesn’t work, or that a product doesn’t get released, we don’t talk about. So I think it’s equally important to talk about why things don’t work, and, if the outcome of the stakeholder feedback is not to release the product, to make that known. And I think the more transparent we can all be in all of this, the better it will be for everyone.

Jim Prendergast: Thank you. And Richard, do you want to finish it off for us?

Richard Wingfield: Yeah, I just want to say as well that, although there is still so much more to do, and it’s right that expectations increase and that demands on companies continue to push them to do better, we are a lot further advanced than we were 10 or 20 years ago in terms of this issue being on the radar of companies and in the sophistication of existing efforts. There’s huge variation still. There’s an awful lot more to be done. And I think some of the criticisms have been rightly called out today. But I do think it is something that companies are aware of and thinking about in a way that they weren’t 10-plus years ago. And there are opportunities there to use that, and to use other tools, to increase what we do. So I hope that things will continue to improve. But there is, as you say, still a lot more to be done.

Jim Prendergast: Great. Thank you very much. And I’d like to thank everybody who found our workshop room tucked over here in the corner, and also those who joined online. And to our speakers: it’s unfortunate you couldn’t all be here in person, but you were here in spirit and on screen. Once again, thanks, everybody, for joining us, and enjoy the rest of your week.


Richard Wingfield

Speech speed

174 words per minute

Speech length

3111 words

Speech time

1071 seconds

Stakeholder engagement is critical for responsible business practices

Explanation

Richard Wingfield emphasizes the importance of stakeholder engagement for companies to act responsibly and align with international human rights standards. He highlights that the UN Guiding Principles on Business and Human Rights explicitly recognize the importance of meaningful stakeholder engagement.

Evidence

UN Guiding Principles on Business and Human Rights framework

Major Discussion Point

Importance of stakeholder engagement in technology development

Agreed with

Charles Bradley

Thobekile Matimbe

Fiona Alexander

Agreed on

Importance of stakeholder engagement

Technology sector faces unique challenges due to fast pace of development

Explanation

Richard Wingfield points out that the technology sector faces unique challenges in stakeholder engagement due to the rapid pace of development. He notes that it’s difficult to bring in stakeholders at an early stage for all potential products, especially when it’s unclear which will make it to market.

Evidence

Example of generative AI and the rush for companies to lead in new technologies

Major Discussion Point

Challenges in implementing effective stakeholder engagement

Agreed with

Charles Bradley

Thobekile Matimbe

Fiona Alexander

Agreed on

Challenges in implementing effective stakeholder engagement


Charles Bradley

Speech speed

153 words per minute

Speech length

2981 words

Speech time

1163 seconds

Proactive stakeholder engagement builds trust and improves products

Explanation

Charles Bradley argues that proactive stakeholder engagement leads to more successful products and builds greater trust upon launch. He emphasizes the importance of integrating stakeholder feedback into the product development lifecycle.

Evidence

Example of Circle2Search feature development and modifications based on stakeholder feedback

Major Discussion Point

Importance of stakeholder engagement in technology development

Agreed with

Richard Wingfield

Thobekile Matimbe

Fiona Alexander

Agreed on

Importance of stakeholder engagement

Internal pressures and incentives can work against stakeholder engagement

Explanation

Charles Bradley acknowledges that there are internal pressures and incentives within companies that can work against stakeholder engagement. He notes that product managers and engineers often prioritize getting products to market quickly.

Evidence

Mention of challenges in convincing teams to slow down for stakeholder engagement

Major Discussion Point

Challenges in implementing effective stakeholder engagement

Agreed with

Richard Wingfield

Thobekile Matimbe

Fiona Alexander

Agreed on

Challenges in implementing effective stakeholder engagement

Create formal programs with champions to drive engagement

Explanation

Charles Bradley recommends creating formal stakeholder engagement programs within companies. He suggests building champions across different parts of the business who have utilized the program and delivered better products.

Evidence

Google’s external expert research program

Major Discussion Point

Best practices for stakeholder engagement

Proactive engagement can help companies get ahead of regulatory pressures

Explanation

Charles Bradley argues that proactive stakeholder engagement can help companies anticipate and address potential regulatory issues. He suggests that this approach is preferable to waiting for fines or regulations that may not be fit for purpose.

Evidence

Mention of increasing government action and regulation in the technology sector

Major Discussion Point

Role of regulation and external pressure


Thobekile Matimbe

Speech speed

163 words per minute

Speech length

2324 words

Speech time

854 seconds

Meeting stakeholders where they are is crucial, especially in Africa

Explanation

Thobekile Matimbe emphasizes the importance of companies engaging with stakeholders in their own communities, particularly in Africa. She argues that meaningful engagement involves reaching out to broader communities rather than just select organizations.

Evidence

Digital Rights and Inclusion Forum (DRIF) as a platform for engagement

Major Discussion Point

Importance of stakeholder engagement in technology development

Agreed with

Richard Wingfield

Charles Bradley

Fiona Alexander

Agreed on

Importance of stakeholder engagement

Lack of willpower from some companies to engage meaningfully

Explanation

Thobekile Matimbe points out that there is often a lack of willpower from private sector actors to engage meaningfully with communities on the ground. She notes that many companies prefer closed meetings with select organizations rather than broader community engagement.

Evidence

Observation of limited participation from private sector at community convenings

Major Discussion Point

Challenges in implementing effective stakeholder engagement

Agreed with

Richard Wingfield

Charles Bradley

Fiona Alexander

Agreed on

Challenges in implementing effective stakeholder engagement

Differed with

Richard Wingfield

Differed on

Effectiveness of current stakeholder engagement practices

Leverage multi-stakeholder platforms like the Digital Rights and Inclusion Forum

Explanation

Thobekile Matimbe recommends using multi-stakeholder platforms like the Digital Rights and Inclusion Forum for engagement. She highlights that these platforms bring together diverse stakeholders and provide opportunities for meaningful dialogue.

Evidence

Details about the upcoming DRIF in Lusaka, Zambia

Major Discussion Point

Best practices for stakeholder engagement

Civil society continues to push for more meaningful engagement

Explanation

Thobekile Matimbe indicates that civil society organizations continue to advocate for more meaningful engagement from companies. She expresses openness to connecting product designers with broader communities in Africa.

Evidence

Mention of Paradigm Initiative’s willingness to facilitate connections

Major Discussion Point

Role of regulation and external pressure

Fiona Alexander

Speech speed

211 words per minute

Speech length

1937 words

Speech time

549 seconds

Stakeholder engagement leads to better policies and products despite taking more time

Explanation

Fiona Alexander argues that while stakeholder engagement may take more time, it ultimately leads to better policies and products. She emphasizes the importance of taking a step back and realizing the value of talking to people in a meaningful way.

Major Discussion Point

Importance of stakeholder engagement in technology development

Agreed with

Richard Wingfield

Charles Bradley

Thobekile Matimbe

Agreed on

Importance of stakeholder engagement

Cultural differences impact approaches to engagement and regulation

Explanation

Fiona Alexander points out that cultural differences influence approaches to stakeholder engagement and regulation. She notes that Western philosophy, U.S. approaches, and European approaches can differ significantly.

Evidence

Mention of differences between U.S. and European regulatory approaches

Major Discussion Point

Challenges in implementing effective stakeholder engagement

Agreed with

Richard Wingfield

Charles Bradley

Thobekile Matimbe

Agreed on

Challenges in implementing effective stakeholder engagement

Set clear goals and deadlines for engagement processes

Explanation

Fiona Alexander suggests setting clear goals and deadlines for stakeholder engagement processes. She notes that having a clear deadline drives people towards particular outcomes.

Major Discussion Point

Best practices for stakeholder engagement

Impact of recent regulations like GDPR is still unclear

Explanation

Fiona Alexander expresses uncertainty about the impact of recent regulations like GDPR. She suggests that it’s unclear whether these regulations will be damaging, effective, or beneficial for innovation.

Evidence

Reference to GDPR and recent regulations from Brussels

Major Discussion Point

Role of regulation and external pressure

Differed with

Richard Wingfield

Differed on

Role of regulation in driving stakeholder engagement

Agreements

Agreement Points

Importance of stakeholder engagement

Richard Wingfield

Charles Bradley

Thobekile Matimbe

Fiona Alexander

Stakeholder engagement is critical for responsible business practices

Proactive stakeholder engagement builds trust and improves products

Meeting stakeholders where they are is crucial, especially in Africa

Stakeholder engagement leads to better policies and products despite taking more time

All speakers emphasized the critical importance of stakeholder engagement in developing responsible and effective technology products and policies.

Challenges in implementing effective stakeholder engagement

Richard Wingfield

Charles Bradley

Thobekile Matimbe

Fiona Alexander

Technology sector faces unique challenges due to fast pace of development

Internal pressures and incentives can work against stakeholder engagement

Lack of willpower from some companies to engage meaningfully

Cultural differences impact approaches to engagement and regulation

Speakers acknowledged various challenges in implementing effective stakeholder engagement, including technological pace, internal pressures, lack of willpower, and cultural differences.

Similar Viewpoints

Both speakers emphasized the importance of structured approaches to stakeholder engagement, including formal programs, champions, clear goals, and deadlines.

Charles Bradley

Fiona Alexander

Create formal programs with champions to drive engagement

Set clear goals and deadlines for engagement processes

Both speakers stressed the importance of engaging with stakeholders in their own communities and contexts for responsible business practices.

Richard Wingfield

Thobekile Matimbe

Stakeholder engagement is critical for responsible business practices

Meeting stakeholders where they are is crucial, especially in Africa

Unexpected Consensus

Proactive engagement to address regulatory pressures

Charles Bradley

Fiona Alexander

Proactive engagement can help companies get ahead of regulatory pressures

Impact of recent regulations like GDPR is still unclear

Despite coming from different perspectives (industry and former government), both speakers agreed on the importance of proactive engagement to address regulatory pressures, while acknowledging uncertainties in the regulatory landscape.

Overall Assessment

Summary

The speakers generally agreed on the importance of stakeholder engagement in technology development and policy-making, while acknowledging various challenges in implementation. They also emphasized the need for structured approaches and proactive engagement to address regulatory pressures.

Consensus level

There was a high level of consensus among the speakers on the fundamental importance of stakeholder engagement. This consensus implies a growing recognition across sectors of the need for collaborative approaches in technology development and policy-making. However, the speakers also highlighted various challenges and nuances in implementation, suggesting that while the principle is widely accepted, practical application remains complex and context-dependent.

Differences

Different Viewpoints

Effectiveness of current stakeholder engagement practices

Richard Wingfield

Thobekile Matimbe

We are a lot further advanced than we were 10, 20 years ago in terms of this issue being one that’s on the radar of companies and on the sophistication of existing efforts.

Lack of willpower from some companies to engage meaningfully

Richard Wingfield sees progress in stakeholder engagement practices, while Thobekile Matimbe emphasizes a lack of willpower from companies to engage meaningfully, especially in Africa.

Role of regulation in driving stakeholder engagement

Richard Wingfield

Fiona Alexander

One is regulation. And we’re seeing more regulation, particularly in the EU, which requires companies to engage with stakeholders as part of their risk assessment processes

Impact of recent regulations like GDPR is still unclear

Richard Wingfield sees regulation as a potential solution to drive stakeholder engagement, while Fiona Alexander expresses uncertainty about the impact of recent regulations.

Unexpected Differences

Cultural differences in stakeholder engagement approaches

Fiona Alexander

Thobekile Matimbe

Cultural differences impact approaches to engagement and regulation

Meeting stakeholders where they are is crucial, especially in Africa

While not a direct disagreement, it’s unexpected that Fiona Alexander highlights cultural differences between Western approaches, while Thobekile Matimbe focuses specifically on the African context. This suggests a potential gap in understanding or addressing regional differences in stakeholder engagement practices.

Overall Assessment

Summary

The main areas of disagreement revolve around the effectiveness of current stakeholder engagement practices, the role of regulation in driving engagement, and the specific approaches to implementing stakeholder engagement across different cultural contexts.

Difference level

The level of disagreement among the speakers is moderate. While there is general agreement on the importance of stakeholder engagement, there are significant differences in perspectives on its current state, effectiveness, and implementation. These differences highlight the complexity of the issue and the need for continued dialogue and improvement in stakeholder engagement practices, especially in the rapidly evolving technology sector.

Partial Agreements

Both speakers agree on the importance of proactive stakeholder engagement, but differ on the specific approaches. Charles Bradley focuses on internal company processes, while Thobekile Matimbe emphasizes the need for companies to engage with broader communities in their local contexts.

Charles Bradley

Thobekile Matimbe

Proactive stakeholder engagement builds trust and improves products

Meeting stakeholders where they are is crucial, especially in Africa

Takeaways

Key Takeaways

Stakeholder engagement is critical for responsible technology development and business practices

Proactive and meaningful stakeholder engagement leads to better products and policies, despite taking more time

The technology sector faces unique challenges in stakeholder engagement due to the fast pace of development

There is significant variation in how different companies approach and prioritize stakeholder engagement

Regulation is driving more stakeholder engagement, but is not a complete solution to addressing concerns

Creating formal programs and champions within companies can help drive more effective stakeholder engagement

Leveraging existing multi-stakeholder platforms and meeting stakeholders where they are is important, especially in regions like Africa

Resolutions and Action Items

BSR has developed a five-step approach toolkit to help companies implement stakeholder engagement

Google has created a formal program called the ‘external expert research program’ to integrate stakeholder input into product development

Paradigm Initiative invited companies to participate in the upcoming Digital Rights and Inclusion Forum in Zambia in 2025

Unresolved Issues

How to balance the need for stakeholder engagement with the fast pace of technology development and market pressures

How to ensure stakeholder engagement is truly meaningful and not just a ‘tick-box’ exercise

How to address the ‘de-incentives’ that work against thorough stakeholder engagement in some companies

The effectiveness of recent regulations like GDPR and EU AI Act in driving responsible technology development

Suggested Compromises

Using sector-wide engagement processes to reduce the burden on individual stakeholders and companies

Focusing engagement efforts on the most vulnerable or at-risk communities to prioritize limited resources

Balancing proactive engagement early in product development with targeted engagement on specific issues later in the process

Thought Provoking Comments

The UN guiding principles on business and human rights make regular and explicit recognition of the importance of stakeholder engagement and meaningful stakeholder engagement when it comes to companies behaving responsibly.

Speaker

Richard Wingfield

Reason

This comment introduces a key framework for understanding stakeholder engagement in the context of business and human rights.

Impact

It set the stage for the rest of the discussion by grounding it in an established international framework. This led to further exploration of how companies can implement stakeholder engagement in practice.

We try to prioritise our stakeholder engagement with the communities that are most likely to be at risk.

Speaker

Richard Wingfield

Reason

This insight highlights a strategic approach to stakeholder engagement that focuses on the most vulnerable groups.

Impact

It shifted the conversation to consider how companies can identify and engage with the most relevant stakeholders, rather than trying to engage everyone equally.

I think reflecting more on the do no harm principle is something that I really want to echo. It’s something that is really important and it’s actually something that should be embedded at every point of the product design process.

Speaker

Thobekile Matimbe

Reason

This comment emphasizes a core ethical principle for technology companies to consider throughout product development.

Impact

It broadened the discussion from just stakeholder engagement to the broader ethical considerations companies should keep in mind, leading to more discussion of responsible product development.

Where it becomes a little bit more flexible and a little bit different is with respect to broader policy setting.

Speaker

Fiona Alexander

Reason

This insight highlights the differences between regulatory processes and broader policy development in terms of stakeholder engagement.

Impact

It added nuance to the discussion by distinguishing between different types of stakeholder engagement processes, leading to more specific examples and recommendations.

We built a panel of experts that we are now in the second year of engaging. And we work with on a monthly basis, either through one on ones through online virtual calls or through in person meetings, who are giving us much higher level advice around like the product strategy and direction, as well as providing clear guidance on when we have quite specific questions to ask them about whether we should respond in this way, or what frameworks we should be using to train our models to respond here.

Speaker

Charles Bradley

Reason

This comment provides a concrete example of how a major tech company is implementing ongoing stakeholder engagement in practice.

Impact

It moved the discussion from theoretical frameworks to practical implementation, sparking more conversation about best practices and challenges in real-world stakeholder engagement.

Even with multi-stakeholder engagement, there’s something that’s going really wrong when we look at big tech. And it’s why, and I’ll end with this, that we still see a number of products, whether it’s the chatbots, whether it’s the notify things, there’s a whole range of products coming out on the market each week that are not doing an upstream test on safe, that are not being transparent.

Speaker

Lena Slachmuijlder

Reason

This comment challenges the effectiveness of current stakeholder engagement practices and raises important critiques of the tech industry’s approach.

Impact

It significantly shifted the tone of the discussion, prompting the panelists to address criticisms and limitations of current stakeholder engagement practices in tech.

Overall Assessment

These key comments shaped the discussion by moving it from theoretical frameworks to practical implementation, highlighting the challenges and limitations of current practices, and emphasizing the importance of ethical considerations throughout the product development process. The discussion evolved from a general overview of stakeholder engagement principles to a more nuanced exploration of how these principles are (or aren’t) being applied in the tech industry, with a particular focus on the challenges faced in different global contexts and the need for more meaningful, proactive engagement with diverse stakeholders.

Follow-up Questions

How can companies better meet communities where they are for stakeholder engagement, especially in Africa?

Speaker

Thobekile Matimbe

Explanation

This is important to ensure more meaningful and inclusive engagement with a broader range of stakeholders, rather than just select organizations.

How can the disincentives for meaningful stakeholder engagement in the tech industry be addressed?

Speaker

Lena Slachmuijlder

Explanation

This is crucial to understand why some companies may not prioritize stakeholder engagement over profit and growth, and how to change this dynamic.

How can stakeholder fatigue be mitigated when companies seek input?

Speaker

Richard Wingfield

Explanation

Addressing this issue is important to ensure continued meaningful participation from stakeholders without overburdening them.

How can companies better incorporate stakeholder feedback into decision-making processes?

Speaker

Richard Wingfield

Explanation

This is critical to ensure that stakeholder engagement leads to tangible changes in products and policies.

What are effective ways to build long-lasting relationships between companies and stakeholders?

Speaker

Richard Wingfield

Explanation

This is important for creating trust and ensuring ongoing, meaningful engagement rather than transactional interactions.

How can companies balance the need for stakeholder engagement with the fast-paced nature of technology development?

Speaker

Richard Wingfield

Explanation

This is crucial for finding ways to incorporate meaningful engagement without significantly slowing down product development.

How can the impact of recent tech regulations, particularly from the EU, be assessed?

Speaker

Fiona Alexander

Explanation

This is important to understand the effectiveness and potential consequences of new regulations on innovation and stakeholder engagement.

How can companies be more transparent about stakeholder engagement processes and outcomes, including when products are not released?

Speaker

Fiona Alexander

Explanation

This transparency is crucial for building trust and demonstrating the value of stakeholder engagement.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #254 The Human Rights Impact of Underrepresented Languages in AI

Session at a Glance

Summary

This panel discussion focused on the impact of underrepresented languages in AI, particularly in large language models. The speakers highlighted how the dominance of English and Western languages in AI training data leads to bias and exclusion of other languages and cultures. They discussed how this affects human rights, socioeconomic opportunities, and cultural preservation for speakers of underrepresented languages.

Key issues raised included the poor performance of AI models in non-dominant languages, the risk of further marginalizing minority languages, and the ethical and legal implications of using AI trained on limited language data for critical applications like asylum processing. The speakers emphasized the need for more diverse, high-quality datasets and greater transparency in AI development.

Legal and policy solutions were explored, including copyright law adaptations, personality rights considerations, and international cooperation for knowledge sharing and capacity building. The panelists noted the challenges of creating universal data platforms due to commercial interests but highlighted some promising initiatives for language inclusion in AI.

The discussion also touched on the role of governments in supporting local language AI development and the complex interplay between education systems, economic incentives, and language preservation efforts. Overall, the panelists stressed the importance of inclusivity in AI as a human rights issue and called for more holistic approaches to address language representation in AI technologies.

Keypoints

Major discussion points:

– The impact of underrepresentation of languages in AI from human rights and socioeconomic perspectives

– The role of legal and ethical frameworks in enhancing AI inclusivity and language-based inclusion

– Challenges and potential solutions for creating more diverse and inclusive AI on an international level

– Government support and incentives for developing AI in local languages

Overall purpose:

The discussion aimed to explore the challenges of language underrepresentation in AI systems and datasets, and to consider potential solutions for creating more linguistically diverse and inclusive AI technologies on both national and international levels.

Tone:

The tone of the discussion was largely analytical and informative, with speakers providing in-depth explanations of complex issues. There was also an undercurrent of concern about the societal impacts of language exclusion in AI. The tone remained consistent throughout, maintaining a balance between highlighting problems and proposing potential solutions.

Speakers

– Moderator: Luis Dehnert, fellow with International Digital Policy at the German Ministry for Digital and Transport

– Nidhi Singh: Project manager at the Center for Communication Governance in the National Law University Delhi, India. Works in information technology law and policy, AI governance and ethics.

– Gustavo Fonseca Ribeiro: Lawyer from Brazil, holds a Master’s of Public Policy from Sciences Po. Specialist consultant for AI and digital transformation at UNESCO. Youth ambassador for the Internet Society in 2024.

– Kathleen Scoggin: Online moderator

Additional speakers:

– Audience member: Asked questions about government support for local language initiatives and language requirements in education

Full session report

Language Underrepresentation in AI: Impacts, Challenges, and Potential Solutions

This panel discussion, moderated by Luis Dehnert from the German Ministry for Digital and Transport, explored the critical issue of language underrepresentation in artificial intelligence (AI), particularly in large language models. The speakers, Nidhi Singh from the Center for Communication Governance in India and Gustavo Fonseca Ribeiro, a specialist consultant for AI and digital transformation at UNESCO, provided in-depth insights into the multifaceted challenges and potential solutions surrounding this topic.

Impact of Language Underrepresentation in AI

The panelists agreed that the dominance of English and Western languages in AI training data leads to significant bias and exclusion of other languages and cultures. This underrepresentation has far-reaching consequences:

1. Human Rights and Cultural Preservation: Both speakers emphasized that the exclusion of non-dominant dialects and languages affects cultural rights and threatens cultural identity preservation.

2. Socioeconomic Implications: The panelists concurred that language exclusion in AI exacerbates the digital divide, potentially limiting economic opportunities for speakers of underrepresented languages.

3. Nuanced Exclusion: Nidhi Singh highlighted that even within English, only the most common internet dialect is represented, effectively excluding many English speakers as well.

4. Educational Discrimination: Singh provided a concrete example of how language bias in AI can lead to discrimination in education, noting that non-native English speakers are more likely to be flagged for plagiarism by AI-powered detection systems.

5. Legal Implications: Ribeiro mentioned the use of AI in Afghan asylum cases, highlighting the potential for bias in critical legal decisions.

Legal and Ethical Frameworks for AI Inclusivity

The discussion emphasized the crucial role of legal and ethical frameworks in enhancing AI inclusivity:

1. Detailed Implementation: Singh stressed the need for detailed implementation guidelines beyond broad inclusivity frameworks.

2. Copyright Law Adaptations: Ribeiro discussed the potential for copyright exceptions for data mining to facilitate AI development while protecting intellectual property rights.

3. Transparency and Accountability: Both speakers agreed on the importance of transparency and accountability mechanisms in AI development.

4. Traditional Knowledge Protection: Ribeiro introduced the concept of traditional knowledge protection under international intellectual property law as a potential framework for addressing data rights of underrepresented communities.

5. Personality Rights: Ribeiro highlighted the importance of considering personality rights in AI development and data usage.

International Efforts and Challenges

The panelists explored various international efforts and challenges in building more diverse and inclusive AI:

1. State-Driven Initiatives: Singh highlighted the importance of state-driven initiatives for language inclusion in AI, such as the Karya initiative in India.

2. Knowledge Sharing: Ribeiro emphasized the role of international organizations in facilitating knowledge sharing and capacity building.

3. Accessibility Framing: Singh suggested framing language inclusion as an accessibility issue to leverage existing legal frameworks.

4. Power Balancing: Both speakers stressed the need for balancing power in international policy conversations, particularly between the global north and global majority.

5. EU Data Governance Act: Ribeiro mentioned the Data Governance Act in the European Union as an attempt to create data commons and address challenges in data sharing.

Challenges in Creating Universal Data Platforms

The discussion revealed unexpected consensus on the challenges of creating universal data platforms for AI:

1. Economic Disincentives: Singh pointed out the strong economic incentives against data sharing for private companies.

2. Financial Sustainability: Ribeiro highlighted the tradeoffs between openness and financial sustainability in data sharing initiatives.

3. Data Protection Concerns: Both speakers acknowledged personal data protection as a significant obstacle to universal data platforms.

Role of International Organizations

Ribeiro elaborated on the role of international organizations, particularly UNESCO, in addressing language underrepresentation:

1. Capacity Building: Supporting member states in developing AI strategies and policies.

2. Standard Setting: Developing ethical guidelines and recommendations for AI development.

3. Facilitating Dialogue: Creating platforms for knowledge sharing and discussion among diverse stakeholders.

Government Support and Challenges

In response to an audience question, the panelists discussed government support for local language initiatives:

1. African Initiatives: Ribeiro provided examples of government support for AI development in African countries.

2. Inclusive Decision-Making: Singh emphasized the need for more inclusive decision-making processes in government initiatives.

3. Market Forces: Both speakers acknowledged the significant role of market forces in driving AI development.

4. Educational Challenges: Singh highlighted the complex interplay between education systems, economic incentives, and language preservation efforts, noting the difficulty in balancing English proficiency with local language preservation.

Conclusion

The panel discussion underscored the critical importance of addressing language underrepresentation in AI as both a human rights issue and a key factor in equitable technological development. The speakers emphasized the need for holistic approaches that combine legal and ethical frameworks, international cooperation, government support, and innovative technical solutions. While significant challenges remain, particularly in balancing economic incentives with inclusivity goals, the discussion highlighted promising initiatives and potential pathways for creating more linguistically diverse and inclusive AI technologies on both national and international levels.

Session Transcript

Moderator: Hello, can you guys hear me? We would start with the session now, so please go to channel 1 and then if you can give me a thumbs up if it’s working, that would be great. Cool. Okay. Thank you for joining our panel on the impact of underrepresented languages in AI. My name is Luis Dehnert, I’ll be moderating today, sorry for the slight delay, we’re trying to make up for that now. I myself, I’m a fellow with the International Digital Policy with the German Ministry for Digital and Transport. So obviously we heard about the AI divide already early on today, so the issue of representation and diversity is a critical subject for inclusivity in the digital age. So AI is also a technology that is increasingly attempting to model our reality based on training data, but data often fails to capture that reality. So this is especially the case for languages. For example, we have the commonly used so-called Common Crawl, a data set that is made of nearly everything on the internet and which is often used to train large language models, yet nearly half of the data in it is in English and it leaves out more than 8,000 documented languages by UNESCO worldwide. So we are going to discuss this topic today. We are joined, as far as I know, by two speakers, whom I will now let introduce themselves, so we start with Nidhi Singh. So I give the floor to you to introduce yourself, please. Yeah. OK. Hi.

Nidhi Singh: Thank you so much for inviting me here today. My name is Nidhi Singh. I am a project manager at the Center for Communication Governance in the National Law University Delhi in India. I work primarily in information technology, law and policy and for about the last five years I’ve been working in AI governance and AI ethics, focusing on a global majority approach to how AI is being developed, how norms are being formed, and how it’s being regulated and governed at the international stage.

Moderator: Thank you so much. Now I would like to give the floor to Gustavo Ribeiro. I think he has joined us online. Could you please introduce yourself?

Gustavo Fonseca Ribeiro: Hello, good afternoon to everyone in Riyadh. I apologize because I think my camera is malfunctioning. We were trying to fix it before we started, but I was not able. I will introduce myself and then I’ll try to leave and rejoin so I can see if the camera is working. But thank you all for joining, really appreciate your presence here. So my name is Gustavo Fonseca Ribeiro. I’m a lawyer from Brazil. I hold a Master’s of Public Policy from Sciences Po, a university based in France. And I’m also a specialist consultant for AI and digital transformation at UNESCO. Here at the Global IGF, I’m speaking in my capacity as one of the youth ambassadors for the Internet Society in the year 2024. So I’m very happy to join this meeting. Yes. Thank you, Luis.

Moderator: Thank you so much, Gustavo, for joining. Yes, you can try to rejoin with video. We’d love that. But Nidhi, maybe I start with you with the first question. So can you tell us a bit… What are the impacts of under-representation of languages in AI from a human rights and also socioeconomic perspective?

Nidhi Singh: Yeah, thank you so much for the question. We touched on this broadly in the introduction, but there are a lot of concerns around bias and inclusivity here. Before we get into that, I want to point out that when we talk about the languages AI models are being trained on, it's not just that resource-heavy languages like English dominate; only specific dialects of English are represented. So it's not even the English that we all speak, especially as non-native speakers, that goes into the model. We're not part of the majority in either case: even if you do speak English, it's not your dialect of English that goes in. Only the version of English most commonly present on the internet is being trained on. In a sense, everybody is being excluded. Before we get into the deeper discussion, I want to bring up a couple of use cases, because there are very real-world consequences. As generative AI has taken off, universities have started using models to check whether generative AI is being used, whether students are cheating, whether they're using generative AI to turn in their homework or write their papers. As a non-native speaker of English, even if you speak English with a high degree of proficiency, you are far more likely to be flagged for plagiarism, because these detection tools are effectively built for native speakers of a certain dialect of English. That's one thing. The other is AI-driven translation software, which is increasingly being used in welfare contexts by the state. It also does not work well for low-resource languages. So that's another concern.
So what happens is that if you are part of the internet that does not speak the majority language or the majority dialect, you are already not part of the majority, and as the digital divide grows, language becomes a further barrier. If generative AI only generates output in the predominant dialect, there is a very decent chance that in a few years the internet will be filled with just this one dialect of English and a few resource-heavy languages, and all of the other languages will increasingly be pushed off the internet. Another thing you can see is that more and more of the content coming up on the internet is AI-generated. As that gets collected for further training, it will just be one language getting repeated, and your native dialect, the way that you speak, and your cultural identity on the internet will slowly be lost. So there are a lot of implications when generative AI models, especially the prompt-based ones, have such a big problem with how they've been trained in terms of language.

Moderator: OK, thank you very much for the answer. I would like to put the same question to Gustavo. Could you please share your view on that?

Gustavo Fonseca Ribeiro: Luis, can you repeat the question, please? Because I was resetting my camera while you asked it.

Moderator: I was asking, what are the impacts of under-representation of languages in AI from a human rights and also a socioeconomic perspective?

Gustavo Fonseca Ribeiro: Of course. That is quite interesting. When you think of languages in artificial intelligence, the first thing that comes to mind in terms of human rights is cultural rights. If we look at the International Covenant on Economic, Social and Cultural Rights, there are, broadly speaking, three cultural rights protected under international human rights law: first, the right of access to culture; second, the right of a society or a people to steer their own scientific progress; and third, rights related to intellectual property. So in terms of human rights, to understand the impact of under-representation of languages, we have to understand that these technologies, artificial intelligence and the datasets fueling it, are, as you mentioned, primarily being developed in a Western setting, let's say, for instance the United States, or with European languages, with the exception perhaps of China in Asia. When these tools are translated into other contexts, contexts that speak different languages, they are not going to perform as well. And this affects, for those communities, the rights I have just mentioned: it affects how their scientific communities explore this new technology, and it affects how everyday users of artificial intelligence relate culturally to the outputs of the technology. In terms of socioeconomic impacts, you can think of this through supply and demand. Demand for AI technology usually exists because it can bring a lot of productivity. But again, if there is under-representation of a language, the people who speak that language are not going to reap the same benefits.
If you look at the major language models out there, such as ChatGPT, they perform very well in English but very poorly in, for example, African languages. So that is one socioeconomic benefit that is not going to be reaped. On the supply side, though, there are opportunities, because if there is demand from local communities, there is also an opportunity for local companies to emerge. And we do have some examples of this in Africa. In Ghana, there is Ghana NLP, a startup producing language models in Ghanaian languages, and by the way, there are over 50 languages in Ghana alone. In South Africa, there is Lelapa AI. Another example is Masakhane, a pan-African organization also working on advancing language inclusion. So I would say those are the main impacts. Thank you.

Moderator: Thank you, Gustavo. I want to turn now to another aspect of this. Nidhi, what is the role of legal or ethical frameworks in enhancing AI inclusivity? How can these further language-based inclusion, in your opinion? Thank you.

Nidhi Singh: So when it comes to AI governance, there are of course now some legal instruments coming up. Even without them, countries have their own constitutional and human rights protections that will still apply. But primarily, a lot of AI deployment is still being governed through AI ethics: broader frameworks around which you can structure AI deployment. Inclusivity is a key principle in almost all of them; the UNESCO AI ethics recommendation, and even the OECD principles, all have something on inclusivity. Now, how that is to be implemented is actually a really interesting question. When you look at something like large language models and their training, inclusivity dictates that you should make models available in all languages and train them on all languages. But that alone is not very helpful, because you need very high-quality datasets to train the models, and that means it requires a significant amount of time and investment. Just to give you an example: if you try to use ChatGPT in some of the Indian languages, maybe not the bigger ones, but some of the smaller ones like Assamese or Kannada, which is something we tried for fun, at some point it will start repeating Bollywood dialogue to you, because they ran out of things to train it on, and the easiest thing they could find was Bollywood movies. So you've met the quota, you've checked the box that it is inclusive, but the model doesn't actually work. So AI ethics makes a good framework for inclusivity, but to implement that framework, you need a lot more detail attached to it.
Another case study I want to talk about, one with very real legal stakes: in the US, as many as four in ten Afghan asylum cases relied on a generative AI-based translation algorithm, and not a lot of effort had gone into how it translated Afghan languages into English. Because of that, asylum applications were being derailed. So this is a legal consideration: if you are going to use generative AI in such specific but important and critical aspects of public welfare, in healthcare, asylum, security, or any sort of public welfare benefit, you ideally want to make sure it works really well across all languages. This is especially true in countries in the global majority, which generally have a large diversity of languages. Like Gustavo said, these models are typically made and trained in the Global North, where they don't have to deal with such drastically different languages and so many dialects. So what happens in these countries is that the people who know English, and specifically the dominant dialect of English, will probably be able to access these services, and the people who don't will not, which further widens the digital divide. Even in English-speaking countries, the models really only recognize the dominant dialect; all of the other dialects, typically used by immigrant communities in these countries, don't get recognized as well. And when you look specifically at asking questions of ChatGPT, there is now a whole field of prompt engineering: if you phrase the question just right, you get a much better answer.
That, again, depends on you knowing the dominant language and understanding how to phrase things very specifically. A large part of LLM design gives you good answers in the dominant language only if you ask properly in the dominant language, so the setting does work against you, and changing that requires, at a very fundamental level, a lot of investment and effort. That might sound like a private concern, but it really isn't, considering that generative AI is being used to check for cheating and, based on that, potentially ending somebody's educational career or branding them as a plagiarist. These uses should be held to a framework with a much higher threshold. This isn't about using ChatGPT to look up jokes or check the weather; these are things with very real-world consequences. So when you deploy AI in these contexts, especially generative AI, which is so dependent on language and culture, it needs additional safeguards in place. Right now, I think people are really only looking at technical solutions to these problems, and I don't think social and legal problems can be solved by technical solutions alone. There needs to be a more holistic approach, but there does need to be some framework in place.

Moderator: If I can come back on this: you said, of course, that legal and governance solutions should be at the forefront. But can I ask you, from a technical perspective, since many languages also face the problem of data availability, do you think technical solutions such as synthetic data generation could help address this?

Nidhi Singh: So I think synthetic data generation has gotten a lot of traction over the last couple of years, and it has even gone one step further, where now you have data synthesized from synthetic data. All of those technical solutions work well within certain areas where they've been researched. But at the end of the day, if your base dataset isn't high quality, and that is what we mean by resource-intensive or resource-rich languages, then generating synthetic data from it just exacerbates the existing problems; maybe the technology will catch up, but that is where we are right now. Another problem is that languages don't typically translate directly. The way languages are written and the way concepts are explained in Asian languages versus European languages does tend to differ. A lot of these LLMs have some basic idea of what the words mean and will just try to translate directly, and that doesn't really work. And if you're using synthetic data and your base data wasn't good quality, you'll just end up with a lot of synthetic data replicating the same problems, which could cause further problems down the line. This is something you'd need a lot of impact assessment for, and a lot of transparency in the model to figure out, which is, of course, something we're struggling with right now. The problem with an LLM is that you need a large amount of data; if you sat down and tried to individually check every single piece of data, we'd never be able to build an LLM. So yes, you need technical solutions, and synthetic data could be one of them, but there's no way to be sure right now that it won't just exacerbate the problem.
So unless there's some way of being sure how this works, let's at least not use it in our justice system, our welfare delivery system, our asylum system. Until you at least put that pause in place, I think the problem is just going to keep getting worse.
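Nidhi's worry that training on synthetic output can compound an existing skew can be illustrated with a toy simulation. The sketch below is purely illustrative: it assumes a model that slightly over-produces whatever language already dominates its training mix, and the starting shares, the `boost` factor, and the language labels are all invented.

```python
def regenerate(shares, boost=1.2):
    """One round of retraining on synthetic output: the majority language
    is over-produced by `boost`, then all shares are renormalized."""
    top = max(shares, key=shares.get)
    skewed = {lang: share * (boost if lang == top else 1.0)
              for lang, share in shares.items()}
    total = sum(skewed.values())
    return {lang: share / total for lang, share in skewed.items()}

# Invented starting mix: one dominant language, two low-resource ones.
shares = {"en": 0.70, "hi": 0.20, "sw": 0.10}
for generation in range(10):
    shares = regenerate(shares)

print({lang: round(share, 3) for lang, share in shares.items()})
# The dominant language's share climbs toward 1; the others shrink.
```

Under this (admittedly crude) assumption, the low-resource languages lose ground every generation even though nothing is ever deleted, which is the feedback loop the panelists describe.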

Moderator: Thank you for the well-thought-out response. Gustavo, I'd like to come back to you. From a legal perspective, what do you think could be other levers, so to speak, to enable more diverse datasets in AI, perhaps also thinking about copyright?

Gustavo Fonseca Ribeiro: Yeah, thank you for the question. One of the things I mentioned earlier, one of the three international human rights affected, was intellectual property, and I thought this would become a longer conversation, which is why I didn't go into it in my first answer. One key area of law that calibrates access to data, not only language data but all types of data, is copyright law. But before going there: what is copyright, so everybody is on the same page? Copyright is basically how we assign intellectual property rights to authors. And this applies to datasets as well: when you build a dataset, you can protect it with copyright, you have exclusive use of it, and only you can license it. The same goes for the source material used to create datasets, that is, creative works. Whenever we use, for instance, newspapers, or, as Nidhi said, Bollywood movies, those are protected by copyright. So the way we handle the copyright governance of these two types of resources, the datasets and the raw materials for the datasets, is going to affect access to data. Right now we do have an opportunity for innovation, but our copyright laws are not necessarily yet adapted to it. We've seen a handful of lawsuits in the United States, for example between OpenAI and the New York Times, because OpenAI used New York Times content without authorization. (Sorry, there's some noise around me.) So this is the context, this is the problem. But there are already some solutions out there in terms of copyright. For example, if you look at the European Union and the AI Act, there is what they call a text and data mining exception.
Originally, it would not be permissible for an AI company to mine data from source materials like Bollywood movies, or from the internet, without authorization from the copyright owner. But under the EU AI Act, there is an exception to that if you are doing it for non-commercial purposes. So that is one way to allow for the progress of science while still limiting the commercial exploitation of these resources. Second, a development we might see soon: in the US, there is an exception to copyright called the fair use doctrine, which is not so binary; it applies on a case-by-case basis, based on a set of criteria, such as whether the copyrighted material is being used for educational purposes, non-commercial purposes, or research, for example. Some of the lawsuits we see in the US might produce something along those lines, but we have to wait and see. Another area, and in my opinion this one is under-explored and I would love to see a larger discussion on it, is the concept under international intellectual property law of traditional knowledge: knowledge that has been passed from generation to generation, for example in traditional and indigenous communities. In the late 90s and early 2000s, we saw a lot of debate on this in the context of healthcare and traditional medicines, because you would see big companies using traditional medicines that were invented by communities, while those communities were not benefiting from them. Yet we have still to see this debate in the context of artificial intelligence and data, even though data has become, as they say, the new oil.
But we haven't seen stakeholders talking very strongly about how this idea of traditional knowledge relates to this new, very valuable resource. And just to conclude, another challenge, which is not copyright law but is associated with it, is personality rights. Personality rights are, for example, the rights to your own likeness: your voice, your image, your face. Other people can only use them if you have given consent. This is not an economic right but a moral right; it is attached to personality. It belongs to you because you are human, not because you own any property, which is, generally speaking, how copyright works. What that means is that it cannot easily be given away or sold in a market, for example. We have actually seen some court decisions on this: in India, the Bombay High Court has found that when an AI replicates the voice of a real person, that is a violation of personality rights. But what we don't know is whether taking someone's voice and using it to train an AI is in itself a violation, if the AI is trained on it but doesn't necessarily replicate it. That is an open area, and deciding it would also require the law to adapt, either to allow for the expansion of data or to narrow it. With that, I conclude my remarks. Thank you.

Moderator: Very insightful, thank you very much. I would like to pose one last question to both of you. In my opinion, we can observe that this problem of limited language inclusion in AI leads to efforts at the national or regional level, where countries and regions build their own local datasets and models. Taking a more international perspective, what steps can we take together, within international organizations or beyond, to build more diverse AI? Nidhi, maybe you want to start?

Nidhi Singh: Thank you. You're right that a lot of the efforts here start domestically, partly because that's maybe a better place to start, but also because states and governments have a far better incentive. You don't make AI accessible because it's financially lucrative; depending on the community, it may not be. And if it's not financially lucrative, why would somebody like OpenAI do it? So it's usually up to states. Just to give an example, the large language models that have been run in regional languages in India, like Jugalbandi, were also done in partnership with the state. I think it's important to stress that this isn't a favor being done. Access to the internet is now considered an international human right, and inclusivity is an essential part of that. So if you are launching something like an LLM, which is fast trying to replace large parts of the internet, you are required to make it accessible to everybody. And like Gustavo said, if you are using public data, there is an implied expectation that you will use it for public good. It's not that you can also do public good with it; you are training it on everybody's data, so you are required to do good with it. So I think it would be good, in international organizations, to bring this under the heading of accessibility and inclusivity, and in that sense apply the same protections we apply when trying to spread the internet to everybody and generally bring everybody online. And to some extent, even for private players, depending on how the initiative is structured, there might need to be requirements to include at least some percentage of languages within their training datasets.
But all of this will really only be possible once you have transparency and accountability mechanisms in place, because we don't actually know what data is being collected or exactly how they're training on it. Unless you have very solid transparency and accountability mechanisms to see what they're using to train all of these models, I think it will be hard to really push anything through. It's a very interconnected problem: you want inclusivity, and in order to have that, you need accountability and transparency. So yeah, once you get from ethical AI principles to actually implementing them, I imagine a lot of these things will get sorted.

Moderator: Gustavo, would you like to add to that?

Gustavo Fonseca Ribeiro: Yeah, of course. So what can we do at the international level, and in international organizations in particular? Three things come to mind. The first is sharing knowledge and strategies to enhance language inclusion; countries can learn from one another. For example, in India there is a great initiative called Karya, a nonprofit platform that lets data workers and data annotators work on providing information and curating datasets, and once a dataset is sold by Karya, the revenue associated with it is distributed to the workers. So here we have a case where, first, we get increased inclusion of Indian languages, and second, we manage to direct the profits to a socially beneficial purpose: helping data workers, who often don't have the best working conditions in the value chain of artificial intelligence. This platform, by the way, is open source. So international organizations do have the ability to bring knowledge that exists in India, for example, to other countries in similar situations, for example in Africa, or with indigenous languages in Latin America. The second is capacity building. Whatever we do with artificial intelligence, we often need data, technology, and models; we often need governance, like laws that enable development; but we always need human talent. Even in the Karya example, a platform doesn't exist by itself; it needs trained people around it. So international organizations can work on capacity building, which has actually been one of the priorities outlined in the Global Digital Compact when it comes to artificial intelligence. And the third, I would say, is bringing a better balance of power to conversations on international policy.
When we speak of digital divides in particular, there is a clear imbalance of power and ability to steer this conversation, for example between the Global North and the global majority; wealthier countries have more resources to participate in this conversation than low-income countries. So international organizations also have the ability to provide a forum in which these different actors can talk face-to-face. I would say these are three potential roles. Thank you.

Moderator: Thank you. With that, I would like to open up this session to questions, either from the in-person audience or from online. I think we have an online moderator, so please let us know if there are any questions online. And, yeah, please also state to whom you want to direct your question. We have a mic in person right here, so if you have a question, please step forward. And, yes, you’re free to ask now. Any questions from the online audience?

Kathleen Scoggin: We don’t have any as of now, but if there aren’t any, I have one to throw out to the group. There’s been lots of talk about the way that we use platforms to gather this data. Either from a technical sense or from more of just a user interface sense, are there things that you all think would be beneficial to creating a universal platform for data collection? There have been lots of different ones. How would you see that going? Thanks.

Nidhi Singh: Should I start? A universal platform for data collection is a very interesting idea. I've also heard a lot of conversations about an AI commons and a data commons, and the principle behind it is quite sound: you put all the data in one place so that everybody can benefit from it. In principle, that sounds good. However, and this might be a bit pessimistic of me, realistically data is the new oil. It's very unlikely that anybody who has a lot of data would want to share it in a way where other people can profit from it as well. Well-labeled data, good clean data for training these systems, is currently one of the biggest resources we have, and there are now whole economies built around such data and data brokers. So I do think actually putting that into place would be difficult. It's also interesting, and this is something I thought of after Gustavo was speaking: we've talked a lot about copyright, but it's quite impressive how much acceptance things like large language models have in our society right now. When something crawls the internet to collect information to train a large language model, there is a very good chance it will catch a lot of personal data as well. Personal data protection used to be very stringent about what you can train models on. Now we're seeing an increasing trend where, because of the lucrative promise of LLMs, countries are just saying that if you posted it on the internet and a large language model caught it, then that's fine, as long as the output isn't directly harming your privacy, as in the Bombay High Court case, where a person's voice was directly being used. Outside such cases, a lot of these protections are diluted when it comes to training the LLM itself.
In this era of people prioritizing the economic incentives that generative AI can potentially bring, I think it's very unlikely that people would agree to pool all of their data in a uniform platform. I hope that would be the case, but I do think it's unlikely that people would agree to it.

Gustavo Fonseca Ribeiro: Yeah, if I may jump in on the question as well. Yes, sure, please. Thanks. I find it a very interesting proposal to think about data commons. I will first speak of a challenge and then of someone who is trying to do this. The biggest challenge I see is how bringing data into the open-source world, opening data up in a technology market that is so highly competitive, affects comparative advantages, that is, market advantages. It is true that openness, intuitively speaking, leads to more access to data, and you often see this advocacy directed at companies, and a lot in the development context: open the data so everybody can have access to it. But openness can also come with a trade-off in financial sustainability. If you open the data without any restrictions whatsoever, it's not that easy to generate revenue from it, and in developing contexts, the socioeconomic security of companies and the people working in them is a very big priority. More importantly, even if you manage to get people on board with this idea, you have to get either everyone or no one to create a data commons. From the moment a certain number of companies join the commons and share their data for free, the companies that have stayed closed and are big enough to keep charging, say a big tech company, will have a financial advantage over whoever opened their data. And because their model is more profitable, it will also grow more, which could actually crowd out the common data pool. That is the challenge I see; it works a lot like a negative externality. You either get everyone on board or it's hard to implement.
The second point is that there is an attempt to implement that kind of thinking in the European Union, through the Data Governance Act. I wish I were the type of lawyer who is an expert in it, but I am not. There is a regulation at the European Union level, the Data Governance Act, coupled with the Data Act, in which they try to create pan-European pools of datasets in certain fields of value, like agriculture, healthcare, and mobility data. The way they are trying to build that is by creating a kind of compulsory licensing arrangement, so the public sector can buy datasets of public value from private entities, and the private entities are mandated to sell them for a reasonable price; that is literally what the law says, a reasonable price. So it is somewhat of an attempt to do this while still incorporating the cost structure of developing datasets. Whether it works or not, well, we'll see. Yeah, thank you.

Moderator: Thank you. So I think we have two questions from the audience. Maybe we start with the madam in the front. Would you come to the microphone, please? And then we check quickly whether it’s on.

Audience: Can you hear me? Okay, thank you so much for the great session and presentations. I guess my question goes to both of you. In your recommendations, you spoke about the incentive coming domestically from government, and I think Gustavo, online, also mentioned some of the large language model initiatives in Africa; I think you mentioned Lelapa AI in South Africa and Masakhane. So my first question is: do you see much appetite, especially from governments, to actually support these initiatives to develop our own local languages and have them included in these models? The second is something I have noticed from a very practical perspective. I am originally from Zimbabwe, and for a learner to graduate and go to university, you must have passed mathematics and English; not even our local language is included. So, from an incentive perspective, why would I, as a software developer, invest in local languages when they are not useful to the students and learners who are supposed to be using these technologies? You need English to go to university, all your products should be in English, and you are making use of AI products in English. So I don't know how you see it. Maybe you have practical examples from your regions where there is direct financial incentive from government to support these initiatives. And with the school system, are there going to be any changes where we see more and more local languages actually being integrated, so that to move to the next level you must have passed at least a local language, not necessarily just English? Those are my two questions, thank you.

Nidhi Singh: …generative AI, and then eventually get to building translation software. As for the second question, that is a very complicated question which maybe doesn't have a legal answer; it is more of a sociological one. I know that some states in my country have a three-language model, where you must study three languages in school, just because of how it works. I think this is a general problem for many countries that have multiple languages: the internet infrastructure, and now increasingly AI, is all really built on one common thing, which happens to be a specific dialect of English. Until you have a lot more inclusivity in the rooms where these decisions are being made, that is unlikely to change. But I think that is something you try to do through these conversations: to say that this actually doesn't make any sense. If you have a chatbot that only recognizes English, and only a specific way of accessing a service, then that is not an accessible service; you need more human intervention, and all of these problems follow. But yeah, I think it is a more sociological problem: the world generally takes a majority perspective on things. I think that will really only get fixed when you have more voices in the room talking about what the experience is like.

Moderator: Okay, with the time in mind, we have one minute roughly left. Gustavo, would you want to add to that quickly?

Gustavo Fonseca Ribeiro: I think Nidhi's answer was very good. So, very quickly, on government support: yes, you can see examples. In Rwanda, for instance, the government has been quite supportive of developing datasets in Kinyarwanda, in partnership with academia and startups. And in Nigeria, you can also see the government supporting the development of a large language model in Nigerian languages. As for the second question, on education, I would say Nidhi's answer was spot on. I would add that the market is a powerful tool to drive the development of AI, and in many countries that speak many, many languages, this market is in English; that is true, for example, in Kenya, and in Uganda as well. But you also have other purposes: public services, for example citizens' access to welfare, and research, which doesn't necessarily have to have a commercial purpose. In that context, I would say it is quite relevant. I hope I am touching on the question. There was also a question in the chat about the opportunities and risks of localization, which goes beyond translation to contextualizing the model to the local culture. Very quickly, on the opportunities: first, it is useful, because people have a demand for solutions in their local language, and those tools work better for them; and second, cultural preservation, I think, is an opportunity. As for the risks, I would refer to the risks associated with AI at large. Even if you are working in a local language, with the local culture embedded, you still have privacy risks, you still have bias risks. And, yeah. Thank you. I will give back the floor.

Moderator: So I saw that there was at least one in-person question left. If Nidhi is staying here, maybe you can also ask it after the session. I would like to thank our speakers for the very interesting insights, and the audience for the good questions. I wish you a few more good sessions today, and enjoy the IGF. Thank you.

Nidhi Singh

Speech speed: 180 words per minute

Speech length: 3083 words

Speech time: 1022 seconds

Exclusion of non-dominant dialects and languages

Explanation

AI models are primarily trained on specific dialects of English, excluding other languages and dialects. This leads to a lack of representation for non-native speakers and minority languages in AI systems.

Evidence

Example of universities using AI models to check for plagiarism, which are more likely to flag non-native English speakers.

Major Discussion Point

Impact of underrepresentation of languages in AI

Agreed with

Gustavo Fonseca Ribeiro

Agreed on

Underrepresentation of languages in AI has significant impacts

Exacerbates digital divide and loss of cultural identity

Explanation

The focus on dominant languages in AI systems widens the digital divide. It may lead to the loss of cultural identity as minority languages are increasingly crowded out of the internet and AI-generated content.

Evidence

Prediction that the internet may be filled with only one dialect of English in the future, crowding out other languages and dialects.

Major Discussion Point

Impact of underrepresentation of languages in AI

Need for detailed implementation of inclusivity frameworks

Explanation

While inclusivity is a key framework in AI ethics, its implementation requires more detailed guidelines. Simply making AI available in all languages is not sufficient without high-quality datasets for training.

Evidence

Example of ChatGPT repeating Bollywood dialogues when used in smaller Indian languages, due to a lack of proper training data.

Major Discussion Point

Legal and ethical frameworks for AI inclusivity

Agreed with

Gustavo Fonseca Ribeiro

Agreed on

Need for legal and ethical frameworks to enhance AI inclusivity

Importance of transparency and accountability mechanisms

Explanation

Implementing inclusivity in AI requires strong transparency and accountability mechanisms. Without knowing how data is collected and models are trained, it’s difficult to push for meaningful inclusivity.

Major Discussion Point

Legal and ethical frameworks for AI inclusivity

State-driven initiatives for language inclusion

Explanation

Efforts for language inclusion in AI often come from domestic or state-driven initiatives. This is because states have better incentives to make AI accessible in local languages, even when it’s not financially lucrative.

Evidence

Example of Jugalbandi, a large language model for regional languages in India, developed in partnership with the state.

Major Discussion Point

International efforts to build more diverse AI

Agreed with

Gustavo Fonseca Ribeiro

Agreed on

Government support is crucial for local language AI development

Framing language inclusion as an accessibility issue

Explanation

Language inclusion in AI should be framed as an accessibility issue, similar to internet access. This approach would apply the same protections and requirements to AI as those used to spread internet access globally.

Major Discussion Point

International efforts to build more diverse AI

Economic incentives against data sharing

Explanation

The idea of a universal platform for data collection faces challenges due to economic incentives. Data is valuable, and those who possess it are unlikely to share it freely for others to profit from.

Evidence

Comparison of data to ‘the new oil’ and mention of economies built around data and data brokers.

Major Discussion Point

Challenges in creating universal data platforms

Differed with

Gustavo Fonseca Ribeiro

Differed on

Approach to data sharing and universal platforms

Personal data protection concerns

Explanation

The development of large language models raises concerns about personal data protection. There’s an increasing trend of accepting the use of personal data for training AI models, potentially diluting privacy protections.

Major Discussion Point

Challenges in creating universal data platforms

Need for more inclusive decision-making

Explanation

The lack of inclusivity in AI and internet infrastructure stems from a lack of diversity in decision-making processes. More diverse voices are needed to address the challenges of language inclusion in AI.

Major Discussion Point

Government support and incentives for local language AI

Sociological challenges in prioritizing local languages

Explanation

The prioritization of English in education and technology is a complex sociological issue. Changing this requires addressing broader societal norms and practices beyond just technological solutions.

Evidence

Mention of the three-language model in some Indian states as an attempt to address language diversity in education.

Major Discussion Point

Government support and incentives for local language AI

Gustavo Fonseca Ribeiro

Speech speed: 142 words per minute

Speech length: 2756 words

Speech time: 1159 seconds

Affects cultural rights and socioeconomic benefits

Explanation

Underrepresentation of languages in AI impacts cultural rights protected under international law. It also affects the socioeconomic benefits that communities can derive from AI technologies.

Evidence

Reference to the International Covenant on Economic, Social, and Cultural Rights, mentioning three protected cultural rights.

Major Discussion Point

Impact of underrepresentation of languages in AI

Agreed with

Nidhi Singh

Agreed on

Underrepresentation of languages in AI has significant impacts

Creates opportunities for local AI companies

Explanation

The lack of language representation in AI creates opportunities for local companies to develop solutions. This can lead to the emergence of startups focusing on underrepresented languages.

Evidence

Examples of Ghana NLP, Lelapa AI, and the Masakhane Foundation working on language inclusion in Africa.

Major Discussion Point

Impact of underrepresentation of languages in AI

Copyright law and exceptions for data mining

Explanation

Copyright law plays a crucial role in regulating access to data for AI training. Exceptions to copyright for data mining, such as those in the EU AI Act, can facilitate the development of more inclusive AI systems.

Evidence

Mention of the data mining exception in the EU AI Act for non-commercial purposes.

Major Discussion Point

Legal and ethical frameworks for AI inclusivity

Agreed with

Nidhi Singh

Agreed on

Need for legal and ethical frameworks to enhance AI inclusivity

Potential role of traditional knowledge protection

Explanation

The concept of traditional knowledge protection in international intellectual property law could be applied to AI and data. This could help address issues of data ownership and benefit-sharing for communities.

Evidence

Reference to debates on traditional medicines and healthcare in the late 90s and early 2000s.

Major Discussion Point

Legal and ethical frameworks for AI inclusivity

Knowledge sharing and capacity building by international organizations

Explanation

International organizations can play a role in sharing knowledge and strategies for language inclusion in AI. They can also focus on capacity building to develop the necessary human talent for AI development.

Evidence

Example of the Karya platform in India, which could be shared with other countries facing similar challenges.

Major Discussion Point

International efforts to build more diverse AI

Balancing power in international policy conversations

Explanation

International organizations can help balance power dynamics in discussions about AI and language inclusion. They can provide forums for different actors to engage in face-to-face conversations, particularly between the global north and the global majority.

Major Discussion Point

International efforts to build more diverse AI

Tradeoffs between openness and financial sustainability

Explanation

Creating open data commons for AI faces challenges related to financial sustainability. Companies may be reluctant to open their data due to competitive advantages and the need for revenue streams.

Evidence

Discussion of the European Union’s Data Governance Act and Data Act as attempts to create pan-European pools of datasets while considering cost structures.

Major Discussion Point

Challenges in creating universal data platforms

Differed with

Nidhi Singh

Differed on

Approach to data sharing and universal platforms

Examples of government support in African countries

Explanation

Some African governments are actively supporting the development of AI models in local languages. This demonstrates growing recognition of the importance of language inclusion in AI.

Evidence

Examples of government support for AI language models in Rwanda and Nigeria.

Major Discussion Point

Government support and incentives for local language AI

Agreed with

Nidhi Singh

Agreed on

Government support is crucial for local language AI development

Market forces driving AI development

Explanation

Market forces play a significant role in driving AI development, often favoring dominant languages like English. However, there are other purposes for language inclusion, such as public services and research, which may not have commercial motivations.

Evidence

Examples of English dominance in markets in countries like Kenya and Uganda.

Major Discussion Point

Government support and incentives for local language AI

Agreements

Agreement Points

Underrepresentation of languages in AI has significant impacts

Nidhi Singh

Gustavo Fonseca Ribeiro

Exclusion of non-dominant dialects and languages

Affects cultural rights and socioeconomic benefits

Both speakers agree that the underrepresentation of languages in AI has substantial negative impacts on cultural rights, socioeconomic benefits, and digital inclusion.

Need for legal and ethical frameworks to enhance AI inclusivity

Nidhi Singh

Gustavo Fonseca Ribeiro

Need for detailed implementation of inclusivity frameworks

Copyright law and exceptions for data mining

Both speakers emphasize the importance of developing and implementing legal and ethical frameworks to promote language inclusion in AI, including copyright exceptions and detailed inclusivity guidelines.

Government support is crucial for local language AI development

Nidhi Singh

Gustavo Fonseca Ribeiro

State-driven initiatives for language inclusion

Examples of government support in African countries

Both speakers highlight the importance of government support in developing AI models for local languages, citing examples from India and African countries.

Similar Viewpoints

Both speakers recognize that while the underrepresentation of languages in AI can exacerbate the digital divide, it also creates opportunities for local companies to develop solutions for underrepresented languages.

Nidhi Singh

Gustavo Fonseca Ribeiro

Exacerbates digital divide and loss of cultural identity

Creates opportunities for local AI companies

Both speakers emphasize the need for transparency, accountability, and balanced representation in AI development and policy discussions, particularly between the global north and global majority.

Nidhi Singh

Gustavo Fonseca Ribeiro

Importance of transparency and accountability mechanisms

Balancing power in international policy conversations

Unexpected Consensus

Challenges in creating universal data platforms

Nidhi Singh

Gustavo Fonseca Ribeiro

Economic incentives against data sharing

Tradeoffs between openness and financial sustainability

Both speakers unexpectedly agree on the challenges of creating universal data platforms for AI, citing economic incentives and financial sustainability as major obstacles. This consensus is significant as it highlights the complexity of balancing open data initiatives with commercial interests in AI development.

Overall Assessment

Summary

The speakers show strong agreement on the impacts of language underrepresentation in AI, the need for legal and ethical frameworks, the importance of government support, and the challenges in creating universal data platforms. They also share similar viewpoints on the digital divide, opportunities for local AI companies, and the need for transparency and balanced representation in AI development.

Consensus level

The level of consensus between the speakers is high, with agreement on most major points discussed. This high level of consensus implies a shared understanding of the challenges and potential solutions in addressing language inclusion in AI. It suggests that there is a common ground for developing policies and initiatives to promote more inclusive AI systems across different regions and languages.

Differences

Different Viewpoints

Approach to data sharing and universal platforms

Nidhi Singh

Gustavo Fonseca Ribeiro

Economic incentives against data sharing

Tradeoffs between openness and financial sustainability

While both speakers acknowledge challenges in data sharing, Nidhi Singh emphasizes the economic disincentives for companies to share valuable data, whereas Gustavo Fonseca Ribeiro focuses more on the balance between openness and financial sustainability, citing attempts like the EU’s Data Governance Act.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement were subtle and primarily focused on the approach to data sharing and the specifics of legal and regulatory frameworks for AI inclusivity.

Difference level

The level of disagreement between the speakers was relatively low. Both speakers generally agreed on the importance of language inclusion in AI and the need for legal and regulatory frameworks to support it. Their differences were mainly in the emphasis and specific approaches they suggested, rather than fundamental disagreements. This low level of disagreement suggests a general consensus on the importance of the issue and the need for action, which could be beneficial for advancing policies and initiatives in this area.

Partial Agreements

Both speakers agree on the need for legal and regulatory frameworks to enhance AI inclusivity, but they focus on different aspects. Nidhi Singh emphasizes the need for detailed implementation guidelines beyond broad inclusivity frameworks, while Gustavo Fonseca Ribeiro discusses specific legal mechanisms like copyright exceptions for data mining.

Nidhi Singh

Gustavo Fonseca Ribeiro

Need for detailed implementation of inclusivity frameworks

Copyright law and exceptions for data mining

Takeaways

Key Takeaways

Underrepresentation of languages in AI has significant impacts on cultural rights, socioeconomic benefits, and digital divide

Legal and ethical frameworks for AI inclusivity need more detailed implementation guidelines and transparency mechanisms

International efforts are needed to build more diverse AI, including knowledge sharing, capacity building, and balancing power in policy discussions

Creating universal data platforms faces challenges due to economic incentives and data protection concerns

Government support and incentives are crucial for developing AI in local languages, but sociological challenges persist

Resolutions and Action Items

International organizations should facilitate sharing of knowledge and strategies to enhance language inclusion across countries

Capacity building initiatives should be implemented to develop human talent for AI in diverse languages

Efforts should be made to bring better balance of power to conversations on international AI policy

Unresolved Issues

How to effectively implement inclusivity frameworks in AI development

Balancing openness of data with financial sustainability and market competitiveness

Addressing the lack of incentives for developers to invest in local language AI when education systems prioritize dominant languages

How to create universal data platforms that overcome economic and privacy challenges

Suggested Compromises

Using synthetic data generation to address data availability issues for underrepresented languages, while acknowledging potential limitations

Implementing copyright exceptions for non-commercial data mining to allow AI development while protecting intellectual property rights

Creating compulsory licensing arrangements for public sector to buy valuable datasets from private entities at reasonable prices

Thought Provoking Comments

Even if you do speak English, it’s not your dialect of English that goes in. So even within that, it’s only the version of English that’s most commonly present on the internet is something that’s being trained on. So in a sense, everybody’s sort of being excluded.

speaker

Nidhi Singh

reason

This comment highlights the nuanced issue of language bias in AI beyond just non-English languages, pointing out that even English speakers may be excluded if they don’t use the dominant dialect.

impact

It broadened the discussion from just underrepresented languages to issues of dialect and cultural expression within languages, leading to deeper exploration of inclusivity challenges.

As generative AI has gone up, universities have started using models to check if generative AI is being used, if students are cheating, if they’re using generative AI to turn in their homework or to write their papers. As a non-native speaker of English, even if you speak English with a high degree of proficiency, you are far more likely to be flagged for plagiarism.

speaker

Nidhi Singh

reason

This comment provides a concrete, real-world example of how language bias in AI can have serious consequences, especially in education.

impact

It shifted the conversation from abstract concepts to tangible impacts, prompting discussion on the ethical implications and potential discriminatory effects of AI in various sectors.

There exists this concept under international intellectual property, law of traditional knowledge. It protects traditional knowledge and traditional knowledge in the sense of knowledge that has been passed from generation to generation, for example, in traditional and indigenous communities.

speaker

Gustavo Fonseca Ribeiro

reason

This comment introduces a legal concept that could potentially be applied to AI and data rights, particularly for underrepresented communities.

impact

It opened up a new avenue of discussion on how existing legal frameworks might be adapted or applied to address issues of data ownership and cultural preservation in AI development.

Universal platform for data collection is a very interesting idea. I think that I’ve also heard a lot of conversations about AI Commons and data Commons. And yeah, the principle behind it is quite sound. Because basically what you’re saying is that you put all the data in one place so that everybody can benefit from it.

speaker

Nidhi Singh

reason

This comment addresses a potential solution to the problem of language bias in AI, while also acknowledging its challenges.

impact

It prompted a deeper discussion on the practical and economic challenges of creating inclusive AI systems, balancing idealism with realism.

Overall Assessment

These key comments shaped the discussion by expanding the scope of the conversation from simply underrepresented languages to issues of dialect, cultural expression, and real-world impacts. They introduced legal and ethical considerations, highlighted practical challenges, and prompted exploration of potential solutions. The discussion evolved from identifying problems to considering complex, multifaceted approaches to addressing language bias and inclusivity in AI development.

Follow-up Questions

How can synthetic data generation be used to address the problem of limited data availability for underrepresented languages?

speaker

Moderator

explanation

This is important to explore potential technical solutions for increasing language diversity in AI training data.

How can copyright laws be adapted to balance access to data for AI development with protection of intellectual property?

speaker

Gustavo Fonseca Ribeiro

explanation

This is crucial for determining how data can be legally used to train AI models while respecting copyright.

How does the concept of traditional knowledge under international intellectual property law apply to AI and data?

speaker

Gustavo Fonseca Ribeiro

explanation

This is an underexplored area that could have implications for how indigenous and traditional knowledge is protected and used in AI development.

How can personality rights be applied to AI training data, particularly when an AI is trained on someone’s voice or likeness but doesn’t directly replicate it?

speaker

Gustavo Fonseca Ribeiro

explanation

This is an unresolved legal question that has implications for data collection and AI training practices.

What are the benefits and challenges of creating a universal platform for data collection?

speaker

Kathleen Scoggin (online moderator)

explanation

This explores potential solutions for improving data diversity and accessibility for AI development.

How much appetite is there from governments to support initiatives developing local language AI models?

speaker

Audience member

explanation

This is important for understanding the potential for government-backed efforts to increase language diversity in AI.

How can education systems be changed to incentivize the development and use of AI in local languages?

speaker

Audience member

explanation

This addresses the systemic factors that influence language representation in AI development and use.

What are the opportunities and risks of localizing AI models to specific cultures and languages?

speaker

Audience member (via chat)

explanation

This explores the broader implications of developing AI models tailored to specific linguistic and cultural contexts.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.