WS #100 Integrating the Global South in Global AI Governance
Session at a Glance
Summary
This panel discussion focused on the inclusion of the Global South in AI governance and development. Experts from various organizations discussed challenges and opportunities for increasing participation from developing countries in the AI ecosystem.
Key issues highlighted included the technology gap between developed and developing nations, regulatory uncertainty in many Global South countries, and the need for capacity building. Panelists emphasized the importance of local data collection and infrastructure development to enable AI innovation. They also discussed the role of private sector companies in providing tools and platforms to support AI development in emerging markets.
The discussion touched on ethical considerations, including fair treatment of workers involved in AI training data labeling. Panelists noted the need for inclusive stakeholder engagement when developing AI governance frameworks and ethics guidelines. Cultural factors that may inhibit participation from the Global South were also explored.
Opportunities highlighted included leveraging synthetic data generation, adapting AI solutions to work with limited computational resources, and creating accelerator programs to support local AI startups. The importance of building AI literacy and technical capacity across all levels of society was stressed.
Overall, the panel emphasized that while challenges remain, there are promising avenues to increase meaningful inclusion of the Global South in shaping the future of AI. Collaboration between governments, industry, academia and civil society will be crucial to realizing this goal.
Keypoints
Major discussion points:
– Challenges of AI governance and inclusion in the MENA region, including technology gaps, lack of representation, and regulatory uncertainty
– The role of the private sector, governments, and international organizations in promoting AI development and ethical standards in the Global South
– The importance of capacity building, data availability, and localization of AI technologies for inclusive development
– Balancing innovation with responsible AI governance and regulation
– Opportunities and challenges for the MENA region to become a global AI hub
Overall purpose:
The discussion aimed to explore ways to operationalize inclusion in AI governance ecosystems, particularly for the MENA region and Global South countries. Panelists examined barriers to participation and proposed strategies for increasing representation and capacity.
Tone:
The overall tone was constructive and solution-oriented. Panelists acknowledged challenges but focused on opportunities and practical steps for improvement. There was a sense of cautious optimism about the potential for the MENA region to play a larger role in global AI development, balanced with realism about the work still needed. The tone became more urgent when discussing the need for capacity building and literacy to enable meaningful participation.
Speakers
– Salem Fadi: Director of the Policy Research Department at the Mohammad bin Rashid School of Government
– Salma Alkhoudi: Head researcher on AI governance research project
– Nibal Idlebi: UN ESCWA Acting Director of the Cluster on Statistics, Information Society and Technology
– Roeske Martin: Director of Government Affairs and Public Policy at Google MENA
– Jill Nelson: IEEE Standards Association advisor
Additional speakers:
– Jasmin Alduri: Co-director of the Responsible Tech Hub
– Lars Ratscheid: Works in international cooperation (from Germany)
Full session report
Expanded Summary: Inclusion of the Global South in AI Governance and Development
This panel discussion focused on the inclusion of the Global South, particularly the MENA region, in AI governance and development. Experts from various organisations explored challenges and opportunities for increasing participation from developing countries in the AI ecosystem.
Research Findings:
The Mohammad bin Rashid School of Government presented key findings from a survey of over 320 AI and digital companies across 10 MENA countries:
– Main concerns: cybersecurity, AI explainability, and bias
– Regulatory uncertainty is a major challenge for companies
– Nearly one-third of companies face interoperability issues with regulations across the MENA region
– High levels of partial implementation of AI ethics standards across categories
Key Challenges:
1. Technology Gap and Infrastructure
A fundamental issue underlying many challenges is the significant technology gap between developed and developing nations. Nibal Idlebi, Acting Director at UN ESCWA, highlighted this disparity, noting that “everything is related to technology gap”. This gap manifests in several ways:
– Lack of computing power and infrastructure in developing countries
– Limited access to local data for AI development
– Insufficient representation in global AI forums and discussions
2. Regulatory Uncertainty
Martin Roeske, Director at Google MENA, emphasised that regulatory uncertainty, rather than a lack of regulation, is holding back private sector involvement in AI development in the region. This uncertainty creates hesitation among businesses to fully engage in AI initiatives.
3. Capacity and Literacy
There was broad consensus among speakers on the critical need for capacity building and AI literacy. This includes:
– Building capacity at the decision-making level to enable meaningful participation in global AI governance (Nibal Idlebi)
– Improving general AI literacy to foster understanding and adoption (Jill Nelson, IEEE Standards Association advisor)
– Creating pathways for local talent to succeed in the AI field (Martin Roeske)
4. Ethical Considerations
The discussion touched on important ethical considerations in AI development:
– Jasmin Alduri, Co-director of the Responsible Tech Hub, raised concerns about the exploitation of click workers in the Global South
– Jill Nelson emphasised the need to consider all stakeholders, including data labellers, in ethical assessments
– Martin Roeske stressed the importance of building ethical principles into products from the start
5. Language Barriers
The panel noted the importance of Arabic language support in AI tools to increase accessibility and adoption in the MENA region.
Opportunities and Proposed Solutions:
1. Data Generation and Sharing
To address data scarcity, speakers proposed various solutions:
– Initiatives to encourage local data generation (Nibal Idlebi)
– Creation of data commons and public data sharing (Martin Roeske)
– Use of satellite data and GIS information (Nibal Idlebi)
2. Fostering Local AI Ecosystems
Speakers agreed on the importance of nurturing local AI talent and businesses:
– Encouraging local businesses and startups to adopt AI (Martin Roeske)
– Creating accelerator programmes, particularly for women founders (Martin Roeske)
– Growing local expertise and employment opportunities (Jill Nelson)
3. Capacity Building Initiatives
Proposed initiatives included:
– Implementing chief AI officers across government departments (Martin Roeske)
– Continuing education and certification programs for decision-makers (Jill Nelson)
– Literacy programs to understand AI capabilities and limitations (Jill Nelson)
4. Leveraging Private Sector Involvement
Jill Nelson and Martin Roeske emphasised the crucial role of the private sector in enabling inclusion in AI governance and development, particularly in encouraging local businesses and startups.
Role of International Organizations and Standards:
– IEEE’s grassroots approach to developing social-technical standards for AI (Jill Nelson)
– Google’s implementation of AI principles in product development (Martin Roeske)
– UNESCO’s ethics guidelines for AI and their adoption by organizations (Nibal Idlebi)
Unresolved Issues:
Several important questions remain unresolved:
1. How to effectively bridge the technology gap between the Global North and South in AI development
2. Balancing innovation with regulation in emerging AI markets
3. Ensuring fair compensation and treatment of data labellers and click workers in the Global South
4. Addressing the lack of computing power and infrastructure in developing countries for AI development
Conclusion:
The discussion highlighted the complex interplay between AI development, global inequalities, and development challenges in the Global South. While significant obstacles remain, there are promising avenues to increase meaningful inclusion of the Global South in shaping the future of AI. The panel emphasized opportunities for growth, such as the high adoption rate of AI tools like Google’s Gemini in the MENA region.
Moving forward, collaboration between governments, industry, academia, and civil society will be crucial to realising the potential of AI in the Global South. By focusing on capacity building, fostering local ecosystems, and addressing regulatory challenges, the MENA region and other developing areas can play a more significant role in the global AI landscape. As the discussion demonstrated, there is a strong commitment to leveraging AI as a tool for economic development and social progress in the Global South.
Session Transcript
Fadi Salim: of our panel. My name is Fadi Salim. I’m the Director of the Policy Research Department at the Mohammad bin Rashid School of Government. It gives me great pleasure to welcome you to this panel. The panel will include a distinguished set of experts from technical communities, international bodies, as well as the private sector, who will join us momentarily. Prior to the panel, my colleague, Salma Alkhoudi, will present key highlights from a key research project that we are running around the region, covering 10 countries in the Middle East and North Africa, trying to explore these questions on how to navigate the fragmentation of the AI governance ecosystem in our region, but also looking into the AI ecosystem: private sector companies, small and medium businesses and startups who are working in this field across the region. So the umbrella of this panel is trying to understand how to operationalize inclusion better in the AI governance ecosystem. Before I ask my colleague, Salma, to start, if this works, let me just tell you quickly about us. The Mohammad bin Rashid School of Government is an academic institution and a policy think tank, and works, among many other areas of policy and research, on future government and digital governance. We have multiple publications and research projects. We work with many organizations and partners; the research we are presenting here is supported by google.org and carried out in collaboration with numerous stakeholders. These research projects cover lots of areas, and I would especially like to highlight the ongoing research and capacity building projects related to AI governance, 
inclusion and AI competitiveness in MENA, the one whose highlights we’re presenting and talking about here, as well as AI ethics assessment and capacity building in collaboration and partnership with the IEEE, global risk mapping on AI with FLI, OECD, GPA and others, as well as generative AI and the public workforce. These are some of the ongoing projects. So there is clearly a lot of interest in the region in this field, and we hope that this will be something that will inform decision making and capacity building. This is the executive education work that we were doing with the IEEE, working on capacity building and building direct assurance related to AI ethics in the public sector, but also across society. That is something that produces a group of experts who are authorized to assess the ethical implications of artificial intelligence in their workplace. And we hold a lot of workshops and policy programs with many stakeholders across the region, including government bodies, to try to engage with and cover this question around inclusion. Now, I’ll hand over to my colleague, Salma, who will take us through some of the research findings of this important project, and then we’ll ask the distinguished panelists to join us for the panel.
Salma Alkhoudi: Well, before I get to the slide: is this loud enough? Is this good? Okay. I’m Salma. I was the head researcher on this research project that Dr. Fadi mentioned. This research project, as mentioned, was carried out over many, many months. We surveyed over 320 companies in AI and digital across the MENA region. MENA, as we define it for this project, is about 10 countries: Morocco, Tunisia, Egypt, Jordan, and then the six countries of the GCC, including Saudi Arabia and the UAE. So just a quick overview and introduction to lay the ground. I’m sure all of you are already well aware that AI is top of mind. Adoption has more than tripled globally since 2017. 50% of organizations worldwide now rely on AI in at least one business function; I think the actual figure is much higher now. And there’s a dramatic surge in generative AI adoption through open source and private models, which, of course, complicates the governance landscape. As a researcher, I see three interlocking challenges that contribute to this complexity in the global AI governance landscape. First, we have a fundamental definitional challenge: how do we govern something that we can’t concretely define? And this isn’t just semantic. It’s a practical problem, because the tech is evolving faster than our ability to kind of grab onto AI and define it as a subset of things. Second, we’re dealing with a few key issues that cross borders and cultures. And this is kind of a laundry list of, you know, just the tip of the iceberg as to what falls under the domain of global AI governance: data privacy, cross-border data flows, transparency, bias. These are deeply social and cultural as well as technical challenges, and I’m sure you’ve heard a plethora of them across the panels and workshops from today. And finally, we see a few core tensions that sort of revolve around the global AI governance landscape, which include innovation versus regulation. 
A lot of people view this as a false dichotomy; a lot of people hold that it is a true tension that exists. Economic transformation versus job displacement. Transparency versus intellectual property. And inclusive development versus monopolization. Of course, all of these tensions we heard time and time again through our expert interviews, through interviews with SME founders, startup founders, angel investors, and of course, in our survey. So, there are four distinct governance approaches that emerge when we look at the landscape globally. Risk-based approaches like the EU’s AI Act, which is focused on classifying and mitigating potential harms. There’s the rules-based approach exemplified by China’s Gen-AI measures, which provide very specific, concrete requirements for AI models. There are principles-based approaches like Canada’s voluntary code (and, by the way, most countries in the MENA region are also principles-based), offering flexible guidelines as to where companies should head directionally. And then outcomes-based approaches, as seen in Japan, which focus on measurable results rather than prescriptive processes. Each one has its pros and cons, but the crucial question is: how well do these different approaches serve the Global South’s needs and contexts? Of course, we gleaned some interesting and important insights from our expert interviews. As for some of the principal challenges that face the region in governance: first, geopolitical concerns have taken attention away from technological progress, if not pushed many countries in the MENA region a few years back in terms of issues like health and safety and education. Again, the definitional issue came up in our expert interviews: if you can’t define what AI is, you can’t define anything that includes the term AI, including AI governance, AI ethics and responsible AI. And of course, there’s the original problem of global cooperation. 
MENA countries are also in very different circumstances, and they have very different priorities. The MENA region, or the Arab world, encompasses anywhere from emerging global AI leaders like the UAE and Saudi Arabia to countries that are currently emerging from decades of war, and countries that are still embroiled in war. So the playing field is very, very disparate and very large. What we heard time and time again from our experts, which is also quite interesting, is that a shared geographic location, or even a shared identity or language, like being Arab or speaking Arabic, is not enough to unify efforts. Some people believe that it is enough, and that it should be enough. So that’s another question for our panelists, perhaps. The result is a lack of openness and sharing, which further complicates the governance landscape in the MENA region. When it comes to inclusive governance, there’s kind of a sobering reality that we all have to grapple with, as pessimistic as it may seem. And I’m just here to lay out problems, by the way; hopefully the panel will tackle solutions. How can it be a race if we don’t all start from the same point? Technology is an important national priority, certainly, but what about things like the inability to read and write? Or not enough access to proper health care? So it’s a problem of national priority. The issue of quick implementation versus governance is also a big one. There’s a lot of push to get things done now, and quickly, before things accelerate even faster, given the pace of AI. Many experts we spoke to stressed that governance needs to come first. Others don’t think so; they think that we need to just move as fast as possible, especially as a region that is being left behind in many instances. And when asked if MENA countries are invited to the table, the answer was yes and no: global fora are open to their members, but they aren’t always taken seriously. 
Their contributions are siloed and weak. Often it’s just one country from the Arab world that is in attendance, and as a result, there aren’t more invitations to lead AI governance framework conversations. So that’s a little bit of insight from the experts that we’ve spoken to. But we wanted to also dive a little bit into our survey findings. This is preliminary; we’re going to reveal the full survey results with our published report in early 2025. And we only dove into the survey findings that correlate with the topic at hand, which is global AI governance inclusion, and also a bit on regulation and interoperability as they relate. So first, this hierarchy of concerns. Cybersecurity tops the list, with 258 companies expressing concern. There’s a high level of worry also about AI explainability and bias. The interesting thing, when we triangulate this data with our interviews, is that there are some deeply felt cultural challenges here as well. When a MENA company struggles with AI explainability, they’re not just dealing with algorithmic complexity. They’re wrestling with how to make AI systems comprehensible in a setting that’s so widely diverse in terms of language, in terms of tradition, in terms of viewpoints. So it’s not just about technology and technological literacy; it’s not just about the technical problems. We also wanted to dive into the particularities, so we asked what the negative impacts of regulations are, if any. 22.6% of respondents cite increased costs as their biggest concern. It makes sense, because funding is the biggest concern for SMEs in general, so the increased cost of regulations is top of mind. The combined impact of slowing innovation and limiting AI applications also accounts for over 36% of the responses. Then we also asked about potential positive impacts. 
Despite all those concerns, nearly 30% of companies acknowledge that regulations are making AI more secure and trustworthy and add that to the around 17% who see increased consumer confidence. This is pretty in line with what we heard in our interviews with company founders as well. They do believe regulations have a role to play in facilitating innovation, especially as they look to scale across markets, across borders, but they’re still hesitant at the scope of regulatory reach because the definitions are so vague because there’s still so much on the horizon. This radar chart looks quite simple, but the clustering towards supportive, very supportive and neutral rather than the extremes on the other sides, along with our interview data tells us something really important, which is that our region isn’t suffering from over-regulation, it’s suffering from regulatory uncertainty. There’s no sort of sense of where things are headed. And companies have told us time and time again that they’re trying to figure out how to navigate a regulatory landscape that’s still sort of taking shape and emerging. And then perhaps, you know, one of the most interesting findings is interoperability. Nearly one third of companies face interoperability issues with regulations across the MENA region. And the vast majority of those who said no are companies that are too small to have tried to scale across countries anyways. So when 31% of companies say they face interoperability issues, they’re really highlighting a fundamental question, which is how can we build a unified AI ecosystem in a region with very diverse regulatory approaches and different development priorities and varying levels of digital infrastructure? On the question of AI ethics standards implementation, this is the last chart before we’ll hand it off to our experts in residence today. 
The high levels of partially implemented across all the categories is not just about these companies themselves being halfway there, it’s about an entire ecosystem in transition, which is kind of how you can define the ecosystems that are still emerging and developing in the MENA region. And what’s particularly striking is that areas like record keeping and transparency show higher full implementation rates than things like third party evaluations. And this suggests that these companies are better at internal governance than external validation. Still need to extract a bit more insights as to what this means from our respondents, but this is also a critical gap when we think about building regional and global trust in our AI systems. So with that, I will hand it back off to Dr. Fadi and invite our panelists to stage.
Fadi Salim: Thank you. Thank you, Salma. Lots of points to talk about, but first let me ask the distinguished panelists to join us: Dr. Nibal Idlebi, the UN ESCWA Acting Director of the Cluster on Statistics, Information Society and Technology; Martin Roeske, Director of Government Affairs and Public Policy at Google MENA; and Jill Fayyad, an advisor to the IEEE Standards Association, who leads many projects related to AI ethics capacity building. Clearly, from the presentation in which we highlighted some of the findings, and based on the discussions we had earlier with other panels today, there are a lot of questions around inclusion in the region, and it has been an area in which our region has not been able to achieve proper inclusion in the digital age. I will start with Dr. Nibal from ESCWA. You have a regional view in ESCWA; you have a clear, deep understanding of each country in our region, and this region is a sample of the world: we have some of the highest ranking countries in the world in many of the digital transformation areas, down to some of the least developed. So in our region, what should the real goals of inclusion be, in the digital era as well as in the age of AI? What are we looking for? How can we understand what inclusion can lead to?
Nibal Idlebi: Okay, good afternoon everyone. Do you hear me well? It’s okay? Okay. There are different facets of inclusion in the Arab region, for sure, especially as AI is emerging in many countries. We can see that Arab countries are not homogeneous in terms of AI development. We have countries which are really heading very well, like some GCC countries, such as the UAE, Saudi Arabia and Qatar, and maybe other countries; some countries now have a national AI strategy, while other countries are really lagging behind. Whenever we speak about the region, we cannot speak about all countries together, because there are leaders in technology development, like the GCC in general, and maybe Jordan, and there are other countries which are really lagging behind, like, let us say, Somalia, Syria, Iraq, Libya, and so on. For inclusion, I believe there are different levels. At least from the perspective of AI, we need to involve all stakeholders in the discussion about AI, whether on the AI strategy, the AI framework, or AI governance. Inclusion of all stakeholders means private sector, government, as well as NGOs and academia. Academia plays a very important role in AI; in some information society discussions we forget about them in some cases, but today, in AI, it is one of the areas where research and development is very important, and therefore the inclusion of academia is very important. Then, in the design and in the discussion of AI, we need to include all stakeholders, but also all disciplines, because we know that AI matters for many sectors, like healthcare, education, agriculture, and maybe transportation; the discussion should not be only between technological people, but also interdisciplinary. This is one side of inclusion. 
The second side is to include all segments of people. Whenever we are developing any system for AI, whether during the design, the deployment, or the use, we have to include all people: people with disabilities, all races, all segments of the society, elderly people, women, youth, everyone. We have to include all segments of the society in our algorithms, in our thinking, in our design, in our strategy. This is also very important, because the needs might be different from one segment to another, and here we have to think globally about all societal groups. There is also, maybe, inclusion in terms of data, because data is very important in this regard. We know that in some of the Arab countries we lack a lot of data. Data are not very well developed; we don’t have everything in digital format. I mean, we don’t have enough data in digital form, in a way or another. So on one side we need to have the data; we need to have it clean and reliable, and we have to have it timely, in a way or another. And the inclusion of data that represents the whole region, or a subregion, or a locality, is another form of inclusion, in order to have our algorithms or our AI systems addressing all the needs of the society. Of course, if we speak about agriculture, we need to focus on agriculture; we don’t need to focus on everything. But whenever we think about it, the data is very important, and here we have to encourage or to generate data in digital format today in order to have representation of the needs at the different levels. I will stop here for the time being.
Fadi Salim: Thank you, Nibal. Very important, and you have also talked and worked a lot in the past on open data in the Arab region, something that is not yet mature around the region, which also limits the availability of data for AI development. And this brings me now to my questions for Martin. Martin, you come from a global leader in AI development, Google, which is very active in the region. As a private sector leader in this domain, and based on your understanding of our regional context, what is the role of a private sector leader in AI in helping the inclusion of the region, whether in data availability, data representativeness, or having a seat at the table in these discussions around AI development globally, so that the region has a voice?
Roeske Martin: Thank you, Fadi, both for having us here and for your great partnership in this research that we’ve done together. Some very interesting data points are coming out of it. Now, to your question: I think there are many ways in which the private sector, tech companies in particular, can play a key role in the region. Maybe before I go on, there are just a couple of things we should focus on when we talk about governance, because those are all aspects that Google is also very involved with, and we’re looking at it through a number of different lenses. One is, of course, around equitable access, so bridging all the different divides. As His Excellency Abdallah Sawaha mentioned in his opening remarks today, there are all these different types of divide: the digital divide, the algorithmic divide, the data divide, et cetera. A lot of that is around accessible AI tools, affordable tools, access to infrastructure. Still, one third of the world is not on the internet, so how do we help bridge that gap? And then making sure people have the right capacity and skills. One of the things we’ve been very focused on as a private sector entity here over the past few years is trying to create skills programs for everyone, not just for technologists or developers, but also for users of AI, whether those are users of Gen AI or just people exposed to it at school and university, and for small and medium enterprises, so they can adopt AI. There it’s important, for example, to make this available for free, in language, in Arabic, and to scale it to as many people as possible. Just a few weeks ago, we announced a new google.org program granting $15 million over the next few years to train 500,000 people in the Arab world and to give grants to research universities on AI. The second point is about mitigating bias: how do we create AI systems that are fair, unbiased, and have worked with inclusive data sets? You 
know, because a lot of the AI forays have been made in the West, or in China, this part of the world hasn’t traditionally been part of the data sets that were used to train models, for example. So very conscious efforts have to be made to ensure that these data sets are inclusive. On protecting privacy and security: that’s obviously one of the key areas that all governance efforts are focused on, and there are lots of techniques that we as private sector companies use: differential privacy techniques, trying to anonymize data, preventing data from being widely shared if it doesn’t have to be for a particular purpose, giving users the option to opt out of their data being collected, and giving website owners the option to opt out of their information being used to train models. So making it a user choice as to how much data can be shared, and using those techniques to keep it private and secure. And then finally, about promoting transparency and accountability, which was also one of the primary concerns your survey brought up: there’s a lot of work happening across the board, and Google participates in many global fora when it comes to privacy. We’ve done a lot of work recently on explainable AI techniques, for example proving data provenance or content provenance. We’ve introduced tools like SynthID, which is a way to watermark content that’s generated by AI, so synthetic or adapted content can easily be identified, whether it comes out of an image generation model, a text model or a video model. We have asked our advertisers to disclose if any contents of their ads are generated by AI, particularly in sensitive contexts, like elections, and we’ve introduced new policies to make sure that electoral ads follow particular rules. We have also asked creators to disclose if their content is generated through AI, and we also 
Um, provide information about images on search where you can look at how this image was where this image appeared originally. Whether it’s been modified, what its history has been on the Internet. So there’s a problem that you can, we can track it back to. So, I think across those 4 domains, equitable access. Mitigating bias, privacy, security and transparency accountability. It’s like, there’s a huge role to play and, um, as you said. You know, it’s a multi stakeholder dialogue. Um, it’s very important that. All players are part of it. Um, we’d like to be part of the convening where we can. So 1 example is, uh, in the UAE, we’ve started this thing called an AI mattress. Where we bring together stakeholders from across academia. By 2 organizations and the tech industry as well as governments. To discuss, uh, responsible policymaking. And, um, I think those kinds of 4 help, uh, with taking some of the discussions happening. At IGF and elsewhere to a more local level and then continue that.
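[Editor's note: as a rough illustration of the differential-privacy idea Martin mentions above — this is a generic textbook sketch, not Google's implementation; the function names, data, and epsilon value are invented for the example — a simple count query can be released with Laplace noise calibrated to the query's sensitivity:]

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. Exponential(rate = 1/scale) draws
    # follows a Laplace(0, scale) distribution.
    rate = 1.0 / scale
    return random.expovariate(rate) - random.expovariate(rate)

def dp_count(records, predicate, epsilon: float = 1.0) -> float:
    # A counting query has sensitivity 1 (adding or removing one person
    # changes the count by at most 1), so noise with scale 1/epsilon
    # gives epsilon-differential privacy for this single query.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical example: privately release how many users opted in.
random.seed(42)
users = [{"opted_in": random.random() < 0.7} for _ in range(10_000)]
noisy = dp_count(users, lambda r: r["opted_in"], epsilon=0.5)
```

Smaller epsilon means more noise and stronger privacy; real deployments additionally track a privacy budget across repeated queries rather than treating each query in isolation.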
Fadi Salim: Thank you, Martin. And this brings us to Jill. Jill, you come from the IEEE, which is a standards organization, but its structure of doing things also has a lot of horizontal working groups across domains, across jurisdictions, across the world. The same thing happens in the ICANN ecosystem: this multi-stakeholder model that enables a lot of people to participate in creating something, or at least be included in a discussion that could eventually represent them, or represent what they want as an outcome. Now, the question is that our region, and this is something we hear a lot from these organizations, does not participate enough. And it’s not just this region; the Global South in general, to use that term, has that same issue. Is this a question of capacity, or of awareness, or are there other reasons that we are not involved at a mass scale, whether it’s the researchers and academics, the experts, or the technical community? What is your view, as a standards organization that functions in such a model? Thank you.
Jill: Thank you for the opportunity, and for the question, by the way. So, IEEE, as you say, is a standards organization, but it’s not a standards organization in the traditional sense of just developing standards. It is a non-profit organization that is completely volunteer-based, in the sense that all the people who participate in the standards are volunteers, and they are structured into working groups. So you can think of it as a grassroots, bottom-up approach, as opposed to standards organizations where direction comes from governments and trickles all the way down to get consensus at the engineering level. No, it’s more the engineers deciding that they need to build the standard. For example, Wi-Fi, IEEE 802.11, was built this way because it addressed a need, and by doing it this way you are able to do it fast. Now, that’s one aspect of it. The fact that it is grassroots is something that maybe we don’t advertise enough, because it’s open to everybody: everybody can participate in the working groups, everybody can contribute. And if we have a problem of inclusiveness at the level of AI governance, I think the problem has two sides to it. There is one side, as Martin described and as you heard before, which is about how you import and localize technology in Global South regions, and I don’t like the term Global South; let’s say outside of Western Europe and North America. So we have to localize, and it is very important to be able to localize. But there is also the fact that we can contribute. If we don’t contribute in these countries the same way others are contributing, then how can we expect to have our values and our cultural aspects reflected in the technologies that we use? So we have the opportunity to contribute to these standards as well, and that matters because once they are standards, they get adopted.
And they very often get adopted through regulatory channels in many different regions. That also helps address the other point you brought up, which is how we address standards or regulations at the country level versus the regional level. Pierce McConnell of DG Connect at the EU level, at the plenary yesterday on misinformation and disinformation, was reflecting on the fact that the EU was a consumer society of AI, pretty much like the Global South is, but it managed to get its voice heard by getting its needs as a consumer society reflected through EU regulations. That is what GDPR was about. So the Brussels effect is really about enabling a build-up of needs at a regional level, so that these needs can be taken into account by the technology developers and integrated into solutions. But this is outside of IEEE. Just to go back to the IEEE question, I think the interesting part is that IEEE, beyond standards, also offers other avenues that are very useful. We are collaborating on one of these, which is about capacity building. Capacity building starts with people, then with data, and then ultimately with compute, in that order, in the sense that you first need the literacy. The survey showed us that we don’t have a clear definition of AI; we don’t know exactly what AI ethics is, what responsible AI is, what the difference is, why there is a need for trustworthy AI, all of these things. There is a need for literacy in the first place. That literacy, by the way, is needed everywhere, not just in the region. It’s needed by the city manager in North America as much as by the government service provider in a GCC country.
And that literacy then allows you to understand that you have an issue with data representativeness, as Martin was reflecting: many of these algorithms are built on data, and that data is reflective of certain societies and cultures. If your data is not represented, you might be, not necessarily, but you might be, misrepresented at the algorithmic level, and the outcomes might not be beneficial for you. You might have bias, transparency issues, or other problems associated with that. So from that perspective, capacity building allows you to contribute as well: participating in standards working groups, going through capacity-building efforts such as the one we are developing, and, last but not least, having the ability to localize content. Because we came to the conclusion that you can develop a solution and get it adopted and spread in a specific region or country, but at the end of the day the people who are going to implement it might not all be English speakers. So you need the ability to translate these solutions into local languages. You need the ability to adjust the solutions to the cultural representativeness of the populations you have, and to avoid data under-representation, or at least compensate for it with local data or with processes around the solutions that allow you to avoid the kinds of bias issues you can otherwise find.
Fadi Salim: Thank you. And this covers a little bit the grassroots element of it: awareness, diversity, inclusion, access. I’ll come back to you, Dr. Nibal, and ask about the higher levels of representativeness in the AI governance ecosystem. You, at ESCWA, as a regional UN body, deal with member states, and in these member states you deal with regulators, ministers, and stakeholders at the top of the governance ecosystem who are tasked with representing their countries in global fora around AI governance, digital inclusion, or other areas as well. Based on your experience, and you highlighted this, some countries have structural restrictions, related to conflicts and all of that, but some countries also do not have a seat at the table, although they have something to offer in some of these global fora. How can these voices, and this is not just about our region, it can also be an issue for the rest of the Global South, be represented at the higher levels of the AI governance ecosystem that usually decides on AI safety, AI measurement, AI standards, AI ethics? It has real implications for their countries, but they don’t have a seat at the table at the top of the chain. Do you think there is a way for a mechanism to exist that represents these countries of the Global South, especially in AI, given that these discussions are currently more of an elite club?
Nibal Idlebi: Yeah, in fact, it is quite complex, let me say that. First of all, I would like to return to what Jill was saying and mention that there is a technology gap between the South and the North. Behind the scenes, everything is related to this technology gap. There is a big gap nowadays between the developed, or Northern, countries and the Southern countries, and from it we can derive a lot of issues. This gap is sustained over time, and sometimes it is becoming bigger. Call it a technology gap or a digital gap; in this case maybe both, because when I think about big data and everything else, the gap is unfortunately widening, and we are not easily bridging it, even in digital technology. But returning to your question, I believe we have to work at different levels. On one side, we have to work at the decision-making level, because it is the decision-makers who, in one way or another, determine their countries’ participation in the global forums; it is the ministries or the regulatory authorities that are visible vis-a-vis these global forums, like the IGF, for example, or WSIS, or the Global Digital Compact. These decision-makers should be aware of the importance of their engagement and their participation. And here maybe we can start with countries that are more advanced in the South, like, for example, KSA and the UAE in the Arab region, who can afford it and can push the discussion more. So on one side there are the decision-makers, but we also have to build the capacity of the people who will go and discuss.
Here also we sometimes see a gap in capacity building, because they cannot argue enough on the matters. Because of the digital or technology gap we have, we have to admit that in the South we are more users of the technology than developers of it. Speaking about AI technology, most of the time it is developed in the US, or maybe in Europe; you cannot find many global solutions being developed here in the region now, unless they are very local solutions, not the big solutions from the big companies. So there is the capacity building of those who will be arguing, the technical people, or academic people if that is the need. I believe we have to be more proactive in these two areas, and maybe build some consensus at the regional level, to have representation from the different regions. And here we can collaborate with other regions as well: Africa, for example, the Arab countries, and Asia; some Asian countries are quite developed today. This consensus building between Southern regions matters because we do have similar issues, like the issue of language and the technology gap. So there are the decision-makers, and at least practitioners should be really aware of the issue. And if we want to go even deeper, you mentioned that it was the consumer society that generated GDPR; it would be fantastic if we could also spread this knowledge among users, and here maybe NGOs come in. I think the IGF is a very good forum for building the capacity of NGOs and users, from the user perspective. So I believe we really need
to build capacity and to convince decision-makers at the highest level to participate, and to show them the value of their participation. I think some of them are aware today; I have seen many times representatives of KSA or the UAE participating in international forums, Oman also participates sometimes, and Egypt sometimes as well. But there is also this building of networks between the regions of the South, because we have the same issues, so we have to push our agenda more than we are doing today.
Fadi Salim: Great. As a school of government, we are coming from that point of view. To add to your point, we do deal with people in senior, mid-career, and high-level positions around the region in terms of capacity building and leadership development, so there is definitely something that can be done in terms of capacity building at the highest level as well, in many countries around the region, where leaders need to be aware but sometimes first have to develop the capacity. And then there are the issues around language, access, and financial matters that do not allow them to access these fora; all of these fall under leadership capacity building. So thank you for that.
Nibal Idlebi: Let me just add one thing that I had forgotten: the private sector. The role of the private sector, and its involvement, matters because technology is developed by the private sector, so they have to be in the loop as well.
Fadi Salim: Absolutely. Private sector, back to you, Martin. I wanted to ask you about the data that Selma presented. We noticed, and this is about companies, private sector companies: the survey covered hundreds of small and medium enterprises and startups across around 10 countries in the region, working specifically in the AI domain, and in the data Selma showed they identified regulatory uncertainty as a question mark. Even in interviews, discussing this with some of these companies, they feel less willing to participate and to share even their issues with us as researchers. So there is this culture of “why do I need to share?”; I will leave it to the anthropologists such as Selma to understand why this is happening. But from your point of view as a private sector leader, this regulatory uncertainty was also highlighted in many of these discussions as a restricting factor. How do you balance this trade-off? In our region, some countries have a clear direction around regulations on AI, but others do not have anything, and this uncertainty is causing many of our companies, but also individuals who are interested in being included, to hold back. Do you have a view on this trade-off that is happening right now in the private sector, at least among the small and medium enterprises you might be involved with?
Roeske Martin: Thanks, Fadi, great question. I think you made a great point that came out in the research, which is that it’s maybe not the absence of regulation, it’s that level of uncertainty, that is holding the private sector back a bit. I’d like to offer a slightly different perspective, because I feel this region is actually getting quite a lot of things right when it comes to governance of AI, and a lot of the focus over the past couple of years has really been on ethics and principles. First, countries adopted national AI strategies, and pretty much every country in the region now has one, at different levels of implementation, in many cases with international input and support from the private sector; they then developed their ethics principles and guidelines. What they haven’t done yet is come out with hard regulation and laws similar to the AI Act, and in our mind that’s not necessarily a bad thing. There has been an intentional bit of a wait-and-see attitude, to first let the technology get to a point where you can see the actual use cases in practice. A lot of countries in the region, Saudi Arabia being a good example, the UAE and others, have implemented regulatory sandboxes where you can try new technologies in a safe environment, with controlled implementation, and we’ve seen some great investment in things like Arabic LLMs. You’ve got the Falcon model in the UAE, you’ve got ALLaM here, you’ve got Fanar in Qatar, all of which are available for researchers around the world to work on. So we’re starting to see greater data sets from the region, and actually homegrown technology as well, enabled in part by a slightly hands-off attitude to regulation. Now, Google has published a lot about what kind of regulation we think makes sense. We talk a lot about being bold and responsible, both: how do you get that balance between protecting users from harm while keeping innovation open and flourishing?
And I think countries here, most countries in the Gulf at least, are trying to position themselves as the next global AI hub. They’re not just thinking regionally, they’re thinking globally: how do we build the infrastructure that will attract businesses, small and medium enterprises, and AI startups to see this as a place from which they can grow and flourish in the world? So investing in alternative and green energy, building data centers, et cetera; a lot of countries have built their strategies around attracting talent, companies, and business. That’s on the positive side. A lot of regulators ask me: what should we do? What’s the right way to go about implementing AI regulation? Should we even focus on AI regulation as such, or should we first focus on filling the gaps in our existing regulation? And I think that’s a very good point to take home: whatever was illegal without AI probably should be illegal with AI. It’s about adapting the regulation that already exists so that AI is included in the thinking. That doesn’t necessarily mean one has to regulate all the inputs that go into developing models, the scientific breakthroughs; it’s more about regulating the harmful outputs. And I think we haven’t seen enough outputs in everyday usage, beyond the sandboxes and some of the use cases I mentioned, to know quite yet where regulation needs to focus. So beyond the principles I mentioned earlier and the goals of broader global governance, I think the regional specifics are still being worked through. We talked about language quite a bit earlier, Arabic in particular, and I just want to give some interesting data points. You know there is a product called Google Translate. At the moment it exists in just under 260 languages. We started this project 20 years ago, and we’re now at 260-odd languages.
Of those 260, 110 languages were added in the last six months, thanks to AI. Twenty years of development, and then in six months it’s gone like this: 110 languages. So this is about creating an inclusive way of accessing the technology. Another interesting thing I just learned a couple of weeks ago is that Gemini, our generative AI tool, has more daily active users in the MENA region, in Arabic, than in the US. Which is crazy to think of, but it’s a testament to the appetite that exists in the region for actually using these tools. And the fact that you have a very young demographic, and that people are generally open to and optimistic about technology, means there is an almost instant embracing of what’s available, especially if it’s available in language. So that’s why I said I have a slightly different perspective; I tend to be a bit more optimistic about where the region can go with this, and I think they are getting a lot of things right. Why is the private sector still a bit hesitant? I think there are other underlying factors: the funding streams for startups, investment in SMEs, how easy it is to get access to finance. And things like data privacy: even though all the countries in the region now have some form of data privacy law, the implementing regulations are often still missing, so you don’t quite know how to apply them; there’s a lot of ambiguity around it. So focusing on finishing the job on some of those issues, which are really the enabling mechanisms and policies for AI, should in my mind maybe be the first priority right now.
Fadi Salim: Great, thank you, this is very insightful. And you highlighted the standards. Jill, Martin was just telling us how this region has more Gemini users, and probably, if you extrapolate from that, more Gen AI product users, than the US. But this has ethical implications for our region, right? At the IEEE, you have set both specifications and lots of deep research into AI ethics, as well as capacity building for AI ethics assurance, which we are currently also working together to adapt to the region. Given this massive explosion of AI use, and at the same time the lack of ethical standards, regulations, or systems in place for our region to govern that use, do you feel this can lead to misdeployment, to creating some victims in society? And if so, is it an argument for more inclusion, more regulation, or something else? I know this is a very complex question, but I trust that you can address all of it.
Jill: Thank you, Fadi. In a nutshell, I think it’s important to acknowledge and realize that without the contribution of the private sector, it would be very hard to achieve literacy and capacity building in the region. When I run courses in Africa, I run them on Colab, I run them on Google Meet, I run them on tools that are made available to me by the private sector. So it is very important to acknowledge the enabling nature of the private sector in this. From a regulatory standpoint, what is interesting to see is that many countries, even in Europe now, are starting to look at it as: regulation is good in terms of protection, but it should not come at the expense of innovation. There is a kind of turnaround, in the sense that people are coming to realize that you should not go too far in one direction or the other, and Europe stands somewhere in the middle on both. So I think the region here has the option, and has grabbed the option, to leapfrog some of these issues and grow into an environment, some countries at least, not the whole region, where they can be ahead of even some European countries in terms of AI deployment. So again, it’s not an us-versus-them kind of thing, and it’s very important to acknowledge that you cannot do this work in AI without the private sector. That being said, AI at the end of the day is a tool. It’s not the AI that is good or bad or nefarious; it is the person behind it, and the use it is put to, that decides whether it turns out well or not. The problem you face, I think, is where you try to use AI to bridge a gap that you don’t know how to bridge otherwise. Suppose you are in a situation where you don’t have enough resources to fulfill something, and you say, oh, generative AI is going to do it. We had the same issue 10 years ago with chatbots.
Chatbots were going to fix it. Well, chatbots did not fix it. Generative AI might be able to fulfill some of these roles, but the condition is that the use cases need to be well defined in the first place. Without that, you don’t have trust, and if you don’t have trust, you don’t have adoption of the AI. At the end of the day, AI is an anthropomorphic user interface: the way it interfaces with humans is by behaving like a human. It is basically assuming, whether autonomously or through augmentation, more and more roles and decisions that were left to humans before. So what we expect from it is trust, the same way we expect trust from humans in the way they behave. From that perspective, trust in the solution is very important. So how do you build that trust? It’s very important to look at the use cases and, from an inclusivity perspective, take into consideration all the stakeholders that need to be involved. And that is a role at the individual level, participating in standards; for private entities, being very inclusive of all stakeholders in the way they operate; and for governments, pushing for the empowerment of the different civil society groups in achieving that. Because AI is a magnifying glass at the end of the day: everything it does gets magnified. If you don’t prepare it well, it will get magnified in the wrong way, and you will see more negative aspects than positive ones. So it’s important from the onset to look into it, and in order to look into it, as Nibal was saying, you need the ability to understand what it is about. If you don’t know what AI is, then you are just relying on what the provider tells you it is, and at that point you have no say in its implementation.
Fadi Salim: Well, great. Thank you. I know we’re coming to questions, and we have questions coming up. If you’re eager to give us your question, please go ahead. Do you have a mic? Ah, here it is. I think, yeah.
AUDIENCE: I’m sorry for being so anxious, but I’m a panelist in another workshop and I’m running late; still, this workshop is very important. So I have two quick questions for the panelists. First, it was said that without data, we don’t have local solutions; we need local data. So my first question is: what do you think is the best approach to encourage local data? Should it be through regulatory means, or what other incentives could be used? If there is some best practice already that can be shared, please do; that’s my first question. And my second question is mainly for Dr. Roeske: one of the big gaps is that once a developing country has the data, where does it run it? That requires huge computing power. We don’t have enough data centers; imagine the computing power we would need. That’s a gap that is growing. Maybe it’s already happening, but could there be a business model in which the big companies that already have the hardware offer, as a service, to run countries’ local data for training models and other things, for developing countries that do not have the infrastructure to run the data on their own machines? Could that buy some time until these countries have their own hardware?
Fadi Salim: Thank you. Two questions. So who would like to start? Okay, so local data.
Nibal Idlebi: I believe there are some initiatives, some practices, in one way or another to encourage local data. I believe even Google ran one at one point, encouraging teachers and students to contribute their data or whatever research they were doing. We can copy that example and extend it to users in general, to local communities, students, teachers, and so on, and create initiatives or awards. Awards are very good at capturing attention; they might be a way to collect some data. So there are practices to encourage citizens and people to provide their own data through specific initiatives, and I agree with you, there should be some incentives. Of course, with local government we can use the data collected through e-government, or digital government. We need to encourage localities and governments to open their data so it can be used, and this is one of the examples mentioned by Fadi. Open data is very important, I believe, but it needs some effort from the government to clean the data and put it in a proper form. So there are initiatives that could encourage or accelerate the generation of data, because through digital government the government holds a lot of data, a lot. Of course, if you don’t have digital government in the country, then it’s another question. But through such initiatives, from government, local government, and institutions, you can encourage the generation of new data in specific fields. It can happen, and there are some initiatives.
Roeske Martin: Great question as well. Maybe on the data point first, and then I’ll come to the second part of your question. I don’t know if you’ve heard of Data Commons, but Google has been quite involved in that initiative for a couple of years now. The idea is to take whatever publicly available data there is, clean it, structure it, and then provide insights to anyone who wants to query the data. So it doesn’t become a walled garden of information that’s only available to people willing to pay for it, but rather a repository of data that you can base research questions on, whether that’s environmental data, climate data, health data, et cetera. Governments can make a lot more data available than they currently do. I think there’s a lot of hesitancy around what is sensitive and what is not. When we worked on data protection laws, for example, providing best practice and consultations, the default position was: oh, it’s all sensitive, it’s all a national security interest, et cetera. But what people don’t realize is that so much of that data could easily be anonymized or made impersonal, so you’re not giving away secrets by sharing the data in a more meaningful way. So I think there is a lot of work that can be done there, on sharing data between departments within government, but also with the private sector and others. On the other point, what can Google and other tech companies do to include the Global South and emerging markets in more of the infrastructure development? First of all, the cost of compute and the capacity needed to run models is going down all the time. A lot of work is happening at Google and other companies simply to reduce the amount of compute, storage, and everything else needed to build well-functioning AI systems.
And so Gemini 2.0, which was just announced last week, uses 90% less compute than Gemini 1.5, which is a huge scaling down. The other way is to bring a lot of the compute closer to the device and have the algorithms run at the edge, or on the user’s device itself, so you don’t need to run everything through global infrastructure. But we do recognize that a lot of global infrastructure will still be needed, and so we, and I know other tech companies, invest a lot in, for example, subsea cabling and satellite systems. One of my colleagues here just gave a talk on the interplanetary internet that is being developed. So how do you create infrastructure that you can easily bring to markets that don’t have it today, at low cost? A lot of development is going into that at the moment. Of course, capacity building and scaling are super important as well, and making sure that the universities that exist here, and there are some very good institutions, are connected to, and included in, the research happening in other parts of the world. So that would be hopefully…
Jill Nelson: Can I add to this? Yeah, please. Okay. So I’m just going to be brief and quick on this. I think there is no one answer for everything. So on data: LLMs today have scrubbed the internet completely and are now generating, and growing out of, synthetic data. I don’t know about Google Translate, maybe that’s part of the uptake, but there is a lot of use of synthetic data there. So, how do you generate data when you don’t have enough data? This could be very interesting for the Global South, and help in generating synthetic data can be very useful to make sure there is no under-representation at the data level in the Global South. That’s one. The second thing is, it depends on what AI we’re talking about. We always talk now about LLMs and agents on top of LLMs. But if you go all the way down to the neural nets, the layers below in terms of sophistication or building blocks, you can do a lot with tools that are available from private entities, like Colab, for example, which is available for free. And you can even do a lot with less computationally hungry algorithms that give you a lower performance level, but one that is still enough in many Global South countries. Like, do I achieve 70% accuracy but run it on my local PC without needing a data center, or do I go for 98% accuracy but become dependent on an external data center? I have choices, essentially. So there are ways to adjust all of this. But in order to know that I have choices, I need the literacy for it in the first place.
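The synthetic-data idea raised above can be sketched in a few lines. This toy example is illustrative only (real synthetic-data pipelines use far more sophisticated generative models, including LLMs and GANs): it fits a multivariate Gaussian to a small "real" dataset and samples many new rows from it, so a scarce local dataset can be expanded while preserving its overall statistics.

```python
import numpy as np

def fit_gaussian(data):
    """Estimate per-feature means and the covariance matrix from real data."""
    return data.mean(axis=0), np.cov(data, rowvar=False)

def sample_synthetic(mean, cov, n, seed=0):
    """Draw n synthetic rows from the fitted multivariate normal."""
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(mean, cov, size=n)

# A small, hypothetical "real" dataset of 200 two-feature records
rng = np.random.default_rng(1)
real = rng.multivariate_normal([10.0, 50.0], [[4.0, 1.5], [1.5, 9.0]], size=200)

mean, cov = fit_gaussian(real)
synthetic = sample_synthetic(mean, cov, n=5000)

print(synthetic.shape)  # (5000, 2): 25x more rows than the real sample
print(np.allclose(synthetic.mean(axis=0), mean, atol=0.5))  # means are preserved
```

The caveat, of course, is that synthetic rows can only reflect the structure already present in the seed data, so they mitigate scarcity, not bias in the original collection.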
Fadi Salim: Thank you. Let’s go.
Nibal Idlebi: Maybe GIS data or satellite information are also useful in some cases. Thank you.
Fadi Salim: Oh, I have the mic. OK, thank you very much for your question, and we look forward to your panel afterwards. All right. Now that we’ve started the questions, if there are any other questions from the floor, I’ll come back to you. But I would like to take one question online; we clearly have a vibrant community out there. Let me read the question to you: what common cultural aspects deter the participative appetite of the Global South? For instance, how public and private board members are selected. Those members are responsible for overseeing cases, assessing data readiness, internal governance, and so on. Who assesses board members and certifies them as AI-worthy? How does culture affect such aspects? This is a very good question. I don’t know if anybody would like… all of you have boards, one way or another. So is there anything to be learned about how these cultural aspects deter participation in boards in our region, or in the Global South in general?
AUDIENCE: Any insights or thoughts? Just one quick thought around good practices I’ve seen governments adopt in the region, which is, for example, implementing chief AI officers or AI officers across different government departments and empowering people to actually learn about the technology and then lean into those conversations when it comes to multi-stakeholder dialogue. So yes, there is some capacity building to be done still, but appointing people, giving them the mandate, and putting structures in place that actually allow this dialogue to happen is a very good first step.
Fadi Salim: Great, thanks. Any other comments on this? OK, we can move on. You have one?
AUDIENCE: Very quickly, I think one important aspect here is: once these organizations or structures are put in place, how do you make sure that the people in charge are actually effective and have the capacity for it? This is where they can be certified or authorized, basically recognized for their capacity. So it’s like continuing education. We hear that a lot in the MENA region: how do you make sure that people are always catching up with the technology that they are using, or for which they need to make important decisions? And there are courses and structures, like the ones that MBRSG offers, that can provide these trainings.
Fadi Salim: Thank you. And thank you, Sami Assa, for the question. Now we move on to another question from the floor. Let’s give you a mic, because everybody else needs to hear you.
Jasmin Alduri: Perfect. Hi, my name is Jasmin Alduri. I’m the co-director of the Responsible Tech Hub. We’re a youth-led non-profit focusing on responsible tech, so the name already gives it away. I really like the aspect that Dr. Nibal brought up about the Global South not being as involved in developing AI, and that mostly the Global North or the Western countries are doing it, because I 100,000% agree with this. However, there is one aspect of training AI that is happening in the Global South: most of the labelling is actually done in the Global South, with click workers doing the main work and, to some extent, also being exploited. So my question for the round would be: how can we make sure that click workers, and the Global South specifically, not only feel included but can actually benefit from being part of that development stage of AI?
Fadi Salim: Who are you targeting your question to? I think it’s coming back to Martin, but this is something that is common across all of AI. So in a way, it’s how to protect these workers from being exploited, right? Because it happens. So are there any measures that Google or other technology companies are applying to ensure this is properly governed, that we might learn from?
Roeske Martin: I think beyond skills programs and helping developers and people working in those industries, in the click work you mentioned, a lot of it is of course encouraging local businesses to adopt some of these technologies, and the startup ecosystem to take those technologies and build things that are regionally relevant from the ground up. And so one of the things we’re focused on a lot is working with the startup ecosystem in particular. We had a program last year called the Women AI Founders Program, because we found that there is a huge gap in women founders in general; I think in the MENA region only 3% of startups are run by women, and the funding gap is even worse, with 1% of funding going to women. So we realized that there is a huge amount of talent in the region that is not tapped into properly, and that help and support is required. So we run these accelerator programs for different groups of startups. We started generically, looking at AI startups, and we’re now starting to go down into more thematic approaches, whether around health or education or fintech, even gaming. So there are some very interesting sectors here, particularly in economies that are trying to diversify away from fossil fuels and encouraging the build-out of new economic sectors, where there’s a lot of opportunity. And it’s about making sure that the ecosystems are there, the platforms that all this talent can tap into and work with, and that there are pathways to success, so that people don’t get stuck: they graduate, they have a degree, and then they know where to go. There need to be jobs to go with it.
Fadi Salim: I think, yeah, I want Jill to comment on this, because IEEE has ethical specifications for AI development, procurement, you name it. So is this already embedded in how you, as an assessor of AI ethics, look into, or want to look into, the ethical application of AI?
Jill Nelson: Certainly from an ethical perspective, part of the process of evaluating a solution is taking into consideration, inclusively, all of the stakeholders, including the labelers. So if you take the labelers in Kenya doing labeling for some big company’s LLM and being mentally impacted by it, for example, or being underpaid for it, this is certainly something that would be caught at the identification stage, what we call the ethics profiling at the use-case level. Now, this is part of, for example, the IEEE CertifAIEd assessment framework, which allows you to assess solutions. So irrespective of whether or not you have AI governance in place, if you have a solution today and you want to know whether it is ethical, you can go through a very detailed and formal process that allows you to do this. But I’d like to tackle the question a bit differently as well. For every challenge, there is an opportunity. AI costs a lot of money, including to so-called Western countries, which will go for cheaper labor in some developing countries, right? But at the same time, it is an opportunity, as Martin was referring to, to grow capacity building in these developing countries, and even to grow the capacity for local employment and local expertise. And once you have that local expertise, you can afford more AI solutions, because you can pay local salaries instead of having to pay for external ones. On top of that, closing the loop, the IEEE CertifAIEd program allows you to become an authorized assessor. And once you are an authorized assessor, you can work anywhere worldwide, so you can compete with others, and it opens up opportunities that don’t force you into a niche market of labeling or doing tedious tasks. So there are opportunities there that go beyond just the current economics that we see.
Fadi Salim: Thank you. And we still have around 10 minutes to go. You have a question for the panelists? If I may, if you allow it. Yeah, but that will mean that they will have to answer your question.
Nibal Idlebi: If I may just ask a small question. We know that UNESCO has published its recommendation on the ethics of AI. I want to know from you, as the private sector or IEEE, to what extent you are applying these international ethics, I mean UNESCO’s, for example, in AI?
Roeske Martin: I’ll start, and then Jill, I’ll let you weigh in as well. So we published our AI principles, I think back in 2018, quite a while ago, which defined what we will and won’t do with AI. And I think a lot of those have since been incorporated into some of the global governance standards as well. So it’s important not just to keep checking and assessing, but also to build these principles into the product from day one. And so when DeepMind, one of our units that works a lot on AI, develops a product, it does so with those principles in mind. And this predates Gen AI by many, many years; our CEO declared Google to be an AI-first company in 2017, I think. And it’s now in all the products: it’s in Search, it’s in YouTube, it’s in Maps, it’s in… And so there is an established practice now of how you take these principles and build them into products, and there are working groups and product teams that check this on a very regular basis. So yes, I would say these principles are very much part of our everyday life, and there are whole groups within the company dedicated to working on them. Would you like to comment? Yes.
Jill Nelson: Maybe I’ll answer quickly and then you can grab the mic. So I have to say, since you are opening the door for it, that IEEE actually pioneered socio-technical standards, addressing the social impact of technology. Back in 2016, the first Ethically Aligned Design framework was built, from which a lot of recommendations and standards came out, including for software development, similar to what Google did, for assessment like IEEE CertifAIEd, for procurement, and even for governance. All of this is the work of this grassroots community, thank you. The EAD principles were actually very much used in the UNESCO principles. So we are applying them from the start, and we are promoting their use worldwide.
Fadi Salim: Sorry. You have a question? Yeah. Okay. There’s a mic over there.
Lars Ratscheid: Thank you. My name is Lars Ratscheid. I’m from Germany, just like Jasmin, and I work in international cooperation. Now, all three of you gave examples of regulation and of governing standards for AI. But bringing it back to the title of the session: how was the Global South, or the global majority, involved in each of these at Google, at UNESCO, and at IEEE? Thank you.
Fadi Salim: So I guess this is a closing question, right? It’s a closing question because we’re almost out of time, but in a way, it’s an important question. How can we learn? Do you have some examples of how inclusion happens within your organization? Maybe starting with you, Jill: IEEE has massive working groups, activities, volunteers, et cetera. Tell us more about the examples of inclusion that exist in AI.
Jill Nelson: Sure. So IEEE has chapters in every country in the world. So there is representation from every country in the world, and every country is encouraged to work with our chapter and
Fadi Salim: get… Recording didn’t work. I can’t hear you.
Nibal Idlebi
Speech speed
0 words per minute
Speech length
0 words
Speech time
1 second
Lack of representation in global AI forums
Explanation
Nibal Idlebi points out that countries in the MENA region are not adequately represented in global AI governance discussions. This lack of representation limits the region’s ability to influence AI policies and standards.
Evidence
Nibal mentions that often only one country from the Arab world attends international forums, and their contributions are siloed and weak.
Major Discussion Point
Inclusion in AI Governance in the MENA Region
Need for capacity building at decision-making level
Explanation
Idlebi emphasizes the importance of building capacity among decision-makers in the MENA region. This includes raising awareness about the importance of engagement in global AI governance and developing the skills needed to participate effectively.
Evidence
She suggests that decision-makers should be convinced of the value of their participation in international forums.
Major Discussion Point
Inclusion in AI Governance in the MENA Region
Agreed with
Jill Nelson
Roeske Martin
Agreed on
Need for capacity building and literacy in AI
Differed with
Roeske Martin
Differed on
Approach to AI regulation
Lack of local data limiting AI development
Explanation
Idlebi highlights the shortage of local data as a significant barrier to AI development in the MENA region. This lack of data hinders the creation of locally relevant AI solutions and perpetuates dependence on external data sources.
Evidence
She mentions that through digital government initiatives, a lot of data is available with the government, but it needs to be cleaned and made accessible.
Major Discussion Point
Data and Infrastructure Challenges
Agreed with
Jill Nelson
Roeske Martin
Agreed on
Importance of local data for AI development
Need for initiatives to encourage local data generation
Explanation
Idlebi suggests that initiatives are needed to encourage the generation of local data in the MENA region. These initiatives could involve various stakeholders and use incentives to promote data collection and sharing.
Evidence
She proposes ideas such as awards, initiatives for citizens, and encouraging governments to open their data for use.
Major Discussion Point
Data and Infrastructure Challenges
Differed with
Jill Nelson
Differed on
Focus of inclusion efforts
Jill Nelson
Speech speed
146 words per minute
Speech length
2019 words
Speech time
827 seconds
Importance of private sector involvement in enabling inclusion
Explanation
Jill emphasizes the crucial role of the private sector in enabling inclusion in AI governance. She argues that without private sector contributions, it would be challenging to achieve literacy and capacity building in the region.
Evidence
Jill mentions running courses in Africa using tools like Colab and Google Meet, which are made available by the private sector.
Major Discussion Point
Inclusion in AI Governance in the MENA Region
Agreed with
Roeske Martin
Agreed on
Role of private sector in enabling inclusion
Differed with
Nibal Idlebi
Differed on
Focus of inclusion efforts
Need for literacy and capacity building to enable meaningful participation
Explanation
Jill stresses the importance of AI literacy and capacity building to enable meaningful participation in AI governance. She argues that without understanding what AI is, people are reliant on what providers tell them, limiting their ability to influence implementation.
Major Discussion Point
Inclusion in AI Governance in the MENA Region
Agreed with
Nibal Idlebi
Roeske Martin
Agreed on
Need for capacity building and literacy in AI
Use of synthetic data to address data scarcity
Explanation
Jill suggests the use of synthetic data as a potential solution to address data scarcity in the Global South. This approach could help generate representative data when real data is insufficient or unavailable.
Evidence
She mentions that LLMs today have scrubbed the internet and are now generating and growing out of synthetic data.
Major Discussion Point
Data and Infrastructure Challenges
Agreed with
Nibal Idlebi
Roeske Martin
Agreed on
Importance of local data for AI development
Need to consider all stakeholders, including labelers, in ethical assessments
Explanation
Jill emphasizes the importance of considering all stakeholders, including data labelers, in ethical assessments of AI systems. This inclusive approach ensures that the impacts on all parties involved in AI development are taken into account.
Evidence
She mentions the IEEE certified assessment framework, which allows for the assessment of AI solutions from an ethical perspective.
Major Discussion Point
Ethical Considerations in AI Development
IEEE’s work on social-technical standards and ethical frameworks
Explanation
Jill highlights IEEE’s pioneering work on social-technical standards and ethical frameworks for AI. These standards and frameworks provide guidance for responsible AI development and implementation.
Evidence
She mentions the ethically aligned design framework developed by IEEE in 2016, which has influenced various recommendations and standards.
Major Discussion Point
Ethical Considerations in AI Development
Opportunity to grow local expertise and employment
Explanation
Jill points out that the challenges in AI development also present opportunities for growing local expertise and employment in developing countries. This could lead to more affordable AI solutions and increased local capacity.
Evidence
She mentions the potential for local employment and expertise growth, which could make AI solutions more affordable through local salaries.
Major Discussion Point
Fostering Local AI Ecosystems
Roeske Martin
Speech speed
149 words per minute
Speech length
1826 words
Speech time
734 seconds
Regulatory uncertainty holding back private sector
Explanation
Martin highlights that regulatory uncertainty, rather than over-regulation, is holding back private sector involvement in AI development in the MENA region. Companies are struggling to navigate a regulatory landscape that is still taking shape.
Evidence
He cites the research findings showing that companies are supportive or neutral towards regulation, but face uncertainty about the regulatory direction.
Major Discussion Point
Inclusion in AI Governance in the MENA Region
Differed with
Nibal Idlebi
Differed on
Approach to AI regulation
Gap in computing power and infrastructure in developing countries
Explanation
Martin acknowledges the gap in computing power and infrastructure in developing countries, which limits their ability to run large AI models. However, he also notes ongoing efforts to reduce the computational requirements of AI systems.
Evidence
He mentions that Gemini 2.0 uses 90% less compute than Gemini 1.5, indicating progress in reducing computational requirements.
Major Discussion Point
Data and Infrastructure Challenges
Potential for data commons and public data sharing
Explanation
Martin suggests the potential of data commons and increased public data sharing to address data scarcity issues. This approach could make more structured, clean data available for AI development and research.
Evidence
He mentions Google’s involvement in the data commons initiative, which aims to clean, structure, and provide insights from publicly available data.
Major Discussion Point
Data and Infrastructure Challenges
Agreed with
Nibal Idlebi
Jill Nelson
Agreed on
Importance of local data for AI development
Importance of building ethical principles into products from the start
Explanation
Martin emphasizes the importance of incorporating ethical principles into AI products from the beginning of development. This approach ensures that ethical considerations are integral to the product, rather than an afterthought.
Evidence
He mentions that Google published AI principles in 2018 and has since incorporated these principles into product development processes.
Major Discussion Point
Ethical Considerations in AI Development
Need to encourage local businesses and startups to adopt AI
Explanation
Martin stresses the importance of encouraging local businesses and startups in the MENA region to adopt AI technologies. This approach can help build a robust local AI ecosystem and drive innovation.
Evidence
He mentions Google’s programs like the Women AI Founders Program and various accelerator programs focused on different sectors.
Major Discussion Point
Fostering Local AI Ecosystems
Agreed with
Jill Nelson
Agreed on
Role of private sector in enabling inclusion
Importance of creating pathways to success for local talent
Explanation
Martin highlights the need to create clear pathways to success for local talent in the AI field. This includes ensuring that there are job opportunities and support systems for graduates and emerging professionals.
Evidence
He mentions the need for jobs to go along with degrees and the importance of building out new economic sectors.
Major Discussion Point
Fostering Local AI Ecosystems
Agreed with
Nibal Idlebi
Jill Nelson
Agreed on
Need for capacity building and literacy in AI
Role of regulatory sandboxes in enabling safe experimentation
Explanation
Martin points out the positive role of regulatory sandboxes in enabling safe experimentation with AI technologies. These sandboxes allow for controlled implementation and testing of new technologies.
Evidence
He mentions examples of countries like Saudi Arabia and UAE implementing regulatory sandboxes for AI experimentation.
Major Discussion Point
Fostering Local AI Ecosystems
Jasmin Alduri
Speech speed
163 words per minute
Speech length
156 words
Speech time
57 seconds
Exploitation of click workers in Global South
Explanation
Jasmin Alduri raises concerns about the exploitation of click workers in the Global South who are involved in AI development, particularly in data labeling. She questions how these workers can benefit from their involvement in AI development rather than just being exploited.
Major Discussion Point
Ethical Considerations in AI Development
Agreements
Agreement Points
Need for capacity building and literacy in AI
Nibal Idlebi
Jill Nelson
Roeske Martin
Need for capacity building at decision-making level
Need for literacy and capacity building to enable meaningful participation
Importance of creating pathways to success for local talent
All speakers emphasized the importance of building capacity and literacy in AI across various levels, from decision-makers to the general public, to enable meaningful participation in AI governance and development.
Importance of local data for AI development
Nibal Idlebi
Jill Nelson
Roeske Martin
Lack of local data limiting AI development
Use of synthetic data to address data scarcity
Potential for data commons and public data sharing
The speakers agreed on the critical role of local data in AI development and suggested various approaches to address data scarcity in the region.
Role of private sector in enabling inclusion
Jill Nelson
Roeske Martin
Importance of private sector involvement in enabling inclusion
Need to encourage local businesses and startups to adopt AI
Both speakers highlighted the crucial role of the private sector in enabling inclusion in AI governance and development, emphasizing the need to encourage local businesses and startups.
Similar Viewpoints
Both speakers proposed innovative solutions to address the lack of local data, suggesting initiatives to encourage data generation or the use of synthetic data.
Nibal Idlebi
Jill Nelson
Need for initiatives to encourage local data generation
Use of synthetic data to address data scarcity
Both speakers emphasized the importance of incorporating ethical considerations into AI development from the beginning, considering all stakeholders involved.
Jill Nelson
Roeske Martin
Need to consider all stakeholders, including labelers, in ethical assessments
Importance of building ethical principles into products from the start
Unexpected Consensus
Positive view on regulatory sandboxes
Roeske Martin
Nibal Idlebi
Role of regulatory sandboxes in enabling safe experimentation
Need for initiatives to encourage local data generation
Despite coming from different sectors (private and public), both speakers showed support for initiatives that allow controlled experimentation and innovation in AI, such as regulatory sandboxes and data generation initiatives.
Overall Assessment
Summary
The main areas of agreement included the need for capacity building, the importance of local data for AI development, and the crucial role of the private sector in enabling inclusion. There was also consensus on the need for ethical considerations in AI development and support for initiatives that encourage innovation.
Consensus level
The level of consensus among the speakers was moderately high, with agreement on several key issues. This consensus suggests a shared understanding of the challenges and potential solutions for AI governance and development in the MENA region. However, there were also some differences in emphasis and approach, reflecting the diverse perspectives of the speakers from different sectors and organizations. This level of consensus implies that there is potential for collaborative efforts in addressing AI governance challenges in the region, but also a need for continued dialogue to address remaining differences and develop comprehensive strategies.
Differences
Different Viewpoints
Approach to AI regulation
Nibal Idlebi
Roeske Martin
Need for capacity building at decision-making level
Regulatory uncertainty holding back private sector
Nibal Idlebi emphasizes the need for capacity building among decision-makers to participate in global AI governance, while Martin Roeske highlights that regulatory uncertainty, rather than lack of regulation, is holding back private sector involvement.
Focus of inclusion efforts
Nibal Idlebi
Jill Nelson
Need for initiatives to encourage local data generation
Importance of private sector involvement in enabling inclusion
Nibal Idlebi emphasizes the need for initiatives to encourage local data generation, while Jill Nelson stresses the importance of private sector involvement in enabling inclusion through literacy and capacity building.
Unexpected Differences
Ethical considerations in AI development
Roeske Martin
Jasmin Alduri
Importance of building ethical principles into products from the start
Exploitation of click workers in Global South
While Martin Roeske focuses on incorporating ethical principles into AI products from the start, Jasmin Alduri unexpectedly raises concerns about the exploitation of click workers in the Global South. This highlights a potential blind spot in ethical considerations that major tech companies might be overlooking.
Overall Assessment
summary
The main areas of disagreement revolve around approaches to AI regulation, focus of inclusion efforts, and strategies to address data scarcity in the MENA region.
difference_level
The level of disagreement among the speakers is moderate. While there are differences in approaches and focus areas, there is a general consensus on the importance of inclusion, capacity building, and fostering local AI ecosystems. These differences in perspective can be beneficial in developing a comprehensive approach to AI governance and development in the MENA region, as they highlight various aspects that need to be addressed.
Partial Agreements
All speakers agree on the need to address data scarcity in the MENA region, but propose different approaches. Nibal Idlebi suggests initiatives to encourage local data generation, Martin Roeske proposes data commons and public data sharing, while Jill suggests the use of synthetic data.
Nibal Idlebi
Roeske Martin
Jill Nelson
Lack of local data limiting AI development
Potential for data commons and public data sharing
Use of synthetic data to address data scarcity
Both Martin Roeske and Jill agree on the importance of fostering local AI ecosystems, but focus on different aspects. Roeske emphasizes encouraging local businesses and startups to adopt AI, while Jill highlights the opportunity to grow local expertise and employment.
Roeske Martin
Jill Nelson
Need to encourage local businesses and startups to adopt AI
Opportunity to grow local expertise and employment
Takeaways
Key Takeaways
There is a lack of representation from the MENA region and Global South in global AI governance forums and discussions
Regulatory uncertainty is holding back AI development and adoption by the private sector in the MENA region
There is a need for greater literacy, capacity building, and local data to enable meaningful participation in AI development
Ethical considerations, including the treatment of data labelers and click workers, need to be addressed in AI development
Fostering local AI ecosystems and talent is crucial for inclusion and development in the Global South
Resolutions and Action Items
Implement chief AI officers across government departments to build capacity and enable dialogue
Develop more accelerator programs and support for local AI startups, especially those led by underrepresented groups like women
Increase efforts to make AI tools and resources available in local languages like Arabic
Unresolved Issues
How to effectively bridge the technology gap between the Global North and South in AI development
How to balance innovation with regulation in emerging AI markets
How to ensure fair compensation and treatment of data labelers and click workers in the Global South
How to address the lack of computing power and infrastructure in developing countries for AI development
Suggested Compromises
Use of synthetic data to address data scarcity issues in the Global South
Implementing regulatory sandboxes to allow safe experimentation with AI technologies while developing appropriate governance frameworks
Leveraging existing global tech infrastructure (e.g. from companies like Google) to enable AI development in countries lacking local infrastructure, while building local capacity
Thought Provoking Comments
"There is a technology gap between the South and the North, and behind the scenes everything is related to that gap. There is a big gap today between the developed, Northern countries and the Southern countries, and from it we can derive many issues."
Speaker: Nibal Idlebi
Reason: This comment highlights a fundamental issue underlying many of the challenges discussed regarding AI governance and inclusion in the Global South.
Impact: It shifted the conversation to focus more explicitly on the North-South divide and its implications for AI development and governance.
"How can it be a race if we don't all start from the same point? Technology is certainly an important national priority, but what about things like the inability to read and write, or insufficient access to proper health care?"
Speaker: Salma Alkhoudi
Reason: This comment challenges the framing of AI development as a 'race' and highlights the more fundamental development challenges some countries face.
Impact: It broadened the discussion to consider the wider context of development challenges beyond AI and technology alone.
"We're now starting to go down into more thematic approaches, whether around health, education, fintech, or even gaming. We're doing accelerators. So there are some very interesting sectors here, particularly in economies that are trying to diversify away from fossil fuels and encouraging the build-out of new economic sectors where there's a lot of opportunity."
Speaker: Martin Roeske
Reason: This comment provides concrete examples of how AI development can be tailored to specific regional needs and opportunities.
Impact: It shifted the discussion towards more practical, sector-specific applications of AI in the Global South.
"For every challenge, there is an opportunity. AI costs so-called Western countries a lot of money, so they will go for cheaper labour in some developing countries. But at the same time, as Martin noted, it is an opportunity to grow capacity building in these developing countries, and even to grow the capacity for local employment and local expertise."
Speaker: Jill Nelson
Reason: This comment reframes the issue of labor exploitation in AI development as a potential opportunity for capacity building in developing countries.
Impact: It introduced a more optimistic perspective on the potential for AI to contribute to development in the Global South.
Overall Assessment
These key comments shaped the discussion by highlighting the complex interplay between AI development, global inequalities, and development challenges. They moved the conversation beyond abstract discussions of AI governance to consider more concrete applications and opportunities, while also maintaining a critical perspective on the challenges faced by the Global South in participating fully in AI development and governance.
Follow-up Questions
How can we build a unified AI ecosystem in a region with very diverse regulatory approaches, different development priorities, and varying levels of digital infrastructure?
Speaker: Salma Alkhoudi
Explanation: This is important for addressing the interoperability issues faced by companies trying to scale across countries in the MENA region.
How can we operationalize inclusion better in the AI governance ecosystem?
Speaker: Fadi Salim
Explanation: This is the core focus of the panel and crucial for ensuring diverse perspectives are represented in AI development and governance.
How can voices from countries without a seat at the table be represented at a higher level in the AI governance ecosystem?
Speaker: Fadi Salim
Explanation: This is important for ensuring that global AI governance decisions consider perspectives from all regions, not just an 'elite club'.
What is the best approach to encourage the collection and use of local data?
Speaker: Audience member
Explanation: Local data is crucial for developing AI solutions that are relevant to and representative of different regions.
Could big tech companies provide computing power as a service so that developing countries can use local data to train AI models?
Speaker: Audience member
Explanation: This could help bridge the gap in computing infrastructure between developed and developing countries.
How can we ensure that click workers in the Global South actually benefit from being part of the AI development stage?
Speaker: Jasmin Alduri
Explanation: This is important for addressing potential exploitation and ensuring fair compensation for workers contributing to AI development.
How was the Global South, or global majority, involved in developing AI regulations and standards at Google, UNESCO, and IEEE?
Speaker: Lars Ratscheid
Explanation: This question directly addresses the core theme of inclusion in AI governance from a global perspective.
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Related event
Internet Governance Forum 2024
15 Dec 2024 06:30h - 19 Dec 2024 13:30h
Riyadh, Saudi Arabia and online