High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative

16 Dec 2024 08:35h - 09:35h


Session at a Glance

Summary

This discussion focused on defining and implementing transparency and explainability in AI systems, as well as balancing innovation with ethical governance. Participants from various countries and organizations shared their perspectives on these challenges.

Key points included the need for globally agreed definitions of transparency and explainability, with transparency relating to how AI systems are designed and deployed, while explainability concerns justifying AI decisions. Several speakers emphasized the importance of standards and frameworks to guide ethical AI development, with examples given from Saudi Arabia, Morocco, and international bodies such as ITU and UNESCO.

The discussion highlighted both the potential of AI to accelerate progress on sustainable development goals and address global challenges, as well as technical and non-technical barriers to achieving transparent and explainable AI. These barriers include the complexity of AI models, data privacy concerns, and the need for more AI expertise and public understanding.

Participants agreed on the need to prioritize trust, safety, and accountability in AI governance moving forward. Suggestions for future action included focusing on frugal and inclusive AI development, enhancing global collaboration, supporting capacity building in the Global South, and closing digital divides. The importance of considering cultural and linguistic diversity in AI development was also stressed.

The discussion concluded with calls to create human-centric AI systems that benefit humanity while addressing ethical concerns and potential risks. Participants emphasized the need for ongoing dialogue and cooperation among all stakeholders to shape responsible AI governance and harness AI’s potential for sustainable development.

Key points

Major discussion points:

– Defining transparency and explainability in AI, and their importance for building trust

– National and international efforts to promote ethical AI development and use

– Challenges and barriers to implementing transparent and explainable AI systems

– Leveraging AI to achieve sustainable development goals and address global challenges

– Priorities and actions needed to advance responsible AI governance by 2025 and beyond

The overall purpose of the discussion was to explore how different stakeholders define and approach transparency and explainability in AI, examine real-world examples and challenges, and identify priorities for advancing responsible AI governance and development globally.

Speakers

– Latifa Al-Abdulkarim, Assistant Professor of Computer Science, King Saud University (Moderator)

– Gong Ke, Executive Director of the Chinese Institute for the New Generation Artificial Intelligence Development Strategies, Chinese Academy of Engineering

– Doreen Bogdan-Martin, Secretary-General of the International Telecommunication Union (ITU)

– His Excellency Dr. Abdullah bin Sharaf Alghamdi, President of the Saudi Data & AI Authority (SDAIA), Kingdom of Saudi Arabia

– Amal El Fallah Seghrouchni, Executive President of the International Center of Artificial Intelligence of Morocco (AI Movement), Mohammed VI Polytechnic University

– Li Junhua, United Nations Under-Secretary-General for Economic and Social Affairs

– His Excellency Abdullah bin Amer Alswaha, Minister of Communications & Information Technology, Kingdom of Saudi Arabia

Full session report

Expanded Summary of AI Transparency and Explainability Discussion

Introduction:

This discussion, moderated by Latifa Al-Abdulkarim, brought together experts from various countries and organizations to explore the challenges and opportunities surrounding transparency and explainability in artificial intelligence (AI) systems. The conversation focused on defining these concepts, examining their importance in building trust, and identifying priorities for advancing responsible AI governance globally.

Key Definitions and Concepts:

A crucial starting point for the discussion was establishing clear definitions of transparency and explainability in AI. Doreen Bogdan-Martin, representing the International Telecommunication Union (ITU), provided a helpful distinction: transparency relates to how AI systems are designed and deployed, while explainability concerns justifying AI decisions. Amal El Fallah Seghrouchni, Executive President of the International Center of Artificial Intelligence of Morocco, added that it is important to justify the decisions given by a system for better explainability.

National and International Efforts:

Participants shared insights into various initiatives aimed at promoting ethical AI development and use:

1. Saudi Arabia: His Excellency Dr. Abdullah bin Sharaf Alghamdi, President of the Saudi Data & AI Authority (SDAIA), highlighted the country’s development of national AI ethics frameworks and initiatives. He also mentioned Saudi Arabia’s collaboration with international organizations such as ITU, OECD, and ISESCO in AI governance efforts.

2. China: Gong Ke, Executive Director of the Chinese Institute for the New Generation Artificial Intelligence Development Strategies, Chinese Academy of Engineering, mentioned steps being taken to promote responsible AI deployment, including the concept of “double increases and double decreases” in AI development.

3. Morocco: Amal El Fallah Seghrouchni discussed Morocco’s efforts in AI, particularly addressing the challenges posed by linguistic diversity. She highlighted the country’s three languages and the complexities this presents for inclusive AI development.

4. International bodies: Doreen Bogdan-Martin discussed ITU’s collaboration with partners such as IEC, ISO, IEEE, and IETF through the World Standards Cooperation (WSC) group, focusing on multimedia authentication, deepfakes, and misinformation. She also mentioned ITU’s AI readiness framework for working with countries, the launch of the Green Digital Action Coalition, and the Green Digital Declaration launched at COP29.

5. United Nations: Li Junhua, United Nations Under-Secretary-General for Economic and Social Affairs, highlighted the UN’s efforts in AI governance, including the formation of an interagency working group on AI.

Technical Barriers and Challenges:

Several speakers identified key challenges in implementing transparent and explainable AI systems:

1. Complexity: Amal El Fallah Seghrouchni noted that the complexity of AI models makes them difficult to explain, particularly deep learning systems.

2. Data privacy: Gong Ke highlighted data privacy concerns as a challenge for transparency.

3. Regulatory gaps: Amal El Fallah Seghrouchni pointed out that regulations struggle to keep pace with rapid AI advancements, emphasizing the need for flexible regulatory frameworks.

4. Talent shortage: The lack of AI expertise was identified as a major barrier to implementation.

5. Linguistic diversity: Amal El Fallah Seghrouchni raised the issue of language diversity posing challenges for inclusive AI development, citing the example of Morocco’s three languages.

Leveraging AI for Sustainable Development:

Participants emphasized the potential of AI to accelerate progress on sustainable development goals (SDGs) and address global challenges:

1. Doreen Bogdan-Martin stated that, based on joint work with UNDP, investing in digital technologies and AI could accelerate progress on the SDGs by some 70%.

2. Li Junhua highlighted AI’s ability to enable real-time data analysis for policymaking, address structural inequalities, aid disaster response, and help with climate prediction and resource mobilization.

Priorities for Future AI Governance:

As the discussion progressed, speakers proposed several priorities for advancing responsible AI governance:

1. Trust, safety, and accountability: His Excellency Dr. Abdullah bin Sharaf Alghamdi emphasized the need to focus on these aspects alongside collaboration.

2. Frugal, trustworthy, and inclusive AI: Amal El Fallah Seghrouchni advocated for this approach to AI development, emphasizing the concept of “doing more with less.”

3. Global collaboration: Li Junhua stressed the importance of cooperation among all stakeholders.

4. Closing digital and AI gaps: Doreen Bogdan-Martin highlighted this as a priority, particularly for developing regions.

5. Capacity building: Gong Ke emphasized the need to build engineering capacity, especially in developing regions, mentioning the World Federation of Engineering Organizations’ 10-year engineering capacity building program for Africa.

6. Standards development: Doreen Bogdan-Martin stressed the importance of standards in AI development to ensure interoperability and responsible practices.

Data Quality vs. Quantity:

The discussion also focused on the approach to data in AI development. While some speakers implied the need for extensive data to leverage AI’s potential, Amal El Fallah Seghrouchni challenged this notion, advocating for focused, high-quality datasets over large quantities of potentially unreliable data.

Conclusion:

The discussion concluded with a call for creating human-centric AI systems that benefit humanity while addressing ethical concerns and potential risks. Participants emphasized the need for ongoing dialogue and cooperation among all stakeholders to shape responsible AI governance and harness AI’s potential for sustainable development.

Several thought-provoking questions were raised for future consideration, including the validity of the Turing test for modern AI systems, the development of context-specific metrics for explainability and transparency, and strategies for creating more frugal, trustworthy, and inclusive AI systems.

Overall, the discussion highlighted the complex challenges and significant opportunities presented by AI technology. While there was broad consensus on the importance of transparency, explainability, and responsible development, the specific approaches to addressing these challenges may vary based on regional contexts and priorities. This underscores the need for continued international collaboration and dialogue to shape the future of AI governance.

Session Transcript

Latifa Al-Abdulkarim: I will go first to describe the general theme of this interesting session. In this session, we want to know how AI actors, users, and regulators define transparency and explainability in the context of AI, and whether that definition is a consensus definition. While going through some real-world examples that show the significance of transparency and explainability, we also want to dig into the technical and other challenges that make AI systems hard to explain. And since we have a very interesting, diverse group here, moving from national to regional and global perspectives, we want to discuss regulatory roles and their shortcomings, as well as the improvements we want to achieve, foster international collaboration, and encourage dialogue on the roles and expectations of different stakeholders. Finally, a question from me: I want to ask whether the Turing test is still valid for today’s AI, or whether we need a new, trust-oriented version of it to check whether we have trustworthy AI systems. Hopefully, some ideas will come from the IGF here in Riyadh. So let’s dive right in, and I will start with you, Doreen. As ITU plays a pivotal role in setting global standards for technologies, how should the terms transparency and explainability be defined in the context of AI? And how can those standards specifically promote transparency and explainability, which is, I know, a very challenging topic? Please. Thank you.

Doreen Bogdan-Martin: Thank you, and good morning again, ladies and gentlemen. I guess, Latifa, picking up as you were a member of HLAB, you know, it’s worth highlighting that, during the discussions stemming from the Secretary-General’s high-level panel on AI—a prominent advisory body—many terms lacked clear, internationally agreed definitions. This recognition underscores the need for greater global consensus and shared understanding of key concepts in the AI domain. I mean, things like fairness, like safety, like transparency. But obviously, when it comes to transparency and when it comes to explainability, they’re both absolutely critical in building public trust, which we need to do when it comes to AI. And we want to ensure accountability for AI systems and AI applications. So for us, I think when it comes to transparency, it’s about that disclosure when it comes to the how. And we want to make sure we understand how systems are designed. We want to understand how those systems are trained. And also to understand how they’re ultimately deployed. So those are the elements we keep in mind when it comes to the how in respect to transparency. When it comes to explainability, it’s a bit more towards the outcomes. It’s the how and the why AI systems produce specific outcomes. And as I said, both are absolutely critical when it comes to building that trust piece. And we want to make sure, as many speakers have noted in the previous opening session, that AI doesn’t get used for the wrong purposes, that AI doesn’t perpetuate biases, that we avoid potential harm. So we need to make sure that those two key features are built in. From the ITU perspective, we put standards at the core. We think that standards are the cornerstone of responsible development of artificial intelligence. Those standards play a key role when it comes to safety, when it comes to transparency, when it comes to ethical use. And that can also help us ensure that we unlock AI’s full potential. 
And I guess the last thing I wanted to mention, as a specific example: we have launched a group as part of the World Standards Cooperation, the WSC, so we’re working with partners like IEC, ISO, IEEE, IETF and others. In that group we’re focusing on multimedia authentication, and we’re looking at deepfakes and misinformation. I think that’s a good example of partnerships, of collaboration to ultimately make a difference. One other piece: transparency and explainability are also core to a recently adopted resolution that came out of our Standards Conference, where we held our first AI Standards Summit. Thank you.

Latifa Al-Abdulkarim: Thank you. Thanks very much. This is really very interesting, and I think it’s totally aligned with exactly what we are looking for in terms of harmonizing those standards, and specifically building studies to identify the metrics we need for explainability and transparency in each context or application, which can be quite different when we are discussing those two principles. And the most interesting part is that those standards and global efforts are aligned with many national efforts. Specifically, I want to ask Your Excellency Dr. Abdullah about Saudi Arabia. Saudi Arabia has made significant strides when it comes to promoting the ethical use and development of AI. Could you please share more about the Kingdom’s efforts and initiatives in advancing AI ethics, transparency, and explainability? Thanks.

Dr. Abdullah bin Sharaf Alghamdi: Thank you, Dr. Latifa. First, I would like to welcome my fellow panelists to Riyadh. It’s a great pleasure to share the stage with such distinguished visionaries and thought leaders. Let me start with the beginning of our journey in Saudi Arabia in the area of AI, which started back in 2019 when the Saudi Data and AI Authority was established. Since then, we have placed a strong emphasis on embedding ethics into the core of all AI initiatives. At the beginning, we focused on the AI ethics framework, and Saudi Arabia was among the early countries adopting the UNESCO Recommendation on the Ethics of AI. A year after that, at the second Global AI Summit, back in 2022, we announced our National AI Ethics Framework. The beauty of that framework is that it was associated with an incentive program announced earlier this year. The idea of the program is to encourage governmental entities to register on a platform and undergo a number of surveys; based on their performance and their maturity level, they are granted badges. On this stage, two months ago, we celebrated 20 entities from the public and private sectors and granted them badges. This framework, the National AI Ethics Principles Framework, was also recognized as a champion by the ITU WSIS a while ago, which signifies our commitment here in Saudi Arabia to aligning with the international community in these initiatives. The government has also introduced a unique initiative by establishing the International Center for Artificial Intelligence Research and Ethics, and proudly, UNESCO has recognized the center as a global and regional partner to advance AI ethics locally and worldwide. Only a few days ago, UNESCO published its report on AI in Saudi Arabia, which highlighted a number of unique achievements and initiatives on AI ethics.
And this is a great achievement for Saudi Arabia: completing the requirements of UNESCO’s Readiness Assessment Methodology (RAM) as one of 10 countries worldwide.

Latifa Al-Abdulkarim: Congratulations, Your Excellency, on all these efforts and incentives here in Saudi Arabia regarding the ethical use of AI, targeting transparency and explainability in particular. Maybe the most interesting part is that we are also considering cultural aspects and providing context-based AI systems, while at the same time following all the ethical guidelines we are working on here in Saudi Arabia. Talking about culture, it’s very interesting to hear more from your side, Dr. Amal El Fallah Seghrouchni, about Morocco. Morocco is a nation bridging the Arab and African worlds, sitting at a crossroads of cultural, economic, and technological exchange. How is the ministry setting the benchmark, I would say, for ethical AI practices, and specifically for transparency and explainability?

Amal El Fallah Seghrouchni: Thank you very much for the question. Yes, Morocco is Arab and African. We are close to America, we are close to Europe, so we are the gate to many, many things. And Morocco is also very well known for inclusion and diversity. This is very challenging for AI today: having multilingual and multicultural approaches. The most widespread technology in the AI field today is the LLM, for example ChatGPT, and I know there is a very interesting experience in Saudi Arabia with LLMs. In fact, if we want to be inclusive enough, we should target all the languages of the world. In Africa in particular, we have around 800 dialects across the continent, and we cannot ask everybody to speak English; that is simply impossible today. We can speak English as a second language, but the native language is not English, and we have to deal with that. In Morocco, for example, we have three languages: in the north, in the middle of the country, and in the south. People understand each other, but the languages are quite different from one region to another. So how do we apply AI in this context? Because language is also the vector for culture: if you don’t speak the language, you cannot understand the culture of the region, the country, the continent, et cetera. In my ministry, we have a department, as I said, which works on how to build models for a multi-language environment. And we face a lot of challenges; for example, some of these languages don’t have structure, don’t have semantics, don’t have basic building blocks to deal with, computationally speaking. So this is one aspect. Now, going back to transparency and explainability: transparency, for me, I would not say is “to explain”, because that would be confused with explainability; rather, it rests on how the system can meet expectations, how it functions, et cetera. When it comes to explainability, it’s a bit more technical.
We have to justify the decision given by the system. For example, in many cases, in justice, in medicine, in health, et cetera, you deal with scoring. The scoring should be justified based on the technical parameters of the system; you have to justify your decisions. In legal systems, for example, you cannot just deliver a judgment, you have to explain the judgment, and the same holds in health, et cetera. In Morocco, as you know, we have been involved in many global, multilateral initiatives on AI, in particular with UNESCO and the United Nations, and I can come back to all these initiatives if there is time. But the idea is that Morocco is very, very aware that if we want to build trust in AI, we need to provide transparency and explainability to citizens and to stakeholders.

Latifa Al-Abdulkarim: Thank you so much. You highlighted a very interesting point related to language: the importance of inclusion across all languages, and of having data sets in all languages, is going to be very crucial. You also took the definitions of transparency and explainability further, making sure that transparency spans the whole AI life cycle, while explainability is reasoning about and justifying the outcome. However, even for that reasoning and justification, we still have some challenges when it comes to trustworthiness: we don’t want to provide an answer that is too detailed, and it is the right level of explanation that will increase trust for end users. That’s a very interesting discussion. It takes me to you, Mr. Li: I wonder how we can leverage these principles of transparency and explainability in AI systems to strengthen institutions, governance, and capacity building, specifically at the national level.

Li Junhua: Well, thank you. Thank you, Madam Moderator, for raising this important question. Perhaps at the outset, I just want to say a few words about UN DESA. We are the custodian of the 2030 Agenda for Sustainable Development. For the UN development system, the ultimate objective is to assist member states in achieving the 2030 Agenda, or the Sustainable Development Goals. In this exercise, we definitely need the regional and national institutions to work together to accelerate these efforts. By saying that, we underline the importance of AI technology in stimulating and accelerating national and regional efforts, and of course efforts at the global level as well. For instance, last May we had an ECOSOC special meeting focusing on how AI technology can sustain and stimulate sustainable development; we need to harness the strategies and synergies together. Now, why are transparency and explainability so important for capacity building at the global and national levels? First, the General Assembly adopted two important, landmark resolutions on AI technology, and among those two resolutions there are a few important common elements that I could share with our participants. Number one, they very much highlighted the explainability of AI for national efforts, because to us, explainable AI plays a vital role in developing capacity for demystifying the algorithms. It enables policymakers to know that whenever decisions are taken, they can be explained to the public constituencies. So we can leverage the enthusiasm and participation of our constituencies at the national level, and it also enhances regional networking. The second important element from those two resolutions is that capacity building should expand beyond technical training to include ethical and regulatory dimensions.
I don’t need to explain that further: whenever there is a need to use AI technology, we need to be very ethical and transparent. Thank you.

Latifa Al-Abdulkarim: Thank you so, so much for mentioning this particular part. We definitely need to work on capacity building across domains; it’s not only technical. Everyone thinks that this is, for example, a technical forum, but it’s for everyone who should be part of the future, who should help and contribute to shaping the future, the digital future that we want. Dr. Gong Ke, I know that you are leading the Chinese Institute for new-generation AI, and I’m sure you have your inputs and opinions on how we can leverage AI ethics and capacity building, specifically for transparency and explainability in AI. Thank you.

Gong Ke: Thank you. Based on my Institute’s observation of Chinese practices in the past years, I think there are five essential steps to promote the transparency and responsible deployment of AI systems. First, we need to build wide consensus through multistakeholder dialogue, using an institutional approach to engage policymakers, industry leaders, academia and civil society to develop a shared understanding of transparency and explainability. Based on this, the second step is to provide clear guidelines and set operational standards for AI transparency and explainability, encouraging the development of ethical AI practices through an open science approach, as recommended by UNESCO. The third step I’d like to mention is building capacity and literacy for AI by investing in education and training programs for public servants, policymakers, industry professionals and the public to understand AI technology and its social implications, enabling them to implement the guidelines and standards. Another very important step is to develop technical tools and methodologies for evaluating and verifying the transparency of AI systems. Last but never least is promoting international collaboration to establish interoperability of norms and best-practice sharing, to ensure alignment with global standards. I think the IGF can play a crucial role in this process.

Latifa Al-Abdulkarim: Thank you very much. You mentioned a lot of very interesting points here that have also been part of the GDC adoption and recommendations. On consensus, we definitely need scientific consensus on the definitions, so that globally we can at least agree on certain definitions. Then there is policy dialogue and interoperability, and a focus on the main requirement I believe we are lacking globally: experts who can tackle the technical solutions for transparency and explainability in AI. This is very important, and we would like to work on it and have more experts in this field; it will help us reach trustworthy AI systems. Talking about all these requirements within capacity building: here in Saudi Arabia, I know we are doing our best to ensure safeguards without limiting AI’s potential. I would like to hear from you, Your Excellency Dr. Abdullah, about how exactly we are doing this, and mainly how we are balancing AI governance with innovation.

Dr. Abdullah bin Sharaf Alghamdi: As you know, Dr. Latifa, the AI landscape is evolving rapidly, and this evolution brings a lot of opportunities but also introduces a lot of serious risks. Our approach here in Saudi Arabia is based on continuous monitoring of the evolution of AI solutions, and on intervening with the right governance tools to make sure the principles I talked about are taken into consideration. Balance is a very serious issue, and we have to make sure that innovation goes along with the right governance and regulatory tools. For example, recently, with the rise of synthetic content such as misinformation and disinformation, we introduced national deepfake guidelines for developers and users, to be taken into consideration when using or developing such systems. Likewise, with the emergence of multiple large language models similar to ChatGPT, we introduced the national generative AI guidelines framework to help developers choose the right methodology and follow certain guidelines in developing these solutions, taking into consideration the ethical principles we talked about. On the other hand, we have also introduced the national AI adoption framework, through which we encourage governmental and private-sector organizations to adopt AI and scale AI solutions within their sectors. Recently, we celebrated the establishment of 25 AI offices within governmental organizations; those offices will take care of balancing innovation and regulation, taking into consideration the National AI Ethics Principles Framework and the generative AI framework we just talked about, and so on and so forth.
In addition to that, we have published the national AI occupational guidelines framework, which sets guidelines for human resources departments dealing with the new jobs and job titles associated with artificial intelligence: jobs like AI engineering, AI data science, data analysis, and AI development. The guidelines cover performance, job titles, and applicant requirements. On the other hand, we have introduced the national academic framework for academic institutions, to be used in making sure the curricula they develop or use take these guidelines into consideration. It has eight levels, starting from the elementary level, level one, going through the undergraduate level, and reaching the PhD level, level eight. The idea is for academic institutions to take these guidelines into consideration when introducing new programs on AI. Last but not least is the establishment of the International Center for AI Research and Ethics, accredited by UNESCO as we mentioned before. I think these initiatives make Saudi Arabia number three worldwide, after the US and the UK, according to the OECD AI Policy Observatory. This signifies our commitment and dedication to aligning with the international community and introducing new rules and regulations for AI.

Latifa Al-Abdulkarim: Thank you so much, Your Excellency, and well deserved, after going through all those frameworks: some related to curriculum, occupations, and capacity building itself, while others address the adoption of AI. Given that the adoption framework was announced only a few months ago and we already have 25 AI offices in government entities, congratulations on these achievements. I believe this gives a clear example of how we can balance innovation and regulation, and of course we need to keep monitoring our progress and reflect that in our guidelines. Ms. Doreen, I believe you also have very interesting examples of this balance, given that you are working on many use cases related to the SDGs, and I would like to hear more from you about how transparent and explainable AI systems can advance those goals. Thank you.

Doreen Bogdan-Martin: Thank you. Maybe, Your Excellency, just to also pick up on the work we’ve done in terms of the AI readiness framework: I think that’s also a great example of how we can work together with countries to help them find ways to leverage artificial intelligence. When it comes to the sustainable development goals, it’s important to recognize that only 17% of the targets are on track, so we’re not in a good place in terms of achieving those targets and goals by 2030. But we’re optimistic, because we fully believe that leveraging digital technologies, and in particular artificial intelligence, can actually help us accelerate progress on the 17 SDGs and the 169 targets. We’ve done some joint work with UNDP, and we showed that if you invest in digital and you invest in AI, you can actually accelerate progress by some 70%. So that’s our big push: to get all stakeholders to put digital first, put AI first, so that we can make significant progress. In the context of our AI for Good initiative, which we started back in 2017, we have seen very concrete examples and solutions, and we need to leverage those solutions. For instance, there was the great story of Mohamedou, a winner of our AI Innovation Factory, who comes from West Africa. He has been able to combine data with AI and work with farmers, and the farmers he has worked with have seen their yields increase by some 200%. Very concrete examples of what we can do when we leverage AI. In the UN system, it’s also important to recognize that we do work together, something the USG has just mentioned. We have an interagency working group on artificial intelligence that ITU co-chairs with UNESCO, and we have documented more than 400 use cases of how we as a system are leveraging AI to achieve the SDGs.
So whether it’s climate, healthcare, school connectivity, or gender, we have demonstrated very clearly how you can use AI to achieve the SDGs, and I think that’s something we absolutely have to build on. And when it comes to climate and sustainability, we heard lots of interventions about that this morning, and we have to remember that in the digital ecosystem, in the digital space, we are emitters of greenhouse gases: some estimates show that around 4% of emissions come from the digital sector. We know that artificial intelligence is hungry for energy and also thirsty for water, but if we use it correctly, artificial intelligence can help reduce greenhouse gas emissions by 10%. And I think that’s also a space where standards are critical. So we’re very focused on the standards component, developing international standards with our partners. We have launched the Green Digital Action Coalition. We had a digitalization day at COP29 where we launched the Green Digital Declaration, with about a thousand or so signatories. And we do need to come together to advance sustainable green solutions when it comes to digital, and specifically to artificial intelligence, so that we can be reducers and not emitters. Thank you.

Latifa Al-Abdulkarim: Thank you. Thank you so much, Ms. Doreen, and I’m sure that Mr. Li could elaborate more on this, particularly on addressing climate action under the UN.

Li Junhua: Well, thank you. I’m so glad to hear from Doreen about SDG implementation. We are off track, behind our objectives, but AI technology could definitely inject a new stimulus into our efforts. I just want to give you three specific examples of how AI technology can help us leapfrog. First, AI in real-time data analysis. That helps policymakers understand the overall situation and how the 17 goals are interlinked: for instance, how much impact education generates on gender equality, and how much renewable energy impacts our climate agenda and climate action. The second specific area is that AI systems can address structural inequalities. For instance, in an urgent or contingent situation where we need to allocate resources to disaster reduction or disaster relief, it is important for policymakers to make the right judgment on the decision, and that is where AI can help. The third area, as you just mentioned, is climate action. AI-driven models can do climate prediction and resource mobilization, which is very important for policymakers and for national efforts. And when countries articulate their national efforts, these can be integrated into global or regional efforts together. Thank you.

Latifa Al-Abdulkarim: Thank you so much for going through all these examples related to connectivity, climate, sustainability, and energy. You raised a very important point about when we need AI to move and take action in urgent situations; this is something we need to prepare for from now, to be ready before such situations happen, though I hope they never do. Your Excellency, Dr. Amal, we have heard about a lot of opportunities and enormous potential for AI across different use cases at the national, regional, and global levels. However, we both know that there are a lot of barriers too. I would like to hear from you about those barriers, whether technical or non-technical, and how we can address them. Or, if there are already solutions, how can we elaborate on them?

Amal El Fallah Seghrouchni: Thank you very much. Let me start with the non-technical barriers. It’s easy: we have to bring about a mindset change in our countries to make the adoption of AI easier, because it is a huge problem to convince stakeholders to develop AI systems, for different reasons. The first is that we don’t have enough talent and skills in AI, and this is something we should solve. It’s a huge problem all over the world; there is even a term for it, the war for talent. It’s a big problem to solve first. Also, people are afraid of AI because they think AI will dominate the world, that AI is more intelligent than human beings, and so on. Now, let me talk about the technical problems. I think the first technical problem is the complexity of the models. As you know, Europe had been developing the AI Act until 2020, and then ChatGPT came on the table and the AI Act stopped. Something very disruptive happened in the AI landscape, and we had to reconsider everything we had done before. So roughly five years’ work on the AI Act was halted, and now we think we will get the new AI Act in 2025, but it’s not certain. So we can expect unforeseen situations in AI, which means we have to prepare ourselves to change our regulation as quickly as possible to follow the technology. And this is not easy, because regulation takes a lot of time compared to developing algorithms or new models. The other thing is that these large language models, for example, deal with millions and sometimes billions of parameters, so it is not possible for a human being to control what is going on in the system. In addition, the system learns, its behavior changes, and what is going on inside is not foreseeable by a human being. The second thing is that most AI systems can be considered black boxes.
We have inputs, we have outputs, and a lot of things happen within the box that nobody can explain. This is why explainability leads to accountability, and so on. So this is also a huge problem. Then there is the high dimensionality of data: we have a lot of dimensions to deal with, and we also have hybrid data. Sometimes you deal with text, with digits, with images, with videos, and so on. And it’s not linear, and human beings cannot reason well when things are not linear. Most data, by the way, comes from sensors or radars. This also makes AI systems very difficult to predict: non-linear decision-making, because we focus on correlations, and when we have more than three correlations we are lost. Sometimes you can go to seven, but you have to be very skilled for that. So this also makes these systems difficult to explain. Then there is data transparency. And about data, I would like to say something, because we think we need huge data to make systems function. It’s not true. When you put everything together from the Internet, you have good data, bad data, false data, whatever. You don’t need all this. You need good data, very well calibrated, and this may even help with the problem of climate change, if I may go fast, because you have to keep your data set as clean as possible. That is enough. If you want to address justice, you don’t need data about health. If you want to address agriculture, you don’t need mining data, and so on. A conversational system works with all the data gathered on the Internet, but systems in other sectors don’t need all that data; we need specific, specialized data. A model can also behave unpredictably when deployed in a different context. Models that work with Arabic will not work the same in another dialect or another language.
So when you change the context, you need accurate data, and sometimes you must change the data you use quite deeply.

Latifa Al-Abdulkarim: I totally agree with you, and I’m sure that Dr. Gong has a lot to share with us, given his expertise in generative AI and the use cases you are dealing with at the Institute. Please.

Gong Ke: In view of the limited time, let me focus on the technical barriers, as just mentioned by our colleague from Morocco. Many of the technical barriers stem from the complexity of AI models, and data privacy raises a further challenge to the transparency of the models. So to address these barriers, I think that, among many other things, further encouraging and promoting technical innovation is a must. For example, we need to advance AI models from today’s purely data-driven models to new models jointly driven by data and knowledge, in the form of knowledge graphs, decision trees, and many others. We also need to adopt and further develop privacy-preserving technologies, such as differential privacy, federated learning, and homomorphic encryption, to protect sensitive data while enabling transparency. Further technical innovation, pursued in a feasible and ethical way, is a must.

Latifa Al-Abdulkarim: Thank you. Thanks so much for mentioning all this. We have heard about the complexity of the models, the complexity of the data, and the complexity of regulations, and how much we need flexible regulations. This is perhaps a call to support the AI Act again, as it now serves as a sandbox for monitoring, evolving, and amending current regulations. And I totally agree with you about the need for more skills in responsible AI on the technical side, so that we have more technical solutions for responsible AI, including privacy-preserving technologies. Looking at the time, I want to make sure we preserve some of it, because I don’t want to close this session without knowing what the actions are. We have discussed the potential and the challenges, but what can we provide for the IGF in 2025 and beyond? I will start with you, Your Excellency, Dr. Abdullah.

His Excellency Dr. Abdullah bin Sharaf Alghamdi: We started this idea back in 2019, and at that time we sought support from other countries. You remember we paid visits to our friends in Estonia and in South Korea to benefit from their experience in data governance, data centers, and AI as well. So after five years of experience, I think Saudi Arabia now stands ready to share its expertise with other countries. I remember that at the first Global AI Summit, back in 2020, we hosted the consultation session for establishing a UN AI advisory body for the Secretary-General. We hosted those consultation sessions during the pandemic, and a few years later, in 2023, the UN Secretary-General announced the establishment and launch of the advisory body, with you, Dr. Latifa, being a very active member. At the second Global AI Summit, we announced a number of collaborations with the international community. With the ITU, we worked together and launched the AI readiness framework, and thanks to the ITU for being steadfast in this partnership. With the OECD, we announced a partnership to enhance the AI policy and incidents observatory, which was also announced during GAIN24. We also worked with the OECD to establish a GenAI Center of Excellence here in Riyadh, to help member countries develop AI-based solutions while taking the ethics framework into consideration. And with ICESCO, on this very stage two months ago, we announced the Riyadh Charter for the Islamic World. As you know, Saudi Arabia is the heart of the Muslim world; more than two billion Muslims look to Saudi Arabia and its practices in AI and Arabic large language models. So we launched the Riyadh Charter with ICESCO.
Also, under the umbrella of the International Center for AI Research and Ethics, ICARE, we organized a number of workshops with the GCC countries and the Arab League to increase awareness of the UNESCO RAM, the Readiness Assessment Methodology. As I said before, Saudi Arabia was among the early countries adopting and implementing this methodology. And we are proud, really, to be number one regionally according to the Global AI Index, and number one globally in AI government strategy according to the same index. Going forward, for our priorities for the year 2025, I recommend minimizing declarations and focusing more on actions, first of all. And I think we need to focus on three main points in order to overcome the gap between governance and innovation: trust, safety and accountability, and collaboration. Trust is based, as the esteemed members mentioned, on clear governance for explainability and transparency. For safety, we have to make sure we have the proper proactive measures and guidelines in place to implement safety measures and mitigate the risks associated with AI products. Collaboration is essential between governmental entities, industry, and academic institutions to make sure they share the same goals. With these priorities, Saudi Arabia will be positioned as a global leader in developing AI-based solutions for the benefit of humanity.

Latifa Al-Abdulkarim: Thank you so much, Your Excellency. Of course we will keep exchanging our expertise globally and aligning with global initiatives. Dr. Amal, from your perspective, what steps or empirical methodologies should be prioritized in 2025 to bridge the gap and accelerate the integration of transparency and explainability into AI systems?

Amal El Fallah Seghrouchni: I would like to rely on His Excellency’s diagram; I found it very apt for this question. He talked about algorithms, computing, and data, and I would like to build on that. For computing, I think we should do more with less. For data, I would like to push for data protection and data calibration; I can explain each concept separately. And for algorithms, which I will connect to models since they are essentially the same, I would like to go for trustworthy and inclusive AI. To do that, governance and regulation are of course very important, but the objective of all this together is to achieve inclusion in AI and to be as economical as possible towards our environment with frugal AI. This means we should not use huge data for nothing, or build very big models for nothing; we should customize our algorithms, our models, and our data sets to do more with less. This is my recommendation.

Latifa Al-Abdulkarim: Thank you so much. We are always calling for low-compute models to save energy, and this is also addressed by your suggestions and priorities for action. Mr. Li, considering the adoption of the Global Digital Compact and the rapid advancement of AI, what specific steps or actions should we prioritize to harness AI’s potential for sustainable development and inclusive growth in this transformative era?

Li Junhua: Thank you. From the UN’s perspective, I would like to flag three or four key areas. First, as His Excellency highlighted, we need to emphasize global collaboration among all stakeholders, because collaboration and cooperation among all stakeholders will be key for the digital transformation. Second, a key area is to utilize this IGF platform; as all the distinguished speakers highlighted this morning, it is a primary, open, and inclusive platform, so we need to tap the potential of the IGF. Third, I would argue for allocating additional effort to support capacity building in the Global South, especially at the local community level, because without open access for those communities it is hard to imagine benefits for everyone. Last but not least, as the Minister argued, we need to uphold very responsible use of data. I don’t need to elaborate further. Thank you.

Latifa Al-Abdulkarim: Thank you so much. Ms. Doreen?

Doreen Bogdan-Martin: Thank you. Perhaps to pick up on the last point, on humanity, because in the end it’s all about humanity, about the betterment of humanity. I had the honor and privilege of meeting Pope Francis a couple of weeks ago, and we spoke about technology and humanity. He reminded us that artificial intelligence doesn’t just need a brain, it needs a heart; it needs empathy. So let’s remember that. And of course, when we think about our digital world, what does it mean when a third of humanity is still not connected? The Secretary-General often reminds us that we have to make sure AI does not stand for advancing inequalities. So when it comes to what to prioritize, I think we really have to prioritize closing the gap: closing the digital gap, closing the AI gap. As the Minister said this morning, that gap is a compute gap, a data gap, an algorithmic gap, and, as you just said, a capacity-building gap. We’ve got to close those gaps if AI is going to benefit humanity. We also need to focus on standards, responsible standards for AI. And I guess the last point is about governance: we need more inclusive governance discussions, like here at the IGF, at the WSIS Forum, at AI for Good. We need all stakeholders at the table to discuss governance that benefits all of humanity. Thank you.

Latifa Al-Abdulkarim: Thank you so much. Dr. Gong?

Gong Ke: So let me raise two points. Firstly, I’d like to echo what Under-Secretary-General Li mentioned: capacity building. Capacity building is so important for the further deployment and application of AI in an inclusive and responsible way; the capacity divide lies behind the divides in data, compute, and algorithms. Here I’d like to highlight engineering capacity, which is so important. The World Federation of Engineering Organizations, with the support of UNDESA, UNESCO, and many other United Nations organizations, is carrying out a ten-year engineering capacity building program for Africa. We need your support. Secondly, I’d like to mention the combination of digitalization and sustainable development, to make a dual or twin transformation of sustainability and digitalization. In China, we say we should move AI from chat to product to benefit people, and to achieve double increases and double decreases: the double increases are to increase the quality of production and to increase the efficiency of production; the double decreases are to decrease the carbon footprint and to decrease cost. So I stop here. Thank you.

Latifa Al-Abdulkarim: Thank you so much. I think these are the best words to close our discussion today. And for our audience, please take these actions to the next IGF: build safe AI systems and secure a human-centric digital future, which will be a solution for most of the issues we are discussing here, and leave no one behind. And don’t forget: AI has a heart too. Thank you so much. Ladies and gentlemen, we now invite you to enjoy a delightful lunch break. Please remember to return here in 90 minutes as we look forward to resuming the program promptly.


Doreen Bogdan Martin

Speech speed: 140 words per minute
Speech length: 1272 words
Speech time: 542 seconds

Standards are key for responsible AI development

Explanation

Bogdan Martin emphasizes the importance of standards in the responsible development of AI. She states that standards play a crucial role in ensuring safety, transparency, and ethical use of AI.

Evidence

ITU has launched a group as part of the World Standards Cooperation focusing on multimedia authentication, deepfakes, and misinformation.

Major Discussion Point

Defining and Promoting Transparency and Explainability in AI

Transparency relates to system design, explainability to outcomes

Explanation

Bogdan Martin differentiates between transparency and explainability in AI. She explains that transparency is about disclosing how systems are designed, trained, and deployed, while explainability focuses on how and why AI systems produce specific outcomes.

Major Discussion Point

Defining and Promoting Transparency and Explainability in AI

Agreed with

Abdulah Bin Sharaf Alghamdi

Amal El Fallah Seghrouchni

Agreed on

Importance of transparency and explainability in AI

AI can accelerate progress on SDGs by 70%

Explanation

Bogdan Martin highlights the potential of AI to accelerate progress on the Sustainable Development Goals. She states that leveraging digital technologies, particularly AI, can significantly speed up progress on the 17 SDGs and 169 targets.

Evidence

Joint work with UNDP showed that investing in digital and AI can accelerate progress by 70%.

Major Discussion Point

Leveraging AI for Sustainable Development Goals

Agreed with

Li Junhua

Agreed on

AI’s potential to accelerate progress on Sustainable Development Goals


Abdulah Bin Sharaf Alghamdi

Speech speed: 108 words per minute
Speech length: 1524 words
Speech time: 840 seconds

Saudi Arabia has developed national AI ethics frameworks and initiatives

Explanation

Alghamdi outlines Saudi Arabia’s efforts in promoting ethical use and development of AI. He describes various national frameworks and initiatives implemented to ensure responsible AI development and adoption.

Evidence

Saudi Arabia adopted the UNESCO recommendation on AI ethics, announced a National AI Ethics Framework, and established the International Center for Artificial Intelligence Research and Ethics.

Major Discussion Point

Defining and Promoting Transparency and Explainability in AI

Agreed with

Doreen Bogdan Martin

Amal El Fallah Seghrouchni

Agreed on

Importance of transparency and explainability in AI

Differed with

Amal El Fallah Seghrouchni

Differed on

Approach to AI regulation

Focus on trust, safety, accountability and collaboration

Explanation

Alghamdi emphasizes the need to prioritize trust, safety, accountability, and collaboration in AI governance. He suggests focusing on these aspects to bridge the gap between governance and innovation in AI.

Major Discussion Point

Priorities for Future AI Governance


Amal El Fallah Seghrouchni

Speech speed: 117 words per minute
Speech length: 1448 words
Speech time: 739 seconds

Language diversity poses challenges for inclusive AI development

Explanation

Seghrouchni highlights the challenges posed by language diversity in developing inclusive AI systems. She emphasizes the importance of considering multiple languages and dialects in AI development to ensure inclusivity.

Evidence

Morocco has three languages related to Amazigh in different regions, which poses challenges for AI application.

Major Discussion Point

Defining and Promoting Transparency and Explainability in AI

Lack of AI talent and skills is a major barrier

Explanation

Seghrouchni identifies the shortage of AI talent and skills as a significant barrier to AI implementation. She emphasizes the need to address this skills gap to facilitate AI adoption.

Major Discussion Point

Challenges and Barriers to AI Implementation

Complexity of AI models makes them difficult to explain

Explanation

Seghrouchni points out that the complexity of AI models, particularly large language models, makes them difficult to explain. She notes that the high number of parameters and the black-box nature of many AI systems pose challenges for transparency and explainability.

Evidence

Large language models like ChatGPT deal with billions of parameters, making it impossible for humans to control or fully understand what’s happening in the system.

Major Discussion Point

Challenges and Barriers to AI Implementation

Agreed with

Doreen Bogdan Martin

Abdulah Bin Sharaf Alghamdi

Agreed on

Importance of transparency and explainability in AI

Regulations struggle to keep pace with rapid AI advancements

Explanation

Seghrouchni highlights the challenge of regulations keeping up with the rapid advancements in AI technology. She notes that the development of AI regulations takes much longer than the creation of new algorithms or models.

Evidence

The European AI Act development was disrupted by the emergence of ChatGPT, causing a delay in its finalization.

Major Discussion Point

Challenges and Barriers to AI Implementation

Differed with

Abdulah Bin Sharaf Alghamdi

Differed on

Approach to AI regulation

Develop frugal, trustworthy and inclusive AI

Explanation

Seghrouchni advocates for the development of AI that is frugal, trustworthy, and inclusive. She emphasizes the need to customize algorithms, models, and data sets to do more with less, while ensuring inclusivity and trust.

Major Discussion Point

Priorities for Future AI Governance


Li Junhua

Speech speed: 106 words per minute
Speech length: 734 words
Speech time: 411 seconds

AI enables real-time data analysis for policymaking

Explanation

Li highlights the potential of AI in real-time data analysis for policymaking. He explains that AI can help policymakers understand the interrelationships between different Sustainable Development Goals.

Major Discussion Point

Leveraging AI for Sustainable Development Goals

Agreed with

Doreen Bogdan Martin

Agreed on

AI’s potential to accelerate progress on Sustainable Development Goals

AI can address structural inequalities and aid disaster response

Explanation

Li points out that AI systems can help address structural inequalities and improve disaster response. He emphasizes AI’s potential in resource allocation during urgent or contingent situations.

Major Discussion Point

Leveraging AI for Sustainable Development Goals

Agreed with

Doreen Bogdan Martin

Agreed on

AI’s potential to accelerate progress on Sustainable Development Goals

AI models can help with climate prediction and resource mobilization

Explanation

Li discusses the potential of AI-driven models in climate prediction and resource mobilization. He highlights the importance of these capabilities for policymakers in articulating national efforts and integrating them into global or regional initiatives.

Major Discussion Point

Leveraging AI for Sustainable Development Goals

Agreed with

Doreen Bogdan Martin

Agreed on

AI’s potential to accelerate progress on Sustainable Development Goals

Emphasize global collaboration among all stakeholders

Explanation

Li stresses the importance of global collaboration among all stakeholders in harnessing AI’s potential. He argues that cooperation among various stakeholders is key for digital transformation.

Major Discussion Point

Priorities for Future AI Governance


Gong Ke

Speech speed: 97 words per minute
Speech length: 558 words
Speech time: 345 seconds

China is taking steps to promote responsible AI deployment

Explanation

Gong outlines steps China is taking to promote responsible AI deployment. He emphasizes the importance of building consensus, providing clear guidelines, and developing capacity for AI literacy.

Evidence

China is engaging in multistakeholder dialogues, providing authoritative guidelines, investing in education and training programs, and promoting international collaboration.

Major Discussion Point

Defining and Promoting Transparency and Explainability in AI

Data privacy concerns create challenges for transparency

Explanation

Gong highlights that data privacy concerns pose challenges for AI transparency. He suggests that privacy-preserving technologies need to be developed and adopted to address this issue.

Evidence

Gong mentions technologies like differential privacy, federated learning, and homomorphic encryption as potential solutions.

Major Discussion Point

Challenges and Barriers to AI Implementation

Build engineering capacity, especially in developing regions

Explanation

Gong emphasizes the importance of building engineering capacity, particularly in developing regions. He highlights this as a crucial step for the responsible deployment and application of AI.

Evidence

The World Federation of Engineering Organizations is carrying out a 10-year-long engineering capacity building program for Africa.

Major Discussion Point

Priorities for Future AI Governance

Agreements

Agreement Points

Importance of transparency and explainability in AI

Doreen Bogdan Martin

Abdulah Bin Sharaf Alghamdi

Amal El Fallah Seghrouchni

Transparency relates to system design, explainability to outcomes

Saudi Arabia has developed national AI ethics frameworks and initiatives

Complexity of AI models makes them difficult to explain

The speakers agree on the critical importance of transparency and explainability in AI systems, emphasizing the need for clear guidelines and frameworks to ensure responsible AI development and use.

AI’s potential to accelerate progress on Sustainable Development Goals

Doreen Bogdan Martin

Li Junhua

AI can accelerate progress on SDGs by 70%

AI enables real-time data analysis for policymaking

AI can address structural inequalities and aid disaster response

AI models can help with climate prediction and resource mobilization

Both speakers highlight the significant potential of AI in accelerating progress towards the Sustainable Development Goals, particularly through improved data analysis and decision-making capabilities.

Similar Viewpoints

These speakers emphasize the need for responsible AI development that prioritizes trust, safety, and inclusivity, while also promoting collaboration and clear guidelines.

Abdulah Bin Sharaf Alghamdi

Amal El Fallah Seghrouchni

Gong Ke

Focus on trust, safety, accountability and collaboration

Develop frugal, trustworthy and inclusive AI

China is taking steps to promote responsible AI deployment

Unexpected Consensus

Challenges in AI regulation keeping pace with technological advancements

Amal El Fallah Seghrouchni

Abdulah Bin Sharaf Alghamdi

Regulations struggle to keep pace with rapid AI advancements

Saudi Arabia has developed national AI ethics frameworks and initiatives

Despite coming from different regional perspectives, both speakers recognize the challenge of developing regulations that can keep up with the rapid pace of AI advancements, highlighting a shared concern across different governance approaches.

Overall Assessment

Summary

The speakers generally agree on the importance of transparency, explainability, and responsible development of AI, as well as its potential to accelerate progress on sustainable development goals. There is also consensus on the need for capacity building, particularly in developing regions, and the challenges posed by the rapid advancement of AI technology in relation to regulation and governance.

Consensus level

There is a high level of consensus among the speakers on the main issues discussed. This strong agreement suggests a shared understanding of the challenges and opportunities presented by AI across different regions and perspectives, which could facilitate international cooperation in developing governance frameworks and standards for AI. However, the specific approaches to addressing these challenges may vary based on regional contexts and priorities.

Differences

Different Viewpoints

Approach to AI regulation

Abdulah Bin Sharaf Alghamdi

Amal El Fallah Seghrouchni

Saudi Arabia has developed national AI ethics frameworks and initiatives

Regulations struggle to keep pace with rapid AI advancements

While Alghamdi emphasizes Saudi Arabia’s proactive approach in developing AI ethics frameworks, Seghrouchni highlights the challenges of regulations keeping up with rapid AI advancements, suggesting different perspectives on the effectiveness of current regulatory approaches.

Unexpected Differences

Focus on data quantity vs. quality

Doreen Bogdan Martin

Amal El Fallah Seghrouchni

AI can accelerate progress on SDGs by 70%

Develop frugal, trustworthy and inclusive AI

While Bogdan Martin emphasizes the potential of AI to accelerate progress on SDGs, implying the use of extensive data, Seghrouchni unexpectedly argues for a more frugal approach, suggesting that we don’t need huge amounts of data but rather well-calibrated, specific data sets. This difference in perspective on data usage was not explicitly anticipated in the discussion.

Overall Assessment

Summary

The main areas of disagreement revolve around regulatory approaches, the balance between innovation and governance, and the approach to data usage in AI development.

Difference level

The level of disagreement among the speakers is moderate. While there are differing perspectives on specific issues, there is a general consensus on the importance of responsible AI development and the need for transparency and explainability. These differences in approach could lead to varied strategies in AI governance and implementation across different regions, potentially impacting global coordination efforts.

Partial Agreements

Partial Agreements

Both speakers agree on the need for responsible AI development, but they differ in their approaches. Bogdan Martin emphasizes the importance of standards, while Seghrouchni advocates for frugal, trustworthy, and inclusive AI development.

Doreen Bogdan Martin

Amal El Fallah Seghrouchni

Standards are key for responsible AI development

Develop frugal, trustworthy and inclusive AI

Similar Viewpoints

These speakers emphasize the need for responsible AI development that prioritizes trust, safety, and inclusivity, while also promoting collaboration and clear guidelines.

Abdulah Bin Sharaf Alghamdi

Amal El Fallah Seghrouchni

Gong Ke

Focus on trust, safety, accountability and collaboration

Develop frugal, trustworthy and inclusive AI

China is taking steps to promote responsible AI deployment

Takeaways

Key Takeaways

Transparency and explainability are critical for building public trust in AI systems

Standards and ethical frameworks are essential for responsible AI development

AI has significant potential to accelerate progress on Sustainable Development Goals

Challenges remain in AI implementation, including model complexity, data privacy, and regulatory gaps

Future AI governance should prioritize trust, safety, accountability, and global collaboration

Resolutions and Action Items

Develop more inclusive governance discussions involving all stakeholders

Focus on closing digital and AI gaps, especially in developing regions

Promote capacity building, particularly engineering capacity

Advance technical innovation in privacy-preserving technologies and explainable AI models

Encourage the development of frugal, trustworthy, and inclusive AI systems

Unresolved Issues

How to effectively balance innovation with regulation in rapidly evolving AI landscape

Addressing the global shortage of AI talent and skills

Developing universally agreed definitions for key AI ethics terms

Ensuring AI benefits all of humanity without exacerbating inequalities

Suggested Compromises

Develop flexible, adaptive regulations that can keep pace with AI advancements

Customize AI models and datasets to specific contexts to reduce computational requirements

Balance comprehensive data collection with privacy concerns through targeted, specialized datasets

Thought Provoking Comments

Transparency, for me, is not about explaining, because that would be confused with explainability; it relies on how the system meets expectations, how it functions, et cetera. When it comes to explainability, it's a bit more technical: we have to justify the decision given by the system.

speaker

Amal El Fallah Seghrouchni

reason

This comment provides a clear distinction between transparency and explainability in AI, which are often conflated. It highlights the nuanced differences in how these concepts apply to AI systems.

impact

This clarification set the tone for more precise discussions about transparency and explainability throughout the rest of the conversation. Other speakers referred back to this distinction in their comments.

We have launched a group as part of the World Standards Cooperation, the WSC, so we’re working with partners like IEC, ISO, IEEE, IETF and others. And we’re focusing in that group on multimedia authentication. We’re looking at deepfakes and we’re looking at misinformation.

speaker

Doreen Bogdan Martin

reason

This comment introduces concrete actions being taken to address pressing issues in AI, specifically around deepfakes and misinformation. It shows how international cooperation is being leveraged to tackle these challenges.

impact

This example of practical collaboration shifted the discussion towards more action-oriented approaches and inspired other speakers to share their own initiatives and partnerships.

In Morocco, for example, we have three languages related to Amazigh: in the north, in the middle of the country, and also in the south. They understand each other, but it's quite different from one region to another. So how do we apply AI in this context?

speaker

Amal El Fallah Seghrouchni

reason

This comment brings attention to the challenges of applying AI in multilingual and multicultural contexts, highlighting an often overlooked aspect of AI development and deployment.

impact

This insight broadened the discussion to include cultural and linguistic considerations in AI development, leading to further comments on inclusivity and the need for diverse data sets.

We need to leverage those solutions, whether it's the visually impaired girl from India, Jayatri, who gained her independence through access to AI glasses. It was a great story. Or Mohamedou, a winner of our AI innovation factory, who comes from West Africa. He has been able to combine data with AI and work with farmers, and the farmers he has worked with have seen their yields increase by some 200%.

speaker

Doreen Bogdan Martin

reason

This comment provides concrete examples of how AI can positively impact individuals and communities, particularly in developing regions. It illustrates the practical benefits of AI beyond theoretical discussions.

impact

These real-world examples shifted the conversation towards the tangible impacts of AI on sustainable development and inspired further discussion on how AI can be leveraged for social good.

We think that we need huge data to make systems function. It's not true. When you put everything together from the Internet, you have good data, you have bad data, you have false data, you have whatever. You don't need all this. You need good data, very well calibrated, and this may even help solve the problem of climate change, because you have to keep your data set as clean as possible.

speaker

Amal El Fallah Seghrouchni

reason

This comment challenges the common assumption that more data is always better for AI systems. It emphasizes the importance of data quality over quantity, which is a crucial consideration in AI development.

impact

This insight led to further discussion about responsible data practices and the need for focused, high-quality datasets rather than indiscriminate data collection.

Overall Assessment

These key comments shaped the discussion by broadening its scope beyond technical aspects to include cultural, linguistic, and ethical considerations in AI development and deployment. They highlighted the importance of international collaboration, the need for practical applications of AI for social good, and the significance of responsible data practices. The discussion evolved from theoretical concepts to more concrete examples and action-oriented approaches, emphasizing the real-world impacts of AI on sustainable development and the importance of inclusivity in AI systems.

Follow-up Questions

Is the Turing test for AI still valid today, or do we need a new version to check whether we have trustworthy AI systems?

speaker

Latifa Al Abdulkarim

explanation

This question addresses the evolving nature of AI and the need to reassess our methods for evaluating AI trustworthiness.

How can we develop metrics for explainability and transparency for each context or application of AI?

speaker

Latifa Al Abdulkarim

explanation

This highlights the need for context-specific measures of AI transparency and explainability.

How can we address the challenge of AI systems behaving unpredictably when deployed in different contexts or languages?

speaker

Amal El Fallah Seghrouchni

explanation

This question points to the need for research on making AI systems more adaptable and reliable across different cultural and linguistic contexts.

How can we advance AI models from pure data-driven to jointly driven by data and knowledge?

speaker

Gong Ke

explanation

This suggests a need for research into integrating knowledge graphs and decision-making trees into AI models to improve their performance and explainability.

How can we further develop and implement privacy-preserving technologies like differential privacy, federated learning, and homomorphic encryption in AI systems?

speaker

Gong Ke

explanation

This area of research is crucial for balancing transparency with data privacy in AI systems.
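To make the first of these techniques concrete, the following is a minimal, illustrative sketch of differential privacy using the Laplace mechanism: a counting query is released with calibrated noise so that any single individual's record has only a bounded effect on the output. The function names, dataset, and parameters are our own illustrative choices, not drawn from the session or from any specific library.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw a sample from Laplace(0, scale) via inverse transform sampling."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one record
    changes the true count by at most 1), so Laplace noise with scale
    1/epsilon is sufficient.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: release the number of adults in a toy dataset without
# revealing the exact count.
ages = [23, 35, 17, 42, 61, 15, 29]
noisy = private_count(ages, lambda a: a >= 18, epsilon=1.0)
```

Smaller values of epsilon give stronger privacy but noisier answers; this trade-off between utility (transparency of the released statistic) and individual privacy is exactly the balance the question refers to.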

How can we develop more frugal, trustworthy, and inclusive AI systems?

speaker

Amal El Fallah Seghrouchni

explanation

This research area focuses on creating AI systems that are more efficient, reliable, and accessible to a wider range of users.

How can we better support capacity building for AI in the Global South, especially at local community levels?

speaker

Li Junhua

explanation

This research area is important for ensuring equitable access to AI technologies and benefits across different regions and communities.

How can we close the gaps in compute, data, algorithms, and capacity building in AI?

speaker

Doreen Bogdan Martin

explanation

This research area is crucial for addressing inequalities in AI development and deployment globally.

How can we develop responsible standards for AI that benefit all of humanity?

speaker

Doreen Bogdan Martin

explanation

This research area is important for ensuring that AI development aligns with ethical principles and societal values.

How can we combine digitalization and sustainable development to achieve ‘double increases’ in production quality and efficiency, and ‘double decreases’ in carbon footprint and cost?

speaker

Gong Ke

explanation

This research area focuses on leveraging AI for both economic and environmental sustainability.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.