WS #98 Towards a global, risk-adaptive AI governance framework
Session at a Glance
Summary
This discussion focused on developing a global, risk-adaptive AI governance framework. Participants from various organizations and regions shared insights on balancing innovation with responsible AI development. Key themes included the need for flexible, context-specific approaches to AI regulation and the importance of multi-stakeholder collaboration.
Speakers highlighted ongoing efforts to create risk-based governance frameworks, such as the OECD’s AI classification system and the Council of Europe’s convention on AI. They emphasized the challenge of translating high-level principles into practical guidelines and standards. The discussion touched on regional differences in AI adoption and regulation, with examples from Europe, the MENA region, and the United States.
Participants agreed on the need for interoperability between different governance frameworks while allowing for cultural and contextual variations. They stressed the importance of ongoing dialogue and adaptation as AI technology evolves. The role of education in empowering users to make informed choices about AI was also discussed.
The conversation explored the complexities of defining and mitigating AI risks, with speakers noting the differences between advanced AI systems and everyday AI applications. The need for sector-specific assessments and tailored approaches was emphasized. Participants also discussed the challenges of developing technical standards for AI and the importance of regular review and revision of governance frameworks.
Overall, the discussion converged on the idea of creating adaptive frameworks that can evolve with technological advancements while maintaining core principles of human rights, democracy, and the rule of law. The speakers agreed that ongoing international cooperation and knowledge-sharing are crucial for effective global AI governance.
Key points
Major discussion points:
– The need for a risk-based, adaptive approach to AI governance that balances innovation with safety and rights protection
– The importance of multi-stakeholder collaboration and cultural context in developing AI governance frameworks
– The challenge of defining and assessing AI risks across different use cases and sectors
– The role of standards, education, and ongoing review processes in AI governance
– Balancing global interoperability with flexibility for local/regional differences
Overall purpose:
The goal was to examine various global initiatives on AI governance, identify commonalities, and explore how to develop a more interoperable, global approach to AI governance while accounting for different cultural and regional perspectives.
Tone:
The tone was collaborative and constructive throughout. Speakers shared insights from their diverse backgrounds in a spirit of mutual learning. There was general agreement on key principles, with nuanced discussion of implementation challenges. The tone remained optimistic about finding balanced solutions through ongoing dialogue and adaptive approaches.
Speakers
– Timea Suto: Moderator, Global Digital Policy Lead at the International Chamber of Commerce
– Lucia Russo: Artificial Intelligence Policy Analyst at the OECD
– Thomas Schneider: Vice Chair of the Council of Europe’s Committee on AI
– Sulafah Jabarti: CEO and Founder of Clear Vision, Chair of ICC Saudi Arabia’s Digital Economy Committee
– Noora Al-Thani: Vice Dean at the College of Computer and Information Sciences at King Saud University
– Paloma Villa Mateos: Head of Digital Public Policy at Telefonica
– Melinda Claybaugh: Director of Privacy Policy at Meta
Additional speakers:
– Amal Ahmed: Works at the Digital Government Authority (DGA)
– Jacques Beglinger: Board member, EuroDIG
– Wouter Cobus: Dutch Government, Standardization Advisor
Full session report
Expanded Summary of AI Governance Discussion
Introduction
This discussion, moderated by Timea Suto from the International Chamber of Commerce, focused on developing a global, risk-adaptive AI governance framework. Participants from various organisations and regions shared insights on balancing innovation with responsible AI development. The conversation explored key themes including the need for flexible, context-specific approaches to AI regulation and the importance of multi-stakeholder collaboration.
Key Themes and Discussion Points
1. Risk-based Approaches to AI Governance
A central theme of the discussion was the need for risk-based, adaptive approaches to AI governance. Lucia Russo from the OECD emphasised the importance of flexible, context-based risk assessment, highlighting ongoing efforts to create risk-based governance frameworks, such as the OECD’s AI classification system. Thomas Schneider, Vice Chair of the Council of Europe’s Committee on AI, stressed the importance of cultural considerations in risk perception, noting that different societies may have varying tolerances for risk.
Paloma Villa Mateos from Telefonica highlighted the challenge of balancing innovation and regulation in risk frameworks. Melinda Claybaugh from Meta advocated for focusing on marginal risks specific to AI, suggesting that existing legal frameworks could be leveraged for AI governance.
2. Challenges in Operationalizing AI Governance Frameworks
Speakers discussed the difficulties of translating high-level principles into practical guidelines and standards. Lucia Russo pointed out the challenge of operationalising governance frameworks, while Thomas Schneider noted the tension between harmonisation and local/cultural adaptation. Paloma Villa Mateos highlighted the need to balance people’s rights and innovation in governance, while Melinda Claybaugh advocated for allowing sufficient time to properly define high-risk AI practices.
3. Cultural and Regional Perspectives on AI Governance
Regional perspectives were shared, with Sulafah Jabarti from ICC Saudi Arabia noting heavy investment in AI and digital transformation in the MENA region. Noora Al-Thani from King Saud University highlighted the key role universities play in AI governance and research, particularly in Saudi Arabia. Thomas Schneider emphasised the importance of considering cultural differences in risk perception and governance approaches.
4. Role of Education and Awareness in AI Governance
Sulafah Jabarti highlighted the crucial role of public awareness and education in enabling effective AI governance. Noora Al-Thani stressed the importance of universities in conducting AI research and contributing to governance discussions. Speakers agreed on the need for ongoing dialogue and education to adapt governance as AI evolves.
5. Importance of Multi-stakeholder Collaboration
There was broad consensus on the importance of multi-stakeholder collaboration, including governments, private sector, academia, and civil society, in developing effective AI governance frameworks. Lucia Russo emphasised the role of global forums in facilitating multi-stakeholder and cross-cultural dialogue. Sulafah Jabarti advocated for developing harmonised global frameworks with local flexibility.
6. Role of Standards in AI Governance
During the Q&A session, the importance of standards in AI governance was discussed. Speakers highlighted the need for technical standards to support the implementation of governance frameworks and ensure interoperability. The discussion touched on the differences between internet governance and AI governance, noting that AI may require more proactive and comprehensive approaches.
Areas of Agreement
There was broad consensus among speakers on several key points:
1. The need for adaptive and flexible AI governance frameworks that can evolve with technological advancements while considering local contexts and cultural differences.
2. The importance of multi-stakeholder collaboration in developing effective AI governance frameworks.
3. The recognition that cultural differences play a significant role in risk perception and governance approaches.
4. The need for ongoing dialogue and adaptation as AI technology evolves.
5. The importance of education and awareness-building around AI risks and benefits.
Key Takeaways and Unresolved Issues
The discussion yielded several key takeaways:
1. The need for risk-based, adaptive approaches to AI governance that balance innovation with risk mitigation.
2. The importance of flexible frameworks that account for cultural differences and evolving technology.
3. The crucial role of multi-stakeholder collaboration and dialogue in developing effective, interoperable AI governance approaches.
4. The potential need for sector-specific and use case-specific governance rather than one-size-fits-all approaches.
5. The importance of standards in supporting the implementation of AI governance frameworks.
However, several issues remained unresolved:
1. How to specifically define and categorise high-risk AI applications.
2. How to balance regional approaches with the need for global interoperability.
3. How to operationalise risk-based frameworks in practice across different sectors.
4. How to address cultural differences in risk perception and tolerance while maintaining a coherent global approach.
Conclusion
The discussion highlighted the complex challenges involved in developing global AI governance frameworks. While there was broad agreement on the need for flexible, adaptive approaches that balance innovation with risk mitigation, the operationalisation of these principles remains a significant challenge. The conversation underscored the importance of ongoing multi-stakeholder dialogue, collaboration, and adaptability to address these challenges and develop effective, culturally sensitive AI governance frameworks.
Session Transcript
Timea Suto: Global Risk Adaptive AI Governance Framework. I am very glad that you’ve decided to spend an hour and a half of your time with us this afternoon. My name is Timea Suto, I am the Global Digital Policy Lead at the International Chamber of Commerce, and I will be moderating this session today. We have proposed this session for the agenda of the IGF not because there are not enough conversations on AI, because there clearly are quite a few, but because we wanted to find a way to discuss, or take stock, rather, a little bit of all the various initiatives that are out there on AI governance and governance frameworks, and try and see if we can find some commonalities, or perhaps some ideas, through which we can look at AI governance from a truly global perspective, and push for a more interoperable outcome, or some sort of common approach on how we look at artificial intelligence governance. I’m not going to spend too much time introducing the landscape of AI, because we have all heard a lot about it, and I’m sure our speakers will talk a lot about it as well, but I will take a moment to just introduce the speakers that are going to be here with us today, trying to uncover some of these questions. In the order in which they will be speaking on the panel, I have Ms. Lucia Russo, who is Artificial Intelligence Policy Analyst at the OECD, Mr. Thomas Schneider, who is Vice Chair of the Council of Europe’s Committee on AI, Ms. Sulafah Jabarti, CEO and Founder of Clear Vision, and Chair of ICC Saudi Arabia’s Digital Economy Committee. I also have Ms. Noora Al-Thani, who is Vice Dean at the College of Computer and Information Sciences at King Saud University, and Ms. Paloma Villa Mateos, who is joining us online from Spain. Thank you, Paloma, for being with us; she is Head of Digital Public Policy at Telefonica. And Ms. Melinda Claybaugh, who is Director of Privacy Policy at Meta. So to start off the roundtable, I am just going to ask our panelists to share a little bit about their experience in fostering trusted, responsible and inclusive AI, and share a few of the good practices or projects that they’re working on that incorporate a risk-based approach into AI governance frameworks. Why have we chosen to ask our panelists about a risk-based framework? It’s because, when we look at the governance frameworks around the world, we hear a lot of them say: yes, our governance framework is risk-based, the approach to AI governance needs to be risk-based. So there seems to be agreement on that, but there’s little agreement on what it actually means. So that’s what we’re trying to figure out together in this session. So to first look at this, I’m going to turn to Lucia, and I hope that you can share a little bit of information on how the OECD is looking at facilitating cross-border collaboration on AI governance, and what are some of the key challenges and opportunities in operationalizing this risk-based approach?
Lucia Russo: First of all, let me thank you for organizing this very important session, and welcome all the other speakers and participants here. So I will talk a bit about the way the OECD is promoting interoperability in international AI governance, and I will mention a few examples of how we are putting this risk-based approach into practice. So just to start off, the cornerstone of the work of the OECD is the OECD Recommendation on Artificial Intelligence that was adopted in 2019 and recently revised to take stock of technological and policy developments, notably advanced AI systems. And since then, our work has been really focusing on how to move from these high-level principles into practice. And when we talk about a risk-based approach here, of course, we mean having a proportionate system of duties and obligations that is tailored to the level of risk that each and every AI system brings. And so already in 2022, the OECD developed its own AI classification framework in the form of a scoring table that evaluates AI systems according to five different dimensions: people and planet, economic context, data and input, AI model, and task and output. And I don’t want to go too much into detail here, but basically under each of these dimensions there would be an evaluation: for instance, under data and input, there are considerations related to privacy or copyright; under task and output, the autonomy level of a system; and under economic context, the business function of the system, which in turn tells us about the impact that the system may have on its business environment. And so this risk-based approach is what we then see also in a regulatory framework such as the EU AI Act, which of course takes this risk-based approach and establishes stricter measures for systems that are deemed to pose the highest risk for safety and fundamental rights in the EU. And we see this risk-based approach also emerging in other frameworks. For instance, the G7 Hiroshima process that was launched under the Japanese presidency in 2023 led to the adoption of a voluntary code of conduct for AI developers that also calls to develop, implement and disclose AI governance and risk management policies in line with the risk-based approach. And to build on this code of conduct, what we are currently working on at the OECD is supporting the G7 Italian presidency in the development of a monitoring and reporting framework for these commitments, which means moving from this code of conduct, which can again be high-level in a sense, to what it means in practice for companies to adhere to and respect the commitments that are embedded in this code. And this is obviously to respond to the needs of transparency and accountability, but it is also, I think, a good example of how we go a level up from national borders to an international cooperation that really works across jurisdictions, because it is developed by the G7, but adherence to this code of conduct is of course not limited to companies in G7 member countries.
And lastly, I would just perhaps mention another initiative that we have at the OECD, the AI Incident Monitor, because again, when we talk about risks, what we need to take into account is also the evidence on which we build the frameworks, and the objective of this monitor is to understand where actual harms materialize, and so to have better-informed decision-making when it comes to establishing what the high-risk categories are and how to regulate those categories. And so this is already an online tool, and it is also a reporting framework that is harmonized across different countries. I’ll stop here and am happy to engage in the conversation later.
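To make the classification framework described above more concrete, the following is a minimal illustrative sketch, in Python, of a dimension-based scoring table. The five dimension names follow the OECD framework as Lucia Russo describes it; the example criteria, the 0–4 scores, and the max-based aggregation rule are hypothetical and are not part of the OECD instrument.

```python
# Illustrative sketch of a dimension-based classification scoring table.
# Dimension names follow the OECD framework as described above; the
# example scores and the max-based aggregation rule are hypothetical.
DIMENSIONS = (
    "people_and_planet",  # e.g. impact on people, rights, environment
    "economic_context",   # e.g. business function, sector, criticality
    "data_and_input",     # e.g. privacy, copyright, data provenance
    "ai_model",           # e.g. model type, transparency
    "task_and_output",    # e.g. autonomy level of the system
)

def classify(scores: dict[str, int]) -> int:
    """Return an overall risk tier (0-4) from per-dimension scores."""
    missing = set(DIMENSIONS) - scores.keys()
    if missing:
        raise ValueError(f"missing dimensions: {sorted(missing)}")
    # Hypothetical rule: the riskiest dimension sets the overall tier,
    # so a single high-impact dimension cannot be averaged away.
    return max(scores[d] for d in DIMENSIONS)

# Example: an assessor scores a hypothetical CV-screening tool.
tier = classify({
    "people_and_planet": 3,  # affects access to employment
    "economic_context": 2,
    "data_and_input": 3,     # sensitive personal data
    "ai_model": 1,
    "task_and_output": 2,    # recommends; a human decides
})
print(tier)  # -> 3
```

One note on the hypothetical aggregation rule: taking the maximum across dimensions means a single high-impact dimension dominates the result, which mirrors the proportionality idea that obligations should track the highest risk an AI system poses.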
Timea Suto: Thank you so much, Lucia. Quite a lot going on at the OECD, but it’s not the only forum that does this work. You mentioned also how the OECD’s work inspires work on the EU AI Act and at the G7. And I also want to ask Thomas how you are collaborating, from your previous role as chair of the CAI and now as its vice chair, on some of these risk-based approaches to AI, both as you were negotiating the convention itself and now the risk-based impact assessment mechanism.
Thomas Schneider: Thank you very much. And actually, yeah, it’s good that one of the sessions actually tries to concentrate on the risk-based approach and what that actually means, because we talk a lot about legal texts and we forget about the operationalization of all of this. So before going into how the Council of Europe’s work fits into all of this, let me again start with the analogy of engines, because there are many similarities. We have engines in machines that produce goods that are more or less big, more or less dangerous for the people. We have engines in cars, in airplanes, in tanks, in many other vehicles. They may be the same engines or similar engines. And they all have, of course, opportunities to produce something, but they also have risks. But we do not have one regulation for the engine. We have thousands of legal norms for the engines, for the vehicle itself, for the drivers, for the infrastructure, liability rules for parts of a car or parts of an airplane, for the airline company, for the one selling the tickets and so on, and we have thousands of technical norms, and we have socio-cultural norms. From culture to culture, there are different expectations on how to deal with risks. In some cultures, they expect the king or the president or the state to take care of your risk. In other cultures, you have more the expectation that people are capable of dealing with risks themselves. And you have everything in between. And basically the same logic applies to AI as well, because, again, the risks are very much context-based in terms of where you apply a certain algorithm or a set of algorithms. And normally it’s not the algorithm itself: algorithms are part of machines, of tools that we buy, like we have an engine as part of a car or part of an airplane. And I think one thing is to look at the legal texts and the convergence among all the legal texts. As you say, they talk about a risk-based approach, they talk about impact. The Council of Europe Convention is built on a graduated and differentiated approach, which I think is slightly more exact, because it’s not just vertical risk, high or low, but it’s also horizontal: the same thing may be treated differently in different areas, although it’s the same algorithm; even within the health sector, you may have differences and so on. And for instance, the Convention of the Council of Europe is an open convention to all countries in the world. So it’s not an instrument for Europe. It just requires states to have mechanisms in place. So it’s a very general requirement to have functioning mechanisms in place. And it says what they should be able to deliver, i.e. identify risks with regard to human rights, democracy and rule of law, and that states have remedies in place in case risks actually become impacts, and a mitigation plan and so on. It doesn’t go into further detail. This is where the second instrument comes in that the Council of Europe is currently working on, and this is done in cooperation with the technical standards bodies, with the OECD, with UNESCO, with hundreds of experts from civil society, academia and businesses. It’s a non-binding instrument, in contrast to the Convention, on several levels. It’s a methodology for a human rights, democracy and rule of law risk and impact assessment tool.
The Level 2 document is a document of about 20 pages explaining and giving guidance on what you need, which is: a context-based initial risk analysis; stakeholder engagement, in order to see whether your initial risk analysis goes in the right direction or whether you’re missing something; then the actual risk analysis, which is a classical checklist exercise; then a mitigation plan, so if you realize that risks become reality, how are you going to react, how are you protecting people; and then, of course, some logic about iteration, how you do this with a technology that is evolving. And it builds on the work of the technical standards institutions that are also participating, and tries to make the link between the legal text, a legal norm, and the technical norm, while also giving the flexibility to take into account socio-cultural norms and expectations of how to deal with risk, which you may not be able to harmonize. You may be able to harmonize technical norms, but not socio-cultural norms. And I think this is important. Just one final thing, and we see how difficult it is: the EU gave a mandate to CEN and CENELEC two years ago to develop technical norms to operationalize and implement the AI Act, and both sides are still struggling to understand each other and to see whether they are actually able to come up with something. This is just one example, and I don’t blame them; it’s a really difficult issue. But it shows how important it is that there is cooperation, and the OECD is very helpful in bringing people together, the Council of Europe as well, standardization organizations and others. We need to build bridges between these technical bodies and the legal bodies and the cultural bodies in the end, so that we understand how to make this work as a whole and not just on paper, as a legal text or in a questionnaire for programmers. So this needs to fit together. And there’s a huge amount of work ahead of us.
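As a rough illustration of the iterative methodology outlined above (context-based initial risk analysis, stakeholder engagement, checklist-style risk analysis, mitigation plan, iteration), here is a hypothetical sketch. All function names, inputs, and data shapes are invented for illustration; they are not drawn from the Council of Europe instrument itself.

```python
# Hypothetical sketch of the iterative assessment loop described above:
# context-based initial analysis -> stakeholder engagement -> checklist-style
# risk analysis -> mitigation plan -> repeat as the system evolves.

def initial_context_analysis(system: dict) -> list[str]:
    """Rough triage: which areas could this deployment touch?"""
    risks = []
    if system.get("affects_individuals"):
        risks.append("human_rights")
    if system.get("used_by_public_sector"):
        risks += ["democracy", "rule_of_law"]
    return risks

def stakeholder_engagement(risks: list[str], raised: list[str]) -> list[str]:
    """Engagement checks the triage and may surface risks it missed."""
    return sorted(set(risks) | set(raised))

def checklist_assessment(risks: list[str]) -> dict[str, str]:
    """Checklist-style analysis: rate each identified risk."""
    return {r: "high" if r == "human_rights" else "medium" for r in risks}

def assess(system: dict, rounds: int = 2) -> dict[str, str]:
    ratings: dict[str, str] = {}
    for _ in range(rounds):  # iterate: technology and context keep evolving
        risks = stakeholder_engagement(
            initial_context_analysis(system),
            system.get("stakeholder_concerns", []),
        )
        ratings = checklist_assessment(risks)
        # Mitigation plan: how to react if high risks become actual impacts.
        system["mitigation_plan"] = [r for r, lvl in ratings.items() if lvl == "high"]
    return ratings

print(assess({"affects_individuals": True, "used_by_public_sector": True}))
```

The loop structure is the point here: because the technology and its context evolve, the assessment is re-run rather than performed once, matching the iteration step Thomas Schneider describes.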
Timea Suto: Thank you, Thomas. That was a great intro to the work of the Council of Europe on this. And I want to keep focusing on this element of regional and cultural differences and approaches to context as we move out from the OECD and Council of Europe settings into the MENA region. I want to turn to Sulafah next and ask: what are your insights working in a technology company in this region, and perhaps even beyond Saudi Arabia, in the entire MENA context? What are some of the views that you see on how AI technology works here, and how are the risk-based approaches on the table here? And also, what are some of the elements that we can maybe elevate into a more global approach?
Sulafah Jabarti: OK, so I guess we all agree that AI has been reshaping the economy and society all over the world, and we are speaking about such a globalized economy and a globalized area, which is AI, one of the most advanced technologies in the world. So the globalization aspect here is much wider than regular business and regular digital transformation aspects. And so, speaking about what’s unique or specific if we zoom out of this globalized space, I think the uniqueness of the MENA region is led by countries like Saudi Arabia that are investing heavily in AI. So one of the very unique pointers in the MENA region is heavy investment and leadership in digital transformation, supported by government, supported by the private sector. As an example, the Alat company that has been launched under the PIF recently, with a capital of more than one billion dollars, is a specified company just for investing in AI, deep technologies, manufacturing and localizing all of that out of here, making the best of the international minds, the international technologies and the investment environment here. Also, the investment in the sector, whether it’s financial investment or investing in the minds, the regulations, the government mindset, has actually given us a result: we have reached number one this year in the United Nations indicator of digital government, where we stood at number 52 six years back. And that just says how much investment is going on, and the speed. And speed cannot be based only on financial investment. It definitely demands collaboration between mindsets, government, private sector, academia, all together, backed, of course, by a very strong economy. The second uniqueness aspect, in my opinion, is something everyone also, I guess, agrees upon: such a young generation and tech-savvy youth, which makes up the biggest part of our population. So that also adds to the speed of embedding these technologies. I mean, a lot of technologies are just embedded and live before we even know about them. And I guess this is also part of why regulations are very important. When we speak about risk-based regulations, the advantage is that they are flexible, supposedly, to meet these different levels of maturity of applications and technologies. And that’s why flexibility is very much needed in this kind of regulation. Also, adaptability to the different kinds of ongoing risks, and differentiation between the kinds of applications, versus the kind of blanket regulations that are definitely not needed for these kinds of technologies. So if we go back to the globalized framework: I guess we all know that the European Union this year has activated its landmark AI law, which is considered the leading global law, with nothing this mature before it, building on the EU AI Act proposal of 2021. And considering the kind of effort put into such a law, when we speak today about localization: in technology we never believe in starting from scratch. You capitalize on what’s there, open source and other technologies that you can build on. It needs to be the same kind of mindset in terms of regulations.
So what we need to do in MENA is take those frameworks and then just fill the gap, taking into consideration the unique, let’s say, socio-economic, cultural and technological aspects, which I don’t believe are going to be a lot in a domain like AI, and then embedding them. And I guess, as we speak, a lot has already been done in Saudi Arabia in this area, and I speak about Saudi Arabia as leading the region here. We have SDAIA, which is the authority for data and AI. They have launched a couple of frameworks in different areas, and I believe we can definitely match and fill the gap between what’s been done internationally and locally to move this faster. And so, summing that up, I guess what we all agree on, in MENA and globally, is that this kind of risk-based framework supposedly gives a much wider space of flexibility and adaptation and inclusivity, supposedly, for everyone to make the best of what’s going on all around the world, and for us to be able to keep leading that for sustainable framework adjustments.
Timea Suto: Thank you. Thank you very much, Sulafah. A lot to learn from. I’m always amazed every time you quote this number, from 52 to number one in six years. I think this is an amazing feat, and I like how you put that into the context of what that requires: of course investment, collaboration with the various expert groups, but also the energy and the talent of young people, which brings me to Noora. I’m sorry I messed up your title before, but from your work at the university in the information technology department, how do you see the role of universities in building this new generation of developers and tech workers?
Noora Al-Thani: Hello. First of all, I’m just pleased to be among the distinguished speakers. To start with, I would like to add, as Ms. Sulafah mentioned, that Saudi Arabia and the MENA region are leading. According to Vision 2030, AI actually has a pivotal role at the core of the vision, basically, because they want to diversify the economy, reduce dependency on oil, and establish the Kingdom as a global leader in technology and innovation. And Saudi Arabia actually spearheads that effort and aims to develop a robust AI and generative AI ecosystem. As Ms. Sulafah mentioned, they have published several frameworks. They published a framework in September 2023, and again they published the AI adoption framework in September 2024. And they published in January 2024 the artificial intelligence guidelines. So they are keeping updated with all that’s coming within the technology and the legislation. And in the latest publication, the artificial intelligence guidelines, SDAIA ensured responsible use of AI, emphasizing data privacy and ethical standards, and tried to balance innovation with societal values, potential risks, and mitigation strategies. They explicitly talked about certification fraud as a risk since, as you all know, AI can now produce human-like content: it can write essays, even detailed research, undermining traditional educational and professional standards. Therefore, SDAIA also stated mitigation measures for assessment, education, and training explicitly here in Saudi Arabia. And in terms of AI adoption in higher education institutes, the adoption and management of new technology in higher education institutes can actually be complex due to their diverse constituents, including faculty, students, and staff, each with different needs and priorities. But there is a paper that was published in September 2024, titled AI Governance in Higher Education: Case Studies of Guidance at Big Ten Universities, published in the journal Future Internet. This study examined how prestigious universities in the United States are approaching the governance of artificial intelligence, particularly in response to the growing influence of generative AI in higher education. They reviewed AI governance policies and strategies at 14 prestigious universities. What we can see from this study is that universities have started investing generously in AI governance. For example, the Massachusetts Institute of Technology developed a comprehensive framework for ethical AI governance and has invested $1 billion in AI initiatives. The University of Utah launched a $100 million responsible AI initiative aimed at using AI to tackle societal issues while protecting civil rights. Tsinghua University established the Institute for AI International Governance and the Center for AI Governance, focusing on AI ethics, policy development, and international cooperation. The University of Oxford launched the Oxford Martin AI Governance Initiative to understand and mitigate AI risks through research and collaboration. There is also the University of Birmingham’s Centre for Artificial Intelligence and Government. And lastly, universities also recognize the importance of dialogue and take innovative steps to promote it. For instance, the University of Illinois Urbana-Champaign harnessed the power of social media and created an online discussion space to discuss issues related to gen AI within the university community.
So these universities are not only investing financially but developing comprehensive programs, research initiatives, and governance structures to address all these issues. And to go back to the MENA region, again I’ll go back to Saudi Arabia. In Saudi Arabia, universities are focused on AI within, obviously, Vision 2030. At KSU, we have established the KSU Zakat Center and the KSU Zakat Office, both concerned with AI. The KSU Zakat Center has dedicated its efforts, through its numerous partnerships, to localizing knowledge and technology within the field of AI, while the Zakat Office is concerned with developing AI research and applied programs that serve different academic and professional disciplines. And again, there’s KAUST, which established the Center of Excellence for Generative AI, dedicated to placing Saudi Arabia at the forefront of AI research in the region and globally.
Timea Suto: Thank you very much, Noora. Quite a lot that universities are able to do, and I guess they’re able to do that when they’re supported to do it. So again, I think what you’ve said fits very nicely with what the panel has said earlier already about how we make sure that expert communities, whether based in academic circles, in private sector circles, or in government or international organizations, manage to come together and build on each other’s knowledge to further this work, and that we need the expertise of all of them if we want to get the approach right. So in that vein, I also want to turn to Paloma online and ask: where do you see the role of the private sector’s efforts in driving this responsible AI innovation by design? And what is the role of the policies that are necessary around this, to help us make sure that the private sector can do this?
Paloma Villa Mateos: Yeah, thank you. Thank you. Can you hear me well? It’s okay? Okay, great. Well, thank you. I do think that the magic word here is AI governance. And this applies to the private and the public sector. I do think that we need to be humble and have a substantial conversation between us, because otherwise we will not benefit from AI. I think we have done a great job in the last decades in the different international organizations and also in the companies. And the question for us is, in the end, how to ensure AI that is developed responsibly while fostering innovation. And I do think that AI governance, from the company perspective, lies in four interconnected pillars, which are really important. The first one is principles and guidelines, which mainly come from international organizations. Regulation is the second pillar, then technical standards, and industry self-regulation. Most of them have been already mentioned, but I think it is important to try to get this interconnected proposal, starting from some principles to the more sophisticated development of AI. Regarding, for example, the principles and guidelines, I do believe that the OECD, the Council of Europe, UNESCO, the Hiroshima principles, the executive order, all this going around the world is directly connected to what companies are doing. I think the development of what we have been doing in the last two decades has been going in parallel, and this is very good news. The principles are there when we talk about transparency, fairness, privacy, human rights, democracy, rule of law. We at Telefonica, with many other companies, Microsoft, Meta, have been working with the Council of Europe and with the OECD on a daily basis; with UNESCO, we have signed. These principles are there, and I do think, and this is my positive insight, that we are on the same road. The problem comes, as I think Thomas has said, when we come from the high-level principles down to earth. We don’t know how to apply all these principles. Now, for example, at the OECD and many other organisations, we are developing in a more sophisticated way things related to AI, not only high-risk. I mean, the high-risk approach is everywhere; there is no discussion about that. But we are now discussing more specific topics, for example, AI and intellectual property. And this is, again, the problem of how we make possible this interoperability of regimes in Europe with other regions, where the history of the juridical tradition, the legal tradition, is completely different. How can we find this common interplay? So, the second pillar is regulation. And I think that here, companies, in the case, for example, of Europe, where the AI Act is already in place, and based on the principles we have already discussed, are doing a great job, for example, signing the EU AI Pact, which is really relevant for companies trying to voluntarily implement the AI Act before it is in force. And many companies are engaging in core commitments: an AI governance strategy, mapping the AI systems, and developing AI literacy within the companies and outside the companies. These three core commitments of companies are relevant for what we are talking about now. I mean, this collaboration between the institutions, the public sector, and the companies is extremely relevant. The problem here in this second pillar, in regulation, is how we will implement regulation. Again, this is the problem.
Maybe the problem is not the regulation itself, but all the standardisation and what it implies for high-risk systems. And sometimes there is a grey zone. Sometimes when we talk with companies, with institutions, the problem is that the discussion is not substantial, because we are trying to very quickly resolve the standardisation process, which is very difficult, and the technical details are really difficult. So that is why I started by talking about being humble and having a substantive conversation between the public and private sector, because sometimes we have a legal instrument from the 20th century, but the technology is from the 21st century. This is a challenge, a challenge for the institutions, but also for the companies, because we have to comply with this regulation when the legal framework is not fit for purpose. For the third pillar, which is technical standards, I have to say that companies, Telefonica and many others, and I’m talking about Telefonica, are involved in the standardisation process, participating in all the conversations, also with the AI Office, with the standardisation and the code of practice. But we also have international standards with ISO and NIST and so on. In the end, what we have, as we are seeing also in the ITU, is a complex scenario with many standardisation processes going around. So here we have a lot of work ahead. But I have to say that this conversation is taking place with the participation of companies. And the fourth pillar has to do with self-regulation. And here I have to say that companies, in the last decade, especially those who are using AI internally and offering data services, have put in place AI governance strategies with a very substantial model, scaling the process internally with the responsible people within the companies, and also ways to identify the risks internally that are really in line with what you have already said. I think self-regulation is relevant because the technology moves very fast. We have seen that during the process of the AI Act: we started talking about AI and, in the end, general-purpose AI came in in the middle, because the technology is faster than the legal framework. So I do think that self-regulation and responsible AI are critical here. And I’ll stop here because I think we can go in depth later.
Timea Suto: Thank you. Thank you very much for that, Paloma. It’s quite a complex framework, as you said. I think one commonality across all those four pillars is the collaboration between industry and regulators to make sure that we get the balance right, that we balance innovation and the rapid development of technology with some of those commitments and goals that we want to address through risk management. So I want to stay with some of this idea as I turn to Melinda. We’ve heard a lot about the safety risks of AI, and there have been a number of global summits already on this issue. So I’m just wondering if you might want to draw out a few lessons learned there, and see what we can do to get this balancing act right between innovation, investment and risks, but also what it is that the private sector is already doing to help that balance. Over to you, Melinda.
Melinda Claybaugh: Great. Thank you so much. Just a little bit of context to explain my company’s position and how we’re coming at the AI conversation. So we have two main buckets of AI products. One is our generative AI products, which are in any app: in Facebook and Instagram and WhatsApp, you may have seen a Meta AI assistant. It’s basically a chatbot powered by a large language model that you can interact with and ask to do things and answer questions. We also have image generation tools, things like that, that help you create content online. The other bucket of our AI products is a large language model called Llama, of which we have released several generations. It’s an open source model, which means we make it freely available to anyone to download. So it’s essentially giving away, you know, many, many millions of dollars of investment to entrepreneurs and developers who want to build on it for their own applications. I think that’s just important context to set for how we come at the conversation as both a model provider and a gen AI system deployer. So let me start at the gen AI system level, our Meta AI assistant. We assess risk the way we would assess privacy risks in general. We built our AI risk management program on top of our privacy risk management program, which is to say that any time a new feature or product or assistant is developed or improved in a certain way, it goes through a risk assessment and review process, and mitigations are identified and applied. And there’s a cycle of improvement in the same way as happens on the data privacy side. With respect to our large language model, its risks are assessed and mitigated at different points in the development of the model. So at the stage of data collection, the pre-training stage, we’re actually going out of our way to not collect personal data, and then we’re identifying potential personal data and removing it, identifying data that may have copyright protections, going through all of those risks at the pre-training stage. Then, once the model is trained, we implement red teaming and other safety testing, risk assessment and mitigation processes to make sure that the model we’re releasing is safe, and then we release it and developers can build on it. In addition to that product development process, we have also, as you mentioned, signed up to multiple international frameworks. So domestically, to start, in the U.S., we were an early adopter of the White House commitments, which are high-level commitments to the safe deployment of advanced AI. And then we signed on to the Seoul Frontier AI Safety Commitments. And so I think what we’re seeing is a really positive harmonization around safety frameworks for advanced or frontier AI. And I think that will be furthered by the development of the various AI safety institutes and how they are going to be working together to understand the science of risk identification, mitigation, evaluations, benchmarks, all of that. And so I think those are really positive developments. I think where some of the challenges arise is in the more bread-and-butter AI.
So not the kind of frontier AI safety stuff we’re talking about, but how is AI being applied in our everyday lives, to maybe make decisions about us or offer us goods or services? And I think that’s where some of the stickiness comes up, in terms of reaching consensus about what are the risks that we are trying to identify, what are the mitigations that should be applied, and is there a global view on that or should it be nationally determined? Because there are going to be differences in how different societies view different risks. So I think that’s a really interesting thing to keep in mind: the difference between the very advanced AI safety concerns and the day-to-day bread-and-butter AI concerns. And just a few general thoughts on risk. I think it’s really important to focus on the marginal risk we’re talking about, because I think we tend to come to this and think, oh my God, AI is new and it’s different and it’s terrible. And, you know, in fact, we’ve been dealing with AI, classic AI, for a really long time. And I think what people get concerned about is this really advanced stuff that maybe we’ll lose control of, people worry about, or maybe it’s doing things we don’t understand, and all of that. And we have many, many legal frameworks that already govern things like data privacy, that already govern things like kids’ safety online. So we have a lot of mature frameworks to draw from. And I think from a company’s perspective, what is going to be really important is how these things are rationalized. And so I think there’s a risk of imposing, through the lens of AI, a whole new framework and regime on top of all of the ones we already have. And then how do those relate to one another? We’re seeing this to some extent in Europe, in the AI and privacy conversation, and how data can be used in AI or not. How does the legal regime on data privacy intersect with AI? And that balance of innovation and privacy protection is really at a tension point, where we all recognize data is needed for AI advances, but of course there are limits around it. And I think the unique nature of large language models means that we may not be able to implement data subject rights or other things that arise in data privacy frameworks the way that we can in other types of data processing. So there’s a real-life tension there that I think has to be grappled with. And then just two other points I want to make real quick. I think it’s really important to focus on the use cases. For us, as a large language model provider, and particularly as an open source LLM provider, we release our model, we do all the mitigations that we can, and then we have no idea how it’s used. Anyone can build on it for any purpose, and it’s up to them to put in place the mitigations that are necessary for their particular use cases. And so I think it’s important, and I know the OECD is looking at the value chain and really breaking down what are the roles and responsibilities of the various actors in the AI value chain, and what is in their control to identify and mitigate. I think that’s a really important conversation, and again, the use case conversation. And then, particularly, looking at what are the laws we already have in place. We already have laws about discrimination in employment in most places. We already have laws about discrimination in housing and services.
So what is net new here that is not already covered, and can we cover those risks in existing frameworks, as opposed to new frameworks?
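A toy sketch of the stage-specific mitigations described above, assuming a simple two-stage pipeline: filtering flagged documents before pre-training, then probing the trained model’s outputs. The email regex is a deliberately simplistic stand-in for the personal-data and copyright classifiers a model provider would actually use; nothing here reflects Meta’s actual tooling.

```python
# Toy sketch of stage-specific mitigations along a model lifecycle:
# filter flagged documents before pre-training, then check probed
# outputs of the trained model against a blocklist.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def flagged(document: str) -> bool:
    """Stand-in detector: flag documents containing an email address."""
    return bool(EMAIL.search(document))

def pretraining_filter(corpus: list[str]) -> list[str]:
    """Pre-training stage: drop documents flagged as personal/protected."""
    return [doc for doc in corpus if not flagged(doc)]

def red_team_check(model_output: str, disallowed: list[str]) -> bool:
    """Post-training stage: check a probed output against a blocklist."""
    return not any(term in model_output.lower() for term in disallowed)

corpus = ["public report on river safety", "contact me at jane@example.com"]
print(pretraining_filter(corpus))  # second document removed before training
print(red_team_check("Here is a recipe.", ["how to build a weapon"]))  # True
```

The value-chain point in the discussion maps onto where these checks sit: the model provider can only apply mitigations up to release, while downstream deployers must add use-case-specific ones.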
Timea Suto: Thanks. Thank you, Melinda, for that. I forgot to turn on the microphone. It’s been quite a rich first round around this table. We’ve heard a number of ideas coming out of the speakers here on what it is that we’re facing in terms of a risk-based approach to AI, and what are some of the elements that we can build on. So I want to move to our second round of questions. I have the same question for all of you, in addition to reacting to what you’ve heard from one another: just really share a little bit on how you think forums like the one we are sitting in today, these global conversations at the IGF and other global fora, can help bring what you’ve mentioned in your interventions to fruition, for an actual global approach to the governance of AI, in a way that, as most of you highlighted, allows the rapid growth of technology and innovation while making sure that some of the harms that we fear are actually mitigated. I don’t want to summarize what you’ve all said because it’s going to take too much time, but I hope we can take this one question and do a round-robin around the table, react to one another, and bring out those elements that can actually help in global conversations. So Lucia, you spoke first. I’ll hand the microphone over to you.
Lucia Russo: Thank you, Timea. It’s truly fascinating to hear from such a diverse group of speakers. And I think, for me, what resonates the most with what we heard is, on one hand, this need for multi-stakeholder conversation and collaboration, the need also to have a contextual and cultural approach to this type of regulation, and also the need to think in practical terms about what it means to translate these principles into concrete requirements along this risk spectrum that we have advocated. And so what I want to get at is that we see some regulatory fragmentation, and this is no news to anyone, and we perhaps shouldn’t seek full harmonization, because that’s maybe not achievable, and perhaps not even desirable, because, as we have heard, there are some cultural considerations to be made: local values, technological developments, but even cultural and institutional history. So the way we are approaching this issue at the OECD is really to have these multi-stakeholder groups coming together and discussing. We have these expert groups; overall we have a network of 600 experts that work with us, and they are divided into expert groups that focus on specific topics. For instance, one of them is called risks and accountability, a name that speaks for itself, and it really is taking this approach of looking at the different risk management frameworks that have emerged so far and trying to see where they share commonalities and where they differ. And so the idea is to develop responsible business conduct guidance for enterprises, which is not yet another framework they have to comply with, but more of a framework that would indicate to companies, especially those operating across borders, when they comply with a given requirement, what that means, for instance, in the EU, and what it means in terms of complying in the US or in another jurisdiction. So the idea is to really put this interoperability into practice, meaning having a level of alignment, or a level of understanding for operators, of where these different requirements intersect. And so this is the project that we are currently carrying out, and we should have the due diligence guidance ready next year. And perhaps the last point that I would like to add, and Melinda hinted at this, is that it’s a risk management framework that is not only looking at one specific actor in the chain, but at AI development and deployment across the value chain, because of course it’s not only one part of the chain that is responsible; there are upstream and downstream operators that also have due diligence requirements to abide by. And that would go down to data, to the very first investment and data labeling. So it’s really a more holistic approach. So yes, I would say that the value of these conversations is really to bring together these perspectives, and it’s the way to go; there is no other alternative.
Timea Suto: Thank you, Lucia. Same question to you, Thomas. What is the role of the global community here?
Thomas Schneider: Yes, thank you. It’s actually interesting to see to what extent, and I think the value of a forum like this is to hear from each other where we are, to what extent we are on the same page or going in the same direction, to what extent processes are converging, legal processes, standardization processes, and also to what extent they may not be converging, or don’t have to converge. And a fundamental question that hasn’t been raised here is actually: who defines what a risk is, and who defines what a high or a too-high risk is? And that largely diverges from country to country, and not just with AI. Just to give you one example: in England, in Liverpool, you have the River Mersey, and nobody would ever think of going into the river to swim. On the contrary, you have a metal fence from 1920 that tells you: forbidden, water, danger, beware. You have a second fence one meter ahead of it, from the 1930s, that says: oh, danger, water, don’t go in, there may be ships. And there’s even a third fence, added in the 50s. In Switzerland, in Basel for instance, you have a river with cargo ships, but thousands of people go swimming in the water; they go beneath bridges, they navigate between cargo ships, because this is one of the greatest things to do in summer if you live in Basel and have no access to the sea. So if the government decided to forbid swimming in the river because there are cargo ships and it may be dangerous, the people would just say no. And the UK and Switzerland are not 5,000 or 10,000 or 20,000 kilometers apart. This is just to say that in the airline business, where the risk exceeds people’s personal knowledge, people are okay to trust experts and are willing to agree on internationally harmonized risk management, because they want to be sure that the airplane lands safely, as they can’t fly it themselves. But the closer it gets to your own capabilities, to your life, the more you want to take the decision yourself, and that will also be the same with AI. For a heart surgery operation, you may be happy that it’s clear what the red lines are, what the doctor can do, what safety tests the tools need to pass. But when it’s about AI-generated content and your freedom of expression, expressing your cultural and political views, you may not want some expert, or the government, to tell you what is right or wrong; you may want to decide it yourself. So there will be harmonization, which is fine for people; people will want to have harmonization so that they don’t have to care, they can trust experts. But there will be areas where people want to be the master and use AI the way they want, and discuss with their neighbors what is right or wrong, and not with the government or people from far away. So I think we will have to live with some kind of diversity in this field.
Timea Suto: Thank you, Thomas. Sulafah, how do you see this?
Sulafah Jabarti: Well, capitalizing on what they just said, I guess I can see how we’re all coming closer to the same area. I really liked what you said in terms of what we need to develop or not develop, because in this area the whole drive can become: okay, we need to regulate this sector, so let’s go and do regulations every day and question everything, because, as she said, this is a scary new thing. And the idea is that we really need to be very objective, but also very connected to the technology itself and to the society itself. So I think Paloma, if I’m saying the name right, said something about how the speed of technology sometimes exceeds the speed of regulations, and it’s not fair to ask businesses to slow down and just wait for regulations, which does happen sometimes. On the other side, in the business world, take the example of the cybersecurity area, which is a very, very highly regulated area. A very small example: for some of the applications we provide to some very highly regulated entities, we every now and then need to adjust the applications to the cybersecurity regulations, which are updated very frequently in our country. And we ended up realizing that, because some entities just hand us the regulations and the updates as they are, and want us to simply adjust the application to them without actually having an eye for the business itself or the business owners in the organization, we end up in a place where the authorized users can’t enter the application. And then we have to bring some sense into it: we actually bring our business culture, our business understanding to them. And this brings us back to why we need multi-stakeholder-governed frameworks, because we need to bring society, academics, technology people and business people all together. And I guess, if I want to sum that up, I think we need flexibility, coordination and awareness. Awareness is a very important part, because to give people the right establishment and the right ground to be able to think with us in the same harmonized approach, we need to enable them first to know what they need to know. And that also brings us back to being very clever and actually inviting the right entities and the right stakeholders to participate in this. Some people are very closed in boxes of regulation, law or academia, missing the other side, which is the business itself. So no one should work on this in a closed box; they need to be very much attached to live, embedded data and informatics, because this is what it’s all about. We all sometimes find people working on this who are very isolated from the core and the spirit of this technology, AI, which is based on very live data and information flows. So I think what we all aim to reach, in the end, is a very robust, trusted, and adaptive framework that everyone can use all over the world.
Timea Suto: Thank you very much, Sulafah. Noora, how do you see this going for you?
Noora Al-Thani: Actually, I see this global forum as a very good place to get everyone thinking together. I was noticing that everyone is now afraid of what AI will do and how it will develop, and I can see why. When I started studying AI, it was just building an AI or machine learning algorithm for one specific task; it would, for example, find a tumor. Now it is a different thing: it is a generalized model, and the creators of the AI really don't know how it will respond, because they train the learning model and then the AI responds the way it responds. So I see it as important to regulate from the beginning, from the point the data enters, from the early steps, because once anything is in, it is very difficult to take it out. Think of a cake: before mixing the ingredients, you can still remove one, but asking to take an ingredient, or some piece of data, out after the cake is baked is practically impossible. And that is what happens: once the risk is baked in, it will surface anyway. So I do understand why there is great concern, and I see something positive in that concern pushing us to regulate. But AI is coming, and coming strongly, because it is very beneficial; you can see the benefits day after day in healthcare and in every other aspect. Last year, for example, there was a surgery in which a blind girl regained her sight thanks to AI-assisted surgery. So there are huge benefits, and the fear is understandable. Beyond this, governance should also be specific to each sector. We can't have one framework that governs everything: every sector is completely different and has its own characteristics that we need to account for, alongside the society and the region. So I think we're on the right track, it's a work in progress, and let's hope for the best.
Timea Suto: Thank you. Step by step and no one-size-fits-all, I think. Paloma?
Paloma Villa Mateos: Yeah, thank you. Thomas and also Sulafa have said something that is really relevant for me: the definition of high risk. If we think of Europe and the AI Act, in the end what we have is mostly a regulation of high-risk applications, and we are now developing the standardization process. The problem is how to go from theory to the real world, and that is proving more difficult than some policymakers thought. Last week, for example, we were in Brussels having conversations with the AI Office. They have a mandate to deliver the code of practice within the next seven months, and thousands of people are participating in it. At the same time, we have responded to a public consultation, again on the definition of certain high-risk applications. So it is harder than it looks. In the end, it is true that we as a company have to protect people's rights, safety, and so on, but in Europe we also have to protect innovation and our ability to compete in the global economy. Striking this balance is really difficult, and I do think that engaging with companies is really relevant, because a purely theoretical approach sometimes works against what we are trying to do. In parallel, I have to say that companies are also learning how to work with responsible AI. GSMA, for example, you know GSMA, is now working on a responsible AI maturity roadmap, trying to provide a framework for companies to build an AI governance strategy so that, from beginning to end, we are able to deliver ethical AI systems. So this goes hand in hand, and I think it is important, as I said, to combine and balance people's rights and innovation. This will be even more relevant in the next year, when in Europe we will see the new code of practice and the standardization work in CEN and CENELEC. It is critical to get the balance right in Europe, because this could become a regulation that other parts of the world look to. So it is important that we do it right. Thank you.
Melinda Claybaugh: I mostly echo what others have said, but just on the point about the EU AI Act: I think it is an interesting reflection of how unsettled things are. With the code of practice in particular, there is still live conversation and no consensus on what even counts as a prohibited practice or a high-risk practice. You would think the prohibited practices would be fairly well understood by now, but they are not. So my recommendation for global convenings is to take some time to do this right. What happened is that the EU AI Act was finalized in a frenzy around generative AI and advanced generative AI development, and now they are having to figure out, after the fact, what is actually prohibited and what is high risk, while the compliance clock is ticking for all the companies. It is a really difficult situation to manage. So I think we should build more consensus around the risks and the high risks, around what is in bounds and out of bounds, recognizing of course that there will be cultural differences, but take the time to get that step right rather than rushing ahead while the technology is still advancing.
Timea Suto: Thank you so much, Melinda. So, a lot to take away from the panel. We've discussed the importance of a multi-stakeholder approach and a cross-cultural approach; the importance of bridging fragmentation in regulatory spaces and building towards common principles, but not a one-size-fits-all approach; working together to define what high risk and low risk are, and acknowledging that the answer might not be the same across regions; staying connected to the technology when we try to pass regulations, again through a multi-stakeholder approach, so that we don't pass regulations that actually restrict the benefits of the very technology we are trying to regulate; going step by step and placing regulatory intervention at the right moment, rather than trying to cover everything in one go; the role of standards, and balancing innovation and regulation through standards and industry initiatives; and, of course, taking the time to do things right, allowing time to show us where the risks actually are, looking at them from the user's perspective and from the way the technology is used in the field, as opposed to where we merely think the risks might come from. So a lot coming out of the panel. We have a little less than 20 minutes to turn to the audience, both online and here in the room. I understand Paloma will have to leave, so if there's anything last-minute you want to share before you move to your next meeting, please go ahead; otherwise, we thank you very much for being here. If there are questions in the room for the rest of the speakers, or online, please, we'll get you a microphone and then we'll try to get you an answer as well. You, and then them. Thank you very much.
Audience: My name is Amal Ahmed. I'm currently working at DGA. I'm not asking a question, just adding an emphasis. First of all, welcome to Saudi Arabia; it's an honor to have you all here. My experience totals three years: two spent in the private sector and one in the government sector at DGA. I want to say that it's really exciting working here, and I've seen how the government sector works very closely with citizens to be human-centric. I've also noticed a challenge we face in improving how we create new products. The first part is how to actually adhere to the available best practices while doing what humans really need, because the more we engage different stakeholders through workshops, the more we realize that some of the practices we follow are not a good fit. And at the product level, when it comes to creating a particular feature, going through the prescribed process is sometimes not the best option. So that is one thing I've seen: it is a balancing act between the frameworks and reality itself. My name is Jacques Beglinger. I'm from Switzerland, here with EuroDIG, the European IGF, and the Swiss IGF process, but also on the business ICC team. My question follows on from what Thomas was saying about different perceptions, meaning different aversion to risk, or different ways of embracing risk. Wouldn't that call for governments and for business to engage much more in education and in explaining as much as possible, so that users can make a free choice?
Timea Suto: Thomas, the question was addressed to you, I think. But all of you around the table, if you’d like to elaborate a little bit on how we educate around AI.
Thomas Schneider: Well, I do not necessarily think it was addressed to me, but of course, to return to what I said before about people swimming in the river in Switzerland: they don't want the government to forbid swimming in the river. They want the government to make sure the water quality is okay, so there is no harm. They want the government to make sure that everyone properly learns how to swim at school, and society also teaches foreigners and immigrants how to deal with water. And they want the drivers of the cargo ships to know: I go on the left, the people are on the right, so I will not kill them. So education is key to freedom of choice, and to making people adaptive, able to assess the risks in situations that may not have been foreseen. You may set up rules, but reality may not be foreseen by the rules, and then what do you do? The more people, or the system, or the society are able to deal with risks, also in unforeseen moments, and we will probably have those with AI too, the easier it is for people to react.
Timea Suto: Thank you. Does anybody else want to react to what we’ve heard from the audience? Are there any other questions? The gentleman in the back there. Hello? Yeah. Okay, great.
Audience: Thank you. My name is Wouter Cobus. I'm with the Dutch government, as a standardization advisor. I see a difference between the Internet, which we discuss at the IGF and which is really founded on standards, and AI, where we are only now trying to develop new standards. I can imagine that this difference also has implications for how we govern it. So, what are your opinions on how this difference affects the governance model we have to choose for AI, compared to the Internet?
Timea Suto: A question there about the role of standards, and whether standards need to come before development or development needs to come before standards, if I understood it correctly. Are there any other questions that we could maybe take together? No? It's quite unfortunate that Paloma had to leave, because she always has a lot to say on standards, but perhaps others? Melinda, do you want to take that up?
Melinda Claybaugh: Actually, I'm not that close to the standards development work. In the U.S., I can say that on the quote-unquote standards side, not the ISO work, NIST is the primary soft-standards body, and it has focused primarily on risk management frameworks for generative AI. I think there's a place for that, because it standardizes a process for assessing and mitigating risks that you want to make consistent across anyone developing and deploying AI. As for the technical standards, which I know are so important to the Internet, I don't have a view on them; I defer to you if you're saying it's more challenging in the AI space.
Timea Suto: Thomas?
Thomas Schneider: Maybe just a quick reaction. The question is what you mean by standards on the Internet. Of course, TCP/IP has been there for a few decades, but the IETF keeps developing norms and standards. And basically it is probably not fundamentally different: somebody proposes a standard, you test it, running code and so on, and if nobody has a problem with it, it may become a de facto standard. You may also have competing standards, or a variety of standards, as we had with television and earlier technologies, and over time one or two of them may succeed simply by being the most attractive, not necessarily the best, but the most attractive for businesses or whoever. So I don't see a fundamental difference. Of course, there is a difference between a standard for an infrastructure, if you take the Internet as an infrastructure, and a standard for a service using that infrastructure; standards are case-specific there too. But I don't see a fundamental difference in logic, because there as well you try things, see what happens, and standardize as you go, more or less.
Timea Suto: Thank you, Thomas. Yes, just one thing, if I can add from my role as moderator: we also need to make sure that as we develop standards, we are mindful of not fragmenting the space further. The interoperable approach that we want to take to regulation and to the actual use of the technology should apply to standards too, so that standards do not create pockets of technology, where one system works on this standard and another works on that standard and the two don't talk to one another, because then we are actually fragmenting the opportunities we can get out of the technology. That's just my two cents. But we have a question there.
Audience: When we talk about standards, we also need to bear in mind that standards are not carved in stone. For me, and also from my experience in business, it's fine to have standards, but they shouldn't be too rigid to start with, and there must be a serious review process, or at least the expectation that they will be reviewed once flaws are detected. In that sense, the principle-based approach taken at the Council of Europe is fine. Whether the AI Act went a little too far in this respect, with too little expectation of being revised fairly soon, is something we may learn from, as we saw with the GDPR, which was not revised quickly. But I think it is really essential that there is the perspective, and a certain know-how on the subject, that there will be a revision.
Timea Suto: Thank you for that addition. I think we seem to have exhausted the questions from the audience, though I hope not the audience itself. We have about five minutes to end our session, so I want to turn back to the panelists here on the podium and ask: what is your main takeaway from the session? If you had to work within the character limits we have on social platforms for expressing our opinions, what would be your one-sentence takeaway that we can put in the report about what we discussed today? I'm going to skip the speaking order and start with Sulafa, then just go around the table.
Sulafah Jabarti: I think mostly it's about making this sustainable: the harmonization of the global framework. We've heard bits and pieces from different backgrounds, and I think we all agree that the process should be flexible, inclusive, and, as they say, connected to multiple stakeholders, listening to everyone and giving everyone the space to feed into the process. I think that's the way to make it faster, more convenient, and more sustainable, because in the end this is an ongoing process. The more the flow is connected to multiple entities, the more sustainable and objective it will be, considering all of the aspects together.
Melinda Claybaugh: Yeah, I echo that, and I agree that it's about finding the balance between what we agree on and allowing for variability: setting a floor and then adding to it as needed for the use case, the country, the context in which something is being deployed. So firm up the foundation, and then look to sector-specific assessments beyond that. Exactly how that differentiation should be implemented is still unclear, but establish that common floor and then allow space to move around.
Timea Suto: Lucia?
Lucia Russo: Yeah, for me as well it is this notion of having an adaptive framework: not something set in stone that you can't review or reopen, especially in light of the speed of the technology and the length of the policymaking process. It is this notion of future-proofing legislation or regulation, so that it is not set in stone and there are processes to update the requirements. And also, I think, the need for what we call a risk-based approach, tailored to the use cases but to the sectors as well. Melinda expressed it very well, this notion that we have advanced AI systems and then what we may call everyday AI, and Noora mentioned the transition from narrow AI to the large foundation models that can do much more. That is at the core of what we call a risk-based approach: tailoring the requirements that are imposed to a careful consideration of what the impact will be.
Noora Al-Thani: Hello, yes, I do agree with Lucia that it should be adaptive, especially since it's global. As Melinda also said, we should have a common base and then allow for differences, and I think all of that can be done through dialogue, and again dialogue, and an iterative process of setting the standards. It should happen regularly and continuously, because things change; our beliefs and our points of view change with a changing world. So I will simply emphasize what the others have said, and that's how it is. Thank you.
Thomas Schneider: Yes, thank you. I also think, what a surprise, that "adaptive" is the key word of this afternoon. It is important that the framework is adaptive, but the goal should always stay the same: to make sure that people are free, but use their freedom with responsibility; that human rights, democracy, and the rule of law are protected; and that there are clear rules for industry, so that they know what they can and cannot do, at least once a certain level of risk is reached. So the principles should be stable and reliable, but the way they are implemented, the way we make sure that people remain free yet safe to the extent that they want to be safe, needs to be adaptive. My country is not a member of the EU, but we are grateful to the EU for daring to do something from which we can all learn. Our colleague from Telefonica is right that it is not easy, but doing nothing and just letting everything go may not be the right thing either. So we watch closely what the EU is doing and what difficulties the member states have in implementing this at the local level. They are the front-runner; they have some advantages, but they also pay a price. In my small country, we will try to achieve the same goals with something different, something more agile, something smaller, because we do not have the resources that the EU, as a group of countries, has. As long as we stay engaged and can learn from each other, it is a mutual benefit, and we will go in the right direction if we share the basic fundamental principles of freedom, respect, autonomy, human rights, solidarity, and so on.
Timea Suto: Thank you. So, we started from one word, or one hyphenated word, risk-based, and then we added quite a few more, but I think Thomas is right: the word we seem to converge on at the end is adaptability. An adaptive framework that moves with the times, with the technology, with the changes in our views and perspectives, and with the way our culture develops together with the technology, while making sure that we keep our eyes on the prize, on the right goals we set for ourselves at the beginning. To all the words we've said today, I will just add two more: thank you. Thank you to all of you who came and shared your knowledge and expertise with us for the past hour and a half. Thank you to all who came to listen and contribute to the conversation. Thank you to those who joined us online; I know Paloma had to go, but the audience is still there. I hope this was as useful for you as it was edifying for me, and I hope to see you next year at the next IGF to see how we progress from adaptive to who knows what the next word will be. Thank you, everyone.
Lucia Russo
Speech speed: 126 words per minute
Speech length: 1349 words
Speech time: 637 seconds
Need for flexible, context-based risk assessment
Explanation: Lucia Russo emphasizes the importance of a flexible and context-based approach to risk assessment in AI governance. She argues that a proportionate system of duties and obligations should be tailored to the level of risk each AI system brings.
Evidence: OECD has developed an AI classification framework with a scoring table that evaluates AI systems across five dimensions: people and planet, economic context, data and input, AI model, and task and output.
Major Discussion Point: Risk-based approaches to AI governance
Agreed with: Thomas Schneider, Sulafah Jabarty, Noora Al-Thani, Paloma Villa Mateos, Melinda Claybaugh
Agreed on: Need for adaptive and flexible AI governance frameworks
Differed with: Thomas Schneider
Differed on: Approach to risk assessment in AI governance
Difficulty translating high-level principles into practice
Explanation: Russo highlights the challenge of moving from high-level AI principles to practical implementation. She emphasizes the need for concrete requirements and risk management frameworks.
Evidence: OECD is supporting the G7 Italian presidency in developing a monitoring and reporting framework for AI commitments, moving from a high-level code of conduct to practical implementation.
Major Discussion Point: Challenges in operationalizing AI governance frameworks
Facilitating multi-stakeholder and cross-cultural dialogue
Explanation: Russo emphasizes the importance of multi-stakeholder conversations and collaboration in AI governance. She argues for the need to have a contextual and cultural approach to AI regulation.
Evidence: OECD has a network of 600 experts divided into groups focusing on specific topics, such as risks and accountability.
Major Discussion Point: Role of global forums in AI governance
Agreed with: Sulafah Jabarty, Noora Al-Thani
Agreed on: Importance of multi-stakeholder collaboration
Thomas Schneider
Speech speed: 183 words per minute
Speech length: 2233 words
Speech time: 730 seconds
Importance of cultural considerations in risk perception
Explanation: Schneider highlights that risk perception and tolerance vary across cultures. He argues that AI governance should account for these cultural differences rather than imposing a one-size-fits-all approach.
Evidence: He provides an example of different attitudes towards swimming in rivers in England versus Switzerland, illustrating how risk perception can vary culturally.
Major Discussion Point: Risk-based approaches to AI governance
Differed with: Lucia Russo
Differed on: Approach to risk assessment in AI governance
Tension between harmonization and local/cultural adaptation
Explanation: Schneider discusses the challenge of balancing global harmonization of AI governance with the need for local and cultural adaptation. He argues for a flexible approach that allows for cultural differences while maintaining core principles.
Evidence: He mentions the Council of Europe’s work on a non-binding instrument for human rights, democracy, and rule of law risk assessment in AI, which aims to provide guidance while allowing for cultural flexibility.
Major Discussion Point: Challenges in operationalizing AI governance frameworks
Agreed with: Lucia Russo, Sulafah Jabarty, Noora Al-Thani, Paloma Villa Mateos, Melinda Claybaugh
Agreed on: Need for adaptive and flexible AI governance frameworks
Building consensus on risks while allowing for cultural differences
Explanation: Schneider emphasizes the importance of global forums in building consensus on AI risks and governance principles. He argues for maintaining stable core principles while allowing for adaptive implementation based on cultural contexts.
Evidence: He mentions the EU’s efforts in AI regulation as a learning opportunity for other countries, while acknowledging that different approaches may be needed based on local contexts and resources.
Major Discussion Point: Role of global forums in AI governance
Sulafah Jabarty
Speech speed: 144 words per minute
Speech length: 1435 words
Speech time: 597 seconds
Heavy investment in AI and digital transformation in MENA region
Explanation: Jabarty highlights the significant investment and leadership in AI and digital transformation in the MENA region, particularly in Saudi Arabia. She argues that this investment, supported by both government and private sector, is a unique aspect of the region’s approach to AI.
Evidence: She mentions the launch of Alat company with over $1 billion in capital for AI investment, and Saudi Arabia’s rise from 52nd to 1st place in the UN’s digital government indicator in six years.
Major Discussion Point: Risk-based approaches to AI governance
Differed with: Noora Al-Thani
Differed on: Focus of AI investment and development
Need for multi-stakeholder collaboration in framework development
Explanation: Jabarty emphasizes the importance of involving multiple stakeholders in developing AI governance frameworks. She argues that this collaborative approach leads to more effective and sustainable governance.
Evidence: She mentions the need to bring together society, academics, technology experts, and business people to create harmonized approaches to AI governance.
Major Discussion Point: Challenges in operationalizing AI governance frameworks
Agreed with: Lucia Russo, Noora Al-Thani
Agreed on: Importance of multi-stakeholder collaboration
Developing harmonized global frameworks with local flexibility
Explanation: Jabarty advocates for the development of global AI governance frameworks that can be harmonized across regions while allowing for local flexibility. She emphasizes the need for adaptability to different levels of maturity in AI applications and technologies.
Evidence: She suggests building on existing frameworks like the EU AI Act and adapting them to local contexts in the MENA region.
Major Discussion Point: Role of global forums in AI governance
Agreed with: Lucia Russo, Thomas Schneider, Noora Al-Thani, Paloma Villa Mateos, Melinda Claybaugh
Agreed on: Need for adaptive and flexible AI governance frameworks
Noora Al-Thani
Speech speed: 125 words per minute
Speech length: 1148 words
Speech time: 546 seconds
Universities playing key role in AI governance and research
Explanation: Al-Thani highlights the crucial role universities play in AI governance and research. She argues that higher education institutions are investing in AI governance structures and research initiatives to address emerging issues.
Evidence: She cites examples of universities like MIT, University of Utah, and Tsinghua University establishing AI governance centers and investing millions in AI initiatives.
Major Discussion Point: Risk-based approaches to AI governance
Agreed with: Lucia Russo, Sulafah Jabarty
Agreed on: Importance of multi-stakeholder collaboration
Differed with: Sulafah Jabarty
Differed on: Focus of AI investment and development
Importance of sector-specific governance approaches
Explanation: Al-Thani emphasizes the need for sector-specific approaches to AI governance. She argues that different sectors have unique characteristics and risks that require tailored governance frameworks.
Evidence: She states that ‘Every sector is completely different and has its own characteristics that we need to [consider].’
Major Discussion Point: Challenges in operationalizing AI governance frameworks
Promoting ongoing dialogue to adapt governance as AI evolves
Explanation: Al-Thani advocates for continuous dialogue and adaptation in AI governance. She argues that as AI technology rapidly evolves, governance frameworks need to be regularly updated to remain effective.
Evidence: She describes AI governance as a ‘work in progress’ and emphasizes the need for ongoing efforts to address new developments in AI technology.
Major Discussion Point: Role of global forums in AI governance
Agreed with: Lucia Russo, Thomas Schneider, Sulafah Jabarty, Paloma Villa Mateos, Melinda Claybaugh
Agreed on: Need for adaptive and flexible AI governance frameworks
Paloma Villa Mateos
Speech speed: 120 words per minute
Speech length: 1367 words
Speech time: 682 seconds
Balancing innovation and regulation in risk frameworks
Explanation: Mateos emphasizes the need to balance innovation and regulation in AI risk frameworks. She argues that while protecting people’s rights is crucial, it’s equally important to foster innovation and competitiveness in the global economy.
Evidence: She mentions ongoing work with the AI office in Brussels to develop a code of practice for AI, highlighting the challenge of translating theoretical approaches into practical implementation.
Major Discussion Point: Risk-based approaches to AI governance
Agreed with: Lucia Russo, Thomas Schneider, Sulafah Jabarty, Noora Al-Thani, Melinda Claybaugh
Agreed on: Need for adaptive and flexible AI governance frameworks
Balancing people’s rights and innovation in governance
Explanation: Mateos reiterates the importance of finding a balance between protecting people’s rights and fostering innovation in AI governance. She argues that this balance is critical, especially in the context of emerging regulations like the EU AI Act.
Evidence: She mentions the development of a responsible AI maturity roadmap by GSMA to provide a framework for companies to work on AI governance strategies that ensure ethical AI systems.
Major Discussion Point: Challenges in operationalizing AI governance frameworks
Balancing regional approaches with global interoperability
Explanation: Mateos discusses the challenge of balancing regional approaches to AI governance with the need for global interoperability. She emphasizes the importance of getting the balance right in Europe, as it could influence approaches in other parts of the world.
Evidence: She mentions the upcoming code of practice, standardization, and other developments in Europe as critical junctures for balancing regional and global approaches to AI governance.
Major Discussion Point: Role of global forums in AI governance
Melinda Claybaugh
Speech speed: 155 words per minute
Speech length: 1688 words
Speech time: 649 seconds
Focusing on marginal risks specific to AI
Explanation: Claybaugh emphasizes the importance of focusing on the marginal risks specific to AI, rather than treating all AI-related risks as entirely new. She argues that many existing legal frameworks already address some of the concerns related to AI.
Evidence: She mentions existing frameworks for data privacy and kids’ safety online as examples of mature frameworks that can be drawn upon for AI governance.
Major Discussion Point: Risk-based approaches to AI governance
Allowing time to properly define high-risk AI practices
Explanation: Claybaugh advocates for taking sufficient time to properly define high-risk AI practices. She argues that rushing to implement regulations without clear definitions can lead to difficulties in compliance and enforcement.
Evidence: She cites the ongoing discussions around the EU AI Act, where there’s still no consensus on what constitutes prohibited or high-risk practices, even as compliance deadlines approach.
Major Discussion Point: Challenges in operationalizing AI governance frameworks
Agreed with: Lucia Russo, Thomas Schneider, Sulafah Jabarty, Noora Al-Thani, Paloma Villa Mateos
Agreed on: Need for adaptive and flexible AI governance frameworks
Taking time to properly define risks and prohibited practices
Explanation: Claybaugh reiterates the importance of taking time to build consensus around AI risks and prohibited practices. She argues for a more measured approach to developing AI governance frameworks to ensure their effectiveness and practicality.
Evidence: She points to the challenges faced in implementing the EU AI Act, where rushed finalization has led to ongoing debates about fundamental definitions and classifications.
Major Discussion Point: Role of global forums in AI governance
Agreements
Agreement Points
Need for adaptive and flexible AI governance frameworks
Speakers: Lucia Russo, Thomas Schneider, Sulafah Jabarty, Noora Al-Thani, Paloma Villa Mateos, Melinda Claybaugh
Arguments: Need for flexible, context-based risk assessment; Tension between harmonization and local/cultural adaptation; Developing harmonized global frameworks with local flexibility; Promoting ongoing dialogue to adapt governance as AI evolves; Balancing innovation and regulation in risk frameworks; Allowing time to properly define high-risk AI practices
Summary: All speakers emphasized the importance of creating AI governance frameworks that are adaptive, flexible, and can evolve with technological advancements while considering local contexts and cultural differences.
Importance of multi-stakeholder collaboration
Speakers: Lucia Russo, Sulafah Jabarty, Noora Al-Thani
Arguments: Facilitating multi-stakeholder and cross-cultural dialogue; Need for multi-stakeholder collaboration in framework development; Universities playing key role in AI governance and research
Summary: These speakers stressed the need for collaboration among various stakeholders, including governments, private sector, academia, and civil society, in developing effective AI governance frameworks.
Similar Viewpoints
Thomas Schneider and Sulafah Jabarty (Importance of cultural considerations in risk perception; Developing harmonized global frameworks with local flexibility): Both speakers emphasized the need to consider cultural differences in AI governance approaches while working towards globally harmonized frameworks.
Paloma Villa Mateos and Melinda Claybaugh (Balancing innovation and regulation in risk frameworks; Focusing on marginal risks specific to AI): Both speakers highlighted the importance of balancing innovation with regulation, focusing on risks specific to AI rather than overregulating.
Unexpected Consensus
Role of universities in AI governance
Speakers: Noora Al-Thani, Sulafah Jabarty
Arguments: Universities playing key role in AI governance and research; Need for multi-stakeholder collaboration in framework development
Summary: While not typically emphasized in AI governance discussions, both speakers highlighted the crucial role of universities in shaping AI governance frameworks and conducting relevant research.
Overall Assessment
Summary: The main areas of agreement included the need for adaptive and flexible AI governance frameworks, the importance of multi-stakeholder collaboration, and the consideration of cultural differences in risk perception and governance approaches.
Consensus level: There was a high level of consensus among the speakers on the need for flexible and adaptive approaches to AI governance. This consensus suggests a growing recognition of the complexity of AI governance and the need for frameworks that can evolve with technological advancements and varying cultural contexts. The implications of this consensus could lead to more nuanced and context-sensitive approaches to AI governance on a global scale.
Differences
Different Viewpoints
Approach to risk assessment in AI governance
Speakers: Lucia Russo, Thomas Schneider
Arguments: Need for flexible, context-based risk assessment; Importance of cultural considerations in risk perception
Summary: While both speakers advocate for flexibility in risk assessment, Russo emphasizes a more technical, multi-dimensional approach, while Schneider highlights the importance of cultural factors in risk perception and tolerance.
Focus of AI investment and development
Speakers: Sulafah Jabarty, Noora Al-Thani
Arguments: Heavy investment in AI and digital transformation in MENA region; Universities playing key role in AI governance and research
Summary: Jabarti emphasizes government and private sector investment in AI, while Al-Thani focuses on the role of universities in AI governance and research.
Unexpected Differences
Role of existing legal frameworks in AI governance
Speakers: Melinda Claybaugh versus the other speakers
Argument: Focusing on marginal risks specific to AI
Summary: Claybaugh uniquely emphasizes the importance of leveraging existing legal frameworks for AI governance, while other speakers focus more on developing new AI-specific frameworks. This unexpected difference highlights the tension between adapting existing regulations and creating entirely new ones for AI.
Overall Assessment
Summary: The main areas of disagreement revolve around the specific approaches to implementing flexible AI governance frameworks, the role of cultural factors in risk assessment, and the balance between regional investment and global interoperability.
Difference level: The level of disagreement among the speakers is moderate. While there is general consensus on the need for adaptive and flexible AI governance, speakers differ in their emphasis on specific aspects and implementation strategies. These differences reflect the complex nature of AI governance and the need for continued dialogue and collaboration to develop effective global frameworks that can accommodate regional and cultural variations.
Partial Agreements
Speakers: Lucia Russo, Thomas Schneider, Sulafah Jabarty, Noora Al-Thani, Paloma Villa Mateos, Melinda Claybaugh
Arguments: Difficulty translating high-level principles into practice; Tension between harmonization and local/cultural adaptation; Need for multi-stakeholder collaboration in framework development; Importance of sector-specific governance approaches; Balancing innovation and regulation in risk frameworks; Allowing time to properly define high-risk AI practices
Summary: All speakers agree on the need for flexible and adaptive AI governance frameworks, but they differ in their emphasis on specific aspects such as cultural considerations, sector-specific approaches, and the balance between innovation and regulation.
Takeaways
Key Takeaways
– There is broad agreement on the need for risk-based, adaptive approaches to AI governance that balance innovation with risk mitigation
– Governance frameworks should be flexible enough to account for cultural differences and evolving technology while maintaining core principles
– Multi-stakeholder collaboration and dialogue is crucial for developing effective, interoperable AI governance approaches
– Sector-specific and use case-specific governance may be needed rather than one-size-fits-all approaches
– Education and awareness-building around AI risks and benefits is important
– Standardization efforts for AI should aim to promote interoperability while allowing for adaptation
Resolutions and Action Items
– Continue multi-stakeholder dialogues and collaboration on AI governance at global forums
– Work towards harmonized global frameworks that allow for regional/cultural flexibility
– Develop adaptive governance mechanisms that can evolve with AI technology
Unresolved Issues
– How to specifically define and categorize high-risk AI applications
– How to balance regional approaches with the need for global interoperability
– How to operationalize risk-based frameworks in practice across different sectors
– How to address cultural differences in risk perception and tolerance
Suggested Compromises
– Establish a common ‘floor’ of basic AI governance principles, with flexibility for regional/cultural adaptation beyond that
– Focus on use case and sector-specific governance rather than blanket regulations
– Allow for regular review and updating of AI governance frameworks as technology evolves
Thought Provoking Comments
We have engines in machines that produce goods that are more or less big, more or less dangerous for the people. We have engines in cars, in airplanes, in tanks, in many other vehicles. It may be the same engines or similar engines. And they all have, of course, opportunities to produce something, but they also have risks. But we do not have one regulation for the engine.
Speaker: Thomas Schneider
Reason: This analogy provides a fresh perspective on AI regulation, highlighting the complexity and context-dependence of risk management.
Impact: It shifted the discussion towards considering more nuanced, context-specific approaches to AI governance rather than one-size-fits-all solutions.
We need to be humble and have a substantial conversation between us, because otherwise we will not benefit from the AI.
Speaker: Paloma Villa Mateos
Reason: This comment emphasizes the importance of collaboration and open dialogue in AI governance.
Impact: It reinforced the theme of multi-stakeholder cooperation and encouraged participants to consider how to foster more substantive conversations between different sectors.
I think it’s really important to focus on the marginal risk we’re talking about, because I think we tend to come to this and think, oh, my God, AI is new and it’s different and it’s terrible. And, you know, in fact, we’ve been dealing with AI, classic AI, for a really long time.
Speaker: Melinda Claybaugh
Reason: This comment provides a balanced perspective on AI risks, countering alarmist views and encouraging a more measured approach.
Impact: It prompted a more nuanced discussion of AI risks and the need to build on existing regulatory frameworks rather than starting from scratch.
We need flexibility, coordination and awareness. Awareness is a very important part because to give people the right establishment and the right ground to be able to think with us on the same harmonized approach, we need to enable them first to know what they need to know
Speaker: Sulafah Jabarty
Reason: This comment highlights the importance of public education and awareness in AI governance.
Impact: It broadened the discussion to include the role of public understanding and engagement in effective AI governance.
Overall Assessment
These key comments shaped the discussion by moving it towards a more nuanced, context-specific, and collaborative approach to AI governance. They highlighted the complexity of AI regulation, the need for flexibility and adaptability in governance frameworks, the importance of building on existing regulatory structures, and the crucial role of public education and multi-stakeholder dialogue. The discussion evolved from considering broad regulatory approaches to exploring more specific challenges and opportunities in implementing effective AI governance across different cultural and regulatory contexts.
Follow-up Questions
How can we define and reach consensus on what constitutes ‘high risk’ in AI applications across different cultural contexts?
Speakers: Thomas Schneider, Sulafah Jabarty, Paloma Villa Mateos
Explanation: Multiple speakers highlighted the challenge of defining high-risk AI applications, especially given cultural differences in risk perception. This is crucial for developing effective and culturally-sensitive AI governance frameworks.
How can we balance innovation and regulation in AI governance to ensure competitiveness while protecting rights and safety?
Speaker: Paloma Villa Mateos
Explanation: This balance is critical for developing AI governance that fosters innovation while addressing potential risks and harms.
How can we develop sector-specific AI governance approaches while maintaining a coherent overall framework?
Speakers: Noora Al-Thani, Melinda Claybaugh
Explanation: Speakers emphasized the need for tailored approaches to different sectors, while also maintaining some level of consistency across frameworks.
How can we ensure AI governance frameworks remain adaptive and flexible to keep pace with rapidly evolving technology?
Speakers: Lucia Russo, Thomas Schneider
Explanation: Given the fast pace of AI development, ensuring governance can adapt quickly is crucial for effective regulation.
What role should education play in preparing society to make informed choices about AI risks and benefits?
Speaker: Jacques Beglinger (audience member)
Explanation: Education was highlighted as a key factor in enabling people to assess and manage AI risks effectively.
How can we develop effective standards for AI that allow for innovation while ensuring interoperability and avoiding fragmentation?
Speakers: Wouter Cobus (audience member), Timea Suto
Explanation: The role of standards in AI governance was raised as an important area for further exploration, particularly in comparison to internet governance.
How can we operationalize risk-based approaches to AI governance in practice?
Speaker: Timea Suto
Explanation: While many frameworks claim to be risk-based, there’s a need to clarify what this means in practice and how to implement it effectively.
How can we ensure global interoperability in AI governance while respecting local cultural and regulatory differences?
Speakers: Lucia Russo, Sulafah Jabarty
Explanation: Balancing global consistency with local flexibility was identified as a key challenge in developing effective AI governance frameworks.
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Related event
Internet Governance Forum 2024
15 Dec 2024 06:30h - 19 Dec 2024 13:30h
Riyadh, Saudi Arabia and online