Pre 2: The Council of Europe Framework Convention on AI and Guidance for the Risk and Impact Assessment of AI Systems on Human Rights, Democracy and Rule of Law (HUDERIA)

12 May 2025 07:00h - 08:15h


Session at a glance

Summary

This discussion focused on the Council of Europe Framework Convention on Artificial Intelligence and the HUDERIA guidance for risk and impact assessment of AI systems on human rights, democracy, and the rule of law. Mario Hernández-Ramos, Chair of the Committee on Artificial Intelligence, moderated a panel featuring three experts who helped shape this groundbreaking international treaty.


Jasper Finke explained that the Framework Convention represents the first binding international treaty on AI, negotiated under significant time pressure between 2022 and 2024. Despite being based on abstract principles rather than specific rules due to time constraints, the convention has attracted signatures from major powers including the EU, United States, Japan, and Canada, demonstrating its global approach. The treaty establishes fundamental principles including human dignity, autonomy, equality, non-discrimination, transparency, and accountability throughout AI systems’ entire lifecycle.


Murielle Popa-Fabre provided a comprehensive overview of the international AI governance landscape, describing it as a “multi-layered lasagna” approach. She emphasized the challenge of making principles operational in reality, comparing various frameworks including the US National Institute of Standards and Technology approach, the G7 Hiroshima principles, and China’s detailed regulatory system. China’s experience demonstrates how binding AI regulations can coexist with innovation, requiring multiple approval steps and detailed compliance measures before AI systems reach the market.


Jordi Ascensi-Sala focused on the practical implementation of the HUDERIA methodology, which bridges the gap between legal principles and technical practice. This methodology emphasizes context-based risk analysis and stakeholder engagement processes, creating dialogue between technical developers, policymakers, and affected communities. The approach requires ongoing assessment throughout an AI system’s lifecycle and aims to make the framework accessible to small companies and municipalities through capacity-building tools and shared knowledge libraries. The discussion concluded with recognition that this treaty represents a crucial step toward ensuring AI serves as a force for good while protecting human rights and democratic values.


Keypoints

## Major Discussion Points:


– **The Council of Europe AI Framework Convention as the first binding international AI treaty** – Jasper Finke presented the convention as a groundbreaking achievement negotiated under significant time pressure (1.5 years), emphasizing it as a starting point rather than a perfect end product, with global participation including the EU, US, Japan, Canada, and others.


– **International AI governance landscape and multi-layered regulatory approaches** – Murielle Popa-Fabre outlined the complex “AI governance lasagna” with various international frameworks, comparing approaches from the US (risk management focus), G7 Hiroshima principles (transparency-based), EU AI Act (product safety), and China’s detailed regulatory system with multiple approval layers.


– **The HUDERIA methodology for risk and impact assessment** – Jordi Ascensi-Sala explained how the HUDERIA framework bridges legal principles with practical implementation through context-based risk analysis and stakeholder engagement processes, focusing on the entire AI system lifecycle and creating dialogue between technical and non-technical stakeholders.


– **Practical implementation challenges for smaller organizations** – Discussion addressed how small companies, municipalities, and public institutions can navigate complex AI compliance frameworks, with proposed solutions including capacity-building tools, visual guidance systems, and shared knowledge libraries.


– **The relationship between innovation and regulation** – Panelists explored balancing AI innovation with public interest protection, emphasizing that clear regulatory frameworks can actually support better product development and adoption rather than stifling innovation.


## Overall Purpose:


The discussion aimed to present and explain the Council of Europe’s Framework Convention on Artificial Intelligence and the HUDERIA methodology, positioning them within the broader international AI governance landscape while addressing practical implementation challenges and the balance between innovation and human rights protection.


## Overall Tone:


The tone was professional, collaborative, and cautiously optimistic throughout. Panelists acknowledged the imperfections and challenges of current approaches while maintaining confidence in the frameworks’ potential impact. The discussion remained constructive and forward-looking, with speakers building on each other’s points and using accessible metaphors (like the “lasagna” analogy) to explain complex regulatory concepts. The tone became slightly more technical during Q&A but remained engaging and solution-oriented.


Speakers

**Speakers from the provided list:**


– **Mario Hernandez Ramos** – Chair of the Council of Europe’s Committee on Artificial Intelligence


– **Jasper Finke** – Legal Officer of the Federal Ministry of Justice and Head of the German Delegation to the Committee on Artificial Intelligence


– **Murielle Popa Fabre** – Generative AI Government Advisor with expertise at the intersection of technology and policy, PhD in Neuroimaging and Natural Language Processing, experienced in training Large Language Models


– **Jordi Ascensi Sala** – Head of Technology at Andorra Research and Innovation, Head of Delegation of Andorra to the Committee of Artificial Intelligence


– **Audience** – Multiple audience members asking questions during the Q&A session


**Additional speakers:**


– **Martin Boteman** – Audience member who asked a question about balancing AI innovation and public availability


– **Jacques Berglinger** – Swiss-based board member of EuroDIG and affiliated with Leiden University in the Netherlands


Full session report

# Comprehensive Report: Council of Europe Framework Convention on Artificial Intelligence and HUDERIA Methodology


## Executive Summary


This discussion centred on the groundbreaking Council of Europe Framework Convention on Artificial Intelligence and the HUDERIA guidance for risk and impact assessment of AI systems on human rights, democracy, and the rule of law. Moderated by Mario Hernández-Ramos, Chair of the Committee on Artificial Intelligence, the panel featured three key experts who played instrumental roles in shaping this first binding international treaty on AI. The conversation explored the complex landscape of international AI governance, practical implementation challenges, and approaches to balancing innovation with the protection of fundamental human rights and democratic values.


## Key Speakers and Their Contributions


### Jasper Finke – Legal Framework Perspective


Jasper Finke, Legal Officer of the Federal Ministry of Justice and Head of the German Delegation to the Committee on Artificial Intelligence, provided crucial insights into the treaty’s development process. He emphasised that the Framework Convention represents the first binding international treaty on AI: the zero draft was published in summer 2022, negotiations were finalised in March 2024, the convention was adopted by the Committee of Ministers in May 2024, and it was opened for signature in September 2024. Despite the challenging timeline, the convention has successfully attracted signatures from major global powers including the European Union, United States, Japan, and Canada, demonstrating its truly international scope.


Finke was candid about the convention’s approach, stating: “Is the convention perfect? Well, I’m afraid the answer is no, but which convention or which outcome of an international negotiation has ever been perfect?” He positioned the treaty as a pragmatic starting point that establishes fundamental principles including human dignity and autonomy, equality and non-discrimination, protection of privacy, transparency and oversight, accountability and responsibility, and safe innovation and reliability throughout AI systems’ entire lifecycle.


The convention also establishes important procedural rights and safeguards, including documentation requirements, effective complaint mechanisms, and notification requirements when individuals interact with AI systems. The principle-based approach allows for future specification by national legislators and continued development by the Committee on AI.


### Murielle Popa-Fabre – International Governance Landscape


Murielle Popa-Fabre, a Generative AI Government Advisor with expertise in neuroimaging and natural language processing, provided a comprehensive overview of the international AI governance landscape. She introduced the compelling metaphor of AI governance as a “multi-layered lasagna,” explaining the complex interplay between various international frameworks and the critical challenge of making abstract principles operational in practice.


Popa-Fabre’s analysis encompassed diverse regulatory approaches across jurisdictions. She highlighted the US National Institute of Standards and Technology’s focus on risk management, the G7 Hiroshima principles’ emphasis on transparency, and the Council of Europe’s distinctive socio-technical approach. She examined China’s regulatory system in particular detail: it involves multiple approval steps, a central algorithm register, and specific technical requirements that AI systems must meet before market deployment, such as a 98% acceptability rate for training data, 90% acceptable answers on a pool of 1,000 test questions, and a maximum 5% question-rejection rate.
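To make these thresholds concrete, the following is a minimal sketch of how such quantitative gates could be checked. The threshold values (98%, 90%, 5%) are those cited in the session; the data structure and function names are illustrative assumptions, not any real regulatory API.

```python
# Illustrative sketch only: the 98%/90%/5% thresholds come from the session;
# everything else (names, structure) is a hypothetical illustration.
from dataclasses import dataclass

@dataclass
class EvaluationResults:
    training_samples_total: int       # audited training-data samples
    training_samples_acceptable: int  # samples passing all risk checks
    test_questions_total: int         # e.g. a pool of 1,000 test questions
    test_answers_acceptable: int      # answers judged acceptable
    test_questions_refused: int       # questions the model declined to answer

def passes_quantitative_gates(r: EvaluationResults) -> bool:
    """Check the three quantitative gates cited for pre-market approval."""
    training_ok = r.training_samples_acceptable / r.training_samples_total >= 0.98
    answers_ok = r.test_answers_acceptable / r.test_questions_total >= 0.90
    refusal_ok = r.test_questions_refused / r.test_questions_total <= 0.05
    return training_ok and answers_ok and refusal_ok

# Example: 99% clean training data, 930/1000 acceptable answers, 30 refusals.
results = EvaluationResults(4000, 3960, 1000, 930, 30)
print(passes_quantitative_gates(results))  # True
```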


Her observation about the courage required to “transform qualitative into quantitative” elements highlighted a fundamental challenge in AI governance: how to make inherently qualitative human values measurable and actionable within technical systems.


### Jordi Ascensi-Sala – Practical Implementation Focus


Jordi Ascensi-Sala, Head of Technology at Andorra Research and Innovation and Head of Delegation of Andorra to the Committee of Artificial Intelligence, focused on the practical implementation of the HUDERIA methodology. He explained how this framework bridges the gap between legal principles and technical practice through context-based risk analysis and comprehensive stakeholder engagement processes.


The HUDERIA methodology emphasises assessment based on scale, scope, probability, and reversibility across an AI system’s entire lifecycle. Ascensi-Sala stressed the importance of creating dialogue between technical developers, policymakers, and affected communities, ensuring that AI system assessment considers both application context and development context.
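As a rough illustration of how an assessment along these four dimensions might be recorded and prioritised, consider the sketch below. The four dimensions (scale, scope, probability, reversibility) are those named in the methodology as presented; the 1-5 ordinal scale and the multiplicative aggregation are invented for illustration and are not prescribed by HUDERIA.

```python
# Illustrative sketch only: scale/scope/probability/reversibility come from
# the HUDERIA methodology as presented; the scoring scheme is an assumption.
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    risk: str           # a harm identified in the context-based analysis
    scale: int          # severity of harm to an affected person (1-5)
    scope: int          # breadth: how many people are affected (1-5)
    probability: int    # likelihood of the harm occurring (1-5)
    reversibility: int  # 1 = easily reversed, 5 = effectively irreversible

    def priority(self) -> int:
        # One plausible aggregation: prioritise likely, hard-to-reverse harms.
        return self.scale * self.scope * self.probability * self.reversibility

# Re-assessed periodically across the lifecycle, not just once ex ante.
assessments = [
    RiskAssessment("discriminatory outcomes in benefit decisions", 4, 4, 3, 4),
    RiskAssessment("opaque explanations for rejected applicants", 3, 5, 4, 2),
]
for a in sorted(assessments, key=lambda x: x.priority(), reverse=True):
    print(a.priority(), a.risk)
```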


His philosophical insight, referencing Paul Virilio’s observation that “when we invented the train we invented the train accident,” underscored the need for proactive consideration of unintended consequences in AI development. He also provided a candid reflection on technical education, noting that engineering schools teach how to build bridges but not the implications of building them, highlighting the importance of interdisciplinary dialogue in responsible AI development.


## Major Areas of Consensus


### Multi-layered Governance Necessity


All speakers demonstrated strong consensus on the need for multi-layered, complementary AI governance approaches. Finke acknowledged that the convention serves as a starting point requiring further specification by national legislators, whilst Popa-Fabre emphasised how different frameworks work together like layers in a lasagna. This agreement extended to recognition that effective AI governance cannot rely on a single instrument but requires a network of complementary regulations working at different levels.


### Innovation and Regulation Compatibility


The speakers agreed that innovation and regulation can coexist productively. Popa-Fabre’s presentation of various regulatory approaches, including detailed frameworks that don’t stifle innovation, complemented Ascensi-Sala’s argument that clear rules and methodology don’t hinder innovation but rather require broader context consideration beyond purely technical aspects.


### Capacity Building Requirements


All speakers agreed on the critical need for practical implementation tools and capacity building for smaller entities. Ascensi-Sala emphasised that implementation requires capacity building tools and knowledge libraries to make methodologies accessible to small companies and municipalities. This consensus extended to recognition that visual tools and training materials are essential for helping small developers and public sector organisations navigate complex compliance frameworks.


## Practical Implementation Challenges


### Bridging Technical and Legal Perspectives


A recurring theme throughout the discussion was the challenge of bridging technical and legal perspectives in AI governance. Ascensi-Sala’s HUDERIA methodology specifically addresses this gap through stakeholder engagement processes that create dialogue between different perspectives. The practical implementation of such dialogue remains complex, particularly given the different languages and frameworks used by technical and legal communities.


### Accessibility for Smaller Organisations


The discussion repeatedly returned to the challenge of making AI governance frameworks accessible to smaller organisations with limited resources. Proposed solutions included visual guidance systems, shared knowledge libraries, and capacity-building tools. The speakers emphasised ongoing consultation to understand what constitutes useful implementation support for different types of organisations.


### Quantifying Qualitative Values


Popa-Fabre’s observation about the courage required to transform qualitative principles into quantitative, measurable elements highlighted a fundamental challenge. The HUDERIA methodology attempts to address this through structured assessment processes, but the tension between human values and technical measurement systems remains an ongoing area requiring continued innovation.


## Global Context and International Coordination


### The “Strasbourg Effect” Question


An important question raised during the discussion concerned whether the Council of Europe’s Framework Convention would achieve a global “Strasbourg effect” similar to the Brussels effect of EU regulations. The convention’s global approach, allowing non-Council of Europe members to participate, suggests potential for broader influence. The speakers noted that the ultimate success will depend on continued international engagement and the practical effectiveness of implementation tools and methodologies.


### Complementary Regulatory Frameworks


The discussion emphasised that the Framework Convention works complementarily with other regulations such as the EU AI Act, creating a network of instruments to regulate AI as a horizontal technology. This approach recognises that different regulatory frameworks serve different purposes and constituencies, with the Council of Europe’s focus on human rights and democratic values complementing more technical or market-focused approaches elsewhere.


## Audience Engagement and Key Questions


### Balancing Innovation and Public Access


Martin Boteman raised an important question about achieving optimal balance between investing in AI innovation and making AI accessible to the public. The speakers acknowledged this as an ongoing challenge encompassing both technical and economic dimensions, requiring continued dialogue between innovation stakeholders and public interest advocates.


### Implementation Support and Tools


Questions from the audience highlighted the practical need for implementation support, particularly for smaller organisations. The speakers confirmed ongoing work to develop visual tools, training materials, and knowledge libraries to make the HUDERIA methodology and Framework Convention requirements more accessible and actionable.


## Future Directions and Ongoing Work


### Continued Development


Several key areas require continued development and attention. The Committee on AI will continue work to specify the abstract principles established in the Convention, whilst parallel efforts focus on developing capacity building tools and training materials. The creation of visual tools to guide users through the HUDERIA methodology process represents a priority for making the framework practically accessible.


### Keeping Pace with Technological Change


The principle-based approach of the Framework Convention provides flexibility for adaptation as AI technology evolves. The speakers emphasised that the framework’s strength lies in its ability to accommodate future developments while maintaining focus on fundamental human rights and democratic values.


## Key Insights and Reflections


### Pragmatic Approach to International Cooperation


Finke’s candid acknowledgement that the convention is “not perfect” but represents a necessary starting point provided crucial perspective on international treaty-making. This pragmatic approach emphasised practical progress over theoretical perfection while maintaining ambition for continuous improvement.


### Structural Thinking in Governance


Popa-Fabre’s “AI governance lasagna” metaphor proved powerful in making complex regulatory structures accessible and understandable. The metaphor’s emphasis on proper structure and implementation highlighted the critical importance of design in regulatory effectiveness.


### Proactive Risk Consideration


Ascensi-Sala’s philosophical insight about trains and train accidents provided reflection on the relationship between technological innovation and unintended consequences. This observation advocated for more reflective, cautious approaches to AI deployment that consider broader societal implications.


## Conclusions


The discussion revealed mature, nuanced understanding of AI governance challenges amongst international experts. The strong consensus on the need for multi-layered, complementary approaches suggests convergence in the international community around fundamental governance principles, with flexibility for diverse national implementations.


The Framework Convention represents a significant achievement in international cooperation, establishing the first binding treaty on AI whilst maintaining flexibility for diverse national implementations. The HUDERIA methodology provides a crucial bridge between abstract principles and practical implementation, with continued development of supporting tools and capacity building remaining essential.


The emphasis on stakeholder engagement and ongoing dialogue suggests a democratic, participatory approach to AI governance. The path forward requires continued international cooperation, practical tool development, and ongoing dialogue between technical and policy communities to ensure that AI development serves human flourishing whilst protecting fundamental rights and democratic values.


Session transcript

Mario Hernandez Ramos: Good morning, everyone. Sorry for the small delay. Welcome, all of you, to this session of EuroDIG, about the Council of Europe Framework Convention on Artificial Intelligence and Guidance for the Risk and Impact Assessment of Artificial Intelligence Systems on Human Rights, Democracy and the Rule of Law, what we call HUDERIA. My name is Mario Hernández-Ramos, and I serve as Chair of the Council of Europe’s Committee on Artificial Intelligence. As you all know, artificial intelligence is reshaping societies at an unprecedented pace, offering extraordinary opportunities, but also posing significant risks to fundamental rights, democracy and rule of law. In response to those challenges, the Council of Europe is leading global efforts to establish the first ever binding International Treaty on Artificial Intelligence, ensuring that these technologies develop in alignment with human rights and democratic values. Today, we are privileged to hear from distinguished experts who have been actively shaping this groundbreaking treaty, and also worked on the guidance on the risk and impact assessment of AI systems from the point of view of human rights, democracy and the rule of law. We have an exceptional group of panellists who will share their insights on how we can ensure this technology upholds human rights and democratic values. Let us then start with Mr. Jasper Finke, Legal Officer of the Federal Ministry of Justice and Head of the German Delegation to the Committee on Artificial Intelligence. Dear Jasper, could you please present the Framework Convention to us and to the public and stress the main elements, please? Thank you.


Jasper Finke: Sure, thank you very much, Mario. Before I start, the usual safeguards, personal safeguards: I’m here in my personal capacity, so everything I say is not representing the position of the Federal Republic of Germany, but my own. Now we can start. Let me start with a modest comment. I think the negotiations of the AI Framework Convention were a success. Why do I say so? Well, in evaluating international agreements, you have to take into account the context in which they were negotiated and not just the content. So both matter, context and content, for evaluating international negotiations and agreements. And therefore, I will spend a little bit of time on the context in which we negotiated the AI Framework Convention before I will then focus on the content. If you look at the context in which we negotiated the AI Framework Convention, a few things mattered. The first thing was time. The zero draft of the Framework Convention was published in summer 2022. We finalized the negotiations in March 2024. So there was a huge time pressure under which we negotiated the Framework Convention. We basically had more or less one and a half years to do so and we managed to do so. And when you look at the content of the Framework Convention, please always take this into account. So as I said, we finalized the negotiations in March 2024. The Convention was adopted by the Committee of Ministers in May last year and then it was signed, or opened for signature, in September 2024, and by now, I have to read it because my memory is not that good, the following have signed: the European Union, and take note that it’s not just the EU itself, it also signed the Convention on behalf of its 27 member states. So there will be no signatures or ratifications from EU member states. It will just be the EU. We have Israel, Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino, the United Kingdom, Japan, Canada, Switzerland, Liechtenstein, Montenegro and I’m afraid I have missed the United States. They have also signed the AI Framework Convention. Well, this already indicates the approach we took in the negotiations. It was what will hopefully be a global approach. So becoming a party is not restricted to members of the Council of Europe. Instead we thought that, as with other conventions, it would be valuable to have non-Council of Europe member states as potential parties as well, and this really opened up the negotiations. States from Latin America also took part in the negotiations. We hope to see signatures from that region of the world as well. Australia joined the negotiations in the end, too. Of course, the AI Framework Convention was not the only initiative. We had other initiatives, and for everyone negotiating the AI Framework Convention, but especially for EU member states, it was particularly interesting to negotiate the AI Act and the AI Framework Convention in parallel. The AI Act was not yet finalized while we negotiated the AI Framework Convention, which posed a number of obstacles and challenges. We managed them all, but it didn’t make things easier. Let’s put it this way. As Mario has already said, there are other initiatives worldwide on AI regulation, but the Framework Convention is the first international binding agreement, and therefore it stands out. Against this very limited time frame, we had diverse interests, legal backgrounds, and cultures in the negotiations that should also be taken into account. Of course, we had a moving target.
The technology developed even more rapidly while we negotiated the AI Framework Convention. It’s always a tricky question: how do you frame rules when the object of these rules is still changing? Of course this is not just the case for the AI Framework Convention, it applies to the AI Act, the Hiroshima process and the UNESCO principles as well, but of course if it’s a binding international agreement it becomes particularly difficult, as it’s not easy to change these rules once in force. So against this background let me give you a brief overview of what the convention covers. First of all, it does not address the technology as such, artificial intelligence; it addresses artificial intelligence systems and their entire life cycle. So basically from product design to decommissioning, to be a bit un-technical. I should add Jordi here is the more technical person and every time I speak about technology he has to work very hard to keep from rolling his eyes, I guess. Of course the AI Framework Convention does not stand alone in itself, it’s part of a larger framework, human rights conventions and the Council of Europe, particularly the European Convention on Human Rights, and of course data protection is important when it comes to AI systems as well, even though this was not the core of our endeavor and is dealt with in other legal acts, conventions, committees. So how did we proceed? We first provided a list of fundamental principles that should be, or that must be, observed in the life cycle of AI systems. Human dignity and autonomy, especially the idea of human autonomy, is important, and becomes increasingly important to the extent that technology evolves. Equality and non-discrimination, important in itself, but given the possible impact of AI on manifesting discrimination and inequalities and prolonging stereotypes, it was important to stress these principles as well. Protection of privacy is, of course, included as well. More AI-specific: transparency and oversight, meaning oversight over the AI system. Accountability and responsibility, and of course, safe innovation and reliability. So, these are principles. They’re not specific rules. And, as everyone knows who has negotiated international agreements, if you want to be more specific, you need more time. If you do not have this kind of time, because you are under a lot of pressure, you have to start relying on these more abstract principles. Now, I have to look at the chair. How much more time do I have? Of course, we also included procedural rights and safeguards. So, there’s a documentation requirement for AI systems, there must be an effective possibility to lodge complaints, and the notification that one is interacting with an AI system. Of course, you can deviate from these principles under specific circumstances, but the basic rule is the notification. And very important, and a core element, is the risk and impact management framework. I won’t go into detail here, colleagues will do that. So, to conclude, let me ask a rhetorical question. Is the convention perfect? Well, I’m afraid the answer is no, but which convention or which outcome of an international negotiation has ever been perfect? It was not just the four of us sitting in a room for one and a half years drawing up ideal rules on AI.
The AI convention is a result of compromise, as all international agreements are, and you have to take into account more diverse interests if you extend the scope, or take a global approach, let’s put it this way: if you take a global approach, more diverse interests have to be accommodated. And this relates to the idea that we started with principles; the negotiations, or finalizing the negotiations last year, was not the end. No one understood it as the end, but more or less as a starting point. So we have abstract principles and all of us know that they have to be specified. So who can specify them? Well, of course, once ratified and in force, national legislators, so the parties to the convention, can specify and make the principles more specific. One example could be the AI Act, but there are many other ways to approach the topic, and this is what we took into account: there are different ways of approaching AI regulation, and the Framework Convention leaves and gives and allows this kind of space to regulate according to the specific needs and interests of the parties. But if we say the principles have to be specified, or should be specified, this is not just a job for the parties. The work of the CAI, the Committee on Artificial Intelligence, must and will continue, and let me show you, we have already started our work. And therefore, to point this out again, I think that given the context in which we negotiated, the convention is a success because it’s a starting point for further work and we are all committed to actually do this work. And with that, thank you very much.


Mario Hernandez Ramos: Thank you very much Jasper for this general review and also for stressing the issues that contextualize the outcome of the Framework Convention; of course it’s not perfect for many interests, but it’s the first international treaty on artificial intelligence. But this is not the only one, there are more exercises, more interesting regulatory examples, and for that we have our next panelist, Mrs Murielle Popa-Fabre, Generative AI Government Advisor with expertise at the intersection of technology and policy, with a PhD in Neuroimaging and Natural Language Processing and hands-on experience training Large Language Models. Dear Murielle, could you please introduce us to the current international and national AI governance landscape, taking into account human rights standards? Where do we stand currently, Murielle?


Murielle Popa Fabre: Thank you, good morning everyone, I will share some graphical elements, because probably the first thing we have to do is that, for the sake of time, development and because of the public opinion disruption that ChatGPT yielded, we see that there is an incredible, as you see on the graphic, that is just stopping one year ago, an incredible acceleration of formulated AI rules across the world, a lot of dynamics on trying to find the applicability of existing rules, and what is actually interesting for us, given the context of presentation of the Huderia, is that it’s a relatively stable amount of international frameworks. So we have a lot of rules, but when we think about the question of having a framework, it has to stick to reality, right? So rules can be principle-based, but whenever we have something that has to be a framework, it has to grip on reality, right? Like in a car, you have to have the wheels on, otherwise you slide off, right? And so it’s interesting because everyone understands the need to act, but it’s very difficult to make it land in reality. And so I will have a crosswalk of the first attempts at having a grip on reality. But we should start with what I usually call the AI governance lasagna, which is actually a multi-layered approach to AI governance in correct terms. And as you see, you have the meat right in the center in yellow. So what are the regulated entities, right? The AI producers, deployers, the design, all the AI supply chain. And then the more you go off the meat of the lasagna, the more you meet things that try to be very based on reality, that is right at the bottom, but are usually non-binding. So I would call this the cream. And then you have everything that gives structure, like the pasta, the layers up. And so what is at stake here is to take principles and to make them land in reality, right? So what is actually at stake in these three main, I just take three cross-national situations, is to have something like the AI Act, that is based on risk, that is based on product safety, land in reality. And I don’t know to what extent you’re familiar with the inner workings of the AI Act, but definitely it’s something that is going to be implemented with standards, and the CE mark will make the landing in society. Because at the end of the day, reality is also society, which is the main focus of the convention of the Council of Europe. It’s something that has not only to define what is an AI system, but to define what is an AI system with humans. And so the focus here is human rights, democracy and rule of law. And as the focus is humans, then it’s the whole life cycle, because it’s not about only development. It’s also, for example, about decommissioning. What if you do therapy with an AI and all your data are stuck in one company and you wanna change? For example, this is a question about decommissioning. What if the system stops working? What are you doing with your company? Companies I was doing consultancy with were stressed when governance inside a big AI company was shaky, because they said, what is happening? So it’s really important to understand this whole life cycle focus. And here, how it lands on the ground is not standards at the CEN-CENELEC level, it’s the Huderia methodology. And then we have the UN approach that is also about main agreement on core elements, like the first US-led one was about safety. So we had a safe, secure, trustworthy, sustainable development of AI.
And the second China-led one was about free, open, inclusive and non-discriminatory. And when you say free, open and inclusive, you also mean interoperable. So you see that when we are at the level of principles, we already have a specialization, like just people targeting core elements of AI systems inside reality. So basically, if we look back to the lasagna, the problem is to make it eatable, because if, I don’t know if you’ve tried lasagnas that don’t have the right structure, it’s really difficult to eat them. And also to make it then arrive in reality, right? Because if it’s not eatable, nobody will eat it, right? So the question is to bridge the layers, to come to the principles and operationalize them. And so what does it mean to make principles operable? So here you have a very basic schema where you have the fundamental values, you pick the ones you want, then you try, based on these fundamental values, to find AI principles. So to see what kind of characteristics the technology has to have in order to be in line with these fundamental rights. Then you try, you discover that there are risks. This is something that is actually, everyone’s discovering, right? So this is just, you discover there are risks, you wanna manage them, you try to find ways to manage them. And then when you have found ways to manage them, that is the cream, all the soft, cushy part that was at the bottom, you finally get into hardcore decisions about regulations, about standards, about rights, liability, remedies. So it’s also important that then you have basically three steps. You have a step that is about what is your approach? So for example, the focus on human rights and how it lands in society for the Council of Europe, compared to the product, right? So you have the approach, then you have the method, how your approach comes down to reality with a method. And then you have all the governance and regulation that comes and, like, fixes it in stone. So I would like to take these three main conceptual steps that were actually introduced by Jasper in order to show you the different approaches, because there are many, many initiatives, but they all target different elements of this landing in reality. So how we make principles operable. And so the first one that was really a first one in time and has to be acknowledged is the national standard, the U.S. national standards body, developing an AI risk management framework. So they put themselves at the risk framework level, but they focus on the AI system, and focus on the AI system with humans, like the convention. And so in this focus on the AI system, they did important work in order to find some characteristics that every trustworthy AI system should have. And so this is their approach, focus on the system. And what is the method is to give guidance to companies through an AI risk management framework. And so what is interesting here is that they developed this framework that is graphically represented here. And an important element that they share with the Huderia framework, for example, and also with the convention, is that you have to map in context. What are the risks? So that you take the context of application, you know, to understand the risks. And we’ll see that Huderia will do something more. But this is really very important, because when you think about an AI system and how versatile it is, it’s really fundamental to understand it in its context of application.
When you’re developing products in a company, for example. And so you make sure that in the context, all these seven principles are ticked, and then you do all your governance, then you measure, you go to the yellow spot, you measure, and then you manage it, and all this is built around the governance mechanism. And then we have what happened during the G7 in 2023 in Hiroshima, where there was a government forum that decided to issue some guiding principles. And so the approach was to say, we find principles that cut across risk management, stakeholder engagement, and ethical and societal considerations. So we’re starting to enter at the interface with humans here, and given these principles, they say, okay, we wanna build on these 11 pillars a code of conduct, a voluntary code of conduct, of course. And this is interesting, because when you look at the list here, you have all the risk management and governance considerations, which we can also find in the U.S. national standard approach. But we also have some stakeholder engagement. But how is this stakeholder engagement actually structured? It’s structured around transparency and accountability. So they go inside the principles and focus on some of them, and on responsible information sharing. So here, the dynamic is to say, okay, we want transparency to be the core, and transparency will be a way to have this stakeholder engagement. And so what they developed in order to make it operable, to be transparent, is a monitoring mechanism, also based on voluntary reporting, where companies developing AI systems can actually report about the best practices they have in their risk management. And here I would like to stress that in this, the relationship with the stakeholders is just like: I put information here on this platform and then you go check it if you want. While what we see in the Huderia approach, and at the core of the treaty, is that the approach is socio-technical, so totally sticking with the reality of being an interface with a human being and society at large. Having the life cycle, but actually it also develops a methodology, called Huderia, that wants to check the impact, not only the risk, but wants to assess and quantify the impact. And what is actually interesting is that it’s not only about transparency, but it has two crucial steps I would like to follow. That is the context-based analysis, and here the context is not only the application, and it has the stakeholder engagement process, which is not only about transparency. And if you want, you can go and check this on the web. And here it is really important, because it’s only these two poles that make it land in reality in a totally different way compared to these other initiatives I have been mapping. And here the COBRA risk analysis is based on the application, like we saw in the US national standards, but also the design, the development and the context and the deployment. So actually the context is linked to all the steps that lead the system to then interface with you. And this is fundamental when we think that we are getting to very sophisticated systems that are sold as black boxes, and that actually have internally a lot of different steps that have different impacts on their addiction patterns, on their influence patterns, on the end user and at large in society for systemic questions. So this stakeholder analysis is really important, because it’s about putting around the same table all the people that are interacting with these systems.
And so it’s important because one step of this analysis is about identifying missing viewpoints, which is actually something that people, when they develop products, do like to have somehow. And so it’s really, for me, something that puts much more granularity in the analysis, and that actually is very lively, because it keeps the pace of technology: because you are using tools, and as you’re using tools, you’re observing the effects they have on your life, both positive and negative. So if you want to keep the pace of the use of these systems, you definitely have everyone around one table. And so, if I still have three minutes, I’d like to show, taking the example of China and of DeepSeek, what is a governance journey for the country that was the first to set up binding rules on algorithms at large and AI-powered algorithms. So China had a regulatory journey that started by having laws according to different architectures. So you had one law about recommendation algorithms that power social media, for example, or help to fix prices. They had another law on deep synthesis techniques for generating content that is synthetic. And they had another law in August 2023 that is called, for the moment, the interim generative AI law, that is about algorithmic discrimination, fake content, intellectual property, privacy, social values in generated content, and also security and identity verification. So one way to see the kind of path I’m going to describe in the last three minutes is to understand to what extent there is a layered, iterative approach in the experience that China has been developing in regulating AI, that is linked to a central tool that is called the central register for algorithms, where you have to put your algorithm, your training data, before going to the market. And this tool actually is a tool that everyone developing, for example, a large language model has to use. And this is the interface, or this was the interface one year and a half ago, so maybe the interface is slightly different today. But basically, you have to include if you have biometric features or not, you have to include your identity information, if it’s open source, what data sets you use, what sources you use, what the use scenarios are, and a lot of different other characteristics. And this since 2022. And so you have batches of approval of what is put on the market. For example, this is an example of batches of approval of deep synthesis algorithms. And after this, they still developed standards, like it’s happening now at the EU level, and what is interesting is that the generative AI standards came out on the 11th of October 2023 and they were actually very detailed. Their scope was training data, for example, and for training data you had to say what the assessment is, what the evaluation methods are. And so for all the people that say that innovation has to be without concrete and fine-grained regulations, here you see an example with DeepSeek where actually you definitely have innovation and you definitely have, at least already, you see three layers of compulsory steps you have to take before putting onto the market. And now we are at three; now I ask you to count.
So when you’re here, you have different methods of evaluation, but for example, you have identified 31 risks, including of course social values, discrimination, commercial legality, legitimate interests of people, and all these risks have to be evaluated in a certain manner. And you have to have 98% of acceptability on your training data according to all your risks, and you also have to have 90% of acceptable answers on a pool of 1,000 questions you ask to a chatbot, in this case on the generated content. So you take the two ends, initial and final, so input and output, and you have 98% of acceptable questions, and this is what DeepSeek went through. You also have to have a maximum of 5% of rejected questions on certain questions; it means that you have to answer correctly a certain amount of questions, and you can’t just say, I reject them, and that’s fine. And then you have an additional step of standardization on data security of pre-training and optimization training data in generative AI, and also cybersecurity measures, so here we count five. And then the Cyberspace Administration mandated, on the 11th of July 2024, an additional step of government review on the AI models. And here it is six, and then we had in March this year an additional new regulation labeling AI-generated content, so I’m sure we can count more than seven. So this was just an example of how detailed the AI regulation journey can be and how agile and flexible this job is, and thank you for your attention.


Mario Hernandez Ramos: Thank you very much, Murielle, for this very interesting overview of the landscape and especially China’s regulation, which is sometimes not very well known, but is, of course, a very interesting thing to know. And now let’s move to our last panelist, Mr. Jordi Ascensi Sala, Head of Technology at Andorra Research and Innovation, Head of Delegation of Andorra to the Committee of Artificial Intelligence. Let’s move to a very important question, which is trying to make a system, an artificial intelligence system, safe, with safety at the center. This is our main worry, of course: to assess risks before problems with their use even arise. So, Jordi, please, could you explain how and why risk assessment and risk management of artificial intelligence are so important, and how Huderia contributes to making artificial intelligence safe?


Jordi Ascensi Sala: Thank you, Mario. I don’t know if you’re hungry or not, because we’re talking about lasagnas, and I’m going to take this nice metaphor from Murielle, because as she explained, the way that you try to use a legal instrument, as the convention is, and touch base with real practice is not an easy task, because in the realm of the convention, you talk with lawyers and politicians and policy makers that are interested in making a very consistent text that you can understand in a specific way, and that can be understood at the same time in the terms of an international treaty. When you translate that into practice, you create kind of a bridge, because putting this into practice, you’re going to talk with people like me, that have a technical background, you’re going to talk with people that are in the public administrations in charge of public procurement, you’re going to touch people that have rights, and you are going to be dealing with people that deliver these rights. So we have to bridge these gaps between a legal convention and the real practice. Murielle explained it in a very nice way, the lasagna way, but I think that it’s important to note that in the convention there is an article about the obligation to have a methodology to understand the impact on human rights, democracy, and the rule of law. And since the Committee on AI did the convention, it felt that it would be important to have a special recipe for this lasagna. And of course, this is a non-binding instrument, you can use whatever you want. You have to have one, but you can use whatever you want, and the Council of Europe, or the Committee on AI, proposed Huderia, and it’s a concrete model, as Murielle explained. I liked it when Jasper said that the convention had a specific context, and in terms of Huderia, context is super important, because it depends on the approach that you have to apply this convention. And also, at the end, this methodology should help you to fulfill the requirements that the convention asks for. And so, in terms of the Huderia methodology, we have a very important focus on context, but also a very important focus on perspective. Murielle already told us about the context-based risk analysis and the stakeholder engagement process; we’re going to go a little bit further into this. But how do you deal with something that is in continuous evolution, which is technology in this case? I remember when we finished the zero draft for the convention, I think that this was in 2022, and all these tools, ChatGPT, generative AI, and so on, were just starting, popping out. And now, two, three years later, we are in a situation where things have changed in a very dramatic way. And also, in terms of technology, also in terms of geopolitics. And this is a continuous movement.
So, how do you do a methodology, how do you implement something that touches base, taking into account that the place where you touch base is moving, you know, and evolving all the time? So probably I would say that the answer is to focus on human rights, democracy and the rule of law. And this is one of the important things: these are, of course, principles that evolve, but they don’t evolve like ChatGPT or other, you know, large language models or other AI systems. So focusing on that, and taking into account that this is the main part at the end, the Huderia methodology reaches this intersection between human rights and technology, frameworks and practices, and it’s a structured approach based on scale, scope, probability and reversibility. And this is quite important because it touches all the life cycle. Jasper was looking at me when he was talking about the terminology, and for computer scientists or engineers like me, life cycle is something that is commonly understood, you know: it’s when you design a system, you test it, you implement it, you operate it, and then sometimes you decommission this system, and then there are many things around. So when bridging the legal instrument into a more practical approach, we have to think also of these processes, and we have to also use similar languages. It is true that sometimes, and this is important to know too about the methodology, when we talk about, let’s say, explainability, the understanding of explainability from a technical standpoint is different from a legal standpoint, and there are things that merge here. But it is important to have this, again, perspective and this context, and to create this dialogue between people that are in the designing phases, the operational phases, the training phases, the implementation phases, the procurement phases, to use the same language. And this is one of the things that we are trying to do in the Huderia methodology, to have a common understanding of the language. Because otherwise we are talking in the same way, using the same words, but we are not talking about the same thing. And this is, you know, something that is important to notice. So, when we speak about the phases, this process of the Huderia, we saw this context-based risk analysis, where you check, based on the context, what is the risk that arises when you use an AI system. And then you go to the stakeholder engagement process, which means, okay, this risk, let’s put that in perspective. And perspective is not just one single perspective; it’s going to be the perspective of the people that are engaged in using this system, or the people that will be affected by this system. And this is a way to create a conversation. Because otherwise, talking about processes, we engineers, we like processes. We can process everything. We can process waking up in the morning, you know, and, not myself, but you know, brushing your hair, cleaning your teeth, and we can process all these things with many different indicators and KPIs. But when we think about using an AI system, we don’t put in this process other perspectives than the technical one. And using this methodology creates a conversation that says, are you taking into account this specific part of the population that will be impacted by this AI system? And there are some questions that help you frame this conversation. Same thing for the people that want to use this system. It creates this engagement.
Do you have a system that will help explain what the reasons are for the decision that has been made by an AI system? We can have this conversation at this table, where we have an AI expert like Murielle and two magnificent lawyers and myself, and we can just start this conversation. I’m sure that we can have this conversation in this room, and this will be a very rich and fruitful conversation to understand the risk and the impact of this risk depending on the perspective. Then of course, when you have analyzed all this, you understand what the risks are and what the impact of the risks is. And at the end you have to think about how to mitigate the risks, because sometimes it will be impossible, it will be difficult, to just avoid the risks. You have to think about how you will mitigate them. So, about the use of this process: this is an ongoing process. It’s an ex-ante analysis, but you don’t do it just one time and that’s it, and it works forever. You have to do it from time to time. And I think this is also, I don’t want to be super romantic here, but you create conversation among people. And this is a good thing, you know: when you install an AI system and you want to see how it is going on, you do this in an ongoing conversation, you are asking questions, and, you know, since the system is evolving and amplifying its capacities, or maybe going into different sectors, or maybe dealing with different inputs to deliver different outputs, you have to have this conversation, and this is a good tool to have this conversation. There are examples of other tools that are similar to this one, because this is not new; Huderia is part of, let’s say, a tradition of doing things in that way. Convention 108 Plus has this in terms of data protection rights. The GDPR in the European Union has this. Also, in terms of cybersecurity, there are tools that are similar to this, for how you assess a system in terms of cybersecurity. So, the Huderia methodology, at the end, helps to have a holistic way and a continuous way to approach the use and the implementation of an AI system. And this holistic way, I want to link it with Jasper’s talk about human autonomy, which is one of the basic principles. And now, personally, I’m doing things about philosophy, and I can explain that in order to be free, to have this human autonomy, there is this universality principle. So, the holistic way of understanding the conversation is a way to reach this universal understanding. It’s not the engineer or the computer scientist saying what the risk is. It’s not only the people that receive the rights or deliver the rights that signal where the risks are and the impact of these risks. It’s having a choral conversation, in terms of having this universality. So, just to wrap up a little bit, and I’m going to finish in two minutes, the holistic way means to have the AI system’s application context, where this system is going to be applied; the AI system’s design and development context, and this is the process of the life cycle, the data protection, explainability, interoperability;
and this is taking into account how you deal with the whole process of installing, operating and using an AI system, but also how you put it into motion. Because sometimes we have this feeling right now that you turn on a switch and there is light, and you turn on your cell phone and there is a connection, and then you start a system and it all works. But when we have such powerful systems as AI systems are, things are not that easy. Of course they work right away, but there is this French philosopher, Paul Virilio, who used to say that when we invented the train, we invented the train accident. So when we install a system, we have to think about all of this, and this is not a very rapid question; you have to think about it. So just to finalize here: when we were discussing the Huderia, to me, having the methodology is important, but so is how you are going to implement this methodology. Because at the end, there will be either a small company or a big company thinking about deploying, preparing AI systems; it’s going to be also a government or a small municipality thinking about using a system to better deliver services; but it’s going to be also a policy maker, regulators and so on. So how do you make this operational? And here there are two parts, and the Secretariat is working hard to help with this. The first part is the capacity building. We have to have a tool that will be able to be used by the majority of the people, so let’s make this tool useful, let’s make this tool in a way that it will be understandable. And the second part will be the library of knowledge, thinking that when people use this system, it’s going to create cases, specific analyses from specific applications. And these are things that, of course, with all the privacy safeguards and the disclaimers that should be used, can be used and shared around. So it’s going to be a common practice. I don’t think that a small municipality in my country will be much different from a small municipality in Germany, in France or in Spain or other countries in other jurisdictions. So probably we can use this as an example. I’m trying to bridge this process of using the convention as a basis, the top layer of the lasagna, and Huderia as a methodology to fulfill it, to put in some meat, or some vegetables if you’re vegan, but also to help digest this lasagna. So this will be maybe the salt and pepper. So thank you very much.


Mario Hernandez Ramos: Thank you very much, Jordi. We are starting to get hungry. So we have 15 minutes for you to put any questions or comments to these extraordinary panelists. The floor is yours. If you don't have questions, I have many, and I will take advantage of my position, but I would rather you had questions. No questions?


Audience: Martin Boteman, a question. With all this AI being developed, and I hear you very much on capacity building and making it available to everybody: what would be a good balance between investing in innovation on the one hand, and making AI available to the public on the other? Because I think you will need to have premium AI in some way to stimulate innovation. How do you see that balance?


Mario Hernandez Ramos: Thank you for the question.


Jordi Ascensi Sala: I'm going to give the engineering answer, meaning that, to me, if we know what the rules are and have a clear methodology or framework for deploying this, it is no different from the protocols we already use when we create an AI system or any technological digital system. It is true that you need an approach that goes beyond the purely technical one. On the technical side, we have a gap in the understanding of human rights. Let me put it this way, and I don't want this to sound like a critique of the engineering schools, but when we went to the engineering schools or the computer science schools, they taught us how to build a bridge between A and B, not to think about the implications of building the bridge between A and B. I think that this conversation, for the good of the profession, is important to have: the implications of using a specific technology. We have the mindset and the framework to find ways forward if the rules are clear. To me, it would be: let's have a very specific framework, let's talk about this. Of course, it's going to take a little more time than just having a free-for-all process. But in the end, there is never a free-for-all process. We have limitations in terms of capacity, energy, processing. When you install a system in a municipality, for instance, or in a government, the system is not standalone; you put it into a specific context. So what we are doing is enlarging this context, thinking about the implications when you deploy this. Can you explain it? Can you put in specific audit mechanisms or logging mechanisms? To me, it will also be important that this methodology is easily approachable for computer scientists or for public procurement teams, because otherwise it's going to be a big document that is difficult to understand. And we're putting a lot of effort in here, because this is the juice, the important meat of the conversation.


Murielle Popa Fabre: In addition to this, I would say that, taking the perspective of developing generative AI tools today, or investing in creating an ecosystem of generative AI, which is something I do, for example, for France: when you are in investment and you want to accelerate the economy, you want to build products; you don't just want the best tech in the world. You want adoption and you want to build products. And so I would go even further than what Jordi just said: there is something about design, actually product design, that is highly cultural, highly human, and highly cognitive. And those are the questions we are facing today. With sovereignty, it's about culture and cognition too. So my understanding of innovation also includes the idea of having responsibility in innovation, but also having simply good products that fit the cognitive well-being of people and fit the cultural demands of a certain area of the world. And so I think HUDERIA is interesting because it puts all these people around the same table, so that in the end, as I already said in my presentation, you can also have good products.


Audience: May I? I think in a way it recalls how privacy came into the world, and how the GDPR in a way became a strong instrument where the public interest is defined by law and not by the public. So I like the fact that it's a convention, because it allows us to shape it. And as you said, Madam, it's global. So it's indeed a dialogue that will be needed. I can see that part of the Convention's role would be to stop premature regulation that stifles public-interest progress.


Mario Hernandez Ramos: Thank you very much for the question.


Audience: Jacques Berglinger, Swiss-based, board of EuroDIG and with Leiden University in the Netherlands. My question is, and thank you to the committee for the wonderful work: will we see, after this, a Strasbourg effect globally, similar to what the Brussels effect is trying to achieve?


Jasper Finke: Unfortunately, I cannot predict the future. If I could, maybe I would have a different job, and I would definitely not be a lawyer. Well, yes and no. The Convention and the AI Act work on different levels. The AI Act, for example, or the Brussels effect, as you said, is more specific; it is basically implementing the Framework Convention. On the other hand, once ratified and in force, we have binding principles, and we are working to make these principles more operable, to give guidance using the HUDERIA methodology, using COBRA. We are putting all our efforts into achieving this effect. And then, in the end, it also depends on parties, on companies, on municipalities, on local actors, to actually use the tools that are provided by the Council of Europe. If they do, we will see this effect. But if you look at the Framework Convention itself, it does not play on the same level as the AI Act. And I think it was clear from the beginning that it's about content, but it's also about geopolitics, as you can see from the potentially global approach. I think this geopolitical aspect and impact will remain an essential part of the success, or hopefully the success, of the Framework Convention. Thank you.


Murielle Popa Fabre: So it is really courageous to take principles and transform them into measurable elements, because principles are not quantitative, right? So there is this move between qualitative and quantitative, and having a method to do it right for humans. I'm always thinking about the cognitive risk of large-scale automation and the question of autonomy that was already raised by the two panelists. So I think that if there is something that can be a Strasbourg effect, it is this courage to tackle the question of transforming the qualitative into the quantitative, in a world that is automatically datafying and algorithmicizing everything. So I think we're on a good track.


Mario Hernandez Ramos: Thank you very much. I will take advantage of my position to say that I'm very happy with that question and with the Strasbourg effect. I had never heard of a Strasbourg effect, but there is one regarding human rights and the extension of standards beyond the Council of Europe countries. It is a reality, especially in Latin America and other parts of the world. But I would like to stress the complementary relationship between the Council of Europe Framework Convention on Artificial Intelligence and, for instance, the European Union regulation. It is very clear that both instruments work better together, and with other instruments. So it is part of a much-needed network to regulate this horizontal technology, which poses so many different and important risks to human rights. So thank you very much for the question. Is there any other question in the room? Yes, please.


Audience: Thank you. Given my IT background, I wanted to ask a question. Given that many AI developers, especially smaller startups and public sector innovators, often lack, as you already said, the capacity to navigate this complex compliance framework, how do you envision supporting them in aligning with the Convention and with other EU regulations? Is this methodology enough for them, or do you foresee something else to help them navigate all this? Thank you.


Jordi Ascensi Sala: Thank you. It's a very interesting question, and it touches my heart. Why? Because I come from a small country of only 85,000 people. So to me, it's not only about the small companies, although the majority of developers are small companies. Of course, there are also the big players, but they are not the problem: they have lots of lawyers and lots of people dedicated, or claiming to be dedicated, to understanding the risks of AI systems. In my case, it is something we have stressed at every meeting: there are going to be small companies, but also small municipalities and small public institutions that don't have a ton of means to understand this. So with the Secretariat we are approaching this in a way that, first, we have to have a tool that will help with this capacity building: a training tool that will help make the methodology digestible. We are thinking about how, but we have to think about the user design, the UX and user experience and so on, because the perspective of computer scientists or small companies is different from that of small municipalities. But we can take the same approach to this tool. In my mind, and we haven't decided yet, and I'm looking at the Secretariat and I hope I'm not putting myself in a difficult situation, it will be a visual tool where you follow the HUDERIA process, with different definitions that help you understand why a question is asked that way and in which capacity you are answering it. These are my thoughts; we're working on that. We have several meetings to hold with municipalities, but also with public institutions, with the states, and with developers, to grasp what a useful tool would be. That is one part: training, capacity building, and a tool that helps you go through the process. The second part will be the common knowledge platform. Let's say I'm a developer working on a tool to assess financial credits for housing, and I want to prepare a tool like this. In my mind, it would be interesting to see what other cases are similar to this one: how the questions were asked, and what the context-based risk assessments and stakeholder engagement processes were. From this, I could start designing the application in a very specific way, in line with the HUDERIA methodology. This is something we are thinking about and developing right now. This is why Jasper says there is an important job to do for the CAI, the follow-up committee that will be in charge of this: the baby is born, but we need to help him or her walk, and we need to do it in a very precise way to make it useful and broad in its way of conveying the methodology, because otherwise it's going to be a big pile of paper in a drawer, and we don't like that. Believe me, I've been very specific about this, because for my government it is going to be something very difficult to digest, and we are one of the signatories, so it's in our interest to make this, I don't want to say easier, but manageable at our scale. So yes, we are doing this. We don't have the details yet, but think of an academy, a capacity building tool that helps you navigate, and then a library of knowledge of cases.


Mario Hernandez Ramos: Thank you very much to all our panelists, and thank you to you all. This discussion underscores the importance of this treaty, and of the regulation of artificial intelligence in general, in shaping a future where artificial intelligence is a force for good, protecting human rights and upholding democratic values. I hope the insights shared today will inspire all like-minded states to consider joining this landmark initiative. Thank you all for joining us, and I look forward to seeing this treaty come into force and to fruition. Thank you very much for today.



Jasper Finke

Speech speed

111 words per minute

Speech length

1669 words

Speech time

900 seconds

Convention negotiated under significant time pressure in 1.5 years, representing a compromise between diverse global interests

Explanation

Finke argues that the AI Framework Convention was successfully negotiated despite having only 1.5 years from the zero draft publication in summer 2022 to finalization in March 2024. He emphasizes that the convention represents a compromise between diverse interests, legal backgrounds, and cultures, which should be considered when evaluating its content.


Evidence

Zero draft published summer 2022, negotiations finalized March 2024, adopted by Committee of Ministers May 2024, opened for signature September 2024


Major discussion point

Council of Europe AI Framework Convention Development and Context


Topics

Legal and regulatory


Disagreed with

– Murielle Popa Fabre

Disagreed on

Timeline and approach to regulatory development


First binding international treaty on AI, with global approach allowing non-Council of Europe members to participate

Explanation

Finke highlights that this convention stands out as the first international binding agreement on AI, distinguishing it from other non-binding initiatives. The convention takes a global approach by allowing non-Council of Europe member states to become parties, which opened up negotiations to broader participation.


Evidence

Signatories include EU and 27 member states, Israel, Andorra, Georgia, Iceland, Norway, Moldova, San Marino, UK, Japan, Canada, Switzerland, Liechtenstein, Montenegro, United States; participation from Latin American states and Australia in negotiations


Major discussion point

Council of Europe AI Framework Convention Development and Context


Topics

Legal and regulatory


Convention establishes fundamental principles including human dignity, autonomy, equality, non-discrimination, and transparency

Explanation

Finke explains that the convention provides a list of fundamental principles that must be observed throughout the AI system lifecycle. These principles include human dignity and autonomy, equality and non-discrimination, privacy protection, transparency and oversight, accountability and responsibility, and safe innovation and reliability.


Evidence

Specific principles listed: human dignity and autonomy, equality and non-discrimination, protection of privacy, transparency and oversight, accountability and responsibility, safe innovation and reliability


Major discussion point

Council of Europe AI Framework Convention Development and Context


Topics

Human rights | Legal and regulatory


Disagreed with

– Murielle Popa Fabre

Disagreed on

Approach to AI regulation – principles vs detailed rules


Convention is a starting point requiring further specification by national legislators and continued work by the Committee on AI

Explanation

Finke acknowledges that the convention is not perfect but serves as a starting point rather than an end. The abstract principles need to be specified by national legislators and parties to the convention, with continued work by the Committee on Artificial Intelligence to make principles more specific and operational.


Evidence

EU AI Act mentioned as example of how parties can specify principles; Committee on AI has already started work on further specification


Major discussion point

Council of Europe AI Framework Convention Development and Context


Topics

Legal and regulatory


Agreed with

– Murielle Popa Fabre
– Mario Hernandez Ramos

Agreed on

AI governance requires multi-layered, complementary regulatory approaches



Murielle Popa Fabre

Speech speed

139 words per minute

Speech length

3012 words

Speech time

1299 seconds

Incredible acceleration of AI rules globally with multi-layered governance approach resembling a “lasagna” structure

Explanation

Popa Fabre describes the current AI governance landscape as having experienced incredible acceleration in rule formulation globally, particularly after ChatGPT’s disruption. She uses a “lasagna” metaphor to explain the multi-layered approach to AI governance, with regulated entities at the center and various binding and non-binding frameworks forming different layers.


Evidence

Graphic showing the acceleration of AI rules, with data ending one year ago; lasagna metaphor with regulated entities (AI producers, deployers, design, supply chain) as the 'meat' in yellow at the center


Major discussion point

Global AI Governance Landscape and Regulatory Approaches


Topics

Legal and regulatory


Challenge lies in making principles operational and bridging the gap between abstract principles and practical reality

Explanation

Popa Fabre argues that the main challenge in AI governance is translating abstract principles into operational reality. She emphasizes that frameworks need to “grip on reality” like wheels on a car, otherwise they slide off, and the key is bridging the layers between principles and practical implementation.


Evidence

Car wheel metaphor – framework needs wheels on ground or it slides off; three conceptual steps: approach, method, and governance/regulation


Major discussion point

Global AI Governance Landscape and Regulatory Approaches


Topics

Legal and regulatory


Agreed with

– Jasper Finke
– Mario Hernandez Ramos

Agreed on

AI governance requires multi-layered, complementary regulatory approaches


China has developed detailed binding regulations with iterative layered approach, including central algorithm register and multiple approval steps

Explanation

Popa Fabre provides China as an example of detailed AI regulation, showing how they developed binding rules through an iterative layered approach. She demonstrates that innovation can coexist with detailed regulation, using DeepSeek as an example of a system that went through multiple regulatory steps while still achieving innovation. (A toy illustration of such threshold checks follows this entry.)


Evidence

China’s laws on recommendation algorithms, deep synthesis techniques, interim generative AI law; central register for algorithms requiring algorithm and training data submission before market entry; DeepSeek example with 98% acceptability on training data, 90% acceptable answers on 1,000 questions, maximum 5% question rejection rate


Major discussion point

Global AI Governance Landscape and Regulatory Approaches


Topics

Legal and regulatory


Agreed with

– Jordi Ascensi Sala
– Audience

Agreed on

Innovation and regulation can coexist with proper frameworks


Disagreed with

– Jasper Finke

Disagreed on

Timeline and approach to regulatory development
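To make the threshold-style gates concrete, here is a minimal sketch of how the three example figures cited above (98% training-data acceptability, 90% acceptable answers on a 1,000-question set, at most 5% question rejections) could be checked on an evaluation run. The exact Chinese evaluation procedure is not public; the function name, data layout, and thresholds here are assumptions for illustration only.

```python
# Toy illustration of threshold-style compliance gates, loosely modelled
# on the figures cited for China's pre-market algorithm registration.
# Function name, layout, and thresholds are illustrative assumptions.

def check_release_gates(training_ok_ratio: float,
                        acceptable_answers: int,
                        total_questions: int,
                        rejected_questions: int) -> dict:
    """Evaluate the three example gates mentioned in the session."""
    answer_ratio = acceptable_answers / total_questions
    rejection_ratio = rejected_questions / total_questions
    return {
        "training_data_acceptability >= 98%": training_ok_ratio >= 0.98,
        "acceptable_answers >= 90%": answer_ratio >= 0.90,
        "question_rejection <= 5%": rejection_ratio <= 0.05,
    }

# Example run against a hypothetical 1,000-question evaluation set.
gates = check_release_gates(training_ok_ratio=0.985,
                            acceptable_answers=915,
                            total_questions=1000,
                            rejected_questions=38)
print(all(gates.values()), gates)  # all three gates pass in this example
```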


Different international initiatives focus on various aspects: US on risk management, G7 on transparency, Council of Europe on socio-technical approach

Explanation

Popa Fabre compares different international AI governance approaches, showing how each focuses on different elements of making principles operational. She highlights that the Council of Europe’s approach is unique in its socio-technical focus and stakeholder engagement process, going beyond just transparency to active participation.


Evidence

US National Institute of Standards and Technology approach focuses on the AI system with a risk management framework; G7 Hiroshima approach focuses on transparency and voluntary reporting; Council of Europe focuses on context-based analysis including design, development, and deployment contexts plus stakeholder engagement


Major discussion point

Global AI Governance Landscape and Regulatory Approaches


Topics

Legal and regulatory



Jordi Ascensi Sala

Speech speed

142 words per minute

Speech length

3169 words

Speech time

1337 seconds

HUDERIA provides structured approach based on scale, scope, probability and reversibility across entire AI system lifecycle

Explanation

Ascensi Sala explains that the HUDERIA methodology provides a structured approach to assess AI systems' impact on human rights, democracy, and rule of law. The methodology is based on four key factors – scale, scope, probability, and reversibility – and covers the entire lifecycle from design to decommissioning. (A minimal scoring sketch follows this entry.)


Evidence

Methodology covers design, testing, implementation, operation, and decommissioning phases; focuses on intersection between human rights and technology frameworks


Major discussion point

HUDERIA Methodology and Risk Assessment Implementation


Topics

Human rights | Legal and regulatory
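As a way to see how the four factors might be combined in practice, here is a minimal, hypothetical sketch. HUDERIA itself does not prescribe code, numeric scales, or a combination rule; the 1-4 ordinal scale, the scoring, and the tier cut-offs below are assumptions made purely for illustration.

```python
from dataclasses import dataclass

# Hypothetical scoring sketch for the four HUDERIA risk factors.
# Scales, weights, and tier thresholds are illustrative assumptions.

@dataclass
class RiskFactors:
    scale: int          # severity of the potential harm (1 = minor .. 4 = critical)
    scope: int          # breadth of people affected (1 = few .. 4 = population-wide)
    probability: int    # likelihood of the harm (1 = rare .. 4 = near-certain)
    reversibility: int  # difficulty of undoing the harm (1 = easily reversed .. 4 = irreversible)

def risk_tier(f: RiskFactors) -> str:
    """Map the four factors to a coarse review tier."""
    score = f.scale + f.scope + f.probability + f.reversibility  # range 4..16
    if score >= 13 or f.reversibility == 4:
        return "high - full assessment with stakeholder engagement"
    if score >= 9:
        return "medium - context-based risk analysis and periodic review"
    return "low - document the assessment and re-check on significant change"

# Example: a credit-scoring system affecting many applicants, where a
# wrong denial is hard to reverse (score 11 -> medium tier).
print(risk_tier(RiskFactors(scale=3, scope=3, probability=2, reversibility=3)))
```

The point of such a sketch is only that an ordinal structure makes the qualitative factors comparable across systems and forces the assessment to be repeated when any factor changes over the lifecycle.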


Methodology focuses on context-based risk analysis and stakeholder engagement process to create dialogue between different perspectives

Explanation

Ascensi Sala emphasizes that HUDERIA’s strength lies in its two crucial components: context-based risk analysis that considers the specific application context, and stakeholder engagement that brings different perspectives to the table. This creates a conversation between technical experts, rights holders, and affected communities.


Evidence

Context includes application, design, development, and deployment; stakeholder engagement identifies missing viewpoints and puts affected people around same table


Major discussion point

HUDERIA Methodology and Risk Assessment Implementation


Topics

Human rights | Legal and regulatory


Agreed with

– Murielle Popa Fabre
– Audience

Agreed on

Innovation and regulation can coexist with proper frameworks


Holistic approach considers AI system application context, design and development context, ensuring universal understanding through collaborative conversation

Explanation

Ascensi Sala argues that HUDERIA’s holistic approach links to the principle of human autonomy by ensuring universal understanding through collaborative conversation. Rather than having only engineers or only rights holders define risks, it creates a “coral conversation” that includes all stakeholders in the decision-making process.


Evidence

Reference to philosopher Paul Virilio’s quote about train invention also inventing train accidents; comparison to small municipalities having similar needs across different countries


Major discussion point

HUDERIA Methodology and Risk Assessment Implementation


Topics

Human rights | Legal and regulatory


Implementation requires capacity building tools and knowledge library to make methodology accessible to small companies and municipalities

Explanation

Ascensi Sala acknowledges that successful implementation of HUDERIA requires practical support for smaller entities that lack resources. He proposes developing visual tools for capacity building and creating a shared knowledge library of cases that can be used across similar contexts. (A hypothetical sketch of such a case library follows this entry.)


Evidence

Coming from small country with 85,000 people; proposal for visual tool following HUDERIA process; library of knowledge for sharing cases like financial credit assessment for housing across similar municipalities


Major discussion point

HUDERIA Methodology and Risk Assessment Implementation


Topics

Development | Legal and regulatory


Agreed with

– Audience

Agreed on

Need for practical implementation tools and capacity building for smaller entities
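To illustrate the proposed "library of knowledge", here is a minimal sketch of past assessments indexed by context tags, so a new deployer (say, a small municipality) can find comparable cases. Nothing like this has been specified yet; the schema, the example cases, and the tag-overlap matching rule are all hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical case library for sharing HUDERIA assessment experience.
# Schema, cases, and matching rule are illustrative assumptions.

@dataclass
class AssessmentCase:
    title: str
    tags: set[str] = field(default_factory=set)

LIBRARY = [
    AssessmentCase("Housing credit scoring", {"finance", "credit", "housing"}),
    AssessmentCase("Municipal chatbot for citizen services", {"municipality", "chatbot"}),
    AssessmentCase("Loan eligibility triage", {"finance", "credit"}),
]

def similar_cases(query_tags: set[str], min_overlap: int = 2) -> list[str]:
    """Return titles of past cases sharing at least min_overlap tags."""
    return [c.title for c in LIBRARY if len(c.tags & query_tags) >= min_overlap]

# A developer preparing a housing-credit tool looks for comparable cases.
print(similar_cases({"finance", "credit", "housing"}))
```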



Audience

Speech speed

116 words per minute

Speech length

281 words

Speech time

144 seconds

Clear rules and methodology don’t hinder innovation but require broader context consideration beyond pure technical aspects

Explanation

An audience member raised concerns about balancing innovation investment with public AI availability, questioning whether premium AI is needed to stimulate innovation. The response emphasized that clear rules and methodologies actually support innovation by providing frameworks, similar to existing technical protocols.


Evidence

Engineering schools teach building bridges from A to B but not implications of building bridges; systems are never standalone but exist in specific contexts with limitations in capacity, energy, processing


Major discussion point

Balancing Innovation with Public Interest and Accessibility


Topics

Economic | Development


Agreed with

– Murielle Popa Fabre
– Jordi Ascensi Sala

Agreed on

Innovation and regulation can coexist with proper frameworks


Innovation should focus on responsibility and creating good products that fit cognitive well-being and cultural demands

Explanation

The discussion emphasized that innovation in AI should prioritize responsible development and creating products that meet human cognitive needs and cultural demands. This approach sees design as highly cultural and human-centered, particularly important for sovereignty questions involving culture and cognition.


Evidence

Generative AI development for France focuses on adoption and product building; sovereignty involves culture and cognition; putting stakeholders around same table leads to better products


Major discussion point

Balancing Innovation with Public Interest and Accessibility


Topics

Economic | Sociocultural


Need for visual tools and training to help small developers and municipalities navigate complex compliance frameworks

Explanation

An audience member highlighted the challenge faced by small startups and public sector innovators who lack capacity to navigate complex AI compliance frameworks. The response emphasized the need for accessible tools and training specifically designed for smaller entities with limited resources.


Evidence

Small companies and municipalities don’t have tons of lawyers like big players; proposal for visual tool with user experience design; training tool to help digest methodology; different perspectives needed for computer scientists vs municipalities


Major discussion point

Balancing Innovation with Public Interest and Accessibility


Topics

Development | Legal and regulatory


Agreed with

– Jordi Ascensi Sala

Agreed on

Need for practical implementation tools and capacity building for smaller entities


Convention aims to achieve global “Strasbourg effect” similar to Brussels effect, working complementarily with other regulations like EU AI Act

Explanation

An audience member asked about achieving a “Strasbourg effect” globally similar to the Brussels effect. The response indicated that while the convention works on different levels than the EU AI Act, it aims to create binding principles with global impact, working complementarily with other regulations rather than competing with them.


Evidence

Convention works on different level than AI Act; binding principles with guidance through HUDERIA methodology; geopolitical aspect essential for success; complementarity with EU regulations stressed


Major discussion point

Balancing Innovation with Public Interest and Accessibility


Topics

Legal and regulatory | Human rights



Mario Hernandez Ramos

Speech speed

126 words per minute

Speech length

841 words

Speech time

399 seconds

AI is reshaping societies at unprecedented pace, offering extraordinary opportunities but posing significant risks to fundamental rights, democracy and rule of law

Explanation

Hernandez Ramos frames the discussion by acknowledging the dual nature of AI technology – its transformative potential alongside serious risks to core democratic values. He positions this as the fundamental challenge that necessitates international regulatory response.


Major discussion point

Council of Europe AI Framework Convention Development and Context


Topics

Human rights | Legal and regulatory


Council of Europe is leading global efforts to establish the first ever binding International Treaty on Artificial Intelligence

Explanation

Hernandez Ramos emphasizes the pioneering role of the Council of Europe in creating binding international AI governance. He highlights that this treaty aims to ensure AI technologies develop in alignment with human rights and democratic values.


Evidence

Framework Convention on Artificial Intelligence and HUDERIA guidance being discussed as groundbreaking treaty


Major discussion point

Council of Europe AI Framework Convention Development and Context


Topics

Legal and regulatory | Human rights


Framework Convention and EU AI Act work better together as part of needed network to regulate horizontal technology

Explanation

Hernandez Ramos stresses the complementarity between different regulatory instruments rather than competition. He argues that regulating AI as a horizontal technology requires a network approach with multiple complementary instruments working together.


Evidence

Complementarity relationship between Council of Europe Framework Convention and European Union Regulation mentioned; extension of standards beyond Council of Europe countries to Latin America and other parts of world


Major discussion point

Global AI Governance Landscape and Regulatory Approaches


Topics

Legal and regulatory


Agreed with

– Jasper Finke
– Murielle Popa Fabre

Agreed on

AI governance requires multi-layered, complementary regulatory approaches


There is a Strasbourg effect regarding human rights extending standards beyond Council of Europe countries

Explanation

Hernandez Ramos confirms the existence of a ‘Strasbourg effect’ similar to the Brussels effect, where Council of Europe human rights standards influence regions beyond member countries. He specifically mentions this effect’s reality in Latin America and other parts of the world.


Evidence

Extension of standards beyond Council of Europe countries, especially in Latin America and other parts of the world


Major discussion point

Global AI Governance Landscape and Regulatory Approaches


Topics

Human rights | Legal and regulatory


Agreements

Agreement points

Need for practical implementation tools and capacity building for smaller entities

Speakers

– Jordi Ascensi Sala
– Audience

Arguments

Implementation requires capacity building tools and knowledge library to make methodology accessible to small companies and municipalities


Need for visual tools and training to help small developers and municipalities navigate complex compliance frameworks


Summary

Both speakers recognize that small companies, municipalities, and public sector innovators lack the resources and capacity to navigate complex AI compliance frameworks, requiring accessible tools, training, and visual interfaces to make methodologies like HUDERIA practically usable.


Topics

Development | Legal and regulatory


AI governance requires multi-layered, complementary regulatory approaches

Speakers

– Jasper Finke
– Murielle Popa Fabre
– Mario Hernandez Ramos

Arguments

Convention is a starting point requiring further specification by national legislators and continued work by the Committee on AI


Challenge lies in making principles operational and bridging the gap between abstract principles and practical reality


Framework Convention and EU AI Act work better together as part of needed network to regulate horizontal technology


Summary

All speakers agree that effective AI governance cannot rely on a single instrument but requires a network of complementary regulations working at different levels, from abstract principles to specific implementation guidelines.


Topics

Legal and regulatory


Innovation and regulation can coexist with proper frameworks

Speakers

– Murielle Popa Fabre
– Jordi Ascensi Sala
– Audience

Arguments

China has developed detailed binding regulations with iterative layered approach, including central algorithm register and multiple approval steps


Methodology focuses on context-based risk analysis and stakeholder engagement process to create dialogue between different perspectives


Clear rules and methodology don’t hinder innovation but require broader context consideration beyond pure technical aspects


Summary

Speakers agree that detailed regulation does not stifle innovation, as demonstrated by examples like DeepSeek in China, and that clear frameworks actually support innovation by providing structured approaches to development and deployment.


Topics

Economic | Development | Legal and regulatory


Similar viewpoints

Both speakers emphasize the pioneering and globally significant nature of the Council of Europe’s AI Framework Convention as the first binding international treaty on AI, highlighting its global reach beyond Council of Europe members.

Speakers

– Jasper Finke
– Mario Hernandez Ramos

Arguments

First binding international treaty on AI, with global approach allowing non-Council of Europe members to participate


Council of Europe is leading global efforts to establish the first ever binding International Treaty on Artificial Intelligence


Topics

Legal and regulatory | Human rights


Both speakers recognize that the Council of Europe’s approach through HUDERIA is distinctive in its socio-technical focus and comprehensive lifecycle approach, differentiating it from other international initiatives that focus on narrower aspects like transparency or risk management.

Speakers

– Murielle Popa Fabre
– Jordi Ascensi Sala

Arguments

Different international initiatives focus on various aspects: US on risk management, G7 on transparency, Council of Europe on socio-technical approach


HUDERIA provides structured approach based on scale, scope, probability and reversibility across entire AI system lifecycle


Topics

Human rights | Legal and regulatory


Both speakers emphasize the importance of human autonomy and dignity as fundamental principles, with Finke highlighting it as increasingly important as technology evolves, and Ascensi Sala connecting it to universal understanding through collaborative dialogue.

Speakers

– Jasper Finke
– Jordi Ascensi Sala

Arguments

Convention establishes fundamental principles including human dignity, autonomy, equality, non-discrimination, and transparency


Holistic approach considers AI system application context, design and development context, ensuring universal understanding through collaborative conversation


Topics

Human rights | Legal and regulatory


Unexpected consensus

China’s detailed regulatory approach as a positive example for innovation

Speakers

– Murielle Popa Fabre
– Audience

Arguments

China has developed detailed binding regulations with iterative layered approach, including central algorithm register and multiple approval steps


Innovation should focus on responsibility and creating good products that fit cognitive well-being and cultural demands


Explanation

It’s unexpected that in a Western-led discussion about AI governance, China’s regulatory approach would be presented positively as an example of how detailed regulation can coexist with innovation, particularly given typical Western critiques of Chinese tech regulation. The speakers use DeepSeek as evidence that multiple regulatory layers don’t stifle innovation.


Topics

Legal and regulatory | Economic


Engineering education gaps in understanding human rights implications

Speakers

– Jordi Ascensi Sala
– Audience

Arguments

Holistic approach considers AI system application context, design and development context, ensuring universal understanding through collaborative conversation


Clear rules and methodology don’t hinder innovation but require broader context consideration beyond pure technical aspects


Explanation

There’s unexpected consensus on the need for engineers and computer scientists to expand beyond purely technical considerations to include human rights implications, with Ascensi Sala noting that engineering schools teach how to build bridges but not the implications of building them.


Topics

Development | Human rights


Overall assessment

Summary

The speakers demonstrate strong consensus on the need for multi-layered, complementary AI governance approaches that bridge abstract principles with practical implementation. They agree on the importance of stakeholder engagement, capacity building for smaller entities, and the compatibility of innovation with detailed regulation.


Consensus level

High level of consensus with significant implications for AI governance, suggesting that the international community is converging on approaches that prioritize human rights while supporting innovation through clear frameworks and collaborative processes. The consensus spans technical, legal, and policy perspectives, indicating broad-based agreement on fundamental approaches to AI regulation.


Differences

Different viewpoints

Approach to AI regulation – principles vs detailed rules

Speakers

– Jasper Finke
– Murielle Popa Fabre

Arguments

Convention establishes fundamental principles including human dignity, autonomy, equality, non-discrimination, and transparency


China has developed detailed binding regulations with iterative layered approach, including central algorithm register and multiple approval steps


Summary

Finke advocates for abstract principles that can be specified later by national legislators, acknowledging time constraints led to principle-based approach. Popa Fabre presents China’s model of detailed, specific regulations with concrete requirements like 98% acceptability rates and multiple approval steps, suggesting detailed regulation doesn’t hinder innovation.


Topics

Legal and regulatory


Timeline and approach to regulatory development

Speakers

– Jasper Finke
– Murielle Popa Fabre

Arguments

Convention negotiated under significant time pressure in 1.5 years, representing a compromise between diverse global interests


China has developed detailed binding regulations with iterative layered approach, including central algorithm register and multiple approval steps


Summary

Finke emphasizes that time pressure necessitated compromise and abstract principles, while Popa Fabre demonstrates through China’s example that detailed, iterative regulation can be developed and implemented effectively with multiple layers of requirements.


Topics

Legal and regulatory


Unexpected differences

Innovation and regulation balance

Speakers

– Audience
– Jordi Ascensi Sala
– Murielle Popa Fabre

Arguments

Clear rules and methodology don’t hinder innovation but require broader context consideration beyond pure technical aspects


Innovation should focus on responsibility and creating good products that fit cognitive well-being and cultural demands


Explanation

While there was general agreement that regulation doesn’t hinder innovation, there was subtle disagreement on emphasis – some focused on clear rules providing frameworks for innovation, while others emphasized cultural and cognitive aspects of responsible innovation. This disagreement was unexpected as it emerged from audience questions rather than prepared presentations.


Topics

Economic | Development | Sociocultural


Overall assessment

Summary

The main disagreement centers on regulatory approach – whether to start with abstract principles that are later specified (Council of Europe approach) versus detailed, specific regulations from the outset (China model). There’s also disagreement on timeline constraints and their impact on regulatory quality.


Disagreement level

Low to moderate disagreement level. The speakers largely complement each other’s perspectives rather than directly opposing them. The disagreements are more about emphasis and approach rather than fundamental conflicts. This suggests a constructive environment for developing AI governance frameworks, with different models potentially serving different contexts and needs.




Takeaways

Key takeaways

The Council of Europe AI Framework Convention represents the first binding international treaty on AI, establishing fundamental principles for human rights, democracy, and rule of law protection


The Convention was successfully negotiated under significant time pressure (1.5 years) and takes a global approach, allowing non-Council of Europe members to participate


The HUDERIA methodology provides a practical bridge between abstract legal principles and real-world implementation through context-based risk analysis and stakeholder engagement


AI governance requires a multi-layered approach that balances innovation with public interest, focusing on making AI systems that fit cognitive well-being and cultural demands


Different countries and regions are taking varied approaches to AI regulation, with China implementing detailed binding regulations and iterative approval processes


The Convention works complementarily with other regulations like the EU AI Act, creating a network of instruments to regulate AI as a horizontal technology


Implementation success depends on capacity building and making methodologies accessible to small companies, municipalities, and public institutions


Resolutions and action items

The Committee on AI will continue work to specify abstract principles established in the Convention


Development of capacity building tools and training materials to help small developers and municipalities navigate compliance frameworks


Creation of a visual tool to guide users through the HUDERIA methodology process with definitions and explanations


Establishment of a common knowledge platform/library where similar AI implementation cases can be shared and referenced


Conducting meetings with municipalities, public institutions, and developers to understand what constitutes useful implementation tools


Development of an academy-style capacity planning tool combined with a case study library for practical guidance


Unresolved issues

How to achieve the optimal balance between investing in AI innovation and making AI accessible to the public


Whether the Convention will achieve a global ‘Strasbourg effect’ similar to the Brussels effect remains uncertain


Specific details of implementation tools and training materials are still being developed and refined


The challenge of keeping pace with rapidly evolving AI technology while maintaining consistent regulatory frameworks


How to effectively bridge the gap between technical and legal perspectives in AI system assessment


Ensuring the methodology remains manageable and doesn’t become overly bureaucratic for smaller entities


Suggested compromises

The Convention itself represents a compromise between diverse global interests and different regulatory approaches


Using principles-based approach rather than overly specific rules to accommodate different national implementation methods


Allowing flexibility for parties to specify principles according to their specific needs and interests while maintaining core human rights focus


Creating tools that can serve different user types (computer scientists, public procurement teams, municipalities) through adaptable interfaces


Balancing comprehensive risk assessment with practical usability for organizations with limited resources


Thought provoking comments

Is the convention perfect? Well, I’m afraid the answer is no, but which convention or which outcome of an international negotiation has ever been perfect? It was not just the four of us sitting in a room for one and a half years drawing up ideal rules on AI. The AI convention is a result of compromise, as all international agreements are… finalizing the negotiations last year, was not the end. No one understood it as the end, but more or less as a starting point.

Speaker

Jasper Finke


Reason

This comment is profoundly insightful because it reframes the entire discussion from perfectionism to pragmatism. Finke acknowledges the convention’s limitations while contextualizing them within the reality of international negotiations. His framing of the convention as a ‘starting point’ rather than an endpoint shifts the focus from criticism to evolution and continuous improvement.


Impact

This comment established the foundational tone for the entire discussion, setting realistic expectations and emphasizing the iterative nature of AI governance. It influenced subsequent speakers to focus on practical implementation rather than theoretical perfection, and created space for discussing complementary frameworks rather than competing ones.


The AI governance lasagna… is actually a multi-layered approach to AI governance… what is at stake here is to take principles and to make them land in reality… the problem is to make it eatable, because if you’ve tried lasagnas that don’t have the right structure, it’s really difficult to eat them.

Speaker

Murielle Popa Fabre


Reason

This metaphor is exceptionally thought-provoking because it transforms abstract regulatory concepts into tangible, relatable terms. The lasagna analogy brilliantly illustrates the challenge of bridging high-level principles with practical implementation, making complex governance structures accessible to diverse audiences.


Impact

This metaphor became a recurring theme throughout the discussion, with other panelists adopting and building upon it. It fundamentally changed how the conversation approached the gap between theory and practice, making the discussion more concrete and actionable. Jordi later extended the metaphor, showing how it provided a shared conceptual framework.


When we invented the train we invented the train accident… when we install a system we have to think about all of this and this is not a very rapid question, you have to think about it

Speaker

Jordi Ascensi Sala


Reason

This philosophical insight, referencing Paul Virilio, is deeply thought-provoking because it challenges the tech industry’s ‘move fast and break things’ mentality. It forces consideration of unintended consequences as inherent to technological innovation, not as afterthoughts.


Impact

This comment shifted the discussion toward a more reflective, cautious approach to AI deployment. It reinforced the importance of the HUDERIA methodology’s emphasis on ongoing assessment and stakeholder engagement, moving the conversation from reactive to proactive risk management.


So it is really courageous to take principles and transform them into measurable elements because principles are not quantitative, right? So this move between qualitative and quantitative and having a method in order to do it right for humans… this courage to tackle the question of transforming qualitative into quantitative in a world that is automatically dataifying and algorithmicizing.

Speaker

Murielle Popa Fabre


Reason

This observation is intellectually profound because it identifies a fundamental epistemological challenge in AI governance: how to quantify inherently qualitative human values. It recognizes the ‘courage’ required to bridge this gap, acknowledging both the necessity and difficulty of the task.


Impact

This comment elevated the discussion to a more philosophical level, helping participants understand why AI governance is so challenging and why frameworks like HUDERIA are necessary. It provided intellectual validation for the complex methodologies being discussed and positioned them as necessary innovations rather than bureaucratic obstacles.


When we went to the engineering schools or the computer science schools… they taught us how to build a bridge between A and B. I’m not thinking about what are the implications of building the bridge between A and B. I think that this conversation for the good of the profession is important to have.

Speaker

Jordi Ascensi Sala


Reason

This self-reflective critique of technical education is insightful because it identifies a fundamental gap in how technologists are trained. It acknowledges that technical competence without ethical consideration is insufficient for responsible AI development.


Impact

This comment bridged the gap between technical and ethical perspectives, making the discussion more inclusive and highlighting why interdisciplinary approaches like HUDERIA are essential. It also provided a personal, vulnerable moment that humanized the technical aspects of the discussion.


You create conversation among people. And this is a good thing… you are asking questions and you are… having this conversation, and this is a good tool to have this conversation.

Speaker

Jordi Ascensi Sala


Reason

This insight reframes regulatory frameworks not as bureaucratic burdens but as tools for democratic dialogue. It recognizes that the process of assessment itself has value beyond compliance, fostering ongoing stakeholder engagement and social learning.


Impact

This comment shifted the discussion from viewing regulation as constraint to seeing it as enablement of democratic participation. It reinforced the human-centered approach of the Council of Europe framework and distinguished it from more technocratic approaches.


Overall assessment

These key comments fundamentally shaped the discussion by establishing several crucial themes: pragmatic realism over perfectionism, the critical importance of bridging theory and practice, the need for ongoing dialogue and assessment, and the recognition that AI governance requires courage to tackle unprecedented challenges. The lasagna metaphor became a unifying conceptual framework that made complex ideas accessible, while the philosophical insights about technology and society elevated the discussion beyond mere technical implementation. Together, these comments created a narrative arc that moved from acknowledging limitations to embracing the iterative, collaborative nature of AI governance, ultimately positioning the Council of Europe’s approach as both necessary and innovative in addressing the human dimensions of AI regulation.


Follow-up questions

How to balance investment in innovation with making AI available to the public, particularly regarding premium AI services that stimulate innovation

Speaker

Martin Boteman (Audience member)


Explanation

This addresses the tension between encouraging AI development through premium services and ensuring public accessibility, which is crucial for equitable AI deployment


Will there be a ‘Strasbourg effect’ globally similar to the Brussels effect from EU regulations

Speaker

Jacques Berglinger (Audience member)


Explanation

This explores whether the Council of Europe’s AI Framework Convention will have global influence beyond its signatories, similar to how EU regulations affect global practices


How to support smaller AI developers, startups, and public sector innovators who lack capacity to navigate complex compliance frameworks

Speaker

Audience member with IT background


Explanation

This addresses the practical challenge of ensuring smaller entities can comply with AI regulations without being overwhelmed by complexity or cost


How to create effective capacity building tools and training materials for the HUDERIA methodology

Speaker

Jordi Ascensi Sala


Explanation

This is essential for making the methodology accessible and usable by various stakeholders, from small companies to municipalities


How to develop a common knowledge platform or library of cases for sharing HUDERIA implementation experiences

Speaker

Jordi Ascensi Sala


Explanation

This would help organizations learn from similar use cases and avoid duplicating assessment work across similar AI applications


How to bridge the gap between legal/policy language and technical implementation in AI governance

Speaker

Jordi Ascensi Sala


Explanation

This addresses the challenge of translating abstract legal principles into concrete technical practices that engineers and developers can implement


How to maintain relevance of AI governance frameworks given the rapid pace of technological change

Speaker

Multiple speakers (implied concern throughout discussion)


Explanation

This addresses the fundamental challenge of regulating a rapidly evolving technology while ensuring regulations remain effective and relevant


How to effectively transform qualitative principles into quantitative, measurable elements for AI assessment

Speaker

Murielle Popa Fabre


Explanation

This is crucial for making abstract human rights principles operational in technical AI systems and creating accountability mechanisms


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.