Session: Unpacking the EU AI Act

29 Apr 2024 13:15h - 14:30h


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Full session report

Exploring the EU’s Artificial Intelligence Act: Implications and International Influence Discussed at Diplo Foundation Event

At a comprehensive event hosted by the Diplo Foundation, Gabriele Mazzini from the European Commission's DG CONNECT provided a thorough exposition of the EU's Artificial Intelligence Act (AI Act). The event's goal was to demystify the AI Act and discuss its implications for various stakeholders, including diplomats, businesses, and the broader international community.

The AI Act is an internal market legislation that employs a risk-based framework to regulate AI systems within the EU. Its purpose is to harmonize standards across member states, preventing market fragmentation and ensuring the free movement of AI products within the EU. The Act identifies four categories of AI systems: prohibited AI practices, high-risk AI systems, systems with transparency obligations, and those with minimal or no additional rules. Prohibited practices include AI systems that deploy subliminal techniques beyond a person's consciousness to manipulate behavior.
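To make the four-tier logic concrete, here is a minimal sketch in Python. This is our own illustration: the tier names, boolean flags, and classify function are invented shorthand, not wording or a method from the Act, and real classification turns on detailed legal definitions and annexed lists.

```python
from enum import Enum, auto

class RiskTier(Enum):
    """Illustrative shorthand for the AI Act's four regulatory layers."""
    PROHIBITED = auto()    # unacceptable risk: banned outright
    HIGH_RISK = auto()     # allowed, subject to requirements and conformity assessment
    TRANSPARENCY = auto()  # allowed, subject to disclosure obligations
    MINIMAL = auto()       # no additional AI Act rules; voluntary codes possible

def classify(use_case: dict) -> RiskTier:
    """Toy classifier over hypothetical use-case flags."""
    if use_case.get("subliminal_manipulation"):
        return RiskTier.PROHIBITED
    if use_case.get("safety_component_of_regulated_product") or use_case.get("listed_in_annex"):
        return RiskTier.HIGH_RISK
    if use_case.get("interacts_with_humans") or use_case.get("generates_content"):
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL

# Example: a CV-screening tool falls under an annexed high-risk area
print(classify({"listed_in_annex": True}))  # RiskTier.HIGH_RISK
```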

A significant focus of the Act is on high-risk AI systems, which are subject to stringent requirements such as data governance, transparency, human oversight, robustness, and accuracy. These systems must undergo conformity assessments and adhere to the EU’s established requirements before being placed on the market. Additionally, the Act introduces new regulations for general-purpose AI models, particularly those posing systemic risks, which are subject to additional obligations like risk assessment and incident reporting.
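One concrete trigger for the systemic-risk tier, discussed later in the session, is training compute above 10^25 floating-point operations. A hedged back-of-the-envelope sketch follows; the 6 x parameters x tokens formula is a common engineering rule of thumb for dense-transformer training compute, not a method prescribed by the Act:

```python
EU_AI_ACT_THRESHOLD_FLOPS = 1e25      # presumption threshold cited in the session
US_EXEC_ORDER_THRESHOLD_FLOPS = 1e26  # higher threshold cited for the 2023 US order

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(training_flops: float) -> bool:
    """Compute-based presumption only; the AI Office can also designate
    models below the threshold based on criteria listed in an annex."""
    return training_flops >= EU_AI_ACT_THRESHOLD_FLOPS

# Hypothetical model: 100 billion parameters trained on 2 trillion tokens
flops = estimate_training_flops(100e9, 2e12)
print(f"{flops:.2e} FLOPs -> systemic risk presumed: {presumed_systemic_risk(flops)}")
# 1.20e+24 FLOPs -> systemic risk presumed: False
```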

The AI Act’s enforcement and governance will be managed primarily by member states, with the creation of an AI Board to facilitate cooperation between national authorities. An AI Office within the European Commission will enforce rules on general-purpose AI models and will be supported by a scientific panel and an advisory forum, ensuring technical expertise and stakeholder participation in the Act’s implementation.

During the event, concerns were raised regarding the clarity of some provisions and the practical challenges of implementing the Act at the national level. The impact on the competitiveness of EU companies, especially SMEs, was also discussed, with questions about how the Act might affect their ability to comply with its requirements.

The international implications of the AI Act were also a topic of discussion, with participants contemplating whether the Act would have a “Brussels effect” similar to the GDPR. While the Act is initially intended to affect only EU members, there is potential for broader geopolitical and digital regulatory dynamics to influence non-EU countries to align with the Act in the future.

In conclusion, the event underscored the need for further clarity and guidance on the Act’s implementation and its global impact. While the AI Act represents a significant regulatory step within the EU, its phased implementation allows time for stakeholders to prepare and adapt. The discussions highlighted the importance of aligning AI regulation with EU values, fostering innovation, and ensuring that AI systems are safe and non-discriminatory. The event provided a valuable platform for stakeholders to gain insights into the EU’s approach to AI regulation and to consider its potential worldwide influence.

Session transcript

Jovan Kurbalija:
For those of you who are new to Diplo's events, my name is Jovan Kurbalija, I'm Director of Diplo Foundation and Head of the Geneva Internet Platform, and I'm really honoured to welcome you to our event today with our guest of honour, whom you will meet soon and hear from, Gabriele Mazzini, Team Leader for the AI Act at DG CONNECT, European Commission. Today's event is aimed at unpacking the AI Act. I'm sure that you have been hearing a lot about it: those of you from diplomatic missions, to learn what you will report back to the capital, and whether you should do the same thing as with the GDPR and try to inspire new regulations; those of you from businesses, to see how to adjust to these business requirements, which will come into operation very, very soon. And we are really honoured that we are organising this in partnership with the delegation of the European Union, the European External Action Service, and we will be hearing very shortly from Radka Sibille, with a few words from the mission here. The plan for today is the following: we'll start with Radka, then I will introduce Gabriele, who will give us about a 20-25 minute introduction, then we'll have Sorina Teleanu, you know Sorina, who basically has another view of the AI Act from outside. With her analytical mind she read the 200-plus pages; I was basically puzzled, I said you have to read it, and she read it line by line, therefore she will reflect from that perspective. And after that, having these two perspectives, from Gabriele, who is the architect of the AI Act, and from Sorina, who is seeing it from outside, we'll open the floor for discussion: discussion on diplomatic aspects, business aspects, technological aspects, and whatever you wanted to ask about the AI Act but didn't dare to, paraphrasing the famous movie title. Radka, tell us, what do you expect from today?

Radka Sibille:
Thank you, thank you so much, and good afternoon everyone. I would just really like to thank the Diplo Foundation for all the amazing work that they do on digital issues. I think this is really a place where all of us in Geneva gather to learn everything that's going on in the digital space, and we learn it in a sort of safe space where everybody can ask questions, so we are really grateful for that, and thank you very much for this event. We have seen over the past months that there has been a lot of interest in the EU's work on AI. We hosted another event on AI with another colleague from DG CONNECT last September. But that was, of course, during the period when the AI Act was still in the making; it was going into the final phase. But now we already have a final product, we have a text. So we are very much looking forward to hearing from Gabriele, who is at the origins of the whole process, together with Diplo's analysis, which we are going to hear today. Thank you so much.

Jovan Kurbalija:
Thank you, Radka. Well, Gabriele, the floor is yours. Now, enlighten us about this key act and key development in the field, please.

Gabriele Mazzini:
Thank you. First of all, good afternoon, everybody. And thank you, Jovan, thank you, Radka, for having me here with you today. So I'll try to keep it to 20, 25 minutes. I realize, just looking at the slides now, I may have too many. So let's see, maybe I have to skip some. But it's probably more interesting to have a conversation as opposed to presenting. The Act is so long, I could probably speak for half a day and still not get through even half of it. So first of all, a couple of words of context. This is an international community, and I assume most of you are familiar with the EU and its functioning. But I realize, also when I speak to international forums, that people are sometimes confused about who does what in the EU. And now we are in between two phases. The first is about the legislation. In legislation, you have three actors that are operating. There is the Commission, where I work, and the Commission is the initiator of legislation. So without the Commission putting forward a proposal, there is essentially no legislation that can be adopted. And then you have Parliament and Council. Parliament represents the people, so this is direct elections, and the Council represents the member states, so the governments. And they function like a bicameral assembly, so they need to agree on the text. This is where we are now. They have agreed on a text that is about to be published between June and July. There is a number of technical and legal revisions going on right now, but essentially the text is finalized. And then there is the new phase that opens up, the implementation phase. The implementation of the law is primarily the responsibility of the member states. So with some exceptions where the EU has direct enforcement powers, most of the laws in the EU are implemented by the member states through their authorities. But the Commission, which functions as the executive, so we are the initiator of the process but also the executive, also has quite an important role in supporting implementation by, for instance, adopting what we call tertiary legislation, like delegated acts and implementing acts, so elements to supplement the basic legislation, and also by providing guidance. All this is done by the Commission under its own responsibility, with some degree of oversight by the two legislative branches, limited I would say, but some oversight sometimes happens. So let me come here so I can see myself; I have a really bad memory, so if I don't see the slides I forget. So what is the AI Act? Fundamentally, the AI Act is a classic internal market legislation. For those of you who are familiar with product legislation in the EU, it's the type of legislation that we adopt to regulate products, like for instance medical devices, toys, machinery and so on. This is the type of legislation that typically comes with a CE mark. The CE mark signals that the product has been developed in accordance with the EU legislation, and essentially the advantage of the CE mark is that the product can freely circulate. The origin of product legislation in the EU is to eliminate barriers to the circulation of products within the EU. So we want to make sure that countries across the EU have the same regulatory standards on how to develop products, because if those standards are different, then you have barriers to circulation. So that's essentially the logic. And so for the AI Act, we chose to go for the same logic when it comes to high-risk AI systems.
I won't dig deeper into the rationale; there's a little bit of a conversation to be had there, but maybe we can talk about it later. So the logic is that of the NLF, the New Legislative Framework, which is the same type of framework applicable to these products. Not all product legislation actually follows that framework, but this is the main one, I would say, where AI systems can be relevant. Another important characteristic is the horizontal approach. This is quite distinctive of the AI Act, and to my knowledge, the AI Act is the first legislation on AI, but I would say the first comprehensive legislation on AI. Comprehensive because it's horizontal: it applies to almost all sectors and areas of economic and social activity. There are only some areas that are excluded, also because of lack of EU competence, or limited competence I should say, like in matters of national security, military or defense. But otherwise it can apply essentially to all uses of AI in all sectors. We had to take into account, however, the fact that a lot of other legislation exists in the EU that directly or indirectly applies to AI, and therefore in structuring the AI Act we have taken that into account. Another important element is the fact that the AI Act does not impact other existing legislation; it's without prejudice to it. The first of the major examples is the GDPR. This is important because when we conceived the AI Act we didn't want to duplicate or reinvent the wheel, because the way developers or deployers use or deploy AI, they have to be compliant with the GDPR in many respects. So the AI Act needed to enter into a space that would not necessarily impact the GDPR. The risk-based approach, I have a slide later, let me move directly to that so we can advance a little bit faster, is the fundamental idea of the AI Act. Of all the slides that I present, I would say this is the most important one. If you have to go home with one concept, this is the concept. And essentially the innovation we try to bring with this risk-based approach is, first, we don't want to regulate the technology as such. In fact, we do not regulate all AI. We regulate only the use of the technology, depending on the use cases, and the type of regulation depends on the type of risk that the AI generates. So if you have use cases where the risk generated by the use of the system is considered by the legislature not acceptable, the type of regulatory response is a prohibition. The Commission had foreseen four use cases for prohibited AI practices. One was a use case with an exception: the use of remote biometric identification systems, so biometric systems for identification in the context of law enforcement, in real time and in publicly accessible spaces; you see there are a lot of elements. But this was a very specific case of prohibition with exceptions, and this was one of the many questions that were heavily discussed, because it clearly touched a lot of fears among people: am I being constantly monitored, can I be controlled by the police, and so on. So we had four, and now we have eight use cases of prohibitions following the trilogue negotiations. Then we have the case of high-risk. High-risk is the core of the AI Act; 90% of the provisions of the AI Act are about high-risk. And this is where the product legislation logic comes in.
So the CE mark is related to high-risk AI systems. A system is high-risk when it creates a risk, but that risk is also accompanied by some benefits for the users of AI. When we think about the prohibitions, essentially the idea is we don't need this technology in these use cases at all, so it must be prohibited. When it comes to high-risk, take for instance the case of a medical device, or the case of an algorithm recommending loans or used for credit scoring. There are benefits in using those systems, but in order to fully leverage the benefits we need to make sure we can also cater for the risks. That's why the system is classified as high-risk, and that is why the AI Act provides that the system needs to be developed in accordance with certain requirements. When I talk about requirements, I refer to things like data governance, so good data sets, transparency, human oversight, robustness, accuracy. The AI Act contains five articles with the requirements for high-risk AI systems, and here we have not invented anything. There is quite a large consensus, I would say globally, around what characteristics an AI system needs to meet in order to be compliant, or to be trustworthy. And so essentially the AI Act has taken inspiration from those principles, mostly developed in the EU by a group of experts which advised the Commission between 2019 and 2020, the High-Level Expert Group on AI. And so we borrowed some of the requirements from their work. And on top of complying with the requirements, as a manufacturer, as a provider of the system, you have to put the CE mark on the system. So essentially you have to do what we call conformity assessment: verifying compliance before placing the system on the market. The third use case is about cases where the system poses risks that are somewhat linked to a lack of transparency. We had proposed three use cases. One is, for instance, the classic chatbot. You want to make sure that people who are interacting with an AI system know that they are interacting with an AI system and not with another human, especially as we have systems that perform so well that people can get deceived. So it's a basic need of human dignity, I would say, to know that you're interacting with a chatbot as opposed to another human. Or we had foreseen an obligation to disclose the fact that you are exposed to certain biometric systems, like emotion recognition or biometric categorization. And the third use case that we had foreseen was about generated content. We want to make sure that generated content can be labeled as generated content, so that people, again, are not deceived. In the negotiations, two other use cases were added, but we can perhaps discuss this in detail in the questions. So you have binding rules in these first three layers: prohibited AI, high risk, and transparency. Then you have the fourth layer. Essentially, everything that does not fall under the first three is not subject to any additional rules by the AI Act. But there may be codes of practice or codes of conduct. So essentially, there is a possibility for developers to still develop AI applying some of the principles for high risk, but without that being compulsory.
So I want to spend a couple more minutes on high risk, because as I said, this is the most important part of the AI Act, and this is where, to be frank, the most compliance burden exists for companies. What are high-risk AI systems? There are two ways in which systems can be classified as high risk. The first one covers systems that are components of products, notably of products that are already subject to EU legislation. Think for instance about medical devices, toys, machinery, radio equipment. The digitalization of products means that some of these digital elements can be AI. To the extent the AI represents a safety component of one of these products, not any component, a safety component, and the product as a whole is subject to third-party certification, then the AI system becomes high risk. So the logic here is to create a link with the riskiness of the product according to sectoral legislation, because sectoral legislation will tell, for instance for a medical device, what the procedure for conformity assessment is. So there is a risk classification of medical devices in sectoral legislation, the Medical Device Regulation. What we did was try to leverage, essentially, the risk classification of the product as a whole in sectoral legislation to determine whether the AI that is a safety component of that product should be classified as high risk or not. For the second way in which systems can be classified as high risk, we use a different legislative technique. First of all, these are systems that are not related to products. They are typically stand-alone systems, also purely software-based systems. And what we did was identify a number of areas that you see on the slides, like employment, education, migration, law enforcement. These are broad areas. And then, in the annex, under each area, specific AI systems are listed. So for instance, in employment, there is a reference to AI systems that are used to screen candidates for a job, or to evaluate the performance of workers. Therefore, it's not the whole sector of employment that is high risk, but only the systems that are specifically mentioned under each area. We did that because while the broad areas, the eight areas, are determined by the legislator, so the Commission, as an executive, cannot change them, the Commission will be able to update the use cases under each area, so we can add or remove use cases. This was essentially a way to ensure the future-proofness of the AI Act, to make sure that it remains adapted. Okay, now let's move to a somewhat different but very important area: general-purpose AI models and systems. This is an area that is completely new. The Commission did not foresee any rules around general-purpose AI. In fact, it doesn't fit into the risk-based pyramid. I mean, I've already seen people trying to fit it in somewhere, but it needs a somewhat different measure. The idea here is something that can be explained by the timeline: the proposal for the AI Act was adopted in April 2021, and then in December 2022 we had ChatGPT. And both co-legislators, in different manners, felt the need to ensure somehow that the AI Act could deal with this type of emerging technology, which became known to everyone thanks to ChatGPT. And this is the overall result. It's fairly complex, but I'll try to keep it simple. So, general-purpose AI models.
So the focus is on general-purpose AI models. Not, strictly speaking, the chatbot, but the models powering the chatbot, so GPT-4 to be clear. And the rules are based on a two-tier system. There are some basic rules that apply to all general-purpose AI models. These rules are around technical documentation, around transparency towards downstream providers, and around compliance with certain copyright-related obligations. And these apply to all general-purpose models. Then there is a second layer of rules, on top of the first one, which apply only to those models that pose a systemic risk. How do you determine when a model poses a systemic risk? There are two ways. The first one is about looking essentially at the compute used for training the model, in this case a threshold of 10^25 FLOPs, or floating-point operations. You may know that in the U.S. they adopted the Executive Order by President Biden in October last year. They had some specific rules also on general-purpose AI models, but they set a higher threshold of 10^26. So in the EU we have this lower threshold. But this is not the only way by which certain models can be classified as models with systemic risk. The other one is a designation by the AI Office. The AI Office, by the way, is the Commission. Based on a number of criteria that are in an annex, essentially, regardless of the number of FLOPs, the Commission will be in a position to designate models as models with systemic risk and therefore trigger these additional obligations, which are around risk assessment and mitigation, incident reporting, and cybersecurity. So when you look at models, of course, you immediately have the question about open source. And these rules apply also to open-source models, except the technical documentation and transparency obligations of the lower tier. An important element regarding implementation, about how these rules will actually be implemented, is codes of practice. The legislator foresees that the Commission shall develop codes of practice that can be used by providers of the models to ensure or demonstrate compliance with the rules. A couple of words around enforcement. As I said, most of the rules of the AI Act, as in EU law more generally, are the responsibility of the national competent authorities. These are the notifying authorities, which monitor the third-party certifiers, or notified bodies, and the market surveillance authorities, which are the authorities that are supposed to police the market once the systems are on the market. These authorities, the market surveillance authorities, have powers under the Market Surveillance Regulation. They have a number of quite important powers, including access to data and documentation, and in certain cases, also access to the source code. In the proposal, we had also foreseen the AI Board, which is essentially a forum for cooperation between the national market surveillance authorities. So as I said, responsibility for compliance is at member state level. In order to ensure uniform compliance, we want to make sure that there is a forum for the authorities to convene and align implementation practices. This is the role of the AI Board. All the other three boxes that you see were added during the negotiations by the co-legislators as a consequence of regulating general-purpose AI models.
Essentially, the regulation of general-purpose AI models has made the enforcement and governance system of the AI Act more complex, more articulated, by having an AI Office, which is a part of the Commission that has responsibility for enforcing those rules around general-purpose AI models. And the AI Office will be supported by two other bodies, but they're not separate bodies; they are mostly bodies within the Commission, or groups advising the Commission. One is a scientific panel of experts that will help the Commission in enforcing the rules around general-purpose AI, including, for instance, triggering alerts on whether certain models should be designated as models with systemic risk. And then an advisory forum that has the role of ensuring stakeholder participation. Briefly, this is a slide I added to my standard presentation, because I thought this is really the setting where it's important to discuss the Brussels effect. You're probably mostly familiar with this concept. Many people ask me: so what do you think, will the AI Act trigger a Brussels effect? What will the other countries around the world do? Right now, it's really difficult to say, but there are at least some conversations that need to happen when we think about the Brussels effect. So first of all, what are the drivers of a Brussels effect? In my view, there are three points. The first one is around business. How do businesses react, in particular global actors? Will they say: okay, I want to access new markets, and the EU market is the only one that has rules such as the AI Act; if I want to access the market, of course I have to comply, notably for high risk. Once I comply with the AI Act in the EU, will I want to comply with the same rules outside the EU as a global actor? Does it make sense from a purely business point of view for me? So this is the first driver in my view, and it mostly impacts, of course, big operators, so major companies, multinational companies. Then there are other considerations on the policy side. As a country that is not in the EU, how do I want to deal with AI in my jurisdiction? Ultimately, the AI Act is the EU's choice. It represents the EU's choice of how to handle AI in the EU. I was tasked to think about the legal basis of the AI Act. For those of you who know a little bit more about EU law, it's Article 114; it's internal market. When I was asked to do a concept note about the AI Act, I thought about the EU internal market, the EU values, the EU stakeholders. I did not necessarily think about how the AI Act would impact business outside the EU; my task was focused on finding a reasonable or appropriate regulation of AI in the EU. Therefore, I don't know; other countries will want to make their own considerations. How do I want AI regulated in my own country? Do I want to align or not align? I think these are really policy considerations; they're the responsibility of each country. And then there are geopolitical considerations that partly overlap, I guess, with the policy considerations: some of these alignments may be part of belonging to a global community, a bigger community, in being aligned or not aligned with the EU in certain ways of dealing with AI.
And certainly this is something that we see, for instance, at the level of the G7, where the countries belonging to the G7 have started reflecting on codes of conduct for AI, notably for generative AI, impacting, for instance, things around disinformation. Of course, this is also part of geopolitics. And then, in my view, there's another aspect that may have to be considered when we think about the Brussels effect, which is: a Brussels effect of exactly what? Because the AI Act, as you see, with these three layers, is a bit different from the GDPR. There are rules around substance, and here we have the rules on prohibited AI, high-risk AI, transparency. Of course, high-risk is the one that may trigger the Brussels effect. But what about the prohibitions? What about the transparency? So maybe there are distinctions to be made. And then, of course, does it make sense to have a Brussels effect also on governance and enforcement? We chose there, as I said, the approach of product legislation; we built a governance system around product legislation, and we entrusted the responsibility to the authorities that typically do product legislation enforcement. But of course, this is something where I guess each country has its own approach. And finally, that's the last slide, I just wanted to show quickly what's coming. The entry into force happens 20 days after publication, and we're going to have publication, as I said, probably between June and July. After 20 days, it enters into force, but that doesn't mean it's immediately applicable. It's not, actually. You have a phased-in application, depending on the type of provisions. After the first six months, the provisions around prohibited AI will be applicable. After 12 months, the rules around general-purpose AI, this new chapter that was added. Then most provisions will enter into application after 24 months, with only one exception for high-risk systems related to products, after 36 months. And that was the last slide. So hopefully I didn't go on too long. I think I did. Sorry. I'm done.
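As an aside to the transcript, the phased timeline described above reduces to simple date arithmetic. A small sketch follows; the publication date is an assumption for illustration only, since the speaker says only "between June and July", and the milestone labels are our own summaries:

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Shift a date by whole months, clamping the day where needed."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    days_in_month = [31, 29 if (year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)) else 28,
                     31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, days_in_month))

publication = date(2024, 7, 1)                       # assumed publication date
entry_into_force = publication + timedelta(days=20)  # 20 days after publication

milestones = {
    "prohibited AI practices": 6,
    "general-purpose AI rules": 12,
    "most other provisions": 24,
    "high-risk systems related to regulated products": 36,
}
for label, months in milestones.items():
    print(f"{label}: applicable from {add_months(entry_into_force, months)}")
```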

Jovan Kurbalija:
Thank you, Gabriele, for explaining it in a really simple way, and also for directly addressing the concerns of our audience here, both diplomats, but also businesses and academics; I think that was very, very useful. I was worried when Sorina told me about the 200-plus pages, about understanding it, but that's now rather simple. And now we'll increase complexity with Sorina. Sorina read the document carefully and made notes, and in her style, for those of you who are following Sorina's work, we will now see what her reflections are, and then we'll open the floor for comments, questions and discussion. I'm sure there are many questions, especially for Gabriele and his experience from inside the system, these architectural aspects which may not be seen when we are reading just the final text. Sorina?

Sorina Teleanu:
Thank you, Jovan. Good afternoon, everyone. Yes, I'm going to complicate things a little, so bear with me as I'm doing that. I did read the whole document more than once, both the initial versions and then what came out of all these negotiations, and I think I'm still at the stage where I have more questions than anything else. So I'm going to give a few examples of questions, basically, and then let's have a discussion and see what else you would like to raise. The comments I'm going to make are not necessarily in a logical order; it's just how I read the document and how I added my own notes. I didn't have time to actually prepare my slides, so I'm just going to read for you. I think you would also agree that there is a lack of clarity in many places in the AI Act, and that's going to make the implementation and enforcement a bit difficult. And I do have a few favorite elements in the Act that I did want to read for you, just to give a sense of what I mean when I say it's not particularly clear. You talked about this general-purpose AI model with systemic risk, that's how it's been called. So I tried to understand, okay, what exactly is a general-purpose AI model with systemic risk? And then I came across this definition: a general-purpose AI model is classified as one with systemic risk if it has high-impact capabilities. And then you ask, what is a high-impact capability? And it says high-impact capabilities in general-purpose AI models means capabilities that match or exceed the capabilities recorded in the most advanced general-purpose AI models. And here I'm completely lost. What do you mean, exceed the capabilities in the most advanced general-purpose AI model? How are you going to actually put this in practice? What does it mean, most advanced? More advanced yesterday, today, tomorrow? Exceeding when? I worked for the Romanian parliament for nine years, so I do have a bit of experience in writing laws, or trying to understand laws, and to me some of these things are very confusing, if that makes any sense. And I'm going to give you one more example. There is a list of prohibited AI systems, systems we are not going to allow, and one of these reads like this: it is prohibited, the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, with the objective or effect of materially distorting the behavior of a person. What does it mean, materially distorting? By appreciably impairing the ability to make an informed decision. Appreciably impairing, okay. Thereby causing them to take a decision that they would not have otherwise taken. How do you know if that person would or would not have taken that decision? So, just two small examples of things where I think it's going to be very interesting to see how those provisions would exactly be implemented, when you decide to what extent systems fall within those definitions or provisions or not. You also showed that map with the governance system. I think that's going to be very complex and complicated, and what worries me most is the implementation at national level, because we're assuming national authorities do have resources and capacities, and still, for my own country, I don't think we're there. So that's also going to be very interesting to see, how it's going to be enforced at the national level specifically.
I think there has been a lot of talk in public about the impact on EU companies, and in particular the small and medium ones, so let's see if we can have a discussion on that as well. I think they will struggle a bit, even with understanding what applies to them and what does not. So let's see how that goes as well. Again, it's not always completely clear what is being enforced and monitored and what is not, and I'll give you another example. There is this provision somewhere in the beginning which says that providers and deployers of AI systems shall take measures to ensure, to the best extent, a sufficient level of AI literacy of their staff. But then, is anyone going to look into whether this is being put in place or not? I'm not completely sure; I found the corresponding part towards the end of the regulation. Then, yes, there are quite a few provisions on AI-generated content, and I think that's good, trying to bring more transparency and those kinds of things. But here also, some clarity might have been good, and I'll give you one more example. It says: deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest. Informing the public on matters of public interest; again, I don't know what that means. Is Diplo an entity that falls within this article? What it has to do is to disclose that the text has been artificially generated or manipulated. It's really important to understand who this article applies to, because then the monitoring and enforcement is quite interesting, and you can get a fine of up to 50,000, no, 50 million euros if you're breaking this. So yeah, who does it apply to? Then there are quite a lot of provisions on evaluation and testing of AI models, and I think it's a bit worrisome that in many places, or at least that's how I read it, this is left to the companies themselves to do the testing and the evaluation, unless you fall within those very few specific GPAI models with systemic risk, or the other categories where I think it's the AI Office who can request the testing. For the models, yes. All right. Then, on the positive side again, there are a few interesting notes on localization when it comes to data sets. There is, for instance, this provision which says data sets should take into account, to the extent required by the intended purpose, the characteristics or elements that are particular to the specific geographical, contextual, behavioral, or functional settings. So, trying to bring a bit of localization to the data sets. But then here, again, I'm a little confused when it comes to the implementation. What I just read is the obligation, but then, as you read more through the Act, it says high-risk AI systems that have been trained or tested on data reflecting these specific geographical, behavioral, and contextual settings are presumed to be in compliance with the article above. So it's a bit: you're first requiring it, then you're saying if it is done, it is in compliance, but how are you checking this compliance? I guess it's, again, up to the, how are they called, notified bodies? Yes, notified bodies. Sorry, more confusion with the actual implementation. There have been quite a lot of discussions about what we're doing with copyright and the implementation of intellectual property rights in the context of generative AI.
I think there's only one provision, or at least I only came across one provision, which places an obligation on providers of general-purpose AI models, saying that they should put in place a policy to respect Union copyright law. Okay. And there is this interesting provision on transparency. Again, these providers are obliged to make publicly available a sufficiently detailed summary about the content used for training. I think that's really interesting, and I would like to see providers actually putting this in practice. There are a few exemptions for open-source models, but here also it's interesting to look at what is written after the exemption. For instance, the exemptions don't apply if the open-source models are also general-purpose AI models. Maybe I should stop. One final positive thing. Those of you who have heard me talk before about standards know we do talk a lot about the relevance of standards and how we think they should be given a bit more attention, including in the policy space. The AI Act does place quite a lot of emphasis on the value of standards, and compliance with standards adopted by EU standardization bodies will be considered as compliance with certain provisions. And that's good, although there are a few interesting requirements for participants in European standardization processes. But let me not criticize more, and let's have a discussion. Thank you.

Jovan Kurbalija:
Thank you. Thank you, Sorina, for this view from outside. We had Sorina's usual x-ray of the text, but I can tell you that she was gentle compared to her unpacking of the Global Digital Compact proposal; this was much, much more positive, sort of. And now, having set the stage with our keynote speaker, Gabriele, and with the reflections from Sorina on the text, I think it's time for questions. I have many, many questions, but since I had a really inspiring dinner last night with Gabriele, where we went through so many interesting things from philosophy to technology, I won't monopolize the mic, and I will pass the floor to you: questions from the business perspective, from the governance and diplomatic perspective; there are many, many issues that Gabriele's presentation triggered, and of course Sorina's comments. The floor is open for questions and comments, and later on the chat continues over some refreshments, with some food and drinks. We have Raymond here, it is Raymond Sanders.

Audience:
Yeah, thank you. I heard from Gabriele that there were exemptions, and one I'm particularly interested in is the military component. Just across the street, we have the WTO, and there are disputes, for instance, when countries call upon their sovereign right to consider certain goods to be encroaching on their sovereignty, or posing a risk, and that in itself is a long debate: what is really the risk to a country's security? Now, if it's not included, wouldn't that leave a door open for countries to interpret whatever they like as a security risk, so that it would fall within the military exemption clause?

Jovan Kurbalija:
Raymond Sanders is a specialist in business diplomacy and also in the WTO; he closely follows the WTO discussions. Thank you, Raymond. Gabriele?

Gabriele Mazzini:
So that's a slide I skipped, so maybe I'm bringing it back so you can get a bit more context. Essentially, indeed, when you look at the areas that are excluded: all areas outside the scope of EU law, but this is tautological, and it's part of the Treaty on European Union. EU law should not affect member state competence on national security; that's in the treaty. The interpretation of this formulation is highly debated, and the institutions have different views. As you can imagine, the Parliament has a very restrictive view; the Commission is also rather restrictive; but the Council has a broader view. So this has been a topic of intense discussion. I have to say, when it comes to military and defense, which is not necessarily national security, this is also where our proposal was intentional: in our proposal, we excluded military from the scope, because we took for granted that national security is part of the primary law of the treaty, so we don't even need to mention it. But military, and I'm not an expert here in EU competences, is essentially an area where there are some competences for the EU. Yet military and defense was, for us, an area where, thinking for instance about autonomous weapons, we were really not comfortable putting forward rules with an internal market legislation. We felt that, in any case, this is an area that impacts much broader interests than what is essentially consumer protection legislation, so it needs to be handled at a different level. That's why we introduced a specific exclusion. But then when it comes to national security, that was the subject of an intense debate, and indeed the member states wanted to add a specific formulation around the exclusion for those topics. And so we ended up with this type of formulation: the AI Act does not apply to AI placed on the market, put into service or used exclusively for military, defense or national security purposes. So hopefully this gives a bit more context.

Jovan Kurbalija:
We can already see problems with dual-use AI, which is happening now in the current conflicts, where certain AI is developed for commercial uses but then also used for military purposes. Anybody here with a question? Would you just introduce yourself?

Audience:
Of course. Thank you so much. I'm Haifa. I'm from ITC, so just across the street. My question would be targeting more the competitiveness of non-EU SMEs, mainly those having their own languages, developing in their own languages. If you're not familiar with ITC, the International Trade Centre, we support SMEs, trading SMEs, and I'm from the tech development sector. We have seen it with the GDPR: we need to build the capacity of our companies so that they keep their competitiveness outside their country and towards EU buyers. Do you see the implications of the AI Act as big as the GDPR, or a bit less, given its scope?

Gabriele Mazzini:
I think it really depends on what the business of the company is. The GDPR impact was linked to companies around the world processing personal data affecting EU citizens. In most cases, I would say, EU citizens live in the EU, so that is where you have the link with the internal market, and it's about the processing of personal data. Here, it is about what your business actually is. If your business is around high risk, in that case you will be heavily impacted. If your business is not around high risk, maybe not. That is why this is a product legislation; it is not a GDPR-like legislation. People like to mention the Brussels effect for the AI Act, but indeed the logic of the AI Act is not the same as the GDPR; it's totally different. So, in my view, it depends.

Audience:
Hi. I'm Aisha Khadija. I work on enterprise AI at ETH. I think Sorina was really clear, and for me, the key is going to be the actual application and enforcement; I think that's where we're going to see the teeth in this legislation. So you mentioned, I think on another slide, the AI Office, the AI Board, the advisory forum, the scientific panel. Could you give a little bit more of an overview of these bodies, vis-a-vis also what is happening at the national competent authority level? So that's the first thing: to explain what that is and how it's going to work. And the second thing, around which I think there's a lot of discussion happening right now, is how you are going to have the right people, because this obviously is a very difficult interpretation challenge and an enforcement challenge, which requires structure, people, competencies. So it would be good to know what the plan is and what's happening around that, because I think that's very important not only for those that are going to be subject to this in the EU, but also outside. So if you could just elaborate on that, that would be very useful, thank you.

Gabriele Mazzini:
Yeah, I'll do my best. So, giving you a little bit more detail on these bodies: initially, indeed, as I said, we had a much simpler structure. And with the national competent authorities, we wanted to be as little disruptive as possible, by saying: okay, let's not create new authorities. Member states are already used to dealing with notifying authorities and market surveillance authorities, because each product legislation, medical devices, toys, machinery, all of those, has those authorities. So the AI Act would just require that member states also have notifying authorities, which can essentially be the same ones that are doing the same job for other sectoral legislation. The job of the notifying authorities is to monitor notified bodies. Notified bodies need different technical competencies depending on the product, because if you are notified for toys, you may not be notified for medical devices, or you may not be notified for machinery. Or you may be a body that is notified for all of them. Ultimately, this depends on the competencies; you need to meet the competencies that each regulation requires of you. But the authorities are essentially the same; they just need to be able to monitor that the notified bodies are up to the task, and the member states are familiar with that. The same goes for market surveillance, where it's also the choice of the member states to say: okay, I want, for instance, the same ministry for medical devices and other equipment. It's their choice. And when we thought about the AI Act, we wanted to make sure that the member states maintain that choice, because indeed it's up to them, depending on the size of the member state and its ability to cope with this, to see what is the best way to fulfil their responsibilities. That's essentially what we focus on: you need to have these responsibilities, but then how you organize yourself, how you divide your responsibilities, whether you want to concentrate or to separate, is your choice, essentially also as a matter of competencies. Do you want, for instance, to have AI experts in different sectoral authorities, like financial products or law enforcement? Or do you want to concentrate that expertise in one single authority? So there is a debate right now in the member states about whether they should have one single authority operating as the market surveillance authority for all AI systems or not. They could also theoretically say: no, I want the Ministry of Health for, let's say, medical devices, and then I want the Ministry of Interior for law enforcement, and so on. But of course, then they would need to duplicate the expertise, so to say. I think the best thing we could do, also in light of the principle of subsidiarity, was to let the member states decide by themselves. And then, when it comes to this new chapter around general-purpose AI models, there is certainly a question also for the Commission, for us: how are we going to enforce and supervise the rules on general-purpose AI models? Do we have the competencies? So the Commission is now recruiting new people into the AI Office, around 80 persons. Some of the persons will be repurposed; for instance, the unit where I'm working now is basically already part of the AI Office. But we'll hire new individuals, including persons with a scientific background.
So as to be able, indeed, to fulfil the tasks that we are given. And that is also why there is a big role for the scientific panel, which is meant to support the AI Office, especially when it comes to technical expertise. We don't yet have an opening for this, or a procedure for setting it up; the AI Act is relatively silent. But there will be, of course, a publication, an open call for persons to apply and sit on these bodies. Indeed, the scientific panel will be much more relevant for enforcing the rules, especially on general-purpose AI models, whereas the advisory forum, I think, will really have a more advisory role; they will somehow work together. But from the point of view of the expertise you're mentioning, I would say the scientific panel will be more relevant.

Audience:
Will that be with EU citizens only, or also people from outside, for example, Swiss?

Gabriele Mazzini:
That's a good question. I think we haven't decided; the Act is silent on that point. I think this will be a decision by the Commission. We are in the process of preparing an implementing act to set up the panel. Frankly, I'm not sure about this question; I would need to check.

Jovan Kurbalija:
Before I pass the floor for the next question: there is a bit of panic in businesses, some businesses, when you read what's going on, about how they are going to be reviewed. But after listening to you, basically, the message is: don't worry for the time being, we are equally confused about what we are trying to do. And in the meantime, try to learn a bit about the AI business, about what's going on in AI. It's not something which will happen tomorrow, figuratively tomorrow; there is time to prepare for these things. When I discuss with businesses in Geneva, or when I was in Malta with a startup community, they said: what are we going to do, should we train new people, should we have people with an MBA in AI? There was a bit of panic. But listening to you carefully, there is no need for that panic. Please, correct me if I'm wrong.

Gabriele Mazzini:
I didn’t say that we are equally confused. I mean, I need to put this on record.

Jovan Kurbalija:
No, no, no, no. I refer to the group.

Gabriele Mazzini:
But no, I can relate to what you're saying. This is also why there is a phased implementation, because if this were applicable tomorrow, I think we would be in trouble. But this is something, honestly, we knew since the beginning. When we created the AI Act, we knew we were regulating something that was not regulated before. It was a choice. Extending, for instance, all the machinery around product legislation to regulating software was a novelty. It was a choice we made intentionally, but knowing that there was a price to pay in terms of implementation. And that is why, since the beginning, and I don't want to discuss harmonized standards now, they were mentioned by Sorina before, the Commission gave a mandate to the European standardization organizations to start developing standards even before the AI Act was adopted. Because typically, the Commission gives a mandate once the act is there: okay, now we have the AI Act, these are the requirements, standardization organizations, start working on the technical standards. But we did it even before, right after the adoption of the proposal, because we knew that this would take time. So the expectation is that by the time those rules become applicable, and notably, as I said, high risk is where the most compliance burden exists, things will be in place. When it comes to general-purpose AI, and this is certainly a challenge for the Commission, this has to be in place even before. We have only 12 months, which is certainly a challenge because, as you may know, this is also the area that is moving the most.

Jovan Kurbalija:
Thank you. Thank you, Gabriele. We have one more question, and two more questions. Three more questions. Well, that will be three more questions, and then we continue with the chat during the refreshments.

Audience:
Thank you so much. I'm Shu Wang from Oxford University. I'm a law school student, and also a former student here who studied technology, so I'm very interested in this topic. My question is mainly about Article 2 and the Brussels effect, because we know that instruments like the GDPR have exhibited a certain Brussels effect. From my perspective, compared to the GDPR, the territorial scope of the AI Act is a little bit different. The Act, in Article 2, prescribes that if the provider is located within the EU and the AI product is sold outside the EU, then the AI Act is not applicable. A similar situation involving personal data, however, falls under the scope of the GDPR. So I want to ask whether the GDPR was referred to when you drafted the AI Act in terms of the territorial scope, and if so, why would you exclude the specific scenario that I mentioned from the AI Act? This is the first question.

Gabriele Mazzini:
Sorry, exclude what? Can you repeat that?

Audience:
Exclude the scenario where the AI company is established and located within the EU, while the AI products are sold outside the EU. This is not included in the scope of the AI Act, but it falls within the scope of the GDPR if it concerns personal data. So I was wondering why there is a difference; why would you exclude it?

Gabriele Mazzini:
Yeah, because the AI Act is not based on the GDPR; it's a different logic. The logic is that we care about regulating systems that are placed on the market and thereby affecting the citizens. Paradoxically, if you have an AI company in Europe developing a high-risk AI system but not offering it on the EU market, selling it instead to another country, it would not be in scope. So, EU companies can develop high-risk for…

Audience:
This is good, right?

Gabriele Mazzini:
Yes.

Jovan Kurbalija:
Do you expect the sort of…

Gabriele Mazzini:
It's the first time I'm saying this, because I reflected on the question, but this is what the AI Act is about: placing an AI system on our market. If you don't place it on our market, if you just develop it here but sell it somewhere else, you're fine. Unless it's prohibited; maybe that's the exception. I have to reflect on this, but it's a good point.

Audience:
I'm going to connect to that. I'm Marco Pelle, and I come from the private sector; I work for an international company, Swiss and American. So, actually, what you just discussed now is very relevant. My question was in fact on competitiveness, and whether you had considered, and could share from behind the scenes, the competitiveness impact on European companies versus other companies outside of Europe, and vice versa.

Gabriele Mazzini:
There's always someone asking this question, which makes sense. Usually, my answer is twofold; there are two components. The first one is that every regulation is a burden, very bluntly said. In a way, it adds requirements for companies: how you develop or produce, how you provide services. So any regulation requires companies to do some degree of compliance and adds some burden, right? And in that sense, it may impact competitiveness. But it's a choice. We did this because we think we need a regulation on AI, because we don't want certain outcomes that we consider not in line with our values. We don't want AI systems to discriminate. We want medical devices that have AI components to be safe. So it's a choice, and as a policymaker, you certainly consider competitiveness, but you also look at the overall objective that you want to achieve in terms of final results and value-based results. That is a reality. That being said, we also try to make sure that a regulation can help innovation, to the extent possible, bearing in mind that any regulation represents some degree of burden. And when you look at the EU dimension, there is certainly the element that any EU harmonization legislation, like this one, creates a common market. Because if, let's say, we don't have the AI Act, and we have AI systems rolled out within the EU by different actors, it's a bit where I started when I mentioned the CE mark, the product legislation, and you have issues coming up, let's say systems that are not safe or are discriminatory and so on, then you have individual member states that want to take action. That will fragment the market. So in a way, what we did was, technically speaking, full harmonization legislation. It deprives member states of having their own rules in this area, with some exceptions. Primarily, let's say, tomorrow a member state cannot say: oh, I want another requirement for the AI system. No. In order for the AI system to circulate freely in the EU, you have to comply with the requirements of the AI Act; you cannot add another requirement. So that's the impact. And this, of course, gives companies the opportunity of a bigger market, where they need to confront one harmonized framework. And then, regardless of that, we've also tried to add elements that could facilitate or help innovation a little bit, like, for instance, regulatory sandboxes. We have also constructed this, and again I go back to the New Legislative Framework, which was my first slide, as a legal framework that relies on standards. So the law remains relatively abstract. This is part of the criticism that is often made, that it's not clear what it says. But in part, it's intentional. I'm not saying that all the unclarity is intentional; unfortunately, too much of it is the result of the process. But certainly, when it comes to the requirements, a certain degree of abstraction was necessary, because we wanted to make sure that the law remains future-proof, and then we can keep the technical specifications more agile, enabling innovation through standardization. And the standards will be the main tool through which, notably, SMEs can comply. So in that sense, it's also a tool to support compliance. Not a black-and-white answer.

Audience:
No, but it’s already a good answer to help understand, I guess. Because I was also thinking about the UK and Switzerland in this context, and whether you had thought about or discussed harmonizing the EU rules with those countries.

Gabriele Mazzini:
I am not aware of discussions at that level. I think, clearly, they asked themselves the question, of course. But it’s the power of the EU market that mostly impacts the countries that are closest, I imagine. But yes, I cannot say more.

Jovan Kurbalija:
Zoltan.

Audience:
Zoltan; I work at the DPR for Hungary. Thank you for the very interesting presentation. I had the privilege to take part in the negotiations of the UNESCO recommendation and the Council of Europe treaty on AI, but I did not really follow the EU AI Act; I have colleagues in Brussels who took care of that. I would be interested to hear your views on how you see the future of this legal field, taking into consideration that there are all these parallel regulatory initiatives at the same time, with very similar but, to some extent, different requirements. Will it result in a very fragmented legal framework, which will be very difficult for AI providers to implement, or is there a chance and an intention to harmonize these rules? What comes to my mind is the OECD AI system definition, which was used also by the Council of Europe and by the EU. But there are specific questions, like how to ensure that the risk assessment or the impact assessment is carried out in a proper way. The Council of Europe focuses on human rights, which is a somewhat different type of approach. Where do human rights come into play under the EU AI Act? And what kind of burden will it create for the providers who will need to implement these regulations and other rules?

Gabriele Mazzini:
We certainly wanted to ensure as much international alignment as possible. That is why we started with the OECD definition of AI. Although, as you may know quite well, the OECD developed its principles and definition of AI in 2019 for a policy document, not for a legal framework. So we really had to take that as a basis. For us, it was important to take the OECD as a basis, because indeed it was the place where countries outside the EU had also converged. It was important for us to regulate AI in the EU while having at least some basic concepts that are aligned. But then we had to adjust it, because even now there may be questions about whether the AI definition is clear enough, and certainly the OECD definition at the time was not enough for a proper legal framework. So that was a bit of the background. When it comes to other frameworks: with UNESCO I am not too familiar, but of course it’s a recommendation, a soft-law instrument, and, pardon me, I primarily focus on what is binding, because I think that is where issues of conflict with the AI Act emerge. So when it comes to the Council of Europe: yes, I personally found it was not an ideal move to have these competing organizations trying to compete over who does what first. Of course, we as the Commission negotiated that on behalf of the member states; we had a mandate from the member states. And luckily so, because indeed it would probably have been a disaster if these two work strands had advanced in parallel without any coordination. I was not the one who went to Strasbourg for that, because I was fully booked with negotiations in our internal kitchen. But essentially, the goal of having the EU be part of that process was to ensure that, for the parts of the Council of Europe Convention that are covered by the AI Act, the AI Act will serve as the instrument governing relationships in the EU. In that sense, this was my first priority: we had to ensure that, regardless of the ultimate outcome in Strasbourg, the AI Act will be the rule that implements the Convention in the EU. And that kind of alignment, I understand, happened, so that companies in the EU are subject to the Act and do not face two competing frameworks. But then, indeed, the Council of Europe Convention goes beyond the Act in certain areas, and certainly has a binding effect outside the EU countries. I cannot speak more to the details, but I think for me the main concern was indeed ensuring alignment when it comes to the EU jurisdiction.

Jovan Kurbalija:
One parallel that this whole conversation reveals, and it prompts some reflection, is that many legislators try to pass the difficult issues further along, as the EU passed some of the provisions on to national regulation. And it’s fascinating, on a serious note, and Sorina brought this up, how you use constructive ambiguity and sometimes imprecision to regulate something which is still far from clear. The whole process reminds me of the famous saying by Bismarck: when asked about the making of laws, he said it’s like making sausages, don’t try to understand how they are made, just enjoy eating them. That could be a possible message for this. We had a question from a colleague from Mauritius, but if you don’t mind, we’ll discuss it directly, or would you like to?

Audience:
No, I was just, okay, thank you. I just don’t want to force people, but thank you very much for the presentation. I tried to read the EU AI Act before coming, but I’m not Sorina, so I accept defeat on that. What you mentioned, if I got it correctly, is that the EU AI Act is going to have implications only for EU members, and it’s not going to affect non-EU countries the way the GDPR did; it’s not like CBAM, for example, which is going to affect cross-border countries working with the EU. But do you see a possibility in the future, especially when we speak of harmonization of policies on AI, that the EU AI Act is going to affect countries outside the EU? And also, how do you reconcile that, especially for small island developing states and developing countries which are still dealing with the digital divide and fragmentation? Thank you.

Gabriele Mazzini:
I think, if I’m correct, your question is more related to the case where, let’s say, you may not have industries that want to bring AI systems into the EU; it’s not your business, it’s a different business. So you say, okay, why should I even consider being compliant with the EU AI Act? Why should I enact legislation? I think this is part of what I tried to indicate in that slide about policy and geopolitical considerations. So in principle, if I were to advise you, I would say: why would you even consider aligning your legal framework to the EU AI Act? It doesn’t make any sense. But at the same time, the AI Act is part of a probably much larger dimension of geopolitics, digital regulation, and digital matters, whereby it could perhaps also be linked to some sort of conditionality around relationships with the EU. So this then becomes a much bigger question that may impact you, even if you have no interest in entering the EU market. And of course, the moment you enact legislation, your companies will be impacted, even if you have only one. So I think it will be an interesting space to watch in the next few years.

Jovan Kurbalija:
Thank you. Thank you, Gabriele, for a really great and insightful discussion and presentation. We were honoured to have such first-hand expertise in unpacking the EU AI Act, which was the title of this event. And I would like to invite all of us to give a well-deserved round of applause for this.

Speakers’ statistics

Audience (A): speech speed 160 words per minute; speech length 1339 words; speech time 503 secs
Gabriele Mazzini (GM): speech speed 160 words per minute; speech length 7918 words; speech time 2960 secs
Jovan Kurbalija (JK): speech speed 149 words per minute; speech length 1268 words; speech time 511 secs
Radka Sibille (RS): speech speed 209 words per minute; speech length 210 words; speech time 60 secs
Sorina Teleanu (ST): speech speed 201 words per minute; speech length 1630 words; speech time 488 secs
