EU AI Act | Proposed amendments (full view)
September 2023
As the trilogue on finalising the text of the EU AI Act unfolds, we show here what negotiators from the European Parliament, the Council, and the Commission have to work with.
Shown in the table are:
- The original AI Act, as proposed by the Commission;
- The amendments adopted by the European Parliament in June 2023;
- The Council’s general approach, as approved in December 2022;
- Potential ‘compromise’ text, automatically generated by DiploAI (building on the texts in the previous three columns).
To make it easier to compare the various texts proposed by the three EU institutions, we have taken the following approach:
a. When both the Council and the Parliament propose new text, the proposed texts are placed within the same row, in parallel cells if the texts refer to similar issues (i.e. can be compared). This is done even when the numbering of the proposed text is different across Parliament, Council, and/or Commission.
See, for instance:
- Recital 6a: same numbering, both texts refer to machine learning.
- Recitals 12a (Parliament) and 12b (Council): different numbering, but both texts refer to research.
b. If the proposed texts, despite having the same numbering, cover completely different issues, they are placed in distinct rows.
See, for instance, Recital 5a: same numbering in the Parliament’s and the Council’s versions, but different text.
For a more structured display of the text (by chapters and articles), please see this page.
Numbering | Commission | Parliament | Council | Text proposed by DiploAI |
---|---|---|---|---|
| THE EUROPEAN PARLIAMENT AND THE COUNCIL OF THE EUROPEAN UNION, | THE EUROPEAN PARLIAMENT AND THE COUNCIL OF THE EUROPEAN UNION, | THE EUROPEAN PARLIAMENT AND THE COUNCIL OF THE EUROPEAN UNION, | THE EUROPEAN PARLIAMENT AND THE COUNCIL OF THE EUROPEAN UNION, |
Citation 1 | Having regard to the Treaty on the Functioning of the European Union, and in particular Articles 16 and 114 thereof, | Having regard to the Treaty on the Functioning of the European Union, and in particular Articles 16 and 114 thereof, | Having regard to the Treaty on the Functioning of the European Union, and in particular Articles 16 and 114 thereof, | Having regard to the Treaty on the Functioning of the European Union, and in particular Articles 16 and 114 thereof, |
Citation 2 | Having regard to the proposal from the European Commission, | Having regard to the proposal from the European Commission, | Having regard to the proposal from the European Commission, | Having regard to the proposal from the European Commission, |
Citation 3 | After transmission of the draft legislative act to the national parliaments, | After transmission of the draft legislative act to the national parliaments, | After transmission of the draft legislative act to the national parliaments, | After transmission of the draft legislative act to the national parliaments, |
Citation 4 | Having regard to the opinion of the European Economic and Social Committee31, 31. OJ C [...], [...], p. [...]. | Having regard to the opinion of the European Economic and Social Committee31, 31. OJ C [...], [...], p. [...]. | Having regard to the opinion of the European Economic and Social Committee1, 1. OJ C [...], [...], p. [...]. | Having regard to the opinion of the European Economic and Social Committee1, 1. OJ C [...], [...], p. [...]. |
Citation 4a (new) | | Having regard to the opinion of the European Central Bank, | Having regard to the opinion of the European Central Bank3, 3. Reference to ECB opinion | Having regard to the opinion of the European Central Bank, |
Citation 4b (new) | | Having regard to the joint opinion of the European Data Protection Board and the European Data Protection Supervisor, | | None |
Citation 5 | Having regard to the opinion of the Committee of the Regions32, 32. OJ C [...], [...], p. [...]. | Having regard to the opinion of the Committee of the Regions32, 32. OJ C [...], [...], p. [...]. | Having regard to the opinion of the Committee of the Regions2, 2. OJ C [...], [...], p. [...]. | Having regard to the opinion of the Committee of the Regions, OJ C [...], [...], p. [...]. |
Citation 6 | Acting in accordance with the ordinary legislative procedure, | Acting in accordance with the ordinary legislative procedure, | Acting in accordance with the ordinary legislative procedure, | Acting in accordance with the ordinary legislative procedure, |
| Whereas, | Whereas, | Whereas, | None |
Recital (1) | (1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, marketing and use of artificial intelligence in conformity with Union values. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety and fundamental rights, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation. | (1) The purpose of this Regulation is to promote the uptake of human centric and trustworthy artificial intelligence and to ensure a high level of protection of health, safety, fundamental rights, democracy and rule of law and the environment from harmful effects of artificial intelligence systems in the Union while supporting innovation and improving the functioning of the internal market. This Regulation lays down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence in conformity with Union values and ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of Artificial Intelligence systems (AI systems), unless explicitly authorised by this Regulation. Certain AI systems can also have an impact on democracy and rule of law and the environment. These concerns are specifically addressed in the critical sectors and use cases listed in the annexes to this Regulation. | (1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, marketing and use of artificial intelligence in conformity with Union values. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety and fundamental rights, and it ensures the free movement of AI-based goods and services cross- border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation. | (1) The purpose of this Regulation is to improve the functioning of the internal market and promote the uptake of human centric and trustworthy artificial intelligence by laying down a uniform legal framework for the development, marketing, placing on the market, putting into service and use of artificial intelligence in conformity with Union values. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety, fundamental rights, democracy, rule of law and the environment from harmful effects of artificial intelligence systems. It ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation. Certain AI systems can also have an impact on democracy and rule of law and the environment. These concerns are specifically addressed in the critical sectors and use cases listed in the annexes to this Regulation. |
Recital (1a) (new) | (1a) This Regulation should preserve the values of the Union facilitating the distribution of artificial intelligence benefits across society, protecting individuals, companies, democracy and rule of law and the environment from risks while boosting innovation and employment and making the Union a leader in the field. | This Regulation aims to uphold the values of the Union by facilitating the equitable distribution of benefits derived from artificial intelligence across society. It seeks to protect individuals, companies, democracy, rule of law, and the environment from potential risks, while simultaneously promoting innovation and employment. The ultimate goal is to establish the Union as a leader in the field of artificial intelligence. | ||
Recital (2) | (2) Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured, while divergences hampering the free circulation of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board. | (2) AI systems can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is trustworthy and safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured in order to achieve trustworthy AI, while divergences hampering the free circulation, innovation, deployment and uptake of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). | (2) Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop, import or use AI systems. 
A consistent and high level of protection throughout the Union should therefore be ensured, while divergences hampering the free circulation of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board. | (2) Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is trustworthy, safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop, import or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured in order to achieve trustworthy AI, while divergences hampering the free circulation, innovation, deployment and uptake of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board. |
Recital (2a) (new) | (2a) As artificial intelligence often relies on the processing of large volumes of data, and many AI systems and applications on the processing of personal data, it is appropriate to base this Regulation on Article 16 TFEU, which enshrines the right to the protection of natural persons with regard to the processing of personal data and provides for the adoption of rules on the protection of individuals with regard to the processing of personal data. | Given the reliance of artificial intelligence on the processing of large volumes of data, including personal data, this Regulation should be based on Article 16 TFEU. This article upholds the right to the protection of natural persons in relation to the processing of personal data and mandates the adoption of rules for the protection of individuals concerning the processing of personal data. | ||
Recital (2b) (new) | (2b) The fundamental right to the protection of personal data is safeguarded in particular by Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive 2016/680. Directive 2002/58/EC additionally protects private life and the confidentiality of communications, including providing conditions for any personal and non-personal data storing in and access from terminal equipment. Those legal acts provide the basis for sustainable and responsible data processing, including where datasets include a mix of personal and nonpersonal data. This Regulation does not seek to affect the application of existing Union law governing the processing of personal data, including the tasks and powers of the independent supervisory authorities competent to monitor compliance with those instruments. This Regulation does not affect the fundamental rights to private life and the protection of personal data as provided for by Union law on data protection and privacy and enshrined in the Charter of Fundamental Rights of the European Union (the ‘Charter’). | The fundamental right to the protection of personal data is upheld by Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive 2016/680. Directive 2002/58/EC further safeguards private life and the confidentiality of communications, including setting conditions for any personal and non-personal data storage in and access from terminal equipment. These legal acts form the foundation for sustainable and responsible data processing, even when datasets comprise a mix of personal and non-personal data. This Regulation does not aim to alter the application of existing Union law governing the processing of personal data, including the tasks and powers of the independent supervisory authorities responsible for ensuring compliance with these instruments. This Regulation does not impact the fundamental rights to private life and the protection of personal data as stipulated by Union law on data protection and privacy and enshrined in the Charter of Fundamental Rights of the European Union (the ‘Charter’). | ||
Recital (2c) (new) | (2c) Artificial intelligence systems in the Union are subject to relevant product safety legislation that provides a framework protecting consumers against dangerous products in general and such legislation should continue to apply. This Regulation is also without prejudice to the rules laid down by other Union legal acts related to consumer protection and product safety, including Regulation (EU) 2017/2394, Regulation (EU) 2019/1020 and Directive 2001/95/EC on general product safety and Directive 2013/11/EU. | Artificial intelligence systems in the Union are subject to relevant product safety legislation that provides a framework protecting consumers against dangerous products in general. Such legislation should continue to apply. This Regulation is also without prejudice to the rules laid down by other Union legal acts related to consumer protection and product safety, including Regulation (EU) 2017/2394, Regulation (EU) 2019/1020 and Directive 2001/95/EC on general product safety and Directive 2013/11/EU. | |
Recital (2d) (new) | (2d) In accordance with Article 114(2) TFEU, this Regulation complements and should not undermine the rights and interests of employed persons. This Regulation should therefore not affect Union law on social policy and national labour law and practice, that is any legal and contractual provision concerning employment conditions, working conditions, including health and safety at work and the relationship between employers and workers, including information, consultation and participation. This Regulation should not affect the exercise of fundamental rights as recognised in the Member States and at Union level, including the right or freedom to strike or to take other action covered by the specific industrial relations systems in Member States, in accordance with national law and/or practice. Nor should it affect concertation practices, the right to negotiate, to conclude and enforce collective agreement or to take collective action in accordance with national law and/or practice. It should in any event not prevent the Commission from proposing specific legislation on the rights and freedoms of workers affected by AI systems. | In line with Article 114(2) TFEU, this Regulation is designed to complement and not undermine the rights and interests of employed individuals. It should not impact Union law on social policy, national labour law and practice, or any legal and contractual provisions related to employment conditions, working conditions, including health and safety at work, and the relationship between employers and workers, including information, consultation, and participation. The Regulation should not interfere with the exercise of fundamental rights as recognised in the Member States and at Union level, including the right or freedom to strike or to take other action covered by the specific industrial relations systems in Member States, in accordance with national law and/or practice. Furthermore, it should not affect concertation practices, the right to negotiate, to conclude and enforce collective agreement or to take collective action in accordance with national law and/or practice. Lastly, this Regulation should not prevent the Commission from proposing specific legislation on the rights and freedoms of workers affected by AI systems. |
Recital (2e) (new) | (2e) This Regulation should not affect the provisions aiming to improve working conditions in platform work set out in Directive ... [COD 2021/414/EC]. | This Regulation should not interfere with the provisions aimed at enhancing working conditions in platform work as outlined in Directive ... [COD 2021/414/EC]. | ||
Recital (2f) (new) | (2f) This Regulation should help in supporting research and innovation and should not undermine research and development activity and respect freedom of scientific research. It is therefore necessary to exclude from its scope AI systems specifically developed for the sole purpose of scientific research and development and to ensure that the Regulation does not otherwise affect scientific research and development activity on AI systems. Under all circumstances, any research and development activity should be carried out in accordance with the Charter, Union law as well as the national law; | This Regulation is intended to support research and innovation, without undermining or affecting research and development activities, while respecting the freedom of scientific research. Therefore, AI systems that are specifically developed for the sole purpose of scientific research and development should be excluded from the scope of this Regulation. It is crucial to ensure that all research and development activities on AI systems are conducted in accordance with the Charter, Union law, and national law. | ||
Recital (3) | (3) Artificial intelligence is a fast evolving family of technologies that can contribute to a wide array of economic and societal benefits across the entire spectrum of industries and social activities. By improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations, the use of artificial intelligence can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes, for example in healthcare, farming, education and training, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, and climate change mitigation and adaptation. | (3) Artificial intelligence is a fast evolving family of technologies that can and already contributes to a wide array of economic, environmental and societal benefits across the entire spectrum of industries and social activities if developed in accordance with relevant general principles in line with the Charter and the values on which the Union is founded. By improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations, the use of artificial intelligence can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes, for example in healthcare, farming, food safety, education and training, media, sports, culture, infrastructure management, energy, transport and logistics, crisis management, public services, security, justice, resource and energy efficiency, environmental monitoring, the conservation and restoration of biodiversity and ecosystems and climate change mitigation and adaptation. | (3) Artificial intelligence is a fast evolving family of technologies that can contribute to a wide array of economic and societal benefits across the entire spectrum of industries and social activities. By improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations, the use of artificial intelligence can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes, for example in healthcare, farming, education and training, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, and climate change mitigation and adaptation. | (3) Artificial intelligence is a fast evolving family of technologies that can and already contributes to a wide array of economic, environmental and societal benefits across the entire spectrum of industries and social activities if developed in accordance with relevant general principles in line with the Charter and the values on which the Union is founded. By improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations, the use of artificial intelligence can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes. 
These outcomes can be seen in various sectors such as healthcare, farming, food safety, education and training, media, sports, culture, infrastructure management, energy, transport and logistics, crisis management, public services, security, justice, resource and energy efficiency, environmental monitoring, the conservation and restoration of biodiversity and ecosystems, and climate change mitigation and adaptation. |
Recital (3a) (new) | (3a) To contribute to reaching the carbon neutrality targets, European companies should seek to utilise all available technological advancements that can assist in realising this goal. Artificial Intelligence is a technology that has the potential of being used to process the ever-growing amount of data created during industrial, environmental, health and other processes. To facilitate investments in AI-based analysis and optimisation tools, this Regulation should provide a predictable and proportionate environment for low-risk industrial solutions. | In order to achieve the carbon neutrality targets, it is crucial for European companies to leverage all available technological advancements. Artificial Intelligence, in particular, holds significant potential for processing the increasing volume of data generated from industrial, environmental, health, and other processes. Therefore, this Regulation should establish a predictable and proportionate environment to encourage investments in AI-based analysis and optimisation tools, specifically for low-risk industrial solutions. | ||
Recital (4) | (4) At the same time, depending on the circumstances regarding its specific application and use, artificial intelligence may generate risks and cause harm to public interests and rights that are protected by Union law. Such harm might be material or immaterial. | (4) At the same time, depending on the circumstances regarding its specific application and use, as well as the level of technological development, artificial intelligence may generate risks and cause harm to public or private interests and fundamental rights of natural persons that are protected by Union law. Such harm might be material or immaterial, including physical, psychological, societal or economic harm. | (4) At the same time, depending on the circumstances regarding its specific application and use, artificial intelligence may generate risks and cause harm to public interests and rights that are protected by Union law. Such harm might be material or immaterial. | (4) At the same time, depending on the circumstances regarding its specific application and use, as well as the level of technological development, artificial intelligence may generate risks and cause harm to public or private interests and fundamental rights of natural persons that are protected by Union law. Such harm might be material or immaterial, including physical, psychological, societal or economic harm. |
Recital (4a) (new) | (4a) Given the major impact that artificial intelligence can have on society and the need to build trust, it is vital for artificial intelligence and its regulatory framework to be developed according to Union values enshrined in Article 2 TEU, the fundamental rights and freedoms enshrined in the Treaties, the Charter, and international human rights law. As a pre-requisite, artificial intelligence should be a human-centric technology. It should not substitute human autonomy or assume the loss of individual freedom and should primarily serve the needs of the society and the common good. Safeguards should be provided to ensure the development and use of ethically embedded artificial intelligence that respects Union values and the Charter. | Given the significant societal implications of artificial intelligence and the necessity to foster trust, it is crucial that the development and regulatory framework of artificial intelligence align with the Union values outlined in Article 2 TEU, the fundamental rights and freedoms enshrined in the Treaties, the Charter, and international human rights law. Artificial intelligence should be a human-centric technology, not replacing human autonomy or implying the loss of individual freedom, but primarily serving societal needs and the common good. Measures should be implemented to ensure the development and application of ethically embedded artificial intelligence that respects Union values and the Charter. | ||
Recital (5) | (5) A Union legal framework laying down harmonised rules on artificial intelligence is therefore needed to foster the development, use and uptake of artificial intelligence in the internal market that at the same time meets a high level of protection of public interests, such as health and safety and the protection of fundamental rights, as recognised and protected by Union law. To achieve that objective, rules regulating the placing on the market and putting into service of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. By laying down those rules, this Regulation supports the objective of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence, as stated by the European Council33, and it ensures the protection of ethical principles, as specifically requested by the European Parliament34. ------------------ 33. European Council, Special meeting of the European Council (1 and 2 October 2020) – Conclusions, EUCO 13/20, 2020, p. 6. 34. European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies, 2020/2012(INL). | (5) A Union legal framework laying down harmonised rules on artificial intelligence is therefore needed to foster the development, use and uptake of artificial intelligence in the internal market that at the same time meets a high level of protection of public interests, such as health and safety, protection of fundamental rights, democracy and rule of law and the environment, as recognised and protected by Union law. To achieve that objective, rules regulating the placing on the market, the putting into service and the use of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. These rules should be clear and robust in protecting fundamental rights, supportive of new innovative solutions, and enabling to a European ecosystem of public and private actors creating AI systems in line with Union values. By laying down those rules as well as measures in support of innovation with a particular focus on SMEs and start-ups, this Regulation supports the objective of promoting the AI made in Europe, of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence, as stated by the European Council33, and it ensures the protection of ethical principles, as specifically requested by the European Parliament34. ------------------ 33. European Council, Special meeting of the European Council (1 and 2 October 2020) – Conclusions, EUCO 13/20, 2020, p. 6. 34. European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies, 2020/2012(INL). | A Union legal framework laying down harmonised rules on artificial intelligence is therefore needed to foster the development, use and uptake of artificial intelligence in the internal market that at the same time meets a high level of protection of public interests, such as health and safety and the protection of fundamental rights, as recognised and protected by Union law. 
To achieve that objective, rules regulating the placing on the market and putting into service of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. By laying down those rules and building on the work of the High-level Expert Group on Artificial Intelligence as reflected in the Guidelines for Trustworthy Artificial Intelligence in the EU, this Regulation supports the objective of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence as stated by the European Council4, and it ensures the protection of ethical principles, as specifically requested by the European Parliament5. _________ 4. European Council, Special meeting of the European Council (1 and 2 October 2020) – Conclusions, EUCO 13/20, 2020, p. 6. 5. European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies, 2020/2012(INL). | A Union legal framework laying down harmonised rules on artificial intelligence is therefore needed to foster the development, use and uptake of artificial intelligence in the internal market that at the same time meets a high level of protection of public interests, such as health and safety, protection of fundamental rights, democracy and rule of law and the environment, as recognised and protected by Union law. To achieve that objective, rules regulating the placing on the market, the putting into service and the use of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. These rules should be clear and robust in protecting fundamental rights, supportive of new innovative solutions, and enabling to a European ecosystem of public and private actors creating AI systems in line with Union values. By laying down those rules as well as measures in support of innovation with a particular focus on SMEs and start-ups, and building on the work of the High-level Expert Group on Artificial Intelligence as reflected in the Guidelines for Trustworthy Artificial Intelligence in the EU, this Regulation supports the objective of promoting the AI made in Europe, of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence, as stated by the European Council, and it ensures the protection of ethical principles, as specifically requested by the European Parliament. |
Recital (5a) (new) [C] | (5a) The harmonised rules on the placing on the market, putting into service and use of AI systems laid down in this Regulation should apply across sectors and, in line with its New Legislative Framework approach, should be without prejudice to existing Union law, notably on data protection, consumer protection, fundamental rights, employment and product safety, to which this Regulation is complementary. As a consequence all rights and remedies afforded by such Union law to consumers and other persons who may be negatively impacted by AI systems, including as regards the compensation of possible damages pursuant to Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, remain unaffected and fully applicable. On top of that, this Regulation aims to strengthen the effectiveness of such existing rights and remedies by establishing specific requirements and obligations, including in respect of transparency, technical documentation and record-keeping of AI systems. Furthermore, the obligations placed on various operators involved in the AI value chain under this Regulation should apply without prejudice to national laws, in compliance with Union law, having the effect of limiting the use of certain AI systems where such laws fall outside the scope of this Regulation or pursue other legitimate public interest objectives than those pursued by this Regulation. For example, national labour law and the laws on the protection of minors (i.e. persons below the age of 18) taking into account the United Nations General Comment No 25 (2021) on children’s rights, insofar as they are not specific to AI systems and pursue other legimitate public interest objectives, should not be affected by this Regulation. | (5a) This Regulation establishes harmonised rules for the marketing, operation, and use of AI systems across all sectors. These rules are in line with the New Legislative Framework approach and do not prejudice existing Union law, including data protection, consumer protection, fundamental rights, employment, and product safety. This Regulation complements these existing laws, ensuring that all rights and remedies provided by such Union law to consumers and other individuals potentially negatively affected by AI systems remain intact and fully applicable. This includes compensation for possible damages under Council Directive 85/374/EEC of 25 July 1985. Additionally, this Regulation seeks to enhance the effectiveness of these existing rights and remedies by setting specific requirements and obligations, including transparency, technical documentation, and record-keeping of AI systems. The obligations imposed on various operators in the AI value chain by this Regulation apply without prejudice to national laws that limit the use of certain AI systems, provided these laws are in compliance with Union law, fall outside the scope of this Regulation, or pursue other legitimate public interest objectives. National labour laws and laws protecting minors (i.e., individuals under 18), in line with the United Nations General Comment No 25 (2021) on children’s rights, should not be affected by this Regulation, provided they are not specific to AI systems and pursue other legitimate public interest objectives. | ||
Recital (5a) (new) [P] | (5a) Furthermore, in order to foster the development of AI systems in line with Union values, the Union needs to address the main gaps and barriers blocking the potential of the digital transformation including the shortage of digitally skilled workers, cybersecurity concerns, lack of investment and access to investment, and existing and potential gaps between large companies, SME’s and start-ups. Special attention should be paid to ensuring that the benefits of AI and innovation in new technologies are felt across all regions of the Union and that sufficient investment and resources are provided especially to those regions that may be lagging behind in some digital indicators. | In order to promote the growth of AI systems in harmony with Union values, it is crucial to address the primary gaps and obstacles hindering the potential of digital transformation. These include the scarcity of digitally skilled workers, cybersecurity issues, insufficient investment and access to investment, and the existing and potential disparities between large corporations, SMEs, and start-ups. It is important to ensure that the advantages of AI and innovation in new technologies are experienced across all regions of the Union. Special focus should be given to providing adequate investment and resources, particularly to those regions that may be trailing in certain digital indicators. | ||
Recital (6) | (6) The notion of AI system should be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments. The definition should be based on the key functional characteristics of the software, in particular the ability, for a given set of human-defined objectives, to generate outputs such as content, predictions, recommendations, or decisions which influence the environment with which the system interacts, be it in a physical or digital dimension. AI systems can be designed to operate with varying levels of autonomy and be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serve the functionality of the product without being integrated therein (non-embedded). The definition of AI system should be complemented by a list of specific techniques and approaches used for its development, which should be kept up-to–date in the light of market and technological developments through the adoption of delegated acts by the Commission to amend that list. | (6) The notion of AI system in this Regulation should be clearly defined and closely aligned with the work of international organisations working on artificial intelligence to ensure legal certainty, harmonization and wide acceptance, while providing the flexibility to accommodate the rapidtechnological developments in this field. Moreover, it should be based on key characteristics of artificial intelligence, such as its learning, reasoning or modelling capabilities, so as to distinguish it from simpler software systems or programming approaches. AI systems are designed to operate with varying levels of autonomy, meaning that they have at least some degree of independence of actions from human controls and of capabilities to operate without human intervention. The term “machine-based” refers to the fact that AI systems run on machines. The reference to explicit or implicit objectives underscores that AI systems can operate according to explicit human-defined objectives or to implicit objectives. The objectives of the AI system may be different from the intended purpose of the AI system in a specific context. The reference to predictions includes content, which is considered in this Regulation a form of prediction as one of the possible outputs produced by an AI system. For the purposes of this Regulation, environments should be understood as the contexts in which the AI systems operate, whereas outputs generated by the AI system, meaning predictions, recommendations or decisions, respond to the objectives of the system, on the basis of inputs from said environment. Such output further influences said environment, even by merely introducing new information to it. | (6) The notion of AI system should be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments. The definition should be based on key functional characteristics of artificial intelligence such as its learning, reasoning or modelling capabilities, distinguishing it from simpler software systems and programming approaches. 
In particular, for the purposes of this Regulation AI systems should have the ability, on the basis of machine and/or human-based data and inputs, to infer the way to achieve a set of final objectives given to them by humans, using machine learning and/or logic- and knowledge based approaches and to produce outputs such as content for generative AI systems (e.g. text, video or images), predictions, recommendations or decisions, influencing the environment with which the system interacts, be it in a physical or digital dimension. A system that uses rules defined solely by natural persons to automatically execute operations should not be considered an AI system. AI systems can be designed to operate with varying levels of autonomy and be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serve the functionality of the product without being integrated therein (non-embedded). The concept of the autonomy of an AI system relates to the degree to which such a system functions without human involvement. | (6) The notion of AI system in this Regulation should be clearly defined to ensure legal certainty, harmonization, and wide acceptance, while providing the flexibility to accommodate future and rapid technological developments. The definition should be based on key functional characteristics of the software, particularly its learning, reasoning, or modelling capabilities, distinguishing it from simpler software systems or programming approaches. AI systems are designed to operate with varying levels of autonomy, meaning that they have at least some degree of independence of actions from human controls and capabilities to operate without human intervention. For a given set of human-defined objectives, AI systems should have the ability to generate outputs such as content, predictions, recommendations, or decisions which influence the environment with which the system interacts, be it in a physical or digital dimension. The objectives of the AI system may be different from the intended purpose of the AI system in a specific context. AI systems can be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serve the functionality of the product without being integrated therein (non-embedded). The concept of the autonomy of an AI system relates to the degree to which such a system functions without human involvement. The definition of AI system should be complemented by a list of specific techniques and approaches used for its development, which should be kept up-to-date in the light of market and technological developments through the adoption of delegated acts by the Commission to amend that list. This definition should be closely aligned with the work of international organisations working on artificial intelligence. |
Recital (6a) (new) | (6a) AI systems often have machine learning capacities that allow them to adapt and perform new tasks autonomously. Machine learning refers to the computational process of optimizing the parameters of a model from data, which is a mathematical construct generating an output based on input data. Machine learning approaches include, for instance, supervised, unsupervised and reinforcement learning, using a variety of methods including deep learning with neural networks. This Regulation is aimed at addressing new potential risks that may arise by delegating control to AI systems, in particular to those AI systems that can evolve after deployment. The function and outputs of many of these AI systems are based on abstract mathematical relationships that are difficult for humans to understand, monitor and trace back to specific inputs. These complex and opaque characteristics (black box element) impact accountability and explainability. Comparably simpler techniques such as knowledge-based approaches, Bayesian estimation or decision-trees may also lead to legal gaps that need to be addressed by this Regulation, in particular when they are used in combination with machine learning approaches in hybrid systems. | (6a) Machine learning approaches focus on the development of systems capable of learning and inferring from data to solve an application problem without being explicitly programmed with a set of step-by-step instructions from input to output. Learning refers to the computational process of optimizing from data the parameters of the model, which is a mathematical construct generating an output based on input data. The range of problems addressed by machine learning typically involves tasks for which other approaches fail, either because there is no suitable formalisation of the problem, or because the resolution of the problem is intractable with non-learning approaches. Machine learning approaches include for instance supervised, unsupervised and reinforcement learning, using a variety of methods including deep learning with neural networks, statistical techniques for learning and inference (including for instance logistic regression, Bayesian estimation) and search and optimisation methods. | (6a) AI systems, particularly those with machine learning capacities, are capable of adapting and performing new tasks autonomously. Machine learning refers to the computational process of optimizing the parameters of a model from data, which is a mathematical construct generating an output based on input data. This process allows systems to learn and infer from data to solve an application problem without being explicitly programmed with a set of step-by-step instructions. Machine learning approaches include, for instance, supervised, unsupervised and reinforcement learning, using a variety of methods including deep learning with neural networks, statistical techniques for learning and inference such as logistic regression, Bayesian estimation, and search and optimisation methods. These AI systems often address tasks for which other approaches fail, either because there is no suitable formalisation of the problem, or because the resolution of the problem is intractable with non-learning approaches. However, the function and outputs of many of these AI systems are based on abstract mathematical relationships that are difficult for humans to understand, monitor and trace back to specific inputs. 
These complex and opaque characteristics, often referred to as the 'black box' element, impact accountability and explainability. This Regulation is aimed at addressing new potential risks that may arise by delegating control to AI systems, especially those that can evolve after deployment. Simpler techniques such as knowledge-based approaches, Bayesian estimation or decision-trees may also lead to legal gaps that need to be addressed by this Regulation, particularly when they are used in combination with machine learning approaches in hybrid systems. |
Recital (6b) (new) [C] | (6b) Logic- and knowledge based approaches focus on the development of systems with logical reasoning capabilities on knowledge to solve an application problem. Such systems typically involve a knowledge base and an inference engine that generates outputs by reasoning on the knowledge base. The knowledge base, which is usually encoded by human experts, represents entities and logical relationships relevant for the application problem through formalisms based on rules, ontologies, or knowledge graphs. The inference engine acts on the knowledge base and extracts new information through operations such as sorting, searching, matching or chaining. Logic- and knowledge based approaches include for instance knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning, expert systems and search and optimisation methods. | Logic- and knowledge-based approaches are centered on the creation of systems that utilize logical reasoning capabilities to address a specific application problem. These systems typically comprise a knowledge base and an inference engine. The knowledge base, often encoded by human experts, represents entities and logical relationships pertinent to the application problem through formalisms such as rules, ontologies, or knowledge graphs. The inference engine operates on the knowledge base, generating outputs and extracting new information through operations like sorting, searching, matching, or chaining. Examples of logic- and knowledge-based approaches include knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning, expert systems, and search and optimisation methods. | ||
Recital (6b) (new) [P] | (6b) AI systems can be used as stand-alone software system, integrated into a physical product (embedded), used to serve the functionality of a physical product without being integrated therein (non-embedded) or used as an AI component of a larger system. If this larger system would not function without the AI component in question, then the entire larger system should be considered as one single AI system under this Regulation. | AI systems can be utilized in various forms such as stand-alone software systems, integrated into a physical product (embedded), used to serve the functionality of a physical product without being integrated therein (non-embedded), or used as an AI component of a larger system. In cases where the larger system would not function without the AI component, the entire larger system should be considered as a single AI system under this Regulation. | ||
Recital (6c) (new) | (6c) In order to ensure uniform conditions for the implementation of this Regulation as regards machine learning approaches and logic- and knowledge based approaches and to take account of market and technological developments, implementing powers should be conferred on the Commission. | In order to ensure uniform conditions for the implementation of this Regulation, particularly in relation to machine learning approaches and logic- and knowledge-based approaches, and to accommodate market and technological developments, implementing powers should be conferred on the Commission. | |
Recital (6d) (new) | (6d) The notion of ‘user’ referred to in this Regulation should be interpreted as any natural or legal person, including a public authority, agency or other body, using an AI system under whose authority the system is used. Depending on the type of AI system, the use of the system may affect persons other than the user. | (6d) The term 'user' as mentioned in this Regulation should be understood as any natural or legal entity, encompassing a public authority, agency, or other bodies, that operates an AI system under their authority. The use of the AI system may have implications for individuals other than the user, depending on the specific type of AI system in use. | ||
Recital (7) | (7) The notion of biometric data used in this Regulation is in line with and should be interpreted consistently with the notion of biometric data as defined in Article 4(14) of Regulation (EU) 2016/679 of the European Parliament and of the Council35 , Article 3(18) of Regulation (EU) 2018/1725 of the European Parliament and of the Council36 and Article 3(13) of Directive (EU) 2016/680 of the European Parliament and of the Council37 . ---------------- 35. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1). 36. Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC (OJ L 295, 21.11.2018, p. 39) 37. Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA (Law Enforcement Directive) (OJ L 119, 4.5.2016, p. 89). | (7) The notion of biometric data used in this Regulation is in line with and should be interpreted consistently with the notion of biometric data as defined in Article 4(14) of Regulation (EU) 2016/679 of the European Parliament and of the Council35. Biometrics-based data are additional data resulting from specific technical processing relating to physical, physiological or behavioural signals of a natural person, such as facial expressions, movements, pulse frequency, voice, key strikes or gait, which may or may not allow or confirm the unique identification of a natural person. __________ 35. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1). | (7) The notion of biometric data used in this Regulation should be interpreted consistently with the notion of biometric data as defined in Article 4(14) of Regulation (EU) 2016/679 of the European Parliament and of the Council6, Article 3(18) of Regulation (EU) 2018/1725 of the European Parliament and of the Council7 and Article 3(13) of Directive (EU) 2016/680 of the European Parliament and of the Council8. _______ 6. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1). 7. 
Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC (OJ L 295, 21.11.2018, p. 39) 8. Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA (Law Enforcement Directive) (OJ L 119, 4.5.2016, p. 89). | (7) The notion of biometric data used in this Regulation is in line with and should be interpreted consistently with the notion of biometric data as defined in Article 4(14) of Regulation (EU) 2016/679 of the European Parliament and of the Council35, Article 3(18) of Regulation (EU) 2018/1725 of the European Parliament and of the Council36, and Article 3(13) of Directive (EU) 2016/680 of the European Parliament and of the Council37. Biometrics-based data are additional data resulting from specific technical processing relating to physical, physiological or behavioural signals of a natural person, such as facial expressions, movements, pulse frequency, voice, key strikes or gait, which may or may not allow or confirm the unique identification of a natural person. |
Recital (7a) (new) | (7a) The notion of biometric identification as used in this Regulation should be defined as the automated recognition of physical, physiological, behavioural, and psychological human features such as the face, eye movement, facial expressions, body shape, voice, speech, gait, posture, heart rate, blood pressure, odour, keystrokes, psychological reactions (anger, distress, grief, etc.) for the purpose of establishing an individual’s identity by comparing biometric data of that individual to stored biometric data of individuals in a database (one-to-many identification), irrespective of whether the individual has given its consent or not. | The concept of biometric identification, as applied in this Regulation, should be understood as the automated recognition of physical, physiological, behavioural, and psychological human characteristics such as the face, eye movement, facial expressions, body shape, voice, speech, gait, posture, heart rate, blood pressure, odour, keystrokes, psychological reactions (anger, distress, grief, etc.) with the aim of confirming an individual's identity. This is achieved by comparing the biometric data of the individual with the stored biometric data of individuals in a database (one-to-many identification), regardless of whether the individual has given their consent or not. | ||
Recital (7b) (new) | (7b) The notion of biometric categorisation as used in this Regulation should be defined as assigning natural persons to specific categories or inferring their characteristics and attributes such as gender, sex, age, hair colour, eye colour, tattoos, ethnic or social origin, health, mental or physical ability, behavioural or personality, traits language, religion, or membership of a national minority or sexual or political orientation on the basis of their biometric or biometric-based data, or which can be inferred from such data. | The concept of biometric categorisation, as referred to in this Regulation, should be defined as the process of assigning natural persons to specific categories or deducing their characteristics and attributes. These may include gender, sex, age, hair colour, eye colour, tattoos, ethnic or social origin, health status, mental or physical ability, behavioural or personality traits, language, religion, membership of a national minority, or sexual or political orientation. This categorisation is based on their biometric or biometric-based data, or information that can be inferred from such data. | ||
Recital (8) | (8) The notion of remote biometric identification system as used in this Regulation should be defined functionally, as an AI system intended for the identification of natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge whether the targeted person will be present and can be identified, irrespectively of the particular technology, processes or types of biometric data used. Considering their different characteristics and manners in which they are used, as well as the different risks involved, a distinction should be made between ‘real-time’ and ‘post’ remote biometric identification systems. In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification occur all instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems in question by providing for minor delays. ‘Real-time’ systems involve the use of ‘live’ or ‘near-live’ material, such as video footage, generated by a camera or other device with similar functionality. In the case of ‘post’ systems, in contrast, the biometric data have already been captured and the comparison and identification occur only after a significant delay. This involves material, such as pictures or video footage generated by closed circuit television cameras or private devices, which has been generated before the use of the system in respect of the natural persons concerned. | (8) The notion of remote biometric identification system as used in this Regulation should be defined functionally, as an AI system intended for the identification of natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge whether the targeted person will be present and can be identified, irrespectively of the particular technology, processes or types of biometric data used, excluding verification systems which merely compare the biometric data of an individual to their previously provided biometric data (one-to-one). Considering their different characteristics and manners in which they are used, as well as the different risks involved, a distinction should be made between ‘real-time’ and ‘post’ remote biometric identification systems. In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification occur all instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems in question by providing for minor delays. ‘Real-time’ systems involve the use of ‘live’ or ‘near-live’ material, such as video footage, generated by a camera or other device with similar functionality. In the case of ‘post’ systems, in contrast, the biometric data have already been captured and the comparison and identification occur only after a significant delay. This involves material, such as pictures or video footage generated by closed circuit television cameras or private devices, which has been generated before the use of the system in respect of the natural persons concerned.
Given that the notion of biometric identification is independent from the individual’s consent, this definition applies even when warning notices are placed in the location that is under surveillance of the remote biometric identification system, and is not de facto annulled by pre-enrolment. | (8) The notion of remote biometric identification system as used in this Regulation should be defined functionally, as an AI system intended for the identification of natural persons typically at a distance, without their active involvement, through the comparison of a person’s biometric data with the biometric data contained in a reference data repository, irrespectively of the particular technology, processes or types of biometric data used. Such remote biometric identification systems are typically used to perceive (scan) multiple persons or their behaviour simultaneously in order to facilitate significantly the identification of a number of persons without their active involvement. Such a definition excludes verification/authentication systems whose sole purpose would be to confirm that a specific natural person is the person he or she claims to be, as well as systems that are used to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises. This exclusion is justified by the fact that such systems are likely to have a minor impact on fundamental rights of natural persons compared to remote biometric identification systems which may be used for the processing of the biometric data of a large number of persons. In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification occur all instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems in question by providing for minor delays. ‘Real-time’ systems involve the use of ‘live’ or ‘near-‘live’ material, such as video footage, generated by a camera or other device with similar functionality. In the case of ‘post’ systems, in contrast, the biometric data have already been captured and the comparison and identification occur only after a significant delay. This involves material, such as pictures or video footage generated by closed circuit television cameras or private devices, which has been generated before the use of the system in respect of the natural persons concerned. | (8) The notion of remote biometric identification system as used in this Regulation should be defined functionally, as an AI system intended for the identification of natural persons at a distance, without their active involvement, through the comparison of a person’s biometric data with the biometric data contained in a reference database or data repository, and without prior knowledge whether the targeted person will be present and can be identified, irrespectively of the particular technology, processes or types of biometric data used. This definition excludes verification/authentication systems whose sole purpose would be to confirm that a specific natural person is the person he or she claims to be, as well as systems that are used to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises, and verification systems which merely compare the biometric data of an individual to their previously provided biometric data (one-to-one). 
Considering their different characteristics and manners in which they are used, as well as the different risks involved, a distinction should be made between ‘real-time’ and ‘post’ remote biometric identification systems. In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification occur all instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems in question by providing for minor delays. ‘Real-time’ systems involve the use of ‘live’ or ‘near-‘live’ material, such as video footage, generated by a camera or other device with similar functionality. In the case of ‘post’ systems, in contrast, the biometric data have already been captured and the comparison and identification occur only after a significant delay. This involves material, such as pictures or video footage generated by closed circuit television cameras or private devices, which has been generated before the use of the system in respect of the natural persons concerned. Given that the notion of biometric identification is independent from the individual’s consent, this definition applies even when warning notices are placed in the location that is under surveillance of the remote biometric identification system, and is not de facto annulled by pre-enrolment. |
Recital (8a) (new) | (8a) The identification of natural persons at a distance is understood to distinguish remote biometric identification systems from close proximity individual verification systems using biometric identification means, whose sole purpose is to confirm whether or not a specific natural person presenting themselves for identification is permitted, such as in order to gain access to a service, a device, or premises. | The identification of natural persons at a distance refers to the use of remote biometric identification systems, which are distinct from close proximity individual verification systems that utilize biometric identification means. The primary function of these close proximity systems is to verify whether a specific natural person presenting themselves for identification is authorized, for instance, to access a service, a device, or premises. | ||
Recital (9) | (9) For the purposes of this Regulation the notion of publicly accessible space should be understood as referring to any physical place that is accessible to the public, irrespective of whether the place in question is privately or publicly owned. Therefore, the notion does not cover places that are private in nature and normally not freely accessible for third parties, including law enforcement authorities, unless those parties have been specifically invited or authorised, such as homes, private clubs, offices, warehouses and factories. Online spaces are not covered either, as they are not physical spaces. However, the mere fact that certain conditions for accessing a particular space may apply, such as admission tickets or age restrictions, does not mean that the space is not publicly accessible within the meaning of this Regulation. Consequently, in addition to public spaces such as streets, relevant parts of government buildings and most transport infrastructure, spaces such as cinemas, theatres, shops and shopping centres are normally also publicly accessible. Whether a given space is accessible to the public should however be determined on a case-by-case basis, having regard to the specificities of the individual situation at hand. | (9) For the purposes of this Regulation the notion of publicly accessible space should be understood as referring to any physical place that is accessible to the public, irrespective of whether the place in question is privately or publicly owned and regardless of the potential capacity restrictions. Therefore, the notion does not cover places that are private in nature and normally not freely accessible for third parties, including law enforcement authorities, unless those parties have been specifically invited or authorised, such as homes, private clubs, offices, warehouses and factories. Online spaces are not covered either, as they are not physical spaces. However, the mere fact that certain conditions for accessing a particular space may apply, such as admission tickets or age restrictions, does not mean that the space is not publicly accessible within the meaning of this Regulation. Consequently, in addition to public spaces such as streets, relevant parts of government buildings and most transport infrastructure, spaces such as cinemas, theatres, sports grounds, schools, universities, relevant parts of hospitals and banks, amusement parks, festivals, shops and shopping centres are normally also publicly accessible. Whether a given space is accessible to the public should however be determined on a case-by-case basis, having regard to the specificities of the individual situation at hand. | (9) For the purposes of this Regulation the notion of publicly accessible space should be understood as referring to any physical place that is accessible to an undetermined number of natural persons, and irrespective of whether the place in question is privately or publicly owned and irrespective of the activity for which the place may be used, such as commerce (for instance, shops, restaurants, cafés), services (for instance, banks, professional activities, hospitality), sport (for instance, swimming pools, gyms, stadiums), transport (for instance, bus, metro and railway stations, airports, means of transport), entertainment (for instance, cinemas, theatres, museums, concert and conference halls), leisure or otherwise (for instance, public roads and squares, parks, forests, playgrounds).
A place should be classified as publicly accessible also if, regardless of potential capacity or security restrictions, access is subject to certain predetermined conditions, which can be fulfilled by an undetermined number of persons, such as purchase of a ticket or title of transport, prior registration or having a certain age. By contrast, a place should not be considered publicly accessible if access is limited to specific and defined natural persons through either Union or national law directly related to public safety or security or through the clear manifestation of will by the person having the relevant authority on the place. The factual possibility of access alone (e.g. an unlocked door, an open gate in a fence) does not imply that the place is publicly accessible in the presence of indications or circumstances suggesting the contrary (e.g. signs prohibiting or restricting access). Company and factory premises as well as offices and workplaces that are intended to be accessed only by relevant employees and service providers are places that are not publicly accessible. Publicly accessible spaces should not include prisons or border control areas. Some other areas may be composed of both not publicly accessible and publicly accessible areas, such as the hallway of a private residential building necessary to access a doctor's office or an airport. Online spaces are not covered either, as they are not physical spaces. Whether a given space is accessible to the public should however be determined on a case-by-case basis, having regard to the specificities of the individual situation at hand. | (9) For the purposes of this Regulation, the notion of publicly accessible space should be understood as referring to any physical place that is accessible to the public or an undetermined number of natural persons, irrespective of whether the place in question is privately or publicly owned, and regardless of the potential capacity restrictions or the activity for which the place may be used. Therefore, the notion does not cover places that are private in nature and normally not freely accessible for third parties, including law enforcement authorities, unless those parties have been specifically invited or authorised, such as homes, private clubs, offices, warehouses, factories, company and factory premises as well as offices and workplaces that are intended to be accessed only by relevant employees and service providers. Online spaces are not covered either, as they are not physical spaces. However, the mere fact that certain conditions for accessing a particular space may apply, such as admission tickets, age restrictions, purchase of a ticket or title of transport, prior registration or having a certain age, does not mean that the space is not publicly accessible within the meaning of this Regulation. Consequently, in addition to public spaces such as streets, relevant parts of government buildings and most transport infrastructure, spaces such as cinemas, theatres, museums, concert and conference halls, sports grounds, schools, universities, relevant parts of hospitals and banks, amusement parks, festivals, shops, shopping centres, restaurants, cafés, swimming pools, gyms, stadiums, bus, metro and railway stations, airports, means of transport, public roads and squares, parks, forests, playgrounds are normally also publicly accessible. Publicly accessible spaces should not include prisons or border control areas.
Some other areas may be composed of both not publicly accessible and publicly accessible areas, such as the hallway of a private residential building necessary to access a doctor's office or an airport. Whether a given space is accessible to the public should however be determined on a case-by-case basis, having regard to the specificities of the individual situation at hand. |
Recital (9a) (new) | (9a) It is important to note that AI systems should make best efforts to respect general principles establishing a high-level framework that promotes a coherent human-centric approach to ethical and trustworthy AI in line with the Charter of Fundamental Rights of the European Union and the values on which the Union is founded, including the protection of fundamental rights, human agency and oversight, technical robustness and safety, privacy and data governance, transparency, non-discrimination and fairness and societal and environmental wellbeing. | AI systems should strive to uphold general principles that establish a high-level framework promoting a coherent, human-centric approach to ethical and trustworthy AI. This approach should be in line with the Charter of Fundamental Rights of the European Union and the values on which the Union is founded. These values include the protection of fundamental rights, human agency and oversight, technical robustness and safety, privacy and data governance, transparency, non-discrimination and fairness, and societal and environmental wellbeing. It is crucial that AI systems make their best efforts to respect these principles. | ||
Recital (9b) (new) | (9b) ‘AI literacy’ refers to skills, knowledge and understanding that allows providers, users and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause and thereby promote its democratic control. AI literacy should not be limited to learning about tools and technologies, but should also aim to equip providers and users with the notions and skills required to ensure compliance with and enforcement of this Regulation. It is therefore necessary that the Commission, the Member States as well as providers and users of AI systems, in cooperation with all relevant stakeholders, promote the development of a sufficient level of AI literacy, in all sectors of society, for people of all ages, including women and girls, and that progress in that regard is closely followed. | AI literacy' is defined as the skills, knowledge, and understanding that enable providers, users, and those affected by AI, considering their respective rights and obligations under this Regulation, to make informed decisions about the deployment of AI systems. This includes awareness of the opportunities and risks of AI, the potential harm it can cause, and the promotion of its democratic control. AI literacy should extend beyond learning about tools and technologies, and should also aim to equip providers and users with the necessary notions and skills to ensure compliance with and enforcement of this Regulation. Therefore, it is crucial that the Commission, Member States, providers, and users of AI systems, in collaboration with all relevant stakeholders, promote the development of a sufficient level of AI literacy across all sectors of society and for people of all ages, including women and girls. Progress in this area should be closely monitored. | ||
Recital (10) | (10) In order to ensure a level playing field and an effective protection of rights and freedoms of individuals across the Union, the rules established by this Regulation should apply to providers of AI systems in a non-discriminatory manner, irrespective of whether they are established within the Union or in a third country, and to users of AI systems established within the Union. | (10) In order to ensure a level playing field and an effective protection of rights and freedoms of individuals across the Union and on international level, the rules established by this Regulation should apply to providers of AI systems in a non-discriminatory manner, irrespective of whether they are established within the Union or in a third country, and to deployers of AI systems established within the Union. In order for the Union to be true to its fundamental values, AI systems intended to be used for practices that are considered unacceptable by this Regulation, should equally be deemed to be unacceptable outside the Union because of their particularly harmful effect to fundamental rights as enshrined in the Charter. Therefore it is appropriate to prohibit the export of such AI systems to third countries by providers residing in the Union. | (10) In order to ensure a level playing field and an effective protection of rights and freedoms of individuals across the Union, the rules established by this Regulation should apply to providers of AI systems in a non-discriminatory manner, irrespective of whether they are established within the Union or in a third country, and to users of AI systems established within the Union. | (10) In order to ensure a level playing field and an effective protection of rights and freedoms of individuals across the Union and on an international level, the rules established by this Regulation should apply to providers of AI systems in a non-discriminatory manner, irrespective of whether they are established within the Union or in a third country, and to users and deployers of AI systems established within the Union. In order for the Union to be true to its fundamental values, AI systems intended to be used for practices that are considered unacceptable by this Regulation, should equally be deemed to be unacceptable outside the Union because of their particularly harmful effect to fundamental rights as enshrined in the Charter. Therefore it is appropriate to prohibit the export of such AI systems to third countries by providers residing in the Union. |
Recital (11) | (11) In light of their digital nature, certain AI systems should fall within the scope of this Regulation even when they are neither placed on the market, nor put into service, nor used in the Union. This is the case for example of an operator established in the Union that contracts certain services to an operator established outside the Union in relation to an activity to be performed by an AI system that would qualify as high-risk and whose effects impact natural persons located in the Union. In those circumstances, the AI system used by the operator outside the Union could process data lawfully collected in and transferred from the Union, and provide to the contracting operator in the Union the output of that AI system resulting from that processing, without that AI system being placed on the market, put into service or used in the Union. To prevent the circumvention of this Regulation and to ensure an effective protection of natural persons located in the Union, this Regulation should also apply to providers and users of AI systems that are established in a third country, to the extent the output produced by those systems is used in the Union. Nonetheless, to take into account existing arrangements and special needs for cooperation with foreign partners with whom information and evidence is exchanged, this Regulation should not apply to public authorities of a third country and international organisations when acting in the framework of international agreements concluded at national or European level for law enforcement and judicial cooperation with the Union or with its Member States. Such agreements have been concluded bilaterally between Member States and third countries or between the European Union, Europol and other EU agencies and third countries and international organisations. | (11) In light of their digital nature, certain AI systems should fall within the scope of this Regulation even when they are neither placed on the market, nor put into service, nor used in the Union. This is the case for example of an operator established in the Union that contracts certain services to an operator established outside the Union in relation to an activity to be performed by an AI system that would qualify as high-risk and whose effects impact natural persons located in the Union. In those circumstances, the AI system used by the operator outside the Union could process data lawfully collected in and transferred from the Union, and provide to the contracting operator in the Union the output of that AI system resulting from that processing, without that AI system being placed on the market, put into service or used in the Union. To prevent the circumvention of this Regulation and to ensure an effective protection of natural persons located in the Union, this Regulation should also apply to providers and deployers of AI systems that are established in a third country, to the extent the output produced by those systems is intended to be used in the Union. Nonetheless, to take into account existing arrangements and special needs for cooperation with foreign partners with whom information and evidence is exchanged, this Regulation should not apply to public authorities of a third country and international organisations when acting in the framework of international agreements concluded at national or European level for law enforcement and judicial cooperation with the Union or with its Member States.
Such agreements have been concluded bilaterally between Member States and third countries or between the European Union, Europol and other EU agencies and third countries and international organisations. This exception should nevertheless be limited to trusted countries and international organisations that share Union values. | (11) In light of their digital nature, certain AI systems should fall within the scope of this Regulation even when they are neither placed on the market, nor put into service, nor used in the Union. This is the case for example of an operator established in the Union that contracts certain services to an operator established outside the Union in relation to an activity to be performed by an AI system that would qualify as high-risk. In those circumstances, the AI system used by the operator outside the Union could process data lawfully collected in and transferred from the Union, and provide to the contracting operator in the Union the output of that AI system resulting from that processing, without that AI system being placed on the market, put into service or used in the Union. To prevent the circumvention of this Regulation and to ensure an effective protection of natural persons located in the Union, this Regulation should also apply to providers and users of AI systems that are established in a third country, to the extent the output produced by those systems is used in the Union. Nonetheless, to take into account existing arrangements and special needs for future cooperation with foreign partners with whom information and evidence is exchanged, this Regulation should not apply to public authorities of a third country and international organisations when acting in the framework of international agreements concluded at national or European level for law enforcement and judicial cooperation with the Union or with its Member States. Such agreements have been concluded bilaterally between Member States and third countries or between the European Union, Europol and other EU agencies and third countries and international organisations. Recipient Member States authorities and Union institutions, offices, bodies and agencies making use of such outputs in the Union remain accountable to ensure their use complies with Union law. When those international agreements are revised or new ones are concluded in the future, the contracting parties should undertake the utmost effort to align those agreements with the requirements of this Regulation. | (11) In light of their digital nature, certain AI systems should fall within the scope of this Regulation even when they are neither placed on the market, nor put into service, nor used in the Union. This is the case for example of an operator established in the Union that contracts certain services to an operator established outside the Union in relation to an activity to be performed by an AI system that would qualify as high-risk and whose effects impact natural persons located in the Union. In those circumstances, the AI system used by the operator outside the Union could process data lawfully collected in and transferred from the Union, and provide to the contracting operator in the Union the output of that AI system resulting from that processing, without that AI system being placed on the market, put into service or used in the Union.
To prevent the circumvention of this Regulation and to ensure an effective protection of natural persons located in the Union, this Regulation should also apply to providers, users and deployers of AI systems that are established in a third country, to the extent the output produced by those systems is intended to be used in the Union. Nonetheless, to take into account existing arrangements and special needs for future cooperation with foreign partners with whom information and evidence is exchanged, this Regulation should not apply to public authorities of a third country and international organisations when acting in the framework of international agreements concluded at national or European level for law enforcement and judicial cooperation with the Union or with its Member States. Such agreements have been concluded bilaterally between Member States and third countries or between the European Union, Europol and other EU agencies and third countries and international organisations. This exception should nevertheless be limited to trusted countries and international organisations that share Union values. Recipient Member States authorities and Union institutions, offices, bodies and agencies making use of such outputs in the Union remain accountable to ensure their use complies with Union law. When those international agreements are revised or new ones are concluded in the future, the contracting parties should undertake the utmost effort to align those agreements with the requirements of this Regulation. |
Recital (12) | (12) This Regulation should also apply to Union institutions, offices, bodies and agencies when acting as a provider or user of an AI system. AI systems exclusively developed or used for military purposes should be excluded from the scope of this Regulation where that use falls under the exclusive remit of the Common Foreign and Security Policy regulated under Title V of the Treaty on the European Union (TEU). This Regulation should be without prejudice to the provisions regarding the liability of intermediary service providers set out in Directive 2000/31/EC of the European Parliament and of the Council [as amended by the Digital Services Act]. | (12) This Regulation should also apply to Union institutions, offices, bodies and agencies when acting as a provider or deployer of an AI system. AI systems exclusively developed or used for military purposes should be excluded from the scope of this Regulation where that use falls under the exclusive remit of the Common Foreign and Security Policy regulated under Title V of the Treaty on the European Union (TEU). This Regulation should be without prejudice to the provisions regarding the liability of intermediary service providers set out in Directive 2000/31/EC of the European Parliament and of the Council [as amended by the Digital Services Act]. | (12) This Regulation should also apply to Union institutions, offices, bodies and agencies when acting as a provider or user of an AI system. | (12) This Regulation should also apply to Union institutions, offices, bodies and agencies when acting as a provider, user, or deployer of an AI system. AI systems exclusively developed or used for military purposes should be excluded from the scope of this Regulation where that use falls under the exclusive remit of the Common Foreign and Security Policy regulated under Title V of the Treaty on the European Union (TEU). This Regulation should be without prejudice to the provisions regarding the liability of intermediary service providers set out in Directive 2000/31/EC of the European Parliament and of the Council [as amended by the Digital Services Act]. |
Recital (-12a) (new) | (-12a) If and insofar as AI systems are placed on the market, put into service, or used with or without modification of such systems for military, defence or national security purposes, those should be excluded from the scope of this Regulation regardless of which type of entity is carrying out those activities, such as whether it is a public or private entity. As regards military and defence purposes, such exclusion is justified both by Article 4(2) TEU and by the specificities of the Member States’ and the common Union defence policy covered by Chapter 2 of Title V of the Treaty on European Union (TEU) that are subject to public international law, which is therefore the more appropriate legal framework for the regulation of AI systems in the context of the use of lethal force and other AI systems in the context of military and defence activities. As regards national security purposes, the exclusion is justified both by the fact that national security remains the sole responsibility of Member States in accordance with Article 4(2) TEU and by the specific nature and operational needs of national security activities and specific national rules applicable to those activities. Nonetheless, if an AI system developed, placed on the market, put into service or used for military, defence or national security purposes is used outside those temporarily or permanently for other purposes (for example, civilian or humanitarian purposes, law enforcement or public security purposes), such a system would fall within the scope of this Regulation. In that case, the entity using the system for other than military, defence or national security purposes should ensure compliance of the system with this Regulation, unless the system is already compliant with this Regulation. AI systems placed on the market or put into service for an excluded (i.e. military, defence or national security) and one or more non-excluded purposes (e.g. civilian purposes, law enforcement, etc.), fall within the scope of this Regulation and providers of those systems should ensure compliance with this Regulation. In those cases, the fact that an AI system may fall within the scope of this Regulation should not affect the possibility of entities carrying out national security, defence and military activities, regardless of the type of entity carrying out those activities, to use AI systems for national security, military and defence purposes, the use of which is excluded from the scope of this Regulation. An AI system placed on the market for civilian or law enforcement purposes which is used with or without modification for military, defence or national security purposes should not fall within the scope of this Regulation, regardless of the type of entity carrying out those activities. | AI systems that are placed on the market, put into service, or used with or without modification for military, defence or national security purposes should be excluded from the scope of this Regulation, regardless of the type of entity carrying out these activities, whether public or private. This exclusion is justified by Article 4(2) TEU, the specificities of the Member States’ and the common Union defence policy, and the fact that national security remains the sole responsibility of Member States.
However, if an AI system developed for military, defence or national security purposes is used temporarily or permanently for other purposes, such as civilian or humanitarian purposes, law enforcement or public security, it would fall within the scope of this Regulation. In such cases, the entity using the system for other purposes should ensure compliance with this Regulation, unless the system is already compliant. AI systems placed on the market or put into service for both excluded and non-excluded purposes fall within the scope of this Regulation, and providers should ensure compliance. The fact that an AI system may fall within the scope of this Regulation should not affect the possibility of entities carrying out national security, defence and military activities to use AI systems for these purposes, which are excluded from the scope of this Regulation. An AI system placed on the market for civilian or law enforcement purposes which is used for military, defence or national security purposes should not fall within the scope of this Regulation, regardless of the type of entity carrying out those activities. | ||
Recital (12a) (new) [C] | (12a) This Regulation should be without prejudice to the provisions regarding the liability of intermediary service providers set out in Directive 2000/31/EC of the European Parliament and of the Council [as amended by the Digital Services Act]. | This Regulation should be without prejudice to the provisions regarding the liability of intermediary service providers set out in Directive 2000/31/EC of the European Parliament and of the Council, as amended by the Digital Services Act. | ||
Recital (12a) [P] / (12b) [C] (new) | (12a) Software and data that are openly shared and where users can freely access, use, modify and redistribute them or modified versions thereof, can contribute to research and innovation in the market. Research by the Commission also shows that free and open-source software can contribute between EUR 65 billion and EUR 95 billion to the European Union’s GDP and that it can provide significant growth opportunities for the European economy. Users are allowed to run, copy, distribute, study, change and improve software and data, including models by way of free and open-source licences. To foster the development and deployment of AI, especially by SMEs, start-ups, academic research but also by individuals, this Regulation should not apply to such free and open-source AI components except to the extent that they are placed on the market or put into service by a provider as part of a high-risk AI system or of an AI system that falls under Title II or IV of this Regulation. | (12b) This Regulation should not undermine research and development activity and should respect freedom of science. It is therefore necessary to exclude from its scope AI systems specifically developed and put into service for the sole purpose of scientific research and development and to ensure that the Regulation does not otherwise affect scientific research and development activity on AI systems. As regards product oriented research activity by providers, the provisions of this Regulation should also not apply. This is without prejudice to the obligation to comply with this Regulation when an AI system falling into the scope of this Regulation is placed on the market or put into service as a result of such research and development activity and to the application of provisions on regulatory sandboxes and testing in real world conditions. Furthermore, without prejudice to the foregoing regarding AI systems specifically developed and put into service for the sole purpose of scientific research and development, any other AI system that may be used for the conduct of any research and development activity should remain subject to the provisions of this Regulation. Under all circumstances, any research and development activity should be carried out in accordance with recognised ethical and professional standards for scientific research. | (12c) Openly shared software and data, where users can freely access, use, modify, and redistribute them or their modified versions, can contribute significantly to research, innovation, and the European Union's GDP. This Regulation should not apply to such free and open-source AI components, especially those developed and deployed by SMEs, start-ups, academic research, and individuals, unless they are part of a high-risk AI system or an AI system that falls under Title II or IV of this Regulation when placed on the market or put into service by a provider. Furthermore, this Regulation should not undermine research and development activity and should respect the freedom of science. Therefore, AI systems specifically developed and put into service for the sole purpose of scientific research and development should be excluded from its scope. However, this is without prejudice to the obligation to comply with this Regulation when an AI system resulting from such research and development activity is placed on the market or put into service.
In addition, any other AI system that may be used for the conduct of any research and development activity should remain subject to the provisions of this Regulation. All research and development activities should be carried out in accordance with recognised ethical and professional standards for scientific research. |
Recital (12b) (new) [P] | (12b) Neither the collaborative development of free and open-source AI components nor making them available on open repositories should constitute a placing on the market or putting into service. A commercial activity, within the understanding of making available on the market, might however be characterised by charging a price, with the exception of transactions between micro enterprises, for a free and open-source AI component but also by charging a price for technical support services, by providing a software platform through which the provider monetises other services, or by the use of personal data for reasons other than exclusively for improving the security, compatibility or interoperability of the software. | The collaborative development and availability of free and open-source AI components on open repositories should not be considered as placing on the market or putting into service. However, a commercial activity, which falls under the category of making available on the market, could be defined by charging a price for a free and open-source AI component, excluding transactions between micro enterprises. This could also include charging a price for technical support services, providing a software platform through which the provider monetises other services, or using personal data for purposes other than exclusively improving the security, compatibility or interoperability of the software. | ||
Recital (12c) (new) [C] | (12c) In the light of the nature and complexity of the value chain for AI systems, it is essential to clarify the role of actors who may contribute to the development of AI systems, notably high-risk AI systems. In particular, it is necessary to clarify that general purpose AI systems are AI systems that are intended by the provider to perform generally applicable functions, such as image/speech recognition, and in a plurality of contexts. They may be used as high- risk AI systems by themselves or be components of other high risk AI systems. Therefore, due to their particular nature and in order to ensure a fair sharing of responsibilities along the AI value chain, such systems should be subject to proportionate and more specific requirements and obligations under this Regulation while ensuring a high level of protection of fundamental rights, health and safety. In addition, the providers of general purpose AI systems, irrespective of whether they may be used as high-risk AI systems as such by other providers or as components of high-risk AI systems, should cooperate, as appropriate, with the providers of the respective high-risk AI systems to enable their compliance with the relevant obligations under this Regulation and with the competent authorities established under this Regulation. In order to take into account the specific characteristics of general purpose AI systems and the fast evolving market and technological developments in the field, implementing powers should be conferred on the Commission to specify and adapt the application of the requirements established under this Regulation to general purpose AI systems and to specify the information to be shared by the providers of general purpose AI systems in order to enable the providers of the respective high-risk AI system to comply with their obligations under this Regulation. | Given the complexity of the AI systems value chain, it is crucial to define the roles of actors involved in the development of AI systems, especially high-risk AI systems. It should be clarified that general purpose AI systems are those designed by the provider to perform universally applicable functions, such as image/speech recognition, in multiple contexts. These systems can be used as high-risk AI systems independently or as components of other high-risk AI systems. Due to their unique nature, these systems should be subject to specific and proportionate requirements and obligations under this Regulation, ensuring a high level of protection of fundamental rights, health, and safety. Providers of general purpose AI systems should cooperate with the providers of high-risk AI systems to ensure compliance with this Regulation, regardless of whether their systems are used as high-risk AI systems or as components of such systems. The Commission should be granted implementing powers to adapt the application of the requirements established under this Regulation to general purpose AI systems, considering their specific characteristics and the rapidly evolving market and technological developments. The Commission should also specify the information to be shared by the providers of general purpose AI systems to enable the providers of high-risk AI systems to comply with their obligations under this Regulation. | ||
Recital (12c) (new) [P] | (12c) The developers of free and open-source AI components should not be mandated under this Regulation to comply with requirements targeting the AI value chain and, in particular, not towards the provider that has used that free and open-source AI component. Developers of free and open-source AI components should however be encouraged to implement widely adopted documentation practices, such as model and data cards, as a way to accelerate information sharing along the AI value chain, allowing the promotion of trustworthy AI systems in the Union. | Developers of free and open-source AI components should not be required to comply with regulations targeting the AI value chain, particularly not towards the provider that has used that free and open-source AI component. However, they should be encouraged to adopt widely accepted documentation practices, such as model and data cards, to facilitate information sharing along the AI value chain and promote trustworthy AI systems in the Union. | ||
Recital (13) | (13) In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter of fundamental rights of the European Union (the Charter) and should be non-discriminatory and in line with the Union’s international trade commitments. | (13) In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights as well as democracy and rule of law and the environment, common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter, the European Green Deal, the Joint Declaration on Digital Rights of the Union and the Ethics Guidelines for Trustworthy Artificial Intelligence (AI) of the High-Level Expert Group on Artificial Intelligence, and should be non-discriminatory and in line with the Union’s international trade commitments. | (13) In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter of fundamental rights of the European Union (the Charter) and should be non-discriminatory and in line with the Union’s international trade commitments. | (13) In order to ensure a consistent and high level of protection of public interests as regards health, safety, fundamental rights, as well as democracy, rule of law and the environment, common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter of fundamental rights of the European Union (the Charter), the European Green Deal, the Joint Declaration on Digital Rights of the Union and the Ethics Guidelines for Trustworthy Artificial Intelligence (AI) of the High-Level Expert Group on Artificial Intelligence, and should be non-discriminatory and in line with the Union’s international trade commitments. |
Recital (14) | (14) In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk-based approach should be followed. That approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate. It is therefore necessary to prohibit certain artificial intelligence practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems. | (14) In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk-based approach should be followed. That approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate. It is therefore necessary to prohibit certain unacceptable artificial intelligence practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems | (14) In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk-based approach should be followed. That approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate. It is therefore necessary to prohibit certain artificial intelligence practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems. | (14) In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk-based approach should be followed. That approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate. It is therefore necessary to prohibit certain unacceptable artificial intelligence practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems. |
Recital (15) | (15) Aside from the many beneficial uses of artificial intelligence, that technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights, including the right to non-discrimination, data protection and privacy and the rights of the child. | (15) Aside from the many beneficial uses of artificial intelligence, that technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and abusive and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights, including the right to non-discrimination, data protection and privacy and the rights of the child. | (15) Aside from the many beneficial uses of artificial intelligence, that technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights, including the right to non-discrimination, data protection and privacy and the rights of the child. | (15) Aside from the many beneficial uses of artificial intelligence, that technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and abusive and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights, including the right to non-discrimination, data protection and privacy and the rights of the child. |
Recital (16) | (16) The placing on the market, putting into service or use of certain AI systems intended to distort human behaviour, whereby physical or psychological harms are likely to occur, should be forbidden. Such AI systems deploy subliminal components individuals cannot perceive or exploit vulnerabilities of children and people due to their age, physical or mental incapacities. They do so with the intention to materially distort the behaviour of a person and in a manner that causes or is likely to cause harm to that or another person. The intention may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside of the control of the provider or the user. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research. | (16) The placing on the market, putting into service or use of certain AI systems with the objective to or the effect of materially distorting human behaviour, whereby physical or psychological harms are likely to occur, should be forbidden. This limitation should be understood to include neuro-technologies assisted by AI systems that are used to monitor, use, or influence neural data gathered through brain-computer interfaces insofar as they are materially distorting the behaviour of a natural person in a manner that causes or is likely to cause that person or another person significant harm. Such AI systems deploy subliminal components individuals cannot perceive or exploit vulnerabilities of individuals and specific groups of persons due to their known or predicted personality traits, age, physical or mental incapacities, social or economic situation. They do so with the intention to or the effect of materially distorting the behaviour of a person and in a manner that causes or is likely to cause significant harm to that or another person or groups of persons, including harms that may be accumulated over time. The intention to distort the behaviour may not be presumed if the distortion results from factors external to the AI system which are outside of the control of the provider or the user, such as factors that may not be reasonably foreseen and mitigated by the provider or the deployer of the AI system. In any case, it is not necessary for the provider or the deployer to have the intention to cause the significant harm, as long as such harm results from the manipulative or exploitative AI-enabled practices. The prohibitions for such AI practices are complementary to the provisions contained in Directive 2005/29/EC, according to which unfair commercial practices are prohibited, irrespective of whether they are carried out having recourse to AI systems or otherwise. In such setting, lawful commercial practices, for example in the field of advertising, that are in compliance with Union law should not in themselves be regarded as violating the prohibition. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research and on the basis of specific informed consent of the individuals that are exposed to them or, where applicable, of their legal guardian. | (16) AI-enabled manipulative techniques can be used to persuade persons to engage in unwanted behaviours, or to deceive them by nudging them into decisions in a way that subverts and impairs their autonomy, decision-making and free choices. The placing on the market, putting into service or use of certain AI systems materially distorting human behaviour, whereby physical or psychological harms are likely to occur, are particularly dangerous and should therefore be forbidden. Such AI systems deploy subliminal components such as audio, image, video stimuli that persons cannot perceive as those stimuli are beyond human perception or other subliminal techniques that subvert or impair person’s autonomy, decision-making or free choices in ways that people are not consciously aware of, or even if aware not able to control or resist, for example in cases of machine-brain interfaces or virtual reality. In addition, AI systems may also otherwise exploit vulnerabilities of a specific group of persons due to their age, disability within the meaning of Directive (EU) 2019/882, or a specific social or economic situation that is likely to make those persons more vulnerable to exploitation such as persons living in extreme poverty, ethnic or religious minorities. Such AI systems can be placed on the market, put into service or used with the objective to or the effect of materially distorting the behaviour of a person and in a manner that causes or is reasonably likely to cause physical or psychological harm to that or another person or groups of persons, including harms that may be accumulated over time. The intention to distort the behaviour may not be presumed if the distortion results from factors external to the AI system which are outside of the control of the provider or the user, meaning factors that may not be reasonably foreseen and mitigated by the provider or the user of the AI system. In any case, it is not necessary for the provider or the user to have the intention to cause the physical or psychological harm, as long as such harm results from the manipulative or exploitative AI-enabled practices. The prohibitions for such AI practices are complementary to the provisions contained in Directive 2005/29/EC, notably that unfair commercial practices leading to economic or financial harms to consumers are prohibited under all circumstances, irrespective of whether they are put in place through AI systems or otherwise. The prohibitions of manipulative and exploitative practices in this Regulation should not affect lawful practices in the context of medical treatment such as psychological treatment of a mental disease or physical rehabilitation, when those practices are carried out in accordance with the applicable medical standards and legislation. In addition, common and legitimate commercial practices that are in compliance with the applicable law should not in themselves be regarded as constituting harmful manipulative AI practices. | (16) The placing on the market, putting into service or use of certain AI systems intended to distort human behaviour, whereby physical or psychological harms are likely to occur, should be forbidden. Such AI systems deploy subliminal components individuals cannot perceive or exploit vulnerabilities of individuals and specific groups of persons due to their known or predicted personality traits, age, physical or mental incapacities, social or economic situation. They do so with the intention to or the effect of materially distorting the behaviour of a person and in a manner that causes or is likely to cause significant harm to that or another person or groups of persons, including harms that may be accumulated over time. The intention to distort the behaviour may not be presumed if the distortion results from factors external to the AI system which are outside of the control of the provider or the user, such as factors that may not be reasonably foreseen and mitigated by the provider or the deployer of the AI system. In any case, it is not necessary for the provider or the deployer to have the intention to cause the significant harm, as long as such harm results from the manipulative or exploitative AI-enabled practices. The prohibitions for such AI practices are complementary to the provisions contained in Directive 2005/29/EC, according to which unfair commercial practices are prohibited, irrespective of whether they are carried out having recourse to AI systems or otherwise. In such setting, lawful commercial practices, for example in the field of advertising, that are in compliance with Union law should not in themselves be regarded as violating the prohibition. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research and on the basis of specific informed consent of the individuals that are exposed to them or, where applicable, of their legal guardian. The prohibitions of manipulative and exploitative practices in this Regulation should not affect lawful practices in the context of medical treatment such as psychological treatment of a mental disease or physical rehabilitation, when those practices are carried out in accordance with the applicable medical standards and legislation. |
Recital (16a) (new) | (16a) AI systems that categorise natural persons by assigning them to specific categories, according to known or inferred sensitive or protected characteristics are particularly intrusive, violate human dignity and hold great risk of discrimination. Such characteristics include gender, gender identity, race, ethnic origin, migration or citizenship status, political orientation, sexual orientation, religion, disability or any other grounds on which discrimination is prohibited under Article 21 of the Charter of Fundamental Rights of the European Union, as well as under Article 9 of Regulation (EU) 2016/679. Such systems should therefore be prohibited. | AI systems that categorise natural persons by assigning them to specific categories, based on known or inferred sensitive or protected characteristics, are considered highly intrusive, infringe upon human dignity, and carry a significant risk of discrimination. These characteristics include gender, gender identity, race, ethnic origin, migration or citizenship status, political orientation, sexual orientation, religion, disability, or any other grounds on which discrimination is prohibited under Article 21 of the Charter of Fundamental Rights of the European Union, as well as under Article 9 of Regulation (EU) 2016/679. Therefore, the use of such systems should be prohibited. | |
Recital (17) | (17) AI systems providing social scoring of natural persons for general purpose by public authorities or on their behalf may lead to discriminatory outcomes and the exclusion of certain groups. They may violate the right to dignity and non-discrimination and the values of equality and justice. Such AI systems evaluate or classify the trustworthiness of natural persons based on their social behaviour in multiple contexts or known or predicted personal or personality characteristics. The social score obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts, which are unrelated to the context in which the data was originally generated or collected or to a detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. Such AI systems should be therefore prohibited. | (17) AI systems providing social scoring of natural persons for general purpose may lead to discriminatory outcomes and the exclusion of certain groups. They violate the right to dignity and non-discrimination and the values of equality and justice. Such AI systems evaluate or classify natural persons or groups based on multiple data points and time occurrences related to their social behaviour in multiple contexts or known, inferred or predicted personal or personality characteristics. The social score obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts, which are unrelated to the context in which the data was originally generated or collected or to a detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. Such AI systems should be therefore prohibited. | (17) AI systems providing social scoring of natural persons by public authorities or by private actors may lead to discriminatory outcomes and the exclusion of certain groups. They may violate the right to dignity and non-discrimination and the values of equality and justice. Such AI systems evaluate or classify natural persons based on their social behaviour in multiple contexts or known or predicted personal or personality characteristics. The social score obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts, which are unrelated to the context in which the data was originally generated or collected or to a detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. AI systems entailing such unacceptable scoring practices should be therefore prohibited. This prohibition should not affect lawful evaluation practices of natural persons done for one or more specific purpose in compliance with the law. | (17) AI systems providing social scoring of natural persons for general purpose by public authorities, on their behalf, or by private actors may lead to discriminatory outcomes and the exclusion of certain groups. They may violate the right to dignity and non-discrimination and the values of equality and justice. Such AI systems evaluate or classify natural persons or groups based on their social behaviour in multiple contexts or known, inferred, or predicted personal or personality characteristics, using multiple data points and time occurrences. The social score obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts, which are unrelated to the context in which the data was originally generated or collected or to a detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. AI systems entailing such unacceptable scoring practices should be therefore prohibited. This prohibition should not affect lawful evaluation practices of natural persons done for one or more specific purpose in compliance with the law. |
Recital (18) | (18) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement is considered particularly intrusive in the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in ‘real-time’ carry heightened risks for the rights and freedoms of the persons that are concerned by law enforcement activities. | (18) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces is particularly intrusive to the rights and freedoms of the concerned persons, and can ultimately affect the private life of a large part of the population, evoke a feeling of constant surveillance, give parties deploying biometric identification in publicly accessible spaces a position of uncontrollable power and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights at the core to the Rule of Law. Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, sex or disabilities. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in ‘real-time’ carry heightened risks for the rights and freedoms of the persons that are concerned by law enforcement activities. The use of those systems in publicly accessible places should therefore be prohibited. Similarly, AI systems used for the analysis of recorded footage of publicly accessible spaces through ‘post’ remote biometric identification systems should also be prohibited, unless there is pre-judicial authorisation for use in the context of law enforcement, when strictly necessary for the targeted search connected to a specific serious criminal offense that already took place, and only subject to a pre-judicial authorisation. | (18) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement is considered particularly intrusive in the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in ‘real-time’ carry heightened risks for the rights and freedoms of the persons that are concerned by law enforcement activities. | (18) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement is considered particularly intrusive to the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance, give parties deploying biometric identification in publicly accessible spaces a position of uncontrollable power and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights at the core to the Rule of Law. Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects, particularly when it comes to age, ethnicity, sex or disabilities. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in ‘real-time’ carry heightened risks for the rights and freedoms of the persons that are concerned by law enforcement activities. The use of those systems in publicly accessible places should therefore be prohibited, unless there is pre-judicial authorisation for use in the context of law enforcement, when strictly necessary for the targeted search connected to a specific serious criminal offense that already took place, and only subject to a pre-judicial authorisation. |
Recital (19) | (19) The use of those systems for the purpose of law enforcement should therefore be prohibited, except in three exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. Those situations involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localisation, identification or prosecution of perpetrators or suspects of the criminal offences referred to in Council Framework Decision 2002/584/JHA38 if those criminal offences are punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years and as they are defined in the law of that Member State. Such threshold for the custodial sentence or detention order in accordance with national law contributes to ensure that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification systems. Moreover, of the 32 criminal offences listed in the Council Framework Decision 2002/584/JHA, some are in practice likely to be more relevant than others, in that the recourse to ‘real-time’ remote biometric identification will foreseeably be necessary and proportionate to highly varying degrees for the practical pursuit of the detection, localisation, identification or prosecution of a perpetrator or suspect of the different criminal offences listed and having regard to the likely differences in the seriousness, probability and scale of the harm or possible negative consequences. __________________ 38 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1). | deleted | (19) The use of those systems for the purpose of law enforcement should therefore be prohibited, except in exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. Those situations involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localisation, identification or prosecution of perpetrators or suspects of the criminal offences referred to in Council Framework Decision 2002/584/JHA9 if those criminal offences are punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years and as they are defined in the law of that Member State. Such threshold for the custodial sentence or detention order in accordance with national law contributes to ensure that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification systems. Moreover, of the 32 criminal offences listed in the Council Framework Decision 2002/584/JHA9, some are in practice likely to be more relevant than others, in that the recourse to ‘real-time’ remote biometric identification will foreseeably be necessary and proportionate to highly varying degrees for the practical pursuit of the detection, localisation, identification or prosecution of a perpetrator or suspect of the different criminal offences listed and having regard to the likely differences in the seriousness, probability and scale of the harm or possible negative consequences. In addition, this Regulation should preserve the ability for law enforcement, border control, immigration or asylum authorities to carry out identity checks in the presence of the person that is concerned in accordance with the conditions set out in Union and national law for such checks. In particular, law enforcement, border control, immigration or asylum authorities should be able to use information systems, in accordance with Union or national law, to identify a person who, during an identity check, either refuses to be identified or is unable to state or prove his or her identity, without being required by this Regulation to obtain prior authorisation. This could be, for example, a person involved in a crime, being unwilling, or unable due to an accident or a medical condition, to disclose their identity to law enforcement authorities. _____________ 9. Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1). | (19) The use of those systems for the purpose of law enforcement should therefore be prohibited, except in exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. Those situations involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localisation, identification or prosecution of perpetrators or suspects of the criminal offences referred to in Council Framework Decision 2002/584/JHA if those criminal offences are punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years and as they are defined in the law of that Member State. Such threshold for the custodial sentence or detention order in accordance with national law contributes to ensure that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification systems. Moreover, of the 32 criminal offences listed in the Council Framework Decision 2002/584/JHA, some are in practice likely to be more relevant than others, in that the recourse to ‘real-time’ remote biometric identification will foreseeably be necessary and proportionate to highly varying degrees for the practical pursuit of the detection, localisation, identification or prosecution of a perpetrator or suspect of the different criminal offences listed and having regard to the likely differences in the seriousness, probability and scale of the harm or possible negative consequences. |
Recital (20) | (20) In order to ensure that those systems are used in a responsible and proportionate manner, it is also important to establish that, in each of those three exhaustively listed and narrowly defined situations, certain elements should be taken into account, in particular as regards the nature of the situation giving rise to the request and the consequences of the use for the rights and freedoms of all persons concerned and the safeguards and conditions provided for with the use. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement should be subject to appropriate limits in time and space, having regard in particular to the evidence or indications regarding the threats, the victims or perpetrator. The reference database of persons should be appropriate for each use case in each of the three situations mentioned above. | deleted | (20) In order to ensure that those systems are used in a responsible and proportionate manner, it is also important to establish that, in each of those exhaustively listed and narrowly defined situations, certain elements should be taken into account, in particular as regards the nature of the situation giving rise to the request and the consequences of the use for the rights and freedoms of all persons concerned and the safeguards and conditions provided for with the use. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement should be subject to appropriate limits in time and space, having regard in particular to the evidence or indications regarding the threats, the victims or perpetrator. The reference database of persons should be appropriate for each use case in each of the situations mentioned above. | (20) In order to ensure that those systems are used in a responsible and proportionate manner, it is also important to establish that, in each of those exhaustively listed and narrowly defined situations, certain elements should be taken into account, in particular as regards the nature of the situation giving rise to the request and the consequences of the use for the rights and freedoms of all persons concerned and the safeguards and conditions provided for with the use. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement should be subject to appropriate limits in time and space, having regard in particular to the evidence or indications regarding the threats, the victims or perpetrator. The reference database of persons should be appropriate for each use case in each of the three situations mentioned above. |
Recital (21) | (21) Each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for the purpose of law enforcement should be subject to an express and specific authorisation by a judicial authority or by an independent administrative authority of a Member State. Such authorisation should in principle be obtained prior to the use, except in duly justified situations of urgency, that is, situations where the need to use the systems in question is such as to make it effectively and objectively impossible to obtain an authorisation before commencing the use. In such situations of urgency, the use should be restricted to the absolute minimum necessary and be subject to appropriate safeguards and conditions, as determined in national law and specified in the context of each individual urgent use case by the law enforcement authority itself. In addition, the law enforcement authority should in such situations seek to obtain an authorisation as soon as possible, whilst providing the reasons for not having been able to request it earlier. | deleted | (21) Each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for the purpose of law enforcement should be subject to an express and specific authorisation by a judicial authority or by an independent administrative authority of a Member State. Such authorisation should in principle be obtained prior to the use of the system with a view to identify a person or persons. Exceptions to this rule should be allowed in duly justified situations of urgency, that is, situations where the need to use the systems in question is such as to make it effectively and objectively impossible to obtain an authorisation before commencing the use. In such situations of urgency, the use should be restricted to the absolute minimum necessary and be subject to appropriate safeguards and conditions, as determined in national law and specified in the context of each individual urgent use case by the law enforcement authority itself. In addition, the law enforcement authority should in such situations seek to obtain an authorisation as soon as possible, whilst providing the reasons for not having been able to request it earlier. | (21) Each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for the purpose of law enforcement should be subject to an express and specific authorisation by a judicial authority or by an independent administrative authority of a Member State. Such authorisation should in principle be obtained prior to the use of the system with a view to identify a person or persons. Exceptions to this rule should be allowed in duly justified situations of urgency, that is, situations where the need to use the systems in question is such as to make it effectively and objectively impossible to obtain an authorisation before commencing the use. In such situations of urgency, the use should be restricted to the absolute minimum necessary and be subject to appropriate safeguards and conditions, as determined in national law and specified in the context of each individual urgent use case by the law enforcement authority itself. In addition, the law enforcement authority should in such situations seek to obtain an authorisation as soon as possible, whilst providing the reasons for not having been able to request it earlier. |
Recital (22) | (22) Furthermore, it is appropriate to provide, within the exhaustive framework set by this Regulation that such use in the territory of a Member State in accordance with this Regulation should only be possible where and in as far as the Member State in question has decided to expressly provide for the possibility to authorise such use in its detailed rules of national law. Consequently, Member States remain free under this Regulation not to provide for such a possibility at all or to only provide for such a possibility in respect of some of the objectives capable of justifying authorised use identified in this Regulation. | deleted | (22) Furthermore, it is appropriate to provide, within the exhaustive framework set by this Regulation that such use in the territory of a Member State in accordance with this Regulation should only be possible where and in as far as the Member State in question has decided to expressly provide for the possibility to authorise such use in its detailed rules of national law. Consequently, Member States remain free under this Regulation not to provide for such a possibility at all or to only provide for such a possibility in respect of some of the objectives capable of justifying authorised use identified in this Regulation. | (22) Furthermore, it is appropriate to provide, within the exhaustive framework set by this Regulation that such use in the territory of a Member State in accordance with this Regulation should only be possible where and in as far as the Member State in question has decided to expressly provide for the possibility to authorise such use in its detailed rules of national law. Consequently, Member States remain free under this Regulation not to provide for such a possibility at all or to only provide for such a possibility in respect of some of the objectives capable of justifying authorised use identified in this Regulation. |
Recital (23) | (23) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement necessarily involves the processing of biometric data. The rules of this Regulation that prohibit, subject to certain exceptions, such use, which are based on Article 16 TFEU, should apply as lex specialis in respect of the rules on the processing of biometric data contained in Article 10 of Directive (EU) 2016/680, thus regulating such use and the processing of biometric data involved in an exhaustive manner. Therefore, such use and processing should only be possible in as far as it is compatible with the framework set by this Regulation, without there being scope, outside that framework, for the competent authorities, where they act for purpose of law enforcement, to use such systems and process such data in connection thereto on the grounds listed in Article 10 of Directive (EU) 2016/680. In this context, this Regulation is not intended to provide the legal basis for the processing of personal data under Article 8 of Directive 2016/680. However, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for purposes other than law enforcement, including by competent authorities, should not be covered by the specific framework regarding such use for the purpose of law enforcement set by this Regulation. Such use for purposes other than law enforcement should therefore not be subject to the requirement of an authorisation under this Regulation and the applicable detailed rules of national law that may give effect to it. | deleted | (23) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement necessarily involves the processing of biometric data. The rules of this Regulation that prohibit, subject to certain exceptions, such use, which are based on Article 16 TFEU, should apply as lex specialis in respect of the rules on the processing of biometric data contained in Article 10 of Directive (EU) 2016/680, thus regulating such use and the processing of biometric data involved in an exhaustive manner. Therefore, such use and processing should only be possible in as far as it is compatible with the framework set by this Regulation, without there being scope, outside that framework, for the competent authorities, where they act for purpose of law enforcement, to use such systems and process such data in connection thereto on the grounds listed in Article 10 of Directive (EU) 2016/680. In this context, this Regulation is not intended to provide the legal basis for the processing of personal data under Article 8 of Directive 2016/680. However, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for purposes other than law enforcement, including by competent authorities, should not be covered by the specific framework regarding such use for the purpose of law enforcement set by this Regulation. Such use for purposes other than law enforcement should therefore not be subject to the requirement of an authorisation under this Regulation and the applicable detailed rules of national law that may give effect to it. | (23) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement necessarily involves the processing of biometric data. The rules of this Regulation that prohibit, subject to certain exceptions, such use, which are based on Article 16 TFEU, should apply as lex specialis in respect of the rules on the processing of biometric data contained in Article 10 of Directive (EU) 2016/680, thus regulating such use and the processing of biometric data involved in an exhaustive manner. Therefore, such use and processing should only be possible in as far as it is compatible with the framework set by this Regulation, without there being scope, outside that framework, for the competent authorities, where they act for purpose of law enforcement, to use such systems and process such data in connection thereto on the grounds listed in Article 10 of Directive (EU) 2016/680. In this context, this Regulation is not intended to provide the legal basis for the processing of personal data under Article 8 of Directive 2016/680. However, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for purposes other than law enforcement, including by competent authorities, should not be covered by the specific framework regarding such use for the purpose of law enforcement set by this Regulation. Such use for purposes other than law enforcement should therefore not be subject to the requirement of an authorisation under this Regulation and the applicable detailed rules of national law that may give effect to it. |
Recital (24) | (24) Any processing of biometric data and other personal data involved in the use of AI systems for biometric identification, other than in connection to the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement as regulated by this Regulation, including where those systems are used by competent authorities in publicly accessible spaces for other purposes than law enforcement, should continue to comply with all requirements resulting from Article 9(1) of Regulation (EU) 2016/679, Article 10(1) of Regulation (EU) 2018/1725 and Article 10 of Directive (EU) 2016/680, as applicable. | (24) Any processing of biometric data and other personal data involved in the use of AI systems for biometric identification, other than in connection to the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces as regulated by this Regulation should continue to comply with all requirements resulting from Article 9(1) of Regulation (EU) 2016/679, Article 10(1) of Regulation (EU) 2018/1725 and Article 10 of Directive (EU) 2016/680, as applicable. | (24) Any processing of biometric data and other personal data involved in the use of AI systems for biometric identification, other than in connection to the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement as regulated by this Regulation, should continue to comply with all requirements resulting from Article 10 of Directive (EU) 2016/680. For purposes other than law enforcement, Article 9(1) of Regulation (EU) 2016/679 and Article 10(1) of Regulation (EU) 2018/1725 prohibit the processing of biometric data for the purpose of uniquely identifying a natural person, unless one of the situations in the respective second paragraphs of those two articles applies. | (24) Any processing of biometric data and other personal data involved in the use of AI systems for biometric identification, other than in connection to the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement as regulated by this Regulation, should continue to comply with all requirements resulting from Article 10 of Directive (EU) 2016/680. For purposes other than law enforcement, the processing should continue to comply with all requirements resulting from Article 9(1) of Regulation (EU) 2016/679 and Article 10(1) of Regulation (EU) 2018/1725. These regulations prohibit the processing of biometric data for the purpose of uniquely identifying a natural person, unless one of the situations in the respective second paragraphs of those two articles applies. |
Recital (25) | (25) In accordance with Article 6a of Protocol No 21 on the position of the United Kingdom and Ireland in respect of the area of freedom, security and justice, as annexed to the TEU and to the TFEU, Ireland is not bound by the rules laid down in Article 5(1), point (d), (2) and (3) of this Regulation adopted on the basis of Article 16 of the TFEU which relate to the processing of personal data by the Member States when carrying out activities falling within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU, where Ireland is not bound by the rules governing the forms of judicial cooperation in criminal matters or police cooperation which require compliance with the provisions laid down on the basis of Article 16 of the TFEU. | (25) In accordance with Article 6a of Protocol No 21 on the position of the United Kingdom and Ireland in respect of the area of freedom, security and justice, as annexed to the TEU and to the TFEU, Ireland is not bound by the rules laid down in Article 5(1), point (d), of this Regulation adopted on the basis of Article 16 of the TFEU which relate to the processing of personal data by the Member States when carrying out activities falling within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU, where Ireland is not bound by the rules governing the forms of judicial cooperation in criminal matters or police cooperation which require compliance with the provisions laid down on the basis of Article 16 of the TFEU. | (25) In accordance with Article 6a of Protocol No 21 on the position of the United Kingdom and Ireland in respect of the area of freedom, security and justice, as annexed to the TEU and to the TFEU, Ireland is not bound by the rules laid down in Article 5(1), point (d), (2), (3) and (4) of this Regulation adopted on the basis of Article 16 of the TFEU which relate to the processing of personal data by the Member States when carrying out activities falling within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU, where Ireland is not bound by the rules governing the forms of judicial cooperation in criminal matters or police cooperation which require compliance with the provisions laid down on the basis of Article 16 of the TFEU. | (25) In accordance with Article 6a of Protocol No 21 on the position of the United Kingdom and Ireland in respect of the area of freedom, security and justice, as annexed to the TEU and to the TFEU, Ireland is not bound by the rules laid down in Article 5(1), point (d), (2), (3) and (4) of this Regulation adopted on the basis of Article 16 of the TFEU which relate to the processing of personal data by the Member States when carrying out activities falling within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU, where Ireland is not bound by the rules governing the forms of judicial cooperation in criminal matters or police cooperation which require compliance with the provisions laid down on the basis of Article 16 of the TFEU. |
Recital (26) | (26) In accordance with Articles 2 and 2a of Protocol No 22 on the position of Denmark, annexed to the TEU and TFEU, Denmark is not bound by rules laid down in Article 5(1), point (d), (2) and (3) of this Regulation adopted on the basis of Article 16 of the TFEU, or subject to their application, which relate to the processing of personal data by the Member States when carrying out activities falling within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU. | (26) In accordance with Articles 2 and 2a of Protocol No 22 on the position of Denmark, annexed to the TEU and TFEU, Denmark is not bound by rules laid down in Article 5(1), point (d) of this Regulation adopted on the basis of Article 16 of the TFEU, or subject to their application, which relate to the processing of personal data by the Member States when carrying out activities falling within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU. | (26) In accordance with Articles 2 and 2a of Protocol No 22 on the position of Denmark, annexed to the TEU and TFEU, Denmark is not bound by rules laid down in Article 5(1), point (d), (2), (3) and (4) of this Regulation adopted on the basis of Article 16 of the TFEU, or subject to their application, which relate to the processing of personal data by the Member States when carrying out activities falling within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU. | (26) In accordance with Articles 2 and 2a of Protocol No 22 on the position of Denmark, annexed to the TEU and TFEU, Denmark is not bound by rules laid down in Article 5(1), point (d), (2), (3) and (4) of this Regulation adopted on the basis of Article 16 of the TFEU, or subject to their application, which relate to the processing of personal data by the Member States when carrying out activities falling within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU. |
Recital (26a) (new) | (26a) AI systems used by law enforcement authorities or on their behalf to make predictions, profiles or risk assessments based on profiling of natural persons or data analysis based on personality traits and characteristics, including the person’s location, or past criminal behaviour of natural persons or groups of persons for the purpose of predicting the occurrence or reoccurrence of an actual or potential criminal offence(s) or other criminalised social behaviour or administrative offences, including fraud-prediction systems, hold a particular risk of discrimination against certain persons or groups of persons, as they violate human dignity as well as the key legal principle of presumption of innocence. Such AI systems should therefore be prohibited. | AI systems utilized by law enforcement authorities, or on their behalf, for the purpose of making predictions, profiles, or risk assessments based on profiling of natural persons or data analysis based on personality traits and characteristics, including the person’s location, or past criminal behaviour of natural persons or groups of persons, with the aim of predicting the occurrence or reoccurrence of an actual or potential criminal offence(s) or other criminalised social behaviour or administrative offences, including fraud-prediction systems, should be prohibited. This is due to the particular risk they hold of discrimination against certain persons or groups of persons, as they violate human dignity as well as the key legal principle of presumption of innocence. | |
Recital (26b) (new) | (26b) The indiscriminate and untargeted scraping of biometric data from social media or CCTV footage to create or expand facial recognition databases adds to the feeling of mass surveillance and can lead to gross violations of fundamental rights, including the right to privacy. The use of AI systems with this intended purpose should therefore be prohibited. | The unregulated and indiscriminate collection of biometric data from sources such as social media or CCTV footage for the purpose of creating or expanding facial recognition databases contributes to a sense of mass surveillance and has the potential to result in severe infringements of fundamental rights, including the right to privacy. Therefore, the deployment of AI systems with this specific intention should be strictly prohibited. | |
Recital (26c) (new) | (26c) There are serious concerns about the scientific basis of AI systems aiming to detect emotions, physical or physiological features such as facial expressions, movements, pulse frequency or voice. Emotions or expressions of emotions and perceptions thereof vary considerably across cultures and situations, and even within a single individual. Among the key shortcomings of such technologies, are the limited reliability (emotion categories are neither reliably expressed through, nor unequivocally associated with, a common set of physical or physiological movements), the lack of specificity (physical or physiological expressions do not perfectly match emotion categories) and the limited generalisability (the effects of context and culture are not sufficiently considered). Reliability issues and consequently, major risks for abuse, may especially arise when deploying the system in real-life situations related to law enforcement, border management, workplace and education institutions. Therefore, the placing on the market, putting into service, or use of AI systems intended to be used in these contexts to detect the emotional state of individuals should be prohibited. | There are significant concerns regarding the scientific foundation of AI systems designed to detect emotions, physical or physiological features such as facial expressions, movements, pulse frequency or voice. The expression of emotions and their perception can vary greatly across cultures, situations, and even within a single individual. Key limitations of these technologies include limited reliability, as emotion categories are not consistently expressed through or associated with a common set of physical or physiological movements. There is also a lack of specificity, as physical or physiological expressions do not perfectly align with emotion categories, and limited generalisability, as the effects of context and culture are not adequately considered. These reliability issues pose major risks for misuse, particularly when the system is deployed in real-life situations related to law enforcement, border management, workplaces, and educational institutions. Therefore, the placing on the market, putting into service, or use of AI systems intended for use in these contexts to detect the emotional state of individuals should be prohibited. | |
Recital (26d) (new) | (26d) Practices that are prohibited by Union legislation, including data protection law, non-discrimination law, consumer protection law, and competition law, should not be affected by this Regulation | This Regulation should not affect practices that are prohibited by Union legislation, including data protection law, non-discrimination law, consumer protection law, and competition law. | ||
Recital (27) | (27) High-risk AI systems should only be placed on the Union market or put into service if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union and such limitation minimises any potential restriction to international trade, if any. | (27) High-risk AI systems should only be placed on the Union market, put into service or used if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law, including fundamental rights, democracy, the rule of law or the environment. In order to ensure alignment with sectoral legislation and avoid duplications, requirements for high-risk AI systems should take into account sectoral legislation laying down requirements for high-risk AI systems included in the scope of this Regulation, such as Regulation (EU) 2017/745 on Medical Devices and Regulation (EU) 2017/746 on In Vitro Diagnostic Devices or Directive 2006/42/EC on Machinery. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union and such limitation minimises any potential restriction to international trade, if any. Given the rapid pace of technological development, as well as the potential changes in the use of AI systems, the list of high-risk areas and use-cases in Annex III should nonetheless be subject to permanent review through the exercise of regular assessment. | (27) High-risk AI systems should only be placed on the Union market or put into service if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union and such limitation minimises any potential restriction to international trade, if any. | (27) High-risk AI systems should only be placed on the Union market, put into service or used if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law, including fundamental rights, democracy, the rule of law or the environment. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union and such limitation minimises any potential restriction to international trade, if any. In order to ensure alignment with sectoral legislation and avoid duplications, requirements for high-risk AI systems should take into account sectoral legislation laying down requirements for high-risk AI systems included in the scope of this Regulation, such as Regulation (EU) 2017/745 on Medical Devices and Regulation (EU) 2017/746 on In Vitro Diagnostic Devices or Directive 2006/42/EC on Machinery. Given the rapid pace of technological development, as well as the potential changes in the use of AI systems, the list of high-risk areas and use-cases in Annex III should nonetheless be subject to permanent review through the exercise of regular assessment. |
Recital (28) | (28) AI systems could produce adverse outcomes to health and safety of persons, in particular when such systems operate as components of products. Consistently with the objectives of Union harmonisation legislation to facilitate the free movement of products in the internal market and to ensure that only safe and otherwise compliant products find their way into the market, it is important that the safety risks that may be generated by a product as a whole due to its digital components, including AI systems, are duly prevented and mitigated. For instance, increasingly autonomous robots, whether in the context of manufacturing or personal assistance and care should be able to safely operate and perform their functions in complex environments. Similarly, in the health sector where the stakes for life and health are particularly high, increasingly sophisticated diagnostics systems and systems supporting human decisions should be reliable and accurate. The extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter is of particular relevance when classifying an AI system as high-risk. Those rights include the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, and non-discrimination, consumer protection, workers’ rights, rights of persons with disabilities, right to an effective remedy and to a fair trial, right of defence and the presumption of innocence, right to good administration. In addition to those rights, it is important to highlight that children have specific rights as enshrined in Article 24 of the EU Charter and in the United Nations Convention on the Rights of the Child (further elaborated in the UNCRC General Comment No. 25 as regards the digital environment), both of which require consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their well-being. The fundamental right to a high level of environmental protection enshrined in the Charter and implemented in Union policies should also be considered when assessing the severity of the harm that an AI system can cause, including in relation to the health and safety of persons. | (28) AI systems could have an adverse impact to health and safety of persons, in particular when such systems operate as safety components of products. Consistently with the objectives of Union harmonisation legislation to facilitate the free movement of products in the internal market and to ensure that only safe and otherwise compliant products find their way into the market, it is important that the safety risks that may be generated by a product as a whole due to its digital components, including AI systems, are duly prevented and mitigated. For instance, increasingly autonomous robots, whether in the context of manufacturing or personal assistance and care should be able to safely operate and perform their functions in complex environments. Similarly, in the health sector where the stakes for life and health are particularly high, increasingly sophisticated diagnostics systems and systems supporting human decisions should be reliable and accurate. | (28) AI systems could produce adverse outcomes to health and safety of persons, in particular when such systems operate as components of products. Consistently with the objectives of Union harmonisation legislation to facilitate the free movement of products in the internal market and to ensure that only safe and otherwise compliant products find their way into the market, it is important that the safety risks that may be generated by a product as a whole due to its digital components, including AI systems, are duly prevented and mitigated. For instance, increasingly autonomous robots, whether in the context of manufacturing or personal assistance and care should be able to safely operate and perform their functions in complex environments. Similarly, in the health sector where the stakes for life and health are particularly high, increasingly sophisticated diagnostics systems and systems supporting human decisions should be reliable and accurate. The extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter is of particular relevance when classifying an AI system as high-risk. Those rights include the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, and non-discrimination, consumer protection, workers’ rights, rights of persons with disabilities, right to an effective remedy and to a fair trial, right of defence and the presumption of innocence, right to good administration. In addition to those rights, it is important to highlight that children have specific rights as enshrined in Article 24 of the EU Charter and in the United Nations Convention on the Rights of the Child (further elaborated in the UNCRC General Comment No. 25 as regards the digital environment), both of which require consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their well-being. The fundamental right to a high level of environmental protection enshrined in the Charter and implemented in Union policies should also be considered when assessing the severity of the harm that an AI system can cause, including in relation to the health and safety of persons. | (28) AI systems could produce adverse outcomes to health and safety of persons, particularly when such systems operate as components or safety components of products. In line with the objectives of Union harmonisation legislation to facilitate the free movement of products in the internal market and to ensure that only safe and compliant products find their way into the market, it is crucial that the safety risks that may be generated by a product as a whole due to its digital components, including AI systems, are duly prevented and mitigated. For instance, increasingly autonomous robots, whether in the context of manufacturing or personal assistance and care, should be able to safely operate and perform their functions in complex environments. Similarly, in the health sector where the stakes for life and health are particularly high, increasingly sophisticated diagnostics systems and systems supporting human decisions should be reliable and accurate. The extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter is of particular relevance when classifying an AI system as high-risk. These rights include the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, and non-discrimination, consumer protection, workers’ rights, rights of persons with disabilities, right to an effective remedy and to a fair trial, right of defence and the presumption of innocence, right to good administration. In addition to these rights, it is important to highlight that children have specific rights as enshrined in Article 24 of the EU Charter and in the United Nations Convention on the Rights of the Child (further elaborated in the UNCRC General Comment No. 25 as regards the digital environment), both of which require consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their well-being. The fundamental right to a high level of environmental protection enshrined in the Charter and implemented in Union policies should also be considered when assessing the severity of the harm that an AI system can cause, including in relation to the health and safety of persons. |
Recital (28a) | (28a) The extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter is of particular relevance when classifying an AI system as high-risk. Those rights include the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, and non-discrimination, right to education consumer protection, workers’ rights, rights of persons with disabilities, gender equality, intellectual property rights, right to an effective remedy and to a fair trial, right of defence and the presumption of innocence, right to good administration. In addition to those rights, it is important to highlight that children have specific rights as enshrined in Article 24 of the EU Charter and in the United Nations Convention on the Rights of the Child (further elaborated in the UNCRC General Comment No. 25 as regards the digital environment), both of which require consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their well-being. The fundamental right to a high level of environmental protection enshrined in the Charter and implemented in Union policies should also be considered when assessing the severity of the harm that an AI system can cause, including in relation to the health and safety of persons or to the environment. | The classification of an AI system as high-risk should take into account the extent of its adverse impact on the fundamental rights protected by the Charter. These rights encompass the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, non-discrimination, right to education, consumer protection, workers’ rights, rights of persons with disabilities, gender equality, intellectual property rights, right to an effective remedy and to a fair trial, right of defence and the presumption of innocence, and right to good administration. Furthermore, it is crucial to acknowledge the specific rights of children as outlined in Article 24 of the EU Charter and in the United Nations Convention on the Rights of the Child, including the provisions detailed in the UNCRC General Comment No. 25 regarding the digital environment. These provisions necessitate the consideration of children's vulnerabilities and the provision of necessary protection and care for their well-being. Lastly, the fundamental right to a high level of environmental protection, as enshrined in the Charter and implemented in Union policies, should also be taken into account when assessing the severity of the harm that an AI system can cause, particularly in relation to the health and safety of persons or to the environment. |
Recital (29) | (29) As regards high-risk AI systems that are safety components of products or systems, or which are themselves products or systems falling within the scope of Regulation (EC) No 300/2008 of the European Parliament and of the Council39 , Regulation (EU) No 167/2013 of the European Parliament and of the Council40 , Regulation (EU) No 168/2013 of the European Parliament and of the Council41 , Directive 2014/90/EU of the European Parliament and of the Council42 , Directive (EU) 2016/797 of the European Parliament and of the Council43 , Regulation (EU) 2018/858 of the European Parliament and of the Council44 , Regulation (EU) 2018/1139 of the European Parliament and of the Council45 , and Regulation (EU) 2019/2144 of the European Parliament and of the Council46 , it is appropriate to amend those acts to ensure that the Commission takes into account, on the basis of the technical and regulatory specificities of each sector, and without interfering with existing governance, conformity assessment and enforcement mechanisms and authorities established therein, the mandatory requirements for high-risk AI systems laid down in this Regulation when adopting any relevant future delegated or implementing acts on the basis of those acts. ––––––––– 39. Regulation (EC) No 300/2008 of the European Parliament and of the Council of 11 March 2008 on common rules in the field of civil aviation security and repealing Regulation (EC) No 2320/2002 (OJ L 97, 9.4.2008, p. 72). 40. Regulation (EU) No 167/2013 of the European Parliament and of the Council of 5 February 2013 on the approval and market surveillance of agricultural and forestry vehicles (OJ L 60, 2.3.2013, p. 1). 41. Regulation (EU) No 168/2013 of the European Parliament and of the Council of 15 January 2013 on the approval and market surveillance of two- or three-wheel vehicles and quadricycles (OJ L 60, 2.3.2013, p. 52). 42. Directive 2014/90/EU of the European Parliament and of the Council of 23 July 2014 on marine equipment and repealing Council Directive 96/98/EC (OJ L 257, 28.8.2014, p. 146). 43. Directive (EU) 2016/797 of the European Parliament and of the Council of 11 May 2016 on the interoperability of the rail system within the European Union (OJ L 138, 26.5.2016, p. 44). 44. Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the approval and market surveillance of motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicles, amending Regulations (EC) No 715/2007 and (EC) No 595/2009 and repealing Directive 2007/46/EC (OJ L 151, 14.6.2018, p. 1). 45. Regulation (EU) 2018/1139 of the European Parliament and of the Council of 4 July 2018 on common rules in the field of civil aviation and establishing a European Union Aviation Safety Agency, and amending Regulations (EC) No 2111/2005, (EC) No 1008/2008, (EU) No 996/2010, (EU) No 376/2014 and Directives 2014/30/EU and 2014/53/EU of the European Parliament and of the Council, and repealing Regulations (EC) No 552/2004 and (EC) No 216/2008 of the European Parliament and of the Council and Council Regulation (EEC) No 3922/91 (OJ L 212, 22.8.2018, p. 1). 46. 
Regulation (EU) 2019/2144 of the European Parliament and of the Council of 27 November 2019 on type-approval requirements for motor vehicles and their trailers, and systems, components and separate technical units intended for such vehicles, as regards their general safety and the protection of vehicle occupants and vulnerable road users, amending Regulation (EU) 2018/858 of the European Parliament and of the Council and repealing Regulations (EC) No 78/2009, (EC) No 79/2009 and (EC) No 661/2009 of the European Parliament and of the Council and Commission Regulations (EC) No 631/2009, (EU) No 406/2010, (EU) No 672/2010, (EU) No 1003/2010, (EU) No 1005/2010, (EU) No 1008/2010, (EU) No 1009/2010, (EU) No 19/2011, (EU) No 109/2011, (EU) No 458/2011, (EU) No 65/2012, (EU) No 130/2012, (EU) No 347/2012, (EU) No 351/2012, (EU) No 1230/2012 and (EU) 2015/166 (OJ L 325, 16.12.2019, p. 1). | (29) As regards high-risk AI systems that are safety components of products or systems, or which are themselves products or systems falling within the scope of Regulation (EC) No 300/2008 of the European Parliament and of the Council39 , Regulation (EU) No 167/2013 of the European Parliament and of the Council40 , Regulation (EU) No 168/2013 of the European Parliament and of the Council41 , Directive 2014/90/EU of the European Parliament and of the Council42 , Directive (EU) 2016/797 of the European Parliament and of the Council43 , Regulation (EU) 2018/858 of the European Parliament and of the Council44 , Regulation (EU) 2018/1139 of the European Parliament and of the Council45 , and Regulation (EU) 2019/2144 of the European Parliament and of the Council46 , it is appropriate to amend those acts to ensure that the Commission takes into account, on the basis of the technical and regulatory specificities of each sector, and without interfering with existing governance, conformity assessment, market surveillanceand enforcement mechanisms and authorities established therein, the mandatory requirements for high-risk AI systems laid down in this Regulation when adopting any relevant future delegated or implementing acts on the basis of those acts. ––––––––– 39. Regulation (EC) No 300/2008 of the European Parliament and of the Council of 11 March 2008 on common rules in the field of civil aviation security and repealing Regulation (EC) No 2320/2002 (OJ L 97, 9.4.2008, p. 72). 40. Regulation (EU) No 167/2013 of the European Parliament and of the Council of 5 February 2013 on the approval and market surveillance of agricultural and forestry vehicles (OJ L 60, 2.3.2013, p. 1). 41. Regulation (EU) No 168/2013 of the European Parliament and of the Council of 15 January 2013 on the approval and market surveillance of two- or three-wheel vehicles and quadricycles (OJ L 60, 2.3.2013, p. 52). 42. Directive 2014/90/EU of the European Parliament and of the Council of 23 July 2014 on marine equipment and repealing Council Directive 96/98/EC (OJ L 257, 28.8.2014, p. 146). 43. Directive (EU) 2016/797 of the European Parliament and of the Council of 11 May 2016 on the interoperability of the rail system within the European Union (OJ L 138, 26.5.2016, p. 44). 44. Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the approval and market surveillance of motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicles, amending Regulations (EC) No 715/2007 and (EC) No 595/2009 and repealing Directive 2007/46/EC (OJ L 151, 14.6.2018, p. 1). 45. 
Regulation (EU) 2018/1139 of the European Parliament and of the Council of 4 July 2018 on common rules in the field of civil aviation and establishing a European Union Aviation Safety Agency, and amending Regulations (EC) No 2111/2005, (EC) No 1008/2008, (EU) No 996/2010, (EU) No 376/2014 and Directives 2014/30/EU and 2014/53/EU of the European Parliament and of the Council, and repealing Regulations (EC) No 552/2004 and (EC) No 216/2008 of the European Parliament and of the Council and Council Regulation (EEC) No 3922/91 (OJ L 212, 22.8.2018, p. 1). 46. Regulation (EU) 2019/2144 of the European Parliament and of the Council of 27 November 2019 on type-approval requirements for motor vehicles and their trailers, and systems, components and separate technical units intended for such vehicles, as regards their general safety and the protection of vehicle occupants and vulnerable road users, amending Regulation (EU) 2018/858 of the European Parliament and of the Council and repealing Regulations (EC) No 78/2009, (EC) No 79/2009 and (EC) No 661/2009 of the European Parliament and of the Council and Commission Regulations (EC) No 631/2009, (EU) No 406/2010, (EU) No 672/2010, (EU) No 1003/2010, (EU) No 1005/2010, (EU) No 1008/2010, (EU) No 1009/2010, (EU) No 19/2011, (EU) No 109/2011, (EU) No 458/2011, (EU) No 65/2012, (EU) No 130/2012, (EU) No 347/2012, (EU) No 351/2012, (EU) No 1230/2012 and (EU) 2015/166 (OJ L 325, 16.12.2019, p. 1). | (29) As regards high-risk AI systems that are safety components of products or systems, or which are themselves products or systems falling within the scope of Regulation (EC) No 300/2008 of the European Parliament and of the Council10, Regulation (EU) No 167/2013 of the European Parliament and of the Council11, Regulation (EU) No 168/2013 of the European Parliament and of the Council12, Directive 2014/90/EU of the European Parliament and of the Council13, Directive (EU) 2016/797 of the European Parliament and of the Council14, Regulation (EU) 2018/858 of the European Parliament and of the Council15, Regulation (EU) 2018/1139 of the European Parliament and of the Council16, and Regulation (EU) 2019/2144 of the European Parliament and of the Council17, it is appropriate to amend those acts to ensure that the Commission takes into account, on the basis of the technical and regulatory specificities of each sector, and without interfering with existing governance, conformity assessment and enforcement mechanisms and authorities established therein, the mandatory requirements for high-risk AI systems laid down in this Regulation when adopting any relevant future delegated or implementing acts on the basis of those acts. ––––––– 10. Regulation (EC) No 300/2008 of the European Parliament and of the Council of 11 March 2008 on common rules in the field of civil aviation security and repealing Regulation (EC) No 2320/2002 (OJ L 97, 9.4.2008, p. 72). 11. Regulation (EU) No 167/2013 of the European Parliament and of the Council of 5 February 2013 on the approval and market surveillance of agricultural and forestry vehicles (OJ L 60, 2.3.2013, p. 1). 12. Regulation (EU) No 168/2013 of the European Parliament and of the Council of 15 January 2013 on the approval and market surveillance of two- or three-wheel vehicles and quadricycles (OJ L 60, 2.3.2013, p. 52). 13. Directive 2014/90/EU of the European Parliament and of the Council of 23 July 2014 on marine equipment and repealing Council Directive 96/98/EC (OJ L 257, 28.8.2014, p. 146). 14. 
Directive (EU) 2016/797 of the European Parliament and of the Council of 11 May 2016 on the interoperability of the rail system within the European Union (OJ L 138, 26.5.2016, p. 44). 15. Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the approval and market surveillance of motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicles, amending Regulations (EC) No 715/2007 and (EC) No 595/2009 and repealing Directive 2007/46/EC (OJ L 151, 14.6.2018, p. 1). 16. Regulation (EU) 2018/1139 of the European Parliament and of the Council of 4 July 2018 on common rules in the field of civil aviation and establishing a European Union Aviation Safety Agency, and amending Regulations (EC) No 2111/2005, (EC) No 1008/2008, (EU) No 996/2010, (EU) No 376/2014 and Directives 2014/30/EU and 2014/53/EU of the European Parliament and of the Council, and repealing Regulations (EC) No 552/2004 and (EC) No 216/2008 of the European Parliament and of the Council and Council Regulation (EEC) No 3922/91 (OJ L 212, 22.8.2018, p. 1). 17. Regulation (EU) 2019/2144 of the European Parliament and of the Council of 27 November 2019 on type- approval requirements for motor vehicles and their trailers, and systems, components and separate technical units intended for such vehicles, as regards their general safety and the protection of vehicle occupants and vulnerable road users, amending Regulation (EU) 2018/858 of the European Parliament and of the Council and repealing Regulations (EC) No 78/2009, (EC) No 79/2009 and (EC) No 661/2009 of the European Parliament and of the Council and Commission Regulations (EC) No 631/2009, (EU) No 406/2010, (EU) No 672/2010, (EU) No 1003/2010, (EU) No 1005/2010, (EU) No 1008/2010, (EU) No 1009/2010, (EU) No 19/2011, (EU) No 109/2011, (EU) No 458/2011, (EU) No 65/2012, (EU) No 130/2012, (EU) No 347/2012, (EU) No 351/2012, (EU) No 1230/2012 and (EU) 2015/166 (OJ L 325, 16.12.2019, p. 1). | (29) As regards high-risk AI systems that are safety components of products or systems, or which are themselves products or systems falling within the scope of Regulation (EC) No 300/2008 of the European Parliament and of the Council39 , Regulation (EU) No 167/2013 of the European Parliament and of the Council40 , Regulation (EU) No 168/2013 of the European Parliament and of the Council41 , Directive 2014/90/EU of the European Parliament and of the Council42 , Directive (EU) 2016/797 of the European Parliament and of the Council43 , Regulation (EU) 2018/858 of the European Parliament and of the Council44 , Regulation (EU) 2018/1139 of the European Parliament and of the Council45 , and Regulation (EU) 2019/2144 of the European Parliament and of the Council46 , it is appropriate to amend those acts to ensure that the Commission takes into account, on the basis of the technical and regulatory specificities of each sector, and without interfering with existing governance, conformity assessment, market surveillance and enforcement mechanisms and authorities established therein, the mandatory requirements for high-risk AI systems laid down in this Regulation when adopting any relevant future delegated or implementing acts on the basis of those acts. |
Recital (30) | (30) As regards AI systems that are safety components of products, or which are themselves products, falling within the scope of certain Union harmonisation legislation, it is appropriate to classify them as high-risk under this Regulation if the product in question undergoes the conformity assessment procedure with a third-party conformity assessment body pursuant to that relevant Union harmonisation legislation. In particular, such products are machinery, toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices, and in vitro diagnostic medical devices. | (30) As regards AI systems that are safety components of products, or which are themselves products, falling within the scope of certain Union harmonisation law listed in Annex II, it is appropriate to classify them as high-risk under this Regulation if the product in question undergoes the conformity assessment procedure in order to ensure compliance with essential safety requirements with a third-party conformity assessment body pursuant to that relevant Union harmonisation law. In particular, such products are machinery, toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices, and in vitro diagnostic medical devices. | (30) As regards AI systems that are safety components of products, or which are themselves products, falling within the scope of certain Union harmonisation legislation, it is appropriate to classify them as high-risk under this Regulation if the product in question undergoes the conformity assessment procedure with a third-party conformity assessment body pursuant to that relevant Union harmonisation legislation. In particular, such products are machinery, toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices, and in vitro diagnostic medical devices. | (30) As regards AI systems that are safety components of products, or which are themselves products, falling within the scope of certain Union harmonisation legislation listed in Annex II, it is appropriate to classify them as high-risk under this Regulation if the product in question undergoes the conformity assessment procedure in order to ensure compliance with essential safety requirements with a third-party conformity assessment body pursuant to that relevant Union harmonisation legislation. In particular, such products are machinery, toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices, and in vitro diagnostic medical devices. |
Recital (31) | (31) The classification of an AI system as high-risk pursuant to this Regulation should not necessarily mean that the product whose safety component is the AI system, or the AI system itself as a product, is considered ‘high-risk’ under the criteria established in the relevant Union harmonisation legislation that applies to the product. This is notably the case for Regulation (EU) 2017/745 of the European Parliament and of the Council47 and Regulation (EU) 2017/746 of the European Parliament and of the Council48 , where a third-party conformity assessment is provided for medium-risk and high-risk products. –––––– 47. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1). 48. Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (OJ L 117, 5.5.2017, p. 176). | (31) The classification of an AI system as high-risk pursuant to this Regulation should not mean that the product whose safety component is the AI system, or the AI system itself as a product, is considered ‘high-risk’ under the criteria established in the relevant Union harmonisation law that applies to the product. This is notably the case for Regulation (EU) 2017/745 of the European Parliament and of the Council47 and Regulation (EU) 2017/746 of the European Parliament and of the Council48 , where a third-party conformity assessment is provided for medium-risk and high-risk products. –––––– 47. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1). 48. Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (OJ L 117, 5.5.2017, p. 176). | (31) The classification of an AI system as high-risk pursuant to this Regulation should not necessarily mean that the product whose safety component is the AI system, or the AI system itself as a product, is considered ‘high-risk’ under the criteria established in the relevant Union harmonisation legislation that applies to the product. This is notably the case for Regulation (EU) 2017/745 of the European Parliament and of the Council18 and Regulation (EU) 2017/746 of the European Parliament and of the Council19, where a third-party conformity assessment is provided for medium-risk and high-risk products. –––––– 18. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1). 19. Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (OJ L 117, 5.5.2017, p. 176).
| (31) The classification of an AI system as high-risk pursuant to this Regulation should not necessarily mean that the product whose safety component is the AI system, or the AI system itself as a product, is considered ‘high-risk’ under the criteria established in the relevant Union harmonisation legislation that applies to the product. This is notably the case for Regulation (EU) 2017/745 of the European Parliament and of the Council and Regulation (EU) 2017/746 of the European Parliament and of the Council, where a third-party conformity assessment is provided for medium-risk and high-risk products. –––––– Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1). Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (OJ L 117, 5.5.2017, p. 176). |
Recital (32) | (32) As regards stand-alone AI systems, meaning high-risk AI systems other than those that are safety components of products, or which are themselves products, it is appropriate to classify them as high-risk if, in the light of their intended purpose, they pose a high risk of harm to the health and safety or the fundamental rights of persons, taking into account both the severity of the possible harm and its probability of occurrence and they are used in a number of specifically pre-defined areas specified in the Regulation. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems. | (32) As regards stand-alone AI systems, meaning high-risk AI systems other than those that are safety components of products, or which are themselves products and that are listed in one of the areas and use cases in Annex III, it is appropriate to classify them as high-risk if, in the light of their intended purpose, they pose a significant risk of harm to the health and safety or the fundamental rights of persons and, where the AI system is used as a safety component of a critical infrastructure, to the environment. Such significant risk of harm should be identified by assessing on the one hand the effect of such risk with respect to its level of severity, intensity, probability of occurrence and duration combined altogether and on the other hand whether the risk can affect an individual, a plurality of persons or a particular group of persons. Such combination could for instance result in a high severity but low probability to affect a natural person, or a high probability to affect a group of persons with a low intensity over a long period of time, depending on the context. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems. | (32) As regards high-risk AI systems other than those that are safety components of products, or which are themselves products, it is appropriate to classify them as high-risk if, in the light of their intended purpose, they pose a high risk of harm to the health and safety or the fundamental rights of persons, taking into account both the severity of the possible harm and its probability of occurrence, and they are used in a number of specifically pre-defined areas specified in the Regulation. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems. It is also important to clarify that within the high-risk scenarios referred to in Annex III there may be systems that do not lead to a significant risk to the legal interests protected under those scenarios, taking into account the output produced by the AI system. Therefore only when such output has a high degree of importance (i.e. is not purely accessory) in respect of the relevant action or decision so as to generate a significant risk to the legal interests protected, the AI system generating such output should be considered as high-risk. 
For instance, when the information provided by an AI system to the human consists of the profiling of natural persons within the meaning of Article 4(4) Regulation (EU) 2016/679 and Article 3(4) of Directive (EU) 2016/680 and Article 3(5) of Regulation (EU) 2018/1725, such information should not typically be considered of accessory nature in the context of high risk AI systems as referred to in Annex III. However, if the output of the AI system has only negligible or minor relevance for human action or decision, it may be considered purely accessory, including for example, AI systems used for translation for informative purposes or for the management of documents. | (32) As regards stand-alone AI systems, meaning high-risk AI systems other than those that are safety components of products, or which are themselves products, and that are listed in one of the areas and use cases in Annex III, it is appropriate to classify them as high-risk if, in the light of their intended purpose, they pose a significant risk of harm to the health and safety or the fundamental rights of persons, and, where the AI system is used as a safety component of a critical infrastructure, to the environment. This significant risk of harm should be identified by assessing the severity, intensity, probability of occurrence and duration of such risk, and whether the risk can affect an individual, a plurality of persons or a particular group of persons. However, within the high-risk scenarios referred to in Annex III, there may be systems that do not lead to a significant risk to the legal interests protected under those scenarios, taking into account the output produced by the AI system. Therefore, only when such output has a high degree of importance (i.e. is not purely accessory) in respect of the relevant action or decision so as to generate a significant risk to the legal interests protected, the AI system generating such output should be considered as high-risk. For instance, when the information provided by an AI system to the human consists of the profiling of natural persons within the meaning of Article 4(4) Regulation (EU) 2016/679 and Article 3(4) of Directive (EU) 2016/680 and Article 3(5) of Regulation (EU) 2018/1725, such information should not typically be considered of accessory nature in the context of high risk AI systems as referred to in Annex III. However, if the output of the AI system has only negligible or minor relevance for human action or decision, it may be considered purely accessory, including for example, AI systems used for translation for informative purposes or for the management of documents. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems. |
Recital (32a) (new) | (32a) Providers whose AI systems fall under one of the areas and use cases listed in Annex III that consider their system does not pose a significant risk of harm to the health, safety, fundamental rights or the environment should inform the national supervisory authorities by submitting a reasoned notification. This could take the form of a one-page summary of the relevant information on the AI system in question, including its intended purpose and why it would not pose a significant risk of harm to the health, safety, fundamental rights or the environment. The Commission should specify criteria to enable companies to assess whether their system would pose such risks, as well as develop an easy to use and standardised template for the notification. Providers should submit the notification as early as possible and in any case prior to the placing of the AI system on the market or its putting into service, ideally at the development stage, and they should be free to place it on the market at any given time after the notification. However, if the authority estimates the AI system in question was misclassified, it should object to the notification within a period of three months. The objection should be substantiated and duly explain why the AI system has been misclassified. The provider should retain the right to appeal by providing further arguments. If after the three months there has been no objection to the notification, national supervisory authorities could still intervene if the AI system presents a risk at national level, as for any other AI system on the market. National supervisory authorities should submit annual reports to the AI Office detailing the notifications received and the decisions taken. | Providers with AI systems that fall under the areas and use cases listed in Annex III, and who believe their system does not pose a significant risk to health, safety, fundamental rights, or the environment, should submit a reasoned notification to the national supervisory authorities. This notification could be a one-page summary detailing the AI system's intended purpose and reasons why it does not pose a significant risk. The Commission is tasked with specifying criteria for risk assessment and developing a standardized notification template. Providers should submit this notification as early as possible, ideally during the development stage, and certainly before the AI system is placed on the market or put into service. Providers are free to market their AI system at any time after the notification. If the authority believes the AI system has been misclassified, it has the right to object to the notification within a three-month period. This objection must be substantiated with a clear explanation of why the AI system has been misclassified. Providers retain the right to appeal this objection by providing further arguments. If no objection has been raised within the three-month period, national supervisory authorities still retain the right to intervene if the AI system presents a risk at the national level, as with any other AI system on the market. Finally, national supervisory authorities are required to submit annual reports to the AI Office, detailing the notifications received and the decisions taken. |
Recital (33) | (33) Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, sex or disabilities. Therefore, ‘real-time’ and ‘post’ remote biometric identification systems should be classified as high-risk. In view of the risks that they pose, both types of remote biometric identification systems should be subject to specific requirements on logging capabilities and human oversight. | deleted | (33) Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, race, sex or disabilities. Therefore, ‘real-time’ and ‘post’ remote biometric identification systems should be classified as high-risk. In view of the risks that they pose, both types of remote biometric identification systems should be subject to specific requirements on logging capabilities and human oversight. | (33) Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, race, sex or disabilities. Therefore, ‘real-time’ and ‘post’ remote biometric identification systems should be classified as high-risk. In view of the risks that they pose, both types of remote biometric identification systems should be subject to specific requirements on logging capabilities and human oversight. |
Recital (33a) (new) | (33a) As biometric data constitute a special category of sensitive personal data in accordance with Regulation 2016/679, it is appropriate to classify as high-risk several critical use-cases of biometric and biometrics-based systems. AI systems intended to be used for biometric identification of natural persons and AI systems intended to be used to make inferences about personal characteristics of natural persons on the basis of biometric or biometrics-based data, including emotion recognition systems, with the exception of those which are prohibited under this Regulation should therefore be classified as high-risk. This should not include AI systems intended to be used for biometric verification, which includes authentication, whose sole purpose is to confirm that a specific natural person is the person he or she claims to be and to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises (one-to-one verification). Biometric and biometrics-based systems which are provided for under Union law to enable cybersecurity and personal data protection measures should not be considered as posing a significant risk of harm to the health, safety and fundamental rights. | Given the sensitive nature of biometric data, it is appropriate to classify several critical use-cases of biometric and biometrics-based systems as high-risk. This includes AI systems intended for biometric identification of natural persons and those used to make inferences about personal characteristics based on biometric data, such as emotion recognition systems. However, this classification should exclude AI systems used for biometric verification, including authentication, whose sole purpose is to confirm a person's identity for access to a service, device, or premises. Furthermore, biometric systems provided for under Union law for cybersecurity and personal data protection measures should not be considered as posing a significant risk to health, safety, and fundamental rights. | ||
Recital (34) | (34) As regards the management and operation of critical infrastructure, it is appropriate to classify as high-risk the AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity, since their failure or malfunctioning may put at risk the life and health of persons at large scale and lead to appreciable disruptions in the ordinary conduct of social and economic activities. | (34) As regards the management and operation of critical infrastructure, it is appropriate to classify as high-risk the AI systems intended to be used as safety components in the management and operation of the supply of water, gas, heating, electricity and critical digital infrastructure, since their failure or malfunctioning may infringe the security and integrity of such critical infrastructure or put at risk the life and health of persons at large scale and lead to appreciable disruptions in the ordinary conduct of social and economic activities. Safety components of critical infrastructure, including critical digital infrastructure, are systems used to directly protect the physical integrity of critical infrastructure or health and safety of persons and property. Failure or malfunctioning of such components might directly lead to risks to the physical integrity of critical infrastructure and thus to risks to the health and safety of persons and property. Components intended to be used solely for cybersecurity purposes should not qualify as safety components. Examples of such safety components may include systems for monitoring water pressure or fire alarm controlling systems in cloud computing centres. | (34) As regards the management and operation of critical infrastructure, it is appropriate to classify as high-risk the AI systems intended to be used as safety components in the management and operation of critical digital infrastructure as listed in Annex I point 8 of the Directive on the resilience of critical entities, road traffic and the supply of water, gas, heating and electricity, since their failure or malfunctioning may put at risk the life and health of persons at large scale and lead to appreciable disruptions in the ordinary conduct of social and economic activities. Safety components of critical infrastructure, including critical digital infrastructure, are systems used to directly protect the physical integrity of critical infrastructure or health and safety of persons and property but which are not necessary in order for the system to function. Failure or malfunctioning of such components might directly lead to risks to the physical integrity of critical infrastructure and thus to risks to health and safety of persons and property. Components intended to be used solely for cybersecurity purposes should not qualify as safety components. Examples of safety components of such critical infrastructure may include systems for monitoring water pressure or fire alarm controlling systems in cloud computing centres. | (34) As regards the management and operation of critical infrastructure, it is appropriate to classify as high-risk the AI systems intended to be used as safety components in the management and operation of road traffic, the supply of water, gas, heating, electricity, and critical digital infrastructure as listed in Annex I point 8 of the Directive on the resilience of critical entities. 
The failure or malfunctioning of these systems may infringe the security and integrity of such critical infrastructure, put at risk the life and health of persons at large scale, and lead to appreciable disruptions in the ordinary conduct of social and economic activities. Safety components of critical infrastructure, including critical digital infrastructure, are systems used to directly protect the physical integrity of critical infrastructure or health and safety of persons and property, but which are not necessary in order for the system to function. Failure or malfunctioning of such components might directly lead to risks to the physical integrity of critical infrastructure and thus to risks to the health and safety of persons and property. Components intended to be used solely for cybersecurity purposes should not qualify as safety components. Examples of safety components of such critical infrastructure may include systems for monitoring water pressure or fire alarm controlling systems in cloud computing centres. |
Recital (35) | (35) AI systems used in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions or to evaluate persons on tests as part of or as a precondition for their education should be considered high-risk, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. When improperly designed and used, such systems may violate the right to education and training as well as the right not to be discriminated against and perpetuate historical patterns of discrimination. | (35) Deployment of AI systems in education is important in order to help modernise entire education systems, to increase educational quality, both offline and online and to accelerate digital education, thus also making it available to a broader audience . AI systems used in education or vocational training, notably for determining access or materially influence decisions on admission or assigning persons to educational and vocational training institutions or to evaluate persons on tests as part of or as a precondition for their education or to assess the appropriate level of education for an individual and materially influence the level of education and training that individuals will receive or be able to access or to monitor and detect prohibited behaviour of students during tests should be classified as high-risk AI systems, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. When improperly designed and used, such systems can be particularly intrusive and may violate the right to education and training as well as the right not to be discriminated against and perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. | (35) AI systems used in education or vocational training, notably for determining access, admission or assigning persons to educational and vocational training institutions or programmes at all levels or to evaluate learning outcomes of persons should be considered high-risk, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. When improperly designed and used, such systems may violate the right to education and training as well as the right not to be discriminated against and perpetuate historical patterns of discrimination. | (35) AI systems used in education or vocational training, notably for determining access, admission or assigning persons to educational and vocational training institutions or programmes at all levels, to evaluate learning outcomes of persons, or to assess the appropriate level of education for an individual and materially influence the level of education and training that individuals will receive or be able to access, or to monitor and detect prohibited behaviour of students during tests should be considered high-risk. These systems are important in order to help modernise entire education systems, to increase educational quality, both offline and online and to accelerate digital education, thus also making it available to a broader audience. However, they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. 
When improperly designed and used, such systems can be particularly intrusive and may violate the right to education and training as well as the right not to be discriminated against and perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. |
Recital (36) | (36) AI systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact future career prospects and livelihoods of these persons. Relevant work-related contractual relationships should involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Such persons should in principle not be considered users within the meaning of this Regulation. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also impact their rights to data protection and privacy. | (36) AI systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions or materially influence decisions on initiation, promotion and termination and for personalised task allocation based on individual behaviour, personal traits or biometric data, monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact future career prospects, livelihoods of these persons and workers’ rights. Relevant work-related contractual relationships should meaningfully involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also undermine the essence of their fundamental rights to data protection and privacy. This Regulation applies without prejudice to Union and Member State competences to provide for more specific rules for the use of AI-systems in the employment context. | (36) AI systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation based on individual behaviour or personal traits or characteristics, monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact future career prospects and livelihoods of these persons. Relevant work-related contractual relationships should involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Such persons should in principle not be considered users within the meaning of this Regulation. 
Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also impact their rights to data protection and privacy. | (36) AI systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions or materially influence decisions on initiation, promotion and termination and for personalised task allocation based on individual behaviour, personal traits or biometric data, monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact future career prospects, livelihoods of these persons and workers’ rights. Relevant work-related contractual relationships should meaningfully involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Such persons should in principle not be considered users within the meaning of this Regulation. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also undermine the essence of their fundamental rights to data protection and privacy. This Regulation applies without prejudice to Union and Member State competences to provide for more specific rules for the use of AI-systems in the employment context. |
Recital (37) | (37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by small-scale providers for their own use. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Those systems should therefore be classified as high-risk. Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons. Finally, AI systems used to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property. | (37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services, including healthcare services, and essential services, including but not limited to housing, electricity, heating/cooling and internet, and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, gender, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. However, AI systems provided for by Union law for the purpose of detecting fraud in the offering of financial services should not be considered as high-risk under this Regulation. 
Natural persons applying for or receiving public assistance benefits and services from public authorities, including healthcare services and essential services, including but not limited to housing, electricity, heating/cooling and internet, are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Similarly, AI systems intended to be used to make decisions or materially influence decisions on the eligibility of natural persons for health and life insurance may also have a significant impact on persons’ livelihood and may infringe their fundamental rights such as by limiting access to healthcare or by perpetuating discrimination based on personal characteristics. Those systems should therefore be classified as high-risk. Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons. Finally, AI systems used to evaluate and classify emergency calls by natural persons or to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property. | (37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by micro or small enterprises, as defined in the Annex of Commission Recommendation 2003/361/EC for their own use. Natural persons applying for or receiving essential public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. 
If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, including whether beneficiaries are legitimately entitled to such benefits or services, those systems may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Those systems should therefore be classified as high-risk. Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons. Finally, AI systems used to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property. AI systems are also increasingly used for risk assessment in relation to natural persons and pricing in the case of life and health insurance which, if not duly designed, developed and used, can lead to serious consequences for people’s life and health, including financial exclusion and discrimination. To ensure a consistent approach within the financial services sector, the above mentioned exception for micro or small enterprises for their own use should apply, insofar as they themselves provide and put into service an AI system for the purpose of selling their own insurance products. | (37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits, including healthcare services, necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, gender, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. However, considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by small-scale providers, micro or small enterprises, as defined in the Annex of Commission Recommendation 2003/361/EC for their own use. AI systems provided for by Union law for the purpose of detecting fraud in the offering of financial services should not be considered as high-risk under this Regulation. Natural persons applying for or receiving public assistance benefits and services from public authorities, including healthcare services and essential services, are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. 
If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, including whether beneficiaries are legitimately entitled to such benefits or services, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Similarly, AI systems intended to be used to make decisions or materially influence decisions on the eligibility of natural persons for health and life insurance may also have a significant impact on persons’ livelihood and may infringe their fundamental rights such as by limiting access to healthcare or by perpetuating discrimination based on personal characteristics. Those systems should therefore be classified as high-risk. Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons. Finally, AI systems used to evaluate and classify emergency calls by natural persons or to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property. AI systems are also increasingly used for risk assessment in relation to natural persons and pricing in the case of life and health insurance which, if not duly designed, developed and used, can lead to serious consequences for people’s life and health, including financial exclusion and discrimination. To ensure a consistent approach within the financial services sector, the above mentioned exception for micro or small enterprises for their own use should apply, insofar as they themselves provide and put into service an AI system for the purpose of selling their own insurance products. |
Recital (37a) (new) | (37a) Given the role and responsibility of police and judicial authorities, and the impact of decisions they take for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, some specific use-cases of AI applications in law enforcement have to be classified as high-risk, in particular in instances where there is the potential to significantly affect the lives or the fundamental rights of individuals. | Given the significant role and responsibility of police and judicial authorities, and considering the profound impact of their decisions in the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, it is necessary to classify certain use-cases of AI applications in law enforcement as high-risk. This is particularly crucial in instances where there is potential to significantly affect the lives or the fundamental rights of individuals. | ||
Recital (38) | (38) Actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented. It is therefore appropriate to classify as high-risk a number of AI systems intended to be used in the law enforcement context where accuracy, reliability and transparency is particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress. In view of the nature of the activities in question and the risks relating thereto, those high-risk AI systems should include in particular AI systems intended to be used by law enforcement authorities for individual risk assessments, polygraphs and similar tools or to detect the emotional state of natural person, to detect ‘deep fakes’, for the evaluation of the reliability of evidence in criminal proceedings, for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons, or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups, for profiling in the course of detection, investigation or prosecution of criminal offences, as well as for crime analytics regarding natural persons. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities should not be considered high-risk AI systems used by law enforcement authorities for the purposes of prevention, detection, investigation and prosecution of criminal offences. | (38) Actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its performance, its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented. 
It is therefore appropriate to classify as high-risk a number of AI systems intended to be used in the law enforcement context where accuracy, reliability and transparency is particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress. In view of the nature of the activities in question and the risks relating thereto, those high-risk AI systems should include in particular AI systems intended to be used by or on behalf of law enforcement authorities or by Union agencies, offices or bodies in support of law enforcement authorities, as polygraphs and similar tools insofar as their use is permitted under relevant Union and national law, for the evaluation of the reliability of evidence in criminal proceedings, for profiling in the course of detection, investigation or prosecution of criminal offences, as well as for crime analytics regarding natural persons. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities should not be classified as high-risk AI systems used by law enforcement authorities for the purposes of prevention, detection, investigation and prosecution of criminal offences. The use of AI tools by law enforcement and judicial authorities should not become a factor of inequality, social fracture or exclusion. The impact of the use of AI tools on the defence rights of suspects should not be ignored, notably the difficulty in obtaining meaningful information on their functioning and the consequent difficulty in challenging their results in court, in particular by individuals under investigation. | (38) Actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented. It is therefore appropriate to classify as high-risk a number of AI systems intended to be used in the law enforcement context where accuracy, reliability and transparency is particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress. 
In view of the nature of the activities in question and the risks relating thereto, those high-risk AI systems should include in particular AI systems intended to be used by law enforcement authorities for individual risk assessments, polygraphs and similar tools or to detect the emotional state of natural person, for the evaluation of the reliability of evidence in criminal proceedings, for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons, or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups, for profiling in the course of detection, investigation or prosecution of criminal offences. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities as well as by financial intelligence units carrying out administrative tasks analysing information pursuant to Union anti-money laundering legislation should not be considered high-risk AI systems used by law enforcement authorities for the purposes of prevention, detection, investigation and prosecution of criminal offences. | (38) Actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its performance, its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented. It is therefore appropriate to classify as high-risk a number of AI systems intended to be used in the law enforcement context where accuracy, reliability and transparency is particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress. In view of the nature of the activities in question and the risks relating thereto, those high-risk AI systems should include in particular AI systems intended to be used by or on behalf of law enforcement authorities or by Union agencies, offices or bodies in support of law enforcement authorities, for individual risk assessments, polygraphs and similar tools insofar as their use is permitted under relevant Union and national law, to detect the emotional state of natural person, for the evaluation of the reliability of evidence in criminal proceedings, for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons, or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups, for profiling in the course of detection, investigation or prosecution of criminal offences, as well as for crime analytics regarding natural persons. 
AI systems specifically intended to be used for administrative proceedings by tax and customs authorities as well as by financial intelligence units carrying out administrative tasks analysing information pursuant to Union anti-money laundering legislation should not be classified as high-risk AI systems used by law enforcement authorities for the purposes of prevention, detection, investigation and prosecution of criminal offences. The use of AI tools by law enforcement and judicial authorities should not become a factor of inequality, social fracture or exclusion. The impact of the use of AI tools on the defence rights of suspects should not be ignored, notably the difficulty in obtaining meaningful information on their functioning and the consequent difficulty in challenging their results in court, in particular by individuals under investigation. |
Recital (39) | (39) AI systems used in migration, asylum and border control management affect people who are often in particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities. The accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts are therefore particularly important to guarantee the respect of the fundamental rights of the affected persons, notably their rights to free movement, non-discrimination, protection of private life and personal data, international protection and good administration. It is therefore appropriate to classify as high-risk AI systems intended to be used by the competent public authorities charged with tasks in the fields of migration, asylum and border control management as polygraphs and similar tools or to detect the emotional state of a natural person; for assessing certain risks posed by natural persons entering the territory of a Member State or applying for visa or asylum; for verifying the authenticity of the relevant documents of natural persons; for assisting competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the objective to establish the eligibility of the natural persons applying for a status. AI systems in the area of migration, asylum and border control management covered by this Regulation should comply with the relevant procedural requirements set by the Directive 2013/32/EU of the European Parliament and of the Council49 , the Regulation (EC) No 810/2009 of the European Parliament and of the Council50 and other relevant legislation. ––––– 49. Directive 2013/32/EU of the European Parliament and of the Council of 26 June 2013 on common procedures for granting and withdrawing international protection (OJ L 180, 29.6.2013, p. 60). 50. Regulation (EC) No 810/2009 of the European Parliament and of the Council of 13 July 2009 establishing a Community Code on Visas (Visa Code) (OJ L 243, 15.9.2009, p. 1). | (39) AI systems used in migration, asylum and border control management affect people who are often in particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities. The accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts are therefore particularly important to guarantee the respect of the fundamental rights of the affected persons, notably their rights to free movement, non-discrimination, protection of private life and personal data, international protection and good administration. 
It is therefore appropriate to classify as high-risk AI systems intended to be used by or on behalf of competent public authorities or by Union agencies, offices or bodies charged with tasks in the fields of migration, asylum and border control management as polygraphs and similar tools insofar as their use is permitted under relevant Union and national law, for assessing certain risks posed by natural persons entering the territory of a Member State or applying for visa or asylum; for verifying the authenticity of the relevant documents of natural persons; for assisting competent public authorities for the examination and assessment of the veracity of evidence in relation to applications for asylum, visa and residence permits and associated complaints with regard to the objective to establish the eligibility of the natural persons applying for a status; for monitoring, surveilling or processing personal data in the context of border management activities, for the purpose of detecting, recognising or identifying natural persons; for the forecasting or prediction of trends related to migration movements and border crossings. AI systems in the area of migration, asylum and border control management covered by this Regulation should comply with the relevant procedural requirements set by the Directive 2013/32/EU of the European Parliament and of the Council49, the Regulation (EC) No 810/2009 of the European Parliament and of the Council50 and other relevant legislation. The use of AI systems in migration, asylum and border control management should in no circumstances be used by Member States or Union institutions, agencies or bodies as a means to circumvent their international obligations under the Convention of 28 July 1951 relating to the Status of Refugees as amended by the Protocol of 31 January 1967, nor should they be used to in any way infringe on the principle of non-refoulement, or deny safe and effective legal avenues into the territory of the Union, including the right to international protection. ––––– 49. Directive 2013/32/EU of the European Parliament and of the Council of 26 June 2013 on common procedures for granting and withdrawing international protection (OJ L 180, 29.6.2013, p. 60). 50. Regulation (EC) No 810/2009 of the European Parliament and of the Council of 13 July 2009 establishing a Community Code on Visas (Visa Code) (OJ L 243, 15.9.2009, p. 1). | (39) AI systems used in migration, asylum and border control management affect people who are often in particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities. The accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts are therefore particularly important to guarantee the respect of the fundamental rights of the affected persons, notably their rights to free movement, non-discrimination, protection of private life and personal data, international protection and good administration. 
It is therefore appropriate to classify as high-risk AI systems intended to be used by the competent public authorities charged with tasks in the fields of migration, asylum and border control management as polygraphs and similar tools or to detect the emotional state of a natural person; for assessing certain risks posed by natural persons entering the territory of a Member State or applying for visa or asylum; for assisting competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the objective to establish the eligibility of the natural persons applying for a status. AI systems in the area of migration, asylum and border control management covered by this Regulation should comply with the relevant procedural requirements set by the Directive 2013/32/EU of the European Parliament and of the Council20, the Regulation (EC) No 810/2009 of the European Parliament and of the Council21 and other relevant legislation. –––––– 20. Directive 2013/32/EU of the European Parliament and of the Council of 26 June 2013 on common procedures for granting and withdrawing international protection (OJ L 180, 29.6.2013, p. 60). 21. Regulation (EC) No 810/2009 of the European Parliament and of the Council of 13 July 2009 establishing a Community Code on Visas (Visa Code) (OJ L 243, 15.9.2009, p. 1). | (39) AI systems used in migration, asylum and border control management affect people who are often in particularly vulnerable positions and who are dependent on the outcome of the actions of the competent public authorities or Union agencies, offices or bodies. The accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts are therefore particularly important to guarantee the respect of the fundamental rights of the affected persons, notably their rights to free movement, non-discrimination, protection of private life and personal data, international protection and good administration. It is therefore appropriate to classify as high-risk AI systems intended to be used by or on behalf of competent public authorities charged with tasks in the fields of migration, asylum and border control management as polygraphs and similar tools, insofar as their use is permitted under relevant Union and national law, for assessing certain risks posed by natural persons entering the territory of a Member State or applying for visa or asylum; for verifying the authenticity of the relevant documents of natural persons; for assisting competent public authorities for the examination and assessment of the veracity of evidence in relation to applications for asylum, visa and residence permits and associated complaints with regard to the objective to establish the eligibility of the natural persons applying for a status. AI systems in the area of migration, asylum and border control management covered by this Regulation should comply with the relevant procedural requirements set by the Directive 2013/32/EU of the European Parliament and of the Council, the Regulation (EC) No 810/2009 of the European Parliament and of the Council and other relevant legislation. 
The use of AI systems in migration, asylum and border control management should in no circumstances be used by Member States or Union institutions, agencies or bodies as a means to circumvent their international obligations under the Convention of 28 July 1951 relating to the Status of Refugees as amended by the Protocol of 31 January 1967, nor should they be used to in any way infringe on the principle of non-refoulement, or deny safe and effective legal avenues into the territory of the Union, including the right to international protection. |
Recital (40) | (40) Certain AI systems intended for the administration of justice and democratic processes should be classified as high-risk, considering their potentially significant impact on democracy, rule of law, individual freedoms as well as the right to an effective remedy and to a fair trial. In particular, to address the risks of potential biases, errors and opacity, it is appropriate to qualify as high-risk AI systems intended to assist judicial authorities in researching and interpreting facts and the law and in applying the law to a concrete set of facts. Such qualification should not extend, however, to AI systems intended for purely ancillary administrative activities that do not affect the actual administration of justice in individual cases, such as anonymisation or pseudonymisation of judicial decisions, documents or data, communication between personnel, administrative tasks or allocation of resources. | (40) Certain AI systems intended for the administration of justice and democratic processes should be classified as high-risk, considering their potentially significant impact on democracy, rule of law, individual freedoms as well as the right to an effective remedy and to a fair trial. In particular, to address the risks of potential biases, errors and opacity, it is appropriate to qualify as high-risk AI systems intended to be used by a judicial authority or administrative body or on their behalf to assist judicial authorities or administrative bodies in researching and interpreting facts and the law and in applying the law to a concrete set of facts or used in a similar way in alternative dispute resolution. The use of artificial intelligence tools can support, but should not replace the decision-making power of judges or judicial independence, as the final decision-making must remain a human-driven activity and decision. Such qualification should not extend, however, to AI systems intended for purely ancillary administrative activities that do not affect the actual administration of justice in individual cases, such as anonymisation or pseudonymisation of judicial decisions, documents or data, communication between personnel, administrative tasks or allocation of resources. | (40) Certain AI systems intended for the administration of justice and democratic processes should be classified as high-risk, considering their potentially significant impact on democracy, rule of law, individual freedoms as well as the right to an effective remedy and to a fair trial. In particular, to address the risks of potential biases, errors and opacity, it is appropriate to qualify as high-risk AI systems intended to assist judicial authorities in interpreting facts and the law and in applying the law to a concrete set of facts. Such qualification should not extend, however, to AI systems intended for purely ancillary administrative activities that do not affect the actual administration of justice in individual cases, such as anonymisation or pseudonymisation of judicial decisions, documents or data, communication between personnel, administrative tasks. | (40) Certain AI systems intended for the administration of justice and democratic processes should be classified as high-risk, considering their potentially significant impact on democracy, rule of law, individual freedoms as well as the right to an effective remedy and to a fair trial. 
In particular, to address the risks of potential biases, errors and opacity, it is appropriate to qualify as high-risk AI systems intended to be used by a judicial authority or administrative body or on their behalf to assist judicial authorities or administrative bodies in researching and interpreting facts and the law and in applying the law to a concrete set of facts or used in a similar way in alternative dispute resolution. The use of artificial intelligence tools can support, but should not replace the decision-making power of judges or judicial independence, as the final decision-making must remain a human-driven activity and decision. Such qualification should not extend, however, to AI systems intended for purely ancillary administrative activities that do not affect the actual administration of justice in individual cases, such as anonymisation or pseudonymisation of judicial decisions, documents or data, communication between personnel, administrative tasks or allocation of resources. |
Recital (40a) (new) | (40a) In order to address the risks of undue external interference to the right to vote enshrined in Article 39 of the Charter, and of disproportionate effects on democratic processes, democracy, and the rule of law, AI systems intended to be used to influence the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda should be classified as high-risk AI systems, with the exception of AI systems whose output natural persons are not directly exposed to, such as tools used to organise, optimise and structure political campaigns from an administrative and logistical point of view. | In order to mitigate the risks of undue external interference with the right to vote, as protected by Article 39 of the Charter, and to prevent disproportionate impacts on democratic processes, democracy, and the rule of law, AI systems designed to influence the outcome of an election or referendum or the voting behaviour of individuals should be classified as high-risk AI systems. This does not include AI systems that individuals are not directly exposed to, such as those used for the administrative and logistical organization, optimization, and structuring of political campaigns. | ||
Recital (40b) (new) | (40b) Considering the scale of natural persons using the services provided by social media platforms designated as very large online platforms, such online platforms can be used in a way that strongly influences safety online, the shaping of public opinion and discourse, election and democratic processes and societal concerns. It is therefore appropriate that AI systems used by those online platforms in their recommender systems are subject to this Regulation so as to ensure that the AI systems comply with the requirements laid down under this Regulation, including the technical requirements on data governance, technical documentation and traceability, transparency, human oversight, accuracy and robustness. Compliance with this Regulation should enable such very large online platforms to comply with their broader risk assessment and risk-mitigation obligations in Article 34 and 35 of Regulation EU 2022/2065. The obligations in this Regulation are without prejudice to Regulation (EU) 2022/2065 and should complement the obligations required under the Regulation (EU) 2022/2065 when the social media platform has been designated as a very large online platform. Given the European-wide impact of social media platforms designated as very large online platforms, the authorities designated under Regulation (EU) 2022/2065 should act as enforcement authorities for the purposes of enforcing this provision. | Considering the significant influence of very large online platforms, particularly in terms of online safety, public opinion, democratic processes, and societal concerns, it is crucial that AI systems used in their recommender systems adhere to this Regulation. This includes compliance with technical requirements on data governance, technical documentation and traceability, transparency, human oversight, accuracy, and robustness. Adherence to this Regulation will enable these platforms to meet their broader risk assessment and risk-mitigation obligations as outlined in Articles 34 and 35 of Regulation EU 2022/2065. This Regulation does not prejudice Regulation (EU) 2022/2065 and should complement its obligations when the social media platform is designated as a very large online platform. Given the Europe-wide impact of these platforms, the authorities designated under Regulation (EU) 2022/2065 should act as enforcement authorities for the purposes of enforcing this provision. | ||
Recital (41) | (41) The fact that an AI system is classified as high risk under this Regulation should not be interpreted as indicating that the use of the system is necessarily lawful under other acts of Union law or under national law compatible with Union law, such as on the protection of personal data, on the use of polygraphs and similar tools or other systems to detect the emotional state of natural persons. Any such use should continue to occur solely in accordance with the applicable requirements resulting from the Charter and from the applicable acts of secondary Union law and national law. This Regulation should not be understood as providing for the legal ground for processing of personal data, including special categories of personal data, where relevant. | (41) The fact that an AI system is classified as a high risk AI system under this Regulation should not be interpreted as indicating that the use of the system is necessarily lawful or unlawful under other acts of Union law or under national law compatible with Union law, such as on the protection of personal data. Any such use should continue to occur solely in accordance with the applicable requirements resulting from the Charter and from the applicable acts of secondary Union law and national law. | (41) The fact that an AI system is classified as high risk under this Regulation should not be interpreted as indicating that the use of the system is lawful under other acts of Union law or under national law compatible with Union law, such as on the protection of personal data, on the use of polygraphs and similar tools or other systems to detect the emotional state of natural persons. Any such use should continue to occur solely in accordance with the applicable requirements resulting from the Charter and from the applicable acts of secondary Union law and national law. This Regulation should not be understood as providing for the legal ground for processing of personal data, including special categories of personal data, where relevant, unless it is specifically provided for otherwise in this Regulation. | (41) The fact that an AI system is classified as high risk under this Regulation should not be interpreted as indicating that the use of the system is necessarily lawful or unlawful under other acts of Union law or under national law compatible with Union law, such as on the protection of personal data, on the use of polygraphs and similar tools or other systems to detect the emotional state of natural persons. Any such use should continue to occur solely in accordance with the applicable requirements resulting from the Charter and from the applicable acts of secondary Union law and national law. This Regulation should not be understood as providing for the legal ground for processing of personal data, including special categories of personal data, where relevant, unless it is specifically provided for otherwise in this Regulation. |
Recital (41a) (new) | (41a) A number of legally binding rules at European, national and international level already apply or are relevant to AI systems today, including but not limited to EU primary law (the Treaties of the European Union and its Charter of Fundamental Rights), EU secondary law (such as the General Data Protection Regulation, the Product Liability Directive, the Regulation on the Free Flow of Non-Personal Data, anti-discrimination Directives, consumer law and Safety and Health at Work Directives), the UN Human Rights treaties and the Council of Europe conventions (such as the European Convention on Human Rights), and national law. Besides horizontally applicable rules, various domain-specific rules exist that apply to particular AI applications (such as for instance the Medical Device Regulation in the healthcare sector). | A multitude of legally binding regulations at the European, national, and international levels are currently applicable or relevant to AI systems. These include, but are not limited to, EU primary law such as the Treaties of the European Union and its Charter of Fundamental Rights, and EU secondary law, which includes the General Data Protection Regulation, the Product Liability Directive, the Regulation on the Free Flow of Non-Personal Data, anti-discrimination Directives, consumer law, and Safety and Health at Work Directives. Additionally, the UN Human Rights treaties and the Council of Europe conventions, including the European Convention on Human Rights, and national law are also applicable. Beyond these universally applicable rules, there are also various domain-specific rules that apply to specific AI applications, such as the Medical Device Regulation in the healthcare sector. | ||
Recital (42) | (42) To mitigate the risks from high-risk AI systems placed or otherwise put into service on the Union market for users and affected persons, certain mandatory requirements should apply, taking into account the intended purpose of the use of the system and according to the risk management system to be established by the provider. | (42) To mitigate the risks from high-risk AI systems placed or otherwise put into service on the Union market for deployers and affected persons, certain mandatory requirements should apply, taking into account the intended purpose, the reasonably foreseeable misuse of the system and according to the risk management system to be established by the provider. These requirements should be objective-driven, fit for purpose, reasonable and effective, without adding undue regulatory burdens or costs on operators. | (42) To mitigate the risks from high-risk AI systems placed or otherwise put into service on the Union market, certain mandatory requirements should apply, taking into account the intended purpose of the use of the system and according to the risk management system to be established by the provider. In particular, the risk management system should consist of a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system. This process should ensure that the provider identifies and analyses the risks to the health, safety and fundamental rights of the persons who may be affected by the system in light of its intended purpose, including the possible risks arising from the interaction between the AI system and the environment within which it operates, and accordingly adopts suitable risk management measures in the light of state of the art. | (42) To mitigate the risks from high-risk AI systems placed or otherwise put into service on the Union market for users, deployers, and affected persons, certain mandatory requirements should apply. These requirements should be objective-driven, fit for purpose, reasonable and effective, without adding undue regulatory burdens or costs on operators. The requirements should take into account the intended purpose, the reasonably foreseeable misuse of the system, and should be in accordance with the risk management system to be established by the provider. This risk management system should consist of a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system. The process should ensure that the provider identifies and analyses the risks to the health, safety, and fundamental rights of the persons who may be affected by the system in light of its intended purpose, including the possible risks arising from the interaction between the AI system and the environment within which it operates, and accordingly adopts suitable risk management measures in the light of state of the art. |
Recital (43) | (43) Requirements should apply to high-risk AI systems as regards the quality of data sets used, technical documentation and record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety and fundamental rights, as applicable in the light of the intended purpose of the system, and no other less trade restrictive measures are reasonably available, thus avoiding unjustified restrictions to trade. | (43) Requirements should apply to high-risk AI systems as regards the quality and relevance of data sets used, technical documentation and record-keeping, transparency and the provision of information to deployers, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety and fundamental rights, as well as the environment, democracy and rule of law, as applicable in the light of the intended purpose or reasonably foreseeable misuse of the system, and no other less trade restrictive measures are reasonably available, thus avoiding unjustified restrictions to trade. | (43) Requirements should apply to high-risk AI systems as regards the quality of data sets used, technical documentation and record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety and fundamental rights, as applicable in the light of the intended purpose of the system, and no other less trade restrictive measures are reasonably available, thus avoiding unjustified restrictions to trade. | (43) Requirements should apply to high-risk AI systems as regards the quality and relevance of data sets used, technical documentation and record-keeping, transparency and the provision of information to users and deployers, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety, fundamental rights, as well as the environment, democracy and rule of law, as applicable in the light of the intended purpose or reasonably foreseeable misuse of the system, and no other less trade restrictive measures are reasonably available, thus avoiding unjustified restrictions to trade. |
Recital (44) | (44) High data quality is essential for the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensure that the high-risk AI system performs as intended and safely and it does not become the source of discrimination prohibited by Union law. High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, validation and testing data sets should be sufficiently relevant, representative and free of errors and complete in view of the intended purpose of the system. They should also have the appropriate statistical properties, including as regards the persons or groups of persons on which the high-risk AI system is intended to be used. In particular, training, validation and testing data sets should take into account, to the extent required in the light of their intended purpose, the features, characteristics or elements that are particular to the specific geographical, behavioural or functional setting or context within which the AI system is intended to be used. In order to protect the right of others from the discrimination that might result from the bias in AI systems, the providers should be able to process also special categories of personal data, as a matter of substantial public interest, in order to ensure the bias monitoring, detection and correction in relation to high-risk AI systems. | (44) Access to data of high quality plays a vital role in providing structure and in ensuring the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensure that the high-risk AI system performs as intended and safely and it does not become a source of discrimination prohibited by Union law. High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, and where applicable, validation and testing data sets, including the labels, should be sufficiently relevant, representative, appropriately vetted for errors and as complete as possible in view of the intended purpose of the system. They should also have the appropriate statistical properties, including as regards the persons or groups of persons in relation to whom the high-risk AI system is intended to be used, with specific attention to the mitigation of possible biases in the datasets, that might lead to risks to fundamental rights or discriminatory outcomes for the persons affected by the high-risk AI system. Biases can for example be inherent in underlying datasets, especially when historical data is being used, introduced by the developers of the algorithms, or generated when the systems are implemented in real world settings. Results provided by AI systems are influenced by such inherent biases that are inclined to gradually increase and thereby perpetuate and amplify existing discrimination, in particular for persons belonging to certain vulnerable or ethnic groups, or racialised communities. In particular, training, validation and testing data sets should take into account, to the extent required in the light of their intended purpose, the features, characteristics or elements that are particular to the specific geographical, contextual, behavioural or functional setting or context within which the AI system is intended to be used. 
In order to protect the right of others from the discrimination that might result from the bias in AI systems, the providers should, exceptionally and following the application of all applicable conditions laid down under this Regulation and in Regulation (EU) 2016/679, Directive (EU) 2016/680 and Regulation (EU) 2018/1725, be able to process also special categories of personal data, as a matter of substantial public interest, in order to ensure the negative bias detection and correction in relation to high-risk AI systems. Negative bias should be understood as bias that creates a direct or indirect discriminatory effect against a natural person. The requirements related to data governance can be complied with by having recourse to third-parties that offer certified compliance services including verification of data governance, data set integrity, and data training, validation and testing practices. | (44) High data quality is essential for the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensure that the high-risk AI system performs as intended and safely and it does not become the source of discrimination prohibited by Union law. High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, validation and testing data sets should be sufficiently relevant, representative and have the appropriate statistical properties, including as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These datasets should also be as free of errors and complete as possible in view of the intended purpose of the AI system, taking into account, in a proportionate manner, technical feasibility and state of the art, the availability of data and the implementation of appropriate risk management measures so that possible shortcomings of the datasets are duly addressed. The requirement for the datasets to be complete and free of errors should not affect the use of privacy-preserving techniques in the context of the development and testing of AI systems. Training, validation and testing data sets should take into account, to the extent required by their intended purpose, the features, characteristics or elements that are particular to the specific geographical, behavioural or functional setting or context within which the AI system is intended to be used. In order to protect the right of others from the discrimination that might result from the bias in AI systems, the providers should be able to process also special categories of personal data, as a matter of substantial public interest within the meaning of Article 9(2)(g) of Regulation (EU) 2016/679 and Article 10(2)(g) of Regulation (EU) 2018/1725, in order to ensure the bias monitoring, detection and correction in relation to high-risk AI systems. | (44) High data quality is essential for the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensure that the high-risk AI system performs as intended and safely and it does not become the source of discrimination prohibited by Union law. Access to data of high quality plays a vital role in providing structure and ensuring the performance of these systems. High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. 
These practices can be complied with by having recourse to third-parties that offer certified compliance services including verification of data governance, data set integrity, and data training, validation and testing practices. Training, validation and testing data sets should be sufficiently relevant, representative, and have the appropriate statistical properties, including as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These datasets should also be as free of errors and complete as possible in view of the intended purpose of the AI system, taking into account, in a proportionate manner, technical feasibility and state of the art, the availability of data and the implementation of appropriate risk management measures so that possible shortcomings of the datasets are duly addressed. The requirement for the datasets to be complete and free of errors should not affect the use of privacy-preserving techniques in the context of the development and testing of AI systems. Specific attention should be given to the mitigation of possible biases in the datasets, that might lead to risks to fundamental rights or discriminatory outcomes for the persons affected by the high-risk AI system. Biases can for example be inherent in underlying datasets, especially when historical data is being used, introduced by the developers of the algorithms, or generated when the systems are implemented in real world settings. Training, validation and testing data sets should take into account, to the extent required by their intended purpose, the features, characteristics or elements that are particular to the specific geographical, behavioural or functional setting or context within which the AI system is intended to be used. In order to protect the right of others from the discrimination that might result from the bias in AI systems, the providers should be able to process also special categories of personal data, as a matter of substantial public interest, in order to ensure the bias monitoring, detection and correction in relation to high-risk AI systems. This should be done exceptionally and following the application of all applicable conditions laid down under this Regulation and in Regulation (EU) 2016/679, Directive (EU) 2016/680 and Regulation (EU) 2018/1725. |
Recital (44a) (new) | (44a) When applying the principles referred to in Article 5(1)(c) of Regulation 2016/679 and Article 4(1)(c) of Regulation 2018/1725, in particular the principle of data minimisation, in regard to training, validation and testing data sets under this Regulation, due regard should be had to the full life cycle of the AI system. | In accordance with the principles outlined in Article 5(1)(c) of Regulation 2016/679 and Article 4(1)(c) of Regulation 2018/1725, particularly the principle of data minimisation, careful consideration should be given to the entire life cycle of the AI system when applying these principles to training, validation, and testing data sets under this Regulation. | ||
Recital (45) | (45) For the development of high-risk AI systems, certain actors, such as providers, notified bodies and other relevant entities, such as digital innovation hubs, testing experimentation facilities and researchers, should be able to access and use high quality datasets within their respective fields of activities which are related to this Regulation. European common data spaces established by the Commission and the facilitation of data sharing between businesses and with government in the public interest will be instrumental to provide trustful, accountable and non-discriminatory access to high quality data for the training, validation and testing of AI systems. For example, in health, the European health data space will facilitate non-discriminatory access to health data and the training of artificial intelligence algorithms on those datasets, in a privacy-preserving, secure, timely, transparent and trustworthy manner, and with an appropriate institutional governance. Relevant competent authorities, including sectoral ones, providing or supporting the access to data may also support the provision of high-quality data for the training, validation and testing of AI systems. | (45) For the development and assessment of high-risk AI systems, certain actors, such as providers, notified bodies and other relevant entities, such as digital innovation hubs, testing experimentation facilities and researchers, should be able to access and use high quality datasets within their respective fields of activities which are related to this Regulation. European common data spaces established by the Commission and the facilitation of data sharing between businesses and with government in the public interest will be instrumental to provide trustful, accountable and non-discriminatory access to high quality data for the training, validation and testing of AI systems. For example, in health, the European health data space will facilitate non-discriminatory access to health data and the training of artificial intelligence algorithms on those datasets, in a privacy-preserving, secure, timely, transparent and trustworthy manner, and with an appropriate institutional governance. Relevant competent authorities, including sectoral ones, providing or supporting the access to data may also support the provision of high-quality data for the training, validation and testing of AI systems. | (45) For the development of high-risk AI systems, certain actors, such as providers, notified bodies and other relevant entities, such as digital innovation hubs, testing experimentation facilities and researchers, should be able to access and use high quality datasets within their respective fields of activities which are related to this Regulation. European common data spaces established by the Commission and the facilitation of data sharing between businesses and with government in the public interest will be instrumental to provide trustful, accountable and non-discriminatory access to high quality data for the training, validation and testing of AI systems. For example, in health, the European health data space will facilitate non-discriminatory access to health data and the training of artificial intelligence algorithms on those datasets, in a privacy-preserving, secure, timely, transparent and trustworthy manner, and with an appropriate institutional governance. 
Relevant competent authorities, including sectoral ones, providing or supporting the access to data may also support the provision of high-quality data for the training, validation and testing of AI systems. | (45) For the development and assessment of high-risk AI systems, certain actors, such as providers, notified bodies and other relevant entities, such as digital innovation hubs, testing experimentation facilities and researchers, should be able to access and use high quality datasets within their respective fields of activities which are related to this Regulation. European common data spaces established by the Commission and the facilitation of data sharing between businesses and with government in the public interest will be instrumental to provide trustful, accountable and non-discriminatory access to high quality data for the training, validation and testing of AI systems. For example, in health, the European health data space will facilitate non-discriminatory access to health data and the training of artificial intelligence algorithms on those datasets, in a privacy-preserving, secure, timely, transparent and trustworthy manner, and with an appropriate institutional governance. Relevant competent authorities, including sectoral ones, providing or supporting the access to data may also support the provision of high-quality data for the training, validation and testing of AI systems. |
Recital (45a) (new) | (45a) The right to privacy and to protection of personal data must be guaranteed throughout the entire lifecycle of the AI system. In this regard, the principles of data minimisation and data protection by design and by default, as set out in Union data protection law, are essential when the processing of data involves significant risks to the fundamental rights of individuals. Providers and users of AI systems should implement state-of-the-art technical and organisational measures in order to protect those rights. Such measures should include not only anonymisation and encryption, but also the use of increasingly available technology that permits algorithms to be brought to the data and allows valuable insights to be derived without the transmission between parties or unnecessary copying of the raw or structured data themselves. | The right to privacy and protection of personal data must be ensured throughout the entire lifecycle of the AI system. This necessitates the application of the principles of data minimisation and data protection by design and by default, as outlined in Union data protection law, particularly when data processing poses significant risks to individuals' fundamental rights. Providers and users of AI systems are required to implement cutting-edge technical and organisational measures to safeguard these rights. These measures should encompass not only anonymisation and encryption, but also the employment of increasingly accessible technology that enables algorithms to be brought to the data, facilitating the extraction of valuable insights without the need for data transmission between parties or unnecessary duplication of the raw or structured data. | ||
Recital (46) | (46) Having information on how high-risk AI systems have been developed and how they perform throughout their lifecycle is essential to verify compliance with the requirements under this Regulation. This requires keeping records and the availability of a technical documentation, containing information which is necessary to assess the compliance of the AI system with the relevant requirements. Such information should include the general characteristics, capabilities and limitations of the system, algorithms, data, training, testing and validation processes used as well as documentation on the relevant risk management system. The technical documentation should be kept up to date. | (46) Having comprehensible information on how high-risk AI systems have been developed and how they perform throughout their lifetime is essential to verify compliance with the requirements under this Regulation. This requires keeping records and the availability of a technical documentation, containing information which is necessary to assess the compliance of the AI system with the relevant requirements. Such information should include the general characteristics, capabilities and limitations of the system, algorithms, data, training, testing and validation processes used as well as documentation on the relevant risk management system. The technical documentation should be kept up to date appropriately throughout the lifecycle of the AI system. AI systems can have a large environmental impact and high energy consumption during their lifecycle. In order to better apprehend the impact of AI systems on the environment, the technical documentation drafted by providers should include information on the energy consumption of the AI system, including the consumption during development and expected consumption during use. Such information should take into account the relevant Union and national legislation. This reported information should be comprehensible, comparable and verifiable and to that end, the Commission should develop guidelines on a harmonised methodology for calculation and reporting of this information. To ensure that a single documentation is possible, terms and definitions related to the required documentation and any required documentation in the relevant Union legislation should be aligned as much as possible. | (46) Having information on how high-risk AI systems have been developed and how they perform throughout their lifecycle is essential to verify compliance with the requirements under this Regulation. This requires keeping records and the availability of a technical documentation, containing information which is necessary to assess the compliance of the AI system with the relevant requirements. Such information should include the general characteristics, capabilities and limitations of the system, algorithms, data, training, testing and validation processes used as well as documentation on the relevant risk management system. The technical documentation should be kept up to date. Furthermore, providers or users should keep logs automatically generated by the high-risk AI system, including for instance output data, start date and time etc., to the extent that such a system and the related logs are under their control, for a period that is appropriate to enable them to fulfil their obligations.
| (46) Having comprehensive information on how high-risk AI systems have been developed and how they perform throughout their lifecycle is essential to verify compliance with the requirements under this Regulation. This requires keeping records and the availability of a technical documentation, containing information which is necessary to assess the compliance of the AI system with the relevant requirements. Such information should include the general characteristics, capabilities and limitations of the system, algorithms, data, training, testing and validation processes used as well as documentation on the relevant risk management system. The technical documentation should be kept up to date appropriately throughout the lifecycle of the AI system. Furthermore, providers or users should keep logs automatically generated by the high-risk AI system, including for instance output data, start date and time etc., to the extent that such a system and the related logs are under their control, for a period that is appropriate to enable them to fulfil their obligations. AI systems can have a large environmental impact and high energy consumption during their lifecycle. In order to better apprehend the impact of AI systems on the environment, the technical documentation drafted by providers should include information on the energy consumption of the AI system, including the consumption during development and expected consumption during use. Such information should take into account the relevant Union and national legislation. This reported information should be comprehensible, comparable and verifiable and to that end, the Commission should develop guidelines on a harmonised methodology for calculation and reporting of this information. To ensure that a single documentation is possible, terms and definitions related to the required documentation and any required documentation in the relevant Union legislation should be aligned as much as possible.
Recital (46a) (new) | (46a) AI systems should take into account state-of-the-art methods and relevant applicable standards to reduce the energy use, resource use and waste, as well as to increase their energy efficiency and the overall efficiency of the system. The environmental aspects of AI systems that are significant for the purposes of this Regulation are the energy consumption of the AI system in the development, training and deployment phase as well as the recording and reporting and storing of this data. The design of AI systems should enable the measurement and logging of the consumption of energy and resources at each stage of development, training and deployment. The monitoring and reporting of the emissions of AI systems must be robust, transparent, consistent and accurate. In order to ensure the uniform application of this Regulation and stable legal ecosystem for providers and deployers in the Single Market, the Commission should develop a common specification for the methodology to fulfil the reporting and documentation requirement on the consumption of energy and resources during development, training and deployment. Such common specifications on measurement methodology can develop a baseline upon which the Commission can better decide if future regulatory interventions are needed, upon conducting an impact assessment that takes into account existing law. | AI systems should incorporate state-of-the-art methods and relevant standards to minimize energy and resource use, and waste, while maximizing energy efficiency and overall system efficiency. Significant environmental aspects of AI systems under this Regulation include energy consumption during development, training, and deployment phases, as well as the recording, reporting, and storage of this data. The design of AI systems should facilitate the measurement and logging of energy and resource consumption at each stage of development, training, and deployment. The monitoring and reporting of AI system emissions must be robust, transparent, consistent, and accurate. To ensure uniform application of this Regulation and a stable legal ecosystem for providers and deployers in the Single Market, the Commission should develop a common specification for the methodology to meet the reporting and documentation requirements on energy and resource consumption during development, training, and deployment. These common specifications on measurement methodology can establish a baseline, enabling the Commission to make informed decisions about future regulatory interventions, based on an impact assessment that considers existing law. | |
Recital (46b) (new) | (46b) In order to achieve the objectives of this Regulation, and contribute to the Union’s environmental objectives while ensuring the smooth functioning of the internal market, it may be necessary to establish recommendations and guidelines and, eventually, targets for sustainability. For that purpose the Commission is entitled to develop a methodology to contribute towards having Key Performance Indicators (KPIs) and a reference for the Sustainable Development Goals (SDGs). The goal should be in the first instance to enable fair comparison between AI implementation choices providing incentives to promote using more efficient AI technologies addressing energy and resource concerns. To meet this objective this Regulation should provide the means to establish a baseline collection of data reported on the emissions from development and training and for deployment. | In order to achieve the objectives of this Regulation and contribute to the Union’s environmental objectives while ensuring the smooth functioning of the internal market, it may be necessary to establish recommendations, guidelines, and potentially, sustainability targets. To this end, the Commission is authorized to develop a methodology to contribute towards the establishment of Key Performance Indicators (KPIs) and a reference for the Sustainable Development Goals (SDGs). The primary goal should be to enable a fair comparison between AI implementation choices, providing incentives to promote the use of more efficient AI technologies that address energy and resource concerns. To meet this objective, this Regulation should provide the means to establish a baseline collection of data reported on the emissions from development, training, and deployment of AI technologies. | ||
Recital (47) | (47) To address the opacity that may make certain AI systems incomprehensible to or too complex for natural persons, a certain degree of transparency should be required for high-risk AI systems. Users should be able to interpret the system output and use it appropriately. High-risk AI systems should therefore be accompanied by relevant documentation and instructions of use and include concise and clear information, including in relation to possible risks to fundamental rights and discrimination, where appropriate. | (47) To address the opacity that may make certain AI systems incomprehensible to or too complex for natural persons, a certain degree of transparency should be required for high-risk AI systems. Users should be able to interpret the system output and use it appropriately. High-risk AI systems should therefore be accompanied by relevant documentation and instructions of use and include concise and clear information, including in relation to possible risks to fundamental rights and discrimination, where appropriate. | (47) To address the opacity that may make certain AI systems incomprehensible to or too complex for natural persons, a certain degree of transparency should be required for high-risk AI systems. Users should be able to interpret the system output and use it appropriately. High-risk AI systems should therefore be accompanied by relevant documentation and instructions of use and include concise and clear information, including in relation to possible risks to fundamental rights and discrimination of the persons who may be affected by the system in light of its intended purpose, where appropriate. To facilitate the understanding of the instructions of use by users, they should contain illustrative examples, as appropriate. | (47) To address the opacity that may make certain AI systems incomprehensible to or too complex for natural persons, a certain degree of transparency should be required for high-risk AI systems. Users should be able to interpret the system output and use it appropriately. High-risk AI systems should therefore be accompanied by relevant documentation and instructions of use and include concise and clear information, including in relation to possible risks to fundamental rights and discrimination of the persons who may be affected by the system in light of its intended purpose, where appropriate. To facilitate the understanding of the instructions of use by users, they should contain illustrative examples, as appropriate.
Recital (47a) (new) | (47a) Such requirements on transparency and on the explicability of AI decision-making should also help to counter the deterrent effects of digital asymmetry and so-called ‘dark patterns’ targeting individuals and their informed consent. | The requirements on transparency and the explicability of AI decision-making should be implemented to counteract the deterrent effects of digital asymmetry and the so-called 'dark patterns' that target individuals and their informed consent. | ||
Recital (48) | High-risk AI systems should be designed and developed in such a way that natural persons can oversee their functioning. For this purpose, appropriate human oversight measures should be identified by the provider of the system before its placing on the market or putting into service. In particular, where appropriate, such measures should guarantee that the system is subject to in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role. | High-risk AI systems should be designed and developed in such a way that natural persons can oversee their functioning. For this purpose, appropriate human oversight measures should be identified by the provider of the system before its placing on the market or putting into service. In particular, where appropriate, such measures should guarantee that the system is subject to in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role. | (48) High-risk AI systems should be designed and developed in such a way that natural persons can oversee their functioning. For this purpose, appropriate human oversight measures should be identified by the provider of the system before its placing on the market or putting into service. In particular, where appropriate, such measures should guarantee that the system is subject to in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role. Considering the significant consequences for persons in case of incorrect matches by certain biometric identification systems, it is appropriate to provide for an enhanced human oversight requirement for those systems so that no action or decision may be taken by the user on the basis of the identification resulting from the system unless this has been separately verified and confirmed by at least two natural persons. Those persons could be from one or more entities and include the person operating or using the system. This requirement should not pose unnecessary burden or delays and it could be sufficient that the separate verifications by the different persons are automatically recorded in the logs generated by the system. | High-risk AI systems should be designed and developed in such a way that natural persons can oversee their functioning. For this purpose, appropriate human oversight measures should be identified by the provider of the system before its placing on the market or putting into service. In particular, where appropriate, such measures should guarantee that the system is subject to in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role. 
Considering the significant consequences for persons in case of incorrect matches by certain biometric identification systems, it is appropriate to provide for an enhanced human oversight requirement for those systems so that no action or decision may be taken by the user on the basis of the identification resulting from the system unless this has been separately verified and confirmed by at least two natural persons. Those persons could be from one or more entities and include the person operating or using the system. This requirement should not pose unnecessary burden or delays and it could be sufficient that the separate verifications by the different persons are automatically recorded in the logs generated by the system. |
Recital (49) | (49) High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity in accordance with the generally acknowledged state of the art. The level of accuracy and accuracy metrics should be communicated to the users. | (49) High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity in accordance with the generally acknowledged state of the art. Performance metrics and their expected level should be defined with the primary objective to mitigate risks and negative impact of the AI system. The expected level of performance metrics should be communicated in a clear, transparent, easily understandable and intelligible way to the deployers. The declaration of performance metrics cannot be considered proof of future levels, but relevant methods need to be applied to ensure consistent levels during use. While standardisation organisations exist to establish standards, coordination on benchmarking is needed to establish how these standardised requirements and characteristics of AI systems should be measured. The European Artificial Intelligence Office should bring together national and international metrology and benchmarking authorities and provide non-binding guidance to address the technical aspects of how to measure the appropriate levels of performance and robustness. | (49) High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity in accordance with the generally acknowledged state of the art. The level of accuracy and accuracy metrics should be communicated to the users. | (49) High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity in accordance with the generally acknowledged state of the art. Performance metrics and their expected level should be defined with the primary objective to mitigate risks and negative impact of the AI system. The level of accuracy and accuracy metrics should be communicated to the users and deployers in a clear, transparent, easily understandable and intelligible way. The declaration of performance metrics cannot be considered proof of future levels, but relevant methods need to be applied to ensure consistent levels during use. While standardisation organisations exist to establish standards, coordination on benchmarking is needed to establish how these standardised requirements and characteristics of AI systems should be measured. The European Artificial Intelligence Office should bring together national and international metrology and benchmarking authorities and provide non-binding guidance to address the technical aspects of how to measure the appropriate levels of performance and robustness.
Recital (50) | (50) The technical robustness is a key requirement for high-risk AI systems. They should be resilient against risks connected to the limitations of the system (e.g. errors, faults, inconsistencies, unexpected situations) as well as against malicious actions that may compromise the security of the AI system and result in harmful or otherwise undesirable behaviour. Failure to protect against these risks could lead to safety impacts or negatively affect the fundamental rights, for example due to erroneous decisions or wrong or biased outputs generated by the AI system. | (50) The technical robustness is a key requirement for high-risk AI systems. They should be resilient against risks connected to the limitations of the system (e.g. errors, faults, inconsistencies, unexpected situations) as well as against malicious actions that may compromise the security of the AI system and result in harmful or otherwise undesirable behaviour. Failure to protect against these risks could lead to safety impacts or negatively affect the fundamental rights, for example due to erroneous decisions or wrong or biased outputs generated by the AI system. Users of the AI system should take steps to ensure that the possible trade-off between robustness and accuracy does not lead to discriminatory or negative outcomes for minority subgroups. | (50) The technical robustness is a key requirement for high-risk AI systems. They should be resilient in relation to harmful or otherwise undesirable behaviour that may result from limitations within the systems or the environment in which the systems operate (e.g. errors, faults, inconsistencies, unexpected situations). High-risk AI systems should therefore be designed and developed with appropriate technical solutions to prevent or minimize that harmful or otherwise undesirable behaviour, such as for instance mechanisms enabling the system to safely interrupt its operation (fail-safe plans) in the presence of certain anomalies or when operation takes place outside certain predetermined boundaries. Failure to protect against these risks could lead to safety impacts or negatively affect the fundamental rights, for example due to erroneous decisions or wrong or biased outputs generated by the AI system. | (50) The technical robustness is a key requirement for high-risk AI systems. They should be resilient against risks connected to the limitations of the system (e.g. errors, faults, inconsistencies, unexpected situations) as well as against malicious actions that may compromise the security of the AI system and result in harmful or otherwise undesirable behaviour. High-risk AI systems should therefore be designed and developed with appropriate technical solutions to prevent or minimize that harmful or otherwise undesirable behaviour, such as for instance mechanisms enabling the system to safely interrupt its operation (fail-safe plans) in the presence of certain anomalies or when operation takes place outside certain predetermined boundaries. Users of the AI system should take steps to ensure that the possible trade-off between robustness and accuracy does not lead to discriminatory or negative outcomes for minority subgroups. Failure to protect against these risks could lead to safety impacts or negatively affect the fundamental rights, for example due to erroneous decisions or wrong or biased outputs generated by the AI system. |
Recital (51) | (51) Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures should therefore be taken by the providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure. | (51) Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks or confidentiality attacks), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures should therefore be taken by the providers of high-risk AI systems, as well as the notified bodies, competent national authorities and market surveillance authorities, also taking into account as appropriate the underlying ICT infrastructure. High-risk AI should be accompanied by security solutions and patches for the lifetime of the product, or in case of the absence of dependence on a specific product, for a time that needs to be stated by the manufacturer. | (51) Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures should therefore be taken by the providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure. | (51) Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks or confidentiality attacks), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures should therefore be taken by the providers of high-risk AI systems, as well as the notified bodies, competent national authorities and market surveillance authorities, also taking into account as appropriate the underlying ICT infrastructure. 
High-risk AI should be accompanied by security solutions and patches for the lifetime of the product, or in case of the absence of dependence on a specific product, for a time that needs to be stated by the manufacturer. |
Recital (52) | (52) As part of Union harmonisation legislation, rules applicable to the placing on the market, putting into service and use of high-risk AI systems should be laid down consistently with Regulation (EC) No 765/2008 of the European Parliament and of the Council51 setting out the requirements for accreditation and the market surveillance of products, Decision No 768/2008/EC of the European Parliament and of the Council52 on a common framework for the marketing of products and Regulation (EU) 2019/1020 of the European Parliament and of the Council53 on market surveillance and compliance of products (‘New Legislative Framework for the marketing of products’). ––––– 51. Regulation (EC) No 765/2008 of the European Parliament and of the Council of 9 July 2008 setting out the requirements for accreditation and market surveillance relating to the marketing of products and repealing Regulation (EEC) No 339/93 (OJ L 218, 13.8.2008, p. 30). 52. Decision No 768/2008/EC of the European Parliament and of the Council of 9 July 2008 on a common framework for the marketing of products, and repealing Council Decision 93/465/EEC (OJ L 218, 13.8.2008, p. 82). 53. Regulation (EU) 2019/1020 of the European Parliament and of the Council of 20 June 2019 on market surveillance and compliance of products and amending Directive 2004/42/EC and Regulations (EC) No 765/2008 and (EU) No 305/2011 (Text with EEA relevance) (OJ L 169, 25.6.2019, p. 1–44). | (52) As part of Union harmonisation legislation, rules applicable to the placing on the market, putting into service and use of high-risk AI systems should be laid down consistently with Regulation (EC) No 765/2008 of the European Parliament and of the Council51 setting out the requirements for accreditation and the market surveillance of products, Decision No 768/2008/EC of the European Parliament and of the Council52 on a common framework for the marketing of products and Regulation (EU) 2019/1020 of the European Parliament and of the Council53 on market surveillance and compliance of products (‘New Legislative Framework for the marketing of products’). ––––– 51. Regulation (EC) No 765/2008 of the European Parliament and of the Council of 9 July 2008 setting out the requirements for accreditation and market surveillance relating to the marketing of products and repealing Regulation (EEC) No 339/93 (OJ L 218, 13.8.2008, p. 30). 52. Decision No 768/2008/EC of the European Parliament and of the Council of 9 July 2008 on a common framework for the marketing of products, and repealing Council Decision 93/465/EEC (OJ L 218, 13.8.2008, p. 82). 53. Regulation (EU) 2019/1020 of the European Parliament and of the Council of 20 June 2019 on market surveillance and compliance of products and amending Directive 2004/42/EC and Regulations (EC) No 765/2008 and (EU) No 305/2011 (Text with EEA relevance) (OJ L 169, 25.6.2019, p. 1–44). 
| (52) As part of Union harmonisation legislation, rules applicable to the placing on the market, putting into service and use of high-risk AI systems should be laid down consistently with Regulation (EC) No 765/2008 of the European Parliament and of the Council22 setting out the requirements for accreditation and the market surveillance of products, Decision No 768/2008/EC of the European Parliament and of the Council23 on a common framework for the marketing of products and Regulation (EU) 2019/1020 of the European Parliament and of the Council24 on market surveillance and compliance of products (‘New Legislative Framework for the marketing of products’). –––––––– 22. Regulation (EC) No 765/2008 of the European Parliament and of the Council of 9 July 2008 setting out the requirements for accreditation and market surveillance relating to the marketing of products and repealing Regulation (EEC) No 339/93 (OJ L 218, 13.8.2008, p. 30). 23. Decision No 768/2008/EC of the European Parliament and of the Council of 9 July 2008 on a common framework for the marketing of products, and repealing Council Decision 93/465/EEC (OJ L 218, 13.8.2008, p. 82). 24. Regulation (EU) 2019/1020 of the European Parliament and of the Council of 20 June 2019 on market surveillance and compliance of products and amending Directive 2004/42/EC and Regulations (EC) No 765/2008 and (EU) No 305/2011 (Text with EEA relevance) (OJ L 169, 25.6.2019, p. 1–44). | (52) As part of Union harmonisation legislation, rules applicable to the placing on the market, putting into service and use of high-risk AI systems should be laid down consistently with Regulation (EC) No 765/2008 of the European Parliament and of the Council setting out the requirements for accreditation and the market surveillance of products, Decision No 768/2008/EC of the European Parliament and of the Council on a common framework for the marketing of products and Regulation (EU) 2019/1020 of the European Parliament and of the Council on market surveillance and compliance of products (‘New Legislative Framework for the marketing of products’). ––––– Regulation (EC) No 765/2008 of the European Parliament and of the Council of 9 July 2008 setting out the requirements for accreditation and market surveillance relating to the marketing of products and repealing Regulation (EEC) No 339/93 (OJ L 218, 13.8.2008, p. 30). Decision No 768/2008/EC of the European Parliament and of the Council of 9 July 2008 on a common framework for the marketing of products, and repealing Council Decision 93/465/EEC (OJ L 218, 13.8.2008, p. 82). Regulation (EU) 2019/1020 of the European Parliament and of the Council of 20 June 2019 on market surveillance and compliance of products and amending Directive 2004/42/EC and Regulations (EC) No 765/2008 and (EU) No 305/2011 (Text with EEA relevance) (OJ L 169, 25.6.2019, p. 1–44). |
Recital (52a) (new) | (52a) In line with New Legislative Framework principles, specific obligations for relevant operators within the AI value chain should be set to ensure legal certainty and facilitate compliance with this Regulation. In certain situations those operators could act in more than one role at the same time and should therefore fulfil cumulatively all relevant obligations associated with those roles. For example, an operator could act as a distributor and an importer at the same time. | In accordance with the principles of the New Legislative Framework, it is necessary to establish specific obligations for relevant operators within the AI value chain to ensure legal certainty and facilitate compliance with this Regulation. It should be noted that in certain situations, these operators may assume multiple roles simultaneously and are therefore required to fulfill all relevant obligations associated with those roles. For instance, an operator could function as both a distributor and an importer, thereby needing to meet the responsibilities of both roles. | |
Recital (53) | (53) It is appropriate that a specific natural or legal person, defined as the provider, takes the responsibility for the placing on the market or putting into service of a high-risk AI system, regardless of whether that natural or legal person is the person who designed or developed the system. | (53) It is appropriate that a specific natural or legal person, defined as the provider, takes the responsibility for the placing on the market or putting into service of a high-risk AI system, regardless of whether that natural or legal person is the person who designed or developed the system. | (53) It is appropriate that a specific natural or legal person, defined as the provider, takes the responsibility for the placing on the market or putting into service of a high-risk AI system, regardless of whether that natural or legal person is the person who designed or developed the system. | (53) It is appropriate that a specific natural or legal person, defined as the provider, takes the responsibility for the placing on the market or putting into service of a high-risk AI system, regardless of whether that natural or legal person is the person who designed or developed the system. |
Recital (53a) (new) | (53a) As signatories to the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD), the Union and the Member States are legally obliged to protect persons with disabilities from discrimination and promote their equality, to ensure that persons with disabilities have access, on an equal basis with others, to information and communications technologies and systems, and to ensure respect for privacy for persons with disabilities. Given the growing importance and use of AI systems, the application of universal design principles to all new technologies and services should ensure full, equal, and unrestricted access for everyone potentially affected by or using AI technologies, including persons with disabilities, in a way that takes full account of their inherent dignity and diversity. It is therefore essential that Providers ensure full compliance with accessibility requirements, including Directive (EU) 2016/2102 and Directive (EU) 2019/882. Providers should ensure compliance with these requirements by design. Therefore, the necessary measures should be integrated as much as possible into the design of the high-risk AI system. | As signatories to the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD), the Union and the Member States are legally bound to protect persons with disabilities from discrimination and promote their equality. This includes ensuring that persons with disabilities have equal access to information and communications technologies and systems, and that their privacy is respected. Given the increasing importance and use of AI systems, it is crucial that universal design principles are applied to all new technologies and services. This will ensure full, equal, and unrestricted access for everyone potentially affected by or using AI technologies, including persons with disabilities, while respecting their inherent dignity and diversity. Therefore, it is essential that Providers comply fully with accessibility requirements, including Directive (EU) 2016/2102 and Directive (EU) 2019/882. Providers should ensure compliance with these requirements by design, integrating the necessary measures as much as possible into the design of the high-risk AI system. | |
Recital (54) | (54) The provider should establish a sound quality management system, ensure the accomplishment of the required conformity assessment procedure, draw up the relevant documentation and establish a robust post-market monitoring system. Public authorities which put into service high-risk AI systems for their own use may adopt and implement the rules for the quality management system as part of the quality management system adopted at a national or regional level, as appropriate, taking into account the specificities of the sector and the competences and organisation of the public authority in question. | (54) The provider should establish a sound quality management system, ensure the accomplishment of the required conformity assessment procedure, draw up the relevant documentation and establish a robust post-market monitoring system. For providers that have already in place quality management systems based on standards such as ISO 9001 or other relevant standards, no duplicative quality management system in full should be expected but rather an adaptation of their existing systems to certain aspects linked to compliance with specific requirements of this Regulation. This should also be reflected in future standardization activities or guidance adopted by the Commission in this respect. Public authorities which put into service high-risk AI systems for their own use may adopt and implement the rules for the quality management system as part of the quality management system adopted at a national or regional level, as appropriate, taking into account the specificities of the sector and the competences and organisation of the public authority in question. | (54) The provider should establish a sound quality management system, ensure the accomplishment of the required conformity assessment procedure, draw up the relevant documentation and establish a robust post-market monitoring system. Public authorities which put into service high-risk AI systems for their own use may adopt and implement the rules for the quality management system as part of the quality management system adopted at a national or regional level, as appropriate, taking into account the specificities of the sector and the competences and organisation of the public authority in question. | (54) The provider should establish a sound quality management system, ensure the accomplishment of the required conformity assessment procedure, draw up the relevant documentation and establish a robust post-market monitoring system. For providers that have already in place quality management systems based on standards such as ISO 9001 or other relevant standards, no duplicative quality management system in full should be expected but rather an adaptation of their existing systems to certain aspects linked to compliance with specific requirements of this Regulation. This should also be reflected in future standardization activities or guidance adopted by the Commission in this respect. Public authorities which put into service high-risk AI systems for their own use may adopt and implement the rules for the quality management system as part of the quality management system adopted at a national or regional level, as appropriate, taking into account the specificities of the sector and the competences and organisation of the public authority in question. |
Recital (54a) (new) | (54a) To ensure legal certainty, it is necessary to clarify that, under certain specific conditions, any natural or legal person should be considered a provider of a new high-risk AI system and therefore assume all the relevant obligations. For example, this would be the case if that person puts its name or trademark on a high-risk AI system already placed on the market or put into service, or if that person modifies the intended purpose of an AI system which is not high-risk and is already placed on the market or put into service, in a way that makes the modified system a high-risk AI system. These provisions should apply without prejudice to more specific provisions established in certain New Legislative Framework sectorial legislation with which this Regulation should apply jointly. For example, Article 16, paragraph 2 of Regulation 745/2017, establishing that certain changes should not be considered modifications of a device that could affect its compliance with the applicable requirements, should continue to apply to high-risk AI systems that are medical devices within the meaning of that Regulation. | To ensure legal certainty, it should be clarified that under specific conditions, any natural or legal person may be considered a provider of a new high-risk AI system and therefore assume all the relevant obligations. This would apply if that person puts their name or trademark on a high-risk AI system already placed on the market or put into service, or if they modify the intended purpose of an AI system which is not high-risk and is already placed on the market or put into service, in a way that makes the modified system a high-risk AI system. These provisions should apply without prejudice to more specific provisions established in certain New Legislative Framework sectorial legislation with which this Regulation should apply jointly. For instance, Article 16, paragraph 2 of Regulation 745/2017, establishing that certain changes should not be considered modifications of a device that could affect its compliance with the applicable requirements, should continue to apply to high-risk AI systems that are medical devices within the meaning of that Regulation. | ||
Recital (55) | (55) Where a high-risk AI system that is a safety component of a product which is covered by a relevant New Legislative Framework sectorial legislation is not placed on the market or put into service independently from the product, the manufacturer of the final product as defined under the relevant New Legislative Framework legislation should comply with the obligations of the provider established in this Regulation and notably ensure that the AI system embedded in the final product complies with the requirements of this Regulation. | (55) Where a high-risk AI system that is a safety component of a product which is covered by a relevant New Legislative Framework sectorial legislation is not placed on the market or put into service independently from the product, the manufacturer of the final product as defined under the relevant New Legislative Framework legislation should comply with the obligations of the provider established in this Regulation and notably ensure that the AI system embedded in the final product complies with the requirements of this Regulation. | (55) Where a high-risk AI system that is a safety component of a product which is covered by a relevant New Legislative Framework sectorial legislation is not placed on the market or put into service independently from the product, the product manufacturer as defined under the relevant New Legislative Framework legislation should comply with the obligations of the provider established in this Regulation and notably ensure that the AI system embedded in the final product complies with the requirements of this Regulation. | (55) Where a high-risk AI system that is a safety component of a product which is covered by a relevant New Legislative Framework sectorial legislation is not placed on the market or put into service independently from the product, the manufacturer of the final product, also referred to as the product manufacturer, as defined under the relevant New Legislative Framework legislation should comply with the obligations of the provider established in this Regulation and notably ensure that the AI system embedded in the final product complies with the requirements of this Regulation. |
Recital (56) | (56) To enable enforcement of this Regulation and create a level-playing field for operators, and taking into account the different forms of making available of digital products, it is important to ensure that, under all circumstances, a person established in the Union can provide authorities with all the necessary information on the compliance of an AI system. Therefore, prior to making their AI systems available in the Union, where an importer cannot be identified, providers established outside the Union shall, by written mandate, appoint an authorised representative established in the Union. | (56) To enable enforcement of this Regulation and create a level-playing field for operators, and taking into account the different forms of making available of digital products, it is important to ensure that, under all circumstances, a person established in the Union can provide authorities with all the necessary information on the compliance of an AI system. Therefore, prior to making their AI systems available in the Union, providers established outside the Union shall, by written mandate, appoint an authorised representative established in the Union. | (56) To enable enforcement of this Regulation and create a level-playing field for operators, and taking into account the different forms of making available of digital products, it is important to ensure that, under all circumstances, a person established in the Union can provide authorities with all the necessary information on the compliance of an AI system. Therefore, prior to making their AI systems available in the Union, where an importer cannot be identified, providers established outside the Union shall, by written mandate, appoint an authorised representative established in the Union. | (56) To enable enforcement of this Regulation and create a level-playing field for operators, and taking into account the different forms of making available of digital products, it is important to ensure that, under all circumstances, a person established in the Union can provide authorities with all the necessary information on the compliance of an AI system. Therefore, prior to making their AI systems available in the Union, where an importer cannot be identified, providers established outside the Union shall, by written mandate, appoint an authorised representative established in the Union. |
Recital (56a) (new) | (56a) For providers who are not established in the Union, the authorised representative plays a pivotal role in ensuring the compliance of the high-risk AI systems placed on the market or put into service in the Union by those providers and in serving as their contact person established in the Union. Given that pivotal role, and in order to ensure that responsibility is assumed for the purposes of enforcement of this Regulation, it is appropriate to make the authorised representative jointly and severally liable with the provider for defective high-risk AI systems. The liability of the authorised representative provided for in this Regulation is without prejudice to the provisions of Directive 85/374/EEC on liability for defective products. | For providers not established in the Union, the authorised representative plays a crucial role in ensuring the compliance of high-risk AI systems placed on the market or put into service in the Union by those providers. They also serve as their contact person established in the Union. Given this crucial role, and to ensure that responsibility is assumed for the enforcement of this Regulation, it is appropriate to make the authorised representative jointly and severally liable with the provider for defective high-risk AI systems. This liability of the authorised representative under this Regulation does not prejudice the provisions of Directive 85/374/EEC on liability for defective products. | |
Recital (57) | (57) In line with New Legislative Framework principles, specific obligations for relevant economic operators, such as importers and distributors, should be set to ensure legal certainty and facilitate regulatory compliance by those relevant operators. | (57) In line with New Legislative Framework principles, specific obligations for relevant economic operators, such as importers and distributors, should be set to ensure legal certainty and facilitate regulatory compliance by those relevant operators. | deleted | None |
Recital (58) | (58) Given the nature of AI systems and the risks to safety and fundamental rights possibly associated with their use, including as regards the need to ensure proper monitoring of the performance of an AI system in a real-life setting, it is appropriate to set specific responsibilities for users. Users should in particular use high-risk AI systems in accordance with the instructions of use and certain other obligations should be provided for with regard to monitoring of the functioning of the AI systems and with regard to record-keeping, as appropriate. | (58) Given the nature of AI systems and the risks to safety and fundamental rights possibly associated with their use, including as regards the need to ensure proper monitoring of the performance of an AI system in a real-life setting, it is appropriate to set specific responsibilities for deployers. Deployers should in particular use high-risk AI systems in accordance with the instructions of use and certain other obligations should be provided for with regard to monitoring of the functioning of the AI systems and with regard to record-keeping, as appropriate. | (58) Given the nature of AI systems and the risks to safety and fundamental rights possibly associated with their use, including as regards the need to ensure proper monitoring of the performance of an AI system in a real-life setting, it is appropriate to set specific responsibilities for users. Users should in particular use high-risk AI systems in accordance with the instructions of use and certain other obligations should be provided for with regard to monitoring of the functioning of the AI systems and with regard to record-keeping, as appropriate. These obligations should be without prejudice to other user obligations in relation to high-risk AI systems under Union or national law, and should not apply where the use is made in the course of a personal non-professional activity. | (58) Given the nature of AI systems and the risks to safety and fundamental rights possibly associated with their use, including as regards the need to ensure proper monitoring of the performance of an AI system in a real-life setting, it is appropriate to set specific responsibilities for users and deployers. Users and deployers should in particular use high-risk AI systems in accordance with the instructions of use and certain other obligations should be provided for with regard to monitoring of the functioning of the AI systems and with regard to record-keeping, as appropriate. These obligations should be without prejudice to other user or deployer obligations in relation to high-risk AI systems under Union or national law, and should not apply where the use is made in the course of a personal non-professional activity.
Recital (58a) (new) [C] | (58a) It is appropriate to clarify that this Regulation does not affect the obligations of providers and users of AI systems in their role as data controllers or processors stemming from Union law on the protection of personal data in so far as the design, the development or the use of AI systems involves the processing of personal data. It is also appropriate to clarify that data subjects continue to enjoy all the rights and guarantees awarded to them by such Union law, including the rights related to solely automated individual decision-making, including profiling. Harmonised rules for the placing on the market, the putting into service and the use of AI systems established under this Regulation should facilitate the effective implementation and enable the exercise of the data subjects’ rights and other remedies guaranteed under Union law on the protection of personal data and of other fundamental rights. | This Regulation should clarify that it does not impact the obligations of AI system providers and users in their capacity as data controllers or processors under Union law on personal data protection, particularly when the design, development, or use of AI systems involves personal data processing. It should also be clear that data subjects retain all rights and guarantees provided by such Union law, including those related to solely automated individual decision-making and profiling. The harmonised rules for the marketing, commissioning, and use of AI systems under this Regulation should facilitate effective implementation and enable the exercise of data subjects' rights and other remedies guaranteed under Union law on personal data protection and other fundamental rights. | ||
Recital (58a) (new) [P] | (58a) Whilst risks related to AI systems can result from the way such systems are designed, risks can as well stem from how such AI systems are used. Deployers of high-risk AI systems therefore play a critical role in ensuring that fundamental rights are protected, complementing the obligations of the provider when developing the AI system. Deployers are best placed to understand how the high-risk AI system will be used concretely and can therefore identify potential significant risks that were not foreseen in the development phase, due to a more precise knowledge of the context of use, the people or groups of people likely to be affected, including marginalised and vulnerable groups. Deployers should identify appropriate governance structures in that specific context of use, such as arrangements for human oversight, complaint-handling procedures and redress procedures, because choices in the governance structures can be instrumental in mitigating risks to fundamental rights in concrete use-cases. In order to efficiently ensure that fundamental rights are protected, the deployer of high-risk AI systems should therefore carry out a fundamental rights impact assessment prior to putting it into use. The impact assessment should be accompanied by a detailed plan describing the measures or tools that will help mitigating the risks to fundamental rights identified at the latest from the time of putting it into use. If such plan cannot be identified, the deployer should refrain from putting the system into use. When performing this impact assessment, the deployer should notify the national supervisory authority and, to the best extent possible, relevant stakeholders as well as representatives of groups of persons likely to be affected by the AI system in order to collect relevant information which is deemed necessary to perform the impact assessment and are encouraged to make the summary of their fundamental rights impact assessment publicly available on their online website. This obligation should not apply to SMEs which, given the lack of resources, might find it difficult to perform such consultation. Nevertheless, they should also strive to involve such representatives when carrying out their fundamental rights impact assessment. In addition, given the potential impact and the need for democratic oversight and scrutiny, deployers of high-risk AI systems that are public authorities or Union institutions, bodies, offices and agencies, as well as deployers who are undertakings designated as a gatekeeper under Regulation (EU) 2022/1925 should be required to register the use of any high-risk AI system in a public database. Other deployers may voluntarily register. | While acknowledging that risks related to AI systems can arise from their design, it is also recognized that risks can stem from how these systems are used. Therefore, deployers of high-risk AI systems play a crucial role in ensuring the protection of fundamental rights, supplementing the responsibilities of the provider during the development of the AI system. Given their understanding of the specific use of the high-risk AI system, deployers are in a position to identify potential significant risks that may not have been anticipated during the development phase. This understanding is based on their knowledge of the context of use, the individuals or groups likely to be affected, including marginalized and vulnerable groups. Deployers are therefore required to identify suitable governance structures in the specific context of use, such as arrangements for human oversight, complaint-handling procedures, and redress procedures. These governance structures can play a key role in mitigating risks to fundamental rights in specific use-cases. To effectively ensure the protection of fundamental rights, deployers of high-risk AI systems should conduct a fundamental rights impact assessment before deploying the system. This assessment should be accompanied by a detailed plan outlining the measures or tools that will help mitigate the identified risks to fundamental rights from the time of deployment. If such a plan cannot be identified, the deployer should refrain from deploying the system. When conducting this impact assessment, the deployer should inform the national supervisory authority and, to the best extent possible, relevant stakeholders and representatives of groups likely to be affected by the AI system. This is to gather necessary information for the impact assessment. Deployers are encouraged to make the summary of their fundamental rights impact assessment publicly available on their online website. However, this obligation should not apply to SMEs, which may find it difficult to perform such consultation due to resource constraints. Nevertheless, they should also strive to involve such representatives when conducting their fundamental rights impact assessment. Furthermore, considering the potential impact and the need for democratic oversight and scrutiny, deployers of high-risk AI systems that are public authorities or Union institutions, bodies, offices, and agencies, as well as deployers designated as gatekeepers under Regulation (EU) 2022/1925, should be required to register the use of any high-risk AI system in a public database. Other deployers may register voluntarily. ||
Recital (59) | (59) It is appropriate to envisage that the user of the AI system should be the natural or legal person, public authority, agency or other body under whose authority the AI system is operated except where the use is made in the course of a personal non-professional activity. | (59) It is appropriate to envisage that the deployer of the AI system should be the natural or legal person, public authority, agency or other body under whose authority the AI system is operated except where the use is made in the course of a personal non-professional activity. | deleted | (59) It is appropriate to envisage that the entity, whether a natural or legal person, public authority, agency or other body under whose authority the AI system is operated, should be considered the user or deployer of the AI system, except where the use is made in the course of a personal non-professional activity. |
Recital (60) | (60) In the light of the complexity of the artificial intelligence value chain, relevant third parties, notably the ones involved in the sale and the supply of software, software tools and components, pre-trained models and data, or providers of network services, should cooperate, as appropriate, with providers and users to enable their compliance with the obligations under this Regulation and with competent authorities established under this Regulation. | (60) Within the AI value chain multiple entities often supply tools and services but also components or processes that are then incorporated by the provider into the AI system, including in relation to data collection and pre-processing, model training, model retraining, model testing and evaluation, integration into software, or other aspects of model development. The involved entities may make their offering commercially available directly or indirectly, through interfaces, such as Application Programming Interfaces (API), and distributed under free and open source licenses but also more and more by AI workforce platforms, trained parameters resale, DIY kits to build models or the offering of paying access to a model serving architecture to develop and train models. In the light of this complexity of the AI value chain, all relevant third parties, in particular those that are involved in the development, sale and the commercial supply of software tools, components, pre-trained models or data incorporated into the AI system, or providers of network services, should, without compromising their own intellectual property rights or trade secrets, make available the required information, training or expertise and cooperate, as appropriate, with providers to enable their control over all compliance relevant aspects of the AI system that falls under this Regulation. To allow a cost-effective AI value chain governance, the level of control shall be explicitly disclosed by each third party that supplies the provider with a tool, service, component or process that is later incorporated by the provider into the AI system. | deleted | (60) Given the complexity of the artificial intelligence value chain, which often involves multiple entities supplying tools, services, components, or processes that are incorporated into the AI system, all relevant third parties, particularly those involved in the development, sale, and commercial supply of software tools, components, pre-trained models, or data, or providers of network services, should cooperate, as appropriate, with providers and users. This cooperation should enable their compliance with the obligations under this Regulation and with competent authorities established under this Regulation, without compromising their own intellectual property rights or trade secrets. These third parties should make available the required information, training, or expertise to enable control over all compliance-relevant aspects of the AI system that falls under this Regulation. The level of control should be explicitly disclosed by each third party that supplies the provider with a tool, service, component, or process that is later incorporated by the provider into the AI system, to allow for cost-effective AI value chain governance.
Recital (60a) (new) | (60a) Where one party is in a stronger bargaining position, there is a risk that that party could leverage such position to the detriment of the other contracting party when negotiating the supply of tools, services, components or processes that are used or integrated in a high risk AI system or the remedies for the breach or the termination of related obligations. Such contractual imbalances particularly harm micro, small and medium-sized enterprises as well as start-ups, unless they are owned or sub-contracted by an enterprise which is able to compensate the sub-contractor appropriately, as they are without a meaningful ability to negotiate the conditions of the contractual agreement, and may have no other choice than to accept ‘take-it-or-leave-it’ contractual terms. Therefore, unfair contract terms regulating the supply of tools, services, components or processes that are used or integrated in a high risk AI system or the remedies for the breach or the termination of related obligations should not be binding to such micro, small or medium-sized enterprises and start-ups when they have been unilaterally imposed on them. | In situations where one party holds a stronger bargaining position, there is a potential risk of this party leveraging their position to the detriment of the other party during negotiations involving the supply of tools, services, components, or processes used or integrated in a high-risk AI system, or the remedies for the breach or termination of related obligations. This imbalance in contractual power can particularly harm micro, small, and medium-sized enterprises, as well as start-ups, unless they are owned or subcontracted by an enterprise capable of adequately compensating the subcontractor. These smaller entities often lack the ability to negotiate the conditions of the contractual agreement effectively and may be left with no choice but to accept 'take-it-or-leave-it' contractual terms. Therefore, any unfair contract terms related to the supply of tools, services, components, or processes used or integrated in a high-risk AI system, or the remedies for the breach or termination of related obligations, should not be binding on such micro, small, or medium-sized enterprises and start-ups if they have been unilaterally imposed. | ||
Recital (60b) (new) | (60b) Rules on contractual terms should take into account the principle of contractual freedom as an essential concept in business-to-business relationships. Therefore, not all contractual terms should be subject to an unfairness test, but only to those terms that are unilaterally imposed on micro, small and medium-sized enterprises and start-ups. This concerns ‘take-it-or-leave-it’ situations where one party supplies a certain contractual term and the micro, small or medium-sized enterprise and start-up cannot influence the content of that term despite an attempt to negotiate it. A contractual term that is simply provided by one party and accepted by the micro, small, medium-sized enterprise or a start-up or a term that is negotiated and subsequently agreed in an amended way between contracting parties should not be considered as unilaterally imposed. | In accordance with the principle of contractual freedom, which is fundamental in business-to-business relationships, not all contractual terms should be subject to an unfairness test. This test should only apply to terms that are unilaterally imposed on micro, small and medium-sized enterprises and start-ups. This specifically pertains to 'take-it-or-leave-it' situations where one party provides a certain contractual term and the micro, small or medium-sized enterprise and start-up are unable to influence the content of that term despite attempts to negotiate. A contractual term that is merely provided by one party and accepted by the micro, small, medium-sized enterprise or a start-up, or a term that is negotiated and subsequently agreed upon in an amended form between contracting parties, should not be considered as unilaterally imposed. | ||
Recital (60c) (new) | (60c) Furthermore, the rules on unfair contractual terms should only apply to those elements of a contract that are related to supply of tools, services, components or processes that are used or integrated in a high risk AI system or the remedies for the breach or the termination of related obligations. Other parts of the same contract, unrelated to these elements, should not be subject to the unfairness test laid down in this Regulation. | The rules on unfair contractual terms should only apply to those elements of a contract that are related to the supply of tools, services, components or processes that are used or integrated in a high risk AI system, or the remedies for the breach or the termination of related obligations. Other parts of the same contract, unrelated to these elements, should not be subject to the unfairness test laid down in this Regulation. | ||
Recital (60d) (new) | (60d) Criteria to identify unfair contractual terms should be applied only to excessive contractual terms, where a stronger bargaining position is abused. The vast majority of contractual terms that are commercially more favourable to one party than to the other, including those that are normal in business-to-business contracts, are a normal expression of the principle of contractual freedom and continue to apply. If a contractual term is not included in the list of terms that are always considered unfair, the general unfairness provision applies. In this regard, the terms listed as unfair terms should serve as a yardstick to interpret the general unfairness provision. | Criteria for identifying unfair contractual terms should be applied solely to excessive terms where there is an abuse of a stronger bargaining position. The majority of contractual terms that are commercially more favorable to one party, including those common in business-to-business contracts, are a standard expression of contractual freedom and should continue to be applicable. If a contractual term is not included in the list of terms that are always considered unfair, the general unfairness provision should be applied. The terms listed as unfair should serve as a benchmark for interpreting the general unfairness provision. | ||
Recital (60e) (new) | (60e) Foundation models are a recent development, in which AI models are developed from algorithms designed to optimize for generality and versatility of output. Those models are often trained on a broad range of data sources and large amounts of data to accomplish a wide range of downstream tasks, including some for which they were not specifically developed and trained. The foundation model can be unimodal or multimodal, trained through various methods such as supervised learning or reinforced learning. AI systems with specific intended purpose or general purpose AI systems can be an implementation of a foundation model, which means that each foundation model can be reused in countless downstream AI or general purpose AI systems. These models hold growing importance to many downstream applications and systems. | Foundation models represent a recent advancement in AI, characterized by their design which optimizes for generality and versatility of output. These models are typically trained on a diverse range of data sources and large volumes of data, enabling them to perform a wide array of downstream tasks, including those they were not specifically developed and trained for. Foundation models can be either unimodal or multimodal and can be trained through various methods such as supervised learning or reinforced learning. They can be implemented in AI systems with a specific intended purpose or in general purpose AI systems. This implies that each foundation model can be reused in numerous downstream AI or general purpose AI systems. The growing importance of these models is evident in their increasing use in many downstream applications and systems. | ||
Recital (60f) (new) | (60f) In the case of foundation models provided as a service such as through API access, the cooperation with downstream providers should extend throughout the time during which that service is provided and supported, in order to enable appropriate risk mitigation, unless the provider of the foundation model transfers the training model as well as extensive and appropriate information on the datasets and the development process of the system or restricts the service, such as the API access, in such a way that the downstream provider is able to fully comply with this Regulation without further support from the original provider of the foundation model. | In the instance of foundation models offered as a service, such as through API access, the collaboration with downstream providers should be maintained throughout the duration of the service provision and support. This is to facilitate appropriate risk mitigation, unless the original provider of the foundation model transfers the training model along with comprehensive and relevant information on the datasets and the system's development process. Alternatively, the service, such as API access, can be limited in a manner that allows the downstream provider to fully comply with this Regulation without requiring additional support from the original provider of the foundation model. | ||
Recital (60g) | (60g) In light of the nature and complexity of the value chain for AI system, it is essential to clarify the role of actors contributing to the development of AI systems. There is significant uncertainty as to the way foundation models will evolve, both in terms of typology of models and in terms of self-governance. Therefore, it is essential to clarify the legal situation of providers of foundation models. Combined with their complexity and unexpected impact, the downstream AI provider’s lack of control over the foundation model’s development and the consequent power imbalance and in order to ensure a fair sharing of responsibilities along the AI value chain, such models should be subject to proportionate and more specific requirements and obligations under this Regulation, namely foundation models should assess and mitigate possible risks and harms through appropriate design, testing and analysis, should implement data governance measures, including assessment of biases, and should comply with technical design requirements to ensure appropriate levels of performance, predictability, interpretability, corrigibility, safety and cybersecurity and should comply with environmental standards. These obligations should be accompanied by standards. Also, foundation models should have information obligations and prepare all necessary technical documentation for potential downstream providers to be able to comply with their obligations under this Regulation. Generative foundation models should ensure transparency about the fact the content is generated by an AI system, not by humans. These specific requirements and obligations do not amount to considering foundation models as high risk AI systems, but should guarantee that the objectives of this Regulation to ensure a high level of protection of fundamental rights, health and safety, environment, democracy and rule of law are achieved. Pre-trained models developed for a narrower, less general, more limited set of applications that cannot be adapted for a wide range of tasks such as simple multi-purpose AI systems should not be considered foundation models for the purposes of this Regulation, because of their greater interpretability which makes their behaviour less unpredictable. | Given the complexity and evolving nature of AI systems, it is crucial to define the roles of all actors involved in their development, particularly those providing foundation models. These models should be subject to specific and proportionate requirements under this Regulation due to their complexity, potential impact, and the downstream AI provider's lack of control over their development. These requirements should include risk assessment and mitigation through appropriate design, testing, and analysis. Foundation models should also implement data governance measures, including bias assessment, and comply with technical design requirements to ensure performance, predictability, interpretability, corrigibility, safety, cybersecurity, and environmental standards. In addition, foundation models should have information obligations and prepare necessary technical documentation to assist downstream providers in complying with their obligations under this Regulation. Generative foundation models should also ensure transparency by clearly indicating that their content is AI-generated, not human-generated. However, these specific requirements and obligations do not classify foundation models as high-risk AI systems. Instead, they aim to ensure that this Regulation's objectives of protecting fundamental rights, health and safety, the environment, democracy, and the rule of law are met. Pre-trained models developed for a narrower, less general, more limited set of applications that cannot be adapted for a wide range of tasks, such as simple multi-purpose AI systems, should not be considered foundation models under this Regulation due to their greater interpretability and predictability. ||
Recital (60h) | (60h) Given the nature of foundation models, expertise in conformity assessment is lacking and third-party auditing methods are still under development. The sector itself is therefore developing new ways to assess foundation models that fulfil in part the objective of auditing (such as model evaluation, red-teaming or machine learning verification and validation techniques). Those internal assessments for foundation models should be broadly applicable (e.g. independent of distribution channels, modality, development methods), to address risks specific to such models taking into account industry state-of-the-art practices and focus on developing sufficient technical understanding and control over the model, the management of reasonably foreseeable risks, and extensive analysis and testing of the model through appropriate measures, such as by the involvement of independent evaluators. As foundation models are a new and fast-evolving development in the field of artificial intelligence, it is appropriate for the Commission and the AI Office to monitor and periodically assess the legislative and governance framework of such models and in particular of generative AI systems based on such models, which raise significant questions related to the generation of content in breach of Union law, copyright rules, and potential misuse. It should be clarified that this Regulation should be without prejudice to Union law on copyright and related rights, including Directives 2001/29/EC, 2004/48/EC and (EU) 2019/790 of the European Parliament and of the Council. | Given the emerging nature of foundation models, there is a recognized lack of expertise in conformity assessment and third-party auditing methods are still under development. The sector is actively developing new ways to assess these models, including model evaluation, red-teaming, and machine learning verification and validation techniques. These internal assessments should be broadly applicable, addressing risks specific to such models and taking into account industry state-of-the-art practices. The focus should be on developing sufficient technical understanding and control over the model, managing reasonably foreseeable risks, and conducting extensive analysis and testing through appropriate measures, such as the involvement of independent evaluators. As foundation models represent a new and rapidly evolving aspect of artificial intelligence, it is deemed appropriate for the Commission and the AI Office to monitor and periodically assess the legislative and governance framework of such models. This is particularly relevant for generative AI systems based on these models, which raise significant questions related to the generation of content in breach of Union law, copyright rules, and potential misuse. It is important to clarify that this Regulation should be without prejudice to Union law on copyright and related rights, including Directives 2001/29/EC, 2004/48/EC and (EU) 2019/790 of the European Parliament and of the Council. | ||
Recital (61) | (61) Standardisation should play a key role to provide technical solutions to providers to ensure compliance with this Regulation. Compliance with harmonised standards as defined in Regulation (EU) No 1025/2012 of the European Parliament and of the Council54 should be a means for providers to demonstrate conformity with the requirements of this Regulation. However, the Commission could adopt common technical specifications in areas where no harmonised standards exist or where they are insufficient. _________ 54. Regulation (EU) No 1025/2012 of the European Parliament and of the Council of 25 October 2012 on European standardisation, amending Council Directives 89/686/EEC and 93/15/EEC and Directives 94/9/EC, 94/25/EC, 95/16/EC, 97/23/EC, 98/34/EC, 2004/22/EC, 2007/23/EC, 2009/23/EC and 2009/105/EC of the European Parliament and of the Council and repealing Council Decision 87/95/EEC and Decision No 1673/2006/EC of the European Parliament and of the Council (OJ L 316, 14.11.2012, p. 12). | (61) Standardisation should play a key role to provide technical solutions to providers to ensure compliance with this Regulation. Compliance with harmonised standards as defined in Regulation (EU) No 1025/2012 of the European Parliament and of the Council54 should be a means for providers to demonstrate conformity with the requirements of this Regulation. To ensure the effectiveness of standards as policy tool for the Union and considering the importance of standards for ensuring conformity with the requirements of this Regulation and for the competitiveness of undertakings, it is necessary to ensure a balanced representation of interests by involving all relevant stakeholders in the development of standards. The standardisation process should be transparent in terms of legal and natural persons participating in the standardisation activities. _________ 54. Regulation (EU) No 1025/2012 of the European Parliament and of the Council of 25 October 2012 on European standardisation, amending Council Directives 89/686/EEC and 93/15/EEC and Directives 94/9/EC, 94/25/EC, 95/16/EC, 97/23/EC, 98/34/EC, 2004/22/EC, 2007/23/EC, 2009/23/EC and 2009/105/EC of the European Parliament and of the Council and repealing Council Decision 87/95/EEC and Decision No 1673/2006/EC of the European Parliament and of the Council (OJ L 316, 14.11.2012, p. 12). | (61) Standardisation should play a key role to provide technical solutions to providers to ensure compliance with this Regulation, in line with the state of the art. Compliance with harmonised standards as defined in Regulation (EU) No 1025/2012 of the European Parliament and of the Council25, which are normally expected to reflect the state of the art, should be a means for providers to demonstrate conformity with the requirements of this Regulation. However, in the absence of relevant references to harmonised standards, the Commission should be able to establish, via implementing acts, common specifications for certain requirements under this Regulation as an exceptional fall back solution to facilitate the provider’s obligation to comply with the requirements of this Regulation, when the standardisation process is blocked or when there are delays in the establishment of an appropriate harmonised standard. If such delay is due to the technical complexity of the standard in question, this should be considered by the Commission before contemplating the establishment of common specifications. An appropriate involvement of small and medium enterprises in the elaboration of standards supporting the implementation of this Regulation is essential to promote innovation and competitiveness in the field of artificial intelligence within the Union. Such involvement should be appropriately ensured in accordance with Article 5 and 6 of Regulation 1025/2012. _________ 25. Regulation (EU) No 1025/2012 of the European Parliament and of the Council of 25 October 2012 on European standardisation, amending Council Directives 89/686/EEC and 93/15/EEC and Directives 94/9/EC, 94/25/EC, 95/16/EC, 97/23/EC, 98/34/EC, 2004/22/EC, 2007/23/EC, 2009/23/EC and 2009/105/EC of the European Parliament and of the Council and repealing Council Decision 87/95/EEC and Decision No 1673/2006/EC of the European Parliament and of the Council (OJ L 316, 14.11.2012, p. 12). | (61) Standardisation should play a key role to provide technical solutions to providers to ensure compliance with this Regulation, in line with the state of the art. Compliance with harmonised standards as defined in Regulation (EU) No 1025/2012 of the European Parliament and of the Council should be a means for providers to demonstrate conformity with the requirements of this Regulation. However, in the absence of relevant references to harmonised standards, the Commission could adopt common technical specifications as an exceptional fall back solution to facilitate the provider’s obligation to comply with the requirements of this Regulation, when the standardisation process is blocked or when there are delays in the establishment of an appropriate harmonised standard. If such delay is due to the technical complexity of the standard in question, this should be considered by the Commission before contemplating the establishment of common specifications. To ensure the effectiveness of standards as a policy tool for the Union and considering the importance of standards for ensuring conformity with the requirements of this Regulation and for the competitiveness of undertakings, it is necessary to ensure a balanced representation of interests by involving all relevant stakeholders in the development of standards. The standardisation process should be transparent in terms of legal and natural persons participating in the standardisation activities. An appropriate involvement of small and medium enterprises in the elaboration of standards supporting the implementation of this Regulation is essential to promote innovation and competitiveness in the field of artificial intelligence within the Union. Such involvement should be appropriately ensured in accordance with Article 5 and 6 of Regulation 1025/2012.
Recital (61a) (new) [C] | (61a) It is appropriate that, without prejudice to the use of harmonised standards and common specifications, providers benefit from a presumption of conformity with the relevant requirement on data when their high-risk AI system has been trained and tested on data reflecting the specific geographical, behavioural or functional setting within which the AI system is intended to be used. Similarly, in line with Article 54(3) of Regulation (EU) 2019/881 of the European Parliament and of the Council, high-risk AI systems that have been certified or for which a statement of conformity has been issued under a cybersecurity scheme pursuant to that Regulation and the references of which have been published in the Official Journal of the European Union should be presumed to be in compliance with the cybersecurity requirement of this Regulation. This remains without prejudice to the voluntary nature of that cybersecurity scheme. | It is proposed that, without undermining the use of harmonised standards and common specifications, providers should be presumed to be in compliance with the relevant data requirement when their high-risk AI system has been trained and tested on data reflecting the specific geographical, behavioural or functional setting within which the AI system is intended to be used. Furthermore, high-risk AI systems that have been certified or for which a statement of conformity has been issued under a cybersecurity scheme pursuant to Regulation (EU) 2019/881 of the European Parliament and of the Council, and the references of which have been published in the Official Journal of the European Union, should be presumed to be in compliance with the cybersecurity requirement of this Regulation. This proposal acknowledges the voluntary nature of the cybersecurity scheme. | ||
Recital (61a) (new) [P] | (61a) In order to facilitate compliance, the first standardisation requests should be issued by the Commission two months after the entry into force of this Regulation at the latest. This should serve to improve legal certainty, thereby promoting investment and innovation in AI, as well as competitiveness and growth of the Union market, while enhancing multistakeholder governance representing all relevant European stakeholders such as the AI Office, European standardisation organisations and bodies or experts groups established under relevant sectorial Union law as well as industry, SMEs, start-ups, civil society, researchers and social partners, and should ultimately facilitate global cooperation on standardisation in the field of AI in a manner consistent with Union values. When preparing the standardisation request, the Commission should consult the AI Office and the AI advisory Forum in order to collect relevant expertise. | In order to ensure compliance and improve legal certainty, the first standardisation requests should be issued by the Commission no later than two months after the entry into force of this Regulation. This measure is expected to promote investment and innovation in AI, enhance the competitiveness and growth of the Union market, and strengthen multistakeholder governance. This governance should represent all relevant European stakeholders, including the AI Office, European standardisation organisations and bodies, expert groups established under relevant sectorial Union law, industry, SMEs, start-ups, civil society, researchers, and social partners. This approach should also facilitate global cooperation on standardisation in the field of AI in a manner consistent with Union values. In the preparation of the standardisation request, the Commission should consult with the AI Office and the AI advisory Forum to gather relevant expertise. | ||
Recital (61b) (new) | (61b) When AI systems are intended to be used at the workplace, harmonised standards should be limited to technical specifications and procedures. | For AI systems intended for use in the workplace, harmonised standards should be confined to technical specifications and procedures. | ||
Recital (61c) (new) | (61c) The Commission should be able to adopt common specifications under certain conditions, when no relevant harmonised standard exists or to address specific fundamental rights concerns. Through the whole drafting process, the Commission should regularly consult the AI Office and its advisory forum, the European standardisation organisations and bodies or expert groups established under relevant sectorial Union law as well as relevant stakeholders, such as industry, SMEs, start-ups, civil society, researchers and social partners. | The Commission should have the authority to adopt common specifications under certain conditions, particularly when no relevant harmonised standard exists or to address specific fundamental rights concerns. Throughout the entire drafting process, the Commission should consistently consult with the AI Office and its advisory forum. Additionally, the Commission should engage with the European standardisation organisations and bodies or expert groups established under relevant sectorial Union law. It is also crucial for the Commission to involve relevant stakeholders, such as industry, SMEs, start-ups, civil society, researchers and social partners in the consultation process. | ||
Recital (61d) (new) | (61d) When adopting common specifications, the Commission should strive for regulatory alignment of AI with likeminded global partners, which is key to fostering innovation and cross-border partnerships within the field of AI, as coordination with likeminded partners in international standardisation bodies is of great importance. | In the adoption of common specifications, it is crucial that the Commission seeks regulatory alignment of AI with likeminded global partners. This alignment is instrumental in promoting innovation and fostering cross-border partnerships within the field of AI. Furthermore, coordination with these likeminded partners in international standardisation bodies holds significant importance. | ||
Recital (62) | (62) In order to ensure a high level of trustworthiness of high-risk AI systems, those systems should be subject to a conformity assessment prior to their placing on the market or putting into service. | (62) In order to ensure a high level of trustworthiness of high-risk AI systems, those systems should be subject to a conformity assessment prior to their placing on the market or putting into service. To increase the trust in the value chain and to give certainty to businesses about the performance of their systems, third-parties that supply AI components may voluntarily apply for a third-party conformity assessment. | (62) In order to ensure a high level of trustworthiness of high-risk AI systems, those systems should be subject to a conformity assessment prior to their placing on the market or putting into service. | (62) In order to ensure a high level of trustworthiness of high-risk AI systems, those systems should be subject to a conformity assessment prior to their placing on the market or putting into service. To increase the trust in the value chain and to give certainty to businesses about the performance of their systems, third-parties that supply AI components may voluntarily apply for a third-party conformity assessment. |
Recital (63) | (63) It is appropriate that, in order to minimise the burden on operators and avoid any possible duplication, for high-risk AI systems related to products which are covered by existing Union harmonisation legislation following the New Legislative Framework approach, the compliance of those AI systems with the requirements of this Regulation should be assessed as part of the conformity assessment already foreseen under that legislation. The applicability of the requirements of this Regulation should thus not affect the specific logic, methodology or general structure of conformity assessment under the relevant specific New Legislative Framework legislation. This approach is fully reflected in the interplay between this Regulation and the [Machinery Regulation]. While safety risks of AI systems ensuring safety functions in machinery are addressed by the requirements of this Regulation, certain specific requirements in the [Machinery Regulation] will ensure the safe integration of the AI system into the overall machinery, so as not to compromise the safety of the machinery as a whole. The [Machinery Regulation] applies the same definition of AI system as this Regulation. | (63) It is appropriate that, in order to minimise the burden on operators and avoid any possible duplication, for high-risk AI systems related to products which are covered by existing Union harmonisation legislation following the New Legislative Framework approach, the compliance of those AI systems with the requirements of this Regulation should be assessed as part of the conformity assessment already foreseen under that legislation. The applicability of the requirements of this Regulation should thus not affect the specific logic, methodology or general structure of conformity assessment under the relevant specific New Legislative Framework legislation. This approach is fully reflected in the interplay between this Regulation and the [Machinery Regulation]. While safety risks of AI systems ensuring safety functions in machinery are addressed by the requirements of this Regulation, certain specific requirements in the [Machinery Regulation] will ensure the safe integration of the AI system into the overall machinery, so as not to compromise the safety of the machinery as a whole. The [Machinery Regulation] applies the same definition of AI system as this Regulation. | (63) It is appropriate that, in order to minimise the burden on operators and avoid any possible duplication, for high-risk AI systems related to products which are covered by existing Union harmonisation legislation following the New Legislative Framework approach, the compliance of those AI systems with the requirements of this Regulation should be assessed as part of the conformity assessment already foreseen under that legislation. The applicability of the requirements of this Regulation should thus not affect the specific logic, methodology or general structure of conformity assessment under the relevant specific New Legislative Framework legislation. This approach is fully reflected in the interplay between this Regulation and the [Machinery Regulation]. While safety risks of AI systems ensuring safety functions in machinery are addressed by the requirements of this Regulation, certain specific requirements in the [Machinery Regulation] will ensure the safe integration of the AI system into the overall machinery, so as not to compromise the safety of the machinery as a whole. The [Machinery Regulation] applies the same definition of AI system as this Regulation. With regard to high-risk AI systems related to products covered by Regulations 745/2017 and 746/2017 on medical devices, the applicability of the requirements of this Regulation should be without prejudice and take into account the risk management logic and benefit-risk assessment performed under the medical device framework. | (63) It is appropriate that, in order to minimise the burden on operators and avoid any possible duplication, for high-risk AI systems related to products which are covered by existing Union harmonisation legislation following the New Legislative Framework approach, the compliance of those AI systems with the requirements of this Regulation should be assessed as part of the conformity assessment already foreseen under that legislation. The applicability of the requirements of this Regulation should thus not affect the specific logic, methodology or general structure of conformity assessment under the relevant specific New Legislative Framework legislation. This approach is fully reflected in the interplay between this Regulation and the [Machinery Regulation]. While safety risks of AI systems ensuring safety functions in machinery are addressed by the requirements of this Regulation, certain specific requirements in the [Machinery Regulation] will ensure the safe integration of the AI system into the overall machinery, so as not to compromise the safety of the machinery as a whole. The [Machinery Regulation] applies the same definition of AI system as this Regulation. With regard to high-risk AI systems related to products covered by Regulations 745/2017 and 746/2017 on medical devices, the applicability of the requirements of this Regulation should be without prejudice and take into account the risk management logic and benefit-risk assessment performed under the medical device framework.
Recital (64) | (64) Given the more extensive experience of professional pre-market certifiers in the field of product safety and the different nature of risks involved, it is appropriate to limit, at least in an initial phase of application of this Regulation, the scope of application of third-party conformity assessment for high-risk AI systems other than those related to products. Therefore, the conformity assessment of such systems should be carried out as a general rule by the provider under its own responsibility, with the only exception of AI systems intended to be used for the remote biometric identification of persons, for which the involvement of a notified body in the conformity assessment should be foreseen, to the extent they are not prohibited. | (64) Given the complexity of high-risk AI systems and the risks that are associated to them, it is essential to develop a more adequate capacity for the application of third party conformity assessment for high-risk AI systems. However, given the current experience of professional pre-market certifiers in the field of product safety and the different nature of risks involved, it is appropriate to limit, at least in an initial phase of application of this Regulation, the scope of application of third-party conformity assessment for high-risk AI systems other than those related to products. Therefore, the conformity assessment of such systems should be carried out as a general rule by the provider under its own responsibility, with the only exception of AI systems intended to be used for the remote biometric identification of persons, or AI systems intended to be used to make inferences about personal characteristics of natural persons on the basis of biometric or biometrics-based data, including emotion recognition systems for which the involvement of a notified body in the conformity assessment should be foreseen, to the extent they are not prohibited. | (64) Given the more extensive experience of professional pre-market certifiers in the field of product safety and the different nature of risks involved, it is appropriate to limit, at least in an initial phase of application of this Regulation, the scope of application of third-party conformity assessment for high-risk AI systems other than those related to products. Therefore, the conformity assessment of such systems should be carried out as a general rule by the provider under its own responsibility, with the only exception of AI systems intended to be used for the remote biometric identification of persons, for which the involvement of a notified body in the conformity assessment should be foreseen, to the extent they are not prohibited. | (64) Given the complexity of high-risk AI systems and the risks associated with them, it is essential to develop a more adequate capacity for the application of third-party conformity assessment for high-risk AI systems. However, considering the more extensive experience of professional pre-market certifiers in the field of product safety and the different nature of risks involved, it is appropriate to limit, at least in an initial phase of application of this Regulation, the scope of application of third-party conformity assessment for high-risk AI systems other than those related to products. Therefore, the conformity assessment of such systems should be carried out as a general rule by the provider under its own responsibility. The only exceptions are AI systems intended to be used for the remote biometric identification of persons, or AI systems intended to be used to make inferences about personal characteristics of natural persons on the basis of biometric or biometrics-based data, including emotion recognition systems. For these systems, the involvement of a notified body in the conformity assessment should be foreseen, to the extent they are not prohibited.
Recital (65) | (65) In order to carry out third-party conformity assessment for AI systems intended to be used for the remote biometric identification of persons, notified bodies should be designated under this Regulation by the national competent authorities, provided they are compliant with a set of requirements, notably on independence, competence and absence of conflicts of interests. | (65) In order to carry out third-party conformity assessments when so required, notified bodies should be designated under this Regulation by the national competent authorities, provided they are compliant with a set of requirements, notably on independence, competence, absence of conflicts of interests and minimum cybersecurity requirements. Member States should encourage the designation of a sufficient number of conformity assessment bodies, in order to make the certification feasible in a timely manner. The procedures of assessment, designation, notification and monitoring of conformity assessment bodies should be implemented as uniformly as possible in Member States, with a view to removing administrative border barriers and ensuring that the potential of the internal market is realised. | (65) In order to carry out third-party conformity assessment for AI systems intended to be used for the remote biometric identification of persons, notified bodies should be notified under this Regulation by the national competent authorities, provided they are compliant with a set of requirements, notably on independence, competence and absence of conflicts of interests. Notification of those bodies should be sent by national competent authorities to the Commission and the other Member States by means of the electronic notification tool developed and managed by the Commission pursuant to Article R23 of Decision 768/2008. | (65) In order to carry out third-party conformity assessments for AI systems intended to be used for the remote biometric identification of persons, notified bodies should be designated under this Regulation by the national competent authorities, provided they are compliant with a set of requirements, notably on independence, competence, absence of conflicts of interests and minimum cybersecurity requirements. Notification of these bodies should be sent by national competent authorities to the Commission and the other Member States by means of the electronic notification tool developed and managed by the Commission pursuant to Article R23 of Decision 768/2008. Member States should encourage the designation of a sufficient number of conformity assessment bodies, in order to make the certification feasible in a timely manner. The procedures of assessment, designation, notification and monitoring of conformity assessment bodies should be implemented as uniformly as possible in Member States, with a view to removing administrative border barriers and ensuring that the potential of the internal market is realised. |
Recital (65a) (new) | (65a) In line with Union commitments under the World Trade Organization Agreement on Technical Barriers to Trade, it is adequate to maximise the acceptance of test results produced by competent conformity assessment bodies, independent of the territory in which they are established, where necessary to demonstrate conformity with the applicable requirements of the Regulation. The Commission should actively explore possible international instruments for that purpose and in particular pursue the possible establishment of mutual recognition agreements with countries which are on a comparable level of technical development, and have compatible approach concerning AI and conformity assessment. | In accordance with the Union's commitments under the World Trade Organization Agreement on Technical Barriers to Trade, it is crucial to maximize the acceptance of test results produced by competent conformity assessment bodies, regardless of their geographical location, in order to demonstrate conformity with the applicable requirements of the Regulation. The Commission is encouraged to actively investigate potential international instruments for this purpose. Specifically, the Commission should consider the establishment of mutual recognition agreements with countries that have a similar level of technical development and a compatible approach to AI and conformity assessment. | ||
Recital (66) | (66) In line with the commonly established notion of substantial modification for products regulated by Union harmonisation legislation, it is appropriate that an AI system undergoes a new conformity assessment whenever a change occurs which may affect the compliance of the system with this Regulation or when the intended purpose of the system changes. In addition, as regards AI systems which continue to ‘learn’ after being placed on the market or put into service (i.e. they automatically adapt how functions are carried out), it is necessary to provide rules establishing that changes to the algorithm and its performance that have been pre-determined by the provider and assessed at the moment of the conformity assessment should not constitute a substantial modification. | (66) In line with the commonly established notion of substantial modification for products regulated by Union harmonisation legislation, it is appropriate that a high-risk AI system undergoes a new conformity assessment whenever an unplanned change occurs which goes beyond controlled or predetermined changes by the provider including continuous learning and which may create a new unacceptable risk and significantly affect the compliance of the high-risk AI system with this Regulation or when the intended purpose of the system changes. In addition, as regards AI systems which continue to ‘learn’ after being placed on the market or put into service (i.e. they automatically adapt how functions are carried out), it is necessary to provide rules establishing that changes to the algorithm and its performance that have been pre-determined by the provider and assessed at the moment of the conformity assessment should not constitute a substantial modification. The same should apply to updates of the AI system for security reasons in general and to protect against evolving threats of manipulation of the system, provided that they do not amount to a substantial modification. | (66) In line with the commonly established notion of substantial modification for products regulated by Union harmonisation legislation, it is appropriate that whenever a change occurs which may affect the compliance of a high risk AI system with this Regulation (e.g. change of operating system or software architecture), or when the intended purpose of the system changes, that AI system should be considered a new AI system which should undergo a new conformity assessment. However, changes occurring to the algorithm and the performance of AI systems which continue to ‘learn’ after being placed on the market or put into service (i.e. automatically adapting how functions are carried out) should not constitute a substantial modification, provided that those changes have been pre-determined by the provider and assessed at the moment of the conformity assessment. | (66) In line with the commonly established notion of substantial modification for products regulated by Union harmonisation legislation, it is appropriate that a high-risk AI system undergoes a new conformity assessment whenever a change occurs which may affect the compliance of the system with this Regulation or when the intended purpose of the system changes. This includes unplanned changes that go beyond controlled or predetermined changes by the provider, including continuous learning, which may create a new unacceptable risk. However, changes to the algorithm and its performance that have been pre-determined by the provider and assessed at the moment of the conformity assessment should not constitute a substantial modification. This also applies to updates of the AI system for security reasons in general and to protect against evolving threats of manipulation of the system, provided that they do not amount to a substantial modification. In the event of a change of operating system or software architecture, the AI system should be considered a new AI system and should undergo a new conformity assessment.
Recital (67) | (67) High-risk AI systems should bear the CE marking to indicate their conformity with this Regulation so that they can move freely within the internal market. Member States should not create unjustified obstacles to the placing on the market or putting into service of high-risk AI systems that comply with the requirements laid down in this Regulation and bear the CE marking. | (67) High-risk AI systems should bear the CE marking to indicate their conformity with this Regulation so that they can move freely within the internal market. For physical high-risk AI systems, a physical CE marking should be affixed, and may be complemented by a digital CE marking. For digital only high-risk AI systems, a digital CE marking should be used. Member States should not create unjustified obstacles to the placing on the market or putting into service of high-risk AI systems that comply with the requirements laid down in this Regulation and bear the CE marking. | (67) High-risk AI systems should bear the CE marking to indicate their conformity with this Regulation so that they can move freely within the internal market. Member States should not create unjustified obstacles to the placing on the market or putting into service of high risk AI systems that comply with the requirements laid down in this Regulation and bear the CE marking. | (67) High-risk AI systems should bear the CE marking to indicate their conformity with this Regulation so that they can move freely within the internal market. For physical high-risk AI systems, a physical CE marking should be affixed, and may be complemented by a digital CE marking. For digital only high-risk AI systems, a digital CE marking should be used. Member States should not create unjustified obstacles to the placing on the market or putting into service of high-risk AI systems that comply with the requirements laid down in this Regulation and bear the CE marking. |
Recital (68) | (68) Under certain conditions, rapid availability of innovative technologies may be crucial for health and safety of persons and for society as a whole. It is thus appropriate that under exceptional reasons of public security or protection of life and health of natural persons and the protection of industrial and commercial property, Member States could authorise the placing on the market or putting into service of AI systems which have not undergone a conformity assessment | (68) Under certain conditions, rapid availability of innovative technologies may be crucial for health and safety of persons and for society as a whole. It is thus appropriate that under exceptional reasons of public security or protection of life and health of natural persons and the protection of industrial and commercial property, Member States could authorise the placing on the market or putting into service of AI systems which have not undergone a conformity assessment. | (68) Under certain conditions, rapid availability of innovative technologies may be crucial for health and safety of persons and for society as a whole. It is thus appropriate that under exceptional reasons of public security or protection of life and health of natural persons and the protection of industrial and commercial property, Member States could authorise the placing on the market or putting into service of AI systems which have not undergone a conformity assessment. | (68) Under certain conditions, rapid availability of innovative technologies may be crucial for health and safety of persons and for society as a whole. It is thus appropriate that under exceptional reasons of public security or protection of life and health of natural persons and the protection of industrial and commercial property, Member States could authorise the placing on the market or putting into service of AI systems which have not undergone a conformity assessment. |
Recital (69) | (69) In order to facilitate the work of the Commission and the Member States in the artificial intelligence field as well as to increase the transparency towards the public, providers of high-risk AI systems other than those related to products falling within the scope of relevant existing Union harmonisation legislation, should be required to register their high-risk AI system in a EU database, to be established and managed by the Commission. The Commission should be the controller of that database, in accordance with Regulation (EU) 2018/1725 of the European Parliament and of the Council55. In order to ensure the full functionality of the database, when deployed, the procedure for setting the database should include the elaboration of functional specifications by the Commission and an independent audit report. –––––– 55. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1). | (69) In order to facilitate the work of the Commission and the Member States in the artificial intelligence field as well as to increase the transparency towards the public, providers of high-risk AI systems other than those related to products falling within the scope of relevant existing Union harmonisation legislation, should be required to register their high-risk AI system and foundation models in a EU database, to be established and managed by the Commission. This database should be freely and publicly accessible, easily understandable and machine-readable. The database should also be user-friendly and easily navigable, with search functionalities at minimum allowing the general public to search the database for specific high-risk systems, locations, categories of risk under Annex IV and keywords. Deployers who are public authorities or Union institutions, bodies, offices and agencies or deployers acting on their behalf and deployers who are undertakings designated as a gatekeeper under Regulation (EU)2022/1925 should also register in the EU database before putting into service or using a high-risk AI system for the first time and following each substantial modification. Other deployers should be entitled to do so voluntarily. Any substantial modification of high-risk AI systems shall also be registered in the EU database. The Commission should be the controller of that database, in accordance with Regulation (EU) 2018/1725 of the European Parliament and of the Council55. In order to ensure the full functionality of the database, when deployed, the procedure for setting the database should include the elaboration of functional specifications by the Commission and an independent audit report. The Commission should take into account cybersecurity and hazard-related risks when carrying out its tasks as data controller on the EU database. In order to maximise the availability and use of the database by the public, the database, including the information made available through it, should comply with requirements under the Directive 2019/882. –––––– 55. 
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1). | (69) In order to facilitate the work of the Commission and the Member States in the artificial intelligence field as well as to increase the transparency towards the public, providers of high-risk AI systems other than those related to products falling within the scope of relevant existing Union harmonisation legislation, should be required to register themselves and information about their high-risk AI system in a EU database, to be established and managed by the Commission. Before using a high-risk AI system listed in Annex III, users of high risk AI systems that are public authorities, agencies or bodies, with the exception of law enforcement, border control, immigration or asylum authorities, and authorities that are users of high-risk AI systems in the area of critical infrastructure shall also register themselves in such database and select the system that they envisage to use. The Commission should be the controller of that database, in accordance with Regulation (EU) 2018/1725 of the European Parliament and of the Council26. In order to ensure the full functionality of the database, when deployed, the procedure for setting the database should include the elaboration of functional specifications by the Commission and an independent audit report. ––––– 26. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1). | (69) In order to facilitate the work of the Commission and the Member States in the artificial intelligence field as well as to increase the transparency towards the public, providers of high-risk AI systems other than those related to products falling within the scope of relevant existing Union harmonisation legislation, should be required to register themselves, their high-risk AI system and foundation models in a EU database, to be established and managed by the Commission. This database should be freely and publicly accessible, easily understandable, machine-readable, user-friendly and easily navigable, with search functionalities at minimum allowing the general public to search the database for specific high-risk systems, locations, categories of risk under Annex IV and keywords. Before using a high-risk AI system listed in Annex III, users of high-risk AI systems that are public authorities, agencies or bodies, with the exception of law enforcement, border control, immigration or asylum authorities, and authorities that are users of high-risk AI systems in the area of critical infrastructure shall also register themselves in such database and select the system that they envisage to use. Deployers who are public authorities or Union institutions, bodies, offices and agencies or deployers acting on their behalf and deployers who are undertakings designated as a gatekeeper under Regulation (EU)2022/1925 should also register in the EU database before putting into service or using a high-risk AI system for the first time and following each substantial modification. Other deployers should be entitled to do so voluntarily. 
Any substantial modification of high-risk AI systems shall also be registered in the EU database. The Commission should be the controller of that database, in accordance with Regulation (EU) 2018/1725 of the European Parliament and of the Council. In order to ensure the full functionality of the database, when deployed, the procedure for setting the database should include the elaboration of functional specifications by the Commission and an independent audit report. The Commission should take into account cybersecurity and hazard-related risks when carrying out its tasks as data controller on the EU database. In order to maximise the availability and use of the database by the public, the database, including the information made available through it, should comply with requirements under the Directive 2019/882. |
Recital (70) | (70) Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system. Such information and notifications should be provided in accessible formats for persons with disabilities. Further, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic, should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin. | (70) Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system. Such information and notifications should be provided in accessible formats for persons with disabilities. Further, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic, should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin. | (70) Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect taking into account the circumstances and the context of use. When implementing such obligation, the characteristics of individuals belonging to vulnerable groups due to their age or disability should be taken into account to the extent the AI system is intended to interact with those groups as well. Moreover, natural persons should be notified when they are exposed to systems that, by processing their biometric data, can identify or infer the emotions or intentions of those persons or assign them to specific categories. 
Such specific categories can relate to aspects such as sex, age, hair colour, eye colour, tattoos, personal traits, ethnic origin, personal preferences and interests or to other aspects such as sexual or political orientation. Such information and notifications should be provided in accessible formats for persons with disabilities. Further, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic, should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin. The compliance with the information obligations referred to above should not be interpreted as indicating that the use of the system or its output is lawful under this Regulation or other Union and Member State law and should be without prejudice to other transparency obligations for users of AI systems laid down in Union or national law. Furthermore, it should also not be interpreted as indicating that the use of the system or its output impedes the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, in particular where the content is part of an evidently creative, satirical, artistic or fictional work or programme, subject to appropriate safeguards for the rights and freedoms of third parties. | (70) Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect taking into account the circumstances and the context of use. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system that, by processing their biometric data, can identify or infer the emotions or intentions of those persons or assign them to specific categories. Such specific categories can relate to aspects such as sex, age, hair colour, eye colour, tattoos, personal traits, ethnic origin, personal preferences and interests or to other aspects such as sexual or political orientation. Such information and notifications should be provided in accessible formats for persons with disabilities, taking into account the characteristics of individuals belonging to vulnerable groups due to their age or disability to the extent the AI system is intended to interact with those groups as well. Further, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic, should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin.
The compliance with the information obligations referred to above should not be interpreted as indicating that the use of the system or its output is lawful under this Regulation or other Union and Member State law and should be without prejudice to other transparency obligations for users of AI systems laid down in Union or national law. Furthermore, it should also not be interpreted as indicating that the use of the system or its output impedes the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, in particular where the content is part of an evidently creative, satirical, artistic or fictional work or programme, subject to appropriate safeguards for the rights and freedoms of third parties. |
Recital (71) | (71) Artificial intelligence is a rapidly developing family of technologies that requires novel forms of regulatory oversight and a safe space for experimentation, while ensuring responsible innovation and integration of appropriate safeguards and risk mitigation measures. To ensure a legal framework that is innovation-friendly, future-proof and resilient to disruption, national competent authorities from one or more Member States should be encouraged to establish artificial intelligence regulatory sandboxes to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service. | (71) Artificial intelligence is a rapidly developing family of technologies that requires regulatory oversight and a safe and controlled space for experimentation, while ensuring responsible innovation and integration of appropriate safeguards and risk mitigation measures. To ensure a legal framework that promotes innovation, is future-proof, and resilient to disruption, Member States should establish at least one artificial intelligence regulatory sandbox to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service. It is indeed desirable for the establishment of regulatory sandboxes, whose establishment is currently left at the discretion of Member States, as a next step to be made mandatory with established criteria. That mandatory sandbox could also be established jointly with one or several other Member States, as long as that sandbox would cover the respective national level of the involved Member States. Additional sandboxes may also be established at different levels, including cross Member States, in order to facilitate cross-border cooperation and synergies. With the exception of the mandatory sandbox at national level, Member States should also be able to establish virtual or hybrid sandboxes. All regulatory sandboxes should be able to accommodate both physical and virtual products. Establishing authorities should also ensure that the regulatory sandboxes have the adequate financial and human resources for their functioning. | (71) Artificial intelligence is a rapidly developing family of technologies that requires novel forms of regulatory oversight and a safe space for experimentation, while ensuring responsible innovation and integration of appropriate safeguards and risk mitigation measures. To ensure a legal framework that is innovation-friendly, future-proof and resilient to disruption, national competent authorities from one or more Member States should be encouraged to establish artificial intelligence regulatory sandboxes to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service. | (71) Artificial intelligence is a rapidly developing family of technologies that requires novel forms of regulatory oversight and a safe, controlled space for experimentation, while ensuring responsible innovation and integration of appropriate safeguards and risk mitigation measures. To ensure a legal framework that is innovation-friendly, future-proof, and resilient to disruption, national competent authorities from one or more Member States should be encouraged to establish at least one artificial intelligence regulatory sandbox. 
This would facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service. As a next step, the establishment of regulatory sandboxes should be made mandatory, with established criteria; such a mandatory sandbox could be set up jointly with one or several other Member States, as long as it covers the respective national level of the Member States involved. Additional sandboxes may also be established at different levels, including across Member States, to facilitate cross-border cooperation and synergies. With the exception of the mandatory sandbox at national level, Member States should also be able to establish virtual or hybrid sandboxes. All regulatory sandboxes should be able to accommodate both physical and virtual products. Establishing authorities should also ensure that the regulatory sandboxes have adequate financial and human resources for their functioning. |
Recital (72) | (72) The objectives of the regulatory sandboxes should be to foster AI innovation by establishing a controlled experimentation and testing environment in the development and pre-marketing phase with a view to ensuring compliance of the innovative AI systems with this Regulation and other relevant Union and Member States legislation; to enhance legal certainty for innovators and the competent authorities’ oversight and understanding of the opportunities, emerging risks and the impacts of AI use, and to accelerate access to markets, including by removing barriers for small and medium enterprises (SMEs) and start-ups. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. This Regulation should provide the legal basis for the use of personal data collected for other purposes for developing certain AI systems in the public interest within the AI regulatory sandbox, in line with Article 6(4) of Regulation (EU) 2016/679, and Article 6 of Regulation (EU) 2018/1725, and without prejudice to Article 4(2) of Directive (EU) 2016/680. Participants in the sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to mitigate any high-risks to safety and fundamental rights that may arise during the development and experimentation in the sandbox. The conduct of the participants in the sandbox should be taken into account when competent authorities decide whether to impose an administrative fine under Article 83(2) of Regulation 2016/679 and Article 57 of Directive 2016/680. | (72) The objectives of the regulatory sandboxes should be: for the establishing authorities to increase their understanding of technical developments, improve supervisory methods and provide guidance to AI systems developers and providers to achieve regulatory compliance with this Regulation or where relevant, other applicable Union and Member States legislation, as well as with the Charter of Fundamental Rights ; for the prospective providers to allow and facilitate the testing and development of innovative solutions related to AI systems in the pre-marketing phase to enhance legal certainty, to allow for more regulatory learning by establishing authorities in a controlled environment to develop better guidance and to identify possible future improvements of the legal framework through the ordinary legislative procedure. Any significant risks identified during the development and testing of such AI systems should result in immediate mitigation and, failing that, in the suspension of the development and testing process until such mitigation takes place. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. Member States should ensure that regulatory sandboxes are widely available throughout the Union, while the participation should remain voluntary. 
It is especially important to ensure that SMEs and startups can easily access these sandboxes, are actively involved and participate in the development and testing of innovative AI systems, in order to be able to contribute with their knowhow and experience. | (72) The objectives of the AI regulatory sandboxes should be to foster AI innovation by establishing a controlled experimentation and testing environment in the development and pre-marketing phase with a view to ensuring compliance of the innovative AI systems with this Regulation and other relevant Union and Member States legislation; to enhance legal certainty for innovators and the competent authorities’ oversight and understanding of the opportunities, emerging risks and the impacts of AI use, and to accelerate access to markets, including by removing barriers for small and medium enterprises (SMEs), including start ups. The participation in the AI regulatory sandbox should focus on issues that raise legal uncertainty for providers and prospective providers to innovate, experiment with AI in the Union and contribute to evidence-based regulatory learning. The supervision of the AI systems in the AI regulatory sandbox should therefore cover their development, training, testing and validation before the systems are placed on the market or put into service, as well as the notion and occurrence of substantial modification that may require a new conformity assessment procedure. Where appropriate, national competent authorities establishing AI regulatory sandboxes should cooperate with other relevant authorities, including those supervising the protection of fundamental rights, and could allow for the involvement of other actors within the AI ecosystem such as national or European standardisation organisations, notified bodies, testing and experimentation facilities, research and experimentation labs, innovation hubs and relevant stakeholder and civil society organisations. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. AI regulatory sandboxes established under this Regulation should be without prejudice to other legislation allowing for the establishment of other sandboxes aiming at ensuring compliance with legislation other that this Regulation. Where appropriate, relevant competent authorities in charge of those other regulatory sandboxes should consider the benefits of using those sandboxes also for the purpose of ensuring compliance of AI systems with this Regulation. Upon agreement between the national competent authorities and the participants in the AI regulatory sandbox, testing in real world conditions may also be operated and supervised in the framework of the AI regulatory sandbox. | (72) The objectives of the AI regulatory sandboxes should be to foster AI innovation by establishing a controlled experimentation and testing environment in the development and pre-marketing phase with a view to ensuring compliance of the innovative AI systems with this Regulation and other relevant Union and Member States legislation. The sandboxes should enhance legal certainty for innovators and improve the competent authorities’ oversight and understanding of the technical developments, opportunities, emerging risks and the impacts of AI use. 
They should also accelerate access to markets, including by removing barriers for small and medium enterprises (SMEs) and start-ups. The participation in the AI regulatory sandbox should focus on issues that raise legal uncertainty for providers and prospective providers to innovate, experiment with AI in the Union and contribute to evidence-based regulatory learning. The supervision of the AI systems in the AI regulatory sandbox should cover their development, training, testing and validation before the systems are placed on the market or put into service. Any significant risks identified during the development and testing of such AI systems should result in immediate mitigation and, failing that, in the suspension of the development and testing process until such mitigation takes place. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. Member States should ensure that regulatory sandboxes are widely available throughout the Union, while the participation should remain voluntary. It is especially important to ensure that SMEs and startups can easily access these sandboxes, are actively involved and participate in the development and testing of innovative AI systems, in order to be able to contribute with their knowhow and experience. AI regulatory sandboxes established under this Regulation should be without prejudice to other legislation allowing for the establishment of other sandboxes aiming at ensuring compliance with legislation other than this Regulation. |
Recital (72a) [P] / (-72a) [C] (new) | (72a) This Regulation should provide the legal basis for the use of personal data collected for other purposes for developing certain AI systems in the public interest within the AI regulatory sandbox only under specified conditions in line with Article 6(4) of Regulation (EU) 2016/679, and Article 6 of Regulation (EU) 2018/1725, and without prejudice to Article 4(2) of Directive (EU) 2016/680. Prospective providers in the sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to mitigate any high-risks to safety, health and the environment and fundamental rights that may arise during the development and experimentation in the sandbox. The conduct of the prospective providers in the sandbox should be taken into account when competent authorities decide over the temporary or permanent suspension of their participation in the sandbox whether to impose an administrative fine under Article 83(2) of Regulation 2016/679 and Article 57 of Directive 2016/680. | (-72a) This Regulation should provide the legal basis for the participants in the AI regulatory sandbox to use personal data collected for other purposes for developing certain AI systems in the public interest within the AI regulatory sandbox, in line with Article 6(4) and 9(2)(g) of Regulation (EU) 2016/679, and Article 5 and 10 of Regulation (EU) 2018/1725, and without prejudice to Articles 4(2) and 10 of Directive (EU) 2016/680. All other obligations of data controllers and rights of data subjects under Regulation (EU) 2016/679, Regulation (EU) 2018/1725 and Directive (EU) 2016/680 remain applicable. In particular, this Regulation should not provide a legal basis in the meaning of Article 22(2)(b) of Regulation (EU) 2016/679 and Article 24(2)(b) of Regulation (EU) 2018/1725. Participants in the sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to mitigate any high-risks to safety and fundamental rights that may arise during the development and experimentation in the sandbox. The conduct of the participants in the sandbox should be taken into account when competent authorities decide whether to impose an administrative fine under Article 83(2) of Regulation 2016/679 and Article 57 of Directive 2016/680. | This Regulation should provide the legal basis for the participants in the AI regulatory sandbox to use personal data collected for other purposes for developing certain AI systems in the public interest within the AI regulatory sandbox, in line with Article 6(4) and 9(2)(g) of Regulation (EU) 2016/679, and Article 5, 6 and 10 of Regulation (EU) 2018/1725, and without prejudice to Articles 4(2) and 10 of Directive (EU) 2016/680. All other obligations of data controllers and rights of data subjects under Regulation (EU) 2016/679, Regulation (EU) 2018/1725 and Directive (EU) 2016/680 remain applicable. This Regulation should not provide a legal basis in the meaning of Article 22(2)(b) of Regulation (EU) 2016/679 and Article 24(2)(b) of Regulation (EU) 2018/1725. 
Participants in the sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to mitigate any high-risks to safety, health, the environment and fundamental rights that may arise during the development and experimentation in the sandbox. The conduct of the participants in the sandbox should be taken into account when competent authorities decide over the temporary or permanent suspension of their participation in the sandbox or whether to impose an administrative fine under Article 83(2) of Regulation 2016/679 and Article 57 of Directive 2016/680. | |
Recital (72a) (new) [C] | (72a) In order to accelerate the process of development and placing on the market of high-risk AI systems listed in Annex III, it is important that providers or prospective providers of such systems may also benefit from a specific regime for testing those systems in real world conditions, without participating in an AI regulatory sandbox. However, in such cases and taking into account the possible consequences of such testing on individuals, it should be ensured that appropriate and sufficient guarantees and conditions are introduced by the Regulation for providers or prospective providers. Such guarantees should include, among others, requesting informed consent of natural persons to participate in testing in real world conditions, with the exception of law enforcement in cases where the seeking of informed consent would prevent the AI system from being tested. Consent of subjects to participate in such testing under this Regulation is distinct from and without prejudice to consent of data subjects for the processing of their personal data under the relevant data protection law. | In order to expedite the development and market introduction of high-risk AI systems listed in Annex III, it is crucial that providers or potential providers of these systems have the opportunity to test these systems in real-world conditions, without the necessity of participating in an AI regulatory sandbox. However, considering the potential impact of such testing on individuals, the Regulation should introduce appropriate and sufficient guarantees and conditions for providers or potential providers. These guarantees should encompass, among others, the requirement for informed consent from natural persons to participate in real-world testing, with the exception of law enforcement situations where seeking informed consent could hinder the testing of the AI system. The consent for participation in such testing under this Regulation is separate from and does not prejudice the consent of data subjects for the processing of their personal data under the applicable data protection law. | ||
Recital (72b) (new) | (72b) To ensure that Artificial Intelligence leads to socially and environmentally beneficial outcomes, Member States should support and promote research and development of AI in support of socially and environmentally beneficial outcomes by allocating sufficient resources, including public and Union funding, and giving priority access to regulatory sandboxes to projects led by civil society. Such projects should be based on the principle of interdisciplinary cooperation between AI developers, experts on inequality and non-discrimination, accessibility, consumer, environmental, and digital rights, as well as academics | To ensure the development of Artificial Intelligence (AI) that leads to socially and environmentally beneficial outcomes, it is proposed that Member States should actively support and promote research and development in this field. This can be achieved by allocating sufficient resources, including public and Union funding. Priority access to regulatory sandboxes should be given to projects led by civil society. These projects should be grounded on the principle of interdisciplinary cooperation. This cooperation should involve AI developers, experts on inequality and non-discrimination, accessibility, consumer, environmental, and digital rights, as well as academics. | ||
Recital (73) | (73) In order to promote and protect innovation, it is important that the interests of small-scale providers and users of AI systems are taken into particular account. To this objective, Member States should develop initiatives, which are targeted at those operators, including on awareness raising and information communication. Moreover, the specific interests and needs of small-scale providers shall be taken into account when Notified Bodies set conformity assessment fees. Translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, notably those of a smaller scale. Member States should possibly ensure that one of the languages determined and accepted by them for relevant providers’ documentation and for communication with operators is one which is broadly understood by the largest possible number of cross-border users. | (73) In order to promote and protect innovation, it is important that the interests of small-scale providers and users of AI systems are taken into particular account. To this objective, Member States should develop initiatives, which are targeted at those operators, including on AI literacy, awareness raising and information communication. Member States shall utilise existing channels and where appropriate, establish new dedicated channels for communication with SMEs, start-ups, user and other innovators to provide guidance and respond to queries about the implementation of this Regulation. Such existing channels could include but are not limited to ENISA’s Computer Security Incident Response Teams, National Data Protection Agencies, the AI-on demand platform, the European Digital Innovation Hubs and other relevant instruments funded by EU programmes as well as the Testing and Experimentation Facilities established by the Commission and the Member States at national or Union level. Where appropriate, these channels shall work together to create synergies and ensure homogeneity in their guidance to start-ups, SMEs and users. Moreover, the specific interests and needs of small-scale providers shall be taken into account when Notified Bodies set conformity assessment fees. The Commission shall regularly assess the certification and compliance costs for SMEs and start-ups, including through transparent consultations with SMEs, start-ups and users and shall work with Member States to lower such costs. For example, translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, notably those of a smaller scale. Member States should possibly ensure that one of the languages determined and accepted by them for relevant providers’ documentation and for communication with operators is one which is broadly understood by the largest possible number of cross-border users. Medium-sized enterprises which recently changed from the small to medium-size category within the meaning of the Annex to Recommendation 2003/361/EC (Article 16) shall have access to these initiatives and guidance for a period of time deemed appropriate by the Member States, as these new medium-sized enterprises may sometimes lack the legal resources and training necessary to ensure proper understanding and compliance with provisions. | (73) In order to promote and protect innovation, it is important that the interests of SME providers and users of AI systems are taken into particular account. 
To this objective, Member States should develop initiatives, which are targeted at those operators, including on awareness raising and information communication. Moreover, the specific interests and needs of SME providers shall be taken into account when notified bodies set conformity assessment fees. Translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, notably those of a smaller scale. Member States should possibly ensure that one of the languages determined and accepted by them for relevant providers’ documentation and for communication with operators is one which is broadly understood by the largest possible number of cross-border users. | (73) In order to promote and protect innovation, it is crucial that the interests of small-scale providers, SMEs, and users of AI systems are taken into particular account. To this objective, Member States should develop initiatives, which are targeted at those operators, including on AI literacy, awareness raising, and information communication. Member States shall utilise existing channels and where appropriate, establish new dedicated channels for communication with SMEs, start-ups, user and other innovators to provide guidance and respond to queries about the implementation of this Regulation. Such existing channels could include but are not limited to ENISA’s Computer Security Incident Response Teams, National Data Protection Agencies, the AI-on demand platform, the European Digital Innovation Hubs and other relevant instruments funded by EU programmes as well as the Testing and Experimentation Facilities established by the Commission and the Member States at national or Union level. Where appropriate, these channels shall work together to create synergies and ensure homogeneity in their guidance to start-ups, SMEs and users. Moreover, the specific interests and needs of small-scale providers and SMEs shall be taken into account when Notified Bodies set conformity assessment fees. The Commission shall regularly assess the certification and compliance costs for SMEs and start-ups, including through transparent consultations with SMEs, start-ups and users and shall work with Member States to lower such costs. For example, translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, notably those of a smaller scale. Member States should possibly ensure that one of the languages determined and accepted by them for relevant providers’ documentation and for communication with operators is one which is broadly understood by the largest possible number of cross-border users. Medium-sized enterprises which recently changed from the small to medium-size category within the meaning of the Annex to Recommendation 2003/361/EC (Article 16) shall have access to these initiatives and guidance for a period of time deemed appropriate by the Member States, as these new medium-sized enterprises may sometimes lack the legal resources and training necessary to ensure proper understanding and compliance with provisions. |
Recital (73a) (new) | (73a) In order to promote and protect innovation, the AI-on demand platform, all relevant EU funding programmes and projects, such as Digital Europe Programme, Horizon Europe, implemented by the Commission and the Member States at national or EU level should contribute to the achievement of the objectives of this Regulation. | To foster and safeguard innovation, it is proposed that the AI-on demand platform, along with all pertinent EU funding programmes and projects, including the Digital Europe Programme and Horizon Europe, implemented by the Commission and the Member States at both national and EU levels, should contribute towards the accomplishment of the objectives of this Regulation. | ||
Recital (74) | (74) In order to minimise the risks to implementation resulting from lack of knowledge and expertise in the market as well as to facilitate compliance of providers and notified bodies with their obligations under this Regulation, the AI-on demand platform, the European Digital Innovation Hubs and the Testing and Experimentation Facilities established by the Commission and the Member States at national or EU level should possibly contribute to the implementation of this Regulation. Within their respective mission and fields of competence, they may provide in particular technical and scientific support to providers and notified bodies. | (74) In order to minimise the risks to implementation resulting from lack of knowledge and expertise in the market as well as to facilitate compliance of providers and notified bodies with their obligations under this Regulation, the AI-on demand platform, the European Digital Innovation Hubs and the Testing and Experimentation Facilities established by the Commission and the Member States at national or EU level should contribute to the implementation of this Regulation. Within their respective mission and fields of competence, they may provide in particular technical and scientific support to providers and notified bodies. | (74) In particular, in order to minimise the risks to implementation resulting from lack of knowledge and expertise in the market as well as to facilitate compliance of providers, notably SMEs, and notified bodies with their obligations under this Regulation, the AI-on demand platform, the European Digital Innovation Hubs and the Testing and Experimentation Facilities established by the Commission and the Member States at national or EU level should possibly contribute to the implementation of this Regulation. Within their respective mission and fields of competence, they may provide in particular technical and scientific support to providers and notified bodies. | (74) In order to minimise the risks to implementation resulting from lack of knowledge and expertise in the market, particularly for SMEs, as well as to facilitate compliance of providers and notified bodies with their obligations under this Regulation, the AI-on demand platform, the European Digital Innovation Hubs and the Testing and Experimentation Facilities established by the Commission and the Member States at national or EU level should contribute to the implementation of this Regulation. Within their respective mission and fields of competence, they may provide in particular technical and scientific support to providers and notified bodies. |
Recital (74a) (new) | (74a) Moreover, in order to ensure proportionality considering the very small size of some operators regarding costs of innovation, it is appropriate to exempt microenterprises from the most costly obligations, such as to establish a quality management system which would reduce the administrative burden and the costs for those enterprises without affecting the level of protection and the need for compliance with the requirements for high-risk AI systems. | In order to ensure proportionality considering the small size of some operators in terms of innovation costs, it is proposed to exempt microenterprises from the most costly obligations, such as the establishment of a quality management system. This would reduce the administrative burden and costs for these enterprises, without compromising the level of protection and the need for compliance with the requirements for high-risk AI systems. | ||
Recital (75) | (75) It is appropriate that the Commission facilitates, to the extent possible, access to Testing and Experimentation Facilities to bodies, groups or laboratories established or accredited pursuant to any relevant Union harmonisation legislation and which fulfil tasks in the context of conformity assessment of products or devices covered by that Union harmonisation legislation. This is notably the case for expert panels, expert laboratories and reference laboratories in the field of medical devices pursuant to Regulation (EU) 2017/745 and Regulation (EU) 2017/746. | (75) It is appropriate that the Commission facilitates, to the extent possible, access to Testing and Experimentation Facilities to bodies, groups or laboratories established or accredited pursuant to any relevant Union harmonisation legislation and which fulfil tasks in the context of conformity assessment of products or devices covered by that Union harmonisation legislation. This is notably the case for expert panels, expert laboratories and reference laboratories in the field of medical devices pursuant to Regulation (EU) 2017/745 and Regulation (EU) 2017/746. | (75) It is appropriate that the Commission facilitates, to the extent possible, access to Testing and Experimentation Facilities to bodies, groups or laboratories established or accredited pursuant to any relevant Union harmonisation legislation and which fulfil tasks in the context of conformity assessment of products or devices covered by that Union harmonisation legislation. This is notably the case for expert panels, expert laboratories and reference laboratories in the field of medical devices pursuant to Regulation (EU) 2017/745 and Regulation (EU) 2017/746. | (75) It is appropriate that the Commission facilitates, to the extent possible, access to Testing and Experimentation Facilities to bodies, groups or laboratories established or accredited pursuant to any relevant Union harmonisation legislation and which fulfil tasks in the context of conformity assessment of products or devices covered by that Union harmonisation legislation. This is notably the case for expert panels, expert laboratories and reference laboratories in the field of medical devices pursuant to Regulation (EU) 2017/745 and Regulation (EU) 2017/746. |
Recital (76) | (76) In order to facilitate a smooth, effective and harmonised implementation of this Regulation a European Artificial Intelligence Board should be established. The Board should be responsible for a number of advisory tasks, including issuing opinions, recommendations, advice or guidance on matters related to the implementation of this Regulation, including on technical specifications or existing standards regarding the requirements established in this Regulation and providing advice to and assisting the Commission on specific questions related to artificial intelligence. | (76) In order to avoid fragmentation, to ensure the optimal functioning of the Single market, to ensure effective and harmonised implementation of this Regulation, to achieve a high level of trustworthiness and of protection of health and safety, fundamental rights, the environment, democracy and the rule of law across the Union with regards to AI systems, to actively support national supervisory authorities, Union institutions, bodies, offices and agencies in matters pertaining to this Regulation, and to increase the uptake of artificial intelligence throughout the Union, an European Union Artificial Intelligence Office should be established. The AI Office should have legal personality, should act in full independence, should be responsible for a number of advisory and coordination tasks, including issuing opinions, recommendations, advice or guidance on matters related to the implementation of this Regulation and should be adequately funded and staffed. Member States should provide the strategic direction and control of the AI Office through the management board of the AI Office, alongside the Commission, the EDPS, the FRA, and ENISA. An executive director should be responsible for managing the activities of the secretariat of the AI office and for representing the AI office. Stakeholders should formally participate in the work of the AI Office through an advisory forum that should ensure varied and balanced stakeholder representation and should advise the AI Office on matters pertaining to this Regulation. In case the establishment of the AI Office prove not to be sufficient to ensure a fully consistent application of this Regulation at Union level as well as efficient cross-border enforcement measures, the creation of an AI agency should be considered. | (76) In order to facilitate a smooth, effective and harmonised implementation of this Regulation a European Artificial Intelligence Board should be established. The Board should reflect the various interests of the AI eco-system and be composed of representatives of the Member States. In order to ensure the involvement of relevant stakeholders, a standing subgroup of the Board should be created. The Board should be responsible for a number of advisory tasks, including issuing opinions, recommendations, advice or contributing to guidance on matters related to the implementation of this Regulation, including on enforcement matters, technical specifications or existing standards regarding the requirements established in this Regulation and providing advice to the Commission and the Member States and their national competent authorities on specific questions related to artificial intelligence. 
In order to give some flexibility to Member States in the designation of their representatives in the AI Board, such representatives may be any persons belonging to public entities who should have the relevant competences and powers to facilitate coordination at national level and contribute to the achievement of the Board's tasks. The Board should establish two standing sub-groups to provide a platform for cooperation and exchange among market surveillance authorities and notifying authorities on issues related respectively to market surveillance and notified bodies. The standing subgroup for market surveillance should act as the Administrative Cooperation Group (ADCO) for this Regulation in the meaning of Article 30 of Regulation (EU) 2019/1020. In line with the role and tasks of the Commission pursuant to Article 33 of Regulation (EU) 2019/1020, the Commission should support the activities of the standing subgroup for market surveillance by undertaking market evaluations or studies, notably with a view to identifying aspects of this Regulation requiring specific and urgent coordination among market surveillance authorities. The Board may establish other standing or temporary sub-groups as appropriate for the purpose of examining specific issues. The Board should also cooperate, as appropriate, with relevant EU bodies, experts groups and networks active in the context of relevant EU legislation, including in particular those active under relevant EU regulation on data, digital products and services. | (76) In order to facilitate a smooth, effective, and harmonised implementation of this Regulation, to ensure the optimal functioning of the Single market, to achieve a high level of trustworthiness and of protection of health and safety, fundamental rights, the environment, democracy and the rule of law across the Union with regards to AI systems, and to increase the uptake of artificial intelligence throughout the Union, a European Artificial Intelligence Board, hereafter referred to as the AI Office, should be established. The AI Office should reflect the various interests of the AI eco-system and be composed of representatives of the Member States. It should have legal personality, act in full independence, and be adequately funded and staffed. The AI Office should be responsible for a number of advisory and coordination tasks, including issuing opinions, recommendations, advice or guidance on matters related to the implementation of this Regulation, including on enforcement matters, technical specifications or existing standards regarding the requirements established in this Regulation. It should provide advice to the Commission, the Member States and their national competent authorities on specific questions related to artificial intelligence. Member States should provide the strategic direction and control of the AI Office through the management board of the AI Office, alongside the Commission, the EDPS, the FRA, and ENISA. An executive director should be responsible for managing the activities of the secretariat of the AI office and for representing the AI office. In order to ensure the involvement of relevant stakeholders, a standing subgroup of the Board should be created. Stakeholders should formally participate in the work of the AI Office through an advisory forum that should ensure varied and balanced stakeholder representation and should advise the AI Office on matters pertaining to this Regulation. 
The Board should establish two standing sub-groups to provide a platform for cooperation and exchange among market surveillance authorities and notifying authorities on issues related respectively to market surveillance and notified bodies. The Board may establish other standing or temporary sub-groups as appropriate for the purpose of examining specific issues. The Board should also cooperate, as appropriate, with relevant EU bodies, expert groups and networks active in the context of relevant EU legislation, including in particular those active under relevant EU regulation on data, digital products and services. In case the establishment of the AI Office proves not to be sufficient to ensure a fully consistent application of this Regulation at Union level as well as efficient cross-border enforcement measures, the creation of an AI agency should be considered. |
Recital (76a) (new) | (76a) The Commission should actively support the Member States and operators in the implementation and enforcement of this Regulation. In this regard it should develop guidelines on particular topics aiming at facilitating the application of this Regulation, while paying particular attention to the needs of SMEs and start-ups in sectors most likely to be affected. In order to support adequate enforcement and the capacities of the Member States, Union testing facilities on AI and a pool of relevant experts should be established and made available to the Member States. | The Commission is encouraged to actively support the Member States and operators in the implementation and enforcement of this Regulation. This includes the development of guidelines on specific topics to facilitate the application of this Regulation, with a special focus on the needs of SMEs and start-ups in sectors most likely to be affected. To further aid in enforcement and bolster the capacities of the Member States, the establishment of Union testing facilities on AI and a pool of relevant experts should be considered, with these resources made available to the Member States. | ||
Recital (77) | (77) Member States hold a key role in the application and enforcement of this Regulation. In this respect, each Member State should designate one or more national competent authorities for the purpose of supervising the application and implementation of this Regulation. In order to increase organisation efficiency on the side of Member States and to set an official point of contact vis-à-vis the public and other counterparts at Member State and Union levels, in each Member State one national authority should be designated as national supervisory authority. | (77) Each Member State should designate a national supervisory authority for the purpose of supervising the application and implementation of this Regulation. It should also represent its Member State at the management board of the AI Office. In order to increase organisation efficiency on the side of Member States and to set an official point of contact vis-à-vis the public and other counterparts at Member State and Union levels. Each national supervisory authority should act with complete independence in performing its tasks and exercising its powers in accordance with this Regulation. | (77) Member States hold a key role in the application and enforcement of this Regulation. In this respect, each Member State should designate one or more national competent authorities for the purpose of supervising the application and implementation of this Regulation. Member States may decide to appoint any kind of public entity to perform the tasks of the national competent authorities within the meaning of this Regulation, in accordance with their specific national organisational characteristics and needs. | (77) Member States hold a key role in the application and enforcement of this Regulation. In this respect, each Member State should designate one or more national competent authorities, which may include any kind of public entity in accordance with their specific national organisational characteristics and needs, for the purpose of supervising the application and implementation of this Regulation. One such authority should be designated as the national supervisory authority, acting with complete independence in performing its tasks and exercising its powers in accordance with this Regulation. This authority should also represent its Member State at the management board of the AI Office. In order to increase organisation efficiency on the side of Member States and to set an official point of contact vis-à-vis the public and other counterparts at Member State and Union levels, it is necessary to establish such a national supervisory authority. |
Recital (77a) (new) | (77a) The national supervisory authorities should monitor the application of the provisions pursuant to this Regulation and contribute to its consistent application throughout the Union. For that purpose, the national supervisory authorities should cooperate with each other, with the relevant national competent authorities, the Commission, and with the AI Office. | The national supervisory authorities should oversee the implementation of the provisions in accordance with this Regulation and contribute to its uniform application across the Union. To achieve this, the national supervisory authorities should collaborate with each other, with the relevant national competent authorities, the Commission, and with the AI Office. | ||
Recital (77b) (new) | (77b) The member or the staff of each national supervisory authority should, in accordance with Union or national law, be subject to a duty of professional secrecy both during and after their term of office, with regard to any confidential information which has come to their knowledge in the course of the performance of their tasks or exercise of their powers. During their term of office, that duty of professional secrecy should in particular apply to trade secrets and to reporting by natural persons of infringements of this Regulation. | Each member or staff of the national supervisory authority should be obligated, in accordance with Union or national law, to maintain professional secrecy both during and after their term of office. This pertains to any confidential information they have acquired in the course of performing their tasks or exercising their powers. This duty of professional secrecy should specifically apply to trade secrets and to the reporting of infringements of this Regulation by natural persons. | ||
Recital (78) | (78) In order to ensure that providers of high-risk AI systems can take into account the experience on the use of high-risk AI systems for improving their systems and the design and development process or can take any possible corrective action in a timely manner, all providers should have a post-market monitoring system in place. This system is also key to ensure that the possible risks emerging from AI systems which continue to ‘learn’ after being placed on the market or put into service can be more efficiently and timely addressed. In this context, providers should also be required to have a system in place to report to the relevant authorities any serious incidents or any breaches to national and Union law protecting fundamental rights resulting from the use of their AI systems. | (78) In order to ensure that providers of high-risk AI systems can take into account the experience on the use of high-risk AI systems for improving their systems and the design and development process or can take any possible corrective action in a timely manner, all providers should have a post-market monitoring system in place. This system is also key to ensure that the possible risks emerging from AI systems which continue to ‘learn’ or evolve after being placed on the market or put into service can be more efficiently and timely addressed. In this context, providers should also be required to have a system in place to report to the relevant authorities any serious incidents or any breaches to national and Union law, including those protecting fundamental rights and consumer rights resulting from the use of their AI systems and take appropriate corrective actions. Deployers should also report to the relevant authorities, any serious incidents or breaches to national and Union law resulting from the use of their AI system when they become aware of such serious incidents or breaches. | (78) In order to ensure that providers of high-risk AI systems can take into account the experience on the use of high-risk AI systems for improving their systems and the design and development process or can take any possible corrective action in a timely manner, all providers should have a post-market monitoring system in place. This system is also key to ensure that the possible risks emerging from AI systems which continue to ‘learn’ after being placed on the market or put into service can be more efficiently and timely addressed. In this context, providers should also be required to have a system in place to report to the relevant authorities any serious incidents resulting from the use of their AI systems. | (78) In order to ensure that providers of high-risk AI systems can take into account the experience on the use of high-risk AI systems for improving their systems and the design and development process or can take any possible corrective action in a timely manner, all providers should have a post-market monitoring system in place. This system is also key to ensure that the possible risks emerging from AI systems which continue to ‘learn’ or evolve after being placed on the market or put into service can be more efficiently and timely addressed. In this context, providers should also be required to have a system in place to report to the relevant authorities any serious incidents or any breaches to national and Union law, including those protecting fundamental rights and consumer rights resulting from the use of their AI systems and take appropriate corrective actions. 
Providers should also report any serious incidents resulting from the use of their AI systems. Deployers should also report to the relevant authorities any serious incidents or breaches to national and Union law resulting from the use of their AI system when they become aware of such serious incidents or breaches. |
Recital (79) | (79) In order to ensure an appropriate and effective enforcement of the requirements and obligations set out by this Regulation, which is Union harmonisation legislation, the system of market surveillance and compliance of products established by Regulation (EU) 2019/1020 should apply in its entirety. Where necessary for their mandate, national public authorities or bodies, which supervise the application of Union law protecting fundamental rights, including equality bodies, should also have access to any documentation created under this Regulation. | (79) In order to ensure an appropriate and effective enforcement of the requirements and obligations set out by this Regulation, which is Union harmonisation legislation, the system of market surveillance and compliance of products established by Regulation (EU) 2019/1020 should apply in its entirety. For the purpose of this Regulation, national supervisory authorities should act as market surveillance authorities for AI systems covered by this Regulation except for AI systems covered by Annex II of this Regulation. For AI systems covered by legal acts listed in the Annex II, the competent authorities under those legal acts should remain the lead authority. National supervisory authorities and competent authorities in the legal acts listed in Annex II should work together whenever necessary. When appropriate, the competent authorities in the legal acts listed in Annex II should send competent staff to the national supervisory authority in order to assist in the performance of its tasks. For the purpose of this Regulation, national supervisory authorities should have the same powers and obligations as market surveillance authorities under Regulation (EU) 2019/1020. Where necessary for their mandate, national public authorities or bodies, which supervise the application of Union law protecting fundamental rights, including equality bodies, should also have access to any documentation created under this Regulation. After having exhausted all other reasonable ways to assess/verify the conformity and upon a reasoned request, the national supervisory authority should be granted access to the training, validation and testing datasets, the trained and training model of the high-risk AI system, including its relevant model parameters and their execution/run environment. In cases of simpler software systems falling under this Regulation that are not based on trained models, and where all other ways to verify conformity have been exhausted, the national supervisory authority may exceptionally have access to the source code, upon a reasoned request. Where the national supervisory authority has been granted access to the training, validation and testing datasets in accordance with this Regulation, such access should be achieved through appropriate technical means and tools, including on site access and in exceptional circumstances, remote access. The national supervisory authority should treat any information, including source code, software, and data as applicable, obtained as confidential information and respect relevant Union law on the protection of intellectual property and trade secrets. The national supervisory authority should delete any information obtained upon the completion of the investigation.
| (79) In order to ensure an appropriate and effective enforcement of the requirements and obligations set out by this Regulation, which is Union harmonisation legislation, the system of market surveillance and compliance of products established by Regulation (EU) 2019/1020 should apply in its entirety. Market surveillance authorities designated pursuant to this Regulation should have all enforcement powers under this Regulation and Regulation (EU) 2019/1020 and should exercise their powers and carry out their duties independently, impartially and without bias. Although the majority of AI systems are not subject to specific requirements and obligations under this Regulation, market surveillance authorities may take measures in relation to all AI systems when they present a risk in accordance with this Regulation. Due to the specific nature of Union institutions, agencies and bodies falling within the scope of this Regulation, it is appropriate to designate the European Data Protection Supervisor as a competent market surveillance authority for them. This should be without prejudice to the designation of national competent authorities by the Member States. Market surveillance activities should not affect the ability of the supervised entities to carry out their tasks independently, when such independence is required by Union law. | (79) In order to ensure an appropriate and effective enforcement of the requirements and obligations set out by this Regulation, which is Union harmonisation legislation, the system of market surveillance and compliance of products established by Regulation (EU) 2019/1020 should apply in its entirety. National supervisory authorities should act as market surveillance authorities for AI systems covered by this Regulation except for AI systems covered by Annex II of this Regulation. For AI systems covered by legal acts listed in the Annex II, the competent authorities under those legal acts should remain the lead authority. National supervisory authorities and competent authorities in the legal acts listed in Annex II should work together whenever necessary. Market surveillance authorities designated pursuant to this Regulation should have all enforcement powers under this Regulation and Regulation (EU) 2019/1020 and should exercise their powers and carry out their duties independently, impartially and without bias. They may take measures in relation to all AI systems when they present a risk in accordance with this Regulation. The European Data Protection Supervisor should be designated as a competent market surveillance authority for Union institutions, agencies and bodies falling within the scope of this Regulation, without prejudice to the designation of national competent authorities by the Member States. Where necessary for their mandate, national public authorities or bodies, which supervise the application of Union law protecting fundamental rights, including equality bodies, should also have access to any documentation created under this Regulation. After having exhausted all other reasonable ways to assess/verify the conformity and upon a reasoned request, the national supervisory authority should be granted access to the training, validation and testing datasets, the trained and training model of the high-risk AI system, including its relevant model parameters and their execution/run environment.
In cases of simpler software systems falling under this Regulation that are not based on trained models, and where all other ways to verify conformity have been exhausted, the national supervisory authority may exceptionally have access to the source code, upon a reasoned request. The national supervisory authority should treat any information, including source code, software, and data as applicable, obtained as confidential information and respect relevant Union law on the protection of intellectual property and trade secrets. The national supervisory authority should delete any information obtained upon the completion of the investigation. Market surveillance activities should not affect the ability of the supervised entities to carry out their tasks independently, when such independence is required by Union law. |
Recital (79a) (new) | (79a) This Regulation is without prejudice to the competences, tasks, powers and independence of relevant national public authorities or bodies which supervise the application of Union law protecting fundamental rights, including equality bodies and data protection authorities. Where necessary for their mandate, those national public authorities or bodies should also have access to any documentation created under this Regulation. A specific safeguard procedure should be set for ensuring adequate and timely enforcement against AI systems presenting a risk to health, safety and fundamental rights. The procedure for such AI systems presenting a risk should be applied to high-risk AI systems presenting a risk, prohibited systems which have been placed on the market, put into service or used in violation of the prohibited practices laid down in this Regulation and AI systems which have been made available in violation of the transparency requirements laid down in this Regulation and present a risk. | This Regulation respects the competences, tasks, powers and independence of relevant national public authorities or bodies supervising the application of Union law protecting fundamental rights, including equality bodies and data protection authorities. These national public authorities or bodies should have access to any documentation created under this Regulation, as necessary for their mandate. A specific safeguard procedure should be established to ensure adequate and timely enforcement against AI systems presenting a risk to health, safety and fundamental rights. This procedure should be applied to high-risk AI systems, prohibited systems which have been placed on the market, put into service or used in violation of the prohibited practices laid down in this Regulation, and AI systems which have been made available in violation of the transparency requirements laid down in this Regulation and present a risk. | ||
Recital (80) | (80) Union legislation on financial services includes internal governance and risk management rules and requirements which are applicable to regulated financial institutions in the course of provision of those services, including when they make use of AI systems. In order to ensure coherent application and enforcement of the obligations under this Regulation and relevant rules and requirements of the Union financial services legislation, the authorities responsible for the supervision and enforcement of the financial services legislation, including where applicable the European Central Bank, should be designated as competent authorities for the purpose of supervising the implementation of this Regulation, including for market surveillance activities, as regards AI systems provided or used by regulated and supervised financial institutions. To further enhance the consistency between this Regulation and the rules applicable to credit institutions regulated under Directive 2013/36/EU of the European Parliament and of the Council56 , it is also appropriate to integrate the conformity assessment procedure and some of the providers’ procedural obligations in relation to risk management, post marketing monitoring and documentation into the existing obligations and procedures under Directive 2013/36/EU. In order to avoid overlaps, limited derogations should also be envisaged in relation to the quality management system of providers and the monitoring obligation placed on users of high-risk AI systems to the extent that these apply to credit institutions regulated by Directive 2013/36/EU. –––– 56. Directive 2013/36/EU of the European Parliament and of the Council of 26 June 2013 on access to the activity of credit institutions and the prudential supervision of credit institutions and investment firms, amending Directive 2002/87/EC and repealing Directives 2006/48/EC and 2006/49/EC (OJ L 176, 27.6.2013, p. 338). | (80) Union law on financial services includes internal governance and risk management rules and requirements which are applicable to regulated financial institutions in the course of provision of those services, including when they make use of AI systems. In order to ensure coherent application and enforcement of the obligations under this Regulation and relevant rules and requirements of the Union financial services law, the competent authorities responsible for the supervision and enforcement of the financial services law, including where applicable the European Central Bank, should be designated as competent authorities for the purpose of supervising the implementation of this Regulation, including for market surveillance activities, as regards AI systems provided or used by regulated and supervised financial institutions. To further enhance the consistency between this Regulation and the rules applicable to credit institutions regulated under Directive 2013/36/EU of the European Parliament and of the Council56 , it is also appropriate to integrate the conformity assessment procedure and some of the providers’ procedural obligations in relation to risk management, post marketing monitoring and documentation into the existing obligations and procedures under Directive 2013/36/EU. In order to avoid overlaps, limited derogations should also be envisaged in relation to the quality management system of providers and the monitoring obligation placed on deployers of high-risk AI systems to the extent that these apply to credit institutions regulated by Directive 2013/36/EU. 
–––– 56. Directive 2013/36/EU of the European Parliament and of the Council of 26 June 2013 on access to the activity of credit institutions and the prudential supervision of credit institutions and investment firms, amending Directive 2002/87/EC and repealing Directives 2006/48/EC and 2006/49/EC (OJ L 176, 27.6.2013, p. 338). | (80) Union legislation on financial services includes internal governance and risk management rules and requirements which are applicable to regulated financial institutions in the course of provision of those services, including when they make use of AI systems. In order to ensure coherent application and enforcement of the obligations under this Regulation and relevant rules and requirements of the Union financial services legislation, the authorities responsible for the supervision and enforcement of the financial services legislation should be designated as competent authorities for the purpose of supervising the implementation of this Regulation, including for market surveillance activities, as regards AI systems provided or used by regulated and supervised financial institutions unless Member States decide to designate another authority to fulfill these market surveillance tasks. Those competent authorities should have all powers under this Regulation and Regulation (EU) 2019/1020 on market surveillance to enforce the requirements and obligations of this Regulation, including powers to carry out ex post market surveillance activities that can be integrated, as appropriate, into their existing supervisory mechanisms and procedures under the relevant Union financial services legislation. It is appropriate to envisage that, when acting as market surveillance authorities under this Regulation, the national authorities responsible for the supervision of credit institutions regulated under Directive 2013/36/EU, which are participating in the Single Supervisory Mechanism (SSM) established by Council Regulation No 1024/2013, should report, without delay, to the European Central Bank any information identified in the course of their market surveillance activities that may be of potential interest for the European Central Bank’s prudential supervisory tasks as specified in that Regulation. To further enhance the consistency between this Regulation and the rules applicable to credit institutions regulated under Directive 2013/36/EU of the European Parliament and of the Council27, it is also appropriate to integrate some of the providers’ procedural obligations in relation to risk management, post marketing monitoring and documentation into the existing obligations and procedures under Directive 2013/36/EU. In order to avoid overlaps, limited derogations should also be envisaged in relation to the quality management system of providers and the monitoring obligation placed on users of high-risk AI systems to the extent that these apply to credit institutions regulated by Directive 2013/36/EU. The same regime should apply to insurance and re-insurance undertakings and insurance holding companies under Directive 2009/138/EU (Solvency II) and the insurance intermediaries under Directive 2016/97/EU and other types of financial institutions subject to requirements regarding internal governance, arrangements or processes established pursuant to the relevant Union financial services legislation to ensure consistency and equal treatment in the financial sector. –––– 27.
Directive 2013/36/EU of the European Parliament and of the Council of 26 June 2013 on access to the activity of credit institutions and the prudential supervision of credit institutions and investment firms, amending Directive 2002/87/EC and repealing Directives 2006/48/EC and 2006/49/EC (OJ L 176, 27.6.2013, p. 338). | (80) Union legislation on financial services includes internal governance and risk management rules and requirements which are applicable to regulated financial institutions in the course of provision of those services, including when they make use of AI systems. In order to ensure coherent application and enforcement of the obligations under this Regulation and relevant rules and requirements of the Union financial services legislation, the authorities responsible for the supervision and enforcement of the financial services legislation, including where applicable the European Central Bank, should be designated as competent authorities for the purpose of supervising the implementation of this Regulation, including for market surveillance activities, as regards AI systems provided or used by regulated and supervised financial institutions unless Member States decide to designate another authority to fulfill these market surveillance tasks. Those competent authorities should have all powers under this Regulation and Regulation (EU) 2019/1020 on market surveillance to enforce the requirements and obligations of this Regulation, including powers to carry out ex post market surveillance activities that can be integrated, as appropriate, into their existing supervisory mechanisms and procedures under the relevant Union financial services legislation. To further enhance the consistency between this Regulation and the rules applicable to credit institutions regulated under Directive 2013/36/EU of the European Parliament and of the Council, it is also appropriate to integrate the conformity assessment procedure and some of the providers’ procedural obligations in relation to risk management, post marketing monitoring and documentation into the existing obligations and procedures under Directive 2013/36/EU. In order to avoid overlaps, limited derogations should also be envisaged in relation to the quality management system of providers and the monitoring obligation placed on users of high-risk AI systems to the extent that these apply to credit institutions regulated by Directive 2013/36/EU. The same regime should apply to insurance and re-insurance undertakings and insurance holding companies under Directive 2009/138/EU (Solvency II) and the insurance intermediaries under Directive 2016/97/EU and other types of financial institutions subject to requirements regarding internal governance, arrangements or processes established pursuant to the relevant Union financial services legislation to ensure consistency and equal treatment in the financial sector. |
Recital (80a) (new) | (80a) Given the objectives of this Regulation, namely to ensure an equivalent level of protection of health, safety and fundamental rights of natural persons, to ensure the protection of the rule of law and democracy, and taking into account that the mitigation of the risks of AI system against such rights may not be sufficiently achieved at national level or may be subject to diverging interpretation which could ultimately lead to an uneven level of protection of natural persons and create market fragmentation, the national supervisory authorities should be empowered to conduct joint investigations or rely on the union safeguard procedure provided for in this Regulation for effective enforcement. Joint investigations should be initiated where the national supervisory authority have sufficient reasons to believe that an infringement of this Regulation amount to a widespread infringement or a widespread infringement with a Union dimension, or where the AI system or foundation model presents a risk which affects or is likely to affect at least 45 million individuals in more than one Member State. | Given the objectives of this Regulation, which are to ensure an equivalent level of protection of health, safety, and fundamental rights of natural persons, as well as the protection of the rule of law and democracy, it is necessary to empower national supervisory authorities to conduct joint investigations or rely on the union safeguard procedure provided for in this Regulation for effective enforcement. This is due to the potential insufficiency of risk mitigation at the national level and the possibility of diverging interpretations leading to an uneven level of protection and market fragmentation. Joint investigations should be initiated where the national supervisory authority has sufficient reasons to believe that an infringement of this Regulation amounts to a widespread infringement or a widespread infringement with a Union dimension, or where the AI system or foundation model presents a risk which affects or is likely to affect at least 45 million individuals in more than one Member State. | ||
Recital (81) | (81) The development of AI systems other than high-risk AI systems in accordance with the requirements of this Regulation may lead to a larger uptake of trustworthy artificial intelligence in the Union. Providers of non-high-risk AI systems should be encouraged to create codes of conduct intended to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems. Providers should also be encouraged to apply on a voluntary basis additional requirements related, for example, to environmental sustainability, accessibility to persons with disability, stakeholders’ participation in the design and development of AI systems, and diversity of the development teams. The Commission may develop initiatives, including of a sectorial nature, to facilitate the lowering of technical barriers hindering cross-border exchange of data for AI development, including on data access infrastructure, semantic and technical interoperability of different types of data. | (81) The development of AI systems other than high-risk AI systems in accordance with the requirements of this Regulation may lead to a larger uptake of trustworthy artificial intelligence in the Union. Providers of non-high-risk AI systems should be encouraged to create codes of conduct intended to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems. Providers should also be encouraged to apply on a voluntary basis additional requirements related, for example, to environmental sustainability, accessibility to persons with disability, stakeholders’ participation in the design and development of AI systems, and diversity of the development teams. The Commission may develop initiatives, including of a sectorial nature, to facilitate the lowering of technical barriers hindering cross-border exchange of data for AI development, including on data access infrastructure, semantic and technical interoperability of different types of data. | (81) The development of AI systems other than high-risk AI systems in accordance with the requirements of this Regulation may lead to a larger uptake of trustworthy artificial intelligence in the Union. Providers of non-high-risk AI systems should be encouraged to create codes of conduct intended to foster the voluntary application of the requirements applicable to high-risk AI systems, adapted in light of the intended purpose of the systems and the lower risk involved. Providers should also be encouraged to apply on a voluntary basis additional requirements related, for example, to environmental sustainability, accessibility to persons with disability, stakeholders’ participation in the design and development of AI systems, and diversity of the development teams. The Commission may develop initiatives, including of a sectorial nature, to facilitate the lowering of technical barriers hindering cross-border exchange of data for AI development, including on data access infrastructure, semantic and technical interoperability of different types of data. | (81) The development of AI systems other than high-risk AI systems, in accordance with the requirements of this Regulation, may lead to a larger uptake of trustworthy artificial intelligence in the Union. Providers of non-high-risk AI systems should be encouraged to create codes of conduct intended to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems, adapted in light of the intended purpose of the systems and the lower risk involved.
Providers should also be encouraged to apply on a voluntary basis additional requirements related, for example, to environmental sustainability, accessibility to persons with disability, stakeholders’ participation in the design and development of AI systems, and diversity of the development teams. The Commission may develop initiatives, including of a sectorial nature, to facilitate the lowering of technical barriers hindering cross-border exchange of data for AI development, including on data access infrastructure, semantic and technical interoperability of different types of data. |
Recital (82) | (82) It is important that AI systems related to products that are not high-risk in accordance with this Regulation and thus are not required to comply with the requirements set out herein are nevertheless safe when placed on the market or put into service. To contribute to this objective, the Directive 2001/95/EC of the European Parliament and of the Council57 would apply as a safety net. ––––– 57. Directive 2001/95/EC of the European Parliament and of the Council of 3 December 2001 on general product safety (OJ L 11, 15.1.2002, p. 4). | (82) It is important that AI systems related to products that are not high-risk in accordance with this Regulation and thus are not required to comply with the requirements set out for high-risk AI systems are nevertheless safe when placed on the market or put into service. To contribute to this objective, the Directive 2001/95/EC of the European Parliament and of the Council57 would apply as a safety net. ––––– 57. Directive 2001/95/EC of the European Parliament and of the Council of 3 December 2001 on general product safety (OJ L 11, 15.1.2002, p. 4). | (82) It is important that AI systems related to products that are not high-risk in accordance with this Regulation and thus are not required to comply with the requirements set out herein are nevertheless safe when placed on the market or put into service. To contribute to this objective, the Directive 2001/95/EC of the European Parliament and of the Council28 would apply as a safety net. ––––– 28. Directive 2001/95/EC of the European Parliament and of the Council of 3 December 2001 on general product safety (OJ L 11, 15.1.2002, p. 4). | (82) It is important that AI systems related to products that are not high-risk in accordance with this Regulation and thus are not required to comply with the requirements set out herein for high-risk AI systems are nevertheless safe when placed on the market or put into service. To contribute to this objective, the Directive 2001/95/EC of the European Parliament and of the Council would apply as a safety net. ––––– Directive 2001/95/EC of the European Parliament and of the Council of 3 December 2001 on general product safety (OJ L 11, 15.1.2002, p. 4). |
Recital (83) | (83) In order to ensure trustful and constructive cooperation of competent authorities on Union and national level, all parties involved in the application of this Regulation should respect the confidentiality of information and data obtained in carrying out their tasks. | (83) In order to ensure trustful and constructive cooperation of competent authorities on Union and national level, all parties involved in the application of this Regulation should aim for transparency and openness while respecting the confidentiality of information and data obtained in carrying out their tasks by putting in place technical and organisational measures to protect the security and confidentiality of the information obtained carrying out their activities including for intellectual property rights and public and national security interests. Where the activities of the Commission, national competent authorities and notified bodies pursuant to this Regulation results in a breach of intellectual property rights, Member States should provide for adequate measures and remedies to ensure the enforcement of intellectual property rights in application of Directive 2004/48/EC. | (83) In order to ensure trustful and constructive cooperation of competent authorities on Union and national level, all parties involved in the application of this Regulation should respect the confidentiality of information and data obtained in carrying out their tasks, in accordance with Union or national law. | (83) In order to ensure trustful and constructive cooperation of competent authorities on Union and national level, all parties involved in the application of this Regulation should aim for transparency and openness, while respecting the confidentiality of information and data obtained in carrying out their tasks. This should be in accordance with Union or national law and should involve putting in place technical and organisational measures to protect the security and confidentiality of the information obtained in carrying out their activities, including for intellectual property rights and public and national security interests. Where the activities of the Commission, national competent authorities and notified bodies pursuant to this Regulation result in a breach of intellectual property rights, Member States should provide for adequate measures and remedies to ensure the enforcement of intellectual property rights in application of Directive 2004/48/EC. |
Recital (84) | (84) Member States should take all necessary measures to ensure that the provisions of this Regulation are implemented, including by laying down effective, proportionate and dissuasive penalties for their infringement. For certain specific infringements, Member States should take into account the margins and criteria set out in this Regulation. The European Data Protection Supervisor should have the power to impose fines on Union institutions, agencies and bodies falling within the scope of this Regulation. | (84) Compliance with this Regulation should be enforceable by means of the imposition of fines by the national supervisory authority when carrying out proceedings under the procedure laid down in this Regulation. Member States should take all necessary measures to ensure that the provisions of this Regulation are implemented, including by laying down effective, proportionate and dissuasive penalties for their infringement. In order to strengthen and harmonise administrative penalties for infringement of this Regulation, the upper limits for setting the administrative fines for certain specific infringements should be laid down. When assessing the amount of the fines, national competent authorities should, in each individual case, take into account all relevant circumstances of the specific situation, with due regard in particular to the nature, gravity and duration of the infringement and of its consequences and to the provider’s size, in particular if the provider is a SME or a start-up. The European Data Protection Supervisor should have the power to impose fines on Union institutions, agencies and bodies falling within the scope of this Regulation. The penalties and litigation costs under this Regulation should not be subject to contractual clauses or any other arrangements. | (84) Member States should take all necessary measures to ensure that the provisions of this Regulation are implemented, including by laying down effective, proportionate and dissuasive penalties for their infringement, and in respect of the ne bis in idem principle. For certain specific infringements, Member States should take into account the margins and criteria set out in this Regulation. The European Data Protection Supervisor should have the power to impose fines on Union institutions, agencies and bodies falling within the scope of this Regulation. | (84) Member States should take all necessary measures to ensure that the provisions of this Regulation are implemented, including by laying down effective, proportionate and dissuasive penalties for their infringement, and in respect of the ne bis in idem principle. Compliance with this Regulation should be enforceable by means of the imposition of fines by the national supervisory authority when carrying out proceedings under the procedure laid down in this Regulation. For certain specific infringements, Member States should take into account the margins and criteria set out in this Regulation. In order to strengthen and harmonise administrative penalties for infringement of this Regulation, the upper limits for setting the administrative fines for certain specific infringements should be laid down.
When assessing the amount of the fines, national competent authorities should, in each individual case, take into account all relevant circumstances of the specific situation, with due regard in particular to the nature, gravity and duration of the infringement and of its consequences and to the provider’s size, in particular if the provider is an SME or a start-up. The European Data Protection Supervisor should have the power to impose fines on Union institutions, agencies and bodies falling within the scope of this Regulation. The penalties and litigation costs under this Regulation should not be subject to contractual clauses or any other arrangements. |
Recital (84a) (new) | (84a) As the rights and freedoms of natural and legal persons and groups of natural persons can be seriously undermined by AI systems, it is essential that natural and legal persons or groups of natural persons have meaningful access to reporting and redress mechanisms and to be entitled to access proportionate and effective remedies. They should be able to report infringements of this Regulation to their national supervisory authority and have the right to lodge a complaint against the providers or deployers of AI systems. Where applicable, deployers should provide internal complaints mechanisms to be used by natural and legal persons or groups of natural persons. Without prejudice to any other administrative or non-judicial remedy, natural and legal persons and groups of natural persons should also have the right to an effective judicial remedy with regard to a legally binding decision of a national supervisory authority concerning them or, where the national supervisory authority does not handle a complaint, does not inform the complainant of the progress or preliminary outcome of the complaint lodged or does not comply with its obligation to reach a final decision, with regard to the complaint. | Natural and legal persons, as well as groups of natural persons, whose rights and freedoms can be significantly impacted by AI systems, should have access to meaningful reporting and redress mechanisms. They should be entitled to proportionate and effective remedies, including the ability to report infringements of this Regulation to their national supervisory authority. They should also have the right to lodge a complaint against the providers or deployers of AI systems. Where applicable, deployers should establish internal complaints mechanisms for use by natural and legal persons or groups of natural persons. In addition to any other administrative or non-judicial remedy, these individuals and groups should have the right to an effective judicial remedy concerning a legally binding decision of a national supervisory authority that affects them. This also applies in cases where the national supervisory authority does not handle a complaint, does not inform the complainant of the progress or preliminary outcome of the complaint lodged, or does not comply with its obligation to reach a final decision on the complaint. | ||
Recital (84b) (new) | (84b) Affected persons should always be informed that they are subject to the use of a high-risk AI system, when deployers use a high-risk AI system to assist in decision-making or make decisions related to natural persons. This information can provide a basis for affected persons to exercise their right to an explanation under this Regulation. When deployers provide an explanation to affected persons under this Regulation, they should take into account the level of expertise and knowledge of the average consumer or individual. | When deployers use a high-risk AI system for decision-making or decisions related to natural persons, it is imperative that the affected persons are always informed of their involvement with such a system. This information serves as a foundation for the affected persons to exercise their right to an explanation under this Regulation. In providing an explanation, deployers should consider the level of expertise and knowledge of the average consumer or individual. | ||
Recital (84c) (new) | (84c) Union law on the protection of whistleblowers (Directive (EU) 2019/1937) has full application to academics, designers, developers, project contributors, auditors, product managers, engineers and economic operators acquiring information on breaches of Union law by a provider of AI system or its AI system. | The Directive (EU) 2019/1937 on the protection of whistleblowers fully applies to academics, designers, developers, project contributors, auditors, product managers, engineers and economic operators who acquire information on breaches of Union law by a provider of AI system or its AI system. | ||
Recital (85) | (85) In order to ensure that the regulatory framework can be adapted where necessary, the power to adopt acts in accordance with Article 290 TFEU should be delegated to the Commission to amend the techniques and approaches referred to in Annex I to define AI systems, the Union harmonisation legislation listed in Annex II, the high-risk AI systems listed in Annex III, the provisions regarding technical documentation listed in Annex IV, the content of the EU declaration of conformity in Annex V, the provisions regarding the conformity assessment procedures in Annex VI and VII and the provisions establishing the high-risk AI systems to which the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation should apply. It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making58 . In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council receive all documents at the same time as Member States’ experts, and their experts systematically have access to meetings of Commission expert groups dealing with the preparation of delegated acts. ––– 58. OJ L 123, 12.5.2016, p. 1. | (85) In order to ensure that the regulatory framework can be adapted where necessary, the power to adopt acts in accordance with Article 290 TFEU should be delegated to the Commission to amend the Union harmonisation legislation listed in Annex II, the high-risk AI systems listed in Annex III, the provisions regarding technical documentation listed in Annex IV, the content of the EU declaration of conformity in Annex V, the provisions regarding the conformity assessment procedures in Annex VI and VII and the provisions establishing the high-risk AI systems to which the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation should apply. It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making58. These consultations should involve the participation of a balanced selection of stakeholders, including consumer organisations, civil society, associations representing affected persons, businesses representatives from different sectors and sizes, as well as researchers and scientists. In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council receive all documents at the same time as Member States’ experts, and their experts systematically have access to meetings of Commission expert groups dealing with the preparation of delegated acts. ––– 58. OJ L 123, 12.5.2016, p. 1. 
| (85) In order to ensure that the regulatory framework can be adapted where necessary, the power to adopt acts in accordance with Article 290 TFEU should be delegated to the Commission to amend the Union harmonisation legislation listed in Annex II, the high-risk AI systems listed in Annex III, the provisions regarding technical documentation listed in Annex IV, the content of the EU declaration of conformity in Annex V, the provisions regarding the conformity assessment procedures in Annex VI and VII and the provisions establishing the high-risk AI systems to which the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation should apply. It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making29. In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council receive all documents at the same time as Member States’ experts, and their experts systematically have access to meetings of Commission expert groups dealing with the preparation of delegated acts. Such consultations and advisory support should also be carried out in the framework of the activities of the AI Board and its subgroups. ––– 29. OJ L 123, 12.5.2016, p. 1. | (85) In order to ensure that the regulatory framework can be adapted where necessary, the power to adopt acts in accordance with Article 290 TFEU should be delegated to the Commission to amend the Union harmonisation legislation listed in Annex II, the high-risk AI systems listed in Annex III, the provisions regarding technical documentation listed in Annex IV, the content of the EU declaration of conformity in Annex V, the provisions regarding the conformity assessment procedures in Annex VI and VII and the provisions establishing the high-risk AI systems to which the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation should apply. It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making. These consultations should involve the participation of a balanced selection of stakeholders, including consumer organisations, civil society, associations representing affected persons, businesses representatives from different sectors and sizes, as well as researchers and scientists. In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council receive all documents at the same time as Member States’ experts, and their experts systematically have access to meetings of Commission expert groups dealing with the preparation of delegated acts. Such consultations and advisory support should also be carried out in the framework of the activities of the AI Board and its subgroups. ––– OJ L 123, 12.5.2016, p. 1. |
Recital (85a) (new) | (85a) Given the rapid technological developments and the required technical expertise in conducting the assessment of high-risk AI systems, the Commission should regularly review the implementation of this Regulation, in particular the prohibited AI systems, the transparency obligations and the list of high-risk areas and use cases, at least every year, while consulting the AI office and the relevant stakeholders. | Given the rapid technological advancements and the necessary technical expertise in evaluating high-risk AI systems, it is proposed that the Commission should conduct a regular review of the implementation of this Regulation. This review should particularly focus on the prohibited AI systems, the transparency obligations, and the list of high-risk areas and use cases. This review should be conducted at least annually, with the Commission consulting the AI office and relevant stakeholders during the process. | ||
Recital (86) | (86) In order to ensure uniform conditions for the implementation of this Regulation, implementing powers should be conferred on the Commission. Those powers should be exercised in accordance with Regulation (EU) No 182/2011 of the European Parliament and of the Council.59 ––––––– 59. Regulation (EU) No 182/2011 of the European Parliament and of the Council of 16 February 2011 laying down the rules and general principles concerning mechanisms for control by the Member States of the Commission's exercise of implementing powers (OJ L 55, 28.2.2011, p.13). | (86) In order to ensure uniform conditions for the implementation of this Regulation, implementing powers should be conferred on the Commission. Those powers should be exercised in accordance with Regulation (EU) No 182/2011 of the European Parliament and of the Council.59 ––––––– 59. Regulation (EU) No 182/2011 of the European Parliament and of the Council of 16 February 2011 laying down the rules and general principles concerning mechanisms for control by the Member States of the Commission's exercise of implementing powers (OJ L 55, 28.2.2011, p.13). | (86) In order to ensure uniform conditions for the implementation of this Regulation, implementing powers should be conferred on the Commission. Those powers should be exercised in accordance with Regulation (EU) No 182/2011 of the European Parliament and of the Council30. It is of particular importance that, in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making, whenever broader expertise is needed in the early preparation of draft implementing acts, the Commission makes use of expert groups, consults targeted stakeholders or carries out public consultations, as appropriate. Such consultations and advisory support should also be carried out in the framework of the activities of the AI Board and its subgroups, including the preparation of implementing acts in relation to Articles 4, 4b and 6. ––––––– 30. Regulation (EU) No 182/2011 of the European Parliament and of the Council of 16 February 2011 laying down the rules and general principles concerning mechanisms for control by the Member States of the Commission's exercise of implementing powers (OJ L 55, 28.2.2011, p.13). | (86) In order to ensure uniform conditions for the implementation of this Regulation, implementing powers should be conferred on the Commission. Those powers should be exercised in accordance with Regulation (EU) No 182/2011 of the European Parliament and of the Council. It is of particular importance that, in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making, whenever broader expertise is needed in the early preparation of draft implementing acts, the Commission makes use of expert groups, consults targeted stakeholders or carries out public consultations, as appropriate. Such consultations and advisory support should also be carried out in the framework of the activities of the AI Board and its subgroups, including the preparation of implementing acts in relation to Articles 4, 4b and 6. ––––––– Regulation (EU) No 182/2011 of the European Parliament and of the Council of 16 February 2011 laying down the rules and general principles concerning mechanisms for control by the Member States of the Commission's exercise of implementing powers (OJ L 55, 28.2.2011, p.13). |
Recital (87) | (87) Since the objective of this Regulation cannot be sufficiently achieved by the Member States and can rather, by reason of the scale or effects of the action, be better achieved at Union level, the Union may adopt measures in accordance with the principle of subsidiarity as set out in Article 5 TEU. In accordance with the principle of proportionality as set out in that Article, this Regulation does not go beyond what is necessary in order to achieve that objective. | (87) Since the objective of this Regulation cannot be sufficiently achieved by the Member States and can rather, by reason of the scale or effects of the action, be better achieved at Union level, the Union may adopt measures in accordance with the principle of subsidiarity as set out in Article 5 TEU. In accordance with the principle of proportionality as set out in that Article, this Regulation does not go beyond what is necessary in order to achieve that objective. | (87) Since the objective of this Regulation cannot be sufficiently achieved by the Member States and can rather, by reason of the scale or effects of the action, be better achieved at Union level, the Union may adopt measures in accordance with the principle of subsidiarity as set out in Article 5 TEU. In accordance with the principle of proportionality as set out in that Article, this Regulation does not go beyond what is necessary in order to achieve that objective. | (87) Since the objective of this Regulation cannot be sufficiently achieved by the Member States and can rather, by reason of the scale or effects of the action, be better achieved at Union level, the Union may adopt measures in accordance with the principle of subsidiarity as set out in Article 5 TEU. In accordance with the principle of proportionality as set out in that Article, this Regulation does not go beyond what is necessary in order to achieve that objective. |
Recital (87a) (new) [C] | (87a) In order to ensure legal certainty, ensure an appropriate adaptation period for operators and avoid disruption to the market, including by ensuring continuity of the use of AI systems, it is appropriate that this Regulation applies to the high-risk AI systems that have been placed on the market or put into service before the general date of application thereof, only if, from that date, those systems are subject to significant changes in their design or intended purpose. It is appropriate to clarify that, in this respect, the concept of significant change should be understood as equivalent in substance to the notion of substantial modification, which is used with regard only to high-risk AI systems as defined in this Regulation. | In order to ensure legal certainty, facilitate an appropriate adaptation period for operators, and prevent market disruption, including the continuity of AI systems usage, this Regulation should apply to high-risk AI systems that have been placed on the market or put into service prior to the general date of application. However, this should only be the case if, from that date, these systems undergo significant changes in their design or intended purpose. For clarity, the term 'significant change' should be interpreted as being equivalent to the concept of 'substantial modification', which is used exclusively for high-risk AI systems as defined in this Regulation. | ||
Recital (87a) (new) [P] | (87a) As reliable information on the resource and energy use, waste production and other environmental impact of AI systems and related ICT technology, including software, hardware and in particular data centres, is limited, the Commission should introduce an adequate methodology to measure the environmental impact and effectiveness of this Regulation in light of the Union environmental and climate objectives. | Given the limited reliable information on the resource and energy use, waste production, and other environmental impacts of AI systems and related ICT technology, including software, hardware, and particularly data centres, it is proposed that the Commission should develop an adequate methodology to measure the environmental impact and effectiveness of this Regulation, in alignment with the Union's environmental and climate objectives. | |
Recital (88) | (88) This Regulation should apply from … [OP – please insert the date established in Art. 85]. However, the infrastructure related to the governance and the conformity assessment system should be operational before that date, therefore the provisions on notified bodies and governance structure should apply from … [OP – please insert the date – three months following the entry into force of this Regulation]. In addition, Member States should lay down and notify to the Commission the rules on penalties, including administrative fines, and ensure that they are properly and effectively implemented by the date of application of this Regulation. Therefore the provisions on penalties should apply from [OP – please insert the date – twelve months following the entry into force of this Regulation]. | (88) This Regulation should apply from … [OP – please insert the date established in Art. 85]. However, the infrastructure related to the governance and the conformity assessment system should be operational before that date, therefore the provisions on notified bodies and governance structure should apply from … [OP – please insert the date – three months following the entry into force of this Regulation]. In addition, Member States should lay down and notify to the Commission the rules on penalties, including administrative fines, and ensure that they are properly and effectively implemented by the date of application of this Regulation. Therefore the provisions on penalties should apply from [OP – please insert the date – twelve months following the entry into force of this Regulation]. | (88) This Regulation should apply from … [OP – please insert the date established in Art. 85]. However, the infrastructure related to the governance and the conformity assessment system should be operational before that date, therefore the provisions on notified bodies and governance structure should apply from … [OP – please insert the date – three months following the entry into force of this Regulation]. In addition, Member States should lay down and notify to the Commission the rules on penalties, including administrative fines, and ensure that they are properly and effectively implemented by the date of application of this Regulation. Therefore the provisions on penalties should apply from [OP – please insert the date – twelve months following the entry into force of this Regulation]. | (88) This Regulation should apply from ... [OP – please insert the date established in Art. 85]. However, the infrastructure related to the governance and the conformity assessment system should be operational before that date, therefore the provisions on notified bodies and governance structure should apply from ... [OP – please insert the date – three months following the entry into force of this Regulation]. In addition, Member States should lay down and notify to the Commission the rules on penalties, including administrative fines, and ensure that they are properly and effectively implemented by the date of application of this Regulation. Therefore the provisions on penalties should apply from [OP – please insert the date – twelve months following the entry into force of this Regulation]. |
Recital (89) | (89) The European Data Protection Supervisor and the European Data Protection Board were consulted in accordance with Article 42(2) of Regulation (EU) 2018/1725 and delivered an opinion on […]”. | (89) The European Data Protection Supervisor and the European Data Protection Board were consulted in accordance with Article 42(2) of Regulation (EU) 2018/1725 and delivered an opinion on 18 June 2021. | (89) The European Data Protection Supervisor and the European Data Protection Board were consulted in accordance with Article 42(2) of Regulation (EU) 2018/1725 and delivered an opinion on […]”. | (89) The European Data Protection Supervisor and the European Data Protection Board were consulted in accordance with Article 42(2) of Regulation (EU) 2018/1725 and delivered an opinion on 18 June 2021. |
HAVE ADOPTED THIS REGULATION: | HAVE ADOPTED THIS REGULATION: | HAVE ADOPTED THIS REGULATION: | None | |
TITLE I | TITLE I GENERAL PROVISIONS | TITLE I GENERAL PROVISIONS | TITLE I GENERAL PROVISIONS | TITLE I GENERAL PROVISIONS |
Article 1 | Article 1 Subject matter | Article 1 Subject matter | Article 1 Subject matter | None |
Article 1 – paragraph 1 (new) | 1. The purpose of this Regulation is to promote the uptake of human-centric and trustworthy artificial intelligence and to ensure a high level of protection of health, safety, fundamental rights, democracy and the rule of law, and the environment from harmful effects of artificial intelligence systems in the Union while supporting innovation; | This Regulation aims to foster the adoption of human-centric and trustworthy artificial intelligence, ensuring a high level of protection for health, safety, fundamental rights, democracy, the rule of law, and the environment from the potential harmful effects of artificial intelligence systems within the Union, while simultaneously promoting innovation. | ||
Article 1 – paragraph 1 - introductory part | This Regulation lays down: | This Regulation lays down: | This Regulation lays down: | None |
Article 1 – paragraph 1 – point (a) | (a) harmonised rules for the placing on the market, the putting into service and the use of artificial intelligence systems (‘AI systems’) in the Union; | (a) harmonised rules for the placing on the market, the putting into service and the use of artificial intelligence systems (‘AI systems’) in the Union; | (a) harmonised rules for the placing on the market, the putting into service and the use of artificial intelligence systems (‘AI systems’) in the Union; | (a) harmonised rules for the placing on the market, the putting into service and the use of artificial intelligence systems (‘AI systems’) in the Union; |
Article 1 – paragraph 1 – point (b) | (b) prohibitions of certain artificial intelligence practices; | (b) prohibitions of certain artificial intelligence practices; | (b) prohibitions of certain artificial intelligence practices; | (b) prohibitions of certain artificial intelligence practices; |
Article 1 – paragraph 1 – point (c) | (c) specific requirements for high-risk AI systems and obligations for operators of such systems; | (c) specific requirements for high-risk AI systems and obligations for operators of such systems; | (c) specific requirements for high-risk AI systems and obligations for operators of such systems; | (c) specific requirements for high-risk AI systems and obligations for operators of such systems; |
Article 1 – paragraph 1 – point (d) | (d) harmonised transparency rules for AI systems intended to interact with natural persons, emotion recognition systems and biometric categorisation systems, and AI systems used to generate or manipulate image, audio or video content; | (d) harmonised transparency rules for certain AI systems | (d) harmonised transparency rules for certain AI systems; | (d) harmonised transparency rules for certain AI systems, specifically those intended to interact with natural persons, emotion recognition systems, biometric categorisation systems, and AI systems used to generate or manipulate image, audio or video content; |
Article 1 – paragraph 1 – point (e) | (e) rules on market monitoring and surveillance. | (e) rules on market monitoring, market surveillance governance and enforcement; | (e) rules on market monitoring, market surveillance and governance; | (e) rules on market monitoring, market surveillance, governance and enforcement; |
Article 1 – paragraph 1 – point ea (new) | (ea) measures to support innovation, with a particular focus on SMEs and start-ups, including on setting up regulatory sandboxes and targeted measures to reduce the regulatory burden on SMEs and start-ups; | (ea) measures in support of innovation. | (ea) measures to support innovation, particularly focusing on SMEs and start-ups, including the establishment of regulatory sandboxes and targeted measures to alleviate the regulatory burden on SMEs and start-ups. |
Article 1 – paragraph 1 – point e b (new) | (eb) rules for the establishment and functioning of the Union’s Artificial Intelligence Office (AI Office). | Establishment and functioning rules for the Union’s Artificial Intelligence Office (AI Office). | |
Article 2 | Article 2 Scope | Article 2 Scope | Article 2 Scope | None |
Article 2 – paragraph 1 - introductory part | 1.This Regulation applies to: | 1.This Regulation applies to: | 1. This Regulation applies to: | None |
Article 2 – paragraph 1 – point a | (a) providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country; | (a) providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country; | (a) providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are physically present or established within the Union or in a third country; | (a) providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established or physically present within the Union or in a third country; |
Article 2 – paragraph 1 – point b | (b) users of AI systems located within the Union; | (b) deployers of AI systems that have their place of establishment or who are located within the Union; | (b) users of AI systems who are physically present or established within the Union; | (b) users and deployers of AI systems who have their place of establishment, are located, or are physically present within the Union; |
Article 2 – paragraph 1 – point c | (c) providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union; | (c) providers and deployers of AI systems that have their place of establishment or who are located in a third country, where either Member State law applies by virtue of a public international law or the output produced by the system is intended to be used in the Union; | (c) providers and users of AI systems who are physically present or established in a third country, where the output produced by the system is used in the Union; | (c) providers, deployers, and users of AI systems that have their place of establishment, are located or are physically present in a third country, where either Member State law applies by virtue of a public international law or the output produced by the system is intended to be used or is used in the Union; |
Article 2 – paragraph 1 – point cb [P] / point d [C] (new) | (cb) importers and distributors of AI systems as well as authorised representatives of providers of AI systems, where such importers, distributors or authorised representatives have their establishment or are located in the Union; | (d) importers and distributors of AI systems; | Importers and distributors of AI systems, as well as authorised representatives of providers of AI systems, where such importers, distributors or authorised representatives have their establishment or are located in the Union. | |
Article 2 – paragraph 1 – point e (new) | (e) product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark; | None | ||
Article 2 – paragraph 1 – point f (new) | (f) authorised representatives of providers, which are established in the Union; | None | ||
Article 2 – paragraph 1 – point ca (new) | (ca) providers placing on the market or putting into service AI systems referred to in Article 5 outside the Union where the provider or distributor of such systems is located within the Union; | Providers located within the Union, who are placing on the market or putting into service AI systems referred to in Article 5 outside the Union. | ||
Article 2 – paragraph 1 – point cc / f (new) | (cc) affected persons as defined in Article 3(8a) that are located in the Union and whose health, safety or fundamental rights are adversely impacted by the use of an AI system that is placed on the market or put into service within the Union. | Affected individuals as per Article 3(8a), who are situated within the Union and whose health, safety, or fundamental rights are negatively affected by the deployment of an AI system that is introduced to the market or put into service within the Union. | ||
Article 2 – paragraph 2 – introductory part | 2. For high-risk AI systems that are safety components of products or systems, or which are themselves products or systems, falling within the scope of the following acts, only Article 84 of this Regulation shall apply: | 2. For high-risk AI systems that are safety components of products or systems, or which are themselves products or systems and that fall, within the scope of harmonisation legislation listed in Annex II - Section B, only Article 84 of this Regulation shall apply; | 2. For AI systems classified as high-risk AI systems in accordance with Articles 6(1) and 6(2) related to products covered by Union harmonisation legislation listed in Annex II, section B only Article 84 of this Regulation shall apply. Article 53 shall apply only insofar as the requirements for high-risk AI systems under this Regulation have been integrated under that Union harmonisation legislation. | 2. For AI systems classified as high-risk AI systems in accordance with Articles 6(1) and 6(2), that are safety components of products or systems, or which are themselves products or systems, and that fall within the scope of Union harmonisation legislation listed in Annex II - Section B, only Article 84 of this Regulation shall apply. Article 53 shall apply only insofar as the requirements for high-risk AI systems under this Regulation have been integrated under that Union harmonisation legislation. |
Article 2 – paragraph 2 – point a | (a) Regulation (EC) 300/2008; | deleted | deleted | None |
Article 2 – paragraph 2 – point b | (b) Regulation (EU) No 167/2013; | deleted | deleted | None |
Article 2 – paragraph 2 – point c | (c) Regulation (EU) No 168/2013; | deleted | deleted | None |
Article 2 – paragraph 2 – point d | (d) Directive 2014/90/EU; | deleted | deleted | None |
Article 2 – paragraph 2 – point e | (e) Directive (EU) 2016/797; | deleted | deleted | None |
Article 2 – paragraph 2 – point f | (f) Regulation (EU) 2018/858; | deleted | deleted | None |
Article 2 – paragraph 2 – point g | (g) Regulation (EU) 2018/1139; | deleted | deleted | None |
Article 2 – paragraph 2 – point h | (h) Regulation (EU) 2019/2144. | deleted | deleted | None |
Article 2 – paragraph 3 | This Regulation shall not apply to AI systems developed or used exclusively for military purposes. | This Regulation shall not apply to AI systems developed or used exclusively for military purposes. | 3. This Regulation shall not apply to AI systems if and insofar placed on the market, put into service, or used with or without modification of such systems for the purpose of activities which fall outside the scope of Union law, and in any event activities concerning military, defence or national security, regardless of the type of entity carrying out those activities. In addition, this Regulation shall not apply to AI systems which are not placed on the market or put into service in the Union, where the output is used in the Union for the purpose of activities which fall outside the scope of Union law, and in any event activities concerning military, defence or national security, regardless of the type of entity carrying out those activities. | This Regulation shall not apply to AI systems developed, placed on the market, put into service, or used, with or without modification, exclusively for military purposes or for activities concerning military, defence or national security, regardless of the type of entity carrying out those activities. This exclusion also applies to AI systems which are not placed on the market or put into service in the Union, where the output is used in the Union for the same purposes. |
Article 2 – paragraph 4 | 4. This Regulation shall not apply to public authorities in a third country nor to international organisations falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use AI systems in the framework of international agreements for law enforcement and judicial cooperation with the Union or with one or more Member States. | 4. This Regulation shall not apply to public authorities in a third country nor to international organisations falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use AI systems in the framework of international cooperation or agreements for law enforcement and judicial cooperation with the Union or with one or more Member States and are subject of a decision of the Commission adopted in accordance with Article 36 of Directive (EU)2016/680 or Article 45 of Regulation 2016/679 (adequacy decision) or are part of an international agreement concluded between the Union and that third country or international organisation pursuant to Article 218 TFUE providing adequate safeguards with respect to the protection of privacy and fundamental rights and freedoms of individuals; | 4. This Regulation shall not apply to public authorities in a third country nor to international organisations falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use AI systems in the framework of international agreements for law enforcement and judicial cooperation with the Union or with one or more Member States. | 4. This Regulation shall not apply to public authorities in a third country nor to international organisations falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use AI systems in the framework of international agreements or cooperation for law enforcement and judicial cooperation with the Union or with one or more Member States, and are subject of a decision of the Commission adopted in accordance with Article 36 of Directive (EU)2016/680 or Article 45 of Regulation 2016/679 (adequacy decision) or are part of an international agreement concluded between the Union and that third country or international organisation pursuant to Article 218 TFUE providing adequate safeguards with respect to the protection of privacy and fundamental rights and freedoms of individuals. |
Article 2 – paragraph 5 | This Regulation shall not affect the application of the provisions on the liability of intermediary service providers set out in Chapter II, Section IV of Directive 2000/31/EC of the European Parliament and of the Council60 [as to be replaced by the corresponding provisions of the Digital Services Act]. –––– 60. Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market ('Directive on electronic commerce') (OJ L 178, 17.7.2000, p. 1). | This Regulation shall not affect the application of the provisions on the liability of intermediary service providers set out in Chapter II, Section IV of Directive 2000/31/EC of the European Parliament and of the Council60 [as to be replaced by the corresponding provisions of the Digital Services Act]. –––– 60. Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market ('Directive on electronic commerce') (OJ L 178, 17.7.2000, p. 1). | 5. This Regulation shall not affect the application of the provisions on the liability of intermediary service providers set out in Chapter II, Section 4 of Directive 2000/31/EC of the European Parliament and of the Council31 [as to be replaced by the corresponding provisions of the Digital Services Act]. –––– 31. Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market ('Directive on electronic commerce') (OJ L 178, 17.7.2000, p. 1). | This Regulation shall not affect the application of the provisions on the liability of intermediary service providers set out in Chapter II, Section IV (or 4) of Directive 2000/31/EC of the European Parliament and of the Council [as to be replaced by the corresponding provisions of the Digital Services Act]. –––– Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market ('Directive on electronic commerce') (OJ L 178, 17.7.2000, p. 1). |
Article 2 – paragraph 6 (new) | 6. This Regulation shall not apply to AI systems, including their output, specifically developed and put into service for the sole purpose of scientific research and development. | This Regulation shall not apply to AI systems, including their output, that are specifically developed and put into service solely for the purpose of scientific research and development. | ||
Article 2 – paragraph5a (new) | 5a. Union law on the protection of personal data, privacy and the confidentiality of communications applies to personal data processes in connection with the rights and obligations laid down in this Regulation. This Regulation shall not affect Regulations (EU) 2016/679 and (EU) 2018/1725 and Directives 2002/58/EC and (EU) 2016/680, without prejudice to arrangements provided for in Article 10(5) and Article 54 of this Regulation.; | This Regulation shall uphold Union law on the protection of personal data, privacy, and the confidentiality of communications, specifically in relation to personal data processes associated with the rights and obligations established herein. This Regulation shall not impact Regulations (EU) 2016/679 and (EU) 2018/1725 and Directives 2002/58/EC and (EU) 2016/680, notwithstanding the provisions set out in Article 10(5) and Article 54 of this Regulation. | ||
Article 2 – paragraph 7(new) | 7. This Regulation shall not apply to any research and development activity regarding AI systems. | This Regulation shall not apply to any research and development activity related to AI systems. | ||
Article 2 – paragraph 5b(new) | 5b. This Regulation is without prejudice to the rules laid down by other Union legal acts related to consumer protection and product safety; | This Regulation is without prejudice to the rules established by other Union legal acts related to consumer protection and product safety. | ||
Article 2 – paragraph 8 (new) | 8. This Regulation shall not apply to obligations of users who are natural persons using AI systems in the course of a purely personal non-professional activity, except Article 52. | This Regulation shall not apply to obligations of users who are natural persons using AI systems in the course of a purely personal non-professional activity, except Article 52. | ||
Article 2 – paragraph 5c (new) | 5c. This regulation shall not preclude Member States or the Union from maintaining or introducing laws, regulations or administrative provisions which are more favourable to workers in terms of protecting their rights in respect of the use of AI systems by employers, or to encourage or allow the application of collective agreements which are more favourable to workers. | This regulation shall not prevent Member States or the Union from maintaining or introducing laws, regulations or administrative provisions which are more favourable to workers in terms of protecting their rights in respect of the use of AI systems by employers, or to encourage or allow the application of collective agreements which are more favourable to workers. | ||
Article 2 – paragraph 5d (new) | 5d. This Regulation shall not apply to research, testing and development activities regarding an AI system prior to this system being placed on the market or put into service, provided that these activities are conducted respecting fundamental rights and the applicable Union law. The testing in real world conditions shall not be covered by this exemption. The Commission is empowered to adopt delegated acts in accordance with Article 73 that clarify the application of this paragraph to specify this exemption to prevent its existing and potential abuse. The AI Office shall provide guidance on the governance of research and development pursuant to Article 56, also aiming to coordinate its application by the national supervisory authorities; | This Regulation shall not apply to research, testing, and development activities related to an AI system prior to its placement on the market or its entry into service, provided that these activities respect fundamental rights and comply with applicable Union law. However, testing in real-world conditions is not included in this exemption. The Commission is authorised to adopt delegated acts in accordance with Article 73 to clarify the application of this paragraph and specify this exemption to prevent its existing and potential abuse. The AI Office will provide guidance on the governance of research and development in accordance with Article 56, with the aim of coordinating its application by the national supervisory authorities. | |
Article 2 – paragraph 5e (new) | 5e. This Regulation shall not apply to AI components provided under free and open-source licences except to the extent they are placed on the market or put into service by a provider as part of a high-risk AI system or of an AI system that falls under Title II or IV. This exemption shall not apply to foundation models as defined in Art 3. | This Regulation shall not apply to AI components provided under free and open-source licences, unless they are placed on the market or put into service by a provider as part of a high-risk AI system or of an AI system that falls under Title II or IV. However, this exemption shall not apply to foundation models as defined in Art 3. | ||
Article 3 | Article 3 Definitions | Article 3 Definitions | Article 3 Definitions | None |
Article 3 – introductory part | For the purpose of this Regulation, the following definitions apply: | For the purpose of this Regulation, the following definitions apply: | For the purpose of this Regulation, the following definitions apply: | None |
Article 3 – paragraph 1 – point 1 | (1) ‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with; | (1) ‘‘artificial intelligence system’ (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments; | (1) ‘artificial intelligence system’ (AI system) means a system that is designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts; | (1) ‘artificial intelligence system’ (AI system) means a system that is developed with one or more of the techniques and approaches listed in Annex I, designed to operate with varying levels of autonomy, and can, for a given set of human-defined, explicit or implicit objectives, generate outputs such as content, predictions, recommendations, or decisions. These outputs, based on machine and/or human-provided data and inputs, influence the physical or virtual environments they interact with, using machine learning and/or logic- and knowledge-based approaches. |
Article 3 – paragraph 1 – point 1a (new [C] | (1a) ‘life cycle of an AI system’ means the duration of an AI system, from design through retirement. Without prejudice to the powers of the market surveillance authorities, such retirement may happen at any point in time during the post-market monitoring phase upon the decision of the provider and implies that the system may not be used further. An AI system lifecycle is also ended by a substantial modification to the AI system made by the provider or any other natural or legal person, in which case the substantially modified AI system shall be considered as a new AI system. | The life cycle of an AI system is defined as the duration of an AI system, from its design through to its retirement. This retirement can occur at any point during the post-market monitoring phase, based on the decision of the provider, and signifies that the system may not be used further. The life cycle of an AI system also concludes when a substantial modification is made to the AI system by the provider or any other natural or legal person. In such cases, the substantially modified AI system is considered as a new AI system. This definition does not infringe upon the powers of the market surveillance authorities. | ||
Article 3 – paragraph 1 – point 1a (new) [P] | (1a) ‘risk’ means the combination of the probability of an occurrence of harm and the severity of that harm; | 'Risk' is defined as the combination of the likelihood of an occurrence of harm and the severity of that harm. | |
Article 3 – paragraph 1 – point 1b [C] / point 1d [P] (new) | (1d) ‘general purpose AI system’ means an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed; | (1b) ‘general purpose AI system’ means an AI system that - irrespective of how it is placed on the market or put into service, including as open source software - is intended by the provider to perform generally applicable functions such as image and speech recognition, audio and video generation, pattern detection, question answering, translation and others; a general purpose AI system may be used in a plurality of contexts and be integrated in a plurality of other AI systems; | ‘General purpose AI system’ means an AI system that, irrespective of how it is placed on the market or put into service, including as open source software, is intended by the provider to perform generally applicable functions such as image and speech recognition, audio and video generation, pattern detection, question answering, translation and others. This system can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed. A general purpose AI system may be used in a plurality of contexts and be integrated in a plurality of other AI systems. | |
Article 3 – paragraph 1 – point 1b (new) | (1b) ‘significant risk’ means a risk that is significant as a result of the combination of its severity, intensity, probability of occurrence, and duration of its effects, and its ability to affect an individual, a plurality of persons or to affect a particular group of persons; | 'Significant risk' is defined as a risk that becomes significant due to the combination of its severity, intensity, likelihood of occurrence, and the duration of its effects. It also includes its capacity to impact an individual, multiple individuals, or a specific group of individuals. | |
Article 3 – paragraph 1 – point 1c (new) | (1c) ‘foundation model’ means an AI system model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks; | 'Foundation model' is defined as an AI system model that is trained on a broad scale of data, designed for generality of output, and can be adapted to a wide range of distinctive tasks. | |
Article 3 – paragraph 1 – point 1e (new) | (1e) ‘large training runs’ means the production process of a powerful AI model that require computing resources above a very high threshold; | 'Large training runs' is defined as the production process of a highly powerful AI model that necessitates the use of computing resources exceeding a significantly high threshold. | |
Article 3 – paragraph 1 – point 2 | (2) ‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge; | (2) ‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge; | (2) ‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed and places that system on the market or puts it into service under its own name or trademark, whether for payment or free of charge; | (2) ‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, and places that system on the market or puts it into service, whether for payment or free of charge; |
Article 3 – paragraph 1 – point 3 | (3) ‘small-scale provider’ means a provider that is a micro or small enterprise within the meaning of Commission Recommendation 2003/361/EC61 ; __________________ 61 Commission Recommendation of 6 May 2003 concerning the definition of micro, small and medium-sized enterprises (OJ L 124, 20.5.2003, p. 36). | deleted | deleted | None |
Article 3 – paragraph 1 – point 3a (new) | (3a) ‘small and medium-sized enterprise’ (SMEs) means an enterprise as defined in the Annex of Commission Recommendation 2003/361/EC concerning the definition of micro, small and medium-sized enterprises; | (3a) 'small and medium-sized enterprise' (SMEs) is defined as an enterprise in accordance with the Annex of Commission Recommendation 2003/361/EC concerning the definition of micro, small and medium-sized enterprises. | ||
Article 3 – paragraph 1 – point 4 | (4) ‘user’ means any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity; | (4) ‘deployer’ means any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity; | (4) ‘user’ means any natural or legal person, including a public authority, agency or other body, under whose authority the system is used; | (4) ‘user’ or ‘deployer’ means any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity;
Article 3 – paragraph 1 – point 5 | (5) ‘authorised representative’ means any natural or legal person established in the Union who has received a written mandate from a provider of an AI system to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation; | (5) ‘authorised representative’ means any natural or legal person established in the Union who has received a written mandate from a provider of an AI system to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation; | (5) ‘authorised representative’ means any natural or legal person physically present or established in the Union who has received and accepted a written mandate from a provider of an AI system to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation; | (5) ‘authorised representative’ means any natural or legal person established and physically present in the Union who has received and accepted a written mandate from a provider of an AI system to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation; |
Article 3 – paragraph 1 – point 5a (new) | (5a) ‘product manufacturer’ means a manufacturer within the meaning of any of the Union harmonisation legislation listed in Annex II | (5a) 'product manufacturer' refers to a manufacturer as defined by any of the Union harmonisation legislation listed in Annex II. | ||
Article 3 – paragraph 1 – point 6 | (6)‘importer’ means any natural or legal person established in the Union that places on the market or puts into service an AI system that bears the name or trademark of a natural or legal person established outside the Union; | (6)‘importer’ means any natural or legal person established in the Union that places on the market or puts into service an AI system that bears the name or trademark of a natural or legal person established outside the Union; | (6) ‘importer’ means any natural or legal person physically present or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established outside the Union | (6) 'importer' means any natural or legal person established or physically present in the Union that places on the market or puts into service an AI system that bears the name or trademark of a natural or legal person established outside the Union; |
Article 3 – paragraph 1 – point 7 | (7)‘distributor’ means any natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market without affecting its properties; | (7)‘distributor’ means any natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market without affecting its properties; | (7) ‘distributor’ means any natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market; | (7) ‘distributor’ means any natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market; |
Article 3 – paragraph 1 – point 8 | (8) ‘operator’ means the provider, the user, the authorised representative, the importer and the distributor; | (8) ‘operator’ means the provider, the deployer, the authorised representative, the importer and the distributor; | (8) ‘operator’ means the provider, the product manufacturer, the user, the authorised representative, the importer or the distributor; | (8) ‘operator’ means the provider, the user, the deployer, the product manufacturer, the authorised representative, the importer and the distributor; |
Article 3 – paragraph 1 – point 8a (new) | (8a) ‘affected person’ means any natural person or group of persons who are subject to or otherwise affected by an AI system; | 'Affected person' is defined as any natural person or group of persons who are subject to or otherwise impacted by an AI system. | |
Article 3 – paragraph 1 – point 9 | (9)‘placing on the market’ means the first making available of an AI system on the Union market; | (9)‘placing on the market’ means the first making available of an AI system on the Union market; | (9) ‘placing on the market’ means the first making available of an AI system on the Union market; | (9) 'placing on the market' means the first making available of an AI system on the Union market; |
Article 3 – paragraph 1 – point 10 | (10)‘making available on the market’ means any supply of an AI system for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge; | (10)‘making available on the market’ means any supply of an AI system for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge; | (10) ‘making available on the market’ means any supply of an AI system for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge; | (10) 'making available on the market' means any supply of an AI system for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge; |
Article 3 – paragraph 1 – point 11 | (11) ‘putting into service’ means the supply of an AI system for first use directly to the user or for own use on the Union market for its intended purpose; | (11) ‘putting into service’ means the supply of an AI system for first use directly to the deployer or for own use on the Union market for its intended purpose; | (11) ‘putting into service’ means the supply of an AI system for first use directly to the user or for own use in the Union for its intended purpose; | (11) ‘putting into service’ means the supply of an AI system for first use directly to the user or deployer or for own use on the Union market or in the Union for its intended purpose; |
Article 3 – paragraph 1 – point 12 | (12)‘intended purpose’ means the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation; | (12)‘intended purpose’ means the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation; | (12) ‘intended purpose’ means the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation; | (12) 'intended purpose' means the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation; |
Article 3 – paragraph 1 – point 13 | (13) ‘reasonably foreseeable misuse’ means the use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems; | (13) ‘reasonably foreseeable misuse’ means the use of an AI system in a way that is not in accordance with its intended purpose as indicated in instructions for use established by the provider, but which may result from reasonably foreseeable human behaviour or interaction with other systems, including other AI systems; | (13) ‘reasonably foreseeable misuse’ means the use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems; | (13) ‘reasonably foreseeable misuse’ means the use of an AI system in a way that is not in accordance with its intended purpose as indicated in instructions for use established by the provider, but which may result from reasonably foreseeable human behaviour or interaction with other systems, including other AI systems; |
Article 3 – paragraph 1 – point 14 | (14) ‘safety component of a product or system’ means a component of a product or of a system which fulfils a safety function for that product or system or the failure or malfunctioning of which endangers the health and safety of persons or property | (14) ‘safety component of a product or system’ means, in line with Union harmonisation law listed in Annex II, a component of a product or of a system which fulfils a safety function for that product or system, or the failure or malfunctioning of which endangers the health and safety of persons; | (14) ‘safety component of a product or system’ means a component of a product or of a system which fulfils a safety function for that product or system or the failure or malfunctioning of which endangers the health and safety of persons or property; | (14) ‘safety component of a product or system’ means, in line with Union harmonisation law listed in Annex II, a component of a product or of a system which fulfils a safety function for that product or system, or the failure or malfunctioning of which endangers the health and safety of persons or property; |
Article 3 – paragraph 1 – point 15 | (15) ‘instructions for use’ means the information provided by the provider to inform the user of in particular an AI system’s intended purpose and proper use, inclusive of the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used; | (15) ‘instructions for use’ means the information provided by the provider to inform the deployer of in particular an AI system’s intended purpose and proper use, as well as information on any precautions to be taken; inclusive of the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used; | (15) ‘instructions for use’ means the information provided by the provider to inform the user of in particular an AI system’s intended purpose and proper use; | (15) ‘instructions for use’ means the information provided by the provider to inform the user or deployer of in particular an AI system’s intended purpose and proper use, as well as information on any precautions to be taken; inclusive of the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used; |
Article 3 – paragraph 1 – point 16 | (16) ‘recall of an AI system’ means any measure aimed at achieving the return to the provider of an AI system made available to users; | (16) ‘recall of an AI system’ means any measure aimed at achieving the return to the provider of an AI system that has been made available to deployers; | (16) ‘recall of an AI system’ means any measure aimed at achieving the return to the provider or taking it out of service or disabling the use of an AI system made available to users; | (16) ‘recall of an AI system’ means any measure aimed at achieving the return to the provider, taking it out of service, or disabling the use of an AI system that has been made available to users or deployers; |
Article 3 – paragraph 1 – point 17 | (17)‘withdrawal of an AI system’ means any measure aimed at preventing the distribution, display and offer of an AI system; | (17)‘withdrawal of an AI system’ means any measure aimed at preventing the distribution, display and offer of an AI system; | (17) ‘withdrawal of an AI system’ means any measure aimed at preventing an AI system in the supply chain being made available on the market; | (17) 'withdrawal of an AI system' means any measure aimed at preventing the distribution, display, offer of an AI system, and its availability on the market in the supply chain; |
Article 3 – paragraph 1 – point 18 | (18)‘performance of an AI system’ means the ability of an AI system to achieve its intended purpose; | (18)‘performance of an AI system’ means the ability of an AI system to achieve its intended purpose; | (18) ‘performance of an AI system’ means the ability of an AI system to achieve its intended purpose; | (18) 'performance of an AI system' means the ability of an AI system to achieve its intended purpose; |
Article 3 – paragraph 1 – point 19 | (19)‘notifying authority’ means the national authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring; | (19) ‘notifying authority’ means the national authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring; | (20) ‘notifying authority’ means the national authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring; | (19) 'notifying authority' means the national authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring; |
Article 3 – paragraph 1 – point 20 | (20) ‘conformity assessment’ means the process of verifying whether the requirements set out in Title III, Chapter 2 of this Regulation relating to an AI system have been fulfilled; | (20) ‘conformity assessment’ means the process of demonstrating whether the requirements set out in Title III, Chapter 2 of this Regulation relating to an AI system have been fulfilled; | (19) ‘conformity assessment’ means the process of verifying whether the requirements set out in Title III, Chapter 2 of this Regulation relating to a high-risk AI system have been fulfilled; | (20) ‘conformity assessment’ means the process of verifying and demonstrating whether the requirements set out in Title III, Chapter 2 of this Regulation relating to an AI system or a high-risk AI system have been fulfilled; |
Article 3 – paragraph 1 – point 21 | (21)‘conformity assessment body’ means a body that performs third-party conformity assessment activities, including testing, certification and inspection; | (21)‘conformity assessment body’ means a body that performs third-party conformity assessment activities, including testing, certification and inspection; | (21) ‘conformity assessment body’ means a body that performs third-party conformity assessment activities, including testing, certification and inspection; | (21) 'conformity assessment body' means a body that performs third-party conformity assessment activities, including testing, certification and inspection; |
Article 3 – paragraph 1 – point 22 | (22) ‘notified body’ means a conformity assessment body designated in accordance with this Regulation and other relevant Union harmonisation legislation; | (22) ‘notified body’ means a conformity assessment body notified in accordance with this Regulation and other relevant Union harmonisation legislation; | (22) ‘notified body’ means a conformity assessment body designated in accordance with this Regulation and other relevant Union harmonisation legislation; | (22) ‘notified body’ means a conformity assessment body designated and notified in accordance with this Regulation and other relevant Union harmonisation legislation; |
Article 3 – paragraph 1 – point 23 | (23) ‘substantial modification’ means a change to the AI system following its placing on the market or putting into service which affects the compliance of the AI system with the requirements set out in Title III, Chapter 2 of this Regulation or results in a modification to the intended purpose for which the AI system has been assessed; | (23) ‘substantial modification’ means a modification or a series of modifications of the AI system after its placing on the market or putting into service which is not foreseen or planned in the initial risk assessment by the provider and as a result of which the compliance of the AI system with the requirements set out in Title III, Chapter 2 of this Regulation is affected or results in a modification to the intended purpose for which the AI system has been assessed; | (23) ‘substantial modification’ means a change to the AI system following its placing on the market or putting into service which affects the compliance of the AI system with the requirements set out in Title III, Chapter 2 of this Regulation, or a modification to the intended purpose for which the AI system has been assessed. For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the high-risk AI system and its performance that have been pre-determined by the provider at the moment of the initial conformity assessment and are part of the information contained in the technical documentation referred to in point 2(f) of Annex IV, shall not constitute a substantial modification. | (23) ‘substantial modification’ means a change or a series of modifications to the AI system following its placing on the market or putting into service, which is not foreseen or planned in the initial risk assessment by the provider, and which affects the compliance of the AI system with the requirements set out in Title III, Chapter 2 of this Regulation, or results in a modification to the intended purpose for which the AI system has been assessed. For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the high-risk AI system and its performance that have been pre-determined by the provider at the moment of the initial conformity assessment and are part of the information contained in the technical documentation referred to in point 2(f) of Annex IV, shall not constitute a substantial modification.
Article 3 – paragraph 1 – point 24 | (24) ‘CE marking of conformity’ (CE marking) means a marking by which a provider indicates that an AI system is in conformity with the requirements set out in Title III, Chapter 2 of this Regulation and other applicable Union legislation harmonising the conditions for the marketing of products (‘Union harmonisation legislation’) providing for its affixing; | (24) ‘CE marking of conformity’ (CE marking) means a physical or digital marking by which a provider indicates that an AI system or a product with an embedded AI system is in conformity with the requirements set out in Title III, Chapter 2 of this Regulation and other applicable Union legislation harmonising the conditions for the marketing of products (‘Union harmonisation legislation’) providing for its affixing; | (24) ‘CE marking of conformity’ (CE marking) means a marking by which a provider indicates that an AI system is in conformity with the requirements set out in Title III, Chapter 2 or in Article 4b of this Regulation and other applicable Union legal act harmonising the conditions for the marketing of products (‘Union harmonisation legislation’) providing for its affixing; | (24) ‘CE marking of conformity’ (CE marking) means a physical or digital marking by which a provider indicates that an AI system or a product with an embedded AI system is in conformity with the requirements set out in Title III, Chapter 2 or in Article 4b of this Regulation and other applicable Union legislation or Union legal act harmonising the conditions for the marketing of products (‘Union harmonisation legislation’) providing for its affixing; |
Article 3 – paragraph 1 – point 25 | (25)‘post-market monitoring’ means all activities carried out by providers of AI systems to proactively collect and review experience gained from the use of AI systems they place on the market or put into service for the purpose of identifying any need to immediately apply any necessary corrective or preventive actions; | (25)‘post-market monitoring’ means all activities carried out by providers of AI systems to proactively collect and review experience gained from the use of AI systems they place on the market or put into service for the purpose of identifying any need to immediately apply any necessary corrective or preventive actions; | (25) ‘post-market monitoring system’ means all activities carried out by providers of AI systems to collect and review experience gained from the use of AI systems they place on the market or put into service for the purpose of identifying any need to immediately apply any necessary corrective or preventive actions; | (25) 'post-market monitoring' refers to all activities conducted by providers of AI systems to proactively and systematically collect and review experience gained from the use of AI systems they place on the market or put into service, with the aim of identifying any need to immediately apply any necessary corrective or preventive actions; |
Article 3 – paragraph 1 – point 26 | (26)‘market surveillance authority’ means the national authority carrying out the activities and taking the measures pursuant to Regulation (EU) 2019/1020; | (26)‘market surveillance authority’ means the national authority carrying out the activities and taking the measures pursuant to Regulation (EU) 2019/1020; | (26) ‘market surveillance authority’ means the national authority carrying out the activities and taking the measures pursuant to Regulation (EU) 2019/1020; | (26) 'market surveillance authority' means the national authority carrying out the activities and taking the measures pursuant to Regulation (EU) 2019/1020; |
Article 3 – paragraph 1 – point 27 | (27)‘harmonised standard’ means a European standard as defined in Article 2(1)(c) of Regulation (EU) No 1025/2012; | (27)‘harmonised standard’ means a European standard as defined in Article 2(1)(c) of Regulation (EU) No 1025/2012; | (27) ‘harmonised standard’ means a European standard as defined in Article 2(1)(c) of Regulation (EU) No 1025/2012; | (27) 'harmonised standard' means a European standard as defined in Article 2(1)(c) of Regulation (EU) No 1025/2012; |
Article 3 – paragraph 1 – point 28 | (28)‘common specifications’ means a document, other than a standard, containing technical solutions providing a means to, comply with certain requirements and obligations established under this Regulation; | (28)‘common specifications’ means a document, other than a standard, containing technical solutions providing a means to, comply with certain requirements and obligations established under this Regulation; | (28) ‘common specification’ means a set of technical specifications, as defined in point 4 of Article 2 of Regulation (EU) No 1025/2012 providing means to comply with certain requirements established under this Regulation; | (28) 'common specifications' means a document or a set of technical specifications, other than a standard, as defined in point 4 of Article 2 of Regulation (EU) No 1025/2012, containing technical solutions providing a means to comply with certain requirements and obligations established under this Regulation. |
Article 3 – paragraph 1 – point 29 | (29) ‘training data’ means data used for training an AI system through fitting its learnable parameters, including the weights of a neural network; | (29) ‘training data’ means data used for training an AI system through fitting its learnable parameters; | (29) ‘training data’ means data used for training an AI system through fitting its learnable parameters; | (29) ‘training data’ means data used for training an AI system through fitting its learnable parameters, including the weights of a neural network; |
Article 3 – paragraph 1 – point 30 | (30) ‘validation data’ means data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process, among other things, in order to prevent overfitting; whereas the validation dataset can be a separate dataset or part of the training dataset, either as a fixed or variable split; | (30) ‘validation data’ means data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process, among other things, in order to prevent underfitting or overfitting; whereas the validation dataset is a separate dataset or part of the training dataset, either as a fixed or variable split; | (30) ‘validation data’ means data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process, among other things, in order to prevent overfitting; whereas the validation dataset can be a separate dataset or part of the training dataset, either as a fixed or variable split; | (30) ‘validation data’ means data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process, among other things, in order to prevent underfitting or overfitting; whereas the validation dataset can be a separate dataset or part of the training dataset, either as a fixed or variable split; |
Article 3 – paragraph 1 – point 31 | (31)‘testing data’ means data used for providing an independent evaluation of the trained and validated AI system in order to confirm the expected performance of that system before its placing on the market or putting into service; | (31)‘testing data’ means data used for providing an independent evaluation of the trained and validated AI system in order to confirm the expected performance of that system before its placing on the market or putting into service; | (31) ‘testing data’ means data used for providing an independent evaluation of the trained and validated AI system in order to confirm the expected performance of that system before its placing on the market or putting into service; | (31) 'testing data' means data used for providing an independent evaluation of the trained and validated AI system in order to confirm the expected performance of that system before its placing on the market or putting into service; |
Article 3 – paragraph 1 – point 32 | (32)‘input data’ means data provided to or directly acquired by an AI system on the basis of which the system produces an output; | (32)‘input data’ means data provided to or directly acquired by an AI system on the basis of which the system produces an output; | (32) ‘input data’ means data provided to or directly acquired by an AI system on the basis of which the system produces an output; | (32) 'input data' means data provided to or directly acquired by an AI system on the basis of which the system produces an output; |
Article 3 – paragraph 1 – point 33 | (33) ‘biometric data’ means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data; | (33) ‘biometric data’ means biometric data as defined in Article 4, point (14) of Regulation (EU) 2016/679; | (33) ‘biometric data’ means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data; | (33) ‘biometric data’ means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data, as defined in Article 4, point (14) of Regulation (EU) 2016/679; |
Article 3 – paragraph 1 – point 33a (new) | (33a) ‘biometric-based data’ means data resulting from specific technical processing relating to physical, physiological or behavioural signals of a natural person; | 'Biometric-based data' is defined as data derived from specific technical processing related to the physical, physiological or behavioural signals of a natural person. | |
Article 3 – paragraph 1 – point 33b (new) | (33b) ‘biometric identification’ means the automated recognition of physical, physiological, behavioural, and psychological human features for the purpose of establishing an individual’s identity by comparing biometric data of that individual to stored biometric data of individuals in a database (one-to-many identification); | 'Biometric identification' is defined as the automated recognition of physical, physiological, behavioural, and psychological human features. This process is used for the purpose of establishing an individual’s identity by comparing biometric data of that individual to stored biometric data of individuals in a database, also known as one-to-many identification. | |
Article 3 – paragraph 1 – point 33c (new) | (33c) ‘biometric verification’ means the automated verification of the identity of natural persons by comparing biometric data of an individual to previously provided biometric data (one-to-one verification, including authentication); | 'Biometric verification' is defined as the automated process of verifying the identity of natural persons by comparing an individual's biometric data to previously provided biometric data. This process involves one-to-one verification, including authentication. | |
Article 3 – paragraph 1 – point 33d (new) | (33d) ‘special categories of personal data’ means the categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679; | 'Special categories of personal data' shall refer to the categories of personal data as outlined in Article 9(1) of Regulation (EU) 2016/679. | |
Article 3 – paragraph 1 – point 34 | (34) ‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data; | (34) ‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions, thoughts, states of mind or intentions of individuals or groups on the basis of their biometric and biometric-based data; | (34) ‘emotion recognition system’ means an AI system for the purpose of identifying or inferring psychological states, emotions or intentions of natural persons on the basis of their biometric data; | (34) ‘emotion recognition system’ means an AI system for the purpose of identifying or inferring psychological states, emotions, thoughts, states of mind or intentions of natural persons, individuals or groups on the basis of their biometric and biometric-based data; |
Article 3 – paragraph 1 – point 35 | (35) ‘biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific categories, such as sex, age, hair colour, eye colour, tattoos, ethnic origin or sexual or political orientation, on the basis of their biometric data; | (35) ‘biometric categorisation’ means assigning natural persons to specific categories, or inferring their characteristics and attributes on the basis of their biometric or biometric-based data, or which can be inferred from such data; | (35) ‘biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific categories on the basis of their biometric data; | (35) ‘biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific categories, or inferring their characteristics and attributes, such as sex, age, hair colour, eye colour, tattoos, ethnic origin or sexual or political orientation, on the basis of their biometric or biometric-based data, or which can be inferred from such data. |
Article 3 – paragraph 1 – point 36 | (36) ‘remote biometric identification system’ means an AI system for the purpose of identifying natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge of the user of the AI system whether the person will be present and can be identified ; | (36) ‘remote biometric identification system’ means an AI system for the purpose of identifying natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge of the deployer of the AI system whether the person will be present and can be identified, excluding verification systems; | (36) ‘remote biometric identification system’ means an AI system for the purpose of identifying natural persons typically at a distance, without their active involvement, through the comparison of a person’s biometric data with the biometric data contained in a reference data repository; | (36) ‘remote biometric identification system’ means an AI system for the purpose of identifying natural persons typically at a distance, without their active involvement and without prior knowledge of the user or deployer of the AI system whether the person will be present and can be identified, through the comparison of a person’s biometric data with the biometric data contained in a reference database or data repository, excluding verification systems. |
Article 3 – paragraph 1 – point 37 | (37) ‘‘real-time’ remote biometric identification system’ means a remote biometric identification system whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay. This comprises not only instant identification, but also limited short delays in order to avoid circumvention. | (37) ‘‘real-time’ remote biometric identification system’ means a remote biometric identification system whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay. This comprises not only instant identification, but also limited delays in order to avoid circumvention; | (37) ‘‘real-time’ remote biometric identification system’ means a remote biometric identification system whereby the capturing of biometric data, the comparison and the identification all occur instantaneously or near instantaneously; | (37) ‘‘real-time’ remote biometric identification system’ means a remote biometric identification system whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay, instantaneously or near instantaneously. This comprises not only instant identification, but also includes limited short delays in order to avoid circumvention. |
Article 3 – paragraph 1 – point 38 | (38)‘‘post’ remote biometric identification system’ means a remote biometric identification system other than a ‘real-time’ remote biometric identification system; | (38)‘‘post’ remote biometric identification system’ means a remote biometric identification system other than a ‘real-time’ remote biometric identification system; | deleted | None |
Article 3 – paragraph 1 – point 39 | (39) ‘publicly accessible space’ means any physical place accessible to the public, regardless of whether certain conditions for access may apply; | (39) ‘publicly accessible space’ means any publicly or privately owned physical place accessible to the public, regardless of whether certain conditions for access may apply, and regardless of the potential capacity restrictions; | (39) ‘publicly accessible space’ means any publicly or privately owned physical place accessible to an undetermined number of natural persons regardless of whether certain conditions or circumstances for access have been predetermined, and regardless of the potential capacity restrictions; | (39) ‘publicly accessible space’ means any publicly or privately owned physical place accessible to the public or an undetermined number of natural persons, regardless of whether certain conditions or circumstances for access may apply or have been predetermined, and regardless of the potential capacity restrictions. |
Article 3 – paragraph 1 – point 40 | (40)‘law enforcement authority’ means: (a)any public authority competent for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; or (b)any other body or entity entrusted by Member State law to exercise public authority and public powers for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; | (40)‘law enforcement authority’ means: (a)any public authority competent for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; or (b)any other body or entity entrusted by Member State law to exercise public authority and public powers for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; | (40) ‘law enforcement authority’ means: (a) any public authority competent for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; or (b) any other body or entity entrusted by Member State law to exercise public authority and public powers for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; | (40) ‘law enforcement authority’ means: (a) any public authority competent for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; or (b) any other body or entity entrusted by Member State law to exercise public authority and public powers for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; |
Article 3 – paragraph 1 – point 41 | (41) ‘law enforcement’ means activities carried out by law enforcement authorities for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; | (41) ‘law enforcement’ means activities carried out by law enforcement authorities or on their behalf for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; | (41) ‘law enforcement’ means activities carried out by law enforcement authorities or on their behalf for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; | (41) ‘law enforcement’ means activities carried out by law enforcement authorities or on their behalf for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; |
Article 3 – paragraph 1 – point 42 | (42) ‘national supervisory authority’ means the authority to which a Member State assigns the responsibility for the implementation and application of this Regulation, for coordinating the activities entrusted to that Member State, for acting as the single contact point for the Commission, and for representing the Member State at the European Artificial Intelligence Board; | (42) ‘national supervisory authority’ means a public (AM 69) authority to which a Member State assigns the responsibility for the implementation and application of this Regulation, for coordinating the activities entrusted to that Member State, for acting as the single contact point for the Commission, and for representing the Member State in the management Board of the AI Office; | deleted | (42) ‘national supervisory authority’ means a public authority to which a Member State assigns the responsibility for the implementation and application of this Regulation, for coordinating the activities entrusted to that Member State, for acting as the single contact point for the Commission, and for representing the Member State at the European Artificial Intelligence Board. |
Article 3 – paragraph 1 – point 43 | (43) ‘national competent authority’ means the national supervisory authority, the notifying authority and the market surveillance authority; | (43) ‘national competent authority’ means any of the national authorities which are responsible for the enforcement of this Regulation; | (43) ‘national competent authority’ means any of the following: the notifying authority and the market surveillance authority. As regards AI systems put into service or used by EU institutions, agencies, offices and bodies, the European Data Protection Supervisor shall fulfil the responsibilities that in the Member States are entrusted to the national competent authority and, as relevant, any reference to national competent authorities or market surveillance authorities in this Regulation shall be understood as referring to the European Data Protection Supervisor; | (43) ‘national competent authority’ means any of the national authorities, including the national supervisory authority, the notifying authority, and the market surveillance authority, which are responsible for the enforcement of this Regulation. As regards AI systems put into service or used by EU institutions, agencies, offices and bodies, the European Data Protection Supervisor shall fulfil the responsibilities that in the Member States are entrusted to the national competent authority and, as relevant, any reference to national competent authorities or market surveillance authorities in this Regulation shall be understood as referring to the European Data Protection Supervisor. |
Article 3 – paragraph 1 – point 44 | (44) ‘serious incident’ means any incident that directly or indirectly leads, might have led or might lead to any of the following: (a) the death of a person or serious damage to a person’s health, to property or the environment, (b) a serious disruption of the management and operation of critical infrastructure. | (44) ‘serious incident’ means any incident or malfunctioning of an AI system that directly or indirectly leads, might have led or might lead to any of the following: (a) the death of a person or serious damage to a person’s health, (b) a serious disruption of the management and operation of critical infrastructure, (ba) a breach of fundamental rights protected under Union law, (bb) serious damage to property or the environment. | (44) ‘serious incident’ means any incident or malfunctioning of an AI system that directly or indirectly leads to any of the following: (a) the death of a person or serious damage to a person’s health; (b) a serious and irreversible disruption of the management and operation of critical infrastructure; (c) breach of obligations under Union law intended to protect fundamental rights; (d) serious damage to property or the environment. | (44) ‘serious incident’ means any incident or malfunctioning of an AI system that directly or indirectly leads, might have led or might lead to any of the following: (a) the death of a person or serious damage to a person’s health, to property or the environment, (b) a serious and potentially irreversible disruption of the management and operation of critical infrastructure, (c) a breach of obligations or fundamental rights protected under Union law, (d) serious damage to property or the environment. |
Article 3 – paragraph 1 – point 45 [C] / point 44h [P] (new) | (44h) ‘critical infrastructure’ means an asset, a facility, equipment, a network or a system, or a part of an asset, a facility, equipment, a network or a system, which is necessary for the provision of an essential service within the meaning of Article 2(4) of Directive (EU) 2022/2557; | (45) ‘critical infrastructure’ means an asset, system or part thereof which is necessary for the delivery of a service that is essential for the maintenance of vital societal functions or economic activities within the meaning of Article 2(4) and (5) of Directive ...../..... on the resilience of critical entities; | 'Critical infrastructure' means an asset, a facility, equipment, a network or a system, or a part thereof, which is necessary for the provision and delivery of a service that is essential for the maintenance of vital societal functions or economic activities within the meaning of Article 2(4) and (5) of Directive (EU) 2022/2557 on the resilience of critical entities. |
Article 3 – paragraph 1 – point 46 [C] / point 44a [P] (new) | (44a) 'personal data' means personal data as defined in Article 4, point (1) of Regulation (EU) 2016/679; | (46) ‘personal data’ means data as defined in point (1) of Article 4 of Regulation (EU) 2016/679; | 'Personal data' means data as defined in Article 4, point (1) of Regulation (EU) 2016/679. |
Article 3 – paragraph 1 – point 47 [C] / point 44b [P] (new) | (44b) ‘non-personal data’ means data other than personal data; | (47) ‘non-personal data’ means data other than personal data as defined in point (1) of Article 4 of Regulation (EU) 2016/679; | 'Non-personal data' refers to data that is not classified as personal data, as defined in point (1) of Article 4 of Regulation (EU) 2016/679. |
Article 3 – paragraph 1 – point 44c (new) | (44c) ‘profiling’ means any form of automated processing of personal data as defined in point (4) of Article 4 of Regulation (EU) 2016/679; or in the case of law enforcement authorities – in point 4 of Article 3 of Directive (EU) 2016/680 or, in the case of Union institutions, bodies, offices or agencies, in point 5 Article 3 of Regulation (EU) 2018/1725; | 'Profiling' refers to any form of automated processing of personal data as defined in point (4) of Article 4 of Regulation (EU) 2016/679. This definition also applies to law enforcement authorities as specified in point 4 of Article 3 of Directive (EU) 2016/680 and to Union institutions, bodies, offices or agencies as outlined in point 5 of Article 3 of Regulation (EU) 2018/1725. | |
Article 3 – paragraph 1 – point 48 [C] / point 44n [P] (new) | (44n) ‘testing in real world conditions’ means the temporary testing of an AI system for its intended purpose in real world conditions outside of a laboratory or otherwise simulated environment; | (48) ‘testing in real world conditions’ means the temporary testing of an AI system for its intended purpose in real world conditions outside of a laboratory or otherwise simulated environment with a view to gathering reliable and robust data and to assessing and verifying the conformity of the AI system with the requirements of this Regulation; testing in real world conditions shall not be considered as placing the AI system on the market or putting it into service within the meaning of this Regulation, provided that all conditions under Article 53 or Article 54a are fulfilled; | 'Testing in real world conditions' means the temporary testing of an AI system for its intended purpose in real world conditions outside of a laboratory or otherwise simulated environment, with the aim of gathering reliable and robust data and assessing and verifying the conformity of the AI system with the requirements of this Regulation. This testing in real world conditions shall not be considered as placing the AI system on the market or putting it into service within the meaning of this Regulation, provided that all conditions under Article 53 or Article 54a are fulfilled. |
Article 3 – paragraph 1 – point 44d (new) | (44d) "deep fake" means manipulated or synthetic audio, image or video content that would falsely appear to be authentic or truthful, and which features depictions of persons appearing to say or do things they did not say or do, produced using AI techniques, including machine learning and deep learning; | "Deep fake" refers to manipulated or synthetic audio, image or video content that falsely appears to be authentic or truthful. It features depictions of individuals appearing to say or do things they did not actually say or do. This content is produced using AI techniques, including but not limited to machine learning and deep learning. | ||
Article 3 – paragraph 1 – point 49 (new) | (49) ‘real world testing plan’ means a document that describes the objectives, methodology, geographical, population and temporal scope, monitoring, organisation and conduct of testing in real world conditions; | 'Real world testing plan' is defined as a document outlining the objectives, methodology, geographical, population and temporal scope, monitoring, organisation, and conduct of testing in real world conditions. | |
Article 3 – paragraph 1 – point 44e (new) | (44e) ‘widespread infringement’ means any act or omission contrary to Union law that protects the interest of individuals: (a) which has harmed or is likely to harm the collective interests of individuals residing in at least two Member States other than the Member State, in which: (i) the act or omission originated or took place; (ii) the provider concerned, or, where applicable, its authorised representative is established; or, (iii) the deployer is established, when the infringement is committed by the deployer; (b) which protects the interests of individuals, that have caused, cause or are likely to cause harm to the collective interests of individuals and that have common features, including the same unlawful practice, the same interest being infringed and that are occurring concurrently, committed by the same operator, in at least three Member States; | 'Widespread infringement' is defined as any act or omission that is contrary to Union law, which is designed to protect the interests of individuals. This infringement: (a) Has caused harm or is likely to cause harm to the collective interests of individuals residing in at least two Member States, excluding the Member State where: (i) The act or omission originated or occurred; (ii) The provider involved, or its authorised representative, is established; or, (iii) The deployer is established, in cases where the infringement is committed by the deployer. (b) Protects the interests of individuals and has caused, is causing, or is likely to cause harm to the collective interests of individuals. These infringements share common characteristics, such as the same unlawful practice, the same interest being infringed upon, and are occurring concurrently. They are committed by the same operator in at least three Member States. | |
Article 3 – paragraph 1 – point 44f (new) | (44f) ‘widespread infringement with a Union dimension’ means a widespread infringement that has harmed or is likely to harm the collective interests of individuals in at least two-thirds of the Member States, accounting, together, for at least two-thirds of the population of the Union; | 'Widespread infringement with a Union dimension' is defined as a widespread infringement that has caused or has the potential to cause harm to the collective interests of individuals in a minimum of two-thirds of the Member States, collectively representing at least two-thirds of the total population of the Union. | |
Article 3 – paragraph 1 – point 50 (new) | (50) ‘subject’ for the purpose of real world testing means a natural person who participates in testing in real world conditions; | For the purpose of real world testing, a 'subject' is defined as a natural person who participates in testing under real world conditions. | ||
Article 3 – paragraph 1 – point 51 (new) | (51) ‘informed consent’ means a subject's free and voluntary expression of his or her willingness to participate in a particular testing in real world conditions, after having been informed of all aspects of the testing that are relevant to the subject's decision to participate; in the case of minors and of incapacitated subjects, the informed consent shall be given by their legally designated representative; | 'Informed consent' is defined as the free and voluntary expression of a subject's willingness to participate in particular testing in real world conditions, after being fully informed of all relevant aspects of the testing that could influence their decision to participate. In situations involving minors or incapacitated subjects, the informed consent should be provided by their legally designated representative. | |
Article 3 – paragraph 1 – point 52 [C] / 44g [P] (new) | (44g) ‘regulatory sandbox’ means a controlled environment established by a public authority that facilitates the safe development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan under regulatory supervision; | (52) ‘AI regulatory sandbox’ means a concrete framework set up by a national competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real world conditions, an innovative AI system, pursuant to a specific plan for a limited time under regulatory supervision. | 'AI regulatory sandbox' means a controlled and concrete framework established by a national competent public authority, which facilitates and offers providers or prospective providers the safe and appropriate opportunity to develop, train, validate, and test innovative AI systems, where appropriate in real world conditions, for a limited time before their placement on the market or putting into service pursuant to a specific plan under regulatory supervision. |
Article 3 – paragraph 1 – point 44k (new) | (44k) ‘social scoring’ means evaluating or classifying natural persons based on their social behaviour, socio-economic status or known or predicted personal or personality characteristics; | 'Social scoring' is defined as the process of evaluating or classifying natural persons based on their social behaviour, socio-economic status, or known or predicted personal or personality characteristics. | |
Article 3 – paragraph 1 – point 44l (new) | (44l) ‘social behaviour’ means the way a natural person interacts with and influences other natural persons or society; | 'Social behaviour' is defined as the manner in which a natural person interacts with and influences other natural persons or society. | |
Article 3 – paragraph 1 – point 44m (new) | (44m) ‘state of the art’ means the developed stage of technical capability at a given time as regards products, processes and services, based on the relevant consolidated findings of science, technology and experience; | 'State of the art' refers to the current and advanced stage of technical capability at any given time in relation to products, processes, and services. This is based on the relevant consolidated findings of science, technology, and experience. | |
Article 4 | Article 4 Amendments to Annex I The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend the list of techniques and approaches listed in Annex I, in order to update that list to market and technological developments on the basis of characteristics that are similar to the techniques and approaches listed therein. | deleted | Article 4 Implementing acts In order to ensure uniform conditions for the implementation of this Regulation as regards machine learning approaches and logic- and knowledge-based approaches referred to in Article 3(1), the Commission may adopt implementing acts to specify the technical elements of those approaches, taking into account market and technological developments. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74(2). | Article 4 Amendments and Implementing Acts The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend the list of techniques and approaches listed in Annex I, in order to update that list to market and technological developments on the basis of characteristics that are similar to the techniques and approaches listed therein. Furthermore, to ensure uniform conditions for the implementation of this Regulation as regards machine learning approaches and logic- and knowledge-based approaches referred to in Article 3(1), the Commission may adopt implementing acts to specify the technical elements of those approaches, taking into account market and technological developments. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74(2). |
Article 4a (new) [P] | Article 4 a General principles applicable to all AI systems 1. All operators falling under this Regulation shall make their best efforts to develop and use AI systems or foundation models in accordance with the following general principles establishing a high-level framework that promotes a coherent human-centric European approach to ethical and trustworthy Artificial Intelligence, which is fully in line with the Charter as well as the values on which the Union is founded: a) ‘human agency and oversight’ means that AI systems shall be developed and used as a tool that serves people, respects human dignity and personal autonomy, and that is functioning in a way that can be appropriately controlled and overseen by humans; b) ‘technical robustness and safety’ means that AI systems shall be developed and used in a way to minimize unintended and unexpected harm as well as being robust in case of unintended problems and being resilient against attempts to alter the use or performance of the AI system so as to allow unlawful use by malicious third parties; c) ‘privacy and data governance’ means that AI systems shall be developed and used in compliance with existing privacy and data protection rules, while processing data that meets high standards in terms of quality and integrity; d) ‘transparency’ means that AI systems shall be developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they communicate or interact with an AI system as well as duly informing users of the capabilities and limitations of that AI system and affected persons about their rights;. e) ‘diversity, non-discrimination and fairness’ means that AI systems shall be developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law; f) ‘social and environmental well-being’ means that AI systems shall be developed and used in a sustainable and environmentally friendly manner as well as in a way to benefit all human beings, while monitoring and assessing the long-term impacts on the individual, society and democracy. 2. Paragraph 1 is without prejudice to obligations set up by existing Union and national law. For high-risk AI systems, the general principles are translated into and complied with by providers or deployers by means of the requirements set out in Articles 8 to 15, and the relevant obligations laid down in Chapter 3 of Title III of this Regulation. For foundation models, the general principles are translated into and complied with by providers by means of the requirements set out in Articles 28 to 28b. For all AI systems, the application of the principles referred to in paragraph 1 can be achieved, as applicable, through the provisions of Article 28, Article 52, or the application of harmonised standards, technical specifications, and codes of conduct as referred to in Article 69,without creating new obligations under this Regulation. 3. The Commission and the AI Office shall incorporate these guiding principles in standardisation requests as well as recommendations consisting in technical guidance to assist providers and deployers on how to develop and use AI systems. 
European Standardisation Organisations shall take the general principles referred to in paragraph 1of this Article into account as outcome-based objectives when developing the appropriate harmonised standards for high risk AI systems as referred to in Article 40(2b). | Article 4 a General principles applicable to all AI systems 1. All operators subject to this Regulation shall strive to develop and use AI systems or foundation models in accordance with the following general principles, which establish a high-level framework that encourages a consistent, human-centric European approach to ethical and trustworthy Artificial Intelligence, in full alignment with the Charter and the values upon which the Union is founded: a) 'human agency and oversight' implies that AI systems should be developed and used as tools that serve people, respect human dignity and personal autonomy, and function in a manner that can be appropriately controlled and overseen by humans; b) 'technical robustness and safety' implies that AI systems should be developed and used in a way that minimizes unintended and unexpected harm, ensures robustness in the face of unintended problems, and is resilient against attempts to alter the use or performance of the AI system for unlawful use by malicious third parties; c) 'privacy and data governance' implies that AI systems should be developed and used in compliance with existing privacy and data protection rules, while processing data that meets high standards of quality and integrity; d) 'transparency' implies that AI systems should be developed and used in a way that allows for appropriate traceability and explainability, while making humans aware that they are communicating or interacting with an AI system, and duly informing users of the capabilities and limitations of that AI system and affected persons about their rights; e) 'diversity, non-discrimination and fairness' implies that AI systems should be developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law; f) 'social and environmental well-being' implies that AI systems should be developed and used in a sustainable and environmentally friendly manner, and in a way that benefits all human beings, while monitoring and assessing the long-term impacts on the individual, society and democracy. 2. Paragraph 1 does not prejudice obligations established by existing Union and national law. For high-risk AI systems, the general principles are translated into and complied with by providers or deployers through the requirements set out in Articles 8 to 15, and the relevant obligations laid down in Chapter 3 of Title III of this Regulation. For foundation models, the general principles are translated into and complied with by providers through the requirements set out in Articles 28 to 28b. For all AI systems, the application of the principles referred to in paragraph 1 can be achieved, as applicable, through the provisions of Article 28, Article 52, or the application of harmonised standards, technical specifications, and codes of conduct as referred to in Article 69, without creating new obligations under this Regulation. 3. The Commission and the AI Office shall incorporate these guiding principles in standardisation requests as well as recommendations consisting in technical guidance to assist providers and deployers on how to develop and use AI systems. 
European Standardisation Organisations shall take the general principles referred to in paragraph 1 of this Article into account as outcome-based objectives when developing the appropriate harmonised standards for high risk AI systems as referred to in Article 40(2b). |
Article 4b (new) [P] | Article 4 b AI literacy 1. When implementing this Regulation, the Union and the Member States shall promote measures for the development of a sufficient level of AI literacy, across sectors and taking into account the different needs of groups of providers, deployers and affected persons concerned, including through education and training, skilling and reskilling programmes and while ensuring proper gender and age balance, in view of allowing a democratic control of AI systems 2. Providers and deployers of AI systems shall take measures to ensure a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on which the AI systems are to be used. 3. Such literacy measures shall consist, in particular, of the teaching of basic notions and skills about AI systems and their functioning, including the different types of products and uses, their risks and benefits. 4. A sufficient level of AI literacy is one that contributes, as necessary, to the ability of providers and deployers to ensure compliance and enforcement of this Regulation. | Article 4 b AI literacy 1. In the implementation of this Regulation, the Union and the Member States shall advocate for the development of an adequate level of AI literacy. This should be across sectors and should consider the varying needs of groups of providers, deployers, and affected persons. This can be achieved through education and training, skilling and reskilling programmes, ensuring a balanced representation of gender and age. The aim is to allow democratic control of AI systems. 2. Providers and deployers of AI systems are required to take measures to ensure their staff and other persons involved in the operation and use of AI systems on their behalf have a sufficient level of AI literacy. This should take into account their technical knowledge, experience, education, training, and the context in which the AI systems are to be used. Consideration should also be given to the persons or groups of persons on which the AI systems are to be used. 3. The literacy measures should primarily consist of teaching basic notions and skills about AI systems and their functioning. This includes understanding the different types of products and uses, their risks, and benefits. 4. A sufficient level of AI literacy is defined as one that contributes, as necessary, to the ability of providers and deployers to ensure compliance and enforcement of this Regulation. |
TITLE Ia (new) | TITLE Ia GENERAL PURPOSE AI SYSTEMS | None | ||
Article 4a (new) [C] | Article 4a Compliance of general purpose AI systems with this Regulation 1. Without prejudice to Articles 5, 52, 53 and 69 of this Regulation, general purpose AI systems shall only comply with the requirements and obligations set out in Article 4b. 2. Such requirements and obligations shall apply irrespective of whether the general purpose AI system is placed on the market or put into service as a pre-trained model and whether further fine-tuning of the model is to be performed by the user of the general purpose AI system. | Article 4a Compliance of general purpose AI systems with this Regulation 1. Without prejudice to Articles 5, 52, 53 and 69 of this Regulation, general purpose AI systems shall only comply with the requirements and obligations set out in Article 4b. 2. Such requirements and obligations shall apply irrespective of whether the general purpose AI system is placed on the market or put into service as a pre-trained model and whether further fine-tuning of the model is to be performed by the user of the general purpose AI system. |
Article 4b (new) [C] | Article 4b Requirements for general purpose AI systems and obligations for providers of such systems 1. General purpose AI systems which may be used as high risk AI systems or as components of high risk AI systems in the meaning of Article 6, shall comply with the requirements established in Title III, Chapter 2 of this Regulation as from the date of application of the implementing acts adopted by the Commission in accordance with the examination procedure referred to in Article 74(2) no later than 18 months after the entry into force of this Regulation. Those implementing acts shall specify and adapt the application of the requirements established in Title III, Chapter 2 to general purpose AI systems in the light of their characteristics, technical feasibility, specificities of the AI value chain and of market and technological developments. When fulfilling those requirements, the generally acknowledged state of the art shall be taken into account. 2. Providers of general purpose AI systems referred to in paragraph 1 shall comply, as from the date of application of the implementing acts referred to in paragraph 1, with the obligations set out in Articles 16aa, 16e, 16f, 16g, 16i, 16j, 25, 48 and 61. 3. For the purpose of complying with the obligations set out in Article 16e, providers shall follow the conformity assessment procedure based on internal control set out in Annex VI, points 3 and 4. 4. Providers of such systems shall also keep the technical documentation referred to in Article 11 at the disposal of the national competent authorities for a period ending ten years after the general purpose AI system is placed on the Union market or put into service in the Union. 5. Providers of general purpose AI systems shall cooperate with and provide the necessary information to other providers intending to put into service or place such systems on the Union market as high-risk AI systems or as components of high-risk AI systems, with a view to enabling the latter to comply with their obligations under this Regulation. Such cooperation between providers shall preserve, as appropriate, intellectual property rights, and confidential business information or trade secrets in accordance with Article 70. In order to ensure uniform conditions for the implementation of this Regulation as regards the information to be shared by the providers of general purpose AI systems, the Commission may adopt implementing acts in accordance with the examination procedure referred to in Article 74(2). 6. In complying with the requirements and obligations referred to in paragraphs 1, 2 and 3: - any reference to the intended purpose shall be understood as referring to possible use of the general purpose AI systems as high risk AI systems or as components of AI high risk systems in the meaning of Article 6; - any reference to the requirements for high-risk AI systems in Chapter II, Title III shall be understood as referring only to the requirements set out in the present Article. | Article 4b Requirements and Obligations for Providers of General Purpose AI Systems 1. General purpose AI systems, which may be utilized as high-risk AI systems or as components of high-risk AI systems as defined in Article 6, must adhere to the requirements outlined in Title III, Chapter 2 of this Regulation. 
This compliance must be effective from the date of application of the implementing acts adopted by the Commission in accordance with the examination procedure referred to in Article 74(2), no later than 18 months after the enforcement of this Regulation. These implementing acts will specify and adapt the application of the requirements established in Title III, Chapter 2 to general purpose AI systems, considering their characteristics, technical feasibility, specificities of the AI value chain, and market and technological developments. The generally acknowledged state of the art should be considered when fulfilling these requirements. 2. Providers of general purpose AI systems, as mentioned in paragraph 1, must comply with the obligations outlined in Articles 16aa, 16e, 16f, 16g, 16i, 16j, 25, 48, and 61, effective from the date of application of the implementing acts referred to in paragraph 1. 3. To comply with the obligations set out in Article 16e, providers must follow the conformity assessment procedure based on internal control outlined in Annex VI, points 3 and 4. 4. Providers must also keep the technical documentation referred to in Article 11 available for the national competent authorities for a period ending ten years after the general purpose AI system is placed on the Union market or put into service in the Union. 5. Providers of general purpose AI systems must cooperate with and provide necessary information to other providers intending to put such systems into service or place them on the Union market as high-risk AI systems or as components of high-risk AI systems. This is to enable the latter to comply with their obligations under this Regulation. This cooperation between providers must preserve intellectual property rights, confidential business information, or trade secrets in accordance with Article 70. To ensure uniform conditions for the implementation of this Regulation regarding the information to be shared by the providers of general purpose AI systems, the Commission may adopt implementing acts in accordance with the examination procedure referred to in Article 74(2). 6. In complying with the requirements and obligations referred to in paragraphs 1, 2, and 3: - any reference to the intended purpose should be understood as referring to the possible use of the general purpose AI systems as high-risk AI systems or as components of high-risk AI systems as defined in Article 6; - any reference to the requirements for high-risk AI systems in Chapter II, Title III should be understood as referring only to the requirements outlined in this Article. |
Article 4c (new) | Article 4c Exceptions to Article 4b 1. Article 4b shall not apply when the provider has explicitly excluded all high-risk uses in the instructions of use or information accompanying the general purpose AI system. 2. Such exclusion shall be made in good faith and shall not be deemed justified if the provider has sufficient reasons to consider that the system may be misused. 3. When the provider detects or is informed about market misuse they shall take all necessary and proportionate measures to prevent such further misuse, in particular taking into account the scale of the misuse and the seriousness of the associated risks. | Article 4c Exceptions to Article 4b 1. Article 4b shall not apply when the provider has explicitly excluded all high-risk uses in the instructions of use or information accompanying the general purpose AI system. 2. Such exclusion shall be made in good faith and shall not be deemed justified if the provider has sufficient reasons to consider that the system may be misused. 3. When the provider detects or is informed about market misuse they shall take all necessary and proportionate measures to prevent such further misuse, in particular taking into account the scale of the misuse and the seriousness of the associated risks. |
||
TITLE II | TITLE II PROHIBITED ARTIFICIAL INTELLIGENCE PRACTICES | TITLE II PROHIBITED ARTIFICIAL INTELLIGENCE PRACTICES | TITLE II PROHIBITED ARTIFICIAL INTELLIGENCE PRACTICES | TITLE II PROHIBITED ARTIFICIAL INTELLIGENCE PRACTICES |
Article 5 – paragraph 1 – introductory part | 1. The following artificial intelligence practices shall be prohibited: | 1. The following artificial intelligence practices shall be prohibited: | 1. The following artificial intelligence practices shall be prohibited: | None |
Article 5 – paragraph 1 – point a | (a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm; | (a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective to or the effect of materially distorting a person’s or a group of persons’ behaviour by appreciably impairing the person’s ability to make an informed decision, thereby causing the person to take a decision that that person would not have otherwise taken in a manner that causes or is likely to cause that person, another person or group of persons significant harm; The prohibition of AI system that deploys subliminal techniques referred to in the first sub-paragraph shall not apply to AI systems intended to be used for approved therapeutical purposes on the basis of specific informed consent of the individuals that are exposed to them or, where applicable, of their legal guardian; | (a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness with the objective to or the effect of materially distorting a person’s behaviour in a manner that causes or is reasonably likely to cause that person or another person physical or psychological harm; | (a) The placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness or employs purposefully manipulative or deceptive techniques, with the objective to or the effect of materially distorting a person’s or a group of persons’ behaviour by appreciably impairing the person’s ability to make an informed decision, thereby causing the person to take a decision that that person would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant physical or psychological harm; The prohibition of AI system that deploys subliminal techniques referred to in the first sub-paragraph shall not apply to AI systems intended to be used for approved therapeutical purposes on the basis of specific informed consent of the individuals that are exposed to them or, where applicable, of their legal guardian; |
Article 5 – paragraph 1 – point b | (b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm; | (b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a person or a specific group of persons, including characteristics of such person’s or a such group’s known or predicted personality traits or social or economic situation age, physical or mental ability with the objective or to the effect of materially distorting the behaviour of that person or a person pertaining to that group in a manner that causes or is likely to cause that person or another person significant harm;; | (b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, disability or a specific social or economic situation, with the objective to or the effect of materially distorting the behaviour of a person pertaining to that group in a manner that causes or is reasonably likely to cause that person or another person physical or psychological harm; | (b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a person or a specific group of persons, including characteristics of such person’s or a such group’s known or predicted personality traits, age, physical or mental disability, or specific social or economic situation, with the objective or to the effect of materially distorting the behaviour of that person or a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm; |
Article 5 – paragraph 1 – point ba (new) | (ba) the placing on the market, putting into service or use of biometric categorisation systems that categorise natural persons according to sensitive or protected attributes or characteristics or based on the inference of those attributes or characteristics. This prohibition shall not apply to AI systems intended to be used for approved therapeutical purposes on the basis of specific informed consent of the individuals that are exposed to them or, where applicable, of their legal guardian. | The marketing, deployment, or utilisation of biometric categorisation systems that classify individuals based on sensitive or protected attributes or characteristics, or the inference of such attributes or characteristics, is prohibited. This prohibition does not extend to AI systems designed for approved therapeutic purposes, provided that specific informed consent is obtained from the individuals exposed to these systems, or, where applicable, their legal guardian. | |
Article 5 – paragraph 1 – point c | (c) the placing on the market, putting into service or use of AI systems by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following: (i) detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected; (ii) detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity; | (c) the placing on the market, putting into service or use of AI systems for the social scoring evaluation or classification of natural persons or groups thereof over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following: (i) detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts that are unrelated to the contexts in which the data was originally generated or collected; (ii) detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity; | (c) the placing on the market, putting into service or use of AI systems for the evaluation or classification of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following: (i) detrimental or unfavourable treatment of certain natural persons or groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected; (ii) detrimental or unfavourable treatment of certain natural persons or groups thereof that is unjustified or disproportionate to their social behaviour or its gravity; | (c) the placing on the market, putting into service or use of AI systems by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons or groups thereof over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following: (i) detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected; (ii) detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity; |
Article 5 – paragraph 1 – point d | (d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in as far as such use is strictly necessary for one of the following objectives: (i) the targeted search for specific potential victims of crime, including missing children; (ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack; (iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence referred to in Article 2(2) of Council Framework Decision 2002/584/JHA62 and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, as determined by the law of that Member State. –––– 62. Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1). | (d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces; (i) deleted (ii) deleted (iii) deleted | (d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces by law enforcement authorities or on their behalf for the purpose of law enforcement, unless and in as far as such use is strictly necessary for one of the following objectives: (i) the targeted search for specific potential victims of crime; (ii) the prevention of a specific and substantial threat to the critical infrastructure, life, health or physical safety of natural persons or the prevention of terrorist attacks; (iii) the localisation or identification of a natural person for the purposes of conducting a criminal investigation, prosecution or executing a criminal penalty for offences, referred to in Article 2(2) of Council Framework Decision 2002/584/JHA32 and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, or other specific offences punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least five years, as determined by the law of that Member State. –––– 32. Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1). | (d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces by law enforcement authorities or on their behalf for the purpose of law enforcement, unless and in as far as such use is strictly necessary for one of the following objectives: (i) the targeted search for specific potential victims of crime, including missing children; (ii) the prevention of a specific, substantial and imminent threat to the life, health or physical safety of natural persons, the critical infrastructure, or of a terrorist attack; (iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence referred to in Article 2(2) of Council Framework Decision 2002/584/JHA and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, or other specific offences punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least five years, as determined by the law of that Member State. 
–––– Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1). |
Article 5 – paragraph 1 – point da (new) | (da) the placing on the market, putting into service or use of an AI system for making risk assessments of natural persons or groups thereof in order to assess the risk of a natural person for offending or reoffending or for predicting the occurrence or reoccurrence of an actual or potential criminal or administrative offence based on profiling of a natural person or on assessing personality traits and characteristics, including the person’s location, or past criminal behaviour of natural persons or groups of natural persons; | The use of an AI system for making risk assessments of natural persons or groups thereof in order to assess the risk of a natural person for offending or reoffending or for predicting the occurrence or reoccurrence of an actual or potential criminal or administrative offence based on profiling of a natural person or on assessing personality traits and characteristics, including the person’s location, or past criminal behaviour of natural persons or groups of natural persons, should be regulated when it comes to placing on the market, putting into service or use. | ||
Article 5 – paragraph 1 – point db (new) | (db) The placing on the market, putting into service or use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage; | The implementation, utilisation, or marketing of AI systems that generate or enlarge facial recognition databases through the indiscriminate extraction of facial images from the internet or CCTV footage. | |
Article 5 – paragraph 1 – point dc (new) | (dc) the placing on the market, putting into service or use of AI systems to infer emotions of a natural person in the areas of law enforcement, border management, in workplace and education institutions. | The implementation, marketing, and utilisation of AI systems for the purpose of inferring the emotions of a natural person in the sectors of law enforcement, border management, workplaces, and educational institutions. | |
Article 5 – paragraph 1 – point dd (new) | (dd) the putting into service or use of AI systems for the analysis of recorded footage of publicly accessible spaces through ‘post’ remote biometric identification systems, unless they are subject to a pre-judicial authorisation in accordance with Union law and strictly necessary for the targeted search connected to a specific serious criminal offense as defined in Article 83(1) of TFEU that already took place for the purpose of law enforcement. | The use of AI systems for the analysis of recorded footage of publicly accessible spaces through 'post' remote biometric identification systems is permitted only where they are subject to a pre-judicial authorisation in accordance with Union law and are strictly necessary for the targeted search connected to a specific serious criminal offence, as defined in Article 83(1) of TFEU, that has already occurred, for the purpose of law enforcement. | |
Article 5 – paragraph 1a (new) | 1a. This Article shall not affect the prohibitions that apply where an artificial intelligence practice infringes another Union law, including Union law on data protection, non discrimination, consumer protection or competition; | This Article shall not affect the prohibitions that apply where an artificial intelligence practice infringes another Union law, including Union law on data protection, non discrimination, consumer protection or competition. | ||
Article 5 – paragraph 2 | 2. The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall take into account the following elements: (a) the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm caused in the absence of the use of the system; (b) the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall comply with necessary and proportionate safeguards and conditions in relation to the use, in particular as regards the temporal, geographic and personal limitations. | deleted | 2. The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall take into account the following elements: (a) the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm caused in the absence of the use of the system; (b) the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall comply with necessary and proportionate safeguards and conditions in relation to the use, in particular as regards the temporal, geographic and personal limitations. | 2. The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall take into account the following elements: (a) the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm caused in the absence of the use of the system; (b) the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall comply with necessary and proportionate safeguards and conditions in relation to the use, in particular as regards the temporal, geographic and personal limitations. |
Article 5 – paragraph 3 | 3. As regards paragraphs 1, point (d) and 2, each individual use for the purpose of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or by an independent administrative authority of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 4. However, in a duly justified situation of urgency, the use of the system may be commenced without an authorisation and the authorisation may be requested only during or after the use. The competent judicial or administrative authority shall only grant the authorisation where it is satisfied, based on objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system at issue is necessary for and proportionate to achieving one of the objectives specified in paragraph 1, point (d), as identified in the request. In deciding on the request, the competent judicial or administrative authority shall take into account the elements referred to in paragraph 2. | deleted | 3. As regards paragraphs 1, point (d) and 2, each use for the purpose of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or by an independent administrative authority of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 4. However, in a duly justified situation of urgency, the use of the system may be commenced without an authorisation provided that, such authorisation shall be requested without undue delay during use of the AI system, and if such authorisation is rejected, its use shall be stopped with immediate effect. The competent judicial or administrative authority shall only grant the authorisation where it is satisfied, based on objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system at issue is necessary for and proportionate to achieving one of the objectives specified in paragraph 1, point (d), as identified in the request. In deciding on the request, the competent judicial or administrative authority shall take into account the elements referred to in paragraph 2. | 3. As regards paragraphs 1, point (d) and 2, each individual use for the purpose of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or by an independent administrative authority of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 4. However, in a duly justified situation of urgency, the use of the system may be commenced without an authorisation and the authorisation may be requested only during or after the use, provided that such authorisation shall be requested without undue delay during use of the AI system, and if such authorisation is rejected, its use shall be stopped with immediate effect. 
The competent judicial or administrative authority shall only grant the authorisation where it is satisfied, based on objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system at issue is necessary for and proportionate to achieving one of the objectives specified in paragraph 1, point (d), as identified in the request. In deciding on the request, the competent judicial or administrative authority shall take into account the elements referred to in paragraph 2. |
Article 5 – paragraph 4 | 4. A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement within the limits and under the conditions listed in paragraphs 1, point (d), 2 and 3. That Member State shall lay down in its national law the necessary detailed rules for the request, issuance and exercise of, as well as supervision relating to, the authorisations referred to in paragraph 3. Those rules shall also specify in respect of which of the objectives listed in paragraph 1, point (d), including which of the criminal offences referred to in point (iii) thereof, the competent authorities may be authorised to use those systems for the purpose of law enforcement. | deleted | 4. A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement within the limits and under the conditions listed in paragraphs 1, point (d), 2 and 3. That Member State shall lay down in its national law the necessary detailed rules for the request, issuance and exercise of, as well as supervision and reporting relating to, the authorisations referred to in paragraph 3. Those rules shall also specify in respect of which of the objectives listed in paragraph 1, point (d), including which of the criminal offences referred to in point (iii) thereof, the competent authorities may be authorised to use those systems for the purpose of law enforcement. | 4. A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement within the limits and under the conditions listed in paragraphs 1, point (d), 2 and 3. That Member State shall lay down in its national law the necessary detailed rules for the request, issuance and exercise of, as well as supervision and reporting relating to, the authorisations referred to in paragraph 3. Those rules shall also specify in respect of which of the objectives listed in paragraph 1, point (d), including which of the criminal offences referred to in point (iii) thereof, the competent authorities may be authorised to use those systems for the purpose of law enforcement. |
TITLE III | TITLE III HIGH-RISK AI SYSTEMS | TITLE III HIGH-RISK AI SYSTEMS | TITLE III HIGH-RISK AI SYSTEMS | TITLE III HIGH-RISK AI SYSTEMS |
Chapter 1 | CHAPTER 1 CLASSIFICATION OF AI SYSTEMS AS HIGH-RISK | CHAPTER 1 CLASSIFICATION OF AI SYSTEMS AS HIGH-RISK | CHAPTER 1 CLASSIFICATION OF AI SYSTEMS AS HIGH-RISK | CHAPTER 1 CLASSIFICATION OF AI SYSTEMS AS HIGH-RISK |
Article 6 | Article 6 Classification rules for high-risk AI systems | Article 6 Classification rules for high-risk AI systems | Article 6 Classification rules for high-risk AI systems | Article 6 Classification rules for high-risk AI systems |
Article 6 – paragraph 1 | 1. Irrespective of whether an AI system is placed on the market or put into service independently from the products referred to in points (a) and (b), that AI system shall be considered high-risk where both of the following conditions are fulfilled: (a) the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II; (b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II. | 1. Irrespective of whether an AI system is placed on the market or put into service independently from the products referred to in points (a) and (b), that AI system shall be considered high-risk where both of the following conditions are fulfilled: (a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation law listed in Annex II; (b) the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment related to risks for health and safety, with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation law listed in Annex II; | 1. An AI system that is itself a product covered by the Union harmonisation legislation listed in Annex II shall be considered as high risk if it is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the above mentioned legislation. | 1. Regardless of whether an AI system is placed on the market or put into service independently from the products referred to in points (a) and (b), that AI system shall be considered high-risk where both of the following conditions are fulfilled: (a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex II; (b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment related to risks for health and safety, with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II. |
Article 6 – paragraph 2 | 2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk. | 2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems falling under one or more of the critical areas and use cases referred to in Annex III shall be considered high-risk if they pose a significant risk of harm to the health, safety or fundamental rights of natural persons. Where an AI system falls under Annex III point 2, it shall be considered to be high-risk if it poses a significant risk of harm to the environment. The Commission shall, six months prior to the entry into force of this Regulation, after consulting the AI Office and relevant stakeholders, provide guidelines clearly specifying the circumstances where the output of AI systems referred to in Annex III would pose a significant risk of harm to the health, safety or fundamental rights of natural persons or cases in which it would not. | 2. An AI system intended to be used as a safety component of a product covered by the legislation referred to in paragraph 1 shall be considered as high risk if it is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to above mentioned legislation. This provision shall apply irrespective of whether the AI system is placed on the market or put into service independently from the product. | 2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems falling under one or more of the critical areas and use cases referred to in Annex III shall be considered high-risk if they pose a significant risk of harm to the health, safety or fundamental rights of natural persons, or if they are intended to be used as a safety component of a product covered by the legislation referred to in paragraph 1 and are required to undergo a third-party conformity assessment. Where an AI system falls under Annex III point 2, it shall be considered to be high-risk if it poses a significant risk of harm to the environment. The Commission shall, six months prior to the entry into force of this Regulation, after consulting the AI Office and relevant stakeholders, provide guidelines clearly specifying the circumstances where the output of AI systems referred to in Annex III would pose a significant risk of harm to the health, safety or fundamental rights of natural persons or cases in which it would not. This provision shall apply irrespective of whether the AI system is placed on the market or put into service independently from the product. |
Article 6 – paragraph 3 (new) [C] | 3. AI systems referred to in Annex III shall be considered high-risk unless the output of the system is purely accessory in respect of the relevant action or decision to be taken and is not therefore likely to lead to a significant risk to the health, safety or fundamental rights. In order to ensure uniform conditions for the implementation of this Regulation, the Commission shall, no later than one year after the entry into force of this Regulation, adopt implementing acts to specify the circumstances where the output of AI systems referred to in Annex III would be purely accessory in respect of the relevant action or decision to be taken. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74, paragraph 2. | AI systems mentioned in Annex III will be classified as high-risk unless the system's output is merely supplementary to the relevant action or decision to be taken and does not pose a significant risk to health, safety, or fundamental rights. To ensure uniform conditions for the implementation of this Regulation, the Commission will adopt implementing acts within one year after this Regulation comes into force. These acts will detail the circumstances under which the output of AI systems mentioned in Annex III would be considered purely supplementary to the relevant action or decision. The adoption of these implementing acts will follow the examination procedure outlined in Article 74, paragraph 2. | ||
Article 6 – paragraph 2a (new) [P] | 2a. Where providers falling under one or more of the critical areas and use cases referred to in Annex III consider that their AI system does not pose a significant risk as described in paragraph 2, they shall submit a reasoned notification to the national supervisory authority that they are not subject to the requirements of Title III Chapter 2 of this Regulation. Where the AI system is intended to be used in two or more Member States, that notification shall be addressed to the AI Office. Without prejudice to Article 65, the national supervisory authority shall review and reply to the notification, directly or via the AI Office, within three months if they deem the AI system to be misclassified. | Providers operating in one or more of the critical areas and use cases referred to in Annex III, who believe that their AI system does not pose a significant risk as described in paragraph 2, are required to submit a reasoned notification to the national supervisory authority stating that they are not subject to the requirements of Title III Chapter 2 of this Regulation. In cases where the AI system is intended for use in two or more Member States, the notification should be addressed to the AI Office. The national supervisory authority, without prejudice to Article 65, is responsible for reviewing and responding to the notification, either directly or through the AI Office, within a three-month period if they consider the AI system to be misclassified. | ||
Article 6 – paragraph 2b (new) | 2b. Providers that misclassify their AI system as not subject to the requirements of Title III Chapter 2 of this Regulation and place it on the market before the deadline for objection by national supervisory authorities shall be subject to fines pursuant to Article 71. | Providers that incorrectly categorize their AI system as not being subject to the requirements of Title III Chapter 2 of this Regulation and subsequently place it on the market before the deadline for objection by national supervisory authorities shall be liable to penalties as outlined in Article 71. | ||
Article 6 – paragraph 2c (new) | 2 c. National supervisory authorities shall submit a yearly report to the AI Office detailing the number of notifications received, the related high-risk areas at stake and the decisions taken concerning received notifications | National supervisory authorities are required to annually submit a report to the AI Office. This report should detail the number of notifications received, identify the high-risk areas involved, and outline the decisions made in response to these notifications. | ||
Article 7 | Article 7 Amendments to Annex III | Article 7 Amendments to Annex III | Article 7 Amendments to Annex III | Article 7 Amendments to Annex III |
Article 7 – paragraph 1 – introductory part | 1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to update the list in Annex III by adding high-risk AI systems where both of the following conditions are fulfilled: | 1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend Annex III by adding or modifying areas or use-cases of high-risk AI systems where these pose a significant risk of harm to health and safety, or an adverse impact on fundamental rights, to the environment, or to democracy and the rule of law, and that risk is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III. | 1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend the list in Annex III by adding high-risk AI systems where both of the following conditions are fulfilled: | 1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to update and amend the list in Annex III by adding or modifying areas or use-cases of high-risk AI systems where both of the following conditions are fulfilled: the systems pose a significant risk of harm to health and safety, or an adverse impact on fundamental rights, to the environment, or to democracy and the rule of law; and that risk is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III. |
Article 7 – paragraph 1 – point a | (a) the AI systems are intended to be used in any of the areas listed in points 1 to 8 of Annex III; | deleted | (a) the AI systems are intended to be used in any of the areas listed in points 1 to 8 of Annex III; | (a) the AI systems are intended to be used in any of the areas listed in points 1 to 8 of Annex III; |
Article 7 – paragraph 1 – point b | (b) the AI systems pose a risk of harm to the health and safety, or a risk of adverse impact on fundamental rights, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III. | deleted | (b) the AI systems pose a risk of harm to the health and safety, or a risk of adverse impact on fundamental rights, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III. | (b) the AI systems pose a risk of harm to the health and safety, or a risk of adverse impact on fundamental rights, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III. |
Article 7 – paragraph 1a (new) | 1a. The Commission is also empowered to adopt delegated acts in accordance with Article 73 to remove use-cases of high-risk AI systems from the list in Annex III if the conditions referred to in paragraph 1 no longer apply; | The Commission is authorized to adopt delegated acts in accordance with Article 73 to remove use-cases of high-risk AI systems from the list in Annex III, if the conditions mentioned in paragraph 1 no longer apply. | ||
Article 7 – paragraph 2 – introductory part | 2. When assessing for the purposes of paragraph 1 whether an AI system poses a risk of harm to the health and safety or a risk of adverse impact on fundamental rights that is equivalent to or greater than the risk of harm posed by the high-risk AI systems already referred to in Annex III, the Commission shall take into account the following criteria: | 2. When assessing an AI system for the purposes of paragraph 1 and 1a the Commission shall take into account the following criteria: | 2. When assessing for the purposes of paragraph 1 whether an AI system poses a risk of harm to the health and safety or a risk of adverse impact on fundamental rights that is equivalent to or greater than the risk of harm posed by the high-risk AI systems already referred to in Annex III, the Commission shall take into account the following criteria: | 2. When assessing, for the purposes of paragraphs 1 and 1a, whether an AI system poses a risk of harm to the health and safety or a risk of adverse impact on fundamental rights that is equivalent to or greater than the risk of harm posed by the high-risk AI systems already referred to in Annex III, the Commission shall take into account the following criteria: |
Article 7 – paragraph 2 – point a | (a) the intended purpose of the AI system; | (a) the intended purpose of the AI system; | (a) the intended purpose of the AI system; | (a) the intended purpose of the AI system; |
Article 7 – paragraph 2 – point aa (new) | (aa) the general capabilities and functionalities of the AI system independent of its intended purpose; | The AI system should be evaluated based on its general capabilities and functionalities, irrespective of its intended purpose. | ||
Article 7 – paragraph 2 – point b | (b) the extent to which an AI system has been used or is likely to be used; | (b) the extent to which an AI system has been used or is likely to be used; | (b) the extent to which an AI system has been used or is likely to be used; | (b) the extent to which an AI system has been used or is likely to be used; |
Article 7 – paragraph 2 – point ba (new) | (ba) the nature and amount of the data processed and used by the AI system; | The nature and volume of the data processed and utilized by the AI system. | ||
Article 7 – paragraph 2 – point bb (new) | (bb) the extent to which the AI system acts autonomously; | The extent to which the AI system operates autonomously. | ||
Article 7 – paragraph 2 – point c | (c) the extent to which the use of an AI system has already caused harm to the health and safety or adverse impact on the fundamental rights or has given rise to significant concerns in relation to the materialisation of such harm or adverse impact, as demonstrated by reports or documented allegations submitted to national competent authorities; | (c) the extent to which the use of an AI system has already caused harm to health and safety, has had an adverse impact on fundamental rights, the environment, democracy and the rule of law or has given rise to significant concerns in relation to the likelihood of such harm or adverse impact, as demonstrated for example by reports or documented allegations submitted to national supervisory authorities, to the Commission, to the AI Office, to the EDPS, or to the European Union Agency for Fundamental Rights; | (c) the extent to which the use of an AI system has already caused harm to the health and safety or adverse impact on the fundamental rights or has given rise to significant concerns in relation to the materialisation of such harm or adverse impact, as demonstrated by reports or documented allegations submitted to national competent authorities; | (c) the extent to which the use of an AI system has already caused harm to health and safety, has had an adverse impact on fundamental rights, the environment, democracy and the rule of law or has given rise to significant concerns in relation to the likelihood of such harm or adverse impact, as demonstrated by reports or documented allegations submitted to national competent authorities, to the Commission, to the AI Office, to the EDPS, or to the European Union Agency for Fundamental Rights; |
Article 7 – paragraph 2 – point d | (d) the potential extent of such harm or such adverse impact, in particular in terms of its intensity and its ability to affect a plurality of persons; | (d) the potential extent of such harm or such adverse impact, in particular in terms of its intensity and its ability to affect a plurality of persons or to disproportionately affect a particular group of persons; | (d) the potential extent of such harm or such adverse impact, in particular in terms of its intensity and its ability to affect a plurality of persons; | (d) the potential extent of such harm or such adverse impact, in particular in terms of its intensity and its ability to affect a plurality of persons or to disproportionately affect a particular group of persons; |
Article 7 – paragraph 2 – point e | (e) the extent to which potentially harmed or adversely impacted persons are dependent on the outcome produced with an AI system, in particular because for practical or legal reasons it is not reasonably possible to opt-out from that outcome; | (e) the extent to which potentially harmed or adversely impacted persons are dependent on the output produced involving an AI system, and that output is purely accessory in respect of the relevant action or decision to be taken, in particular because for practical or legal reasons it is not reasonably possible to opt-out from that output; | (e) the extent to which potentially harmed or adversely impacted persons are dependent on the outcome produced with an AI system, in particular because for practical or legal reasons it is not reasonably possible to opt-out from that outcome; | (e) the extent to which potentially harmed or adversely impacted persons are dependent on the outcome or output produced with or involving an AI system, in particular because for practical or legal reasons it is not reasonably possible to opt-out from that outcome or output; and that output is purely accessory in respect of the relevant action or decision to be taken. |
Article 7 – paragraph 2 – point ea (new) | (ea) the potential misuse and malicious use of the AI system and of the technology underpinning it; | The potential misuse and malicious use of the AI system and of the technology underpinning it should be thoroughly examined and addressed. | ||
Article 7 – paragraph 2 – point f | (f) the extent to which potentially harmed or adversely impacted persons are in a vulnerable position in relation to the user of an AI system, in particular due to an imbalance of power, knowledge, economic or social circumstances, or age; | (f) the extent to which there is an imbalance of power, or the potentially harmed or adversely impacted persons are in a vulnerable position in relation to the user of an AI system, in particular due to status, authority, knowledge, economic or social circumstances, or age; | (f) the extent to which potentially harmed or adversely impacted persons are in a vulnerable position in relation to the user of an AI system, in particular due to an imbalance of power, knowledge, economic or social circumstances, or age; | (f) the extent to which there is an imbalance of power, or the potentially harmed or adversely impacted persons are in a vulnerable position in relation to the user of an AI system, in particular due to status, authority, knowledge, economic or social circumstances, or age; |
Article 7 – paragraph 2 – point g | (g) the extent to which the outcome produced with an AI system is easily reversible, whereby outcomes having an impact on the health or safety of persons shall not be considered as easily reversible; | (g) the extent to which the outcome produced involving an AI system is easily reversible or remedied, whereby outcomes having an adverse impact on health, safety, fundamental rights of persons, the environment, or on democracy and rule of law shall not be considered as easily reversible; | (g) the extent to which the outcome produced with an AI system is not easily reversible, whereby outcomes having an impact on the health or safety of persons shall not be considered as easily reversible; | (g) the extent to which the outcome produced with an AI system is easily reversible or remedied, whereby outcomes having an adverse impact on health, safety, fundamental rights of persons, the environment, or on democracy and rule of law shall not be considered as easily reversible; |
Article 7 – paragraph 2 – point ga (new) | (ga) the extent of the availability and use of effective technical solutions and mechanisms for the control, reliability and corrigibility of the AI system; | The extent of the availability and use of effective technical solutions and mechanisms for the control, reliability, and corrigibility of the AI system. | ||
Article 7 – paragraph 2 – point gb (new) | (gb) the magnitude and likelihood of benefit of the deployment of the AI system for individuals, groups, or society at large, including possible improvements in product safety; | The potential benefits of deploying the AI system should be evaluated, considering the magnitude and likelihood of its impact on individuals, groups, or society at large. This includes potential enhancements in product safety. | ||
Article 7 – paragraph 2 – point gc (new) | (gc) the extent of human oversight and the possibility for a human to intercede in order to override a decision or recommendations that may lead to potential harm; | The extent of human oversight should be ensured, along with the provision for a human to intervene and override a decision or recommendations that may potentially lead to harm. | ||
Article 7 – paragraph 2 – point h | (h) the extent to which existing Union legislation provides for: (i) effective measures of redress in relation to the risks posed by an AI system, with the exclusion of claims for damages; (ii) effective measures to prevent or substantially minimise those risks. | (h) the extent to which existing Union law provides for: (i) effective measures of redress in relation to the damage caused by an AI system, with the exclusion of claims for direct or indirect damages; (ii) effective measures to prevent or substantially minimise those risks. | (h) the extent to which existing Union legislation provides for: (i) effective measures of redress in relation to the risks posed by an AI system, with the exclusion of claims for damages; (ii) effective measures to prevent or substantially minimise those risks; | (h) the extent to which existing Union legislation provides for: (i) effective measures of redress in relation to the risks or damage caused by an AI system, with the exclusion of claims for direct, indirect or any other damages; (ii) effective measures to prevent or substantially minimise those risks. |
Article 7 – paragraph 2 – point i (new) | (i) the magnitude and likelihood of benefit of the AI use for individuals, groups, or society at large. | The AI use should be evaluated based on the magnitude and likelihood of its benefit for individuals, groups, or society at large. | ||
Article 7 – paragraph 3 (new) [C] | 3. The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend the list in Annex III by removing high-risk AI systems where both of the following conditions are fulfilled: (a) the high-risk AI system(s) concerned no longer pose any significant risks to fundamental rights, health or safety, taking into account the criteria listed in paragraph 2; (b) the deletion does not decrease the overall level of protection of health, safety and fundamental rights under Union law. | The Commission is authorized to adopt delegated acts in accordance with Article 73 to modify the list in Annex III by removing high-risk AI systems, provided that both of the following conditions are met: (a) the high-risk AI system(s) in question no longer present any significant risks to fundamental rights, health or safety, considering the criteria listed in paragraph 2; (b) the removal does not reduce the overall level of protection of health, safety, and fundamental rights under Union law. |
||
Article 7 – paragraph 2a (new) [P] | 2a. When assessing an AI system for the purposes of paragraphs 1 or 1a the Commission shall consult the AI Office and, where relevant, representatives of groups on which an AI system has an impact, industry, independent experts, the social partners, and civil society organisations. The Commission shall also organise public consultations in this regard and shall make the results of those consultations and of the final assessment publicly available; | In assessing an AI system for the purposes outlined in paragraphs 1 or 1a, the Commission shall engage in consultation with the AI Office. Where applicable, the Commission shall also consult with representatives of groups impacted by the AI system, industry representatives, independent experts, social partners, and civil society organisations. To ensure transparency and public involvement, the Commission shall organise public consultations. The results of these consultations, along with the final assessment, shall be made publicly available. | ||
Article 7 – paragraph 2b (new) | 2b. The AI Office, national supervisory authorities or the European Parliament may request the Commission to reassess and recategorise the risk categorisation of an AI system in accordance with paragraphs 1 and 1a. The Commission shall give reasons for its decision and make them public. | The AI Office, national supervisory authorities, or the European Parliament may request the Commission to reassess and recategorise the risk categorisation of an AI system in accordance with paragraphs 1 and 1a. The Commission is obligated to provide reasons for its decision and make them public. | |
CHAPTER II | CHAPTER 2 REQUIREMENTS FOR HIGH-RISK AI SYSTEMS | CHAPTER 2 REQUIREMENTS FOR HIGH-RISK AI SYSTEMS | CHAPTER 2 REQUIREMENTS FOR HIGH-RISK AI SYSTEMS | None |
Article 8 | Article 8 Compliance with the requirements | Article 8 Compliance with the requirements | Article 8 Compliance with the requirements | None |
Article 8 – paragraph 1 | 1. High-risk AI systems shall comply with the requirements established in this Chapter. | 1. High-risk AI systems shall comply with the requirements established in this Chapter. | 1. High-risk AI systems shall comply with the requirements established in this Chapter, taking into account the generally acknowledged state of the art. | 1. High-risk AI systems shall comply with the requirements established in this Chapter, taking into account the generally acknowledged state of the art. |
Article 8 – paragraph 1a (new) | 1a. In complying with the requirement established in this Chapter, due account shall be taken of guidelines developed as referred to in Article 82b, the generally acknowledged state of the art, including as reflected in the relevant harmonised standards and common specifications as referred to in articles 40 and 41 or those already set out in Union harmonisation law;. | In compliance with the requirements of this Chapter, due consideration should be given to the guidelines developed as mentioned in Article 82b, the generally acknowledged state of the art, including those reflected in the relevant harmonised standards and common specifications as referred to in Articles 40 and 41 or those already established in Union harmonisation law. | |
Article 8 – paragraph 2 | 2. The intended purpose of the high-risk AI system and the risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements. | 2. The intended purpose of the high-risk AI system, the reasonably foreseeable misuses and the risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements. | 2. The intended purpose of the high-risk AI system and the risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements. | 2. The intended purpose of the high-risk AI system, the reasonably foreseeable misuses, and the risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements. |
Article 8 – paragraph 2a (new) | 2a. As long as the requirements of Title III, Chapters 2 and 3 or Title VIII, Chapters 1, 2 and 3 for high-risk AI systems are addressed by Union harmonisation law listed in Annex II, Section A, the requirements or obligations of those Chapters of this Regulation shall be deemed to be fulfilled, as long as they include the AI component. Requirements of Chapters 2 and 3 of Title III or Title VIII, Chapters 1, 2 and 3 for high-risk AI systems not addressed by Union harmonisation law listed in Annex II Section A, shall be incorporated into that Union harmonisation law, where applicable. The relevant conformity assessment shall be carried out as part of the procedures laid out under Union harmonisation law listed in Annex II, Section A. | As long as the requirements of Title III, Chapters 2 and 3 or Title VIII, Chapters 1, 2 and 3 for high-risk AI systems are addressed by Union harmonisation law listed in Annex II, Section A, the requirements or obligations of those Chapters of this Regulation shall be deemed to be fulfilled, provided they include the AI component. For high-risk AI systems not addressed by Union harmonisation law listed in Annex II Section A, the requirements of Chapters 2 and 3 of Title III or Title VIII, Chapters 1, 2 and 3 shall be incorporated into that Union harmonisation law, where applicable. The relevant conformity assessment shall be carried out as part of the procedures laid out under Union harmonisation law listed in Annex II, Section A. | ||
Article 9 | Article 9 Risk management system | Article 9 Risk management system | Article 9 Risk management system | Article 9 Risk management system |
Article 9 – paragraph 1 | 1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems. | 1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems, throughout the entire lifecycle of the AI system. The risk management system can be integrated into, or a part of, already existing risk management procedures relating to the relevant Union sectoral law insofar as it fulfils the requirements of this article. | 1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems. | 1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems, throughout the entire lifecycle of the AI system. This risk management system can be integrated into, or a part of, already existing risk management procedures relating to the relevant Union sectoral law insofar as it fulfils the requirements of this article. |
Article 9 – paragraph 2 – introductory part | 2. The risk management system shall consist of a continuous iterative process run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic updating. It shall comprise the following steps: | 2. The risk management system shall consist of a continuous iterative process run throughout the entire lifecycle of a high-risk AI system, requiring regular review and updating of the risk management process, to ensure its continuing effectiveness, and documentation of any significant decisions and actions taken subject to this Article. It shall comprise the following steps: | 2. The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic updating. It shall comprise the following steps: | 2. The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular review and systematic updating of the risk management process, to ensure its continuing effectiveness, and documentation of any significant decisions and actions taken subject to this Article. It shall comprise the following steps: |
Article 9 – paragraph 2 – point a | (a) identification and analysis of the known and foreseeable risks associated with each high-risk AI system; | (a) identification, estimation and evaluation of the known and the reasonably foreseeable risks that the high-risk AI system can pose to the health or safety of natural persons, their fundamental rights including equal access and opportunities, democracy and rule of law or the environment when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse; | (a) identification and analysis of the known and foreseeable risks most likely to occur to health, safety and fundamental rights in view of the intended purpose of the high-risk AI system; | (a) Identification, analysis, estimation and evaluation of the known and the reasonably foreseeable risks that the high-risk AI system can pose to the health or safety of natural persons, their fundamental rights including equal access and opportunities, democracy and rule of law or the environment, most likely to occur in view of the intended purpose and under conditions of reasonably foreseeable misuse of the high-risk AI system. |
Article 9 – paragraph 2 – point b | (b) estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse; | deleted | deleted | None |
Article 9 – paragraph 2 – point c | (c) evaluation of other possibly arising risks based on the analysis of data gathered from the post-market monitoring system referred to in Article 61; | (c) evaluation of emerging significant risks as described in point (a) and identified based on the analysis of data gathered from the post-market monitoring system referred to in Article 61; | (c) evaluation of other possibly arising risks based on the analysis of data gathered from the post-market monitoring system referred to in Article 61; | (c) evaluation of other possibly arising and emerging significant risks based on the analysis of data gathered from the post-market monitoring system referred to in Article 61; |
Article 9 – paragraph 2 – point d | (d) adoption of suitable risk management measures in accordance with the provisions of the following paragraphs. | (d) adoption of appropriate and targeted risk management measures designed to address the risks identified pursuant to points a and b of this paragraph in accordance with the provisions of the following paragraphs. | (d) adoption of suitable risk management measures in accordance with the provisions of the following paragraphs. | (d) adoption of suitable, appropriate and targeted risk management measures designed to address the risks identified pursuant to points a and b of this paragraph in accordance with the provisions of the following paragraphs. |
Article 9 – paragraph 2 – subparagraph 2 (new) | The risks referred to in this paragraph shall concern only those which may be reasonably mitigated or eliminated through the development or design of the high-risk AI system, or the provision of adequate technical information. | The risks addressed in this context should only pertain to those that can be reasonably mitigated or eliminated through the development or design of the high-risk AI system, or by providing sufficient technical information. | ||
Article 9 – paragraph 3 | 3. The risk management measures referred to in paragraph 2, point (d) shall give due consideration to the effects and possible interactions resulting from the combined application of the requirements set out in this Chapter 2. They shall take into account the generally acknowledged state of the art, including as reflected in relevant harmonised standards or common specifications. | 3. The risk management measures referred to in paragraph 2, point (d) shall give due consideration to the effects and possible interactions resulting from the combined application of the requirements set out in this Chapter 2, with a view to mitigate risks effectively while ensuring an appropriate and proportionate implementation of the requirements. | 3. The risk management measures referred to in paragraph 2, point (d) shall give due consideration to the effects and possible interaction resulting from the combined application of the requirements set out in this Chapter 2, with a view to minimising risks more effectively while achieving an appropriate balance in implementing the measures to fulfil those requirements. | 3. The risk management measures referred to in paragraph 2, point (d) shall give due consideration to the effects and possible interactions resulting from the combined application of the requirements set out in this Chapter 2. They shall take into account the generally acknowledged state of the art, including as reflected in relevant harmonised standards or common specifications, with a view to mitigating and minimising risks effectively while ensuring an appropriate, proportionate and balanced implementation of the measures to fulfil those requirements. |
Article 9 – paragraph 4 – introductory part | 4. The risk management measures referred to in paragraph 2, point (d) shall be such that any residual risk associated with each hazard as well as the overall residual risk of the high-risk AI systems is judged acceptable, provided that the high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse. Those residual risks shall be communicated to the user. In identifying the most appropriate risk management measures, the following shall be ensured: | 4. The risk management measures referred to in paragraph 2, point (d) shall be such that relevant residual risk associated with each hazard as well as the overall residual risk of the high-risk AI systems is reasonably judged to be acceptable, provided that the high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse. Those residual risks and the reasoned judgements made shall be communicated to the deployer. In identifying the most appropriate risk management measures, the following shall be ensured: | 4. The risk management measures referred to in paragraph 2, point (d) shall be such that any residual risk associated with each hazard as well as the overall residual risk of the high-risk AI systems is judged acceptable. In identifying the most appropriate risk management measures, the following shall be ensured: | 4. The risk management measures referred to in paragraph 2, point (d) shall be such that any relevant residual risk associated with each hazard as well as the overall residual risk of the high-risk AI systems is reasonably judged to be acceptable, provided that the high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse. Those residual risks and the reasoned judgements made shall be communicated to the user or deployer. In identifying the most appropriate risk management measures, the following shall be ensured: |
Article 9 – paragraph 4 – subparagraph 1 – point a | (a) elimination or reduction of risks as far as possible through adequate design and development; | (a) elimination or reduction of identified risks as far as technically feasible through adequate design and development of the high-risk AI system, involving when relevant, experts and external stakeholders; | (a) elimination or reduction of risks identified and evaluated pursuant to paragraph 2 as far as possible through adequate design and development of the high risk AI system; | (a) Elimination or reduction of identified and evaluated risks, as far as technically feasible and as far as possible, through adequate design and development of the high-risk AI system, involving, when relevant, experts and external stakeholders. |
Article 9 – paragraph 4 – subparagraph 1 – point b | (b) where appropriate, implementation of adequate mitigation and control measures in relation to risks that cannot be eliminated; | (b) where appropriate, implementation of adequate mitigation and control measures addressing significant risks that cannot be eliminated; | (b) where appropriate, implementation of adequate mitigation and control measures in relation to risks that cannot be eliminated; | (b) where appropriate, implementation of adequate mitigation and control measures in relation to significant risks that cannot be eliminated; |
Article 9 – paragraph 4 – subparagraph 1 – point c | (c) provision of adequate information pursuant to Article 13, in particular as regards the risks referred to in paragraph 2, point (b) of this Article, and, where appropriate, training to users. | (c) provision of the required information pursuant to Article 13, and, where appropriate, training to deployers. | (c) provision of adequate information pursuant to Article 13, in particular as regards the risks referred to in paragraph 2, point (b) of this Article, and, where appropriate, training to users. | (c) provision of the required and adequate information pursuant to Article 13, in particular as regards the risks referred to in paragraph 2, point (b) of this Article, and, where appropriate, training to users and deployers. |
Article 9 – paragraph 4 – subparagraph 2 | In eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education, training to be expected by the user and the environment in which the system is intended to be used. | In eliminating or reducing risks related to the use of the high-risk AI system, providers shall take into due consideration the technical knowledge, experience, education and training the deployer may need, including in relation to the presumable context of use. | With a view to eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education, training to be expected by the user and the environment in which the system is intended to be used. | In the process of eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education, and training that may be expected from the user or needed by the deployer. This includes considering the environment and the presumable context in which the system is intended to be used. |
Article 9 – paragraph 5 | 5. High-risk AI systems shall be tested for the purposes of identifying the most appropriate risk management measures. Testing shall ensure that high-risk AI systems perform consistently for their intended purpose and they are in compliance with the requirements set out in this Chapter. | 5. High-risk AI systems shall be tested for the purposes of identifying the most appropriate and targeted risk management measures and weighing any such measures against the potential benefits and intended goals of the system. Testing shall ensure that high-risk AI systems perform consistently for their intended purpose and they are in compliance with the requirements set out in this Chapter. | 5. High-risk AI systems shall be tested in order to ensure that high-risk AI systems perform in a manner that is consistent with their intended purpose and they are in compliance with the requirements set out in this Chapter. | 5. High-risk AI systems shall be tested for the purposes of identifying the most appropriate and targeted risk management measures, and weighing any such measures against the potential benefits and intended goals of the system. This testing is to ensure that high-risk AI systems perform consistently for their intended purpose and they are in compliance with the requirements set out in this Chapter. |
Article 9 – paragraph 6 | 6. Testing procedures shall be suitable to achieve the intended purpose of the AI system and do not need to go beyond what is necessary to achieve that purpose. | 6. Testing procedures shall be suitable to achieve the intended purpose of the AI system. | 6. Testing procedures may include testing in real world conditions in accordance with Article 54a. | 6. Testing procedures shall be suitable to achieve the intended purpose of the AI system, may include testing in real world conditions in accordance with Article 54a, and do not need to go beyond what is necessary to achieve that purpose. |
Article 9 – paragraph 7 | 7. The testing of the high-risk AI systems shall be performed, as appropriate, at any point in time throughout the development process, and, in any event, prior to the placing on the market or the putting into service. Testing shall be made against preliminarily defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system. | 7. The testing of the high-risk AI systems shall be performed, prior to the placing on the market or the putting into service. Testing shall be made against prior defined metrics, and probabilistic thresholds that are appropriate to the intended purpose or reasonably foreseeable misuse of the high-risk AI system. | 7. The testing of the high-risk AI systems shall be performed, as appropriate, at any point in time throughout the development process, and, in any event, prior to the placing on the market or the putting into service. Testing shall be made against preliminarily defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system. | 7. The testing of the high-risk AI systems shall be performed, as appropriate, at any point in time throughout the development process, and, in any event, prior to the placing on the market or the putting into service. Testing shall be made against preliminarily defined metrics and probabilistic thresholds that are appropriate to the intended purpose or reasonably foreseeable misuse of the high-risk AI system. |
Article 9 – paragraph 8 | 8. When implementing the risk management system described in paragraphs 1 to 7, specific consideration shall be given to whether the high-risk AI system is likely to be accessed by or have an impact on children. | 8. When implementing the risk management system described in paragraphs 1 to 7, providers shall give specific consideration to whether the high-risk AI system is likely to adversely impact vulnerable groups of people or children. | 8. The risk management system described in paragraphs 1 to 7 shall give specific consideration to whether the high-risk AI system is likely to be accessed by or have an impact on persons under the age of 18. | 8. When implementing the risk management system described in paragraphs 1 to 7, specific consideration shall be given to whether the high-risk AI system is likely to be accessed by or have an adverse impact on vulnerable groups, including children or persons under the age of 18. |
Article 9 – paragraph 9 | 9. For credit institutions regulated by Directive 2013/36/EU, the aspects described in paragraphs 1 to 8 shall be part of the risk management procedures established by those institutions pursuant to Article 74 of that Directive. | 9. For providers and AI systems already covered by Union law that require them to establish a specific risk management, including credit institutions regulated by Directive 2013/36/EU, the aspects described in paragraphs 1 to 8 shall be part of or combined with the risk management procedures established by that Union law. | 9. For providers of high-risk AI systems that are subject to requirements regarding internal risk management processes under relevant sectorial Union law, the aspects described in paragraphs 1 to 8 may be part of the risk management procedures established pursuant to that law. | 9. For providers of high-risk AI systems already covered by Union law that requires them to establish a specific risk management system, including credit institutions regulated by Directive 2013/36/EU, the aspects described in paragraphs 1 to 8 may be part of or combined with the risk management procedures established pursuant to that Union law or relevant sectorial Union law. |
Article 10 | Article 10 Data and data governance | Article 10 Data and data governance | Article 10 Data and data governance | None |
Article 10 – paragraph 1 | 1. High-risk AI systems which make use of techniques involving the training of models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5. | 1. High-risk AI systems which make use of techniques involving the training of models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5 as far as this is technically feasible according to the specific market segment or scope of application. Techniques that do not require labelled input data such as unsupervised learning and reinforcement learning shall be developed on the basis of data sets such as for testing and verification that meet the quality criteria referred to in paragraphs 2 to 5. | 1. High-risk AI systems which make use of techniques involving the training of models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5. | 1. High-risk AI systems which make use of techniques involving the training of models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5, as far as this is technically feasible according to the specific market segment or scope of application. Techniques that do not require labelled input data, such as unsupervised learning and reinforcement learning, shall also be developed on the basis of data sets for testing and verification that meet the quality criteria referred to in paragraphs 2 to 5. |
Article 10 – paragraph 2 – introductory part | 2. Training, validation and testing data sets shall be subject to appropriate data governance and management practices. Those practices shall concern in particular, | 2. Training, validation and testing data sets shall be subject to data governance appropriate for the context of use as well as the intended purpose of the AI system. Those measures shall concern in particular, | 2. Training, validation and testing data sets shall be subject to appropriate data governance and management practices. Those practices shall concern in particular, | 2. Training, validation and testing data sets shall be subject to appropriate data governance and management practices suitable for the context of use and the intended purpose of the AI system. Those practices shall concern in particular, |
Article 10 – paragraph 2 – point a | (a) the relevant design choices; | (a) the relevant design choices; | (a) the relevant design choices; | (a) the relevant design choices; |
Article 10 – paragraph 2 – point aa (new) | (aa) transparency as regards the original purpose of data collection; | Transparency should be maintained concerning the original purpose of data collection. | ||
Article 10 – paragraph 2 – point b | (b) data collection; | (b) data collection processes; | (b) data collection processes; | (b) data collection processes; |
Article 10 – paragraph 2 – point c | (c) relevant data preparation processing operations, such as annotation, labelling, cleaning, enrichment and aggregation; | (c) data preparation processing operations, such as annotation, labelling, cleaning, updating, enrichment and aggregation; | (c) relevant data preparation processing operations, such as annotation, labelling, cleaning, enrichment and aggregation; | (c) relevant data preparation processing operations, such as annotation, labelling, cleaning, updating, enrichment and aggregation; |
Article 10 – paragraph 2 – point d | (d) the formulation of relevant assumptions, notably with respect to the information that the data are supposed to measure and represent; | (d) the formulation of assumptions, notably with respect to the information that the data are supposed to measure and represent; | (d) the formulation of relevant assumptions, notably with respect to the information that the data are supposed to measure and represent; | (d) the formulation of relevant assumptions, notably with respect to the information that the data are supposed to measure and represent; |
Article 10 – paragraph 2 – point e | (e) a prior assessment of the availability, quantity and suitability of the data sets that are needed; | (e) an assessment of the availability, quantity and suitability of the data sets that are needed; | (e) a prior assessment of the availability, quantity and suitability of the data sets that are needed; | (e) a prior assessment of the availability, quantity and suitability of the data sets that are needed; |
Article 10 – paragraph 2 – point f | (f) examination in view of possible biases; | (f) examination in view of possible biases that are likely to affect the health and safety of persons, negatively impact fundamental rights or lead to discrimination prohibited under Union law, especially where data outputs influence inputs for future operations (‘feedback loops’) and appropriate measures to detect, prevent and mitigate possible biases; | (f) examination in view of possible biases that are likely to affect health and safety of natural persons or lead to discrimination prohibited by Union law; | (f) examination in view of possible biases that are likely to affect the health and safety of persons or natural persons, negatively impact fundamental rights or lead to discrimination prohibited under Union law, especially where data outputs influence inputs for future operations ('feedback loops'), and appropriate measures to detect, prevent and mitigate possible biases. |
Article 10 – paragraph 2 – point fa (new) | (fa) appropriate measures to detect, prevent and mitigate possible biases | Appropriate measures should be implemented to detect, prevent, and mitigate possible biases. | |
Article 10 – paragraph 2 – point g | (g) the identification of any possible data gaps or shortcomings, and how those gaps and shortcomings can be addressed. | (g) the identification of relevant data gaps or shortcomings that prevent compliance with this Regulation, and how those gaps and shortcomings can be addressed; | (g) the identification of any possible data gaps or shortcomings, and how those gaps and shortcomings can be addressed. | (g) the identification of any relevant data gaps or shortcomings that may prevent compliance with this Regulation, and how those gaps and shortcomings can be addressed. |
Article 10 – paragraph 3 | 3. Training, validation and testing data sets shall be relevant, representative, free of errors and complete. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof. | 3. Training datasets, and where they are used, validation and testing datasets, including the labels, shall be relevant, sufficiently representative, appropriately vetted for errors and be as complete as possible in view of the intended purpose. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons in relation to whom the high-risk AI system is intended to be used. These characteristics of the datasets shall be met at the level of individual datasets or a combination thereof. | 3. Training, validation and testing data sets shall be relevant, representative, and to the best extent possible, free of errors and complete. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof. | 3. Training, validation and testing data sets, including the labels where they are used, shall be relevant, representative, and to the best extent possible, free of errors and complete. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets shall be met at the level of individual data sets or a combination thereof, in view of the intended purpose. |
Article 10 – paragraph 4 | 4. Training, validation and testing data sets shall take into account, to the extent required by the intended purpose, the characteristics or elements that are particular to the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used. | 4. Datasets shall take into account, to the extent required by the intended purpose or reasonably foreseeable misuses of the AI system, the characteristics or elements that are particular to the specific geographical, contextual behavioural or functional setting within which the high-risk AI system is intended to be used. | 4. Training, validation and testing data sets shall take into account, to the extent required by the intended purpose, the characteristics or elements that are particular to the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used. | 4. Training, validation and testing data sets shall take into account, to the extent required by the intended purpose or reasonably foreseeable misuses of the AI system, the characteristics or elements that are particular to the specific geographical, contextual behavioural or functional setting within which the high-risk AI system is intended to be used. |
Article 10 – paragraph 5 | 5. To the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to the high-risk AI systems, the providers of such systems may process special categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons, including technical limitations on the re-use and use of state-of-the-art security and privacy-preserving measures, such as pseudonymisation, or encryption where anonymisation may significantly affect the purpose pursued. | 5. To the extent that it is strictly necessary for the purposes of ensuring negative bias detection and correction in relation to the high-risk AI systems, the providers of such systems may exceptionally process special categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons, including technical limitations on the re-use and use of state-of-the-art security and privacy-preserving. In particular, all the following conditions shall apply in order for this processing to occur: (a) the bias detection and correction cannot be effectively fulfilled by processing synthetic or anonymised data; (b) the data are pseudonymised; (c) the provider takes appropriate technical and organisational measures to ensure that the data processed for the purpose of this paragraph are secured, protected, subject to suitable safeguards and only authorised persons have access to those data with appropriate confidentiality obligations; (d) the data processed for the purpose of this paragraph are not to be transmitted, transferred or otherwise accessed by other parties; (e) the data processed for the purpose of this paragraph are protected by means of appropriate technical and organisational measures and deleted once the bias has been corrected or the personal data has reached the end of its retention period; (f) effective and appropriate measures are in place to ensure availability, security and resilience of processing systems and services against technical or physical incidents; (g) effective and appropriate measures are in place to ensure physical security of locations where the data are stored and processed, internal IT and IT security governance and management, certification of processes and products; Providers having recourse to this provision shall draw up documentation explaining why the processing of special categories of personal data was necessary to detect and correct biases. | 5. To the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to the high-risk AI systems, the providers of such systems may process special categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons, including technical limitations on the re-use and use of state-of-the-art security and privacy-preserving measures, such as pseudonymisation, or encryption where anonymisation may significantly affect the purpose pursued. | 5. To the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to the high-risk AI systems, the providers of such systems may, under exceptional circumstances, process special categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725. This processing is subject to appropriate safeguards for the fundamental rights and freedoms of natural persons, including technical limitations on the re-use and use of state-of-the-art security and privacy-preserving measures, such as pseudonymisation, or encryption where anonymisation may significantly affect the purpose pursued. The following conditions must be met for this processing to occur: (a) the bias detection and correction cannot be effectively fulfilled by processing synthetic or anonymised data; (b) the data are pseudonymised; (c) the provider takes appropriate technical and organisational measures to ensure that the data processed for the purpose of this paragraph are secured, protected, subject to suitable safeguards and only authorised persons have access to those data with appropriate confidentiality obligations; (d) the data processed for the purpose of this paragraph are not to be transmitted, transferred or otherwise accessed by other parties; (e) the data processed for the purpose of this paragraph are protected by means of appropriate technical and organisational measures and deleted once the bias has been corrected or the personal data has reached the end of its retention period; (f) effective and appropriate measures are in place to ensure availability, security and resilience of processing systems and services against technical or physical incidents; (g) effective and appropriate measures are in place to ensure physical security of locations where the data are stored and processed, internal IT and IT security governance and management, certification of processes and products; Providers having recourse to this provision shall draw up documentation explaining why the processing of special categories of personal data was necessary to detect and correct biases. |
Article 10 – paragraph 6 | 6. Appropriate data governance and management practices shall apply for the development of high-risk AI systems other than those which make use of techniques involving the training of models in order to ensure that those high-risk AI systems comply with paragraph 2. | 6. Appropriate data governance and management practices shall apply for the development of high-risk AI systems other than those which make use of techniques involving the training of models in order to ensure that those high-risk AI systems comply with paragraph 2. | 6. For the development of high-risk AI systems not using techniques involving the training of models, paragraphs 2 to 5 shall apply only to the testing data sets. | 6. Appropriate data governance and management practices shall apply for the development of high-risk AI systems, excluding those which make use of techniques involving the training of models, to ensure compliance with paragraph 2. For these systems, paragraphs 2 to 5 shall apply only to the testing data sets. |
Article 10 – paragraph 6a (new) | 6a. Where the provider cannot comply with the obligations laid down in this Article because that provider does not have access to the data and the data is held exclusively by the deployer, the deployer may, on the basis of a contract, be made responsible for any infringement of this Article. | Where the provider is unable to comply with the obligations outlined in this Article due to lack of access to the data, and the data is held exclusively by the deployer, the deployer may be held responsible for any infringement of this Article, based on the terms of a contract. | ||
Article 11 | Article 11 Technical documentation | Article 11 Technical documentation | Article 11 Technical documentation | Article 11 Technical documentation |
Article 11 – paragraph 1 | The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up-to date. The technical documentation shall be drawn up in such a way to demonstrate that the high-risk AI system complies with the requirements set out in this Chapter and provide national competent authorities and notified bodies with all the necessary information to assess the compliance of the AI system with those requirements. It shall contain, at a minimum, the elements set out in Annex IV. | The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up-to date. The technical documentation shall be drawn up in such a way to demonstrate that the high-risk AI system complies with the requirements set out in this Chapter and provide national supervisory authorities and notified bodies with the necessary information to assess the compliance of the AI system with those requirements. It shall contain, at a minimum, the elements set out in Annex IV or, in the case of SMEs and start-ups, any equivalent documentation meeting the same objectives, subject to approval of the competent national authority. | 1. The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up-to date. The technical documentation shall be drawn up in such a way to demonstrate that the high-risk AI system complies with the requirements set out in this Chapter and provide national competent authorities and notified bodies with all the necessary information in a clear and comprehensive form to assess the compliance of the AI system with those requirements. It shall contain, at a minimum, the elements set out in Annex IV or, in the case of SMEs, including start-ups, any equivalent documentation meeting the same objectives, unless deemed inappropriate by the competent authority. | The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up-to date. The technical documentation shall be drawn up in such a way to demonstrate that the high-risk AI system complies with the requirements set out in this Chapter and provide national competent authorities, supervisory authorities, and notified bodies with all the necessary information in a clear and comprehensive form to assess the compliance of the AI system with those requirements. It shall contain, at a minimum, the elements set out in Annex IV or, in the case of SMEs and start-ups, any equivalent documentation meeting the same objectives, subject to approval of the competent national authority, unless deemed inappropriate by the same authority. |
Article 11 – paragraph 2 | 2. Where a high-risk AI system related to a product, to which the legal acts listed in Annex II, section A apply, is placed on the market or put into service one single technical documentation shall be drawn up containing all the information set out in Annex IV as well as the information required under those legal acts. | 2. Where a high-risk AI system related to a product, to which the legal acts listed in Annex II, section A apply, is placed on the market or put into service one single technical documentation shall be drawn up containing all the information set out in paragraph 1 as well as the information required under those legal acts. | 2. Where a high-risk AI system related to a product, to which the legal acts listed in Annex II, section A apply, is placed on the market or put into service one single technical documentation shall be drawn up containing all the information set out in Annex IV as well as the information required under those legal acts. | 2. Where a high-risk AI system related to a product, to which the legal acts listed in Annex II, section A apply, is placed on the market or put into service one single technical documentation shall be drawn up containing all the information set out in Annex IV and paragraph 1 as well as the information required under those legal acts. |
Article 11 – paragraph 3 | 3. The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend Annex IV where necessary to ensure that, in the light of technical progress, the technical documentation provides all the necessary information to assess the compliance of the system with the requirements set out in this Chapter. | 3. The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend Annex IV where necessary to ensure that, in the light of technical progress, the technical documentation provides all the necessary information to assess the compliance of the system with the requirements set out in this Chapter. | 3. The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend Annex IV where necessary to ensure that, in the light of technical progress, the technical documentation provides all the necessary information to assess the compliance of the system with the requirements set out in this Chapter. | 3. The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend Annex IV where necessary to ensure that, in the light of technical progress, the technical documentation provides all the necessary information to assess the compliance of the system with the requirements set out in this Chapter. |
Article 11 – paragraph 3a (new) | 3a. Providers that are credit institutions regulated by Directive 2013/36/EU shall maintain the technical documentation as part of the documentation concerning internal governance, arrangements, processes and mechanisms pursuant to Article 74 of that Directive. | Credit institutions regulated by Directive 2013/36/EU, acting as providers, are required to maintain the technical documentation. This should be incorporated as part of the documentation concerning internal governance, arrangements, processes, and mechanisms in accordance with Article 74 of the same Directive. | ||
Article 12 | Article 12 Record-keeping | Article 12 Record-keeping | Article 12 Record-keeping | Article 12 Record-keeping |
Article 12 – paragraph 1 | 1. High-risk AI systems shall be designed and developed with capabilities enabling the automatic recording of events (‘logs’) while the high-risk AI systems is operating. Those logging capabilities shall conform to recognised standards or common specifications. | 1. High-risk AI systems shall be designed and developed with capabilities enabling the automatic recording of events (‘logs’) while the high-risk AI systems is operating. Those logging capabilities shall conform to the state of the art and recognised standards or common specifications. | 1. High-risk AI systems shall technically allow for the automatic recording of events (‘logs’) over the duration of the life cycle of the system. | 1. High-risk AI systems shall be designed and developed with capabilities enabling the automatic recording of events (‘logs’) over the duration of the life cycle of the system, while the high-risk AI system is operating. Those logging capabilities shall conform to the state of the art, recognised standards or common specifications. |
Article 12 – paragraph 2 | 2. The logging capabilities shall ensure a level of traceability of the AI system’s functioning throughout its lifecycle that is appropriate to the intended purpose of the system. | 2. In order to ensure a level of traceability of the AI system’s functioning throughout its entire lifetime that is appropriate to the intended purpose of the system, the logging capabilities shall facilitate the monitoring of operations as referred to in Article 29(4) as well as the post market monitoring referred to in Article 61. In particular, they shall enable the recording of events relevant for the identification of situations that may: (a) result in the AI system presenting a risk within the meaning of Article 65(1); or (b) lead to a substantial modification of the AI system. | 2. In order to ensure a level of traceability of the AI system’s functioning that is appropriate to the intended purpose of the system, logging capabilities shall enable the recording of events relevant for: (i) identification of situations that may result in the AI system presenting a risk within the meaning of Article 65(1) or in a substantial modification; (ii) facilitation of the post-market monitoring referred to in Article 61; and (iii) monitoring of the operation of high-risk AI systems referred to in Article 29(4). | 2. To ensure a level of traceability of the AI system’s functioning throughout its lifecycle that is appropriate to the intended purpose of the system, the logging capabilities shall facilitate the monitoring of the operation of high-risk AI systems as referred to in Article 29(4), the post-market monitoring referred to in Article 61, and enable the recording of events relevant for the identification of situations that may: (a) result in the AI system presenting a risk within the meaning of Article 65(1); or (b) lead to a substantial modification of the AI system. |
Article 12 – paragraph 2a (new) | 2a. High-risk AI systems shall be designed and developed with the logging capabilities enabling the recording of energy consumption, the measurement or calculation of resource use and environmental impact of the high-risk AI system during all phases of the system’s lifecycle. | High-risk AI systems should be designed and developed with logging capabilities that enable the recording of energy consumption, as well as the measurement or calculation of resource use and environmental impact during all phases of the system’s lifecycle. | |
Article 12 – paragraph 3 | 3. In particular, logging capabilities shall enable the monitoring of the operation of the high-risk AI system with respect to the occurrence of situations that may result in the AI system presenting a risk within the meaning of Article 65(1) or lead to a substantial modification, and facilitate the post-market monitoring referred to in Article 61. | deleted | deleted | None |
Article 12 – paragraph 4 | 4. For high-risk AI systems referred to in paragraph 1, point (a) of Annex III, the logging capabilities shall provide, at a minimum: (a) recording of the period of each use of the system (start date and time and end date and time of each use); (b) the reference database against which input data has been checked by the system; (c) the input data for which the search has led to a match; (d) the identification of the natural persons involved in the verification of the results, as referred to in Article 14 (5). | 4. For high-risk AI systems referred to in paragraph 1, point (a) of Annex III, the logging capabilities shall provide, at a minimum: (a) recording of the period of each use of the system (start date and time and end date and time of each use); (b) the reference database against which input data has been checked by the system; (c) the input data for which the search has led to a match; (d) the identification of the natural persons involved in the verification of the results, as referred to in Article 14 (5). | 4. For high-risk AI systems referred to in paragraph 1, point (a) of Annex III, the logging capabilities shall provide, at a minimum: (a) recording of the period of each use of the system (start date and time and end date and time of each use); (b) the reference database against which input data has been checked by the system; (c) the input data for which the search has led to a match; (d) the identification of the natural persons involved in the verification of the results, as referred to in Article 14 (5). | 4. For high-risk AI systems referred to in paragraph 1, point (a) of Annex III, the logging capabilities shall provide, at a minimum: (a) recording of the period of each use of the system (start date and time and end date and time of each use); (b) the reference database against which input data has been checked by the system; (c) the input data for which the search has led to a match; (d) the identification of the natural persons involved in the verification of the results, as referred to in Article 14 (5). |
Article 13 | Article 13 Transparency and provision of information to users | Article 13 Transparency and provision of information | Article 13 Transparency and provision of information to users | Article 13 Transparency and provision of information to users |
Article 13 – paragraph 1 | 1. High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately. An appropriate type and degree of transparency shall be ensured, with a view to achieving compliance with the relevant obligations of the user and of the provider set out in Chapter 3 of this Title. | 1. High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable providers and users to reasonably understand the system’s functioning. Appropriate transparency shall be ensured in accordance with the intended purpose of the AI system, with a view to achieving compliance with the relevant obligations of the provider and user set out in Chapter 3 of this Title. Transparency shall thereby mean that, at the time the high-risk AI system is placed on the market, all technical means available in accordance with the generally acknowledged state of the art are used to ensure that the AI system’s output is interpretable by the provider and the user. The user shall be enabled to understand and use the AI system appropriately by generally knowing how the AI system works and what data it processes, allowing the user to explain the decisions taken by the AI system to the affected person pursuant to Article 68(c). | 1. High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent with a view to achieving compliance with the relevant obligations of the user and of the provider set out in Chapter 3 of this Title and enabling users to understand and use the system appropriately. | 1. High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable providers and users to interpret and reasonably understand the system’s functioning and output. An appropriate type and degree of transparency shall be ensured, in accordance with the intended purpose of the AI system and the generally acknowledged state of the art, with a view to achieving compliance with the relevant obligations of the user and of the provider set out in Chapter 3 of this Title. Transparency shall thereby mean that, at the time the high-risk AI system is placed on the market, all technical means available are used to ensure that the AI system’s output is interpretable by the provider and the user. The user shall be enabled to understand and use the AI system appropriately by generally knowing how the AI system works and what data it processes, allowing the user to explain the decisions taken by the AI system to the affected person pursuant to Article 68(c). |
Article 13 – paragraph 2 | 2. High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to users. | 2. High-risk AI systems shall be accompanied by intelligible instructions for use in an appropriate digital format or made otherwise available in a durable medium that include concise, correct, clear and to the extent possible complete information that helps operating and maintaining the AI system as well as supporting informed decision-making by users and is reasonably relevant, accessible and comprehensible to users. | 2. High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to users. | 2. High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise available in a durable medium that include concise, complete, correct, and clear information that is relevant, accessible, and comprehensible to users. This information should support informed decision-making by users and help in operating and maintaining the AI system. |
Article 13 – paragraph 3 – introductory part | 3. The information referred to in paragraph 2 shall specify: | 3. To achieve the outcomes referred to in paragraph 1, information referred to in paragraph 2 shall specify: | 3. The information referred to in paragraph 2 shall specify: | 3. To achieve the outcomes referred to in paragraph 1, the information referred to in paragraph 2 shall specify: |
Article 13 – paragraph 3 – point a | (a) the identity and the contact details of the provider and, where applicable, of its authorised representative; | (a) the identity and the contact details of the provider and, where applicable, of its authorised representatives; | (a) the identity and the contact details of the provider and, where applicable, of its authorised representative; | (a) the identity and the contact details of the provider and, where applicable, of its authorised representative or representatives; |
Article 13 – paragraph 3 – point aa (new) | (aa) where it is not the same as the provider, the identity and the contact details of the entity that carried out the conformity assessment and, where applicable, of its authorised representative; | In cases where the provider is not the same entity, the identity and contact details of the entity that conducted the conformity assessment should be provided, along with the details of its authorized representative, if applicable. | ||
Article 13 – paragraph 3 – point b – introductory part | (b) the characteristics, capabilities and limitations of performance of the high-risk AI system, including: | (b) the characteristics, capabilities and limitations of performance of the high-risk AI system, including, where appropriate: | (b) the characteristics, capabilities and limitations of performance of the high-risk AI system, including: | (b) the characteristics, capabilities and limitations of performance of the high-risk AI system, including, where appropriate: |
Article 13 – paragraph 3 – point b – point i | (i) its intended purpose; | (i) its intended purpose; | (i) its intended purpose, inclusive of the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used; | (i) its intended purpose, including the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used; |
Article 13 – paragraph 3 – point b – point ii | (ii) the level of accuracy, robustness and cybersecurity referred to in Article 15 against which the high-risk AI system has been tested and validated and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity; | (ii) the level of accuracy, robustness and cybersecurity referred to in Article 15 against which the high-risk AI system has been tested and validated and which can be expected, and any clearly known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity; | (ii) the level of accuracy, including its metrics, robustness and cybersecurity referred to in Article 15 against which the high-risk AI system has been tested and validated and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity; | (ii) the level of accuracy, including its metrics, robustness and cybersecurity referred to in Article 15 against which the high-risk AI system has been tested and validated and which can be expected, and any clearly known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity; |
Article 13 – paragraph 3 – point b – point iii | (iii) any known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety or fundamental rights; | (iii) any clearly known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety, fundamental rights or the environment, including, where appropriate, illustrative examples of such limitations and of scenarios for which the system should not be used; | (iii) any known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its intended purpose, which may lead to risks to the health and safety or fundamental rights referred to in Article 9(2); | (iii) any clearly known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety, fundamental rights referred to in Article 9(2), or the environment, including, where appropriate, illustrative examples of such limitations and of scenarios for which the system should not be used; |
Article 13 – paragraph 3 – point b – point iiia (new) | (iiia) the degree to which the AI system can provide an explanation for decisions it takes; | The AI system should have the capability to provide a comprehensive explanation for the decisions it makes. | ||
Article 13 – paragraph 3 – point b – point iv | (iv) its performance as regards the persons or groups of persons on which the system is intended to be used; | (iv) its performance as regards the persons or groups of persons on which the system is intended to be used; | (iv) when appropriate, its behaviour regarding specific persons or groups of persons on which the system is intended to be used; | (iv) its performance and, when appropriate, its behaviour regarding the persons or groups of persons on which the system is intended to be used; |
Article 13 – paragraph 3 – point b – point v | (v) when appropriate, specifications for the input data, or any other relevant information in terms of the training, validation and testing data sets used, taking into account the intended purpose of the AI system. | (v) relevant information about user actions that may influence system performance, including type or quality of input data, or any other relevant information in terms of the training, validation and testing data sets used, taking into account the intended purpose of the AI system. | (v) when appropriate, specifications for the input data, or any other relevant information in terms of the training, validation and testing data sets used, taking into account the intended purpose of the AI system; | (v) when appropriate, specifications for the input data, including relevant information about user actions that may influence system performance, or any other relevant information in terms of the training, validation and testing data sets used, taking into account the intended purpose of the AI system. |
Article 13 – paragraph 3 – point b – point vi (new) | (vi) when appropriate, description of the expected output of the system. | When deemed suitable, a description of the expected output of the system should be provided. | ||
Article 13 – paragraph 3 – point c | (c) the changes to the high-risk AI system and its performance which have been pre-determined by the provider at the moment of the initial conformity assessment, if any; | (c) the changes to the high-risk AI system and its performance which have been pre-determined by the provider at the moment of the initial conformity assessment, if any; | (c) the changes to the high-risk AI system and its performance which have been pre-determined by the provider at the moment of the initial conformity assessment, if any; | (c) the changes to the high-risk AI system and its performance which have been pre-determined by the provider at the moment of the initial conformity assessment, if any; |
Article 13 – paragraph 3 – point d | (d) the human oversight measures referred to in Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of AI systems by the users; | (d) the human oversight measures referred to in Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of AI systems by the users; | (d) the human oversight measures referred to in Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of AI systems by the users; | (d) the human oversight measures referred to in Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of AI systems by the users; |
Article 13 – paragraph 3 – point e | (e) the expected lifetime of the high-risk AI system and any necessary maintenance and care measures to ensure the proper functioning of that AI system, including as regards software updates. | (e) any necessary maintenance and care measures to ensure the proper functioning of that AI system, including as regards software updates, through its expected lifetime. | (e) the computational and hardware resources needed, the expected lifetime of the high-risk AI system and any necessary maintenance and care measures, including their frequency, to ensure the proper functioning of that AI system, including as regards software updates; | (e) the expected lifetime of the high-risk AI system, the computational and hardware resources needed, and any necessary maintenance and care measures, including their frequency and software updates, to ensure the proper functioning of that AI system throughout that lifetime. |
Article 13 – paragraph 3 – point ea [p] / point f [C] (new) | (ea) a description of the mechanisms included within the AI system that allows users to properly collect, store and interpret the logs in accordance with Article 12(1). | (f) a description of the mechanism included within the AI system that allows users to properly collect, store and interpret the logs, where relevant. | (f) a description of the mechanisms included within the AI system that allows users to properly collect, store and interpret the logs in accordance with Article 12(1), where relevant. | |
Article 13 – paragraph 3 – point eb (new) | (eb) The information shall be provided at least in the language of the country where the AI system is used. | The information should be provided at least in the language of the country where the AI system is used. | ||
Article 13 – paragraph 3 a (new) | 3a. In order to comply with the obligations laid down in this Article, providers and users shall ensure a sufficient level of AI literacy in line with Article 4b. | In order to fulfill the obligations outlined in this Article, it is necessary for both providers and users to ensure an adequate level of AI literacy, as stipulated in Article 4b. | ||
Article 14 | Article 14 Human oversight | Article 14 Human oversight | Article 14 Human oversight | Article 14 Human oversight |
Article 14 – paragraph 1 | 1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use. | 1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they be effectively overseen by natural persons as proportionate to the risks associated with those systems. Natural persons in charge of ensuring human oversight shall have sufficient level of AI literacy in accordance with Article 4b and the necessary support and authority to exercise that function, during the period in which the AI system is in use and to allow for thorough investigation after an incident. | 1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use. | 1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use. The oversight by natural persons should be proportionate to the risks associated with those systems. Natural persons in charge of ensuring human oversight shall have sufficient level of AI literacy in accordance with Article 4b and the necessary support and authority to exercise that function, and to allow for thorough investigation after an incident. |
Article 14 – paragraph 2 | 2. Human oversight shall aim at preventing or minimising the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular when such risks persist notwithstanding the application of other requirements set out in this Chapter. | 2. Human oversight shall aim at preventing or minimising the risks to health, safety, fundamental rights or environment that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular when such risks persist notwithstanding the application of other requirements set out in this Chapter and where decisions based solely on automated processing by AI systems produce legal or otherwise significant effects on the persons or groups of persons on which the system is to be used. | 2. Human oversight shall aim at preventing or minimising the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular when such risks persist notwithstanding the application of other requirements set out in this Chapter | 2. Human oversight shall aim at preventing or minimising the risks to health, safety, fundamental rights or environment that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular when such risks persist notwithstanding the application of other requirements set out in this Chapter and where decisions based solely on automated processing by AI systems produce legal or otherwise significant effects on the persons or groups of persons on which the system is to be used. |
Article 14 – paragraph 3 – introductory part | 3. Human oversight shall be ensured through either one or all of the following measures: | 3. Human oversight shall take into account the specific risks, the level of automation, and context of the AI system and shall be ensured through either one or all of the following types of measures: | 3. Human oversight shall be ensured through either one or all of the following types of measures: | 3. Human oversight shall take into account the specific risks, the level of automation, and context of the AI system and shall be ensured through either one or all of the following types of measures: |
Article 14 – paragraph 3 – point a | (a) identified and built, when technically feasible, into the high-risk AI system by the provider before it is placed on the market or put into service | (a) identified and built, when technically feasible, into the high-risk AI system by the provider before it is placed on the market or put into service | (a) measures identified and built, when technically feasible, into the high-risk AI system by the provider before it is placed on the market or put into service; | (a) measures identified and built, when technically feasible, into the high-risk AI system by the provider before it is placed on the market or put into service; |
Article 14 – paragraph 3 – point b | (b) identified by the provider before placing the high-risk AI system on the market or putting it into service and that are appropriate to be implemented by the user. | (b) identified by the provider before placing the high-risk AI system on the market or putting it into service and that are appropriate to be implemented by the user. | (b) measures identified by the provider before placing the high-risk AI system on the market or putting it into service and that are appropriate to be implemented by the user. | (b) measures identified by the provider before placing the high-risk AI system on the market or putting it into service and that are appropriate to be implemented by the user. |
Article 14 – paragraph 4 – introductory part | 4. The measures referred to in paragraph 3 shall enable the individuals to whom human oversight is assigned to do the following, as appropriate to the circumstances: | 4. For the purpose of implementing paragraphs 1 to 3, the high-risk AI system shall be provided to the user in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate to the circumstances: | 4. For the purpose of implementing paragraphs 1 to 3, the high-risk AI system shall be provided to the user in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate to the circumstances: | 4. In order to implement the measures referred to in paragraphs 1 to 3, the high-risk AI system shall be provided to the user in such a way that the individuals to whom human oversight is assigned are enabled, as appropriate and proportionate to the circumstances, to do the following: |
Article 14 – paragraph 4 – point a | (a) fully understand the capacities and limitations of the high-risk AI system and be able to duly monitor its operation, so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible; | (a) be aware of and sufficiently understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible; | (a) to understand the capacities and limitations of the high-risk AI system and be able to duly monitor its operation; | (a) understand and be fully aware of the capacities and limitations of the high-risk AI system, and be able to duly monitor its operation, so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible; |
Article 14 – paragraph 4 – point b | (b) remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (‘automation bias’), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons; | (b) remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (‘automation bias’), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons; | (b) to remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (‘automation bias’); | (b) remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (‘automation bias’), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons; |
Article 14 – paragraph 4 – point c | (c) be able to correctly interpret the high-risk AI system’s output, taking into account in particular the characteristics of the system and the interpretation tools and methods available; | (c) be able to correctly interpret the high-risk AI system’s output, taking into account in particular the characteristics of the system and the interpretation tools and methods available; | (c) to correctly interpret the high-risk AI system’s output, taking into account for example the interpretation tools and methods available; | (c) be able to correctly interpret the high-risk AI system’s output, taking into account in particular the characteristics of the system and the interpretation tools and methods available; |
Article 14 – paragraph 4 – point d | (d) be able to decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override or reverse the output of the high-risk AI system; | (d) be able to decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override or reverse the output of the high-risk AI system; | (d) to decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override or reverse the output of the high-risk AI system; | (d) have the ability to decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override or reverse the output of the high-risk AI system; |
Article 14 – paragraph 4 – point e | (e) be able to intervene on the operation of the high-risk AI system or interrupt the system through a “stop” button or a similar procedure. | (e) be able to intervene on the operation of the high-risk AI system or interrupt the system through a “stop” button or a similar procedure that allows the system to come to a halt in a safe state, except if the human interference increases the risks or would negatively impact the performance in consideration of generally acknowledged state-of-the-art. | (e) to intervene on the operation of the high-risk AI system or interrupt the system through a “stop” button or a similar procedure. | (e) be able to intervene on the operation of the high-risk AI system or interrupt the system through a “stop” button or a similar procedure that allows the system to come to a halt in a safe state, except if the human interference increases the risks or would negatively impact the performance in consideration of generally acknowledged state-of-the-art. |
Article 14 – paragraph 5 | 5. For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 shall be such as to ensure that, in addition, no action or decision is taken by the user on the basis of the identification resulting from the system unless this has been verified and confirmed by at least two natural persons. | 5. For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 shall be such as to ensure that, in addition, no action or decision is taken by the user on the basis of the identification resulting from the system unless this has been verified and confirmed by at least two natural persons with the necessary competence, training and authority. | 5. For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 shall be such as to ensure that, in addition, no action or decision is taken by the user on the basis of the identification resulting from the system unless this has been separately verified and confirmed by at least two natural persons. The requirement for a separate verification by at least two natural persons shall not apply to high-risk AI systems used for the purpose of law enforcement, migration, border control or asylum, in cases where Union or national law considers the application of this requirement to be disproportionate. | 5. For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 shall be such as to ensure that, in addition, no action or decision is taken by the user on the basis of the identification resulting from the system unless this has been separately verified and confirmed by at least two natural persons with the necessary competence, training and authority. The requirement for a separate verification by at least two natural persons shall not apply to high-risk AI systems used for the purpose of law enforcement, migration, border control or asylum, in cases where Union or national law considers the application of this requirement to be disproportionate. |
Article 15 | Article 15 Accuracy, robustness and cybersecurity | Article 15 Accuracy, robustness and cybersecurity | Article 15 Accuracy, robustness and cybersecurity | Article 15 Accuracy, robustness and cybersecurity |
Article 15 – paragraph 1 | 1. High-risk AI systems shall be designed and developed in such a way that they achieve, in the light of their intended purpose, an appropriate level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle. | 1. High-risk AI systems shall be designed and developed following the principle of security by design and by default. In the light of their intended purpose, they should achieve an appropriate level of accuracy, robustness, safety, and cybersecurity, and perform consistently in those respects throughout their lifecycle. Compliance with these requirements shall include implementation of state-of-the-art measures, according to the specific market segment or scope of application. | 1. High-risk AI systems shall be designed and developed in such a way that they achieve, in the light of their intended purpose, an appropriate level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle. | 1. High-risk AI systems shall be designed and developed following the principle of security by design and by default, in such a way that they achieve, in the light of their intended purpose, an appropriate level of accuracy, robustness, safety, and cybersecurity, and perform consistently in those respects throughout their lifecycle. Compliance with these requirements shall include implementation of state-of-the-art measures, according to the specific market segment or scope of application. |
Article 15 – paragraph 1a (new) | 1a. To address the technical aspects of how to measure the appropriate levels of accuracy and robustness set out in paragraph 1 of this Article, the AI Office shall bring together national and international metrology and benchmarking authorities and provide non-binding guidance on the matter as set out in Article 56, paragraph 2, point (a). | To address the technical aspects of measuring the appropriate levels of accuracy and robustness as outlined in paragraph 1 of this Article, the AI Office is tasked with convening national and international metrology and benchmarking authorities. The AI Office will provide non-binding guidance on this matter, as detailed in Article 56, paragraph 2, point (a). | ||
Article 15 – paragraph 1b (new) | 1b. To address any emerging issues across the internal market with regard to cybersecurity, the European Union Agency for Cybersecurity (ENISA) shall be involved alongside the European Artificial Intelligence Board as set out in Article 56, paragraph 2, point (b). | To address any potential emerging issues within the internal market concerning cybersecurity, the European Union Agency for Cybersecurity (ENISA) shall be actively involved. This involvement will be in collaboration with the European Artificial Intelligence Board as stipulated in Article 56, paragraph 2, point (b). | |
Article 15 – paragraph 2 | 2. The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be declared in the accompanying instructions of use. | 2. The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be declared in the accompanying instructions of use. The language used shall be clear, free of misunderstandings or misleading statements. | 2. The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be declared in the accompanying instructions of use. | 2. The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be declared in the accompanying instructions of use. The language used shall be clear, free of misunderstandings or misleading statements. |
Article 15 – paragraph 3 – subparagraph 1 | 3. High-risk AI systems shall be resilient as regards errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems. | 3. Technical and organisational measures shall be taken to ensure that high-risk AI systems shall be as resilient as possible regarding errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems. | 3. High-risk AI systems shall be resilient as regards errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems. | 3. Measures shall be implemented to ensure that high-risk AI systems are resilient as regards errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, particularly due to their interaction with natural persons or other systems. |
Article 15 – paragraph 3 – subparagraph 2 | The robustness of high-risk AI systems may be achieved through technical redundancy solutions, which may include backup or fail-safe plans. | The robustness of high-risk AI systems may be achieved by the appropriate provider with input from the user, where necessary, through technical redundancy solutions, which may include backup or fail-safe plans. | The robustness of high-risk AI systems may be achieved through technical redundancy solutions, which may include backup or fail-safe plans. | The robustness of high-risk AI systems may be achieved by the appropriate provider with input from the user, where necessary, through technical redundancy solutions, which may include backup or fail-safe plans. |
Article 15 – paragraph 3 – subparagraph 3 | High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way to ensure that possibly biased outputs due to outputs used as an input for future operations (‘feedback loops’) are duly addressed with appropriate mitigation measures. | High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way to ensure that possibly biased outputs influencing input for future operations (‘feedback loops’) and malicious manipulation of inputs used in learning during operation are duly addressed with appropriate mitigation measures. | High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way as to eliminate or reduce as far as possible the risk of possibly biased outputs influencing input for future operations (‘feedback loops’) and to ensure that any such feedback loops are duly addressed with appropriate mitigation measures. | High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way as to ensure that the risks of possibly biased outputs due to outputs used as an input for future operations (‘feedback loops’) and of malicious manipulation of inputs used in learning during operation are duly addressed, eliminated or reduced as far as possible with appropriate mitigation measures. |
Article 15 – paragraph 4 – subparagraph 1 | High-risk AI systems shall be resilient as regards attempts by unauthorised third parties to alter their use or performance by exploiting the system vulnerabilities. | High-risk AI systems shall be resilient as regards to attempts by unauthorised third parties to alter their use, behaviour, outputs or performance by exploiting the system vulnerabilities. | 4. High-risk AI systems shall be resilient as regards attempts by unauthorised third parties to alter their use or performance by exploiting the system vulnerabilities. | High-risk AI systems shall be resilient as regards attempts by unauthorised third parties to alter their use, behaviour, outputs or performance by exploiting the system vulnerabilities. |
Article 15 – paragraph 4 – subparagraph 2 | The technical solutions aimed at ensuring the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks. | The technical solutions aimed at ensuring the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks. | The technical solutions aimed at ensuring the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks. | The technical solutions aimed at ensuring the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks. |
Article 15 – paragraph 4 – subparagraph 3 | The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent and control for attacks trying to manipulate the training dataset (‘data poisoning’), inputs designed to cause the model to make a mistake (‘adversarial examples’), or model flaws. | The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training dataset (‘data poisoning’), or pre-trained components used in training (‘model poisoning’) , inputs designed to cause the model to make a mistake (‘adversarial examples’ or ‘model evasion’), confidentiality attacks or model flaws, which could lead to harmful decision-making. | The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent and control for attacks trying to manipulate the training dataset (‘data poisoning’), inputs designed to cause the model to make a mistake (‘adversarial examples’), or model flaws. | The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training dataset ('data poisoning'), inputs designed to cause the model to make a mistake ('adversarial examples'), pre-trained components used in training ('model poisoning'), confidentiality attacks or model flaws, which could lead to harmful decision-making. |
CHAPTER 3 | CHAPTER 3 OBLIGATIONS OF PROVIDERS AND USERS OF HIGH-RISK AI SYSTEMS AND OTHER PARTIES | OBLIGATIONS OF PROVIDERS AND DEPLOYERS OF HIGH-RISK AI SYSTEMS AND OTHER PARTIES | OBLIGATIONS OF PROVIDERS AND USERS OF HIGH-RISK AI SYSTEMS AND OTHER PARTIES | OBLIGATIONS OF PROVIDERS, DEPLOYERS, AND USERS OF HIGH-RISK AI SYSTEMS AND OTHER PARTIES |
Article 16 | Article 16 Obligations of providers of high-risk AI systems | Article 16 Obligations of providers and deployers of high-risk AI systems and other parties | Article 16 Obligations of providers of high-risk AI systems | Article 16 Obligations of providers and deployers of high-risk AI systems |
Article 16 – paragraph 1 – introductory part | Providers of high-risk AI systems shall: | Providers of high-risk AI systems shall: | Providers of high-risk AI systems shall: | None |
Article 16 – paragraph 1 – point a | (a) ensure that their high-risk AI systems are compliant with the requirements set out in Chapter 2 of this Title; | (a) ensure that their high-risk AI systems are compliant with the requirements set out in Chapter 2 of this Title before placing them on the market or putting them into service; | (a) ensure that their high-risk AI systems are compliant with the requirements set out in Chapter 2 of this Title; | (a) ensure that their high-risk AI systems are compliant with the requirements set out in Chapter 2 of this Title before placing them on the market or putting them into service; |
Article 16 – paragraph 1 – point aa (new) | (aa) indicate their name, registered trade name or registered trade mark, and their address and contact information on the high-risk AI system or, where that is not possible, on its accompanying documentation, as appropriate; | (aa) indicate their name, registered trade name or registered trade mark, the address at which they can be contacted on the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, as applicable; | (aa) indicate their name, registered trade name or registered trade mark, and the address and contact information at which they can be contacted, on the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, as appropriate; |
Article 16 – paragraph 1 – point ab (new) | (ab) ensure that natural persons to whom human oversight of high-risk AI systems is assigned are specifically made aware of the risk of automation or confirmation bias; | Ensure that natural persons, who are assigned human oversight of high-risk AI systems, are specifically informed about the risk of automation or confirmation bias. | ||
Article 16 – paragraph 1 – point ac (new) | (ac) provide specifications for the input data, or any other relevant information in terms of the datasets used, including their limitation and assumptions, taking into account the intended purpose and the foreseeable and reasonably foreseeable misuses of the AI system; | Provide specifications for the input data, or any other relevant information in terms of the datasets used, including their limitations and assumptions, taking into account the intended purpose and the foreseeable and reasonably foreseeable misuses of the AI system. | ||
Article 16 – paragraph 1 – point b | (b) have a quality management system in place which complies with Article 17; | (b) have a quality management system in place which complies with Article 17; | (b) have a quality management system in place which complies with Article 17; | (b) have a quality management system in place which complies with Article 17; |
Article 16 – paragraph 1 – point c | (c) draw-up the technical documentation of the high-risk AI system; | (c) draw-up and keep the technical documentation of the high-risk AI system referred to in Article 11; | (c) keep the documentation referred to in Article 18; | (c) draw-up, keep and maintain the technical documentation of the high-risk AI system referred to in Article 11 and Article 18; |
Article 16 – paragraph 1 – point d | (d) when under their control, keep the logs automatically generated by their high-risk AI systems; | (d) when under their control, keep the logs automatically generated by their high-risk AI systems that are required for ensuring and demonstrating compliance with this Regulation, in accordance with Article 20; | (d) when under their control, keep the logs automatically generated by their high-risk AI systems as referred to in Article 20; | (d) when under their control, keep the logs automatically generated by their high-risk AI systems, as required for ensuring and demonstrating compliance with this Regulation, in accordance with Article 20; |
Article 16 – paragraph 1 – point e | (e) ensure that the high-risk AI system undergoes the relevant conformity assessment procedure, prior to its placing on the market or putting into service; | (e) ensure that the high-risk AI system undergoes the relevant conformity assessment procedure, prior to its placing on the market or putting into service, in accordance with Article 43; | (e) ensure that the high-risk AI system undergoes the relevant conformity assessment procedure as referred to in Article 43, prior to its placing on the market or putting into service; | (e) ensure that the high-risk AI system undergoes the relevant conformity assessment procedure, as referred to in Article 43, prior to its placing on the market or putting into service; |
Article 16 – paragraph 1 – point ea (new) | (ea) draw up an EU declaration of conformity in accordance with Article 48; | Draw up an EU declaration of conformity in accordance with Article 48. | ||
Article 16 – paragraph 1 – point eb (new) | (eb) affix the CE marking to the high-risk AI system to indicate conformity with this Regulation, in accordance with Article 49; | The high-risk AI system should be affixed with the CE marking to indicate conformity with this Regulation, as per the stipulations of Article 49. | ||
Article 16 – paragraph 1 – point f | (f) comply with the registration obligations referred to in Article 51; | (f) comply with the registration obligations referred to in Article 51; | (f) comply with the registration obligations referred to in Article 51(1); | (f) comply with the registration obligations referred to in Article 51(1); |
Article 16 – paragraph 1 – point g | (g) take the necessary corrective actions, if the high-risk AI system is not in conformity with the requirements set out in Chapter 2 of this Title; | (g) take the necessary corrective actions as referred to in Article 21 and provide information in that regard; | (g) take the necessary corrective actions as referred to in Article 21, if the high-risk AI system is not in conformity with the requirements set out in Chapter 2 of this Title; | (g) take the necessary corrective actions, as referred to in Article 21, if the high-risk AI system is not in conformity with the requirements set out in Chapter 2 of this Title and provide information in that regard; |
Article 16 – paragraph 1 – point h | (h) inform the national competent authorities of the Member States in which they made the AI system available or put it into service and, where applicable, the notified body of the non-compliance and of any corrective actions taken; | deleted | (h) inform the relevant national competent authority of the Member States in which they made the AI system available or put it into service and, where applicable, the notified body of the non-compliance and of any corrective actions taken; | (h) inform the relevant national competent authorities of the Member States in which they made the AI system available or put it into service and, where applicable, the notified body of the non-compliance and of any corrective actions taken; |
Article 16 – paragraph 1 – point i | (i) to affix the CE marking to their high-risk AI systems to indicate the conformity with this Regulation in accordance with Article 49; | deleted | (i) to affix the CE marking to their high-risk AI systems to indicate the conformity with this Regulation in accordance with Article 49; | (i) to affix the CE marking to their high-risk AI systems to indicate the conformity with this Regulation in accordance with Article 49; |
Article 16 – paragraph 1 – point j | (j) upon request of a national competent authority, demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title. | (j) upon a reasoned request of a national supervisory authority, demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title. | (j) upon request of a national competent authority, demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title. | (j) upon a reasoned request of a national competent authority or supervisory authority, demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title. |
Article 16 – paragraph 1 – point ja (new) | (ja) ensure that the high-risk AI system complies with accessibility requirements. | Ensure that the high-risk AI system adheres to accessibility requirements. | ||
Article 17 | Article 17 Quality management system | Article 17 Quality management system | Article 17 Quality management system | Article 17 Quality management system |
Article 17 – paragraph 1 – introductory part | 1. Providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions, and shall include at least the following aspects: | 1. Providers of high-risk AI systems shall have a quality management system in place that ensures compliance with this Regulation. It shall be documented in a systematic and orderly manner in the form of written policies, procedures or instructions, and can be incorporated into an existing quality management system under Union sectoral legislative acts. It shall include at least the following aspects: | 1. Providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions, and shall include at least the following aspects: | 1. Providers of high-risk AI systems shall establish a quality management system that ensures compliance with this Regulation. This system shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions. It can be incorporated into an existing quality management system under Union sectoral legislative acts. The system shall include at least the following aspects: |
Article 17 – paragraph 1 – point a | (a) a strategy for regulatory compliance, including compliance with conformity assessment procedures and procedures for the management of modifications to the high-risk AI system; | deleted | (a) a strategy for regulatory compliance, i |