ISO/IEC 42001: a new global standard for responsible AI management systems

The standard covers key components such as transparency, explainability, and autonomy, and it includes various requirements for managing AI systems effectively.

On December 18, 2023, ISO/IEC 42001 on artificial intelligence (AI) management systems was published to assist organizations in developing a strong AI governance framework.

ISO/IEC 42001 is ‘an international standard that provides a framework for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS).’
It covers key components such as transparency, explainability, and autonomy, with provisions spanning ‘leadership, planning, support, operation, performance evaluation, and continual improvement.’
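
Purely as an illustration (and not drawn from the standard’s text), the sketch below shows one hypothetical way an organization might track its coverage of these clause areas while building out an AIMS; all names in the example are invented for this purpose.

```python
from dataclasses import dataclass, field

# Illustrative only: the high-level clause areas mentioned above, not an
# official or complete representation of ISO/IEC 42001 requirements.
AIMS_AREAS = [
    "leadership",
    "planning",
    "support",
    "operation",
    "performance evaluation",
    "continual improvement",
]


@dataclass
class AIMSChecklist:
    """Hypothetical tracker for which clause areas have been addressed."""
    covered: dict = field(
        default_factory=lambda: {area: False for area in AIMS_AREAS}
    )

    def mark_covered(self, area: str) -> None:
        """Record that an area has been addressed."""
        if area not in self.covered:
            raise ValueError(f"Unknown area: {area}")
        self.covered[area] = True

    def gaps(self) -> list:
        """Return the areas not yet addressed."""
        return [area for area, done in self.covered.items() if not done]


checklist = AIMSChecklist()
checklist.mark_covered("leadership")
checklist.mark_covered("planning")
print(checklist.gaps())
```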

When deploying AI systems, ISO/IEC 42001 highlights ethical principles and values such as fairness, non-discrimination, and respect for privacy. It is part of a broader set of standards aimed at governing best practices for trustworthy AI deployment and improvement, and it serves as the foundation for external certification and auditing of AI management systems, for which the forthcoming ISO/IEC 42006 standard will set out requirements for certification and audit bodies. ISO/IEC 42001 is also scalable, making it relevant for organizations of all sizes and sectors.

Why does it matter?

ISO/IEC 42001 is the first international management system standard for the safe and reliable development and implementation of AI, aiming to help businesses and organizations establish a robust AI governance framework. The standard applies to companies of all types in any industry and is currently the only certifiable AIMS standard.
It is comparable to ISO 9001 on quality management and ISO/IEC 27001 on information security, providing best practices, rules, definitions, and guidance for managing the risks and operational aspects of AI systems.
The new ISO standard is designed to promote the development and use of AI systems that are trustworthy, transparent, and accountable.
It will help organizations identify and mitigate risks related to AI development and deployment, prioritize human well-being, safety, and user experience in AI design, and comply with relevant legislation, regulations, data protection rules, and obligations towards stakeholders.

To address some of the concerns about AI, governments around the world are racing to propose rules and regulations to govern its use, such as the Biden executive order on safe, secure, and trustworthy development and use of AI in the US and the European Union’s AI Act. The new ISO/IEC standard specifies the requirements for a certifiable AI management system framework, allowing organizations to maximize the benefits of AI while assuring stakeholders that their systems have been established and are being managed ethically. It provides a comprehensive framework for managing AI systems, enables global interoperability, and lays the groundwork for responsible AI development and implementation, making it an essential tool for organizations seeking to develop and deploy ethical and trustworthy AI models.