US NIST to develop AI risk management framework
The US National Institute of Standards and Technology (NIST) has launched a call for public input to inform the development of an artificial intelligence (AI) risk management framework. The framework is intended to serve as guidance to help technology developers, users, and evaluators improve the trustworthiness of AI systems. Organisations and individuals involved in developing and using AI systems are invited to comment on how to address the full scope of AI risks and how a framework for managing those risks could be constructed. Issues on which input is requested include: (a) challenges in improving the management of AI-related risks; (b) how organisations define and manage characteristics and principles of AI trustworthiness; (c) the extent to which AI risks are incorporated into organisations’ overarching risk management systems; and (d) AI risk management standards, frameworks, models, methodologies, tools, guidelines, best practices, and principles that NIST should consider to ensure that the framework is aligned with other efforts.