US AI Safety Institute Consortium formed to enhance AI safety standards

The US introduced the AI Safety Institute Consortium (AISIC), which aims to bring together AI creators, users, academics, researchers, and civil society organisations to support the development and deployment of safe and reliable AI.


US Secretary of Commerce Gina Raimondo has announced the establishment of the US AI Safety Institute Consortium (AISIC), a collaborative effort to advance the development and deployment of safe and trustworthy AI. The consortium, housed under the US AI Safety Institute (USAISI), aligns with President Biden’s Executive Order, emphasising key priorities such as red-teaming guidelines, capability evaluations, risk management, safety and security, and watermarking synthetic content.

The AISIC counts more than 200 companies and stakeholders across the AI industry among its members, including prominent technology corporations such as Google, Amazon, Meta, OpenAI, IBM, and Microsoft, all united in their commitment to advancing the safe development and deployment of AI technologies. The AISIC aims to support these goals and to ensure that America remains at the forefront of the AI tech revolution, bringing together leaders from industry, civil society, and academia to address challenges and establish measurements and standards for responsible AI development.

Bruce Reed, White House Deputy Chief of Staff, stressed the need for swift and coordinated efforts across government, private sector, and academia to keep pace with AI advancements. The AISIC, as a critical forum, provides a platform for collaboration among various stakeholders to harness the potential of AI while effectively managing associated risks.

Why does it matter?

The companies and organisations gathered by the AISIC actively create and utilise advanced AI systems and hardware, contributing to a foundational understanding of AI's transformative impact on society. With representation from professions deeply engaged in AI usage, the consortium forms the largest collection of test and evaluation teams established to date.

The primary focus of the AISIC is to lay the groundwork for a new measurement science in AI safety. By bringing together state and local governments, non-profits, and organisations from like-minded nations, the consortium aims to develop safety tools that are interoperable, effective, and global in reach. These cooperative efforts reflect a concerted push towards responsible AI development and the establishment of robust safety standards in the evolving landscape of AI.