Amazon, Apple, Google, Meta, Microsoft, NVIDIA, and OpenAI join new 200-strong AI safety consortium unveiled by the White House


The Biden-Harris administration has announced the creation of the US Artificial Intelligence (AI) Safety Institute Consortium (AISIC). Operating under the umbrella of the US AI Safety Institute (USAISI) at the National Institute of Standards and Technology (NIST), the initiative brings together more than 200 entities to address the risks associated with AI's development and deployment. The consortium is led by major industry players such as OpenAI, Google, Microsoft, Meta, Apple, Amazon, Intel, NVIDIA, and Anthropic, and also includes other firms, academic institutions, industry researchers, civil society organizations, and government agencies.

Why does it matter?
US Commerce Secretary Gina Raimondo highlighted the crucial role of the US government in setting standards and developing tools for AI safety. The AISIC aims to work on priority actions outlined in President Biden’s landmark executive order, focusing on AI capability evaluations, risk management, safety and security, and watermarking synthetic content. To reduce the risks AI poses to consumers, workers, minority groups, and national security, the White House has mandated federal agencies to set guidelines for testing AI systems and managing associated risks.
Despite the Biden administration's efforts to implement safeguards, Congress has yet to pass any new AI legislation, despite multiple high-level hearings and gatherings on AI risks.

The AISIC is the largest collection of test and evaluation teams ever assembled, and its task is to build the framework for a new measurement science in AI safety.