US and UK AI Safety Institutes partner for advanced model testing

The United States and the United Kingdom launched a new alliance on the science of artificial intelligence (AI) safety amid mounting concerns about the next generation of systems.


The US and UK have announced a partnership on the science of AI safety, with a particular focus on developing tests for the most advanced AI models.

US Commerce Secretary Gina Raimondo and British Technology Secretary Michelle Donelan signed a memorandum of understanding in Washington to collaborate on advanced AI model testing, following commitments made at the AI Safety Summit at Bletchley Park last November. Under the joint programme, the UK and US AI Safety Institutes will work together on research, safety evaluations, and guidance for AI safety.

Why does it matter?

The partnership aims to accelerate the work of both institutes across the full spectrum of AI risks, from national security concerns to broader societal issues. The UK and US plan to conduct at least one joint testing exercise on a publicly accessible model and are considering staff exchanges between the institutes. The two countries are among several that have established public AI safety institutions.

In October, British Prime Minister Rishi Sunak announced that the UK's AI Safety Institute would investigate and test new AI models. The US announced in November that it was establishing its own institute to assess threats from frontier AI models, and in February, Secretary Raimondo launched the AI Safety Institute Consortium (AISIC), a partnership with 200 firms and organisations. The US-UK partnership is intended to strengthen the special relationship between the two countries and to contribute to the global effort to ensure the safe development of AI.