UK AI Safety Institute unveils open-source platform for global AI safety evaluations

The UK AI Safety Institute has launched a new open testing platform to help improve AI safety evaluations.

The UK AI Safety Institute has released ‘Inspect,’ a new AI safety evaluation platform designed to accelerate the safe innovation of AI models globally.

To help standardize and raise the quality of AI safety evaluations globally, the new platform is open and available to the international AI community. Inspect includes a software library that lets testers from various sectors, including developers, start-ups, and researchers, assess the capabilities of individual AI models. Models are evaluated on core knowledge, reasoning, and autonomous capabilities. Inspect is composed of three basic elements: datasets, which provide samples for evaluation testing; solvers, which perform the tests; and scorers, which evaluate the solvers’ work and convert test results into usable metrics.
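The three-part design described above can be sketched in plain Python. This is a minimal conceptual illustration of the dataset/solver/scorer pattern only; the names, signatures, and stub model call below are illustrative assumptions, not the actual Inspect API.

```python
# Conceptual sketch of Inspect's three building blocks: dataset,
# solver, scorer. All names here are illustrative, not the real API.
from dataclasses import dataclass


@dataclass
class Sample:
    input: str   # prompt presented to the model under test
    target: str  # expected answer for scoring


def dataset() -> list[Sample]:
    # A dataset provides samples for evaluation testing.
    return [Sample(input="What is 2 + 2?", target="4")]


def solver(sample: Sample) -> str:
    # A solver performs the test. Here a stub stands in for a
    # call to the AI model being evaluated.
    return "4"


def scorer(samples: list[Sample], solve) -> float:
    # A scorer evaluates the solver's work and converts test
    # results into a usable metric (here, simple accuracy).
    results = [solve(s) == s.target for s in samples]
    return sum(results) / len(results)


accuracy = scorer(dataset(), solver)
print(f"accuracy: {accuracy:.2f}")  # accuracy: 1.00
```

In the real platform each element is pluggable, so the same dataset can be run against different solvers and scored with different metrics.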

Why does it matter?

After Prime Minister Rishi Sunak established the world’s first AI Safety Institute at the inaugural AI Safety Summit, held at Bletchley Park last November, the UK is now pushing for global collaboration on safety testing of frontier AI models.

The Inspect platform is released under an open-source MIT license, making it accessible for the entire AI community to use and adapt. The government-backed institute wants to spearhead the development of global AI safety evaluation requirements, aiming for a standardized methodology.

The release of Inspect is a significant step in the development of AI safety, with the platform’s open accessibility designed to promote international collaboration and knowledge sharing in AI safety research.

Ian Hogarth, Chair of the UK AI Safety Institute, highlighted that the project draws motivation and inspiration from other major open-source AI efforts. Hogarth hopes that “Inspect will not only be used for model safety tests but also contribute to developing high-quality evaluations across the board.”

The move is part of the UK’s efforts to become a global hub for AI safety research and development, ultimately contributing to the design and deployment of secure and beneficial AI models for all.