Singapore launches AI testing framework

Singapore introduced A.I. Verify, a testing framework for responsible AI development, to help companies verify their systems' performance against principles of transparency, safety, accountability, and human oversight. The framework relies on self-testing and aims to foster public trust in AI. Companies including Google, Meta, and Microsoft have already tested it or provided feedback.

Companies that want to demonstrate responsible AI development can now do so using Singapore's recently launched A.I. Verify, a governance testing framework and toolkit being piloted in the country.

The framework defines technical tests and process checks that allow AI developers and owners to verify the performance of their AI systems against a set of principles, including transparency, safety and resilience, accountability, and human agency and oversight. While it relies on self-testing and does not guarantee that tested AI systems are completely safe, the framework is expected to help foster public trust in AI.

Google, Meta, and Microsoft are among a handful of companies that have already tested the framework or provided feedback on it.