Singapore opens global sandbox to test AI responsibly
Eleven principles guide evaluations, mapped to NIST and ISO frameworks for consistency.
Singapore has launched a global AI assurance sandbox, led by the Infocomm Media Development Authority (IMDA) and the AI Verify Foundation. Minister Josephine Teo announced the initiative during Singapore's Personal Data Protection Week, emphasising rapid, cross-border testing. The programme opened in July 2025 for real-world pilots.
The sandbox is organised around eleven governance principles mapped to international frameworks, including NIST's AI Risk Management Framework, ISO/IEC 42001, and the Hiroshima Process Code of Conduct. Companies can test their systems against these shared benchmarks rather than fragmented national rules.
Organisers aim to lower testing barriers, connect AI deployers with specialist evaluators, and grow the market for assurance services. Earlier sandboxes for privacy-enhancing technologies (PETs) helped firms such as Mastercard coordinate controls across jurisdictions, and Singapore also participates in the International Network of AI Safety Institutes.
Countries worldwide are setting up AI sandboxes and partnerships: the Datasphere Initiative reports dozens of such initiatives, with Europe building complementary pathways. Singapore's sandbox is intended to inform future standards and policymaking.
