AI safeguards prove hard to define

The US AI safety chief warns of tough challenges.

Policymakers seeking to regulate AI face an uphill battle as the science evolves faster than safeguards can be devised. Elizabeth Kelly, director of the US Artificial Intelligence Safety Institute, highlighted challenges such as “jailbreaks” that bypass AI security measures and the ease of tampering with digital watermarks meant to identify AI-generated content. Speaking at the Reuters NEXT conference, Kelly acknowledged the difficulty in establishing best practices without clear evidence of their effectiveness.

The US AI Safety Institute, launched under the Biden administration, is collaborating with academic, industry, and civil society partners to address these issues. Kelly emphasised that AI safety transcends political divisions, calling it a “fundamentally bipartisan issue” amid the upcoming transition to Donald Trump’s presidency. The institute recently hosted a global meeting in San Francisco, bringing together safety bodies from 10 countries to develop interoperable tests for AI systems.

Kelly described the gathering as a convergence of technical experts focused on practical solutions rather than typical diplomatic formalities. While the challenges remain significant, the emphasis on global cooperation and expertise offers a promising path forward.