US NTIA recommends policy reforms to foster accountability and trustworthiness in AI systems

On 27 March 2024, the National Telecommunications and Information Administration (NTIA) issued policy recommendations aimed at ensuring that AI systems work as claimed and do not cause harm.


The NTIA’s AI Accountability Policy Report advocates greater transparency in AI systems, independent evaluations, and consequences for imposing unacceptable risks or making unfounded claims. According to the NTIA press release, ‘Given their influence and ubiquity, these systems must be subject to security and operational mechanisms that mitigate risk and warrant stakeholder trust that they will not cause harm.’

Why does it matter?


Following President Biden’s executive order on AI in October 2023 and the administration’s efforts to leverage AI’s potential while mitigating its risks, the NTIA issued eight sets of policy recommendations to support safe, secure, and trustworthy AI innovation. They include guidance and standards for audits and auditors, support for researchers in the form of people and tools, and regulation through independent audits and regulatory inspections of high-risk AI systems, strengthening the federal government’s capacity to address AI-related risks and practices across sectors of the economy.


The report also calls for the federal government to invest in the resources necessary for independent assessment of AI systems, including by further supporting the newly established AI Safety Institute housed at the National Institute of Standards and Technology (NIST) and by creating and funding a National AI Research Resource that provides datasets for testing equity and efficacy, the computing and cloud infrastructure required to perform stringent, independent evaluations, and a workforce development programme.


The NTIA will also collaborate with private-sector partners to develop accountability mechanisms and with other federal agencies to support the policy recommendations included in the report.