UK NAO guide sets AI oversight questions for public bodies
Public sector organisations should not assume AI productivity gains will translate into organisation-wide savings, the NAO warns.
The UK National Audit Office has published a good practice guide for public sector organisations using AI, setting out questions for audit and risk assurance committees overseeing the planning, deployment and scaling of the technology.
The guide draws on NAO findings, the UK government’s AI Playbook and lessons from digital transformation programmes. It advises committees to assess whether organisations are clear on why they are using AI, what risks they need to manage and how responsible adoption will be assured. The NAO says the guide will evolve as AI continues to develop.
AI is already being used across government for fraud and error detection, imaging, document processing, operational management, research and monitoring, text generation, virtual assistants and coding support. The NAO notes that several of these uses may involve personal data, making governance, assurance and data protection especially important.
The guide warns that productivity gains from AI should not be assumed. AI may speed up individual tasks, but those gains do not automatically translate into organisation-wide savings, particularly where work still depends on approvals, governance processes or human judgement.
The NAO also highlights external risks from AI use, including increased demand on public services, more low-quality or repeated submissions, higher fraud risks, cyberattacks and attempts to extract sensitive data. Audit committees are advised to ensure organisations can anticipate, monitor and mitigate such risks.
Key areas for oversight include innovation, AI strategy, leadership and skills, data, security, pilots, scaling, guardrails and workforce culture. The guide says strong digital and AI strategies should be business-led, aligned with organisational priorities, and backed by leadership support, clear governance, funding and measurable objectives.
Data quality, accessibility and governance are presented as foundational risks, with weak data affecting model performance, bias, explainability and reliability. The NAO also warns that AI can increase exposure to operational and security risks, including data breaches, model manipulation, supply-chain risk and resilience problems.
Recommended guardrails include acceptable use policies, data protection controls, bias testing, human oversight of automated decisions and clear accountability for AI outcomes. The guide also urges organisations to plan for workforce changes, including new skills needs, role redesign, AI literacy, risks to entry-level learning, overreliance on automation and loss of institutional knowledge.
Why does it matter?
The guide shows that public-sector AI adoption is becoming an audit, governance and accountability issue, not only a technology project. By focusing on oversight questions, the NAO is pushing public bodies to test whether AI projects have clear objectives, reliable data, measurable benefits, security controls and safeguards for staff and citizens before they are scaled.
