Joint cybersecurity agencies publish guidance on secure adoption of agentic AI
The guidance outlines cybersecurity risks, vulnerabilities, and best practices for the adoption of agentic AI systems in organisational IT environments.
Cybersecurity agencies from Australia, Canada, New Zealand, the United Kingdom and the United States have published joint guidance on the secure adoption of agentic AI systems in organisational IT environments.
The guidance is intended to help organisations design, develop, deploy and operate agentic AI systems, and to make informed risk assessments and mitigations. It primarily focuses on large-language-model-based agentic AI systems.
The publication examines threats to and vulnerabilities within agentic AI systems, including risks introduced through system components, integrations and downstream use. It also considers broader risks arising from agentic AI behaviour in IT environments.
The guidance covers wider agentic AI security considerations, specific security risks, best practices for securing agentic AI systems and steps organisations can take to prepare for emerging and future threats.
It was co-authored by the Australian Signals Directorate’s Australian Cyber Security Centre, the US Cybersecurity and Infrastructure Security Agency, the US National Security Agency, the Canadian Centre for Cyber Security, the New Zealand National Cyber Security Centre and the UK National Cyber Security Centre.
Why does it matter?
Agentic AI systems can act with greater autonomy than conventional software tools, interacting with other systems, using integrations and taking independent steps towards defined goals. This creates new cybersecurity risks when such tools are embedded in organisational IT environments. The joint guidance signals that major cyber agencies are treating agentic AI as an emerging operational security issue, not only as a question of AI policy or experimentation.
