NSA warns AI poses new risks for operational technology
Experts stress that pre-existing OT infrastructure issues must be resolved before integrating AI into industrial systems.
The US National Security Agency (NSA), together with international partners including the Australian Cyber Security Centre (ACSC), has issued guidance on the secure integration of AI into operational technology (OT).
The guidance, Principles for the Secure Integration of AI in OT, warns that while AI can optimise critical infrastructure operations, it also introduces new risks in safety-critical environments. Although aimed at OT administrators, the guidance also highlights issues relevant to IT networks.
AI is increasingly deployed in sectors such as energy, water treatment, healthcare, and manufacturing to automate processes and enhance efficiency.
The NSA’s guidance, however, flags several potential threats, including adversarial prompt injection, data poisoning, AI drift, and reduced explainability, all of which can compromise safety and compliance.
Over-reliance on AI may also lead to human de-skilling, cognitive overload, and distraction, while AI hallucinations raise concerns about reliability in safety-critical settings.
Experts emphasise that AI cannot currently be trusted to make independent safety decisions in OT networks, where the margin for error is far smaller than in standard IT systems.
Sam Maesschalck, an OT engineer, noted that introducing AI without first addressing pre-existing infrastructure issues, such as insufficient data feeds or incomplete asset inventories, could undermine both security and operational efficiency.
The guidance aims to help organisations evaluate AI risks, clarify accountability, and prepare for potential misbehaviour, underlining the importance of careful planning before deploying AI in operationally critical environments.
