UK’s NCSC urges caution on using AI to detect software vulnerabilities

AI can help identify vulnerabilities, but the NCSC says organisations still need patching, asset management, and human expertise.

The UK's National Cyber Security Centre (NCSC) has warned organisations not to rush into using AI models to find software vulnerabilities without first considering security, legal, operational, and resourcing risks.

In guidance signed by Ruth C, Head of Vulnerability Management Group at the NCSC, the agency says organisations may feel pressure to use new AI models for vulnerability discovery, but should first ask what they are trying to achieve and whether AI is the best way to improve security.

The NCSC stresses that finding vulnerabilities does not automatically improve an organisation’s security and could make it worse if teams lack a process to manage, prioritise, and fix the issues that AI tools identify. It says basic cyber hygiene, including patching known vulnerabilities and controlling unauthorised access, is still more important for most organisations than focusing on zero-days.

The guidance also urges organisations to prioritise exploitable vulnerabilities rather than simply counting how many issues have been found. It notes that more than 40,000 vulnerabilities were assigned CVEs in 2025, while CISA’s Known Exploited Vulnerabilities catalogue tracked about 400 newly exploited vulnerabilities and around 40 that were zero-days when first exploited.
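The ranking the NCSC describes, treating known-exploited vulnerabilities as the urgent subset rather than weighting all findings equally, can be sketched in a few lines. This is an illustrative example only: the CVE IDs and scores below are made up, and `prioritise` is a hypothetical helper, not an NCSC or CISA tool. In practice the exploited set would be loaded from a feed such as CISA's KEV catalogue.

```python
def prioritise(findings, known_exploited):
    """Split scanner findings into a fix-first bucket (known exploited)
    and a backlog, sorting each by CVSS score, highest first."""
    exploited = [f for f in findings if f["cve"] in known_exploited]
    backlog = [f for f in findings if f["cve"] not in known_exploited]
    by_severity = lambda f: -f["cvss"]
    return sorted(exploited, key=by_severity), sorted(backlog, key=by_severity)

# Illustrative output of an AI-assisted scan (fabricated CVE IDs and scores).
findings = [
    {"cve": "CVE-2025-0001", "cvss": 9.8},
    {"cve": "CVE-2025-0002", "cvss": 5.4},
    {"cve": "CVE-2025-0003", "cvss": 7.1},
]
# In practice, parsed from the CISA KEV catalogue rather than hard-coded.
known_exploited = {"CVE-2025-0003"}

fix_first, backlog = prioritise(findings, known_exploited)
print([f["cve"] for f in fix_first])  # the exploited vulnerability, despite its lower CVSS score
```

The point the sketch makes is the NCSC's: a medium-severity bug that is being exploited in the wild outranks a higher-scored one that is not.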

The NCSC highlights several risks associated with using AI for vulnerability discovery, including information leakage, infrastructure security, sandboxing, production-environment access, permissions granted to large language models, data retention policies, and legal compliance. It also advises organisations using hosted models to consider the physical location and legal jurisdictions that apply to them.

The guidance recommends starting with the external attack surface and verifying results through both AI and human review. It says keeping pace with frontier AI cyber developments will almost certainly be critical to cyber resilience over the next decade, but adds that organisations should invest in people as well as tools, stating that AI models accelerate the skills of cybersecurity staff rather than replacing them.

The NCSC also says organisations should understand how everything they develop or use is patched, with good asset management and dependency management described as crucial foundations for cyber resilience.
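The asset-management foundation the NCSC points to starts with knowing what is actually installed. As a minimal sketch (assuming a Python environment; `importlib.metadata` is in the standard library), an inventory of installed packages and versions can be built for comparison against security advisories:

```python
from importlib.metadata import distributions

def installed_inventory():
    """Return a {package_name: version} map for the current Python
    environment -- a minimal dependency inventory that could be
    checked against vulnerability advisories."""
    return {
        d.metadata["Name"]: d.version
        for d in distributions()
        if d.metadata["Name"]  # skip distributions with broken metadata
    }

inventory = installed_inventory()
print(f"{len(inventory)} packages installed")
```

A real deployment would feed an inventory like this (or a full SBOM covering system packages and transitive dependencies) into a patch-management process, rather than stopping at a printout.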
