UK NCSC: AI will escalate the frequency and impact of cyberattacks

The NCSC reveals that AI is already being used in malicious cyber activities and projects a substantial increase in the frequency and impact of cyberattacks, particularly ransomware, over the near term.


The UK’s National Cyber Security Centre (NCSC), a division of GCHQ, has issued an assessment of the near-term impact of AI on the cyber threat. The findings indicate that AI is already being used in malicious cyber activities and is poised to significantly escalate the frequency and impact of cyberattacks, including ransomware, in the short term.

The report highlights AI’s role in lowering the barrier to entry for less skilled cyber actors, such as novice criminals, hackers-for-hire, and hacktivists, enabling them to carry out more effective access and information-gathering operations. This heightened accessibility, coupled with AI’s enhanced victim-targeting capabilities, is expected to contribute to the global ransomware threat over the next two years.

According to the report, AI is expected to play a supporting role in areas such as malware and exploit development, vulnerability research, and lateral movement by enhancing existing techniques. In the near term, however, these domains will continue to depend on human expertise, so any uplift will likely be limited to threat actors that already possess the necessary capabilities. While AI has the potential to generate malware capable of evading current security filters, this hinges on the model being trained on high-quality exploit data. There is a realistic possibility that highly capable states hold malware repositories large enough to train an AI model effectively for this purpose.

The report stresses the importance of continuous evaluation, recognising the potential for breakthroughs in transformative AI during the assessment period. Despite anticipating that AI will worsen the overall cyber threat, the assessment notes that AI can also strengthen cybersecurity resilience through improved detection and security by design.

Key challenges include the growing difficulty of distinguishing genuine communications from malicious ones as AI-generated content becomes more convincing. The report also foresees cyber resilience challenges intensifying as AI enables threat actors to conduct reconnaissance faster and more precisely.

While effective use of AI currently demands significant expertise and resources, the report envisions greater accessibility as sophisticated AI models proliferate. Less skilled cyber actors are expected to benefit from the commoditisation of AI-enabled capability, substantially enhancing their effectiveness in operations such as spear-phishing.