Security frameworks lag behind rising AI threats

AI breaches reveal security gaps not covered by traditional cybersecurity frameworks.

Illustration showing AI systems and digital locks, representing security gaps and emerging threats not covered by traditional frameworks

A series of high-profile incidents has highlighted how AI systems are exposing organisations to new security risks not covered by existing frameworks. In 2024 alone, an estimated 23.77 million secrets were leaked via AI systems, marking a 25% year-on-year increase.

Recent breaches included compromised AI libraries, malicious packages leaking credentials, and flaws enabling unauthorised data extraction from AI systems. In each case, organisations met compliance requirements but remained exposed to AI-specific attacks.

Security experts say the problem lies in traditional frameworks such as NIST, ISO 27001, and CIS Controls. Built for conventional IT systems, they offer limited guidance on threats unique to AI.

Unlike traditional attacks, many AI threats occur within authorised processes, such as model training or natural language interaction. As a result, existing controls often fail to detect abuse that exploits how AI systems interpret data, prompts, or pre-trained components.

Analysts warn that organisations cannot wait for frameworks to be updated. Addressing AI security risks will require dedicated assessments, new technical controls, and specialised expertise, as regulators increase scrutiny and AI deployment accelerates across sectors.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!