The UK Information Commissioner’s Office has warned that AI is enabling faster, more advanced and harder-to-detect cyberattacks, urging organisations to strengthen their defences against emerging threats.
In a blog post, the regulator highlighted risks such as AI-generated phishing emails, deepfake social engineering, automated vulnerability scanning, AI-powered malware, credential attacks, data poisoning and indirect prompt injection, where malicious instructions hidden in content an AI system reads cause it to act against its user's intent. The ICO said cybersecurity must be treated as a shared responsibility, with organisations expected to take proactive steps to protect the personal data they hold.
The ICO said strong foundational security measures remain essential, but should be reinforced with layered defences to counter AI-powered threats. It pointed to practical steps such as patching systems, restricting access through multi-factor authentication, applying least-privilege principles and managing supplier risks.
The recommendations also include monitoring systems for unusual activity, carrying out vulnerability scanning and penetration testing, and maintaining regularly tested incident response plans. The ICO said AI can also support cyber defence, but should operate within a clear framework of human oversight and accountability.
Organisations are further advised to minimise data collection, conduct regular data audits and train staff to recognise AI-powered social engineering attacks. The ICO said AI tools processing high-risk personal data should be supported by data protection impact assessments and appropriate safeguards.
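One of the recommended steps, monitoring systems for unusual activity, can be illustrated with a minimal sketch. The example below flags accounts with an abnormal number of failed logins, a pattern consistent with the AI-scaled credential attacks the ICO describes. The `LoginEvent` structure and the threshold are illustrative assumptions, not part of the ICO's guidance.

```python
# Illustrative sketch only: flagging unusual login activity, one of the
# monitoring measures the ICO recommends. The event structure and the
# threshold below are hypothetical assumptions, not ICO guidance.
from collections import Counter
from dataclasses import dataclass


@dataclass
class LoginEvent:
    user: str
    success: bool


FAILED_LOGIN_THRESHOLD = 5  # assumed cut-off for "unusual activity"


def flag_suspicious_users(events: list[LoginEvent]) -> list[str]:
    """Return users whose failed-login count exceeds the threshold,
    e.g. a possible automated credential-stuffing attempt."""
    failures = Counter(e.user for e in events if not e.success)
    return sorted(u for u, n in failures.items() if n > FAILED_LOGIN_THRESHOLD)
```

In practice such a check would feed an alerting pipeline and sit alongside the other layered defences the ICO lists, rather than act as a standalone control.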
Why does it matter?
The ICO’s warning links AI-powered cyber threats directly to data protection obligations. As attackers use AI to scale phishing, exploit vulnerabilities and impersonate trusted contacts, organisations are expected not only to improve technical security, but also to limit the personal data they hold, strengthen governance and prepare for faster-moving incidents.
