AI data risks prompt new global cybersecurity guidance

New cybersecurity guidance warns of rising threats to AI systems, including data poisoning, supply chain risks, and data drift.

A coalition of cybersecurity agencies, including the NSA, FBI, and CISA, has issued joint guidance to help organisations protect AI systems from emerging data security threats. The guidance explains how AI systems can be compromised by data supply chain flaws, poisoning, and drift.

Organisations are urged to adopt security measures throughout all four phases of the AI life cycle: planning, data collection, model building, and operational monitoring.

The recommendations include verifying third-party datasets, using secure ingestion protocols, and regularly auditing AI system behaviour. Particular emphasis is placed on preventing data poisoning and tracking data lineage to ensure integrity.
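One common way to verify third-party datasets in practice is to check each file against a publisher-supplied manifest of cryptographic digests before ingestion. The sketch below illustrates this with SHA-256; the manifest format and function names are illustrative assumptions, not part of the guidance itself.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in streaming chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(manifest: dict[str, str], data_dir: Path) -> list[str]:
    """Return the names of dataset files whose digest does not match
    the expected value in the (assumed publisher-signed) manifest."""
    mismatches = []
    for name, expected in manifest.items():
        if sha256_of(data_dir / name) != expected:
            mismatches.append(name)
    return mismatches
```

In use, an ingestion pipeline would refuse to load any dataset for which `verify_manifest` returns a non-empty list, treating a mismatch as a possible poisoning or supply chain incident.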

The guidance encourages firms to update their incident response plans to address AI-specific risks, conduct audits of ongoing projects, and establish cross-functional teams involving legal, cybersecurity, and data science experts.

With AI models increasingly central to critical infrastructure, treating data security as a core governance issue is essential.