Generative AI presents the biggest data-risk challenge in history
An expert warns that the rise of generative AI poses significant risks to information security, urging a reevaluation of data governance and cybersecurity.
Cybersecurity specialists warn that generative AI systems, such as large language models, are creating a data risk frontier far larger than that posed by previous digital innovations.
Because these models are trained on extensive datasets drawn from web pages, internal documents, email corpora and proprietary sources, they can unintentionally memorise or regenerate sensitive information, increasing the risk of exposure.
The article highlights several core concerns:

- Data leakage and memorisation: AI models can repeat or infer private data if training processes are not tightly controlled.
- Amplification of poor hygiene: generative tools can magnify the reach of bad actors by automating phishing, social engineering, and malware generation at scale.
- Compounding breach impact: if a model is trained on stolen or leaked data, it could internalise and regurgitate that information without detection, entrenching the harm.
- Cloud and access governance gaps: organisations that adopt AI without robust access controls and encryption may widen their attack surface.
The article's recommendations include accountability measures for AI models, continuous monitoring, and legislative action to align AI development with privacy and security principles.
