Shadow AI and poor governance fuel growing cyber risks, IBM warns
IBM warns that Shadow AI is exposing organisations to rising security threats, with poor oversight leaving systems and data vulnerable.
Many organisations racing to adopt AI are failing to implement adequate security and governance controls, according to IBM’s Cost of a Data Breach Report 2025. The report warns that attackers are already exploiting these weaknesses, targeting AI models and applications with increasing frequency.
Of the organisations surveyed, 13 percent reported a breach involving an AI model or an AI-powered application. Almost all of those affected – a striking 97 percent – admitted they lacked appropriate access controls for their AI systems at the time of the incident.
The consequences were significant. Around one-third of affected organisations reported operational disruption and unauthorised access to sensitive data, while 23 percent incurred financial losses.
Seventeen percent also suffered reputational damage, potentially eroding customer trust and investor confidence. The most common root cause of the breaches was supply chain compromise, a category IBM uses to cover attacks that exploited third-party applications, plug-ins, and APIs.
Notably, most of the intrusions involving AI were traced to software provided by external vendors operating under software-as-a-service (SaaS) models.
The report also highlights the growing risk posed by so-called ‘shadow AI’ – tools introduced or deployed internally without the knowledge or approval of IT departments or data governance teams.
These unsanctioned systems often lack even basic protections, making them an attractive target for threat actors.
‘Because shadow AI often operates outside formal oversight mechanisms, organisations may not even be aware of the risk until an incident occurs,’ IBM noted. This lack of visibility makes it difficult to enforce compliance or respond to threats effectively.
Worryingly, 87 percent of surveyed organisations acknowledged they had no formal AI governance policies in place to manage such risks.
Furthermore, nearly two-thirds of those that suffered breaches had not conducted regular audits of their AI systems, and over three-quarters had not performed adversarial testing to evaluate vulnerabilities.
These shortcomings are not new. Security and governance concerns have repeatedly been cited as key reasons why enterprise AI rollouts have stalled or failed to progress beyond the pilot stage.
In 2024, The Register reported that several major companies had delayed deploying AI assistants built on Microsoft Copilot after discovering that these tools surfaced internal data that employees should not have been able to access.
Analyst firm Gartner similarly warned last year that at least 30 percent of generative AI (GenAI) projects would be abandoned before full implementation by the end of 2025, citing poor data quality, inadequate risk controls, escalating operational costs, and unclear return on investment.
IBM’s report suggests that, in the rush to embrace AI, many organisations are choosing speed over due diligence. That strategy, the company warns, is already proving costly.
‘The report reveals a lack of basic access controls for AI systems, leaving highly sensitive data exposed and models vulnerable to manipulation,’ said Suja Viswesan, IBM’s Vice President of Security and Runtime Products.
‘As AI becomes more deeply embedded across business operations, AI security must be treated as foundational. The cost of inaction isn’t just financial, it’s the loss of trust, transparency, and control,’ she added.
Viswesan stressed that the gap between AI adoption and proper oversight is widening, and malicious actors are increasingly exploiting this imbalance. To address the issue, IBM is urging companies to treat AI risk management as a core component of cybersecurity strategy – not an afterthought.
As organisations accelerate their AI deployments, the need for robust frameworks that encompass access control, governance, transparency, and threat mitigation will only grow.
Those who ignore these fundamentals may find themselves not only vulnerable but also falling behind in terms of resilience, regulatory compliance, and stakeholder confidence.