Fortinet warns AI security failures can affect patient safety in healthcare

New commentary from Fortinet Australia argues that healthcare AI security risks extend beyond compliance into patient safety.


A new article published by Hospital + Healthcare argues that AI security should now be treated as a patient-safety issue for healthcare organisations, as AI tools become more deeply embedded in clinical and administrative systems. The article, supplied by Fortinet Australia and written by Cornelius Mare, says AI adoption in healthcare is expanding across care delivery, operations, medical imaging analysis, patient scheduling, and administrative automation.

The Fortinet Australia piece says healthcare organisations have traditionally focused on protecting electronic health records, hospital networks, and connected medical devices, but AI introduces a different attack surface that can affect not only data confidentiality but also the integrity of clinical decisions, operational processes, and patient outcomes. According to the article, treating AI as just another application to secure overlooks vulnerabilities specific to model-driven systems.

Breach data is used to frame the urgency of the issue. The article cites the Office of the Australian Information Commissioner, saying the health sector accounted for 18% of all notifiable data breaches in Australia between January and June 2025, the highest share of any industry in that period.

The article argues that greater use of digital health systems and AI in care delivery increases the importance of protecting those systems.

Three main AI-related risks are highlighted. First, AI systems depend on large datasets that may contain sensitive patient information, creating risks if training environments are compromised or if data is manipulated. Second, systems using natural language interfaces or automated workflows may be exposed to prompt injection and other input-based manipulation. Third, AI models themselves may become targets through methods such as model manipulation or model inversion, intended to extract sensitive data or influence outputs.

The Fortinet Australia article says the consequences differ from those of conventional cyber incidents because failures can compromise the integrity of medical insights and clinical workflows. A manipulated imaging model could affect diagnostic results, while a compromised system supporting triage or scheduling could disrupt patient prioritisation. Administrative AI systems handling sensitive data may also expose patient records if controls are inadequate.

Regulatory compliance alone is presented as insufficient. Existing privacy and data protection frameworks, the Fortinet Australia article says, were largely built around traditional IT systems rather than AI-driven decision environments. Fortinet’s piece argues that healthcare organisations need governance approaches covering how models are trained, validated, monitored, and secured throughout their lifecycle.

Five broad measures are proposed: establish AI governance frameworks and standards, secure the data pipeline, strengthen identity-centric security, monitor AI behaviour and outputs, and align cybersecurity with clinical resilience. The article also points to ISO 27090, which it describes as a standard under development that will be relevant to healthcare organisations.
