Secure the AI That Touches Patient Data

Diagnostic AI, clinical decision support, and administrative automation all process protected health information. The security controls protecting that data were not designed with AI attack surfaces in mind.

What We See in This Space

  • HIPAA Privacy and Security Rules apply to AI systems that process protected health information - but the HIPAA Security Rule's technical safeguard requirements predate AI systems and do not address AI-specific attack vectors.
  • Adversarial attacks on diagnostic AI models - radiology, pathology, dermatology - can induce misclassification without detectable image manipulation, creating patient safety risk.
  • FDA AI/ML-based Software as a Medical Device (SaMD) guidance requires documented risk management for AI systems - but few manufacturers have conducted adversarial security testing.
  • Training data for clinical AI models contains the most sensitive patient information that exists - data pipeline security for AI training is rarely assessed with the rigor applied to production EHR systems.
  • Healthcare supply chains include AI model vendors, data aggregators, and cloud AI services - each representing a potential entry point for data poisoning or model integrity attacks.

Healthcare is deploying AI faster than it is securing it. Diagnostic imaging AI, clinical decision support systems, administrative automation, and patient-facing chatbots all process protected health information - and all carry attack surfaces that neither traditional healthcare IT security nor traditional penetration testing addresses.

HIPAA Implications for AI and Machine Learning

The HIPAA Security Rule requires covered entities and business associates to implement administrative, physical, and technical safeguards for electronic protected health information (ePHI). AI systems that process ePHI - whether for diagnostic support, care coordination, billing, or patient communication - are subject to these requirements.

The challenge is interpretive: HIPAA’s technical safeguard requirements reference access controls, audit controls, integrity controls, and transmission security. These controls were designed for databases and applications, not for machine learning systems where the same ePHI that appears in training data may influence model behavior in ways that are not captured by traditional access logs or integrity monitoring.

Key HIPAA considerations for AI systems:

Training data governance - Patient records used to train AI models are ePHI. The minimum necessary standard limits model training to the least patient data required for the purpose. Access controls, audit logging, and de-identification requirements all apply - but training data pipelines are rarely held to the rigor of production EHR access controls.
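
As a concrete illustration, the sketch below applies the minimum necessary standard to a tabular training extract. It assumes a pandas DataFrame export; the column names and the abbreviated identifier list are hypothetical, and a real pipeline would cover all eighteen Safe Harbor identifier categories or rely on Expert Determination.

```python
# Minimal sketch: enforcing minimum-necessary and basic de-identification
# on a training extract before it enters an AI pipeline. Column names and
# the identifier list are hypothetical placeholders.
import pandas as pd

# Subset of HIPAA Safe Harbor direct identifiers (the full list has 18 categories)
DIRECT_IDENTIFIERS = ["patient_name", "mrn", "ssn", "street_address", "phone", "email"]

# Only the features the model actually needs - the "minimum necessary"
MODEL_FEATURES = ["age_years", "smoking_status", "bmi", "hba1c", "label"]

def minimize_training_extract(df: pd.DataFrame) -> pd.DataFrame:
    """Drop direct identifiers, then project to the approved feature list."""
    deid = df.drop(columns=DIRECT_IDENTIFIERS, errors="ignore")
    # Safe Harbor treats ages 90 and over as a single category
    deid["age_years"] = deid["age_years"].clip(upper=90)
    return deid[MODEL_FEATURES]
```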

Business associate agreements - Cloud AI providers, model vendors, and AI data services that handle ePHI are business associates under HIPAA and require executed BAAs. Many healthcare organizations are operating AI systems with third-party AI components that have not been evaluated as business associates.

Breach notification for AI systems - If an AI system is compromised and ePHI is accessed or exfiltrated, HIPAA breach notification obligations apply. Most healthcare incident response plans do not address AI-specific breach scenarios.

infosec.qa’s AI Governance Risk Framework maps your AI security controls against HIPAA requirements - identifying gaps in training data governance, third-party AI vendor management, and incident response planning.

Adversarial Attacks on Diagnostic Models

AI diagnostic models - radiology, pathology, dermatology, ophthalmology - represent one of the highest-stakes applications of machine learning in any industry. Errors have direct patient safety consequences.

Adversarial examples for medical imaging are among the most extensively documented attacks in the AI security research literature. A diagnostic model trained to classify chest X-rays can be fooled by pixel-level perturbations that are visually indistinguishable from unmodified images - causing the model to output an incorrect classification with high confidence. Similar attacks have been demonstrated against dermatology classifiers, diabetic retinopathy screening systems, and histopathology models.
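
The mechanics are simple enough to sketch. Below is a minimal Fast Gradient Sign Method (FGSM) implementation in PyTorch - one of the earliest and most widely reproduced perturbation techniques. The classifier, image batch, and labels are placeholder inputs, not any specific diagnostic model.

```python
# Minimal FGSM sketch in PyTorch. `model`, `images`, and `labels` are
# placeholders for a real classifier, a batch of images scaled to [0, 1],
# and their true class indices.
import torch
import torch.nn.functional as F

def fgsm_perturb(model: torch.nn.Module, images: torch.Tensor,
                 labels: torch.Tensor, epsilon: float = 0.005) -> torch.Tensor:
    """Return adversarially perturbed copies of `images`.

    epsilon bounds the per-pixel change; on non-robust models, values far
    below perceptible contrast differences are enough to flip the output.
    """
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step every pixel in the direction that increases the loss
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Stronger iterative variants (PGD and its descendants) build on the same gradient-sign idea, which is why robustness testing cannot stop at single-step attacks.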

The clinical risk is direct: an adversarial input that causes a diagnostic AI to output a false negative for a malignancy, or a false positive that leads to unnecessary intervention, represents a patient safety incident with regulatory and liability implications.

The threat is not hypothetical: insurance fraud, clinical trial data manipulation, and competitive intelligence gathering all represent plausible motivations for adversarial attacks on healthcare AI.

infosec.qa’s LLM Red Teaming and AI Attack Surface Assessment services include adversarial robustness testing for clinical AI systems - assessing susceptibility to known attack classes and providing remediation guidance aligned with FDA and ISO 14971 risk management frameworks.

FDA AI/ML Regulatory Framework for Software as a Medical Device

The FDA’s guidance on AI/ML-based Software as a Medical Device (SaMD) establishes requirements for AI systems that meet the definition of a medical device. The framework requires:

  • A Predetermined Change Control Plan documenting anticipated AI model changes and associated revalidation requirements
  • Algorithm transparency sufficient to support clinical users’ understanding of AI-generated recommendations
  • Performance monitoring for production AI models with defined retraining and revalidation triggers (a minimal trigger sketch follows this list)
  • Cybersecurity considerations as part of the overall SaMD risk management process
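
To make the monitoring requirement concrete, here is a minimal sketch of a revalidation trigger that combines a performance floor with a population stability index (PSI) drift check. The thresholds are illustrative placeholders to be set in the change control plan, not FDA-prescribed values.

```python
# Minimal sketch of a performance-monitoring trigger for a deployed model.
# Thresholds are illustrative placeholders, not FDA-prescribed values.
import numpy as np

def population_stability_index(baseline: np.ndarray, production: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between the validation-time score distribution and production scores."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    p_pct = np.histogram(production, bins=edges)[0] / len(production)
    b_pct = np.clip(b_pct, 1e-6, None)  # avoid dividing by or taking log of zero
    p_pct = np.clip(p_pct, 1e-6, None)
    return float(np.sum((p_pct - b_pct) * np.log(p_pct / b_pct)))

def needs_revalidation(current_auroc: float, psi: float,
                       min_auroc: float = 0.90, max_psi: float = 0.20) -> bool:
    """True when production metrics breach the predetermined thresholds,
    routing the model into the documented revalidation path."""
    return current_auroc < min_auroc or psi > max_psi
```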

The FDA has explicitly noted that cybersecurity for AI/ML SaMD extends beyond traditional software cybersecurity to include model integrity - protecting against adversarial manipulation of model inputs and outputs. Few manufacturers have operationalized AI-specific security testing as part of their SaMD cybersecurity program.

infosec.qa’s AI Governance Risk Framework supports FDA SaMD manufacturers in developing AI security testing programs aligned with FDA pre-market cybersecurity guidance, with findings documentation suitable for 510(k) submissions and De Novo requests.

Data Poisoning in Clinical Datasets

Data poisoning represents the most insidious category of AI security risk in healthcare - because it attacks the model at the training stage, before any production security controls are active, and the effects may not manifest as detectable failures in normal model evaluation.
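
The evaluation-blindness problem can be demonstrated in a few lines. The sketch below (synthetic data, scikit-learn) flips training labels only where a rare, attacker-chosen trigger condition holds: on a typical run, aggregate test accuracy barely moves, while inputs matching the trigger are systematically misclassified.

```python
# Sketch: targeted label-flip poisoning that aggregate evaluation misses.
# Data is synthetic; in a clinical pipeline the trigger could be any
# attacker-chosen feature pattern injected through upstream records.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)              # clean ground truth

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

trigger_tr = X_tr[:, 3] > 2.0                        # ~2% of training rows
y_poisoned = y_tr.copy()
y_poisoned[trigger_tr] = 1 - y_poisoned[trigger_tr]  # attacker's label flips

model = RandomForestClassifier(random_state=0).fit(X_tr, y_poisoned)

trigger_te = X_te[:, 3] > 2.0
print("overall accuracy:  ", model.score(X_te, y_te))       # still looks healthy
print("triggered accuracy:", model.score(X_te[trigger_te],  # sharply degraded
                                          y_te[trigger_te]))
```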

Clinical datasets used for AI training are uniquely vulnerable:

EHR data integrity - Electronic health record data used for model training may come from multiple source systems, each with different access controls and data quality processes. An adversary with access to any of these upstream systems can inject manipulated records into the training set.

Multi-site training data - Healthcare AI models trained on federated data from multiple institutions aggregate data from systems with varying security postures. A compromised institution’s data contribution can poison the aggregate model.
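
One widely studied mitigation in this setting is robust aggregation. The toy NumPy sketch below contrasts a mean (FedAvg-style) aggregation step, which a single compromised site can drag arbitrarily far, with a coordinate-wise median, which stays bounded by the honest majority. The update vectors are illustrative stand-ins for real model parameters.

```python
# Sketch: coordinate-wise median vs. mean aggregation of per-site model
# updates. Values are toy numbers; real federated updates are full
# parameter vectors, but the arithmetic is the same.
import numpy as np

def aggregate_mean(updates: list[np.ndarray]) -> np.ndarray:
    return np.mean(updates, axis=0)      # FedAvg-style: one bad site moves it arbitrarily

def aggregate_median(updates: list[np.ndarray]) -> np.ndarray:
    return np.median(updates, axis=0)    # robust variant: bounded by the honest majority

honest = [np.array([0.11, -0.19]), np.array([0.09, -0.21]),
          np.array([0.10, -0.20]), np.array([0.12, -0.18])]
poisoned = [np.array([50.0, 50.0])]      # one compromised institution

print(aggregate_mean(honest + poisoned))    # ~[10.08, 9.84] - far from honest consensus
print(aggregate_median(honest + poisoned))  # ~[0.11, -0.19] - stays near honest updates
```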

Synthetic data generation - Many healthcare organizations use synthetic data generation to augment small training sets. The security of the generation process and the integrity of the synthetic data pipeline are rarely assessed.

Third-party data vendors - Healthcare AI developers increasingly rely on curated training datasets from commercial vendors. The security and integrity controls applied by these vendors to their data pipelines represent supply chain risk.
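
A baseline control that cuts across all four vectors is cryptographic provenance for training artifacts. The sketch below pins every file in a training data directory to a SHA-256 digest at ingestion, so that later tampering anywhere downstream becomes detectable; the paths and manifest format are illustrative.

```python
# Sketch: a provenance manifest for training data files. Hashes pin each
# artifact at ingestion time so later tampering in the pipeline is
# detectable. Paths and manifest format are illustrative.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def build_manifest(data_dir: Path, manifest_path: Path) -> None:
    """Record a digest for every file in the training data directory."""
    manifest = {str(p.relative_to(data_dir)): sha256_of(p)
                for p in sorted(data_dir.rglob("*")) if p.is_file()}
    manifest_path.write_text(json.dumps(manifest, indent=2))

def verify_manifest(data_dir: Path, manifest_path: Path) -> list[str]:
    """Return the files whose current digest no longer matches the manifest."""
    manifest = json.loads(manifest_path.read_text())
    return [name for name, digest in manifest.items()
            if sha256_of(data_dir / name) != digest]
```

Hash pinning cannot detect records that were already poisoned before ingestion, which is why provenance mapping back to source systems matters as well.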

infosec.qa’s AI Supply Chain Security service assesses training data pipeline integrity - mapping data provenance, evaluating access controls on training data repositories, and testing for indicators of data poisoning in deployed models.

Frameworks We Cover

  • HIPAA Privacy Rule and Security Rule
  • FDA AI/ML-Based SaMD Guidance
  • NIST AI RMF
  • ISO 42001 (AI Management System)
  • SOC 2 Type II
  • EU AI Act (High-Risk AI System requirements)

How We Help

  • LLM Red Teaming
  • AI Attack Surface Assessment
  • AI Governance Risk Framework
  • AI Supply Chain Security
  • AI Threat Intelligence
  • AI Security Training

Know Your AI Attack Surface

Request a free AI Security Scorecard assessment and discover your AI exposure in 5 minutes.

Get Your Free Scorecard