AI Security Governance for Healthcare: Where HIPAA Meets Machine Learning

AI security governance for healthcare requires satisfying two parallel obligations: HIPAA's Security Rule, which applies to any system that creates, receives, maintains, or transmits electronic protected health information (ePHI), and emerging AI governance frameworks that address the unique risks of machine learning in clinical settings. Healthcare organizations deploying AI tools, whether for clinical decision support, revenue cycle, radiology, or operational workflows, must treat those deployments as both a privacy compliance matter and an AI risk management discipline. The two cannot be addressed separately, and neither can be deferred.
What HIPAA Already Requires from AI Systems
HIPAA's Security Rule does not use the words "artificial intelligence," but it does not need to. The rule requires covered entities and their business associates to protect ePHI regardless of the technology used to process it. Any AI system that creates, receives, maintains, or transmits ePHI is a covered system, and the full suite of HIPAA technical, administrative, and physical safeguards applies.
The administrative safeguard requirements have particular weight for AI. HIPAA requires a formal, documented risk analysis that identifies threats to and vulnerabilities of ePHI, and an ongoing risk management process to reduce those risks to a reasonable and appropriate level. An AI system that processes patient records, generates clinical notes, or informs treatment recommendations must appear in that risk analysis. If it does not, the organization has an undocumented gap that an OCR auditor will find.
The technical safeguard requirements are equally direct. HIPAA requires access controls that limit ePHI access to authorized users, audit controls that record and examine activity in systems containing ePHI, and transmission security for ePHI sent over electronic networks. AI systems that accept patient data as input, generate outputs that contain patient data, or integrate with EHR systems must satisfy each of these requirements.
Does your AI inventory include every system touching ePHI?
Z Cyber conducts AI governance readiness assessments for healthcare organizations, mapping your AI footprint to HIPAA Security Rule requirements and the NIST AI RMF.
The 2024 HIPAA Security Rule Proposed Update
HHS published a proposed update to the HIPAA Security Rule in December 2024. The proposal responds to a decade of healthcare data breaches and the growing complexity of the technology environments where ePHI is stored and processed. For organizations with AI systems, several of the proposed requirements have direct operational implications.
The proposed rule would require covered entities and business associates to maintain a documented technology asset inventory reviewed at least once every 12 months. Every system that creates, receives, maintains, or transmits ePHI must appear in that inventory, including AI tools embedded in EHR platforms, standalone clinical AI applications, and vendor-provided AI features added to existing software products. For many healthcare organizations, this requirement alone will surface AI deployments that compliance teams did not know existed.
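An inventory entry of this kind can be represented as a simple structured record. The sketch below is illustrative only: the field names and review logic are assumptions, not fields prescribed by the proposed rule.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AIAssetRecord:
    """One entry in a technology asset inventory (illustrative fields only)."""
    system_name: str
    vendor: str
    touches_ephi: bool                 # creates, receives, maintains, or transmits ePHI?
    baa_signed: bool                   # business associate agreement in place?
    embedded_in: Optional[str] = None  # e.g., an EHR platform hosting the AI feature
    last_reviewed: date = field(default_factory=date.today)

    def review_overdue(self, as_of: date, max_age_days: int = 365) -> bool:
        """Flag entries older than the proposed 12-month review cycle."""
        return (as_of - self.last_reviewed).days > max_age_days
```

Tracking `embedded_in` separately matters because vendor-embedded AI features are the deployments most likely to be missing from existing inventories.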
The proposed rule would also require a written risk analysis at least annually, replacing the current "periodic" standard that OCR has struggled to enforce. An annual written risk analysis would require organizations to evaluate every AI system in their ePHI inventory and document the risks each one presents. Organizations that lack a structured process for AI risk assessment will need to build one to meet this requirement.
Additional proposed requirements include mandatory multi-factor authentication for systems accessing ePHI, network segmentation to limit the spread of incidents, and enhanced workforce incident reporting within 24 hours of discovery. The final rule has not been published as of April 2026, but the proposed requirements reflect where OCR enforcement is heading regardless of when the final rule takes effect.
FDA-Authorized AI and Clinical Governance Obligations
The FDA has authorized more than 950 AI-enabled medical devices through its 510(k) and De Novo clearance pathways as of 2024, with the largest concentrations in radiology, cardiology, and pathology. For healthcare organizations deploying these devices, FDA clearance does not end the governance obligation. It begins it.
A cleared AI medical device is authorized for a specific intended use with specific input data types and specific output formats. Using a cleared device outside its cleared indication, feeding it data it was not trained or validated on, or integrating it into a clinical workflow in ways the manufacturer did not anticipate all create clinical and liability risk. Healthcare organizations must maintain a device inventory that captures the cleared indication for each AI device and ensure clinical use stays within that boundary.
The FDA's predetermined change control plan (PCCP) framework addresses a fundamental challenge of clinical AI: algorithms that learn or adapt over time may change their behavior in ways that affect performance. PCCPs require manufacturers to document in advance what modifications are permitted without a new clearance submission. For healthcare organizations, this means understanding which AI devices in your environment are operating under a PCCP, what the permitted modification boundaries are, and how the manufacturer communicates when those modifications occur.
Vendor management is the practical implication. Healthcare organizations should require AI device manufacturers to disclose: the cleared indication, the training data population, known performance limitations (particularly for demographic subgroups), the manufacturer's post-market surveillance process, and notification procedures when algorithm updates are deployed. These disclosures belong in vendor contracts, not just product datasheets.
Shadow AI in Healthcare: The Governance Gap
Shadow AI in healthcare is not a hypothetical future problem. It is a current operational reality. Clinicians use general-purpose AI assistants to draft clinical notes, summarize research, and generate prior authorization letters. Administrative staff use AI-powered productivity tools to process scheduling and billing data. EHR vendors and health IT platforms quietly activate AI features that were not present when vendor contracts were signed. Each of these scenarios may involve ePHI flowing to systems that have never been reviewed for HIPAA compliance or clinical accuracy.
The business associate agreement (BAA) requirement is the clearest legal issue. HIPAA requires a BAA with every vendor that receives ePHI on behalf of a covered entity. Most general-purpose AI tools are not designed as HIPAA-covered services and will not sign a BAA. When a clinician pastes patient information into a consumer AI assistant, that data has been disclosed to a third party without a BAA in place. OCR has investigated incidents involving unauthorized disclosures through third-party technology platforms, and the investigation process is the same regardless of whether the disclosure was intentional.
The governance response to shadow AI requires both detection and policy. Detection means inventorying AI tool use across the organization, including network-level visibility into which AI services are being accessed and endpoint monitoring for unauthorized tool installation. Policy means defining which AI tools are approved for different categories of use (tools approved for de-identified data only, tools approved for ePHI with a BAA in place, tools prohibited entirely), and communicating those boundaries to the workforce with documented training. For a detailed framework on shadow AI discovery and governance, see our guide on Shadow AI in the Enterprise: Discovery, Risk, and Governance.
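The three-tier approval policy described above can be expressed as a simple lookup that defaults to denial for unreviewed tools. Tool names and tier assignments here are hypothetical examples, not a recommended product list.

```python
# Illustrative policy tiers; every tool name below is hypothetical.
APPROVED_DEIDENTIFIED_ONLY = {"summarizer-x"}   # no ePHI permitted
APPROVED_FOR_EPHI = {"clinical-scribe"}         # BAA confirmed in place
PROHIBITED = {"consumer-chatbot"}               # banned for any use

def allowed(tool: str, data_contains_ephi: bool) -> bool:
    """Return True if this use of the tool falls within the three-tier policy."""
    if tool in PROHIBITED:
        return False
    if tool in APPROVED_FOR_EPHI:
        return True
    if tool in APPROVED_DEIDENTIFIED_ONLY:
        return not data_contains_ephi
    return False  # unknown tools are denied by default, pending review
```

The deny-by-default branch is the important design choice: a tool absent from the policy is treated as unapproved rather than silently permitted.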
Shadow AI is your largest unreviewed HIPAA risk.
Z Cyber's AI governance assessments include shadow AI discovery, BAA gap analysis, and a phased remediation roadmap tailored to healthcare organizations.
Applying NIST AI RMF to Healthcare Contexts
The NIST AI Risk Management Framework (AI RMF 1.0), released in January 2023, provides the most comprehensive and practically applicable structure for building an AI governance program in healthcare. Its four core functions, Govern, Map, Measure, and Manage, map naturally onto the compliance and clinical governance obligations healthcare organizations already carry. For a full practitioner walkthrough, see our NIST AI RMF Implementation Guide.
The Govern function establishes the policies, roles, and accountability structures for AI risk management. In a healthcare context, this means defining who is responsible for AI governance (typically a cross-functional committee including Compliance, Clinical Informatics, IT Security, Legal, and clinical leadership), what the AI risk tolerance is for different use case categories, and how AI governance decisions are documented and communicated. The Govern function directly supports HIPAA's administrative safeguard requirement for assigned security responsibility and security management process.
The Map function involves identifying and classifying AI systems by their risk profile. In healthcare, this risk tiering should account for both HIPAA sensitivity (does the AI touch ePHI?) and clinical risk (does the AI influence clinical decisions?). A radiology AI that recommends cancer diagnoses carries materially different risk than an AI tool that optimizes scheduling. Risk tiering determines the depth of review, validation, and monitoring each AI system requires before and after deployment.
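The two axes above can be combined into a minimal tiering rule. The tier names and thresholds are assumptions for illustration; a real program would define its own tiers in the Govern function.

```python
def risk_tier(touches_ephi: bool, influences_clinical_decisions: bool) -> str:
    """Assign a review tier from the two axes: HIPAA sensitivity and clinical risk."""
    if touches_ephi and influences_clinical_decisions:
        return "high"    # e.g., diagnostic AI reading patient imaging
    if touches_ephi or influences_clinical_decisions:
        return "medium"  # e.g., documentation AI handling ePHI
    return "low"         # e.g., scheduling optimization on operational data
```

The tier then drives how much validation and monitoring the system receives, so the radiology example in the text lands in "high" while the scheduling tool lands in "low".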
The Measure function addresses ongoing performance monitoring. Clinical AI systems can degrade over time as patient populations, clinical practices, or data inputs change. Organizations must define performance metrics for each deployed AI system, establish monitoring cadences, and document how performance issues are escalated and remediated. This function also encompasses bias testing: evaluating whether AI outputs perform consistently across patient demographic groups, a requirement that aligns with both clinical quality standards and emerging AI governance regulations.
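A minimal form of the subgroup bias check described above is to compute accuracy per demographic group and report the largest gap. This is a sketch under simplifying assumptions (a single accuracy metric, labeled outcomes available); real monitoring programs use richer metrics and statistical tests.

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """Per-subgroup accuracy from (subgroup, predicted, actual) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

def max_accuracy_gap(records) -> float:
    """Largest accuracy difference across subgroups; escalate if above a chosen threshold."""
    acc = subgroup_accuracy(records)
    return max(acc.values()) - min(acc.values())
```

A monitoring cadence would run this over each review window and feed gaps above the organization's threshold into the escalation process the Measure function requires.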
The Manage function closes the loop by defining how identified AI risks are treated, monitored, and communicated. For healthcare organizations, this includes incident response procedures specific to AI systems (what happens when an AI produces a clinically significant error), vendor notification procedures, and the process for removing or suspending AI systems that fail performance thresholds.
Building a Healthcare AI Governance Program
A healthcare AI governance program does not require building parallel infrastructure alongside your existing HIPAA compliance program. The most effective approach treats AI governance as an extension of existing security management, vendor management, and risk analysis processes. The incremental additions are specific to AI: risk tiering by clinical sensitivity, algorithm performance monitoring, and BAA confirmation for AI vendors.
| Program Component | HIPAA Anchor | AI RMF Function |
|---|---|---|
| AI asset inventory | Risk Analysis (§164.308(a)(1)) | Map |
| BAA confirmation for AI vendors | Business Associate Contracts (§164.308(b)) | Govern |
| Clinical AI risk tiering | Risk Management (§164.308(a)(1)(ii)(B)) | Map / Measure |
| Algorithm performance monitoring | Audit Controls (§164.312(b)) | Measure / Manage |
| AI incident response | Security Incident Procedures (§164.308(a)(6)) | Manage |
| Workforce AI use policy | Workforce Training (§164.308(a)(5)) | Govern |
For organizations at early stages of AI governance maturity, the first priority is inventory: knowing what AI systems are in use, which ones touch ePHI, and which have confirmed BAAs. Without that foundation, risk analysis is incomplete and remediation is impossible. Organizations ready to assess their current posture can start with a structured AI governance readiness assessment that maps the existing AI footprint to HIPAA and AI RMF requirements.
Healthcare organizations with formal vCISO or security leadership should ensure AI governance is an explicit part of the security program charter, not an informal add-on. The Z Cyber virtual CISO (vCISO) practice includes AI governance program development as a core service component, structured around HIPAA and NIST AI RMF alignment.
Three Things to Do This Week
- Inventory your AI systems. Compile a list of every AI tool in use across your organization, including features embedded in existing software platforms. For each tool, document whether it touches ePHI, whether a BAA is in place, and whether it has been formally reviewed by IT Security and Compliance.
- Audit your BAA coverage. Identify every AI vendor on your list that receives or could receive ePHI. Confirm whether a signed BAA exists for each one. For vendors without BAAs, either obtain one or classify the tool as prohibited for ePHI use cases until a BAA is in place.
- Read the proposed HIPAA Security Rule update. Review HHS's December 2024 proposed rule with your compliance and IT security leads. Identify which proposed requirements (annual risk analysis, technology asset inventory, MFA mandates) your current program does not yet satisfy, and begin gap remediation planning now rather than waiting for the final rule.
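The first two steps above reduce to a single gap query over the inventory: which tools touch ePHI without a signed BAA. The inventory shape and tool names below are hypothetical.

```python
def baa_gaps(inventory):
    """Tools that touch ePHI but lack a signed BAA: remediate or prohibit."""
    return [t["name"] for t in inventory if t["touches_ephi"] and not t["baa_signed"]]

inventory = [
    {"name": "clinical-scribe", "touches_ephi": True,  "baa_signed": True},
    {"name": "note-summarizer", "touches_ephi": True,  "baa_signed": False},
    {"name": "shift-scheduler", "touches_ephi": False, "baa_signed": False},
]
print(baa_gaps(inventory))  # prints ['note-summarizer']
```

Each name this check returns should either get a BAA or move to the prohibited tier until one is in place.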
Ready to build your healthcare AI governance program?
Z Cyber's AI Security and Governance advisory practice works with healthcare organizations to build HIPAA-aligned AI governance programs mapped to NIST AI RMF. See how our healthcare cybersecurity practice supports your compliance posture.
Frequently Asked Questions
Does HIPAA apply to AI systems used in healthcare?
Yes. HIPAA's Security Rule applies to any system that creates, receives, maintains, or transmits ePHI, regardless of whether that system uses artificial intelligence. An AI diagnostic tool, a predictive scheduling algorithm, or a clinical documentation assistant that processes patient records is a covered system under HIPAA. Covered entities and business associates must conduct a risk analysis that includes these AI systems and document safeguards accordingly.
What is the HHS proposed HIPAA Security Rule update and how does it affect AI?
HHS published a proposed update to the HIPAA Security Rule in December 2024. The proposal would require an annual technology asset inventory, an annual written risk analysis, mandatory MFA for systems accessing ePHI, and network segmentation. For AI systems, the inventory and risk analysis requirements would compel organizations to document every AI tool that touches ePHI and formally assess its risk annually.
What FDA requirements apply to AI-enabled medical devices?
The FDA has authorized more than 950 AI-enabled medical devices as of 2024. Healthcare organizations deploying these devices must maintain a device inventory that captures the cleared clinical indication for each AI device, ensure clinical use stays within that cleared indication, and track manufacturer notifications about algorithm updates or changes made under a predetermined change control plan (PCCP).
What is shadow AI risk in healthcare?
Shadow AI refers to AI tools adopted by clinicians or business units without formal IT and compliance review, including general-purpose AI assistants used to process patient information. When ePHI flows to a system without a BAA in place, a potential HIPAA violation has occurred. OCR has investigated incidents involving unauthorized disclosures through third-party technology platforms, and the investigation process is the same regardless of intent.
Does a vCISO help with healthcare AI governance?
Yes. A virtual CISO with healthcare and AI governance expertise can own the AI governance program design, lead HIPAA risk analyses that cover AI systems, oversee vendor BAA management, and provide board-level reporting on AI risk. For healthcare organizations that lack a full-time CISO but need practitioner-level AI governance leadership, a vCISO provides that capability without the cost and time commitment of a direct hire. See our guide on fractional CISO vs. full-time CISO for a detailed comparison.
How does the NIST AI RMF apply to healthcare organizations?
The NIST AI RMF's four core functions (Govern, Map, Measure, Manage) map directly onto healthcare AI governance needs and complement HIPAA Security Rule requirements. The Govern function aligns with HIPAA administrative safeguards. The Map function supports clinical AI risk tiering. The Measure function supports ongoing algorithm performance monitoring. For healthcare organizations, using the AI RMF alongside HIPAA is the most efficient path to a defensible governance posture.
What is a business associate agreement and does it cover AI vendors?
A business associate agreement (BAA) is a written contract required by HIPAA when a covered entity shares ePHI with a vendor (business associate) that will create, receive, maintain, or transmit that data on the covered entity's behalf. Healthcare organizations must execute a BAA with every AI vendor whose system processes ePHI, including EHR-integrated AI modules, clinical decision support tools, revenue cycle AI platforms, and AI-powered transcription services. A vendor's refusal to sign a BAA means the covered entity cannot legally share ePHI with that vendor. Confirming BAA coverage for every AI vendor is one of the first steps in a healthcare AI governance program.

