Featured Advisory · March 20, 2026 · 8 min read

AI Governance Frameworks Are No Longer Optional. Here's What You Need to Know

Your security team likely doesn't know how many AI systems are running in your organization right now. That's not a personnel problem. It's a visibility problem. Shadow AI has expanded the attack surface in unprecedented ways, and it only takes one exploitation of that gap to bring a company to its knees.

This isn't hypothetical. It's where we are today.

In December 2024, NIST released a preliminary draft of the Cybersecurity Framework Profile for Artificial Intelligence, known as the Cyber AI Profile. Around the same time, HITRUST released an AI security assessment certification. Together, these documents signal a fundamental shift: AI governance is moving from "nice to have" to essential infrastructure.

Most security teams haven't looked at either one yet. That's a problem, because these frameworks will eventually be used to measure whether your organization has handled AI responsibly.

What the NIST Cyber AI Profile Actually Is (and Isn't)

Let's begin with what it's not, because that's where the confusion usually starts.

The Cyber AI Profile is not a replacement for NIST CSF 2.0. It's not a replacement for the Risk Management Framework (RMF). Think of it as an overlay, a lens you put on top of your existing cybersecurity program that forces one critical question: Have you actually accounted for AI?

What it is: A set of AI-specific considerations mapped across the entire CSF 2.0 framework. NIST deliberately built this with a broad definition of AI (large language models, generative AI, predictive analytics, recommendation engines, agentic systems) because the category changes every day. If you're using AI in your organization in any form, this applies to you.

Right now, it's non-binding. That matters less than you'd think. The original CSF was voluntary too. Today, regulators, auditors, plaintiffs in court, and insurance companies use it as a benchmark for what "reasonable" security looks like. The Cyber AI Profile will follow the same path.

You can download the draft now as NIST IR 8596. Don't wait for the final version to start your assessment.

Three Focus Areas: Secure, Defend, Thwart

The Cyber AI Profile organizes everything around three focus areas. The goal across all three is managing both AI-related cybersecurity risks and opportunities. This matters: it's not just defensive. It's about what makes AI possible for your security program.

Secure: Protecting Your AI Systems and Components

This is about securing the models, agents, training data, and prompts you've built or deployed. These are attack surfaces right now.

Supply chain risk is no longer theoretical. You're pulling in third-party models, open-source tools, embedded APIs, and SaaS integrations. If you don't know what's in your AI stack, you can't secure it. End of conversation.

The secure focus area forces you to answer questions like:

  • What AI systems are deployed in your environment?
  • Where's the training data coming from?
  • What third-party models or APIs are you using?
  • Who has access to prompts and model parameters?
  • How are you detecting unauthorized AI deployments?
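The questions above amount to building and maintaining an AI inventory. A minimal sketch of what one inventory record might capture, in Python (the field names here are illustrative assumptions, not taken from NIST IR 8596):

```python
from dataclasses import dataclass, field

# Hypothetical AI inventory record: one per deployed AI system.
# Fields mirror the "secure" focus-area questions above.
@dataclass
class AIAsset:
    name: str                                                   # e.g. "support-chatbot"
    kind: str                                                   # LLM, predictive model, agent...
    training_data_sources: list[str] = field(default_factory=list)
    third_party_models: list[str] = field(default_factory=list)  # external models / APIs in use
    prompt_access: list[str] = field(default_factory=list)       # who can touch prompts / parameters
    sanctioned: bool = False                                     # False = potential shadow AI

inventory = [
    AIAsset("support-chatbot", "LLM", ["ticket-archive"], ["vendor-hosted-model"],
            ["platform-team"], sanctioned=True),
    AIAsset("sales-forecaster", "predictive model"),  # discovered in the wild, never reviewed
]

# Unauthorized deployments surface as unsanctioned records.
shadow_ai = [a.name for a in inventory if not a.sanctioned]
```

Even a spreadsheet works for this; the point is that every question in the list maps to a field someone is accountable for filling in.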

Defend: Using AI for Your Own Security Operations

This is about using AI as a tool for threat detection, security operations centers, automated response, and SOAR platforms.

Here's the uncomfortable question nobody's really answering yet: When AI makes the call on a threat response, who owns that decision? Your security tools already have AI baked in, and you're just not governing it. Automated incident response, anomaly detection, false-positive filtering: all of it touches AI.

The defend focus area asks:

  • What AI-assisted decisions are you making in your security operations?
  • How do you validate those decisions before they execute?
  • What's your accountability model for AI-driven actions?
  • How are you monitoring AI tool performance and drift?

Thwart: Defending Against AI-Enabled Attacks

Your adversaries are using AI too. AI-generated phishing, deepfakes, automated vulnerability discovery, AI-assisted malware. These aren't future threats. They're active now.

Thwart focuses on resilience. It's the conversation boards need to be having: How do we defend against threats that are being generated and deployed at machine speed?

Need help building your AI governance framework? Z Cyber's advisory team can guide you through the process.

Get Started

HITRUST AI Certification: Proof Over Assertions

NIST tells you what to think about. HITRUST gives you a way to prove you've acted on it.

HITRUST consolidated more than two dozen frameworks (NIST, ISO, OWASP, and others) into a single assessment built specifically for AI. There are 44 prescriptive controls, with more likely coming, independently validated and tailored to your specific AI deployment.

If you're already familiar with HITRUST in a healthcare or compliance context, the AI assessment plugs directly into your existing assessment infrastructure. You're not starting over; you're extending what you've already built.

Here's the number that matters: Less than 1% of HITRUST-certified environments reported a breach over the last two years. The assurance model works.

For CISOs, this changes how you demonstrate AI security posture. Instead of asserting it, you can prove it: to your customers, your board, your regulators, and your vendors. If you manage third-party risk, this changes your vendor questionnaires and assessment processes.

How It Maps to CSF 2.0

The Cyber AI Profile maps directly into the six CSF 2.0 functions: Govern, Identify, Protect, Detect, Respond, and Recover.

Within each function, you get AI-specific considerations and a priority level: High, Moderate, or Foundational.

The priority system is the most practical tool in the document. It gives you a place to start when you can't tackle everything at once, which is every organization, all the time.

Example mapping:

  • Govern: Policy and oversight of AI systems, accountability structures, third-party risk management
  • Identify: AI inventory, model provenance, data lineage, vulnerability assessment of AI components
  • Protect: Access controls, data protection, supply chain security for AI tools
  • Detect: Monitoring AI system behavior, detecting adversarial inputs, identifying prompt injection
  • Respond: Incident response procedures that account for AI-specific issues
  • Recover: Recovery from AI-enabled attacks, model retraining after compromise
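The mapping above is easy to represent as data, which makes the priority-driven triage the article describes mechanical. A sketch in Python; the function names come from CSF 2.0, but the specific considerations and priority assignments here are illustrative, not copied from the draft:

```python
# Illustrative CSF 2.0 function -> (AI consideration, priority) mapping.
csf_ai_profile = {
    "Govern":   [("AI policy and accountability structures", "High")],
    "Identify": [("AI inventory and model provenance", "High"),
                 ("Data lineage for training sets", "Moderate")],
    "Protect":  [("Supply chain security for AI tools", "High")],
    "Detect":   [("Prompt injection and adversarial input monitoring", "Moderate")],
    "Respond":  [("AI-specific incident response procedures", "Foundational")],
    "Recover":  [("Model retraining after compromise", "Foundational")],
}

# The priority field gives a starting order when you can't tackle everything at once.
high_first = sorted(
    (fn, item)
    for fn, items in csf_ai_profile.items()
    for item, prio in items
    if prio == "High"
)
for fn, item in high_first:
    print(f"[{fn}] {item}")
```

Filtering on "High" first is exactly the use NIST intends for the priority levels: a defensible answer to "where do we start?"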

NIST is also building control overlays for securing AI systems, which will map all of this to NIST SP 800-53 controls. When that lands, it becomes your technical implementation roadmap. Watch for it.

The Agentic AI Gap

There's one thing the draft doesn't yet address well enough: agentic AI.

Systems where one AI directs another, where the AI sets its own hyperparameters. That's where enterprise deployments are heading, if they aren't there already.

If you're running multi-agent pipelines or autonomous AI workflows, you're ahead of the framework. That's not a reason to ignore it. It's actually an advantage. You're building the governance layer that NIST is going to codify. Use that position.

What You Can Do This Week

Organizations waiting for binding regulation to move on this are already behind. Regulators, auditors, courts, and insurance companies are going to start applying these benchmarks retroactively. Your time to move is now.

Three things you can do immediately:

1. Pull NIST IR 8596 and map your AI inventory. Take your list of AI systems and map them against the three focus areas: Secure, Defend, and Thwart. Where are you exposed? Where are you completely ungoverned? You have to figure out what you don't know first.

2. Get cross-functional alignment. This is not simply an IT or security problem. Legal, compliance, HR, executive leadership. They all need to be in this conversation before the incident, not after. Organizations that align now will move faster and avoid costly surprises.

3. Start your HITRUST assessment planning. Whether you pursue certification or not, the 44 prescriptive controls give you a concrete roadmap. They're independently validated and purpose-built for AI. That's more actionable than a framework alone.
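Step 1's gap mapping can be sketched as a simple coverage check. The system names and focus-area tags below are hypothetical; the point is that the output is a concrete gap list you can assign owners to:

```python
# Hypothetical gap check: tag each known AI system with the focus areas
# (secure / defend / thwart) where it is actually governed, then list what's missing.
FOCUS_AREAS = {"secure", "defend", "thwart"}

governed = {
    "support-chatbot": {"secure"},            # access controls exist, nothing else
    "soc-triage-model": {"secure", "defend"},
    "sales-forecaster": set(),                # completely ungoverned
}

gaps = {system: sorted(FOCUS_AREAS - covered) for system, covered in governed.items()}
for system, missing in sorted(gaps.items()):
    print(f"{system}: missing {missing if missing else 'nothing'}")
```

A system with an empty coverage set is exactly the "completely ungoverned" exposure step 1 tells you to surface first.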

What's Next

The Cyber AI Profile gives you the what and the why. The next step is understanding the how, starting with the Govern function, the backbone of the entire framework.

Governance has to be right, or none of the rest of this works.

Ready to build your AI governance framework? Our team can help you map your AI inventory, align cross-functional stakeholders, and chart a path to compliance with the NIST Cyber AI Profile.

Get Started with Z Cyber

Frequently Asked Questions

What is the NIST Cyber AI Profile (IR 8596)?

The NIST Cyber AI Profile is an overlay on the existing NIST Cybersecurity Framework 2.0 that adds AI-specific considerations across all six CSF functions. It covers large language models, generative AI, predictive analytics, recommendation engines, and agentic systems. It is currently a non-binding preliminary draft.

What are the three focus areas of the Cyber AI Profile?

The three focus areas are Secure (protecting AI systems and components), Defend (using AI for cyber defense operations), and Thwart (defending against AI-enabled cyber attacks).

What is the HITRUST AI security certification?

HITRUST's AI security assessment certification consolidates more than two dozen frameworks into a single assessment with 44 prescriptive controls specifically for AI. Less than 1% of HITRUST-certified environments reported a breach over the last two years.

How does the Cyber AI Profile map to NIST CSF 2.0?

The profile maps directly into all six CSF 2.0 functions: Govern, Identify, Protect, Detect, Respond, and Recover. Each function receives AI-specific considerations with priority levels of High, Moderate, or Foundational.