NIST CSF 2.0 and the Govern Function: Where AI Risk Management Starts
In Part 1 of this series, we covered what the NIST Cyber AI Profile (IR 8596) is and why it exists. In Part 2, we got into the operational reality of the three focus areas: Secure, Defend, and Thwart.
Now we need to show you where this profile actually sits inside the broader framework it was built on. If you don't understand that relationship, you're going to miss how the whole governance stack works together.
Why the CSF 2.0 Mapping Matters
Every major governance framework eventually becomes a layer in a stack. Nobody operates against just one. You've got CSF, you've got 800-53, you've got industry-specific overlays. The organizations that govern well understand how those layers connect. The ones that struggle treat each framework as a separate project.
The Cyber AI Profile is not a standalone document. It's explicitly built on top of the NIST Cybersecurity Framework 2.0. It uses the CSF 2.0 structure (its functions, categories, and subcategories) and overlays AI-specific considerations on top of each one.
If your organization is already working against the CSF, you're not starting over. You're extending what you already have.
If CSF 2.0 is not in your foundation, that's the conversation you need to have first. You can't layer AI governance onto something that isn't there yet.
The Six Functions Through an AI Lens
Here's how each CSF 2.0 function maps when you apply the Cyber AI Profile.
Identify. Look at your asset inventory right now and ask whether AI components are actually in there. Models, agents, third-party AI dependencies. Chances are they're not. That's where shadow AI becomes your biggest exposure.
Protect. This is where prompt injection defense and model security live. If you have LLM-based applications running and AI-specific controls aren't in your security architecture yet, that's the gap to close.
Detect. This runs in two directions simultaneously: detecting threats coming at your AI systems and detecting AI-enabled threats coming at your broader environment.
Respond. Pull your incident response plan and ask one honest question: was it written with an AI-enabled attack scenario in mind? Most weren't.
Recover. Model integrity validation. When AI systems are compromised, you need a process for validating them before they go back into production.
Govern. The function that drives all of it. And the one we need to spend real time on.
Govern: The Function That Determines Whether Everything Else Works
Govern isn't just another box in the framework. It's the function that determines whether everything else actually works.
When you look at Govern through an AI lens, what it's really asking is whether your organization has made conscious, documented decisions about AI risk. Not reactive decisions. Not informal ones. Conscious and documented.
Define Your Risk Appetite
The framework asks you to define what level of AI risk your organization is willing to accept. That sounds straightforward until you actually try to do it.
What does acceptable AI use mean in your organization? Is it defined? Which AI deployments are inside that boundary and which ones aren't? Who made that determination, and when did they make it? Where is it written down?
Those aren't easy questions. And the answers have real consequences for every AI deployment decision your teams are making right now, decisions they're making without that framework in place.
Answer the Accountability Question
This one deserves more attention than it currently gets.
Govern requires you to define who is responsible for AI risk. Is it the CISO? A Chief AI Officer? A Chief Risk Officer? A cross-functional committee?
There's no single right answer. But there is a definitively wrong one: nobody.
Risk without a named owner does not get managed. It accumulates quietly until something forces the conversation. Don't let that be an incident.
Close the Policy Gap
Look at your AI use policy right now and ask whether it governs agentic AI, whether it covers employees using personal AI tools for work purposes, and whether it covers third-party models embedded in your vendor stack.
The AI use policies that exist in most organizations were written fast and reactively when the only consideration was ChatGPT. The threat landscape, the technology, and the regulatory expectations have all moved significantly since then. Chances are your policy has not kept up.
Address Supply Chain Risk Management
When you bring an AI system into your environment, whether you built it, bought it, or integrated it through a vendor, you're inheriting the security posture of every component in that system. The model, the training data, the inference infrastructure, the APIs. All of it.
Govern requires you to have a framework for assessing and managing that risk before you deploy. Not after something goes wrong.
Establish Board Oversight
Govern is explicit: AI risk must have visibility at the leadership level. Not buried in a technology risk report. Not delegated entirely to the security team.
The board needs to understand the organization's AI risk posture the same way they understand financial and operational risk. That means your CISO or AI risk lead needs to be able to walk into that room and give a clear, business-level picture of where the organization stands.
If that briefing doesn't exist yet, building it is a Govern priority.
Need help building your AI governance framework? Z Cyber's advisory team can guide you through the Govern function and beyond.
Get StartedGovern Is Not About Slowing Down
Here's the frame that matters: Govern is not about slowing AI adoption down. It's about making sure the decisions your organization is making about AI are conscious ones, made with eyes open, with accountability attached, with a policy framework that reflects where the technology and the threat landscape are today.
The organizations that get Govern right don't move slower on AI. They move more confidently. Because they've answered the hard questions before they need them.
Three Things to Do This Week
1. Pull your AI use policy and read it against reality. Where is your organization with AI today? If your policy doesn't cover agentic AI, third-party model risk, or employee use of personal AI tools, those are the gaps to address first.
2. Answer the accountability question. Who owns AI risk in your organization right now? If the answer isn't immediately clear, that's your starting point.
3. Get AI risk onto the enterprise risk register this quarter. Named owner. Review schedule. That one move changes the conversation at the leadership level.
What's Next: The Global AI Governance Landscape
Video 4 in this series zooms out to the full global AI governance landscape, including HITRUST, OWASP, CIS, and the EU AI Act, and why the geopolitical dimension of AI governance matters more than most security leaders are accounting for right now.
Ready to operationalize AI governance? We help security teams move from framework to practice across your entire AI environment.
Get Started with Z CyberFrequently Asked Questions
How does the NIST Cyber AI Profile relate to CSF 2.0?
The Cyber AI Profile (IR 8596) is an overlay built on top of the NIST Cybersecurity Framework 2.0. It uses the same structure (functions, categories, and subcategories) and adds AI-specific considerations on top. Organizations already working against the CSF are extending, not starting over.
What is the Govern function in NIST CSF 2.0?
Govern is the function that determines whether everything else in the framework works. For AI, it requires defining risk appetite, naming who owns AI risk, updating AI use policies, managing supply chain risk, and ensuring board-level visibility into AI risk posture.
Who should own AI risk in an organization?
The NIST Cyber AI Profile requires a named owner for AI risk, whether that's the CISO, a Chief AI Officer, a Chief Risk Officer, or a cross-functional committee. There's no single right answer, but the wrong answer is nobody. Risk without a named owner accumulates until an incident forces the conversation.
What should an AI use policy cover in 2026?
An effective AI use policy should cover agentic AI (autonomous multi-agent pipelines), employee use of personal AI tools for work purposes, and third-party AI models embedded in your vendor stack. Most policies written reactively for ChatGPT don't address these areas.

