NIST CSF 2.0 and Agentic AI: Building Profiles for Autonomous Systems

About the author: Matthew Smith is a vCISO and management consultant specializing in cybersecurity risk management and AI. Over the last 15 years, he has authored standards, guidance and best practices with ISO, NIST, and other governing bodies. Smith strives to create actionable resources for organizations seeking to minimize technological risk and increase value to customers. His expertise encompasses ISO 27110, the NICE Workforce Framework, the NIST Cybersecurity Framework, security framework analysis, process creation, process improvement, and data analysis.
AI agents are likely already running inside your infrastructure. They triage alerts, remediate incidents, provision resources, and make decisions without waiting for a human to approve each step.
For teams aligned to NIST’s Cybersecurity Framework (CSF) 2.0, this creates a problem: the framework assumes human actors, human-speed decisions, and human-readable audit trails. Autonomous systems break all three assumptions.
The good news is that CSF 2.0 was designed to be adapted.
The Cyber AI Profile (NIST IR 8596)
CSF 2.0’s Community Profile mechanism lets organizations tailor the framework’s six core functions to specific technologies, sectors, or risk environments.
NIST recognized this assumption gap directly when it published the preliminary draft of the Cyber AI Profile (NIST IR 8596) in December 2025, offering the first official guidance for applying CSF 2.0 to AI systems. In February 2026, the agency went further by launching the AI Agent Standards Initiative, the first U.S. government program dedicated to AI agent security standards.
For platform engineers, SREs, DevOps engineers, and security architects, building a CSF AI Profile is a way to organize and communicate cybersecurity outcomes for the agents already operating in your environment.
What NIST CSF 2.0 provides, and where agentic AI demands tailoring
CSF 2.0 organizes cybersecurity outcomes into six functions: Govern, Identify, Protect, Detect, Respond, and Recover.
Each function holds up well as a category of concern for agentic AI. But the specific subcategories and implementation guidance need rethinking when the “user” making API calls, accessing databases, and modifying infrastructure is a piece of software acting on its own.
Govern: Policies and oversight for autonomous systems
CSF 2.0 introduced the Govern function as a new addition to the framework, placing organizational context, risk strategy, and oversight at the center of every cybersecurity program. For traditional systems, governance means defining roles, setting risk tolerances, and establishing policies that humans follow. Agentic AI complicates each of these.
An AI agent operating your CI/CD pipeline or your incident response toolchain does not read your acceptable use policy. It follows whatever constraints your developers encoded, plus whatever permissions your infrastructure grants it. Agents can even perform actions they are explicitly instructed not to take.
Governance for agents means defining, in machine-enforceable terms:
- What an agent is allowed to do
- What it must never do
- What requires human approval before proceeding
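These three tiers can be encoded directly as a default-deny policy check. The sketch below is illustrative, assuming a simple action-name model; the action names and the `evaluate` function are hypothetical, not any product's API:

```python
# Hypothetical machine-enforceable agent policy: unlisted actions are denied
# by default, and forbidden actions are denied even with human approval.
ALLOWED = {"read_logs", "restart_service"}
FORBIDDEN = {"delete_database", "modify_iam"}
NEEDS_APPROVAL = {"deploy_to_prod", "rotate_secrets"}

def evaluate(action: str, human_approved: bool = False) -> str:
    """Return 'allow', 'deny', or 'pending_approval' for an agent action."""
    if action in FORBIDDEN:
        return "deny"          # "must never do" overrides everything
    if action in NEEDS_APPROVAL:
        return "allow" if human_approved else "pending_approval"
    if action in ALLOWED:
        return "allow"
    return "deny"              # default-deny: anything unlisted is refused
```

The important design choice is the default-deny fallthrough: an agent that discovers a new tool or capability gets no implicit permission for it.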
The Cyber AI Profile emphasizes this shift. Under the Govern function, it calls for organizations to identify AI dependencies across their operations, integrate AI-specific risks into formal risk appetite statements, and establish human-in-the-loop checkpoints for high-consequence decisions.
Key takeaway
→ Teams building a CSF AI Profile should start by inventorying every agent in their environment, documenting its scope of action, and mapping those actions to existing governance controls. Gaps between what an agent can do and what your governance framework covers become your first priorities.
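A minimal inventory can make those gaps mechanical to find. This sketch assumes each agent record lists the actions it can take and the actions your governance controls actually cover; the field names are illustrative:

```python
# Hypothetical agent inventory: "actions" is what the agent can do,
# "governed" is what existing governance controls cover.
agents = [
    {"name": "alert-triage-bot",
     "actions": {"read_alerts", "close_alerts"},
     "governed": {"read_alerts"}},
    {"name": "infra-provisioner",
     "actions": {"create_vm", "delete_vm"},
     "governed": set()},
]

def governance_gaps(inventory):
    """Per agent, the actions it can take that no governance control covers."""
    return {a["name"]: sorted(a["actions"] - a["governed"]) for a in inventory}
```

Running `governance_gaps(agents)` surfaces the ungoverned actions per agent, which is exactly the prioritized gap list the takeaway describes.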
Identify: Risk assessments for non-human actors
Traditional risk assessments catalog assets, threats, and vulnerabilities with the assumption that threat actors are external and that internal users behave predictably. Agentic AI disrupts both assumptions.
An AI agent is simultaneously an asset (it performs valuable work), a potential threat vector (it can be manipulated through prompt injection or data poisoning), a source of unpredictable behavior (model drift, hallucination, emergent actions), and a non-human identity that must be managed across its lifecycle.
Risk assessments for agentic systems therefore need to account for behaviors that have no precedent in traditional IT. NIST’s own red-team research from January 2025 demonstrated that novel attack strategies against AI agents achieved an 81% success rate, compared to 11% against baseline defenses.
Key takeaway
→ Your Identify profile should include AI-specific threat modeling that addresses prompt injection, training data compromise, excessive autonomy, and multi-agent coordination failures.
Protect: Key controls for agentic identity
Identity and access management (IAM) becomes the highest-stakes control surface when agents operate inside your infrastructure. A human engineer authenticates once, works within defined permissions, and generates an audit trail tied to their identity. An AI agent may spin up dozens of sessions, escalate its own privileges through tool use, and operate across multiple systems in seconds.
The scale of the problem is significant. Teleport’s 2026 Infrastructure Identity Survey found that 70% of organizations have given their AI systems greater access than they would give a human employee performing the same job. The consequences are measurable: organizations with over-privileged AI reported a 76% incident rate, compared to 17% for those enforcing least-privileged access for AI systems.
The National Cybersecurity Center of Excellence (NCCoE) — a division of NIST — published a concept paper in February 2026 proposing adaptations to existing identity and authorization frameworks for AI agents, acknowledging this as a critical gap.
Key takeaway
→ Your Protect profile for agents should enforce least-privilege access at the task level (not the role level), require short-lived credentials that expire after each operation, and maintain a distinct identity for every agent instance. Treat each agent like an untrusted contractor with narrowly scoped permissions, not a trusted service account with broad access.
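The task-level, short-lived credential pattern can be sketched in a few lines. This is illustrative only, assuming a trivial in-process issuer; the `issue_credential` function and its fields are hypothetical, not Teleport's actual API:

```python
import secrets
import time

# Hypothetical credential issuer: one short-lived, task-scoped credential
# per agent instance, with a distinct subject identity each time.
def issue_credential(agent_id: str, task: str, ttl_seconds: int = 300) -> dict:
    return {
        "token": secrets.token_urlsafe(32),
        "subject": f"{agent_id}/{task}",  # distinct identity per agent + task
        "scope": [task],                  # least privilege at the task level
        "expires_at": time.time() + ttl_seconds,
    }

def is_valid(cred: dict, task: str) -> bool:
    """A credential is valid only for its one task and only until it expires."""
    return task in cred["scope"] and time.time() < cred["expires_at"]
```

The point of the sketch: the credential names the task, not a role, so an agent that finishes one operation holds nothing it can reuse for the next.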
Learn how Teleport protects infrastructure with identity and access control for AI agents.
Detect: Continuous monitoring when systems act on their own
Continuous monitoring (DE.CM in CSF 2.0) assumes you can distinguish normal behavior from anomalous behavior. For human users, normal behavior follows recognizable patterns: login times, access frequencies, and data volumes. For AI agents, “normal” can change with every model update, every prompt variation, and every shift in the data the agent consumes.
Detection strategies for agentic systems need behavioral baselines that are continuously recalibrated. Your monitoring should track not just what an agent accesses, but what it decides, what alternatives it considers, and whether its outputs fall within expected bounds.
Key takeaway
→ Visibility becomes fragmented when agents operate across distributed infrastructure (multi-cloud environments, edge nodes, third-party APIs). A Detect profile for agents should require centralized logging of all agent actions, decision traces that capture the reasoning chain behind each action, and anomaly detection tuned to the specific behavioral patterns of each agent type.
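A continuously recalibrated baseline can be as simple as a rolling window over one behavioral metric. This is a sketch under stated assumptions: a single scalar metric per agent (e.g. API calls per minute) and an illustrative 3-sigma threshold; real deployments would track many signals per agent type:

```python
from collections import deque
import statistics

class RollingBaseline:
    """Flags values that drift beyond k standard deviations of a rolling
    window. The window, warm-up length, and k are illustrative choices."""

    def __init__(self, window: int = 50, k: float = 3.0):
        self.values = deque(maxlen=window)
        self.k = k

    def observe(self, value: float) -> bool:
        """Record a metric sample; return True if it is anomalous."""
        anomalous = False
        if len(self.values) >= 10:  # wait for a warm-up period
            mean = statistics.fmean(self.values)
            stdev = statistics.pstdev(self.values)
            if stdev > 0 and abs(value - mean) > self.k * stdev:
                anomalous = True
        if not anomalous:
            self.values.append(value)  # recalibrate only on normal behavior
        return anomalous
```

Excluding anomalous samples from the window is the key design choice: it keeps a misbehaving agent from dragging its own baseline toward the misbehavior.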
Respond: Incident response for autonomous system behavior
Traditional incident response (RS.MA in CSF 2.0) assumes a human analyst investigates, contains, and remediates. When an AI agent is the source of unexpected behavior, the response calculus changes. You need to determine whether the agent was compromised, whether it encountered an edge case in its training, or whether it is functioning as designed but producing unintended consequences.
Key takeaways
→ Define clear escalation paths for agent-originated incidents. Include kill switches that can halt an agent’s operations without disrupting the broader system.
→ Perform post-incident analysis that examines the agent’s decision log, not just the system logs.
→ If an agentic workflow acted outside of its sanctioned scope and caused a security incident, reconstruct why the agent chose that action, what inputs drove the decision, and whether the same inputs could trigger the same behavior again.
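The kill switch in the first takeaway can be sketched as a gate every agent action must pass through, so tripping it halts one agent without touching the broader system. This is illustrative, assuming an in-process orchestrator; in practice you would wire it into your actual agent runtime:

```python
import threading

class KillSwitch:
    """Halts one agent's operations without stopping the surrounding system.
    Thread-safe via an Event, since agents often run concurrent sessions."""

    def __init__(self):
        self._halted = threading.Event()
        self.reason = None

    def trip(self, reason: str) -> None:
        self.reason = reason
        self._halted.set()

    def guard(self, action_fn, *args):
        """Run an agent action only if the switch has not been tripped."""
        if self._halted.is_set():
            return {"status": "halted", "reason": self.reason}
        return {"status": "ok", "result": action_fn(*args)}
```

Because the switch sits outside the agent, it works even when the agent itself is the thing misbehaving.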
Recover: Graceful failure and healing from agentic behaviors
Recovery planning for agentic systems centers on two questions: how do you reduce downtime when an agent fails, and how do you minimize customer impact during that failure?
Agents that operate autonomously also need autonomous fallback mechanisms. Your Recover profile should therefore:
- Define degraded-mode operations where human operators or simpler automated systems take over when an agent is disabled.
- Include rollback capabilities that can revert agent-initiated changes to infrastructure or data.
- Establish clear SLAs for agent recovery that account for the time needed to diagnose the root cause (which may involve retraining or re-validating a model, not just restarting a service).
Key takeaways
→ Design agents with circuit breakers that trip when outputs exceed expected thresholds.
→ Build recovery paths that do not depend on the failed agent to execute.
→ Test agent failure scenarios the same way you test infrastructure failure scenarios: regularly, in production-like environments, and with clear success criteria.
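The circuit-breaker takeaway can be sketched as a simple state machine: out-of-bounds outputs accumulate, and once the breaker opens, callers are routed to a fallback path that does not depend on the agent. The bounds and failure threshold here are illustrative assumptions:

```python
class CircuitBreaker:
    """Trips after max_failures consecutive out-of-bounds outputs; once open,
    callers take a fallback path instead of trusting the agent."""

    def __init__(self, lower: float, upper: float, max_failures: int = 3):
        self.lower, self.upper = lower, upper
        self.max_failures = max_failures
        self.failures = 0
        self.open = False  # open circuit == agent bypassed

    def check(self, output: float) -> str:
        if self.open:
            return "fallback"
        if self.lower <= output <= self.upper:
            self.failures = 0  # a good output resets the count
            return "accept"
        self.failures += 1
        if self.failures >= self.max_failures:
            self.open = True
        return "reject"
```

Requiring consecutive failures before opening keeps one noisy output from benching a healthy agent, while still tripping quickly on sustained misbehavior.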
Primary takeaways for teams navigating AI and NIST CSF 2.0
AI and agentic systems are reshaping how CSF 2.0 gets applied in practice. For teams trying to prioritize when AI touches governance, detection, and response simultaneously, focus on these steps:
- Start with an agent inventory: You cannot govern, protect, or monitor what you have not cataloged. Document every AI agent in your environment, its scope of action, its data access, and its current permission model.
- Build your Govern layer first: Define machine-enforceable policies for agent behavior before you deploy new agents. Establish human-in-the-loop requirements for high-consequence actions. Integrate agent-specific risks into your existing risk management strategy.
- Fix identity before anything else: Machine identity sprawl is the most immediate and exploitable gap in most organizations. Implement short-lived, narrowly scoped credentials for every agent. Retire long-lived service account tokens that agents have inherited from older architectures.
- Instrument decision trails, not just access logs: Traditional logging tells you what happened. Agent decision trails tell you why. Build observability that captures the reasoning chain behind each autonomous action.
- Test failure and recovery as aggressively as you test functionality: Agent failure scenarios differ from infrastructure failure scenarios. A misconfigured agent can cause cascading damage before any alert fires. Run chaos engineering exercises that simulate agent misbehavior, not just agent unavailability.
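The decision-trail point above is the least familiar of the five, so here is a minimal sketch of what one trail entry might capture beyond an access log line. The record fields and `record_decision` helper are hypothetical, shown only to contrast "what happened" with "why":

```python
import json
import time

# Hypothetical decision-trail record: an access log stops at "action";
# a decision trail also captures inputs, alternatives, and rationale.
def record_decision(agent_id, action, inputs, alternatives, rationale):
    entry = {
        "ts": time.time(),
        "agent": agent_id,
        "action": action,              # what happened
        "inputs": inputs,              # what drove the decision
        "alternatives": alternatives,  # what the agent considered and rejected
        "rationale": rationale,        # why this action was chosen
    }
    return json.dumps(entry)           # in practice, ship to central logging
```

With entries like this, the post-incident question "why did the agent choose that action?" becomes a log query instead of a forensic reconstruction.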
The CSF 2.0 Profile mechanism gives you a structured way to address all of this within a framework your organization already understands, and NIST’s Cyber AI Profile draft provides a starting point for applying this to AI systems.
However, the work of tailoring it to your specific agents, your specific infrastructure, and your specific risk tolerance belongs to your team.
Accelerate NIST CSF 2.0 alignment: Identity, access, and audit controls
Teleport is trusted by leading organizations to accelerate NIST CSF 2.0 compliance across critical control families, including:
- Access Controls
- Audit & Accountability
- Identification & Authentication
- And more
Frequently asked questions
How does AI impact NIST CSF 2.0 alignment?
AI systems require organizations seeking CSF 2.0 alignment to apply the framework’s six functions across new AI-specific assets, including models, agents, application programming interfaces (APIs) and keys, datasets and metadata, and embedded AI integrations. NIST also recommends updating asset inventories, conducting AI-specific threat assessments, and maintaining human oversight of AI-mediated actions.
What is the Cyber AI Profile (NIST IR 8596)?
The Cyber AI Profile (NIST IR 8596) is a NIST community profile that provides guidelines for managing cybersecurity risk that AI systems can introduce. The profile also identifies opportunities where AI can be used to enhance cybersecurity capabilities, organized around CSF 2.0’s functions, categories, and subcategories.
How do you build a Cyber AI Profile?
You can build a Cyber AI Profile by applying the CSF 2.0 structure to AI-specific risks across three focus areas:
- Identifying cybersecurity challenges when integrating AI into organizational ecosystems and infrastructure
- Identifying opportunities to use AI to enhance cybersecurity and understanding challenges when leveraging AI to support defensive operations
- Building resilience to protect against new AI-enabled threats
What is the AI Agent Standards Initiative?
The AI Agent Standards Initiative is a program launched on February 17, 2026, by NIST’s Center for AI Standards and Innovation, intended to provide technical standards and protocols that build public trust in AI agents and support an interoperable agent ecosystem.

