

How to Apply NIST 800-53 to AI Systems

by Matthew Smith Mar 13, 2026


About the author

Matthew Smith is a vCISO and management consultant specializing in cybersecurity risk management and AI. Over the last 15 years, he has authored standards, guidance and best practices with ISO, NIST, and other governing bodies. Smith strives to create actionable resources for organizations seeking to minimize technological risk and increase value to customers. His expertise encompasses ISO 27110, the NICE Workforce Framework, the NIST Cybersecurity Framework, security framework analysis, process creation, process improvement, and data analysis.

How AI and agentic systems impact NIST SP 800-53 security controls

NIST SP 800-53 has long been the de facto control catalog for organizations building mature cybersecurity programs. It defines specific, enforceable requirements around access, accountability, and system integrity, which security practitioners and CISOs across industries use to structure risk management programs, satisfy regulatory expectations, and benchmark their security posture.

For years, teams have applied these controls to infrastructure composed of human-operated systems with well-understood boundaries. With AI and agentic systems, this operating model is shifting.

AI and agentic systems are becoming deeply embedded into enterprise infrastructure, and they are not simply new software to inventory. They are autonomous actors that make decisions, initiate changes, and operate across environments in ways that fundamentally challenge the assumptions behind many SP 800-53 controls. For example, an agent that autonomously monitors supply chain data, triggers remediation workflows, or adjusts infrastructure configurations is making context-dependent decisions at machine speed, often across multiple system boundaries.

The core question for CISOs is not whether 800-53 applies to AI-driven environments — it does. The question is where the complexity increases, and what their security teams need to do differently to meet these controls effectively as autonomous systems take on more operational roles.

Three NIST SP 800-53 control families illustrate where AI systems most significantly change how teams implement these controls:

  1. Access Control (AC)
  2. Audit and Accountability (AU)
  3. Configuration Management (CM)

These controls represent a subset of relevant controls within the 800-53 catalog, and are meant to be a starting point. Conduct your own risk assessment to determine the additional NIST 800-53 controls that may mitigate risk in your organization.

Where AI and agentic systems stress-test existing NIST 800-53 controls

The challenge with agentic AI is not that it falls outside the scope of 800-53 but that it stress-tests controls designed for environments where humans are the primary actors. When you examine how specific controls apply to autonomous systems, the gaps in traditional implementation approaches become clear.

Access and boundary protection (AC-06, SC-07, SC-10)

Boundary protection controls like SC-07 and SC-10 require organizations to monitor and control communications at external and key internal boundaries, and to terminate network connections after defined periods of inactivity. These controls assume relatively static network topologies and predictable connection patterns.

Agentic systems challenge both assumptions.

An autonomous agent tasked with predictive maintenance, for example, may need to reach across infrastructure environments, query sensor data from on-premises systems, interact with cloud-hosted models, and trigger downstream workflows, all without direct human initiation. The boundaries it crosses are dynamic, and its access needs may shift based on the operational context.

AC-06: Least privilege

Enforcing least privilege under AC-06 becomes significantly more complex in this context, as least privilege must apply to both human and non-human identities, including machines, AI agents, and services, across the entire model lifecycle.

In practice, this requires organizations to:

  • Assign unique, non-shared identities to training pipelines, inference services, and model monitoring agents.
  • Restrict service accounts to only the specific datasets, model artifacts, and compute resources required for their role.
  • Extend least privilege policies to machines, AI agents, and automated services, not just human users.

For most organizations, this is a meaningful departure from current identity and access management practices, where service accounts tend to be broadly scoped and infrequently reviewed.
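
To make the shift concrete, the bullets above can be sketched as a deny-by-default authorization check where every non-human identity carries its own narrowly scoped grants. The identity names, resource labels, and policy shape below are illustrative assumptions, not any specific product's API:

```python
# Minimal sketch of per-agent least privilege (AC-06). Identities and
# resource labels are hypothetical examples.

AGENT_POLICIES = {
    # Each non-human identity gets its own unique, narrowly scoped grants.
    "training-pipeline-01": {
        "read": {"dataset:telemetry-2025"},
        "write": {"artifact:model-candidates"},
    },
    "inference-svc-01": {
        "read": {"artifact:model-prod"},
        "write": set(),  # inference should never write model artifacts
    },
}

def authorize(identity: str, action: str, resource: str) -> bool:
    """Deny by default; allow only what this identity's role grants."""
    grants = AGENT_POLICIES.get(identity, {})
    return resource in grants.get(action, set())
```

The key design point is that an unknown or shared identity gets nothing: `authorize("unknown-agent", ...)` fails because the policy map has no entry for it, which is the opposite of the broadly scoped service-account pattern described above.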

SC-10: Network disconnect

For security teams, this departure also means revisiting how boundary protection is implemented when agents are the primary traffic generators.

Network segmentation strategies need to account for the communication patterns of autonomous systems. For example, inactivity timeout policies under SC-10 may need rethinking, since an agent's periods of silence between actions do not necessarily indicate a session that should be terminated.

The practical work here is mapping how agents traverse the environment and building access policies that are granular enough to enforce least privilege without breaking legitimate autonomous workflows.
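
One hedged way to rethink SC-10 timeouts is to key the disconnect decision to the expected communication cadence of each identity class rather than a single global idle timer. The classes, cadences, and slack factor below are illustrative assumptions:

```python
# Sketch: cadence-aware disconnect decision (SC-10). An agent that polls
# every few hours by design should not be cut off by a 15-minute human
# idle timeout. All values here are hypothetical.

EXPECTED_CADENCE_SECONDS = {
    "human": 900,             # 15 min of silence -> disconnect candidate
    "batch-agent": 6 * 3600,  # agent acts on a multi-hour schedule
}

def should_disconnect(identity_class: str, idle_seconds: float) -> bool:
    """Terminate only when silence exceeds 2x the expected cadence."""
    cadence = EXPECTED_CADENCE_SECONDS.get(identity_class, 900)
    return idle_seconds > 2 * cadence
```

Under this sketch, 33 minutes of silence ends a human session but is routine for a batch agent, which is the distinction the control implementation needs to draw.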

SC-07(10): Exfiltration prevention

Exfiltration prevention under SC-07(10) also takes on new dimensions with AI systems in the fold.

For example, the exfiltration of information about a training model can enable more sophisticated attacks that subvert system objectives. Extraction attacks grow more effective when an attacker can seed the model with specific information to extract details about training data or model architecture.

Teams must consider not just traditional data loss prevention, but also protections against model extraction and data privacy attacks unique to AI systems.

Key takeaway

  • AC-06 (Least privilege): Extend least privilege policies to AI agents, services, and pipelines by assigning unique identities and restricting access to only the datasets, models, and infrastructure required for their role.
  • SC-07 (Boundary protection): Revisit network segmentation and boundary monitoring to account for autonomous agents that dynamically interact across infrastructure environments and services.
  • SC-10 (Network disconnect): Adjust session management and inactivity timeout policies so they accommodate agent-driven workflows, where periods of inactivity may not indicate a terminated session.

Audit and accountability (AU-03, AU-04, AU-10)

The audit and accountability controls in NIST SP 800-53 require organizations to generate audit records with sufficient content to establish what occurred, maintain adequate log storage, and protect against unauthorized modification of audit information.

When the actor generating the activity is a human, meeting these requirements is well understood. When the actor is an autonomous agent, these requirements can become complex.

AU-03: Content of audit records and AU-04: Audit log storage capacity

Consider what happens when an agentic system processes thousands of inference requests per hour, each potentially triggering downstream actions. The volume of audit data generated by autonomous systems can be orders of magnitude greater than what human operators produce.

AU-04 requires organizations to allocate sufficient audit log storage capacity. However, agents act faster and more frequently than humans, and already outnumber human actors in many environments, so log volumes climb far beyond what human activity alone produces. Capacity planning for agent-generated activity therefore requires fundamentally different assumptions about data volume.
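
A back-of-envelope estimate makes the planning gap concrete. The fleet sizes, event rates, and record sizes below are illustrative assumptions, not benchmarks:

```python
def daily_log_bytes(actors: int, events_per_hour: int,
                    bytes_per_record: int) -> int:
    """Rough daily audit volume for a fleet of actors."""
    return actors * events_per_hour * 24 * bytes_per_record

# Hypothetical comparison: 50 human operators vs. 500 agents emitting
# 100x the events, with richer (larger) decision-chain records.
human_daily = daily_log_bytes(actors=50, events_per_hour=20,
                              bytes_per_record=1_000)
agent_daily = daily_log_bytes(actors=500, events_per_hour=2_000,
                              bytes_per_record=2_000)
```

With these assumed inputs, the human fleet produces roughly 24 MB per day while the agent fleet produces tens of gigabytes, a multiple large enough to invalidate storage plans sized around human activity.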

Beyond storage, the structure and content of audit records under AU-03 must also capture the decision chain of autonomous actions.

A traditional audit record might log that a user executed a command. An audit record for an agent, however, needs to capture the following with enough fidelity to reconstruct the sequence of events:

  • Triggering condition
  • Model's decision logic
  • Action taken
  • Downstream effects
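
The four elements above can be captured in a structured record. This is a minimal sketch; the field names and serialization are illustrative assumptions, not a prescribed schema:

```python
import json
import time
import uuid

def agent_audit_record(trigger: str, decision: str, action: str,
                       effects: list) -> str:
    """Serialize an AU-03-style record that captures the decision chain
    of an autonomous action, not just the action itself."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "triggering_condition": trigger,   # what caused the agent to act
        "decision_logic": decision,        # e.g. model/policy version and rationale
        "action_taken": action,
        "downstream_effects": effects,     # systems changed as a result
    }
    return json.dumps(record)
```

Because every field needed to reconstruct the sequence travels with the record, an investigator can replay the chain even when the agent instance that produced it no longer exists.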

AU-10: Non-repudiation

Non-repudiation under AU-10 introduces another layer of difficulty.

In human-driven systems, non-repudiation is typically tied to individual user credentials. For agentic systems, the challenge becomes how to attribute actions to a specific agent instance, model version, or pipeline execution in a way that is verifiable and tamper-resistant.
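
One way to make that attribution tamper-resistant is to sign each audit record with a key bound to the specific agent instance. The HMAC sketch below is illustrative; in practice the key would be a short-lived credential issued to that instance rather than a hardcoded value:

```python
import hashlib
import hmac
import json

# Hypothetical per-instance signing key. In a real deployment this would
# be a short-lived secret tied to one agent instance and rotated often.
AGENT_INSTANCE_KEY = b"demo-key-for-agent-instance-42"

def sign_record(record: dict, key: bytes = AGENT_INSTANCE_KEY) -> str:
    """Bind a record to an agent instance and model version (AU-10)."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_record(record: dict, signature: str,
                  key: bytes = AGENT_INSTANCE_KEY) -> bool:
    """Reject any record whose content no longer matches its signature."""
    return hmac.compare_digest(sign_record(record, key), signature)
```

Any after-the-fact edit to the record invalidates the signature, which is the property non-repudiation requires.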

Automated real-time analysis described in SI-04(02) becomes essential. Monitoring tools must track interactions with all datasets and algorithms during development and training, and capture all activity once the model is deployed. Automated methods also need to distinguish activity outside established parameters that could indicate poisoning, evasion, or an attacker probing the model.

This requires a level of observability infrastructure that many organizations have not yet built.

Key takeaways: Strengthening audit and accountability for AI

  • AU-03 (Audit record content): Ensure audit records capture the full decision chain of autonomous actions, including triggering conditions, model decision logic, actions taken, and downstream system effects.
  • AU-04 (Audit log storage capacity): Plan for significantly larger volumes of audit data generated by high-frequency inference activity and agent-initiated workflows.
  • AU-10 (Non-repudiation): Implement mechanisms that allow actions to be attributed to specific agent instances, model versions, or pipeline executions in a verifiable and tamper-resistant manner.

Configuration management (CM-02, CM-04, CM-08)

Configuration management controls require organizations to maintain baseline configurations, conduct impact analyses before changes, and track system component inventories. These controls become considerably more complex when autonomous systems are initiating changes rather than human operators.

CM-02: Baseline configuration

Baseline configuration under CM-02 illustrates the challenge.

Predictive AI systems introduce configuration elements not explicitly addressed in baseline configurations of enterprise IT systems, including machine learning frameworks and libraries, model architectures, and specialized compute environments. This means that now, baseline configuration must also encompass:

  • The AI software stack
  • Data and pipeline configurations
  • Model configuration artifacts

For infrastructure teams accustomed to tracking server configurations and application versions, this represents a significant expansion of scope. Most configuration management databases and processes were not designed to handle these additional layers of AI system configuration.
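
One lightweight way to bring these layers under CM-02 is to canonicalize the full AI-stack baseline and fingerprint it, so any drift is detectable as a hash mismatch. The baseline contents below are illustrative assumptions:

```python
import hashlib
import json

def baseline_fingerprint(baseline: dict) -> str:
    """Hash a canonical serialization of the AI-stack baseline so any
    configuration drift changes the fingerprint."""
    canonical = json.dumps(baseline, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

# Hypothetical baseline covering the layers named above.
baseline = {
    "ml_framework": "torch==2.3.1",              # AI software stack
    "pipeline_config": {"batch_size": 64,        # data and pipeline config
                        "feature_set": "v7"},
    "model_artifact": "sha256:abc123",           # model configuration artifact
}
```

Recomputing the fingerprint on a schedule and comparing it against the approved value turns the expanded baseline into something a change-detection process can actually enforce.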

CM-04: Impact analysis

Impact analysis under CM-04 is where the stakes become most apparent.

Minor configuration changes can significantly alter model behavior, potentially introducing bias, drift, or changes in the accuracy of outputs. A library update that would be routine in a conventional application could fundamentally change how an AI model processes inputs and generates outputs.

Organizations should expand their change advisory processes to include AI-specific evaluation criteria. This includes analyzing changes to:

  • Machine learning frameworks
  • Supporting libraries
  • Containers
  • Data pipelines
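
A change gate built on those criteria might compare model evaluation metrics before and after the proposed change and block anything that degrades behavior beyond a tolerance. The metric names and threshold below are illustrative assumptions:

```python
def change_approved(baseline_metrics: dict, candidate_metrics: dict,
                    max_drop: float = 0.01) -> bool:
    """Approve a framework/library/pipeline change only if no tracked
    model metric degrades by more than the allowed tolerance.
    Threshold and metrics are hypothetical, per-organization choices."""
    for metric, before in baseline_metrics.items():
        after = candidate_metrics.get(metric, 0.0)
        if before - after > max_drop:
            return False  # e.g. a library bump that silently hurt accuracy
    return True
```

The point is that a change which builds and deploys cleanly can still fail this gate, which is exactly the class of regression a conventional change advisory process would miss.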

CM-08: System component inventory

Component inventory under CM-08 faces similar expansion.

Agentic systems may dynamically provision and deprovision resources, deploy model versions, or modify pipeline configurations. However, traditional inventory tracking controls assume relatively stable system components.

When autonomous systems can modify infrastructure state, inventory management must now capture not only what exists at a point in time, but also:

  • The lineage of configuration changes
  • Relationships between model versions and training data
  • Dependencies between infrastructure components and deployed models

Maintaining this level of visibility is essential for understanding how AI systems evolve and for ensuring configuration integrity over time.
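
An inventory entry with that visibility might look like the following sketch, where each deployed model carries its lineage alongside the point-in-time facts. The field names are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class ModelInventoryEntry:
    """CM-08-style inventory record that links a deployed model to its
    training data and accumulates an ordered change lineage."""
    model_version: str
    training_dataset: str      # relationship to training data
    baseline_hash: str         # dependency on the approved configuration
    change_lineage: list = field(default_factory=list)

    def record_change(self, description: str) -> None:
        """Append an agent- or human-initiated change to the lineage."""
        self.change_lineage.append(description)
```

When an autonomous system later modifies the pipeline, `record_change` preserves the ordering the point-in-time snapshot would lose.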

Key takeaways: Strengthening configuration management (CM-02, CM-04, CM-08) for AI

  • CM-02 (Baseline configuration): Expand baseline configurations to include the full AI stack, including machine learning frameworks, model architectures, data pipelines, and specialized compute environments.
  • CM-04 (Impact analysis): Update change management processes to evaluate how configuration changes to ML frameworks, libraries, containers, or data pipelines could affect model behavior, accuracy, bias, or drift.
  • CM-08 (System component inventory): Extend system inventories to track model versions, training data relationships, and dynamically provisioned infrastructure components associated with AI systems.

Additional NIST 800-53 controls to consider for AI systems

Beyond these three families, several additional controls warrant consideration as AI systems scale.

Vulnerability monitoring under RA-05

This control must extend to AI-specific components and environments, including:

  • Machine learning frameworks
  • Specialized libraries that update frequently
  • Dynamic compute resources
  • Model artifacts

Vulnerability scanning should occur at system deployment, after significant model or pipeline changes, and at regular risk-based intervals.

Malicious code protection under SI-03(08)

Implementing SI-03(08) in AI environments requires accounting for unauthorized commands targeting model access points. Data ingest methods should parse inputs to block system-level commands, much the way SQL injection protections work in traditional applications.
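
A minimal input-parsing sketch of that idea is shown below. The denylist is illustrative and deliberately tiny; a production filter would be far more thorough and likely allowlist-based:

```python
import re

# Hypothetical patterns for system-level commands that should never
# reach a model access point. Real deployments need a much richer set.
BLOCKED_PATTERNS = [
    re.compile(r"\brm\s+-rf\b"),                 # destructive shell command
    re.compile(r"\bDROP\s+TABLE\b", re.IGNORECASE),  # SQL-injection style
    re.compile(r"[;&|]\s*(sh|bash|cmd)\b"),      # shell chaining attempts
]

def input_is_safe(text: str) -> bool:
    """Return True only if no blocked command pattern appears in the
    data being ingested by the model access point."""
    return not any(p.search(text) for p in BLOCKED_PATTERNS)
```

As with SQL injection defenses, the filter runs at the ingest boundary, before the input can influence model or downstream system behavior.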

Threat modeling under SA-11(02)

Finally, implementation of SA-11(02) should also evolve to track the threats that affect AI deployments. This includes monitoring vulnerabilities associated with the model itself, the datasets used for training, and the development tools used to build and deploy AI systems.

Practical takeaways for security leaders

The work ahead is not about creating entirely new security frameworks. It is about adapting proven (and implemented) controls to address the realities of autonomous systems operating within your environment.

The NIST SP 800-53 catalog already contains the controls you need. Now is the time to apply them with the specificity that AI and agentic systems demand.

1. Start with visibility

For CISOs and security teams, the first and most actionable steps begin with visibility.

Map how agentic systems traverse your environment: what boundaries they cross, what data they access, and what changes they initiate. Security teams should build identity and access management practices that treat agents as first-class entities, assigning unique identities and granular, lifecycle-aware permissions.

Invest in observability infrastructure capable of handling the volume and complexity of agent-generated audit data, capturing not just what happened but the decision chain that led to each action.

2. Expand control coverage across the AI lifecycle

Configuration management practices must also evolve.

Expand your configuration management scope to include the full AI stack, treating model artifacts, training pipelines, and ML frameworks with the same rigor applied to traditional infrastructure components. Impact analysis processes should evaluate how changes affect model behavior, not just security and availability.

It is also critical to recognize that AI access patterns change across lifecycle phases. The permissions an agent requires during model training differ from those needed in production environments, and your controls should reflect those differences.

The Bottom Line

Organizations that begin adapting their NIST SP 800-53 implementation now, and particularly around access control, audit, and configuration management, will be better positioned as regulatory expectations around AI security continue to mature.

Teleport features for AI compliance with NIST SP 800-53

Teleport supports NIST SP 800-53 compliance for AI environments by establishing a unified layer across humans, machines, workloads, and agentic systems. Identities are secured cryptographically using short-lived certificates that generate context-rich, identity-traceable, and audit-ready logs.

This includes support for many of the NIST 800-53 controls discussed in this article:

  • AU-03 (Content of audit records): Identity-traceable audit logs, Kubernetes request-level logging, and full session recordings for human and non-human identities
  • AU-04 (Audit log storage capacity): Audit log streaming to S3 and DynamoDB to accommodate increases in log volume due to autonomous activity
  • AU-10 (Non-repudiation): Session recording, certificate-bound audit logs, and SSO identity attribution for all machines and agents
  • CM-08 (System component inventory): Live inventory of nodes, clusters, databases, and applications
  • SC-10 (Network disconnect): Valid X.509 or SSH certificates required for connection, with session termination upon certificate expiry; session locks; inactivity timeouts

Simplify NIST 800-53 compliance

Teleport is trusted to simplify NIST 800-53 compliance for cloud-native, on-prem, and AI infrastructure across multiple control categories, including:

  • Access Controls (AC)
  • Audit and Accountability (AU)
  • Configuration Management (CM)
  • Identification and Authentication (IA)
  • And more

Frequently Asked Questions

What is NIST SP 800-53's role in securing AI systems?

NIST SP 800-53 provides the baseline security and privacy controls organizations apply to AI systems. It governs how identities access infrastructure, how activity is logged and attributed, and how system configurations and boundaries are managed. Controls such as Access Control (AC), Audit and Accountability (AU), and Configuration Management (CM) help organizations enforce least privilege, maintain traceable activity, and operate AI systems within defined security boundaries.

How can organizations integrate AI systems with NIST SP 800-53 security controls?

Organizations should begin by mapping AI components such as models, pipelines, datasets, and supporting infrastructure to relevant control families. Assign unique identities to AI agents, services, and pipelines, and enforce least privilege under Access Control (AC) so each component only accesses the systems and data it requires.

Simultaneously, implement logging and attribution under Audit and Accountability (AU) to capture model actions, data access, and infrastructure changes. Configuration Management (CM) should also extend to machine learning frameworks, model artifacts, and pipeline configurations so updates to models, libraries, and compute environments are tracked and reviewed.

Ongoing monitoring and integrity controls are also critical. These controls help detect abnormal behavior, unauthorized changes, or unexpected model activity across the AI lifecycle.

What guidance supports AI-driven continuous monitoring for NIST SP 800-53 compliance?

Continuous monitoring should extend across the infrastructure that supports AI systems, including servers, Kubernetes clusters, databases, and data pipelines where models run and interact with data. Continuous Monitoring (CA-07) requires organizations to track system activity and security posture over time, while Audit and Accountability (AU) controls capture infrastructure and identity activity for investigation and reporting. System and Information Integrity (SI) controls help detect abnormal behavior such as unauthorized configuration changes, adversarial inputs, or unexpected model behavior across the AI lifecycle.
