Teleport Launches Beams — Trusted Agent Runtimes For Infrastructure
Learn More
Teleport logoGet a Demo

Home - Teleport Blog - How AI Agents Impact SOC 2 Trust Services Criteria

How AI Agents Impact SOC 2 Trust Services Criteria

by Kayne McGladrey Feb 25, 2026

How AI Agents Impact SOC 2 Trust Services Criteria header image

SOC 2, which stands for Systems and Organization Controls 2, is a framework developed by the American Institute of Certified Public Accountants (AICPA) to evaluate controls for security, availability, processing integrity, confidentiality, and privacy.

As agentic AI systems begin acting autonomously, AI and SOC 2 compliance become closely linked. These systems drive new efficiencies, but also introduce new risks.

Achieving SOC 2 compliance in the age of AI and agentic systems requires demonstrating that security controls and oversight mechanisms extend to autonomous technologies, even as guidance for these emerging systems continues to evolve.

This article explains how SOC 2 Trust Services Criteria (TSC) can be mapped to agentic AI, offering practical guidance for securing autonomous agents while preserving SOC 2 compliance.

How AI impacts Trust Services Criteria for SOC 2

Integrating AI into production environments expands the scope of SOC 2 to cover models, training data, and automated decision-making systems. This shift affects every Trust Services Criterion.

It also expands “evidentiary requirements,” requiring auditable records for production execution in addition to the AI decisions and automation workflows that triggered those executions.

Notable impacts:

  1. Security now depends on strict access controls for models and APIs, and availability requires resilient, reliable AI services that can maintain consistent performance.

  2. Processing integrity hinges on validating and continuously monitoring outputs, and confidentiality focuses on protecting training data and model artifacts.

  3. Privacy governs the lawful handling of personal data throughout the AI lifecycle.

In real environments, defining these controls for AI is not the primary challenge. Instead, it’s proving that these controls are operating correctly at the high frequency of actions that AI, pipelines, and ephemeral systems are designed for.

The table below summarizes how these impacts align with SOC 2 control expectations.

SOC 2 Trust Service CriterionSummary of Coverage and the Impact of AI
SecurityImplements controls that protect systems from unauthorized access, prevent malicious alteration of AI models or inputs, and protects the integrity of automated decisions. Also includes encryption of data in transit and at rest, vulnerability management, and protection of API endpoints.

Security evidence depends on clear ownership. If logs show only a tool name (for example, “CI/CD Runner”) or a shared service account, auditors may question who initiated and approved the action.
AvailabilityEnsures AI services remain operational and responsive, providing consistent uptime, meeting inference latency requirements, implementing failover mechanisms, and supporting disaster recovery planning so users can rely on real-time capabilities without interruption.

Availability evidence becomes harder when auto-scaling creates short-lived instances that are not consistently instrumented, because gaps in the log chain can weaken confidence in monitoring coverage.
Processing IntegrityRequires data to be processed accurately, completely, and promptly, while validating model accuracy, detecting bias, and verifying outputs, all of which are essential for producing trustworthy AI outcomes.

Processing integrity also includes showing that automated code promotions or deployments did not bypass validation, because pipelines can promote outputs based on thresholds without producing the same approval and test artifacts that humans typically attach to change tickets.
ConfidentialityRequires protection of sensitive information such as proprietary training datasets, model outputs, and model parameters, while isolating user data and implementing data classification and handling procedures, reducing the risk of competitive leakage.

Auditors may also look for evidence that logs and monitoring data do not leak sensitive content, while still preserving enough context and metadata to reconstruct intent.
PrivacyAddresses lawful handling of personal data, aligning AI deployments with regulations including the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).

Privacy evidence often depends on data lineage and retention. If you cannot show where personal data flowed (training, fine-tuning, inference, logs), you will struggle to prove compliance with handling and disposal expectations.

Challenges when implementing AI in a SOC 2 compliant environment

Agentic AI systems can perform actions without direct human oversight, creating several potential compliance challenges.

Aligning these risks with SOC 2 criteria helps organizations show that their controls are both effective and auditable.

Agentic AI RiskCorresponding SOC 2 Control(s)
Unintended autonomous actionsLogical access (CC6)

Role-based permissions (CC6)
Model drift / degraded performanceChange management procedures (CC8)

Continuous monitoring (CC8)
Data leakage from training setsEncryption at rest (CC6)

Encryption in transit (CC6)
Bias and unfair outcomesRegular audits (CC3, CC4)

Documentation of model decisions (CC7, CC8)

Unintended autonomous actions

Unintended autonomous actions are risky when an AI system performs functions it was not specifically allowed to do.

What auditors look for: Auditors will often treat “no human request” as a major accountability gap, because SOC 2 expects privileged actions to be attributable to an accountable individual, not to an autonomous agent or generic system account.

Recommended guidance:
  • Using logical least privileged access controls and role-based permissions
  • Adding limits on actions and approved tool lists
  • Setting up approval workflows and human-on-the-loop checks for important decisions
  • Installing circuit breakers or kill switches to stop abnormal behavior

Model drift or degraded performance

Model drift or degraded performance occurs when an AI model's outputs diverge from expected standards over time.

What auditors look for: Auditors commonly look for the same change management evidence for AI-driven changes that they expect for traditional infrastructure: a documented request, approval, validation evidence, and a rollback plan. If an AI system or pipeline updates production without a ticket, test artifacts, or a documented rollback procedure, it can directly conflict with change management expectations.

Recommended guidance:
  • Set up change management steps, keep continuous monitoring running, and feed real-time anomaly scores into your SIEM system
  • Keep immutable audit logs and follow clear retention rules
  • Add the ability to roll back to a known good state and to degrade gracefully when problems appear

Data leakage from training sets

Data leakage from training sets poses a significant risk to confidentiality as they may contain sensitive information.

What auditors look for: If logs are produced in proprietary formats, lack documentation, or omit key metadata, auditors may have reduced confidence in completeness and trustworthiness.

Recommended guidance:
  • Encrypt data at rest and in transit
  • Detect and mask Personally Identifiable Information (PII) or Protected Health Information (PHI)
  • Apply content filters to both inputs and outputs
  • Validate data schemas
  • Track data lineage for provenance
  • Segment networks
  • Use API rate limiting together with mutual TLS for secure communication
  • Keep immutable logs for auditing to ensure confidentiality is upheld throughout the data lifecycle

Bias and unfair outcomes

Bias and unfair outcomes undermine fairness and can lead to regulatory scrutiny.

What auditors look for: When auditors see outputs and actions without context or intent, they may conclude the control did not operate as intended, even if the system functioned as designed. Bias controls also need “why” evidence — clear rationale and traceable decision evidence.

Recommended guidance:
  • Perform regular audits
  • Document model decisions
  • Apply input validation to guard against prompt injection and adversarial inputs
  • Enforce output validation and policy-aligned response checks.
  • Implement data governance policies with scope restrictions, and use automated evidence collection with lineage tracking for auditable fairness assessments. These practices provide systematic review evidence and support the integrity and privacy principles of SOC 2.

Best practices for auditing AI models under SOC 2 frameworks

Organizations can follow these best practices to align their AI development and operations with SOC 2 requirements, demonstrate a strong security posture, and build trust with customers and regulators. A practical goal is to reduce “audit scramble” by producing consistent, reusable evidence so teams are not asked for the same artifact repeatedly during the audit window.

Governance and policy

  • Draft an AI security policy that references the SOC 2 Trust Services Criteria.
  • Appoint an AI security officer and a data steward to own the policy, monitor compliance, and report to senior management.
  • Explicitly define what AI “in scope” means for SOC 2, as this affects what evidence you must retain.

Secure Development Lifecycle (SDLC)

  • Incorporate threat modeling early in the design of AI components to identify potential attack vectors.
  • Perform code reviews and static analysis on model-serving scripts to catch insecure coding patterns before deployment.
  • Treat AI-driven configuration and deployment scripts like production code. Auditors can struggle when AI-generated configuration is deployed directly from a repository without a clear record connecting changes to a responsible developer and an approved workflow.

Access management

  • Require multi-factor authentication (MFA) for all tools that manage AI platforms.
  • Assign least-privilege roles for activities like model training, deployment, and inference, ensuring users receive only the permissions needed for their tasks.
  • Continuously inventory transient identities created by automation (such as auto-scaling pods, serverless functions, or pipeline jobs).

Monitoring and incident response

  • Log every AI agent action, including inputs, outputs, and trigger events, in a tamper-evident repository.
  • Configure alerts for anomalous behavior, such as unexpected API calls or spikes in inference latency.
  • Develop an incident response playbook that outlines steps for containment, investigation, and remediation of AI-related breaches.
  • Create runbooks that let investigators trace a production incident back to the AI decision source.

Data protection

  • Encrypt training datasets, model weights, and inference logs both at rest and in transit using industry-standard algorithms.
  • Where feasible, apply tokenization or differential privacy techniques to reduce the risk of exposing sensitive information during model training or inference.
  • Ensure short-lived resources are fully instrumented. Auto-scaling can create gaps if new instances do not emit logs consistently, which weakens evidence that controls operated continuously.

Auditing & continuous improvement

  • Schedule periodic SOC 2 readiness assessments that focus specifically on AI controls and document the results.
  • Record findings, remediate identified gaps, and revise policies and technical controls to reflect lessons learned.
  • Adopt a blended control approach that maps AI-specific risks into your existing SOC 2 control objectives. Many organizations layer AI risk management practices (such as risk assessment, lifecycle governance, continuous monitoring, and red teaming) on top of SOC 2 to strengthen evidence of oversight for autonomous behavior.

Conclusion

Aligning agentic AI security with SOC 2 standards is essential for protecting data integrity, confidentiality, and availability in modern enterprises.

Conducting a thorough gap analysis and engaging both security and AI teams early allows organizations to identify weaknesses, implement appropriate controls, and demonstrate compliance to stakeholders. This coordinated approach not only mitigates risk but also builds trust in AI-driven systems, paving the way for responsible innovation — all while reducing friction between auditors and practitioners.

Teleport features for AI and SOC 2 compliance

Teleport supports AI and SOC 2 compliance by enforcing strong identity and access controls across all human, machine, workload, and AI identities. These identities are secured cryptographically and automatically generate audit-ready evidence.

Role-based access control (RBAC), per-session multi-factor authentication (MFA), short-lived certificates, and just-in-time access help organizations meet critical access and least-privilege requirements (CC6). Built-in session recording and identity-traceable, tamper-evident audit logs strengthen monitoring and investigation controls (CC7).

By unifying identity and secure access to infrastructure and AI systems, Teleport enables continuous alignment with SOC 2 security, confidentiality, and availability criteria while reducing audit overhead.

Learn More →

About the author

Kayne McGladrey, CISSP Former Defense Industrial Base CISO, senior IEEE member, and cybersecurity strategist with extensive experience in governance, risk, and compliance programs. Kayne advises organizations on aligning emerging technologies, including AI systems, with established security frameworks such as SOC 2, ISO 27001, and NIST.

Streamlining SOC 2 Compliance
2026 Research: The Top AI Infrastructure Risks and Identity Gaps
Principle of Least Privilege: What It Is & How to Implement It

Frequently Asked Questions

Does SOC 2 apply to AI agents?

Yes, SOC 2 applies to AI agents. SOC 2 compliance requires documented, enforceable controls across the AI data lifecycle.

Access to training datasets, model artifacts, and inference systems must follow least-privilege principles, with defined roles, periodic access reviews, immutable logging of inputs and outputs, and continuous monitoring for anomalous behavior. Sensitive and personal data must be encrypted in transit and at rest, validated for schema and provenance, governed by retention and deletion policies, and processed in alignment with documented lawful purposes.

Does SOC 2 CC6.1 / CC6.2 apply to AI and AI agents?

SOC 2 CC6.1 requires logical access control procedures that limit system entry to authorized individuals, and CC6.2 calls for regular review of those permissions.

In AI environments, these controls cover model repositories, training data sets, and inference APIs, so only approved users can modify algorithms or retrieve outputs. Because agentic AI systems act autonomously, frequent access reviews, least-privilege principles, and continuous monitoring helps prevent unauthorized changes by AI agents and keeps AI operations aligned with SOC 2's logical access objectives.

How does SOC 2 CC6.3 apply to AI agents and how do you maintain compliance?

To meet SOC 2 CC6.3 requirements, organizations should configure AI agents to operate with the least-privilege principle by granting only the permissions required for each specific task.

AI agent access controls must be reviewed regularly, and any elevated rights should be time-bound and logged. Role-based access assignments simplify enforcement, while automated monitoring detects privilege creep and triggers revocation. When an agent interacts with sensitive data, encryption keys and APIs must be restricted to read-only or write-only scopes as appropriate. Documentation of these procedures supports audit readiness and demonstrates compliance with SOC 2 CC6.3.

How does SOC 2 CC8.1 "Authorized Changes" apply to AI agents and how do you maintain compliance?

Organizations that deploy AI agents must establish documented procedures for authorized changes to satisfy SOC 2 CC8.1.

First, they should maintain a change request log that records the initiator, purpose, risk assessment, and approval signatures before a modification that allows an AI agent to make changes. Second, they need automated version control that tracks code revisions and restricts deployment to environments that have passed security testing. Third, they must conduct periodic audits to verify that only approved changes reach production and that rollback mechanisms are in place.

How does SOC 2 CC7.2 / CC7.3 "Monitoring & Investigation" apply to AI agents and how do you maintain compliance?

SOC 2 CC7.2 / CC7.3 require continuous monitoring procedures that collect, aggregate, and analyze security event data from all critical systems, including AI systems.

Organizations should maintain a centralized logging repository with immutable storage for at least one year and ensure logs are time-synchronized using a trusted source. Automated alerting should correlate events against defined risk thresholds and create incident tickets promptly. An incident response team should have documented investigation workflows, clear escalation paths and access to forensic tools to preserve evidence. Regular reviews of alerts and investigations should be performed to refine detection rules and demonstrate ongoing compliance with SOC 2 criteria 7.2 and 7.3.

background

Subscribe to our newsletter

PAM / Teleport