CMMC Requirements for AI Systems: What Assessors Actually Look For

About the author
Josh Rector is the Compliance Director, Public Sector at Ace of Cloud, a security and compliance consulting firm, certified CMMC Third-Party Assessor Organization (C3PAO), and Registered Provider Organization (RPO). With more than a decade of experience in cybersecurity compliance, he has worked both sides of the assessment table, leading internal and external assessments, serving as ISSO for systems at federal agencies, and guiding cloud service providers through the FedRAMP authorization process. Previously, he led governance and compliance programs at Anthology and other organizations, managing FedRAMP, CMMC, ISO 27k, SOC 2, and DoD SRG IL-4 audits across more than 50 cloud products in AWS and Azure.
What CMMC assessors actually look for and how AI systems complicate it
The Cybersecurity Maturity Model Certification (CMMC) is a Department of Defense (DoD) framework requiring contractors to protect sensitive unclassified information, specifically Federal Contract Information (FCI) and Controlled Unclassified Information (CUI), on their systems.
I’ve spent a lot of time on both sides of the CMMC table, including as an ISSO helping organizations get ready for assessments, and now supporting clients through the C3PAO process. One thing I see consistently: organizations that struggle in assessments don’t usually struggle because they lack controls. They struggle because they can’t demonstrate them. Evidence is everything.
That challenge is about to get harder. AI tools and agentic systems are being deployed into enterprise environments at a pace that most compliance programs haven’t caught up with. And while the CMMC framework is robust enough to cover these systems, the way AI operates creates evidence problems that I’m not sure the broader community has fully reckoned with yet.
This post is about both of those things: how assessors evaluate evidence for the domains that matter most, and how AI systems create new complications in each. If you’re preparing for an assessment or building out a compliance program that includes AI, this is meant to be practical.
How assessors evaluate CMMC evidence: The basics
Before getting into AI-specific challenges, it helps to understand the assessor’s job.
Under the CMMC Assessment Guide, assessors use a combination of examine, interview, and test methods to determine whether a practice is implemented. That sounds straightforward, but what it means in practice is that assessors are looking for corroborating evidence across multiple sources. A policy document alone doesn’t cut it. Neither does a verbal explanation from a system administrator. We want to see the thing working.
The three domains where evidence challenges show up most frequently are Access Control (AC), Audit and Accountability (AU), and Identification and Authentication (IA). These are also the domains most directly impacted when AI systems enter the picture.
| Domain | What Assessors Look For | Common Evidence Gap | Sufficient Evidence |
|---|---|---|---|
| AC | Role assignments, group memberships, screenshots of access controls, policy artifacts. | Roles defined in policy but no evidence of enforcement in the system; access reviews undocumented. | Export of IAM role assignments, dated access review records, system screenshots showing least-privilege enforcement. |
| AU | Log configuration screenshots, SIEM exports, sample log events, retention policy. | Logs exist but aren’t reviewed; no evidence of regular review or alerting; retention below policy. | SIEM dashboard exports, documented review schedule with sign-offs, alert rule configurations, 90-day log sample. |
| IA | MFA enrollment reports, account provisioning/deprovisioning records, password policy config. | MFA enabled in policy but not enforced for all users; service accounts without IA controls. | MFA enrollment report showing 100% coverage, screenshots of enforced Conditional Access policies, service account inventory with controls documented. |
A few things worth noting about the table above.
First, the “sufficient evidence” column isn’t a checklist; it’s a floor. Assessors are looking for evidence that demonstrates the control is actually operating as intended, not just configured.
Second, the most common gap across all three domains is the same thing: the policy says one thing and the system tells a different story. That delta is where findings live.
Where CMMC evidence challenges emerge
Access Control (AC)
The AC domain is conceptually straightforward: control who gets in and what they can do.
However, the evidence challenges are surprisingly common. The most persistent one I see is access reviews that exist in policy but not in practice. An organization will have a beautiful access review procedure documented in their System Security Plan (SSP), and then when an assessor asks for records of the last review, someone has to dig through email threads to find a spreadsheet that may or may not be current.
The other common gap is role creep that nobody documented. Users accumulate access over time, roles get assigned for one-off reasons and never revoked, and the access model that exists in the system doesn’t match what the policy describes. Assessors can spot this fast. We’ll pull a sample of user accounts and check whether their assigned roles are consistent with their documented job function. If we can’t make that connection, that’s a finding.
A note on privileged accounts: assessors pay particular attention to these. If you have privileged users without documented justification, without enhanced IA controls (separate privileged account, MFA, session logging), or without clear evidence of periodic review, expect follow-up questions.
Audit and Accountability (AU)
Audit logging is an area where organizations often have more coverage than they think, but also more gaps than they realize. The challenge isn’t usually that logs don’t exist. It’s that they’re disconnected.
What I mean is that you might have endpoint logs in one place, authentication logs in another, application logs in a third, and maybe a SIEM that’s ingesting some but not all of them. When an assessor asks to see evidence of a user’s activity across a session (e.g., login, data access, file operations, logout), you should be able to pull that as a coherent narrative. If you have to stitch it together from three different consoles and hope the timestamps align, that’s a gap.
The other AU challenge is log review. AU.2.042 requires review and analysis of audit records. Assessors want to see that someone is actually looking at the logs, not just that the logs exist.
Identification and Authentication (IA)
IA is where I see the most surprises during assessment prep.
Organizations typically have MFA deployed for their primary workforce and documented in their SSP. What they haven’t always addressed is the full scope of IA requirements: service accounts, shared credentials, system-to-system authentication, and privileged access pathways that bypass the standard identity provider.
Assessors will ask for an inventory of all accounts with access to Controlled Unclassified Information (CUI) systems. If your service accounts aren’t in that inventory, or they are but they lack documented IA controls, that’s an exposure. The same is true for any account that bypasses your primary identity provider, even for “legacy” reasons.
Where AI systems complicate all of this
Here’s the scenario I’m seeing more and more.
An organization has done solid compliance work. Their AC, AU, and IA coverage is real. And then they deploy an AI assistant or agentic workflow, either as a sanctioned IT decision or as a departmental tool that got adopted before anyone asked compliance questions.
Suddenly, several of those controls have gaps that didn’t exist before.
The identity problem
AI agents don’t have natural identities. When a user invokes an AI tool that retrieves documents from a SharePoint library or queries a contract management system, that action often runs under one of a few problematic identity patterns:
- The user’s own credentials (meaning the agent inherits all of the user’s access, not just what’s needed for the task)
- A shared service account (meaning you can’t attribute actions to specific sessions or users)
- No formal identity at all (meaning the system is acting inside your boundary without any enrollment in your access control model)
From an AC assessment standpoint, all three of those patterns are findings waiting to happen.
Assessors evaluating AC.1.001 and AC.1.002 want to see that access is limited to authorized users and processes. An AI agent that operates under borrowed or shared credentials doesn’t satisfy that requirement, regardless of how the tool was marketed.
The fix isn’t complicated in concept, but it requires deliberate action: assign each AI system or agent a distinct service identity, enroll it in your identity governance model, scope its permissions explicitly, and include it in your account inventory. This is the same thing you’d do for any other non-human system. The AI label doesn’t change the obligation.
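To make the obligation concrete, here is a minimal sketch of what "a distinct, scoped service identity" means in practice. All names (`AgentIdentity`, `svc-doc-summarizer`, the resource strings) are hypothetical illustrations, not a real product's API; the point is the deny-by-default check and the documented justification alongside the scope.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """One account-inventory entry for a non-human AI identity."""
    agent_id: str                 # distinct service identity, never a borrowed user login
    owner: str                    # accountable human or team
    allowed_resources: frozenset  # explicit, least-privilege scope
    justification: str            # documented reason the access exists

def authorize(agent: AgentIdentity, resource: str) -> bool:
    """Deny by default: the agent may touch only resources scoped to it."""
    return resource in agent.allowed_resources

summarizer = AgentIdentity(
    agent_id="svc-doc-summarizer",
    owner="platform-team",
    allowed_resources=frozenset({"sharepoint:/projects/specs"}),
    justification="Retrieves technical specs for the summarization workflow",
)

print(authorize(summarizer, "sharepoint:/projects/specs"))  # True: in scope
print(authorize(summarizer, "sharepoint:/hr/payroll"))      # False: out of scope
```

An inventory of records like this, kept current and reviewed on the same cycle as human accounts, is exactly the artifact an assessor will ask for.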
The audit trail problem
This is the one that keeps me up at night a little, honestly.
Imagine this: a user asks an AI assistant to find all the relevant technical specs for a project and draft a summary. The AI queries three document repositories, retrieves a dozen files, some of which contain CUI, and produces output. The user’s query is logged. The output might be stored.
But the intermediate data access events? In most implementations I’ve seen, those are invisible.
That’s a significant AU gap. The actual exposure (in this case, the CUI retrieval) isn’t captured in any log that an assessor or incident responder could review. And the problem gets worse with agentic systems that chain multiple tool calls together. A pipeline that touches six systems in sequence to complete a task may leave almost no coherent audit trail of what it accessed and why.
What assessors need to see for AU coverage of AI systems includes:
- Tool call logs that capture every retrieval, query, and API call the AI system makes to authoritative sources, not just the user-facing input/output.
- Session attribution linking agent actions back to the human user who initiated them, not just to a generic service account.
- Reasoning traces, where the orchestration framework supports them, to establish the sequence of decisions and actions; these are critical for incident reconstruction.
- Integration with your SIEM so AI-generated events appear alongside endpoint, identity, and network telemetry in the same correlation engine.
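The first two bullets above can be sketched in a few lines: wrap each agent tool so every invocation emits a structured, identity-attributed event that carries both the initiating user and the agent's own service identity. This is an illustrative pattern, not any specific framework's API; `AUDIT_LOG` stands in for a SIEM forwarding pipeline, and the field names are assumptions.

```python
import functools
import time
import uuid

AUDIT_LOG = []  # stand-in for a SIEM forwarding pipeline

def audited_tool(tool_name):
    """Wrap an agent tool so every invocation emits an identity-attributed event."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*, session_id, user, agent_id, **kwargs):
            event = {
                "event_id": str(uuid.uuid4()),
                "timestamp": time.time(),
                "session_id": session_id,  # links the call to the initiating session
                "user": user,              # the human who started the workflow
                "agent_id": agent_id,      # the agent's own service identity
                "tool": tool_name,
                "arguments": kwargs,
            }
            result = fn(**kwargs)
            event["status"] = "success"
            AUDIT_LOG.append(event)
            return result
        return wrapper
    return decorator

@audited_tool("search_documents")
def search_documents(query):
    # Placeholder retrieval; a real tool would hit a document repository.
    return [f"spec-matching-{query}"]

search_documents(session_id="s-123", user="jdoe",
                 agent_id="svc-doc-summarizer", query="thermal limits")
print(AUDIT_LOG[0]["user"], AUDIT_LOG[0]["tool"])  # jdoe search_documents
```

Note that the event captures the intermediate retrieval itself, not just the user-facing prompt and output, which is precisely the gap described above.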
The scope problem
One thing that’s easy to miss: AI systems that process CUI are in scope for your CMMC boundary.
It doesn’t matter whether the tool is cloud-hosted, vendor-managed, or “just an add-on.” If it touches CUI, the controls apply. And if it’s not documented in your SSP, assessors will ask about it.
I’ve seen organizations deploy AI tools under the assumption that because they’re SaaS products, they’re someone else’s responsibility. That’s not how CMMC works. You own the CUI. You’re responsible for where it goes and what touches it.
If your organization is using AI tools that have access to CUI, whether that’s a retrieval-augmented generation (RAG) assistant, a document summarization tool, or an agentic workflow, and those systems aren’t documented in your SSP with explicit data flow descriptions and control coverage, that is an assessment exposure. Get ahead of it.
What “sufficient evidence” looks like for AI systems
Assessors are reasonable people doing a difficult job. They’re not trying to find ways to fail you; they’re trying to determine whether your controls are real. The question they’re implicitly asking for every practice is: “if something went wrong here, would I be able to tell?” For AI systems, the answer needs to be yes, and you need to be able to show it.
Evidence for Access Control (AC)
Sufficient AC evidence looks like this: an inventory of all AI systems and agents operating in your environment, including their service identities, the resources they’re authorized to access, and the scope of that access. Role assignments should be explicit and documented, not inherited from a user account. Access reviews should include AI service accounts on the same cycle as human accounts, with dated records showing who reviewed what and when.
If your AI system supports attribute-based or resource-based access controls, show them. If it runs under a service principal in your identity provider, show the permission scope. The goal is to demonstrate that the agent is operating within a defined authorization envelope, not just doing whatever it can get away with.
Evidence for Audit and Accountability (AU)
This is where the architectural choices you make matter most.
Sufficient evidence for AU coverage of an AI system means identity-based audit events (i.e., every action is attributed to a specific identity, not a shared account). It means unified access logs that capture the full session: user initiation, agent actions, data retrieved, outputs produced. And it means those logs are in your SIEM, not sitting in an application-specific log file that nobody reviews.
Some orchestration platforms produce structured activity logs that include tool calls, retrieved document references, and decision traces. If your platform supports this, use it and retain it. That level of detail is genuinely useful to an assessor trying to understand what happened during a given session.
Run a tabletop exercise: pick a specific AI-assisted workflow, trace what evidence exists for a single session from end to end, and ask whether an assessor could reconstruct exactly what happened from the logs alone. If the answer is no, you’ve identified your gap.
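The tabletop exercise amounts to a simple query: filter your normalized event stream to one session and order it in time. This sketch assumes events are plain dicts with `session_id`, `timestamp`, `actor`, and `action` fields, a simplified stand-in for SIEM records; the sample data is invented.

```python
def reconstruct_session(events, session_id):
    """Filter a mixed event stream to one session and order it chronologically."""
    session = [e for e in events if e["session_id"] == session_id]
    session.sort(key=lambda e: e["timestamp"])
    return [f'{e["actor"]} -> {e["action"]}' for e in session]

events = [
    {"session_id": "s-123", "timestamp": 3, "actor": "svc-agent", "action": "retrieve specs.pdf (CUI)"},
    {"session_id": "s-999", "timestamp": 1, "actor": "other",     "action": "login"},
    {"session_id": "s-123", "timestamp": 1, "actor": "jdoe",      "action": "login"},
    {"session_id": "s-123", "timestamp": 2, "actor": "jdoe",      "action": "prompt: summarize specs"},
]

for step in reconstruct_session(events, "s-123"):
    print(step)
# Any step you know happened but can't see in this timeline is your evidence gap.
```

If the agent's retrieval of the CUI document never appears in the timeline because it was logged under a shared account or not at all, the exercise has done its job.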
Evidence for Identification and Authentication (IA)
Every AI system and agent that accesses CUI needs to be in your account inventory with documented IA controls.
That means no shared credentials, no user impersonation, no anonymous retrieval. Each agent should authenticate using a managed identity or service principal, with MFA or certificate-based auth where the system supports it. For privileged operations (i.e., anything involving write access, administrative functions, or elevated data access), session-level logging is non-negotiable.
The documentation that satisfies an assessor here is an account inventory that includes AI service accounts, a record of what IA controls are applied to each, and evidence that those controls are actually enforced in the system. Screenshots of service principal configurations, Conditional Access policies, and certificate enrollment records are all useful here.
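A quick way to keep that inventory honest is to check it against your required control set programmatically. This is a hypothetical sketch; the control names and account records are invented, and a real inventory would live in your GRC tooling or identity provider, not a Python list.

```python
# Hypothetical policy: IA controls every AI service account must document.
REQUIRED_IA_CONTROLS = {"managed_identity", "cert_or_mfa_auth", "session_logging"}

inventory = [
    {"account": "svc-doc-summarizer",
     "controls": {"managed_identity", "cert_or_mfa_auth", "session_logging"}},
    {"account": "svc-legacy-bot",
     "controls": {"shared_password"}},  # exactly the pattern assessors flag
]

def ia_gaps(inventory):
    """Return each account's missing IA controls; an empty dict means compliant."""
    return {a["account"]: REQUIRED_IA_CONTROLS - a["controls"]
            for a in inventory
            if not REQUIRED_IA_CONTROLS <= a["controls"]}

print(ia_gaps(inventory))
```

Running a check like this before the assessor does turns the most common IA finding, the forgotten legacy service account, into a remediation ticket instead of an assessment result.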
Closing thoughts
CMMC compliance is hard enough without the added complexity of AI systems that weren’t designed with evidence collection in mind. But this is solvable. The organizations I’ve seen handle it well are the ones that stop treating AI tools as special cases and start treating them as what they are: systems that operate inside a compliance boundary and need to be governed accordingly.
That means service identities, scoped permissions, real audit trails, and SSP documentation. It means including AI systems in your access review cycles and your incident response planning. And it means asking your vendors the hard questions about what logs their systems produce, where those logs go, and whether they can be integrated into your existing monitoring infrastructure.
If you’re heading into an assessment in the next 12 months and you have AI tools in your environment, now is the time to close these gaps. Assessors are going to start asking about this more deliberately, and “we hadn’t thought about it” is not a finding you want in your assessment report.
How Teleport helps organizations meet CMMC requirements
For organizations concerned with the AC, AU, and IA gaps that surface when AI enters a CMMC boundary, Teleport can simplify CMMC compliance and reduce the audit burden by unifying humans, machines, workloads, and AI systems with a single identity, access, and audit layer.
| CMMC Domain | Highlighted Teleport Features |
|---|---|
| Access Control (AC) | Agents receive their own cryptographic identity and authenticate with short-lived certificates, eliminating shared or assumed credentials. Label-based access policies scope each agent to only the resources it needs. Time-bound access requests grant privileges on demand and automatically revoke them when the window expires. Recurring access reviews cover AI service accounts on the same cycle as human accounts, with dated sign-off records. MCP tool-level controls filter which tools an agent can call based on its assigned role. |
| Audit and Accountability (AU) | Every action is logged as an identity-attributed audit event tied to a specific agent or user, not a shared account. Full session recording and replay captures SSH, database, Kubernetes, and desktop sessions end to end. MCP protocol logging records every tool call an agent makes through an MCP server as a discrete audit event. A built-in event export pipeline can forward events to Splunk, Datadog, Elastic, or Panther so agent activity appears alongside existing telemetry. AI-generated session summaries can quickly surface what happened in a session. |
| Identification and Authentication (IA) | Agents authenticate using platform-native proof (AWS IAM roles, GCP service accounts, Kubernetes tokens). No static API keys or passwords. Short-lived certificates with configurable lifetimes replace long-lived credentials and can auto-renew without human intervention. A server-side agent instance registry tracks every authentication event and certificate generation, giving assessors a verifiable account inventory. Device trust enforcement ensures access originates only from enrolled devices with hardware-backed attestation. MCP connections authenticate with signed tokens carrying the caller's identity, roles, and traits, replacing anonymous or shared access paths. |
Learn more about how Teleport’s unified identity layer simplifies compliance, increases infrastructure resilience, and accelerates audits.
Related articles:
→ How to Apply NIST 800-53 to AI Systems
→ Streamlining NIST 800-171 Compliance
→ How AI Agents Impact SOC 2 Trust Services Criteria
→ FedRAMP. AI. Player 3 Has Entered the Game.