

Multi-Site Data Center Audit and Compliance Best Practices

by Mayur Pipaliya May 11, 2026


Most multi-site infrastructure teams manage access and audit logging site by site, using stacks that have been built up over time through different tools, different owners, and thousands of static credentials or standing admin privileges. This makes org-wide auditability nearly impossible to produce on demand, and adds complexity to regional compliance requirements.

Read this blog to discover:

  • What auditors look for across distributed infrastructure logs
  • Where common audit gaps emerge in multi-site data center environments
  • Best practices for consistent audit visibility and compliance across data center sites
  • Real-world implementation examples from leading data center operators

What auditors ask for (and where multi-site infrastructure creates gaps)

Auditors ask the same questions regardless of how many data centers or cloud environments you operate: who accessed what, when, and exactly what was done.

An auditor reviewing SOC 2 controls or a regional cybersecurity framework requirement will make specific demands: show me that a particular engineer, bot, or vendor accessed a particular server at a particular time, or show me exactly what they did during that SSH, Kubernetes, database, RDP, or web session.

But instead of a simple answer to these demands, most multi-site data center environments can only produce a patchwork of SSH logs, RDP session recordings, and Kubernetes audit logs, often with nothing at all for BMC access or third-party vendor sessions. This creates several audit gaps.

Individual identity attribution

Compliance frameworks require proving which specific identity performed each specific action. However, shared service accounts and static SSH keys make this impossible. For example, if a team of 50 engineers shares one SSH key to access a fleet of Linux servers, offboarding a departing employee requires rotating the key across every host, a process so taxing that the odds of it actually happening are slim to none.

But even if that SSH key is properly rotated, audit trails will only show that "the key" logged in, not the actual user identity. Many organizations have infrastructure fleets where static credentials sit on nearly every server, where local users are created on the fly, and where shared jump boxes may serve as the only chokepoint for traffic passing through. This renders identity attribution nearly impossible.

Command-level visibility (not just connection metadata)

Auditors do not just want to know that a session happened; they want to know what was executed during it. Standard session logging captures the terminal input and output visible on screen, but engineers routinely run scripts that obscure the actual operations being performed. For example, a script named deploy.sh might execute dozens of curl commands, file mutations, and network connections that never appear in a basic session log.

This creates a visibility gap between knowing "who logged in" and "what they did at the kernel level," which can often be the difference between a passing and failing audit.

Proof of least-privilege enforcement and access expiry

Auditors want evidence that access is granted on an as-needed basis and automatically revoked when the need expires. Any permanent admin access, standing root privileges, or long-lived keys or tokens will fail this test.

This means that data center teams must be ready to demonstrate that all access was:

  • Requested for a specific purpose
  • Approved by an authorized party
  • Granted for a limited duration
  • Automatically revoked when that duration expired

Unified visibility across sites, protocols, and vendors

Compliance frameworks scope to the organization, not the infrastructure site, meaning a visibility gap at one site becomes a gap everywhere. Even if your US-East data center has excellent audit logs, if there are audit gaps in your US-West facility, you will encounter compliance challenges.

Tool limitations can also fragment logs at the protocol level. For example, legacy PAM tools designed for Windows servers and Active Directory are less likely to accurately or comprehensively log resources outside of their scope, such as ephemeral Kubernetes clusters, bare-metal BMC web interfaces exposed as application endpoints, or ephemeral compute nodes that boot from RAM and have no persistent disk to store credentials on. Patching these visibility gaps requires deploying separate tools for each protocol and environment, which produces divergent log formats, separate storage backends, and data silos.


Best practices for data center audit readiness across sites

For multi-site compliance, audit trails must be comprehensive across every site and include every protocol, region, tenant, and vendor session. Ideally, these audit trails are unified in a single, queryable, and identity-based system — all of which are achievable by implementing the following best practices.

1. Replace keys and shared accounts with short-lived certificates tied to identity

The foundation of auditable infrastructure access is a single, authoritative identity that follows an engineer across every resource they touch, including SSH hosts, Windows desktops, Kubernetes clusters, databases, cloud consoles, and BMC interfaces. When every access event traces back to a specific person authenticated through your identity provider, auditors can answer "who did what, where, and when" without stitching together logs from disconnected systems.

When a user authenticates through Okta, Entra ID, Google Workspace, or another SAML/OIDC provider, Teleport issues a short-lived x.509 certificate that is scoped to exactly the resources their roles allow, automatically expires when the session ends, and attributes the session to a cryptographic identity in the audit log.

Real-world data center implementation example

Several cloud and data center operators use Teleport to:

  • Map their existing Okta SSO groups directly to infrastructure roles to preserve existing structure while eliminating static credentials.
  • Use OIDC to map identity provider group membership to determine access, which is enforced cryptographically rather than through static files on disk.
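As an illustration, this kind of group-to-role mapping can be expressed as a Teleport SSO connector and role pair. The sketch below is hypothetical: the connector name, group name, logins, and labels are placeholders, not a reference configuration.

```yaml
# Hypothetical SAML connector: maps the IdP group "dc-oncall"
# to the Teleport role "dc-access".
kind: saml
version: v2
metadata:
  name: okta
spec:
  display: Okta
  acs: https://teleport.example.com:443/v1/webapi/saml/acs/okta
  attributes_to_roles:
    - name: groups
      value: dc-oncall
      roles: [dc-access]
---
# Role scoping the issued certificate to specific hosts by label.
kind: role
version: v7
metadata:
  name: dc-access
spec:
  allow:
    logins: [ubuntu]
    node_labels:
      site: us-east
  options:
    max_session_ttl: 8h   # certificate (and access) expires with the shift
```

Because the certificate encodes the identity and role set at issuance, every subsequent session event in the audit log traces back to the SSO user rather than a shared key.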

2. Extend identity to machines using hardware-rooted trust

Hardware, machine, and AI identities are equally as important as human identities.

In a modern data center, servers, automation bots, CI/CD runners, and service workloads all need to prove who they are. When thousands of nodes are registering dynamically, traditional methods like static join tokens become both a security liability and an operational burden because there is no practical way to consistently manage, rotate, or audit static secrets across the fleet.

To ensure that no static secrets or join tokens live on hardware, physical servers should instead authenticate using their Trusted Platform Module (TPM), reading it on boot and registering dynamically. Teleport's TPM-based joining implements this, and can be used to register several thousand TPMs on a per-node basis. This ensures every machine is attributed to a cryptographic identity in the audit log using the same model that governs human identity and access.
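A sketch of what TPM-based joining looks like in configuration follows; the token and node fragments below are illustrative, the endorsement-key hash is a placeholder, and exact field names should be verified against your Teleport version's documentation.

```yaml
# Provision token that admits nodes via TPM attestation rather than
# a static shared secret. The ek_public_hash value is a placeholder.
kind: token
version: v2
metadata:
  name: tpm-join
spec:
  roles: [Node]
  join_method: tpm
  tpm:
    allow:
      - ek_public_hash: "d4b4..."   # placeholder hash of the TPM's endorsement key
---
# On the node, teleport.yaml references the token by name:
# teleport:
#   join_params:
#     method: tpm
#     token_name: tpm-join
```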

The same principle extends to AI workloads. As GPU cloud operators and hyperscalers deploy workloads that act autonomously on infrastructure, equipping agents with shared credentials or long-lived tokens introduces risk. Assigning agents unique, cryptographic identities ensures every action an agent takes is clearly attributed to a verified identity in the audit log, and with the same short-lived privileges and access controls applied to all other actors.

3. Reduce audit scope with certificate-based Windows access

When data centers run Windows infrastructure alongside Linux, shared service accounts, password rotation policies, and Active Directory credential management create more untracked credentials and more for auditors to verify manually.

Certificate-based smart card emulation eliminates static credentials from the access chain by generating a temporary PIN at session time, using it to sign and authenticate with Active Directory via certificate exchange, and then automatically discarding the PIN. This ensures that users never know or manage passwords, and that administrator accounts can be provisioned without any password at all.

This removes an entire category of credential management from audits. Teleport implements this using certificate-based smart card emulation, integrating with Active Directory for auto-discovery of domain-joined hosts and supporting a local-user mode for machines outside the domain.
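As a rough sketch, the Windows side of this lives in the `windows_desktop_service` section of `teleport.yaml`. The addresses, domain, and service account below are placeholders; treat this as illustrative rather than a reference configuration.

```yaml
# Illustrative teleport.yaml fragment for certificate-based Windows
# access. Addresses, domain, and account names are placeholders.
windows_desktop_service:
  enabled: yes
  ldap:
    addr: dc01.corp.example.com:636
    domain: corp.example.com
    username: 'CORP\svc-teleport'
  discovery:
    base_dn: '*'    # auto-discover domain-joined hosts via LDAP
```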

4. Capture kernel-level syscalls, not just terminal I/O

Standard session logging captures what appears on screen, but engineers routinely run scripts that obscure the actual operations. For audit purposes, session recordings should capture system calls directly from the kernel so that every command a script executes, including any child processes, is logged.

Consider the audit difference between two logs of the same engineer’s session:

  • Without kernel-level capture: User ran deploy.sh
  • With kernel-level capture: deploy.sh executed curl to an external endpoint, wrote configuration to /etc/app/config.yaml, and established a network connection to 10.0.3.42 on port 5432

The first log only tells an auditor that something happened; the second tells them exactly what. Teleport's enhanced session recording uses eBPF (extended Berkeley Packet Filter) to hook into the Linux kernel and capture system calls directly, including those from child processes.
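In Teleport's case, enabling this is a small `teleport.yaml` change on each SSH node, paired with a role option that selects which event classes to capture. The sketch below is illustrative; check the defaults against your Teleport version.

```yaml
# Enabling eBPF-based enhanced session recording in teleport.yaml.
ssh_service:
  enabled: yes
  enhanced_recording:
    enabled: true
    # cgroup v2 mount Teleport uses to attribute syscalls to sessions
    cgroup_path: /cgroup2
---
# Which event classes are captured is selected per role:
# kind: role
# spec:
#   options:
#     enhanced_recording: [command, network]
```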

5. Store session recordings as searchable, structured text

Session recordings should be text-based, not video. Structured text recordings are small (a working session typically compresses to single-digit megabytes), searchable, and machine-parseable, which then enables auditors to search recordings for specific commands, copy text out of a playback, or programmatically scan recordings for policy violations.

But audit data is only useful if it reaches the systems where security teams actually work. To ensure this, structured audit events should ship to whatever SIEM or log aggregation platform the organization already uses, formatted consistently enough for teams to build automated alerting rules, correlation queries, and compliance dashboards.

Teleport session recordings are stored as structured, text-based data that is compatible with S3-compatible backends. A Fluentd-based event forwarder can ship JSON-structured audit events to Rapid7 InsightIDR, Elastic/ELK, Splunk, Datadog, and other platforms. For managed infrastructure providers, Teleport audit logs can also be forwarded directly to customers to provide verifiable evidence of access controls.

Learn more about Teleport audit events.
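On the storage side, a sketch of the relevant `teleport.yaml` fragment looks like the following. Bucket names, table names, and the region are placeholders; the Fluentd-based forwarder mentioned above is configured separately as the `teleport-event-handler` plugin.

```yaml
# Illustrative auth_service storage config: structured audit events in
# DynamoDB (or another supported backend) plus a local copy, and
# session recordings in an S3-compatible bucket.
teleport:
  storage:
    audit_events_uri:
      - 'dynamodb://audit-events'
      - 'file:///var/lib/teleport/audit'   # optional local copy for forwarders
    audit_sessions_uri: 's3://teleport-recordings/records?region=us-east-1'
```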

6. Enforce zero standing privileges with just-in-time access

Your environment should contain no persistent access to production infrastructure. Instead, when work requires infrastructure access, your engineers should submit a just-in-time access request specifying the resources they need and the reason. Access should then be granted for a defined window (e.g., for an hour, the length of a shift, or a maintenance window) and expire automatically when the window closes. This approach ensures every production access event has a corresponding request and approval, and never creates standing privileges.

Approvals should route through the communication and ticketing systems teams already use, creating a bidirectional audit trail in both the access management system and the external tool. For managed infrastructure providers, delegated approval workflows can route requests for customer resources through a customer-facing interface, so customers explicitly approve before a provider's engineers touch their infrastructure.

Teleport implements this with fine-grained, task-based access controlled by short-lived certificates. The access window is set by role policy or scoped to the request itself, and can be tied to an on-call schedule in PagerDuty, a ticket in Jira, or a maintenance window defined at request time. Every access event has a corresponding request, an approval, a time-bound grant, and an automatic expiry, and every session is attributed to a cryptographic identity in the audit log.

Approvals route through the tools teams already use (such as Slack, PagerDuty, Jira, or Microsoft Teams). Routine role-based access can be granted automatically by policy, and sensitive operations can require dual authorization or a live session moderator.
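One way to express this in Teleport role terms is a requester role that holds no production access directly but may request an elevated role. The role names, threshold, and ticket reference below are hypothetical.

```yaml
# Hypothetical requester role: members may request the "db-admin"
# role; the grant requires one approval and expires automatically.
kind: role
version: v7
metadata:
  name: engineer
spec:
  allow:
    request:
      roles: [db-admin]
      thresholds:
        - approve: 1   # one approver required
          deny: 1
  options:
    # cap how long a granted request (and its certificate) can live
    max_session_ttl: 4h
---
# Engineers then request access with a reason that lands in the audit log:
#   tsh request create --roles=db-admin --reason="JIRA-4821: failover test"
```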

Real-world data center implementation example

A neocloud provider built a three-component system using Teleport, where a plugin monitors every access request to:

  1. Identify the customer associated with the requested resources based on labels
  2. Validate that the request targets only one customer's resources
  3. Forward the request to the customer's portal for approval

If the request includes resources from multiple customers, or resources not owned by the identified customer, it is automatically denied.

7. Unify audit data across every site without centralizing traffic

Multi-site operators need a single audit plane across dozens of data centers. A common solution is to route everything through a central proxy, but centralizing operational traffic introduces latency and is operationally untenable at scale.

A hub-and-spoke model provides a central control plane connected to per-site clusters via outbound-only reverse tunnels. In this model, engineers only need to authenticate once at the hub to access resources in any connected site. Audit events from all sites flow to the hub, but operational traffic — including Ansible playbooks, Kubernetes commands, or database queries — stays local. This enables multi-tenant operators to further segment audit trails by customer using labels and role-based access controls (RBAC) to pull per-customer trails from a single unified log without deploying separate infrastructure per tenant.

Teleport's trusted cluster architecture implements this hub-and-spoke pattern, and new data centers inherit the architecture automatically when their leaf cluster joins the root.
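As an illustration, joining a site to the hub amounts to registering a `trusted_cluster` resource on the leaf. The cluster names, token, and addresses below are placeholders.

```yaml
# Illustrative trusted_cluster resource applied on a leaf (site)
# cluster to join the root hub over an outbound-only reverse tunnel.
kind: trusted_cluster
version: v2
metadata:
  name: root-hub
spec:
  enabled: true
  token: join-token-placeholder     # placeholder join token issued by the root
  web_proxy_addr: hub.example.com:443
  tunnel_addr: hub.example.com:443  # leaf dials out; no inbound ports at the site
  role_map:
    - remote: dc-operator    # role on the root cluster...
      local: [dc-operator]   # ...mapped to a role on this leaf
```

The `role_map` is where per-site least privilege is enforced: a hub identity only receives the local roles the leaf explicitly maps.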


How Teleport aligns with data center compliance requirements

Data center operators around the world trust Teleport to simplify audits and satisfy requirements for multiple regional compliance frameworks.

| Framework | Key controls | How Teleport addresses them |
| --- | --- | --- |
| SOC 2 | Logical access controls (CC6.1), system monitoring (CC7.1), change management (CC8.1) | Short-lived certificates issued at runtime eliminate static credentials and standing privileges; role-based access controls enforce least-privileged access; eBPF session recording and SIEM integration for monitoring; JIT access with policy enforcement for change management |
| ISO 27001 | Access management (A.9), operations security (A.12), communications security (A.13) | RBAC tied to SSO identity; session recording across all resource types; encrypted reverse tunnels between clusters |
| FedRAMP | Access enforcement (AC-3), audit review (AU-6), session controls (SC-10) | FIPS-compliant build; unified audit log with SIEM export; session recording and live session controls |
| HIPAA | Access control (§164.312(a)), audit controls (§164.312(b)) | SSO-tied certificates for unique user identification; unified audit log for activity logging |
| Regional frameworks (e.g., Saudi cybersecurity, NIS2, DORA) | Privileged access management, identity governance, audit logging, incident response readiness, data residency | JIT access controls with policy enforcement; identity provider integration for consolidated identity governance; eBPF session recording with SIEM export; break-glass clusters for incident response; configurable in-jurisdiction storage for session recordings and audit logs |

Looking for compliance for a framework not listed above? Contact us to learn more.


The outcome: Compliance that scales with your data center

Every new data center, cloud region, or resource type used to mean another disconnected audit source, credential surface, and access path to govern.

Identity fragmentation compounds with scale, and so does the quarterly scramble to produce evidence auditors can actually use. With the right standards, every new data center inherits your audit posture instead of fragmenting it.

A unified identity layer turns compliance from a quarterly scramble into a continuous, auditable baseline. It eliminates static credentials through short-lived certificates, records sessions at the kernel level, enforces least-privilege through automated JIT access and policy enforcement, and consolidates audit data across every region and resource type.

Discover how Teleport unifies identity across data centers and cloud deployments.


Simplify compliance, sovereignty, and data residency

Learn how Teleport provides complete visibility and auditability across authentication, authorization, and audit data while supporting self-hosted, air-gapped environments and region-specific regulatory requirements.

Learn More →


About the author: Mayur Pipaliya (MP) is a Staff Architect at Teleport known for his expertise in trusted computing and identity-first infrastructure for hybrid cloud and AI. Across 19+ years, he has charted a trailblazing path, from being an accidental entrepreneur running a data center company in the pre-cloud era to founding Splunk's Global Forward Deployed Software Engineering (FDSE) team and leading platform engineering, DevRel/DX, CNCF OTel open source, and AI/ML marketplace functions that helped scale data products to $4B in revenue. MP advises enterprises on modernizing data centers and multi-cloud, multi-agent strategy, and volunteers with DEFCON, OWASP, null Security, Columbia's Justice Through Code, and Habitat for Humanity. When not firefighting, he loves trail running and biking in the Bay Area.


