
Scaling


This section explains the recommended configuration settings for large-scale self-hosted deployments of Teleport.

Tip

Teleport Enterprise Cloud takes care of this setup for you so you can provide secure access to your infrastructure right away.

Get started with a free trial of Teleport Enterprise Cloud.

Hardware recommendations

Set up Teleport with a High Availability configuration.

| Scenario | Max Recommended Count | Proxy Service | Auth Service | AWS Instance Types |
|---|---|---|---|---|
| Teleport SSH Nodes connected to Auth Service | 10,000 | 2x 4 vCPUs, 8GB RAM | 2x 8 vCPUs, 16GB RAM | m8i.2xlarge |
| Teleport SSH Nodes connected to Auth Service | 50,000 | 2x 4 vCPUs, 16GB RAM | 2x 8 vCPUs, 16GB RAM | m8i.2xlarge |
| Teleport SSH Nodes connected to Proxy Service through reverse tunnels | 10,000 | 2x 4 vCPUs, 8GB RAM | 2x 8 vCPUs, 16+GB RAM | m8i.2xlarge |

Auth Service and Proxy Service configuration

Raise Teleport's connection limit from the default of 15,000 to 65,000:

# Teleport Auth Service and Proxy Service
teleport:
  connection_limits:
    max_connections: 65000

Agent configuration

Agents cache roles and other configuration locally in order to make access-control decisions quickly. By default agents are fairly aggressive in trying to re-initialize their caches if they lose connectivity to the Auth Service. In very large clusters, this can contribute to a "thundering herd" effect, where control plane elements experience excess load immediately after restart. Setting the max_backoff parameter to something in the 8-16 minute range can help mitigate this effect:

teleport:
  cache:
    enabled: true
    max_backoff: 12m

Kernel parameters

Tweak Teleport's systemd unit parameters to allow a higher number of open files:

[Service]
LimitNOFILE=65536
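
One way to apply this limit is a systemd drop-in override, sketched below; the unit name teleport.service is an assumption and may differ in your installation:

```ini
# /etc/systemd/system/teleport.service.d/override.conf
# (create with: sudo systemctl edit teleport)
[Service]
LimitNOFILE=65536
```

Run sudo systemctl daemon-reload and restart Teleport afterwards so the new limit takes effect.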

Verify that Teleport's process has high enough file limits:

cat /proc/$(pidof teleport)/limits

Limit                     Soft Limit    Hard Limit    Units
Max open files            65536         65536         files

DynamoDB configuration

When using Teleport with DynamoDB, we recommend using on-demand provisioning. This allows DynamoDB to scale with cluster load.

For customers that cannot use on-demand provisioning, we recommend at least 250 WCUs and 100 RCUs for clusters of 10,000 nodes.
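
If you manage the backend table yourself, on-demand mode can be enabled with the AWS CLI; the table name below is a placeholder for your Teleport backend table:

```shell
# Switch an existing DynamoDB table to on-demand (pay-per-request) capacity.
# "teleport-backend" is a placeholder table name.
aws dynamodb update-table \
  --table-name teleport-backend \
  --billing-mode PAY_PER_REQUEST
```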

etcd

When using Teleport with etcd, we recommend you do the following.

  • For performance, use the fastest SSDs available and ensure low-latency network connectivity between etcd peers. See the etcd Hardware recommendations guide for more details.
  • For debugging, ingest etcd's Prometheus metrics and visualize them over time using a dashboard. See the etcd Metrics guide for more details.
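
As a sketch, a Prometheus scrape job for a three-node etcd cluster might look like the following; the hostnames are placeholders, and etcd serves Prometheus metrics at /metrics on its client port by default:

```yaml
scrape_configs:
  - job_name: etcd
    static_configs:
      # Placeholder hostnames; etcd exposes /metrics on the client port (2379)
      - targets: ['etcd-1:2379', 'etcd-2:2379', 'etcd-3:2379']
```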

During an incident, we may ask you to run etcdctl. Verify in advance that you can run the following command successfully:

etcdctl \
  --write-out=table \
  --cacert=/path/to/ca.cert \
  --cert=/path/to/cert \
  --key=/path/to/key.pem \
  --endpoints=127.0.0.1:2379 \
  endpoint status

Supported load

The tests below were performed against a Teleport Cloud tenant which runs on instances with 8 vCPUs and 32 GiB of memory and has default limits of 4 CPUs and 4 GiB of memory.

Concurrent Logins

| Resource Type | Login Command | Logins | Failure |
|---|---|---|---|
| SSH | tsh login | 2000 | Auth CPU Limits exceeded |
| Application | tsh app login | 2000 | Auth CPU Limits exceeded |
| Database | tsh db login | 2000 | Auth CPU Limits exceeded |
| Kubernetes | tsh kube login && tsh kube credentials | 2000 | Auth CPU Limits exceeded |

Sessions Per Second

| Resource Type | Sessions | Failure |
|---|---|---|
| SSH | 1000 | Auth CPU Limits exceeded |
| Application | 2500 | Proxy CPU Limits exceeded |
| Database | 40 | Proxy CPU Limits exceeded |
| Kubernetes | 50 | Proxy CPU Limits exceeded |

Teleport Windows Desktop Service resource utilization

Windows Desktop Service resource utilization can vary significantly based on workload, user behavior, and environment. For this reason, it is challenging to provide absolute CPU and RAM requirements. This worked example illustrates one potential approach to determining the resource limits for a given Windows Desktop Service instance.

There are four primary factors that influence resource utilization by the Windows Desktop Service:

  1. Number of concurrent sessions.
  2. Number of registered desktops.
  3. Screen update frequency per session.
  4. Whether session recording is enabled.
Note

The figures listed in this guide are illustrative only and were gathered using a low-activity workload (mostly static screens). Sessions with frequent screen updates, such as video playback, consume significantly more CPU and RAM per session. Always measure your specific workload in a representative environment before setting production limits.

Long-lived sessions

Session recording adds per-session RAM overhead. The tables below show RAM usage with and without it enabled.

With session recording enabled

| concurrent sessions | RAM usage (MiB) |
|---|---|
| 1 | 40 |
| 2 | 55 |
| 4 | 65 |
| 8 | 85 |
| 16 | 105 |
| 32 | 160 |

Without session recording enabled

| concurrent sessions | RAM usage (MiB) |
|---|---|
| 1 | 30 |
| 2 | 45 |
| 4 | 50 |
| 8 | 55 |
| 16 | 70 |
| 32 | 90 |

Registered desktops

A single Windows Desktop Service can serve multiple desktops via static configuration or dynamic discovery. Each registered desktop adds idle background overhead.

| registered desktops | idle CPU (millicores) | idle RAM (MiB) |
|---|---|---|
| 100 | 4 | 60 |
| 200 | 4 | 65 |
| 500 | 5 | 70 |
| 1000 | 5 | 75 |
| 5000 | 20 | 100 |
| 10000 | 40 | 150 |
| 50000 | 85 | 350 |
| 100000 | 100 | 600 |

Both CPU and RAM grow approximately linearly with the number of registered desktops. To serve more desktops, deploy multiple Windows Desktop Service instances.

Estimating resource requirements

To estimate the resource requirements for the Windows Desktop Service:

  1. Determine the maximum number of concurrent sessions.
  2. Determine the number of desktops served by each Windows Desktop Service instance.

There is no synthetic benchmark tool for Windows Desktop sessions. To measure resource usage under your expected workload, open representative sessions simultaneously through Teleport Connect or the Web UI and monitor the Windows Desktop Service process. Use the findings to set resource limits with an added margin (e.g., 20-50%) for safety.
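
As a worked example of the arithmetic, this sketch combines figures from the illustrative tables above: 16 concurrent sessions with recording enabled (~105 MiB), idle overhead for 1,000 registered desktops (~75 MiB), and a 50% safety margin. All three inputs are assumptions taken from this guide's tables, not guarantees:

```shell
# RAM estimate for one Windows Desktop Service instance,
# using figures from the illustrative tables above
sessions_ram=105   # MiB: 16 concurrent sessions, recording enabled
idle_ram=75        # MiB: idle overhead for 1000 registered desktops
margin_pct=50      # safety margin

ram_limit=$(( (sessions_ram + idle_ram) * (100 + margin_pct) / 100 ))
echo "RAM limit: ${ram_limit} MiB"   # RAM limit: 270 MiB
```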

Teleport SSH Service resource utilization

SSH Service resource utilization can vary significantly based on workload, user behavior, and environment. For this reason, it is challenging to provide absolute CPU and RAM requirements. This worked example illustrates one potential approach to determining the resource limits for a given SSH Service instance.

There are three primary factors that influence resource utilization by the SSH Service:

  1. User workload.
  2. Number of concurrent sessions.
  3. Number of new sessions per second.
Note

The figures listed in this guide are illustrative only and were gathered using a synthetic workload. Always measure your specific workload in a representative environment before setting production limits.

Long-lived sessions

| concurrent sessions | RAM usage (MiB) |
|---|---|
| 1 | 300 |
| 2 | 350 |
| 4 | 500 |
| 8 | 700 |
| 16 | 1200 |
| 32 | 2200 |
| 64 | 4250 |
| 128 | 8200 |

For a typical agent, RAM usage increases linearly with the number of concurrent sessions.

New session requests

| sessions per second | CPU peak (millicores) |
|---|---|
| 1 | 200 |
| 2 | 400 |
| 4 | 900 |
| 8 | 1800 |
| 16 | 3800 |
| 32 | 8500 |

The primary driver of CPU usage by the SSH Service is the burst usage when new sessions are established.

Estimating resource requirements

To estimate the resource requirements for the SSH Service:

  1. Determine the worst case resource requirements of a typical user workload.
  2. Determine the maximum number of concurrent sessions.
  3. Determine the maximum number of new sessions per second.
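
The inputs above can be combined into a rough sizing sketch. The figures here are assumptions drawn from this guide's illustrative tables (32 concurrent sessions, bursts of 8 new sessions per second) plus a 25% margin; the user workload itself (factor 1) still has to be measured separately:

```shell
# Rough limits for one SSH Service agent, using figures from
# the illustrative tables above
concurrent_ram=2200   # MiB: 32 concurrent sessions
burst_cpu=1800        # millicores: 8 new sessions per second
margin_pct=25         # safety margin

echo "RAM limit: $(( concurrent_ram * (100 + margin_pct) / 100 )) MiB"     # 2750 MiB
echo "CPU limit: $(( burst_cpu * (100 + margin_pct) / 100 )) millicores"   # 2250 millicores
```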

Using tsh bench, simulate session activity to measure resource usage under expected conditions. Use the findings to set resource limits with an added margin (e.g., 20-50%) for safety.

For example, to spawn 32 requests per second for 2 minutes against a specific agent:

tsh bench ssh --rate=32 --duration=2m user@node-agent -- ls

Similarly, to test 64 concurrent sessions against a single agent using a unique label:

tsh bench web sessions --max=64 --duration=2m user@UNIQUE=example ls

Teleport Kubernetes Service resource utilization

Kubernetes Service resource utilization can vary significantly based on workload, RBAC configuration, and cluster topology. For this reason, it is challenging to provide absolute CPU and RAM requirements. This worked example illustrates one potential approach to determining the resource limits for a given Kubernetes Service instance.

There are three primary factors that influence resource utilization by the Kubernetes Service:

  1. Number of API requests per second.
  2. Number of concurrent long-lived sessions (exec, port-forward).
  3. Number of registered Kubernetes clusters served by the agent.
Note

The figures listed in this guide are illustrative only and were gathered using a synthetic workload. Always measure your specific workload in a representative environment before setting production limits.

API request rate

API request rate is the primary driver of CPU usage. List operations through Teleport's RBAC filtering scale linearly with rate.

| requests per second | CPU peak (millicores) |
|---|---|
| 1 | 5 |
| 2 | 10 |
| 4 | 15 |
| 8 | 30 |
| 16 | 55 |
| 32 | 100 |
| 64 | 190 |
| 128 | 410 |
| 256 | 835 |

The number of users sending requests does not independently affect the agent; only the total request rate matters. The number of Kubernetes resources in the cluster and the number of RBAC rules in the user's role have minimal impact at typical request rates.

Concurrent long-lived sessions

Concurrent exec and port-forward sessions add modest RAM overhead per session.

| concurrent sessions | RAM usage (MiB) |
|---|---|
| 1 | 220 |
| 2 | 225 |
| 4 | 225 |
| 8 | 230 |
| 16 | 240 |
| 32 | 245 |
| 64 | 260 |
| 128 | 290 |
| 256 | 365 |

These figures are for idle sessions. Sessions actively transferring data (interactive shells, log streams) consume more memory per session.

Registered Kubernetes clusters

A single Kubernetes Service can serve multiple Kubernetes clusters via static kubeconfig_file configuration or dynamic discovery. Each registered cluster adds idle background overhead from heartbeats, schema refresh, and health checks.

| registered clusters | idle CPU (millicores) | idle RAM (MiB) |
|---|---|---|
| 1 | 100 | 135 |
| 10 | 240 | 150 |
| 50 | 630 | 170 |
| 100 | 1000 | 200 |

Both CPU and RAM grow approximately linearly with the number of registered clusters. To serve more clusters, deploy multiple Kubernetes Service instances.

Estimating resource requirements

To estimate the resource requirements for the Kubernetes Service:

  1. Determine the maximum number of API requests per second across all users.
  2. Determine the maximum number of concurrent long-lived sessions.
  3. Determine the number of Kubernetes clusters served by each Kubernetes Service instance.
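
These inputs can be combined into a rough upper-bound sketch; note that summing session RAM and idle RAM double-counts the process baseline somewhat. The figures (64 API requests per second, 64 concurrent sessions, 10 registered clusters, 25% margin) are assumptions taken from the illustrative tables above:

```shell
# Rough limits for one Kubernetes Service instance, using figures
# from the illustrative tables above
request_cpu=190   # millicores: 64 API requests per second
idle_cpu=240      # millicores: idle overhead for 10 registered clusters
session_ram=260   # MiB: 64 concurrent long-lived sessions
idle_ram=150      # MiB: idle overhead for 10 registered clusters
margin_pct=25     # safety margin

echo "CPU limit: $(( (request_cpu + idle_cpu) * (100 + margin_pct) / 100 )) millicores"   # 537 millicores
echo "RAM limit: $(( (session_ram + idle_ram) * (100 + margin_pct) / 100 )) MiB"          # 512 MiB
```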

Using tsh bench, simulate request activity to measure resource usage under expected conditions. Use the findings to set resource limits with an added margin (e.g., 20-50%) for safety.

For example, to send 32 list requests per second for 2 minutes against a Kubernetes cluster:

tsh bench kube ls eks-cluster --namespace default --rate=32 --duration=2m

To test 64 concurrent exec sessions against a single pod, run a parallel loop in a shell:

for i in $(seq 1 64); do
  kubectl exec -n default my-pod -- sleep 300 &
done
wait

The payload can be customized to represent a typical use case.

Teleport Database Service resource utilization

Database Service resource utilization can vary significantly based on workload, user behavior, and environment. For this reason, it is challenging to provide absolute CPU and RAM requirements. This worked example illustrates one potential approach to determining the resource limits for a given Database Service instance.

There are three primary factors that influence resource utilization by the Database Service:

  1. Number of concurrent sessions.
  2. Number of new sessions per second.
  3. Number of registered databases.
Note

The figures listed in this guide are illustrative only and were gathered using a synthetic workload. Always measure your specific workload in a representative environment before setting production limits.

Long-lived sessions

| concurrent sessions | RAM usage (MiB) |
|---|---|
| 1 | 40 |
| 2 | 50 |
| 4 | 60 |
| 8 | 80 |
| 16 | 120 |
| 32 | 200 |
| 64 | 400 |
| 128 | 800 |

For a typical agent, RAM usage increases linearly with the number of concurrent sessions.

New session requests

| sessions per second | CPU peak (millicores) |
|---|---|
| 1 | 100 |
| 2 | 250 |
| 4 | 500 |
| 8 | 950 |
| 16 | 1800 |
| 32 | 3850 |

The primary driver of CPU usage by the Database Service is the burst usage when new sessions are established.

Registered databases

A single Database Service can proxy multiple databases. Each registered database adds idle background overhead from heartbeats and health checks.

| registered databases | idle CPU (millicores) | idle RAM (MiB) |
|---|---|---|
| 100 | 10 | 70 |
| 200 | 12 | 80 |
| 500 | 20 | 130 |
| 1000 | 35 | 200 |
| 5000 | 150 | 450 |
| 10000 | 250 | 800 |
| 50000 | 1300 | 3300 |
| 100000 | 1950 | 7450 |

Both CPU and RAM grow approximately linearly with the number of registered databases. To serve more databases, deploy multiple Database Service instances.

Estimating resource requirements

To estimate the resource requirements for the Database Service:

  1. Determine the maximum number of concurrent sessions.
  2. Determine the maximum number of new sessions per second.
  3. Determine the number of databases served by each Database Service instance.
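
As with the other services, these inputs can be combined into a rough upper-bound sketch; the figures (32 concurrent sessions, 8 new sessions per second, 1,000 registered databases, 25% margin) are assumptions taken from the illustrative tables above:

```shell
# Rough limits for one Database Service instance, using figures
# from the illustrative tables above
concurrent_ram=200  # MiB: 32 concurrent sessions
idle_ram=200        # MiB: idle overhead for 1000 registered databases
burst_cpu=950       # millicores: 8 new sessions per second
idle_cpu=35         # millicores: idle overhead for 1000 registered databases
margin_pct=25       # safety margin

echo "CPU limit: $(( (burst_cpu + idle_cpu) * (100 + margin_pct) / 100 )) millicores"   # 1231 millicores
echo "RAM limit: $(( (concurrent_ram + idle_ram) * (100 + margin_pct) / 100 )) MiB"     # 500 MiB
```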

Using tsh bench, simulate session activity to measure resource usage under expected conditions. Use the findings to set resource limits with an added margin (e.g., 20-50%) for safety.

For example, spawn 32 new session requests per second for 2 minutes against a specific database:

tsh bench postgres --rate=32 --duration=2m --db-user=alice --db-name=mydb mydb-resource

To measure memory usage under concurrent sessions, use your existing database tooling or clients to open simultaneous connections and run representative queries while monitoring the Database Service process.
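
One low-tech way to monitor the process is to sample its memory and CPU while sessions are open; this sketch assumes a single teleport process on the host:

```shell
# Print RSS (KiB) and CPU% of the teleport process every 5 seconds
pid="$(pidof teleport)"
while kill -0 "$pid" 2>/dev/null; do
  ps -o rss=,pcpu= -p "$pid"
  sleep 5
done
```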