This section explains the recommended configuration settings for large-scale self-hosted deployments of Teleport.
Teleport Team takes care of this setup for you so you can provide secure access to your infrastructure right away.
Get started with a free trial of Teleport Team.
- Teleport v14.2.0 Open Source or Enterprise.
Set up Teleport with a High Availability configuration.
| Scenario | Max Recommended Count | Proxy | Auth Server | AWS Instance Types |
|----------|-----------------------|-------|-------------|--------------------|
| Teleport SSH Nodes connected to Auth Service | 10,000 | 2x 4 vCPUs, 8GB RAM | 2x 8 vCPUs, 16GB RAM | m4.2xlarge |
| Teleport SSH Nodes connected to Auth Service | 50,000 | 2x 4 vCPUs, 16GB RAM | 2x 8 vCPUs, 16GB RAM | m4.2xlarge |
| Teleport SSH Nodes connected to Proxy Service through reverse tunnels | 10,000 | 2x 4 vCPUs, 8GB RAM | 2x 8 vCPUs, 16+GB RAM | m4.2xlarge |
Upgrade Teleport's connection limits from the defaults to support a larger number of concurrent connections and users:

```yaml
# Teleport Auth and Proxy
teleport:
  connection_limits:
    max_connections: 65000
    max_users: 1000
```
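To see why a hard connection cap matters at this scale, the sketch below shows the behavior such a limit enforces: admit up to the configured maximum and refuse the rest rather than queue them. This is an illustrative Python model, not Teleport's implementation; the `ConnectionLimiter` class and its names are hypothetical.

```python
import threading

class ConnectionLimiter:
    """Illustrative model of a max_connections limit (not Teleport's code):
    admit at most `limit` concurrent connections, refuse the overflow."""

    def __init__(self, limit: int):
        self._sem = threading.Semaphore(limit)

    def try_acquire(self) -> bool:
        # Non-blocking acquire: at capacity, a new connection is refused
        # immediately instead of waiting for a slot.
        return self._sem.acquire(blocking=False)

    def release(self) -> None:
        # Closing a connection frees a slot for a new one.
        self._sem.release()

limiter = ConnectionLimiter(limit=2)
assert limiter.try_acquire()
assert limiter.try_acquire()
assert not limiter.try_acquire()  # over the limit: refused
limiter.release()
assert limiter.try_acquire()      # capacity freed, admitted again
```

Refusing rather than queuing keeps an overloaded server's memory bounded, which is the same reason to size `max_connections` above your expected peak rather than at it.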
Agents cache roles and other configuration locally in order to make access-control decisions quickly.
By default, agents are fairly aggressive in trying to re-initialize their caches if they lose connectivity
to the Auth Service. In very large clusters, this can contribute to a "thundering herd" effect,
where control plane elements experience excess load immediately after a restart. Setting the
`max_backoff` parameter to something in the 8-16 minute range can help mitigate this effect:
```yaml
teleport:
  cache:
    enabled: yes
    max_backoff: 12m
```
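The mitigation works because capped, jittered backoff spreads agent reconnects out over time instead of letting them arrive in lockstep. The sketch below illustrates the idea with full jitter; it is not Teleport's actual retry code, and the function name and parameters are hypothetical.

```python
import random

def backoff_schedule(max_backoff_s: float, base_s: float = 1.0, attempts: int = 10):
    """Illustrative jittered exponential backoff capped at max_backoff_s
    (in the spirit of the cache re-init backoff above; not Teleport's code)."""
    delays = []
    for attempt in range(attempts):
        # Exponential growth, capped so no retry waits longer than the max.
        capped = min(base_s * (2 ** attempt), max_backoff_s)
        # Full jitter: pick uniformly in [0, cap] so agents desynchronize.
        delays.append(random.uniform(0, capped))
    return delays

# With max_backoff of 12 minutes (720s), no delay ever exceeds the cap.
schedule = backoff_schedule(max_backoff_s=720)
assert all(0 <= d <= 720 for d in schedule)
```

A larger `max_backoff` trades slower individual recovery for a flatter aggregate load curve on the Auth Service after an outage.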
Tweak Teleport's systemd unit parameters to allow a higher number of open files:
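One common way to do this is a systemd drop-in; the sketch below assumes the service unit is named `teleport.service` and uses a conventional drop-in path (both are assumptions, adjust to your installation):

```ini
# /etc/systemd/system/teleport.service.d/override.conf
[Service]
LimitNOFILE=65536
```

After adding the drop-in, run `systemctl daemon-reload` and restart the service for the new limit to take effect.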
Verify that Teleport's process has high enough file limits:
```code
$ cat /proc/$(pidof teleport)/limits
Limit                     Soft Limit           Hard Limit           Units
Max open files            65536                65536                files
```
When using Teleport with DynamoDB, we recommend using on-demand provisioning. This allows DynamoDB to scale with cluster load.
For customers that cannot use on-demand provisioning, we recommend at least 250 WCU and 100 RCU for clusters of 10,000 nodes.
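If you need to move an existing table to on-demand capacity, one way is via the AWS CLI; the table name `teleport-backend` below is a placeholder for your backend table:

```shell
aws dynamodb update-table \
  --table-name teleport-backend \
  --billing-mode PAY_PER_REQUEST
```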
When using Teleport with etcd, we recommend you do the following.
- For performance, use the fastest SSDs available and ensure low-latency network connectivity between etcd peers. See the etcd Hardware recommendations guide for more details.
- For debugging, ingest etcd's Prometheus metrics and visualize them over time using a dashboard. See the etcd Metrics guide for more details.
During an incident, we may ask you to run `etcdctl`. Test that you can run the
following command successfully:
```code
$ etcdctl \
  --write-out=table \
  --cacert=/path/to/ca.cert \
  --cert=/path/to/cert \
  --key=/path/to/key.pem \
  --endpoints=127.0.0.1:2379 \
  endpoint status
```