
This guide explains the components of your Teleport deployment that must be backed up and lays out our recommended approach for performing backups.
## What you should back up

### Teleport services
Teleport's Proxy Service and Nodes are stateless. For these components, only `teleport.yaml` should be backed up.
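As a minimal sketch, backing up a stateless component can be as simple as copying its static configuration off the host. Here `/etc/teleport.yaml` is the conventional default path and the backup destination is hypothetical; adjust both to your deployment:

```sh
# Copy the static config of a stateless component off-host.
# /etc/teleport.yaml is the conventional default path; adjust to your layout.
scp root@node.example.com:/etc/teleport.yaml backups/node.example.com/teleport.yaml
```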
The Auth Service is Teleport's brain and, depending on the backend, should be backed up regularly.
For example, a Teleport cluster running on AWS with DynamoDB must back up the following data:
| What | Where (Example AWS Customer) |
|---|---|
| Local Users (not SSO) | DynamoDB |
| Certificate Authorities | DynamoDB |
| Trusted Clusters | DynamoDB |
| Connectors (SSO) | DynamoDB / File System |
| RBAC | DynamoDB / File System |
| `teleport.yaml` | File System |
| `teleport.service` | File System |
| `license.pem` | File System |
| TLS key/certificate | File System / AWS Certificate Manager |
| Audit log | DynamoDB |
| Session recordings | S3 |
For this customer, we would recommend following AWS best practices for backing up DynamoDB. Note that if DynamoDB is used for Teleport audit logs, logged events have a TTL of one year.
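For instance, DynamoDB point-in-time recovery (PITR) can be enabled from the AWS CLI. This is a sketch only; `teleport-backend` is a placeholder for the table name configured in your Auth Service's storage section:

```sh
# Enable continuous backups (PITR) on the Teleport backend table.
aws dynamodb update-continuous-backups \
  --table-name teleport-backend \
  --point-in-time-recovery-specification PointInTimeRecoveryEnabled=true

# Confirm PITR is active.
aws dynamodb describe-continuous-backups --table-name teleport-backend
```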
| Backend | Recommended backup strategy |
|---|---|
| Local Filesystem | Back up the `/var/lib/teleport/storage` directory and the output of `tctl get all --with-secrets`. |
| DynamoDB | Follow AWS's guidelines for backup and restore. |
| etcd | Follow etcd's guidelines for disaster recovery. |
| Firestore | Follow GCP's guidelines for automated backups. |
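For the local filesystem backend, a routine backup might look like the following sketch; the `/backups` destination is an assumption, and `/var/lib/teleport/storage` is the default path:

```sh
# Archive the Auth Service's on-disk state.
sudo tar -czf /backups/teleport-storage-$(date +%F).tar.gz /var/lib/teleport/storage

# Also export all dynamic resources, including secrets.
tctl get all --with-secrets > /backups/teleport-resources-$(date +%F).yaml
```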
### Teleport resources
Teleport uses YAML resources for roles, Trusted Clusters, local users, and authentication connectors. These could be created via `tctl` or the Web UI. You should back up your dynamic resource configurations to ensure that you can restore them in case of an outage.
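For example, resources can be exported per kind with `tctl get` and committed to version control; the file names below are arbitrary:

```sh
# Export each dynamic resource kind to its own file.
tctl get roles > roles.yaml
tctl get trusted_cluster > trusted-clusters.yaml
tctl get users > users.yaml
tctl get connectors > connectors.yaml
```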
## Our recommended backup practice

If you're running Teleport at scale, your teams need an automated way to restore Teleport. At a high level, this is our recommended approach:
- Persist and back up your backend.
- Share that backend among Auth Service instances.
- Store your dynamic resource configurations as discrete files in a git repository.
- Have your continuous integration system run `tctl create -f *.yaml` from the git repository (see the sketch below). The `-f` flag instructs `tctl create` not to return an error if a resource already exists, so this command can be run regularly.
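As a sketch of that CI step, assuming resource files live in a `resources/` directory of the repository:

```sh
# Re-apply every resource file; -f keeps the command idempotent
# when a resource already exists.
for resource in resources/*.yaml; do
  tctl create -f "$resource"
done
```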
## Migrating backends
As of version v4.1, you can quickly export a collection of resources from Teleport. This feature was designed to help customers migrate from local storage to etcd.
Using `tctl get all --with-secrets` will retrieve the following items:
- Users
- Certificate Authorities
- Trusted Clusters
- Connectors:
  - GitHub
  - SAML
  - OIDC
- Roles
When migrating backends, you should back up your Auth Service's `data_dir/storage` directly.
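A direct copy of that directory might look like this sketch, assuming the default `/var/lib/teleport` data directory and a systemd-managed service; stopping the Auth Service first keeps the copy consistent:

```sh
# Stop the Auth Service so on-disk state is not written mid-copy.
sudo systemctl stop teleport

# Copy the storage directory verbatim; adjust the path if your
# teleport.yaml sets a non-default data_dir.
sudo cp -a /var/lib/teleport/storage /backups/teleport-storage

sudo systemctl start teleport
```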
### Example of backing up and restoring a cluster
Log in to your cluster with `tsh` so you can use `tctl` from your local machine. You can also run `tctl` on your Auth Service host without running `tsh login` first.

```sh
tsh login --proxy=teleport.example.com --user=myuser

# Export dynamic configuration state from the old cluster
tctl get all --with-secrets > state.yaml

# Prepare a new uninitialized backend (make sure to port
# any non-default config values from the old config file)
mkdir fresh && cat > fresh.yaml << EOF
teleport:
  data_dir: fresh
EOF

# Bootstrap the fresh server (kill the old one first!)
sudo teleport start --config fresh.yaml --bootstrap state.yaml

# From another terminal, verify that state transferred correctly
tctl --config fresh.yaml get all
<your state here>
```
The `--bootstrap` flag has no effect except during the Auth Service's backend initialization on first startup, so it is safe to use in supervised/High Availability contexts.
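Because of this, the flag can be left in place for a supervised process; a sketch, with assumed paths:

```sh
# Safe to run repeatedly under a supervisor: --bootstrap only takes
# effect when the backend is first initialized.
teleport start --config /etc/teleport.yaml --bootstrap /etc/teleport/state.yaml
```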
### Limitations
The `--bootstrap` flag doesn't re-trigger Trusted Cluster handshakes, so Trusted Cluster resources need to be recreated manually.
All the same limitations around modifying the config file of an existing cluster also apply to a new cluster being bootstrapped from the state of an old cluster:
- Changing the cluster name will break your CAs. This will be caught and Teleport will refuse to start.
- Some user authentication mechanisms (e.g. WebAuthn) require that the public endpoint of the Web UI remains the same. This cannot be caught by Teleport, so be careful!
- Any Node whose invite token is defined in the Auth Service's configuration file will be able to join automatically, but Nodes that were added dynamically will need to be re-invited.