Backup and Restore

This guide explains the components of your Teleport deployment that must be backed up and lays out our recommended approach for performing backups.

What you should back up

Teleport services

Teleport's Proxy Service and Nodes are stateless. For these components, only teleport.yaml should be backed up.

The Auth Service is Teleport's brain. Its cluster state lives in the configured storage backend, which should be backed up regularly.

For example, a Teleport cluster running on AWS with DynamoDB must back up the following data:

What                      Where (example AWS customer)
Local users (not SSO)     DynamoDB
Certificate Authorities   DynamoDB
Trusted Clusters          DynamoDB
SSO connectors            DynamoDB / file system
RBAC                      DynamoDB / file system
teleport.yaml             File system
teleport.service          File system
license.pem               File system
TLS key/certificate       File system / AWS Certificate Manager
Audit log                 DynamoDB
Session recordings        S3

For this customer, we would recommend using AWS best practices for backing up DynamoDB. If DynamoDB is used for Teleport audit logs, logged events have a TTL of 1 year.
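As a sketch of what "AWS best practices" can look like in practice, the AWS CLI supports on-demand table backups. The table name below is an assumption; use the table configured in your Auth Service's storage section.

```shell
# Sketch: take an on-demand backup of the DynamoDB backend table.
# TABLE_NAME is hypothetical; substitute the table from your storage config.
TABLE_NAME="${TABLE_NAME:-teleport-backend}"
BACKUP_NAME="teleport-backend-$(date +%Y%m%d)"

if command -v aws >/dev/null 2>&1; then
    aws dynamodb create-backup \
        --table-name "$TABLE_NAME" \
        --backup-name "$BACKUP_NAME"
else
    echo "aws CLI not installed; skipping backup of $TABLE_NAME"
fi
```

Scheduled backups via AWS Backup or point-in-time recovery are also options; the right choice depends on your recovery-point objectives.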

Backend             Recommended backup strategy
Local filesystem    Back up the /var/lib/teleport/storage directory and the output of tctl get all --with-secrets.
DynamoDB            Follow AWS's guidelines for backup and restore.
etcd                Follow etcd's guidelines for disaster recovery.
Firestore           Follow GCP's guidelines for automated backups.
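For the local-filesystem backend, the two items in the table above can be captured by one small job. This is a hedged sketch, not an official tool: DATA_DIR and BACKUP_DIR are assumptions to adjust for your deployment, and the tctl export requires credentials for the Auth Service.

```shell
# Sketch of a backup job for the local-filesystem backend.
# DATA_DIR and BACKUP_DIR are assumptions; point them at your own paths.
DATA_DIR="${DATA_DIR:-/var/lib/teleport}"
BACKUP_DIR="${BACKUP_DIR:-$(mktemp -d)}"
STAMP="$(date +%Y%m%d-%H%M%S)"

# Archive the backend storage directory if it exists on this host.
if [ -d "$DATA_DIR/storage" ]; then
    tar -czf "$BACKUP_DIR/storage-$STAMP.tar.gz" -C "$DATA_DIR" storage
fi

# Export dynamic resources alongside the archive. Requires tctl with
# Auth Service credentials, so skip when it is unavailable.
if command -v tctl >/dev/null 2>&1; then
    tctl get all --with-secrets > "$BACKUP_DIR/resources-$STAMP.yaml"
fi
```

Run it from cron or a systemd timer, and ship the resulting archives off-host.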

Teleport Cloud manages all Auth Service and Proxy Service backups.

While Teleport Nodes are stateless, you should ensure that you can restore their configuration files.

Teleport resources

Teleport uses YAML resources for roles, Trusted Clusters, local users, and authentication connectors. These can be created via tctl or the Web UI.

You should back up your dynamic resource configurations to ensure that you can restore them in case of an outage.
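One way to keep these definitions restorable is to export each resource kind to its own file and commit the files to version control. The kind names below match tctl's resource collections; the output directory is an assumption.

```shell
# Export each dynamic resource kind to a discrete file suitable for git.
# OUT_DIR is hypothetical; choose a path inside your configuration repo.
OUT_DIR="${OUT_DIR:-teleport-resources}"
mkdir -p "$OUT_DIR"

# Requires tctl with Auth Service credentials; skip when unavailable.
if command -v tctl >/dev/null 2>&1; then
    for kind in roles users github saml oidc trusted_cluster; do
        tctl get "$kind" > "$OUT_DIR/$kind.yaml"
    done
fi
```

Note that without --with-secrets these exports omit secret material, which is usually what you want in a git repository.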

If you're running Teleport at scale, your teams need to have an automated way to restore Teleport. At a high level, this is our recommended approach:

  • Persist and back up your backend.
  • Share that backend among Auth Service instances.
  • Store your dynamic resource configurations as discrete files in a git repository.
  • Have your continuous integration system run tctl create -f *.yaml from the git repository. The -f flag instructs tctl create not to return an error if a resource already exists, so this command can be run regularly.
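The continuous-integration step above can be sketched as a small loop over the checked-out files. The repository layout is an assumption; the key property is that tctl create -f tolerates resources that already exist, so the step is safe to run on every pipeline execution.

```shell
# Re-apply every resource definition from the repository checkout.
# The teleport-resources/ directory is a hypothetical repo layout.
applied=0
if command -v tctl >/dev/null 2>&1; then
    for f in teleport-resources/*.yaml; do
        [ -e "$f" ] || continue   # no matches: the glob stays literal
        tctl create -f "$f"
        applied=$((applied + 1))
    done
fi
echo "applied $applied resource files"
```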

Migrating backends

As of version 4.1, you can quickly export a collection of resources from Teleport. This feature was designed to help customers migrate from local storage to etcd.

Running tctl get all --with-secrets retrieves the following items:

  • Users
  • Certificate Authorities
  • Trusted Clusters
  • Connectors:
    • GitHub
    • SAML
    • OIDC
  • Roles

When migrating backends, you should back up your Auth Service's data_dir/storage directly.

Example of backing up and restoring a cluster

# Export dynamic configuration state from the old cluster
tctl get all --with-secrets > state.yaml

# Prepare a new uninitialized backend (make sure to port
# any non-default config values from the old config file)
mkdir fresh && cat > fresh.yaml << EOF
teleport:
  data_dir: fresh
EOF

# Bootstrap the fresh server (kill the old one first!)
sudo teleport start --config fresh.yaml --bootstrap state.yaml

# From another terminal, verify that state transferred correctly
tctl --config fresh.yaml get all

# <your state here>

The --bootstrap flag has no effect except during backend initialization on the Auth Service's first startup, so it is safe for use in supervised and High Availability contexts.

Limitations

The --bootstrap flag doesn't re-trigger Trusted Cluster handshakes, so Trusted Cluster resources need to be recreated manually.
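Recreating a Trusted Cluster means re-applying its resource definition with tctl create -f, which triggers a fresh handshake with the root cluster. A hedged sketch of such a resource follows; every name, address, and the token are hypothetical placeholders for your own values.

```yaml
# Hypothetical Trusted Cluster resource; all values are placeholders.
kind: trusted_cluster
version: v2
metadata:
  name: leaf.example.com
spec:
  enabled: true
  token: join-token-from-root
  tunnel_addr: leaf.example.com:3024
  web_proxy_addr: leaf.example.com:3080
  role_map:
    - remote: access
      local: [access]
```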

All the same limitations around modifying the config file of an existing cluster also apply to a new cluster being bootstrapped from the state of an old cluster:

  • Changing the cluster name will break your CAs. This will be caught and Teleport will refuse to start.
  • Some user authentication mechanisms (e.g. WebAuthn and U2F) require that the public endpoint of the Web UI remains the same. This cannot be caught by Teleport, so be careful!
  • Any Node whose invite token is defined in the Auth Service's configuration file will be able to join automatically, but Nodes that were added dynamically will need to be re-invited.


In Teleport Cloud, backend data is managed for you automatically. If you would like to migrate configuration resources to a self-hosted Teleport cluster, follow our recommended backup practice of storing configuration resources in a git repository and running tctl create -f regularly for each resource. This will enable you to keep your configuration resources up to date regardless of storage backend.