
Storage backends


A Teleport cluster stores different types of data in different locations. By default everything is stored in a local directory on the Auth server. Integration with other storage types is implemented based on the nature of the stored data (size, read/write ratio, mutability, etc.).

The data types and their supported storage backends are:

  • Core cluster state: cluster configuration (e.g. users, roles, auth connectors) and identity (e.g. certificate authorities, registered nodes, trusted clusters). Supported backends: local directory (SQLite), etcd, AWS DynamoDB, GCP Firestore.
  • Audit events: JSON-encoded events from the audit log (e.g. user logins, RBAC changes). Supported backends: local directory, AWS DynamoDB, GCP Firestore.
  • Session recordings: raw terminal recordings of interactive user sessions. Supported backends: local directory, AWS S3 (and any S3-compatible product), GCP Cloud Storage.
  • Teleport instance state: ID and credentials of a non-auth teleport instance (e.g. node, proxy). Supported backends: local directory.
Tip

Before continuing, please make sure to take a look at the Cluster State section in the Teleport Architecture documentation.

There are two ways to achieve High Availability. You can "outsource" this function to the infrastructure, for example by using highly available network-based disk volumes (similar to AWS EBS) and migrating a failed VM to a new host. In this scenario, there's nothing Teleport-specific to be done.

If High Availability cannot be provided by the infrastructure (perhaps you're running Teleport on a bare metal cluster), you can still configure Teleport to run in a highly available fashion.

Auth Server State

To run multiple instances of the Teleport Auth Server, you must switch to a highly available secrets back-end first. You must also tell each node in the cluster that there is more than one auth server available. There are two ways to do this:

  • Use a load balancer to create a single auth API access point (AP) and specify this AP in the auth_servers section of the Teleport configuration for all nodes in the cluster, as shown in the sketch below. This load balancer should do TCP-level forwarding.
  • If a load balancer is not an option, you must list each instance of an auth server in the auth_servers section of the Teleport configuration.
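
As a minimal sketch (the addresses below are hypothetical), a node's or proxy's teleport.yaml points auth_servers either at the load balancer or at each auth server directly:

teleport:
  auth_servers:
    # Single TCP load balancer in front of the auth servers...
    - auth-lb.example.com:3025
    # ...or, without a load balancer, list every auth server explicitly:
    # - auth1.example.com:3025
    # - auth2.example.com:3025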

IMPORTANT: with multiple instances of the auth servers running, special attention needs to be paid to keeping their configuration identical. Settings like cluster_name, tokens, storage, etc. must be the same, as sketched below.
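
As a rough sketch (values are placeholders), these shared settings live in the following places in teleport.yaml and must be identical on every auth server:

teleport:
  storage:
    # Identical storage back-end settings on every auth server.
    type: etcd
auth_service:
  # Identical cluster name and static join tokens on every auth server.
  cluster_name: example.com
  tokens:
    - "proxy,node:example-join-token"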

Proxy State

The Teleport Proxy is stateless, which makes running multiple instances trivial. If using the default configuration, configure your load balancer to forward ports 3023 and 3080 to the servers that run the Teleport proxy. If you have configured your proxy to use non-default ports, you will need to configure your load balancer to forward the ports you specified for listen_addr and web_listen_addr in teleport.yaml, as sketched below. The load balancer for web_listen_addr can terminate TLS with your own certificate that is valid for your users, while the remaining ports should do TCP-level forwarding, since Teleport will handle its own TLS on top of that with its own certificates.
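
As a minimal sketch (with hypothetical bind addresses), these are the relevant settings in the proxy_service section of teleport.yaml; point the load balancer at whatever values you choose here:

proxy_service:
  enabled: yes
  # SSH proxy port that clients and nodes connect to (3023 by default).
  listen_addr: 0.0.0.0:3023
  # Web UI / HTTPS port (3080 by default). TLS can be terminated here by
  # Teleport itself or at the load balancer in front of it.
  web_listen_addr: 0.0.0.0:3080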

NOTE

If you terminate TLS with your own certificate at a load balancer, you'll need to run Teleport with --insecure-no-tls.

If your load balancer supports HTTP health checks, configure it to hit the /readyz diagnostics endpoint on machines running Teleport. This endpoint must be enabled by passing the --diag-addr flag to teleport start, e.g. teleport start --diag-addr=127.0.0.1:3000. The http://127.0.0.1:3000/readyz endpoint will reply with {"status":"ok"} if the Teleport service is running without problems.

NOTE

As new auth servers are added to the cluster and old servers are decommissioned, nodes and proxies will refresh the list of available auth servers and store it in their local cache, /var/lib/teleport/authservers.json. The values from the cache file take precedence over the configuration file.

Below, we cover how to use the etcd, DynamoDB, and Firestore storage back-ends to make Teleport highly available.

Etcd

Teleport can use etcd as a storage backend to achieve highly available deployments. You must take steps to protect access to etcd in this configuration because that is where Teleport secrets like keys and user records will be stored.

IMPORTANT

etcd can currently only be used to store Teleport's internal database in a highly available way. This will allow you to run multiple auth servers in your cluster for a High Availability deployment, but it will not store Teleport audit events in the same way that DynamoDB or Firestore will. etcd is not designed to handle large volumes of time series data like audit events.

To configure Teleport to use etcd as a storage backend:

  • Make sure you are using etcd version 3.3 or newer.
  • Install etcd and configure peer and client TLS authentication using the etcd security guide.
  • Configure all Teleport Auth servers to use etcd in the "storage" section of the config file as shown below.
  • Deploy several auth servers connected to etcd backend.
  • Deploy several proxy nodes that have auth_servers pointed to the list of auth servers to connect to.
teleport:
  storage:
     type: etcd

     # List of etcd peers to connect to:
     peers: ["https://172.17.0.1:4001", "https://172.17.0.2:4001"]

     # Required path to TLS client certificate and key files to connect to etcd.
     #
     # To create these, follow
     # https://coreos.com/os/docs/latest/generate-self-signed-certificates.html
     # or use the etcd-provided script
     # https://github.com/etcd-io/etcd/tree/master/hack/tls-setup.
     tls_cert_file: /var/lib/teleport/etcd-cert.pem
     tls_key_file: /var/lib/teleport/etcd-key.pem

     # Optional file with trusted CA authority
     # file to authenticate etcd nodes
     #
     # If you used the script above to generate the client TLS certificate,
     # this CA certificate should be one of the other generated files
     tls_ca_file: /var/lib/teleport/etcd-ca.pem

     # Alternative password-based authentication, if not using TLS client
     # certificate.
     #
     # See https://etcd.io/docs/v3.4.0/op-guide/authentication/ for setting
     # up a new user.
     username: username
     password_file: /mnt/secrets/etcd-pass

     # etcd key (prefix) under which teleport will store its state.
     # Make sure it ends with a '/'!
     prefix: /teleport/

     # NOT RECOMMENDED: enables insecure etcd mode in which self-signed
     # certificate will be accepted
     insecure: false

     # Optionally sets the limit on the client message size.
     # This is usually used to increase the default which is 2MiB
     # (1.5MiB server's default + gRPC overhead bytes).
     # Make sure this does not exceed the value for the etcd
     # server specified with `--max-request-bytes` (1.5MiB by default).
     # Keep the two values in sync.
     #
     # See https://etcd.io/docs/v3.4.0/dev-guide/limit/ for details
     #
     # This bumps the size to 15MiB as an example:
     etcd_max_client_msg_size_bytes: 15728640

S3

Tip

Before continuing, please make sure to take a look at the Cluster State section in the Teleport Architecture documentation.

AWS Authentication

The configuration examples below contain AWS access keys and secret keys. They are optional; they exist for your convenience, but we DO NOT RECOMMEND using them in production. If Teleport is running on an AWS instance, it will automatically use the instance IAM role. Teleport will also pick up AWS credentials from the ~/.aws folder, just like the AWS CLI tool.

S3 buckets can only be used as storage for the recorded sessions. S3 cannot store the audit log or the cluster state. Below is an example of how to configure a Teleport auth server to store the recorded sessions in an S3 bucket.

teleport:
  storage:
      # The region setting sets the default AWS region for all AWS services
      # Teleport may consume (DynamoDB, S3)
      region: us-east-1

      # Path to S3 bucket to store the recorded sessions in.
      audit_sessions_uri: "s3://Example_TELEPORT_S3_BUCKET/records"

      # If no credentials are provided, Teleport uses the default AWS credential
      # provider chain: an assumed IAM role or the standard ~/.aws/credentials file.

The AWS authentication settings above can be omitted if the machine itself is running on an EC2 instance with an IAM role.

These optional GET parameters control how Teleport interacts with an S3 endpoint, including S3-compatible endpoints.

s3://bucket/path?region=us-east-1&endpoint=mys3.example.com&insecure=false&disablesse=false

  • region=us-east-1 - set the Amazon region to use.
  • endpoint=mys3.example.com - connect to a custom S3 endpoint.
  • insecure=true - set to true or false. If true, TLS will be disabled.
  • disablesse=true - set to true or false. If true, S3 server-side encryption will be disabled. If false, aws:kms (Key Management Service) will be used for server-side encryption. Other SSE types are not supported at this time.
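
For example, to store recorded sessions on a hypothetical S3-compatible service (the bucket name and endpoint are placeholders), append the parameters to audit_sessions_uri:

teleport:
  storage:
    # region and endpoint point Teleport at the S3-compatible service;
    # disablesse=true turns off aws:kms server-side encryption, which
    # non-AWS back-ends cannot provide.
    audit_sessions_uri: "s3://example-bucket/records?region=us-east-1&endpoint=minio.example.com&disablesse=true"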

DynamoDB

Tip

Before continuing, please make sure to take a look at the Cluster State section in the Teleport Architecture documentation.

If you are running Teleport on AWS, you can use DynamoDB as a storage back-end to achieve High Availability. The DynamoDB backend supports two types of Teleport data:

  • Cluster state
  • Audit log events

DynamoDB cannot store the recorded sessions. You are advised to use AWS S3 for that as shown above. To configure Teleport to use DynamoDB:

  • Make sure you have an AWS access key and a secret key that give you access to the DynamoDB account. If you're using (as recommended) an IAM role for this, the policy with the necessary permissions is listed below.
  • Configure all Teleport Auth servers to use DynamoDB back-end in the "storage" section of teleport.yaml as shown below.
  • Deploy several auth servers connected to DynamoDB storage back-end.
  • Deploy several proxy nodes.
  • Make sure that all Teleport nodes have the auth_servers configuration setting populated with the auth servers.
teleport:
  storage:
    type: dynamodb
    # AWS region of the DynamoDB table, see https://docs.aws.amazon.com/en_pv/general/latest/gr/rande.html#ddb_region
    region: us-east-1

    # Name of the DynamoDB table. If it does not exist, Teleport will create it.
    table_name: Example_TELEPORT_DYNAMO_TABLE_NAME

    # This setting configures Teleport to send the audit events to three places:
    # To keep a copy in DynamoDB, a copy on a local filesystem, and also output the events to stdout.
    # NOTE: The DynamoDB events table has a different schema to the regular Teleport
    # database table, so attempting to use the same table for both will result in errors.
    # When using highly available storage like DynamoDB, you should make sure that the list always specifies
    # the High Availability storage method first, as this is what the Teleport web UI uses as its source of events to display.
    audit_events_uri:  ['dynamodb://events_table_name', 'file:///var/lib/teleport/audit/events', 'stdout://']

    # This setting configures Teleport to save the recorded sessions in an S3 bucket:
    audit_sessions_uri: s3://Example_TELEPORT_S3_BUCKET/records
  • Replace us-east-1 and Example_TELEPORT_DYNAMO_TABLE_NAME with your own settings. Teleport will create the table automatically.
  • Example_TELEPORT_DYNAMO_TABLE_NAME and events_table_name must be different DynamoDB tables. The schema is different for each. Using the same table name for both will result in errors.
  • The AWS authentication setting above can be omitted if the machine itself is running on an EC2 instance with an IAM role.
  • Audit log settings above are optional. If specified, Teleport will store the audit log in DynamoDB and the session recordings must be stored in an S3 bucket, i.e. both audit_xxx settings must be present. If they are not set, Teleport will default to a local file system for the audit log, i.e. /var/lib/teleport/log on an auth server.
  • If DynamoDB is used for the audit log, the logged events will be stored with a TTL of 1 year. Currently, this TTL is not configurable.
Access to DynamoDB

Make sure that the IAM role assigned to Teleport is configured with sufficient access to DynamoDB. Below is an example of an IAM policy you can use:

{
    "Version": "2012-10-17",
    "Statement": [{
            "Sid": "AllAPIActionsOnTeleportAuth",
            "Effect": "Allow",
            "Action": "dynamodb:*",
            "Resource": "arn:aws:dynamodb:eu-west-1:123456789012:table/prod.teleport.auth"
        },
        {
            "Sid": "AllAPIActionsOnTeleportStreams",
            "Effect": "Allow",
            "Action": "dynamodb:*",
            "Resource": "arn:aws:dynamodb:eu-west-1:123456789012:table/prod.teleport.auth/stream/*"
        }
    ]
}

DynamoDB autoscaling

When setting up DynamoDB, it's important to set up backups and autoscaling. We make setup simpler by allowing AWS DynamoDB settings to be configured automatically during Teleport startup. See the AWS documentation on DynamoDB Continuous Backups and DynamoDB Autoscaling Options for background.

# ...
teleport:
  storage:
    type: "dynamodb"
    [...]

    # continuous_backups is used to optionally enable continuous backups.
    # default: false
    continuous_backups: [true|false]

    # auto_scaling is used to optionally enable (and define settings for) auto scaling.
    # default: false
    auto_scaling:  [true|false]
    # Minimum/maximum read capacity in units
    read_min_capacity: int
    read_max_capacity: int
    read_target_value: float
    # Minimum/maximum write capacity in units
    write_min_capacity: int
    write_max_capacity: int
    write_target_value: float

To enable these options, you will need to update the IAM role used by Teleport with the following additional statements:

{
    "Action": [
        "application-autoscaling:PutScalingPolicy",
        "application-autoscaling:RegisterScalableTarget"
    ],
    "Effect": "Allow",
    "Resource": "*"
},
{
    "Action": [
        "iam:CreateServiceLinkedRole"
    ],
    "Condition": {
        "StringEquals": {
            "iam:AWSServiceName": [
                "dynamodb.application-autoscaling.amazonaws.com"
            ]
        }
    },
    "Effect": "Allow",
    "Resource": "*"
}

GCS

Tip

Before continuing, please make sure to take a look at the Cluster State section in the Teleport Architecture documentation.

Google Cloud Storage (GCS) can only be used as storage for the recorded sessions. GCS cannot store the audit log or the cluster state. Below is an example of how to configure a Teleport auth server to store the recorded sessions in a GCS bucket.

teleport:
  storage:
      # Path to the GCS bucket to store the recorded sessions in.
      audit_sessions_uri: "gs://Example_TELEPORT_STORAGE/records"
      credentials_path: /var/lib/teleport/gcs_creds

Firestore

Tip

Before continuing, please make sure to take a look at the Cluster State section in the Teleport Architecture documentation.

If you are running Teleport on GCP, you can use Firestore as a storage back-end to achieve High Availability. The Firestore backend supports two types of Teleport data:

  • Cluster state
  • Audit log events

Firestore cannot store the recorded sessions. You are advised to use Google Cloud Storage (GCS) for that as shown above. To configure Teleport to use Firestore:

  • Configure all Teleport Auth servers to use Firestore back-end in the "storage" section of teleport.yaml as shown below.
  • Deploy several auth servers connected to Firestore storage back-end.
  • Deploy several proxy nodes.
  • Make sure that all Teleport nodes have the auth_servers configuration setting populated with the auth servers, or use a load balancer for the auth servers in High Availability mode.
teleport:
  storage:
    type: firestore
    # Project ID https://support.google.com/googleapi/answer/7014113?hl=en
    project_id: Example_GCP_Project_Name

    # Name of the Firestore table. If it does not exist, Teleport won't start
    collection_name: Example_TELEPORT_FIRESTORE_TABLE_NAME

    credentials_path: /var/lib/teleport/gcs_creds

    # This setting configures Teleport to send the audit events to three places:
    # To keep a copy in Firestore, a copy on a local filesystem, and also write the events to stdout.
    # NOTE: The Firestore events table has a different schema to the regular Teleport
    # database table, so attempting to use the same table for both will result in errors.
    # When using highly available storage like Firestore, you should make sure that the list always specifies
    # the High Availability storage method first, as this is what the Teleport web UI uses as its source of events to display.
    audit_events_uri:  ['firestore://Example_TELEPORT_FIRESTORE_EVENTS_TABLE_NAME', 'file:///var/lib/teleport/audit/events', 'stdout://']

    # This setting configures Teleport to save the recorded sessions in GCP storage:
    audit_sessions_uri: gs://Example_TELEPORT_S3_BUCKET/records
  • Replace Example_GCP_Project_Name and Example_TELEPORT_FIRESTORE_TABLE_NAME with your own settings. Teleport will create the table automatically.
  • Example_TELEPORT_FIRESTORE_TABLE_NAME and Example_TELEPORT_FIRESTORE_EVENTS_TABLE_NAME must be different Firestore tables. The schema is different for each. Using the same table name for both will result in errors.
  • The GCP authentication setting above can be omitted if the machine itself is running on a GCE instance with a Service Account that has access to the Firestore table.
  • Audit log settings above are optional. If specified, Teleport will store the audit log in Firestore and the session recordings must be stored in a GCS bucket, i.e. both audit_xxx settings must be present. If they are not set, Teleport will default to a local file system for the audit log, i.e. /var/lib/teleport/log on an auth server.