
A Teleport cluster stores different types of data in different locations. By default everything is stored in a local directory at the Auth server. Integration with other storage types is implemented based on the nature of the stored data (size, read/write ratio, mutability, etc.).
Data type | Description | Supported storage backends |
---|---|---|
core cluster state | Cluster configuration (e.g. users, roles, auth connectors) and identity (e.g. certificate authorities, registered nodes, trusted clusters). | Local directory (SQLite), etcd, AWS DynamoDB, GCP Firestore |
audit events | JSON-encoded events from the audit log (e.g. user logins, RBAC changes) | Local directory, AWS DynamoDB, GCP Firestore |
session recordings | Raw terminal recordings of interactive user sessions | Local directory, AWS S3 (and any S3-compatible product), GCP Cloud Storage |
teleport instance state | ID and credentials of a non-auth teleport instance (e.g. node, proxy) | Local directory |
Before continuing, please make sure to take a look at the Cluster State section in the Teleport Architecture documentation.
There are two ways to achieve High Availability. You can "outsource" this function to the infrastructure, for example by using highly available network-based disk volumes (similar to AWS EBS) and migrating a failed VM to a new host. In this scenario, there's nothing Teleport-specific to be done.
If High Availability cannot be provided by the infrastructure (perhaps you're running Teleport on a bare metal cluster), you can still configure Teleport to run in a highly available fashion.
Teleport Cloud takes care of this setup for you so you can provide secure access to your infrastructure right away.
Get started with a free trial of Teleport Cloud.
Auth Server State
To run multiple instances of the Teleport Auth Server, you must first switch to a High Availability secrets back-end. You must also use a load balancer to create a single access point (AP) for the auth API and specify this AP in the `auth_server` field of the Teleport configuration file for all Nodes in the cluster. This load balancer should do TCP-level forwarding.
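For example, each Node's configuration can point at the load balancer's address rather than at an individual auth server. A minimal sketch, assuming the placeholder address lb.example.com:3025 and the `auth_servers` list field (some Teleport versions use a singular `auth_server` field instead):

```yaml
# Node configuration pointing at a TCP load balancer that fronts
# several Auth Service instances (the address is a placeholder).
teleport:
  auth_servers:
    - lb.example.com:3025
```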
IMPORTANT: with multiple instances of the auth servers running, special attention needs to be paid to keeping their configuration identical. Settings like `cluster_name`, `tokens`, `storage`, etc. must be the same.
Proxy State
The Teleport Proxy is stateless which makes running multiple instances trivial.
If using the default configuration, configure your load balancer to forward ports `3023` and `3080` to the servers that run the Teleport proxy. If you have configured your proxy to use non-default ports, you will need to configure your load balancer to forward the ports you specified for `listen_addr` and `web_listen_addr` in `teleport.yaml`. The load balancer for `web_listen_addr` can terminate TLS with your own certificate that is valid for your users, while the remaining ports should do TCP-level forwarding, since Teleport will handle its own SSL on top of that with its own certificates.
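For reference, a proxy configuration listening on the default ports might look like the following sketch (the listen addresses are placeholders; adjust them for your environment):

```yaml
# Proxy Service listening on the default ports that the load balancer
# forwards to (3023 for SSH, 3080 for the web UI and API).
proxy_service:
  enabled: yes
  listen_addr: 0.0.0.0:3023
  web_listen_addr: 0.0.0.0:3080
```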
If you terminate TLS with your own certificate at a load balancer, you'll need to run Teleport with `--insecure-no-tls`.
If your load balancer supports HTTP health checks, configure it to hit the `/readyz` diagnostics endpoint on machines running Teleport. This endpoint must be enabled by using the `--diag-addr` flag to `teleport start`: `teleport start --diag-addr=127.0.0.1:3000`. The `http://127.0.0.1:3000/readyz` endpoint will reply `{"status":"ok"}` if the Teleport service is running without problems.
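Depending on your Teleport version, the diagnostics address may also be configurable in `teleport.yaml` rather than on the command line. A sketch, assuming a `diag_addr` field under the top-level `teleport` section:

```yaml
# Expose the diagnostics endpoints (including /readyz) for health checks.
# Assumes your version supports the diag_addr configuration field.
teleport:
  diag_addr: 127.0.0.1:3000
```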
As the new auth servers get added to the cluster and the old servers get decommissioned, nodes and proxies will refresh the list of available auth servers and store it in their local cache, `/var/lib/teleport/authservers.json`. The values from the cache file will take precedence over the configuration file.
We'll cover how to use `etcd`, DynamoDB, and Firestore storage back-ends to make Teleport highly available below.
Etcd
Teleport can use etcd as a storage backend to achieve highly available deployments. You must take steps to protect access to `etcd` in this configuration because that is where Teleport secrets like keys and user records will be stored.
`etcd` can currently only be used to store Teleport's internal database in a highly available way. This will allow you to have multiple auth servers in your cluster for a High Availability deployment, but it will not also store Teleport audit events for you in the same way that DynamoDB or Firestore will. `etcd` is not designed to handle large volumes of time-series data like audit events.
To configure Teleport for using etcd as a storage backend:
- Make sure you are using etcd version 3.3 or newer.
- Follow etcd's cluster hardware recommendations. In particular, leverage SSD or high-performance virtualized block device storage for best performance.
- Install etcd and configure peer and client TLS authentication using the etcd security guide.
- You can use this script provided by etcd if you don't already have a TLS setup.
- Configure all Teleport Auth servers to use etcd in the "storage" section of the config file as shown below.
- Deploy several auth servers connected to the etcd backend.
- Deploy several Proxy Service instances with their `auth_server` setting pointed at the auth servers.
teleport:
storage:
type: etcd
# List of etcd peers to connect to:
peers: ["https://172.17.0.1:4001", "https://172.17.0.2:4001"]
# Required path to TLS client certificate and key files to connect to etcd.
#
# To create these, follow
# https://coreos.com/os/docs/latest/generate-self-signed-certificates.html
# or use the etcd-provided script
# https://github.com/etcd-io/etcd/tree/master/hack/tls-setup.
tls_cert_file: /var/lib/teleport/etcd-cert.pem
tls_key_file: /var/lib/teleport/etcd-key.pem
# Optional file with trusted CA authority
# file to authenticate etcd nodes
#
# If you used the script above to generate the client TLS certificate,
# this CA certificate should be one of the other generated files
tls_ca_file: /var/lib/teleport/etcd-ca.pem
# Alternative password-based authentication, if not using TLS client
# certificate.
#
# See https://etcd.io/docs/v3.4.0/op-guide/authentication/ for setting
# up a new user.
username: username
password_file: /mnt/secrets/etcd-pass
# etcd key (location) where teleport will be storing its state under.
# make sure it ends with a '/'!
prefix: /teleport/
# NOT RECOMMENDED: enables insecure etcd mode in which self-signed
# certificate will be accepted
insecure: false
# Optionally sets the limit on the client message size.
# This is usually used to increase the default which is 2MiB
# (1.5MiB server's default + gRPC overhead bytes).
# Make sure this does not exceed the value for the etcd
# server specified with `--max-request-bytes` (1.5MiB by default).
# Keep the two values in sync.
#
# See https://etcd.io/docs/v3.4.0/dev-guide/limit/ for details
#
# This bumps the size to 15MiB as an example:
etcd_max_client_msg_size_bytes: 15728640
S3
Before continuing, please make sure to take a look at the Cluster State section in Teleport Architecture documentation.
S3 buckets can only be used as storage for the recorded sessions. S3 cannot store the audit log or the cluster state.
S3 buckets must have versioning enabled, which ensures that a session log cannot be permanently altered or deleted. Teleport will always look at the oldest version of a recording.
Authenticating to AWS
The Teleport Auth Service must be able to read AWS credentials in order to authenticate to S3.
Grant the Teleport Auth Service access to credentials that it can use to authenticate to AWS. If you are running the Teleport Auth Service on an EC2 instance, you should use the EC2 Instance Metadata Service method: Teleport will detect when it is running on an EC2 instance and use the Instance Metadata Service to fetch credentials.
Otherwise, you must use environment variables. Teleport's built-in AWS client reads credentials from the following environment variables:
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `AWS_DEFAULT_REGION`
When you start the Teleport Auth Service, the service reads environment variables from a file at the path `/etc/default/teleport`. Obtain these credentials from your organization. Ensure that `/etc/default/teleport` has the following content, replacing the values of each variable:
AWS_ACCESS_KEY_ID=00000000000000000000
AWS_SECRET_ACCESS_KEY=0000000000000000000000000000000000000000
AWS_DEFAULT_REGION=<YOUR_REGION>
Teleport's AWS client loads credentials from different sources in the following order:
- Environment Variables
- Shared credentials file
- Shared configuration file (Teleport always enables shared configuration)
- EC2 Instance Metadata (credentials only)
While you can provide AWS credentials via a shared credentials file or shared configuration file, you will need to run the Teleport Auth Service with the `AWS_PROFILE` environment variable assigned to the name of your profile of choice.
If you have a specific use case that the instructions above do not account for, consult the documentation for the AWS SDK for Go for a detailed description of credential loading behavior.
Configuring the S3 backend
Below is an example of how to configure the Teleport Auth Service to store the recorded sessions in an S3 bucket.
teleport:
storage:
# The region setting sets the default AWS region for all AWS services
# Teleport may consume (DynamoDB, S3)
region: us-east-1
# Path to S3 bucket to store the recorded sessions in.
audit_sessions_uri: "s3://Example_TELEPORT_S3_BUCKET/records"
# Teleport assumes credentials. Using provider chains, assuming IAM role or
# standard .aws/credentials in the home folder.
You can add optional query parameters to the S3 URL. The Teleport Auth Service reads these parameters to configure its interactions with S3:
s3://bucket/path?region=us-east-1&endpoint=mys3.example.com&insecure=false&disablesse=false&acl=private&use_fips_endpoint=true
- `region=us-east-1` - set the Amazon region to use.
- `endpoint=mys3.example.com` - connect to a custom S3 endpoint. Optional.
- `insecure=true` - set to `true` or `false`. If `true`, TLS will be disabled. Default value is `false`.
- `disablesse=true` - set to `true` or `false`. If `true`, S3 server-side encryption will be disabled. If `false`, aws:kms (Key Management Service) will be used for server-side encryption. Other SSE types are not supported at this time. Setting `insecure=true` implicitly sets `disablesse=true`.
- `sse_kms_key=kms_key_id` - if set to a valid AWS KMS CMK key ID, all objects uploaded to S3 will be encrypted with this key. Details can be found below.
- `acl=private` - set the canned ACL to use. Must be one of the predefined ACL values.
- `use_fips_endpoint=true` - configure S3 FIPS endpoints.
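For example, the storage configuration below (the bucket name is a placeholder) pins the region and applies a canned ACL to uploaded recordings:

```yaml
teleport:
  storage:
    # Bucket name and region are placeholders; adjust for your environment.
    audit_sessions_uri: "s3://example-sessions-bucket/records?region=us-east-1&acl=private"
```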
S3 IAM policy
You'll need to replace these values in the policy example below:
Placeholder value | Replace with |
---|---|
your-sessions-bucket | Name to use for the Teleport S3 session recording bucket |
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "BucketActions",
"Effect": "Allow",
"Action": [
"s3:PutEncryptionConfiguration",
"s3:PutBucketVersioning",
"s3:ListBucketVersions",
"s3:ListBucketMultipartUploads",
"s3:ListBucket",
"s3:GetEncryptionConfiguration",
"s3:GetBucketVersioning",
"s3:CreateBucket"
],
"Resource": "arn:aws:s3:::your-sessions-bucket"
},
{
"Sid": "ObjectActions",
"Effect": "Allow",
"Action": [
"s3:GetObjectVersion",
"s3:GetObjectRetention",
"s3:*Object",
"s3:ListMultipartUploadParts",
"s3:AbortMultipartUpload"
],
"Resource": "arn:aws:s3:::your-sessions-bucket/*"
}
]
}
S3 Server Side Encryption
Teleport supports using a custom AWS KMS Customer Managed Key for encrypting objects uploaded to S3. By restricting access to the key, you can control who can read objects such as session recordings independently of who has read access to the bucket itself.
The `sse_kms_key` parameter above can be set to any valid KMS CMK ID corresponding to a symmetric standard spec KMS key.
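For example, a recordings URI that encrypts uploads with a specific customer-managed key might look like the following sketch (the bucket name and key ID are placeholders):

```yaml
teleport:
  storage:
    # Objects written to this bucket are encrypted with the given KMS CMK.
    audit_sessions_uri: "s3://example-sessions-bucket/records?sse_kms_key=1234abcd-12ab-34cd-56ef-1234567890ab"
```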
Example template KMS key policies are provided below for common use cases. IAM users do not have access to any key by default; permissions have to be explicitly granted in the policy.
Encryption/Decryption
This policy allows an IAM user to encrypt and decrypt objects, which lets a cluster's auth nodes write and play back session recordings. Replace `[iam-key-admin-arn]` with the IAM ARN of the user(s) that should have administrative key access and `[auth-node-iam-arn]` with the IAM ARN of the user the Teleport auth nodes are using.
{
"Id": "Teleport Encryption and Decryption",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Teleport CMK Admin",
"Effect": "Allow",
"Principal": {
"AWS": "[iam-key-admin-arn]"
},
"Action": "kms:*",
"Resource": "*"
},
{
"Sid": "Teleport CMK Auth",
"Effect": "Allow",
"Principal": {
"AWS": "[auth-node-iam-arn]"
},
"Action": [
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:DescribeKey"
],
"Resource": "*"
}
]
}
Encryption/Decryption with separate clusters
This policy allows specifying separate IAM users for encryption and decryption. This can be used to set up a multi-cluster configuration where the main cluster can only write session recordings but cannot play them back, while a separate cluster authenticating as a different IAM user with decryption access is used for playback.
Replace `[iam-key-admin-arn]` with the IAM ARN of the user(s) that should have administrative key access, `[auth-node-write-arn]` with the IAM ARN of the user the write-only main cluster's auth nodes are using, and `[auth-node-read-arn]` with the IAM ARN of the user used by the read-only cluster.
For this to work the second cluster has to be connected to the same audit log as the main cluster. This is needed to detect session recordings.
{
"Id": "Teleport Separate Encryption and Decryption",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Teleport CMK Admin",
"Effect": "Allow",
"Principal": {
"AWS": "[iam-key-admin-arn]"
},
"Action": "kms:*",
"Resource": "*"
},
{
"Sid": "Teleport CMK Auth Encrypt",
"Effect": "Allow",
"Principal": {
"AWS": "[auth-node-write-arn]"
},
"Action": [
"kms:Encrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:DescribeKey"
],
"Resource": "*"
},
{
"Sid": "Teleport CMK Auth Decrypt",
"Effect": "Allow",
"Principal": {
"AWS": "[auth-node-read-arn]"
},
"Action": [
"kms:Decrypt",
"kms:DescribeKey"
],
"Resource": "*"
}
]
}
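As noted above, the read-only cluster must share the write-only cluster's audit log and recordings bucket in order to find and play back sessions. A sketch of the storage settings both clusters would share (the table name, bucket name, and key ID are placeholders); each cluster keeps its own cluster-state table, but the audit destinations are the same:

```yaml
teleport:
  storage:
    # Shared audit destinations used by both the write-only and read-only clusters.
    audit_events_uri: ['dynamodb://shared_events_table']
    audit_sessions_uri: 's3://shared-sessions-bucket/records?sse_kms_key=1234abcd-12ab-34cd-56ef-1234567890ab'
```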
ACL example: transferring object ownership
If you are uploading from AWS account `A` to a bucket owned by AWS account `B` and want `B` to assume ownership of the objects, you can take one of two approaches.
Without ACLs
If ACLs are disabled, object ownership will be set to Bucket owner enforced and no action will be needed.
With ACLs
- Set object ownership to Bucket owner preferred (under Permissions in the management console).
- Add `acl=bucket-owner-full-control` to `audit_sessions_uri`.
To enforce the ownership transfer, set `B`'s bucket policy to only allow uploads that include the `bucket-owner-full-control` canned ACL:
{
"Version": "2012-10-17",
"Id": "[id]",
"Statement": [
{
"Sid": "[sid]",
"Effect": "Allow",
"Principal": {
"AWS": "[ARN of account A]"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::BucketName/*",
"Condition": {
"StringEquals": {
"s3:x-amz-acl": "bucket-owner-full-control"
}
}
}
]
}
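For example, the uploading cluster (account `A`) would include the required ACL in its recordings URI (the bucket name is a placeholder):

```yaml
teleport:
  storage:
    # Grant the bucket owner (account B) full control of uploaded recordings.
    audit_sessions_uri: "s3://example-sessions-bucket/records?acl=bucket-owner-full-control"
```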
For more information, see the AWS Documentation.
DynamoDB
Before continuing, please make sure to take a look at the Cluster State section in the Teleport Architecture documentation.
If you are running Teleport on AWS, you can use DynamoDB as a storage back-end to achieve High Availability. DynamoDB backend supports two types of Teleport data:
- Cluster state
- Audit log events
DynamoDB cannot store the recorded sessions. You are advised to use AWS S3 for that as shown above.
Authenticating to AWS
The Teleport Auth Service must be able to read AWS credentials in order to authenticate to DynamoDB.
Grant the Teleport Auth Service access to credentials that it can use to authenticate to AWS. If you are running the Teleport Auth Service on an EC2 instance, you should use the EC2 Instance Metadata Service method: Teleport will detect when it is running on an EC2 instance and use the Instance Metadata Service to fetch credentials.
Otherwise, you must use environment variables. Teleport's built-in AWS client reads credentials from the following environment variables:
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `AWS_DEFAULT_REGION`
When you start the Teleport Auth Service, the service reads environment variables from a file at the path `/etc/default/teleport`. Obtain these credentials from your organization. Ensure that `/etc/default/teleport` has the following content, replacing the values of each variable:
AWS_ACCESS_KEY_ID=00000000000000000000
AWS_SECRET_ACCESS_KEY=0000000000000000000000000000000000000000
AWS_DEFAULT_REGION=<YOUR_REGION>
Teleport's AWS client loads credentials from different sources in the following order:
- Environment Variables
- Shared credentials file
- Shared configuration file (Teleport always enables shared configuration)
- EC2 Instance Metadata (credentials only)
While you can provide AWS credentials via a shared credentials file or shared configuration file, you will need to run the Teleport Auth Service with the `AWS_PROFILE` environment variable assigned to the name of your profile of choice.
If you have a specific use case that the instructions above do not account for, consult the documentation for the AWS SDK for Go for a detailed description of credential loading behavior.
The IAM role that the Teleport Auth Service authenticates as must have the policies specified in the next section.
IAM policies
Make sure that the IAM role assigned to the Teleport Auth Service is configured with sufficient access to DynamoDB. Below is an example of an IAM policy you can use:
{
"Version": "2012-10-17",
"Statement": [{
"Sid": "AllAPIActionsOnTeleportAuth",
"Effect": "Allow",
"Action": "dynamodb:*",
"Resource": "arn:aws:dynamodb:eu-west-1:123456789012:table/prod.teleport.auth"
},
{
"Sid": "AllAPIActionsOnTeleportStreams",
"Effect": "Allow",
"Action": "dynamodb:*",
"Resource": "arn:aws:dynamodb:eu-west-1:123456789012:table/prod.teleport.auth/stream/*"
}
]
}
Configuring the DynamoDB backend
To configure Teleport to use DynamoDB:
- Configure all Teleport Auth servers to use the DynamoDB back-end in the "storage" section of `teleport.yaml` as shown below.
- Deploy several auth servers connected to the DynamoDB storage back-end.
- Deploy several proxy nodes.
- Make sure that all Teleport resource services have the `auth_servers` configuration setting populated with the addresses of your cluster's Auth Service instances.
teleport:
storage:
type: dynamodb
# Region location of dynamodb instance, https://docs.aws.amazon.com/en_pv/general/latest/gr/rande.html#ddb_region
region: us-east-1
# Name of the DynamoDB table. If it does not exist, Teleport will create it.
table_name: Example_TELEPORT_DYNAMO_TABLE_NAME
# This setting configures Teleport to send the audit events to three places:
# To keep a copy in DynamoDB, a copy on a local filesystem, and also output the events to stdout.
# NOTE: The DynamoDB events table has a different schema to the regular Teleport
# database table, so attempting to use the same table for both will result in errors.
# When using highly available storage like DynamoDB, you should make sure that the list always specifies
# the High Availability storage method first, as this is what the Teleport web UI uses as its source of events to display.
audit_events_uri: ['dynamodb://events_table_name', 'file:///var/lib/teleport/audit/events', 'stdout://']
# This setting configures Teleport to save the recorded sessions in an S3 bucket:
audit_sessions_uri: s3://Example_TELEPORT_S3_BUCKET/records
# By default, Teleport stores audit events with an AWS TTL of 1 year.
# This value can be configured as shown below. If set to 0 seconds, TTL is disabled.
audit_retention_period: 365d
- Replace `us-east-1` and `Example_TELEPORT_DYNAMO_TABLE_NAME` with your own settings. Teleport will create the table automatically.
- `Example_TELEPORT_DYNAMO_TABLE_NAME` and `events_table_name` must be different DynamoDB tables. The schema is different for each. Using the same table name for both will result in errors.
- The audit log settings above are optional. If specified, Teleport will store the audit log in DynamoDB and the session recordings must be stored in an S3 bucket, i.e. both `audit_xxx` settings must be present. If they are not set, Teleport will default to a local file system for the audit log, i.e. `/var/lib/teleport/log` on an auth server.
The optional `GET` parameters shown below control how Teleport interacts with a DynamoDB endpoint.
dynamodb://events_table_name?region=us-east-1&endpoint=dynamo.example.com&use_fips_endpoint=true
- `region=us-east-1` - set the Amazon region to use.
- `endpoint=dynamo.example.com` - connect to a custom DynamoDB endpoint. Optional.
- `use_fips_endpoint=true` - configure DynamoDB FIPS endpoints.
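For example, an events URI that targets a custom endpoint (the table name and endpoint are placeholders):

```yaml
teleport:
  storage:
    audit_events_uri: ['dynamodb://events_table_name?region=us-east-1&endpoint=dynamo.example.com']
```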
Assigning permissions to the Teleport Auth Service
Make sure that the IAM role assigned to Teleport is configured with sufficient access to DynamoDB. Below is an example of the IAM policy we recommend.
You'll need to replace these values in the policy example below:
Placeholder value | Replace with |
---|---|
us-west-2 | AWS region |
1234567890 | AWS account ID |
teleport-helm-backend | DynamoDB table name to use for the Teleport backend |
teleport-helm-events | DynamoDB table name to use for the Teleport audit log (must be different to the backend table) |
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ClusterStateStorage",
"Effect": "Allow",
"Action": [
"dynamodb:BatchWriteItem",
"dynamodb:UpdateTimeToLive",
"dynamodb:PutItem",
"dynamodb:DeleteItem",
"dynamodb:Scan",
"dynamodb:Query",
"dynamodb:DescribeStream",
"dynamodb:UpdateItem",
"dynamodb:DescribeTimeToLive",
"dynamodb:CreateTable",
"dynamodb:DescribeTable",
"dynamodb:GetShardIterator",
"dynamodb:GetItem",
"dynamodb:UpdateTable",
"dynamodb:GetRecords",
"dynamodb:UpdateContinuousBackups"
],
"Resource": [
"arn:aws:dynamodb:us-west-2:1234567890:table/teleport-helm-backend",
"arn:aws:dynamodb:us-west-2:1234567890:table/teleport-helm-backend/stream/*"
]
},
{
"Sid": "ClusterEventsStorage",
"Effect": "Allow",
"Action": [
"dynamodb:CreateTable",
"dynamodb:BatchWriteItem",
"dynamodb:UpdateTimeToLive",
"dynamodb:PutItem",
"dynamodb:DescribeTable",
"dynamodb:DeleteItem",
"dynamodb:GetItem",
"dynamodb:Scan",
"dynamodb:Query",
"dynamodb:UpdateItem",
"dynamodb:DescribeTimeToLive",
"dynamodb:UpdateTable",
"dynamodb:UpdateContinuousBackups"
],
"Resource": [
"arn:aws:dynamodb:us-west-2:1234567890:table/teleport-helm-events",
"arn:aws:dynamodb:us-west-2:1234567890:table/teleport-helm-events/index/*"
]
}
]
}
DynamoDB autoscaling
When setting up DynamoDB it's important to set up backups. Autoscaling is optional with Teleport, and it is advised to collect some production usage data before enabling autoscaling.
Autoscaling and backup can be configured on DynamoDB directly. We also make setup simpler by allowing AWS DynamoDB settings to be set automatically during Teleport startup. The configuration is described below.
DynamoDB Continuous Backups
DynamoDB Autoscaling Options
# ...
teleport:
storage:
type: "dynamodb"
[...]
# continuous_backups is used to optionally enable continuous backups.
# default: false
continuous_backups: [true|false]
# auto_scaling is used to optionally enable (and define settings for) auto scaling.
# default: false
auto_scaling: [true|false]
# Minimum/maximum read capacity in units
read_min_capacity: int
read_max_capacity: int
read_target_value: float
# Minimum/maximum write capacity in units
write_min_capacity: int
write_max_capacity: int
write_target_value: float
To enable these options you will need to update the IAM Policy for Teleport.
{
"Action": [
"application-autoscaling:PutScalingPolicy",
"application-autoscaling:RegisterScalableTarget"
],
"Effect": "Allow",
"Resource": "*"
},
{
"Action": [
"iam:CreateServiceLinkedRole"
],
"Condition": {
"StringEquals": {
"iam:AWSServiceName": [
"dynamodb.application-autoscaling.amazonaws.com"
]
}
},
"Effect": "Allow",
"Resource": "*"
}
Configuring AWS FIPS endpoints
This config option applies to AWS S3 and AWS DynamoDB.
Set `use_fips_endpoint` to `true` or `false`. If `true`, FIPS endpoints will be used. If `false`, normal endpoints will be used. If unset, the AWS environment variable `AWS_USE_FIPS_ENDPOINT` will determine which endpoint is used. FIPS endpoints will also be used if Teleport is run with the `--fips` flag.
Config option priority is applied in the following order:
- Setting the `use_fips_endpoint` query parameter, as shown above
- Using the `--fips` flag when running Teleport
- Using the AWS environment variable
Setting this environment variable to true will enable FIPS endpoints for all AWS resource types. Some FIPS endpoints are not supported in certain regions or environments, or are only supported in GovCloud.
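For example, the query parameter can be set on both the DynamoDB and S3 URIs (the table and bucket names are placeholders):

```yaml
teleport:
  storage:
    audit_events_uri: ['dynamodb://events_table_name?use_fips_endpoint=true']
    audit_sessions_uri: 's3://example-sessions-bucket/records?use_fips_endpoint=true'
```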
GCS
Before continuing, please make sure to take a look at the Cluster State section in Teleport Architecture documentation.
Google Cloud Storage (GCS) can be used as storage for recorded sessions. GCS cannot store the audit log or the cluster state. Below is an example of how to configure a Teleport auth server to store the recorded sessions in a GCS bucket.
teleport:
storage:
# Path to GCS to store the recorded sessions in.
audit_sessions_uri: 'gs://$BUCKET_NAME/records?projectID=$PROJECT_ID&credentialsPath=$CREDENTIALS_PATH'
We recommend creating a bucket in Dual-Region mode with the Standard storage class to ensure cluster performance and high availability.
Replace the following variables in the above example with your own values:
- `$BUCKET_NAME` with the name of the desired GCS bucket. If the bucket does not exist, it will be created. Please ensure the following permissions are granted for the given bucket:
  - `storage.buckets.get`
  - `storage.objects.create`
  - `storage.objects.get`
  - `storage.objects.list`
  - `storage.objects.update`
  If the bucket does not exist, please also ensure that the `storage.buckets.create` permission is granted.
- `$PROJECT_ID` with a GCS-enabled GCP project.
- `$CREDENTIALS_PATH` with the path to a JSON-formatted GCP credentials file configured for a service account applicable to the project.
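For example, with hypothetical values filled in:

```yaml
teleport:
  storage:
    # Bucket, project, and credentials path are placeholders.
    audit_sessions_uri: 'gs://example-teleport-records/records?projectID=example-project&credentialsPath=/var/lib/teleport/gcs_creds.json'
```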
Firestore
Before continuing, please make sure to take a look at the Cluster State section in Teleport Architecture documentation.
If you are running Teleport on GCP, you can use Firestore as a storage back-end to achieve high availability. Firestore backend supports two types of Teleport data:
- Cluster state
- Audit log events
Firestore cannot store the recorded sessions. You are advised to use Google Cloud Storage (GCS) for that as shown above. To configure Teleport to use Firestore:
- Configure all Teleport Auth servers to use the Firestore back-end in the "storage" section of `teleport.yaml` as shown below.
- Deploy several auth servers connected to the Firestore storage back-end.
- Deploy several proxy nodes.
- Make sure that all Teleport resource services have the `auth_servers` configuration setting populated with the addresses of your cluster's Auth Service instances, or use a load balancer for Auth Service instances in high availability mode.
teleport:
storage:
type: firestore
# Project ID https://support.google.com/googleapi/answer/7014113?hl=en
project_id: Example_GCP_Project_Name
# Name of the Firestore table. If it does not exist, Teleport won't start
collection_name: Example_TELEPORT_FIRESTORE_TABLE_NAME
credentials_path: /var/lib/teleport/gcs_creds
# This setting configures Teleport to send the audit events to three places:
# To keep a copy in Firestore, a copy on a local filesystem, and also write the events to stdout.
# NOTE: The Firestore events table has a different schema to the regular Teleport
# database table, so attempting to use the same table for both will result in errors.
# When using highly available storage like Firestore, you should make sure that the list always specifies
# the High Availability storage method first, as this is what the Teleport web UI uses as its source of events to display.
audit_events_uri: ['firestore://Example_TELEPORT_FIRESTORE_EVENTS_TABLE_NAME', 'file:///var/lib/teleport/audit/events', 'stdout://']
# This setting configures Teleport to save the recorded sessions in GCP storage:
audit_sessions_uri: gs://Example_TELEPORT_GCS_BUCKET/records
- Replace `Example_GCP_Project_Name` and `Example_TELEPORT_FIRESTORE_TABLE_NAME` with your own settings. Teleport will create the table automatically.
- `Example_TELEPORT_FIRESTORE_TABLE_NAME` and `Example_TELEPORT_FIRESTORE_EVENTS_TABLE_NAME` must be different Firestore tables. The schema is different for each. Using the same table name for both will result in errors.
- The GCP authentication setting above can be omitted if the machine itself is running on a GCE instance with a Service Account that has access to the Firestore table.
- The audit log settings above are optional. If specified, Teleport will store the audit log in Firestore and the session recordings must be stored in a GCS bucket, i.e. both `audit_xxx` settings must be present. If they are not set, Teleport will default to a local filesystem for the audit log, i.e. `/var/lib/teleport/log` on an auth server.
SQLite
The Auth Service uses the SQLite backend when no `type` is specified in the storage section of the Teleport configuration file, or when `type` is set to `sqlite` or `dir`. The SQLite backend is not designed for high throughput and is not capable of serving the needs of Teleport's High Availability configurations.
If you are planning to use SQLite as your backend, scale your cluster slowly and monitor the number of warning messages in the Auth Service's logs that say `SLOW TRANSACTION`, as that's a sign that the cluster has outgrown the capabilities of the SQLite backend.
As a stopgap measure until it's possible to migrate the cluster to use a HA-capable backend, you can configure the SQLite backend to reduce the amount of disk synchronization, in exchange for less resilience against system crashes or power loss. For an explanation on what the options mean, see the official SQLite docs. No matter the configuration, we recommend you take regular backups of your cluster state.
To reduce disk synchronization:
teleport:
storage:
type: sqlite
sync: NORMAL
To disable disk synchronization altogether:
teleport:
storage:
type: sqlite
sync: "OFF"
When running on a filesystem that supports file locks (i.e. a local filesystem, not a networked one) it's possible to also configure the SQLite database to use Write-Ahead Logging (see the official docs on WAL mode) for significantly improved performance without sacrificing reliability:
teleport:
storage:
type: sqlite
sync: NORMAL
journal: WAL