
Running an HA Teleport cluster using AWS, EKS, and Helm

In this guide, we'll go through how to set up a High Availability Teleport cluster with multiple replicas in Kubernetes using Teleport Helm charts and AWS products (DynamoDB and S3).

Prerequisites

Verify that Helm and Kubernetes are installed and up to date.

Tip

The examples below may include the use of the sudo keyword, token UUIDs, and users with admin privileges to make following each step easier when creating resources from scratch.

Generally:

  1. We discourage using sudo in production environments unless it's needed.
  2. We encourage creating new, non-root, users or new test instances for experimenting with Teleport.
  3. We encourage adherence to the Principle of Least Privilege (PoLP) and Zero Admin best practices. Don't give users the admin role when the more restrictive access and editor roles will do instead.
  4. We encourage saving tokens into a file rather than sharing tokens directly as strings.

Learn more about Teleport Role-Based Access Control best practices.

Step 1. Install Helm

Teleport's charts require the use of Helm version 3. You can install Helm 3 by following these instructions.

Throughout this guide, we will assume that you have the helm and kubectl binaries available in your PATH:

$ helm version
# version.BuildInfo{Version:"v3.4.2"}

$ kubectl version
# Client Version: version.Info{Major:"1", Minor:"17+"}
# Server Version: version.Info{Major:"1", Minor:"17+"}

Step 2. Add the Teleport Helm chart repository

To allow Helm to install charts that are hosted in the Teleport Helm repository, use helm repo add:

$ helm repo add teleport https://charts.releases.teleport.dev

To update the cache of charts from the remote repository, run helm repo update:

$ helm repo update
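
To confirm that Helm can now see the Teleport charts, you can optionally run a search (exact output will vary with chart versions):

$ helm search repo teleport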

Step 3. Set up AWS IAM configuration

For Teleport to be able to create the DynamoDB tables, indexes, and the S3 storage bucket it needs, you'll need to configure AWS IAM policies to allow access.

Note
These IAM policies should be added to your AWS account, then granted to the instance role associated with the EKS nodegroups which are running your Kubernetes nodes.
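
As a rough sketch of what this looks like with the AWS CLI: assuming you have saved each policy below to a local JSON file, and substituting your own policy name, file name, account ID, and nodegroup instance role name (all four are placeholders here), you could create and attach a policy like this:

# Sketch only: the policy name, file name, account ID, and role name are placeholders
aws iam create-policy \
  --policy-name teleport-dynamodb-policy \
  --policy-document file://dynamodb-policy.json

# Attach the policy to the instance role used by your EKS nodegroup
aws iam attach-role-policy \
  --role-name my-eks-nodegroup-instance-role \
  --policy-arn arn:aws:iam::1234567890:policy/teleport-dynamodb-policy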

DynamoDB IAM policy

You'll need to replace these values in the policy example below:

Placeholder value        Replace with
us-west-2                AWS region
1234567890               AWS account ID
teleport-helm-backend    DynamoDB table name to use for the Teleport backend
teleport-helm-events     DynamoDB table name to use for the Teleport audit log (must be different to the backend table)
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ClusterStateStorage",
            "Effect": "Allow",
            "Action": [
                "dynamodb:BatchWriteItem",
                "dynamodb:UpdateTimeToLive",
                "dynamodb:PutItem",
                "dynamodb:DeleteItem",
                "dynamodb:Scan",
                "dynamodb:Query",
                "dynamodb:DescribeStream",
                "dynamodb:UpdateItem",
                "dynamodb:DescribeTimeToLive",
                "dynamodb:CreateTable",
                "dynamodb:DescribeTable",
                "dynamodb:GetShardIterator",
                "dynamodb:GetItem",
                "dynamodb:UpdateTable",
                "dynamodb:GetRecords"
            ],
            "Resource": [
                "arn:aws:dynamodb:us-west-2:1234567890:table/teleport-helm-backend",
                "arn:aws:dynamodb:us-west-2:1234567890:table/teleport-helm-backend/stream/*"
            ]
        },
        {
            "Sid": "ClusterEventsStorage",
            "Effect": "Allow",
            "Action": [
                "dynamodb:CreateTable",
                "dynamodb:BatchWriteItem",
                "dynamodb:UpdateTimeToLive",
                "dynamodb:PutItem",
                "dynamodb:DescribeTable",
                "dynamodb:DeleteItem",
                "dynamodb:GetItem",
                "dynamodb:Scan",
                "dynamodb:Query",
                "dynamodb:UpdateItem",
                "dynamodb:DescribeTimeToLive",
                "dynamodb:UpdateTable"
            ],
            "Resource": [
                "arn:aws:dynamodb:us-west-2:1234567890:table/teleport-helm-events",
                "arn:aws:dynamodb:us-west-2:1234567890:table/teleport-helm-events/index/*"
            ]
        }
    ]
}
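
Later, once Teleport has started and created its tables, you can sanity-check the backend table with the AWS CLI (table name and region as chosen above):

aws dynamodb describe-table --table-name teleport-helm-backend --region us-west-2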

S3 IAM policy

You'll need to replace these values in the policy example below:

Placeholder value        Replace with
teleport-helm-sessions   Name to use for the Teleport S3 session recording bucket
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ClusterSessionsStorage",
            "Effect": "Allow",
            "Action": [
                "s3:PutEncryptionConfiguration",
                "s3:PutObject",
                "s3:GetObject",
                "s3:GetEncryptionConfiguration",
                "s3:GetObjectRetention",
                "s3:ListBucketVersions",
                "s3:ListBucketMultipartUploads",
                "s3:CreateBucket",
                "s3:ListBucket",
                "s3:GetBucketVersioning",
                "s3:PutBucketVersioning",
                "s3:GetObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::teleport-helm-sessions/*",
                "arn:aws:s3:::teleport-helm-sessions"
            ]
        }
    ]
}
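
As with DynamoDB, once Teleport has created the bucket you can confirm it exists and is accessible (the command exits non-zero if not):

aws s3api head-bucket --bucket teleport-helm-sessions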

Route53 IAM policy

This policy allows cert-manager to use DNS01 Let's Encrypt challenges to provision TLS certificates for your Teleport cluster.

You'll need to replace these values in the policy example below:

Placeholder value        Replace with
Z0159221358P96JYAUAA4    Route 53 hosted zone ID for the domain hosting your Teleport cluster
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "route53:GetChange",
            "Resource": "arn:aws:route53:::change/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "route53:ChangeResourceRecordSets",
                "route53:ListResourceRecordSets"
            ],
            "Resource": "arn:aws:route53:::hostedzone/Z0159221358P96JYAUAA4"
        }
    ]
}
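
If you need to look up your hosted zone ID, one way is the AWS CLI (note that the returned value carries a /hostedzone/ prefix which is not part of the ID):

aws route53 list-hosted-zones-by-name --dns-name example.com --query 'HostedZones[0].Id' --output text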

Step 4. Install and configure cert-manager

Reference: cert-manager docs

In this example, we are using multiple pods to create a High Availability Teleport cluster. As such, we will be using cert-manager to centrally provision TLS certificates using Let's Encrypt. These certificates will be mounted into each Teleport pod, and automatically renewed and kept up to date by cert-manager.

If you do not have cert-manager already configured in the Kubernetes cluster where you are installing Teleport, you should add the Jetstack Helm chart repository which hosts the cert-manager chart, and install the chart:

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --create-namespace \
  --namespace cert-manager \
  --set installCRDs=true \
  --set extraArgs="{--issuer-ambient-credentials}" # required to automount ambient AWS credentials when using an Issuer
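
Before continuing, you can check that the cert-manager pods have started successfully:

kubectl --namespace cert-manager get pods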

Once cert-manager is installed, you should create and add an Issuer.

You'll need to replace these values in the Issuer example below:

Placeholder value        Replace with
[email protected]          An email address to receive communications from Let's Encrypt
example.com              The name of the Route 53 domain hosting your Teleport cluster
us-east-1                AWS region where the cluster is running
Z0159221358P96JYAUAA4    Route 53 hosted zone ID for the domain hosting your Teleport cluster
cat << EOF > aws-issuer.yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-production
  namespace: teleport
spec:
  acme:
    email: [email protected]                                # Change this
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-production
    solvers:
    - selector:
        dnsZones:
          - "example.com"                                  # Change this
      dns01:
        route53:
          region: us-east-1                                # Change this
          hostedZoneID: Z0159221358P96JYAUAA4              # Change this
EOF

After you have created the Issuer and updated the values, add it to your cluster using kubectl:

kubectl create namespace teleport
kubectl --namespace teleport create -f aws-issuer.yaml
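
You can then confirm that cert-manager has registered the Issuer; its READY column should report True once the ACME account is set up:

kubectl --namespace teleport get issuer letsencrypt-production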

Step 5. Set values to configure the cluster

There are two ways to configure the teleport-cluster Helm chart to use aws mode: with a values.yaml file, or with --set flags on the command line.

We recommend using a values.yaml file as it can be easily kept in source control.

The --set CLI method is more appropriate for quick test deployments.
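
For reference, a quick-test install using --set flags equivalent to the aws-values.yaml file shown below might look like this sketch (same placeholder names as above):

helm install teleport teleport/teleport-cluster \
  --create-namespace \
  --namespace teleport \
  --set chartMode=aws \
  --set clusterName=teleport.example.com \
  --set aws.region=us-west-2 \
  --set aws.backendTable=teleport-helm-backend \
  --set aws.auditLogTable=teleport-helm-events \
  --set aws.sessionRecordingBucket=teleport-helm-sessions \
  --set aws.backups=true \
  --set highAvailability.replicaCount=2 \
  --set highAvailability.certManager.enabled=true \
  --set highAvailability.certManager.issuerName=letsencrypt-production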

Create an aws-values.yaml file and write the values you've chosen above to it:

chartMode: aws
clusterName: teleport.example.com                 # Name of your cluster. Use the FQDN you intend to configure in DNS below.
aws:
  region: us-west-2                               # AWS region
  backendTable: teleport-helm-backend             # DynamoDB table to use for the Teleport backend
  auditLogTable: teleport-helm-events             # DynamoDB table to use for the Teleport audit log (must be different to the backend table)
  sessionRecordingBucket: teleport-helm-sessions  # S3 bucket to use for Teleport session recordings
  backups: true                                   # Whether or not to turn on DynamoDB backups
highAvailability:
  replicaCount: 2                                 # Number of replicas to configure
  certManager:
    enabled: true                                 # Enable cert-manager support to get TLS certificates
    issuerName: letsencrypt-production            # Name of the cert-manager Issuer to use (as configured above)

Install the chart with the values from your aws-values.yaml file using this command:

helm install teleport teleport/teleport-cluster \
  --create-namespace \
  --namespace teleport \
  -f aws-values.yaml

Note
You cannot change the clusterName after the cluster is configured, so make sure you choose wisely. You should use the fully-qualified domain name that you'll use for external access to your Teleport cluster.

Once the chart is installed, you can use kubectl commands to view the deployment:

kubectl --namespace teleport get all

NAME                            READY   STATUS    RESTARTS   AGE
pod/teleport-5cf46ddf5f-dzh65   1/1     Running   0          4m21s
pod/teleport-5cf46ddf5f-mpghq   1/1     Running   0          4m21s

NAME               TYPE           CLUSTER-IP      EXTERNAL-IP                                                                PORT(S)                                                       AGE
service/teleport   LoadBalancer   10.100.37.171   a232d92df01f940339adea0e645d88bb-1576732600.us-east-1.elb.amazonaws.com   443:30821/TCP,3023:30801/TCP,3026:32612/TCP,3024:31253/TCP   4m21s

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/teleport   2/2     2            2           4m21s

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/teleport-5cf46ddf5f   2         2         2       4m21s
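
Since cert-manager support is enabled, you can also check on the TLS certificate it provisions for the cluster (the exact resource name may differ in your deployment):

kubectl --namespace teleport get certificate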

Step 6. Set up DNS

You'll need to set up two DNS A records: teleport.example.com for the web UI, and *.teleport.example.com for web apps using application access. In our example, both records are aliases to an ELB.

Here's how to do this in a hosted zone with AWS Route 53:

# Change these parameters if you altered them above
NAMESPACE='teleport'
RELEASE_NAME='teleport'

# DNS settings (change as necessary)
MYZONE_DNS='example.com'
MYDNS='teleport.example.com'
MY_CLUSTER_REGION='us-west-2'

# Find the AWS zone ID and the ELB zone ID
MYZONE="$(aws route53 list-hosted-zones-by-name --dns-name="${MYZONE_DNS?}" | jq -r '.HostedZones[0].Id' | sed s_/hostedzone/__)"
MYELB="$(kubectl --namespace "${NAMESPACE?}" get "service/${RELEASE_NAME?}" -o jsonpath='{.status.loadBalancer.ingress[*].hostname}')"
MYELB_NAME="${MYELB%%-*}"
MYELB_ZONE="$(aws elb describe-load-balancers --region "${MY_CLUSTER_REGION?}" --load-balancer-names "${MYELB_NAME?}" | jq -r '.LoadBalancerDescriptions[0].CanonicalHostedZoneNameID')"

# Create a JSON changeset for AWS
jq -n --arg dns "${MYDNS?}" --arg elb "${MYELB?}" --arg elbz "${MYELB_ZONE?}" \
  '{
    "Comment": "Create records",
    "Changes": [
      {
        "Action": "CREATE",
        "ResourceRecordSet": {
          "Name": $dns,
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": $elbz,
            "DNSName": ("dualstack." + $elb),
            "EvaluateTargetHealth": false
          }
        }
      },
      {
        "Action": "CREATE",
        "ResourceRecordSet": {
          "Name": ("*." + $dns),
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": $elbz,
            "DNSName": ("dualstack." + $elb),
            "EvaluateTargetHealth": false
          }
        }
      }
    ]
  }' > myrecords.json

# Review records before applying
cat myrecords.json | jq

# Apply the records and capture the change ID
CHANGEID="$(aws route53 change-resource-record-sets --hosted-zone-id "${MYZONE?}" --change-batch file://myrecords.json | jq -r '.ChangeInfo.Id')"

# Verify that the change has been applied
aws route53 get-change --id "${CHANGEID?}" | jq '.ChangeInfo.Status'
# "INSYNC"

Step 7. Create a Teleport user

Create a user to be able to log into Teleport. This needs to be done on the Teleport auth server, so we can run the command using kubectl:

kubectl --namespace teleport exec deploy/teleport -- tctl users add test --roles=access,editor

User "test" has been created but requires a password. Share this URL with the user to complete user setup, link is valid for 1h:

https://teleport.example.com:443/web/invite/91cfbd08bc89122275006e48b516cc68

NOTE: Make sure teleport.example.com:443 points at a Teleport proxy that users can access.

Load the user creation link to create a password and set up 2-factor authentication for the Teleport user via the web UI.
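
Assuming you have the tsh client installed locally, you can then log in from your own machine:

tsh login --proxy=teleport.example.com:443 --user=test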

High Availability

In this guide, we have configured two replicas. This can be changed after cluster creation by altering the highAvailability.replicaCount value using helm upgrade as detailed below.

Upgrading the cluster after deployment

To make changes to your Teleport cluster after deployment, you can use helm upgrade.

Helm defaults to using the latest version of the chart available in the repo, which will also correspond to the latest version of Teleport. You can make sure that the repo is up to date by running helm repo update.

Here's an example where we set the chart to use 3 replicas:

Edit your aws-values.yaml file from above and make the appropriate changes.
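
For this example, the relevant edit is the replica count:

highAvailability:
  replicaCount: 3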

Upgrade the deployment with the values from your aws-values.yaml file using this command:

helm upgrade teleport teleport/teleport-cluster \
  --namespace teleport \
  -f aws-values.yaml

Note
To change chartMode, clusterName, or any aws settings, you must first uninstall the existing chart and then install a new version with the appropriate values.

Uninstalling Teleport

To uninstall the teleport-cluster chart, use helm uninstall <release-name>. For example:

helm --namespace teleport uninstall teleport
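
Note that this removes only the Kubernetes resources: the DynamoDB tables and the S3 bucket that Teleport created remain in your AWS account. If you are sure you no longer need the cluster state and session recordings, a sketch for removing them with the AWS CLI (destructive, so double-check the names first):

aws dynamodb delete-table --table-name teleport-helm-backend --region us-west-2
aws dynamodb delete-table --table-name teleport-helm-events --region us-west-2
aws s3 rb s3://teleport-helm-sessions --force   # --force deletes all objects before removing the bucket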

Uninstalling cert-manager

If you want to remove the cert-manager installation later, you can use this command:

helm --namespace cert-manager uninstall cert-manager

Next steps

You can follow our Getting Started with Teleport guide to finish setting up your Teleport cluster.

See the high availability section of our Helm chart reference for more details on high availability.
