In this guide, we'll go through how to set up a High Availability Teleport cluster with multiple replicas in Kubernetes using Teleport Helm charts and AWS products (DynamoDB and S3).
Prerequisites
- Kubernetes >= v1.17.0
- Helm >= v3.4.2
Verify that Helm and Kubernetes are installed and up to date.
When running Teleport in production, we recommend that you follow the practices below to avoid security incidents. These practices may differ from the examples used in this guide, which are intended for demo environments:
- Avoid using `sudo` in production environments unless it's necessary.
- Create new, non-root, users and use test instances for experimenting with Teleport.
- Run Teleport's services as a non-root user unless required. Only the SSH Service requires root access. Note that you will need root permissions (or the `CAP_NET_BIND_SERVICE` capability) to make Teleport listen on a port numbered lower than 1024 (e.g. `443`).
- Follow the "Principle of Least Privilege" (PoLP). Don't give users permissive roles when more restrictive roles will do instead. For example, assign users the built-in `access` and `editor` roles.
- When joining a Teleport agent to a cluster, save the invitation token to a file. Otherwise, the token will be visible when examining the `teleport` command that started the agent, e.g., via the `history` command on a compromised system.
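As a sketch of the last point, the token can live in a file with restrictive permissions and be passed to `teleport start` by path. The token value, file path, and addresses below are placeholders for your own values:

```shell
# Save the invitation token to a file with restrictive permissions
# (the token value and path here are placeholders)
echo "4b6a01cbd4af8dbeaa4d1c4e9f6b821a" > /var/lib/teleport/token
chmod 600 /var/lib/teleport/token

# Pass the token by file path so the secret never appears in
# `ps` output or shell history
teleport start --roles=node \
  --token=/var/lib/teleport/token \
  --auth-server=teleport.example.com:443
```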
Step 1/7. Install Helm
Teleport's charts require the use of Helm version 3. You can install Helm 3 by following these instructions.
Throughout this guide, we will assume that you have the `helm` and `kubectl` binaries available in your `PATH`:

```shell
$ helm version
version.BuildInfo{Version:"v3.4.2"}

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17+"}
Server Version: version.Info{Major:"1", Minor:"17+"}
```
Step 2/7. Add the Teleport Helm chart repository
To allow Helm to install charts that are hosted in the Teleport Helm repository, use `helm repo add`:

```shell
helm repo add teleport https://charts.releases.teleport.dev
```

To update the cache of charts from the remote repository, run `helm repo update`:

```shell
helm repo update
```
Step 3/7. Set up AWS IAM configuration
For Teleport to be able to create the DynamoDB tables, indexes, and the S3 storage bucket it needs, you'll need to configure AWS IAM policies to allow access.
These IAM policies should be added to your AWS account and then granted to the instance role associated with the EKS node groups that run your Kubernetes nodes.
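As a sketch, the two policies below can be created and attached with the AWS CLI. The policy names, JSON file names, node group role name, and account ID here are placeholders for your own values:

```shell
# Create the two policies from the JSON documents in this section
# (policy names and file names are illustrative)
aws iam create-policy --policy-name teleport-dynamodb \
  --policy-document file://teleport-dynamodb-policy.json
aws iam create-policy --policy-name teleport-s3 \
  --policy-document file://teleport-s3-policy.json

# Attach them to the instance role used by your EKS node group
# (replace the role name and account ID with your own)
aws iam attach-role-policy --role-name teleport-eks-node-role \
  --policy-arn arn:aws:iam::1234567890:policy/teleport-dynamodb
aws iam attach-role-policy --role-name teleport-eks-node-role \
  --policy-arn arn:aws:iam::1234567890:policy/teleport-s3
```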
DynamoDB IAM policy
You'll need to replace these values in the policy example below:
| Placeholder value | Replace with |
|---|---|
| `us-west-2` | AWS region |
| `1234567890` | AWS account ID |
| `teleport-helm-backend` | DynamoDB table name to use for the Teleport backend |
| `teleport-helm-events` | DynamoDB table name to use for the Teleport audit log (must be different to the backend table) |
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ClusterStateStorage",
      "Effect": "Allow",
      "Action": [
        "dynamodb:BatchWriteItem",
        "dynamodb:UpdateTimeToLive",
        "dynamodb:PutItem",
        "dynamodb:DeleteItem",
        "dynamodb:Scan",
        "dynamodb:Query",
        "dynamodb:DescribeStream",
        "dynamodb:UpdateItem",
        "dynamodb:DescribeTimeToLive",
        "dynamodb:CreateTable",
        "dynamodb:DescribeTable",
        "dynamodb:GetShardIterator",
        "dynamodb:GetItem",
        "dynamodb:UpdateTable",
        "dynamodb:GetRecords",
        "dynamodb:UpdateContinuousBackups"
      ],
      "Resource": [
        "arn:aws:dynamodb:us-west-2:1234567890:table/teleport-helm-backend",
        "arn:aws:dynamodb:us-west-2:1234567890:table/teleport-helm-backend/stream/*"
      ]
    },
    {
      "Sid": "ClusterEventsStorage",
      "Effect": "Allow",
      "Action": [
        "dynamodb:CreateTable",
        "dynamodb:BatchWriteItem",
        "dynamodb:UpdateTimeToLive",
        "dynamodb:PutItem",
        "dynamodb:DescribeTable",
        "dynamodb:DeleteItem",
        "dynamodb:GetItem",
        "dynamodb:Scan",
        "dynamodb:Query",
        "dynamodb:UpdateItem",
        "dynamodb:DescribeTimeToLive",
        "dynamodb:UpdateTable",
        "dynamodb:UpdateContinuousBackups"
      ],
      "Resource": [
        "arn:aws:dynamodb:us-west-2:1234567890:table/teleport-helm-events",
        "arn:aws:dynamodb:us-west-2:1234567890:table/teleport-helm-events/index/*"
      ]
    }
  ]
}
```
S3 IAM policy
You'll need to replace these values in the policy example below:
| Placeholder value | Replace with |
|---|---|
| `teleport-helm-sessions` | Name to use for the Teleport S3 session recording bucket |
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ClusterSessionsStorage",
      "Effect": "Allow",
      "Action": [
        "s3:PutEncryptionConfiguration",
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetEncryptionConfiguration",
        "s3:GetObjectRetention",
        "s3:ListBucketVersions",
        "s3:ListBucketMultipartUploads",
        "s3:AbortMultipartUpload",
        "s3:CreateBucket",
        "s3:ListBucket",
        "s3:GetBucketVersioning",
        "s3:PutBucketVersioning",
        "s3:GetObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::teleport-helm-sessions/*",
        "arn:aws:s3:::teleport-helm-sessions"
      ]
    }
  ]
}
```
Step 4/7. TLS certificates for Teleport
The `teleport-cluster` chart deploys a Kubernetes `LoadBalancer` service to handle incoming connections to the Teleport Proxy Service.
We now need to configure TLS certificates for Teleport to secure its communications and allow external clients to connect.
There are two supported options when using AWS:
- Use `cert-manager` to provision and automatically renew ACME certs (described in Step 4a).
  - This approach is recommended if you require CLI access to web applications using client certificates via Teleport Application Access.
- Use AWS Certificate Manager (ACM) to handle TLS termination with AWS-managed certificates (described in Step 4b).
  - This will prevent Teleport Application Access from working via the CLI using client certificates. Application Access will still work via a browser.
You must choose only one of these options.
Step 4a. Install and configure cert-manager to handle TLS
In this example, we are using multiple pods to create a High Availability Teleport cluster. As such, we will be using `cert-manager` to centrally provision TLS certificates using Let's Encrypt. These certificates will be mounted into each Teleport pod, and automatically renewed and kept up to date by `cert-manager`.

If you are planning to use `cert-manager`, you will need to add one IAM policy to your cluster to enable it to update Route 53 records.
Route53 IAM policy
This policy allows `cert-manager` to use DNS-01 Let's Encrypt challenges to provision TLS certificates for your Teleport cluster.
You'll need to replace these values in the policy example below:
| Placeholder value | Replace with |
|---|---|
| `Z0159221358P96JYAUAA4` | Route 53 hosted zone ID for the domain hosting your Teleport cluster |
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "route53:GetChange",
      "Resource": "arn:aws:route53:::change/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets",
        "route53:ListResourceRecordSets"
      ],
      "Resource": "arn:aws:route53:::hostedzone/Z0159221358P96JYAUAA4"
    }
  ]
}
```
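Like the policies in Step 3, this one must end up on the EKS node instance role, since `cert-manager` will pick up the node's ambient AWS credentials. A sketch with placeholder policy, file, role, and account names:

```shell
# Create the Route 53 policy from the JSON document above and
# attach it to the node instance role (names are illustrative)
aws iam create-policy --policy-name teleport-route53 \
  --policy-document file://teleport-route53-policy.json
aws iam attach-role-policy --role-name teleport-eks-node-role \
  --policy-arn arn:aws:iam::1234567890:policy/teleport-route53
```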
Installing cert-manager
If you do not have `cert-manager` already configured in the Kubernetes cluster where you are installing Teleport, you should add the Jetstack Helm chart repository which hosts the `cert-manager` chart, and install the chart:

```shell
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --create-namespace \
  --namespace cert-manager \
  --set installCRDs=true \
  --set extraArgs="{--issuer-ambient-credentials}" # required to automount ambient AWS credentials when using an Issuer
```
Once `cert-manager` is installed, you should create and add an `Issuer`.

You'll need to replace these values in the `Issuer` example below:
| Placeholder value | Replace with |
|---|---|
| `[email protected]` | An email address to receive communications from Let's Encrypt |
| `example.com` | The name of the Route 53 domain hosting your Teleport cluster |
| `us-east-1` | AWS region where the cluster is running |
| `Z0159221358P96JYAUAA4` | Route 53 hosted zone ID for the domain hosting your Teleport cluster |
```shell
cat << EOF > aws-issuer.yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-production
  namespace: teleport
spec:
  acme:
    email: [email protected] # Change this
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-production
    solvers:
    - selector:
        dnsZones:
          - "example.com" # Change this
      dns01:
        route53:
          region: us-east-1 # Change this
          hostedZoneID: Z0159221358P96JYAUAA4 # Change this
EOF
```
After you have created the `Issuer` and updated the values, add it to your cluster using `kubectl`:

```shell
kubectl create namespace teleport
kubectl --namespace teleport create -f aws-issuer.yaml
```
Step 4b. Configure Teleport to use ACM to handle TLS
Using ACM will prevent Teleport from handling Application Access via the CLI (using client certificates), as Teleport will not be handling its own TLS termination. If you need to use Teleport Application Access from the command line, you should use `cert-manager` instead (as described in Step 4a above).
To use ACM to handle TLS, add annotations to the chart specifying the ACM certificate ARN to use and the port it should be served on.
Replace `arn:aws:acm:us-east-1:1234567890:certificate/12345678-43c7-4dd1-a2f6-c495b91ebece` with your actual ACM certificate ARN.
```yaml
annotations:
  service:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:1234567890:certificate/12345678-43c7-4dd1-a2f6-c495b91ebece"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
```
You must escape values entered on the command line correctly for Helm's CLI to understand them. This gets harder with nested values containing dots, like AWS annotations. We recommend using a `values.yaml` file instead to avoid confusion and errors.

```shell
--set "annotations.service.service\.beta\.kubernetes\.io/aws-load-balancer-ssl-cert=arn:aws:acm:us-east-1:1234567890:certificate/12345678-43c7-4dd1-a2f6-c495b91ebece" \
--set "annotations.service.service\.beta\.kubernetes\.io/aws-load-balancer-ssl-ports=\"443\"" \
--set "annotations.service.service\.beta\.kubernetes\.io/aws-load-balancer-backend-protocol=ssl"
```
To use an internal AWS network load balancer (as opposed to the default internet-facing NLB), you should add two annotations:

```yaml
service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
```
Step 5/7. Set values to configure the cluster
There are two different ways to configure the `teleport-cluster` Helm chart to use `aws` mode: with a `values.yaml` file, or with `--set` on the command line.

We recommend using a `values.yaml` file, as it can easily be kept in source control. The `--set` CLI method is more appropriate for quick test deployments.
Create an `aws-values.yaml` file and write the values you've chosen above to it:

```yaml
chartMode: aws
clusterName: teleport.example.com                 # Name of your cluster. Use the FQDN you intend to configure in DNS below.
aws:
  region: us-west-2                               # AWS region
  backendTable: teleport-helm-backend             # DynamoDB table to use for the Teleport backend
  auditLogTable: teleport-helm-events             # DynamoDB table to use for the Teleport audit log (must be different to the backend table)
  auditLogMirrorOnStdout: false                   # Whether to mirror audit log entries to stdout in JSON format (useful for external log collectors)
  sessionRecordingBucket: teleport-helm-sessions  # S3 bucket to use for Teleport session recordings
  backups: true                                   # Whether or not to turn on DynamoDB backups
highAvailability:
  replicaCount: 2                                 # Number of replicas to configure
  certManager:
    enabled: true                                 # Enable cert-manager support to get TLS certificates
    issuerName: letsencrypt-production            # Name of the cert-manager Issuer to use (as configured above)
```
Install the chart with the values from your `aws-values.yaml` file using this command:

```shell
helm install teleport teleport/teleport-cluster \
  --create-namespace \
  --namespace teleport \
  -f aws-values.yaml
```

Alternatively, install the chart with `--set`, replacing the placeholders with the values you've chosen above:

```shell
helm install teleport teleport/teleport-cluster \
  --create-namespace \
  --namespace teleport \
  --set chartMode=aws \
  --set clusterName=teleport.example.com \
  --set aws.region=us-west-2 \
  --set aws.backendTable=teleport-helm-backend \
  --set aws.backups=true \
  --set aws.auditLogTable=teleport-helm-events \
  --set aws.sessionRecordingBucket=teleport-helm-sessions \
  --set highAvailability.replicaCount=2 \
  --set highAvailability.certManager.enabled=true \
  --set highAvailability.certManager.issuerName=letsencrypt-production
```
You cannot change the `clusterName` after the cluster is configured, so choose carefully: use the fully-qualified domain name that you'll use for external access to your Teleport cluster.
Once the chart is installed, you can use `kubectl` commands to view the deployment:

```shell
$ kubectl --namespace teleport get all

NAME                            READY   STATUS    RESTARTS   AGE
pod/teleport-5cf46ddf5f-dzh65   1/1     Running   0          4m21s
pod/teleport-5cf46ddf5f-mpghq   1/1     Running   0          4m21s

NAME               TYPE           CLUSTER-IP      EXTERNAL-IP                                                               PORT(S)                                                      AGE
service/teleport   LoadBalancer   10.100.37.171   a232d92df01f940339adea0e645d88bb-1576732600.us-east-1.elb.amazonaws.com   443:30821/TCP,3023:30801/TCP,3026:32612/TCP,3024:31253/TCP   4m21s

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/teleport   2/2     2            2           4m21s

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/teleport-5cf46ddf5f   2         2         2       4m21s
```
Step 6/7. Set up DNS
You'll need to set up a DNS `A` record for `teleport.example.com`. In our example, this record is an alias to an ELB.

Teleport assigns a subdomain to each application you have configured for Application Access (e.g., `grafana.teleport.example.com`), so you will need to ensure that a DNS `A` record exists for each application-specific subdomain so clients can access your applications via Teleport.

You should create either a separate DNS `A` record for each subdomain, or a single record with a wildcard subdomain such as `*.teleport.example.com`. This way, your certificate authority (e.g., Let's Encrypt) can issue a certificate for each subdomain, enabling clients to verify your Teleport hosts regardless of the application they are accessing.
Here's how to do this in a hosted zone with AWS Route 53:
```shell
# Change these parameters if you altered them above
NAMESPACE='teleport'
RELEASE_NAME='teleport'

# DNS settings (change as necessary)
MYZONE_DNS='example.com'
MYDNS='teleport.example.com'
MY_CLUSTER_REGION='us-west-2'

# Find the AWS Zone ID and ELB Zone ID
MYZONE="$(aws route53 list-hosted-zones-by-name --dns-name="${MYZONE_DNS?}" | jq -r '.HostedZones[0].Id' | sed s_/hostedzone/__)"
MYELB="$(kubectl --namespace "${NAMESPACE?}" get "service/${RELEASE_NAME?}" -o jsonpath='{.status.loadBalancer.ingress[*].hostname}')"
MYELB_NAME="${MYELB%%-*}"
MYELB_ZONE="$(aws elbv2 describe-load-balancers --region "${MY_CLUSTER_REGION?}" --names "${MYELB_NAME?}" | jq -r '.LoadBalancers[0].CanonicalHostedZoneId')"

# Create a JSON changeset file for AWS
jq -n --arg dns "${MYDNS?}" --arg elb "${MYELB?}" --arg elbz "${MYELB_ZONE?}" \
  '{
    "Comment": "Create records",
    "Changes": [
      {
        "Action": "CREATE",
        "ResourceRecordSet": {
          "Name": $dns,
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": $elbz,
            "DNSName": ("dualstack." + $elb),
            "EvaluateTargetHealth": false
          }
        }
      },
      {
        "Action": "CREATE",
        "ResourceRecordSet": {
          "Name": ("*." + $dns),
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": $elbz,
            "DNSName": ("dualstack." + $elb),
            "EvaluateTargetHealth": false
          }
        }
      }
    ]
  }' > myrecords.json

# Review the records before applying
cat myrecords.json | jq

# Apply the records and capture the change ID
CHANGEID="$(aws route53 change-resource-record-sets --hosted-zone-id "${MYZONE?}" --change-batch file://myrecords.json | jq -r '.ChangeInfo.Id')"

# Verify that the change has been applied
aws route53 get-change --id "${CHANGEID?}" | jq '.ChangeInfo.Status'
"INSYNC"
```
Step 7/7. Create a Teleport user
Create a user to be able to log into Teleport. This needs to be done on the Teleport Auth Server, so we can run the command using `kubectl`:

```shell
$ kubectl --namespace teleport exec deploy/teleport -- tctl users add test --roles=access,editor

User "test" has been created but requires a password. Share this URL with the user to complete user setup, link is valid for 1h:
https://teleport.example.com:443/web/invite/91cfbd08bc89122275006e48b516cc68

NOTE: Make sure teleport.example.com:443 points at a Teleport proxy that users can access.
```

Load the user creation link to create a password and set up two-factor authentication for the Teleport user via the web UI.
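Once the user is set up, you can verify CLI access with `tsh`, using the cluster address and username from the examples above:

```shell
# Log in to the cluster through the Proxy Service
tsh login --proxy=teleport.example.com:443 --user=test

# Confirm the session and list the resources you can access
tsh status
tsh ls
```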
High Availability
In this guide, we have configured two replicas. This can be changed after cluster creation by altering the `highAvailability.replicaCount` value using `helm upgrade`, as detailed below.
Upgrading the cluster after deployment
To make changes to your Teleport cluster after deployment, you can use `helm upgrade`.

Helm defaults to using the latest version of the chart available in the repo, which will also correspond to the latest version of Teleport. You can make sure that the repo is up to date by running `helm repo update`.
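If you want to see which chart versions are available before upgrading, Helm can list them (`--versions` is a standard `helm search repo` flag):

```shell
helm repo update
helm search repo teleport/teleport-cluster --versions | head -5
```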
Here's an example where we set the chart to use 3 replicas:
Edit your `aws-values.yaml` file from above and make the appropriate changes, then upgrade the deployment with this command:

```shell
helm upgrade teleport teleport/teleport-cluster \
  --namespace teleport \
  -f aws-values.yaml
```

Alternatively, run this command, editing your command-line parameters as appropriate:

```shell
helm upgrade teleport teleport/teleport-cluster \
  --namespace teleport \
  --set highAvailability.replicaCount=3
```
To change `chartMode`, `clusterName`, or any `aws` settings, you must first uninstall the existing chart and then install a new version with the appropriate values.
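For example, changing one of these values could look like the following sketch, reusing the release and values file names from this guide:

```shell
# Remove the existing release
helm --namespace teleport uninstall teleport

# Edit aws-values.yaml to set the new values, then reinstall
helm install teleport teleport/teleport-cluster \
  --namespace teleport \
  -f aws-values.yaml
```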
Uninstalling Teleport
To uninstall the `teleport-cluster` chart, use `helm uninstall <release-name>`. For example:

```shell
helm --namespace teleport uninstall teleport
```
Uninstalling cert-manager
If you want to remove the `cert-manager` installation later, you can use this command:

```shell
helm --namespace cert-manager uninstall cert-manager
```
Next steps
- You can follow our Getting Started with Teleport guide to finish setting up your Teleport cluster.
- See the high availability section of our Helm chart reference for more details on high availability.
- Read the `cert-manager` documentation.