
In this guide, we'll go through how to set up a High Availability Teleport cluster with multiple replicas in Kubernetes using Teleport Helm charts and AWS products (DynamoDB and S3).
If you are already running Teleport on another platform, you can use your existing Teleport deployment to access your Kubernetes cluster. Follow our guide to connect your Kubernetes cluster to Teleport.
Teleport Cloud takes care of this setup for you so you can provide secure access to your infrastructure right away.
Get started with a free trial of Teleport Cloud.
Prerequisites
- Kubernetes >= v1.17.0
- Helm >= v3.4.2
Verify that Helm and Kubernetes are installed and up to date.
When running Teleport in production, we recommend that you follow the practices below to avoid security incidents. These practices may differ from the examples used in this guide, which are intended for demo environments:
- Avoid using sudo in production environments unless it's necessary.
- Create new, non-root users and use test instances for experimenting with Teleport.
- Run Teleport's services as a non-root user unless required. Only the SSH Service requires root access. Note that you will need root permissions (or the CAP_NET_BIND_SERVICE capability) to make Teleport listen on a port numbered lower than 1024 (e.g., 443).
- Follow the "Principle of Least Privilege" (PoLP). Don't give users permissive roles when more restrictive roles will do instead. For example, assign users the built-in access and editor roles.
- When joining a Teleport resource service (e.g., the Database Service or Application Service) to a cluster, save the invitation token to a file. Otherwise, the token will be visible when examining the teleport command that started the agent, e.g., via the history command on a compromised system.
Step 1/7. Install Helm
Teleport's charts require the use of Helm version 3. You can install Helm 3 by following these instructions.
Throughout this guide, we will assume that you have the helm and kubectl binaries available in your PATH:
helm version
version.BuildInfo{Version:"v3.4.2"}
kubectl version
Client Version: version.Info{Major:"1", Minor:"17+"}
Server Version: version.Info{Major:"1", Minor:"17+"}
Step 2/7. Add the Teleport Helm chart repository
To allow Helm to install charts that are hosted in the Teleport Helm repository, use helm repo add:
helm repo add teleport https://charts.releases.teleport.dev
To update the cache of charts from the remote repository, run helm repo update:
helm repo update
Step 3/7. Set up AWS IAM configuration
For Teleport to be able to manage the DynamoDB tables, indexes, and the S3 storage bucket it needs, you'll need to configure AWS IAM policies to allow access.
These IAM policies should be added to your AWS account and then granted to the instance role associated with the EKS node groups that run your Kubernetes nodes.
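As a rough sketch of how this can be done with the AWS CLI (the policy name, JSON file name, and node group role name below are placeholders for illustration; substitute the instance role your EKS node group actually uses):

# Create an IAM policy from a local file containing one of the policy documents below (hypothetical file name)
aws iam create-policy --policy-name teleport-dynamodb --policy-document file://dynamodb-policy.json
# Attach the resulting policy to the EKS node group's instance role (hypothetical role name)
aws iam attach-role-policy --role-name my-eks-nodegroup-instance-role \
  --policy-arn arn:aws:iam::1234567890:policy/teleport-dynamodb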
DynamoDB IAM policy
On startup, the Teleport Auth Service checks whether the DynamoDB table you have specified in its configuration file exists. If the table does not exist, the Auth Service attempts to create one.
The IAM permissions that the Auth Service requires to manage DynamoDB tables depend on whether you expect to create a table yourself or enable the Auth Service to create and configure one for you:
If you choose to manage a DynamoDB table yourself, the table must have the following attribute definitions:
Name | Type |
---|---|
HashKey | S |
FullPath | S |
The table must also have the following key schema elements:
Name | Type |
---|---|
HashKey | HASH |
FullPath | RANGE |
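For illustration, a table with this schema could be created with the AWS CLI as follows. This is a minimal sketch assuming on-demand (pay-per-request) billing and the us-west-2 region used elsewhere in this guide; adjust the table name, region, and capacity mode for your environment:

# Create the Teleport backend table with the required hash/range key schema
aws dynamodb create-table \
  --region us-west-2 \
  --table-name teleport-helm-backend \
  --attribute-definitions AttributeName=HashKey,AttributeType=S AttributeName=FullPath,AttributeType=S \
  --key-schema AttributeName=HashKey,KeyType=HASH AttributeName=FullPath,KeyType=RANGE \
  --billing-mode PAY_PER_REQUEST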
You'll need to replace these values in the policy example below:
Placeholder value | Replace with |
---|---|
us-west-2 | AWS region |
1234567890 | AWS account ID |
teleport-helm-backend | DynamoDB table name to use for the Teleport backend |
teleport-helm-events | DynamoDB table name to use for the Teleport audit log (must be different to the backend table) |
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ClusterStateStorage",
"Effect": "Allow",
"Action": [
"dynamodb:BatchWriteItem",
"dynamodb:UpdateTimeToLive",
"dynamodb:PutItem",
"dynamodb:DeleteItem",
"dynamodb:Scan",
"dynamodb:Query",
"dynamodb:DescribeStream",
"dynamodb:UpdateItem",
"dynamodb:DescribeTimeToLive",
"dynamodb:DescribeTable",
"dynamodb:GetShardIterator",
"dynamodb:GetItem",
"dynamodb:UpdateTable",
"dynamodb:GetRecords",
"dynamodb:UpdateContinuousBackups"
],
"Resource": [
"arn:aws:dynamodb:us-west-2:1234567890:table/teleport-helm-backend",
"arn:aws:dynamodb:us-west-2:1234567890:table/teleport-helm-backend/stream/*"
]
},
{
"Sid": "ClusterEventsStorage",
"Effect": "Allow",
"Action": [
"dynamodb:BatchWriteItem",
"dynamodb:UpdateTimeToLive",
"dynamodb:PutItem",
"dynamodb:DescribeTable",
"dynamodb:DeleteItem",
"dynamodb:GetItem",
"dynamodb:Scan",
"dynamodb:Query",
"dynamodb:UpdateItem",
"dynamodb:DescribeTimeToLive",
"dynamodb:UpdateTable",
"dynamodb:UpdateContinuousBackups"
],
"Resource": [
"arn:aws:dynamodb:us-west-2:1234567890:table/teleport-helm-events",
"arn:aws:dynamodb:us-west-2:1234567890:table/teleport-helm-events/index/*"
]
}
]
}
Note that you can omit the dynamodb:UpdateContinuousBackups permission if you are disabling continuous backups.
If you choose to let the Auth Service create and configure the DynamoDB tables for you, use the following policy, which also includes the dynamodb:CreateTable permission. You'll need to replace these values in the policy example below:
Placeholder value | Replace with |
---|---|
us-west-2 | AWS region |
1234567890 | AWS account ID |
teleport-helm-backend | DynamoDB table name to use for the Teleport backend |
teleport-helm-events | DynamoDB table name to use for the Teleport audit log (must be different to the backend table) |
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ClusterStateStorage",
"Effect": "Allow",
"Action": [
"dynamodb:BatchWriteItem",
"dynamodb:UpdateTimeToLive",
"dynamodb:PutItem",
"dynamodb:DeleteItem",
"dynamodb:Scan",
"dynamodb:Query",
"dynamodb:DescribeStream",
"dynamodb:UpdateItem",
"dynamodb:DescribeTimeToLive",
"dynamodb:CreateTable",
"dynamodb:DescribeTable",
"dynamodb:GetShardIterator",
"dynamodb:GetItem",
"dynamodb:UpdateTable",
"dynamodb:GetRecords",
"dynamodb:UpdateContinuousBackups"
],
"Resource": [
"arn:aws:dynamodb:us-west-2:1234567890:table/teleport-helm-backend",
"arn:aws:dynamodb:us-west-2:1234567890:table/teleport-helm-backend/stream/*"
]
},
{
"Sid": "ClusterEventsStorage",
"Effect": "Allow",
"Action": [
"dynamodb:CreateTable",
"dynamodb:BatchWriteItem",
"dynamodb:UpdateTimeToLive",
"dynamodb:PutItem",
"dynamodb:DescribeTable",
"dynamodb:DeleteItem",
"dynamodb:GetItem",
"dynamodb:Scan",
"dynamodb:Query",
"dynamodb:UpdateItem",
"dynamodb:DescribeTimeToLive",
"dynamodb:UpdateTable",
"dynamodb:UpdateContinuousBackups"
],
"Resource": [
"arn:aws:dynamodb:us-west-2:1234567890:table/teleport-helm-events",
"arn:aws:dynamodb:us-west-2:1234567890:table/teleport-helm-events/index/*"
]
}
]
}
S3 IAM policy
On startup, the Teleport Auth Service checks whether the S3 bucket you have configured for session recording storage exists. If it does not, the Auth Service attempts to create and configure the bucket.
The IAM permissions that the Auth Service requires to manage its session recording bucket depend on whether you expect to create the bucket yourself or enable the Auth Service to create and configure it for you:
Note that Teleport will only use S3 buckets with versioning enabled. This ensures that a session log cannot be permanently altered or deleted, as Teleport will always look at the oldest version of a recording.
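If you plan to create the bucket yourself, the following AWS CLI sketch shows one way to create it with versioning (and, optionally, default encryption) enabled. The bucket name and region are placeholders, and the encryption setting is an example rather than a requirement of this guide:

# Create the session recording bucket (placeholder name and region)
aws s3api create-bucket --bucket your-sessions-bucket --region us-west-2 \
  --create-bucket-configuration LocationConstraint=us-west-2
# Enable versioning, which Teleport requires for session recording buckets
aws s3api put-bucket-versioning --bucket your-sessions-bucket \
  --versioning-configuration Status=Enabled
# Optionally enable default server-side encryption
aws s3api put-bucket-encryption --bucket your-sessions-bucket \
  --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]}'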
If you choose to manage the S3 bucket yourself, use the following policy. You'll need to replace these values in the policy example below:
Placeholder value | Replace with |
---|---|
your-sessions-bucket | Name to use for the Teleport S3 session recording bucket |
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "BucketActions",
"Effect": "Allow",
"Action": [
"s3:ListBucketVersions",
"s3:ListBucketMultipartUploads",
"s3:ListBucket",
"s3:GetEncryptionConfiguration",
"s3:GetBucketVersioning"
],
"Resource": "arn:aws:s3:::your-sessions-bucket"
},
{
"Sid": "ObjectActions",
"Effect": "Allow",
"Action": [
"s3:GetObjectVersion",
"s3:GetObjectRetention",
"s3:*Object",
"s3:ListMultipartUploadParts",
"s3:AbortMultipartUpload"
],
"Resource": "arn:aws:s3:::your-sessions-bucket/*"
}
]
}
If you choose to let the Auth Service create and configure the bucket for you, use the following policy, which also includes the s3:CreateBucket, s3:PutBucketVersioning, and s3:PutEncryptionConfiguration permissions. You'll need to replace these values in the policy example below:
Placeholder value | Replace with |
---|---|
your-sessions-bucket | Name to use for the Teleport S3 session recording bucket |
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "BucketActions",
"Effect": "Allow",
"Action": [
"s3:PutEncryptionConfiguration",
"s3:PutBucketVersioning",
"s3:ListBucketVersions",
"s3:ListBucketMultipartUploads",
"s3:ListBucket",
"s3:GetEncryptionConfiguration",
"s3:GetBucketVersioning",
"s3:CreateBucket"
],
"Resource": "arn:aws:s3:::your-sessions-bucket"
},
{
"Sid": "ObjectActions",
"Effect": "Allow",
"Action": [
"s3:GetObjectVersion",
"s3:GetObjectRetention",
"s3:*Object",
"s3:ListMultipartUploadParts",
"s3:AbortMultipartUpload"
],
"Resource": "arn:aws:s3:::your-sessions-bucket/*"
}
]
}
Step 4/7. Configure TLS certificates for Teleport
The teleport-cluster chart deploys a Kubernetes LoadBalancer to handle incoming connections to the Teleport Proxy Service.
We now need to configure TLS certificates for Teleport to secure its communications and allow external clients to connect.
Determining an approach
There are three supported options when using AWS. You must choose only one of these options:
Using cert-manager
You can use cert-manager to provision and automatically renew TLS credentials by completing ACME challenges via Let's Encrypt. We recommend this approach if you require CLI access to web applications using client certificates via the Teleport Application Service.
Using AWS Certificate Manager
You can use AWS Certificate Manager to handle TLS termination with AWS-managed certificates.
You should be aware of the following limitations of using AWS Certificate Manager to provision TLS credentials for Teleport:
- Command-line application access does not work with ACM. Using ACM prevents Teleport from facilitating application access via the CLI (using client certificates), because Teleport will not be handling its own TLS termination. Application access will still work via a browser.
- Using ACM through an AWS Load Balancer prevents Postgres and MongoDB traffic from passing through Teleport's web port. If you choose the ACM approach, we will show you how to configure a separate listener for Postgres or MongoDB.
If you would like the Teleport Application Service and Database Service to function as expected, you should use the cert-manager approach unless there is a specific reason to use ACM.
Using your own TLS credentials
With this approach, you are responsible for determining how to obtain a TLS certificate and private key for your Teleport cluster, and for renewing your credentials periodically. Use this approach if you would like to use a trusted internal certificate authority instead of Let's Encrypt or AWS Certificate Manager.
In this example, we are using multiple pods to create a High Availability Teleport cluster. As such, we will be using cert-manager to centrally provision TLS certificates using Let's Encrypt. These certificates will be mounted into each Teleport pod, and automatically renewed and kept up to date by cert-manager.
If you are planning to use cert-manager, you will need to add one IAM policy to your cluster to enable it to update Route 53 records.
Route53 IAM policy
This policy allows cert-manager to use DNS01 Let's Encrypt challenges to provision TLS certificates for your Teleport cluster.
You'll need to replace these values in the policy example below:
Placeholder value | Replace with |
---|---|
Z0159221358P96JYAUAA4 | Route 53 hosted zone ID for the domain hosting your Teleport cluster |
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "route53:GetChange",
"Resource": "arn:aws:route53:::change/*"
},
{
"Effect": "Allow",
"Action": [
"route53:ChangeResourceRecordSets",
"route53:ListResourceRecordSets"
],
"Resource": "arn:aws:route53:::hostedzone/Z0159221358P96JYAUAA4"
}
]
}
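If you are not sure of your hosted zone ID, you can look it up with the AWS CLI (example.com is a placeholder for your domain):

# Print the hosted zone ID for your domain
aws route53 list-hosted-zones-by-name --dns-name example.com \
  --query 'HostedZones[0].Id' --output text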
Installing cert-manager
If you do not have cert-manager already configured in the Kubernetes cluster where you are installing Teleport, you should add the Jetstack Helm chart repository, which hosts the cert-manager chart, and install the chart:
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --create-namespace \
  --namespace cert-manager \
  --set installCRDs=true \
  --set extraArgs="{--issuer-ambient-credentials}" # required to automount ambient AWS credentials when using an Issuer
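Before creating the Issuer, you can optionally confirm that the cert-manager pods have started:

kubectl --namespace cert-manager get pods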
Once cert-manager is installed, you should create and add an Issuer.
You'll need to replace these values in the Issuer example below:
Placeholder value | Replace with |
---|---|
[email protected] | An email address to receive communications from Let's Encrypt |
example.com | The name of the Route 53 domain hosting your Teleport cluster |
us-east-1 | AWS region where the cluster is running |
Z0159221358P96JYAUAA4 | Route 53 hosted zone ID for the domain hosting your Teleport cluster |
cat << EOF > aws-issuer.yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-production
  namespace: teleport
spec:
  acme:
    email: [email protected] # Change this
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-production
    solvers:
    - selector:
        dnsZones:
          - "example.com" # Change this
      dns01:
        route53:
          region: us-east-1 # Change this
          hostedZoneID: Z0159221358P96JYAUAA4 # Change this
EOF
After you have created the Issuer and updated the values, add it to your cluster using kubectl:
kubectl create namespace teleport
kubectl label namespace teleport 'pod-security.kubernetes.io/enforce=baseline'
kubectl --namespace teleport create -f aws-issuer.yaml
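You can optionally check that the Issuer has registered successfully with Let's Encrypt by inspecting its status (letsencrypt-production is the Issuer name used above):

kubectl --namespace teleport describe issuer letsencrypt-production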
If you chose the ACM approach instead, in this step you will configure Teleport to use AWS Certificate Manager (ACM) to provision your Teleport instances with TLS credentials.
To use ACM to handle TLS, add annotations to the chart specifying the ACM certificate ARN to use and the port it should be served on.
Replace arn:aws:acm:us-east-1:1234567890:certificate/12345678-43c7-4dd1-a2f6-c495b91ebece with your actual ACM certificate ARN.
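If you have not yet issued a certificate, the following sketch shows how one could be requested with DNS validation via the AWS CLI. The region and domain names are examples; the certificate should cover your cluster name and, if you use Application Access, the wildcard subdomain:

# Request an ACM certificate covering the cluster name and its subdomains (example domains)
aws acm request-certificate \
  --region us-east-1 \
  --domain-name teleport.example.com \
  --subject-alternative-names '*.teleport.example.com' \
  --validation-method DNS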
Edit your values.yaml file to complete the annotations.service field as follows:
annotations:
  service:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:1234567890:certificate/12345678-43c7-4dd1-a2f6-c495b91ebece"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
To use an internal AWS network load balancer (as opposed to the default internet-facing NLB), you should add two annotations:
service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
If you plan to use Postgres or MongoDB with Teleport, add the following options, depending on whether you are running PostgreSQL or MongoDB, to your values file:
separatePostgresListener: true
separateMongoListener: true
If you are using your own TLS credentials, you can configure the teleport-cluster Helm chart to secure the Teleport Web UI using existing TLS credentials within a Kubernetes secret.
Use the following command to create your secret:
kubectl -n teleport create secret tls my-tls-secret --cert=/path/to/cert/file --key=/path/to/key/file
Edit your values.yaml file to refer to the name of your secret:
tls:
  existingSecretName: my-tls-secret
Step 5/7. Set values to configure the cluster
Next, configure the teleport-cluster Helm chart to use the aws mode. Create a file called aws-values.yaml and write the values you've chosen above to it:
chartMode: aws
clusterName: teleport.example.com # Name of your cluster. Use the FQDN you intend to configure in DNS below.
aws:
  region: us-west-2 # AWS region
  backendTable: teleport-helm-backend # DynamoDB table to use for the Teleport backend
  auditLogTable: teleport-helm-events # DynamoDB table to use for the Teleport audit log (must be different to the backend table)
  auditLogMirrorOnStdout: false # Whether to mirror audit log entries to stdout in JSON format (useful for external log collectors)
  sessionRecordingBucket: teleport-helm-sessions # S3 bucket to use for Teleport session recordings
  backups: true # Whether or not to turn on DynamoDB backups
  dynamoAutoScaling: false # Whether Teleport should configure DynamoDB's autoscaling.
highAvailability:
  replicaCount: 2 # Number of replicas to configure
  certManager:
    enabled: true # Enable cert-manager support to get TLS certificates
    issuerName: letsencrypt-production # Name of the cert-manager Issuer to use (as configured above)
# If you are running Kubernetes 1.23 or above, disable PodSecurityPolicies
podSecurityPolicy:
  enabled: false
If you are using ACM instead of cert-manager, your aws-values.yaml file should instead look like this:
chartMode: aws
clusterName: teleport.example.com # Name of your cluster. Use the FQDN you intend to configure in DNS below.
aws:
  region: us-west-2 # AWS region
  backendTable: teleport-helm-backend # DynamoDB table to use for the Teleport backend
  auditLogTable: teleport-helm-events # DynamoDB table to use for the Teleport audit log (must be different to the backend table)
  auditLogMirrorOnStdout: false # Whether to mirror audit log entries to stdout in JSON format (useful for external log collectors)
  sessionRecordingBucket: teleport-helm-sessions # S3 bucket to use for Teleport session recordings
  backups: true # Whether or not to turn on DynamoDB backups
  dynamoAutoScaling: false # Whether Teleport should configure DynamoDB's autoscaling.
highAvailability:
  replicaCount: 2 # Number of replicas to configure
annotations:
  service:
    # Replace with your AWS certificate ARN
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:1234567890:certificate/12345678-43c7-4dd1-a2f6-c495b91ebece"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
# If you are running Kubernetes 1.23 or above, disable PodSecurityPolicies
podSecurityPolicy:
  enabled: false
Install the chart with the values from your aws-values.yaml file using this command:
helm install teleport teleport/teleport-cluster \
  --create-namespace \
  --namespace teleport \
  -f aws-values.yaml
You cannot change the clusterName after the cluster is configured, so make sure you choose wisely. You should use the fully-qualified domain name that you'll use for external access to your Teleport cluster.
Once the chart is installed, you can use kubectl commands to view the deployment:
kubectl --namespace teleport get all
NAME READY STATUS RESTARTS AGE
pod/teleport-auth-57989d4cbd-4q2ds 1/1 Running 0 22h
pod/teleport-auth-57989d4cbd-rtrzn 1/1 Running 0 22h
pod/teleport-proxy-c6bf55cfc-w96d2 1/1 Running 0 22h
pod/teleport-proxy-c6bf55cfc-z256w 1/1 Running 0 22h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/teleport LoadBalancer 10.40.11.180 xxxxx.elb.us-east-1.amazonaws.com 443:30258/TCP,3023:31802/TCP,3026:32182/TCP,3024:30101/TCP,3036:30302/TCP 22h
service/teleport-auth ClusterIP 10.40.8.251 <none> 3025/TCP,3026/TCP 22h
service/teleport-auth-v11 ClusterIP None <none> <none> 22h
service/teleport-auth-v12 ClusterIP None <none> <none> 22h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/teleport-auth 2/2 2 2 22h
deployment.apps/teleport-proxy 2/2 2 2 22h
NAME DESIRED CURRENT READY AGE
replicaset.apps/teleport-auth-57989d4cbd 2 2 2 22h
replicaset.apps/teleport-proxy-c6bf55cfc 2 2 2 22h
Step 6/7. Set up DNS
You'll need to set up a DNS A record for teleport.example.com. In our example, this record is an alias to an ELB.
Teleport assigns a subdomain to each application you have configured for Application Access (e.g., grafana.teleport.example.com), so you will need to ensure that a DNS A (or CNAME for services that only provide a hostname) record exists for each application-specific subdomain so clients can access your applications via Teleport.
You should create either a separate DNS record for each subdomain, or a single record with a wildcard subdomain such as *.teleport.example.com. This way, your certificate authority (e.g., Let's Encrypt) can issue a certificate for each subdomain, enabling clients to verify your Teleport hosts regardless of the application they are accessing.
Here's how to do this in a hosted zone with AWS Route 53:
# Change these parameters if you altered them above
NAMESPACE='teleport'
RELEASE_NAME='teleport'

# DNS settings (change as necessary)
MYZONE_DNS='example.com'
MYDNS='teleport.example.com'
MY_CLUSTER_REGION='us-west-2'

# Find the AWS Zone ID and ELB Zone ID
MYZONE="$(aws route53 list-hosted-zones-by-name --dns-name="${MYZONE_DNS?}" | jq -r '.HostedZones[0].Id' | sed s_/hostedzone/__)"
MYELB="$(kubectl --namespace "${NAMESPACE?}" get "service/${RELEASE_NAME?}" -o jsonpath='{.status.loadBalancer.ingress[*].hostname}')"
MYELB_NAME="${MYELB%%-*}"
MYELB_ZONE="$(aws elbv2 describe-load-balancers --region "${MY_CLUSTER_REGION?}" --names "${MYELB_NAME?}" | jq -r '.LoadBalancers[0].CanonicalHostedZoneId')"

# Create a JSON file changeset for AWS
jq -n --arg dns "${MYDNS?}" --arg elb "${MYELB?}" --arg elbz "${MYELB_ZONE?}" \
  '{
    "Comment": "Create records",
    "Changes": [
      {
        "Action": "CREATE",
        "ResourceRecordSet": {
          "Name": $dns,
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": $elbz,
            "DNSName": ("dualstack." + $elb),
            "EvaluateTargetHealth": false
          }
        }
      },
      {
        "Action": "CREATE",
        "ResourceRecordSet": {
          "Name": ("*." + $dns),
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": $elbz,
            "DNSName": ("dualstack." + $elb),
            "EvaluateTargetHealth": false
          }
        }
      }
    ]
  }' > myrecords.json

# Review records before applying
cat myrecords.json | jq

# Apply the records and capture the change ID
CHANGEID="$(aws route53 change-resource-record-sets --hosted-zone-id "${MYZONE?}" --change-batch file://myrecords.json | jq -r '.ChangeInfo.Id')"

# Verify that the change has been applied
aws route53 get-change --id "${CHANGEID?}" | jq '.ChangeInfo.Status'
"INSYNC"
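Once the change reaches INSYNC, you can optionally confirm that the records resolve. DNS propagation can take a few minutes, and grafana.teleport.example.com here is just an example application subdomain covered by the wildcard record:

dig +short teleport.example.com
dig +short grafana.teleport.example.com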
Step 7/7. Create a Teleport user
Create a user to be able to log into Teleport. This needs to be done on the Teleport auth server, so we can run the command using kubectl:
kubectl --namespace teleport exec deploy/teleport-auth -- tctl users add test --roles=access,editor
User "test" has been created but requires a password. Share this URL with the user to complete user setup, link is valid for 1h:
https://teleport.example.com:443/web/invite/91cfbd08bc89122275006e48b516cc68
NOTE: Make sure teleport.example.com:443 points at a Teleport proxy that users can access.
Load the user creation link to create a password and set up 2-factor authentication for the Teleport user via the web UI.
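After completing setup, you can verify access from a machine with the tsh client installed. The proxy address and user name below match the examples used in this guide:

# Log in to the cluster and show the resulting session details
tsh login --proxy=teleport.example.com:443 --user=test
tsh status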
High Availability
In this guide, we have configured two replicas. This can be changed after cluster creation by altering the highAvailability.replicaCount value using helm upgrade, as detailed below.
Upgrading the cluster after deployment
To make changes to your Teleport cluster after deployment, you can use helm upgrade.
Helm defaults to using the latest version of the chart available in the repo, which will also correspond to the latest version of Teleport. You can make sure that the repo is up to date by running helm repo update.
Here's an example where we set the chart to use 3 replicas:
Edit your aws-values.yaml file from above and make the appropriate changes.
Upgrade the deployment with the values from your aws-values.yaml file using this command:
helm upgrade teleport teleport/teleport-cluster \
  --namespace teleport \
  -f aws-values.yaml
Run this command, editing your command line parameters as appropriate:
helm upgrade teleport teleport/teleport-cluster \
  --namespace teleport \
  --set highAvailability.replicaCount=3
To change chartMode, clusterName, or any aws settings, you must first uninstall the existing chart and then install a new version with the appropriate values.
Autoscaling
To reduce DynamoDB costs, you might want to enable DynamoDB autoscaling. This step is usually done after a successful Teleport deployment, once you have gathered some data about Teleport's DynamoDB usage, know what regular usage looks like, and understand how autoscaling should be tuned. You must know the desired read/write minimum, maximum, and target capacity for your DynamoDB tables in order to enable autoscaling.
You can delegate your autoscaling configuration to Teleport or manage it by creating an AWS Application Auto Scaling policy. The following steps will set up Teleport-configured DynamoDB autoscaling.
You must grant autoscaling configuration rights to Teleport, as documented in the DynamoDB autoscaling section.
Set the following fields in your existing aws-values.yaml file and replace the numeric values with yours:
aws:
  # [...] already present values under `aws`
  dynamoAutoScaling: true
  readMinCapacity: 5 # integer
  readMaxCapacity: 100 # integer
  readTargetValue: 50.0 # float
  writeMinCapacity: 5 # integer
  writeMaxCapacity: 100 # integer
  writeTargetValue: 50.0 # float
Then perform a cluster upgrade with the new values:
helm upgrade teleport teleport/teleport-cluster \
  --namespace teleport \
  -f aws-values.yaml
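As a sanity check after the upgrade, you can ask AWS whether scalable targets were registered for your DynamoDB tables. This is a hedged example; the exact output depends on your tables, region, and capacity settings:

aws application-autoscaling describe-scalable-targets \
  --service-namespace dynamodb --region us-west-2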
Uninstalling Teleport
To uninstall the teleport-cluster chart, use helm uninstall <release-name>. For example:
helm --namespace teleport uninstall teleport
Uninstalling cert-manager
If you want to remove the cert-manager installation later, you can use this command:
helm --namespace cert-manager uninstall cert-manager
Troubleshooting
AWS quotas
If your deployment of Teleport services brings you over your default service quotas, you can request a quota increase from the AWS Support Center. See Amazon's AWS service quotas documentation for more information.
For example, when using DynamoDB as the backend for Teleport cluster state, you may need to request increases for read/write quotas.
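You can review your current quotas with the AWS CLI before filing a request; this lists the quotas for DynamoDB as an example:

aws service-quotas list-service-quotas --service-code dynamodb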
Next steps
Now that you have deployed a Teleport cluster, read the Manage Access section to get started enrolling users and setting up RBAC.
See the high availability section of our Helm chart reference for more details on high availability.
Read the cert-manager documentation.