
Running a Teleport cluster with a custom configuration using Helm


In this guide, we'll set up a Teleport cluster in Kubernetes with a custom teleport.yaml config file, using the Teleport Helm charts.

This setup can be useful when you already have an existing Teleport cluster and would like to start running it in Kubernetes, or when migrating your setup from a legacy version of the Helm charts.


Verify that Helm and Kubernetes are installed and up to date.

The examples below may include the use of the sudo command, token UUIDs, and users with elevated privileges to make following each step easier.

We recommend you follow the best practices to avoid security incidents:

  1. Avoid using sudo in production environments unless it's necessary.
  2. Create new, non-root, users and use test instances for experimenting with Teleport.
  3. Many of Teleport's services can run as a non-root user; for example, the auth, proxy, application access, Kubernetes access, and database access services can all run unprivileged. Only the SSH/node service requires root access: you will need root permissions (or the CAP_NET_BIND_SERVICE capability) to make Teleport listen on a port numbered lower than 1024 (e.g. 443).
  4. Follow the "Principle of Least Privilege" (PoLP) and "Zero Admin" best practices. Don't give users permissive roles when more restrictive access or editor roles will do instead.
  5. Save tokens into a file rather than sharing tokens directly as strings.

Step 1. Install Helm

Teleport's charts require the use of Helm version 3. You can install Helm 3 by following these instructions.

Throughout this guide, we will assume that you have the helm and kubectl binaries available in your PATH:

$ helm version
# version.BuildInfo{Version:"v3.4.2"}

$ kubectl version
# Client Version: version.Info{Major:"1", Minor:"17+"}
# Server Version: version.Info{Major:"1", Minor:"17+"}
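As a quick sanity check before proceeding, you can verify that the installed Helm is version 3 from the command line. This is an illustrative sketch; it assumes helm is on your PATH and uses the --template flag of helm version:

```shell
# Print the short Helm version (e.g. "v3.4.2") and warn if it isn't Helm 3.
# Falls back to "v0" if helm is not installed or the command fails.
v="$(helm version --template '{{.Version}}' 2>/dev/null || echo v0)"
case "$v" in
  v3.*) echo "Helm 3 detected: $v" ;;
  *)    echo "Helm 3 is required by the Teleport charts, found: $v" ;;
esac
```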

Step 2. Add the Teleport Helm chart repository

To allow Helm to install charts that are hosted in the Teleport Helm repository, use helm repo add:

$ helm repo add teleport https://charts.releases.teleport.dev

To update the cache of charts from the remote repository, run helm repo update:

$ helm repo update

Step 3. Set up a Teleport cluster with Helm using a custom config

In custom mode, the teleport-cluster Helm chart does not create a ConfigMap containing a teleport.yaml file for you, but expects that you will provide this yourself.

For this example, we'll be using this teleport.yaml configuration file (with appropriately complex static tokens):

$ cat << EOF > teleport.yaml
teleport:
  log:
    output: stderr
    severity: INFO

auth_service:
  enabled: true
  tokens:
  # These commands will generate random 32-character alphanumeric strings to use as join tokens
  - "proxy,node:$(tr -dc A-Za-z0-9 </dev/urandom | head -c 32)"
  - "trusted_cluster:$(tr -dc A-Za-z0-9 </dev/urandom | head -c 32)"

proxy_service:
  enabled: true
  public_addr: teleport.example.com:443

ssh_service:
  enabled: true
  labels:
    cluster: custom
  commands:
  - name: kernel
    command: [/bin/uname, -r]
    period: 5m
EOF
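If you want to check the join-token commands embedded in the heredoc above on their own, you can run them directly; each produces a random 32-character alphanumeric string:

```shell
# Generate one random 32-character alphanumeric join token,
# exactly as the $(...) substitutions in the heredoc do.
token="$(tr -dc A-Za-z0-9 </dev/urandom | head -c 32)"
echo "$token"
echo "${#token}"   # always 32
```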

You can skip this step if you already have a teleport.yaml file locally that you'd like to use.

You can create the namespace for the config and add the teleport.yaml from your local disk like this:

kubectl create namespace teleport
kubectl --namespace teleport create configmap teleport --from-file=teleport.yaml

The name of the ConfigMap used must match the name of the Helm release that you install below (the name just after helm install). In this example, it's teleport.

The name (key) of the configuration file uploaded to your ConfigMap must be teleport.yaml. If your configuration file is named differently on disk, you can specify the key that should be used in the kubectl command:

kubectl --namespace teleport create configmap teleport --from-file=teleport.yaml=my-teleport-config-file.yaml

After the ConfigMap has been created, you can deploy the Helm chart into a Kubernetes cluster with a command like this:

helm install teleport teleport/teleport-cluster \
  --create-namespace \
  --namespace teleport \
  --set chartMode=custom

Most settings from values.yaml will not be applied in custom mode.

In particular, settings under the chart's acme, aws, gcp, and logLevel values are ignored; configure their equivalents directly in the teleport.yaml file that you upload yourself.

You can control the externally-facing name of your cluster using the public_addr sections of teleport.yaml. In this example, public_addr is set to teleport.example.com:443.

External proxy port

Note that although the proxy_service listens on port 3080 inside the pod, the default LoadBalancer service configured by the chart will always listen externally on port 443 (which is redirected internally to port 3080).

Due to this, your proxy_service.public_addr should always end in :443.
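For instance, a minimal proxy_service fragment might look like this (teleport.example.com is a placeholder for your own cluster address):

```yaml
proxy_service:
  enabled: true
  # The pod listens on 3080, but the chart's LoadBalancer exposes 443,
  # so the public address always uses :443.
  public_addr: teleport.example.com:443
```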


It will help if you have access to the DNS provider that hosts your cluster's domain, so you can add a record pointing your Teleport public address at the external IP or hostname of the Kubernetes load balancer.

Don't worry if you can't - you'll just have to remember to replace the public address with the external IP or hostname of the Kubernetes load balancer to be able to access Teleport from your local machine.

Once the chart is installed, you can use kubectl commands to view the deployment:

kubectl --namespace teleport get all

NAME                            READY   STATUS    RESTARTS   AGE
pod/teleport-5c56b4d869-znmqk   1/1     Running   0          5h8m

NAME               TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                                                      AGE
service/teleport   LoadBalancer   ...          ...           443:30945/TCP,3023:32342/TCP,3026:30851/TCP,3024:31521/TCP   5h29m

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/teleport   1/1     1            1           5h29m

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/teleport-5c56b4d869   1         1         1       5h8m

Step 4. Create a Teleport user (optional)

If you're not migrating an existing Teleport cluster, you'll need to create a user to be able to log into Teleport. This needs to be done on the Teleport auth server, so we can run the command using kubectl:

kubectl --namespace teleport exec deploy/teleport -- tctl users add test --roles=access,editor

User "test" has been created but requires a password. Share this URL with the user to complete user setup, link is valid for 1h:

NOTE: Make sure the proxy address in this link points at a Teleport proxy that users can access.


If you didn't set up DNS for your hostname earlier, remember to replace the hostname in the link with the external IP or hostname of the Kubernetes load balancer.

Whether an IP or hostname is provided as an external address for the load balancer varies according to the provider.

EKS uses a hostname:

$ kubectl --namespace teleport get service/teleport -o jsonpath='{.status.loadBalancer.ingress[*].hostname}'

GKE uses an IP address:

$ kubectl --namespace teleport get service/teleport -o jsonpath='{.status.loadBalancer.ingress[*].ip}'

You should modify your command accordingly, replacing the placeholder with either the IP or hostname depending on which you have available. You may need to accept insecure warnings in your browser to view the page successfully.
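The two lookups can be combined into one snippet that prefers the hostname and falls back to the IP. This is a sketch, not a definitive recipe; it assumes kubectl is configured for your cluster and that the service is named teleport in the teleport namespace:

```shell
# Read both possible load balancer address fields and use whichever is set.
LB_HOST="$(kubectl --namespace teleport get service/teleport \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')"
LB_IP="$(kubectl --namespace teleport get service/teleport \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
# ${var:-fallback} expands to the fallback when var is empty.
echo "Load balancer address: ${LB_HOST:-$LB_IP}"
```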


Using a Kubernetes-issued load balancer IP or hostname is OK for testing, but is not viable when running a production Teleport cluster, as the Subject Alternative Name on any public-facing certificate will be expected to match the cluster's configured public address (specified using public_addr when using custom mode).

You must configure DNS properly using the methods described above for production workloads.

Load the user creation link to create a password and set up 2-factor authentication for the Teleport user via the web UI.

Upgrading the cluster after deployment

Making changes to teleport.yaml

If you make changes to your Teleport ConfigMap, you can apply these changes by deleting the old ConfigMap and applying a new one:

kubectl --namespace teleport delete configmap teleport && \
  kubectl --namespace teleport create configmap teleport --from-file=teleport.yaml

Make sure that the name of the ConfigMap (e.g. teleport) matches the Helm release name used as described above.

You can list all available ConfigMaps in your namespace using this command:

kubectl --namespace teleport get configmap

NAME       DATA   AGE
teleport   1      2d21h

After editing the ConfigMap, you must initiate a rolling restart of your Teleport deployment to pick up the changed ConfigMap:

kubectl --namespace teleport rollout restart deploy/teleport

Making changes to other Helm values

To make changes to your Teleport cluster after deployment which are not covered by the functionality in teleport.yaml, you can use helm upgrade.

Run this command, editing your command line parameters as appropriate:

helm upgrade teleport teleport/teleport-cluster \
  --namespace teleport \
  --set highAvailability.replicaCount=3

When using custom mode, you must use highly-available storage (e.g. etcd, DynamoDB, or Firestore) for multiple replicas to be supported.
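As one hedged illustration, an HA backend is configured in the teleport.storage section of your teleport.yaml rather than through Helm values. The table and bucket names below are placeholders, not values the chart creates for you:

```yaml
teleport:
  storage:
    # DynamoDB for cluster state, S3 for session recordings (AWS example).
    type: dynamodb
    region: us-east-1
    table_name: teleport-state
    audit_events_uri: dynamodb://teleport-events
    audit_sessions_uri: s3://teleport-session-recordings
```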

See the Teleport documentation for information on supported storage backends.

Manually configuring NFS-based storage or ReadWriteMany volume claims is NOT supported for an HA deployment and will result in errors.

Uninstalling the Helm chart

To uninstall the teleport-cluster chart, use helm uninstall <release-name>. For example:

helm --namespace teleport uninstall teleport

To change chartMode, you must first uninstall the existing chart and then install a new version with the appropriate values.

Next steps

You can follow our Getting Started with Teleport guide to finish setting up your Teleport cluster.
