

Migrating from a legacy version of the teleport Helm chart


In this guide, we'll detail how to migrate an existing Teleport cluster from the legacy teleport Helm chart to the newer teleport-cluster Helm chart.


This guide details a simple migration scenario for a smaller Teleport cluster that is not deployed for high availability.

If your Teleport cluster is required to support many users and should be deployed in a highly available configuration, you should consider following a different guide and storing your cluster's data in AWS DynamoDB or Google Cloud Firestore.


Verify that Helm and Kubernetes are installed and up to date.

When running Teleport in production, we recommend that you follow the practices below to avoid security incidents. These practices may differ from the examples used in this guide, which are intended for demo environments:

  • Avoid using sudo in production environments unless it's necessary.
  • Create new, non-root, users and use test instances for experimenting with Teleport.
  • Run Teleport's services as a non-root user unless required. Only the SSH Service requires root access. Note that you will need root permissions (or the CAP_NET_BIND_SERVICE capability) to make Teleport listen on a port numbered < 1024 (e.g. 443).
  • Follow the "Principle of Least Privilege" (PoLP). Don't give users permissive roles when more restrictive roles will do instead. For example, assign users the built-in access and editor roles.
  • When joining a Teleport resource service (e.g., the Database Service or Application Service) to a cluster, save the invitation token to a file. Otherwise, the token will be visible when examining the teleport command that started the agent, e.g., via the history command on a compromised system.

Step 1/6. Install Helm

Teleport's charts require the use of Helm version 3. You can install Helm 3 by following these instructions.

Throughout this guide, we will assume that you have the helm and kubectl binaries available in your PATH:

helm version


kubectl version

Client Version: version.Info{Major:"1", Minor:"17+"}

Server Version: version.Info{Major:"1", Minor:"17+"}
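If you are unsure whether both binaries are present, a quick sketch like the following checks the PATH (the loop itself is generic shell, not Teleport-specific):

```shell
# Check that the binaries this guide relies on are available in the PATH.
for bin in helm kubectl; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "$bin: found"
  else
    echo "$bin: NOT FOUND - install it before continuing"
  fi
done
```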

Step 2/6. Add the Teleport Helm chart repository

To allow Helm to install charts that are hosted in the Teleport Helm repository, use helm repo add:

helm repo add teleport https://charts.releases.teleport.dev

To update the cache of charts from the remote repository, run helm repo update:

helm repo update

Step 3/6. Get the Teleport configuration file from your existing cluster

Teleport storage in AWS or GCP

If your Teleport cluster's database is currently stored in AWS DynamoDB or Google Cloud Firestore rather than using a PersistentVolumeClaim or similar, you may wish to consider redeploying your cluster using the aws or gcp modes of the teleport-cluster chart instead.

See the teleport-cluster chart's guides for running Teleport in aws or gcp mode for details.

Note on namespacing

This guide assumes that your old Helm release was called teleport and that it lives in the teleport Kubernetes namespace. If your release is different, you will need to update all kubectl commands accordingly.

The first thing you'll need to do is extract the Teleport config file for your existing Teleport cluster.

First, check that the ConfigMap is present:

kubectl --namespace teleport get configmap/teleport -o yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: teleport
  namespace: teleport
data:
  teleport.yaml: |
    teleport:
      log:
        severity: INFO
        output: stderr
      storage:
        type: dir
    # ...the rest of your Teleport configuration...

If you do not see the teleport ConfigMap, double-check that your Kubernetes context is set correctly and that you are using the correct namespace.

If you see a Teleport config under the teleport.yaml key, you can extract it to disk with a command like this:

kubectl --namespace teleport get configmap/teleport -o=jsonpath="{.data['teleport\.yaml']}" > teleport.yaml

cat teleport.yaml



teleport:
  log:
    severity: INFO
    output: stderr
  storage:
    type: dir
# ...the rest of your Teleport configuration...


Once you have the config, copy it into a separate Kubernetes namespace (the one where you intend to run the teleport-cluster chart).

kubectl create namespace teleport-cluster

namespace/teleport-cluster created

kubectl --namespace teleport-cluster create configmap teleport --from-file=teleport.yaml

configmap/teleport created

Step 4/6. Extract the contents of Teleport's database

Note on cluster naming

If you migrate your existing data, the cluster_name configured in teleport.yaml must stay the same.

If you wish to change the name of your cluster, you will need to deploy a new cluster from scratch and reconfigure your users, roles and nodes.
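To confirm your current cluster_name before migrating, you can read it out of the teleport.yaml you extracted in Step 3. A minimal sketch (the sample file and the example.teleport.sh name below are hypothetical stand-ins for your real config):

```shell
# Hypothetical sample standing in for the teleport.yaml extracted in Step 3;
# cluster_name lives under the auth_service section.
cat > /tmp/teleport-sample.yaml <<'EOF'
auth_service:
  enabled: true
  cluster_name: example.teleport.sh
EOF

# Print the configured cluster name; it must stay the same in the new deployment.
awk '/cluster_name:/ {print $2}' /tmp/teleport-sample.yaml
```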

If you wish to keep the same users, roles, certificate authorities and nodes in your cluster, you can use Teleport's tctl tool to extract a backup of all your data.

You can get the backup with a command like this:

kubectl --namespace teleport exec deploy/teleport -- tctl get all --with-secrets > backup.yaml

The backup.yaml file you have just written contains private keys for your Teleport cluster's certificate authorities in plain text. You must protect this file carefully and delete it once your new cluster is running.

You can write the file to an in-memory tmpfs like /dev/shm/backup.yaml for greater security.
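Because backup.yaml holds CA private keys, it is also worth locking down its file permissions while it exists and overwriting it on deletion. A sketch of that handling on Linux, using a placeholder file rather than real tctl output:

```shell
# Restrict permissions on files created from here on to the owner only.
umask 077

# Placeholder standing in for the real `tctl get all --with-secrets` output.
echo 'placeholder backup data' > /tmp/backup.yaml
stat -c '%a' /tmp/backup.yaml    # owner-only permissions (600)

# Once the new cluster is confirmed working, overwrite before deleting
# (shred is part of GNU coreutils).
shred -u /tmp/backup.yaml
[ -e /tmp/backup.yaml ] || echo "backup deleted"
```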

Add the backup to your new teleport-cluster namespace as a secret:

kubectl --namespace teleport-cluster create secret generic bootstrap --from-file=backup.yaml

Step 5/6. Start the new cluster with your old config file and backup

We will start the new cluster and bootstrap it using the backup of your cluster's data. Once this step is complete and the cluster is working, we'll modify the deployment to remove references to the backup data, and remove it from Kubernetes for security.

Write a teleport-cluster-values.yaml file containing the following values:

chartMode: custom
extraArgs: ['--bootstrap', '/etc/teleport-bootstrap/backup.yaml']
extraVolumes:
- name: bootstrap
  secret:
    name: bootstrap
extraVolumeMounts:
- name: bootstrap
  mountPath: /etc/teleport-bootstrap

Install the chart using the values file:

helm install teleport teleport/teleport-cluster \
  --namespace teleport-cluster \
  --create-namespace \
  -f teleport-cluster-values.yaml

Alternatively, you can pass the same values on the command line with --set:

helm install teleport teleport/teleport-cluster \
  --namespace teleport-cluster \
  --create-namespace \
  --set chartMode=custom \
  --set extraArgs="{--bootstrap,/etc/teleport-bootstrap/backup.yaml}" \
  --set extraVolumes[0].name="bootstrap" \
  --set extraVolumes[0].secret.name="bootstrap" \
  --set extraVolumeMounts[0].name="bootstrap" \
  --set extraVolumeMounts[0].mountPath="/etc/teleport-bootstrap"

Once the chart is installed, you can use kubectl commands to view the deployment:

kubectl --namespace teleport-cluster get all


pod/teleport-5cf46ddf5f-dzh65 1/1 Running 0 4m21s

pod/teleport-5cf46ddf5f-mpghq 1/1 Running 0 4m21s


service/teleport LoadBalancer 443:30821/TCP,3023:30801/TCP,3026:32612/TCP,3024:31253/TCP 4m21s


deployment.apps/teleport 2/2 2 2 4m21s


replicaset.apps/teleport-5cf46ddf5f 2 2 2 4m21s


You'll need to change the existing DNS record for your teleport chart installation to point to your new teleport-cluster chart installation. You should point the DNS record to the external IP or hostname of the Kubernetes load balancer.

Whether an IP or hostname is provided as an external address for the load balancer varies according to the provider.

EKS uses a hostname:

kubectl --namespace teleport-cluster get service/teleport -o jsonpath='{.status.loadBalancer.ingress[*].hostname}'

GKE uses an IP address:

kubectl --namespace teleport-cluster get service/teleport -o jsonpath='{.status.loadBalancer.ingress[*].ip}'
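If you'd rather handle both cases in one script, you can inspect the service's JSON status and take whichever field is present. A sketch against a hypothetical EKS-style status (in practice the JSON would come from kubectl get service/teleport -o json):

```shell
# Hypothetical load balancer status, as an EKS cluster would report it.
status='{"loadBalancer":{"ingress":[{"hostname":"abc123.us-east-1.elb.amazonaws.com"}]}}'

# Take the hostname if present; otherwise fall back to the ip field (GKE-style).
addr=$(echo "$status" | grep -o '"hostname":"[^"]*"' | cut -d'"' -f4)
[ -n "$addr" ] || addr=$(echo "$status" | grep -o '"ip":"[^"]*"' | cut -d'"' -f4)
echo "$addr"
```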

For testing, you can access the load balancer's IP or hostname directly. You may need to accept insecure warnings in your browser to view the page successfully.

Step 6/6. Remove the bootstrap data and update the chart deployment

Once you've tested your new Teleport cluster and you're confident that your data has been migrated successfully, you should redeploy the chart without your backup data mounted for security.

Edit your teleport-cluster-values.yaml file to remove extraArgs, extraVolumes and extraVolumeMounts:

chartMode: custom

Upgrade the Helm deployment to use the new values:

helm upgrade teleport teleport/teleport-cluster \
  --namespace teleport-cluster \
  -f teleport-cluster-values.yaml

Alternatively, with --set:

helm upgrade teleport teleport/teleport-cluster \
  --namespace teleport-cluster \
  --set chartMode=custom

After this, delete the Kubernetes secret containing the backup data:

kubectl --namespace teleport-cluster delete secret/bootstrap

Finally, you should also delete the backup.yaml file from your local disk:

rm -f backup.yaml

Uninstalling Teleport

To uninstall the teleport-cluster chart, use helm uninstall <release-name>. For example:

helm --namespace teleport-cluster uninstall teleport