
Migrating from a legacy version of the teleport Helm chart

In this guide, we'll detail how to migrate an existing Teleport cluster from the legacy teleport Helm chart to the newer teleport-cluster Helm chart.

Warning

This guide details a simple migration scenario for a smaller Teleport cluster that is not deployed for high availability.

If your Teleport cluster is required to support many users and should be deployed in a highly available configuration, you should consider following a different guide and storing your cluster's data in AWS DynamoDB or Google Cloud Firestore.

Prerequisites

Verify that Helm and Kubernetes are installed and up to date.

Tip

The examples below may include the sudo keyword, token UUIDs, and users with admin privileges to make each step easier to follow when creating resources from scratch.

Generally:

  1. We discourage using sudo in production environments unless it's needed.
  2. We encourage creating new, non-root, users or new test instances for experimenting with Teleport.
  3. We encourage adherence to the Principle of Least Privilege (PoLP) and Zero Admin best practices. Don't give users the admin role when the more restrictive access and editor roles will do instead.
  4. We encourage saving tokens into a file rather than sharing tokens directly as strings.

Learn more about Teleport Role-Based Access Control best practices.

Step 1. Install Helm

Teleport's charts require the use of Helm version 3. You can install Helm 3 by following these instructions.
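If Helm is not already installed, one common approach on Linux or macOS is the official Helm installer script (this assumes curl is available; as always, review a script before piping it to a shell):

$ curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash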

Throughout this guide, we will assume that you have the helm and kubectl binaries available in your PATH:

$ helm version
# version.BuildInfo{Version:"v3.4.2"}

$ kubectl version
# Client Version: version.Info{Major:"1", Minor:"17+"}
# Server Version: version.Info{Major:"1", Minor:"17+"}

Step 2. Add the Teleport Helm chart repository

To allow Helm to install charts that are hosted in the Teleport Helm repository, use helm repo add:

$ helm repo add teleport https://charts.releases.teleport.dev

To update the cache of charts from the remote repository, run helm repo update:

$ helm repo update
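To confirm that the chart is now available locally, you can search the repo cache:

$ helm search repo teleport/teleport-cluster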

Step 3. Get the Teleport configuration file from your existing cluster

Teleport storage in AWS or GCP

If your Teleport cluster's database is currently stored in AWS DynamoDB or Google Cloud Firestore rather than using a PersistentVolumeClaim or similar, you may wish to consider redeploying your cluster using the aws or gcp modes of the teleport-cluster chart instead.

The relevant guides are the high-availability deployment guides for the teleport-cluster chart's aws and gcp modes.

Note on namespacing
This guide assumes that your old Helm release was called teleport and that it lives in the teleport Kubernetes namespace. If your release is different, update all kubectl commands accordingly.
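If you're not sure of the release name or namespace, helm can list every release across all namespaces:

$ helm list --all-namespaces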

You'll first need to extract the Teleport config file from your existing Teleport cluster.

Check that the ConfigMap is present:

$ kubectl --namespace teleport get configmap/teleport -o yaml
apiVersion: v1
data:
  teleport.yaml: |
    teleport:
      log:
        severity: INFO
        output: stderr
      storage:
        type: dir
    ...

Note
If you do not see the teleport ConfigMap, double-check that your Kubernetes context is set correctly and that you are using the correct namespace.
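If in doubt, these standard kubectl commands show the active context and the namespaces it can see:

$ kubectl config current-context
$ kubectl get namespaces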

If you see a Teleport config under the teleport.yaml key, you can extract it to disk with a command like this:

$ kubectl --namespace teleport get configmap/teleport -o=jsonpath="{.data['teleport\.yaml']}" > teleport.yaml

$ cat teleport.yaml
teleport:
  log:
    severity: INFO
    output: stderr
  storage:
    type: dir
...

Once you have the config, add it as a ConfigMap to a separate Kubernetes namespace (where you intend to run the teleport-cluster chart).

$ kubectl create namespace teleport-cluster
namespace/teleport-cluster created

$ kubectl --namespace teleport-cluster create configmap teleport --from-file=teleport.yaml
configmap/teleport created

Step 4. Extract the contents of Teleport's database

Note on cluster names

If you migrate your existing data, the cluster_name configured in teleport.yaml must stay the same.

If you wish to change the name of your cluster, you will need to deploy a new cluster from scratch and reconfigure your users, roles and nodes.
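To confirm the name your existing cluster uses, one quick check is to search the teleport.yaml you extracted in Step 3 (the key may be absent if a default is in use):

$ grep cluster_name teleport.yaml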

If you wish to keep the same users, roles, certificate authorities and nodes in your cluster, you can use Teleport's tctl tool to extract a backup of all your data.

You can get the backup with a command like this:

$ kubectl --namespace teleport exec deploy/teleport -- tctl get all --with-secrets > backup.yaml

Warning

The backup.yaml file you have just written contains private keys for your Teleport cluster's certificate authorities in plain text. You must protect this file carefully and delete it once your new cluster is running.

You can write the file to an in-memory tmpfs like /dev/shm/backup.yaml for greater security.
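For example, the same backup command with an in-memory destination (if you do this, pass --from-file=backup.yaml=/dev/shm/backup.yaml when creating the secret below so the key name stays backup.yaml):

$ kubectl --namespace teleport exec deploy/teleport -- tctl get all --with-secrets > /dev/shm/backup.yaml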

Add the backup to your new teleport-cluster namespace as a secret:

$ kubectl --namespace teleport-cluster create secret generic bootstrap --from-file=backup.yaml
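You can confirm the secret was created before moving on:

$ kubectl --namespace teleport-cluster get secret bootstrap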

Step 5. Start the new cluster with your old config file and backup

We will start the new cluster and bootstrap it using the backup of your cluster's data. Once this step is complete and the cluster is working, we'll modify the deployment to remove references to the backup data, and remove it from Kubernetes for security.

Write a teleport-cluster-values.yaml file containing the following values:

chartMode: custom
extraArgs: ['--bootstrap', '/etc/teleport-bootstrap/backup.yaml']
extraVolumes:
- name: bootstrap
  secret:
    name: bootstrap
extraVolumeMounts:
- name: bootstrap
  mountPath: /etc/teleport-bootstrap

Then install the chart using those values:

$ helm install teleport teleport/teleport-cluster \
  --namespace teleport-cluster \
  --create-namespace \
  -f teleport-cluster-values.yaml

Once the chart is installed, you can use kubectl commands to view the deployment:

$ kubectl --namespace teleport-cluster get all

NAME                            READY   STATUS    RESTARTS   AGE
pod/teleport-5cf46ddf5f-dzh65   1/1     Running   0          4m21s
pod/teleport-5cf46ddf5f-mpghq   1/1     Running   0          4m21s

NAME               TYPE           CLUSTER-IP      EXTERNAL-IP                                                                PORT(S)                                                      AGE
service/teleport   LoadBalancer   10.100.37.171   a232d92df01f940339adea0e645d88bb-1576732600.us-east-1.elb.amazonaws.com   443:30821/TCP,3023:30801/TCP,3026:32612/TCP,3024:31253/TCP   4m21s

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/teleport   2/2     2            2           4m21s

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/teleport-5cf46ddf5f   2         2         2       4m21s
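To verify that the bootstrap data was applied, you can check the pod logs for errors (the exact log output varies by Teleport version):

$ kubectl --namespace teleport-cluster logs deploy/teleport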

Note

You'll need to change the existing DNS record for your teleport chart installation to point to your new teleport-cluster chart installation. You should point the DNS record to the external IP or hostname of the Kubernetes load balancer.

Whether an IP or hostname is provided as an external address for the load balancer varies according to the provider.

EKS uses a hostname:

$ kubectl --namespace teleport-cluster get service/teleport -o jsonpath='{.status.loadBalancer.ingress[*].hostname}'
# a5f22a02798f541e58c6641c1b158ea3-1989279894.us-east-1.elb.amazonaws.com
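GKE, by contrast, typically provides an IP address; a similar query (assuming the ip field is populated for your load balancer) looks like this:

$ kubectl --namespace teleport-cluster get service/teleport -o jsonpath='{.status.loadBalancer.ingress[*].ip}'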

For testing, you can access the load balancer's IP or hostname directly. You may need to accept insecure warnings in your browser to view the page successfully.
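For example, one quick connectivity check against the hostname above is to request Teleport's ping endpoint (--insecure skips certificate verification and is only appropriate for testing):

$ curl --insecure https://a5f22a02798f541e58c6641c1b158ea3-1989279894.us-east-1.elb.amazonaws.com/webapi/ping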

Step 6. Remove the bootstrap data and update the chart deployment

Once you've tested your new Teleport cluster and you're confident that your data has been migrated successfully, you should redeploy the chart without your backup data mounted for security.

Edit your teleport-cluster-values.yaml file to remove extraArgs, extraVolumes and extraVolumeMounts:

chartMode: custom

Upgrade the Helm deployment to use the new values:

$ helm upgrade teleport teleport/teleport-cluster \
  --namespace teleport-cluster \
  -f teleport-cluster-values.yaml

After this, delete the Kubernetes secret containing the backup data:

$ kubectl --namespace teleport-cluster delete secret/bootstrap

Finally, you should also delete the backup.yaml file from your local disk (including any copy you wrote to /dev/shm):

$ rm -f backup.yaml

Uninstalling Teleport

To uninstall the teleport-cluster chart, use helm uninstall <release-name>. For example:

$ helm --namespace teleport-cluster uninstall teleport