In this guide, we'll detail how to migrate an existing Teleport cluster from the legacy `teleport` Helm chart to the newer `teleport-cluster` Helm chart.
This guide covers a simple migration scenario for a smaller Teleport cluster that is not deployed for high availability.

If your Teleport cluster must support many users and be deployed in a highly available configuration, consider following a different guide and storing your cluster's data in AWS DynamoDB or Google Cloud Firestore.
## Prerequisites
- Kubernetes >= v1.17.0
- Helm >= v3.4.2
Verify that Helm and Kubernetes are installed and up to date.
When running Teleport in production, we recommend that you follow the practices below to avoid security incidents. These practices may differ from the examples used in this guide, which are intended for demo environments:
- Avoid using `sudo` in production environments unless it's necessary.
- Create new, non-root users and use test instances for experimenting with Teleport.
- Run Teleport's services as a non-root user unless required. Only the SSH Service requires root access. Note that you will need root permissions (or the `CAP_NET_BIND_SERVICE` capability) to make Teleport listen on a port numbered lower than 1024 (e.g., `443`).
- Follow the Principle of Least Privilege (PoLP). Don't give users permissive roles when more restrictive roles will do instead. For example, assign users the built-in `access` and `editor` roles.
- When joining a Teleport resource service (e.g., the Database Service or Application Service) to a cluster, save the invitation token to a file. Otherwise, the token will be visible when examining the `teleport` command that started the agent, e.g., via the `history` command on a compromised system.
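As a sketch of the last point, and assuming a hypothetical join token value, you can save the token to a file with restricted permissions and reference the file path instead of passing the raw token on the command line:

```shell
# Hypothetical join token for illustration only
TOKEN="example-join-token"

# Save the token to a file that only the current user can read
umask 077
echo "$TOKEN" > token.file

# The agent can then reference the file path, so the raw token never
# appears in the process list or shell history, e.g.:
#   teleport start --token=/var/lib/teleport/token.file ...
cat token.file
```

With `umask 077` in effect, the token file is created with mode `600`, so other users on the host cannot read it.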
## Step 1/6. Install Helm
Teleport's charts require the use of Helm version 3. You can install Helm 3 by following these instructions.

Throughout this guide, we will assume that you have the `helm` and `kubectl` binaries available in your `PATH`:

```
$ helm version
version.BuildInfo{Version:"v3.4.2"}

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17+"}
Server Version: version.Info{Major:"1", Minor:"17+"}
```
## Step 2/6. Add the Teleport Helm chart repository

To allow Helm to install charts that are hosted in the Teleport Helm repository, use `helm repo add`:

```
$ helm repo add teleport https://charts.releases.teleport.dev
```

To update the cache of charts from the remote repository, run `helm repo update`:

```
$ helm repo update
```
## Step 3/6. Get the Teleport configuration file from your existing cluster

If your Teleport cluster's database is currently stored in AWS DynamoDB or Google Cloud Firestore rather than in a `PersistentVolumeClaim` or similar, you may wish to consider redeploying your cluster using the `aws` or `gcp` modes of the `teleport-cluster` chart instead. See the deployment guides for those modes.

This guide assumes that your old Helm release was called `teleport` and is in the `teleport` Kubernetes namespace. If your release is different, you will need to update all `kubectl` commands accordingly.
First, you'll need to extract the Teleport config file for your existing Teleport cluster. Check that the `ConfigMap` is present:

```
$ kubectl --namespace teleport get configmap/teleport -o yaml
apiVersion: v1
data:
  teleport.yaml: |
    teleport:
      log:
        severity: INFO
        output: stderr
      storage:
        type: dir
    ...
```

If you do not see the `teleport` `ConfigMap`, double-check that your Kubernetes context is set correctly and that you are using the correct namespace.
If you see a Teleport config under the `teleport.yaml` key, you can extract it to disk with a command like this:

```
$ kubectl --namespace teleport get configmap/teleport -o=jsonpath="{.data['teleport\.yaml']}" > teleport.yaml
$ cat teleport.yaml
teleport:
  log:
    severity: INFO
    output: stderr
  storage:
    type: dir
...
```
Once you have the config, upload it to a separate Kubernetes namespace (where you intend to run the `teleport-cluster` chart):

```
$ kubectl create namespace teleport-cluster
namespace/teleport-cluster created

$ kubectl --namespace teleport-cluster create configmap teleport --from-file=teleport.yaml
configmap/teleport created
```
## Step 4/6. Extract the contents of Teleport's database
If you migrate your existing data, the `cluster_name` configured in `teleport.yaml` must stay the same. If you wish to change the name of your cluster, you will need to deploy a new cluster from scratch and reconfigure your users, roles, and nodes.
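To confirm the configured name before migrating, you can read it out of the extracted config. A minimal sketch, using a stand-in file with a hypothetical cluster name in place of the `teleport.yaml` you extracted in the previous step:

```shell
# Stand-in for the teleport.yaml extracted in the previous step,
# with a hypothetical cluster name
cat > /tmp/sample-teleport.yaml <<'EOF'
teleport:
  log:
    severity: INFO
auth_service:
  cluster_name: example.teleport.sh
EOF

# Read out the cluster_name; this value must stay the same in the
# new deployment
grep -E '^[[:space:]]*cluster_name:' /tmp/sample-teleport.yaml | awk '{print $2}'
# → example.teleport.sh
```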
If you wish to keep the same users, roles, certificate authorities, and nodes in your cluster, you can use Teleport's `tctl` tool to extract a backup of all your data.
You can get the backup with a command like this:

```
$ kubectl --namespace teleport exec deploy/teleport -- tctl get all --with-secrets > backup.yaml
```
The `backup.yaml` file you have just written contains private keys for your Teleport cluster's certificate authorities in plain text. You must protect this file carefully and delete it once your new cluster is running. You can write the file to an in-memory `tmpfs` like `/dev/shm/backup.yaml` for greater security.
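For example, one way to keep the backup off disk and locked down (a sketch; the `tctl` export itself needs the live cluster, so a placeholder file stands in for the real backup):

```shell
# Create the file with user-only permissions before writing secrets into it
umask 077
touch /dev/shm/backup.yaml

# The real export would then be:
#   kubectl --namespace teleport exec deploy/teleport -- \
#     tctl get all --with-secrets > /dev/shm/backup.yaml

# Verify the permissions
stat -c %a /dev/shm/backup.yaml  # → 600
```

Because `/dev/shm` is backed by memory, the backup does not persist across a reboot and never touches persistent storage.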
Add the backup to your new `teleport-cluster` namespace as a secret:

```
$ kubectl --namespace teleport-cluster create secret generic bootstrap --from-file=backup.yaml
```
## Step 5/6. Start the new cluster with your old config file and backup
We will start the new cluster and bootstrap it using the backup of your cluster's data. Once this step is complete and the cluster is working, we'll modify the deployment to remove references to the backup data, and remove it from Kubernetes for security.
Write a `teleport-cluster-values.yaml` file containing the following values:

```yaml
chartMode: custom
extraArgs: ['--bootstrap', '/etc/teleport-bootstrap/backup.yaml']
extraVolumes:
- name: bootstrap
  secret:
    name: bootstrap
extraVolumeMounts:
- name: bootstrap
  mountPath: /etc/teleport-bootstrap
```

Install the chart using the values file:

```
$ helm install teleport teleport/teleport-cluster \
  --namespace teleport-cluster \
  -f teleport-cluster-values.yaml
```
If you are running Teleport Enterprise, obtain your Teleport Enterprise license file from the Teleport Customer Portal and create a secret called "license" in the namespace you created:

```
$ kubectl -n teleport-cluster create secret generic license --from-file=license.pem
```

For Teleport Enterprise, write a `teleport-cluster-values.yaml` file containing the following values instead:

```yaml
enterprise: true
chartMode: custom
extraArgs: ['--bootstrap', '/etc/teleport-bootstrap/backup.yaml']
extraVolumes:
- name: bootstrap
  secret:
    name: bootstrap
extraVolumeMounts:
- name: bootstrap
  mountPath: /etc/teleport-bootstrap
```

Install the chart using the values file:

```
$ helm install teleport teleport/teleport-cluster \
  --namespace teleport-cluster \
  -f teleport-cluster-values.yaml
```
Once the chart is installed, you can use `kubectl` commands to view the deployment:

```
$ kubectl --namespace teleport-cluster get all
NAME                            READY   STATUS    RESTARTS   AGE
pod/teleport-5cf46ddf5f-dzh65   1/1     Running   0          4m21s
pod/teleport-5cf46ddf5f-mpghq   1/1     Running   0          4m21s

NAME               TYPE           CLUSTER-IP      EXTERNAL-IP                                                               PORT(S)                                                      AGE
service/teleport   LoadBalancer   10.100.37.171   a232d92df01f940339adea0e645d88bb-1576732600.us-east-1.elb.amazonaws.com   443:30821/TCP,3023:30801/TCP,3026:32612/TCP,3024:31253/TCP   4m21s

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/teleport   2/2     2            2           4m21s

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/teleport-5cf46ddf5f   2         2         2       4m21s
```
You'll need to change the existing DNS record for your `teleport` chart installation to point to your new `teleport-cluster` chart installation. You should point the DNS record to the external IP or hostname of the Kubernetes load balancer.

Whether an IP or hostname is provided as an external address for the load balancer varies according to the provider.

EKS uses a hostname:

```
$ kubectl --namespace teleport-cluster get service/teleport -o jsonpath='{.status.loadBalancer.ingress[*].hostname}'
a5f22a02798f541e58c6641c1b158ea3-1989279894.us-east-1.elb.amazonaws.com
```

GKE uses an IP address:

```
$ kubectl --namespace teleport-cluster get service/teleport -o jsonpath='{.status.loadBalancer.ingress[*].ip}'
35.203.56.38
```

For testing, you can access the load balancer's IP or hostname directly. You may need to accept insecure warnings in your browser to view the page successfully.
## Step 6/6. Remove the bootstrap data and update the chart deployment
Once you've tested your new Teleport cluster and you're confident that your data has been migrated successfully, you should redeploy the chart without your backup data mounted for security.
Edit your `teleport-cluster-values.yaml` file to remove `extraArgs`, `extraVolumes`, and `extraVolumeMounts`:

```yaml
chartMode: custom
```

Upgrade the Helm deployment to use the new values:

```
$ helm upgrade teleport teleport/teleport-cluster \
  --namespace teleport-cluster \
  -f teleport-cluster-values.yaml
```

Alternatively, set the values directly on the command line instead of using a values file:

```
$ helm upgrade teleport teleport/teleport-cluster \
  --namespace teleport-cluster \
  --set chartMode=custom
```
After this, delete the Kubernetes secret containing the backup data:

```
$ kubectl --namespace teleport-cluster delete secret/bootstrap
```
Finally, you should also delete the `backup.yaml` file from your local disk:

```
$ rm -f backup.yaml
```
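Since `backup.yaml` contains CA private keys, you may prefer to overwrite the file before unlinking it. A sketch using GNU coreutils `shred` (note that overwriting is not guaranteed to be effective on journaled or copy-on-write filesystems):

```shell
# Placeholder standing in for the local backup file
echo "sensitive" > backup.yaml

# Overwrite the contents, then remove the file
shred -u backup.yaml

ls backup.yaml 2>/dev/null || echo "deleted"  # → deleted
```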
## Uninstalling Teleport

To uninstall the `teleport-cluster` chart, use `helm uninstall <release-name>`. For example:

```
$ helm --namespace teleport-cluster uninstall teleport
```
## Next steps

To see all of the options you can set in the values file for the `teleport-cluster` Helm chart, consult our reference guide.