Running a Teleport cluster with a custom configuration using Helm

In this guide, we'll go through how to set up a Teleport cluster in Kubernetes with a custom teleport.yaml config file, using the Teleport Helm charts.

This setup can be useful when you already have an existing Teleport cluster and would like to start running it in Kubernetes, or when migrating your setup from a legacy version of the Helm charts.

Prerequisites

Verify that Helm and Kubernetes are installed and up to date.

Tip

The examples below may include the use of the sudo keyword, token UUIDs, and users with admin privileges to make following each step easier when creating resources from scratch.

Generally:

  1. We discourage using sudo in production environments unless it's needed.
  2. We encourage creating new, non-root users or new test instances for experimenting with Teleport.
  3. We encourage adherence to the Principle of Least Privilege (PoLP) and Zero Admin best practices. Don't give users the admin role when the more restrictive access and editor roles will do instead.
  4. We encourage saving tokens into a file rather than sharing tokens directly as strings.

Learn more about Teleport Role-Based Access Control best practices.

Step 1. Install Helm

Teleport's charts require the use of Helm version 3. You can install Helm 3 by following these instructions.

Throughout this guide, we will assume that you have the helm and kubectl binaries available in your PATH:

$ helm version
# version.BuildInfo{Version:"v3.4.2"}

$ kubectl version
# Client Version: version.Info{Major:"1", Minor:"17+"}
# Server Version: version.Info{Major:"1", Minor:"17+"}

Step 2. Add the Teleport Helm chart repository

To allow Helm to install charts that are hosted in the Teleport Helm repository, use helm repo add:

$ helm repo add teleport https://charts.releases.teleport.dev

To update the cache of charts from the remote repository, run helm repo update:

$ helm repo update

Step 3. Set up a Teleport cluster with Helm using a custom config

In custom mode, the teleport-cluster Helm chart does not create a ConfigMap containing a teleport.yaml file for you, but expects that you will provide this yourself.

For this example, we'll be using this teleport.yaml configuration file (with appropriately complex static tokens):

$ cat << EOF > teleport.yaml
teleport:
  log:
    output: stderr
    severity: INFO

auth_service:
  enabled: true
  cluster_name: custom.example.com
  tokens:
  # These commands will generate random 32-character alphanumeric strings to use as join tokens
  - "proxy,node:$(tr -dc A-Za-z0-9 </dev/urandom | head -c 32)"
  - "trusted_cluster:$(tr -dc A-Za-z0-9 </dev/urandom | head -c 32)"
  listen_addr: 0.0.0.0:3025
  public_addr: custom.example.com:3025

proxy_service:
  enabled: true
  listen_addr: 0.0.0.0:3080
  public_addr: custom.example.com:443

ssh_service:
  enabled: true
  labels:
    cluster: custom
  commands:
  - name: kernel
    command: [/bin/uname, -r]
    period: 5m
EOF
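Before baking these command substitutions into the config, you can sanity-check one locally. This is a small sketch (assuming /dev/urandom and POSIX tr/head are available, as the heredoc above already does):

```shell
# Generate a random 32-character alphanumeric join token,
# using the same command substitution as the heredoc above
PROXY_TOKEN="$(tr -dc A-Za-z0-9 </dev/urandom | head -c 32)"
echo "proxy,node:${PROXY_TOKEN}"
```

Because head closes the pipe after 32 bytes, the command terminates immediately even though /dev/urandom is an endless stream.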
Tip
You can skip this step if you already have a teleport.yaml file locally that you'd like to use.

You can create the namespace for the config and add the teleport.yaml from your local disk like this:

kubectl create namespace teleport
kubectl --namespace teleport create configmap teleport --from-file=teleport.yaml
Note

The name of the ConfigMap used must match the name of the Helm release that you install below (the name just after helm install). In this example, it's teleport.

The name (key) of the configuration file uploaded to your ConfigMap must be teleport.yaml. If your configuration file is named differently on disk, you can specify the key that should be used in the kubectl command:

kubectl --namespace teleport create configmap teleport --from-file=teleport.yaml=my-teleport-config-file.yaml

After the ConfigMap has been created, you can deploy the Helm chart into a Kubernetes cluster with a command like this:

helm install teleport teleport/teleport-cluster \
  --create-namespace \
  --namespace teleport \
  --set chartMode=custom
Warning

Most settings from values.yaml will not be applied in custom mode.

Any settings you would normally configure under the chart's acme, aws, gcp, or logLevel values must instead be specified in the teleport.yaml file that you upload yourself.
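For example, if you previously relied on the chart's acme values to have the proxy obtain Let's Encrypt certificates, the equivalent settings belong under proxy_service in the teleport.yaml you upload (a sketch; the email address is a placeholder):

```yaml
proxy_service:
  enabled: true
  public_addr: custom.example.com:443
  acme:
    enabled: "yes"
    email: admin@example.com
```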

You can control the externally-facing name of your cluster using the public_addr sections of teleport.yaml. In this example, our public_addrs are set to custom.example.com.

External proxy port

Note that although the proxy_service listens on port 3080 inside the pod, the default LoadBalancer service configured by the chart will always listen externally on port 443 (which is redirected internally to port 3080).

Due to this, your proxy_service.public_addr should always end in :443:

proxy_service:
  listen_addr: 0.0.0.0:3080
  public_addr: custom.example.com:443
Tip

It will help if you have access to the DNS provider which hosts example.com so you can add a custom.example.com record and point it to the external IP or hostname of the Kubernetes load balancer.

Don't worry if you can't - you'll just have to remember to replace custom.example.com with the external IP or hostname of the Kubernetes load balancer to be able to access Teleport from your local machine.

Once the chart is installed, you can use kubectl commands to view the deployment:

kubectl --namespace teleport get all

NAME                            READY   STATUS    RESTARTS   AGE
pod/teleport-5c56b4d869-znmqk   1/1     Running   0          5h8m

NAME               TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)                                                      AGE
service/teleport   LoadBalancer   10.100.162.158   a5f22a02798f541e58c6641c1b158ea3-1989279894.us-east-1.elb.amazonaws.com   443:30945/TCP,3023:32342/TCP,3026:30851/TCP,3024:31521/TCP   5h29m

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/teleport   1/1     1            1           5h29m

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/teleport-5c56b4d869   1         1         1       5h8m

Step 4. Create a Teleport user (optional)

If you're not migrating an existing Teleport cluster, you'll need to create a user to be able to log into Teleport. This needs to be done on the Teleport auth server, so we can run the command using kubectl:

kubectl --namespace teleport exec deploy/teleport -- tctl users add test --roles=access,editor

User "test" has been created but requires a password. Share this URL with the user to complete user setup, link is valid for 1h:

https://custom.example.com:443/web/invite/91cfbd08bc89122275006e48b516cc68

NOTE: Make sure custom.example.com:443 points at a Teleport proxy that users can access.

Note

If you didn't set up DNS for your hostname earlier, remember to replace custom.example.com with the external IP or hostname of the Kubernetes load balancer.

Whether an IP or hostname is provided as an external address for the load balancer varies according to the provider.

EKS uses a hostname:

$ kubectl --namespace teleport get service/teleport -o jsonpath='{.status.loadBalancer.ingress[*].hostname}'
# a5f22a02798f541e58c6641c1b158ea3-1989279894.us-east-1.elb.amazonaws.com
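On providers that assign an IP address rather than a hostname (GKE, for example), query the ip field of the same status object instead (a sketch; this requires access to a running cluster):

```shell
$ kubectl --namespace teleport get service/teleport -o jsonpath='{.status.loadBalancer.ingress[*].ip}'
```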

You should modify your command accordingly and replace custom.example.com with either the IP or hostname depending on which you have available. You may need to accept insecure warnings in your browser to view the page successfully.

Warning

Using a Kubernetes-issued load balancer IP or hostname is OK for testing, but is not viable when running a production Teleport cluster: the Subject Alternative Name on any public-facing certificate will be expected to match the cluster's configured public address (specified using public_addr when using custom mode).

You must configure DNS properly using the methods described above for production workloads.

Load the user creation link to create a password and set up 2-factor authentication for the Teleport user via the web UI.

Upgrading the cluster after deployment

Making changes to teleport.yaml

If you make changes to your Teleport ConfigMap, you can apply these changes by deleting the old ConfigMap and applying a new one:

kubectl --namespace teleport delete configmap teleport && \
kubectl --namespace teleport create configmap teleport --from-file=teleport.yaml
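Alternatively, you can update the ConfigMap in place rather than deleting and recreating it, by piping a client-side dry run into kubectl apply (a sketch using the same release name and namespace as above; requires a running cluster):

```shell
kubectl --namespace teleport create configmap teleport \
  --from-file=teleport.yaml \
  --dry-run=client -o yaml | kubectl apply -f -
```

This avoids the brief window during which no ConfigMap exists.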
Note

Make sure that the name of the ConfigMap (e.g. teleport) matches the Helm release name used as described above.

You can list all available ConfigMaps in your namespace using this command:

kubectl --namespace teleport get configmap

NAME       DATA   AGE
teleport   1      2d21h

After editing the ConfigMap, you must initiate a rolling restart of your Teleport deployment to pick up the changed ConfigMap:

kubectl --namespace teleport rollout restart deploy/teleport

Making changes to other Helm values

To make changes to your Teleport cluster after deployment which are not covered by the functionality in teleport.yaml, you can use helm upgrade.

Run this command, editing your command line parameters as appropriate:

helm upgrade teleport teleport/teleport-cluster \
  --set highAvailability.replicaCount=3
Warning

When using custom mode, you must use highly-available storage (e.g. etcd, DynamoDB, or Firestore) for multiple replicas to be supported.

Information on supported Teleport storage backends

Manually configuring NFS-based storage or ReadWriteMany volume claims is NOT supported for an HA deployment and will result in errors.
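As an illustration, a DynamoDB-backed HA configuration replaces the storage section of teleport.yaml with something like the following (the region, table name, and bucket names are placeholders):

```yaml
teleport:
  storage:
    type: dynamodb
    region: us-east-1
    table_name: teleport-backend
    audit_events_uri: "dynamodb://teleport-events"
    audit_sessions_uri: "s3://teleport-sessions/records"
```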

Uninstalling the Helm chart

To uninstall the teleport-cluster chart, use helm uninstall <release-name>. For example:

helm --namespace teleport uninstall teleport
Note
To change chartMode, you must first uninstall the existing chart and then install a new version with the appropriate values.

Next steps

You can follow our Getting Started with Teleport guide to finish setting up your Teleport cluster.
