In this guide, we'll explain how to set up a Teleport cluster in Kubernetes with a custom `teleport.yaml` config file, using the Teleport Helm charts.
This setup can be useful when you already have an existing Teleport cluster and would like to start running it in Kubernetes, or when migrating your setup from a legacy version of the Helm charts.
Prerequisites
- Kubernetes >= v1.17.0
- Helm >= v3.4.2
Verify that Helm and Kubernetes are installed and up to date.
When running Teleport in production, we recommend that you follow the practices below to avoid security incidents. These practices may differ from the examples used in this guide, which are intended for demo environments:
- Avoid using `sudo` in production environments unless it's necessary.
- Create new, non-root users and use test instances for experimenting with Teleport.
- Run Teleport's services as a non-root user unless required. Only the SSH Service requires root access. Note that you will need root permissions (or the `CAP_NET_BIND_SERVICE` capability) to make Teleport listen on a port numbered lower than 1024 (e.g. `443`).
- Follow the "Principle of Least Privilege" (PoLP). Don't give users permissive roles when more restrictive roles will do instead. For example, assign users the built-in `access` and `editor` roles.
- When joining a Teleport resource service (e.g., the Database Service or Application Service) to a cluster, save the invitation token to a file. Otherwise, the token will be visible when examining the `teleport` command that started the agent, e.g., via the `history` command on a compromised system.
Step 1/4. Install Helm
Teleport's charts require the use of Helm version 3. You can install Helm 3 by following these instructions.
Throughout this guide, we will assume that you have the `helm` and `kubectl` binaries available in your `PATH`:

```code
$ helm version
version.BuildInfo{Version:"v3.4.2"}
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17+"}
Server Version: version.Info{Major:"1", Minor:"17+"}
```
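If you want to script this prerequisite check, a small POSIX-shell comparison using `sort -V` (a GNU coreutils extension) can verify that a reported version meets the minimum. This is a minimal sketch; `helm_version` is hardcoded here as a stand-in for the output of `helm version --template '{{.Version}}'`:

```shell
# Minimal sketch: check that a dotted version string meets a required minimum.
# version_ge succeeds when $1 >= $2 (compared with sort -V, a GNU extension).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Placeholder value; in practice: helm_version="$(helm version --template '{{.Version}}')"
helm_version="v3.4.2"

if version_ge "${helm_version#v}" "3.4.2"; then
  echo "Helm version OK"
else
  echo "Helm is too old for the teleport-cluster chart" >&2
fi
```

The same helper works for the `kubectl` version string once you strip its leading `v`.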
Step 2/4. Add the Teleport Helm chart repository
To allow Helm to install charts that are hosted in the Teleport Helm repository, use `helm repo add`:

```code
$ helm repo add teleport https://charts.releases.teleport.dev
```

To update the cache of charts from the remote repository, run `helm repo update`:

```code
$ helm repo update
```
Step 3/4. Set up a Teleport cluster with Helm using a custom config
In `custom` mode, the `teleport-cluster` Helm chart does not create a `ConfigMap` containing a `teleport.yaml` file for you, but expects that you will provide this yourself.

For this example, we'll be using this `teleport.yaml` configuration file with a static join token (for more information on join tokens, see Adding Nodes to the Cluster):
```code
$ cat << EOF > teleport.yaml
teleport:
  log:
    output: stderr
    severity: INFO
auth_service:
  enabled: true
  cluster_name: custom.example.com
  tokens:
    # These commands will generate random 32-character alphanumeric strings to use as join tokens
    - "proxy,node:$(tr -dc A-Za-z0-9 </dev/urandom | head -c 32)"
    - "trusted_cluster:$(tr -dc A-Za-z0-9 </dev/urandom | head -c 32)"
  listen_addr: 0.0.0.0:3025
  public_addr: custom.example.com:3025
proxy_service:
  enabled: true
  listen_addr: 0.0.0.0:3080
  public_addr: custom.example.com:443
ssh_service:
  enabled: true
  labels:
    cluster: custom
  commands:
    - name: kernel
      command: [/bin/uname, -r]
      period: 5m
EOF
```
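If you want to sanity-check the token-generation command embedded in the heredoc above, you can run the same pipeline on its own; each invocation should emit a fresh 32-character alphanumeric string:

```shell
# Same pipeline as in teleport.yaml above: 32 random alphanumeric characters.
token="$(tr -dc A-Za-z0-9 </dev/urandom | head -c 32)"

echo "token:  $token"
echo "length: ${#token}"

# Reject anything that isn't strictly alphanumeric.
case "$token" in
  *[!A-Za-z0-9]*) echo "unexpected character in token" >&2 ;;
  *)              echo "token format OK" ;;
esac
```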
You can skip this step if you already have a `teleport.yaml` file locally that you'd like to use.

Create the namespace for the config and add the `teleport.yaml` from your local disk:

```code
$ kubectl create namespace teleport
$ kubectl --namespace teleport create configmap teleport --from-file=teleport.yaml
```
The name of the `ConfigMap` used must match the name of the Helm release that you install below (the name just after `helm install`). In this example, it's `teleport`.

The name (key) of the configuration file uploaded to your `ConfigMap` must be `teleport.yaml`. If your configuration file is named differently on disk, you can specify the key that should be used in the `kubectl` command:

```code
$ kubectl --namespace teleport create configmap teleport --from-file=teleport.yaml=my-teleport-config-file.yaml
```
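Equivalently, if you prefer a declarative workflow, the object that `kubectl create configmap` produces looks roughly like the manifest below (a sketch, with most of the configuration body elided), which you could `kubectl apply` instead:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Must match the Helm release name used with "helm install" below.
  name: teleport
  namespace: teleport
data:
  # The key must be exactly "teleport.yaml" regardless of the file name on disk.
  teleport.yaml: |
    teleport:
      log:
        output: stderr
        severity: INFO
    # ...rest of your Teleport configuration...
```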
After the `ConfigMap` has been created, you can deploy the Helm chart into a Kubernetes cluster with a command like this:

```code
$ helm install teleport teleport/teleport-cluster \
  --namespace teleport \
  --set chartMode=custom
```
Most settings from `values.yaml` will not be applied in `custom` mode. It's important to specify any settings under the `acme`, `aws`, `gcp`, and `logLevel` sections of the chart in the `teleport.yaml` file that you upload yourself.
You can control the externally-facing name of your cluster using the `public_addr` sections of `teleport.yaml`. In this example, our `public_addr`s are set to `custom.example.com`.

Note that although the `proxy_service` listens on port 3080 inside the pod, the default `LoadBalancer` service configured by the chart will always listen externally on port 443 (which is redirected internally to port 3080). Due to this, your `proxy_service.public_addr` should always end in `:443`:

```yaml
proxy_service:
  listen_addr: 0.0.0.0:3080
  public_addr: custom.example.com:443
```
It will help if you have access to the DNS provider hosting `example.com`, so that you can add a `custom.example.com` record pointing to the external IP or hostname of the Kubernetes load balancer. If you can't, you'll just have to remember to replace `custom.example.com` with the load balancer's external IP or hostname whenever you access Teleport from your local machine.
Once the chart is installed, you can use `kubectl` commands to view the deployment:

```code
$ kubectl --namespace teleport get all
NAME                            READY   STATUS    RESTARTS   AGE
pod/teleport-5c56b4d869-znmqk   1/1     Running   0          5h8m

NAME               TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)                                                      AGE
service/teleport   LoadBalancer   10.100.162.158   a5f22a02798f541e58c6641c1b158ea3-1989279894.us-east-1.elb.amazonaws.com   443:30945/TCP,3023:32342/TCP,3026:30851/TCP,3024:31521/TCP   5h29m

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/teleport   1/1     1            1           5h29m

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/teleport-5c56b4d869   1         1         1       5h8m
```
Step 4/4. Create a Teleport user (optional)
If you're not migrating an existing Teleport cluster, you'll need to create a user to be able to log into Teleport. This needs to be done on the Teleport auth server, so we can run the command using `kubectl`:

```code
$ kubectl --namespace teleport exec deploy/teleport -- tctl users add test --roles=access,editor
User "test" has been created but requires a password. Share this URL with the user to complete user setup, link is valid for 1h:
https://custom.example.com:443/web/invite/91cfbd08bc89122275006e48b516cc68
NOTE: Make sure custom.example.com:443 points at a Teleport proxy that users can access.
```
If you didn't set up DNS for your hostname earlier, remember to replace `custom.example.com` with the external IP or hostname of the Kubernetes load balancer.

Whether the load balancer's external address is an IP or a hostname varies by provider.

EKS uses a hostname:

```code
$ kubectl --namespace teleport get service/teleport -o jsonpath='{.status.loadBalancer.ingress[*].hostname}'
a5f22a02798f541e58c6641c1b158ea3-1989279894.us-east-1.elb.amazonaws.com
```

GKE uses an IP address:

```code
$ kubectl --namespace teleport get service/teleport -o jsonpath='{.status.loadBalancer.ingress[*].ip}'
35.203.56.38
```

Modify your command accordingly, replacing `custom.example.com` with either the IP or hostname, depending on which you have available. You may need to accept insecure warnings in your browser to view the page successfully.
Using a Kubernetes-issued load balancer IP or hostname is OK for testing, but is not viable when running a production Teleport cluster, as the Subject Alternative Name on any public-facing certificate will be expected to match the cluster's configured public address (specified using `public_addr` when using `custom` mode). You must configure DNS properly using the methods described above for production workloads.
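As an illustration, the production DNS records might look like the following zone-file entries (values taken from the examples above; use either a CNAME to a load balancer hostname or an A record to a load balancer IP, depending on your provider):

```code
; EKS-style: CNAME to the load balancer hostname
custom.example.com.  300  IN  CNAME  a5f22a02798f541e58c6641c1b158ea3-1989279894.us-east-1.elb.amazonaws.com.

; GKE-style: A record pointing at the load balancer IP
custom.example.com.  300  IN  A      35.203.56.38
```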
Load the user creation link to create a password and set up two-factor authentication for the Teleport user via the web UI.
Upgrading the cluster after deployment
Making changes to `teleport.yaml`

If you make changes to your Teleport `ConfigMap`, you can apply these changes by deleting the old `ConfigMap` and creating a new one:

```code
$ kubectl --namespace teleport delete configmap teleport && \
  kubectl --namespace teleport create configmap teleport --from-file=teleport.yaml
```
Make sure that the name of the `ConfigMap` (e.g. `teleport`) matches the Helm release name, as described above.

You can list all available `ConfigMap`s in your namespace using this command:

```code
$ kubectl --namespace teleport get configmap
NAME       DATA   AGE
teleport   1      2d21h
```

After editing the `ConfigMap`, you must initiate a rolling restart of your Teleport deployment to pick up the changed `ConfigMap`:

```code
$ kubectl --namespace teleport rollout restart deploy/teleport
```
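After triggering the restart, you'll usually want to wait until the new pods are ready before testing; `kubectl rollout status` does exactly that. If you're scripting the whole flow, a small retry helper like the sketch below (a hypothetical helper name, plain POSIX shell) can wrap any readiness check:

```shell
# Minimal sketch: retry a command until it succeeds, up to N attempts.
retry() {
  attempts="$1"; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$attempts" ] && return 1
    sleep 1
  done
}

# Usage against a real cluster (assumes kubectl access):
# retry 30 kubectl --namespace teleport rollout status deploy/teleport --timeout=10s
```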
Making changes to other Helm values
To make changes to your Teleport cluster after deployment which are not covered by the functionality in `teleport.yaml`, you can use `helm upgrade`.

Run this command, editing the command line parameters as appropriate:

```code
$ helm --namespace teleport upgrade teleport teleport/teleport-cluster \
  --set highAvailability.replicaCount=3
```
When using `custom` mode, you must use highly-available storage (e.g. etcd, DynamoDB, or Firestore) for multiple replicas to be supported. See Information on supported Teleport storage backends for details.

Manually configuring NFS-based storage or `ReadWriteMany` volume claims is NOT supported for an HA deployment and will result in errors.
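For reference, HA storage is configured in the `teleport` section of the `teleport.yaml` you upload, not in Helm values. A DynamoDB-backed configuration might look roughly like the sketch below (the table and bucket names are placeholders; consult the storage backend reference for the exact fields your backend requires):

```yaml
teleport:
  storage:
    # Illustrative DynamoDB backend; names below are placeholders.
    type: dynamodb
    region: us-east-1
    table_name: teleport-cluster-state
    audit_events_uri: dynamodb://teleport-audit-events
    audit_sessions_uri: s3://example-bucket/teleport-session-recordings
```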
Uninstalling the Helm chart
To uninstall the `teleport-cluster` chart, use `helm uninstall <release-name>`. For example:

```code
$ helm --namespace teleport uninstall teleport
```

To change `chartMode`, you must first uninstall the existing chart and then install a new version with the appropriate values.
Next steps
To see all of the options you can set in the values file for the `teleport-cluster` Helm chart, consult our reference guide.
You can follow our Getting Started with Teleport guide to finish setting up your Teleport cluster.