Running Teleport with a Custom Configuration using Helm

In this guide, we'll explain how to set up a Teleport cluster in Kubernetes with a custom teleport.yaml config file using Teleport Helm charts.

Teleport Cloud takes care of this setup for you so you can provide secure access to your infrastructure right away.

Get started with a free trial of Teleport Cloud.

This setup can be useful when you already have an existing Teleport cluster and would like to start running it in Kubernetes, or when migrating your setup from a legacy version of the Helm charts.

If you are already running Teleport on another platform, you can use your existing Teleport deployment to access your Kubernetes cluster. Follow our guide to connect your Kubernetes cluster to Teleport.

Prerequisites

Warning

These instructions apply to both Teleport v12 and the v12 teleport-cluster chart. If you are running an older version of Teleport, use the version selector at the top of this page to choose the correct version.

Verify that Helm and Kubernetes are installed and up to date.

When running Teleport in production, we recommend that you follow the practices below to avoid security incidents. These practices may differ from the examples used in this guide, which are intended for demo environments:

  • Avoid using sudo in production environments unless it's necessary.
  • Create new, non-root users and use test instances for experimenting with Teleport.
  • Run Teleport's services as a non-root user unless required. Only the SSH Service requires root access. Note that you will need root permissions (or the CAP_NET_BIND_SERVICE capability) to make Teleport listen on a port numbered < 1024 (e.g. 443).
  • Follow the "Principle of Least Privilege" (PoLP). Don't give users permissive roles when more restrictive roles will do. For example, assign users the built-in access and editor roles.
  • When joining a Teleport resource service (e.g., the Database Service or Application Service) to a cluster, save the invitation token to a file. Otherwise, the token will be visible when examining the teleport command that started the agent, e.g., via the history command on a compromised system.
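
For example, instead of passing the token value directly on the command line, you can write it to a file and pass the file path. A minimal sketch, where the token value, role, and auth server address are placeholders:

# Write the invitation token to a file with restrictive permissions
echo "example-join-token" > /var/lib/teleport/token
chmod 600 /var/lib/teleport/token

# Reference the file path instead of the literal token value
teleport start --roles=db --token=/var/lib/teleport/token --auth-server=teleport.example.com:443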

Step 1/4. Install Helm

Teleport's charts require the use of Helm version 3. You can install Helm 3 by following these instructions.
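
For example, one common way to install Helm 3 on Linux or macOS is the official installer script (review the script before piping it to a shell):

curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash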

Throughout this guide, we will assume that you have the helm and kubectl binaries available in your PATH:

helm version

version.BuildInfo{Version:"v3.4.2"}

kubectl version

Client Version: version.Info{Major:"1", Minor:"17+"}

Server Version: version.Info{Major:"1", Minor:"17+"}

Step 2/4. Add the Teleport Helm chart repository

To allow Helm to install charts that are hosted in the Teleport Helm repository, use helm repo add:

helm repo add teleport https://charts.releases.teleport.dev

To update the cache of charts from the remote repository, run helm repo update:

helm repo update
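
To confirm that the repository was added and to see which chart versions are available, you can search it:

helm search repo teleport/teleport-cluster --versions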

Step 3/4. Set up a Teleport cluster with Helm using a custom config

In scratch mode, the teleport-cluster Helm chart generates a minimal configuration and lets you pass your custom configuration through the chart's values.

teleport-cluster deploys two sets of pods: proxy and auth. You must provide two configurations, one for each pod type.

  • The proxy pod configuration should contain at least the proxy_service section and the teleport section without the storage part.
  • The auth pod configuration should contain at least the auth_service and teleport sections.

The chart automatically creates a Kubernetes join token, named after the Helm release, which will enable the proxy pods to seamlessly connect to the auth pods. If you do not want to use this automatic token, you must provide a valid Teleport join token in the proxy pods' configuration.
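
Once the cluster is running, one way to confirm that the token exists (assuming the release name teleport and the namespace teleport used later in this guide) is to list tokens on the auth server:

kubectl --namespace teleport exec deployment/teleport-auth -- tctl tokens ls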

Warning

When using scratch or standalone mode, you must use highly available storage (e.g. etcd, DynamoDB, or Firestore) for multiple replicas to be supported.

Information on supported Teleport storage backends

Manually configuring NFS-based storage or ReadWriteMany volume claims is NOT supported for an HA deployment and will result in errors.

Write the following my-values.yaml file, and adapt the teleport configuration as needed. You can find all possible configuration fields in the Teleport Config Reference.

chartMode: scratch

auth:
  teleportConfig:
    # put your teleport.yaml auth configuration here
    teleport:
      log:
        output: stderr
        severity: INFO

    auth_service:
      enabled: true
      listen_addr: 0.0.0.0:3025

proxy:
  teleportConfig:
    # put your teleport.yaml proxy configuration here
    teleport:
      # The join_params section must be provided for the proxies to join the auth servers
      # By default, the chart creates a Kubernetes join token which you can use.
      join_params:
        method: kubernetes
        # The token name pattern is "<RELEASE-NAME>-proxy"
        # Change this if you change the Helm release name.
        token_name: "teleport-proxy"
      # The auth server domain pattern is "<RELEASE-NAME>-auth.<RELEASE-NAMESPACE>.svc.cluster.local:3025"
      # If you change the Helm release name or namespace you must adapt the `auth_server` value.
      auth_server: "teleport-auth.teleport.svc.cluster.local:3025"
      log:
        output: stderr
        severity: INFO
    proxy_service:
      enabled: true
      web_listen_addr: 0.0.0.0:3080
      public_addr: custom.example.com:443
      
# If you are running Kubernetes 1.23 or above, disable PodSecurityPolicies
podSecurityPolicy:
  enabled: false

# OPTIONAL - when using highly available storage for both the backend AND session recordings,
# you can disable disk persistence and replicate auth pods.
#
# persistence:
#   enabled: false
# highAvailability:
#   replicaCount: 2
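
As a sketch of what highly available storage could look like, the auth pods' configuration could gain a storage section such as the following (the AWS region, table, and bucket names are placeholders; see the storage backend reference for all fields):

auth:
  teleportConfig:
    teleport:
      # Store cluster state in DynamoDB and session recordings in S3
      storage:
        type: dynamodb
        region: us-east-1
        table_name: teleport-backend
        audit_events_uri: dynamodb://teleport-events
        audit_sessions_uri: s3://teleport-session-recordings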

You can control the externally-facing name of your cluster using the public_addr sections of teleport.yaml. In this example, our public_addrs are set to custom.example.com.

Create the namespace containing the Teleport-related resources and configure the PodSecurityAdmission:

kubectl create namespace teleport

namespace/teleport created

kubectl label namespace teleport 'pod-security.kubernetes.io/enforce=baseline'

namespace/teleport labeled
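
You can verify that the label was applied before installing the chart:

kubectl get namespace teleport --show-labels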

External proxy port

Note that although the proxy_service listens on port 3080 inside the pod, the default LoadBalancer service configured by the chart will always listen externally on port 443 (which is redirected internally to port 3080).

Due to this, your proxy_service.public_addr should always end in :443:

proxy_service:
  web_listen_addr: 0.0.0.0:3080
  public_addr: custom.example.com:443

You can now deploy Teleport in your cluster. For the open source edition of Teleport, run:

helm install teleport teleport/teleport-cluster \
  --namespace teleport \
  --values my-values.yaml

For Teleport Enterprise, also set enterprise=true:

helm install teleport teleport/teleport-cluster \
  --namespace teleport \
  --set enterprise=true \
  --values my-values.yaml
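
The release can take a minute to become ready. To wait until the deployments finish rolling out, run:

kubectl --namespace teleport rollout status deployment/teleport-auth
kubectl --namespace teleport rollout status deployment/teleport-proxy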

Once the chart is installed, you can use kubectl commands to view the deployment:

kubectl --namespace teleport get all

NAME                                 READY   STATUS    RESTARTS   AGE
pod/teleport-auth-57989d4cbd-rtrzn   1/1     Running   0          22h
pod/teleport-proxy-c6bf55cfc-w96d2   1/1     Running   0          22h
pod/teleport-proxy-c6bf55cfc-z256w   1/1     Running   0          22h

NAME                        TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                                                                      AGE
service/teleport            LoadBalancer   10.40.11.180   34.138.177.11   443:30258/TCP,3023:31802/TCP,3026:32182/TCP,3024:30101/TCP,3036:30302/TCP   22h
service/teleport-auth       ClusterIP      10.40.8.251    <none>          3025/TCP,3026/TCP                                                            22h
service/teleport-auth-v11   ClusterIP      None           <none>          <none>                                                                       22h
service/teleport-auth-v12   ClusterIP      None           <none>          <none>                                                                       22h

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/teleport-auth    1/1     1            1           22h
deployment.apps/teleport-proxy   2/2     2            2           22h

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/teleport-auth-57989d4cbd   1         1         1       22h
replicaset.apps/teleport-proxy-c6bf55cfc   2         2         2       22h
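
To check that the services started cleanly, you can also tail the pod logs:

kubectl --namespace teleport logs deployment/teleport-auth --tail=20
kubectl --namespace teleport logs deployment/teleport-proxy --tail=20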

Step 4/4. Create a Teleport user (optional)

If you're not migrating an existing Teleport cluster, you'll need to create a user to be able to log into Teleport. This needs to be done on the Teleport auth server, so we can run the command using kubectl:

kubectl --namespace teleport exec deployment/teleport-auth -- tctl users add test --roles=access,editor

User "test" has been created but requires a password. Share this URL with the user to complete user setup, link is valid for 1h:

https://custom.example.com:443/web/invite/91cfbd08bc89122275006e48b516cc68

NOTE: Make sure custom.example.com:443 points at a Teleport proxy that users can access.
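
If the link expires before the user completes setup, you can issue a fresh invitation with tctl users reset:

kubectl --namespace teleport exec deployment/teleport-auth -- tctl users reset test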

Note

If you didn't set up DNS for your hostname earlier, remember to replace custom.example.com with the external IP or hostname of the Kubernetes load balancer.

Whether an IP or hostname is provided as an external address for the load balancer varies according to the provider.

EKS uses a hostname:

kubectl --namespace teleport get service/teleport -o jsonpath='{.status.loadBalancer.ingress[*].hostname}'

a5f22a02798f541e58c6641c1b158ea3-1989279894.us-east-1.elb.amazonaws.com

GKE uses an IP address:

kubectl --namespace teleport get service/teleport -o jsonpath='{.status.loadBalancer.ingress[*].ip}'

35.203.56.38

You should modify your command accordingly and replace custom.example.com with either the IP or hostname depending on which you have available. You may need to accept insecure warnings in your browser to view the page successfully.

Warning

Using a Kubernetes-issued load balancer IP or hostname is fine for testing, but it is not viable for a production Teleport cluster, as the Subject Alternative Name on any public-facing certificate will be expected to match the cluster's configured public address (specified using public_addr in your configuration).

You must configure DNS properly using the methods described above for production workloads.

Load the user creation link to create a password and set up two-factor authentication for the Teleport user via the web UI.
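
Once the user is set up, you can verify the login from your local machine with tsh, replacing the proxy address with your own as described above:

tsh login --proxy=custom.example.com:443 --user=test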

Uninstalling the Helm chart

To uninstall the teleport-cluster chart, use helm uninstall <release-name>. For example:

helm --namespace teleport uninstall teleport

Note

To change chartMode, you must first uninstall the existing chart and then install a new version with the appropriate values.
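
For example, a sketch of switching modes, where my-new-values.yaml stands in for a values file with the new chartMode:

helm --namespace teleport uninstall teleport
helm install teleport teleport/teleport-cluster --namespace teleport --values my-new-values.yaml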

Next steps

Now that you have deployed a Teleport cluster, read the Manage Access section to get started enrolling users and setting up RBAC.

To see all of the options you can set in the values file for the teleport-cluster Helm chart, consult our reference guide.