

Running an HA Teleport cluster using GCP, GKE, and Helm


In this guide, we'll go through how to set up a High Availability Teleport cluster with multiple replicas in Kubernetes using Teleport Helm charts and Google Cloud Platform products (Firestore and Google Cloud Storage).

Teleport Cloud takes care of this setup for you so you can provide secure access to your infrastructure right away.

Get started with a free trial of Teleport Cloud.


Verify that Helm and Kubernetes are installed and up to date.

When running Teleport in production, we recommend that you follow the practices below to avoid security incidents. These practices may differ from the examples used in this guide, which are intended for demo environments:

  • Avoid using sudo in production environments unless it's necessary.
  • Create new, non-root, users and use test instances for experimenting with Teleport.
  • Run Teleport's services as a non-root user unless required. Only the SSH Service requires root access. Note that you will need root permissions (or the CAP_NET_BIND_SERVICE capability) to make Teleport listen on a port numbered < 1024 (e.g. 443).
  • Follow the "Principle of Least Privilege" (PoLP). Don't give users permissive roles when more restrictive roles will do instead. For example, assign users the built-in access and editor roles.
  • When joining a Teleport resource service (e.g., the Database Service or Application Service) to a cluster, save the invitation token to a file. Otherwise, the token will be visible when examining the teleport command that started the agent, e.g., via the history command on a compromised system.
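The token-in-a-file practice above can be sketched as follows. This is a hedged example: the token value, file path, and agent role are placeholders, not values from this guide.

```shell
# Store the join token in a root-only file instead of passing it inline.
# "example-join-token" and the path are placeholders.
echo "example-join-token" > /var/lib/teleport/token
chmod 600 /var/lib/teleport/token

# Reference the file when starting the agent so the token never appears
# in the process arguments or shell history.
teleport start --roles=node --token=/var/lib/teleport/token
```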

Step 1/7. Install Helm

Teleport's charts require the use of Helm version 3. You can install Helm 3 by following these instructions.

Throughout this guide, we will assume that you have the helm and kubectl binaries available in your PATH:

helm version


kubectl version

Client Version: version.Info{Major:"1", Minor:"17+"}

Server Version: version.Info{Major:"1", Minor:"17+"}

Step 2/7. Add the Teleport Helm chart repository

To allow Helm to install charts that are hosted in the Teleport Helm repository, use helm repo add:

helm repo add teleport https://charts.releases.teleport.dev

To update the cache of charts from the remote repository, run helm repo update:

helm repo update

The steps below apply to Google Kubernetes Engine (GKE) Standard deployments.

Step 3/7. Google Cloud IAM configuration

For Teleport to be able to create the Firestore collections, indexes, and the Google Cloud Storage bucket it needs, you'll need to configure a Google Cloud service account with permissions to use these services.

Create an IAM role granting the storage.buckets.create permission

Go to the "Roles" section of Google Cloud IAM & Admin.

  1. Click the "Create Role" button at the top.
  2. Fill in the details of a "Storage Bucket Creator" role (we suggest using the name storage-bucket-creator-role).
  3. Click the "Add Permissions" button.
  4. Use the "Filter" box to enter storage.buckets.create and select it in the list.
  5. Check the storage.buckets.create permission in the list and click the "Add" button to add it to the role.
  6. Once all these settings are entered successfully, click the "Create" button.
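If you prefer the CLI, a role with the same single permission can be created with gcloud. This is a sketch: the project ID is a placeholder, and note that custom role IDs must use underscores rather than the hyphenated name suggested above.

```shell
# Create a custom IAM role granting only storage.buckets.create.
# Replace gcp-project-id with your project; the role ID is a suggestion.
gcloud iam roles create storage_bucket_creator_role \
  --project=gcp-project-id \
  --title="Storage Bucket Creator" \
  --permissions=storage.buckets.create
```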

Create an IAM role granting Cloud DNS permissions

Go to the "Roles" section of Google Cloud IAM & Admin.

  1. Click the "Create Role" button at the top.
  2. Fill in the details of a "DNS Updater" role (we suggest using the name dns-updater-role).
  3. Click the "Add Permissions" button.
  4. Use the "Filter" box to find each of the required Cloud DNS permissions in the list and add it. You can type things like dns.resourceRecordSets.* to quickly filter the list.
  5. Once all these settings are entered successfully, click the "Create" button.

Create a service account for the Teleport Helm chart


If you already have a JSON private key for an appropriately-provisioned service account that you wish to use, you can skip this creation process and go to the "Create the Kubernetes secret containing the JSON private key for the service account" section below.

Go to the "Service Accounts" section of Google Cloud IAM & Admin.

  1. Click the "Create Service Account" button at the top.
  2. Enter details for the service account (we recommend using the name teleport-helm) and click the "Create" button.
  3. In the "Grant this service account access to project" section, add these four roles:
     • storage-bucket-creator-role: the role you just created allowing creation of storage buckets
     • dns-updater-role: the role you just created allowing updates to Cloud DNS records
     • Cloud Datastore Owner: grants permissions to create Cloud Datastore collections
     • Storage Object Admin: allows read/write/delete of Google Cloud Storage objects
  4. Click the "Continue" button to save these settings, then click the "Create" button to create the service account.
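The same service account can be created and bound from the CLI. This is a hedged sketch: the project ID is a placeholder, and the binding must be repeated once per role listed above (the Cloud Datastore Owner binding is shown as an example).

```shell
# Create the service account (name matches the one suggested above).
gcloud iam service-accounts create teleport-helm \
  --display-name="teleport-helm"

# Bind a role to it at the project level; repeat for each of the
# four roles listed above. Replace gcp-project-id with your project.
gcloud projects add-iam-policy-binding gcp-project-id \
  --member="serviceAccount:teleport-helm@gcp-project-id.iam.gserviceaccount.com" \
  --role="roles/datastore.owner"
```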

Generate an access key for the service account

Go back to the "Service Accounts" view in Google Cloud IAM & Admin.

  1. Click on the teleport-helm service account that you just created.
  2. Click the "Keys" tab at the top and click "Add Key". Choose "JSON" and click "Create".
  3. The JSON private key will be downloaded to your computer. Take note of the filename (bens-demos-24150b1a0a7f.json in this example) as you will need it shortly.
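A key can also be generated from the CLI instead of the console. This is a sketch; the output filename and service account email are placeholders matching the names used in this guide.

```shell
# Generate and download a JSON private key for the service account.
gcloud iam service-accounts keys create teleport-helm-key.json \
  --iam-account=teleport-helm@gcp-project-id.iam.gserviceaccount.com
```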

Create the Kubernetes secret containing the JSON private key for the service account

Find the path where the JSON private key was just saved (likely your browser's default "Downloads" directory).

Use kubectl to create the teleport namespace, set its security policy, and create the secret using the path to the JSON private key:

kubectl create namespace teleport

namespace/teleport created

kubectl label namespace teleport 'pod-security.kubernetes.io/enforce=baseline'

namespace/teleport labeled

kubectl --namespace teleport create secret generic teleport-gcp-credentials --from-file=gcp-credentials.json=/path/to/downloads/bens-demos-24150b1a0a7f.json

secret/teleport-gcp-credentials created


If you installed the Teleport chart into a specific namespace, the teleport-gcp-credentials secret you create must also be added to the same namespace.


The default name configured for the secret is teleport-gcp-credentials.

If you already have a secret created, you can skip this creation process and specify the name of the secret using gcp.credentialSecretName.

The credentials file stored in any secret used must have the key name gcp-credentials.json.
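You can confirm the secret was created with the expected key name before moving on. This is a sketch using the default names from this guide; note the escaped dot required by kubectl's jsonpath syntax.

```shell
# Confirm the secret exists and contains the gcp-credentials.json key.
# The dot in the key name must be escaped in the jsonpath expression.
kubectl --namespace teleport get secret teleport-gcp-credentials \
  -o jsonpath='{.data.gcp-credentials\.json}' | base64 --decode | head -c 80
```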

Step 4/7. Install and configure cert-manager

Reference the cert-manager docs.

In this example, we are using multiple pods to create a High Availability Teleport cluster. As such, we will be using cert-manager to centrally provision TLS certificates using Let's Encrypt. These certificates will be mounted into each Teleport pod, and automatically renewed and kept up to date by cert-manager.

If you do not have cert-manager already configured in the Kubernetes cluster where you are installing Teleport, you should add the Jetstack Helm chart repository which hosts the cert-manager chart, and install the chart:

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --create-namespace \
  --namespace cert-manager \
  --set installCRDs=true

Once cert-manager is installed, you should create and add an Issuer.

You'll need to replace these values in the Issuer example below:

  • [email protected]: An email address to receive communications from Let's Encrypt
  • example.com: The name of the Cloud DNS domain hosting your Teleport cluster
  • gcp-project-id: The GCP project ID where the Cloud DNS domain is registered
cat << EOF > gcp-issuer.yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-production
  namespace: teleport
spec:
  acme:
    email: [email protected]                                # Change this
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-production
    solvers:
    - selector:
        dnsZones:
          - "example.com"                                  # Change this
      dns01:
        cloudDNS:
          project: gcp-project-id                          # Change this
          serviceAccountSecretRef:
            name: teleport-gcp-credentials
            key: gcp-credentials.json
EOF

The secret name under serviceAccountSecretRef here defaults to teleport-gcp-credentials.

If you have changed gcp.credentialSecretName in your chart values, you must also make sure it matches here.

After you have created the Issuer and updated the values, add it to your cluster using kubectl:

kubectl --namespace teleport create -f gcp-issuer.yaml
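Before proceeding, it is worth confirming that the Issuer registered successfully with the ACME server. This is a sketch using the Issuer name from this guide; a healthy Issuer reports a Ready condition.

```shell
# Check the Issuer's Ready condition (prints True once registered).
kubectl --namespace teleport get issuer letsencrypt-production \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
```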

Step 5/7. Set values to configure the cluster

Before you can install Teleport Enterprise in your Kubernetes cluster, you will need to create a secret that contains your Teleport license information.

Download your Teleport Enterprise license from the Customer Portal and save it to a file called license.pem.

Create a secret from your license file. Teleport will automatically discover this secret as long as your file is named license.pem.

kubectl -n teleport create secret generic license --from-file=license.pem

If you are installing Teleport in a brand new GCP project, make sure you have enabled the Cloud Firestore API and created a Firestore Database in your project before continuing.

Next, configure the teleport-cluster Helm chart to use the gcp mode. Create a file called gcp-values.yaml and write the values you've chosen above to it:

chartMode: gcp
clusterName: teleport.example.com                 # Name of your cluster. Use the FQDN you intend to configure in DNS below
gcp:
  projectId: gcpproj-123456                       # Google Cloud project ID
  backendTable: teleport-helm-backend             # Firestore collection to use for the Teleport backend
  auditLogTable: teleport-helm-events             # Firestore collection to use for the Teleport audit log (must be different to the backend collection)
  auditLogMirrorOnStdout: false                   # Whether to mirror audit log entries to stdout in JSON format (useful for external log collectors)
  sessionRecordingBucket: teleport-helm-sessions  # Google Cloud Storage bucket to use for Teleport session recordings
highAvailability:
  replicaCount: 2                                 # Number of replicas to configure
  certManager:
    enabled: true                                 # Enable cert-manager support to get TLS certificates
    issuerName: letsencrypt-production            # Name of the cert-manager Issuer to use (as configured above)
# If you are running Kubernetes 1.23 or above, disable PodSecurityPolicies
podSecurityPolicy:
  enabled: false

Install the chart with the values from your gcp-values.yaml file using this command:

helm install teleport teleport/teleport-cluster \
  --create-namespace \
  --namespace teleport \
  -f gcp-values.yaml

You cannot change the clusterName after the cluster is configured, so make sure you choose wisely. We recommend using the fully-qualified domain name that you'll use for external access to your Teleport cluster.

Once the chart is installed, you can use kubectl commands to view the deployment:

kubectl --namespace teleport get all


pod/teleport-auth-57989d4cbd-4q2ds 1/1 Running 0 22h

pod/teleport-auth-57989d4cbd-rtrzn 1/1 Running 0 22h

pod/teleport-proxy-c6bf55cfc-w96d2 1/1 Running 0 22h

pod/teleport-proxy-c6bf55cfc-z256w 1/1 Running 0 22h


service/teleport LoadBalancer 443:30258/TCP,3023:31802/TCP,3026:32182/TCP,3024:30101/TCP,3036:30302/TCP 22h

service/teleport-auth ClusterIP <none> 3025/TCP,3026/TCP 22h

service/teleport-auth-v11 ClusterIP None <none> <none> 22h

service/teleport-auth-v12 ClusterIP None <none> <none> 22h


deployment.apps/teleport-auth 2/2 2 2 22h

deployment.apps/teleport-proxy 2/2 2 2 22h


replicaset.apps/teleport-auth-57989d4cbd 2 2 2 22h

replicaset.apps/teleport-proxy-c6bf55cfc 2 2 2 22h
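Before configuring DNS, you can wait for both deployments to finish rolling out rather than eyeballing the pod list. This is a sketch using the deployment names shown in the output above.

```shell
# Block until each deployment reports all replicas available.
kubectl --namespace teleport rollout status deployment/teleport-auth
kubectl --namespace teleport rollout status deployment/teleport-proxy
```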

Step 6/7. Set up DNS

You'll need to set up a DNS A record for the cluster address you configured as clusterName (e.g., teleport.example.com).

Teleport assigns a subdomain to each application you have configured for Application Access (e.g., appname.teleport.example.com), so you will need to ensure that a DNS A (or CNAME for services that only provide a hostname) record exists for each application-specific subdomain so clients can access your applications via Teleport.

You should create either a separate DNS record for each subdomain, or a single record with a wildcard subdomain such as *.teleport.example.com. This way, your certificate authority (e.g., Let's Encrypt) can issue a certificate for each subdomain, enabling clients to verify your Teleport hosts regardless of the application they are accessing.

Here's how to do this using Google Cloud DNS:

# Change these parameters if you altered them above
NAMESPACE='teleport'
RELEASE_NAME='teleport'

# Change these parameters to match your Cloud DNS zone and domain
MYZONE='myzone'
MYDNS='teleport.example.com'

MYIP=$(kubectl --namespace ${NAMESPACE?} get service/${RELEASE_NAME?} -o jsonpath='{.status.loadBalancer.ingress[*].ip}')

gcloud dns record-sets transaction start --zone="${MYZONE?}"
gcloud dns record-sets transaction add ${MYIP?} --name="${MYDNS?}" --ttl="300" --type="A" --zone="${MYZONE?}"
gcloud dns record-sets transaction add ${MYIP?} --name="*.${MYDNS?}" --ttl="300" --type="A" --zone="${MYZONE?}"
gcloud dns record-sets transaction describe --zone="${MYZONE?}"
gcloud dns record-sets transaction execute --zone="${MYZONE?}"
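Once the transaction executes, you can verify that the records resolve. This is a sketch: the domain names are the placeholders used in this guide, and the test subdomain is hypothetical.

```shell
# Verify that the cluster record and a name covered by the wildcard
# record both resolve to the load balancer IP.
dig +short teleport.example.com
dig +short test-app.teleport.example.com

# Alternatively, list the records in the zone directly.
gcloud dns record-sets list --zone="myzone" --name="teleport.example.com."
```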

Step 7/7. Create a Teleport user

Create a user to be able to log into Teleport. This needs to be done on the Teleport auth server, so we can run the command using kubectl:

kubectl --namespace teleport exec deployment/teleport-auth -- tctl users add test --roles=access,editor

User "test" has been created but requires a password. Share this URL with the user to complete user setup, link is valid for 1h:

NOTE: Make sure the cluster address in the link points at a Teleport proxy which users can access.

Load the user creation link to create a password and set up 2-factor authentication for the Teleport user via the web UI.
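After completing setup in the web UI, you can confirm the account works from a client machine. This is a sketch; the proxy address is a placeholder matching this guide's example domain.

```shell
# Log in as the new user; replace the proxy address with your own.
tsh login --proxy=teleport.example.com:443 --user=test

# Confirm the active session and assigned roles.
tsh status
```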

High Availability

In this guide, we have configured 2 replicas. This can be changed after cluster creation by altering the highAvailability.replicaCount value using helm upgrade as detailed below.

Upgrading the cluster after deployment

To make changes to your Teleport cluster after deployment, you can use helm upgrade.

Helm defaults to using the latest version of the chart available in the repo, which will also correspond to the latest version of Teleport. You can make sure that the repo is up to date by running helm repo update.

If you want to use a different version of Teleport, set the teleportVersionOverride value.
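For example, pinning a version during an upgrade might look like the following. This is a sketch: the version number is a placeholder, so pick a real release from the Teleport changelog.

```shell
# Pin the Teleport version independently of the chart version.
# 12.4.5 is a placeholder value.
helm upgrade teleport teleport/teleport-cluster \
  --namespace teleport \
  --set teleportVersionOverride=12.4.5 \
  -f gcp-values.yaml
```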

Here's an example where we set the chart to use 3 replicas:

Edit your gcp-values.yaml file from above and make the appropriate changes.

Upgrade the deployment with the values from your gcp-values.yaml file using this command:

helm upgrade teleport teleport/teleport-cluster \
  --namespace teleport \
  -f gcp-values.yaml

Run this command, editing your command line parameters as appropriate:

helm upgrade teleport teleport/teleport-cluster \
  --namespace teleport \
  --set highAvailability.replicaCount=3

To change chartMode, clusterName or any gcp settings, you must first uninstall the existing chart and then install a new version with the appropriate values.

Uninstalling Teleport

To uninstall the teleport-cluster chart, use helm uninstall <release-name>. For example:

helm --namespace teleport uninstall teleport

Uninstalling cert-manager

If you want to remove the cert-manager installation later, you can use this command:

helm --namespace cert-manager uninstall cert-manager

Next steps

You can follow our Getting Started with Teleport guide to finish setting up your Teleport cluster.

See the high availability section of our Helm chart reference for more details on high availability.