The teleport-cluster
Helm chart deploys the teleport
daemon on Kubernetes.
You can use our preset configurations to deploy the Auth Service and Proxy
Service, or a custom configuration to deploy resource services such as the
Teleport Kubernetes Service or Database Service.
You can browse the source on GitHub.
The teleport-cluster
chart runs two Teleport services:
Teleport service | Purpose | Documentation |
---|---|---|
auth_service | Authenticates users and hosts, and issues certificates | Auth documentation |
proxy_service | Runs the externally-facing parts of a Teleport cluster, such as the web UI, SSH proxy and reverse tunnel service | Proxy documentation |
The teleport-cluster
chart can be deployed in four different modes: standalone, aws, gcp and custom. Each mode has its own getting-started guide.
This reference details available values for the teleport-cluster
chart.
Backing up production instances, environments, and/or settings before making permanent modifications is encouraged as a best practice. Doing so allows you to roll back to an existing state if needed.
clusterName
Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
---|---|---|---|---|
string | nil | When chartMode is aws, gcp or standalone | auth_service.cluster_name, proxy_service.public_addr | ✅ |
clusterName
controls the name used to refer to the Teleport cluster, along with the externally-facing public address to use to access it.
If using a fully qualified domain name as your clusterName
, you will also need to configure the DNS provider for this domain to point
to the external load balancer address of your Teleport cluster.
Whether an IP or hostname is provided as an external address for the load balancer varies according to the provider.
EKS uses a hostname:
kubectl --namespace teleport-cluster get service/teleport -o jsonpath='{.status.loadBalancer.ingress[*].hostname}'
a5f22a02798f541e58c6641c1b158ea3-1989279894.us-east-1.elb.amazonaws.com
GKE uses an IP address:
kubectl --namespace teleport-cluster get service/teleport -o jsonpath='{.status.loadBalancer.ingress[*].ip}'
35.203.56.38
You will need to manually add a DNS A record pointing teleport.example.com
to either the IP or hostname of the Kubernetes load balancer.
Teleport assigns a subdomain to each application you have configured for Application
Access (e.g., grafana.teleport.example.com
), so you will need to ensure that a
DNS A (or CNAME for services that only provide a hostname) record exists for each
application-specific subdomain so clients can access your applications via Teleport.
You should create either a separate DNS record for each subdomain, or a single
record with a wildcard subdomain such as *.teleport.example.com
. This way, your
certificate authority (e.g., Let's Encrypt) can issue a certificate for each
subdomain, enabling clients to verify your Teleport hosts regardless of the
application they are accessing.
If you are not using ACME certificates, you may also need to accept insecure warnings in your browser to view the page successfully.
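For example, assuming teleport.example.com is the DNS name pointing at your load balancer:
clusterName: teleport.example.com
--set clusterName=teleport.example.com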
kubeClusterName
Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
---|---|---|---|---|
string | clusterName value | no | kubernetes_service.kube_cluster_name | ✅ |
kubeClusterName
sets the name used for the Kubernetes cluster. This name will be shown to Teleport users connecting to the cluster.
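For example, to display a friendlier name to Teleport users (example-cluster is a placeholder):
kubeClusterName: example-cluster
--set kubeClusterName=example-cluster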
authentication
authentication.type
Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
---|---|---|---|---|
string | local | Yes | auth_service.authentication.type | ❌ |
authentication.type
controls the authentication scheme used by Teleport. Possible values are local
and github
for OSS, plus oidc
and saml
for Enterprise.
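For example, to use GitHub SSO as the authentication scheme:
authentication:
  type: github
--set authentication.type=github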
authentication.connectorName
Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
---|---|---|---|---|
string | "" | No | auth_service.authentication.connector_name | ❌ |
authentication.connectorName
sets the default authentication connector.
The SSO documentation explains how to create authentication connectors for common identity
providers. In addition to SSO connector names, the following built-in connectors are supported:
- local for local users
- passwordless to enable passwordless authentication by default

Defaults to local.
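For example, to make passwordless the default authentication flow:
authentication:
  connectorName: passwordless
--set authentication.connectorName=passwordless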
authentication.localAuth
Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
---|---|---|---|---|
bool | true | No | auth_service.authentication.local_auth | ❌ |
authentication.localAuth
controls whether local authentication is enabled.
When disabled, users can only log in through authentication connectors like saml
, oidc
or github
.
Disabling local auth is required for FedRAMP / FIPS.
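For example, to disable local authentication (make sure a working SSO connector is configured first, as users will otherwise have no way to log in):
authentication:
  localAuth: false
--set authentication.localAuth=false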
authentication.lockingMode
Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
---|---|---|---|---|
string | "" | No | auth_service.authentication.locking_mode | ❌ |
authentication.lockingMode
controls the locking mode cluster-wide. Possible values are best_effort
and strict
.
See the locking modes documentation for more
details.
Defaults to Teleport's binary default when empty: best_effort
.
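For example, to enforce strict locking cluster-wide:
authentication:
  lockingMode: strict
--set authentication.lockingMode=strict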
authentication.secondFactor
Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
---|---|---|---|---|
string | otp | Yes | auth_service.authentication.second_factor | ❌ |
authentication.secondFactor
controls the second factor used for local user authentication. Possible values supported by this chart
are off
(not recommended), on
, otp
, optional
and webauthn
.
When set to on, optional or webauthn, the authentication.webauthn section can also be used. The configured rp_id defaults to the FQDN used to access the Teleport cluster.
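For example, to require WebAuthn as the second factor for local users:
authentication:
  secondFactor: webauthn
--set authentication.secondFactor=webauthn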
authentication.webauthn
See Second Factor - WebAuthn for more details.
authentication.webauthn.attestationAllowedCas
Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
---|---|---|---|---|
array | [] | No | auth_service.authentication.webauthn.attestation_allowed_cas | ❌ |
authentication.webauthn.attestationAllowedCas
is an optional allow list of certificate authorities (as local file paths or in-line PEM certificate strings) for device verification.
This field allows you to restrict which device models and vendors you trust.
Devices outside of the list will be rejected during registration.
By default all devices are allowed.
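For example, to trust only devices whose attestation chains to a specific CA (the path is a placeholder and must be available inside the pod, e.g. via extraVolumes):
authentication:
  webauthn:
    attestationAllowedCas:
      - /etc/ssl/webauthn-allowed-ca.pem
--set authentication.webauthn.attestationAllowedCas[0]=/etc/ssl/webauthn-allowed-ca.pem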
authentication.webauthn.attestationDeniedCas
Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
---|---|---|---|---|
array | [] | No | auth_service.authentication.webauthn.attestation_denied_cas | ❌ |
authentication.webauthn.attestationDeniedCas
is an optional deny list of certificate authorities (as local file paths or in-line PEM certificate strings) for device verification.
This field allows you to forbid specific device models and vendors, while allowing all others (provided they clear attestation_allowed_cas as well).
Devices within this list will be rejected during registration.
By default no devices are forbidden.
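Similarly, to reject devices whose attestation chains to a specific CA (placeholder path):
authentication:
  webauthn:
    attestationDeniedCas:
      - /etc/ssl/webauthn-denied-ca.pem
--set authentication.webauthn.attestationDeniedCas[0]=/etc/ssl/webauthn-denied-ca.pem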
proxyListenerMode
Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
---|---|---|---|---|
string | nil | no | auth_service.proxy_listener_mode | ❌ |
proxyListenerMode
controls proxy TLS routing used by Teleport. The only supported value is multiplex.
proxyListenerMode: multiplex
--set proxyListenerMode=multiplex
sessionRecording
Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
---|---|---|---|---|
string | "" | no | auth_service.session_recording | ❌ |
sessionRecording
controls the session_recording
field in the teleport.yaml
configuration.
It is passed as-is in the configuration.
For possible values, see the Teleport Configuration Reference.
sessionRecording: proxy
--set sessionRecording=proxy
separatePostgresListener
Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
---|---|---|---|---|
bool | false | no | proxy_service.postgres_listen_addr | ❌ |
separatePostgresListener
controls whether Teleport will multiplex PostgreSQL traffic for Teleport Database Access
over a separate TLS listener to Teleport's web UI.
When separatePostgresListener
is false
(the default), PostgreSQL traffic will be directed to port 443 (the default Teleport web
UI port). This works in situations where Teleport is terminating its own TLS traffic, i.e. when using certificates from Let's Encrypt
or providing a certificate/private key pair via Teleport's proxy_service.https_keypairs
config.
When separatePostgresListener
is true
, PostgreSQL traffic will be directed to a separate Postgres-only listener on port 5432.
This also adds the port to the Service
that the chart creates. This is useful when terminating TLS at a load balancer
in front of Teleport, such as when using AWS ACM.
These settings will not apply if proxyListenerMode
is set to multiplex
.
separatePostgresListener: true
--set separatePostgresListener=true
separateMongoListener
Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
---|---|---|---|---|
bool | false | no | proxy_service.mongo_listen_addr | ❌ |
separateMongoListener
controls whether Teleport will multiplex MongoDB traffic for Teleport Database Access
over a separate TLS listener to Teleport's web UI.
When separateMongoListener
is false
(the default), MongoDB traffic will be directed to port 443 (the default Teleport web
UI port). This works in situations where Teleport is terminating its own TLS traffic, i.e. when using certificates from Let's Encrypt
or providing a certificate/private key pair via Teleport's proxy_service.https_keypairs
config.
When separateMongoListener
is true
, MongoDB traffic will be directed to a separate Mongo-only listener on port 27017.
This also adds the port to the Service
that the chart creates. This is useful when terminating TLS at a load balancer
in front of Teleport, such as when using AWS ACM.
These settings will not apply if proxyListenerMode
is set to multiplex
.
separateMongoListener: true
--set separateMongoListener=true
publicAddr
Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
---|---|---|---|---|
list[string] | [] | no | proxy_service.public_addr | ❌ |
publicAddr
controls the advertised addresses for TLS connections.
When publicAddr
is not set, the address used is clusterName
on port 443.
publicAddr: ["loadbalancer.example.com:443"]
--set publicAddr[0]=loadbalancer.example.com:443
kubePublicAddr
Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
---|---|---|---|---|
list[string] | [] | no | proxy_service.kube_public_addr | ❌ |
kubePublicAddr
controls the advertised addresses for the Kubernetes proxy.
This setting will not apply if proxyListenerMode
is set to multiplex
.
When kubePublicAddr
is not set, the addresses are inferred from publicAddr
if set,
else clusterName
is used. Default port is 3026.
kubePublicAddr: ["loadbalancer.example.com:3026"]
--set kubePublicAddr[0]=loadbalancer.example.com:3026
mongoPublicAddr
Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
---|---|---|---|---|
list[string] | [] | no | proxy_service.mongo_public_addr | ❌ |
mongoPublicAddr
controls the advertised addresses to MongoDB clients.
This setting will not apply if proxyListenerMode is set to multiplex, and it requires separateMongoListener to be enabled.
When mongoPublicAddr is not set, the address is inferred from clusterName.
Default port is 27017.
mongoPublicAddr: ["loadbalancer.example.com:27017"]
--set mongoPublicAddr[0]=loadbalancer.example.com:27017
mysqlPublicAddr
Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
---|---|---|---|---|
list[string] | [] | no | proxy_service.mysql_public_addr | ❌ |
mysqlPublicAddr
controls the advertised addresses for the MySQL proxy.
This setting will not apply if proxyListenerMode
is set to multiplex
.
When mysqlPublicAddr
is not set, the addresses are inferred from publicAddr
if set,
else clusterName
is used. Default port is 3036.
mysqlPublicAddr: ["loadbalancer.example.com:3036"]
--set mysqlPublicAddr[0]=loadbalancer.example.com:3036
postgresPublicAddr
Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
---|---|---|---|---|
list[string] | [] | no | proxy_service.postgres_public_addr | ❌ |
postgresPublicAddr
controls the advertised addresses to postgres clients.
This setting will not apply if proxyListenerMode is set to multiplex, and it requires separatePostgresListener to be enabled.
When postgresPublicAddr
is not set, the addresses are inferred from publicAddr
if set,
else clusterName
is used. Default port is 5432.
postgresPublicAddr: ["loadbalancer.example.com:5432"]
--set postgresPublicAddr[0]=loadbalancer.example.com:5432
sshPublicAddr
Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
---|---|---|---|---|
list[string] | [] | no | proxy_service.ssh_public_addr | ❌ |
sshPublicAddr
controls the advertised addresses for SSH clients. This is also used by the tsh
client.
This setting will not apply if proxyListenerMode
is set to multiplex
.
When sshPublicAddr
is not set, the addresses are inferred from publicAddr
if set,
else clusterName
is used. Default port is 3023.
sshPublicAddr: ["loadbalancer.example.com:3023"]
--set sshPublicAddr[0]=loadbalancer.example.com:3023
tunnelPublicAddr
Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
---|---|---|---|---|
list[string] | [] | no | proxy_service.tunnel_public_addr | ❌ |
tunnelPublicAddr
controls the advertised addresses to trusted clusters or nodes joining via node-tunneling.
This setting will not apply if proxyListenerMode
is set to multiplex
.
When tunnelPublicAddr
is not set, the addresses are inferred from publicAddr
if set,
else clusterName
is used. Default port is 3024.
tunnelPublicAddr: ["loadbalancer.example.com:3024"]
--set tunnelPublicAddr[0]=loadbalancer.example.com:3024
enterprise
Type | Default value | Can be used in custom mode? |
---|---|---|
bool | false | ✅ |
enterprise
controls whether to use Teleport Community Edition or Teleport Enterprise.
Setting enterprise
to true
will use the Teleport Enterprise image.
You will also need to download your Enterprise license from the Teleport dashboard and add it as a Kubernetes secret to use this:
kubectl --namespace teleport create secret generic license --from-file=/path/to/downloaded/license.pem
If you installed the Teleport chart into a specific namespace, the license
secret you create must also be added to the same namespace.
The file added to the secret must be called license.pem
. If you have renamed it, you can specify the filename to use in the secret creation command:
kubectl --namespace teleport create secret generic license --from-file=license.pem=/path/to/downloaded/this-is-my-teleport-license.pem
enterprise: true
--set enterprise=true
installCRDs
Type | Default value | Can be used in custom mode? |
---|---|---|
bool | false | ✅ |
CRDs are not namespace-scoped resources - they can be installed only once in a cluster.
CRDs are required by the Teleport Kubernetes Operator and are installed by default when operator.enabled
is true.
installCRDs
overrides this behavior and allows users to indicate whether to deploy Teleport CRDs.
If several releases of the teleport-cluster
chart are deployed in the same Kubernetes cluster, only one
release should have installCRDs
enabled. Unless you are deploying multiple teleport-cluster
Helm releases in
the same Kubernetes cluster or installing the CRDs on your own, you should not have to set this value.
installCRDs: true
--set installCRDs=true
operator
operator.enabled
Type | Default value | Can be used in custom mode? |
---|---|---|
bool | false | ✅ |
operator.enabled
controls whether to deploy the Teleport Kubernetes Operator as a side-car.
Enabling the operator will also deploy the Teleport CRDs in the Kubernetes cluster.
If you are deploying multiple releases of the Helm chart in the same cluster you can override this behavior with
installCRDs
.
operator:
enabled: true
--set operator.enabled=true
operator.image
Type | Default value | Can be used in custom mode? |
---|---|---|
string | public.ecr.aws/gravitational/teleport-operator | ✅ |
operator.image
sets the Teleport Kubernetes Operator container image used for Teleport pods in the cluster.
You can override this to use your own Teleport Operator image rather than a Teleport-published image.
This setting requires operator.enabled
.
operator:
image: my.docker.registry/teleport-operator-image-name
--set operator.image=my.docker.registry/teleport-operator-image-name
operator.resources
Type | Default value | Can be used in custom mode? |
---|---|---|
object | {} | ✅ |
See the Kubernetes resource documentation.
It is recommended to set resource requests/limits for each container based on their observed usage.
operator:
resources:
requests:
cpu: 1
memory: 2Gi
--set operator.resources.requests.cpu=1 \
--set operator.resources.requests.memory=2Gi
teleportVersionOverride
Type | Default value | Can be used in custom mode? |
---|---|---|
string | nil | ✅ |
Normally the version of Teleport being used will match the version of the chart being installed. If you install chart version 7.0.0, you'll be using Teleport 7.0.0. Upgrading the Helm chart will use the latest version from the repo.
You can optionally override this to use a different published Teleport Docker image tag like 6.0.2
or 7
.
See our installation guide for information on Docker image versions.
teleportVersionOverride: "7"
--set teleportVersionOverride="7"
acme
Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
---|---|---|---|
bool | false | ❌ | proxy_service.acme.enabled |
ACME is a protocol for getting Web X.509 certificates.
Setting acme to true
enables the ACME protocol and will attempt to get a free TLS certificate from Let's Encrypt.
Setting acme to false
(the default) will cause Teleport to generate and use self-signed certificates for its web UI.
ACME can only be used for single-pod clusters. It is not suitable for use in HA configurations.
Using a self-signed TLS certificate and disabling TLS verification is OK for testing, but is not viable when running a production Teleport cluster as it will drastically reduce security. You must configure valid TLS certificates on your Teleport cluster for production workloads.
One option might be to use Teleport's built-in ACME support or enable cert-manager support.
acmeEmail
Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
---|---|---|---|
string | nil | ❌ | proxy_service.acme.email |
acmeEmail
is the email address to provide during certificate registration (this is a Let's Encrypt requirement).
acmeURI
Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
---|---|---|---|
string | Let's Encrypt production server | ❌ | proxy_service.acme.uri |
acmeURI
is the ACME server to use for getting certificates.
As an example, this can be overridden to use the Let's Encrypt staging server for testing.
You can also use any other ACME-compatible server.
acme: true
acmeEmail: [email protected]
acmeURI: https://acme-staging-v02.api.letsencrypt.org/directory
--set acme=true \
--set [email protected] \
--set acmeURI=https://acme-staging-v02.api.letsencrypt.org/directory
podSecurityPolicy
podSecurityPolicy.enabled
Type | Default value | Can be used in custom mode? |
---|---|---|
bool | true | ✅ |
By default, Teleport charts also install a podSecurityPolicy
.
To disable this, you can set enabled
to false
.
podSecurityPolicy:
enabled: false
--set podSecurityPolicy.enabled=false
labels
Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
---|---|---|---|
object | {} | ❌ | kubernetes_service.labels |
labels
can be used to add a map of key-value pairs relating to the Teleport cluster being deployed. These labels can then be used with
Teleport's RBAC policies to define access rules for the cluster.
These are Teleport-specific RBAC labels, not Kubernetes labels.
labels:
environment: production
region: us-east
--set labels.environment=production \
--set labels.region=us-east
chartMode
Type | Default value |
---|---|
string | standalone |
chartMode
is used to configure the chart's operation mode. Possible values are standalone (the default), aws, gcp and custom; you can find more information about each mode on its specific guide page.
persistence
Changes in Kubernetes 1.23+ mean that persistent volumes will not automatically be provisioned in AWS EKS clusters without additional configuration.
See AWS documentation on the EBS CSI driver for more details. This driver addon must be configured to use persistent volumes in EKS clusters after Kubernetes 1.23.
persistence.enabled
Type | Default value | Can be used in custom mode? |
---|---|---|
bool | true | ✅ |
persistence.enabled
can be used to enable data persistence using either a new or pre-existing PersistentVolumeClaim
.
persistence:
enabled: true
--set persistence.enabled=true
persistence.existingClaimName
Type | Default value | Can be used in custom mode? |
---|---|---|
string | nil | ✅ |
persistence.existingClaimName
can be used to provide the name of a pre-existing PersistentVolumeClaim
to use if desired.
The default is left blank, which will automatically create a PersistentVolumeClaim
to use for Teleport storage in standalone
or custom
mode.
persistence:
existingClaimName: my-existing-pvc-name
--set persistence.existingClaimName=my-existing-pvc-name
persistence.volumeSize
Type | Default value | Can be used in custom mode? |
---|---|---|
string | 10Gi | ✅ |
You can set volumeSize
to request a different size of persistent volume when installing the Teleport chart in standalone
or custom
mode.
volumeSize
will be ignored if existingClaimName
is set.
persistence:
volumeSize: 50Gi
--set persistence.volumeSize=50Gi
aws
Can be used in custom mode? | teleport.yaml equivalent |
---|---|
❌ | See Using DynamoDB and Using Amazon S3 for details |
aws
settings are described in the AWS guide: Running an HA Teleport cluster using an AWS EKS Cluster
gcp
Can be used in custom mode? | teleport.yaml equivalent |
---|---|
❌ | See Using Firestore and Using GCS for details |
gcp
settings are described in the GCP guide: Running an HA Teleport cluster using a Google Cloud GKE cluster
highAvailability
highAvailability.replicaCount
Type | Default value | Can be used in custom mode? |
---|---|---|
int | 1 | ✅ (when using HA storage) |
highAvailability.replicaCount
can be used to set the number of replicas used in the deployment.
Set to a number higher than 1
for a high availability mode where multiple Teleport pods will be deployed and connections will be load balanced between them.
Setting highAvailability.replicaCount
to a value higher than 1
will disable the use of ACME certs.
As a rough guide, we recommend configuring one replica per distinct availability zone where your cluster has worker nodes.
2 replicas/availability zones will be fine for smaller workloads. 3-5 replicas/availability zones will be more appropriate for bigger clusters with more traffic.
When using custom
mode, you must use highly-available storage (e.g. etcd, DynamoDB or Firestore) for multiple replicas to be supported.
Information on supported Teleport storage backends
Manually configuring NFS-based storage or ReadWriteMany
volume claims is NOT supported for an HA deployment and will result in errors.
highAvailability:
replicaCount: 3
--set highAvailability.replicaCount=3
highAvailability.requireAntiAffinity
Type | Default value | Can be used in custom mode? |
---|---|---|
bool | false | ✅ (when using HA storage) |
Setting highAvailability.requireAntiAffinity
to true
will use requiredDuringSchedulingIgnoredDuringExecution
to require that multiple
Teleport pods must not be scheduled on the same physical host.
This can result in Teleport pods failing to be scheduled in very small clusters or during node downtime, so should be used with caution.
Setting highAvailability.requireAntiAffinity
to false
(the default) uses preferredDuringSchedulingIgnoredDuringExecution
to make node
anti-affinity a soft requirement.
This setting only has any effect when highAvailability.replicaCount
is greater than 1
.
highAvailability:
requireAntiAffinity: true
--set highAvailability.requireAntiAffinity=true
highAvailability.podDisruptionBudget
highAvailability.podDisruptionBudget.enabled
Type | Default value | Can be used in custom mode? |
---|---|---|
bool | false | ✅ (when using HA storage) |
Enable a Pod Disruption Budget for the Teleport Pod to ensure HA during voluntary disruptions.
highAvailability:
podDisruptionBudget:
enabled: true
--set highAvailability.podDisruptionBudget.enabled=true
highAvailability.podDisruptionBudget.minAvailable
Type | Default value | Can be used in custom mode? |
---|---|---|
int | 1 | ✅ (when using HA storage) |
Ensures that this number of replicas is available during voluntary disruptions. The value can be a number of replicas or a percentage.
highAvailability:
podDisruptionBudget:
minAvailable: 1
--set highAvailability.podDisruptionBudget.minAvailable=1
highAvailability.certManager
See the cert-manager docs for more information.
highAvailability.certManager.enabled
Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
---|---|---|---|
bool | false | ❌ | proxy_service.https_keypairs (to provide your own certificates) |
Setting highAvailability.certManager.enabled
to true
will use cert-manager
to provision a TLS certificate for a Teleport
cluster deployed in HA mode.
You must install and configure cert-manager
in your Kubernetes cluster yourself.
See the cert-manager Helm install instructions and the relevant sections of the AWS and GCP guides for more information.
highAvailability.certManager.addCommonName
Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
---|---|---|---|
bool | false | ❌ | proxy_service.https_keypairs (to provide your own certificates) |
Setting highAvailability.certManager.addCommonName
to true
will instruct cert-manager
to set the commonName field in its certificate signing request to the issuing CA.
You must install and configure cert-manager
in your Kubernetes cluster yourself.
See the cert-manager Helm install instructions and the relevant sections of the AWS and GCP guides for more information.
highAvailability:
certManager:
enabled: true
addCommonName: true
issuerName: letsencrypt-production
--set highAvailability.certManager.enabled=true \
--set highAvailability.certManager.addCommonName=true \
--set highAvailability.certManager.issuerName=letsencrypt-production
highAvailability.certManager.issuerName
Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
---|---|---|---|
string | nil | ❌ | None |
Sets the name of the cert-manager
Issuer
or ClusterIssuer
to use for issuing certificates.
You must install and configure an appropriate Issuer
supporting a DNS01 challenge yourself.
Please see the cert-manager DNS01 docs and the relevant sections of the AWS and GCP guides for more information.
highAvailability:
certManager:
enabled: true
issuerName: letsencrypt-production
--set highAvailability.certManager.enabled=true \
--set highAvailability.certManager.issuerName=letsencrypt-production
highAvailability.certManager.issuerKind
Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
---|---|---|---|
string | Issuer | ❌ | None |
Sets the Kind
of Issuer
to be used when issuing certificates with cert-manager
. Defaults to Issuer
to keep permissions
scoped to a single namespace.
highAvailability:
certManager:
issuerKind: ClusterIssuer
--set highAvailability.certManager.issuerKind=ClusterIssuer
highAvailability.certManager.issuerGroup
Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
---|---|---|---|
string | cert-manager.io | ❌ | None |
Sets the Group
of Issuer
to be used when issuing certificates with cert-manager
. Defaults to cert-manager.io
to use built-in issuers.
highAvailability:
certManager:
issuerGroup: cert-manager.io
--set highAvailability.certManager.issuerGroup=cert-manager.io
highAvailability.minReadySeconds
Type | Default value | Can be used in custom mode? |
---|---|---|
integer | 15 | ✅ |
Amount of time to wait during a pod rollout before moving to the next pod. See Kubernetes documentation.
This is used to give time for the agents to connect back to newly created pods before continuing the rollout.
highAvailability:
minReadySeconds: 15
--set highAvailability.minReadySeconds=15
tls.existingSecretName
Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
---|---|---|---|
string | "" | ✅ | proxy_service.https_keypairs |
tls.existingSecretName
tells Teleport to use an existing Kubernetes TLS secret to secure its web UI using HTTPS. This can be
set to use a TLS certificate issued by a trusted internal CA rather than a public-facing CA like Let's Encrypt.
You should create the secret in the same namespace as Teleport using a command like this:
kubectl create secret tls my-tls-secret --cert=/path/to/cert/file --key=/path/to/key/file
See https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets for more information.
tls:
existingSecretName: my-tls-secret
--set tls.existingSecretName=my-tls-secret
tls.existingCASecretName
Type | Default value | Can be used in custom mode? |
---|---|---|
string | "" | ✅ |
tls.existingCASecretName
sets the SSL_CERT_FILE
environment variable to load a trusted CA or bundle in PEM format into Teleport pods.
This can be set to inject a root and/or intermediate CA so that Teleport can build a full trust chain on startup.
This can also be used to trust private CAs when contacting an OIDC provider, an S3-compatible backend, or any external service without
modifying the Teleport base image.
This is likely to be needed if, when tls.existingSecretName is set, Teleport fails to start with a User Message: unable to verify HTTPS certificate chain error in the pod logs.
You should create the secret in the same namespace as Teleport using a command like this:
kubectl create secret generic my-root-ca --from-file=ca.pem=/path/to/root-ca.pem
The filename used for the root CA in the secret must be ca.pem
.
tls:
existingCASecretName: my-root-ca
--set tls.existingCASecretName=my-root-ca
image
Type | Default value | Can be used in custom mode? |
---|---|---|
string | public.ecr.aws/gravitational/teleport | ✅ |
image
sets the Teleport container image used for Teleport Community pods in the cluster.
You can override this to use your own Teleport Community image rather than a Teleport-published image.
image: my.docker.registry/teleport-community-image-name
--set image=my.docker.registry/teleport-community-image-name
enterpriseImage
Type | Default value | Can be used in custom mode? |
---|---|---|
string | public.ecr.aws/gravitational/teleport-ent | ✅ |
enterpriseImage
sets the container image used for Teleport Enterprise pods in the cluster.
You can override this to use your own Teleport Enterprise image rather than a Teleport-published image.
enterpriseImage: my.docker.registry/teleport-enterprise-image-name
--set enterpriseImage=my.docker.registry/teleport-enterprise-image-name
log
log.level
This field used to be called logLevel
. For backwards compatibility this name can still be used, but we recommend changing your values file to use log.level
.
Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
---|---|---|---|
string | INFO | ❌ | teleport.log.severity |
log.level
sets the log level used for the Teleport process.
Available log levels (in order of most to least verbose) are: DEBUG
, INFO
, WARNING
, ERROR
.
The default is INFO
, which is recommended in production.
DEBUG
is useful during first-time setup or to see more detailed logs for debugging.
log:
level: DEBUG
--set log.level=DEBUG
log.output
Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
---|---|---|---|
string | stderr | ❌ | teleport.log.output |
log.output
sets the output destination for the Teleport process.
This can be set to any of the built-in values: stdout
, stderr
or syslog
to use that destination.
The value can also be set to a file path (such as /var/log/teleport.log
) to write logs to a file. Bear in mind that a few service startup messages will still go to stderr
for resilience.
log:
output: stderr
--set log.output=stderr
log.format
Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
---|---|---|---|
string | text | ❌ | teleport.log.format.output |
log.format
sets the output type for the Teleport process.
Possible values are text
(default) or json
.
log:
format: json
--set log.format=json
log.extraFields
Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
---|---|---|---|
list | ["timestamp", "level", "component", "caller"] | ❌ | teleport.log.format.extra_fields |
log.extraFields
sets the fields used in logging for the Teleport process.
See the Teleport config file reference for more details on possible values for extra_fields
.
log:
extraFields: ["timestamp", "level"]
--set "log.extraFields[0]=timestamp" \
--set "log.extraFields[1]=level"
nodeSelector
Type | Default value |
---|---|
object | {} |
nodeSelector
can be used to add a map of key-value pairs to constrain the
nodes that Teleport pods will run on.
nodeSelector:
role: bastion
environment: security
--set nodeSelector.role=bastion \
--set nodeSelector.environment=security
affinity
Type | Default value | Can be used in custom mode? |
---|---|---|
object | {} | ✅ |
Kubernetes affinity to set for pod assignments.
You cannot set both affinity
and highAvailability.requireAntiAffinity
as they conflict with each other. Only set one or the other.
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: gravitational.io/dedicated
operator: In
values:
- teleport
--set affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key=gravitational.io/dedicated \
--set affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].operator=In \
--set affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].values[0]=teleport
annotations.config
Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
---|---|---|---|
object | {} | ❌ | None |
Kubernetes annotations which should be applied to the ConfigMap
created by the chart.
These annotations will not be applied in custom
mode, as the ConfigMap
is not managed by the chart.
In this instance, you should apply annotations manually to your created ConfigMap
.
annotations:
config:
kubernetes.io/annotation: value
--set annotations.config."kubernetes\.io\/annotation"=value
You must escape values entered on the command line correctly for Helm's CLI to understand them. We recommend
using a values.yaml
file instead to avoid confusion and errors.
annotations.deployment
Type | Default value | Can be used in custom mode? |
---|---|---|
object | {} | ✅ |
Kubernetes annotations which should be applied to the Deployment
created by the chart.
annotations:
deployment:
kubernetes.io/annotation: value
--set annotations.deployment."kubernetes\.io\/annotation"=value
You must escape values entered on the command line correctly for Helm's CLI to understand them. We recommend
using a values.yaml
file instead to avoid confusion and errors.
annotations.pod
Type | Default value | Can be used in custom mode? |
---|---|---|
object | {} | ✅ |
Kubernetes annotations which should be applied to each Pod
created by the chart.
annotations:
pod:
kubernetes.io/annotation: value
--set annotations.pod."kubernetes\.io\/annotation"=value
You must escape values entered on the command line correctly for Helm's CLI to understand them. We recommend
using a values.yaml
file instead to avoid confusion and errors.
annotations.service
Type | Default value | Can be used in custom mode? |
---|---|---|
object | {} | ✅ |
Kubernetes annotations which should be applied to the Service
created by the chart.
annotations:
service:
kubernetes.io/annotation: value
--set annotations.service."kubernetes\.io\/annotation"=value
You must escape values entered on the command line correctly for Helm's CLI to understand them. We recommend
using a values.yaml
file instead to avoid confusion and errors.
annotations.serviceAccount
Type | Default value | Can be used in custom mode? |
---|---|---|
object | {} | ✅ |
Kubernetes annotations which should be applied to the serviceAccount
created by the chart.
annotations:
serviceAccount:
kubernetes.io/annotation: value
--set annotations.serviceAccount."kubernetes\.io\/annotation"=value
You must escape values entered on the command line correctly for Helm's CLI to understand them. We recommend
using a values.yaml
file instead to avoid confusion and errors.
annotations.certSecret
Type | Default value | Can be used in custom mode? |
---|---|---|
object | {} | ✅ |
Kubernetes annotations which should be applied to the secret
generated by
cert-manager
from the certificate
created by the chart. Only valid when
highAvailability.certManager.enabled
is set to true
and requires
cert-manager
v1.5.0+.
annotations:
certSecret:
kubernetes.io/annotation: value
--set annotations.certSecret."kubernetes\.io\/annotation"=value
You must escape values entered on the command line correctly for Helm's CLI to understand them. We recommend
using a values.yaml
file instead to avoid confusion and errors.
serviceAccount.create
Type | Default value | Required? | Can be used in custom mode? |
---|---|---|---|
boolean | true | No | ✅ |
Boolean value that specifies whether a service account should be created.
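For example, to use a pre-existing service account instead of having the chart create one:
serviceAccount:
  create: false
--set serviceAccount.create=false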
serviceAccount.name
Type | Default value | Required? | Can be used in custom mode? |
---|---|---|---|
string | "" | No | ✅ |
Name to use for the Teleport service account.
If serviceAccount.create is false, a service account with this name must be created in the current namespace before installing the Helm chart.
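For example (teleport-sa is a placeholder name):
serviceAccount:
  name: teleport-sa
--set serviceAccount.name=teleport-sa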
service.type
Type | Default value | Required? | Can be used in custom mode? |
---|---|---|---|
string | LoadBalancer | Yes | ✅ |
Specifies the Kubernetes Service type.
service:
type: LoadBalancer
--set service.type=LoadBalancer
service.spec.loadBalancerIP
Type | Default value | Required? | Can be used in custom mode? |
---|---|---|---|
string | nil | No | ✅ |
Specifies the loadBalancerIP.
service:
spec:
loadBalancerIP: 1.2.3.4
--set service.spec.loadBalancerIP=1.2.3.4
extraArgs
Type | Default value | Can be used in custom mode? |
---|---|---|
list | [] | ✅ |
A list of extra arguments to pass to the teleport start
command when running a Teleport Pod.
extraArgs:
- "--bootstrap=/etc/teleport-bootstrap/roles.yaml"
--set "extraArgs={--bootstrap=/etc/teleport-bootstrap/roles.yaml}"
extraEnv
Type | Default value | Can be used in custom mode? |
---|---|---|
list | [] | ✅ |
A list of extra environment variables to be set on the main Teleport container.
extraEnv:
- name: MY_ENV
value: my-value
--set "extraEnv[0].name=MY_ENV" \--set "extraEnv[0].value=my-value"
extraVolumes
Type | Default value | Can be used in custom mode? |
---|---|---|
list | [] | ✅ |
A list of extra Kubernetes Volumes
which should be available to any Pod
created by the chart. These volumes
will also be available to any initContainers
configured by the chart.
extraVolumes:
- name: myvolume
secret:
secretName: mysecret
--set "extraVolumes[0].name=myvolume" \--set "extraVolumes[0].secret.secretName=mysecret"
extraVolumeMounts
Type | Default value | Can be used in custom mode? |
---|---|---|
list | [] | ✅ |
A list of extra Kubernetes volume mounts which should be mounted into any Pod
created by the chart. These volume
mounts will also be mounted into any initContainers
configured by the chart.
extraVolumeMounts:
- name: myvolume
mountPath: /path/to/mount/volume
--set "extraVolumeMounts[0].name=myvolume" \--set "extraVolumeMounts[0].path=/path/to/mount/volume"
imagePullPolicy
Type | Default value | Can be used in custom mode? |
---|---|---|
string | IfNotPresent | ✅ |
Allows the imagePullPolicy
for any pods created by the chart to be overridden.
imagePullPolicy: Always
--set imagePullPolicy=Always
initContainers
Type | Default value | Can be used in custom mode? |
---|---|---|
list | [] | ✅ |
A list of initContainers
which will be run before the main Teleport container in any pod created by the chart.
initContainers:
- name: teleport-init
image: alpine
args: ['echo test']
--set "initContainers[0].name=teleport-init" \--set "initContainers[0].image=alpine" \--set "initContainers[0].args={echo test}"
postStart
A postStart
lifecycle handler to be configured on the main Teleport container.
postStart:
command:
- echo
- foo
--set "postStart.command[0]=echo" \
--set "postStart.command[1]=foo"
resources
Type | Default value | Can be used in custom mode? |
---|---|---|
object | {} | ✅ |
Resource requests/limits which should be configured for Teleport containers. These resource limits will also be
applied to initContainers
.
resources:
requests:
cpu: 1
memory: 2Gi
--set resources.requests.cpu=1 \
--set resources.requests.memory=2Gi
securityContext
Type | Default value | Can be used in custom mode? |
---|---|---|
object | {} | ✅ |
The securityContext
applies to any pods created by the chart, including initContainers
.
securityContext:
runAsUser: 99
--set securityContext.runAsUser=99
tolerations
Type | Default value | Can be used in custom mode? |
---|---|---|
list | [] | ✅ |
Kubernetes Tolerations to set for pod assignment.
tolerations:
- key: "dedicated"
operator: "Equal"
value: "teleport"
effect: "NoSchedule"
--set tolerations[0].key=dedicated \
--set tolerations[0].operator=Equal \
--set tolerations[0].value=teleport \
--set tolerations[0].effect=NoSchedule
priorityClassName
Type | Default value | Can be used in custom mode? |
---|---|---|
string | "" | ✅ |
Kubernetes PriorityClass to set for pods created by the chart.
priorityClassName: "system-cluster-critical"
--set priorityClassName=system-cluster-critical
probeTimeoutSeconds
Type | Default value | Can be used in custom mode? |
---|---|---|
integer | 1 | ✅ |
Kubernetes timeouts for the liveness and readiness probes.
probeTimeoutSeconds: 5
--set probeTimeoutSeconds=5