
teleport-cluster Chart Reference


The teleport-cluster Helm chart deploys the Teleport daemon on Kubernetes. You can use our preset configurations to deploy the Auth Service and Proxy Service, or a custom configuration to deploy resource services such as the Teleport Kubernetes Service or Database Service.

You can browse the source on GitHub.

The teleport-cluster chart runs two Teleport services:

| Teleport service | Purpose | Documentation |
|---|---|---|
| auth_service | Authenticates users and hosts, and issues certificates | Auth documentation |
| proxy_service | Runs the externally-facing parts of a Teleport cluster, such as the web UI, SSH proxy and reverse tunnel service | Proxy documentation |

The teleport-cluster chart can be deployed in four different modes. Get started with a guide for each mode:

| chartMode | Guide |
|---|---|
| standalone | Getting Started - Kubernetes with SSO |
| aws | Running an HA Teleport cluster using an AWS EKS Cluster |
| gcp | Running an HA Teleport cluster using a Google Cloud GKE cluster |
| custom | Running a Teleport cluster with a custom config |

This reference details available values for the teleport-cluster chart.

Warning

Backing up production instances, environments, and/or settings before making permanent modifications is encouraged as a best practice. Doing so allows you to roll back to an existing state if needed.

clusterName

| Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
|---|---|---|---|---|
| string | nil | When chartMode is aws, gcp or standalone | auth_service.cluster_name, proxy_service.public_addr | |

clusterName controls the name used to refer to the Teleport cluster, along with the externally-facing public address to use to access it.

Note

If using a fully qualified domain name as your clusterName, you will also need to configure the DNS provider for this domain to point to the external load balancer address of your Teleport cluster.

Whether an IP or hostname is provided as an external address for the load balancer varies according to the provider.

EKS uses a hostname:

kubectl --namespace teleport-cluster get service/teleport -o jsonpath='{.status.loadBalancer.ingress[*].hostname}'

a5f22a02798f541e58c6641c1b158ea3-1989279894.us-east-1.elb.amazonaws.com

GKE uses an IP address:

kubectl --namespace teleport-cluster get service/teleport -o jsonpath='{.status.loadBalancer.ingress[*].ip}'

35.203.56.38

You will need to manually add a DNS A record pointing teleport.example.com to either the IP or hostname of the Kubernetes load balancer.

Teleport assigns a subdomain to each application you have configured for Application Access (e.g., grafana.teleport.example.com), so you will need to ensure that a DNS A record exists for each application-specific subdomain so clients can access your applications via Teleport.

You should create either a separate DNS A record for each subdomain or a single record with a wildcard subdomain such as *.teleport.example.com. This way, your certificate authority (e.g., Let's Encrypt) can issue a certificate for each subdomain, enabling clients to verify your Teleport hosts regardless of the application they are accessing.

If you are not using ACME certificates, you may also need to accept insecure warnings in your browser to view the page successfully.
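As with the chart's other examples, clusterName can be set in a values file or on the command line. The domain below is the placeholder used throughout this page; substitute your own:

```yaml
clusterName: teleport.example.com
```

--set clusterName=teleport.example.com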

kubeClusterName

| Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
|---|---|---|---|---|
| string | clusterName value | no | kubernetes_service.kube_cluster_name | |

kubeClusterName sets the name used for the Kubernetes cluster. This name will be shown to Teleport users connecting to the cluster.
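A minimal sketch following the same example convention (the cluster name shown is a placeholder):

```yaml
kubeClusterName: my-gke-cluster
```

--set kubeClusterName=my-gke-cluster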

authenticationType

| Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
|---|---|---|---|---|
| string | local | Yes | auth_service.authentication.type | |

authenticationType controls the authentication scheme used by Teleport. Possible values are local and github for OSS, plus oidc and saml for Enterprise.
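For example, to use GitHub SSO for authentication (a sketch using one of the values listed above):

```yaml
authenticationType: github
```

--set authenticationType=github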

authenticationSecondFactor

authenticationSecondFactor.secondFactor

| Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
|---|---|---|---|---|
| string | otp | Yes | auth_service.authentication.second_factor | |

authenticationSecondFactor.secondFactor controls the second factor used for local user authentication. Possible values supported by this chart are off (not recommended), on, otp, optional and webauthn.

When set to on, optional or webauthn, the authenticationSecondFactor.webauthn section can also be used. The configured rp_id defaults to the FQDN which is used to access the Teleport cluster.
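For example, to require WebAuthn as the second factor (a sketch using one of the values listed above):

```yaml
authenticationSecondFactor:
  secondFactor: webauthn
```

--set authenticationSecondFactor.secondFactor=webauthn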

authenticationSecondFactor.webauthn

See Second Factor - WebAuthn for more details.

authenticationSecondFactor.webauthn.attestationAllowedCas

| Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
|---|---|---|---|---|
| array | [] | No | auth_service.authentication.webauthn.attestation_allowed_cas | |

authenticationSecondFactor.webauthn.attestationAllowedCas is an optional allow list of certificate authorities (as local file paths or in-line PEM certificate strings) for device verification. This field allows you to restrict which device models and vendors you trust. Devices outside of the list will be rejected during registration. By default, all devices are allowed.
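A sketch of an allow list entry; the file path below is a placeholder for a CA certificate you provide (for example, a vendor's attestation CA in PEM format):

```yaml
authenticationSecondFactor:
  webauthn:
    attestationAllowedCas:
    - /path/to/allowed-attestation-ca.pem
```

--set "authenticationSecondFactor.webauthn.attestationAllowedCas[0]=/path/to/allowed-attestation-ca.pem"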

authenticationSecondFactor.webauthn.attestationDeniedCas

| Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
|---|---|---|---|---|
| array | [] | No | auth_service.authentication.webauthn.attestation_denied_cas | |

authenticationSecondFactor.webauthn.attestationDeniedCas is an optional deny list of certificate authorities (as local file paths or in-line PEM certificate strings) for device verification. This field allows you to forbid specific device models and vendors, while allowing all others (provided they clear attestation_allowed_cas as well). Devices within this list will be rejected during registration. By default, no devices are forbidden.
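A sketch of a deny list entry; as above, the file path is a placeholder for a CA certificate you provide:

```yaml
authenticationSecondFactor:
  webauthn:
    attestationDeniedCas:
    - /path/to/denied-attestation-ca.pem
```

--set "authenticationSecondFactor.webauthn.attestationDeniedCas[0]=/path/to/denied-attestation-ca.pem"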

proxyListenerMode

| Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
|---|---|---|---|---|
| string | nil | no | auth_service.proxy_listener_mode | |

proxyListenerMode controls proxy TLS routing used by Teleport. The only supported value is multiplex.

proxyListenerMode: multiplex
--set proxyListenerMode=multiplex

sessionRecording

| Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
|---|---|---|---|---|
| string | "" | no | auth_service.session_recording | |

sessionRecording controls the session_recording field in the teleport.yaml configuration. It is passed as-is in the configuration. For possible values, see the Teleport Configuration Reference.

sessionRecording: proxy
--set sessionRecording=proxy

separatePostgresListener

| Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
|---|---|---|---|---|
| bool | false | no | proxy_service.postgres_listen_addr | |

separatePostgresListener controls whether Teleport will multiplex PostgreSQL traffic for Teleport Database Access on a TLS listener separate from Teleport's web UI.

When separatePostgresListener is false (the default), PostgreSQL traffic will be directed to port 443 (the default Teleport web UI port). This works in situations where Teleport is terminating its own TLS traffic, i.e. when using certificates from Let's Encrypt or providing a certificate/private key pair via Teleport's proxy_service.https_keypairs config.

When separatePostgresListener is true, PostgreSQL traffic will be directed to a separate Postgres-only listener on port 5432. This also adds the port to the Service that the chart creates. This is useful when terminating TLS at a load balancer in front of Teleport, such as when using AWS ACM.

These settings will not apply if proxyListenerMode is set to multiplex.

separatePostgresListener: true
--set separatePostgresListener=true

separateMongoListener

| Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
|---|---|---|---|---|
| bool | false | no | proxy_service.mongo_listen_addr | |

separateMongoListener controls whether Teleport will multiplex MongoDB traffic for Teleport Database Access on a TLS listener separate from Teleport's web UI.

When separateMongoListener is false (the default), MongoDB traffic will be directed to port 443 (the default Teleport web UI port). This works in situations where Teleport is terminating its own TLS traffic, i.e. when using certificates from Let's Encrypt or providing a certificate/private key pair via Teleport's proxy_service.https_keypairs config.

When separateMongoListener is true, MongoDB traffic will be directed to a separate Mongo-only listener on port 27017. This also adds the port to the Service that the chart creates. This is useful when terminating TLS at a load balancer in front of Teleport, such as when using AWS ACM.

These settings will not apply if proxyListenerMode is set to multiplex.

separateMongoListener: true
--set separateMongoListener=true

publicAddr

| Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
|---|---|---|---|---|
| list[string] | [] | no | proxy_service.public_addr | |

publicAddr controls the advertised addresses for TLS connections.

When publicAddr is not set, the address used is clusterName on port 443.

publicAddr: ["loadbalancer.example.com:443"]
--set publicAddr[0]=loadbalancer.example.com:443

kubePublicAddr

| Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
|---|---|---|---|---|
| list[string] | [] | no | proxy_service.kube_public_addr | |

kubePublicAddr controls the advertised addresses for the Kubernetes proxy. This setting will not apply if proxyListenerMode is set to multiplex.

When kubePublicAddr is not set, the addresses are inferred from publicAddr if set, else clusterName is used. Default port is 3026.

kubePublicAddr: ["loadbalancer.example.com:3026"]
--set kubePublicAddr[0]=loadbalancer.example.com:3026

mongoPublicAddr

| Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
|---|---|---|---|---|
| list[string] | [] | no | proxy_service.mongo_public_addr | |

mongoPublicAddr controls the advertised addresses for MongoDB clients. This setting will not apply if proxyListenerMode is set to multiplex, and it requires separateMongoListener to be enabled.

When mongoPublicAddr is not set, the address is inferred from clusterName. Default port is 27017.

mongoPublicAddr: ["loadbalancer.example.com:27017"]
--set mongoPublicAddr[0]=loadbalancer.example.com:27017

mysqlPublicAddr

| Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
|---|---|---|---|---|
| list[string] | [] | no | proxy_service.mysql_public_addr | |

mysqlPublicAddr controls the advertised addresses for the MySQL proxy. This setting will not apply if proxyListenerMode is set to multiplex.

When mysqlPublicAddr is not set, the addresses are inferred from publicAddr if set, else clusterName is used. Default port is 3036.

mysqlPublicAddr: ["loadbalancer.example.com:3036"]
--set mysqlPublicAddr[0]=loadbalancer.example.com:3036

postgresPublicAddr

| Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
|---|---|---|---|---|
| list[string] | [] | no | proxy_service.postgres_public_addr | |

postgresPublicAddr controls the advertised addresses for PostgreSQL clients. This setting will not apply if proxyListenerMode is set to multiplex, and it requires separatePostgresListener to be enabled.

When postgresPublicAddr is not set, the addresses are inferred from publicAddr if set, else clusterName is used. Default port is 5432.

postgresPublicAddr: ["loadbalancer.example.com:5432"]
--set postgresPublicAddr[0]=loadbalancer.example.com:5432

sshPublicAddr

| Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
|---|---|---|---|---|
| list[string] | [] | no | proxy_service.ssh_public_addr | |

sshPublicAddr controls the advertised addresses for SSH clients. This is also used by the tsh client. This setting will not apply if proxyListenerMode is set to multiplex.

When sshPublicAddr is not set, the addresses are inferred from publicAddr if set, else clusterName is used. Default port is 3023.

sshPublicAddr: ["loadbalancer.example.com:3023"]
--set sshPublicAddr[0]=loadbalancer.example.com:3023

tunnelPublicAddr

| Type | Default value | Required? | teleport.yaml equivalent | Can be used in custom mode? |
|---|---|---|---|---|
| list[string] | [] | no | proxy_service.tunnel_public_addr | |

tunnelPublicAddr controls the advertised addresses to trusted clusters or nodes joining via node-tunneling. This setting will not apply if proxyListenerMode is set to multiplex.

When tunnelPublicAddr is not set, the addresses are inferred from publicAddr if set, else clusterName is used. Default port is 3024.

tunnelPublicAddr: ["loadbalancer.example.com:3024"]
--set tunnelPublicAddr[0]=loadbalancer.example.com:3024

enterprise

| Type | Default value | Can be used in custom mode? |
|---|---|---|
| bool | false | |

enterprise controls whether to use Teleport Community Edition or Teleport Enterprise.

Setting enterprise to true will use the Teleport Enterprise image.

You will also need to download your Enterprise license from the Teleport dashboard and add it as a Kubernetes secret to use this:

kubectl --namespace teleport create secret generic license --from-file=/path/to/downloaded/license.pem
Tip

If you installed the Teleport chart into a specific namespace, the license secret you create must also be added to the same namespace.

Note

The file added to the secret must be called license.pem. If you have renamed it, you can specify the filename to use in the secret creation command:

kubectl --namespace teleport create secret generic license --from-file=license.pem=/path/to/downloaded/this-is-my-teleport-license.pem
enterprise: true
--set enterprise=true

teleportVersionOverride

| Type | Default value | Can be used in custom mode? |
|---|---|---|
| string | nil | |

Normally the version of Teleport being used will match the version of the chart being installed. If you install chart version 7.0.0, you'll be using Teleport 7.0.0. Upgrading the Helm chart will use the latest version from the repo.

You can optionally override this to use a different published Teleport Docker image tag like 6.0.2 or 7.

See the Teleport documentation for information on Docker image versions.

teleportVersionOverride: "7"
--set teleportVersionOverride="7"

acme

| Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
|---|---|---|---|
| bool | false | | proxy_service.acme.enabled |

ACME is a protocol for getting Web X.509 certificates.

Setting acme to true enables the ACME protocol and will attempt to get a free TLS certificate from Let's Encrypt. Setting acme to false (the default) will cause Teleport to generate and use self-signed certificates for its web UI.

Note

ACME can only be used for single-pod clusters. It is not suitable for use in HA configurations.

Warning

Using a self-signed TLS certificate and disabling TLS verification is OK for testing, but is not viable when running a production Teleport cluster as it will drastically reduce security. You must configure valid TLS certificates on your Teleport cluster for production workloads.

One option might be to use Teleport's built-in ACME support or enable cert-manager support.

acmeEmail

| Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
|---|---|---|---|
| string | nil | | proxy_service.acme.email |

acmeEmail is the email address to provide during certificate registration (this is a Let's Encrypt requirement).

acmeURI

| Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
|---|---|---|---|
| string | Let's Encrypt production server | | proxy_service.acme.uri |

acmeURI is the ACME server to use for getting certificates.

As an example, this can be overridden to use the Let's Encrypt staging server for testing.

You can also use any other ACME-compatible server.

acme: true
acmeEmail: [email protected]
acmeURI: https://acme-staging-v02.api.letsencrypt.org/directory
--set acme=true \
--set [email protected] \
--set acmeURI=https://acme-staging-v02.api.letsencrypt.org/directory

podSecurityPolicy

podSecurityPolicy.enabled

| Type | Default value | Can be used in custom mode? |
|---|---|---|
| bool | true | |

By default, Teleport charts also install a podSecurityPolicy.

To disable this, you can set enabled to false.

Kubernetes reference

podSecurityPolicy:
  enabled: false
--set podSecurityPolicy.enabled=false

labels

| Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
|---|---|---|---|
| object | {} | | kubernetes_service.labels |

labels can be used to add a map of key-value pairs relating to the Teleport cluster being deployed. These labels can then be used with Teleport's RBAC policies to define access rules for the cluster.

Note

These are Teleport-specific RBAC labels, not Kubernetes labels.

labels:
  environment: production
  region: us-east
--set labels.environment=production \
--set labels.region=us-east

chartMode

| Type | Default value |
|---|---|
| string | standalone |

chartMode is used to configure the chart's operation mode. You can find more information about each mode on its specific guide page:

| chartMode | Guide |
|---|---|
| standalone | Getting Started - Kubernetes with SSO |
| aws | Running an HA Teleport cluster using an AWS EKS Cluster |
| gcp | Running an HA Teleport cluster using a Google Cloud GKE cluster |
| custom | Running a Teleport cluster with a custom config |

persistence

persistence.enabled

| Type | Default value | Can be used in custom mode? |
|---|---|---|
| bool | true | |

persistence.enabled can be used to enable data persistence using either a new or pre-existing PersistentVolumeClaim.

persistence:
  enabled: true
--set persistence.enabled=true

persistence.existingClaimName

| Type | Default value | Can be used in custom mode? |
|---|---|---|
| string | nil | |

persistence.existingClaimName can be used to provide the name of a pre-existing PersistentVolumeClaim to use if desired.

The default is left blank, which will automatically create a PersistentVolumeClaim to use for Teleport storage in standalone or custom mode.

persistence:
  existingClaimName: my-existing-pvc-name
--set persistence.existingClaimName=my-existing-pvc-name

persistence.volumeSize

| Type | Default value | Can be used in custom mode? |
|---|---|---|
| string | 10Gi | |

You can set volumeSize to request a different size of persistent volume when installing the Teleport chart in standalone or custom mode.

Note

volumeSize will be ignored if existingClaimName is set.

persistence:
  volumeSize: 50Gi

--set persistence.volumeSize=50Gi

aws

| Can be used in custom mode? | teleport.yaml equivalent |
|---|---|
| | See Using DynamoDB and Using Amazon S3 for details |

aws settings are described in the AWS guide: Running an HA Teleport cluster using an AWS EKS Cluster

gcp

| Can be used in custom mode? | teleport.yaml equivalent |
|---|---|
| | See Using Firestore and Using GCS for details |

gcp settings are described in the GCP guide: Running an HA Teleport cluster using a Google Cloud GKE cluster

highAvailability

highAvailability.replicaCount

| Type | Default value | Can be used in custom mode? |
|---|---|---|
| int | 1 | ✅ (when using HA storage) |

highAvailability.replicaCount can be used to set the number of replicas used in the deployment.

Set to a number higher than 1 for a high availability mode where multiple Teleport pods will be deployed and connections will be load balanced between them.

Note

Setting highAvailability.replicaCount to a value higher than 1 will disable the use of ACME certs.

Sizing guidelines

As a rough guide, we recommend configuring one replica per distinct availability zone where your cluster has worker nodes.

Two replicas/availability zones are fine for smaller workloads; 3-5 replicas/availability zones are more appropriate for bigger clusters with more traffic.

Warning

When using custom mode, you must use highly-available storage (e.g. etcd, DynamoDB or Firestore) for multiple replicas to be supported.

Information on supported Teleport storage backends

Manually configuring NFS-based storage or ReadWriteMany volume claims is NOT supported for an HA deployment and will result in errors.

highAvailability:
  replicaCount: 3
--set highAvailability.replicaCount=3

highAvailability.requireAntiAffinity

| Type | Default value | Can be used in custom mode? |
|---|---|---|
| bool | false | ✅ (when using HA storage) |

Kubernetes reference

Setting highAvailability.requireAntiAffinity to true will use requiredDuringSchedulingIgnoredDuringExecution to require that multiple Teleport pods must not be scheduled on the same physical host.

Warning

This can result in Teleport pods failing to be scheduled in very small clusters or during node downtime, so should be used with caution.

Setting highAvailability.requireAntiAffinity to false (the default) uses preferredDuringSchedulingIgnoredDuringExecution to make node anti-affinity a soft requirement.

Note

This setting only has any effect when highAvailability.replicaCount is greater than 1.

highAvailability:
  requireAntiAffinity: true
--set highAvailability.requireAntiAffinity=true

highAvailability.podDisruptionBudget

highAvailability.podDisruptionBudget.enabled

| Type | Default value | Can be used in custom mode? |
|---|---|---|
| bool | false | ✅ (when using HA storage) |

Kubernetes reference

Enable a Pod Disruption Budget for the Teleport Pod to ensure HA during voluntary disruptions.

highAvailability:
  podDisruptionBudget:
    enabled: true
--set highAvailability.podDisruptionBudget.enabled=true

highAvailability.podDisruptionBudget.minAvailable

| Type | Default value | Can be used in custom mode? |
|---|---|---|
| int | 1 | ✅ (when using HA storage) |

Kubernetes reference

Ensures that this number of replicas is available during voluntary disruptions. This can be an absolute number of replicas or a percentage.

highAvailability:
  podDisruptionBudget:
    minAvailable: 1
--set highAvailability.podDisruptionBudget.minAvailable=1

highAvailability.certManager

See the cert-manager docs for more information.

highAvailability.certManager.enabled

| Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
|---|---|---|---|
| bool | false | | proxy_service.https_keypairs (to provide your own certificates) |

Setting highAvailability.certManager.enabled to true will use cert-manager to provision a TLS certificate for a Teleport cluster deployed in HA mode.

Installing cert-manager

You must install and configure cert-manager in your Kubernetes cluster yourself.

See the cert-manager Helm install instructions and the relevant sections of the AWS and GCP guides for more information.

highAvailability.certManager.addCommonName

| Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
|---|---|---|---|
| bool | false | | proxy_service.https_keypairs (to provide your own certificates) |

Setting highAvailability.certManager.addCommonName to true will instruct cert-manager to set the commonName field in its certificate signing request to the issuing CA.

Enabling common name field

You must install and configure cert-manager in your Kubernetes cluster yourself.

See the cert-manager Helm install instructions and the relevant sections of the AWS and GCP guides for more information.

highAvailability:
  certManager:
    enabled: true
    addCommonName: true
    issuerName: letsencrypt-production
--set highAvailability.certManager.enabled=true \
--set highAvailability.certManager.addCommonName=true \
--set highAvailability.certManager.issuerName=letsencrypt-production

highAvailability.certManager.issuerName

| Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
|---|---|---|---|
| string | nil | | None |

Sets the name of the cert-manager Issuer or ClusterIssuer to use for issuing certificates.

Configuring an Issuer

You must install and configure an appropriate Issuer supporting a DNS01 challenge yourself.

Please see the cert-manager DNS01 docs and the relevant sections of the AWS and GCP guides for more information.

highAvailability:
  certManager:
    enabled: true
    issuerName: letsencrypt-production
--set highAvailability.certManager.enabled=true \
--set highAvailability.certManager.issuerName=letsencrypt-production

highAvailability.certManager.issuerKind

| Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
|---|---|---|---|
| string | Issuer | | None |

Sets the Kind of Issuer to be used when issuing certificates with cert-manager. Defaults to Issuer to keep permissions scoped to a single namespace.

highAvailability:
  certManager:
    issuerKind: ClusterIssuer

--set highAvailability.certManager.issuerKind=ClusterIssuer

highAvailability.certManager.issuerGroup

| Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
|---|---|---|---|
| string | cert-manager.io | | None |

Sets the Group of Issuer to be used when issuing certificates with cert-manager. Defaults to cert-manager.io to use built-in issuers.

highAvailability:
  certManager:
    issuerGroup: cert-manager.io

--set highAvailability.certManager.issuerGroup=cert-manager.io

tls.existingSecretName

| Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
|---|---|---|---|
| string | "" | | proxy_service.https_keypairs |

tls.existingSecretName tells Teleport to use an existing Kubernetes TLS secret to secure its web UI using HTTPS. This can be set to use a TLS certificate issued by a trusted internal CA rather than a public-facing CA like Let's Encrypt.

You should create the secret in the same namespace as Teleport using a command like this:

kubectl create secret tls my-tls-secret --cert=/path/to/cert/file --key=/path/to/key/file

See https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets for more information.

tls:
  existingSecretName: my-tls-secret
--set tls.existingSecretName=my-tls-secret

tls.existingCASecretName

| Type | Default value | Can be used in custom mode? |
|---|---|---|
| string | "" | |

tls.existingCASecretName sets the SSL_CERT_FILE environment variable to load a trusted CA or bundle in PEM format into Teleport pods. This can be set to inject a root and/or intermediate CA so that Teleport can build a full trust chain on startup.

This is likely to be needed if Teleport fails to start when tls.existingSecretName is set with a User Message: unable to verify HTTPS certificate chain error in the pod logs.

You should create the secret in the same namespace as Teleport using a command like this:

kubectl create secret generic my-root-ca --from-file=ca.pem=/path/to/root-ca.pem

The filename used for the root CA in the secret must be ca.pem.

tls:
  existingCASecretName: my-root-ca
--set tls.existingCASecretName=my-root-ca

image

| Type | Default value | Can be used in custom mode? |
|---|---|---|
| string | quay.io/gravitational/teleport | |

image sets the container image used for Teleport Community pods in the cluster.

You can override this to use your own Teleport Community image rather than a Teleport-published image.

image: my.docker.registry/teleport-community-image-name

--set image=my.docker.registry/teleport-community-image-name

enterpriseImage

| Type | Default value | Can be used in custom mode? |
|---|---|---|
| string | quay.io/gravitational/teleport-ent | |

enterpriseImage sets the container image used for Teleport Enterprise pods in the cluster.

You can override this to use your own Teleport Enterprise image rather than a Teleport-published image.

enterpriseImage: my.docker.registry/teleport-enterprise-image-name

--set enterpriseImage=my.docker.registry/teleport-enterprise-image

log

log.level

Note

This field used to be called logLevel. For backwards compatibility this name can still be used, but we recommend changing your values file to use log.level.

| Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
|---|---|---|---|
| string | INFO | | teleport.log.severity |

log.level sets the log level used for the Teleport process.

Available log levels (in order of most to least verbose) are: DEBUG, INFO, WARNING, ERROR.

The default is INFO, which is recommended in production.

DEBUG is useful during first-time setup or to see more detailed logs for debugging.

log:
  level: DEBUG

--set log.level=DEBUG

log.output

| Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
|---|---|---|---|
| string | stderr | | teleport.log.output |

log.output sets the output destination for the Teleport process.

This can be set to any of the built-in values: stdout, stderr or syslog to use that destination.

The value can also be set to a file path (such as /var/log/teleport.log) to write logs to a file. Bear in mind that a few service startup messages will still go to stderr for resilience.

log:
  output: stderr

--set log.output=stderr

log.format

| Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
|---|---|---|---|
| string | text | | teleport.log.format.output |

log.format sets the output type for the Teleport process.

Possible values are text (default) or json.

log:
  format: json

--set log.format=json

log.extraFields

| Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
|---|---|---|---|
| list | ["timestamp", "level", "component", "caller"] | | teleport.log.format.extra_fields |

log.extraFields sets the fields used in logging for the Teleport process.

See the Teleport config file reference for more details on possible values for extra_fields.

log:
  extraFields: ["timestamp", "level"]

--set "log.extraFields[0]=timestamp" \
--set "log.extraFields[1]=level"

affinity

| Type | Default value | Can be used in custom mode? |
|---|---|---|
| object | {} | |

Kubernetes reference

Kubernetes affinity to set for pod assignments.

Note

You cannot set both affinity and highAvailability.requireAntiAffinity as they conflict with each other. Only set one or the other.

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: gravitational.io/dedicated
          operator: In
          values:
          - teleport
--set affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key=gravitational.io/dedicated \
--set affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].operator=In \
--set affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].values[0]=teleport

annotations.config

| Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
|---|---|---|---|
| object | {} | | None |

Kubernetes reference

Kubernetes annotations which should be applied to the ConfigMap created by the chart.

Note

These annotations will not be applied in custom mode, as the ConfigMap is not managed by the chart. In this instance, you should apply annotations manually to your created ConfigMap.

annotations:
  config:
    kubernetes.io/annotation: value
--set annotations.config."kubernetes\.io\/annotation"=value
Escaping values

You must escape values entered on the command line correctly for Helm's CLI to understand them. We recommend using a values.yaml file instead to avoid confusion and errors.

annotations.deployment

| Type | Default value | Can be used in custom mode? |
|---|---|---|
| object | {} | |

Kubernetes reference

Kubernetes annotations which should be applied to the Deployment created by the chart.

annotations:
  deployment:
    kubernetes.io/annotation: value
--set annotations.deployment."kubernetes\.io\/annotation"=value
Escaping values

You must escape values entered on the command line correctly for Helm's CLI to understand them. We recommend using a values.yaml file instead to avoid confusion and errors.

annotations.pod

| Type | Default value | Can be used in custom mode? |
|---|---|---|
| object | {} | |

Kubernetes reference

Kubernetes annotations which should be applied to each Pod created by the chart.

annotations:
  pod:
    kubernetes.io/annotation: value
--set annotations.pod."kubernetes\.io\/annotation"=value
Escaping values

You must escape values entered on the command line correctly for Helm's CLI to understand them. We recommend using a values.yaml file instead to avoid confusion and errors.

annotations.service

| Type | Default value | Can be used in custom mode? |
|---|---|---|
| object | {} | |

Kubernetes reference

Kubernetes annotations which should be applied to the Service created by the chart.

annotations:
  service:
    kubernetes.io/annotation: value
--set annotations.service."kubernetes\.io\/annotation"=value
Escaping values

You must escape values entered on the command line correctly for Helm's CLI to understand them. We recommend using a values.yaml file instead to avoid confusion and errors.

annotations.serviceAccount

| Type | Default value | Can be used in custom mode? |
|---|---|---|
| object | {} | |

Kubernetes reference

Kubernetes annotations which should be applied to the serviceAccount created by the chart.

annotations:
  serviceAccount:
    kubernetes.io/annotation: value
--set annotations.serviceAccount."kubernetes\.io\/annotation"=value
Escaping values

You must escape values entered on the command line correctly for Helm's CLI to understand them. We recommend using a values.yaml file instead to avoid confusion and errors.

annotations.certSecret

| Type | Default value | Can be used in custom mode? |
|---|---|---|
| object | {} | |

Kubernetes reference

Kubernetes annotations which should be applied to the secret generated by cert-manager from the certificate created by the chart. Only valid when highAvailability.certManager.enabled is set to true and requires cert-manager v1.5.0+.

annotations:
  certSecret:
    kubernetes.io/annotation: value
--set annotations.certSecret."kubernetes\.io\/annotation"=value
Escaping values

You must escape values entered on the command line correctly for Helm's CLI to understand them. We recommend using a values.yaml file instead to avoid confusion and errors.

service.type

| Type | Default value | Required? | Can be used in custom mode? |
|------|---------------|-----------|-----------------------------|
| `string` | `LoadBalancer` | Yes | |

Kubernetes reference

Sets the type of the Kubernetes Service created by the chart.

service:
  type: LoadBalancer
--set service.type=LoadBalancer

service.spec.loadBalancerIP

| Type | Default value | Required? | Can be used in custom mode? |
|------|---------------|-----------|-----------------------------|
| `string` | `nil` | No | |

Kubernetes reference

Sets the `loadBalancerIP` field on the Service to request a specific load balancer IP address. This only takes effect when `service.type` is `LoadBalancer`, and support for it depends on the cloud provider.

service:
  spec:
    loadBalancerIP: 1.2.3.4
--set service.spec.loadBalancerIP=1.2.3.4

extraArgs

| Type | Default value | Can be used in custom mode? |
|------|---------------|-----------------------------|
| `list` | `[]` | |

A list of extra arguments to pass to the teleport start command when running a Teleport Pod.

extraArgs:
- "--bootstrap=/etc/teleport-bootstrap/roles.yaml"
--set "extraArgs={--bootstrap=/etc/teleport-bootstrap/roles.yaml}"
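The `--bootstrap` flag in this example assumes the roles file is already present inside the pod. One way to get it there is to combine `extraArgs` with `extraVolumes` and `extraVolumeMounts`; a sketch, where the ConfigMap name is hypothetical:

```yaml
extraArgs:
- "--bootstrap=/etc/teleport-bootstrap/roles.yaml"
extraVolumes:
- name: teleport-bootstrap
  configMap:
    name: teleport-bootstrap   # hypothetical ConfigMap containing roles.yaml
extraVolumeMounts:
- name: teleport-bootstrap
  mountPath: /etc/teleport-bootstrap
```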

extraEnv

| Type | Default value | Can be used in custom mode? |
|------|---------------|-----------------------------|
| `list` | `[]` | |

Kubernetes reference

A list of extra environment variables to be set on the main Teleport container.

extraEnv:
- name: MY_ENV
  value: my-value
--set "extraEnv[0].name=MY_ENV" \
--set "extraEnv[0].value=my-value"

extraVolumes

| Type | Default value | Can be used in custom mode? |
|------|---------------|-----------------------------|
| `list` | `[]` | |

Kubernetes reference

A list of extra Kubernetes Volumes which should be available to any Pod created by the chart. These volumes will also be available to any initContainers configured by the chart.

extraVolumes:
- name: myvolume
  secret:
    secretName: mysecret
--set "extraVolumes[0].name=myvolume" \
--set "extraVolumes[0].secret.secretName=mysecret"

extraVolumeMounts

| Type | Default value | Can be used in custom mode? |
|------|---------------|-----------------------------|
| `list` | `[]` | |

Kubernetes reference

A list of extra Kubernetes volume mounts which should be mounted into any Pod created by the chart. These volume mounts will also be mounted into any initContainers configured by the chart.

extraVolumeMounts:
- name: myvolume
  mountPath: /path/to/mount/volume
--set "extraVolumeMounts[0].name=myvolume" \
--set "extraVolumeMounts[0].mountPath=/path/to/mount/volume"

imagePullPolicy

| Type | Default value | Can be used in custom mode? |
|------|---------------|-----------------------------|
| `string` | `IfNotPresent` | |

Kubernetes reference

Overrides the `imagePullPolicy` for any pods created by the chart.

imagePullPolicy: Always
--set imagePullPolicy=Always

initContainers

| Type | Default value | Can be used in custom mode? |
|------|---------------|-----------------------------|
| `list` | `[]` | |

Kubernetes reference

A list of initContainers which will be run before the main Teleport container in any pod created by the chart.

initContainers:
- name: teleport-init
  image: alpine
  args: ['echo', 'test']
--set "initContainers[0].name=teleport-init" \
--set "initContainers[0].image=alpine" \
--set "initContainers[0].args={echo,test}"

postStart

Kubernetes reference

A postStart lifecycle handler to be configured on the main Teleport container.

postStart:
  command:
  - echo
  - foo
--set "postStart.command[0]=echo" \
--set "postStart.command[1]=foo"

resources

| Type | Default value | Can be used in custom mode? |
|------|---------------|-----------------------------|
| `object` | `{}` | |

Kubernetes reference

Resource requests/limits which should be configured for each container inside the pod. These resource limits will also be applied to initContainers.

resources:
  requests:
    cpu: 1
    memory: 2Gi
--set resources.requests.cpu=1 \
--set resources.requests.memory=2Gi

securityContext

| Type | Default value | Can be used in custom mode? |
|------|---------------|-----------------------------|
| `object` | `{}` | |

Kubernetes reference

The `securityContext` is applied to any pods created by the chart, including their initContainers.

securityContext:
  runAsUser: 99
--set securityContext.runAsUser=99

tolerations

| Type | Default value | Can be used in custom mode? |
|------|---------------|-----------------------------|
| `list` | `[]` | |

Kubernetes reference

Kubernetes Tolerations to set for pod assignment.

tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "teleport"
  effect: "NoSchedule"
--set "tolerations[0].key=dedicated" \
--set "tolerations[0].operator=Equal" \
--set "tolerations[0].value=teleport" \
--set "tolerations[0].effect=NoSchedule"
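Tolerations only take effect when the target nodes carry a matching taint. As a sketch, a taint matching the example above could be applied with `kubectl` (the node name is illustrative):

```shell
# Taint a node so that only pods tolerating dedicated=teleport:NoSchedule
# are scheduled onto it
kubectl taint nodes my-node dedicated=teleport:NoSchedule
```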

priorityClassName

| Type | Default value | Can be used in custom mode? |
|------|---------------|-----------------------------|
| `string` | `""` | |

Kubernetes reference

Kubernetes PriorityClass to assign to pods created by the chart.

priorityClassName: "system-cluster-critical"
--set priorityClassName=system-cluster-critical

probeTimeoutSeconds

| Type | Default value | Can be used in custom mode? |
|------|---------------|-----------------------------|
| `integer` | `1` | |

Kubernetes reference

Timeout, in seconds, for the Kubernetes liveness and readiness probes.

probeTimeoutSeconds: 5
--set probeTimeoutSeconds=5