The teleport-kube-agent
Helm chart is used to configure a Teleport instance
that runs in a remote Kubernetes cluster and joins back to a Teleport cluster to
provide access to services running there.
You can browse the source on GitHub.
The teleport-kube-agent
chart can run any or all of three Teleport services:
Teleport service | Name for roles and tctl tokens add | Purpose |
---|---|---|
kubernetes_service | kube | Uses Teleport to handle authentication with and proxy access to a Kubernetes cluster |
application_service | app | Uses Teleport to handle authentication with and proxy access to web-based applications |
database_service | db | Uses Teleport to handle authentication with and proxy access to databases |
This reference details available values for the teleport-kube-agent
chart.
Backing up production instances, environments, and/or settings before making permanent modifications is encouraged as a best practice. Doing so allows you to roll back to an existing state if needed.
roles
To preserve backwards compatibility with older chart versions, this parameter is not mandatory, but we recommend setting it.
Type | Default value |
---|---|
string | kube |
roles
is a comma-separated list of services which should be enabled when running the teleport-kube-agent
chart.
Services | Value for roles | Mandatory additional settings for this role |
---|---|---|
Teleport Kubernetes service | kube | kubeClusterName |
Teleport Application service | app | apps |
Teleport Database service | db | databases |
roles: kube,app,db
--set roles=kube\,app\,db
When specifying multiple roles using --set
syntax, you must escape the commas using a backslash (\
).
This is a quirk of Helm's CLI parser.
If you specify a role here, you may also need to specify some other settings which are detailed in this reference.
authToken
Type | Default value | Required? |
---|---|---|
string | nil | Yes |
authToken
provides a Teleport join token which will be used to join the Teleport instance to a Teleport cluster.
This value must be provided for the chart to work. The token that you use must also be valid for every Teleport service that you
are trying to add with the teleport-kube-agent
chart. Here are a few examples:
Services | Service Name | tctl tokens add example | teleport.yaml static token example |
---|---|---|---|
Kubernetes | kube | tctl tokens add --type=kube | "kube:<replace-with-actual-token>" |
Kubernetes, Application | kube,app | tctl tokens add --type=kube,app | "kube,app:<replace-with-actual-token>" |
Kubernetes, Application, Database | kube,app,db | tctl tokens add --type=kube,app,db | "kube,app,db:<replace-with-actual-token>" |
When you use a token, all of the services that it provides must be used.
For example, you cannot use a token of type app,db
to only join a Teleport app
service by itself. It must join both app
and db
services at once.
Also, each static token you configure must be unique, as the token itself is used to define which services will be supported.
You cannot reuse the same static token and specify a different set of services.
If you do not have the correct services (Teleport refers to these internally as Roles
) assigned to your join token, the Teleport instance will
fail to join the Teleport cluster.
authToken: <replace-with-actual-token>
--set authToken=<replace-with-actual-token>
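If you are configuring static tokens instead, the teleport.yaml column in the table above corresponds to an entry in the Auth Service's token list. A minimal sketch, assuming a static token that allows all three services (replace the placeholder with your actual token):
auth_service:
  tokens:
  - "kube,app,db:<replace-with-actual-token>"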
proxyAddr
Type | Default value | Required? |
---|---|---|
string | nil | Yes |
proxyAddr
provides the public-facing Teleport proxy endpoint which should be used to join the cluster. This is the same URL that is used
to access the web UI of your Teleport cluster. It is the same as the value configured for proxy_service.public_addr
in a traditional
Teleport cluster. The port used is usually either 3080 or 443.
Here are a few examples:
Deployment method | Example proxy_service.public_addr |
---|---|
On-prem Teleport cluster | teleport.example.com:3080 |
Teleport Cloud cluster | example.teleport.sh:443 |
teleport-cluster Helm chart | teleport.example.com:443 |
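For example, to point the agent at a cluster whose web UI is served at teleport.example.com:443 (substitute your own proxy address):
proxyAddr: teleport.example.com:443
--set proxyAddr=teleport.example.com:443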
kubeClusterName
Type | Default value | Required? |
---|---|---|
string | nil | When kube chart role is used |
kubeClusterName
sets the name used for the Kubernetes cluster proxied by the Teleport agent. This name will be shown to Teleport users
connecting to the cluster.
kubeClusterName: my-gke-cluster
--set kubeClusterName=my-gke-cluster
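Putting the required values together, a minimal install might look like the following sketch. It assumes you have already added the Teleport Helm chart repository under the alias teleport; the release name and namespace here are placeholders you can change:
helm install teleport-agent teleport/teleport-kube-agent \
  --create-namespace \
  --namespace teleport-agent \
  --set roles=kube \
  --set proxyAddr=teleport.example.com:443 \
  --set authToken=<replace-with-actual-token> \
  --set kubeClusterName=my-gke-cluster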
apps
Type | Default value | Required? |
---|---|---|
list | [] | When the app chart role is used, at least one of apps and appResources is required. |
apps
is a YAML list object detailing the applications that should be proxied by Teleport Application access.
You can specify multiple apps by adding additional list elements.
apps:
- name: grafana
  uri: http://localhost:3000
  labels:
    purpose: monitoring
- name: jenkins
  uri: http://jenkins:8080
  labels:
    purpose: ci
YAML is very sensitive to correct spacing. When specifying lists in a values.yaml
file, you might like
to use a linter to validate your YAML list and ensure that it is correctly formatted.
--set "apps[0].name=grafana" \--set "apps[0].uri=http://localhost:3000" \--set "apps[0].purpose=monitoring" \--set "apps[1].name=grafana" \--set "apps[1].uri=http://jenkins:8080" \--set "apps[1].purpose=ci"
Note that when using --set
syntax, YAML list elements must be indexed starting at 0
.
You can see a list of all the supported values which can be used in a Teleport application access configuration here.
appResources
Type | Default value | Required? |
---|---|---|
list | [] | When the app chart role is used, at least one of apps and appResources is required. |
appResources
is a YAML list object detailing the resource selectors of the applications that should be proxied by Teleport Application Access.
You can specify multiple selectors by including additional list elements.
appResources:
- labels:
    "env": "prod"
- labels:
    "env": "test"
--set "appResources[0].labels.env=prod" \
--set "appResources[1].labels.env=test"
Note that when using --set
syntax, YAML list elements must be indexed starting at 0
.
Once appResources is set, you can dynamically register applications with tsh by following this guide.
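For illustration, a dynamic app resource along these lines (the name and URI are hypothetical, and the exact schema is described in the linked guide) carries an env: prod label and would therefore match the first selector above:
kind: app
version: v3
metadata:
  name: grafana
  labels:
    env: prod
spec:
  uri: http://localhost:3000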
awsDatabases
This section configures database auto-discovery, which is currently supported only on AWS. You can configure databases on other platforms using the databases section below.
For AWS database auto-discovery to work, your agent pods will need to use a role which has appropriate IAM permissions as per the database documentation.
After configuring a role, you can use an eks.amazonaws.com/role-arn
annotation with the annotations.serviceAccount
value to associate it with the service account and grant permissions:
annotations:
  serviceAccount:
    eks.amazonaws.com/role-arn: arn:aws:iam::1234567890:role/my-rds-autodiscovery-role
Type | Default value | Required? |
---|---|---|
list | [] | When the db chart role is used, at least one of databases, awsDatabases, or dbResources is required. |
awsDatabases
is a YAML list object detailing the filters for the AWS databases that should be discovered and proxied by Teleport Database access.
You can specify multiple database filters by adding additional list elements.
- types is a list containing the types of AWS databases that should be discovered.
- regions is a list of AWS regions which should be scanned for databases.
- tags can be used to set AWS tags that must be matched for databases to be discovered.
awsDatabases:
- types: ["rds"]
  regions: ["us-east-1", "us-west-2"]
  tags:
    "environment": "production"
- types: ["rds"]
  regions: ["us-east-1"]
  tags:
    "environment": "dev"
- types: ["rds"]
  regions: ["eu-west-1"]
  tags:
    "*": "*"
YAML is very sensitive to correct spacing. When specifying lists in a values.yaml
file, you might like
to use a linter to validate your YAML list and ensure that it is correctly formatted.
--set "awsDatabases[0].types[0]=rds" \
--set "awsDatabases[0].regions[0]=us-east-1" \
--set "awsDatabases[0].regions[1]=us-west-2" \
--set "awsDatabases[0].tags[0].environment=production" \
--set "awsDatabases[1].types[0]=rds" \
--set "awsDatabases[1].regions[0]=us-east-1" \
--set "awsDatabases[1].tags[0].environment=dev" \
--set "awsDatabases[2].types[0]=rds" \
--set "awsDatabases[2].regions[0]=eu-west-1" \
--set "awsDatabases[2].tags[0].*=*"
Note that when using --set
syntax, YAML list elements must be indexed starting at 0
.
databases
Type | Default value | Required? |
---|---|---|
list | [] | When the db chart role is used, at least one of databases, awsDatabases, or dbResources is required. |
databases
is a YAML list object detailing the databases that should be proxied by Teleport Database access.
You can specify multiple databases by adding additional list elements.
databases:
- name: aurora-postgres
  uri: postgres-aurora-instance-1.xxx.us-east-1.rds.amazonaws.com:5432
  protocol: postgres
  aws:
    region: us-east-1
  static_labels:
    env: staging
- name: mysql
  uri: mysql-instance-1.xxx.us-east-1.rds.amazonaws.com:3306
  protocol: mysql
  aws:
    region: us-east-1
  static_labels:
    env: staging
YAML is very sensitive to correct spacing. When specifying lists in a values.yaml
file, you might like
to use a linter to validate your YAML list and ensure that it is correctly formatted.
--set "databases[0].name=aurora" \--set "databases[0].uri=postgres-aurora-instance-1.xxx.us-east-1.rds.amazonaws.com:5432" \--set "databases[0].protocol=postgres" \--set "databases[0].aws.region=us-east-1" \--set "databases[0].static_labels.env=staging" \--set "databases[1].name=mysql" \--set "databases[1].uri=mysql-instance-1.xxx.us-east-1.rds.amazonaws.com:3306" \--set "databases[1].protocol=mysql" \--set "databases[1].aws.region=us-east-1" \--set "databases[1].static_labels.env=staging"
Note that when using --set
syntax, YAML list elements must be indexed starting at 0
.
You can see a list of all the supported values which can be used in a Teleport database service configuration here.
dbResources
Type | Default value | Required? |
---|---|---|
list | [] | When the db chart role is used, at least one of databases, awsDatabases, or dbResources is required. |
dbResources
is a YAML list object detailing the resource selectors of the databases that should be proxied by Teleport Database Access.
You can specify multiple selectors by adding elements to the list.
dbResources:
- labels:
    "env": "prod"
    "engine": "postgres"
- labels:
    "env": "test"
    "engine": "mysql"
--set "dbResources[0].labels.env=prod" \
--set "dbResources[0].labels.engine=postgres" \
--set "dbResources[1].labels.env=test" \
--set "dbResources[1].labels.engine=mysql"
Note that when using --set
syntax, YAML list elements must be indexed starting at 0
.
Once dbResources is set, you can dynamically register databases with tsh by following this guide.
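For illustration, a dynamic database resource along these lines (the name and URI are hypothetical; see the linked guide for the exact schema) would match the first selector above via its env: prod and engine: postgres labels:
kind: db
version: v3
metadata:
  name: example-postgres
  labels:
    env: prod
    engine: postgres
spec:
  protocol: postgres
  uri: postgres.example.com:5432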
teleportVersionOverride
Type | Default value |
---|---|
string | nil |
Normally the version of Teleport being used will match the version of the chart being installed. If you install chart version 7.0.0, you'll be using Teleport 7.0.0.
You can optionally override this to use a different published Teleport Docker image tag like 6.0.2
or 7
.
See this link for information on Community Docker image versions.
The teleport-kube-agent
chart always runs using Teleport Community edition as it does not require any Enterprise features, so it does
not require a Teleport license file to be provided.
teleportVersionOverride: "7"
--set teleportVersionOverride="7"
caPin
Type | Default value |
---|---|
list | [] |
When caPin
is set, the Teleport pod will use its values to check the
Auth Service's identity when first joining a cluster. This enables a more secure
way of adding new Teleport instances to a cluster. See
"Adding Nodes to the Cluster".
Each list element can be the pin itself (recommended, works out of the box),
or a path to a file containing the pin. For the latter it is your
responsibility to mount the file using extraVolumes
.
caPin: ["sha256:7e12c17c20d9cb504bbcb3f0236be3f446861f1396dcbb44425fe28ec1c108f1"]
--set caPin[0]="sha256:7e12c17c20d9cb504bbcb3f0236be3f446861f1396dcbb44425fe28ec1c108f1"
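If you need to look up the pin, one option is to run tctl status against your cluster and copy the CA pin from its output (a sketch; this requires tctl access to the Auth Service, and the exact output format may vary by version):
# Prints cluster information, including the CA pin (sha256:...)
tctl status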
insecureSkipProxyTLSVerify
Type | Default value |
---|---|
bool | false |
When insecureSkipProxyTLSVerify
is set to true
, the Teleport instance will skip the verification of the TLS certificate presented by the Teleport
Proxy Service specified using proxyAddr
.
This can be used for joining a Teleport instance to a Teleport cluster which does not have valid TLS certificates for testing.
insecureSkipProxyTLSVerify: false
--set insecureSkipProxyTLSVerify=false
Using a self-signed TLS certificate and disabling TLS verification is OK for testing, but is not viable when running a production Teleport cluster as it will drastically reduce security. You must configure valid TLS certificates on your Teleport cluster for production workloads.
One option might be to use Teleport's built-in ACME support or enable cert-manager support.
existingDataVolume
Type | Default value |
---|---|
string | "" |
When existingDataVolume
is set to the name of an existing volume, the /var/lib/teleport
mount will use this volume instead of creating a new emptyDir
volume.
existingDataVolume: my-volume
--set existingDataVolume=my-volume
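For reference, a pre-existing volume such as my-volume could be provisioned with a PersistentVolumeClaim along these lines. This is only a sketch: it assumes the chart consumes the named claim as-is, and the access mode and size are placeholders to adjust for your environment:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-volume
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi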
podSecurityPolicy
podSecurityPolicy.enabled
Type | Default value |
---|---|
bool | true |
By default, Teleport charts also install a podSecurityPolicy
.
To disable this, you can set enabled
to false
.
podSecurityPolicy:
  enabled: false
--set podSecurityPolicy.enabled=false
labels
Type | Default value |
---|---|
object | {} |
labels
can be used to add a map of key-value pairs for the kubernetes_service
which is deployed using the teleport-kube-agent
chart.
These labels can then be used with Teleport's RBAC policies to define access rules for the cluster.
These are Teleport-specific RBAC labels, not Kubernetes labels.
For historical/backwards compatibility reasons, these labels will only be applied to the Kubernetes cluster being joined via the Teleport Kubernetes service.
To set labels for applications, add a labels
element to the apps
section.
To set labels for databases, add a static_labels
element to the databases
section.
For more information on how to set static/dynamic labels for Teleport services, see labelling nodes and applications.
labels:
  environment: production
  region: us-east
--set labels.environment=production \
--set labels.region=us-east
storage
storage.enabled
Type | Default value |
---|---|
bool | false |
Enables the creation of a Kubernetes persistent volume to hold Teleport instance state.
storage:
  enabled: true
--set storage.enabled=true
storage.storageClassName
Type | Default value |
---|---|
string | nil |
The name of the storage class that persistent volume claims should use when they are created. The provided storage class needs to exist on the Kubernetes cluster for Teleport to use it.
storage:
  storageClassName: teleport-storage-class
--set storage.storageClassName=teleport-storage-class
storage.requests
Type | Default value |
---|---|
string | 128Mi |
The size of the persistent volume to create.
storage:
  requests: 128Mi
--set storage.requests=128Mi
image
Type | Default value |
---|---|
string | quay.io/gravitational/teleport |
image
sets the container image used for Teleport pods run by the teleport-kube-agent
chart.
You can override this to use your own Teleport image rather than a Teleport-published image.
image: my.docker.registry/teleport-image-name
--set image=my.docker.registry/teleport-image-name
imagePullSecrets
Type | Default value | Can be used in custom mode? |
---|---|---|
list | [] | ✅ |
A list of secrets containing authorization tokens which can be optionally used to access a private Docker registry.
imagePullSecrets:
- name: my-docker-registry-key
--set "imagePullSecrets[0].name=my-docker-registry-key"
highAvailability
highAvailability.replicaCount
Type | Default value |
---|---|
int | 1 |
highAvailability.replicaCount
can be used to set the number of replicas used in the deployment.
Set to a number higher than 1
for a high availability mode where multiple Teleport pods will be deployed.
As a rough guide, we recommend configuring one replica per distinct availability zone where your cluster has worker nodes.
Two replicas/availability zones are fine for smaller workloads; 3-5 replicas/availability zones are more appropriate for larger clusters with more traffic.
highAvailability:
  replicaCount: 3
--set highAvailability.replicaCount=3
highAvailability.requireAntiAffinity
Type | Default value |
---|---|
bool | false |
Setting highAvailability.requireAntiAffinity
to true
will use requiredDuringSchedulingIgnoredDuringExecution
to require that multiple
Teleport pods must not be scheduled on the same physical host.
This can result in Teleport pods failing to be scheduled in very small clusters or during node downtime, so should be used with caution.
Setting highAvailability.requireAntiAffinity
to false
(the default) uses preferredDuringSchedulingIgnoredDuringExecution
to make node
anti-affinity a soft requirement.
This setting only has any effect when highAvailability.replicaCount
is greater than 1
.
highAvailability:
  requireAntiAffinity: true
--set highAvailability.requireAntiAffinity=true
highAvailability.podDisruptionBudget
highAvailability.podDisruptionBudget.enabled
Type | Default value |
---|---|
bool | false |
Enable a Pod Disruption Budget for the Teleport Pod to ensure HA during voluntary disruptions.
highAvailability:
  podDisruptionBudget:
    enabled: true
--set highAvailability.podDisruptionBudget.enabled=true
highAvailability.podDisruptionBudget.minAvailable
Type | Default value |
---|---|
int | 1 |
Ensures that this number of replicas is available during voluntary disruptions. The value can be either a number of replicas or a percentage.
highAvailability:
  podDisruptionBudget:
    minAvailable: 1
--set highAvailability.podDisruptionBudget.minAvailable=1
clusterRoleName
Type | Default value |
---|---|
string | nil |
clusterRoleName
can be optionally used to override the name of the Kubernetes ClusterRole
used by the teleport-kube-agent
chart's ServiceAccount
.
Most users will not need to change this.
clusterRoleName: kubernetes-clusterrole
--set clusterRoleName=kubernetes-clusterrole
clusterRoleBindingName
Most users will not need to change this.
Type | Default value |
---|---|
string | nil |
clusterRoleBindingName
can be optionally used to override the name of the Kubernetes ClusterRoleBinding
used by the teleport-kube-agent
chart's ServiceAccount
.
clusterRoleBindingName: kubernetes-clusterrolebinding
--set clusterRoleBindingName=kubernetes-clusterrolebinding
priorityClassName
Type | Default value |
---|---|
string | nil |
priorityClassName
allows you to specify a priority class for the teleport-kube-agent
deployment/statefulset.
priorityClassName: "teleport-kube-agent"
--set priorityClassName=teleport-kube-agent
serviceAccountName
Most users will not need to change this.
Type | Default value |
---|---|
string | nil |
serviceAccountName
can be optionally used to override the name of the Kubernetes ServiceAccount
used by the teleport-kube-agent
chart.
serviceAccountName: kubernetes-serviceaccount
--set serviceAccountName=kubernetes-serviceaccount
secretName
Type | Default value |
---|---|
string | teleport-kube-agent-join-token |
secretName
is the name of the Kubernetes Secret
used to store the Teleport join token which is used by the teleport-kube-agent
chart.
If you set this to a blank value, the chart will not attempt to create the secret itself and will instead read the value of the
existing teleport-kube-agent-join-token
secret. This allows you to configure this secret externally and avoid having a plaintext
join token stored in your Teleport chart values.
To create your own join token secret, you can use a command like this:
kubectl --namespace teleport create secret generic teleport-kube-agent-join-token --from-literal=auth-token=<replace-with-actual-token>
The key used for the auth token inside the secret must be auth-token
, as in the command above.
secretName: ""
--set secretName=""
log
log.level
This field used to be called logLevel
. For backwards compatibility this name can still be used, but we recommend changing your values file to use log.level
.
Type | Default value |
---|---|
string | INFO |
log.level
sets the log level used for the Teleport process.
Available log levels (in order of most to least verbose) are: DEBUG
, INFO
, WARNING
, ERROR
.
The default is INFO
, which is recommended in production.
DEBUG
is useful during first-time setup or to see more detailed logs for debugging.
log:
  level: DEBUG
--set log.level=DEBUG
log.output
Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
---|---|---|---|
string | stderr | ❌ | teleport.log.output |
log.output
sets the output destination for the Teleport process.
This can be set to any of the built-in values: stdout
, stderr
or syslog
to use that destination.
The value can also be set to a file path (such as /var/log/teleport.log
) to write logs to a file. Bear in mind that a few service startup messages will still go to stderr
for resilience.
log:
  output: stderr
--set log.output=stderr
log.format
Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
---|---|---|---|
string | text | ❌ | teleport.log.format.output |
log.format
sets the output type for the Teleport process.
Possible values are text
(default) or json
.
log:
  format: json
--set log.format=json
log.extraFields
Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
---|---|---|---|
list | ["timestamp", "level", "component", "caller"] | ❌ | teleport.log.format.extra_fields |
log.extraFields
sets the fields used in logging for the Teleport process.
See the Teleport config file reference for more details on possible values for extra_fields
.
log:
  extraFields: ["timestamp", "level"]
--set "log.extraFields[0]=timestamp" \
--set "log.extraFields[1]=level"
affinity
Type | Default value | Can be used in custom mode? |
---|---|---|
object | {} | ✅ |
Kubernetes affinity to set for pod assignments.
You cannot set both affinity
and highAvailability.requireAntiAffinity
as they conflict with each other.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: gravitational.io/dedicated
          operator: In
          values:
          - teleport
--set affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key=gravitational.io/dedicated \
--set affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].operator=In \
--set affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].values[0]=teleport
nodeSelector
Type | Default value |
---|---|
object | {} |
nodeSelector
can be used to add a map of key-value pairs to constrain the
nodes that Teleport pods will run on.
nodeSelector:
  role: node
  region: us-east
--set nodeSelector.role=node \
--set nodeSelector.region=us-east
extraLabels.clusterRole
Type | Default value |
---|---|
object | {} |
Kubernetes labels which should be applied to the ClusterRole
created by the chart.
extraLabels:
  clusterRole:
    app.kubernetes.io/name: teleport-kube-agent
--set extraLabels.clusterRole."app\.kubernetes\.io\/name"=teleport-kube-agent
You must escape values entered on the command line correctly for Helm's CLI to understand them. We recommend
using a values.yaml
file instead to avoid confusion and errors.
extraLabels.clusterRoleBinding
Type | Default value |
---|---|
object | {} |
Kubernetes labels which should be applied to the ClusterRoleBinding
created by the chart.
extraLabels:
  clusterRoleBinding:
    app.kubernetes.io/name: teleport-kube-agent
--set extraLabels.clusterRoleBinding."app\.kubernetes\.io\/name"=teleport-kube-agent
You must escape values entered on the command line correctly for Helm's CLI to understand them. We recommend
using a values.yaml
file instead to avoid confusion and errors.
extraLabels.config
Type | Default value |
---|---|
object | {} |
Kubernetes labels which should be applied to the ConfigMap
created by the chart.
extraLabels:
  config:
    app.kubernetes.io/name: teleport-kube-agent
--set extraLabels.config."app\.kubernetes\.io\/name"=teleport-kube-agent
You must escape values entered on the command line correctly for Helm's CLI to understand them. We recommend
using a values.yaml
file instead to avoid confusion and errors.
extraLabels.deployment
Type | Default value |
---|---|
object | {} |
Kubernetes labels which should be applied to the Deployment
or StatefulSet
created by the chart.
extraLabels:
  deployment:
    app.kubernetes.io/name: teleport-kube-agent
--set extraLabels.deployment."app\.kubernetes\.io\/name"=teleport-kube-agent
You must escape values entered on the command line correctly for Helm's CLI to understand them. We recommend
using a values.yaml
file instead to avoid confusion and errors.
extraLabels.pod
Type | Default value |
---|---|
object | {} |
Kubernetes labels which should be applied to every Pod
in the Deployment
or StatefulSet
created by the chart.
extraLabels:
  pod:
    app.kubernetes.io/name: teleport-kube-agent
--set extraLabels.pod."app\.kubernetes\.io\/name"=teleport-kube-agent
You must escape values entered on the command line correctly for Helm's CLI to understand them. We recommend
using a values.yaml
file instead to avoid confusion and errors.
extraLabels.podDisruptionBudget
Type | Default value |
---|---|
object | {} |
Kubernetes labels which should be applied to the PodDisruptionBudget
created by the chart (if enabled).
extraLabels:
  podDisruptionBudget:
    app.kubernetes.io/name: teleport-kube-agent
--set extraLabels.podDisruptionBudget."app\.kubernetes\.io\/name"=teleport-kube-agent
You must escape values entered on the command line correctly for Helm's CLI to understand them. We recommend
using a values.yaml
file instead to avoid confusion and errors.
extraLabels.podSecurityPolicy
Type | Default value |
---|---|
object | {} |
Kubernetes labels which should be applied to the PodSecurityPolicy
created by the chart (if enabled).
extraLabels:
  podSecurityPolicy:
    app.kubernetes.io/name: teleport-kube-agent
--set extraLabels.podSecurityPolicy."app\.kubernetes\.io\/name"=teleport-kube-agent
You must escape values entered on the command line correctly for Helm's CLI to understand them. We recommend
using a values.yaml
file instead to avoid confusion and errors.
extraLabels.secret
Type | Default value |
---|---|
object | {} |
Kubernetes labels which should be applied to the Secret
created by the chart (if enabled).
extraLabels:
  secret:
    app.kubernetes.io/name: teleport-kube-agent
--set extraLabels.secret."app\.kubernetes\.io\/name"=teleport-kube-agent
You must escape values entered on the command line correctly for Helm's CLI to understand them. We recommend
using a values.yaml
file instead to avoid confusion and errors.
extraLabels.serviceAccount
Type | Default value |
---|---|
object | {} |
Kubernetes labels which should be applied to the ServiceAccount
created by the chart.
extraLabels:
  serviceAccount:
    app.kubernetes.io/name: teleport-kube-agent
--set extraLabels.serviceAccount."app\.kubernetes\.io\/name"=teleport-kube-agent
You must escape values entered on the command line correctly for Helm's CLI to understand them. We recommend
using a values.yaml
file instead to avoid confusion and errors.
annotations.config
Type | Default value | Can be used in custom mode? | teleport.yaml equivalent |
---|---|---|---|
object | {} | ❌ | None |
Kubernetes annotations which should be applied to the ConfigMap
created by the chart.
These annotations will not be applied in custom
mode, as the ConfigMap
is not managed by the chart.
In this instance, you should apply annotations manually to your created ConfigMap
.
annotations:
  config:
    kubernetes.io/annotation: value
--set annotations.config."kubernetes\.io\/annotation"=value
You must escape values entered on the command line correctly for Helm's CLI to understand them. We recommend
using a values.yaml
file instead to avoid confusion and errors.
annotations.deployment
Type | Default value | Can be used in custom mode? |
---|---|---|
object | {} | ✅ |
Kubernetes annotations which should be applied to the Deployment
created by the chart.
annotations:
  deployment:
    kubernetes.io/annotation: value
--set annotations.deployment."kubernetes\.io\/annotation"=value
You must escape values entered on the command line correctly for Helm's CLI to understand them. We recommend
using a values.yaml
file instead to avoid confusion and errors.
annotations.pod
Type | Default value | Can be used in custom mode? |
---|---|---|
object | {} | ✅ |
Kubernetes annotations which should be applied to each Pod
created by the chart.
annotations:
  pod:
    kubernetes.io/annotation: value
--set annotations.pod."kubernetes\.io\/annotation"=value
You must escape values entered on the command line correctly for Helm's CLI to understand them. We recommend
using a values.yaml
file instead to avoid confusion and errors.
annotations.serviceAccount
Type | Default value | Can be used in custom mode? |
---|---|---|
object | {} | ✅ |
Kubernetes annotations which should be applied to the ServiceAccount
created by the chart.
annotations:
  serviceAccount:
    kubernetes.io/annotation: value
--set annotations.serviceAccount."kubernetes\.io\/annotation"=value
You must escape values entered on the command line correctly for Helm's CLI to understand them. We recommend
using a values.yaml
file instead to avoid confusion and errors.
extraVolumes
Type | Default value | Can be used in custom mode? |
---|---|---|
list | [] | ✅ |
A list of extra Kubernetes Volumes
which should be available to any Pod
created by the chart. These volumes
will also be available to any initContainers
configured by the chart.
extraVolumes:
- name: myvolume
  secret:
    secretName: mysecret
--set "extraVolumes[0].name=myvolume" \
--set "extraVolumes[0].secret.secretName=mysecret"
extraArgs
Type | Default value |
---|---|
list | [] |
A list of extra arguments to pass to the teleport start
command when running a Teleport Pod.
extraArgs:
- "--debug"
--set "extraArgs={--debug}"
extraEnv
Type | Default value |
---|---|
list | [] |
A list of extra environment variables to be set on the main Teleport container.
extraEnv:
- name: HTTPS_PROXY
  value: "http://username:password@proxy.example.com:3128"
--set "extraEnv[0].name=HTTPS_PROXY" \
--set "extraEnv[0].value=http://username:password@proxy.example.com:3128"
extraVolumeMounts
Type | Default value | Can be used in custom mode? |
---|---|---|
list | [] | ✅ |
A list of extra Kubernetes volume mounts which should be mounted into any Pod
created by the chart. These volume
mounts will also be mounted into any initContainers
configured by the chart.
extraVolumeMounts:
- name: myvolume
  mountPath: /path/to/mount/volume
--set "extraVolumeMounts[0].name=myvolume" \
--set "extraVolumeMounts[0].mountPath=/path/to/mount/volume"
imagePullPolicy
Type | Default value | Can be used in custom mode? |
---|---|---|
string | IfNotPresent | ✅ |
Allows the imagePullPolicy
for any pods created by the chart to be overridden.
imagePullPolicy: Always
--set imagePullPolicy=Always
initContainers
Type | Default value | Can be used in custom mode? |
---|---|---|
list | [] | ✅ |
A list of initContainers
which will be run before the main Teleport container in any pod created by the chart.
initContainers:
- name: teleport-init
  image: alpine
  args: ['echo test']
--set "initContainers[0].name=teleport-init" \
--set "initContainers[0].image=alpine" \
--set "initContainers[0].args={echo test}"
resources
Type | Default value | Can be used in custom mode? |
---|---|---|
object | {} | ✅ |
Resource requests/limits which should be configured for each container inside the pod. These resource limits
will also be applied to initContainers
.
resources:
  requests:
    cpu: 1
    memory: 2Gi
--set resources.requests.cpu=1 \
--set resources.requests.memory=2Gi
tolerations
Type | Default value | Can be used in custom mode? |
---|---|---|
list | [] | ✅ |
Kubernetes Tolerations to set for pod assignment.
tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "teleport"
  effect: "NoSchedule"
--set tolerations[0].key=dedicated \
--set tolerations[0].operator=Equal \
--set tolerations[0].value=teleport \
--set tolerations[0].effect=NoSchedule
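A toleration like the one above is usually paired with a matching taint on the dedicated nodes, for example (the node name is a placeholder):
kubectl taint nodes <node-name> dedicated=teleport:NoSchedule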
probeTimeoutSeconds
Type | Default value |
---|---|
integer | 1 |
Kubernetes timeouts for the liveness and readiness probes.
probeTimeoutSeconds: 5
--set probeTimeoutSeconds=5