
In this guide, we will show you how to register a Kubernetes cluster with Teleport by deploying the Teleport Kubernetes Service on that cluster. The Kubernetes Service automatically determines whether it is running in a Kubernetes cluster and, if so, registers that cluster with Teleport.
You can register multiple Kubernetes clusters with Teleport by deploying the Teleport Kubernetes Service on each cluster you want to register.
Prerequisites
- A running Teleport cluster. For details on how to set this up, see one of our Getting Started guides. If you are running Teleport Enterprise, see our Enterprise Getting Started guide instead.
- The tctl admin tool and tsh client tool version >= 12.1.1. See Installation for details. Enterprise users can download these tools from the customer portal. Verify the versions:

  tctl version
  Teleport v12.1.1 go1.19

  tsh version
  Teleport v12.1.1 go1.19

- The Teleport Kubernetes Service running in a Kubernetes cluster, version >= v1.17.0. We will assume that you have already followed Connect a Kubernetes Cluster to Teleport.
- The jq tool to process JSON output. This is available via common package managers.
- An additional Kubernetes cluster, version >= v1.17.0.
- Helm >= 3.4.2.

Verify that Helm and Kubernetes are installed and up to date:

helm version
version.BuildInfo{Version:"v3.4.2"}

kubectl version
Client Version: version.Info{Major:"1", Minor:"17+"}
Server Version: version.Info{Major:"1", Minor:"17+"}
To connect to Teleport, log in to your cluster using tsh, then use tctl remotely:

tsh login --proxy=teleport.example.com [email protected]
tctl status

Cluster      teleport.example.com
Version      12.1.1
CA pin       sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678

You can run subsequent tctl commands in this guide on your local machine. For full privileges, you can also run tctl commands on your Auth Service host.

If you are using Teleport Cloud, log in with your tenant address instead:

tsh login --proxy=myinstance.teleport.sh [email protected]
tctl status

Cluster      myinstance.teleport.sh
Version      12.1.2
CA pin       sha256:sha-hash-here

On Teleport Cloud, you must run subsequent tctl commands in this guide on your local machine.
Connecting clusters
Teleport can act as an access plane for multiple Kubernetes clusters.
We will assume that the domain of your Teleport cluster is tele.example.com.

Let's start the Teleport Kubernetes Service in another Kubernetes cluster, cookie, and connect it to tele.example.com.

We will need a join token from tele.example.com:

# A trick to save the pod ID in tele.example.com
POD=$(kubectl get pod -l app=teleport-cluster -o jsonpath='{.items[0].metadata.name}')
# Create a join token for the cluster cookie to authenticate
TOKEN=$(kubectl exec -ti "${POD?}" -- tctl nodes add --roles=kube --ttl=1h --format=json | jq -r '.[0]')
echo $TOKEN
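If tctl on your local machine is already connected to tele.example.com (for example, after tsh login), you should be able to mint an equivalent join token without exec-ing into the pod. A minimal sketch, assuming your tctl version supports the kube token type:

# Create a one-hour join token that a Kubernetes Service instance can use to join
tctl tokens add --type=kube --ttl=1h

Copy the token from the command's output into the TOKEN variable used below.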
To allow Helm to install charts that are hosted in the Teleport Helm repository, use helm repo add:

helm repo add teleport https://charts.releases.teleport.dev

To update the cache of charts from the remote repository, run helm repo update:

helm repo update

Switch kubectl to the Kubernetes cluster cookie and run:

# Deploy a Kubernetes agent. It dials back to the Teleport cluster tele.example.com.
CLUSTER=cookie
PROXY=tele.example.com:443
helm install teleport-agent teleport/teleport-kube-agent \
  --set kubeClusterName=${CLUSTER?} \
  --set proxyAddr=${PROXY?} \
  --set authToken=${TOKEN?} \
  --create-namespace \
  --namespace=teleport-agent \
  --version 12.1.1
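Before listing clusters, you can check that the agent started cleanly. A quick sanity check, assuming the teleport-agent namespace used above (the pod name may differ in your deployment):

# The agent pod should reach the Running state within a minute or so
kubectl --namespace teleport-agent get pods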
List connected clusters using tsh kube ls and switch between them using tsh kube login:

tsh kube ls

Kube Cluster Name Selected
----------------- --------
cookie
tele.example.com  *

# kubeconfig now points to the cookie cluster
tsh kube login cookie
Logged into Kubernetes cluster "cookie". Try 'kubectl version' to test the connection.

# This kubectl command is executed on `cookie` but routed through the `tele.example.com` cluster.
kubectl get pods
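To point kubectl back at the root cluster, log in to it by the name shown in tsh kube ls:

# Route kubectl through the root cluster again
tsh kube login tele.example.com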
Kubernetes authentication
To authenticate to a Kubernetes cluster via Teleport, your Teleport roles must allow access as at least one Kubernetes user or group. Ensure that you have a Teleport role that grants access to the cluster you plan to interact with.
Run the following command to get the Kubernetes user for your current context:
kubectl config view \
  -o jsonpath="{.contexts[?(@.name==\"$(kubectl config current-context)\")].context.user}"
Create a file called kube-access.yaml with the following content, replacing USER with the output of the command above:

kind: role
metadata:
  name: kube-access
version: v6
spec:
  allow:
    kubernetes_labels:
      '*': '*'
    kubernetes_resources:
    - kind: pod
      namespace: "*"
      name: "*"
    kubernetes_groups:
    - viewers
    kubernetes_users:
    - USER
  deny: {}
Apply your changes:
tctl create -f kube-access.yaml
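You can verify that the role exists before assigning it. One way, using the same tctl client:

# Print the role you just created
tctl get roles/kube-access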
Assign the kube-access role to your Teleport user by running the commands below that match how you authenticate: as a local Teleport user or via the github, saml, or oidc authentication connectors.

If you authenticate as a local Teleport user, retrieve your user's configuration resource:

tctl get users/$(tsh status -f json | jq -r '.active.username') > out.yaml
Edit out.yaml, adding kube-access to the list of existing roles:

  roles:
  - access
  - auditor
  - editor
+ - kube-access
Apply your changes:
tctl create -f out.yaml
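To double-check the assignment, fetch the user resource again; kube-access should now appear in its roles list:

tctl get users/$(tsh status -f json | jq -r '.active.username')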
If you authenticate via the github connector, retrieve your github configuration resource:

tctl get github/github --with-secrets > github.yaml
Edit github.yaml, adding kube-access to the teams_to_roles section. The team you map to this role will depend on how you have designed your organization's RBAC, but it should be the smallest team possible within your organization. This team must also include your user.

Here is an example:

  teams_to_roles:
  - organization: octocats
    team: admins
    roles:
    - access
+   - kube-access
Apply your changes:
tctl create -f github.yaml
Note the --with-secrets flag in the tctl get command. This writes sensitive values from the connector's spec to github.yaml, so take precautions when creating this file and remove it after updating the resource.
If you authenticate via the saml connector, retrieve your saml configuration resource:

tctl get --with-secrets saml/mysaml > saml.yaml
Edit saml.yaml, adding kube-access to the attributes_to_roles section. The attribute you map to this role will depend on how you have designed your organization's RBAC, but it should be the smallest group possible within your organization. This group must also include your user.

Here is an example:

  attributes_to_roles:
  - name: "groups"
    value: "my-group"
    roles:
    - access
+   - kube-access
Apply your changes:
tctl create -f saml.yaml
Note the --with-secrets flag in the tctl get command. This adds the value of spec.signing_key_pair.private_key to saml.yaml. This is a sensitive value, so take precautions when creating this file and remove it after updating the resource.
If you authenticate via the oidc connector, retrieve your oidc configuration resource:

tctl get oidc/myoidc --with-secrets > oidc.yaml
Edit oidc.yaml, adding kube-access to the claims_to_roles section. The claim you map to this role will depend on how you have designed your organization's RBAC, but it should be the smallest group possible within your organization. This group must also include your user.

Here is an example:

  claims_to_roles:
  - name: "groups"
    value: "my-group"
    roles:
    - access
+   - kube-access
Apply your changes:
tctl create -f oidc.yaml
Note the --with-secrets flag in the tctl get command. This writes sensitive values from the connector's spec to oidc.yaml, so take precautions when creating this file and remove it after updating the resource.
Log out of your Teleport cluster and log in again to assume the new role.
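For example, with the self-hosted cluster address used earlier:

tsh logout
tsh login --proxy=teleport.example.com [email protected]
# Your active roles, which should now include kube-access, appear in the output
tsh status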
Now that Teleport RBAC is configured, you can authenticate to your Kubernetes cluster via Teleport. To interact with the cluster's resources, you will also need to configure authorization within Kubernetes.
Kubernetes authorization
To configure authorization within your Kubernetes cluster, you need to create Kubernetes RoleBindings or ClusterRoleBindings that grant permissions to the subjects listed in kubernetes_users and kubernetes_groups.
For example, you can grant some limited read-only permissions to the viewers group used in the kube-access role defined above:
Create a file called viewers-bind.yaml with the following contents:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: viewers-crb
subjects:
- kind: Group
  # Bind the group "viewers", corresponding to the kubernetes_groups
  # we assigned our "kube-access" role above
  name: viewers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  # "view" is a default ClusterRole that grants read-only access to resources
  # See: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles
  name: view
  apiGroup: rbac.authorization.k8s.io
Apply the ClusterRoleBinding with kubectl:
kubectl apply -f viewers-bind.yaml
Log out of Teleport and log in again.
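Once you are logged in again, you can confirm the binding end to end: reads should succeed, while writes should be denied, since the view ClusterRole is read-only:

# Allowed: "view" grants read access to pods
kubectl get pods
# Should print "no": "view" does not grant write access
kubectl auth can-i delete pods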
Next steps
To see all of the options you can set in the values file for the teleport-kube-agent Helm chart, consult our reference guide.
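If you prefer a values file over --set flags, the settings used in this guide can be collected in one place. A minimal sketch; replace <join-token> with the token you generated earlier:

# values.yaml
kubeClusterName: cookie
proxyAddr: tele.example.com:443
authToken: <join-token>

Then install the chart with the file:

helm install teleport-agent teleport/teleport-kube-agent \
  --create-namespace \
  --namespace=teleport-agent \
  --version 12.1.1 \
  -f values.yaml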