
Kubernetes Access Multiple Clusters


This guide will show you how to use Teleport as an access plane for multiple Kubernetes clusters.

Prerequisites

The prerequisites below depend on your Teleport edition.

Self-Hosted:

  • A running Teleport cluster. For details on how to set this up, see one of our Getting Started guides.

  • The tctl admin tool and tsh client tool version >= 11.0.3.

    tctl version

    Teleport v11.0.3 go1.19

    tsh version

    Teleport v11.0.3 go1.19

    See Installation for details.

Enterprise:

  • A running Teleport Enterprise cluster. For details on how to set this up, see our Enterprise Getting Started guide.

  • The tctl admin tool and tsh client tool version >= 11.0.3, which you can download by visiting the customer portal.

    tctl version

    Teleport v11.0.3 go1.19

    tsh version

    Teleport v11.0.3 go1.19

Teleport Cloud:

  • A Teleport Cloud account. If you do not have one, visit the sign up page to begin your free trial.

  • The tctl admin tool and tsh client tool version >= 10.3.8. To download these tools, visit the Downloads page.

    tctl version

    Teleport v10.3.8 go1.19

    tsh version

    Teleport v10.3.8 go1.19

All editions also require:

  • The Teleport Kubernetes Service running in a Kubernetes cluster, version >= v1.17.0. We will assume that you have already followed Connect a Kubernetes Cluster to Teleport.
  • The jq tool to process JSON output. This is available via common package managers.
  • An additional Kubernetes cluster, version >= v1.17.0.
  • Helm >= 3.4.2
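The join-token commands later in this guide pipe JSON through jq. As a quick sanity check that jq is installed and behaves as expected, you can extract the first element of a JSON array the same way those commands do (the token value below is made up; the real `tctl nodes add --format=json` output contains an actual join token):

```shell
# Made-up sample of the JSON array printed by `tctl nodes add --format=json`
SAMPLE='["13b74f49d27536dd5c514073097c197b"]'

# -r prints the raw string without surrounding quotes
TOKEN=$(echo "$SAMPLE" | jq -r '.[0]')
echo "$TOKEN"
```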

Verify that Helm and Kubernetes are installed and up to date.

helm version

version.BuildInfo{Version:"v3.4.2"}

kubectl version

Client Version: version.Info{Major:"1", Minor:"17+"}

Server Version: version.Info{Major:"1", Minor:"17+"}

Self-Hosted:

To connect to Teleport, log in to your cluster using tsh, then use tctl remotely:

tsh login --proxy=teleport.example.com [email protected]
tctl status

Cluster teleport.example.com

Version 11.0.3

CA pin sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678

You can run subsequent tctl commands in this guide on your local machine.

For full privileges, you can also run tctl commands on your Auth Service host.

Teleport Cloud:

To connect to Teleport, log in to your cluster using tsh, then use tctl remotely:

tsh login --proxy=myinstance.teleport.sh [email protected]
tctl status

Cluster myinstance.teleport.sh

Version 10.3.8

CA pin sha256:sha-hash-here

You must run subsequent tctl commands in this guide on your local machine.

Connecting clusters

Self-Hosted:

Teleport can act as an access plane for multiple Kubernetes clusters.

We will assume that the domain of your Teleport cluster is tele.example.com.

Let's start the Teleport Kubernetes Service in another Kubernetes cluster, cookie, and connect it to tele.example.com.

We will need a join token from tele.example.com:

A trick to save the name of a Teleport pod in tele.example.com

POD=$(kubectl get pod -l app=teleport-cluster -o jsonpath='{.items[0].metadata.name}')

Create a join token for the cluster cookie to authenticate

TOKEN=$(kubectl exec -ti "${POD?}" -- tctl nodes add --roles=kube --ttl=10000h --format=json | jq -r '.[0]')
echo $TOKEN

To allow Helm to install charts that are hosted in the Teleport Helm repository, use helm repo add:

helm repo add teleport https://charts.releases.teleport.dev

To update the cache of charts from the remote repository, run helm repo update:

helm repo update

Switch kubectl to the Kubernetes cluster cookie and run:

Deploy a Kubernetes agent. It dials back to the Teleport cluster tele.example.com.

CLUSTER=cookie
PROXY=tele.example.com:443
helm install teleport-agent teleport/teleport-kube-agent \
  --set kubeClusterName=${CLUSTER?} \
  --set proxyAddr=${PROXY?} \
  --set authToken=${TOKEN?} \
  --create-namespace \
  --namespace=teleport-agent
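The chart settings passed via --set flags can also be collected in a values file. This is a sketch under the assumption that you reuse the same cluster name and proxy address; the authToken value is a placeholder for the token you generated above:

```yaml
# teleport-agent-values.yaml -- equivalent to the --set flags above
kubeClusterName: cookie
proxyAddr: tele.example.com:443
authToken: <join token from the previous step>
```

You would then install the chart with: helm install teleport-agent teleport/teleport-kube-agent -f teleport-agent-values.yaml --create-namespace --namespace=teleport-agent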

List connected clusters using tsh kube ls and switch between them using tsh kube login:

tsh kube ls

Kube Cluster Name Selected
----------------- --------
cookie
tele.example.com  *

kubeconfig now points to the cookie cluster

tsh kube login cookie

Logged into Kubernetes cluster "cookie". Try 'kubectl version' to test the connection.

The kubectl command executes on `cookie` but is routed through the `tele.example.com` cluster.

kubectl get pods

Teleport Cloud:

Teleport can act as an access plane for multiple Kubernetes clusters.

We will assume that the domain of your Teleport cluster is mytenant.teleport.sh.

Let's start the Teleport Kubernetes Service in another Kubernetes cluster, cookie, and connect it to mytenant.teleport.sh.

We will need a join token from mytenant.teleport.sh:

Create a join token for the cluster cookie to authenticate

TOKEN=$(tctl nodes add --roles=kube --ttl=10000h --format=json | jq -r '.[0]')
echo $TOKEN

To allow Helm to install charts that are hosted in the Teleport Helm repository, use helm repo add:

helm repo add teleport https://charts.releases.teleport.dev

To update the cache of charts from the remote repository, run helm repo update:

helm repo update

Switch kubectl to the Kubernetes cluster cookie and run:

Deploy a Kubernetes agent. It dials back to the Teleport cluster mytenant.teleport.sh.

CLUSTER=cookie
PROXY=mytenant.teleport.sh
helm install teleport-agent teleport/teleport-kube-agent \
  --set kubeClusterName=${CLUSTER?} \
  --set proxyAddr=${PROXY?} \
  --set authToken=${TOKEN?} \
  --create-namespace \
  --namespace=teleport-agent

List connected clusters using tsh kube ls and switch between them using tsh kube login:

tsh kube ls

Kube Cluster Name    Selected
-----------------    --------
cookie
mytenant.teleport.sh *

kubeconfig now points to the cookie cluster

tsh kube login cookie

Logged into Kubernetes cluster "cookie". Try 'kubectl version' to test the connection.

The kubectl command executes on `cookie` but is routed through the `mytenant.teleport.sh` cluster.

kubectl get pods

Kubernetes authentication

To authenticate to a Kubernetes cluster via Teleport, your Teleport roles must allow access as at least one Kubernetes user or group. Ensure that you have a Teleport role that grants access to the cluster you plan to interact with.

Run the following command to get the Kubernetes user for your current context:

kubectl config view -o jsonpath="{.contexts[?(@.name==\"$(kubectl config current-context)\")].context.user}"

Create a file called kube-access.yaml with the following content, replacing USER with the output of the command above.

kind: role
metadata:
  name: kube-access
version: v5
spec:
  allow:
    kubernetes_labels:
      '*': '*'
    kubernetes_groups:
    - viewers
    kubernetes_users:
    - USER
  deny: {}

Retrieve your Teleport user:

Set this to your Teleport username, e.g. myuser

TELEPORT_USER=
tctl get user/${TELEPORT_USER?} > user.yaml

Add kube-access to your Teleport user's list of roles:

   roles:
+  - kube-access
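After the edit, the roles list in user.yaml should look something like the following. The username and the access role shown here are illustrative; your file will contain your own user's fields:

```yaml
kind: user
version: v2
metadata:
  name: myuser
spec:
  roles:
  - access
  - kube-access
```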

Apply your changes:

tctl create -f kube-access.yaml
tctl create -f user.yaml

Now that Teleport RBAC is configured, you can authenticate to your Kubernetes cluster via Teleport. To interact with your Kubernetes cluster, you will need to configure authorization within Kubernetes.

Kubernetes authorization

To configure authorization within your Kubernetes cluster, you need to create Kubernetes RoleBindings or ClusterRoleBindings that grant permissions to the subjects listed in kubernetes_users and kubernetes_groups.

For example, you can grant some limited read-only permissions to the viewers group used in the kube-access role defined above:

Create a file called viewers-bind.yaml with the following contents:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: viewers-crb
subjects:
- kind: Group
  # Bind the group "viewers", corresponding to the kubernetes_groups we assigned our "kube-access" role above
  name: viewers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  # "view" is a default ClusterRole that grants read-only access to resources
  # See: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles
  name: view
  apiGroup: rbac.authorization.k8s.io

Apply the ClusterRoleBinding with kubectl:

kubectl apply -f viewers-bind.yaml
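If you prefer to scope read-only access to a single namespace rather than the whole cluster, you can bind the same built-in view ClusterRole with a namespaced RoleBinding instead. This is a sketch assuming a hypothetical namespace called dev:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: viewers-rb
  # Permissions granted by this binding apply only within this namespace
  namespace: dev
subjects:
- kind: Group
  # The same "viewers" group from the kube-access Teleport role
  name: viewers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  # "view" grants read-only access; scoped here to the "dev" namespace
  name: view
  apiGroup: rbac.authorization.k8s.io
```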

Log out of Teleport and log in again so that your session picks up the new role.