Connect a Kubernetes Cluster to Teleport

Video: Getting Started With Teleport 12 | Kubernetes Edition (14:35)

In this guide, we will show you how to register a Kubernetes cluster with Teleport by deploying the Teleport Kubernetes Service on the Kubernetes cluster you want to register.

In this setup, the Teleport Kubernetes Service pod detects that it is running on Kubernetes and registers the cluster automatically.

You can also run the Teleport Kubernetes Service on a Linux host in a separate network from your Kubernetes cluster. Learn how in Kubernetes Access from a Standalone Teleport Cluster.

Prerequisites

  • A running Teleport cluster. For details on how to set this up, see one of our Getting Started guides.

  • The tctl admin tool and tsh client tool version >= 12.1.1.

    tctl version

    Teleport v12.1.1 go1.19

    tsh version

    Teleport v12.1.1 go1.19

    See Installation for details.

  • If you are running Teleport Enterprise: a running Teleport Enterprise cluster. For details on how to set this up, see our Enterprise Getting Started guide.

  • The Enterprise tctl admin tool and tsh client tool version >= 12.1.1, which you can download by visiting the customer portal.

    tctl version

    Teleport Enterprise v12.1.1 go1.19

    tsh version

    Teleport v12.1.1 go1.19
  • The jq tool to process JSON output. This is available via common package managers.
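
To confirm jq is installed and on your PATH, you can run a quick version check:

jq --version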

Verify that Helm and Kubernetes are installed and up to date.

helm version

version.BuildInfo{Version:"v3.4.2"}

kubectl version

Client Version: version.Info{Major:"1", Minor:"17+"}

Server Version: version.Info{Major:"1", Minor:"17+"}

To connect to Teleport, log in to your cluster using tsh, then use tctl remotely:

tsh login --proxy=teleport.example.com [email protected]
tctl status

Cluster teleport.example.com

Version 12.1.1

CA pin sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678

You can run subsequent tctl commands in this guide on your local machine.

For full privileges, you can also run tctl commands on your Auth Service host.

If you are using Teleport Cloud, log in to your cluster using tsh, then use tctl remotely:

tsh login --proxy=myinstance.teleport.sh [email protected]
tctl status

Cluster myinstance.teleport.sh

Version 12.1.1

CA pin sha256:sha-hash-here

You must run subsequent tctl commands in this guide on your local machine.

Deployment overview

In this guide, we deploy the Teleport Kubernetes Service, which connects Kubernetes cluster cookie to Teleport cluster tele.example.com:

In your Teleport Cloud account, the name of your cluster will be your tenant domain name, e.g., mytenant.teleport.sh, rather than teleport.example.com.

Diagram: the Teleport Kubernetes Service (agent) dialing back to the Teleport cluster.

Step 1/3. Get a join token

To start the Teleport Kubernetes Service, request a join token from the Teleport Auth Service:

Create a join token for the Teleport Kubernetes Service to authenticate

TOKEN=$(tctl nodes add --roles=kube --ttl=10000h --format=json | jq -r '.[0]')
echo $TOKEN
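
If you want to verify that the token was issued, you can list active join tokens; the exact output columns may vary by Teleport version:

tctl tokens ls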

Step 2/3. Deploy teleport-kube-agent

The Teleport Kubernetes Service version should match the Teleport cluster version or be at most one major version behind it. You can override the version the chart deploys with the teleportVersionOverride value, e.g., --set teleportVersionOverride=12.1.1.

To allow Helm to install charts that are hosted in the Teleport Helm repository, use helm repo add:

helm repo add teleport https://charts.releases.teleport.dev

To update the cache of charts from the remote repository, run helm repo update:

helm repo update
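
If you want to see which chart versions are available before installing, you can search the repository; the versions listed will depend on when you run the command:

helm search repo teleport/teleport-kube-agent --versions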

Switch kubectl to the Kubernetes cluster cookie and run the following commands, assigning PROXY_ADDR to the address of your Auth Service or Proxy Service.
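
For example, assuming your kubeconfig already contains a context for the cluster (the context name cookie-context below is hypothetical; list yours with kubectl config get-contexts):

kubectl config use-context cookie-context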

PROXY_ADDR=tele.example.com:443

Install the Kubernetes agent. It dials back to the Teleport cluster at $PROXY_ADDR:

CLUSTER=cookie
helm install teleport-agent teleport/teleport-kube-agent \
  --set kubeClusterName=${CLUSTER?} \
  --set proxyAddr=${PROXY_ADDR?} \
  --set authToken=${TOKEN?} \
  --create-namespace \
  --namespace=teleport-agent \
  --version 12.1.1
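
Once the install completes, you can confirm the agent pod is running; the pod name suffix will differ in your cluster:

kubectl get pods -n teleport-agent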

Step 3/3. Access your Kubernetes cluster

Kubernetes authentication

To authenticate to a Kubernetes cluster via Teleport, your Teleport roles must allow access as at least one Kubernetes user or group. Ensure that you have a Teleport role that grants access to the cluster you plan to interact with.

Run the following command to get the Kubernetes user for your current context:

kubectl config view -o jsonpath="{.contexts[?(@.name==\"$(kubectl config current-context)\")].context.user}"

Create a file called kube-access.yaml with the following content, replacing USER with the output of the command above.

kind: role
metadata:
  name: kube-access
version: v6
spec:
  allow:
    kubernetes_labels:
      '*': '*'
    kubernetes_resources:
      - kind: pod
        namespace: "*"
        name: "*"
    kubernetes_groups:
    - viewers
    kubernetes_users:
    - USER
  deny: {}

Apply your changes:

tctl create -f kube-access.yaml
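
As a quick sanity check, you can fetch the role back to verify it was created:

tctl get roles/kube-access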

Assign the kube-access role to your Teleport user by running the following commands, depending on whether you authenticate as a local Teleport user or via the github, saml, or oidc authentication connectors:

Retrieve your local user's configuration resource:

tctl get users/$(tsh status -f json | jq -r '.active.username') > out.yaml

Edit out.yaml, adding kube-access to the list of existing roles:

  roles:
   - access
   - auditor
   - editor
+  - kube-access

Apply your changes:

tctl create -f out.yaml

Retrieve your github configuration resource:

tctl get github/github --with-secrets > github.yaml

Edit github.yaml, adding kube-access to the teams_to_roles section. The team you will map to this role will depend on how you have designed your organization's RBAC, but it should be the smallest team possible within your organization. This team must also include your user.

Here is an example:

  teams_to_roles:
    - organization: octocats
      team: admins
      roles:
        - access
+       - kube-access

Apply your changes:

tctl create -f github.yaml

Retrieve your saml configuration resource:

tctl get saml/mysaml --with-secrets > saml.yaml

Edit saml.yaml, adding kube-access to the attributes_to_roles section. The attribute you will map to this role will depend on how you have designed your organization's RBAC, but it should be the smallest group possible within your organization. This group must also include your user.

Here is an example:

  attributes_to_roles:
    - name: "groups"
      value: "my-group"
      roles:
        - access
+       - kube-access

Apply your changes:

tctl create -f saml.yaml

Retrieve your oidc configuration resource:

tctl get oidc/myoidc --with-secrets > oidc.yaml

Edit oidc.yaml, adding kube-access to the claims_to_roles section. The claim you will map to this role will depend on how you have designed your organization's RBAC, but it should be the smallest group possible within your organization. This group must also include your user.

Here is an example:

  claims_to_roles:
    - name: "groups"
      value: "my-group"
      roles:
        - access
+       - kube-access

Apply your changes:

tctl create -f oidc.yaml

Log out of your Teleport cluster and log in again to assume the new role.
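
For example, as a local user (substituting your own proxy address and username):

tsh logout
tsh login --proxy=teleport.example.com [email protected]

After logging back in, tsh status should list kube-access among your roles.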

Now that Teleport RBAC is configured, you can authenticate to your Kubernetes cluster via Teleport. To interact with your Kubernetes cluster, you will need to configure authorization within Kubernetes.

Kubernetes authorization

To configure authorization within your Kubernetes cluster, you need to create Kubernetes RoleBindings or ClusterRoleBindings that grant permissions to the subjects listed in kubernetes_users and kubernetes_groups.

For example, you can grant some limited read-only permissions to the viewers group used in the kube-access role defined above:

Create a file called viewers-bind.yaml with the following contents:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: viewers-crb
subjects:
- kind: Group
  # Bind the group "viewers", corresponding to the kubernetes_groups we assigned our "kube-access" role above
  name: viewers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  # "view" is a default ClusterRole that grants read-only access to resources
  # See: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles
  name: view
  apiGroup: rbac.authorization.k8s.io

Apply the ClusterRoleBinding with kubectl:

kubectl apply -f viewers-bind.yaml
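
If you want to confirm the binding grants what you expect before testing through Teleport, you can query the Kubernetes API directly using impersonation, replacing USER as before. This is only a sanity check and requires impersonation privileges in your current kubectl context; expect the answer yes:

kubectl auth can-i list pods --as=USER --as-group=viewers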

Log out of Teleport and log in again.

View pods in your cluster

List connected clusters using tsh kube ls and switch between them using tsh kube login:

tsh kube ls

Kube Cluster Name Selected

----------------- --------

cookie

kubeconfig now points to the cookie cluster

tsh kube login cookie

Logged into kubernetes cluster "cookie". Try 'kubectl version' to test the connection.

The kubectl command below executes on `cookie` but is routed through the Teleport cluster:

kubectl get pods
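
Because the viewers group is bound to the read-only view ClusterRole, write operations should be denied. You can spot-check this with kubectl's built-in authorization query; expect the answer no:

kubectl auth can-i create pods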

Next Steps