
Enroll a Kubernetes Cluster

Available for: Open Source, Team, Cloud, Enterprise
Video: Getting Started With Teleport 12 | Kubernetes Edition (14:35)

This guide demonstrates how to enroll a Kubernetes cluster as a resource by deploying the Teleport Kubernetes Service on the Kubernetes cluster you want to protect. In this scenario, the Teleport Kubernetes Service pod detects that it is running on Kubernetes and enrolls the Kubernetes cluster automatically. The following diagram provides a simplified overview of this deployment scenario with the Teleport Kubernetes Service running on the Kubernetes cluster:

You can also run the Teleport Kubernetes Service on a Linux host in a separate network from your Kubernetes cluster. For more information about protecting access to a Kubernetes cluster from a separate host, see Kubernetes Access from a Standalone Teleport Cluster.
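If you prefer the manual path over the guided enrollment described below, the deployment comes down to installing the teleport-kube-agent Helm chart on the cluster you want to protect. The following is only a minimal sketch: the proxy address, join token, namespace, and cluster name are placeholder assumptions, and the guided flow in Step 2 generates an equivalent, pre-populated command for you.

    # Add the Teleport Helm chart repository.
    helm repo add teleport https://charts.releases.teleport.dev
    helm repo update

    # Deploy the Teleport Kubernetes Service into the cluster you want to protect.
    # Replace the placeholder proxy address, join token, and names with your own values.
    helm install teleport-agent teleport/teleport-kube-agent \
      --create-namespace \
      --namespace teleport-agent \
      --set roles=kube \
      --set proxyAddr=teleport.example.com:443 \
      --set authToken=<join-token> \
      --set kubeClusterName=example-cluster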

Prerequisites

  • Access to a running Teleport cluster, tctl admin tool, and tsh client tool, version >= 15.0.2.

    For Teleport Enterprise, Teleport Team, and Teleport Enterprise Cloud, you should use the Enterprise version of tctl. You can verify the tools you have installed by running the following commands:

    tctl version

    Teleport Enterprise v15.0.2 go1.21


    tsh version

    Teleport v15.0.2 go1.21

    You can download these tools by following the appropriate Installation instructions for your environment.

  • Kubernetes >= v1.17.0

  • Helm >= 3.4.2

    Verify that Helm and Kubernetes are installed and up to date.

    helm version

    version.BuildInfo{Version:"v3.4.2"}


    kubectl version

    Client Version: version.Info{Major:"1", Minor:"17+"}

    Server Version: version.Info{Major:"1", Minor:"17+"}

  • To check that you can connect to your Teleport cluster, sign in with tsh login, then verify that you can run tctl commands using your current credentials. tctl is supported on macOS and Linux machines. For example:
    tsh login --proxy=teleport.example.com --user=[email protected]
    tctl status

    Cluster teleport.example.com

    Version 15.0.2

    CA pin sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678

    If you can connect to the cluster and run the tctl status command, you can use your current credentials to run subsequent tctl commands from your workstation. If you host your own Teleport cluster, you can also run tctl commands on the computer that hosts the Teleport Auth Service for full permissions.
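Before continuing, it can also help to confirm that your current kubectl context points at the Kubernetes cluster you intend to protect. A quick check might look like the following:

    kubectl config current-context
    kubectl get nodes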

Step 1/4. Select guided or manual enrollment

There are two options for enrolling a Kubernetes cluster as a resource:

  • You can follow the guided enrollment steps in the Teleport Web UI.
  • You can run commands and edit files manually in a terminal.

The guided enrollment simplifies the deployment process by pre-populating commands and files with required information—for example, the token used to provision the Kubernetes cluster—and information you specify, such as the Kubernetes namespace, users, and groups to grant access to.

For information about other ways to enroll and discover Kubernetes clusters, see Registering Kubernetes Clusters with Teleport.
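If you choose the manual path, the first piece of information you need is the join token that lets the Teleport Kubernetes Service register with your Teleport cluster; the guided enrollment generates this token for you. As a sketch, you could create a short-lived token yourself with tctl:

    # Generate a join token, valid for one hour, for enrolling a Kubernetes cluster.
    tctl tokens add --type=kube --ttl=1h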

Step 2/4. Follow guided enrollment

To enroll a Kubernetes cluster using the Teleport Web UI:

  1. Open the Teleport Web UI and sign in using your administrative account.

  2. Click Enroll New Resources.

  3. Type all or part of Kubernetes in the Search field to filter the resource types displayed, then click Kubernetes.

  4. Copy the command to add the teleport-agent chart repository and paste it in a terminal on a workstation where kubectl is installed.

  5. Type the Teleport service namespace and the display name to use when connecting to this cluster, then click Next.

    After you click Next, Teleport generates a script to configure and enroll the Kubernetes cluster as a resource in the Teleport cluster.

  6. Copy the command displayed in the Teleport Web UI and run it in a terminal with access to your Kubernetes cluster.

    The Teleport Web UI displays "Successfully detected your new Kubernetes cluster" as confirmation that your cluster is enrolled. When you see this message, click Next to continue.
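
    You can optionally confirm from a terminal that the agent pod is running before you continue. This assumes the chart was installed into a namespace named teleport-agent; substitute the namespace you entered earlier.

    kubectl get pods -n teleport-agent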

Step 3/4. Test Kubernetes access

You can now set up access for specific Kubernetes groups and users to test access to the Kubernetes cluster you just enrolled.

To set up and test access:

  1. Type a Kubernetes group name and, optionally, one or more Kubernetes user names that should have access to Kubernetes resources, then click Next.

    You must specify at least one Kubernetes group. If you don't specify a Kubernetes user, you can connect to the cluster using your Teleport user by default.

  2. (Optional) Specify the namespace, a Kubernetes group from the previous step, and either your Teleport user or a Kubernetes user, then click Test Connection.

  3. (Optional) Copy and run the commands displayed in the Teleport Web UI to interact with the Kubernetes cluster to verify access through Teleport.

    tsh login --proxy=teleport.example.com:443 --auth=local --user=[email protected] teleport.example.com
    tsh kube login Kubernetes-cluster-name
    tsh kubectl get pods
  4. Click Finish.

  5. Click Browse Existing Resources to see your Kubernetes cluster and discovered applications.
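
As an additional command-line check, you can log in to the enrolled cluster with tsh and confirm that kubectl commands now route through Teleport. The cluster name example-minikube is an assumption; substitute the display name you chose earlier.

    tsh kube login example-minikube
    # tsh kube login updates your local kubeconfig, so plain kubectl is also proxied through Teleport.
    kubectl get pods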

Step 4/4. Configure roles and authorize access

To authenticate to a Kubernetes cluster using Teleport, you must have a Teleport role that grants access to the cluster you plan to interact with through at least one Kubernetes user or group.

The following example illustrates how to configure a kube-access role for one Kubernetes group named viewers and one Kubernetes user.

  1. Create a file called kube-access.yaml with the following content:

    kind: role
    metadata:
      name: kube-access
    version: v7
    spec:
      allow:
        kubernetes_labels:
          '*': '*'
        kubernetes_resources:
          - kind: '*'
            namespace: '*'
            name: '*'
            verbs: ['*']
        kubernetes_groups:
        - viewers
        kubernetes_users:
        - myuser
      deny: {}
    
  2. Apply your changes:

    tctl create -f kube-access.yaml
  3. Assign the kube-access role to your Teleport user by running the appropriate commands for your authentication provider:

    If you use Teleport's local user database:

    1. Retrieve your local user's configuration resource:

      tctl get users/$(tsh status -f json | jq -r '.active.username') > out.yaml
    2. Edit out.yaml, adding kube-access to the list of existing roles:

        roles:
         - access
         - auditor
         - editor
      +  - kube-access 
      
    3. Apply your changes:

      tctl create -f out.yaml
    4. Sign out of the Teleport cluster and sign in again to assume the new role.

    If you use the GitHub authentication connector:

    1. Retrieve your github authentication connector:

      tctl get github/github --with-secrets > github.yaml

      Note that the --with-secrets flag adds the value of spec.signing_key_pair.private_key to the github.yaml file. Because this key contains a sensitive value, you should remove the github.yaml file immediately after updating the resource.

    2. Edit github.yaml, adding kube-access to the teams_to_roles section.

      The team you should map to this role depends on how you have designed your organization's role-based access controls (RBAC). However, the team must include your user account and should be the smallest team possible within your organization.

      Here is an example:

        teams_to_roles:
          - organization: octocats
            team: admins
            roles:
              - access
      +       - kube-access
      
    3. Apply your changes:

      tctl create -f github.yaml
    4. Sign out of the Teleport cluster and sign in again to assume the new role.

    If you use a SAML authentication connector:

    1. Retrieve your saml configuration resource:

      tctl get --with-secrets saml/mysaml > saml.yaml

      Note that the --with-secrets flag adds the value of spec.signing_key_pair.private_key to the saml.yaml file. Because this key contains a sensitive value, you should remove the saml.yaml file immediately after updating the resource.

    2. Edit saml.yaml, adding kube-access to the attributes_to_roles section.

      The attribute you should map to this role depends on how you have designed your organization's role-based access controls (RBAC). However, the group must include your user account and should be the smallest group possible within your organization.

      Here is an example:

        attributes_to_roles:
          - name: "groups"
            value: "my-group"
            roles:
              - access
      +       - kube-access
      
    3. Apply your changes:

      tctl create -f saml.yaml
    4. Sign out of the Teleport cluster and sign in again to assume the new role.

    If you use an OIDC authentication connector:

    1. Retrieve your oidc configuration resource:

      tctl get oidc/myoidc --with-secrets > oidc.yaml

      Note that the --with-secrets flag adds the value of spec.signing_key_pair.private_key to the oidc.yaml file. Because this key contains a sensitive value, you should remove the oidc.yaml file immediately after updating the resource.

    2. Edit oidc.yaml, adding kube-access to the claims_to_roles section.

      The claim you should map to this role depends on how you have designed your organization's role-based access controls (RBAC). However, the group must include your user account and should be the smallest group possible within your organization.

      Here is an example:

        claims_to_roles:
          - name: "groups"
            value: "my-group"
            roles:
              - access
      +       - kube-access
      
    3. Apply your changes:

      tctl create -f oidc.yaml
    4. Sign out of the Teleport cluster and sign in again to assume the new role.

    You now have a Teleport role that enables a Teleport user with the kube-access role to authenticate to the Kubernetes cluster using Teleport credentials. To interact with the Kubernetes cluster, you also need to configure authorization within Kubernetes.

    To configure authorization within your Kubernetes cluster, you must create a Kubernetes RoleBinding or ClusterRoleBinding that grants permission to the kubernetes_users and kubernetes_groups you specified in the kube-access role.

    For this example, you grant read-only permissions to the viewers group specified in the kube-access role.

  4. Create a file called viewers-bind.yaml with the following contents:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: viewers-crb
    subjects:
    - kind: Group
      # Bind the group "viewers" to the kubernetes_groups assigned in the "kube-access" role
      name: viewers
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      # "view" is a default ClusterRole that grants read-only access to resources
      # See: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles
      name: view
      apiGroup: rbac.authorization.k8s.io
    
  5. Apply the ClusterRoleBinding with kubectl:

    tsh kubectl apply -f viewers-bind.yaml
  6. Log out of Teleport and log in again.

  7. List connected clusters using tsh kube ls:

    tsh kube ls

    The command displays current Kubernetes clusters.

    Kube Cluster Name Labels Selected 
    ----------------- ------ -------- 
    example-minikube        *        
    

    If you have more than one cluster enrolled, you can switch between clusters by running a tsh kube login cluster-name command.

  8. View pods using the kubectl command routed through the Teleport cluster:

    tsh kubectl get pods

    The command displays output similar to the following:

    NAME                              READY   STATUS    RESTARTS   AGE
    balanced-567b5f87b5-abcde         1/1     Running   0          143m
    hello-minikube-59d4768566-abcde   1/1     Running   0          144m
    
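Because the viewers group is bound to the built-in read-only view ClusterRole, read operations should be allowed and write operations denied. Assuming your Kubernetes groups come only from the kube-access role, a quick sanity check might look like this:

    # Read access is granted by the "view" ClusterRole.
    tsh kubectl auth can-i list pods
    # Write access should be denied under the read-only binding.
    tsh kubectl auth can-i create deployments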

Next steps

This guide demonstrated how to enroll a Kubernetes cluster by running the Teleport Kubernetes Service directly on the Kubernetes cluster it protects.