Running the Teleport Kubernetes Agent using Helm via ArgoCD

Teleport can provide secure, unified access to your Kubernetes clusters. This guide will show you how to deploy the Teleport Kubernetes agent on a Kubernetes cluster using Helm and ArgoCD.

How it works

Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes. It is used to orchestrate large deployments and to prevent Kubernetes resources from drifting away from the desired state.

Teleport has an official Helm chart (teleport-kube-agent) that deploys a Teleport agent in a Kubernetes cluster. The agent can be configured to run several services, but by default it runs the kubernetes_service to provide access to the Kubernetes API via Teleport.

This guide leverages ArgoCD's native Helm support to deploy the Teleport agent using the teleport-kube-agent Helm chart.
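
If you want to inspect the chart before wiring it into ArgoCD, you can pull its default values directly from the Teleport Helm repository. A minimal sketch using standard Helm commands (the repository alias teleport is an arbitrary choice):

helm repo add teleport https://charts.releases.teleport.dev
helm repo update
helm show values teleport/teleport-kube-agent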

Prerequisites

  • An existing Kubernetes cluster you wish to provide access to via Teleport.
  • To check that you can connect to your Teleport cluster, sign in with tsh login, then verify that you can run tctl commands using your current credentials. For example, run the following command, assigning teleport.example.com to the domain name of the Teleport Proxy Service in your cluster and user@example.com to your Teleport username:
    tsh login --proxy=teleport.example.com --user=user@example.com
    tctl status

    Cluster  teleport.example.com
    Version  18.0.2
    CA pin   sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678

    If you can connect to the cluster and run the tctl status command, you can use your current credentials to run subsequent tctl commands from your workstation. If you host your own Teleport cluster, you can also run tctl commands on the computer that hosts the Teleport Auth Service for full permissions.
  • An existing ArgoCD instance (version 2.10 or greater) that can deploy to the above Kubernetes cluster.
  • The tsh client tool v18.0.2+ installed on your workstation. You can download this from our installation page.
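
Before continuing, you can confirm that the tools on your workstation can reach both your ArgoCD instance and the target Kubernetes cluster. A minimal sketch, assuming your current kubectl context points at the cluster you want to register and that you are logged in with the argocd CLI:

tsh version
argocd version
kubectl cluster-info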

Step 1/3. Generate a join token

Teleport agents use a join token to obtain certificates and connect to Teleport. See the joining documentation for more information. The token is only used for the initial join; the Teleport Kubernetes agent stores its certificates in Kubernetes and does not need a token to join again in the future. In this section, we will create a token for the agent to join the Teleport cluster.

tctl tokens add --type=kube,app --ttl=5m
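
If you want to pass the token into later steps without copying it by hand, you can capture it in a shell variable instead. A minimal sketch, assuming your tctl version supports the --format=text flag, which prints only the token value (run this in place of the command above, since each invocation creates a new token):

TOKEN=$(tctl tokens add --type=kube,app --ttl=5m --format=text)
echo "${TOKEN}"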

You can specify the following token types:

Role             Teleport Service
app              Application Service
auth             Auth Service
bot              Machine ID
db               Database Service
discovery        Discovery Service
kube             Kubernetes Service
node             SSH Service
proxy            Proxy Service
windowsdesktop   Windows Desktop Service

See the teleport-kube-agent chart reference for the roles and token types that the chart supports.

Step 2/3. Configure and deploy the teleport-kube-agent Helm chart via ArgoCD

  1. Create a namespace for Teleport and configure its Pod Security Admission, which enforces security standards on pods in the namespace:

    kubectl create namespace teleport
    namespace/teleport created
    kubectl label namespace teleport 'pod-security.kubernetes.io/enforce=baseline'
    namespace/teleport labeled
  2. Create a new ArgoCD application using the following as a template.

project: default
source:
  repoURL: 'https://charts.releases.teleport.dev'
  targetRevision: 18.0.2
  helm:
    values: |-
      roles: kube,app
      authToken: $YOUR_AUTH_TOKEN
      proxyAddr: $YOUR_PROXY_ADDRESS
      kubeClusterName: $YOUR_KUBE_CLUSTER_NAME

      highAvailability:
          replicaCount: 2
          podDisruptionBudget:
              enabled: true
              minAvailable: 1
  chart: teleport-kube-agent
destination:
  server: 'https://kubernetes.default.svc'
  namespace: teleport
# This section is used to allow the teleport-kube-agent-updater to update the agent
# without ArgoCD reverting the update.
ignoreDifferences:
  - group: apps
    kind: StatefulSet
    name: $YOUR_APPLICATION_NAME
    namespace: teleport
    jqPathExpressions:
      - '.spec.template.spec.containers[] | select(.name == "teleport").image'
  3. Sync your changes to apply the configuration using the following command:

    argocd app sync $YOUR_APPLICATION_NAME
  4. To verify setup, navigate to the 'Resources' page in your Teleport cluster to confirm the Kubernetes cluster is registered.
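
If you prefer to manage the ArgoCD Application itself declaratively (for example, from a Git repository), the same settings can be wrapped in an Application custom resource. A minimal sketch, assuming ArgoCD is installed in the argocd namespace and using teleport-kube-agent as a placeholder application name; the helm.values block carries the same values as in the template above:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: teleport-kube-agent
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'https://charts.releases.teleport.dev'
    chart: teleport-kube-agent
    targetRevision: 18.0.2
    helm:
      values: |-
        roles: kube,app
        authToken: $YOUR_AUTH_TOKEN
        proxyAddr: $YOUR_PROXY_ADDRESS
        kubeClusterName: $YOUR_KUBE_CLUSTER_NAME
        # plus any other values from the template above, such as highAvailability
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: teleport
  # Allow the teleport-kube-agent-updater to update the agent without ArgoCD reverting it.
  ignoreDifferences:
    - group: apps
      kind: StatefulSet
      name: teleport-kube-agent
      namespace: teleport
      jqPathExpressions:
        - '.spec.template.spec.containers[] | select(.name == "teleport").image'

Apply the manifest with kubectl apply -f and ArgoCD will create and sync the application as usual.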

Step 3/3. Manage access to your new resource

In this step, we'll create a Teleport role called kube-access that allows users to send requests to any Teleport-protected Kubernetes cluster as a member of the viewers group. The Teleport Kubernetes Service will impersonate the viewers group when proxying requests from those users.

  1. Create a file called kube-access.yaml with the following content:

    kind: role
    metadata:
      name: kube-access
    version: v7
    spec:
      allow:
        kubernetes_labels:
          '*': '*'
        kubernetes_resources:
          - kind: '*'
            namespace: '*'
            name: '*'
            verbs: ['*']
        kubernetes_groups:
        - viewers
      deny: {}
    
  2. Apply your changes:

    tctl create -f kube-access.yaml

    Tip: You can also create and edit roles using the Web UI. Go to Access -> Roles and click Create New Role or pick an existing role to edit.

  3. Assign the kube-access role to your Teleport user by running the appropriate commands for your authentication provider:

    1. Retrieve your local user's roles as a comma-separated list:

      ROLES=$(tsh status -f json | jq -r '.active.roles | join(",")')
    2. Edit your local user to add the new role:

      tctl users update $(tsh status -f json | jq -r '.active.username') \
        --set-roles "${ROLES?},kube-access"
    3. Sign out of the Teleport cluster and sign in again to assume the new role.
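
To confirm that the new role is active after you sign back in, you can check your profile. A minimal sketch, reusing the login command from the prerequisites:

    tsh logout
    tsh login --proxy=teleport.example.com --user=user@example.com
    tsh status

The Roles line of the tsh status output should now include kube-access.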

While you have authorized the kube-access role to access Kubernetes clusters as a member of the viewers group, this group does not yet have any permissions within the Kubernetes cluster itself. To grant those permissions, create a Kubernetes RoleBinding or ClusterRoleBinding for the viewers group.

  1. Create a file called viewers-bind.yaml with the following contents:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: viewers-crb
    subjects:
    - kind: Group
      # Bind the group "viewers" to the kubernetes_groups assigned in the "kube-access" role
      name: viewers
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      # "view" is a default ClusterRole that grants read-only access to resources
      # See: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles
      name: view
      apiGroup: rbac.authorization.k8s.io
    
  2. Apply the ClusterRoleBinding with kubectl:

    kubectl apply -f viewers-bind.yaml

Your Teleport user now has permissions to assume membership in the viewers group when accessing your Kubernetes cluster, and the viewers group now has permissions to view resources in the cluster.
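
As a final check, you can access the cluster through Teleport and confirm that read-only operations succeed. A minimal sketch, assuming the cluster is registered under the name you set in kubeClusterName:

    tsh kube ls
    tsh kube login $YOUR_KUBE_CLUSTER_NAME
    kubectl get pods --all-namespaces
    kubectl auth can-i create deployments

Listing pods should succeed, while the can-i check should return "no", since the view ClusterRole only grants read-only access.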