
Teleport GKE Auto-Discovery

  • Available for:
  • OpenSource
  • Enterprise
  • Cloud

The Teleport Discovery Service can automatically register your Google Kubernetes Engine (GKE) clusters with Teleport. With Teleport Kubernetes Discovery, you can configure the Teleport Kubernetes Service and Discovery Service once, then create GKE clusters without needing to register them with Teleport after each creation.

In this guide, we will show you how to get started with Teleport Kubernetes Discovery for GKE.

Overview

Teleport Kubernetes Auto-Discovery involves two components.

The first, the Discovery Service, is responsible for watching your cloud provider and checking if there are any new clusters or if there have been any modifications to previously discovered clusters. The second, the Kubernetes Service, monitors the clusters created by the Discovery Service. It proxies communications between users and the API servers of these clusters.

Tip

This guide presents the Discovery Service and Kubernetes Service running in the same process; however, both can run independently and on different machines.

For example, you can run an instance of the Kubernetes Service in each Kubernetes cluster you want to register with Teleport, and an instance of the Discovery Service in any network you wish.

Prerequisites

  • A running Teleport cluster. For details on how to set this up, see the Getting Started guide.

  • The tctl admin tool and tsh client tool version >= 15.1.10.

    See Installation for details.

To check version information, run the tctl version and tsh version commands. For example:

tctl version

Teleport v15.1.10 git:api/14.0.0-gd1e081e go1.21

tsh version

Teleport v15.1.10 go1.21

Proxy version: 15.1.10
Proxy: teleport.example.com
  • A running Teleport Enterprise cluster. For details on how to set this up, see the Enterprise Getting Started guide.

  • The Enterprise tctl admin tool and tsh client tool version >= 15.1.10.

    You can download these tools by visiting your Teleport account workspace.

To check version information, run the tctl version and tsh version commands. For example:

tctl version

Teleport Enterprise v15.1.10 git:api/14.0.0-gd1e081e go1.21

tsh version

Teleport v15.1.10 go1.21

Proxy version: 15.1.10
Proxy: teleport.example.com
  • A Teleport Enterprise Cloud account. If you don't have an account, sign up to begin a free trial.

  • The Enterprise tctl admin tool and tsh client tool version >= 15.1.9.

    You can download these tools from the Cloud Downloads page.

To check version information, run the tctl version and tsh version commands. For example:

tctl version

Teleport Enterprise v15.1.9 git:api/14.0.0-gd1e081e go1.21

tsh version

Teleport v15.1.9 go1.21

Proxy version: 15.1.9
Proxy: teleport.example.com
  • A Google Cloud account with permissions to create GKE clusters, IAM roles, and service accounts.
  • The gcloud CLI tool. Follow the Google Cloud documentation page to install and authenticate to gcloud.
  • One or more GKE clusters running. Your Kubernetes user must have permissions to create ClusterRole and ClusterRoleBinding resources in your clusters.
  • A Linux host where you will run the Teleport Discovery and Kubernetes services. You can run this host on any cloud provider or even use a local machine.
  • To check that you can connect to your Teleport cluster, sign in with tsh login, then verify that you can run tctl commands using your current credentials. tctl is supported on macOS and Linux machines. For example:
    tsh login --proxy=teleport.example.com --user=[email protected]
    tctl status

    Cluster teleport.example.com

    Version 15.1.10

    CA pin sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678

    If you can connect to the cluster and run the tctl status command, you can use your current credentials to run subsequent tctl commands from your workstation. If you host your own Teleport cluster, you can also run tctl commands on the computer that hosts the Teleport Auth Service for full permissions.

Step 1/3. Obtain Google Cloud credentials

The Teleport Discovery Service and Kubernetes Service use a Google Cloud service account to discover GKE clusters and manage access from Teleport users. In this step, you will create a service account and download a credentials file for the Teleport Discovery Service.

Create an IAM role for the Discovery Service

The Teleport Discovery Service needs permissions to retrieve GKE clusters associated with your Google Cloud project.

To grant these permissions, create a file called GKEKubernetesAutoDisc.yaml with the following content:

title: GKE Cluster Discoverer
description: "Get and list GKE clusters"
stage: GA
includedPermissions:
- container.clusters.get
- container.clusters.list

Create the role, assigning the --project flag to the name of your Google Cloud project:

gcloud iam roles create GKEKubernetesAutoDisc \
  --project=google-cloud-project \
  --file=GKEKubernetesAutoDisc.yaml

Create an IAM role for the Kubernetes Service

The Teleport Kubernetes Service needs Google Cloud IAM permissions in order to forward user traffic to your GKE clusters.

Create a file called GKEAccessManager.yaml with the following content:

title: GKE Cluster Access Manager
description: "Manage access to GKE clusters"
stage: GA
includedPermissions:
- container.clusters.get
- container.clusters.impersonate
- container.pods.get
- container.selfSubjectAccessReviews.create
- container.selfSubjectRulesReviews.create

Create the role, assigning the --project flag to the name of your Google Cloud project. If you receive a prompt indicating that certain permissions are in TESTING, enter y:

gcloud iam roles create GKEAccessManager \
  --project=google-cloud-project \
  --file=GKEAccessManager.yaml

Create a service account

Now that you have declared roles for the Discovery Service and Kubernetes Service, create a service account so you can assign these roles.

Run the following command to create a service account called teleport-discovery-kubernetes:

gcloud iam service-accounts create teleport-discovery-kubernetes \
  --description="Teleport Discovery Service and Kubernetes Service" \
  --display-name="teleport-discovery-kubernetes"

Grant the roles you defined earlier to your service account, assigning PROJECT_ID to the name of your Google Cloud project:

PROJECT_ID=google-cloud-project
gcloud projects add-iam-policy-binding ${PROJECT_ID?} \
  --member="serviceAccount:teleport-discovery-kubernetes@${PROJECT_ID?}.iam.gserviceaccount.com" \
  --role="projects/${PROJECT_ID?}/roles/GKEKubernetesAutoDisc"
gcloud projects add-iam-policy-binding ${PROJECT_ID?} \
  --member="serviceAccount:teleport-discovery-kubernetes@${PROJECT_ID?}.iam.gserviceaccount.com" \
  --role="projects/${PROJECT_ID?}/roles/GKEAccessManager"

If you plan to run the Discovery Service and Kubernetes Service on separate hosts, create a separate service account for each service instead:

gcloud iam service-accounts create teleport-discovery-service \
  --description="Teleport Discovery Service" \
  --display-name="teleport-discovery-service"
gcloud iam service-accounts create teleport-kubernetes-service \
  --description="Teleport Kubernetes Service" \
  --display-name="teleport-kubernetes-service"

Grant the roles you defined earlier to each service account, assigning PROJECT_ID to the name of your Google Cloud project:

PROJECT_ID=google-cloud-project
gcloud projects add-iam-policy-binding ${PROJECT_ID?} \
  --member="serviceAccount:teleport-discovery-service@${PROJECT_ID?}.iam.gserviceaccount.com" \
  --role="projects/${PROJECT_ID?}/roles/GKEKubernetesAutoDisc"
gcloud projects add-iam-policy-binding ${PROJECT_ID?} \
  --member="serviceAccount:teleport-kubernetes-service@${PROJECT_ID?}.iam.gserviceaccount.com" \
  --role="projects/${PROJECT_ID?}/roles/GKEAccessManager"
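To confirm the bindings, you can list the roles granted to a service account by filtering the project's IAM policy. This is an optional sanity check; adjust the member address for whichever service account you created:

gcloud projects get-iam-policy ${PROJECT_ID?} \
  --flatten="bindings[].members" \
  --format="table(bindings.role)" \
  --filter="bindings.members:teleport-discovery-service@${PROJECT_ID?}.iam.gserviceaccount.com"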

Retrieve credentials for your Teleport services

Now that you have created a Google Cloud service account and attached roles to it, associate your service account with the Teleport Kubernetes Service and Discovery Service.

The process is different depending on whether you are deploying the Teleport Kubernetes Service and Discovery Service on Google Cloud or some other way (e.g., via Amazon EC2 or on a local network).

If you are running both services on a single Google Cloud VM, stop the VM so you can attach your service account to it:

gcloud compute instances stop vm-name --zone=google-cloud-region

Attach your service account to the instance, assigning the name of your VM to vm-name and the name of your Google Cloud region to google-cloud-region:

gcloud compute instances set-service-account vm-name \
  --service-account teleport-discovery-kubernetes@${PROJECT_ID?}.iam.gserviceaccount.com \
  --zone google-cloud-region \
  --scopes=cloud-platform

If you are running the services on separate Google Cloud VMs, stop each VM you plan to use to run the Teleport Kubernetes Service and Discovery Service.

Attach the teleport-kubernetes-service service account to the VM running the Kubernetes Service:

gcloud compute instances set-service-account ${VM1_NAME?} \
  --service-account teleport-kubernetes-service@${PROJECT_ID?}.iam.gserviceaccount.com \
  --zone google-cloud-region \
  --scopes=cloud-platform

Attach the teleport-discovery-service service account to the VM running the Discovery Service:

gcloud compute instances set-service-account ${VM2_NAME?} \
  --service-account teleport-discovery-service@${PROJECT_ID?}.iam.gserviceaccount.com \
  --zone google-cloud-region \
  --scopes=cloud-platform

You must include the --scopes flag in the gcloud compute instances set-service-account command. Otherwise, your Google Cloud VM will fail to obtain the required authorization to access the GKE API.
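To confirm that the service account and scopes were attached, you can describe the instance and inspect its serviceAccounts field, for example:

gcloud compute instances describe vm-name \
  --zone google-cloud-region \
  --format="yaml(serviceAccounts)"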

Once you have attached the service account, restart your VM:

gcloud compute instances start vm-name --zone google-cloud-region

If you are running the Teleport services outside Google Cloud, download a credentials file for the service account used by the Discovery Service and Kubernetes Service:

PROJECT_ID=google-cloud-project
gcloud iam service-accounts keys create google-cloud-credentials.json \
  --iam-account=teleport-discovery-kubernetes@${PROJECT_ID?}.iam.gserviceaccount.com

Move your credentials file to the host running the Teleport Discovery Service and Kubernetes Service, placing it at the path /var/lib/teleport/google-cloud-credentials.json. We will use this credentials file when running the services later in this guide.
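For example, assuming you have SSH access to the host (user and discovery-host below are placeholders), you could copy the file into place with scp:

scp google-cloud-credentials.json user@discovery-host:/tmp/
ssh user@discovery-host "sudo mv /tmp/google-cloud-credentials.json /var/lib/teleport/google-cloud-credentials.json"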

If you are running the services on separate hosts, download separate credentials files for each service:

PROJECT_ID=google-cloud-project
gcloud iam service-accounts keys create discovery-service-credentials.json \
  --iam-account=teleport-discovery-service@${PROJECT_ID?}.iam.gserviceaccount.com
gcloud iam service-accounts keys create kube-service-credentials.json \
  --iam-account=teleport-kubernetes-service@${PROJECT_ID?}.iam.gserviceaccount.com

Move discovery-service-credentials.json to the host running the Teleport Discovery Service at the path /var/lib/teleport/google-cloud-credentials.json.

Move kube-service-credentials.json to the host running the Teleport Kubernetes Service at the path /var/lib/teleport/google-cloud-credentials.json.

We will use these credentials files when running the services later in this guide.

Step 2/3. Configure Teleport to discover GKE clusters

Now that you have created a service account that can discover GKE clusters and a cluster role that can manage access, configure the Teleport Discovery Service to detect GKE clusters and the Kubernetes Service to proxy user traffic.

Install Teleport

Install Teleport on the host you are using to run the Kubernetes Service and Discovery Service:

Select an edition, then follow the instructions for that edition to install Teleport.

The following command updates the repository for the package manager on the local operating system and installs the provided Teleport version:

curl https://goteleport.com/static/install.sh | bash -s 15.1.10

Download Teleport's PGP public key

sudo curl https://apt.releases.teleport.dev/gpg \
  -o /usr/share/keyrings/teleport-archive-keyring.asc

Source variables about OS version

source /etc/os-release

Add the Teleport APT repository for v15. You'll need to update this file for each major release of Teleport.

echo "deb [signed-by=/usr/share/keyrings/teleport-archive-keyring.asc] \
https://apt.releases.teleport.dev/${ID?} ${VERSION_CODENAME?} stable/v15" \
| sudo tee /etc/apt/sources.list.d/teleport.list > /dev/null

sudo apt-get update
sudo apt-get install teleport-ent

For FedRAMP/FIPS-compliant installations, install the teleport-ent-fips package instead:

sudo apt-get install teleport-ent-fips

Source variables about OS version

source /etc/os-release

Add the Teleport YUM repository for v15. You'll need to update this file for each major release of Teleport. First, get the major version from $VERSION_ID so this fetches the correct package version.

VERSION_ID=$(echo $VERSION_ID | grep -Eo "^[0-9]+")
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo "$(rpm --eval "https://yum.releases.teleport.dev/$ID/$VERSION_ID/Teleport/%{_arch}/stable/v15/teleport.repo")"
sudo yum install teleport-ent

Tip: Add /usr/local/bin to path used by sudo (so 'sudo tctl users add' will work as per the docs)

echo "Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin" > /etc/sudoers.d/secure_path

For FedRAMP/FIPS-compliant installations, install the teleport-ent-fips package instead:

sudo yum install teleport-ent-fips

Source variables about OS version

source /etc/os-release

Add the Teleport Zypper repository for v15. You'll need to update this file for each major release of Teleport. First, get the OS major version from $VERSION_ID so this fetches the correct package version.

VERSION_ID=$(echo $VERSION_ID | grep -Eo "^[0-9]+")

Use zypper to add the teleport RPM repo

sudo zypper addrepo --refresh --repo $(rpm --eval "https://zypper.releases.teleport.dev/$ID/$VERSION_ID/Teleport/%{_arch}/stable/v15/teleport-zypper.repo")
sudo zypper install teleport-ent

Tip: Add /usr/local/bin to path used by sudo (so 'sudo tctl users add' will work as per the docs)

echo "Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin" > /etc/sudoers.d/secure_path

For FedRAMP/FIPS-compliant installations, install the teleport-ent-fips package instead:

sudo yum install teleport-ent-fips

Source variables about OS version

source /etc/os-release

Add the Teleport YUM repository for v15. You'll need to update this file for each major release of Teleport. First, get the major version from $VERSION_ID so this fetches the correct package version.

VERSION_ID=$(echo $VERSION_ID | grep -Eo "^[0-9]+")

Use the dnf config manager plugin to add the teleport RPM repo

sudo dnf config-manager --add-repo "$(rpm --eval "https://yum.releases.teleport.dev/$ID/$VERSION_ID/Teleport/%{_arch}/stable/v15/teleport.repo")"

Install teleport

sudo dnf install teleport-ent

Tip: Add /usr/local/bin to path used by sudo (so 'sudo tctl users add' will work as per the docs)

echo "Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin" > /etc/sudoers.d/secure_path

For FedRAMP/FIPS-compliant installations, install the teleport-ent-fips package instead:

sudo dnf install teleport-ent-fips

Source variables about OS version

source /etc/os-release

Add the Teleport Zypper repository. First, get the OS major version from $VERSION_ID so this fetches the correct package version.

VERSION_ID=$(echo $VERSION_ID | grep -Eo "^[0-9]+")

Use Zypper to add the teleport RPM repo

sudo zypper addrepo --refresh --repo $(rpm --eval "https://zypper.releases.teleport.dev/$ID/$VERSION_ID/Teleport/%{_arch}/stable/v15/teleport-zypper.repo")

Install teleport

sudo zypper install teleport-ent

For FedRAMP/FIPS-compliant installations, install the teleport-ent-fips package instead:

sudo zypper install teleport-ent-fips

In the example commands below, replace $SYSTEM_ARCH with the appropriate value (amd64, arm64, or arm).

curl https://cdn.teleport.dev/teleport-ent-v15.1.10-linux-$SYSTEM_ARCH-bin.tar.gz.sha256

<checksum> <filename>

curl -O https://cdn.teleport.dev/teleport-ent-v15.1.10-linux-$SYSTEM_ARCH-bin.tar.gz
shasum -a 256 teleport-ent-v15.1.10-linux-$SYSTEM_ARCH-bin.tar.gz

Verify that the checksums match

tar -xvf teleport-ent-v15.1.10-linux-$SYSTEM_ARCH-bin.tar.gz
cd teleport-ent
sudo ./install

For FedRAMP/FIPS-compliant installations of Teleport Enterprise, package URLs will be slightly different:

curl https://cdn.teleport.dev/teleport-ent-v15.1.10-linux-$SYSTEM_ARCH-fips-bin.tar.gz.sha256

<checksum> <filename>

curl -O https://cdn.teleport.dev/teleport-ent-v15.1.10-linux-$SYSTEM_ARCH-fips-bin.tar.gz
shasum -a 256 teleport-ent-v15.1.10-linux-$SYSTEM_ARCH-fips-bin.tar.gz

Verify that the checksums match

tar -xvf teleport-ent-v15.1.10-linux-$SYSTEM_ARCH-fips-bin.tar.gz
cd teleport-ent
sudo ./install

OS repository channels

The following channels are available for APT, YUM, and Zypper repos. They may be used in place of stable/v15 anywhere in the Teleport documentation.

  • stable/<major>: receives releases for the specified major release line, e.g., v15
  • stable/cloud: rolling channel that receives releases compatible with the current Cloud version
  • stable/rolling: rolling channel that receives all published Teleport releases

Add the Teleport repository to your repository list:

Download Teleport's PGP public key

sudo curl https://apt.releases.teleport.dev/gpg \
  -o /usr/share/keyrings/teleport-archive-keyring.asc

Source variables about OS version

source /etc/os-release

Add the Teleport APT repository for cloud.

echo "deb [signed-by=/usr/share/keyrings/teleport-archive-keyring.asc] \https://apt.releases.teleport.dev/${ID?} ${VERSION_CODENAME?} stable/cloud" \| sudo tee /etc/apt/sources.list.d/teleport.list > /dev/null

Provide your Teleport domain to query the latest compatible Teleport version

export TELEPORT_DOMAIN=example.teleport.com
export TELEPORT_VERSION="$(curl https://$TELEPORT_DOMAIN/v1/webapi/automaticupgrades/channel/default/version | sed 's/v//')"

Update the repo and install Teleport and the Teleport updater

sudo apt-get update
sudo apt-get install "teleport-ent=$TELEPORT_VERSION" teleport-ent-updater

Source variables about OS version

source /etc/os-release

Add the Teleport YUM repository for cloud. First, get the OS major version from $VERSION_ID so this fetches the correct package version.

VERSION_ID=$(echo $VERSION_ID | grep -Eo "^[0-9]+")
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo "$(rpm --eval "https://yum.releases.teleport.dev/$ID/$VERSION_ID/Teleport/%{_arch}/stable/cloud/teleport-yum.repo")"

Provide your Teleport domain to query the latest compatible Teleport version

export TELEPORT_DOMAIN=example.teleport.com
export TELEPORT_VERSION="$(curl https://$TELEPORT_DOMAIN/v1/webapi/automaticupgrades/channel/default/version | sed 's/v//')"

Install Teleport and the Teleport updater

sudo yum install "teleport-ent-$TELEPORT_VERSION" teleport-ent-updater

Tip: Add /usr/local/bin to path used by sudo (so 'sudo tctl users add' will work as per the docs)

echo "Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin" > /etc/sudoers.d/secure_path

Source variables about OS version

source /etc/os-release

Add the Teleport YUM repository for cloud. First, get the OS major version from $VERSION_ID so this fetches the correct package version.

VERSION_ID=$(echo $VERSION_ID | grep -Eo "^[0-9]+")

Use the dnf config manager plugin to add the teleport RPM repo

sudo dnf config-manager --add-repo "$(rpm --eval "https://yum.releases.teleport.dev/$ID/$VERSION_ID/Teleport/%{_arch}/stable/cloud/teleport-yum.repo")"

Provide your Teleport domain to query the latest compatible Teleport version

export TELEPORT_DOMAIN=example.teleport.com
export TELEPORT_VERSION="$(curl https://$TELEPORT_DOMAIN/v1/webapi/automaticupgrades/channel/default/version | sed 's/v//')"

Install Teleport and the Teleport updater

sudo dnf install "teleport-ent-$TELEPORT_VERSION" teleport-ent-updater

Tip: Add /usr/local/bin to path used by sudo (so 'sudo tctl users add' will work as per the docs)

echo "Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin" > /etc/sudoers.d/secure_path

Source variables about OS version

source /etc/os-release

Add the Teleport Zypper repository for cloud. First, get the OS major version from $VERSION_ID so this fetches the correct package version.

VERSION_ID=$(echo $VERSION_ID | grep -Eo "^[0-9]+")

Use Zypper to add the teleport RPM repo

sudo zypper addrepo --refresh --repo $(rpm --eval "https://zypper.releases.teleport.dev/$ID/$VERSION_ID/Teleport/%{_arch}/stable/cloud/teleport-zypper.repo")

Provide your Teleport domain to query the latest compatible Teleport version

export TELEPORT_DOMAIN=example.teleport.com
export TELEPORT_VERSION="$(curl https://$TELEPORT_DOMAIN/v1/webapi/automaticupgrades/channel/default/version | sed 's/v//')"

Install Teleport and the Teleport updater

sudo zypper install "teleport-ent-$TELEPORT_VERSION" teleport-ent-updater


Before installing a teleport binary with a version besides v15, read our compatibility rules to ensure that the binary is compatible with Teleport Enterprise Cloud.

Teleport uses Semantic Versioning. Version numbers include a major version, minor version, and patch version, separated by dots. When running multiple teleport binaries within a cluster, the following rules apply:

  • Patch and minor versions are always compatible, for example, any 8.0.1 component will work with any 8.0.3 component and any 8.1.0 component will work with any 8.3.0 component.
  • Servers support clients that are one major version behind, but do not support clients that are on a newer major version. For example, an 8.x.x Proxy Service instance is compatible with 7.x.x agents and 7.x.x tsh, but we don't guarantee that a 9.x.x agent will work with an 8.x.x Proxy Service instance. This also means you must not attempt to upgrade from 6.x.x straight to 8.x.x. You must upgrade to 7.x.x first.
  • Proxy Service instances and agents do not support Auth Service instances that are on an older major version, and will fail to connect to older Auth Service instances by default. You can override version checks by passing --skip-version-check when starting agents and Proxy Service instances.

Create a join token

The Teleport Discovery Service and Kubernetes Service require an authentication token in order to join the cluster. Generate one by running the following tctl command:

tctl tokens add --type=discovery,kube --format=text
abcd123-insecure-do-not-use-this

Copy the token (e.g., abcd123-insecure-do-not-use-this above) and save the token in /tmp/token on the machine that will run the Discovery Service and Kubernetes Service, for example:

echo abcd123-insecure-do-not-use-this | sudo tee /tmp/token

abcd123-insecure-do-not-use-this

If you are running the services on separate hosts, generate separate tokens for the Kubernetes Service and Discovery Service by running the following tctl commands:

tctl tokens add --type=discovery --format=text

efgh456-insecure-do-not-use-this

tctl tokens add --type=kube --format=text

ijkl789-insecure-do-not-use-this

Copy each token (e.g., efgh456-insecure-do-not-use-this and ijkl789-insecure-do-not-use-this above) and save it in /tmp/token on the machine that will run the appropriate service.
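In either case, you can confirm that the tokens exist, and check when they expire, by listing them:

tctl tokens ls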

Configure the Kubernetes Service and Discovery Service

On the host running the Kubernetes Service and Discovery Service, create a Teleport configuration file with the following content at /etc/teleport.yaml:

The Discovery Service exposes a configuration parameter, discovery_service.discovery_group, that allows you to group discovered resources into different sets. This parameter prevents Discovery Agents watching different sets of cloud resources from colliding with each other and deleting resources created by other services.

When running multiple Discovery Services, you must ensure that each service is configured with the same discovery_group value if they are watching the same cloud resources or a different value if they are watching different cloud resources.

You can run a mix of configurations in the same Teleport cluster: some Discovery Services can watch the same cloud resources while others watch different ones. For example, a four-agent high-availability deployment covering two different cloud accounts would run with the following configuration:

  • 2 Discovery Services configured with discovery_group: "prod", polling data from the Production account.
  • 2 Discovery Services configured with discovery_group: "staging", polling data from the Staging account.

For a host running both services, the configuration file will include the following:

version: v3
teleport:
  join_params:
    token_name: "/tmp/token"
    method: token
  proxy_server: "teleport.example.com:443"
auth_service:
  enabled: off
proxy_service:
  enabled: off
ssh_service:
  enabled: off
discovery_service:
  enabled: "yes"
  discovery_group: "gke-myproject"
  gcp:
    - types: ["gke"]
      locations: ["*"]
      project_ids: ["myproject"] # replace with my project ID
      tags:
        "*" : "*"
kubernetes_service:
  enabled: "yes"
  resources:
  - labels:
      "*": "*"

If you are running the services on separate hosts, follow the instructions in this section with two configuration files. The configuration file you will save at /etc/teleport.yaml on the Kubernetes Service host will include the following:

version: v3
teleport:
  join_params:
    token_name: "/tmp/token"
    method: token
  proxy_server: teleport.example.com:443    
auth_service:
  enabled: off
proxy_service:
  enabled: off
ssh_service:
  enabled: off
kubernetes_service:
  enabled: "yes"
  resources:
  - labels:
      "*": "*"

On the Discovery Service host, the file will include the following:

version: v3
teleport:
  join_params:
    token_name: "/tmp/token"
    method: token
  proxy_server: teleport.example.com:443    
auth_service:
  enabled: off
proxy_service:
  enabled: off
ssh_service:
  enabled: off
discovery_service:
  enabled: "yes"
  gcp:
    - types: ["gke"]
      locations: ["*"]
      project_ids: ["myproject"] # replace with my project ID
      tags:
        "*" : "*"

Edit this configuration for your environment as explained below.

proxy_server

Replace teleport.example.com:443 with the host and port of your Teleport Proxy Service (e.g., mytenant.teleport.sh:443 for a Teleport Cloud tenant).

discovery_service.gcp

Each item in discovery_service.gcp is a matcher for Kubernetes clusters running on GKE. The Discovery Service periodically executes a request to the Google Cloud API based on each matcher to list GKE clusters. In this case, we have declared a single matcher.

Each matcher searches for clusters that match all properties of the matcher, i.e., that belong to the specified locations and projects and have the specified tags. The Discovery Service registers GKE clusters that match any configured matcher.

This means that if you declare the following two matchers, the Discovery Service will register clusters in project myproj-dev running in us-east1, as well as clusters in project myproj-prod running in us-east2, but not clusters in myproj-dev running in us-east2:

discovery_service:
  enabled: "yes"
  gcp:
    - types: ["gke"]
      locations: ["us-east1"]
      project_ids: ["myproj-dev"]
      tags:
        "*" : "*"
    - types: ["gke"]
      locations: ["us-east2"]
      project_ids: ["myproj-prod"]
      tags:
        "*" : "*"

discovery_service.gcp[0].types

Each matcher's types field must be set to an array with a single string value, gke.

discovery_service.gcp[0].project_ids

In your matcher, replace myproject with the ID of your Google Cloud project. The project_ids field must include at least one value, and it must not be the wildcard character (*).

discovery_service.gcp[0].locations

Each matcher's locations field contains an array of Google Cloud region or zone names that the matcher will search for GKE clusters. The wildcard character, *, configures the matcher to search all locations.

discovery_service.gcp[0].tags

The tags field consists of a map where each key is a string that represents the key of a tag, and each value is either a single string or an array of strings, representing one tag value or a list of tag values.

A wildcard key or value matches any tag key or value in your Google Cloud account. If you include a specific value instead, the matcher will only match GKE clusters that carry the provided tag.
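For example, assuming your GKE clusters carry an env tag (a hypothetical tag used here for illustration), the following matcher snippet would register only clusters tagged env:prod or env:staging:

      tags:
        "env": ["prod", "staging"]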

Start the Kubernetes Service and Discovery Service

On the hosts where you will run the Kubernetes Service and Discovery Service, start Teleport. The required steps depend on:

  • Whether you installed Teleport using a package manager or via a TAR archive
  • Whether you are running the Discovery and Kubernetes Service on Google Cloud or another platform

On the host where you will run the Teleport Kubernetes Service and Discovery Service, start the Teleport service:

sudo systemctl start teleport

On the host where you will run the Teleport Kubernetes Service and Discovery Service, create a systemd service configuration for Teleport, enable the Teleport service, and start Teleport:

sudo teleport install systemd -o /etc/systemd/system/teleport.service
sudo systemctl enable teleport
sudo systemctl start teleport

If you installed Teleport via a package manager, the installation process created a configuration for the systemd init system to run Teleport as a daemon.

This service reads environment variables from a file at the path /etc/default/teleport. Teleport's built-in Google Cloud client reads the credentials file at the location given by the GOOGLE_APPLICATION_CREDENTIALS variable.

Ensure that /etc/default/teleport has the following content:

GOOGLE_APPLICATION_CREDENTIALS="/var/lib/teleport/google-cloud-credentials.json"
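One way to write this file is with sudo tee. Note that this overwrites any existing content, so merge the variable in by hand if the file already defines others:

echo 'GOOGLE_APPLICATION_CREDENTIALS="/var/lib/teleport/google-cloud-credentials.json"' | sudo tee /etc/default/teleport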

Start the Teleport service:

sudo systemctl enable teleport
sudo systemctl start teleport

On the host where you are running the Teleport Discovery Service and Kubernetes Service, create a systemd configuration that you can use to run Teleport in the background:

sudo teleport install systemd -o /etc/systemd/system/teleport.service
sudo systemctl enable teleport

This service reads environment variables from a file at the path /etc/default/teleport. Teleport's built-in Google Cloud client reads the credentials file at the location given by the GOOGLE_APPLICATION_CREDENTIALS variable.

Ensure that /etc/default/teleport has the following content:

GOOGLE_APPLICATION_CREDENTIALS="/var/lib/teleport/google-cloud-credentials.json"

Start the Discovery Service and Kubernetes Service:

sudo systemctl start teleport
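To confirm that the process started and joined your Teleport cluster, check the unit status and follow the logs. This assumes the systemd setup described above:

sudo systemctl status teleport
sudo journalctl -u teleport -f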

Step 3/3. Connect to your GKE cluster

Allow access to your Kubernetes cluster

Ensure that you are in the correct Kubernetes context for the cluster you would like to enable access to:

kubectl config current-context

Retrieve all available contexts:

kubectl config get-contexts

Switch to your context, replacing CONTEXT_NAME with the name of your chosen context:

kubectl config use-context CONTEXT_NAME
Switched to context CONTEXT_NAME

To authenticate to a Kubernetes cluster via Teleport, your Teleport user's roles must allow access as at least one Kubernetes user or group.

  1. Retrieve a list of your current user's Teleport roles. The example below requires the jq utility for parsing JSON:

    CURRENT_ROLES=$(tsh status -f json | jq -r '.active.roles | join("\n")')
  2. Retrieve the Kubernetes groups your roles allow you to access:

    echo "$CURRENT_ROLES" | xargs -I{} tctl get roles/{} --format json | \ jq '.[0].spec.allow.kubernetes_groups[]?'
  3. Retrieve the Kubernetes users your roles allow you to access:

    echo "$CURRENT_ROLES" | xargs -I{} tctl get roles/{} --format json | \ jq '.[0].spec.allow.kubernetes_users[]?'
  4. If the output of one of the previous two commands is non-empty, your user can access at least one Kubernetes user or group, so you can proceed to the next step.

  5. If both lists are empty, create a Teleport role for the purpose of this guide that can view Kubernetes resources in your cluster.

    Create a file called kube-access.yaml with the following content:

    kind: role
    metadata:
      name: kube-access
    version: v7
    spec:
      allow:
        kubernetes_labels:
          '*': '*'
        kubernetes_resources:
          - kind: '*'
            namespace: '*'
            name: '*'
            verbs: ['*']
        kubernetes_groups:
        - viewers
      deny: {}
    
  6. Apply your changes:

    tctl create -f kube-access.yaml
  7. Assign the kube-access role to your Teleport user by running the appropriate commands for your authentication provider:

    1. Retrieve your local user's roles as a comma-separated list:

      ROLES=$(tsh status -f json | jq -r '.active.roles | join(",")')
    2. Edit your local user to add the new role:

      tctl users update $(tsh status -f json | jq -r '.active.username') \
        --set-roles "${ROLES?},kube-access"
    3. Sign out of the Teleport cluster and sign in again to assume the new role.

    1. Retrieve your GitHub authentication connector:

      tctl get github/github --with-secrets > github.yaml

      Note that the --with-secrets flag adds the value of spec.signing_key_pair.private_key to the github.yaml file. Because this key contains a sensitive value, you should remove the github.yaml file immediately after updating the resource.

    2. Edit github.yaml, adding kube-access to the teams_to_roles section.

      The team you should map to this role depends on how you have designed your organization's role-based access controls (RBAC). However, the team must include your user account and should be the smallest team possible within your organization.

      Here is an example:

        teams_to_roles:
          - organization: octocats
            team: admins
            roles:
              - access
      +       - kube-access
      
    3. Apply your changes:

      tctl create -f github.yaml
    4. Sign out of the Teleport cluster and sign in again to assume the new role.

    1. Retrieve your SAML configuration resource:

      tctl get --with-secrets saml/mysaml > saml.yaml

      Note that the --with-secrets flag adds the value of spec.signing_key_pair.private_key to the saml.yaml file. Because this key contains a sensitive value, you should remove the saml.yaml file immediately after updating the resource.

    2. Edit saml.yaml, adding kube-access to the attributes_to_roles section.

      The attribute you should map to this role depends on how you have designed your organization's role-based access controls (RBAC). However, the group must include your user account and should be the smallest group possible within your organization.

      Here is an example:

        attributes_to_roles:
          - name: "groups"
            value: "my-group"
            roles:
              - access
      +       - kube-access
      
    3. Apply your changes:

      tctl create -f saml.yaml
    4. Sign out of the Teleport cluster and sign in again to assume the new role.

    1. Retrieve your OIDC configuration resource:

      tctl get oidc/myoidc --with-secrets > oidc.yaml

      Note that the --with-secrets flag adds the value of spec.signing_key_pair.private_key to the oidc.yaml file. Because this key contains a sensitive value, you should remove the oidc.yaml file immediately after updating the resource.

    2. Edit oidc.yaml, adding kube-access to the claims_to_roles section.

      The claim you should map to this role depends on how you have designed your organization's role-based access controls (RBAC). However, the group must include your user account and should be the smallest group possible within your organization.

      Here is an example:

        claims_to_roles:
          - name: "groups"
            value: "my-group"
            roles:
              - access
      +       - kube-access
      
    3. Apply your changes:

      tctl create -f oidc.yaml
    4. Sign out of the Teleport cluster and sign in again to assume the new role.

  8. Configure the viewers group in your Kubernetes cluster to have the built-in view ClusterRole. When your Teleport user assumes the kube-access role and sends requests to the Kubernetes API server, the Teleport Kubernetes Service impersonates the viewers group and proxies the requests.

    Create a file called viewers-bind.yaml with the following contents, binding the built-in view ClusterRole with the viewers group you enabled your Teleport user to access:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: viewers-crb
    subjects:
    - kind: Group
      # Bind the group "viewers", corresponding to the kubernetes_groups we assigned our "kube-access" role above
      name: viewers
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      # "view" is a default ClusterRole that grants read-only access to resources
      # See: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles
      name: view
      apiGroup: rbac.authorization.k8s.io
    
  9. Apply the ClusterRoleBinding with kubectl:

    kubectl apply -f viewers-bind.yaml
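To verify the binding, you can ask the Kubernetes API server whether a member of the viewers group may read pods. The username below is a placeholder, and running impersonation checks requires sufficient permissions in your current Kubernetes context:

kubectl auth can-i get pods --as="placeholder-user" --as-group="viewers"

yes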

Access your cluster

When you ran the Discovery Service, it discovered your GKE cluster and registered the cluster with Teleport. You can confirm this by running the following tctl command:

tctl get kube_clusters
kind: kube_cluster
metadata:
  description: GKE cluster "mycluster-gke" in us-east1
  id: 0000000000000000000
  labels:
    location: us-east1
    project-id: myproject
    teleport.dev/cloud: GCP
    teleport.dev/origin: cloud
  name: mycluster-gke
spec:
  aws: {}
  azure: {}
version: v3

Run the following command to list the Kubernetes clusters that your Teleport user has access to. The list should now include your GKE cluster:

tsh kube ls
Kube Cluster Name Labels                                                                                   Selected
----------------- -------------------------------------------------------------------------------------- --------
mycluster-gke     location=us-east1 project-id=myproject teleport.dev/cloud=GCP teleport.dev/origin=cloud

Log in to your cluster, replacing mycluster-gke with the name of a cluster you listed previously:

tsh kube login mycluster-gke
Logged into kubernetes cluster "mycluster-gke". Try 'kubectl version' to test the connection.

As you can see, Teleport GKE Auto-Discovery enabled you to access a GKE cluster in your Google Cloud account without requiring you to register that cluster manually within Teleport. When you create or remove clusters in GKE, Teleport will update its state to reflect the available clusters in your account.
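Because the kube-access role maps your user to the viewers group, which holds only the view ClusterRole, read operations should succeed while writes are denied. For example:

kubectl get pods --all-namespaces
kubectl auth can-i create deployments

no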

Troubleshooting

Discovery Service

To check whether the Discovery Service is working correctly, run the tctl get kube_clusters command and verify that the expected clusters have been imported into Teleport.

If some clusters do not appear in the list, check whether the filtering labels match the tags of the missing clusters, or inspect the service logs for permission errors.

Kubernetes Service

If the tctl get kube_clusters command returns the discovered clusters but the tsh kube ls command does not include them, check that you have set the kubernetes_service.resources section correctly:

kubernetes_service:
  enabled: "yes"
  resources:
  - labels:
      "env": "prod"

If the section is correctly configured but clusters still do not appear or return authentication errors, check that permissions are correctly configured in your target cluster and that your Teleport user has permissions to list Kubernetes clusters in Teleport.
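In either case, the service logs are usually the fastest diagnostic. Assuming the systemd setup from this guide, you can filter them for discovery and permission errors:

sudo journalctl -u teleport --since "1 hour ago" | grep -iE "gke|discovery|permission"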