
Teleport GKE Auto-Discovery
The Teleport Discovery Service can automatically register your Google Kubernetes Engine (GKE) clusters with Teleport. With Teleport Kubernetes Discovery, you can configure the Teleport Kubernetes Service and Discovery Service once, then create GKE clusters without needing to register them with Teleport after each creation.
In this guide, we will show you how to get started with Teleport Kubernetes Discovery for GKE.
Overview
Teleport Kubernetes Auto-Discovery involves two components.
The first, the Discovery Service, is responsible for watching your cloud provider and checking if there are any new clusters or if there have been any modifications to previously discovered clusters. The second, the Kubernetes Service, monitors the clusters created by the Discovery Service. It proxies communications between users and the API servers of these clusters.
This guide presents the Discovery Service and Kubernetes Service running in the same process; however, both can run independently and on different machines.
For example, you can run an instance of the Kubernetes Service in each Kubernetes cluster you want to register with Teleport, and an instance of the Discovery Service in any network you wish.
Prerequisites
The Teleport prerequisites depend on your edition:

- Teleport Community Edition: a running Teleport cluster (for details on how to set this up, see the Getting Started guide) and the tctl admin tool and tsh client tool version >= 14.2.0. See Installation for details.
- Teleport Team: a Teleport Team account (if you don't have an account, sign up to begin your free trial) and the Enterprise tctl admin tool and tsh client tool, version >= 14.1.3. You can download these tools from the Cloud Downloads page.
- Teleport Enterprise: a running Teleport Enterprise cluster (for details on how to set this up, see the Enterprise Getting Started guide) and the Enterprise tctl admin tool and tsh client tool version >= 14.2.0. You can download these tools by visiting your Teleport account workspace.
- Teleport Enterprise Cloud: a Teleport Enterprise Cloud account (if you don't have an account, sign up to begin a free trial of Teleport Team and upgrade to Teleport Enterprise Cloud) and the Enterprise tctl admin tool and tsh client tool version >= 14.1.3. You can download these tools from the Cloud Downloads page.

To check version information, run the tctl version and tsh version commands. For example:

```code
$ tctl version
Teleport v14.2.0 git:api/14.0.0-gd1e081e go1.21

$ tsh version
Teleport v14.2.0 go1.21
Proxy version: 14.2.0
Proxy: teleport.example.com
```

You will also need:

- A Google Cloud account with permissions to create GKE clusters, IAM roles, and service accounts.
- The gcloud CLI tool. Follow the Google Cloud documentation page to install and authenticate to gcloud.
- One or more GKE clusters running. Your Kubernetes user must have permissions to create ClusterRole and ClusterRoleBinding resources in your clusters.
- A Linux host where you will run the Teleport Discovery and Kubernetes services. You can run this host on any cloud provider or even use a local machine.

To check that you can connect to your Teleport cluster, sign in with tsh login, then verify that you can run tctl commands using your current credentials. tctl is supported on macOS and Linux machines. For example:

```code
$ tsh login --proxy=teleport.example.com [email protected]
$ tctl status
Cluster  teleport.example.com
Version  14.2.0
CA pin   sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678
```

If you can connect to the cluster and run the tctl status command, you can use your current credentials to run subsequent tctl commands from your workstation. If you host your own Teleport cluster, you can also run tctl commands on the computer that hosts the Teleport Auth Service for full permissions.
Step 1/3. Obtain Google Cloud credentials
The Teleport Discovery Service and Kubernetes Service use a Google Cloud service account to discover GKE clusters and manage access from Teleport users. In this step, you will create a service account and download a credentials file for the Teleport Discovery Service.
Create an IAM role for the Discovery Service
The Teleport Discovery Service needs permissions to retrieve GKE clusters associated with your Google Cloud project.
To grant these permissions, create a file called GKEKubernetesAutoDisc.yaml with the following content:

```yaml
title: GKE Cluster Discoverer
description: "Get and list GKE clusters"
stage: GA
includedPermissions:
- container.clusters.get
- container.clusters.list
```
Create the role, assigning the --project flag to the name of your Google Cloud project:

```code
$ gcloud iam roles create GKEKubernetesAutoDisc \
  --project=google-cloud-project \
  --file=GKEKubernetesAutoDisc.yaml
```
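To confirm the role was created with the permissions you expect, you can describe it (a quick sanity check, using the same project placeholder as above):

```code
$ gcloud iam roles describe GKEKubernetesAutoDisc --project=google-cloud-project
```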
Create an IAM role for the Kubernetes Service
The Teleport Kubernetes Service needs Google Cloud IAM permissions in order to forward user traffic to your GKE clusters.
Create a file called GKEAccessManager.yaml with the following content:

```yaml
title: GKE Cluster Access Manager
description: "Manage access to GKE clusters"
stage: GA
includedPermissions:
- container.clusters.get
- container.clusters.impersonate
- container.pods.get
- container.selfSubjectAccessReviews.create
- container.selfSubjectRulesReviews.create
```
Create the role, assigning the --project flag to the name of your Google Cloud project. If you receive a prompt indicating that certain permissions are in TESTING, enter y:

```code
$ gcloud iam roles create GKEAccessManager \
  --project=google-cloud-project \
  --file=GKEAccessManager.yaml
```
Create a service account
Now that you have declared roles for the Discovery Service and Kubernetes Service, create a service account so you can assign these roles.
Run the following command to create a service account called teleport-discovery-kubernetes:

```code
$ gcloud iam service-accounts create teleport-discovery-kubernetes \
  --description="Teleport Discovery Service and Kubernetes Service" \
  --display-name="teleport-discovery-kubernetes"
```
Grant the roles you defined earlier to your service account, assigning PROJECT_ID to the name of your Google Cloud project:

```code
$ PROJECT_ID=google-cloud-project
$ gcloud projects add-iam-policy-binding ${PROJECT_ID?} \
  --member="serviceAccount:teleport-discovery-kubernetes@${PROJECT_ID?}.iam.gserviceaccount.com" \
  --role="projects/${PROJECT_ID?}/roles/GKEKubernetesAutoDisc"
$ gcloud projects add-iam-policy-binding ${PROJECT_ID?} \
  --member="serviceAccount:teleport-discovery-kubernetes@${PROJECT_ID?}.iam.gserviceaccount.com" \
  --role="projects/${PROJECT_ID?}/roles/GKEAccessManager"
```
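To double-check that both roles are bound to the service account, you can filter the project's IAM policy. This is a sketch using standard gcloud flags and the service account created above:

```code
$ gcloud projects get-iam-policy ${PROJECT_ID?} \
  --flatten="bindings[].members" \
  --filter="bindings.members:teleport-discovery-kubernetes@${PROJECT_ID?}.iam.gserviceaccount.com" \
  --format="table(bindings.role)"
```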
If you plan to run the Discovery Service and Kubernetes Service on separate hosts, create a service account for each service instead:

```code
$ gcloud iam service-accounts create teleport-discovery-service \
  --description="Teleport Discovery Service" \
  --display-name="teleport-discovery-service"
$ gcloud iam service-accounts create teleport-kubernetes-service \
  --description="Teleport Kubernetes Service" \
  --display-name="teleport-kubernetes-service"
```
Grant the roles you defined earlier to these service accounts, assigning PROJECT_ID to the name of your Google Cloud project:

```code
$ PROJECT_ID=google-cloud-project
$ gcloud projects add-iam-policy-binding ${PROJECT_ID?} \
  --member="serviceAccount:teleport-discovery-service@${PROJECT_ID?}.iam.gserviceaccount.com" \
  --role="projects/${PROJECT_ID?}/roles/GKEKubernetesAutoDisc"
$ gcloud projects add-iam-policy-binding ${PROJECT_ID?} \
  --member="serviceAccount:teleport-kubernetes-service@${PROJECT_ID?}.iam.gserviceaccount.com" \
  --role="projects/${PROJECT_ID?}/roles/GKEAccessManager"
```
Retrieve credentials for your Teleport services
Now that you have created a Google Cloud service account and attached roles to it, associate your service account with the Teleport Kubernetes Service and Discovery Service.
The process is different depending on whether you are deploying the Teleport Kubernetes Service and Discovery Service on Google Cloud or some other way (e.g., via Amazon EC2 or on a local network).
If you are deploying on Google Cloud, stop your VM so you can attach your service account to it, assigning the name of your VM to vm-name and the name of your VM's Google Cloud zone to google-cloud-zone:

```code
$ gcloud compute instances stop vm-name --zone=google-cloud-zone
```

Attach your service account to the instance:

```code
$ gcloud compute instances set-service-account vm-name \
  --service-account teleport-discovery-kubernetes@${PROJECT_ID?}.iam.gserviceaccount.com \
  --zone google-cloud-zone \
  --scopes=cloud-platform
```
If you are running the services on separate VMs, stop each VM you plan to use to run the Teleport Kubernetes Service and Discovery Service.

Attach the teleport-kubernetes-service service account to the VM running the Kubernetes Service:

```code
$ gcloud compute instances set-service-account ${VM1_NAME?} \
  --service-account teleport-kubernetes-service@${PROJECT_ID?}.iam.gserviceaccount.com \
  --zone google-cloud-zone \
  --scopes=cloud-platform
```

Attach the teleport-discovery-service service account to the VM running the Discovery Service:

```code
$ gcloud compute instances set-service-account ${VM2_NAME?} \
  --service-account teleport-discovery-service@${PROJECT_ID?}.iam.gserviceaccount.com \
  --zone google-cloud-zone \
  --scopes=cloud-platform
```
You must use the --scopes flag in the gcloud compute instances set-service-account command. Otherwise, your Google Cloud VM will fail to obtain the required authorization to access the GKE API.
Once you have attached the service account, restart your VM:

```code
$ gcloud compute instances start vm-name --zone google-cloud-zone
```
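To verify that the VM is now running as the intended service account, you can query the instance metadata server from inside the VM. This uses Google's standard metadata endpoint:

```code
$ curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email"
```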
If you are deploying the services outside Google Cloud, download a credentials file for the service account used by the Discovery Service and Kubernetes Service:

```code
$ PROJECT_ID=google-cloud-project
$ gcloud iam service-accounts keys create google-cloud-credentials.json \
  --iam-account=teleport-discovery-kubernetes@${PROJECT_ID?}.iam.gserviceaccount.com
```

Move your credentials file to the host running the Teleport Discovery Service and Kubernetes Service at the path /var/lib/teleport/google-cloud-credentials.json. We will use this credentials file when running this service later in this guide.
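Because this file grants access to your Google Cloud project, consider restricting its permissions on the Teleport host. A minimal sketch, assuming Teleport runs as root (the default for package installs):

```code
$ sudo chown root:root /var/lib/teleport/google-cloud-credentials.json
$ sudo chmod 600 /var/lib/teleport/google-cloud-credentials.json
```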
If you are running the two services on separate hosts, download separate credentials files for each service:

```code
$ PROJECT_ID=google-cloud-project
$ gcloud iam service-accounts keys create discovery-service-credentials.json \
  --iam-account=teleport-discovery-service@${PROJECT_ID?}.iam.gserviceaccount.com
$ gcloud iam service-accounts keys create kube-service-credentials.json \
  --iam-account=teleport-kubernetes-service@${PROJECT_ID?}.iam.gserviceaccount.com
```

Move discovery-service-credentials.json to the host running the Teleport Discovery Service at the path /var/lib/teleport/google-cloud-credentials.json.

Move kube-service-credentials.json to the host running the Teleport Kubernetes Service at the path /var/lib/teleport/google-cloud-credentials.json.

We will use these credentials files when running these services later in this guide.
Step 2/3. Configure Teleport to discover GKE clusters
Now that you have created a service account that can discover GKE clusters and a cluster role that can manage access, configure the Teleport Discovery Service to detect GKE clusters and the Kubernetes Service to proxy user traffic.
Install Teleport
Install Teleport on the host you are using to run the Kubernetes Service and Discovery Service:
Select an edition, then follow the instructions for that edition to install Teleport.
Teleport Edition
- Teleport Community Edition
- Teleport Team
- Teleport Enterprise
- Teleport Enterprise Cloud
Teleport Community Edition:

```code
$ curl https://goteleport.com/static/install.sh | bash -s 14.2.0
```
Teleport Team:

Add the Teleport repository to your repository list.

APT:

```code
# Download Teleport's PGP public key
$ sudo curl https://apt.releases.teleport.dev/gpg \
  -o /usr/share/keyrings/teleport-archive-keyring.asc
# Source variables about OS version
$ source /etc/os-release
# Add the Teleport APT repository for cloud.
$ echo "deb [signed-by=/usr/share/keyrings/teleport-archive-keyring.asc] \
  https://apt.releases.teleport.dev/${ID?} ${VERSION_CODENAME?} stable/cloud" \
  | sudo tee /etc/apt/sources.list.d/teleport.list > /dev/null
$ sudo apt-get update
$ sudo apt-get install teleport-ent-updater
```

YUM:

```code
# Source variables about OS version
$ source /etc/os-release
# Add the Teleport YUM repository for cloud.
# First, get the OS major version from $VERSION_ID so this fetches the correct
# package version.
$ VERSION_ID=$(echo $VERSION_ID | grep -Eo "^[0-9]+")
$ sudo yum install -y yum-utils
$ sudo yum-config-manager --add-repo "$(rpm --eval "https://yum.releases.teleport.dev/$ID/$VERSION_ID/Teleport/%{_arch}/stable/cloud/teleport-yum.repo")"
$ sudo yum install teleport-ent-updater
# Tip: Add /usr/local/bin to path used by sudo (so 'sudo tctl users add' will work as per the docs)
$ echo "Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin" > /etc/sudoers.d/secure_path
```

DNF:

```code
# Source variables about OS version
$ source /etc/os-release
# Add the Teleport YUM repository for cloud.
# First, get the OS major version from $VERSION_ID so this fetches the correct
# package version.
$ VERSION_ID=$(echo $VERSION_ID | grep -Eo "^[0-9]+")
# Use the dnf config manager plugin to add the teleport RPM repo
$ sudo dnf config-manager --add-repo "$(rpm --eval "https://yum.releases.teleport.dev/$ID/$VERSION_ID/Teleport/%{_arch}/stable/cloud/teleport-yum.repo")"
# Install teleport
$ sudo dnf install teleport-ent-updater
# Tip: Add /usr/local/bin to path used by sudo (so 'sudo tctl users add' will work as per the docs)
$ echo "Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin" > /etc/sudoers.d/secure_path
```

Zypper:

```code
# Source variables about OS version
$ source /etc/os-release
# Add the Teleport Zypper repository for cloud.
# First, get the OS major version from $VERSION_ID so this fetches the correct
# package version.
$ VERSION_ID=$(echo $VERSION_ID | grep -Eo "^[0-9]+")
# Use Zypper to add the teleport RPM repo
$ sudo zypper addrepo --refresh --repo $(rpm --eval "https://zypper.releases.teleport.dev/$ID/$VERSION_ID/Teleport/%{_arch}/stable/cloud/teleport-zypper.repo")
# Install teleport
$ sudo zypper install teleport-ent-updater
```
OS repository channels

The following channels are available for APT, YUM, and Zypper repos. They may be used in place of stable/v14 anywhere in the Teleport documentation.

| Channel name | Description |
|---|---|
| stable/<major> | Receives releases for the specified major release line, i.e. v14 |
| stable/cloud | Rolling channel that receives releases compatible with current Cloud version |
| stable/rolling | Rolling channel that receives all published Teleport releases |
Before installing a teleport binary with a version besides v14, read our compatibility rules to ensure that the binary is compatible with Teleport Cloud.

When running multiple teleport binaries within a cluster, the following rules apply:

- Patch and minor versions are always compatible. For example, any 8.0.1 component will work with any 8.0.3 component, and any 8.1.0 component will work with any 8.3.0 component.
- Servers support clients that are 1 major version behind, but do not support clients that are on a newer major version. For example, an 8.x.x Proxy Service is compatible with 7.x.x resource services and 7.x.x tsh, but we don't guarantee that a 9.x.x resource service will work with an 8.x.x Proxy Service. This also means you must not attempt to upgrade from 6.x.x straight to 8.x.x. You must upgrade to 7.x.x first.
- Proxy Services and resource services do not support Auth Services that are on an older major version, and will fail to connect to older Auth Services by default. This behavior can be overridden by passing --skip-version-check when starting Proxy Services and resource services.
Teleport Enterprise:

Add the Teleport repository to your repository list.

APT:

```code
# Download Teleport's PGP public key
$ sudo curl https://apt.releases.teleport.dev/gpg \
  -o /usr/share/keyrings/teleport-archive-keyring.asc
# Source variables about OS version
$ source /etc/os-release
# Add the Teleport APT repository for v14. You'll need to update this
# file for each major release of Teleport.
$ echo "deb [signed-by=/usr/share/keyrings/teleport-archive-keyring.asc] \
  https://apt.releases.teleport.dev/${ID?} ${VERSION_CODENAME?} stable/v14" \
  | sudo tee /etc/apt/sources.list.d/teleport.list > /dev/null
$ sudo apt-get update
$ sudo apt-get install teleport-ent
```

For FedRAMP/FIPS-compliant installations, install the teleport-ent-fips package instead:

```code
$ sudo apt-get install teleport-ent-fips
```

YUM:

```code
# Source variables about OS version
$ source /etc/os-release
# Add the Teleport YUM repository for v14. You'll need to update this
# file for each major release of Teleport.
# First, get the major version from $VERSION_ID so this fetches the correct
# package version.
$ VERSION_ID=$(echo $VERSION_ID | grep -Eo "^[0-9]+")
$ sudo yum install -y yum-utils
$ sudo yum-config-manager --add-repo "$(rpm --eval "https://yum.releases.teleport.dev/$ID/$VERSION_ID/Teleport/%{_arch}/stable/v14/teleport.repo")"
$ sudo yum install teleport-ent
# Tip: Add /usr/local/bin to path used by sudo (so 'sudo tctl users add' will work as per the docs)
$ echo "Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin" > /etc/sudoers.d/secure_path
```

For FedRAMP/FIPS-compliant installations, install the teleport-ent-fips package instead:

```code
$ sudo yum install teleport-ent-fips
```

DNF:

```code
# Source variables about OS version
$ source /etc/os-release
# Add the Teleport YUM repository for v14. You'll need to update this
# file for each major release of Teleport.
# First, get the major version from $VERSION_ID so this fetches the correct
# package version.
$ VERSION_ID=$(echo $VERSION_ID | grep -Eo "^[0-9]+")
# Use the dnf config manager plugin to add the teleport RPM repo
$ sudo dnf config-manager --add-repo "$(rpm --eval "https://yum.releases.teleport.dev/$ID/$VERSION_ID/Teleport/%{_arch}/stable/v14/teleport.repo")"
# Install teleport
$ sudo dnf install teleport-ent
# Tip: Add /usr/local/bin to path used by sudo (so 'sudo tctl users add' will work as per the docs)
$ echo "Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin" > /etc/sudoers.d/secure_path
```

For FedRAMP/FIPS-compliant installations, install the teleport-ent-fips package instead:

```code
$ sudo dnf install teleport-ent-fips
```

Zypper:

```code
# Source variables about OS version
$ source /etc/os-release
# Add the Teleport Zypper repository for v14. You'll need to update this
# file for each major release of Teleport.
# First, get the OS major version from $VERSION_ID so this fetches the correct
# package version.
$ VERSION_ID=$(echo $VERSION_ID | grep -Eo "^[0-9]+")
# Use Zypper to add the teleport RPM repo
$ sudo zypper addrepo --refresh --repo $(rpm --eval "https://zypper.releases.teleport.dev/$ID/$VERSION_ID/Teleport/%{_arch}/stable/v14/teleport-zypper.repo")
# Install teleport
$ sudo zypper install teleport-ent
```

For FedRAMP/FIPS-compliant installations, install the teleport-ent-fips package instead:

```code
$ sudo zypper install teleport-ent-fips
```
If you would rather install Teleport Enterprise from a TAR archive, update $SYSTEM_ARCH in the example commands below with the appropriate value (amd64, arm64, or arm):

```code
$ curl https://get.gravitational.com/teleport-ent-v14.2.0-linux-$SYSTEM_ARCH-bin.tar.gz.sha256
# <checksum> <filename>
$ curl -O https://cdn.teleport.dev/teleport-ent-v14.2.0-linux-$SYSTEM_ARCH-bin.tar.gz
$ shasum -a 256 teleport-ent-v14.2.0-linux-$SYSTEM_ARCH-bin.tar.gz
# Verify that the checksums match
$ tar -xvf teleport-ent-v14.2.0-linux-$SYSTEM_ARCH-bin.tar.gz
$ cd teleport-ent
$ sudo ./install
```

For FedRAMP/FIPS-compliant installations of Teleport Enterprise, package URLs will be slightly different:

```code
$ curl https://get.gravitational.com/teleport-ent-v14.2.0-linux-$SYSTEM_ARCH-fips-bin.tar.gz.sha256
# <checksum> <filename>
$ curl -O https://cdn.teleport.dev/teleport-ent-v14.2.0-linux-$SYSTEM_ARCH-fips-bin.tar.gz
$ shasum -a 256 teleport-ent-v14.2.0-linux-$SYSTEM_ARCH-fips-bin.tar.gz
# Verify that the checksums match
$ tar -xvf teleport-ent-v14.2.0-linux-$SYSTEM_ARCH-fips-bin.tar.gz
$ cd teleport-ent
$ sudo ./install
```
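Whichever installation method you used, you can confirm the binary is on your PATH and at the expected version (output format matches the examples earlier in this guide):

```code
$ teleport version
Teleport Enterprise v14.2.0 git:api/14.0.0-gd1e081e go1.21
```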
Teleport Enterprise Cloud: the installation steps are the same as for Teleport Team above. Add the stable/cloud repository for your package manager and install the teleport-ent-updater package; the same OS repository channels and compatibility rules apply.
Create a join token
The Teleport Discovery Service and Kubernetes Service require an authentication token in order to join the cluster. Generate one by running the following tctl command:

```code
$ tctl tokens add --type=discovery,kube --format=text
abcd123-insecure-do-not-use-this
```
Copy the token (e.g., abcd123-insecure-do-not-use-this above) and save it in /tmp/token on the machine that will run the Discovery Service and Kubernetes Service, for example:

```code
$ echo abcd123-insecure-do-not-use-this | sudo tee /tmp/token
abcd123-insecure-do-not-use-this
```
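You can confirm that the token was created, and see its expiry, by listing outstanding join tokens:

```code
$ tctl tokens ls
```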
If you are running the services on separate hosts, generate separate tokens for the Kubernetes Service and Discovery Service by running the following tctl commands:

```code
$ tctl tokens add --type=discovery --format=text
efgh456-insecure-do-not-use-this

$ tctl tokens add --type=kube --format=text
ijkl789-insecure-do-not-use-this
```

Copy each token (e.g., efgh456-insecure-do-not-use-this and ijkl789-insecure-do-not-use-this above) and save it in /tmp/token on the machine that will run the appropriate service.
Configure the Kubernetes Service and Discovery Service
On the host running the Kubernetes Service and Discovery Service, create a Teleport configuration file at /etc/teleport.yaml with the content shown below.

The Discovery Service exposes a configuration parameter, discovery_service.discovery_group, that allows you to group discovered resources into different sets. This parameter is used to prevent Discovery Agents watching different sets of cloud resources from colliding with each other and deleting resources created by another service.

When running multiple Discovery Services, you must ensure that each service is configured with the same discovery_group value if they are watching the same cloud resources, or a different value if they are watching different cloud resources.

It is possible to run a mix of configurations in the same Teleport cluster, meaning that some Discovery Services can be configured to watch the same cloud resources while others watch different resources. As an example, a four-agent high-availability configuration analyzing data from two different cloud accounts would run with the following configuration:

- 2 Discovery Services configured with discovery_group: "prod", polling data from the Production account.
- 2 Discovery Services configured with discovery_group: "staging", polling data from the Staging account.
```yaml
version: v3
teleport:
  join_params:
    token_name: "/tmp/token"
    method: token
  proxy_server: "teleport.example.com:443"
auth_service:
  enabled: off
proxy_service:
  enabled: off
ssh_service:
  enabled: off
discovery_service:
  enabled: "yes"
  discovery_group: "gke-myproject"
  gcp:
    - types: ["gke"]
      locations: ["*"]
      project_ids: ["myproject"] # replace with your project ID
      tags:
        "*": "*"
kubernetes_service:
  enabled: "yes"
  resources:
    - labels:
        "*": "*"
```
If you are running the two services on separate hosts, follow the instructions in this section with two configuration files. The configuration file you will save at /etc/teleport.yaml on the Kubernetes Service host will include the following:
```yaml
version: v3
teleport:
  join_params:
    token_name: "/tmp/token"
    method: token
  proxy_server: teleport.example.com:443
auth_service:
  enabled: off
proxy_service:
  enabled: off
ssh_service:
  enabled: off
kubernetes_service:
  enabled: "yes"
  resources:
    - labels:
        "*": "*"
```
On the Discovery Service host, the file will include the following:
```yaml
version: v3
teleport:
  join_params:
    token_name: "/tmp/token"
    method: token
  proxy_server: teleport.example.com:443
auth_service:
  enabled: off
proxy_service:
  enabled: off
ssh_service:
  enabled: off
discovery_service:
  enabled: "yes"
  gcp:
    - types: ["gke"]
      locations: ["*"]
      project_ids: ["myproject"] # replace with your project ID
      tags:
        "*": "*"
```
Edit this configuration for your environment as explained below.
proxy_server

Replace teleport.example.com:443 with the host and port of your Teleport Proxy Service (e.g., mytenant.teleport.sh:443 for a Teleport Cloud tenant).
discovery_service.gcp
Each item in discovery_service.gcp
is a matcher for Kubernetes clusters
running on GKE. The Discovery Service periodically executes a request to the
Google Cloud API based on each matcher to list GKE clusters. In this case, we
have declared a single matcher.
Each matcher searches for clusters that match all properties of the matcher, i.e., that belong to the specified locations and projects and have the specified tags. The Discovery Service registers GKE clusters that match any configured matcher.
This means that if you declare the following two matchers, the Discovery Service
will register clusters in project myproj-dev
running in us-east1
, as well as
clusters in project myproj-prod
running in us-east2
, but not clusters in
myproj-dev
running in us-east2
:
```yaml
discovery_service:
  enabled: "yes"
  gcp:
    - types: ["gke"]
      locations: ["us-east1"]
      project_ids: ["myproj-dev"]
      tags:
        "*": "*"
    - types: ["gke"]
      locations: ["us-east2"]
      project_ids: ["myproj-prod"]
      tags:
        "*": "*"
```
discovery_service.gcp[0].types
Each matcher's types
field must be set to an array with a single string
value, gke
.
discovery_service.gcp[0].project_ids
In your matcher, replace myproject
with the ID of your Google Cloud project.
The project_ids
field must include at least one value, and it must not be the
wildcard character (*
).
discovery_service.gcp[0].locations
Each matcher's locations
field contains an array of Google Cloud region or
zone names that the matcher will search for GKE clusters. The wildcard
character, *
, configures the matcher to search all locations.
discovery_service.gcp[0].tags
Like locations
, tags
consists of a map where each key is a string that
represents the key of a tag, and each value is either a single string or an
array of strings, representing one tag value or a list of tag values.
A wildcard key or value matches any tag key or value in your Google Cloud account. If you include another value, the matcher will match all GKE clusters with the provided tag.
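To preview which clusters a matcher would discover, you can list your GKE clusters with their locations and resource labels (GKE "tags" here correspond to cluster resource labels). A sketch using the myproject placeholder:

```code
$ gcloud container clusters list \
  --project=myproject \
  --format="table(name,location,resourceLabels)"
```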
Start the Kubernetes Service and Discovery Service
On the host where you will run the Kubernetes Service, execute the following command, depending on:
- Whether you installed Teleport using a package manager or via a TAR archive
- Whether you are running the Discovery and Kubernetes Service on Google Cloud or another platform
How your host is running:
- Google Cloud
- Other Platform
Google Cloud:

If you installed Teleport with a package manager, start the Teleport service on the host where you will run the Teleport Kubernetes Service and Discovery Service:

```code
$ sudo systemctl start teleport
```

If you installed Teleport from a TAR archive, create a systemd service configuration for Teleport, enable the Teleport service, and start Teleport:

```code
$ sudo teleport install systemd -o /etc/systemd/system/teleport.service
$ sudo systemctl enable teleport
$ sudo systemctl start teleport
```
Other Platform:

When you installed Teleport via package manager, the installation process created a configuration for the init system systemd to run Teleport as a daemon. This service reads environment variables from a file at the path /etc/default/teleport. Teleport's built-in Google Cloud client reads the credentials file at the location given by the GOOGLE_APPLICATION_CREDENTIALS variable.

Ensure that /etc/default/teleport has the following content:

```code
GOOGLE_APPLICATION_CREDENTIALS="/var/lib/teleport/google-cloud-credentials.json"
```

Start the Teleport service:

```code
$ sudo systemctl enable teleport
$ sudo systemctl start teleport
```
If you installed Teleport from a TAR archive, create a systemd configuration that you can use to run Teleport in the background:

```code
$ sudo teleport install systemd -o /etc/systemd/system/teleport.service
$ sudo systemctl enable teleport
```

This service reads environment variables from a file at the path /etc/default/teleport. Teleport's built-in Google Cloud client reads the credentials file at the location given by the GOOGLE_APPLICATION_CREDENTIALS variable. Ensure that /etc/default/teleport has the following content:

```code
GOOGLE_APPLICATION_CREDENTIALS="/var/lib/teleport/google-cloud-credentials.json"
```

Start the Discovery Service and Kubernetes Service:

```code
$ sudo systemctl start teleport
```
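To confirm that the services started and joined your Teleport cluster, follow the logs and look for messages from the Discovery Service and Kubernetes Service:

```code
$ sudo journalctl -u teleport -f
```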
Step 3/3. Connect to your GKE cluster
Allow access to your Kubernetes cluster
Ensure that you are in the correct Kubernetes context for the cluster you would like to enable access to:

```code
$ kubectl config current-context
```

Retrieve all available contexts:

```code
$ kubectl config get-contexts
```

Switch to your context, replacing CONTEXT_NAME with the name of your chosen context:

```code
$ kubectl config use-context CONTEXT_NAME
Switched to context CONTEXT_NAME
```
Kubernetes authentication
To authenticate to a Kubernetes cluster via Teleport, your Teleport roles must allow access as at least one Kubernetes user or group. Ensure that you have a Teleport role that grants access to the cluster you plan to interact with.
Run the following command to get the Kubernetes user for your current context:

```code
$ kubectl config view \
  -o jsonpath="{.contexts[?(@.name==\"$(kubectl config current-context)\")].context.user}"
```
Create a file called kube-access.yaml with the following content, replacing USER with the output of the command above:

```yaml
kind: role
metadata:
  name: kube-access
version: v7
spec:
  allow:
    kubernetes_labels:
      '*': '*'
    kubernetes_resources:
      - kind: '*'
        namespace: '*'
        name: '*'
        verbs: ['*']
    kubernetes_groups:
    - viewers
    kubernetes_users:
    # Replace USER with the Kubernetes user for your current context.
    - USER
  deny: {}
```
Apply your changes:

```code
$ tctl create -f kube-access.yaml
```
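You can confirm that the role now exists in your cluster:

```code
$ tctl get roles/kube-access
```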
Assign the kube-access role to your Teleport user by running the appropriate commands for your authentication provider.

For local users:

- Retrieve your local user's configuration resource:

  ```code
  $ tctl get users/$(tsh status -f json | jq -r '.active.username') > out.yaml
  ```

- Edit out.yaml, adding kube-access to the list of existing roles:

  ```diff
    roles:
     - access
     - auditor
     - editor
  +  - kube-access
  ```

- Apply your changes:

  ```code
  $ tctl create -f out.yaml
  ```

- Sign out of the Teleport cluster and sign in again to assume the new role.
For GitHub authentication:

- Retrieve your github authentication connector:

  ```code
  $ tctl get github/github --with-secrets > github.yaml
  ```

  Note that the --with-secrets flag adds the value of spec.signing_key_pair.private_key to the github.yaml file. Because this key contains a sensitive value, you should remove the github.yaml file immediately after updating the resource.

- Edit github.yaml, adding kube-access to the teams_to_roles section.

  The team you should map to this role depends on how you have designed your organization's role-based access controls (RBAC). However, the team must include your user account and should be the smallest team possible within your organization.

  Here is an example:

  ```diff
    teams_to_roles:
      - organization: octocats
        team: admins
        roles:
          - access
  +       - kube-access
  ```

- Apply your changes:

  ```code
  $ tctl create -f github.yaml
  ```

- Sign out of the Teleport cluster and sign in again to assume the new role.
For SAML authentication:

- Retrieve your saml configuration resource:

  ```code
  $ tctl get --with-secrets saml/mysaml > saml.yaml
  ```

  Note that the --with-secrets flag adds the value of spec.signing_key_pair.private_key to the saml.yaml file. Because this key contains a sensitive value, you should remove the saml.yaml file immediately after updating the resource.

- Edit saml.yaml, adding kube-access to the attributes_to_roles section.

  The attribute you should map to this role depends on how you have designed your organization's role-based access controls (RBAC). However, the group must include your user account and should be the smallest group possible within your organization.

  Here is an example:

  ```diff
    attributes_to_roles:
      - name: "groups"
        value: "my-group"
        roles:
          - access
  +       - kube-access
  ```

- Apply your changes:

  ```code
  $ tctl create -f saml.yaml
  ```

- Sign out of the Teleport cluster and sign in again to assume the new role.
For OIDC authentication:

- Retrieve your oidc configuration resource:

  ```code
  $ tctl get oidc/myoidc --with-secrets > oidc.yaml
  ```

  Note that the --with-secrets flag adds the value of spec.signing_key_pair.private_key to the oidc.yaml file. Because this key contains a sensitive value, you should remove the oidc.yaml file immediately after updating the resource.

- Edit oidc.yaml, adding kube-access to the claims_to_roles section.

  The claim you should map to this role depends on how you have designed your organization's role-based access controls (RBAC). However, the group must include your user account and should be the smallest group possible within your organization.

  Here is an example:

  ```diff
    claims_to_roles:
      - name: "groups"
        value: "my-group"
        roles:
          - access
  +       - kube-access
  ```

- Apply your changes:

  ```code
  $ tctl create -f oidc.yaml
  ```

- Sign out of the Teleport cluster and sign in again to assume the new role.
Now that Teleport RBAC is configured, you can authenticate to your Kubernetes cluster via Teleport. To interact with your Kubernetes cluster, you will need to configure authorization within Kubernetes.
Kubernetes authorization
To configure authorization within your Kubernetes cluster, you need to create Kubernetes RoleBindings or ClusterRoleBindings that grant permissions to the subjects listed in kubernetes_users and kubernetes_groups.

For example, you can grant some limited read-only permissions to the viewers group used in the kube-access role defined above.
Create a file called viewers-bind.yaml with the following contents:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: viewers-crb
subjects:
- kind: Group
  # Bind the group "viewers", corresponding to the kubernetes_groups we assigned our "kube-access" role above
  name: viewers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  # "view" is a default ClusterRole that grants read-only access to resources
  # See: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles
  name: view
  apiGroup: rbac.authorization.k8s.io
```
Apply the ClusterRoleBinding with kubectl:

```code
$ kubectl apply -f viewers-bind.yaml
```
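You can check what the binding grants by impersonating a member of the viewers group with kubectl's built-in impersonation (the USER value is illustrative):

```code
# Read access should be allowed...
$ kubectl auth can-i get pods --as=USER --as-group=viewers
yes

# ...but write access should not be.
$ kubectl auth can-i delete pods --as=USER --as-group=viewers
no
```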
Log out of Teleport and log in again.
Access your cluster
When you ran the Discovery Service, it discovered your GKE cluster and registered the cluster with Teleport. You can confirm this by running the following tctl command:

```code
$ tctl get kube_clusters
kind: kube_cluster
metadata:
  description: GKE cluster "mycluster-gke" in us-east1
  id: 0000000000000000000
  labels:
    location: us-east1
    project-id: myproject
    teleport.dev/cloud: GCP
    teleport.dev/origin: cloud
  name: mycluster-gke
spec:
  aws: {}
  azure: {}
version: v3
```
Run the following command to list the Kubernetes clusters that your Teleport user has access to. The list should now include your GKE cluster:

```code
$ tsh kube ls
Kube Cluster Name Labels                                                                                Selected
----------------- ------------------------------------------------------------------------------------- --------
mycluster-gke     location=us-east1 project-id=myproject teleport.dev/cloud=GCP teleport.dev/origin=cloud
```
Log in to your cluster, replacing mycluster-gke with the name of a cluster you listed previously:

```code
$ tsh kube login mycluster-gke
Logged into kubernetes cluster "mycluster-gke". Try 'kubectl version' to test the connection.
```
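At this point, kubectl traffic is routed through Teleport. As a quick smoke test, the viewers group configured earlier should permit read-only requests such as:

```code
$ kubectl get pods --all-namespaces
```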
As you can see, Teleport GKE Auto-Discovery enabled you to access a GKE cluster in your Google Cloud account without requiring you to register that cluster manually within Teleport. When you create or remove clusters in GKE, Teleport will update its state to reflect the available clusters in your account.
Troubleshooting
Discovery Service
To check if the Discovery Service is working correctly, check whether any Kubernetes clusters have been discovered. To do this, use the tctl get kube_clusters command and inspect whether the expected clusters have already been imported into Teleport:

```code
$ tctl get kube_clusters
```

If some clusters do not appear in the list, check that the filtering labels match the missing clusters' tags, or look in the service logs for permission errors.
Kubernetes Service
If the tctl get kube_clusters command returns the discovered clusters but the tsh kube ls command does not include them, check that you have set the kubernetes_service.resources section correctly:
```yaml
kubernetes_service:
  enabled: "yes"
  resources:
    - labels:
        "env": "prod"
```

If the section is correctly configured but clusters still do not appear or return authentication errors, check whether permissions have been correctly configured in your target cluster and whether you have the correct permissions to list Kubernetes clusters in Teleport.
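When debugging label mismatches, it can also help to compare the labels Teleport recorded for each discovered cluster against your matchers. A sketch using tctl's JSON output and jq:

```code
$ tctl get kube_clusters --format=json \
  | jq '.[] | {name: .metadata.name, labels: .metadata.labels}'
```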