Authenticating Google Kubernetes Engine Clusters with Okta SSO

Posted 16th Mar 2023 by Janakiram Msv

In this article on securing access to Kubernetes, we will explore how Teleport adds a layer of security to clusters running on Google Kubernetes Engine (GKE), the managed Kubernetes service from Google Cloud Platform (GCP).

Before proceeding further, ensure you have configured the Teleport authentication server to work with Okta SSO. Refer to the guide and the documentation for the setup.

This guide uses a pre-configured Okta account. The Okta configuration uses the domain cloudnativelabs.in, while the Teleport proxy and authentication server are associated with the proxy.teleport-demo.in domain.

Step 1 - Provisioning a Google Kubernetes Engine (GKE) cluster

This tutorial launches a GKE cluster with three Ubuntu nodes running a recent version of Kubernetes. You can change parameters such as the zone and the number of nodes. To follow along, download the Google Cloud SDK and configure the gcloud CLI.

touch gke-config
export KUBECONFIG=gke-config

# set GCP Project
PROJECT_NAME=<YOUR PROJECT>

gcloud container clusters create "cluster1" \
	--project "$PROJECT_NAME"  \
	--zone "asia-south1-a" \
	--node-locations "asia-south1-a" \
	--no-enable-basic-auth \
	--cluster-version "1.24.7-gke.900" \
	--release-channel "regular" \
	--machine-type "e2-standard-8" \
	--image-type "UBUNTU_CONTAINERD" \
	--disk-type "pd-balanced" \
	--disk-size "100" \
	--metadata disable-legacy-endpoints=true \
	--scopes "https://www.googleapis.com/auth/cloud-platform" \
	--max-pods-per-node "110" \
	--num-nodes "3" \
	--logging=SYSTEM,WORKLOAD \
	--monitoring=SYSTEM \
	--enable-ip-alias \
	--network "projects/$PROJECT_NAME/global/networks/default" \
	--subnetwork "projects/$PROJECT_NAME/regions/asia-south1/subnetworks/default" \
	--no-enable-intra-node-visibility \
	--default-max-pods-per-node "110" \
	--no-enable-master-authorized-networks \
	--addons HorizontalPodAutoscaling,HttpLoadBalancing,GcePersistentDiskCsiDriver \
	--enable-autoupgrade \
	--enable-autorepair \
	--max-surge-upgrade 1 \
	--max-unavailable-upgrade 0 \
	--enable-shielded-nodes

When provisioning is complete, retrieve the cluster credentials; the kubeconfig contents are written to the gke-config file in the current directory.

gcloud container clusters get-credentials cluster1 \
	--zone asia-south1-a \
	--project $PROJECT_NAME 

You can now list the nodes and verify access to the cluster.

kubectl get nodes
NAME                                      STATUS   ROLES    AGE     VERSION
gke-cluster1-default-pool-0ae13613-2xdw   Ready    <none>   2m35s   v1.24.9-gke.3200
gke-cluster1-default-pool-0ae13613-llkp   Ready    <none>   2m35s   v1.24.9-gke.3200
gke-cluster1-default-pool-0ae13613-mr0k   Ready    <none>   2m35s   v1.24.9-gke.3200

Let’s create a cluster role binding to associate the current user with the cluster-admin role.

kubectl create clusterrolebinding cluster-admin-binding \
	--clusterrole cluster-admin \
	--user $(gcloud config get-value account)

Step 2 - Registering GKE cluster with Teleport

As with other protected resources, such as servers, Teleport expects an agent to run within the target cluster. The agent can be installed through a Helm chart pointed at the Teleport proxy server endpoint.

Before proceeding further, we need a join token that validates the agent when it connects. Run the command below to create a token with the kube role. Make sure you are logged in to Teleport as a user with the editor and access roles.

tsh --proxy=proxy.teleport-demo.in login --auth=local --user=tele-admin
Enter password for Teleport user tele-admin:
Enter your OTP token:
> Profile URL:        https://proxy.teleport-demo.in:443
  Logged in as:       tele-admin
  Cluster:            proxy.teleport-demo.in
  Roles:              access, editor
  Logins:             root, janakiramm, -teleport-internal-join
  Kubernetes:         enabled
  Valid until:        2023-03-12 07:30:09 +0530 IST [valid for 12h0m0s]
  Extensions:         client-ip, permit-agent-forwarding, permit-port-forwarding, permit-pty, private-key-policy

TOKEN=$(tctl nodes add --roles=kube --ttl=10000h --format=json | jq -r '.[0]')
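
Before installing the agent, you can optionally confirm that the variable is populated and that the join token is registered with the auth server (these checks are not part of the original flow):

echo $TOKEN
tctl tokens ls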

The next step is to add Teleport’s Helm repository and update it, which gives us access to the teleport-kube-agent chart.

helm repo add teleport https://charts.releases.teleport.dev
helm repo update

The environment variables below contain the key parameters needed by the Helm chart.

PROXY_ADDR=proxy.teleport-demo.in:443
CLUSTER=gke-cluster

helm install teleport-agent teleport/teleport-kube-agent \
  --set kubeClusterName=${CLUSTER?} \
  --set proxyAddr=${PROXY_ADDR?} \
  --set authToken=${TOKEN?} \
  --create-namespace \
  --namespace=teleport-agent \
  --version 12.1.0

Wait a few minutes and check that the agent pod is up and running in the teleport-agent namespace.

kubectl get pods -n teleport-agent
NAME               READY   STATUS    RESTARTS   AGE
teleport-agent-0   1/1     Running   0          22s

Step 3 - Configuring Okta as SSO provider for GKE cluster

Kubernetes uses an identity embedded within the kubeconfig file to access the cluster. We need to add that identity to the kubernetes_users field of a Teleport role so that Teleport users can assume it.

First, let’s get the current user from the kubeconfig file.

kubectl config view  -o jsonpath="{.contexts[?(@.name==\"$(kubectl config current-context)\")].context.user}"
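
In this environment, the command returns the GKE-generated user name, which is referenced in the Teleport role below:

gke_janakiramm-sandbox_asia-south1-a_cluster1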

By default, GKE creates a user based on the format gke_PROJECTNAME_ZONE_CLUSTERNAME. Let’s tell Teleport that logged-in users can assume this identity by creating an RBAC definition in a file named kube-access.yaml and applying it.

kind: role
metadata:
  name: kube-access
version: v5
spec:
  allow:
    kubernetes_labels:
      '*': '*'
    kubernetes_groups:
    - viewers
    kubernetes_users:
    - gke_janakiramm-sandbox_asia-south1-a_cluster1
  deny: {}

tctl create -f kube-access.yaml
role 'kube-access' has been created
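
If you want to double-check the stored definition, you can fetch the role back from the auth server:

tctl get roles/kube-access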

After this, we must update the OIDC connector created in the previous tutorial. This step ensures that users belonging to specific groups within the Okta directory can gain access to the cluster.

Let’s update the Okta connector definition to add the auditor and kube-access roles to the okta-admin group.

Notice how the okta-admin group configured in Okta is mapped to the Teleport roles.
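
For reference, a minimal sketch of what the updated okta-sso.yaml could contain is shown below. The issuer URL, client ID, client secret, and the groups claim are placeholders that depend on the Okta application from the previous tutorial; the claims_to_roles section is what maps the okta-admin group to the Teleport roles:

kind: oidc
version: v3
metadata:
  name: okta
spec:
  issuer_url: https://<YOUR OKTA DOMAIN>.okta.com
  client_id: <OKTA CLIENT ID>
  client_secret: <OKTA CLIENT SECRET>
  redirect_url: https://proxy.teleport-demo.in:443/v1/webapi/oidc/callback
  claims_to_roles:
  - claim: groups
    value: okta-admin
    roles:
    - auditor
    - editor
    - kube-access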

Overwrite the connector configuration by applying it again.

tctl create -f okta-sso.yaml
authentication connector 'okta' has been created

Finally, we need to create a cluster-wide role binding in Kubernetes to bind the groups defined in Teleport to the local roles. Since the Teleport RBAC definition only has viewers under the kubernetes_groups section, let’s bind that group to the built-in view cluster role in Kubernetes. Save the following manifest as viewers-bind.yaml and apply it.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: viewers-crb
subjects:
- kind: Group
  name: viewers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io

kubectl apply -f viewers-bind.yaml

This step essentially closes the loop by mapping Teleport roles with Kubernetes cluster roles.

Step 4 - Accessing GKE clusters through Okta Identity

It’s time to test the configuration by signing in to Teleport as an Okta user and then using the tsh CLI to list the registered clusters.

tsh --proxy=proxy.teleport-demo.in login --auth=okta

This opens a browser window for entering the Okta credentials. Once you log in, the CLI confirms your identity.

tsh --proxy=proxy.teleport-demo.in login --auth=okta --browser=none
> Profile URL:        https://proxy.teleport-demo.in:443
  Logged in as:       [email protected]
  Cluster:            proxy.teleport-demo.in
  Roles:              auditor, editor, kube-access
  Logins:             -teleport-nologin-069ca8fe-4e26-4666-86de-1efdfd68a043, -teleport-internal-join
  Kubernetes:         enabled
  Kubernetes users:   gke_janakiramm-sandbox_asia-south1-a_cluster1
  Kubernetes groups:  viewers
  Valid until:        2023-03-12 07:36:57 +0530 IST [valid for 12h0m0s]
  Extensions:         permit-agent-forwarding, permit-port-forwarding, permit-pty, private-key-policy

Notice that the current user is [email protected]. Now, let’s list the registered Kubernetes clusters and log in to the GKE cluster running in Google Cloud.

tsh kube ls
Kube Cluster Name Labels Selected
----------------- ------ --------
gke-cluster
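
To select the GKE cluster and have tsh add the corresponding context to the kubeconfig file, log in to it by name:

tsh kube login gke-cluster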

At this point, the Teleport client has added another context to the original kubeconfig file. You can verify it by running the following command:

kubectl config view
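
You can also print just the active context to confirm which cluster kubectl will talk to:

kubectl config current-context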

Notice that the current context now points to the Teleport proxy and the registered gke-cluster. You can continue to use the standard Kubernetes client CLI, kubectl, to transparently access the cluster through Teleport’s proxy server.

Let’s try to list all the pods running in the kube-system namespace.

kubectl get pods -n kube-system
NAME                                                 READY   STATUS    RESTARTS   AGE
event-exporter-gke-857959888b-dt9hs                  2/2     Running   0          15m
fluentbit-gke-knzrx                                  2/2     Running   0          15m
fluentbit-gke-q4h2h                                  2/2     Running   0          15m
fluentbit-gke-zfvdw                                  2/2     Running   0          15m
gke-metrics-agent-jlvj7                              1/1     Running   0          15m
gke-metrics-agent-nrzvr                              1/1     Running   0          15m
gke-metrics-agent-x64cn                              1/1     Running   0          15m
konnectivity-agent-7fb5ddc7f9-lwct4                  1/1     Running   0          15m
konnectivity-agent-7fb5ddc7f9-xp9hp                  1/1     Running   0          15m
konnectivity-agent-7fb5ddc7f9-z68kr                  1/1     Running   0          15m
konnectivity-agent-autoscaler-bd45744cc-bdpsq        1/1     Running   0          15m
kube-dns-7d5998784c-8rmtx                            4/4     Running   0          15m
kube-dns-7d5998784c-frk4z                            4/4     Running   0          15m
kube-dns-autoscaler-9f89698b6-x6l8g                  1/1     Running   0          15m
kube-proxy-gke-cluster1-default-pool-0ae13613-2xdw   1/1     Running   0          15m
kube-proxy-gke-cluster1-default-pool-0ae13613-llkp   1/1     Running   0          15m
kube-proxy-gke-cluster1-default-pool-0ae13613-mr0k   1/1     Running   0          14m
l7-default-backend-6dc845c45d-8t5hl                  1/1     Running   0          15m
metrics-server-v0.5.2-6bf845b67f-hdnv4               2/2     Running   0          14m
pdcsi-node-hwst8                                     2/2     Running   0          15m
pdcsi-node-kkx47                                     2/2     Running   0          15m
pdcsi-node-xz7gg                                     2/2     Running   0          15m

Since the bound Kubernetes role only grants view permissions, we can list the pods but not modify resources. To verify the policy, let’s try to create a namespace.

kubectl create ns test
Error from server (Forbidden): namespaces is forbidden: User "default" cannot create resource "namespaces" in API group "" at the cluster scope

This results in an error due to the lack of permission to create resources.
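
As an additional sanity check (not part of the original walkthrough), you can ask the API server what the current identity is allowed to do; given the view-only binding, the first command should report yes and the second no:

kubectl auth can-i list pods
kubectl auth can-i create namespaces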

Since the action violates the access policy defined by RBAC, it is also visible in Teleport’s audit logs.

The audit event details show that a POST request was made to the Kubernetes API server attempting to create the namespace, and that it was denied.

We can define fine-grained RBAC policies that map users from Okta groups to Teleport roles and on to Kubernetes roles. This gives cluster administrators and DevOps teams full control over allowing or restricting access to Kubernetes resources.
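
As an illustration, a hypothetical role for a developer group could reuse the same structure while scoping access to clusters that carry a particular label. The dev-access name and the 'env': 'dev' label below are illustrative assumptions, not part of this setup:

kind: role
metadata:
  name: dev-access
version: v5
spec:
  allow:
    kubernetes_labels:
      'env': 'dev'
    kubernetes_groups:
    - viewers
    kubernetes_users:
    - gke_janakiramm-sandbox_asia-south1-a_cluster1
  deny: {}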

In this tutorial, we learned how to leverage Okta as an SSO provider for Teleport to define access policies for a GKE cluster.