
Getting Started - Kubernetes with SSO


Let's deploy Teleport in a Kubernetes cluster with SSO and audit logs:

  • Install Teleport in a Kubernetes cluster.
  • Set up Single Sign-On (SSO).
  • Capture and playback Kubernetes commands.

Follow along with our video guide

Prerequisites

Verify that Helm and kubectl are installed and up to date.

$ helm version
# version.BuildInfo{Version:"v3.4.2"}

$ kubectl version
# Client Version: version.Info{Major:"1", Minor:"17+"}
# Server Version: version.Info{Major:"1", Minor:"17+"}

The examples below may include the use of the sudo keyword, token UUIDs, and users with elevated privileges to make following each step easier.

We recommend you follow the best practices to avoid security incidents:

  1. Avoid using sudo in production environments unless it's necessary.
  2. Create new, non-root, users and use test instances for experimenting with Teleport.
  3. You can run many of Teleport's services as a non-root user. For example, the auth, proxy, application access, Kubernetes access, and database access services can all run as a non-root user. Only the SSH/node service requires root access. You will need root permissions (or the CAP_NET_BIND_SERVICE capability) to make Teleport listen on a port numbered lower than 1024 (e.g. 443).
  4. Follow the "Principle of Least Privilege" (PoLP) and "Zero Admin" best practices. Don't give users permissive roles when more restrictive ones will do; for example, the built-in access and editor roles often suffice.
  5. Save tokens into a file rather than sharing tokens directly as strings (see the example after this list).
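For example, here is one way to save a join token to a file instead of pasting it around as a string (a sketch; the --ttl value is a hypothetical choice and tctl flags can vary between Teleport versions):

# Generate a node join token and write the output to a file
tctl tokens add --type=node --ttl=1h > token.file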

Step 1/3. Install Teleport

Let's start with a single-pod Teleport deployment that uses a persistent volume as its backend.

helm repo add teleport https://charts.releases.teleport.dev

For the open source edition, install a single-node Teleport cluster and provision a certificate using ACME.

Set clusterName to a unique hostname, for example tele.example.com.

Set acmeEmail to the email address that should receive correspondence from the Let's Encrypt certificate authority.
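The install commands below expect CLUSTER_NAME and EMAIL to be set in your shell, for example (hypothetical values, replace with your own):

CLUSTER_NAME="tele.example.com"
EMAIL="alice@example.com"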

helm install teleport-cluster teleport/teleport-cluster --create-namespace --namespace=teleport-cluster \
  --set clusterName=${CLUSTER_NAME?} --set acme=true --set acmeEmail=${EMAIL?}
For the Enterprise edition, create a namespace for the deployment.

kubectl create namespace teleport-cluster-ent

Set the kubectl context to the namespace to save some typing:

kubectl config set-context --current --namespace=teleport-cluster-ent

Get a license from Teleport and create a secret named "license" in the teleport-cluster-ent namespace:

kubectl -n teleport-cluster-ent create secret generic license --from-file=license.pem
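You can confirm the secret exists before installing:

kubectl -n teleport-cluster-ent describe secret license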

Install Teleport Enterprise:

helm install teleport-cluster teleport/teleport-cluster --namespace=teleport-cluster-ent \
  --set clusterName=${CLUSTER_NAME?} --set acme=true --set acmeEmail=${EMAIL?} --set enterprise=true

Teleport's Helm chart uses an external load balancer to create a public IP for Teleport.

For the open source deployment, set the kubectl context to its namespace to save some typing:

kubectl config set-context --current --namespace=teleport-cluster

The service is up and a load balancer has been created:

kubectl get services

NAME               TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)                        AGE
teleport-cluster   LoadBalancer   10.4.4.73    104.199.126.88   443:31204/TCP,3026:32690/TCP   89s

Save the load balancer IP or hostname. If the IP is not yet available, check the pod and load balancer status.

MYIP=$(kubectl get services teleport-cluster -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $MYIP

192.168.2.1
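If the external IP still shows <pending>, you can watch the service until the cloud provider finishes provisioning the load balancer:

kubectl get service teleport-cluster --watch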

For the Enterprise deployment, set the kubectl context to its namespace to save some typing:

kubectl config set-context --current --namespace=teleport-cluster-ent

The service is up and a load balancer has been created:

kubectl get services

NAME                   TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)                        AGE
teleport-cluster-ent   LoadBalancer   10.4.4.73    104.199.126.88   443:31204/TCP,3026:32690/TCP   89s

Save the load balancer IP or hostname. If the IP is not yet available, check the pod and load balancer status.

MYIP=$(kubectl get services teleport-cluster-ent -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $MYIP

192.168.2.1

Set up two DNS A records: tele.example.com for the web UI and *.tele.example.com for web apps using application access. If you use Google Cloud DNS:

MYZONE="myzone"
MYDNS="tele.example.com"

gcloud dns record-sets transaction start --zone="${MYZONE?}"
gcloud dns record-sets transaction add ${MYIP?} --name="${MYDNS?}" --ttl="30" --type="A" --zone="${MYZONE?}"
gcloud dns record-sets transaction add ${MYIP?} --name="*.${MYDNS?}" --ttl="30" --type="A" --zone="${MYZONE?}"
gcloud dns record-sets transaction describe --zone="${MYZONE?}"
gcloud dns record-sets transaction execute --zone="${MYZONE?}"
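Once the records propagate, you can verify they resolve to the load balancer (assuming dig is installed; the output should match ${MYIP?}):

dig +short ${MYDNS?}
# 104.199.126.88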

If you use AWS Route 53 instead, here's a tip for finding the hosted zone ID by domain name:

MYZONE_DNS="example.com"
MYZONE=$(aws route53 list-hosted-zones-by-name --dns-name=${MYZONE_DNS?} | jq -r '.HostedZones[0].Id' | sed s_/hostedzone/__)

MYDNS="tele.example.com"

Create a JSON changeset file for AWS:

jq -n --arg ip ${MYIP?} --arg dns ${MYDNS?} '{
  "Comment": "Create records",
  "Changes": [
    {"Action": "CREATE", "ResourceRecordSet": {"Name": $dns, "Type": "A", "TTL": 300, "ResourceRecords": [{"Value": $ip}]}},
    {"Action": "CREATE", "ResourceRecordSet": {"Name": ("*." + $dns), "Type": "A", "TTL": 300, "ResourceRecords": [{"Value": $ip}]}}
  ]
}' > myrecords.json

Review records before applying.

cat myrecords.json | jq

Apply the records and capture the change ID:

CHANGEID=$(aws route53 change-resource-record-sets --hosted-zone-id ${MYZONE?} --change-batch file://myrecords.json | jq -r '.ChangeInfo.Id')

Verify that the change has been applied:

aws route53 get-change --id ${CHANGEID?} | jq '.ChangeInfo.Status'

"INSYNC"

The first request to Teleport's API will take a bit longer because it gets a cert from Let's Encrypt. Teleport will respond with discovery info:

curl https://tele.example.com/webapi/ping

{"server_version":"6.0.0","min_client_version":"3.0.0"}

Step 2/3. Create a local user

Local users are a reliable fallback for cases when the SSO provider is down. Let's create a local user alice who has access to the Kubernetes group system:masters.

Save this role as member.yaml:

kind: role
version: v4
metadata:
  name: member
spec:
  allow:
    kubernetes_groups: ["system:masters"]

Create the role and add a user:

To create a local user, we are going to run Teleport's admin tool tctl from the pod.

POD=$(kubectl get pod -l app=teleport-cluster -o jsonpath='{.items[0].metadata.name}')

Create the role:

kubectl exec -i ${POD?} -- tctl create -f < member.yaml
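You can verify the role was created by reading it back:

kubectl exec -i ${POD?} -- tctl get roles/member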

Generate an invite link for the user.

kubectl exec -ti ${POD?} -- tctl users add alice --roles=member

User "alice" has been created but requires a password. Share this URL with the user to

complete user setup, link is valid for 1h:

https://tele.example.com:443/web/invite/random-token-id-goes-here

NOTE: Make sure tele.example.com:443 points at a Teleport proxy which users can access.

Let's install tsh and tctl on Linux. For other install options, check out the installation guide.

# Open source edition
curl -L -O https://get.gravitational.com/teleport-v8.0.7-linux-amd64-bin.tar.gz
tar -xzf teleport-v8.0.7-linux-amd64-bin.tar.gz
sudo mv teleport/tsh /usr/local/bin/tsh
sudo mv teleport/tctl /usr/local/bin/tctl

# Enterprise edition
curl -L -O https://get.gravitational.com/teleport-ent-v8.0.7-linux-amd64-bin.tar.gz
tar -xzf teleport-ent-v8.0.7-linux-amd64-bin.tar.gz
sudo mv teleport/tsh /usr/local/bin/tsh
sudo mv teleport/tctl /usr/local/bin/tctl
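Verify that the client tools are on your PATH:

tsh version
tctl version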

Try tsh login with a local user. Use a custom KUBECONFIG to prevent overwriting the default one in case there is a problem.

KUBECONFIG=${HOME?}/teleport.yaml tsh login --proxy=tele.example.com:443 --user=alice

Teleport updated KUBECONFIG with a short-lived 12-hour certificate.
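You can inspect the active profile and the certificate's validity at any time:

tsh status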

List connected Kubernetes clusters

tsh kube ls

Kube Cluster Name   Selected
-----------------   --------
tele.example.com

Log in to the Kubernetes cluster by name:

tsh kube login tele.example.com

Once everything works, remove the KUBECONFIG= override to switch to Teleport.

KUBECONFIG=${HOME?}/teleport.yaml kubectl get -n teleport-cluster pods

NAME                                READY   STATUS    RESTARTS   AGE
teleport-cluster-6c9b88fd8f-glmhf   1/1     Running   0          127m

Step 3/3. SSO for Kubernetes

We are going to set up a GitHub connector for the open source edition and an Okta connector for the Enterprise edition.

Save the file below as github.yaml and update the fields. You will need to set up a GitHub OAuth 2.0 app. Any member of the admin team in the octocats organization will be able to assume the built-in access role.

kind: github
version: v3
metadata:
  # connector name that will be used with `tsh --auth=github login`
  name: github
spec:
  # client ID of Github OAuth app
  client_id: client-id
  # client secret of Github OAuth app
  client_secret: client-secret
  # This name will be shown on UI login screen
  display: Github
  # Change tele.example.com to your domain name
  redirect_url: https://tele.example.com:443/v1/webapi/github/callback
  # Map github teams to teleport roles
  teams_to_logins:
    - organization: octocats # Github organization name
      team: admin            # Github team name within that organization
      # map Github's "admin" team to Teleport's "access" role
      logins: ["access"]

Follow the SAML Okta guide to create a SAML app. Check out the OIDC guides for OpenID Connect apps. Save the file below as okta.yaml and update the acs field. Any member of the Okta group okta-admin will assume the built-in access role.

kind: saml
version: v2
metadata:
  name: okta
spec:
  acs: https://tele.example.com/v1/webapi/saml/acs
  attributes_to_roles:
  - {name: "groups", value: "okta-admin", roles: ["access"]}
  entity_descriptor: |
    <?xml !!! Make sure to shift all lines in XML descriptor
    with 4 spaces, otherwise things will not work

To create the GitHub connector, we are going to run Teleport's admin tool tctl from the pod.

kubectl config set-context --current --namespace=teleport-cluster
POD=$(kubectl get po -l app=teleport-cluster -o jsonpath='{.items[0].metadata.name}')

kubectl exec -i ${POD?} -- tctl create -f < github.yaml

authentication connector "github" has been created

To create an Okta connector, we are going to run Teleport's admin tool tctl from the pod.

POD=$(kubectl get po -l app=teleport-cluster-ent -o jsonpath='{.items[0].metadata.name}')

kubectl exec -i ${POD?} -- tctl create -f < okta.yaml

authentication connector "okta" has been created

Try tsh login with a GitHub or Okta user. Use a custom KUBECONFIG to prevent overwriting the default one in case there is a problem.

KUBECONFIG=${HOME?}/teleport.yaml tsh login --proxy=tele.example.com --auth=github
KUBECONFIG=${HOME?}/teleport.yaml tsh login --proxy=tele.example.com --auth=okta
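After an SSO login succeeds, Kubernetes access works the same way as with the local user:

KUBECONFIG=${HOME?}/teleport.yaml tsh kube ls
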
Debugging SSO

If you are getting a login error, take a look at the audit log for details:

kubectl exec -ti "${POD?}" -- tail -n 100 /var/lib/teleport/log/events.log

{"error":"user \"alice\" does not belong to any teams configured in \"github\" connector","method":"github","attributes":{"octocats":["devs"]}}

Next steps
