Teleport can provide secure, unified access to your Kubernetes clusters. This guide will show you how to:
- Deploy Teleport in a Kubernetes cluster.
- Set up Single Sign-On (SSO) for authentication to your Teleport cluster.
While completing this guide, you will deploy a single Teleport pod running the Auth Service and Proxy Service in your Kubernetes cluster, and a load balancer that allows outside traffic to your Teleport cluster. Users can then access your Kubernetes cluster via the Teleport cluster running within it.
If you are already running Teleport on another platform, you can use your existing Teleport deployment to access your Kubernetes cluster. Follow our guide to connect your Kubernetes cluster to Teleport.
Follow along with our video guide
Prerequisites
- A registered domain name. This is required for Teleport to set up TLS via Let's Encrypt and for Teleport clients to verify the Proxy Service host.
- A Kubernetes cluster hosted by a cloud provider, which is required for the load balancer we deploy in this guide.
Teleport also supports Kubernetes in on-premise and air-gapped environments. If you would like to try out Teleport on your local machine, we recommend following our Docker Compose guide.
- Kubernetes >= v1.17.0
- Helm >= 3.4.2
Verify that Helm and Kubernetes are installed and up to date.
helm version
version.BuildInfo{Version:"v3.4.2"}

kubectl version
Client Version: version.Info{Major:"1", Minor:"17+"}
Server Version: version.Info{Major:"1", Minor:"17+"}
When running Teleport in production, we recommend that you follow the practices below to avoid security incidents. These practices may differ from the examples used in this guide, which are intended for demo environments:
- Avoid using `sudo` in production environments unless it's necessary.
- Create new, non-root users and use test instances for experimenting with Teleport.
- Run Teleport's services as a non-root user unless required. Only the SSH Service requires root access. Note that you will need root permissions (or the `CAP_NET_BIND_SERVICE` capability) to make Teleport listen on a port numbered lower than 1024 (e.g., 443).
- Follow the "Principle of Least Privilege" (PoLP). Don't give users permissive roles when more restrictive roles will do. For example, don't assign users the built-in `access` and `editor` roles, which grant broad permissions.
- When joining a Teleport agent to a cluster, save the invitation token to a file; see the sketch after this list. Otherwise, the token will be visible when examining the `teleport` command that started the agent, e.g., via the `history` command on a compromised system.
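As a minimal sketch of that last point (the file path, token value, and auth server address are placeholders, not values from this guide):

# Hypothetical example: read the join token from a file so the raw value
# never appears in shell history or `ps` output.
echo "example-join-token" > /var/lib/teleport/token
teleport start --roles=node --token=/var/lib/teleport/token --auth-server=tele.example.com:443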
Step 1/3. Install Teleport
Let's start with a single-pod Teleport deployment using a persistent volume as a backend. Modify the values of `CLUSTER_NAME` and `EMAIL` according to your environment, where `CLUSTER_NAME` is the domain name you are using for your Teleport deployment and `EMAIL` is an email address used for notifications.
CLUSTER_NAME="tele.example.com"EMAIL="[email protected]"helm repo add teleport https://charts.releases.teleport.devInstall a single node teleport cluster and provision a cert using ACME.
Set clusterName to unique hostname, for example tele.example.com
Set acmeEmail to receive correspondence from Letsencrypt certificate authority.
helm install teleport-cluster teleport/teleport-cluster --create-namespace --namespace=teleport-cluster \ --set clusterName=${CLUSTER_NAME?} --set acme=true --set acmeEmail=${EMAIL?}
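Before continuing, you can confirm that the chart deployed successfully (the pod name and age below are illustrative):

kubectl get pods --namespace=teleport-cluster
NAME                                READY   STATUS    RESTARTS   AGE
teleport-cluster-6c9b88fd8f-glmhf   1/1     Running   0          30s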
CLUSTER_NAME="tele.example.com"EMAIL="[email protected]"helm repo add teleport https://charts.releases.teleport.devCreate a namespace for a deployment.
kubectl create namespace teleport-cluster-entSet kubectl context to the namespace to save some typing
kubectl config set-context --current --namespace=teleport-cluster-entGet a license from Teleport and create a secret "license" in the namespace teleport-cluster-ent
kubectl -n teleport-cluster-ent create secret generic license --from-file=license.pemInstall Teleport
helm install teleport-cluster teleport/teleport-cluster --namespace=teleport-cluster-ent \ --set clusterName=${CLUSTER_NAME?} --set acme=true --set acmeEmail=${EMAIL?} --set enterprise=true
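You can verify that the license secret was created before the pod starts (output is illustrative):

kubectl -n teleport-cluster-ent get secret license
NAME      TYPE     DATA   AGE
license   Opaque   1      10s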
Teleport's Helm chart uses an external load balancer to create a public IP for Teleport.
# Set the kubectl context to the namespace to save some typing.
kubectl config set-context --current --namespace=teleport-cluster
# Once the service is up, the load balancer is created.
kubectl get services
NAME               TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)                        AGE
teleport-cluster   LoadBalancer   10.4.4.73    104.199.126.88   443:31204/TCP,3026:32690/TCP   89s
# Save the load balancer IP or hostname.
MYIP=$(kubectl get services teleport-cluster -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $MYIP
192.168.2.1
If `$MYIP` is blank, your cloud provider may have assigned a hostname to the load balancer rather than an IP address. Run the following command to retrieve the hostname, which you will use in place of `$MYIP` in subsequent commands.

MYIP=$(kubectl get services teleport-cluster -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
For Teleport Enterprise:

# Set the kubectl context to the namespace to save some typing.
kubectl config set-context --current --namespace=teleport-cluster-ent
# Once the service is up, the load balancer is created.
kubectl get services
NAME                   TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)                        AGE
teleport-cluster-ent   LoadBalancer   10.4.4.73    104.199.126.88   443:31204/TCP,3026:32690/TCP   89s
# Save the load balancer IP or hostname.
MYIP=$(kubectl get services teleport-cluster-ent -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $MYIP
192.168.2.1

If `$MYIP` is blank, your cloud provider may have assigned a hostname to the load balancer rather than an IP address. Run the following command to retrieve the hostname, which you will use in place of `$MYIP` in subsequent commands.

MYIP=$(kubectl get services teleport-cluster-ent -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
Set up two `A` DNS records: `tele.example.com` for all traffic and `*.tele.example.com` for web apps using Application Access. (We are assuming that your domain name is `example.com`.)
Teleport assigns a subdomain to each application you have configured for Application Access (e.g., `grafana.teleport.example.com`), so you will need to ensure that a DNS A record exists for each application-specific subdomain so clients can access your applications via Teleport. You should create either a separate DNS A record for each subdomain or a single record with a wildcard subdomain such as `*.teleport.example.com`. This way, your certificate authority (e.g., Let's Encrypt) can issue a certificate for each subdomain, enabling clients to verify your Teleport hosts regardless of the application they are accessing.
Here is a tip for finding the AWS hosted zone ID by domain name:

# ${MYIP} should still contain the load balancer IP or hostname saved in the previous step.
MYZONE_DNS="example.com"
MYZONE=$(aws route53 list-hosted-zones-by-name --dns-name=${MYZONE_DNS?} | jq -r '.HostedZones[0].Id' | sed s_/hostedzone/__)
# The fully qualified domain name for your Teleport Proxy Service.
# These commands will also create an A record for a wildcard subdomain.
MYDNS="tele.example.com"
# Create a JSON changeset file for AWS.
jq -n --arg ip ${MYIP?} --arg dns ${MYDNS?} '{"Comment": "Create records", "Changes": [{"Action": "CREATE","ResourceRecordSet": {"Name": $dns, "Type": "A", "TTL": 300, "ResourceRecords": [{ "Value": $ip}]}},{"Action": "CREATE", "ResourceRecordSet": {"Name": ("*." + $dns), "Type": "A", "TTL": 300, "ResourceRecords": [{ "Value": $ip}]}}]}' > myrecords.json
# Review the records before applying.
cat myrecords.json | jq
# Apply the records and capture the change ID.
CHANGEID=$(aws route53 change-resource-record-sets --hosted-zone-id ${MYZONE?} --change-batch file://myrecords.json | jq -r '.ChangeInfo.Id')
# Verify that the change has been applied.
aws route53 get-change --id ${CHANGEID?} | jq '.ChangeInfo.Status'
"INSYNC"
MYZONE="myzone"The fully qualified domain name for your Teleport Proxy Service.
These commands will also create an A record for a wildcard subdomain.
MYDNS="tele.example.com"MYIP="$(curl https://ipv4.icanhazip.com/)"gcloud dns record-sets transaction start --zone="${MYZONE?}"gcloud dns record-sets transaction add ${MYIP?} --name="${MYDNS?}" --ttl="30" --type="A" --zone="${MYZONE?}"gcloud dns record-sets transaction add ${MYIP?} --name="*.${MYDNS?}" --ttl="30" --type="A" --zone="${MYZONE?}"gcloud dns record-sets transaction describe --zone="${MYZONE?}"gcloud dns record-sets transaction execute --zone="${MYZONE?}"
You can use `dig` to make sure that the DNS records have propagated:

dig tele.example.com
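To confirm the wildcard record as well, query any subdomain; grafana here is just an example name:

dig grafana.tele.example.com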
Use the following command to confirm that Teleport is running:
curl https://tele.example.com/webapi/ping
{"server_version":"6.0.0","min_client_version":"3.0.0"}
Step 2/3. Create a local user
Local users are a reliable fallback for cases when the SSO provider is down.
Let's create a local user `alice` who has access to the Kubernetes group `system:masters`.

Save this role as `member.yaml`:
kind: role
version: v5
metadata:
name: member
spec:
allow:
kubernetes_groups: ["system:masters"]
kubernetes_labels:
'*': '*'
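Note that `system:masters` grants full cluster-admin rights, so this role suits a demo more than production. As a sketch of a more restrictive alternative (the group name `viewers` is hypothetical), you could map users to a custom group and bind it to Kubernetes' built-in `view` ClusterRole:

kind: role
version: v5
metadata:
  name: viewer
spec:
  allow:
    # "viewers" is a placeholder group; Teleport impersonates it on your behalf.
    kubernetes_groups: ["viewers"]
    kubernetes_labels:
      '*': '*'

You would then create the binding with, for example, `kubectl create clusterrolebinding viewers-view --clusterrole=view --group=viewers`.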
Create the role and add a user:
To create a local user, we are going to run Teleport's admin tool `tctl` from the pod.

POD=$(kubectl get pod -l app=teleport-cluster -o jsonpath='{.items[0].metadata.name}')
# Create the role.
kubectl exec -i ${POD?} -- tctl create -f < member.yaml
# Generate an invite link for the user.
kubectl exec -ti ${POD?} -- tctl users add alice --roles=member
User "alice" has been created but requires a password. Share this URL with the user to
complete user setup, link is valid for 1h:
https://tele.example.com:443/web/invite/random-token-id-goes-here
NOTE: Make sure tele.example.com:443 points at a Teleport proxy which users can access.
Let's install `tsh` and `tctl` on Linux. For other installation options, check out the installation guide.
curl -L -O https://get.gravitational.com/teleport-v9.3.7-linux-amd64-bin.tar.gz
tar -xzf teleport-v9.3.7-linux-amd64-bin.tar.gz
sudo mv teleport/tsh /usr/local/bin/tsh
sudo mv teleport/tctl /usr/local/bin/tctl
For Teleport Enterprise:

curl -L -O https://get.gravitational.com/teleport-ent-v9.3.7-linux-amd64-bin.tar.gz
tar -xzf teleport-ent-v9.3.7-linux-amd64-bin.tar.gz
sudo mv teleport-ent/tsh /usr/local/bin/tsh
sudo mv teleport-ent/tctl /usr/local/bin/tctl
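You can confirm that the binaries are on your PATH (the output shown is illustrative and will vary by build):

tsh version
Teleport v9.3.7 go1.17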
Try `tsh login` with a local user. Use a custom `KUBECONFIG` to prevent overwriting the default one in case there is a problem.
KUBECONFIG=${HOME?}/teleport.yaml tsh login --proxy=tele.example.com:443 --user=alice
Teleport updates `KUBECONFIG` with a short-lived 12-hour certificate.
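You can inspect the issued certificate, including its expiry, with `tsh status` (output abridged and illustrative):

KUBECONFIG=${HOME?}/teleport.yaml tsh status
> Profile URL:  https://tele.example.com:443
  Logged in as: alice
  Roles:        member
  Valid until:  [a timestamp roughly 12 hours after login]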
# List connected Kubernetes clusters.
tsh kube ls
Kube Cluster Name Selected
----------------- --------
tele.example.com
# Log in to the Kubernetes cluster by name.
tsh kube login tele.example.com
# Once this works, remove the KUBECONFIG= override to switch to Teleport.
KUBECONFIG=${HOME?}/teleport.yaml kubectl get -n teleport-cluster pods
NAME                                READY   STATUS    RESTARTS   AGE
teleport-cluster-6c9b88fd8f-glmhf   1/1     Running   0          127m
Step 3/3. SSO for Kubernetes
In this step, we will set up the GitHub Single Sign-On connector for the OSS version of Teleport and Okta for the Enterprise version.
Save the file below as `github.yaml` and update the fields. You will need to set up the GitHub OAuth 2.0 Connector app. Any member of the `admin` team in the `octocats` organization will be able to assume the built-in `access` role.
kind: github
version: v3
metadata:
# connector name that will be used with `tsh --auth=github login`
name: github
spec:
# client ID of your GitHub OAuth app
client_id: client-id
# client secret of your GitHub OAuth app
client_secret: client-secret
# This name will be shown on UI login screen
display: GitHub
# Change tele.example.com to your domain name
redirect_url: https://tele.example.com:443/v1/webapi/github/callback
# Map github teams to teleport roles
teams_to_logins:
- organization: octocats # GitHub organization name
team: admin # GitHub team name within that organization
# map GitHub's "admin" team to Teleport's "access" role
logins: ["access"]
Follow the SAML Okta Guide to create a SAML app. Check out the OIDC guides for OpenID Connect apps.

Save the file below as `okta.yaml` and update the `acs` field. Any member of the Okta group `okta-admin` will assume the built-in `access` role.
kind: saml
version: v2
metadata:
name: okta
spec:
acs: https://tele.example.com/v1/webapi/saml/acs
attributes_to_roles:
- {name: "groups", value: "okta-admin", roles: ["access"]}
entity_descriptor: |
    <?xml ... (paste your app's XML entity descriptor here; make sure to
    indent every line of the descriptor by four spaces, otherwise the
    YAML will not parse)
To create the connector, we are going to run Teleport's admin tool `tctl` from the pod.

kubectl config set-context --current --namespace=teleport-cluster
POD=$(kubectl get po -l app=teleport-cluster -o jsonpath='{.items[0].metadata.name}')
kubectl exec -i ${POD?} -- tctl create -f < github.yaml
authentication connector "github" has been created
To create the Okta connector, we are going to run Teleport's admin tool `tctl` from the pod.

POD=$(kubectl get po -l app=teleport-cluster-ent -o jsonpath='{.items[0].metadata.name}')
kubectl exec -i ${POD?} -- tctl create -f < okta.yaml
authentication connector 'okta' has been created
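In either edition, you can confirm that the connector was saved by listing connectors from the same pod:

kubectl exec -i ${POD?} -- tctl get connectors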
Try `tsh login` with a GitHub user. This example uses a custom `KUBECONFIG` to prevent overwriting the default one in case there is a problem.
KUBECONFIG=${HOME?}/teleport.yaml tsh login --proxy=tele.example.com --auth=github

# Or, with the Okta connector on Teleport Enterprise:
KUBECONFIG=${HOME?}/teleport.yaml tsh login --proxy=tele.example.com --auth=okta
If you are getting a login error, take a look at the audit log for details:
kubectl exec -ti "${POD?}" -- tail -n 100 /var/lib/teleport/log/events.log
{"error":"user \"alice\" does not belong to any teams configured in \"github\" connector","method":"github","attributes":{"octocats":["devs"]}}
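To watch the audit log live while you retry a failed login, you can follow the file instead of tailing a fixed number of lines:

kubectl exec -ti "${POD?}" -- tail -f /var/lib/teleport/log/events.log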