
Teleport can provide secure, unified access to your Kubernetes clusters. This guide will show you how to:
- Deploy Teleport in a Kubernetes cluster.
- Set up Single Sign-On (SSO) for authentication to your Teleport cluster.
While completing this guide, you will deploy one Teleport pod each for the Auth Service and Proxy Service in your Kubernetes cluster, and a load balancer that allows outside traffic to your Teleport cluster. Users can then access your Kubernetes cluster via the Teleport cluster running within it.
If you are already running Teleport on another platform, you can use your existing Teleport deployment to access your Kubernetes cluster. Follow our guide to connect your Kubernetes cluster to Teleport.
Teleport Cloud takes care of this setup for you so you can provide secure access to your infrastructure right away.
Get started with a free trial of Teleport Cloud.
Follow along with our video guide
Prerequisites
- A registered domain name. This is required for Teleport to set up TLS via Let's Encrypt and for Teleport clients to verify the Proxy Service host.
- A Kubernetes cluster hosted by a cloud provider, which is required for the load balancer we deploy in this guide.
Teleport also supports Kubernetes in on-premise and air-gapped environments. If you would like to try out Teleport on your local machine, we recommend following our Docker Compose guide.
- Kubernetes >= v1.17.0
- Helm >= 3.4.2
Verify that Helm and Kubernetes are installed and up to date.
$ helm version
version.BuildInfo{Version:"v3.4.2"}

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17+"}
Server Version: version.Info{Major:"1", Minor:"17+"}
When running Teleport in production, we recommend that you follow the practices below to avoid security incidents. These practices may differ from the examples used in this guide, which are intended for demo environments:
- Avoid using `sudo` in production environments unless it's necessary.
- Create new, non-root users and use test instances for experimenting with Teleport.
- Run Teleport's services as a non-root user unless required. Only the SSH Service requires root access. Note that you will need root permissions (or the `CAP_NET_BIND_SERVICE` capability) to make Teleport listen on a port numbered lower than 1024 (e.g., 443).
- Follow the "Principle of Least Privilege" (PoLP). Don't give users permissive roles when more restrictive roles will do. For example, rather than assigning every user the built-in `access` and `editor` roles, define roles with only the permissions each user needs.
- When joining a Teleport resource service (e.g., the Database Service or Application Service) to a cluster, save the invitation token to a file. Otherwise, the token will be visible when examining the `teleport` command that started the agent, e.g., via the `history` command on a compromised system (a minimal sketch of this pattern follows this list).
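For that last point, here is a minimal sketch of the pattern, using an SSH-node join purely as an illustration; the paths, token type, and flags are examples rather than the exact commands for this guide's Kubernetes setup, and `teleport start --token` accepts a path to a file as well as a literal token value:

# Generate an invite token (tctl prints the token value and a usage hint)
$ tctl tokens add --type=node --ttl=15m
# Copy just the token value into a root-only file instead of passing it on the command line
$ sudo install -m 0600 /dev/null /var/lib/teleport/token
$ sudo sh -c 'echo "<paste-token-here>" > /var/lib/teleport/token'
# Reference the token by path so the secret never appears in `ps` output or shell history
$ sudo teleport start --roles=node --token=/var/lib/teleport/token --auth-server=tele.example.com:443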
Step 1/3. Install Teleport
Let's start with a Teleport deployment using a persistent volume as a backend. Modify the values of `CLUSTER_NAME` and `EMAIL` according to your environment, where `CLUSTER_NAME` is the domain name you are using for your Teleport deployment and `EMAIL` is an email address used for notifications.
To allow Helm to install charts that are hosted in the Teleport Helm repository, use `helm repo add`:

$ helm repo add teleport https://charts.releases.teleport.dev

To update the cache of charts from the remote repository, run `helm repo update`:

$ helm repo update
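If you want to confirm that the chart is now available locally and see which versions you can install, `helm search repo` will list it (piping to `head` just shortens the output):

$ helm search repo teleport/teleport-cluster --versions | head -5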
$ CLUSTER_NAME="tele.example.com"
$ EMAIL="[email protected]"

# Create the namespace and configure its PodSecurityAdmission
$ kubectl create namespace teleport-cluster
namespace/teleport-cluster created
$ kubectl label namespace teleport-cluster 'pod-security.kubernetes.io/enforce=baseline'
namespace/teleport-cluster labeled

# Install a single-node Teleport cluster and provision a cert using ACME.
# Set clusterName to a unique hostname, for example tele.example.com.
# Set acmeEmail to receive correspondence from the Let's Encrypt certificate authority.
$ helm install teleport-cluster teleport/teleport-cluster \
  --create-namespace \
  --namespace=teleport-cluster \
  --set clusterName=${CLUSTER_NAME?} \
  --set acme=true \
  --set acmeEmail=${EMAIL?} \
  --version 12.1.1
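If you prefer to keep chart settings in a file rather than on the command line, the same options can go in a values file; a minimal sketch reusing the variables defined above (`teleport-values.yaml` is just an example filename, and the keys mirror the `--set` flags used above):

$ cat > teleport-values.yaml <<EOF
clusterName: ${CLUSTER_NAME?}
acme: true
acmeEmail: ${EMAIL?}
EOF
$ helm install teleport-cluster teleport/teleport-cluster \
  --create-namespace \
  --namespace=teleport-cluster \
  -f teleport-values.yaml \
  --version 12.1.1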
$ CLUSTER_NAME="tele.example.com"
$ EMAIL="[email protected]"

# Create the namespace and configure its PodSecurityAdmission
$ kubectl create namespace teleport-cluster-ent
namespace/teleport-cluster-ent created
$ kubectl label namespace teleport-cluster-ent 'pod-security.kubernetes.io/enforce=baseline'
namespace/teleport-cluster-ent labeled

# Set the kubectl context to the namespace to save some typing
$ kubectl config set-context --current --namespace=teleport-cluster-ent

# Get a license from Teleport and create a secret called "license" in the namespace you created
$ kubectl create secret generic license --from-file=license.pem

# Install Teleport
$ helm install teleport-cluster teleport/teleport-cluster \
  --namespace=teleport-cluster-ent \
  --version 12.1.1 \
  --set clusterName=${CLUSTER_NAME?} \
  --set acme=true \
  --set enterprise=true \
  --set acmeEmail=${EMAIL?}
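Either way, you can wait for the chart's deployments to finish rolling out before moving on. A quick check for the open-source install (the deployment names match the pods shown later in this guide; adjust the namespace and names for the Enterprise install):

$ kubectl -n teleport-cluster rollout status deployment/teleport-cluster-auth
$ kubectl -n teleport-cluster rollout status deployment/teleport-cluster-proxy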
Teleport's Helm chart uses an external load balancer to create a public IP for Teleport.
# Set the kubectl context to the namespace to save some typing
$ kubectl config set-context --current --namespace=teleport-cluster

# Service is up, load balancer is created
$ kubectl get services
NAME                    TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)                        AGE
teleport-cluster        LoadBalancer   10.4.4.73    104.199.126.88   443:31204/TCP,3026:32690/TCP   89s
teleport-cluster-auth   ClusterIP      10.4.2.51    <none>           3025/TCP,3026/TCP              89s

# Save the load balancer's IP or hostname.
$ SERVICE_IP=$(kubectl get services teleport-cluster -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ echo $SERVICE_IP
104.199.126.88

If `$SERVICE_IP` is blank, your cloud provider may have assigned a hostname to the load balancer rather than an IP address. Run the following command to retrieve the hostname, which you will use in place of `$SERVICE_IP` for subsequent commands.

$ SERVICE_IP=$(kubectl get services teleport-cluster -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
# Set the kubectl context to the namespace to save some typing
$ kubectl config set-context --current --namespace=teleport-cluster-ent

# Service is up, load balancer is created
$ kubectl get services
NAME                        TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)                        AGE
teleport-cluster-ent        LoadBalancer   10.4.4.73    104.199.126.88   443:31204/TCP,3026:32690/TCP   89s
teleport-cluster-ent-auth   ClusterIP      10.4.2.51    <none>           3025/TCP,3026/TCP              89s

# Save the load balancer's IP or hostname.
$ SERVICE_IP=$(kubectl get services teleport-cluster-ent -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ echo $SERVICE_IP
104.199.126.88

If `$SERVICE_IP` is blank, your cloud provider may have assigned a hostname to the load balancer rather than an IP address. Run the following command to retrieve the hostname, which you will use in place of `$SERVICE_IP` for subsequent commands.

$ SERVICE_IP=$(kubectl get services teleport-cluster-ent -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
Set up two `A` DNS records: `tele.example.com` for all traffic and `*.tele.example.com` for web apps using Application Access. We are assuming that your domain name is `example.com`. Use your own subdomain instead of `tele`.
Teleport assigns a subdomain to each application you have configured for Application Access (e.g., `grafana.teleport.example.com`), so you will need to ensure that a DNS `A` record (or `CNAME` for services that only provide a hostname) exists for each application-specific subdomain so clients can access your applications via Teleport. You should create either a separate DNS record for each subdomain, or a single record with a wildcard subdomain such as `*.teleport.example.com`. This way, your certificate authority (e.g., Let's Encrypt) can issue a certificate for each subdomain, enabling clients to verify your Teleport hosts regardless of the application they are accessing.
Execute the following commands on the host where you are running the Teleport Proxy Service:
# Tip for finding the AWS zone id by the domain name.
$ MYIP="$(curl https://ipv4.icanhazip.com/)"
$ MYZONE_DNS="example.com"
$ MYZONE=$(aws route53 list-hosted-zones-by-name --dns-name=${MYZONE_DNS?} | jq -r '.HostedZones[0].Id' | sed s_/hostedzone/__)

# The fully qualified domain name for your Teleport Proxy Service.
# These commands will also create an A record for a wildcard subdomain.
$ MYDNS="tele.example.com"

# Create a JSON changeset file for AWS.
$ jq -n --arg ip ${MYIP?} --arg dns ${MYDNS?} '{"Comment": "Create records", "Changes": [{"Action": "CREATE","ResourceRecordSet": {"Name": $dns, "Type": "A", "TTL": 300, "ResourceRecords": [{ "Value": $ip}]}},{"Action": "CREATE", "ResourceRecordSet": {"Name": ("*." + $dns), "Type": "A", "TTL": 300, "ResourceRecords": [{ "Value": $ip}]}}]}' > myrecords.json

# Review the records before applying.
$ cat myrecords.json | jq

# Apply the records and capture the change id
$ CHANGEID=$(aws route53 change-resource-record-sets --hosted-zone-id ${MYZONE?} --change-batch file://myrecords.json | jq -r '.ChangeInfo.Id')

# Verify that the change has been applied
$ aws route53 get-change --id ${CHANGEID?} | jq '.ChangeInfo.Status'
"INSYNC"
$ MYZONE="myzone"

# The fully qualified domain name for your Teleport Proxy Service.
# These commands will also create an A record for a wildcard subdomain.
$ MYDNS="tele.example.com"
$ MYIP="$(curl https://ipv4.icanhazip.com/)"
$ gcloud dns record-sets transaction start --zone="${MYZONE?}"
$ gcloud dns record-sets transaction add ${MYIP?} --name="${MYDNS?}" --ttl="30" --type="A" --zone="${MYZONE?}"
$ gcloud dns record-sets transaction add ${MYIP?} --name="*.${MYDNS?}" --ttl="30" --type="A" --zone="${MYZONE?}"
$ gcloud dns record-sets transaction describe --zone="${MYZONE?}"
$ gcloud dns record-sets transaction execute --zone="${MYZONE?}"
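If your load balancer exposes a hostname rather than an IP address (the `$SERVICE_IP` hostname case described earlier), create `CNAME` records instead of `A` records. A Route 53 sketch reusing the `MYZONE` and `MYDNS` variables from the AWS example above (adapt this for your DNS provider):

$ jq -n --arg host ${SERVICE_IP?} --arg dns ${MYDNS?} '{"Comment": "Create records", "Changes": [{"Action": "CREATE","ResourceRecordSet": {"Name": $dns, "Type": "CNAME", "TTL": 300, "ResourceRecords": [{ "Value": $host}]}},{"Action": "CREATE", "ResourceRecordSet": {"Name": ("*." + $dns), "Type": "CNAME", "TTL": 300, "ResourceRecords": [{ "Value": $host}]}}]}' > myrecords.json
$ aws route53 change-resource-record-sets --hosted-zone-id ${MYZONE?} --change-batch file://myrecords.json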
You can use `dig` to make sure that DNS records are propagated:

$ dig tele.example.com

Use the following command to confirm that Teleport is running:

$ curl https://tele.example.com/webapi/ping
{"server_version":"6.0.0","min_client_version":"3.0.0"}
Step 2/3. Create a local user
Local users are a reliable fallback for cases when the SSO provider is down. Let's create a local user `alice` who has access to the Kubernetes group `system:masters`.

Save this role as `member.yaml`:
kind: role
version: v6
metadata:
  name: member
spec:
  allow:
    kubernetes_groups: ["system:masters"]
    kubernetes_labels:
      '*': '*'
    kubernetes_resources:
      - kind: pod
        namespace: "*"
        name: "*"
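The `system:masters` group grants full cluster-admin rights and is convenient for a demo. Following the least-privilege note earlier in this guide, a production role would usually map users to a narrower Kubernetes RBAC group and a narrower scope of resources. A minimal sketch only; the `developers` group, `env: dev` label, and `dev` namespace are placeholders, and the group must be bound to appropriate RBAC rules in your cluster:

$ cat > restricted-member.yaml <<'EOF'
kind: role
version: v6
metadata:
  name: restricted-member
spec:
  allow:
    kubernetes_groups: ["developers"]
    kubernetes_labels:
      'env': 'dev'
    kubernetes_resources:
      - kind: pod
        namespace: "dev"
        name: "*"
EOF

The rest of this guide continues with the broader `member` role above.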
Create the role and add a user:

# Create a role
$ kubectl exec -i deployment/teleport-cluster-auth -- tctl create -f < member.yaml

# Generate an invite link for the user.
$ kubectl exec -ti deployment/teleport-cluster-auth -- tctl users add alice --roles=member
User "alice" has been created but requires a password. Share this URL with the user to
complete user setup, link is valid for 1h:
https://tele.example.com:443/web/invite/random-token-id-goes-here
NOTE: Make sure tele.example.com:443 points at a Teleport proxy which users can access.
# Create a role
$ kubectl exec -i deployment/teleport-cluster-ent-auth -- tctl create -f < member.yaml

# Generate an invite link for the user.
$ kubectl exec -ti deployment/teleport-cluster-ent-auth -- tctl users add alice --roles=member
User "alice" has been created but requires a password. Share this URL with the user to
complete user setup, link is valid for 1h:
https://tele.example.com:443/web/invite/<invite-token>
NOTE: Make sure tele.example.com:443 points at a Teleport proxy which users can access.
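Before sharing the invite link, you can confirm that the role and user exist; a quick check, shown with the open-source deployment name (use `teleport-cluster-ent-auth` for Enterprise):

$ kubectl exec -i deployment/teleport-cluster-auth -- tctl get roles/member
$ kubectl exec -i deployment/teleport-cluster-auth -- tctl users ls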
Let's install `tsh` and `tctl` on Linux. For other install options, check out the installation guide.

# Open-source Teleport
$ curl -L -O https://get.gravitational.com/teleport-v12.1.1-linux-amd64-bin.tar.gz
$ tar -xzf teleport-v12.1.1-linux-amd64-bin.tar.gz
$ sudo mv teleport/tsh /usr/local/bin/tsh
$ sudo mv teleport/tctl /usr/local/bin/tctl

# Teleport Enterprise
$ curl -L -O https://get.gravitational.com/teleport-ent-v12.1.1-linux-amd64-bin.tar.gz
$ tar -xzf teleport-ent-v12.1.1-linux-amd64-bin.tar.gz
$ sudo mv teleport-ent/tsh /usr/local/bin/tsh
$ sudo mv teleport-ent/tctl /usr/local/bin/tctl
Try `tsh login` with a local user.

$ tsh login --proxy=tele.example.com:443 --user=alice

Once you're connected to the Teleport cluster, list the available Kubernetes clusters for your user:

# List connected Kubernetes clusters
$ tsh kube ls
Kube Cluster Name Selected
----------------- --------
tele.example.com
Log in to the Kubernetes cluster and create a new, separate kubeconfig for connecting to it. Using a separate kubeconfig file allows you to easily switch between the kubeconfig you used to install Teleport and the one issued by Teleport, which is useful if something goes wrong during the install process.
$ KUBECONFIG=$HOME/teleport-kubeconfig.yaml tsh kube login tele.example.com
$ KUBECONFIG=$HOME/teleport-kubeconfig.yaml kubectl get -n teleport-cluster pods
NAME READY STATUS RESTARTS AGE
pod/teleport-cluster-auth-57989d4-4q2ds 1/1 Running 0 22h
pod/teleport-cluster-auth-57989d4-rtrzn 1/1 Running 0 22h
pod/teleport-cluster-proxy-c6bf55-w96d2 1/1 Running 0 22h
pod/teleport-cluster-proxy-c6bf55-z256w 1/1 Running 0 22h
Step 3/3. SSO for Kubernetes
In this step, we will set up the GitHub Single Sign-On connector for the OSS version of Teleport and Okta for the Enterprise version.
Save the file below as `github.yaml` and update the fields. You will need to set up the GitHub OAuth 2.0 Connector app. Any member of the `admin` team in the `octocats` organization will be able to assume the built-in `access` role.
kind: github
version: v3
metadata:
  # connector name that will be used with `tsh --auth=github login`
  name: github
spec:
  # client ID of your GitHub OAuth app
  client_id: client-id
  # client secret of your GitHub OAuth app
  client_secret: client-secret
  # This name will be shown on the UI login screen
  display: GitHub
  # Change tele.example.com to your domain name
  redirect_url: https://tele.example.com:443/v1/webapi/github/callback
  # Map GitHub teams to Teleport roles
  teams_to_roles:
    - organization: octocats # GitHub organization name
      team: admin # GitHub team name within that organization
      # map GitHub's "admin" team to Teleport's "access" role
      roles: ["access"]
Follow the SAML Okta Guide to create a SAML app. Check out the OIDC guides for OpenID Connect apps.

Save the file below as `okta.yaml` and update the `acs` field. Any member of the Okta group `okta-admin` will assume the built-in `access` role.
kind: saml
version: v2
metadata:
  name: okta
spec:
  acs: https://tele.example.com/v1/webapi/saml/acs
  attributes_to_roles:
    - {name: "groups", value: "okta-admin", roles: ["access"]}
  entity_descriptor: |
    <?xml !!! Make sure to indent all lines of the XML descriptor
    by 4 spaces, otherwise things will not work
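Optionally, if your local `tctl` includes the `sso test` subcommand (available in recent Teleport versions; treat this as a sketch), you can dry-run a connector definition before creating it:

$ tctl sso test okta.yaml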
To create a connector, we are going to run Teleport's admin tool `tctl` from the pod.

$ kubectl config set-context --current --namespace=teleport-cluster
$ kubectl exec -i deployment/teleport-cluster-auth -- tctl create -f < github.yaml
authentication connector "github" has been created

$ kubectl exec -i deployment/teleport-cluster-ent-auth -- tctl create -f < okta.yaml
authentication connector 'okta' has been created
Try `tsh login` with a GitHub user. This example uses a custom `KUBECONFIG` to prevent overwriting the default one in case there is a problem.

$ KUBECONFIG=${HOME?}/teleport.yaml tsh login --proxy=tele.example.com --auth=github

$ KUBECONFIG=${HOME?}/teleport.yaml tsh login --proxy=tele.example.com --auth=okta
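After the SSO login succeeds, you can select the Kubernetes cluster with the same kubeconfig, mirroring the local-user flow above:

$ KUBECONFIG=${HOME?}/teleport.yaml tsh kube login tele.example.com
$ KUBECONFIG=${HOME?}/teleport.yaml kubectl get pods -n teleport-cluster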
If you are getting a login error, take a look at the audit log for details:

$ kubectl exec -ti deployment/teleport-cluster-auth -- tail -n 100 /var/lib/teleport/log/events.log
{"error":"user \"alice\" does not belong to any teams configured in \"github\" connector","method":"github","attributes":{"octocats":["devs"]}}
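To watch for new audit events while you retry the login, you can also tail the same log file continuously:

$ kubectl exec -ti deployment/teleport-cluster-auth -- tail -f /var/lib/teleport/log/events.log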
Troubleshooting
If you are experiencing errors connecting to the Teleport cluster, check the status of the Auth Service and Proxy Service pods. A successful state should show both pods running as below:
$ kubectl get pods -n teleport-cluster
NAME                                      READY   STATUS    RESTARTS   AGE
teleport-cluster-auth-5f8587bfd4-p5zv6    1/1     Running   0          48s
teleport-cluster-proxy-767747dd94-vkxz6   1/1     Running   0          48s
If a pod's status is `Pending`, use the `kubectl logs` and `kubectl describe` commands for that pod to check the status. The Auth Service pod relies on being able to allocate a Persistent Volume Claim, and may enter a `Pending` state if no Persistent Volume is available.
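For example, you can check whether the Auth Service's Persistent Volume Claim was bound and inspect the pending pod's events (substitute your own pod name from `kubectl get pods`):

$ kubectl get pvc -n teleport-cluster
$ kubectl describe pod -n teleport-cluster teleport-cluster-auth-5f8587bfd4-p5zv6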
Next steps
To see all of the options you can set in the values file for the `teleport-cluster` Helm chart, consult our reference guide.
Read our guides to additional ways you can protect Kubernetes clusters with Teleport: