The design of trusted clusters allows Teleport users to connect to compute infrastructure located behind firewalls without requiring any open inbound TCP ports. Real-world uses of this capability include:
Example of an MSP (managed service provider) using trusted clusters to obtain access to its clients' clusters.
If you haven't already, we recommend reviewing the introduction to Trusted Clusters in the Admin Guide for an overview before continuing with this guide.
The Trusted Clusters chapter in the Admin Guide offers an example of a simple configuration which:
This guide provides more in-depth coverage of trusted cluster features and covers the following topics:
If you have a large number of devices on different networks, such as managed IoT devices, or just a handful of nodes on a separate network, you can use Teleport Node Tunneling instead.
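As a rough sketch of Node Tunneling (the proxy address and join token below are placeholders, not values used elsewhere in this guide), a node on a remote network can dial back through the cluster's proxy instead of connecting to the auth server directly:
# on the remote node; point --auth-server at the PROXY's web port (3080),
# which makes the node establish a reverse tunnel through the proxy:
$ teleport start \
    --roles=node \
    --token=<node-join-token> \
    --auth-server=teleport.example.com:3080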
As explained in the architecture document, Teleport can partition compute infrastructure into multiple clusters. A cluster is a group of SSH nodes connected to the cluster's auth server, which acts as a certificate authority (CA) for all users and nodes.
To retrieve an SSH certificate, users must authenticate with a cluster through a proxy server. So, if users want to connect to nodes belonging to different clusters, they would normally have to use different --proxy flags for each cluster. This is not always convenient.
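For example, without trusted clusters, reaching hosts in two separate clusters requires two separate logins (the cluster addresses below are illustrative):
# each cluster requires its own login through its own proxy:
$ tsh login --proxy=east.example.com
$ tsh ssh host-in-east
$ tsh login --proxy=west.example.com
$ tsh ssh host-in-west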
The concept of leaf clusters allows Teleport administrators to connect multiple clusters together and establish trust between them. Trusted clusters allow users of one cluster, the root cluster, to seamlessly SSH into the nodes of another cluster without having to "hop" between proxy servers. Moreover, users don't even need to have a direct connection to other clusters' proxy servers. The user experience looks like this:
# login using the "root" cluster credentials:
$ tsh login --proxy=root.example.com
# SSH into some host inside the "root" cluster:
$ tsh ssh host
# SSH into the host located in another cluster called "leaf"
# The connection is established through root.example.com:
$ tsh ssh --cluster=leaf host
# See what other clusters are available
$ tsh clusters
Leaf clusters also have their own restrictions on user access, i.e. permissions mapping takes place.
Once a connection has been established, it's easy to switch from the "root" cluster to the "leaf" cluster.
Let's take a look at how a connection is established between the "root" cluster and the "leaf" cluster:
This setup works as follows:
The "leaf" creates an outbound reverse SSH tunnel to "root" and keeps the tunnel open.
Accessibility only works in one direction: the "leaf" cluster allows users from "root" to access its nodes, but users in the "leaf" cluster cannot access the "root" cluster.
When a user tries to connect to a node inside "leaf" using root's proxy, the reverse tunnel from step 1 is used to establish this connection shown as the green line above.
The scheme above also works even if the "root" cluster uses multiple proxies behind a load balancer (LB) or a DNS entry with multiple values. This works by "leaf" establishing a tunnel to every proxy in "root". This requires that an LB uses round-robin or a similar balancing algorithm. Do not use sticky load balancing algorithms (a.k.a. "session affinity" or "sticky sessions") with Teleport proxies.
Let's start with a diagram of how a connection between two clusters is established:
The first step in establishing a secure tunnel between two clusters is for the leaf cluster "leaf" to connect to the root cluster "root". When this happens for the first time, clusters know nothing about each other, thus a shared secret needs to exist in order for "root" to accept the connection from "leaf".
This shared secret is called a "join token". There are two ways to create join tokens: statically define them in a configuration file, or create them on the fly using the tctl tool.
It is important to realize that join tokens are only used to establish the connection for the first time. The clusters will exchange certificates and won't be using the token to re-establish the connection in the future.
To create a static join token, update the configuration file on "root" cluster to look like this:
# fragment of /etc/teleport.yaml:
auth_service:
  enabled: true
  tokens:
  # If using static tokens we recommend using tools like `pwgen -s 32`
  # to generate sufficiently random tokens of 32+ byte length
  - trusted_cluster:mk9JgEVqsgz6pSsHf4kJPAHdVDVtpuE0
This token can be used an unlimited number of times.
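For example, a sufficiently random token can be generated with either of the following commands (assuming pwgen or openssl is available on your system):
# 32 random alphanumeric characters:
$ pwgen -s 32 1
# or 32 random hex characters:
$ openssl rand -hex 16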
Consider the security implications when deciding which token method to use. Short lived tokens decrease the window for attack, but will require any automation which uses these tokens to refresh them on a regular basis.
Creating a token dynamically with a CLI tool offers the advantage of applying a time-to-live (TTL) interval to it, i.e. it will be impossible to reuse such a token after the specified period of time.
To create a token using the CLI tool, execute this command on the auth server of cluster "root":
# generates a trusted cluster token to allow an inbound connection from a leaf cluster:
$ tctl tokens add --type=trusted_cluster --ttl=5m
# Example output
# The cluster invite token: ba4825847f0378bcdfe18113c4998498
# This token will expire in 5 minutes
# generates a trusted cluster token with labels:
# every cluster joined using this token will inherit env:prod labels.
$ tctl tokens add --type=trusted_cluster --labels=env=prod
# you can also list the outstanding non-expired tokens:
$ tctl tokens ls
# ... or delete/revoke an invitation:
$ tctl tokens rm ba4825847f0378bcdfe18113c4998498
Users of Teleport will recognize that this is the same way you would add any node to a cluster. The token created above can be used multiple times and has an expiration time of 5 minutes.
Now, the administrator of "leaf" must create the following resource file:
# cluster.yaml
kind: trusted_cluster
version: v2
metadata:
  # the trusted cluster name MUST match the 'cluster_name' setting of the
  # root cluster
  name: root
spec:
  # this field allows creating tunnels that are disabled, but can be enabled later.
  enabled: true
  # the token expected by the "root" cluster:
  token: ba4825847f0378bcdfe18113c4998498
  # the address in 'host:port' form of the reverse tunnel listening port on the
  # "root" proxy server:
  tunnel_addr: root.example.com:3024
  # the address in 'host:port' form of the web listening port on the
  # "root" proxy server:
  web_proxy_addr: root.example.com:3080
  # the role mapping allows mapping user roles from one cluster to another
  # (enterprise editions of Teleport only)
  role_map:
    - remote: "admin" # users who have the "admin" role on "root"
      local: ["auditor"] # will be assigned the "auditor" role when logging into "leaf"
Then, use tctl create to add the file:
$ tctl create cluster.yaml
At this point the users of the "root" cluster should be able to see "leaf" in the list of available clusters.
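To verify the trust relationship, you can check both sides (the resource name below assumes the trusted cluster was created with name: root as in the example above):
# on a machine logged into "root", the leaf should appear in the cluster list:
$ tsh clusters
# on the auth server of "leaf", the trusted cluster resource should exist:
$ tctl get tc/root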
If the web_proxy_addr
endpoint of the root
cluster uses a self-signed or invalid HTTPS certificate, you will get an
error: "the trusted cluster uses misconfigured HTTP/TLS certificate". For
ease of testing, the Teleport daemon on "leaf" can be started with the
--insecure
CLI flag to accept self-signed certificates. Make sure to configure
HTTPS properly and remove the insecure flag for production use.
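For a quick test only (never in production), that would look something like this, assuming the leaf's configuration lives at the default path:
# accept the root proxy's self-signed certificate while testing:
$ teleport start --config=/etc/teleport.yaml --insecure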
The RBAC section is applicable only to Teleport Enterprise. The open source version does not support SSH roles.
When a leaf cluster "leaf" from the diagram above establishes trust with the root cluster "root", it needs a way to configure which users from "root" should be allowed in and what permissions they should have. Teleport offers two methods of limiting access: role mapping and cluster labels.
Consider the following:
Let's make a few assumptions for this example:
The cluster "root" has two roles: user for regular users and admin for local administrators.
We want administrators from "root" (but not regular users!) to have restricted access to "leaf". We want to deny them access to machines labeled "environment=production" and to any government cluster labeled "customer=gov".
First, we need to create a special role for root users on "leaf":
# save this into root-user-role.yaml on the leaf cluster and execute:
# tctl create root-user-role.yaml
kind: role
version: v3
metadata:
  name: local-admin
spec:
  allow:
    node_labels:
      '*': '*'
    # Cluster labels control what clusters a user can connect to. The wildcard ('*') means
    # any cluster. If no role in the role set is using labels and the cluster is not labeled,
    # the cluster labels check is not applied. Otherwise, cluster labels are always enforced.
    # This makes the feature backwards-compatible.
    cluster_labels:
      'env': 'staging'
  deny:
    # Cluster labels control what clusters a user can connect to. The wildcard ('*') means
    # any cluster. By default none is set in deny rules to preserve backwards compatibility.
    cluster_labels:
      'customer': 'gov'
    node_labels:
      'environment': 'production'
Now, we need to establish trust between roles "root:admin" and "leaf:admin". This is done by creating a trusted cluster resource on "leaf" which looks like this:
# save this as root-cluster.yaml on the auth server of "leaf" and then execute:
# tctl create root-cluster.yaml
kind: trusted_cluster
version: v1
metadata:
  name: "name-of-root-cluster"
spec:
  enabled: true
  role_map:
    - remote: admin
      # admin <-> admin works for community edition. Enterprise users
      # have greater control over RBAC.
      local: [admin]
  token: "join-token-from-root"
  tunnel_addr: root.example.com:3024
  web_proxy_addr: root.example.com:3080
What if we wanted to allow any user from "root" to connect to nodes on "leaf"? In this case we can use a wildcard * in the role_map like this:
role_map:
  - remote: "*"
    local: [admin]
A wildcard can also match part of a remote role name. For example, to map every remote role starting with "cluster-" to a single local role:
role_map:
  - remote: 'cluster-*'
    local: [clusteradmin]
You can even use regular expressions to map user roles from one cluster to another, and you can capture parts of the remote role name and reference them to name the local role:
# in this example, remote users with a remote role called 'remote-one' will be
# mapped to a local role called 'local-one', and `remote-two` becomes `local-two`, etc:
- remote: "^remote-(.*)$"
  local: [local-$1]
NOTE: The regexp matching is activated only when the expression starts with ^ and ends with $.
Customers using Teleport Enterprise can easily configure leaf nodes using the Teleport Proxy UI.
Creating Trust from the Leaf node to the root node.
To update the role map for a trusted cluster, first remove the trusted cluster by executing:
$ tctl rm tc/root-cluster
Then, after updating the role map, re-create the cluster by executing:
$ tctl create root-user-updated-role.yaml
Teleport gives administrators of root clusters the ability to control cluster labels. Allowing leaf clusters to propagate their own labels could create a problem with rogue clusters updating their labels to bad values.
An administrator of a root cluster can control a remote/leaf cluster's labels using the remote cluster API without any fear of override:
$ tctl get rc
kind: remote_cluster
metadata:
  name: two
status:
  connection: online
  last_heartbeat: "2020-09-14T03:13:59.35518164Z"
version: v3
Using tctl to update the labels on the remote/leaf cluster:
$ tctl update rc/two --set-labels=env=prod
cluster two has been updated
Using tctl to confirm that the updated labels have been set:
$ tctl get rc
kind: remote_cluster
metadata:
  labels:
    env: prod
  name: two
status:
  connection: online
  last_heartbeat: "2020-09-14T03:13:59.35518164Z"
Now an admin from the "root" cluster can see and access the "leaf" cluster:
# log into the root cluster:
$ tsh --proxy=root.example.com login admin
# see the list of available clusters
$ tsh clusters
Cluster Name Status
------------ ------
root online
leaf online
# see the list of machines (nodes) behind the leaf cluster:
$ tsh ls --cluster=leaf
Node Name Node ID Address Labels
--------- ------------------ -------------- -----------
db1.leaf cf7cc5cd-935e-46f1 10.0.5.2:3022 role=db-leader
db2.leaf 3879d133-fe81-3212 10.0.5.3:3022 role=db-follower
# SSH into any node in "leaf":
$ tsh ssh --cluster=leaf [email protected]
Trusted clusters work only one way. So, in the example above users from "leaf" cannot see or connect to the nodes in "root".
To temporarily disable trust between clusters, i.e. to disconnect the "leaf" cluster from "root", edit the YAML definition of the trusted cluster resource, set enabled to "false", then update it:
$ tctl create --force cluster.yaml
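For reference, the edited fragment of cluster.yaml would look something like this; only the enabled field changes relative to the earlier example:
# cluster.yaml with the trust relationship temporarily disabled:
kind: trusted_cluster
version: v2
metadata:
  name: root
spec:
  enabled: false
  token: ba4825847f0378bcdfe18113c4998498
  tunnel_addr: root.example.com:3024
  web_proxy_addr: root.example.com:3080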
Once established, to fully remove a trust relationship between two clusters, do the following:
Remove the relationship from the root cluster: tctl rm rc/leaf.example.com.
The leaf.example.com cluster will continue to try and ping the root cluster, but will not be able to connect. To re-establish the trusted cluster relationship, the trusted cluster has to be created again from the leaf cluster.
Remove the relationship from the leaf cluster: tctl rm tc/root.example.com (tc = trusted cluster).
Below is an example of how to share a Kubernetes group between trusted clusters.
In this example, we have a root trusted cluster with a role root that has the following Kubernetes groups:
kubernetes_groups: ["system:masters"]
and SSH logins:
logins: ["root"]
The leaf cluster can choose to map this root cluster role to its own admin role in the trusted cluster config:
role_map:
  - remote: "root"
    local: [admin]
The admin role of the leaf cluster can now be set up to use the root cluster role's logins and kubernetes_groups using the following variables:
logins: ["{{internal.logins}}"]
kubernetes_groups: ["{{internal.kubernetes_groups}}"]
In order to pass logins from a root trusted cluster to a leaf cluster, you must use the variable {{internal.logins}}.
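Putting this together, a minimal sketch of such an admin role on the leaf cluster might look like the following (the node_labels wildcard is an assumption for this example, not part of the original configuration):
# admin-role.yaml on the leaf cluster (illustrative):
kind: role
version: v3
metadata:
  name: admin
spec:
  allow:
    # inherit the UNIX logins and Kubernetes groups carried in the root user's certificate:
    logins: ["{{internal.logins}}"]
    kubernetes_groups: ["{{internal.kubernetes_groups}}"]
    node_labels:
      '*': '*'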
At first glance, Trusted Clusters in combination with RBAC may seem complicated. However, they are based on certificate-based SSH authentication, which is fairly easy to reason about:
One can think of an SSH certificate as a "permit" issued and time-stamped by a certificate authority. A certificate contains four important pieces of data:
Try executing tsh status right after tsh login to see all these fields in the client certificate.
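The output will look roughly like the following (the values shown here are illustrative and the exact fields vary by Teleport version):
$ tsh status
> Profile URL:  https://root.example.com:3080
  Logged in as: joe
  Cluster:      root
  Roles:        admin*
  Logins:       root, joe
  Valid until:  2020-09-14 15:13:59 -0700 PDT [valid for 12h0m0s]
  Extensions:   permit-agent-forwarding, permit-port-forwarding, permit-pty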
When a user from "root" tries to connect to a node inside "leaf", her certificate is presented to the auth server of "leaf" and it performs the following checks:
There are three common types of problems Teleport administrators can run into when configuring trust between two clusters:
HTTPS configuration: the web_proxy_addr endpoint of the root cluster uses a self-signed or invalid HTTPS certificate.
Connectivity problems: a leaf cluster does not show up in tsh clusters output on "root".
Access problems: users from "root" get "access denied" errors when trying to connect to nodes on "leaf".
If the web_proxy_addr endpoint of the root cluster uses a self-signed or invalid HTTPS certificate, you will get an error: "the trusted cluster uses misconfigured HTTP/TLS certificate". For ease of testing, the teleport daemon on "leaf" can be started with the --insecure CLI flag to accept self-signed certificates. Make sure to configure HTTPS properly and remove the insecure flag for production use.
To troubleshoot connectivity problems, enable verbose output for the auth servers on both clusters. Usually this can be done by starting Teleport with the --debug flag: teleport start --debug. You can also do this by updating the configuration file for both auth servers:
# snippet from /etc/teleport.yaml
teleport:
  log:
    output: stderr
    severity: DEBUG
On systemd-based distributions you can watch the log output via:
$ sudo journalctl -fu teleport
Most of the time you will find out that either a join token is mismatched/expired, or the network addresses for tunnel_addr or web_proxy_addr cannot be reached due to pre-existing firewall rules or how your network security groups are configured on AWS.
Troubleshooting "access denied" messages can be challenging. A Teleport administrator should check the following:
Which roles a user is assigned on "root" when they retrieve their SSH certificate via tsh login. You can inspect the retrieved certificate with the tsh status command on the client side.
Which roles a user is assigned on "leaf" when the role mapping takes place. The role mapping result is reflected in the Teleport audit log, which by default is stored in /var/lib/teleport/log on the auth server of a cluster.
Check the audit log messages on both clusters to get answers for the questions above.
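For example, on the auth server of "leaf" you can inspect the file-based audit log directly (the exact directory layout and event wording vary by Teleport version, so treat this as a sketch):
# list the audit log files:
$ sudo ls -R /var/lib/teleport/log
# search recent events for access problems:
$ sudo grep -ri "access denied" /var/lib/teleport/log | tail -n 20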