This guide is for those looking for a deeper understanding of Teleport. If you are looking for hands-on instructions on how to set up Teleport for your team, check out the Admin Guide.
Teleport was designed in accordance with the following principles:
This doc introduces the basic concepts of Teleport so you can get started managing access!
Here are definitions of the key concepts you will use in Teleport.
| Concept | Description |
|---------|-------------|
| Node | A node is a "server", "host", or "computer". Users can create shell sessions to access nodes remotely. |
| User | A user represents someone (a person) or something (a machine) who can perform a set of operations on a node. |
| Cluster | A cluster is a group of nodes that work together and can be considered a single system. Cluster nodes can create connections to each other, often over a private network. Cluster nodes often require TLS authentication to ensure that communication between nodes remains secure and comes from a trusted source. |
| Certificate Authority (CA) | A Certificate Authority issues SSL certificates in the form of public/private keypairs. |
| Teleport Node | A Teleport Node is a regular node that is running the Teleport Node service. Teleport Nodes can be accessed by authorized Teleport Users. A Teleport Node is always considered a member of a Teleport Cluster, even if it's a single-node cluster. |
| Teleport User | A Teleport User represents someone who needs access to a Teleport Cluster. Users have stored usernames and passwords, and are mapped to OS users on each node. User data is stored locally or in an external store. |
| Teleport Cluster | A Teleport Cluster is comprised of one or more nodes, each of which holds a certificate signed by the same Auth Server CA. The CA cryptographically signs a node's certificate, establishing cluster membership. |
| Teleport CA | Teleport operates two internal CAs as a function of the Auth service. One is used to sign User certificates and the other signs Node certificates. Each certificate is used to prove identity, establish cluster membership, and manage access. |
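To make the two-CA split concrete, here is a minimal Python sketch. It is illustrative only: real Teleport issues SSH certificates, and all names below are invented. An HMAC tag stands in for a cryptographic signature, to show that trust flows from the CA that signed a certificate, not from the certificate itself.

```python
import hashlib
import hmac


class SimpleCA:
    """Toy stand-in for a certificate authority: signs and verifies identities.

    Real Teleport CAs sign SSH certificates; HMAC here only illustrates
    that a certificate is trusted when it verifies against its issuing CA.
    """

    def __init__(self, secret: bytes):
        self._secret = secret

    def sign(self, identity: str) -> bytes:
        return hmac.new(self._secret, identity.encode(), hashlib.sha256).digest()

    def verify(self, identity: str, signature: bytes) -> bool:
        return hmac.compare_digest(self.sign(identity), signature)


# The Auth service operates two internal CAs: one signs User certificates,
# the other signs Node certificates.
user_ca = SimpleCA(b"user-ca-secret")
host_ca = SimpleCA(b"host-ca-secret")

alice_cert = user_ca.sign("alice")
node_cert = host_ca.sign("node-1.example.com")

# A user certificate verifies against the User CA only:
assert user_ca.verify("alice", alice_cert)
assert not host_ca.verify("alice", alice_cert)  # wrong CA -> rejected
assert host_ca.verify("node-1.example.com", node_cert)
```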
Teleport Nodes are servers which can be accessed remotely with
SSH. The Teleport Node service runs on a machine and is similar to the
`sshd` daemon you may be familiar with. Users can log in to a Teleport Node with
either of the following clients:

- `ssh` (works on Linux, macOS and Windows)
- `tsh ssh` (works on Linux and macOS)
Teleport Auth authenticates Users and Nodes, authorizes User access to Nodes, and acts as a CA by signing certificates issued to Users and Nodes.
The `--roles` flag has no relationship to the concept of User Roles or permissions.
Here is a detailed diagram of a Teleport Cluster.
The numbers correspond to the steps needed to connect a client to a node. These steps are explained in detail below the diagram.
The admin tool, `tctl`, must be physically present on the same machine where Teleport Auth is running. Adding new nodes or inviting new users to the cluster is only possible using this tool.
The client tries to establish an SSH connection to a proxy using the CLI interface or a web browser. When establishing a connection, the client offers its certificate. Clients must always connect through a proxy for two reasons:
Individual nodes may not always be reachable from outside a secure network.
Proxies always record SSH sessions and keep track of active user sessions.
This makes it possible for an SSH user to see if someone else is connected to a node she is about to work on.
The proxy checks if the submitted certificate has been previously signed by the auth server.
If no certificate was previously offered (first-time login) or if the certificate has expired, the proxy denies the connection and asks the client to log in interactively using a password and a second factor, if enabled.
Teleport supports Google Authenticator, Authy, or another TOTP generator. The password and second factor are submitted to the proxy via HTTPS, therefore it is critical for a secure configuration of Teleport to install a proper HTTPS certificate on the proxy.
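The second factor is standard TOTP (RFC 6238), which is why any generator like Google Authenticator or Authy works. Below is a minimal Python sketch of how such a code is derived; this is purely illustrative of the algorithm, not Teleport's own implementation.

```python
import hashlib
import hmac
import struct
import time


def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(key: bytes, for_time=None, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP with a time-based counter (30-second steps)."""
    if for_time is None:
        for_time = time.time()
    return hotp(key, int(for_time) // step)


# With the RFC reference secret, t=59 gives counter 1, matching the
# RFC 4226 test vector 287082.
assert totp(b"12345678901234567890", for_time=59) == "287082"
```

Both the authenticator app and the server derive the same code from a shared secret and the current time, so no network round-trip is needed to generate it.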
If the credentials are correct, the auth server generates and signs a new certificate and returns it to the client via the proxy. The client stores this certificate and will use it for subsequent logins. The certificate will automatically expire after 12 hours by default. This TTL can be configured to another value by the cluster administrator.
At this step, the proxy tries to locate the requested node in a cluster. There are three lookup mechanisms a proxy uses to find the node's IP address:
If the node is located, the proxy establishes the connection between the client and the requested node. The destination node then begins recording the session, sending the session history to the auth server to be stored.
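The recording flow described above can be sketched as follows. This is a toy model with invented names: the real node streams structured session data to the auth server, while this only illustrates the shape of "buffer chunks locally, ship them to the auth server's store".

```python
import time


class SessionRecorder:
    """Toy session recorder: buffers terminal output and flushes it to a store.

    Real Teleport records full session history on the destination node and
    sends it to the auth server for storage; this only mimics that flow.
    """

    def __init__(self, store: list):
        self.store = store   # stand-in for the auth server's session storage
        self.chunks = []

    def record(self, data: bytes) -> None:
        # Each chunk is timestamped so the session can be replayed later.
        self.chunks.append((time.time(), data))

    def flush(self) -> None:
        # Ship buffered chunks to the auth server's store.
        self.store.extend(self.chunks)
        self.chunks.clear()


auth_store = []
recorder = SessionRecorder(auth_store)
recorder.record(b"$ ls\n")
recorder.record(b"file.txt\n")
recorder.flush()
assert len(auth_store) == 2 and auth_store[0][1] == b"$ ls\n"
```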
When the node receives a connection request, it checks with the Auth Server to validate the node's certificate and confirm the node's cluster membership.
If the node certificate is valid, the node is allowed to access the Auth Server API which provides access to information about nodes and users in the cluster.
The node requests the Auth Server to provide a list of OS users (user mappings) for the connecting client, to make sure the client is authorized to use the requested OS login.
Finally, the client is authorized to create an SSH connection to a node.
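The final authorization step above reduces to a lookup of the client's allowed OS logins. A toy sketch follows; the mapping data and function names are invented, and in Teleport this information comes from the Auth Server API rather than a local dictionary.

```python
# Toy user-to-OS-login mapping, standing in for what the node would
# fetch from the Auth Server API.
ALLOWED_LOGINS = {
    "alice": {"root", "ubuntu"},
    "bob": {"ubuntu"},
}


def authorize(teleport_user: str, requested_os_login: str) -> bool:
    # The node asks the Auth Server which OS users this client may use,
    # then rejects the connection if the requested login is not listed.
    return requested_os_login in ALLOWED_LOGINS.get(teleport_user, set())


assert authorize("alice", "root")
assert not authorize("bob", "root")    # bob may only log in as "ubuntu"
assert not authorize("carol", "root")  # unknown users get nothing
```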
Teleport offers two command line tools. `tsh` is a client tool used by the end
user, while `tctl` is used for cluster administration.

`tsh` is similar in nature to OpenSSH's `ssh` and
`scp`. In fact, it has
subcommands named after them, so you can call:
```
$ tsh --proxy=p ssh -p 1522 user@host
$ tsh --proxy=p scp -P example.txt user@host/destination/dir
```
`tsh` is very opinionated about authentication: it always uses
auto-expiring certificates and it always connects to Teleport nodes via a proxy.
When `tsh` logs in, the auto-expiring certificate is stored in
`~/.tsh` and is
valid for 12 hours by default, unless you specify another interval via the
`--ttl` flag (capped by the server-side configuration).
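The interaction between the client's requested `--ttl` and the server-side cap reduces to taking the minimum of the two. A small sketch, with an invented function name and a hypothetical cap value:

```python
from datetime import timedelta
from typing import Optional

SERVER_MAX_TTL = timedelta(hours=12)  # hypothetical server-side cap


def effective_ttl(requested: Optional[timedelta]) -> timedelta:
    # The client may request a shorter certificate lifetime via --ttl,
    # but the server-side configuration caps the result.
    if requested is None:
        return SERVER_MAX_TTL
    return min(requested, SERVER_MAX_TTL)


assert effective_ttl(None) == timedelta(hours=12)
assert effective_ttl(timedelta(hours=2)) == timedelta(hours=2)
assert effective_ttl(timedelta(hours=48)) == timedelta(hours=12)  # capped
```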
You can learn more about
`tsh` in the User Manual.
`tctl` is used to administer a Teleport cluster. It connects to the Auth
server listening on
`127.0.0.1` and allows a cluster administrator to manage
nodes and users in the cluster.
`tctl` is also a tool which can be used to modify the dynamic configuration of
the cluster, like creating new user roles or connecting trusted clusters.
You can learn more about
`tctl` in the Admin Manual.
Read the rest of the Architecture Guides: