Using the tsh Command Line Tool
This guide will show you how to use the Teleport client tool, `tsh`.

You will learn how to:
- Log in to an interactive shell on remote cluster nodes.
- Copy files to and from cluster nodes.
- Connect to SSH clusters behind firewalls without any open ports using SSH reverse tunnels.
- Explore a cluster and execute commands on specific nodes in the cluster.
- Share interactive shell sessions with colleagues or join someone else's session.
- Replay recorded interactive sessions.
In addition to this document, you can always simply type `tsh` into your terminal for the CLI reference.
For the impatient, here's an example of how a user would typically use `tsh`:
```
# Log into a Teleport cluster. This command retrieves the user's certificates
# and saves them into ~/.tsh/teleport.example.com
$ tsh login --proxy=teleport.example.com

# SSH into a Node as usual:
$ tsh ssh [email protected]

# `tsh ssh` takes the same arguments as the OpenSSH client:
$ tsh ssh -o ForwardAgent=yes [email protected]
$ tsh ssh -o AddKeysToAgent=yes [email protected]

# You can even create a convenient symlink:
$ ln -s /path/to/tsh /path/to/ssh

# ... and now your 'ssh' command is calling Teleport's `tsh ssh`.

# This command removes SSH certificates from a user's machine:
$ tsh logout
```
In other words, Teleport was designed to be fully compatible with existing SSH-based workflows and does not require users to learn anything new, other than `tsh login` in the beginning.
Follow these install instructions to obtain the `tsh` binary. Ideally, install the same version of `tsh` as the version used in your Teleport cluster.
A user identity in Teleport exists in the scope of a cluster. The member nodes of a cluster may have multiple OS users on them. A Teleport administrator assigns allowed logins to every Teleport user account.
When logging into a remote node, you will have to specify both the Teleport login and the OS login. The Teleport identity will have to be passed via the `--user` flag, while the OS login will be passed as `[email protected]`, using syntax compatible with the traditional `ssh` client.
```
# Authenticate against the "work" cluster as joe and then
# log into the node as root:
$ tsh ssh --proxy=work.example.com --user=joe [email protected]
```
To retrieve a user's certificate, execute:
```
# Full form:
$ tsh login --proxy=proxy_host:<https_proxy_port>,<ssh_proxy_port>

# Using default ports:
$ tsh login --proxy=work.example.com

# Using custom HTTPS port:
$ tsh login --proxy=work.example.com:5000

# Using a custom SSH proxy port, which is set on the Auth Server:
$ tsh login --proxy=work.example.com:2002
```
| Parameter | Description |
|-----------|-------------|
| `https_proxy_port` | The HTTPS port the proxy host is listening to (defaults to `3080`). |
| `ssh_proxy_port` | The SSH port the proxy is listening to (defaults to `3023`). |
The login command retrieves a user's certificate and stores it in the `~/.tsh` directory, as well as in the SSH agent if there is one running. This allows you to authenticate just once, perhaps at the beginning of the day. Subsequent `tsh ssh` commands will run without asking for credentials until the temporary certificate expires. By default, Teleport issues user certificates with a time to live (TTL) of 12 hours.
A Teleport cluster can be configured for multiple user identity sources. For example, a cluster may have a local user called `admin` while regular users should authenticate via GitHub. In this case, you have to pass the `--auth` flag to `tsh login` to specify which identity storage to use:
```
# Log in using the local Teleport 'admin' user:
$ tsh --proxy=proxy.example.com --auth=local --user=admin login

# Log in using GitHub as an SSO provider, assuming the GitHub connector is called "github":
$ tsh --proxy=proxy.example.com --auth=github --user=admin login
```
When using an external identity provider to log in, `tsh` will need to open a web browser to complete the authentication flow. By default, `tsh` will use your system's default browser. If you wish to suppress this behavior, you can use the `--browser=none` flag:

```
# Don't open the system default browser when logging in:
$ tsh login --proxy=work.example.com --browser=none
```
In this situation, a link will be printed on the screen. You can copy and paste this link into a browser of your choice to continue the login flow.
To inspect the SSH certificates in `~/.tsh`, a user may execute the following command:

```
$ tsh status

> Profile URL: https://proxy.example.com:3080
  Logged in as: johndoe
  Logins: root, admin, guest
  Valid until: 2017-04-25 15:02:30 -0700 PDT [valid for 1h0m0s]
  Extensions: permit-agent-forwarding, permit-port-forwarding, permit-pty
```
If there is an SSH agent running, `tsh login` will store the user certificate in the agent. This can be verified via `ssh-add -L`.
The SSH agent can be used to feed the certificate to other SSH clients, for example to OpenSSH (`ssh`).

If you wish to disable SSH agent integration, pass the `--no-use-local-ssh-agent` flag to `tsh`. You can also set the `TELEPORT_USE_LOCAL_SSH_AGENT` environment variable to `false` in your shell profile to make this permanent.
`tsh login` can also save the user certificate into a file:
```
# Authenticate the user against proxy.example.com and save the user
# certificate to joe.pem:
$ tsh login --proxy=proxy.example.com --out=joe

# Use joe.pem to log in to the server 'db':
$ tsh ssh --proxy=proxy.example.com -i joe [email protected]
```
By default, the `--out` flag will create an identity file suitable for `tsh -i`. If compatibility with OpenSSH is needed, `--format=openssh` must be specified. In this case, the identity will be saved into two files, `joe` and `joe-cert.pub`:
```
$ tsh login --proxy=proxy.example.com --out=joe --format=openssh
$ ls -lh

-rw------- 1 joe staff 1.7K Aug 10 16:16 joe
-rw------- 1 joe staff 1.5K Aug 10 16:16 joe-cert.pub
```
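As a sketch of how these files might be used with stock OpenSSH (the port `3022` here assumes a Node running the Teleport SSH service on its default port; the host is hypothetical), note that OpenSSH automatically loads `joe-cert.pub` because it sits next to the identity file `joe`:

```shell
# Hypothetical example: connect with plain OpenSSH using the exported identity.
# OpenSSH loads the certificate from joe-cert.pub automatically,
# since it is named <identity>-cert.pub:
$ ssh -i joe -p 3022 [email protected]

# The certificate can also be pointed to explicitly:
$ ssh -i joe -o "CertificateFile=joe-cert.pub" -p 3022 [email protected]
```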
Regular users of Teleport must request an auto-expiring SSH certificate, usually every day. This doesn't work for non-interactive scripts, like cron jobs or a CI/CD pipeline.
For this kind of automation, it is recommended to create a separate Teleport user for bots and request a certificate for them with a long time to live (TTL).
In this example, we're creating a certificate with a TTL of one hour for the `jenkins` user and storing it in a `jenkins.pem` file, which can later be used with the `-i` (identity) flag for `tsh`:
```
# To be executed on a Teleport Auth Server:
$ tctl auth sign --ttl=1h --user=jenkins --out=jenkins.pem
```
Now `jenkins.pem` can be copied to the Jenkins server and passed to the `-i` (identity file) flag of `tsh ssh`. Essentially, `tctl auth sign` is an admin's equivalent of `tsh login --out` and allows for unrestricted certificate TTL values.
For non-production usage, you can use Machine ID, currently in preview, to provide your bot user with automatically updated, short-lived credentials.
In a Teleport cluster, all Nodes periodically ping the cluster's Auth Service and update their status. This allows Teleport users to see which Nodes are online with the `tsh ls` command:
```
# This command lists all Nodes in the cluster you logged into via "tsh login":
$ tsh ls

Node Name    Address         Labels
---------    -------         ------
turing       10.1.0.5:3022   os:linux
turing       10.1.0.6:3022   os:linux
graviton     10.1.0.7:3022   os:osx
```
`tsh ls` can apply a filter based on the Node labels:
```
# Only show Nodes with the os label set to 'osx':
$ tsh ls os=osx

Nodename     UUID                 Address         Labels
---------    -------              -------         ------
graviton     33333333-aaaa-1284   10.1.0.7:3022   os:osx
```
To launch an interactive shell on a remote Node or to execute a command, use the `tsh ssh` command. `tsh` tries to mimic the `ssh` experience as much as possible, so it supports the most popular `ssh` flags like `-p` and `-L`. For example, if you have the following alias defined in your `~/.bashrc`: `alias ssh="tsh ssh"`, then you can continue using familiar SSH syntax:
```
# Have this alias configured, perhaps via ~/.bashrc:
alias ssh="/usr/local/bin/tsh ssh"

# Log in to a cluster and retrieve your SSH certificate:
$ tsh --proxy=proxy.example.com login

# These commands execute `tsh ssh` under the hood:
$ ssh -p 6122 [email protected] ls
$ ssh -o ForwardAgent=yes [email protected]
$ ssh -o AddKeysToAgent=yes [email protected]
```
By default, the Teleport Proxy Service listens for SSH connections on port `3023`. If a Teleport Proxy Service instance is configured to listen on non-default ports, they must be specified via the `--proxy` flag as shown:

```
$ tsh --proxy=proxy.example.com:5000 <subcommand>
```

This `tsh` command will use port `5000` of the Proxy Service.
`tsh ssh` supports the OpenSSH `-L` flag, which forwards incoming connections from localhost to the specified remote host:port. The syntax of the `-L` flag is `[bind_ip]:listen_port:remote_host:remote_port`, where `bind_ip` defaults to `127.0.0.1`:

```
$ tsh ssh -L 5000:web.remote:80 node
```
This will connect to remote server `node` via the Proxy Service, then open a listening socket on `localhost:5000`. Finally, it will forward all incoming connections to `web.remote:80` via this SSH tunnel.
It is often convenient to establish port forwarding, execute a local command which uses the connection, and then disconnect. You can do this with the `--local` flag:

```
$ tsh ssh -L 5000:google.com:80 --local node curl http://localhost:5000
```

This command:

- Connects to `node`.
- Binds the local port `5000` to port `80` on `google.com`.
- Executes the `curl` command locally, which results in `curl` hitting `google.com:80` via `node`.
While implementing OpenSSH's `ProxyJump` (`-J`) support for Teleport, we have extended the feature to `tsh`:

```
$ tsh ssh -J proxy.example.com telenode
```

Note the following caveats:

- Only one jump host is supported (`-J` supports chaining, which Teleport does not utilize) and `tsh` will return an error in the case of two jump hosts, i.e. `-J proxy-1.example.com,proxy-2.example.com` will not work.
- When `tsh ssh -J [email protected]` is used, it overrides the SSH proxy defined in the `tsh` profile, and port forwarding is used instead of the existing Teleport proxy subsystem.
`tsh` supports multiple methods to resolve remote Node names:

- Traditional: by IP address or via DNS.
- Nodename setting: the `teleport` daemon supports the `nodename` flag, which allows Teleport administrators to assign alternative Node names.
- Labels: you can address a Node by a `label=value` pair.

If we have two Nodes, one with an `os:linux` label and one with `os:osx`, we can log in to the OSX Node with:

```
$ tsh ssh os=osx
```
This only works if there is only one remote Node with the `os:osx` label, but you can still execute commands via SSH on multiple Nodes using labels as a selector. This command will update all system packages on machines that run Ubuntu:

```
$ tsh ssh os=ubuntu apt-get update -y
```
The default TTL of a Teleport user certificate is 12 hours. This can be modified at login with the `--ttl` flag. This command logs you into the cluster with a very short-lived (1 minute) temporary certificate:

```
$ tsh --ttl=1 login
```

You will be logged out after one minute, but if you want to log out immediately, you can always run `tsh logout`.
To securely copy files to and from cluster Nodes, use the `tsh scp` command. It is designed to mimic traditional `scp` as much as possible:
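A couple of sketches of typical usage (the host `luna` and the file paths here are hypothetical):

```shell
# Hypothetical example: upload example.txt to /tmp on the Node 'luna':
$ tsh scp example.txt root@luna:/tmp

# Download it back into the current directory:
$ tsh scp root@luna:/tmp/example.txt .
```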
Again, you may want to create a bash alias like `alias scp="tsh --proxy=work scp"` and use the familiar syntax:
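For instance, with that alias in place, a recursive copy to a custom SSH port might look like this (the port `61122`, directory `files`, and destination path are hypothetical):

```shell
# Hypothetical example: with alias scp="tsh --proxy=work scp",
# copy the 'files' directory recursively over a custom port:
$ scp -P 61122 -r files root@node:/path/to/dest
```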
Suppose you are trying to troubleshoot a problem on a remote server. Sometimes it makes sense to ask another team member for help. Traditionally, this could be done by letting them know which host you're on, having them SSH in, starting a terminal multiplexer like `screen`, and joining a session there.
Teleport makes this more convenient. Let's log in to a server named `luna` and ask Teleport for our current session status:

```
$ tsh ssh luna

# on host luna
$ teleport status

User ID    : joe, logged in as joe from 10.0.10.1 43026 3022
Session ID : 7645d523-60cb-436d-b732-99c5df14b7c4
Session URL: https://work:3080/web/sessions/7645d523-60cb-436d-b732-99c5df14b7c4
```
Now you can invite another user account to the `work` cluster. You can share the URL for access through a web browser, or you can share the session ID, and the other user can join you through their terminal by typing:

```
$ tsh join <session_ID>
```
Joining sessions is not supported in recording proxy mode (where `session_recording` is set to `proxy`).
Teleport supports creating clusters of servers located behind firewalls without any open listening TCP ports. This works by creating reverse SSH tunnels from behind-firewall environments into a Teleport Proxy Service you have access to.
These features are called Trusted Clusters. Refer to the Trusted Clusters guide to learn how a Trusted Cluster can be configured.
Assuming the Teleport Proxy Server called `work` is configured with a few Trusted Clusters, a user may use the `tsh clusters` command to see a list of all Trusted Clusters on the server:

```
$ tsh --proxy=work clusters

Cluster Name    Status
------------    ------
```
Now you can use the `--cluster` flag with any `tsh` command. For example, to list SSH nodes that are members of the `production` cluster, simply run:
```
$ tsh --proxy=work ls --cluster=production

Node Name   Node ID     Address           Labels
---------   -------     -------           ------
db-1        xxxxxxxxx   10.0.20.31:3022   kernel:4.4
db-2        xxxxxxxxx   10.0.20.41:3022   kernel:4.2
```
Similarly, if you want to SSH into `db-1` inside the `production` cluster:

```
$ tsh --proxy=work ssh --cluster=production db-1
```
This is possible even if Nodes in the `production` cluster are located behind a firewall without open ports. This works because the `production` cluster establishes a reverse SSH tunnel back into the Proxy Service called `work`, and this tunnel is used to establish inbound SSH connections.