
Server Access Getting Started Guide

Video: Teleport Server Access - Intro and Getting Started (Length: 17:14)

Server Access involves managing your resources, configuring new clusters, and issuing commands through a CLI or programmatically to an API.

This guide introduces some of these common scenarios and how to interact with Teleport to accomplish them:

  1. SSH into a cluster using Teleport.
  2. Introspect the cluster using Teleport features.
Tip

This guide also demonstrates how to configure Teleport Nodes using the bastion pattern so that only a single Node can be accessed publicly.

Teleport Bastion

Prerequisites

  • Open Source: A running Teleport cluster. For details on how to set this up, see one of our Getting Started guides.

  • The tctl admin tool and tsh client tool version >= 9.3.7.

    tctl version

    Teleport v9.3.7 go1.17

    tsh version

    Teleport v9.3.7 go1.17

    See Installation for details.

  • Enterprise: A running Teleport Enterprise cluster. For details on how to set this up, see our Enterprise Getting Started guide.

  • The tctl admin tool and tsh client tool version >= 9.3.7, which you can download by visiting the customer portal.

    tctl version

    Teleport v9.3.7 go1.17

    tsh version

    Teleport v9.3.7 go1.17

  • Cloud: A Teleport Cloud account. If you do not have one, visit the sign up page to begin your free trial.

  • The tctl admin tool and tsh client tool version >= 9.3.8. To download these tools, visit the Downloads page.

    tctl version

    Teleport v9.3.8 go1.17

    tsh version

    Teleport v9.3.8 go1.17

  • One host running a Linux environment (such as Ubuntu 20.04, CentOS 8.0, or Debian 10). This will serve as a Teleport Node.

Self-hosted: To connect to Teleport, log in to your cluster using tsh, then use tctl remotely:

tsh login --proxy=teleport.example.com --user=email@example.com
tctl status

Cluster teleport.example.com

Version 9.3.7

CA pin sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678

You can run subsequent tctl commands in this guide on your local machine.

For full privileges, you can also run tctl commands on your Auth Service host.

Cloud: To connect to Teleport, log in to your cluster using tsh, then use tctl remotely:

tsh login --proxy=myinstance.teleport.sh --user=email@example.com
tctl status

Cluster myinstance.teleport.sh

Version 9.3.8

CA pin sha256:sha-hash-here

You must run subsequent tctl commands in this guide on your local machine.

When running Teleport in production, we recommend that you follow the practices below to avoid security incidents. These practices may differ from the examples used in this guide, which are intended for demo environments:

  • Avoid using sudo in production environments unless it's necessary.
  • Create new, non-root, users and use test instances for experimenting with Teleport.
  • Run Teleport's services as a non-root user unless required. Only the SSH Service requires root access. Note that you will need root permissions (or the CAP_NET_BIND_SERVICE capability) to make Teleport listen on a port numbered < 1024 (e.g. 443).
  • Follow the "Principle of Least Privilege" (PoLP). Don't give users permissive roles when more restrictive roles will do. For example, assign users the built-in access and editor roles.
  • When joining a Teleport agent to a cluster, save the invitation token to a file. Otherwise, the token will be visible when examining the teleport command that started the agent, e.g., via the history command on a compromised system.

Step 1/4. Install Teleport on your Linux host

  1. Your Linux host will be a private resource. Open port 22 so you can initially access, configure, and provision your instance.

    We'll configure and launch our instance, then demonstrate how to use the tsh tool and Teleport in SSH mode.

  2. On the host where you will run your Teleport Node, follow the instructions for your environment to install Teleport.

    Debian/Ubuntu (APT):

    Download Teleport's PGP public key

    sudo curl https://deb.releases.teleport.dev/teleport-pubkey.asc \
      -o /usr/share/keyrings/teleport-archive-keyring.asc

    Add the Teleport APT repository

    echo "deb [signed-by=/usr/share/keyrings/teleport-archive-keyring.asc] https://deb.releases.teleport.dev/ stable main" \
      | sudo tee /etc/apt/sources.list.d/teleport.list > /dev/null
    sudo apt-get update
    sudo apt-get install teleport

    RHEL/CentOS (YUM):

    sudo yum-config-manager --add-repo https://rpm.releases.teleport.dev/teleport.repo
    sudo yum install teleport

    Optional: Using DNF on newer distributions

    sudo dnf config-manager --add-repo https://rpm.releases.teleport.dev/teleport.repo
    sudo dnf install teleport

    Tarball (amd64):

    curl https://get.gravitational.com/teleport-v9.3.7-linux-amd64-bin.tar.gz.sha256

    <checksum> <filename>

    curl -O https://get.gravitational.com/teleport-v9.3.7-linux-amd64-bin.tar.gz
    shasum -a 256 teleport-v9.3.7-linux-amd64-bin.tar.gz

    Verify that the checksums match

    tar -xzf teleport-v9.3.7-linux-amd64-bin.tar.gz
    cd teleport
    sudo ./install

    Tarball (arm):

    curl https://get.gravitational.com/teleport-v9.3.7-linux-arm-bin.tar.gz.sha256

    <checksum> <filename>

    curl -O https://get.gravitational.com/teleport-v9.3.7-linux-arm-bin.tar.gz
    shasum -a 256 teleport-v9.3.7-linux-arm-bin.tar.gz

    Verify that the checksums match

    tar -xzf teleport-v9.3.7-linux-arm-bin.tar.gz
    cd teleport
    sudo ./install

    Tarball (arm64):

    curl https://get.gravitational.com/teleport-v9.3.7-linux-arm64-bin.tar.gz.sha256

    <checksum> <filename>

    curl -O https://get.gravitational.com/teleport-v9.3.7-linux-arm64-bin.tar.gz
    shasum -a 256 teleport-v9.3.7-linux-arm64-bin.tar.gz

    Verify that the checksums match

    tar -xzf teleport-v9.3.7-linux-arm64-bin.tar.gz
    cd teleport
    sudo ./install
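
The "Verify that the checksums match" step above can be scripted. The following is a minimal sketch of that comparison; it uses a placeholder file in place of the real tarball so the logic can be run anywhere, and the filename is just an example:

```shell
# Demo of the checksum-verification step, using a stand-in file
# instead of the real download.
cd "$(mktemp -d)"
TARBALL=teleport-v9.3.7-linux-amd64-bin.tar.gz
echo "placeholder tarball contents" > "$TARBALL"    # stand-in download
shasum -a 256 "$TARBALL" > "${TARBALL}.sha256"      # stand-in published checksum

# The actual check: compare the published checksum with one computed locally.
# The .sha256 file holds "<checksum>  <filename>"; keep only the checksum.
expected=$(awk '{print $1}' "${TARBALL}.sha256")
actual=$(shasum -a 256 "$TARBALL" | awk '{print $1}')

if [ "$expected" = "$actual" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH" >&2
  exit 1
fi
```

For a real download, skip the two stand-in lines and run the comparison against the files fetched with curl.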

    Next, we'll create a join token so you can start the Teleport Node and add it to your cluster.

Step 2/4. Add a Node to the cluster

Create a join token

Next, create a join token so you can add the Node to your Teleport cluster.

Let's save the token to a file

sudo tctl tokens add --type=node | grep -oP '(?<=token:\s).*' > token.file

--type=node specifies that the Teleport Node will act and join as an SSH server.

> token.file indicates that you'd like to save the output to a file named token.file.

Tip

This helps to minimize the direct sharing of tokens even when they are dynamically generated.
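
To see what the extraction pipeline above does, here is a minimal sketch run against canned output; the sample text and token value are made up for illustration, not real tctl output:

```shell
# Demo of the token-extraction pipeline against canned output.
# The sample lines and token value are hypothetical.
cd "$(mktemp -d)"
sample='The invite token: abcd1234efgh5678
This token will expire in 60 minutes.'

# Same extraction as in the guide: keep everything after "token: ".
printf '%s\n' "$sample" | grep -oP '(?<=token:\s).*' > token.file

# Restrict the file to its owner, since the token grants join rights.
chmod 600 token.file
cat token.file    # abcd1234efgh5678
```

Note that the lookbehind pattern requires GNU grep's -P (Perl regex) mode.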

Join your Node to the cluster

On your Node, save token.file to an appropriate, secure directory that you have permission to read.

Start the Node. Change tele.example.com to the address of your Teleport Proxy Service. Assign the --token flag to the path where you saved token.file.

Join cluster

sudo teleport start \
  --roles=node \
  --token=/path/to/token.file \
  --auth-server=tele.example.com:443
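
The same settings can instead live in a configuration file, which is convenient when the Node is managed by systemd. A minimal sketch of /etc/teleport.yaml, assuming the token path and Proxy address used above; field names follow the Teleport v9 file format, so double-check them against the configuration reference for your version:

```yaml
teleport:
  # Either a literal token or a path to the token file.
  auth_token: /path/to/token.file
  auth_servers:
    - tele.example.com:443
ssh_service:
  enabled: yes
auth_service:
  enabled: no
proxy_service:
  enabled: no
```

With this file in place, running sudo teleport start with no flags should pick it up automatically.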

Access the Web UI

Run the following command to create a user that can access the Teleport Web UI:

sudo tctl users add tele-admin --roles=editor,access --logins=root,ubuntu,ec2-user

This will generate an initial login link where you can create a password and set up two-factor authentication for tele-admin.

Note

We've only given tele-admin the roles editor and access according to the Principle of Least Privilege.

You should now be able to view your Teleport Node in the Teleport Web UI after logging in as tele-admin:

Both Nodes in the Web UI

Step 3/4. SSH into the server

Now that we've got our cluster up and running, let's see how easy it is to connect to our Node.

We can use tsh to SSH into the cluster:

Log in to the cluster

On your local machine, log in to your cluster through tsh, assigning the --proxy flag to the address of your Teleport Proxy Service:

Log in through tsh

tsh login --proxy=tele.example.com --user=tele-admin

You'll be prompted to supply the password and second factor we set up previously.

tele-admin will now see something similar to:

Profile URL:        https://tele.example.com:443
Logged in as:       tele-admin
Cluster:            tele.example.com
Roles:              access, editor
Logins:             root, ubuntu, ec2-user
Kubernetes:         disabled
Valid until:        2021-04-30 06:39:13 -0500 CDT [valid for 12h0m0s]
Extensions:         permit-agent-forwarding, permit-port-forwarding, permit-pty

In this example, tele-admin is now logged into the tele.example.com cluster through Teleport SSH.

Display cluster resources

tele-admin can now execute the following to find the cluster's Node names, which are used for establishing SSH connections:

Display cluster resources

tsh ls

In this example, the bastion host Node is located on the bottom line below:

Node Name        Address        Labels
---------------- -------------- --------------------------------------
ip-172-31-35-170 ⟵ Tunnel
ip-172-31-41-144 127.0.0.1:3022 env=example, hostname=ip-172-31-41-144

Connect to a Node

tele-admin can SSH into the bastion host Node by running the following command locally:

Use tsh to ssh into a Node

tsh ssh root@ip-172-31-35-170

Now, they can:

  • Connect to other Nodes in the cluster by using the appropriate IP address in the tsh ssh command.
  • Traverse the Linux file system.
  • Execute desired commands.

All commands executed by tele-admin are recorded and can be replayed in the Teleport Web UI.

The tsh ssh command allows users to do anything they could if they were to SSH into a server using a third-party tool. Compare the two equivalent commands:

tsh ssh root@ip-172-31-35-170
ssh -J tele.example.com root@ip-172-31-35-170

Step 4/4. Use tsh and the unified resource catalog to introspect the cluster

Now, tele-admin has the ability to SSH into other Nodes within the cluster, traverse the Linux file system, and execute commands.

  • They have visibility into all resources within the cluster due to their defined and assigned roles.
  • They can also quickly view any Node or grouping of Nodes that have been assigned a particular label.

Display the unified resource catalog

Execute the following command within your bastion host console:

List Nodes

sudo tctl nodes ls

This displays the unified resource catalog with all queried resources in one view:

Nodename         UUID                                 Address        Labels
---------------- ------------------------------------ -------------- -------------------------------------
ip-172-31-35-170 4980899c-d260-414f-9aea-874feef71747
ip-172-31-41-144 f3d2a65f-3fa7-451d-b516-68d189ff9ae5 127.0.0.1:3022 env=example,hostname=ip-172-31-41-144

Note the "Labels" column on the far right. tele-admin can query all resources with a shared label using the command:

Query all Nodes with a label

tsh ls env=example

Customized labels can be defined in your teleport.yaml configuration file or during Node creation.

This is a convenient feature that allows for more advanced queries. If an IP address changes, for example, an admin can quickly find the current Node with that label since it remains unchanged.
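
Static labels like env=example are typically declared on the Node itself. A minimal sketch of the relevant teleport.yaml fragment, assuming the v9 file format; the label values here simply mirror the example output above:

```yaml
ssh_service:
  enabled: yes
  # Static labels attached to this Node, queryable via tsh ls.
  labels:
    env: example
    hostname: ip-172-31-41-144
```

After restarting the Node's teleport process, tsh ls env=example should list it.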

Run commands on all Nodes with a label

tele-admin can also execute commands on all Nodes that share a label, vastly simplifying repeated operations. For example, the command:

Run the ls command on all Nodes with a label

tsh ssh root@env=example ls

will execute the ls command on each Node and display the results in your terminal.

Optional: Harden your bastion host

We previously configured our Linux instance to leave port 22 open to easily configure and install Teleport. Feel free to compare Teleport SSH to your usual ssh commands.

If you'd like to further experiment with using Teleport according to the bastion pattern:

  • Close port 22 on your private Linux instance now that your Teleport Node is configured and running.
  • For self-hosted deployments, optionally close port 22 on your bastion host.
  • You'll be able to fully connect to the private instance and, for self-hosted deployments, the bastion host, using tsh ssh.

Conclusion

To recap, this guide described:

  1. How to set up and add an SSH Node to a cluster.
  2. How to connect to the cluster using tsh to manage and introspect resources.

Feel free to shut down, clean up, and delete your resources, or use them in further Getting Started exercises.

Next steps

Resources