
Join Services with a Secure Token

Available for: OpenSource, Team, Cloud, Enterprise

In this guide, we will show you how to register a Teleport process running one or more services to your cluster by presenting a join token.

In this approach, you declare your intention to register a new Teleport process, and Teleport generates a secure token that the process uses to establish a trust relationship with the Teleport cluster.

Prerequisites

The prerequisites below differ by Teleport edition (OpenSource, Team, Cloud, or Enterprise); follow the items that apply to your edition:

  • A running Teleport cluster. For details on how to set this up, see the Getting Started guide.

  • The tctl admin tool and tsh client tool version >= 14.0.1.

    See Installation for details.

  • A Teleport Team account. If you don't have an account, sign up to begin your free trial.

  • The Enterprise tctl admin tool and tsh client tool, version >= 13.3.9.

    You can download these tools from the Cloud Downloads page.

  • A running Teleport Enterprise cluster. For details on how to set this up, see the Enterprise Getting Started guide.

  • The Enterprise tctl admin tool and tsh client tool version >= 14.0.1.

    You can download these tools by visiting your Teleport account workspace.


To check version information, run the tctl version and tsh version commands. For example:

tctl version

Teleport Enterprise v13.3.9 git:api/14.0.0-gd1e081e go1.21


tsh version

Teleport v13.3.9 go1.21

Proxy version: 13.3.9
Proxy: teleport.example.com
  • A Linux server that you will use to host your Teleport process, e.g., a virtual machine or Docker container with an image based on a Linux distribution.

    In this guide, we will show you how to register a Teleport SSH Service instance. This approach also applies to other Teleport services, like the Proxy Service, Kubernetes Service, Database Service, and other services for accessing resources in your infrastructure.

    The join token method works whether a cluster includes a single Proxy Service instance or multiple Proxy Service instances behind a load balancer (LB) or a DNS entry with multiple values. If there are multiple Proxy Service instances, a Teleport process joining the cluster establishes a tunnel to every Proxy Service instance.

    If you are using a load balancer, it must use a round-robin or a similar balancing algorithm. Do not use sticky load balancing algorithms (i.e., "session affinity") with Teleport Proxy Service instances.

    If you are using a Docker container, note that this guide assumes that your Linux host has curl and sudo installed; see the sketch below for one way to set up such a container.
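
    If you want to try this guide in a disposable container, a minimal sketch of a suitable host looks like the following. The image, hostname, and options are only an illustration, not a Teleport requirement:

    # Hypothetical test host: an Ubuntu container with the tools this guide assumes.
    docker run -it --rm --hostname teleport-node ubuntu:22.04 bash
    # Inside the container, install the utilities the guide expects:
    apt-get update && apt-get install -y curl sudo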

To check that you can connect to your Teleport cluster, sign in with tsh login, then verify that you can run tctl commands on your administrative workstation using your current credentials. For example:

tsh login --proxy=teleport.example.com --user=email@example.com
tctl status

Cluster teleport.example.com

Version 14.0.1

CA pin sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678

If you can connect to the cluster and run the tctl status command, you can use your current credentials to run subsequent tctl commands from your workstation. If you host your own Teleport cluster, you can also run tctl commands on the computer that hosts the Teleport Auth Service for full permissions.

Step 1/3. Install Teleport

Install Teleport on your Linux host.

Select an edition, then follow the instructions for that edition to install Teleport.


curl https://goteleport.com/static/install.sh | bash -s 14.0.1

Add the Teleport repository to your repository list:

Download Teleport's PGP public key

sudo curl https://apt.releases.teleport.dev/gpg \
  -o /usr/share/keyrings/teleport-archive-keyring.asc

Source variables about OS version

source /etc/os-release

Add the Teleport APT repository for cloud.

echo "deb [signed-by=/usr/share/keyrings/teleport-archive-keyring.asc] \
  https://apt.releases.teleport.dev/${ID?} ${VERSION_CODENAME?} stable/cloud" \
  | sudo tee /etc/apt/sources.list.d/teleport.list > /dev/null

sudo apt-get update
sudo apt-get install teleport-ent-updater

Source variables about OS version

source /etc/os-release

Add the Teleport YUM repository for cloud.

First, get the OS major version from $VERSION_ID so this fetches the correct package version.

VERSION_ID=$(echo $VERSION_ID | grep -Eo "^[0-9]+")
sudo yum-config-manager --add-repo "$(rpm --eval "https://yum.releases.teleport.dev/$ID/$VERSION_ID/Teleport/%{_arch}/stable/cloud/teleport-yum.repo")"
sudo yum install teleport-ent-updater

Tip: Add /usr/local/bin to path used by sudo (so 'sudo tctl users add' will work as per the docs)

echo "Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin" > /etc/sudoers.d/secure_path

Source variables about OS version

source /etc/os-release

Add the Teleport YUM repository for cloud.

First, get the OS major version from $VERSION_ID so this fetches the correct package version.

VERSION_ID=$(echo $VERSION_ID | grep -Eo "^[0-9]+")

Use the dnf config manager plugin to add the teleport RPM repo

sudo dnf config-manager --add-repo "$(rpm --eval "https://yum.releases.teleport.dev/$ID/$VERSION_ID/Teleport/%{_arch}/stable/cloud/teleport-yum.repo")"

Install teleport

sudo dnf install teleport-ent-updater

Tip: Add /usr/local/bin to path used by sudo (so 'sudo tctl users add' will work as per the docs)

echo "Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin" > /etc/sudoers.d/secure_path

Source variables about OS version

source /etc/os-release

Add the Teleport Zypper repository for cloud.

First, get the OS major version from $VERSION_ID so this fetches the correct package version.

VERSION_ID=$(echo $VERSION_ID | grep -Eo "^[0-9]+")

Use Zypper to add the teleport RPM repo

sudo zypper addrepo --refresh --repo $(rpm --eval "https://zypper.releases.teleport.dev/$ID/$VERSION_ID/Teleport/%{_arch}/stable/cloud/teleport-zypper.repo")

Install teleport

sudo zypper install teleport-ent-updater

OS repository channels

The following channels are available for APT, YUM, and Zypper repos. They may be used in place of stable/v14 anywhere in the Teleport documentation.

Channel name     Description
stable/<major>   Receives releases for the specified major release line, e.g., v14
stable/cloud     Rolling channel that receives releases compatible with the current Cloud version
stable/rolling   Rolling channel that receives all published Teleport releases
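
For example, to follow the stable/rolling channel instead of stable/cloud on a Debian-based host, you could point the APT source at that channel. This is a sketch based on the APT commands above; adjust it for your package manager:

echo "deb [signed-by=/usr/share/keyrings/teleport-archive-keyring.asc] \
  https://apt.releases.teleport.dev/${ID?} ${VERSION_CODENAME?} stable/rolling" \
  | sudo tee /etc/apt/sources.list.d/teleport.list > /dev/null
sudo apt-get update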

Before installing a teleport binary with a version other than v13, read our compatibility rules to ensure that the binary is compatible with Teleport Cloud.

When running multiple teleport binaries within a cluster, the following rules apply:

  • Patch and minor versions are always compatible, for example, any 8.0.1 component will work with any 8.0.3 component and any 8.1.0 component will work with any 8.3.0 component.
  • Servers support clients that are 1 major version behind, but do not support clients that are on a newer major version. For example, an 8.x.x Proxy Service is compatible with 7.x.x resource services and 7.x.x tsh, but we don't guarantee that a 9.x.x resource service will work with an 8.x.x Proxy Service. This also means you must not attempt to upgrade from 6.x.x straight to 8.x.x. You must upgrade to 7.x.x first.
  • Proxy Services and resource services do not support Auth Services that are on an older major version, and will fail to connect to older Auth Services by default. This behavior can be overridden by passing --skip-version-check when starting Proxy Services and resource services.
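
If you deliberately need a newer Proxy Service or resource service to connect to an older Auth Service, the override named above is passed at startup. A sketch (the config path is an example):

# Use with care: this bypasses the default compatibility check described above.
sudo teleport start --config=/etc/teleport.yaml --skip-version-check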

Download Teleport's PGP public key

sudo curl https://apt.releases.teleport.dev/gpg \
  -o /usr/share/keyrings/teleport-archive-keyring.asc

Source variables about OS version

source /etc/os-release

Add the Teleport APT repository for v14. You'll need to update this file for each major release of Teleport.

echo "deb [signed-by=/usr/share/keyrings/teleport-archive-keyring.asc] \
  https://apt.releases.teleport.dev/${ID?} ${VERSION_CODENAME?} stable/v14" \
  | sudo tee /etc/apt/sources.list.d/teleport.list > /dev/null

sudo apt-get update
sudo apt-get install teleport-ent

For FedRAMP/FIPS-compliant installations, install the teleport-ent-fips package instead:

sudo apt-get install teleport-ent-fips

Source variables about OS version

source /etc/os-release

Add the Teleport YUM repository for v14. You'll need to update this file for each major release of Teleport.

First, get the major version from $VERSION_ID so this fetches the correct package version.

VERSION_ID=$(echo $VERSION_ID | grep -Eo "^[0-9]+")
sudo yum-config-manager --add-repo "$(rpm --eval "https://yum.releases.teleport.dev/$ID/$VERSION_ID/Teleport/%{_arch}/stable/v14/teleport.repo")"
sudo yum install teleport-ent

Tip: Add /usr/local/bin to path used by sudo (so 'sudo tctl users add' will work as per the docs)

echo "Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin" > /etc/sudoers.d/secure_path

For FedRAMP/FIPS-compliant installations, install the teleport-ent-fips package instead:

sudo yum install teleport-ent-fips

Source variables about OS version

source /etc/os-release

Add the Teleport Zypper repository for v14. You'll need to update this file for each major release of Teleport.

First, get the OS major version from $VERSION_ID so this fetches the correct package version.

VERSION_ID=$(echo $VERSION_ID | grep -Eo "^[0-9]+")

Use zypper to add the teleport RPM repo

sudo zypper addrepo --refresh --repo $(rpm --eval "https://zypper.releases.teleport.dev/$ID/$VERSION_ID/Teleport/%{_arch}/stable/v14/teleport-zypper.repo")
sudo zypper install teleport-ent

Tip: Add /usr/local/bin to path used by sudo (so 'sudo tctl users add' will work as per the docs)

echo "Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin" > /etc/sudoers.d/secure_path

For FedRAMP/FIPS-compliant installations, install the teleport-ent-fips package instead:

sudo zypper install teleport-ent-fips

Source variables about OS version

source /etc/os-release

Add the Teleport YUM repository for v14. You'll need to update this file for each major release of Teleport.

First, get the major version from $VERSION_ID so this fetches the correct package version.

VERSION_ID=$(echo $VERSION_ID | grep -Eo "^[0-9]+")

Use the dnf config manager plugin to add the teleport RPM repo

sudo dnf config-manager --add-repo "$(rpm --eval "https://yum.releases.teleport.dev/$ID/$VERSION_ID/Teleport/%{_arch}/stable/v14/teleport.repo")"

Install teleport

sudo dnf install teleport-ent

Tip: Add /usr/local/bin to path used by sudo (so 'sudo tctl users add' will work as per the docs)

echo "Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin" > /etc/sudoers.d/secure_path

For FedRAMP/FIPS-compliant installations, install the teleport-ent-fips package instead:

sudo dnf install teleport-ent-fips

Source variables about OS version

source /etc/os-release

Add the Teleport Zypper repository.

First, get the OS major version from $VERSION_ID so this fetches the correct package version.

VERSION_ID=$(echo $VERSION_ID | grep -Eo "^[0-9]+")

Use Zypper to add the teleport RPM repo

sudo zypper addrepo --refresh --repo $(rpm --eval "https://zypper.releases.teleport.dev/$ID/$VERSION_ID/Teleport/%{_arch}/stable/v14/teleport-zypper.repo")

Install teleport

sudo zypper install teleport-ent

For FedRAMP/FIPS-compliant installations, install the teleport-ent-fips package instead:

sudo zypper install teleport-ent-fips

In the example commands below, replace $SYSTEM_ARCH with the appropriate value (amd64, arm64, or arm).
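
If you are unsure which value to use, one way to derive it from the machine architecture is the following sketch; the uname-to-architecture mapping is an assumption, so confirm it matches your platform:

# Map the kernel's architecture name to the value Teleport release URLs expect.
case "$(uname -m)" in
  x86_64)  SYSTEM_ARCH=amd64 ;;
  aarch64) SYSTEM_ARCH=arm64 ;;
  armv7l)  SYSTEM_ARCH=arm ;;
esac
echo "$SYSTEM_ARCH"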

curl https://get.gravitational.com/teleport-ent-v14.0.1-linux-$SYSTEM_ARCH-bin.tar.gz.sha256

<checksum> <filename>

curl -O https://cdn.teleport.dev/teleport-ent-v14.0.1-linux-$SYSTEM_ARCH-bin.tar.gz
shasum -a 256 teleport-ent-v14.0.1-linux-$SYSTEM_ARCH-bin.tar.gz

Verify that the checksums match

tar -xvf teleport-ent-v14.0.1-linux-$SYSTEM_ARCH-bin.tar.gz
cd teleport-ent
sudo ./install
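
If you prefer an automated comparison for the "verify that the checksums match" step above, one approach is to download the published checksum file and let shasum check it, assuming the .sha256 file lists the checksum and filename as shown earlier:

# Fetch the published checksum next to the tarball, then verify the download against it.
curl -fsSL -O https://get.gravitational.com/teleport-ent-v14.0.1-linux-$SYSTEM_ARCH-bin.tar.gz.sha256
shasum -a 256 -c teleport-ent-v14.0.1-linux-$SYSTEM_ARCH-bin.tar.gz.sha256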

For FedRAMP/FIPS-compliant installations of Teleport Enterprise, package URLs will be slightly different:

curl https://get.gravitational.com/teleport-ent-v14.0.1-linux-$SYSTEM_ARCH-fips-bin.tar.gz.sha256

<checksum> <filename>

curl -O https://cdn.teleport.dev/teleport-ent-v14.0.1-linux-$SYSTEM_ARCH-fips-bin.tar.gz
shasum -a 256 teleport-ent-v14.0.1-linux-$SYSTEM_ARCH-fips-bin.tar.gz

Verify that the checksums match

tar -xvf teleport-ent-v14.0.1-linux-$SYSTEM_ARCH-fips-bin.tar.gz
cd teleport-ent
sudo ./install


Step 2/3. Join your Teleport process to the cluster

In this section, we will join your Teleport process to your cluster by:

  • Obtaining a join token
  • Running your Teleport process with the join token

Generate a token

Teleport only allows access to resources in your infrastructure via Teleport processes that have joined the cluster.

On your local machine, use the tctl tool to generate a new token. In the following example, a new token is created with a TTL of five minutes:

Generate a short-lived invite token for a new Teleport SSH Service instance:

tctl tokens add --ttl=5m --type=node

The invite token: abcd123-insecure-do-not-use-this
This token will expire in 5 minutes.

Run this on the new node to join the cluster:

> teleport start \
   --roles=node \
   --token=abcd123-insecure-do-not-use-this \
   --ca-pin=sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678 \
   --auth-server=192.0.2.0:3025

Please note:

  - This invitation token will expire in 5 minutes
  - 192.0.2.0:3025 must be reachable from the new node

In this command, we assigned the token the node type, indicating that it will belong to an SSH Service instance.

Copy the token so you can use it later in this guide. You can ignore the rest of the tctl tokens add output.

Here are all the values we support for the --type flag when creating a join token:

Role              Teleport Service
app               Application Service
auth              Auth Service
bot               Machine ID
db                Database Service
discovery         Discovery Service
kube              Kubernetes Service
node              SSH Service
proxy             Proxy Service
windowsdesktop    Windows Desktop Service

Administrators can generate tokens as they are needed. A Teleport process can use a token multiple times until its time to live (TTL) expires, with the exception of tokens with the bot type, which are used by Machine ID.
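
For example, to create a one-hour token that a Database Service instance can use to join, the same command takes a different --type and --ttl; the values here are illustrative:

tctl tokens add --ttl=1h --type=db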

To list all of the tokens you have generated, run the following command:

tctl tokens ls
Token                            Type Labels Expiry Time (UTC)
-------------------------------- ---- ------ --------------------------
abcd123-insecure-do-not-use-this Node        30 Mar 23 18:15 UTC (2m8s)

Use short-lived tokens instead of long-lived static tokens. Static tokens are easier to steal, guess, and leak.

Static tokens are defined ahead of time by an administrator and stored in the Auth Service's config file:

# Config section in `/etc/teleport.yaml` file for the Auth Service
auth_service:
    enabled: true
    tokens:
    # This static token allows new hosts to join the cluster as "proxy" or "node"
    - "proxy,node:secret-token-value"
    # A token can also be stored in a file. In this example the token for adding
    # new Auth Service instances is stored in /path/to/tokenfile
    - "auth:/path/to/tokenfile"
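
If you do store a static token in a file as in the last entry above, a minimal sketch for creating one with restrictive permissions follows; the path and the use of openssl are illustrative:

# Generate a random token value and write it to the file referenced in teleport.yaml.
openssl rand -hex 32 | sudo tee /path/to/tokenfile > /dev/null
sudo chmod 0600 /path/to/tokenfile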

Start your Teleport process with the invite token

Execute the following command on the host running your new Teleport process to add it to a cluster. Assign join-token to the token you generated earlier and proxy-address to the host and web port of your Teleport Proxy Service or Teleport Enterprise Cloud tenant (e.g., teleport.example.com:443):

sudo teleport configure \
   --roles=node \
   --token=join-token \
   --proxy=proxy-address \
   -o file

For SSH Service instances, you can also run teleport node configure instead of teleport configure. This way, you can exclude the --roles=node flag from the command.
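
A sketch of that equivalent command, using the same placeholder values as above (flag spellings assume the current teleport CLI):

sudo teleport node configure \
   --token=join-token \
   --proxy=proxy-address \
   --output=file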

So far, this guide has assumed that you are joining your new Teleport process to your cluster by connecting it to the Proxy Service. (This is the only possibility in Teleport Enterprise Cloud.) Depending on the design of your infrastructure, you may need to connect your new Teleport process directly to the Auth Service.

Only connect Teleport processes directly to the Auth Service if no other join methods are suitable, as we recommend exposing the Auth Service to as few sources of ingress traffic as possible.

The Teleport process joining the cluster must also establish trust with the Auth Service in order to prevent an attacker from hijacking the address of your Auth Service host.

To do this, you supply your new Teleport process with a secure hash value generated by the Auth Service's certificate authority, called a CA pin. This way, an attacker cannot easily forge a private key to trick your Teleport process into communicating with a malicious service.

Obtain a CA pin

On your local machine, retrieve the CA pin of the Auth Service:

tctl status
Cluster       teleport.example.com
Version       12.1.1
host CA       never updated
user CA       never updated
db CA         never updated
openssh CA    never updated
jwt CA        never updated
saml_idp CA   never updated
CA pin        sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678

Copy the CA pin and assign it to the value of ca-pin.

The CA pin becomes invalid if a Teleport administrator performs a CA rotation by executing tctl auth rotate.

Configure your Teleport process with the join token and CA pin

Run the following command to configure your Teleport process instead of the teleport configure command we showed you earlier. Assign auth-service to the host and gRPC port of your Auth Service host, e.g., teleport.example.com:3025.

sudo teleport configure \
   --roles=node \
   --token=join-token \
   --auth-server=auth-service \
   -o file

Next, edit the Teleport configuration file, /etc/teleport.yaml, assigning the CA pin (the teleport.ca_pin field) to the one you copied earlier:

sudo sed -i 's| ca_pin: ""| ca_pin: "ca-pin"|' /etc/teleport.yaml
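
To confirm the substitution took effect, a quick check is:

sudo grep -n 'ca_pin' /etc/teleport.yaml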

Configure your Teleport instance to start automatically when the host boots up by creating a systemd service for it. The instructions depend on how you installed your Teleport instance.

On the host where you will run your Teleport instance, enable and start Teleport:

sudo systemctl enable teleport
sudo systemctl start teleport

On the host where you will run your Teleport instance, create a systemd service configuration for Teleport, enable the Teleport service, and start Teleport:

sudo teleport install systemd -o /etc/systemd/system/teleport.service
sudo systemctl enable teleport
sudo systemctl start teleport

You can check the status of your Teleport instance with systemctl status teleport and view its logs with journalctl -fu teleport.
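
For reference, those checks look like this:

sudo systemctl status teleport
sudo journalctl -fu teleport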

If you followed this guide with a local Docker container, execute the following command within your container to run your new Teleport process in the foreground:

teleport start

As new services come online, they start sending heartbeat requests every few seconds to the Auth Service. This allows users to explore cluster membership and size.

Run the following command on your local machine to see all of the Teleport SSH Service instances in your cluster:

tctl nodes ls
Host          UUID                  Public Address Labels                 Version
------------- --------------------- -------------- ---------------------- -------
1f58429134c4  6805dda3-779e-493b...                hostname=1f58429134c4  14.0.1

Step 3/3. Revoke an invitation

You can revoke a join token to prevent a Teleport process from using it.

Run the following command on your local machine to create a token for a new Proxy Service:

tctl nodes add --ttl=5m --roles=proxy

The invite token: efgh456-insecure-do-not-use-this
This token will expire in 5 minutes.

Run this on the new node to join the cluster:

> teleport start \
   --roles=proxy \
   --token=efgh456-insecure-do-not-use-this \
   --ca-pin=sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678 \
   --auth-server=123.123.123.123:443

Please note:

  - This invitation token will expire in 5 minutes
  - 123.123.123.123 must be reachable from the new node

Next, run the following command to see a list of outstanding tokens:

tctl tokens ls
Token                            Type  Labels Expiry Time (UTC)
-------------------------------- ----- ------ ---------------------------
abcd123-insecure-do-not-use-this Node         30 Mar 23 18:20 UTC (36s)
efgh456-insecure-do-not-use-this Proxy        30 Mar 23 18:24 UTC (4m39s)

Signup tokens

The output of tctl tokens ls includes tokens used for adding users alongside tokens used for adding Teleport processes to your cluster.

You generated the token with the Node role earlier in this guide to invite a new Teleport process to this cluster. The second token is the one you generated for a Proxy Service instance.

Tokens created via tctl can be deleted (revoked) via the tctl tokens rm command. Copy the second token from the output above and run the following command to delete it, assigning the token to token-to-delete.

tctl tokens rm token-to-delete

Token efgh456-insecure-do-not-use-this has been deleted

Next steps

  • If you have workloads split across different networks or clouds, we recommend setting up Trusted Clusters. Read how to get started in our Trusted Clusters guide.