Teleport Relay Service

info

All Teleport Relay service functionality is available starting from Teleport 18.3.0 for the SSH protocol. Other protocols are not supported at this time.

The Relay service is an optional component of a Teleport cluster that provides an alternative connectivity path between clients and resources. Similar to the Proxy service, it forwards connections to resources through reverse tunnels. However, unlike the Proxy service, the Relay service does not route connections through the Teleport control plane. Instead, it allows Teleport agents to open tunnels directly to the Relay service, enabling clients to connect to resources with lower latency and higher efficiency in specific network scenarios.

This alternate connectivity is beneficial when the client and the resource are known to be close together in a physical or logical sense (e.g. the same data center, office, campus, or geographical region) and regular connections through the control plane would incur higher latency, lower throughput, or higher costs.

Unlike the Proxy service, the Relay does not host a web UI, intercept connections, impersonate users, or provide access to the Teleport control plane API. As such, it's simpler to deploy and keep secure, making it suitable for deployment even in environments where it wouldn't be possible to securely deploy a Proxy instance.

warning

The Teleport Relay service is intended for specific scenarios where clients and agents are in the same network segment and there is a need for connectivity that does not go through the Teleport control plane. It is not a required or recommended cluster component in most Teleport deployments.

How it works

A Relay service deployment consists of one or more Relay instances configured with the same Relay group name, reachable by Teleport agents and clients through an L4 network load balancer. Relay instances of the same group receive connections from agents and clients through the load balancer, and can connect to each other directly. Each Relay instance listens on two separate ports to serve connections from agents and clients, and on a third port used for direct connections from other Relay instances in the same group.

The typical setup uses a load balancer serving the two ports, reachable at a single hostname, with port 3042 used by agents and port 443 used by clients. However, it's possible to use different load balancers and hostnames for the client and agent ports if necessary. The listening port for connectivity between Relay instances does not go through the load balancer, since each Relay instance may need to connect to specific other instances to serve a connection; as such, the Relay service requires outbound connectivity from each Relay instance to its peers on that port.

The connections from clients to the Relay, the connections between Relay instances, and the tunnel connections from Teleport agents to the Relay instances all use TCP and are encrypted and authenticated through mutual TLS, with X.509 certificates issued by the Teleport control plane.

The relay_service section of the Teleport configuration file (see the reference) defines both the network connectivity of the individual Relay instance and the settings of the Relay group, which must be identical across all instances and are fetched by agents as needed.
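
As a rough illustration, a Relay instance's configuration could resemble the sketch below; the field names here are assumptions for illustration only, so consult the relay_service configuration reference for the authoritative schema.

# hypothetical sketch of a relay_service section; field names are
# illustrative assumptions, not the authoritative schema
relay_service:
  enabled: true
  # group name shared by every instance in this Relay group (assumed name)
  relay_group: example-group
  # hostname agents and clients use to reach the load balancer (assumed name)
  public_hostname: relay.example.com
  # how many distinct Relay instances each agent should keep tunnels to;
  # keep this at or below the number of live instances (assumed name)
  target_connection_count: 2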

An agent can be configured to use a given Relay by adding the address of the load balancer to its teleport.yaml configuration file:

teleport:
  proxy_server: proxy.example.com:443
  relay_server: relay.example.com:3042

Agents running the SSH service that are also connected to a Proxy (i.e. they're running in "tunnel mode") will also open tunnels to the Relay service. The address specified in the configuration should resolve to the load balancer for the port used by agents.

When an agent is configured to open tunnels to a given Relay group, it will periodically check the Relay configuration advertised by the Relay instances, then open tunnel connections to as many distinct Relay instances as the Relay group configuration specifies. The target connection count in the configuration should not exceed the number of active and reachable Relay instances in the group at any given time; otherwise, all agents configured to use the group will keep connecting to the load balancer in search of distinct instances that aren't actually available. For example, with a target of two connections and five reachable instances, each agent maintains tunnels to two distinct instances.

Before a Relay instance shuts down, it informs all connected agents that it's about to terminate, and the agents then open new tunnel connections to replace their existing ones, ideally maintaining availability throughout the process. Because of this, we recommend deploying new Relay instances before shutting down old ones whenever possible; this is what happens when deploying the Relay service through the teleport-relay Helm chart. Running a fixed set of Relay instances can result in minor downtime when restarting instances.

When connected to a Relay group, Teleport agents will include the group name and the list of Relay instances that they have connected to in the heartbeat for their served resources. It's possible to see this by reading the appropriate resource heartbeat through tctl (e.g. tctl get node/<host ID>) or, for SSH servers, tsh ls --format=json. The list of Relay instance IDs is also used internally by the Relay service to route connections if an agent serving the requested resource is not connected to all Relay instances: in such a case, the Relay instance forwarding a connection from a client will pass the connection along to a Relay instance that has a tunnel from the agent serving the resource - this is the same mechanism used for connection routing in the control plane when using Proxy Peering.
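
For example, you can inspect a server's heartbeat with the commands mentioned above (output omitted here; the Relay group name and instance IDs appear in the returned resource):

# read a server's heartbeat by host ID
$ tctl get node/<host ID>
...
# or list SSH servers, including relay details, as JSON
$ tsh ls --format=json
...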

When a Relay instance receives a request to open a connection to a resource, it runs the same connection routing logic used by the Proxy service, resulting in one or more registered resources as the target for the connection. It then checks whether the resource is available through an agent that has a tunnel opened directly with the instance; if so, the connection is forwarded straight to the agent. Otherwise, if the resource is available through a Relay instance of the same Relay group, the connection is forwarded to one of the other Relay instances, which passes it along to the appropriate agent through a tunnel. If the connection request is for a resource that is not available through the Relay group, an error is returned to the client.

Client configuration

Use of the Relay service requires a desktop client (i.e. tsh) or the ssh-multiplexer service of the Machine ID agent (tbot). It's not possible to use a Relay through the Teleport web UI.

When using tsh, it's possible to specify the --relay option (or the TELEPORT_RELAY environment variable) with the hostname and port of the load balancer for the client port; if a port is not specified, port 443 is assumed. The load balancer for the client port of the Relay service can use any routing strategy, since all Relay instances serve client connections equally.
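
For example, the environment variable can stand in for the flag (the hostname below is illustrative):

# equivalent to passing --relay; port 443 is assumed when omitted
$ TELEPORT_RELAY=relay.example.com tsh ssh root@nodename
...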

When you specify --relay at tsh login time, the configured Relay address is saved in your tsh profile configuration. This address will be used for all later invocations unless you override it.

You can configure a default Relay address on a per-user basis using the default_relay_addr user trait. Like other user traits, it can be set directly in the user resource or mapped from your identity provider when using SSO.
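
As a sketch, the trait can be set on a local user resource like this (the host:port value format is an assumption; adjust for your environment):

kind: user
version: v2
metadata:
  name: username
spec:
  traits:
    # value format assumed to be host[:port]
    default_relay_addr: ["relay.example.com:443"]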

When using tsh ssh:

  • If a Relay address was specified at login time or if you pass --relay, the SSH connection will go through the specified Relay instead of the control plane.
  • If an address was specified at login time, you can use --relay=none to temporarily disable the Relay service.
  • Use --relay=default to use the default Relay address for the user, even if a different address was specified at login time.
  • When a Relay address is configured, only servers available through the Relay will be accessible. No connection will go through the control plane.
# specifying a relay address when logging in
$ tsh login --proxy proxy.example.com --relay relay.example.com
...
> Profile URL:        https://proxy.example.com:443
  Relay address:      relay.example.com
  Logged in as:       username
  Cluster:            proxy.example.com
  Roles:              access, auditor, editor
  Logins:             root, ubuntu
  Kubernetes:         enabled
  Kubernetes groups:  system:masters
  Valid until:        2025-10-22 00:48:07 +0200 CEST [valid for 12h0m0s]
  Extensions:         permit-agent-forwarding, permit-port-forwarding, permit-pty
# connections will use the relay by default
$ tsh ssh root@nodename
...
# using a specific relay for a single connection
$ tsh ssh --relay another-relay.example.com root@nodename
...
# using no relay
$ tsh ssh --relay none root@nodename
...

The ssh-multiplexer Machine ID service can be configured to use a Relay. As with tsh, if a Relay address is set in the service configuration, only servers available through the Relay will be accessible. To provide access to servers both through the Relay and through the control plane from the same Machine ID instance, configure two independent ssh-multiplexer services.
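
As a rough sketch, a tbot configuration with two independent ssh-multiplexer services could look like the following; the relay_addr field name is an assumption for illustration, so check the ssh-multiplexer reference for the exact option:

services:
  # multiplexer for connections through the Relay
  - type: ssh-multiplexer
    destination:
      type: directory
      path: /opt/machine-id/relay
    # hypothetical field name for the Relay address; see the reference
    relay_addr: relay.example.com:443
  # multiplexer for connections through the control plane
  - type: ssh-multiplexer
    destination:
      type: directory
      path: /opt/machine-id/proxy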
