Machine ID Configuration Reference
This reference documents the various options that can be configured in the
`tbot` configuration file. The configuration file offers more control than
configuring `tbot` using CLI parameters alone.
To configure `tbot` to use a configuration file, specify the path with the
`-c` flag:
$ tbot -c ./tbot.yaml
In this reference, the term *artifact* refers to an item that `tbot` writes to
a destination as part of the process of generating an output. Examples of
artifacts include configuration files, certificates, and cryptographic key
material. Artifacts are usually files, but the term *file* is avoided because
a destination is not required to be a filesystem.
From Teleport 14, `tbot` supports the `v2` configuration version.
# version specifies the version of the configuration file in use. `v2` is the
# most recent and should be used for all new bots. The rest of this example
# is in the `v2` schema.
version: v2
# debug enables verbose logging to stderr. If unspecified, this defaults to
# false.
debug: true
# auth_server specifies the address of the Auth Service instance that `tbot`
# should connect to. Prefer specifying `proxy_server`, the Proxy Service
# address, where possible.
auth_server: "teleport.example.com:3025"
# proxy_server specifies the address of the Teleport Proxy Service that `tbot` should
# connect to.
# It is recommended to use the address of your Teleport Proxy Service, or, if using
# Teleport Cloud, the address of your Teleport Cloud instance.
proxy_server: "teleport.example.com:443" # or "example.teleport.sh:443" for Teleport Cloud
# certificate_ttl specifies how long certificates generated by `tbot` should
# live for. It should be a positive, numeric value with an `m` (for minutes) or
# `h` (for hours) suffix. By default, this value is `1h`.
# This has a maximum value of `24h`.
certificate_ttl: "1h"
# renewal_interval specifies how often `tbot` should aim to renew the
# outputs it has generated. It should be a positive, numeric value with an
# `m` (for minutes) or `h` (for hours) suffix. The default value is `20m`.
# This value must be lower than `certificate_ttl`.
# This value is ignored when `tbot` is running in one-shot mode.
renewal_interval: "20m"
# oneshot configures `tbot` to exit immediately after generating the outputs.
# The default value is `false`. A value of `true` is useful in ephemeral environments, like
# CI/CD.
oneshot: false
# onboarding is a group of configuration options that control how `tbot` will
# authenticate with the Teleport cluster.
onboarding:
  # token specifies which join token, configured in the Teleport cluster,
  # should be used to join the Teleport cluster.
  #
  # This can also be an absolute path to a file containing the value you wish
  # to be used.
  # File path example:
  # token: /var/lib/teleport/tokenjoin
  token: "00000000000000000000000000000000"
  # join_method must be the join method associated with the specified token
  # above. This setting should match the value output when creating the bot
  # using `tctl`.
  #
  # Supported values include:
  # - `token`
  # - `azure`
  # - `gcp`
  # - `circleci`
  # - `github`
  # - `gitlab`
  # - `iam`
  # - `ec2`
  # - `kubernetes`
  # - `spacelift`
  # - `tpm`
  join_method: "token"
  # ca_pins are used to validate the identity of the Teleport Auth Service on
  # first connect. This should not be specified when using Teleport Cloud or
  # connecting through a Teleport Proxy.
  ca_pins:
    - "sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678"
    - "sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678"
  # ca_path is used to specify where a CA file can be found that can be used
  # to validate the identity of the Teleport Auth Service on first connect.
  # This should not be specified when using Teleport Cloud or connecting
  # through a Teleport Proxy. The ca_pins option should be preferred over
  # ca_path.
  ca_path: "/path/to/ca.pem"
# storage specifies the destination that `tbot` should use to store its
# internal state. This state is sensitive, and you should ensure that the
# destination you specify here can only be accessed by `tbot`.
#
# If unspecified, storage is set to a directory destination with a path
# of `/var/lib/teleport/bot`.
#
# See the full list of supported destinations and their configuration options
# under the Destinations section of this reference page.
storage:
  type: directory
  path: /var/lib/teleport/bot
# outputs specifies what artifacts `tbot` should generate and renew when it
# runs.
#
# See the full list of supported outputs and their configuration options
# under the Outputs section of this reference page.
outputs:
  - type: identity
    destination:
      type: directory
      path: /opt/machine-id
# services specifies which `tbot` sub-services should be enabled and how they
# should be configured.
#
# See the full list of supported services and their configuration options
# under the Services section of this reference page.
services:
  - type: example
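The `certificate_ttl` and `renewal_interval` settings above interact: because certificates outlive the renewal interval, `tbot` can miss some renewals before its outputs expire. The sketch below is an editorial illustration of that headroom, assuming certificates are valid for exactly `certificate_ttl` after each successful renewal:

```python
# Rough sketch (assumption: tbot renews on a fixed interval and certificates
# are valid for exactly certificate_ttl after each successful renewal).
from datetime import timedelta

def missed_renewals_tolerated(certificate_ttl: timedelta,
                              renewal_interval: timedelta) -> int:
    """Number of consecutive renewal attempts that can fail before the
    previously issued certificates expire."""
    return int(certificate_ttl / renewal_interval) - 1

# With the defaults of certificate_ttl "1h" and renewal_interval "20m",
# roughly two consecutive failed renewals can be tolerated.
print(missed_renewals_tolerated(timedelta(hours=1), timedelta(minutes=20)))
```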
If no configuration file is provided, a simple configuration is used based on
the provided CLI flags. Given the following sample command from the output of
`tctl bots add ...`:
$ tbot start \
--destination-dir=./tbot-user \
--token=00000000000000000000000000000000 \
--ca-pin=sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678 \
--proxy-server=example.teleport.sh:443
`tbot` uses a configuration equivalent to the following:
proxy_server: example.teleport.sh:443
onboarding:
  join_method: "token"
  token: "00000000000000000000000000000000"
  ca_pins:
    - "sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678"
storage:
  type: directory
  path: /var/lib/teleport/bot
outputs:
  - type: identity
    destination:
      type: directory
      path: ./tbot-user
Outputs
Outputs define what actions `tbot` should take when it runs. They describe
the format of the certificates to be generated, the roles used to generate
them, and the destination where they should be written.
There are multiple types of output. Select the one that is most appropriate
for your intended use case.
identity
The `identity` output can be used to authenticate:
- SSH access to your Teleport servers, using `tsh`, OpenSSH, and tools like Ansible.
- Administrative actions against your cluster using tools like `tsh` or `tctl`.
- Management of Teleport resources using the Teleport Terraform provider.
- Access to the Teleport API using the Teleport Go SDK.
See the Getting Started guide to see the `identity` output used in context.
# type specifies the type of the output. For the identity output, this will
# always be `identity`.
type: identity
# The following configuration fields are available across most output types.
# destination specifies where the output should write any generated artifacts
# such as certificates and configuration files.
#
# See the full list of supported destinations and their configuration options
# under the Destinations section of this reference page.
destination:
  type: directory
  path: /opt/machine-id
# roles specifies the roles that should be included in the certificates
# generated by the output. These roles must be roles that the bot has been
# granted permission to impersonate.
#
# If no roles are specified, all roles the bot is allowed to impersonate are
# used.
roles:
  - editor
application
The `application` output is used to generate credentials that can be used to
access applications that have been configured with Teleport.
See the Machine ID with Application Access guide to see the `application`
output used in context.
# type specifies the type of the output. For the application output, this will
# always be `application`.
type: application
# app_name specifies the application name, as configured in your Teleport
# cluster, that `tbot` should generate credentials for.
# This field must be specified.
app_name: grafana
# The following configuration fields are available across most output types.
# destination specifies where the output should write any generated artifacts
# such as certificates and configuration files.
#
# See the full list of supported destinations and their configuration options
# under the Destinations section of this reference page.
destination:
  type: directory
  path: /opt/machine-id
# roles specifies the roles that should be included in the certificates
# generated by the output. These roles must be roles that the bot has been
# granted permission to impersonate.
#
# If no roles are specified, all roles the bot is allowed to impersonate are
# used.
roles:
  - editor
database
The `database` output is used to generate credentials that can be used to
access databases that have been configured with Teleport.
See the Machine ID with Database Access guide to see the `database` output
used in context.
# type specifies the type of the output. For the database output, this will
# always be `database`.
type: database
# service is the name of the database server, as configured in Teleport, that
# the output should generate credentials for. This field must be specified.
service: my-postgres-server
# database is the name of the specific database on the specified database
# server to generate credentials for. This field doesn't need to be specified
# for database types that don't support multiple individual databases.
database: my-database
# username is the name of the user on the specified database server to
# generate credentials for. This field doesn't need to be specified
# for database types that don't have users.
username: my-user
# format specifies the format to use for output artifacts. If
# unspecified, a default format is used. See the table titled "Supported
# formats" below for the full list of supported values.
format: tls
# The following configuration fields are available across most output types.
# destination specifies where the output should write any generated artifacts
# such as certificates and configuration files.
#
# See the full list of supported destinations and their configuration options
# under the Destinations section of this reference page.
destination:
  type: directory
  path: /opt/machine-id
# roles specifies the roles that should be included in the certificates
# generated by the output. These roles must be roles that the bot has been
# granted permission to impersonate.
#
# If no roles are specified, all roles the bot is allowed to impersonate are
# used.
roles:
  - editor
Supported formats
You can provide the following values to the `format` configuration field in
the `database` output type:

| Format | Description |
|---|---|
| Unspecified | Provides a certificate in `tlscert`, a private key in `key`, and the CA in `teleport-database-ca.crt`. This is compatible with most clients and databases. |
| `mongo` | Provides `mongo.crt` and `mongo.cas`. This is designed to be used with MongoDB clients. |
| `cockroach` | Provides `cockroach/node.key`, `cockroach/node.crt`, and `cockroach/ca.crt`. This is designed to be used with CockroachDB clients. |
| `tls` | Provides `tls.key`, `tls.crt`, and `tls.cas`. This is for generic clients that require these specific file extensions. |
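As a sketch of how the `tls` format artifacts might be consumed, the following hypothetical helper (not part of `tbot`; the host, port, user, and database names are placeholders) assembles a libpq-style connection string pointing at the files written to a directory destination:

```python
# Hypothetical sketch: build a libpq key/value connection string from the
# artifacts the `database` output writes in `tls` format.
from pathlib import Path

def libpq_conn_string(destination: str, host: str, port: int,
                      user: str, dbname: str) -> str:
    d = Path(destination)
    return (
        f"host={host} port={port} user={user} dbname={dbname} "
        f"sslmode=verify-full "
        f"sslcert={d / 'tls.crt'} "
        f"sslkey={d / 'tls.key'} "
        f"sslrootcert={d / 'tls.cas'}"
    )

print(libpq_conn_string("/opt/machine-id", "localhost", 5432,
                        "alice", "postgres"))
```

The resulting string can be passed to `psql` or a libpq-based driver; the exact host and port depend on how you reach the database (for example, through a local tunnel).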
kubernetes
The `kubernetes` output is used to generate credentials that can be used to
access Kubernetes clusters that have been configured with Teleport.
It writes a `kubeconfig.yaml` to the output destination, which can be used
with `kubectl`.
See the Machine ID with Kubernetes Access guide to see the `kubernetes`
output used in context.
# type specifies the type of the output. For the kubernetes output, this will
# always be `kubernetes`.
type: kubernetes
# kubernetes_cluster is the name of the Kubernetes cluster, as configured in
# Teleport, that the output should generate credentials and a kubeconfig for.
# This field must be specified.
kubernetes_cluster: my-cluster
# The following configuration fields are available across most output types.
# destination specifies where the output should write any generated artifacts
# such as certificates and configuration files.
#
# See the full list of supported destinations and their configuration options
# under the Destinations section of this reference page.
destination:
  type: directory
  path: /opt/machine-id
# roles specifies the roles that should be included in the certificates
# generated by the output. These roles must be roles that the bot has been
# granted permission to impersonate.
#
# If no roles are specified, all roles the bot is allowed to impersonate are
# used.
roles:
  - editor
ssh_host
The `ssh_host` output is used to generate the artifacts required to configure
an OpenSSH server with Teleport so that Teleport users can connect to it.
The output generates the following artifacts:
- `ssh_host-cert.pub`: an SSH certificate signed by the Teleport host certificate authority.
- `ssh_host`: the private key associated with the SSH host certificate.
- `ssh_host-user-ca.pub`: an export of the Teleport user certificate authority in an OpenSSH-compatible format.
# type specifies the type of the output. For the ssh host output, this will
# always be `ssh_host`.
type: ssh_host
# principals is the list of host names to include in the host certificates.
# These names should match the names that clients use to connect to the host.
principals:
  - host.example.com
# The following configuration fields are available across most output types.
# destination specifies where the output should write any generated artifacts
# such as certificates and configuration files.
#
# See the full list of supported destinations and their configuration options
# under the Destinations section of this reference page.
destination:
  type: directory
  path: /opt/machine-id
# roles specifies the roles that should be included in the certificates
# generated by the output. These roles must be roles that the bot has been
# granted permission to impersonate.
#
# If no roles are specified, all roles the bot is allowed to impersonate are
# used.
roles:
  - editor
spiffe-svid
The `spiffe-svid` output is used to generate a SPIFFE X.509 SVID and write it
to a configured destination.
The output generates the following artifacts:
- `svid.pem`: the X.509 SVID.
- `svid.key`: the private key associated with the X.509 SVID.
- `bundle.pem`: the X.509 bundle that contains the trust domain CAs.
See Workload Identity for more information on how to use SPIFFE SVIDs.
# type specifies the type of the output. For the SPIFFE SVID output, this will
# always be `spiffe-svid`.
type: spiffe-svid
# svid specifies the properties of the SPIFFE SVID that should be requested.
svid:
  # path specifies the path element of the SPIFFE ID that should be requested.
  path: /svc/foo
  # sans specifies optional Subject Alternative Names (SANs) to include in the
  # generated X.509 SVID. If omitted, no SANs are included.
  sans:
    # dns specifies the DNS SANs. If omitted, no DNS SANs are included.
    dns:
      - foo.svc.example.com
    # ip specifies the IP SANs. If omitted, no IP SANs are included.
    ip:
      - 10.0.0.1
# The following configuration fields are available across most output types.
# destination specifies where the output should write any generated artifacts
# such as certificates and configuration files.
#
# See the full list of supported destinations and their configuration options
# under the Destinations section of this reference page.
destination:
  type: directory
  path: /opt/machine-id
# roles specifies the roles that should be included in the certificates
# generated by the output. These roles must be roles that the bot has been
# granted permission to impersonate.
#
# If no roles are specified, all roles the bot is allowed to impersonate are
# used.
roles:
  - editor
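For context, the requested `path` becomes the path element of the issued SPIFFE ID, which is scoped to the cluster's trust domain. The sketch below is illustrative only; the trust domain name used is a placeholder:

```python
# Illustrative only: a SPIFFE ID is the trust domain plus a path element.
# "example.teleport.sh" is a placeholder trust domain.
def spiffe_id(trust_domain: str, path: str) -> str:
    if not path.startswith("/"):
        raise ValueError("path must begin with '/'")
    return f"spiffe://{trust_domain}{path}"

print(spiffe_id("example.teleport.sh", "/svc/foo"))
```

With `path: /svc/foo`, the output above would request an SVID whose SPIFFE ID is `spiffe://example.teleport.sh/svc/foo`.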
Services
Services are configurable long-lived components that run within `tbot`. Unlike
outputs, they do not necessarily generate artifacts. Typically, services
provide supporting functionality for machine-to-machine access, for example,
opening tunnels or providing APIs.
spiffe-workload-api
The `spiffe-workload-api` service opens a listener for a service that
implements the SPIFFE Workload API. This service is used to provide SPIFFE
SVIDs to workloads.
See Workload Identity for more information on the SPIFFE Workload API.
# type specifies the type of the service. For the SPIFFE Workload API service,
# this will always be `spiffe-workload-api`.
type: spiffe-workload-api
# listen specifies the address that the service should listen on.
#
# Two types of listener are supported:
# - TCP: `tcp://<address>:<port>`
# - Unix socket: `unix:///<path>`
listen: unix:///opt/machine-id/workload.sock
# attestors allows Workload Attestation to be configured for this Workload
# API.
attestors:
  # kubernetes is configuration for the Kubernetes Workload Attestor. See
  # the Kubernetes Workload Attestor section for more information.
  kubernetes:
    # enabled specifies whether the Kubernetes Workload Attestor should be
    # enabled. If unspecified, this defaults to false.
    enabled: true
    # kubelet holds configuration relevant to the Kubernetes Workload
    # Attestor's interaction with the Kubelet API.
    kubelet:
      # read_only_port is the port on which the Kubelet API is exposed for
      # read-only operations. Since Kubernetes 1.16, the read-only port is
      # typically disabled by default and secure_port should be used instead.
      read_only_port: 10255
      # secure_port is the port on which the attestor should connect to the
      # Kubelet secure API. If unspecified, this defaults to `10250`. This is
      # mutually exclusive with `read_only_port`.
      secure_port: 10250
      # token_path is the path to the token file that the Kubelet API client
      # should use to authenticate with the Kubelet API. If unspecified, this
      # defaults to `/var/run/secrets/kubernetes.io/serviceaccount/token`.
      token_path: "/var/run/secrets/kubernetes.io/serviceaccount/token"
      # ca_path is the path to the CA file that the Kubelet API client should
      # use to validate the Kubelet API server's certificate. If unspecified,
      # this defaults to `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`.
      ca_path: "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
      # skip_verify is used to disable verification of the Kubelet API
      # server's certificate. If unspecified, this defaults to false.
      #
      # If set to true, the value specified in ca_path is ignored.
      #
      # This is useful in cases where the Kubelet API server has not been
      # issued a certificate signed by the Kubernetes cluster's CA, which is
      # fairly common with a number of Kubernetes distributions.
      skip_verify: true
      # anonymous is used to disable authentication with the Kubelet API. If
      # unspecified, this defaults to false. If set, the token_path field is
      # ignored.
      anonymous: false
# svids specifies the SPIFFE SVIDs that the Workload API should provide.
svids:
  # path specifies the path element of the SPIFFE ID that should be
  # requested.
  - path: /svc/foo
    # hint is a free-form string which can be used to help workloads
    # determine which SVID to select when multiple are available. If omitted,
    # no hint is included.
    hint: my-hint
    # sans specifies optional Subject Alternative Names (SANs) to include in
    # the generated X.509 SVID. If omitted, no SANs are included.
    sans:
      # dns specifies the DNS SANs. If omitted, no DNS SANs are included.
      dns:
        - foo.svc.example.com
      # ip specifies the IP SANs. If omitted, no IP SANs are included.
      ip:
        - 10.0.0.1
    # rules specifies a list of workload attestation rules. At least one of
    # these rules must be satisfied by the workload in order for it to
    # receive this SVID.
    #
    # If no rules are specified, the SVID will be issued to all workloads
    # that connect to this service.
    rules:
      # unix is a group of workload attestation criteria that are available
      # when the workload is running on the same host and is connected to
      # the Workload API using a Unix socket.
      #
      # If any of the criteria in this group are specified, then workloads
      # that do not connect using a Unix socket will not receive this SVID.
      - unix:
          # uid is the ID of the user that the workload process must be
          # running as to receive this SVID.
          #
          # If unspecified, the UID is not checked.
          uid: 1000
          # pid is the ID that the workload process must have to receive
          # this SVID.
          #
          # If unspecified, the PID is not checked.
          pid: 1234
          # gid is the ID of the primary group that the workload process
          # must be running as to receive this SVID.
          #
          # If unspecified, the GID is not checked.
          gid: 50
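To illustrate how rules combine (an assumption based on the description above: rules in the list are alternatives, while the criteria inside a single `unix` block must all match), an SVID could be restricted to either of two workloads like so:

```yaml
rules:
  # Issued if the workload runs as UID 1000 AND with primary group GID 50...
  - unix:
      uid: 1000
      gid: 50
  # ...or, alternatively, if the workload is the exact process with PID 1234.
  - unix:
      pid: 1234
```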
Envoy SDS
The `spiffe-workload-api` service endpoint also implements the Envoy SDS API.
This allows it to act as a source of certificates and certificate authorities
for the Envoy proxy.
As a forward proxy, Envoy can be used to attach an X.509 SVID to an outgoing connection from a workload that is not SPIFFE-enabled.
As a reverse proxy, Envoy can be used to terminate mTLS connections from SPIFFE-enabled clients. Envoy can validate that the client has presented a valid X.509 SVID and perform enforcement of authorization policies based on the SPIFFE ID contained within the SVID.
When acting as a reverse proxy for certain protocols, Envoy can be configured to attach a header indicating the identity of the client to a request before forwarding it to the service. This can then be used by the service to make authorization decisions based on the client's identity.
When configuring Envoy to use the SDS API exposed by the `spiffe-workload-api`
service, three additional special names can be used to aid configuration:
- `default`: `tbot` will return the default SVID for the workload.
- `ROOTCA`: `tbot` will return the trust bundle for the trust domain that the workload is a member of.
- `ALL`: `tbot` will return the trust bundle for the trust domain that the workload is a member of, as well as the trust bundles of any trust domains it is federated with.
The following is an example Envoy configuration that sources a certificate
and trust bundle from the `spiffe-workload-api` service listening on
`unix:///opt/machine-id/workload.sock`. It requires that a connecting client
presents a valid SPIFFE SVID, and forwards this information to the backend
service in the `x-forwarded-client-cert` header.
node:
  id: "my-envoy-proxy"
  cluster: "my-cluster"
static_resources:
  listeners:
    - name: test_listener
      enable_reuse_port: false
      address:
        socket_address:
          address: 0.0.0.0
          port_value: 8080
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                common_http_protocol_options:
                  idle_timeout: 1s
                forward_client_cert_details: sanitize_set
                set_current_client_cert_details:
                  uri: true
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: my_service
                      domains: ["*"]
                      routes:
                        - match:
                            prefix: "/"
                          route:
                            cluster: my_service
                http_filters:
                  - name: envoy.filters.http.router
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
          transport_socket:
            name: envoy.transport_sockets.tls
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
              common_tls_context:
                # configure the certificate that the reverse proxy should present.
                tls_certificate_sds_secret_configs:
                  # `name` can be replaced with the desired SPIFFE ID if multiple
                  # SVIDs are available.
                  - name: "default"
                    sds_config:
                      resource_api_version: V3
                      api_config_source:
                        api_type: GRPC
                        transport_api_version: V3
                        grpc_services:
                          envoy_grpc:
                            cluster_name: tbot_agent
                # combined validation context "melds" two validation contexts
                # together. This is handy for extending the validation context
                # from the SDS source.
                combined_validation_context:
                  default_validation_context:
                    # You can use match_typed_subject_alt_names to configure
                    # rules that only allow connections from specific SPIFFE IDs.
                    match_typed_subject_alt_names: []
                  validation_context_sds_secret_config:
                    name: "ALL" # This can also be replaced with the trust domain name
                    sds_config:
                      resource_api_version: V3
                      api_config_source:
                        api_type: GRPC
                        transport_api_version: V3
                        grpc_services:
                          envoy_grpc:
                            cluster_name: tbot_agent
  clusters:
    # my_service is the example service that Envoy will forward traffic to.
    - name: my_service
      type: strict_dns
      load_assignment:
        cluster_name: my_service
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: 127.0.0.1
                      port_value: 8090
    - name: tbot_agent
      http2_protocol_options: {}
      load_assignment:
        cluster_name: tbot_agent
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    pipe:
                      # Configure the path to the socket that `tbot` is
                      # listening on.
                      path: /opt/machine-id/workload.sock
database-tunnel
The `database-tunnel` service opens a listener for a service that tunnels
connections to a database server.
The tunnel authenticates connections for the client, meaning that any
application which can connect to the listener will be able to connect to the
database as the specified user. For this reason, we heavily recommend using
the Unix socket listener type and configuring the permissions of the socket
to ensure that only the intended applications can connect.
# type specifies the type of the service. For the database tunnel service, this
# will always be `database-tunnel`.
type: database-tunnel
# listen specifies the address that the service should listen on.
#
# Two types of listener are supported:
# - TCP: `tcp://<address>:<port>`
# - Unix socket: `unix:///<path>`
listen: tcp://127.0.0.1:25432
# service is the name of the database server, as configured in Teleport, that
# the service should open a tunnel to.
service: postgres-docker
# database is the name of the specific database on the specified database
# service.
database: postgres
# username is the name of the user on the specified database server to open a
# tunnel for.
username: postgres
application-tunnel
The `application-tunnel` service opens a listener that tunnels connections to
an application in Teleport. It supports both HTTP and TCP applications. This
is useful for applications which cannot be configured to use client
certificates, when using TCP applications, or where a L7 load balancer sits
in front of your Teleport proxies.
The tunnel authenticates connections for the client, meaning that any client
that connects to the listener will be able to access the application. For
this reason, ensure that the listener is only accessible by the intended
clients by using the Unix socket listener or binding to `127.0.0.1`.
# type specifies the type of the service. For the application tunnel service,
# this will always be `application-tunnel`.
type: application-tunnel
# listen specifies the address that the service should listen on.
#
# Two types of listener are supported:
# - TCP: `tcp://<address>:<port>`
# - Unix socket: `unix:///<path>`
listen: tcp://127.0.0.1:8084
# app_name is the name of the application, as configured in Teleport, that
# the service should open a tunnel to.
app_name: my-application
ssh-multiplexer
The `ssh-multiplexer` service opens a listener for a high-performance local
SSH multiplexer. This is designed for use cases which create a large number
of SSH connections using Teleport, for example, Ansible.
This differs from using the `identity` output for SSH in a few ways:
- The `tbot` instance running the `ssh-multiplexer` service must be running on the same host as the SSH client.
- The `ssh-multiplexer` service is designed to be a long-running background service and cannot be used in one-shot mode. It must remain running for SSH connections to be established and to continue working.
- Resource consumption is significantly reduced by multiplexing SSH connections through a smaller number of upstream connections to the Teleport Proxy Service.
Additionally, the `ssh-multiplexer` opens a socket that implements the SSH
agent protocol. This allows the SSH client to authenticate without writing
the sensitive private key to disk.
By default, the `ssh-multiplexer` service outputs an `ssh_config` which uses
`tbot` itself as the ProxyCommand. You can further reduce the resource
consumption of SSH connections by installing and specifying the
`fdpass-teleport` binary.
# type specifies the type of the service. For the SSH multiplexer service,
# this will always be `ssh-multiplexer`.
type: ssh-multiplexer
# destination specifies where the tunnel should be opened and any artifacts
# should be written. It must be of type `directory`.
destination:
  type: directory
  path: /foo
# enable_resumption specifies whether the multiplexer should negotiate
# session resumption. This allows SSH connections to survive network
# interruptions. It does increase the memory resources used per connection.
#
# If unspecified, this defaults to true.
enable_resumption: true
# proxy_command specifies the command that should be used as the ProxyCommand
# in the generated SSH configuration.
#
# If unspecified, the ProxyCommand will be the currently running binary of tbot
# itself.
proxy_command:
  - /usr/local/bin/fdpass-teleport
# proxy_templates_path specifies a path to a proxy templates configuration file
# which should be used when resolving the Teleport node to connect to. This
# file must be accessible by the long-lived tbot process running the
# ssh-multiplexer.
#
# If unspecified, proxy templates will not be used.
proxy_templates_path: /etc/my-proxy-templates.yaml
Once configured, `tbot` will create the following artifacts in the specified
destination:
- `ssh_config`: an SSH configuration file that configures OpenSSH to use the multiplexer and agent.
- `known_hosts`: the known hosts file used by OpenSSH to validate a server's identity.
- `v1.sock`: the Unix socket that the multiplexer listens on.
- `agent.sock`: the Unix socket that the SSH agent listens on.
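For example, plain OpenSSH can use the generated file directly with `ssh -F /opt/machine-id/ssh_config user@host`. The Ansible stanza below is a sketch; the path assumes the `/opt/machine-id` destination used elsewhere in this reference:

```ini
# ansible.cfg (sketch): route Ansible's SSH connections through the
# multiplexer by loading the ssh_config that `tbot` generated.
[ssh_connection]
ssh_args = -F /opt/machine-id/ssh_config
```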
Using the SSH multiplexer programmatically
To use the SSH multiplexer programmatically, your SSH client library will
need to support one of two things:
- The ability to use a ProxyCommand with FDPass. If so, you can use the `ssh_config` file generated by `tbot` to configure the SSH client.
- The ability to accept an open socket to use as the connection to the SSH server. You will then need to manually connect to the socket and send the multiplexer request.
The `v1.sock` Unix domain socket implements the v1 Teleport SSH multiplexer
protocol. The client must first send a short request message indicating the
desired target host and port, terminated with a null byte. The multiplexer
will then begin to forward traffic to the target host and port, and the
client can then make an SSH connection.
Example in Python (Paramiko)
import os
import socket

import paramiko

host = "ubuntu.example.teleport.sh"
username = "root"
port = 3022
directory_destination = "/opt/machine-id"

# Connect to the multiplexer's Unix domain socket
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect(os.path.join(directory_destination, "v1.sock"))
# Send the connection request specifying the server you wish to connect to
sock.sendall(f"{host}:{port}\x00".encode("utf-8"))

# We must set the env var as Paramiko does not make this configurable...
os.environ["SSH_AUTH_SOCK"] = os.path.join(directory_destination, "agent.sock")

ssh_config = paramiko.SSHConfig()
with open(os.path.join(directory_destination, "ssh_config")) as f:
    ssh_config.parse(f)

ssh_client = paramiko.SSHClient()
# Paramiko does not support known_hosts with CAs: https://github.com/paramiko/paramiko/issues/771
# Therefore, we must disable host key checking
ssh_client.set_missing_host_key_policy(paramiko.WarningPolicy())
ssh_client.connect(
    hostname=host,
    port=port,
    username=username,
    sock=sock,
)

stdin, stdout, stderr = ssh_client.exec_command("hostname")
print(stdout.read().decode())
Example in Go
package main

import (
	"fmt"
	"net"
	"path/filepath"

	"golang.org/x/crypto/ssh"
	"golang.org/x/crypto/ssh/agent"
	"golang.org/x/crypto/ssh/knownhosts"
)

func main() {
	host := "ubuntu.example.teleport.sh"
	username := "root"
	directoryDestination := "/opt/machine-id"

	// Set up the agent and known hosts
	agentConn, err := net.Dial(
		"unix", filepath.Join(directoryDestination, "agent.sock"),
	)
	if err != nil {
		panic(err)
	}
	defer agentConn.Close()
	agentClient := agent.NewClient(agentConn)
	hostKeyCallback, err := knownhosts.New(
		filepath.Join(directoryDestination, "known_hosts"),
	)
	if err != nil {
		panic(err)
	}

	// Create the SSH config
	sshConfig := &ssh.ClientConfig{
		Auth: []ssh.AuthMethod{
			ssh.PublicKeysCallback(agentClient.Signers),
		},
		User:            username,
		HostKeyCallback: hostKeyCallback,
	}

	// Dial the Unix domain socket and send the multiplexing request
	conn, err := net.Dial(
		"unix", filepath.Join(directoryDestination, "v1.sock"),
	)
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	_, err = fmt.Fprintf(conn, "%s:0\x00", host)
	if err != nil {
		panic(err)
	}

	sshConn, sshChan, sshReq, err := ssh.NewClientConn(
		conn,
		// The port here doesn't matter because the multiplexer has already
		// established the connection.
		fmt.Sprintf("%s:22", host),
		sshConfig,
	)
	if err != nil {
		panic(err)
	}
	sshClient := ssh.NewClient(sshConn, sshChan, sshReq)
	defer sshClient.Close()

	sshSess, err := sshClient.NewSession()
	if err != nil {
		panic(err)
	}
	defer sshSess.Close()
	out, err := sshSess.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
Destinations
A destination is somewhere that tbot can read and write artifacts. Destinations are used in two places in the tbot configuration:
- Specifying where tbot should store its internal state.
- Specifying where an output should write its generated artifacts.
Destinations come in multiple types. Usually, the directory type is the most appropriate.
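As a sketch, assuming the v2 schema shown earlier, both uses might look like this in tbot.yaml (the identity output is illustrative; substitute the outputs you actually generate):

```yaml
# Internal state is stored in a directory destination.
storage:
  type: directory
  path: /var/lib/teleport/bot

# Each output writes its generated artifacts to its own destination.
outputs:
  - type: identity
    destination:
      type: directory
      path: /opt/machine-id
```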
directory
The directory destination type stores artifacts as files in a specified directory.
# type specifies the type of the destination. For the directory destination,
# this will always be `directory`.
type: directory
# path specifies the path to the directory that this destination should write
# to. This directory should already exist, or `tbot init` should be used to
# create it with the correct permissions.
path: /opt/machine-id
# symlinks configures the behaviour of symlink attack prevention.
# Requires Linux 5.6+.
# Supported values:
# * try-secure (default): Attempt to securely read and write certificates
# without symlinks, but fall back (with a warning) to insecure read
# and write if the host doesn't support this.
# * secure: Attempt to securely read and write certificates, with a hard error
# if unsupported.
# * insecure: Quietly allow symlinks in paths.
symlinks: try-secure
# acls configures whether Linux Access Control List (ACL) setup should occur for
# this destination.
# Requires Linux with a file system that supports ACLs.
# Supported values:
# * try (default on Linux): Attempt to use ACLs, warn at runtime if ACLs
# are configured but invalid.
# * off (default on non-Linux): Do not attempt to use ACLs.
# * required: Always use ACLs, produce a hard error at runtime if ACLs
# are invalid.
acls: try
memory
The memory destination type stores artifacts in the process memory. When the process exits, nothing is persisted. This destination type is most suitable for ephemeral environments, but can also be used for testing.
Configuration:
# type specifies the type of the destination. For the memory destination, this
# will always be `memory`.
type: memory
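For example, a memory destination can be used for tbot's internal state so that nothing is written to disk (a sketch; note that no state survives a restart, so the bot must be able to rejoin the cluster on startup):

```yaml
storage:
  type: memory
```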
kubernetes_secret
The kubernetes_secret destination type stores artifacts in a Kubernetes secret. This allows them to be mounted into other containers deployed in Kubernetes.
Prerequisites:
- tbot must be running in Kubernetes with at most one replica. If using a Deployment, the Recreate strategy must be used to ensure only one instance exists at any time. This is because multiple tbot agents configured with the same secret will compete to write to it, and the secret may be left in an inconsistent state or the tbot agents may fail to write.
- The tbot pod must be configured with a service account that allows it to read and write the configured secret.
- The POD_NAMESPACE environment variable must be set to the namespace that tbot is running in. This is best achieved with the Downward API.
The secret does not need to exist in advance; tbot will create it if it does not exist. If the secret already exists, tbot will overwrite any other keys within it.
Configuration:
# type specifies the type of the destination. For the kubernetes_secret
# destination, this will always be `kubernetes_secret`.
type: kubernetes_secret
# name specifies the name of the Kubernetes Secret to write the artifacts to.
# This must be in the same namespace that `tbot` is running in.
name: my-secret
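Once written, the artifacts can be mounted into another pod as files, as with any Kubernetes Secret (a sketch; the pod, container, and image names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: my-app:latest
      volumeMounts:
        # Artifacts from the secret appear as files under /opt/machine-id.
        - name: machine-id
          mountPath: /opt/machine-id
          readOnly: true
  volumes:
    - name: machine-id
      secret:
        secretName: my-secret
```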
Bot resource
The bot resource is used to manage Machine ID Bots and to configure the access granted to a Bot.
kind: bot
version: v1
metadata:
# name is a unique identifier for the bot in the cluster.
name: robot
spec:
# roles is a list of roles that the bot should be able to generate credentials
# for.
roles:
- editor
# traits controls the traits applied to the Bot user. These are fed into the
# role templating system and can be used to grant a specific Bot access to
# specific resources without the creation of a new role.
traits:
- name: logins
values:
- root
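The traits above can then be consumed by a role template, granting this specific Bot access without creating a bespoke role per Bot. A sketch, assuming a role whose logins field references the logins trait (the role name and labels are illustrative):

```yaml
kind: role
version: v5
metadata:
  name: bot-ssh-access
spec:
  allow:
    # Expands to the values of the Bot user's `logins` trait, e.g. `root`.
    logins: ["{{internal.logins}}"]
    node_labels:
      "*": "*"
```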
You can apply a file containing YAML that defines a bot resource using tctl create -f ./bot.yaml.