Terraform Starter: Enroll Infrastructure
This guide is Part One of the Teleport Terraform starter guide. Read the overview for the scope and purpose of the Terraform starter guide.
This guide shows you how to use Terraform to enroll infrastructure resources with Teleport. You will:
- Deploy a pool of Teleport Agents running on virtual machines.
- Label resources enrolled by the Agents with `env:dev` and `env:prod` so that, in Part Two, you can configure Teleport roles to enable access to these resources.
How it works
An Agent is a Teleport instance configured to run one or more Teleport services in order to proxy infrastructure resources. For a brief architectural overview of how Agents run in a Teleport cluster, read the Introduction to Teleport Agents.
There are several methods you can use to join a Teleport Agent to your cluster, which we discuss in the Joining Services to your Cluster guide. In this guide, we will use the join token method, where the operator stores a secure token on the Auth Service, and an Agent presents the token in order to join a cluster.
No matter which join method you use, it will involve the following Terraform resources:
- Compute instances to run Teleport services
- A join token for each compute instance in the Agent pool
Prerequisites
- A running Teleport cluster version 16.2.0 or above. If you want to get started with Teleport, sign up for a free trial or set up a demo environment.

- The `tctl` admin tool and `tsh` client tool. Visit Installation for instructions on downloading `tctl` and `tsh`.
We recommend following this guide on a fresh Teleport demo cluster so you can see how an Agent pool works. After you are familiar with the setup, apply the lessons from this guide to protect your infrastructure. You can get started with a demo cluster using:
- A demo deployment on a Linux server
- A Teleport Enterprise (Cloud) trial
- An AWS, Google Cloud, or Azure account with permissions to create virtual machine instances.

- Cloud infrastructure that enables virtual machine instances to connect to the Teleport Proxy Service. For example:

  - An AWS subnet with a public NAT gateway or NAT instance
  - Google Cloud NAT
  - Azure NAT Gateway

  In minimum-security demo clusters, you can also configure the VM instances you deploy to have public IP addresses.

- Terraform v1.0.0 or higher.
- To check that you can connect to your Teleport cluster, sign in with `tsh login`, then verify that you can run `tctl` commands using your current credentials. For example:

  ```
  $ tsh login --proxy=teleport.example.com --user=user@example.com
  $ tctl status
  Cluster  teleport.example.com
  Version  16.4.12
  CA pin   sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678
  ```

  If you can connect to the cluster and run the `tctl status` command, you can use your current credentials to run subsequent `tctl` commands from your workstation. If you host your own Teleport cluster, you can also run `tctl` commands on the computer that hosts the Teleport Auth Service for full permissions.
Step 1/3. Import the Terraform module
In this step, you will download Terraform modules that show you how to get started enrolling Teleport resources. These modules are minimal examples of how Teleport Terraform resources work together to enable you to manage Teleport Agents.
After finishing this guide and becoming familiar with the setup, you should modify your Terraform configuration to accommodate your infrastructure in production.
- Navigate to your Terraform project directory.

- Fetch the Teleport code repository and copy the example Terraform configuration for this project into your current working directory. The following commands copy the appropriate child module for your cloud provider into a subdirectory called `cloud` and HCL configurations for Teleport resources into a subdirectory called `teleport`:

  AWS:

  ```
  git clone --depth=1 https://github.com/gravitational/teleport teleport-clone
  cp -R teleport-clone/examples/terraform-starter/agent-installation teleport
  cp -R teleport-clone/examples/terraform-starter/aws cloud
  rm -rf teleport-clone
  ```

  Google Cloud:

  ```
  git clone --depth=1 https://github.com/gravitational/teleport teleport-clone
  cp -R teleport-clone/examples/terraform-starter/agent-installation teleport
  cp -R teleport-clone/examples/terraform-starter/gcp cloud
  rm -rf teleport-clone
  ```

  Azure:

  ```
  git clone --depth=1 https://github.com/gravitational/teleport teleport-clone
  cp -R teleport-clone/examples/terraform-starter/agent-installation teleport
  cp -R teleport-clone/examples/terraform-starter/azure cloud
  rm -rf teleport-clone
  ```
- Create a file called `agent.tf` with the following content, which configures the child modules you downloaded in the previous step:

  ```hcl
  module "agent_installation_dev" {
    source      = "./teleport"
    agent_count = 1
    agent_labels = {
      env = "dev"
    }
    proxy_service_address = "teleport.example.com:443"
    teleport_edition      = "cloud"
    teleport_version      = "16.4.12"
  }

  module "agent_installation_prod" {
    source      = "./teleport"
    agent_count = 1
    agent_labels = {
      env = "prod"
    }
    proxy_service_address = "teleport.example.com:443"
    teleport_edition      = "cloud"
    teleport_version      = "16.4.12"
  }
  ```

  The `agent_deployment` block depends on your cloud provider.

  AWS:

  ```hcl
  module "agent_deployment" {
    region    = ""
    source    = "./cloud"
    subnet_id = ""
    userdata_scripts = concat(
      module.agent_installation_dev.userdata_scripts,
      module.agent_installation_prod.userdata_scripts
    )
  }
  ```

  Google Cloud:

  ```hcl
  module "agent_deployment" {
    gcp_zone       = "us-east1-b"
    google_project = ""
    source         = "./cloud"
    subnet_id      = ""
    userdata_scripts = concat(
      module.agent_installation_dev.userdata_scripts,
      module.agent_installation_prod.userdata_scripts
    )
  }
  ```

  Azure:

  ```hcl
  module "agent_deployment" {
    azure_resource_group = ""
    public_key_path      = ""
    region               = "East US"
    source               = "./cloud"
    subnet_id            = ""
    userdata_scripts = concat(
      module.agent_installation_dev.userdata_scripts,
      module.agent_installation_prod.userdata_scripts
    )
  }
  ```
Each of the `agent_installation_*` module blocks produces a number of installation scripts equal to the `agent_count` input. Each installation script runs the Teleport SSH Service with a Teleport join token, labeling the Agent with the key/value pairs specified in `agent_labels`. This configuration passes all installation scripts to the `agent_deployment` module in order to run them on virtual machines, launching one VM per script.

As you scale your Teleport usage, you can increase `agent_count` to ease the load on each Agent, as in the sketch below.
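For example, a minimal sketch (using the same inputs as above) that deploys three dev Agents instead of one:

```hcl
module "agent_installation_dev" {
  source = "./teleport"
  # Produce three installation scripts, so the agent_deployment module
  # launches three VMs, each running its own Teleport Agent.
  agent_count = 3
  agent_labels = {
    env = "dev"
  }
  proxy_service_address = "teleport.example.com:443"
  teleport_edition      = "cloud"
  teleport_version      = "16.4.12"
}
```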
Edit the `agent_installation_dev` and `agent_installation_prod` blocks in `agent.tf` as follows; a sketch of an edited block follows this list:
- Assign `proxy_service_address` to the host and HTTPS port of your Teleport Proxy Service, e.g., `mytenant.teleport.sh:443`. Make sure to include the port.

- Make sure `teleport_edition` matches your Teleport edition. Assign this to `oss`, `cloud`, or `enterprise`. The default is `oss`.

- If needed, change the value of `teleport_version` to the version of Teleport you want to run on your Agents. It must be either the same major version as your Teleport cluster or one major version behind.
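For instance, here is a minimal sketch of an edited block, assuming a Teleport Enterprise (Cloud) tenant at `mytenant.teleport.sh` running Teleport 16.4.12 (the hostname is a placeholder):

```hcl
module "agent_installation_dev" {
  source      = "./teleport"
  agent_count = 1
  agent_labels = {
    env = "dev"
  }
  # Host and HTTPS port of the Teleport Proxy Service, including the port.
  proxy_service_address = "mytenant.teleport.sh:443"
  # One of "oss", "cloud", or "enterprise"; must match your Teleport edition.
  teleport_edition = "cloud"
  # Same major version as your Teleport cluster, or one major version behind.
  teleport_version = "16.4.12"
}
```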
Edit the `module "agent_deployment"` block in `agent.tf` as follows:
- If you are deploying your instance in a minimum-security demo environment and do not have a NAT gateway, NAT instance, or other method for connecting your instances to the Teleport Proxy Service, modify the `module` block to associate a public IP address with each Agent instance:

  ```hcl
  insecure_direct_access = true
  ```

- Assign the remaining input variables depending on your cloud provider (a filled-in AWS example follows this list).

  AWS:

  - Assign `region` to the AWS region where you plan to deploy Teleport Agents, such as `us-east-1`.
  - For `subnet_id`, include the ID of the subnet where you will deploy Teleport Agents.

  Google Cloud:

  - Assign `google_project` to the name of your Google Cloud project and `gcp_zone` to the zone where you will deploy Agents, such as `us-east1-b`.
  - For `subnet_id`, include the name or URI of the Google Cloud subnet where you will deploy the Teleport Agents. The subnet URI has the format: `projects/PROJECT_NAME/regions/REGION/subnetworks/SUBNET_NAME`

  Azure:

  - Assign `azure_resource_group` to the name of the Azure resource group where you are deploying Teleport Agents.
  - The module uses `public_key_path` to pass validation, as Azure VMs must include an RSA public key with at least 2048 bits. Once the module deploys the VMs, a cloud-init script removes the public key and disables OpenSSH. Set this input to the path to a valid public SSH key.
  - Assign `region` to the Azure region where you plan to deploy Teleport Agents, such as `East US`.
  - For `subnet_id`, include the ID of the subnet where you will deploy Teleport Agents. Use the following format: `/subscriptions/SUBSCRIPTION/resourceGroups/RESOURCE_GROUP/providers/Microsoft.Network/virtualNetworks/NETWORK_NAME/subnets/SUBNET_NAME`
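For example, a filled-in `agent_deployment` block for AWS; the subnet ID shown here is a hypothetical placeholder, so replace the values with your own:

```hcl
module "agent_deployment" {
  source = "./cloud"
  # AWS region and subnet where the Agent VMs will run. The subnet ID is a
  # hypothetical placeholder.
  region    = "us-east-1"
  subnet_id = "subnet-0123456789abcdef0"
  # Only enable direct access in minimum-security demo environments without a
  # NAT gateway or NAT instance.
  insecure_direct_access = true
  userdata_scripts = concat(
    module.agent_installation_dev.userdata_scripts,
    module.agent_installation_prod.userdata_scripts
  )
}
```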
Step 2/3. Add provider configurations
In this step, you will configure the `terraform-starter` module for your Teleport cluster and cloud provider.

In your Terraform project directory, ensure that the file called `provider.tf` includes the following content for your cloud provider.

AWS:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }

    teleport = {
      source  = "terraform.releases.teleport.dev/gravitational/teleport"
      version = "~> 16.0"
    }
  }
}

provider "aws" {
  region = AWS_REGION
}

provider "teleport" {
  # Update addr to point to your Teleport Enterprise (managed) tenant URL's host:port
  addr = PROXY_SERVICE_ADDRESS
}
```

Replace the following placeholders:

| Placeholder | Value |
|---|---|
| `AWS_REGION` | The AWS region where you will deploy Agents, e.g., `us-east-2` |
| `PROXY_SERVICE_ADDRESS` | The host and port of the Teleport Proxy Service, e.g., `example.teleport.sh:443` |
Google Cloud:

```hcl
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 5.5.0"
    }

    teleport = {
      source  = "terraform.releases.teleport.dev/gravitational/teleport"
      version = "~> 16.0"
    }
  }
}

provider "google" {
  project = GOOGLE_CLOUD_PROJECT
  region  = GOOGLE_CLOUD_REGION
}

provider "teleport" {
  # Update addr to point to your Teleport Enterprise (managed) tenant URL's host:port
  addr = PROXY_SERVICE_ADDRESS
}
```

Replace the following placeholders:

| Placeholder | Value |
|---|---|
| `GOOGLE_CLOUD_PROJECT` | Google Cloud project where you will deploy Agents. |
| `GOOGLE_CLOUD_REGION` | Google Cloud region where you will deploy Agents. |
| `PROXY_SERVICE_ADDRESS` | The host and port of the Teleport Proxy Service, e.g., `example.teleport.sh:443` |
Azure:

```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0.0"
    }

    teleport = {
      source  = "terraform.releases.teleport.dev/gravitational/teleport"
      version = "~> 16.0"
    }
  }
}

provider "teleport" {
  # Update addr to point to your Teleport Cloud tenant URL's host:port
  addr = PROXY_SERVICE_ADDRESS
}

provider "azurerm" {
  features {}
}
```

Replace the following placeholder:

| Placeholder | Value |
|---|---|
| `PROXY_SERVICE_ADDRESS` | The host and port of the Teleport Proxy Service, e.g., `example.teleport.sh:443` |
Step 3/3. Verify the deployment
In this step, you will create a Teleport bot to apply your Terraform configuration. The bot will exist for one hour and will be granted the default `terraform-provider` role, which can edit every resource the Teleport Terraform provider supports.
- Navigate to your Terraform project directory and run the following command. The `eval` command assigns environment variables in your shell to credentials for the Teleport Terraform provider:

  ```
  $ eval "$(tctl terraform env)"
  🔑 Detecting if MFA is required
  This is an admin-level action and requires MFA to complete
  Tap any security key
  Detected security key tap
  ⚙️ Creating temporary bot "tctl-terraform-env-82ab1a2e" and its token
  🤖 Using the temporary bot to obtain certificates
  🚀 Certificates obtained, you can now use Terraform in this terminal for 1h0m0s
  ```

- Make sure your cloud provider credentials are available to Terraform using the standard approach for your organization.

- Apply the Terraform configuration:

  ```
  $ terraform init
  $ terraform apply
  ```

- Once the `apply` command completes, run the following command to verify that your Agents have deployed successfully. This command, which assumes that the Agents have the `Node` role, lists all Teleport SSH Service instances with the `role=agent-pool` label:

  ```
  $ tsh ls role=agent-pool
  Node Name                  Address    Labels
  -------------------------- ---------- ---------------
  ip-10-1-1-187.ec2.internal ⟵ Tunnel   role=agent-pool
  ip-10-1-1-24.ec2.internal  ⟵ Tunnel   role=agent-pool
  ```
Further reading: How the module works
In this section, we explain the resources configured in the `terraform-starter` module.
Join token
The `terraform-starter` module deploys one virtual machine instance for each Teleport Agent. Each Agent joins the cluster using a token. We create each token using the `teleport_provision_token` Terraform resource, specifying the token's value with a `random_string` resource:
resource "random_string" "token" {
count = var.agent_count
length = 32
override_special = "-.+"
}
resource "teleport_provision_token" "agent" {
count = var.agent_count
version = "v2"
spec = {
roles = ["Node"]
}
metadata = {
name = random_string.token[count.index].result
expires = timeadd(timestamp(), "1h")
}
}
When we apply the `teleport_provision_token` resources, the Teleport Terraform provider creates them on the Teleport Auth Service backend.
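If you want to cross-check the tokens after applying, here is a minimal sketch of an output block you could add to the module; this output is a hypothetical addition, not part of the starter module:

```hcl
# Hypothetical helper: expose the generated token names so you can compare
# them with the output of `tctl tokens ls`. Marked sensitive because the
# token name is the secret an Agent presents to join the cluster.
output "agent_token_names" {
  value     = [for t in teleport_provision_token.agent : t.metadata.name]
  sensitive = true
}
```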
User data script
Each Teleport Agent deployed by the `terraform-starter` module loads a user data script that creates a Teleport configuration file for the Agent:
```bash
#!/bin/bash

curl https://cdn.teleport.dev/install-v${teleport_version}.sh | bash -s ${teleport_version} ${teleport_edition}

echo ${token} > /var/lib/teleport/token

cat<<EOF >/etc/teleport.yaml
version: v3
teleport:
  auth_token: /var/lib/teleport/token
  proxy_server: ${proxy_service_address}
auth_service:
  enabled: false
proxy_service:
  enabled: false
ssh_service:
  enabled: true
  labels:
    role: agent-pool
    ${extra_labels}
EOF

systemctl restart teleport;

# Disable OpenSSH and any longstanding authorized keys.
systemctl disable --now ssh.service
find / -wholename "*/.ssh/authorized_keys" -delete
```
The configuration adds the `role: agent-pool` label to the Teleport SSH Service on each instance. This makes it easier to access hosts in the Agent pool later. It also adds the labels you configured using the `agent_labels` input of the module.

The script makes Teleport the only option for accessing Agent instances by disabling OpenSSH on startup and deleting any authorized public keys. The sketch below shows how a module like this typically renders such a script.
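To make the templating concrete, here is a minimal sketch of how an installation module can render one user data script per Agent with `templatefile`. The template filename and the exact label indentation are assumptions, not necessarily the starter module's actual code:

```hcl
# Sketch: render one user data script per Agent. Assumes the script above is
# saved as userdata.tpl in this module, with variable names matching the
# placeholders (${token}, ${proxy_service_address}, and so on).
locals {
  userdata_scripts = [
    for i in range(var.agent_count) : templatefile("${path.module}/userdata.tpl", {
      token                 = random_string.token[i].result
      proxy_service_address = var.proxy_service_address
      teleport_version      = var.teleport_version
      teleport_edition      = var.teleport_edition
      # Render agent_labels as YAML lines under the ssh_service labels key.
      extra_labels = join("\n    ", [for k, v in var.agent_labels : "${k}: ${v}"])
    })
  ]
}
```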
Virtual machine instances
Each cloud-specific child module of `terraform-starter` declares resources to deploy a virtual machine instance on your cloud provider.

AWS: `ec2-instance.tf` declares a data source for an Amazon Linux 2023 machine image and uses it to launch EC2 instances that run Teleport Agents with the `teleport_provision_token` resource:
data "aws_ami" "amazon_linux_2023" {
most_recent = true
filter {
name = "description"
values = ["Amazon Linux 2023 AMI*"]
}
filter {
name = "architecture"
values = ["x86_64"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
filter {
name = "owner-alias"
values = ["amazon"]
}
}
resource "aws_instance" "teleport_agent" {
count = length(var.userdata_scripts)
ami = data.aws_ami.amazon_linux_2023.id
instance_type = "t3.small"
subnet_id = var.subnet_id
user_data = var.userdata_scripts[count.index]
associate_public_ip_address = var.insecure_direct_access
// Adheres to security best practices
monitoring = true
metadata_options {
http_endpoint = "enabled"
http_tokens = "required"
}
root_block_device {
encrypted = true
}
}
Google Cloud: `gcp-instance.tf` declares Google Compute Engine instances that use the `teleport_provision_token` to run Teleport Agents:
```hcl
locals {
  // Google Cloud provides public IP addresses to instances when the
  // network_interface block includes an empty access_config, so use a dynamic
  // block to enable a public IP based on the insecure_direct_access input.
  access_configs = var.insecure_direct_access ? [{}] : []
}

resource "google_compute_instance" "teleport_agent" {
  count = length(var.userdata_scripts)

  name = "teleport-agent-${count.index}"
  zone = var.gcp_zone

  // Initialize the instance tags to an empty map to prevent errors when the
  // Teleport SSH Service fetches them.
  params {
    resource_manager_tags = {}
  }

  boot_disk {
    initialize_params {
      image = "family/ubuntu-2204-lts"
    }
  }

  network_interface {
    subnetwork = var.subnet_id

    // If the user enables insecure direct access, allocate a public IP to the
    // instance.
    dynamic "access_config" {
      for_each = local.access_configs
      content {}
    }
  }

  machine_type            = "e2-standard-2"
  metadata_startup_script = var.userdata_scripts[count.index]
}
```
Azure: `azure-instance.tf` declares an Azure virtual machine resource to run Teleport Agents using the `teleport_provision_token` resource, plus the required network interface for each instance.

Note that while Azure VM instances require a user account, this module declares a temporary one to pass validation, but uses Teleport to enable access to the instances:
```hcl
locals {
  username = "admin_temp"
}

resource "azurerm_network_interface" "teleport_agent" {
  count               = length(var.userdata_scripts)
  name                = "teleport-agent-ni-${count.index}"
  location            = var.region
  resource_group_name = var.azure_resource_group

  ip_configuration {
    name                          = "teleport_agent_ip_config"
    subnet_id                     = var.subnet_id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = var.insecure_direct_access ? azurerm_public_ip.agent[count.index].id : ""
  }
}

resource "azurerm_public_ip" "agent" {
  count               = var.insecure_direct_access ? length(var.userdata_scripts) : 0
  name                = "agentIP-${count.index}"
  resource_group_name = var.azure_resource_group
  location            = var.region
  allocation_method   = "Static"
}

resource "azurerm_virtual_machine" "teleport_agent" {
  count                 = length(var.userdata_scripts)
  name                  = "teleport-agent-${count.index}"
  location              = var.region
  resource_group_name   = var.azure_resource_group
  network_interface_ids = [azurerm_network_interface.teleport_agent[count.index].id]

  os_profile_linux_config {
    disable_password_authentication = true

    ssh_keys {
      key_data = file(var.public_key_path)
      // The only allowed path. See:
      // https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_machine
      path = "/home/${local.username}/.ssh/authorized_keys"
    }
  }

  os_profile {
    computer_name  = "teleport-agent-${count.index}"
    admin_username = local.username
    custom_data    = var.userdata_scripts[count.index]
  }

  vm_size = "Standard_B2s"

  storage_os_disk {
    name              = "teleport-agent-disk-${count.index}"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  storage_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-jammy"
    sku       = "22_04-lts"
    version   = "latest"
  }
}
```
Next steps: More options for enrolling resources
In Part One of the Terraform starter guide, we showed you how to use Terraform to deploy a pool of Teleport Agents in order to enroll infrastructure resources with Teleport. While this guide showed you how to enroll resources by declaring a Terraform resource for each infrastructure resource you want to enroll, you can protect more of your infrastructure with Teleport by:
- Configuring Auto-Discovery
- Configuring resource enrollment
Configure Auto-Discovery
For a more scalable approach to enrolling resources than the one shown in this guide, configure the Teleport Discovery Service to automatically detect resources in your infrastructure and enroll them with the Teleport Auth Service.

To configure the Teleport Discovery Service:

- Edit the userdata script run by the Agent instances managed in the Terraform starter module. Follow the Auto-Discovery guides to configure the Discovery Service and enable your Agents to proxy the resources that the service enrolls.
- Add the `Discovery` role to the join token resource you created earlier. In this guide, the join token only has the `Node` role.
- Add roles to the join token resource that correspond to the Agent services you want to use to proxy discovered resources. The roles to add depend on the resources you want to automatically enroll, based on the Auto-Discovery guides. A sketch of an expanded token follows this list.
Enroll resources manually
You can also enroll resources manually, instructing Agents to proxy specific endpoints in your infrastructure. For information about manual enrollment, read the documentation section for each kind of resource you would like to enroll:
- Databases
- Windows desktops
- Kubernetes clusters
- Linux servers
- Web applications and cloud provider APIs
Once you are familiar with the process of enrolling a resource manually, you can edit your Terraform module to:

- Add token roles: The token resource you created has only the `Node` role, and you can add roles to authorize your Agents to proxy additional kinds of resources. Consult a guide to enrolling resources manually to determine the role to add to the token.
- Change the userdata script to enable additional Agent services, giving your Agents additional kinds of infrastructure resources to proxy.
- Deploy dynamic resources: Consult the Terraform provider reference for Terraform resources that you can apply in order to enroll dynamic resources in your infrastructure. A sketch of one such resource follows this list.
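As an illustration, here is a hypothetical dynamic database resource; check the Terraform provider reference for the exact schema and required fields, and note that the name and URI below are placeholders:

```hcl
# Hypothetical example: enroll a PostgreSQL database dynamically. An Agent
# running the Teleport Database Service with matching labels would proxy it.
resource "teleport_database" "example" {
  version = "v3"
  metadata = {
    name = "example-postgres"
    labels = {
      env = "dev"
    }
  }
  spec = {
    protocol = "postgres"
    uri      = "db.example.com:5432"
  }
}
```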