
Get Started with the Teleport Terraform Provider

This guide provides an example of a Terraform module that manages Teleport resources in production. It helps you understand which Teleport resources to manage with Terraform in order to accomplish common Teleport setup tasks. You can use the example module as a starting point for managing a complete set of Teleport cluster resources.

How it works

This guide shows you how to use a Terraform module that serves two purposes: joining Teleport Agents to your cluster and configuring role-based access control for infrastructure resources.

Joining Teleport Agents

An Agent is a Teleport instance configured to run one or more Teleport services in order to proxy infrastructure resources (see Introduction to Teleport Agents). There are several methods you can use to join a Teleport Agent to your cluster, which we discuss in the Joining Services to your Cluster guide. In this guide, we will use the join token method, where the operator stores a secure token on the Auth Service, and an Agent presents the token in order to join a cluster.

The Terraform module enrolls resources such as Linux servers, databases, and Kubernetes clusters by deploying a pool of Teleport Agents on virtual machine instances. You can then declare dynamic infrastructure resources with Terraform or change the configuration file provided to each Agent.
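
Later, once the Terraform provider is configured, you can declare such dynamic resources directly in your project. The following is a minimal, illustrative sketch of a dynamic application resource; the name, labels, and URI are placeholders, and an Agent running the relevant service must be configured to pick the resource up:

resource "teleport_app" "grafana" {
  version = "v3"
  metadata = {
    name        = "grafana"
    description = "Example dashboard application proxied by an Agent"
    labels = {
      env = "dev"
    }
  }

  spec = {
    # Address the Agent uses to reach the application.
    uri = "http://localhost:3000"
  }
}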

Configuring role-based access control

The module also configures Teleport role-based access controls to provide different levels of access to the resources. It also configures Access Requests, available in Teleport Identity Governance, so that users authenticate with less privileged roles by default but can request access to more privileged roles. An authentication connector lets users authenticate to Teleport using a Single Sign-On provider.

Prerequisites

  • A running Teleport (v16.2.0 or higher) cluster. If you do not have one, read Getting Started.
tip

We recommend following this guide on a fresh Teleport demo cluster. After you are familiar with the setup, apply the lessons from this guide to protect your infrastructure. You can get started with a demo cluster by following the Getting Started guide mentioned above.

  • An AWS, Google Cloud, or Azure account with permissions to create virtual machine instances.

  • Cloud infrastructure that enables virtual machine instances to connect to the Teleport Proxy Service. For example:

    • An AWS subnet with a public NAT gateway or NAT instance
    • Google Cloud NAT
    • Azure NAT Gateway

    In minimum-security demo clusters, you can also configure the VM instances you deploy to have public IP addresses.

  • [Optional] If adding a Single Sign-On authentication connector, an identity provider that supports OIDC or SAML. You should have either:

    • The ability to modify SAML attributes or OIDC claims in your organization.
    • Pre-existing groups of users that you want to map to two levels of access: the ability to connect to dev resources; and the ability to review Access Requests for prod access.
  • [Optional] If adding a Single Sign-On authentication connector, an app registered with your IdP for your Teleport cluster. The Teleport documentation shows you how to set up your IdP to support the SAML or OIDC authentication connector. Read only the linked section, since those guides assume you are using tctl instead of Terraform to manage authentication connectors.

  • To help with troubleshooting, we recommend completing the setup steps in this guide with a local user that has the preset editor and auditor roles. In production, you can apply the lessons in this guide using a less privileged user.

  • Terraform v1.0.0 or higher.

Step 1/7. Import the Terraform module

To configure the terraform-starter module, you will clone the gravitational/teleport repository from GitHub and copy child modules into a project directory. After finishing this guide and becoming familiar with the setup, you can modify your Terraform configuration to accommodate your infrastructure in production.

  1. Navigate to your Terraform project directory.

  2. Fetch the Teleport code repository and copy the example Terraform configuration for this project into your current working directory.

    git clone --depth=1 https://github.com/gravitational/teleport teleport-clone --branch=branch/v18

Step 2/7. Add provider configurations

In this step, you will configure the terraform-starter module for your Teleport cluster and cloud provider.

In your Terraform project directory, ensure that the file called provider.tf includes the following content, depending on which cloud provider you plan to use to deploy Teleport Agents:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }

    teleport = {
      source  = "terraform.releases.teleport.dev/gravitational/teleport"
      version = "~> 18.0"
    }
  }
}

provider "aws" {
  region = AWS_REGION
}

provider "teleport" {
  # Update addr to point to your Teleport Enterprise (Cloud) tenant URL's host:port
  addr               = PROXY_SERVICE_ADDRESS
}

Replace the following placeholders:

  • AWS_REGION: The AWS region where you will deploy Agents, e.g., us-east-2
  • PROXY_SERVICE_ADDRESS: The host and port of the Teleport Proxy Service, e.g., example.teleport.sh:443
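
For example, a completed pair of provider blocks for a cluster at example.teleport.sh with Agents in us-east-2 (both illustrative values) might look like this:

provider "aws" {
  region = "us-east-2"
}

provider "teleport" {
  addr = "example.teleport.sh:443"
}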

Step 3/7. Configure Agent deployments

Configure your Terraform project to deploy Teleport Agents:

  1. Copy the appropriate child module for your cloud provider into a subdirectory called cloud and HCL configurations for Teleport resources into a subdirectory called teleport:

    cp -R teleport-clone/examples/terraform-starter/agent-installation teleport
    cp -R teleport-clone/examples/terraform-starter/aws cloud
  2. Create a file called agent.tf with the following content, which configures the child modules you downloaded in the previous step:

    module "agent_installation_dev" {
      source      = "./teleport"
      agent_count = 1
      agent_labels = {
        env = "dev"
      }
      proxy_service_address = "teleport.example.com:443"
      teleport_edition      = "cloud"
      teleport_version      = "18.2.2"
    }
    
    module "agent_installation_prod" {
      source      = "./teleport"
      agent_count = 1
      agent_labels = {
        env = "prod"
      }
      proxy_service_address = "teleport.example.com:443"
      teleport_edition      = "cloud"
      teleport_version      = "18.2.2"
    }
    
    module "agent_deployment" {
      region           = ""
      source           = "./cloud"
      subnet_id        = ""
      userdata_scripts = concat(
        module.agent_installation_dev.userdata_scripts,
        module.agent_installation_prod.userdata_scripts
      )
    }
    

Each of the agent_installation_* module blocks produces a number of installation scripts equal to the agent_count input. Each installation script runs the Teleport SSH Service with a Teleport join token, labeling the Agent with the key/value pairs specified in agent_labels. This configuration passes all installation scripts to the agent_deployment module in order to run them on virtual machines, launching one VM per script.

As you scale your Teleport usage, you can increase this count to ease the load on each Agent.
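
For example, to deploy three dev Agents instead of one, you could raise agent_count in the corresponding module block and leave the other inputs as configured earlier:

module "agent_installation_dev" {
  source      = "./teleport"
  agent_count = 3
  agent_labels = {
    env = "dev"
  }
  proxy_service_address = "teleport.example.com:443"
  teleport_edition      = "cloud"
  teleport_version      = "18.2.2"
}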

Edit the agent_installation_dev and agent_installation_prod blocks in agent.tf as follows:

  1. Assign proxy_service_address to the host and HTTPS port of your Teleport Proxy Service, e.g., mytenant.teleport.sh:443.

    tip

    Make sure to include the port.

  2. Make sure teleport_edition matches your Teleport edition. Assign this to oss, cloud, or enterprise. The default is oss.

  3. If needed, change the value of teleport_version to the version of Teleport you want to run on your Agents. It must be either the same major version as your Teleport cluster or one major version behind.

Edit the module "agent_deployment" block in agent.tf as follows:

  1. If you are deploying your instance in a minimum-security demo environment and do not have a NAT gateway, NAT instance, or other method for connecting your instances to the Teleport Proxy Service, modify the module block to associate a public IP address with each Agent instance:

    insecure_direct_access = true
    
  2. Assign the remaining input variables depending on your cloud provider, as shown in the completed example after this list:

    1. Assign region to the AWS region where you plan to deploy Teleport Agents, such as us-east-1.
    2. For subnet_id, include the ID of the subnet where you will deploy Teleport Agents.
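
Putting these inputs together, a completed agent_deployment block for AWS might look like the following sketch; the region, subnet ID, and direct-access setting are illustrative:

module "agent_deployment" {
  source    = "./cloud"
  region    = "us-east-1"
  subnet_id = "subnet-0123456789abcdef0"

  # Only set this in a minimum-security demo environment with no NAT.
  insecure_direct_access = true

  userdata_scripts = concat(
    module.agent_installation_dev.userdata_scripts,
    module.agent_installation_prod.userdata_scripts
  )
}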

Step 4/7. Configure role-based access control

After configuring Teleport Agent deployments, configure role-based access control so Teleport users can access only the infrastructure resources they need:

  1. Copy the env_role module and an authentication connector module into your project directory. Since users will authenticate to Teleport through your organization's identity provider (IdP), the connector module to copy depends on whether your organization uses OIDC or SAML; this example uses OIDC:

    cp -R teleport-clone/examples/terraform-starter/env_role env_role
    cp -R teleport-clone/examples/terraform-starter/oidc oidc

    Your project directory will include two new modules:

    • env_role: A module for a Teleport role that grants access to resources with a specific env label.
    • oidc: Teleport resources to configure an OIDC authentication connector and require that users authenticate with it.
  2. Create a file called rbac.tf that includes the following module blocks:

    module "oidc" {
      source               = "./oidc"
      oidc_claims_to_roles = []
      oidc_client_id       = ""
      oidc_connector_name  = "Log in with OIDC"
      oidc_redirect_url    = ""
      oidc_secret          = ""
      teleport_domain      = ""
    }
    
    module "prod_role" {
      source        = "./env_role"
      env_label     = "prod"
      principals    = {}
      request_roles = []
    }
    
    module "dev_role" {
      source        = "./env_role"
      env_label     = "dev"
      principals    = {}
      request_roles = [module.prod_role.role_name]
    }
    

Next, we will show you how to configure the two child modules, and walk you through the Terraform resources that they apply.

Step 5/7. Configure role principals

Together, the prod_role and dev_role modules you declared in rbac.tf create three Teleport roles:

  • prod_access: Allows access to infrastructure resources with the env:prod label.
  • dev_access: Allows access to infrastructure resources with the env:dev label, and Access Requests for the prod_access role.
  • prod_reviewer: Allows reviews of Access Requests for the prod_access role.

When Teleport users connect to resources in your infrastructure, they assume a principal, such as an operating system login or Kubernetes user, in order to interact with those resources. In this step, you will configure the prod_role and dev_role modules to grant access to principals in your infrastructure.

In rbac.tf, edit the prod_role and dev_role blocks so that the principals field contains a mapping, similar to the example below. Use the list of keys below the example to configure the principals.

module "prod_role" {
  principals = {
    KEY = "value"
  }
  // ...
}

// ...
  • aws_role_arns: AWS role ARNs the user can access when authenticating to an AWS API.
  • azure_identities: Azure identities the user can access when authenticating to an Azure API.
  • db_names: Names of databases the user can access within a database server.
  • db_roles: Roles the user can access on a database when they authenticate to a database server.
  • db_users: Users the user can access on a database when they authenticate to a database server.
  • gcp_service_accounts: Google Cloud service accounts the user can access when authenticating to a Google Cloud API.
  • kubernetes_groups: Kubernetes groups the Teleport Kubernetes Service can impersonate when proxying requests from the user.
  • kubernetes_users: Kubernetes users the Teleport Kubernetes Service can impersonate when proxying requests from the user.
  • logins: Operating system logins the user can access when authenticating to a Linux server.
  • windows_desktop_logins: Operating system logins the user can access when authenticating to a Windows desktop.

For example, the following configuration allows users with the dev_access role to assume the dev login on Linux hosts and the developers group on Kubernetes. prod users have more privileges and can assume the root login and system:masters Kubernetes group:

module "dev_role" {
  principals = {
    logins            = ["dev"]
    kubernetes_groups = ["developers"]
  }
  // ...
}

module "prod_role" {
  principals = {
    logins            = ["root"]
    kubernetes_groups = ["system:masters"]
  }
  // ...
}

Step 6/7. [Optional] Configure the single sign-on connector

In this step, you will configure your Terraform module to enable authentication through your organization's IdP. Configure the saml or oidc module you declared in rbac.tf by following the instructions below.

tip

You can skip this step for now if you want to assign the dev_access and prod_access roles to local Teleport users instead of single sign-on users. To do so, you can:

  • Import existing teleport_user resources and modify them to include the dev_access and prod_access roles (see the documentation).
  • Create a new teleport_user resource that includes the roles (see the documentation), as in the sketch below.
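
For example, a minimal teleport_user resource that assigns both roles might look like the following sketch; the user name alice is a placeholder:

resource "teleport_user" "alice" {
  version = "v2"
  metadata = {
    name = "alice"
  }

  spec = {
    # Reference the module outputs so Terraform creates the roles first.
    roles = [
      module.dev_role.role_name,
      module.prod_role.role_name
    ]
  }
}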

If you plan to skip this step, make sure to remove the module "saml" or module "oidc" block from your Terraform configuration.

  1. Configure the redirect URL (for OIDC) or assertion consumer service (for SAML):

    Set oidc_redirect_url to https://example.teleport.sh:443/v1/webapi/oidc/callback, replacing example.teleport.sh with the domain name of your Teleport cluster.

    Ensure that oidc_redirect_url matches the URL you configured with your IdP when registering your Teleport cluster as a relying party.

  2. After you register Teleport as a relying party, your identity provider will print information that you will use to configure the authentication connector. Fill in the information depending on your provider type:

    Fill in the oidc_client_id and oidc_secret with the client ID and secret returned by the IdP.

  3. Assign teleport_domain to the domain name of your Teleport Proxy Service, with no scheme or path, e.g., example.teleport.sh. The child module uses this to configure WebAuthn for local users. This way, you can authenticate as a local user as a fallback if you need to troubleshoot your single sign-on authentication connector.

  4. Configure role mapping for your authentication connector. When a user authenticates to Teleport through your organization's IdP, Teleport assigns roles to the user based on your connector's role mapping configuration:

    In this example, users with a group claim with the developers value receive the dev_access role, while users with a group claim with the value admins receive the prod_reviewer role:

         oidc_claims_to_roles = [
           {
             claim = "group"
             value = "developers"
             roles = [
               module.dev_role.role_name
             ]
           },
           {
             claim = "group"
             value = "admins"
             roles = module.dev_role.reviewer_role_names
           }
         ]
    

    Edit the claim value for each item in oidc_claims_to_roles to match the name of an OIDC claim you have configured on your IdP.
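
Putting the values from this step together, a completed module "oidc" block might look like the following sketch; the client ID, secret, and cluster domain are placeholders that you replace with values from your IdP and your Teleport cluster:

module "oidc" {
  source              = "./oidc"
  oidc_connector_name = "Log in with OIDC"
  # Issued by your IdP when you register Teleport as a relying party.
  oidc_client_id      = "example-client-id"
  oidc_secret         = "example-client-secret"
  oidc_redirect_url   = "https://example.teleport.sh:443/v1/webapi/oidc/callback"
  teleport_domain     = "example.teleport.sh"

  oidc_claims_to_roles = [
    {
      claim = "group"
      value = "developers"
      roles = [module.dev_role.role_name]
    },
    {
      claim = "group"
      value = "admins"
      roles = module.dev_role.reviewer_role_names
    }
  ]
}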

Step 7/7. Apply and verify

In this step, you will ensure that your Terraform configuration works as expected by applying it against your demo cluster.

Apply your Terraform configuration

In this step, you will create a Teleport bot to apply your Terraform configuration. The bot will exist for one hour and will be granted the default terraform-provider role that can edit every resource the TF provider supports.

  1. Navigate to your Terraform project directory and run the following command. The eval command assigns environment variables in your shell to credentials for the Teleport Terraform provider:

    eval "$(tctl terraform env)"
    πŸ”‘ Detecting if MFA is required
    This is an admin-level action and requires MFA to complete
    Tap any security key
    Detected security key tap
    βš™οΈ Creating temporary bot "tctl-terraform-env-82ab1a2e" and its token
    πŸ€– Using the temporary bot to obtain certificates
    πŸš€ Certificates obtained, you can now use Terraform in this terminal for 1h0m0s
  2. Make sure your cloud provider credentials are available to Terraform using the standard approach for your organization.

  3. Apply the Terraform configuration:

    terraform init
    terraform apply

Verify that Agents have deployed

Once the apply command completes, run the following command to verify that your Agents have deployed successfully. This command, which assumes that the Agents have the Node role, lists all Teleport SSH Service instances with the role=agent-pool label:

tsh ls role=agent-pool
Node Name                  Address    Labels
-------------------------- ---------- ---------------
ip-10-1-1-187.ec2.internal ⟡ Tunnel   role=agent-pool
ip-10-1-1-24.ec2.internal  ⟡ Tunnel   role=agent-pool

Verify access controls

  1. Open the Teleport Web UI in a browser and sign in to Teleport as a user on your IdP with the groups trait assigned to the value that you mapped to the role in your authentication connector. Your user should have the dev_access role.

    tip

    If you receive errors logging in using your authentication connector, log in as a local user with permissions to view the Teleport audit log. These permissions are available in the preset auditor role. Check for error messages in audit events with the "SSO Login" type.

  2. Request access to the prod_access role through the Web UI. Visit the "Access Requests" tab and click "New Request".

  3. Sign out of the Web UI and sign back in as a user in a group that you mapped to the prod_reviewer role. In the "Access Requests" tab, you should be able to see and resolve the Access Request you created.

Further reading: How the module works

In this section, we explain the resources configured in the terraform-starter module.

We encourage you to copy and customize these configurations in order to refine your settings and choose the best reusable interface for your environment.

Join token

The terraform-starter module deploys one virtual machine instance for each Teleport Agent. Each Agent joins the cluster using a token. We create each token using the teleport_provision_token Terraform resource, specifying the token's value with a random_string resource:

resource "random_string" "token" {
  count            = var.agent_count
  length           = 32
  override_special = "-.+"
}

resource "teleport_provision_token" "agent" {
  count   = var.agent_count
  version = "v2"
  spec = {
    roles = ["Node"]
  }
  metadata = {
    name    = random_string.token[count.index].result
    expires = timeadd(timestamp(), "1h")
  }
}

When we apply the teleport_provision_token resources, the Teleport Terraform provider creates them on the Teleport Auth Service backend.

User data script

Each Teleport Agent deployed by the terraform-starter module loads a user data script that creates a Teleport configuration file for the Agent:

#!/bin/bash

curl https://cdn.teleport.dev/install-v${teleport_version}.sh | bash -s ${teleport_version} ${teleport_edition}

echo ${token} > /var/lib/teleport/token
cat<<EOF >/etc/teleport.yaml
version: v3
teleport:
  auth_token: /var/lib/teleport/token
  proxy_server: ${proxy_service_address}
auth_service:
  enabled: false
proxy_service:
  enabled: false
ssh_service:
  enabled: true
  labels:
    role: agent-pool
    ${extra_labels}
EOF

systemctl restart teleport;

# Disable OpenSSH and any longstanding authorized keys.
systemctl disable --now ssh.service
find / -wholename "*/.ssh/authorized_keys" -delete


The configuration adds the role: agent-pool label to the Teleport SSH Service on each instance. This makes it easier to access hosts in the Agent pool later. It also adds the labels you configured using the agent_labels input of the module.

The script makes Teleport the only option for accessing Agent instances by disabling OpenSSH on startup and deleting any authorized public keys.

Virtual machine instances

Each cloud-specific child module of terraform-starter declares resources to deploy a virtual machine instance on your cloud provider:

ec2-instance.tf declares a data source for an Amazon Linux 2023 machine image and uses it to launch EC2 instances that run Teleport Agents with the teleport_provision_token resource:

data "aws_ami" "amazon_linux_2023" {
  most_recent = true

  filter {
    name   = "description"
    values = ["Amazon Linux 2023 AMI*"]
  }

  filter {
    name   = "architecture"
    values = ["x86_64"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  filter {
    name   = "owner-alias"
    values = ["amazon"]
  }
}

resource "aws_instance" "teleport_agent" {
  count                       = length(var.userdata_scripts)
  ami                         = data.aws_ami.amazon_linux_2023.id
  instance_type               = "t3.small"
  subnet_id                   = var.subnet_id
  user_data                   = var.userdata_scripts[count.index]
  associate_public_ip_address = var.insecure_direct_access

  // Adheres to security best practices
  monitoring = true

  metadata_options {
    http_endpoint = "enabled"
    http_tokens   = "required"
  }

  root_block_device {
    encrypted = true
  }
}

The env_access role

The env_role child module creates Teleport roles with the ability to access Teleport-protected resources with the env label:

resource "teleport_role" "env_access" {
  version = "v7"
  metadata = {
    name        = "${var.env_label}_access"
    description = "Can access infrastructure with label ${var.env_label}"
    labels = {
      env = var.env_label
    }
  }

  spec = {
    allow = {
      aws_role_arns          = lookup(var.principals, "aws_role_arns", [])
      azure_identities       = lookup(var.principals, "azure_identities", [])
      db_names               = lookup(var.principals, "db_names", [])
      db_users               = lookup(var.principals, "db_users", [])
      gcp_service_accounts   = lookup(var.principals, "gcp_service_accounts", [])
      kubernetes_groups      = lookup(var.principals, "kubernetes_groups", [])
      kubernetes_users       = lookup(var.principals, "kubernetes_users", [])
      logins                 = lookup(var.principals, "logins", [])
      windows_desktop_logins = lookup(var.principals, "windows_desktop_logins", [])

      request = {
        roles           = var.request_roles
        search_as_roles = var.request_roles
        thresholds = [{
          approve = 1
          deny    = 1
          filter  = "!equals(request.reason, \"\")"
        }]
      }

      app_labels = {
        env = [var.env_label]
      }

      db_labels = {
        env = [var.env_label]
      }

      node_labels = {
        env = [var.env_label]
      }

      kubernetes_labels = {
        env = [var.env_label]
      }

      windows_desktop_labels = {
        env = [var.env_label]
      }
    }
  }
}

output "role_name" {
  value = teleport_role.env_access.metadata.name
}


The role hardcodes an allow rule with the ability to access applications, databases, Linux servers, Kubernetes clusters, and Windows desktops with the user-configured env label.

Since we cannot predict which principals are available in your infrastructure, this role leaves the aws_role_arns, logins, and other principal-related role attributes for the user to configure.

The role also configures an allow rule that enables users to request access for the roles configured in the request_roles input variable.

An output prints the name of the role to allow us to create a dependency relationship between this role and an authentication connector.

The env_access_reviewer role

If var.request_roles in the env_access role is nonempty, the env_role module creates a role that can review those roles. This is a separate role to make permissions more composable:

locals {
  can_review_roles = join(", ", var.request_roles)
}

resource "teleport_role" "env_access_reviewer" {
  version = "v7"
  count   = length(var.request_roles) > 0 ? 1 : 0
  metadata = {
    name        = "${local.can_review_roles}_reviewer"
    description = "Can review Access Requests for: ${local.can_review_roles}"
  }

  spec = {
    allow = {
      review_requests = {
        roles = var.request_roles
      }
    }
  }
}

output "reviewer_role_names" {
  value = teleport_role.env_access_reviewer[*].metadata.name
}

As with the env_access role, there is an output to print the name of the env_access_reviewer role to create a dependency relationship with the authentication connector.

Configuring an authentication connector

The authentication connector resources are minimal. Beyond providing the attributes necessary to send and receive Teleport OIDC and SAML messages, the connectors configure role mappings based on user-provided values:

resource "teleport_oidc_connector" "main" {
  version = "v3"
  metadata = {
    name = var.oidc_connector_name
  }

  spec = {
    client_id       = var.oidc_client_id
    client_secret   = var.oidc_secret
    claims_to_roles = var.oidc_claims_to_roles
    redirect_url    = [var.oidc_redirect_url]
  }
}

Since the role mapping inputs are composite data types, we add a complex type definition when declaring the input variables for the oidc and saml child modules:

variable "teleport_domain" {
  type        = string
  description = "Domain name of your Teleport cluster (to configure WebAuthn)"
}

variable "oidc_claims_to_roles" {
  type = list(object({
    claim = string
    roles = list(string)
    value = string
  }))
  description = "Mappings of OIDC claims to lists of Teleport role names"
}

variable "oidc_client_id" {
  type        = string
  description = "The OIDC identity provider's client ID"
}

variable "oidc_connector_name" {
  type        = string
  description = "Name of the Teleport OIDC connector resource"
}

variable "oidc_redirect_url" {
  type        = string
  description = "Redirect URL for the OIDC provider."
}

variable "oidc_secret" {
  type        = string
  description = "Secret for configuring the Teleport OIDC connector. Available from your identity provider."
}


For each authentication connector, we declare a cluster authentication preference that enables the connector. The cluster authentication preference enables local user login with WebAuthn as a secure fallback in case you need to troubleshoot the single sign-on provider.

resource "teleport_auth_preference" "main" {
  version = "v2"
  metadata = {
    description = "Require authentication via the ${var.oidc_connector_name} connector"
  }

  spec = {
    connector_name   = teleport_oidc_connector.main.metadata.name
    type             = "oidc"
    allow_local_auth = true
    second_factor    = "webauthn"
    webauthn = {
      rp_id = var.teleport_domain
    }
  }
}

Next steps

In this guide, we showed you how to use Terraform to deploy a pool of Teleport Agents in order to enroll infrastructure resources with Teleport. While you can enroll resources dynamically by declaring a Terraform resource for each piece of infrastructure you want to protect, you can protect more of your infrastructure with Teleport by:

  • Configuring Auto-Discovery
  • Configuring resource enrollment

Configure Auto-Discovery

For a more scalable approach to enrolling resources than the one shown in this guide, configure the Teleport Discovery Service to automatically detect resources in your infrastructure and enroll them with the Teleport Auth Service.

To configure the Teleport Discovery Service:

  1. Edit the userdata script run by the Agent instances managed in the Terraform starter module. Follow the Auto-Discovery guides to configure the Discovery Service and enable your Agents to proxy the resources that the service enrolls.
  2. Add the Discovery role to the join token resource you created earlier, as shown in the sketch after this list. In this guide, the join token only has the Node role.
  3. Add roles to the join token resource that correspond to the Agent services you want to use to proxy discovered resources. The roles to add depend on the resources you want to automatically enroll, as described in the Auto-Discovery guides.
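
For example, building on the teleport_provision_token resource shown in the "Join token" section of this guide, a token that also authorizes the Discovery Service and the Database Service might look like this sketch:

resource "teleport_provision_token" "agent" {
  count   = var.agent_count
  version = "v2"
  spec = {
    # "Node" runs the SSH Service, "Discovery" runs the Discovery Service, and
    # "Db" lets the Agent proxy databases that the Discovery Service enrolls.
    roles = ["Node", "Discovery", "Db"]
  }
  metadata = {
    name    = random_string.token[count.index].result
    expires = timeadd(timestamp(), "1h")
  }
}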

Enroll resources manually

You can also enroll resources manually, instructing Agents to proxy specific endpoints in your infrastructure. For information about manual enrollment, read the documentation section for each kind of resource you would like to enroll.

Once you are familiar with the process of enrolling a resource manually, you can edit your Terraform module to:

  1. Add token roles: The token resource you created has only the Node role, and you can add roles to authorize your Agents to proxy additional kinds of resources. Consult a guide to enrolling resources manually to determine the role to add to the token.
  2. Change the userdata script to enable additional Agent services and configure additional infrastructure resources for your Agents to proxy.
  3. Deploy dynamic resources: Consult the Terraform provider reference for Terraform resources that you can apply in order to enroll dynamic resources in your infrastructure.
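
As an example of the last point, a dynamic database resource declared with the provider might look like the following sketch; the database name, protocol, and URI are placeholders, and an Agent running the Teleport Database Service with matching labels is needed to proxy it:

resource "teleport_database" "example" {
  version = "v3"
  metadata = {
    name = "example-postgres"
    labels = {
      env = "dev"
    }
  }

  spec = {
    protocol = "postgres"
    uri      = "db.example.internal:5432"
  }
}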

Fine-tune your configuration

Now that you have configured RBAC in your Terraform demo cluster, fine-tune your setup by reading the comprehensive Terraform provider reference.