Part 2: Configure Teleport RBAC with Terraform
This guide is Part Two of the Teleport Terraform starter guide. Read the overview for the scope and purpose of the Terraform starter guide.
In Part One of this series, we showed you how to use Terraform to deploy Teleport Agents in order to enroll infrastructure resources with your Teleport cluster. While configuring Agents, you labeled them based on their environment, with some falling under `dev` and others under `prod`.

In this guide, you will configure your Teleport cluster to manage access to resources with the `dev` and `prod` labels in order to implement the principle of least privilege.
How it works
This guide shows you how to create:

- A role that can access `prod` resources.
- A role that can access `dev` resources and request access to `prod` resources.
- An authentication connector that allows users to sign into your organization's identity provider and automatically gain access to `dev` resources.

In this setup, the only way to access `prod` resources is with an Access Request, meaning that there are no standing credentials for accessing `prod` resources that an attacker can compromise.
Prerequisites
This guide assumes that you have completed Part 1: Enroll Infrastructure with Terraform.
- A running Teleport cluster version 16.2.0 or above. If you want to get started with Teleport, sign up for a free trial or set up a demo environment.

- The `tctl` admin tool and `tsh` client tool. Visit Installation for instructions on downloading `tctl` and `tsh`.

- Resources enrolled with Teleport that include the `dev` and `prod` labels. We show you how to enroll these resources using Terraform in Part One.

- An identity provider that supports OIDC or SAML. You should have either:

  - The ability to modify SAML attributes or OIDC claims in your organization.
  - Pre-existing groups of users that you want to map to two levels of access: the ability to connect to `dev` resources, and the ability to review Access Requests for `prod` access.

  We recommend following this guide on the same Teleport demo cluster you used for Part One. After you are familiar with the setup, you can apply the lessons from this guide to manage RBAC with Terraform.

- Terraform v1.0.0 or higher.

- To check that you can connect to your Teleport cluster, sign in with `tsh login`, then verify that you can run `tctl` commands using your current credentials. For example:

  $ tsh login --proxy=teleport.example.com [email protected]
  $ tctl status
  # Cluster teleport.example.com
  # Version 17.0.0-dev
  # CA pin sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678

  If you can connect to the cluster and run the `tctl status` command, you can use your current credentials to run subsequent `tctl` commands from your workstation. If you host your own Teleport cluster, you can also run `tctl` commands on the computer that hosts the Teleport Auth Service for full permissions.

- To help with troubleshooting, we recommend completing the setup steps in this guide with a local user that has the preset `editor` and `auditor` roles. In production, you can apply the lessons in this guide using a less privileged user.
Step 1/4. Import Terraform modules
In this step, you will download Terraform modules that show you how to get started managing Teleport RBAC. These modules are minimal examples of how Teleport Terraform resources work together to enable you to manage Teleport roles and authentication connectors.
After finishing this guide and becoming familiar with the setup, you should modify your Terraform configuration to accommodate your infrastructure in production.
- Navigate to the directory where you organized files for your root Terraform module.

- Fetch the Teleport code repository and copy the example Terraform configuration for this project into your current working directory.

  Since you will enable users to authenticate to Teleport through your organization's identity provider (IdP), the modules depend on whether your organization uses OIDC or SAML to communicate with services:

  - OIDC

    $ git clone --depth=1 https://github.com/gravitational/teleport teleport-clone
    $ cp -R teleport-clone/examples/terraform-starter/env_role env_role
    $ cp -R teleport-clone/examples/terraform-starter/oidc oidc
    $ rm -rf teleport-clone

  - SAML

    $ git clone --depth=1 https://github.com/gravitational/teleport teleport-clone
    $ cp -R teleport-clone/examples/terraform-starter/env_role env_role
    $ cp -R teleport-clone/examples/terraform-starter/saml saml
    $ rm -rf teleport-clone

  Your project directory will include two new modules:

  - OIDC

    Name | Description
    ---|---
    `env_role` | A module for a Teleport role that grants access to resources with a specific `env` label.
    `oidc` | Teleport resources to configure an OIDC authentication connector and require that users authenticate with it.

  - SAML

    Name | Description
    ---|---
    `env_role` | A module for a Teleport role that grants access to resources with a specific `env` label.
    `saml` | Teleport resources to configure a SAML authentication connector and require that users authenticate with it.

- Create a file called `rbac.tf` that includes the following `module` blocks:

- OIDC
module "oidc" {
source = "./oidc"
oidc_claims_to_roles = []
oidc_client_id = ""
oidc_connector_name = "Log in with OIDC"
oidc_redirect_url = ""
oidc_secret = ""
teleport_domain = ""
}
module "prod_role" {
source = "./env_role"
env_label = "prod"
principals = {}
request_roles = []
}
module "dev_role" {
source = "./env_role"
env_label = "dev"
principals = {}
request_roles = [module.prod_role.role_name]
}module "saml" {
source = "./saml"
saml_connector_name = "Log in with SAML"
saml_attributes_to_roles = []
saml_acs = ""
saml_entity_descriptor = ""
teleport_domain = ""
}
module "prod_role" {
source = "./env_role"
env_label = "prod"
principals = {}
request_roles = []
}
module "dev_role" {
source = "./env_role"
env_label = "dev"
principals = {}
request_roles = [module.prod_role.role_name]
}
Next, we will show you how to configure the two child modules, and walk you through the Terraform resources that they apply.
Step 2/4. Configure role principals
Together, the `prod_role` and `dev_role` modules you declared in Step 1 create three Teleport roles:

Role | Description
---|---
`prod_access` | Allows access to infrastructure resources with the `env:prod` label.
`dev_access` | Allows access to infrastructure resources with the `env:dev` label, and Access Requests for the `prod_access` role.
`prod_reviewer` | Allows reviews of Access Requests for the `prod_access` role.
When Teleport users connect to resources in your infrastructure, they assume a principal, such as an operating system login or Kubernetes user, in order to interact with those resources. In this step, you will configure the `prod_role` and `dev_role` modules to grant access to principals in your infrastructure.

In `rbac.tf`, edit the `prod_role` and `dev_role` blocks so that the `principals` field contains a mapping, similar to the example below. Use the list of keys below the example to configure the principals.
module "prod_role" {
principals = {
KEY = "value"
}
// ...
}
// ...
Key | Description
---|---
`aws_role_arns` | AWS role ARNs the user can access when authenticating to an AWS API.
`azure_identities` | Azure identities the user can access when authenticating to an Azure API.
`db_names` | Names of databases the user can access within a database server.
`db_roles` | Roles the user can access on a database when they authenticate to a database server.
`db_users` | Users the user can access on a database when they authenticate to a database server.
`gcp_service_accounts` | Google Cloud service accounts the user can access when authenticating to a Google Cloud API.
`kubernetes_groups` | Kubernetes groups the Teleport Kubernetes Service can impersonate when proxying requests from the user.
`kubernetes_users` | Kubernetes users the Teleport Kubernetes Service can impersonate when proxying requests from the user.
`logins` | Operating system logins the user can access when authenticating to a Linux server.
`windows_desktop_logins` | Operating system logins the user can access when authenticating to a Windows desktop.
For example, the following configuration allows users with the `dev_access` role to assume the `dev` login on Linux hosts and the `developers` group on Kubernetes. Users with the `prod_access` role have more privileges and can assume the `root` login and the `system:masters` Kubernetes group:
module "dev_role" {
principals = {
logins = ["dev"]
kubernetes_groups = ["developers"]
}
// ...
}
module "prod_role" {
principals = {
logins = ["root"]
kubernetes_groups = ["system:masters"]
}
// ...
}
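If your `dev` environment also includes databases enrolled with Teleport, you can grant database principals through the same map. The following is a minimal sketch; the `reader` database user and `metrics` database name are placeholders for values from your own infrastructure:

module "dev_role" {
  principals = {
    logins   = ["dev"]
    db_users = ["reader"]  # placeholder database user
    db_names = ["metrics"] # placeholder database name
  }
  // ...
}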
Step 3/4. [Optional] Configure the single sign-on connector
In this step, you will configure your Terraform module to enable authentication through your organization's IdP. Configure the `saml` or `oidc` module you declared in Step 1 by following the instructions.
You can skip this step for now if you want to assign the `dev_access` and `prod_access` roles to local Teleport users instead of single sign-on users. To do so, you can:

- Import existing `teleport_user` resources and modify them to include the `dev_access` and `prod_access` roles (see the documentation).
- Create a new `teleport_user` resource that includes the roles (see the documentation); a sketch of this option appears below.

If you plan to skip this step, make sure to remove the `module "saml"` or `module "oidc"` block from your Terraform configuration.
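If you choose the second option, a minimal sketch of a `teleport_user` resource might look like the following. The username `alice` is a placeholder, and the role list references the outputs of the modules you declared in Step 1; check the Terraform provider reference for the full set of user fields:

resource "teleport_user" "alice" {
  version = "v2"
  metadata = {
    name = "alice" # placeholder: the name of the local Teleport user
  }

  spec = {
    # Grant access to dev resources. You could append reviewer roles the same way.
    roles = [module.dev_role.role_name]
  }
}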
- Register your Teleport cluster with your IdP as a relying party. The instructions depend on your IdP.

  The following guides show you how to set up your IdP to support the SAML or OIDC authentication connector. Read only the linked section, since these guides assume you are using `tctl` instead of Terraform to manage authentication connectors:

- Configure the redirect URL (for OIDC) or assertion consumer service (for SAML):

  - OIDC

    Set `oidc_redirect_url` to `https://example.teleport.sh:443/v1/webapi/oidc/callback`, replacing `example.teleport.sh` with the domain name of your Teleport cluster.

    Ensure that `oidc_redirect_url` matches the URL you configured with your IdP when registering your Teleport cluster as a relying party.

  - SAML

    Set `saml_acs` to `https://example.teleport.sh:443/v1/webapi/saml/acs`, replacing `example.teleport.sh` with the domain name of your Teleport cluster.

    Ensure that `saml_acs` matches the URL you configured with your IdP when registering your Teleport cluster as a relying party.
- After you register Teleport as a relying party, your identity provider will print information that you will use to configure the authentication connector. Fill in the information depending on your provider type:

  - OIDC

    Fill in `oidc_client_id` and `oidc_secret` with the client ID and secret returned by the IdP.

  - SAML

    Assign `saml_entity_descriptor` to the contents of the XML document that contains the SAML entity descriptor for the IdP.

- Assign `teleport_domain` to the domain name of your Teleport Proxy Service, with no scheme or path, e.g., `example.teleport.sh`. The child module uses this to configure WebAuthn for local users. This way, you can authenticate as a local user as a fallback if you need to troubleshoot your single sign-on authentication connector.
- Configure role mapping for your authentication connector. When a user authenticates to Teleport through your organization's IdP, Teleport assigns roles to the user based on your connector's role mapping configuration:

- OIDC

In this example, users with a `group` claim with the `developers` value receive the `dev_access` role, while users with a `group` claim with the value `admins` receive the `prod_reviewer` role:

oidc_claims_to_roles = [
  {
    claim = "group"
    value = "developers"
    roles = [
      module.dev_role.role_name
    ]
  },
  {
    claim = "group"
    value = "admins"
    roles = module.dev_role.reviewer_role_names
  }
]

Edit the `claim` value for each item in `oidc_claims_to_roles` to match the name of an OIDC claim you have configured on your IdP.

- SAML

In this example, users with a `group` attribute with the `developers` value receive the `dev_access` role, while users with a `group` attribute with the value `admins` receive the `prod_reviewer` role:

saml_attributes_to_roles = [
  {
    name  = "group"
    value = "developers"
    roles = [
      module.dev_role.role_name
    ]
  },
  {
    name  = "group"
    value = "admins"
    roles = module.dev_role.reviewer_role_names
  }
]
Step 4/4. Apply and verify changes
In this step, you will create a Teleport bot to apply your Terraform configuration. The bot will exist for one hour and will be granted the default `terraform-provider` role, which can edit every resource that the Teleport Terraform provider supports.
- Navigate to your Terraform project directory and run the following command. The `eval` command assigns environment variables in your shell to credentials for the Teleport Terraform provider:

  $ eval "$(tctl terraform env)"
  🔑 Detecting if MFA is required
  This is an admin-level action and requires MFA to complete
  Tap any security key
  Detected security key tap
  ⚙️ Creating temporary bot "tctl-terraform-env-82ab1a2e" and its token
  🤖 Using the temporary bot to obtain certificates
  🚀 Certificates obtained, you can now use Terraform in this terminal for 1h0m0s

- Make sure your cloud provider credentials are available to Terraform using the standard approach for your organization.

- Apply the Terraform configuration:

  $ terraform init
  $ terraform apply
- Open the Teleport Web UI in a browser and sign in to Teleport as a user on your IdP with the `groups` trait assigned to the value that you mapped to the `dev_access` role in your authentication connector. Your user should have the `dev_access` role.

  Tip: If you receive errors logging in using your authentication connector, log in as a local user with permissions to view the Teleport audit log. These permissions are available in the preset `auditor` role. Check for error messages in events with the "SSO Login" type.

- Request access to the `prod_access` role through the Web UI. Visit the "Access Requests" tab and click "New Request".
- Sign out of the Web UI and sign in again as a user in a group that you mapped to the `prod_reviewer` role. In the "Access Requests" tab, you should be able to see and resolve the Access Request you created.
Further reading: How the module works
This section describes the resources managed by the `env_role`, `saml`, and `oidc` child modules. We encourage you to copy and customize these configurations in order to refine your settings and choose the best reusable interface for your environment.
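For example, if you later label infrastructure with a third environment, you can reuse the `env_role` interface from your root module instead of writing a new role by hand. This is a hypothetical sketch; the `staging` label and login are placeholders:

module "staging_role" {
  source    = "./env_role"
  env_label = "staging" # placeholder environment label
  principals = {
    logins = ["staging"] # placeholder OS login
  }
  request_roles = []
}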
The `env_access` role

The `env_role` child module creates Teleport roles with the ability to access Teleport-protected resources with the `env` label:
resource "teleport_role" "env_access" {
version = "v7"
metadata = {
name = "${var.env_label}_access"
description = "Can access infrastructure with label ${var.env_label}"
labels = {
env = var.env_label
}
}
spec = {
allow = {
aws_role_arns = lookup(var.principals, "aws_role_arns", [])
azure_identities = lookup(var.principals, "azure_identities", [])
db_names = lookup(var.principals, "db_names", [])
db_users = lookup(var.principals, "db_users", [])
gcp_service_accounts = lookup(var.principals, "gcp_service_accounts", [])
kubernetes_groups = lookup(var.principals, "kubernetes_groups", [])
kubernetes_users = lookup(var.principals, "kubernetes_users", [])
logins = lookup(var.principals, "logins", [])
windows_desktop_logins = lookup(var.principals, "windows_desktop_logins", [])
request = {
roles = var.request_roles
search_as_roles = var.request_roles
thresholds = [{
approve = 1
deny = 1
filter = "!equals(request.reason, \"\")"
}]
}
app_labels = {
env = [var.env_label]
}
db_labels = {
env = [var.env_label]
}
node_labels = {
env = [var.env_label]
}
kubernetes_labels = {
env = [var.env_label]
}
windows_desktop_labels = {
env = [var.env_label]
}
}
}
}
output "role_name" {
value = teleport_role.env_access.metadata.name
}
The role hardcodes an `allow` rule with the ability to access applications, databases, Linux servers, Kubernetes clusters, and Windows desktops with the user-configured `env` label.

Since we cannot predict which principals are available in your infrastructure, this role leaves the `aws_role_arns`, `logins`, and other principal-related role attributes for the user to configure.

The role also configures an `allow` rule that enables users to request access for the roles configured in the `request_roles` input variable.
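The `env_role` module's input variables are declared in the module you copied. Based on how they are used in the role resource, a rough sketch of declarations consistent with that usage looks like this (check the copied module for the exact definitions):

variable "env_label" {
  type        = string
  description = "Value of the env label that the role grants access to"
}

variable "principals" {
  type        = map(list(string))
  description = "Map of principal attributes (such as logins or kubernetes_groups) to allowed values"
}

variable "request_roles" {
  type        = list(string)
  description = "Teleport roles that users with this role can request"
}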
An `output` prints the name of the role to allow us to create a dependency relationship between this role and an authentication connector.
The `env_access_reviewer` role

If `var.request_roles` in the `env_access` role is nonempty, the `env_role` module creates a role that can review Access Requests for those roles. This is a separate role to make permissions more composable:
locals {
  can_review_roles = join(", ", var.request_roles)
}

resource "teleport_role" "env_access_reviewer" {
  version = "v7"
  count   = length(var.request_roles) > 0 ? 1 : 0
  metadata = {
    name        = "${local.can_review_roles}_reviewer"
    description = "Can review Access Requests for: ${local.can_review_roles}"
  }

  spec = {
    allow = {
      review_requests = {
        roles = var.request_roles
      }
    }
  }
}

output "reviewer_role_names" {
  value = teleport_role.env_access_reviewer[*].metadata.name
}
As with the `env_access` role, there is an output to print the name of the `env_access_reviewer` role to create a dependency relationship with the authentication connector.
Configuring an authentication connector
The authentication connector resources are minimal. Beyond providing the attributes necessary to send and receive Teleport OIDC and SAML messages, the connectors configure role mappings based on user-provided values:
- OIDC

resource "teleport_oidc_connector" "main" {
  version = "v3"
  metadata = {
    name = var.oidc_connector_name
  }

  spec = {
    client_id       = var.oidc_client_id
    client_secret   = var.oidc_secret
    claims_to_roles = var.oidc_claims_to_roles
    redirect_url    = [var.oidc_redirect_url]
  }
}

- SAML

resource "teleport_saml_connector" "main" {
  version = "v2"
  metadata = {
    name = var.saml_connector_name
  }

  spec = {
    attributes_to_roles = var.saml_attributes_to_roles
    acs                 = var.saml_acs
    entity_descriptor   = var.saml_entity_descriptor
  }
}
Since the role mapping inputs are composite data types, we add a complex type definition when declaring the input variables for the `oidc` and `saml` child modules:
- OIDC

variable "teleport_domain" {
  type        = string
  description = "Domain name of your Teleport cluster (to configure WebAuthn)"
}

variable "oidc_claims_to_roles" {
  type = list(object({
    claim = string
    roles = list(string)
    value = string
  }))
  description = "Mappings of OIDC claims to lists of Teleport role names"
}

variable "oidc_client_id" {
  type        = string
  description = "The OIDC identity provider's client ID"
}

variable "oidc_connector_name" {
  type        = string
  description = "Name of the Teleport OIDC connector resource"
}

variable "oidc_redirect_url" {
  type        = string
  description = "Redirect URL for the OIDC provider."
}

variable "oidc_secret" {
  type        = string
  description = "Secret for configuring the Teleport OIDC connector. Available from your identity provider."
}

- SAML

variable "teleport_domain" {
  type        = string
  description = "Domain name of your Teleport cluster (to configure WebAuthn)"
}

variable "saml_connector_name" {
  type        = string
  description = "Name for the SAML authentication connector created by this module"
}

variable "saml_attributes_to_roles" {
  type = list(object({
    name  = string
    roles = list(string)
    value = string
  }))
  description = "Mappings of SAML attributes to lists of Teleport role names"
}

variable "saml_acs" {
  type        = string
  description = "URL (scheme, domain, port, and path) for the SAML assertion consumer service"
}

variable "saml_entity_descriptor" {
  type        = string
  description = "SAML entity descriptor"
}
For each authentication connector, we declare a cluster authentication preference that enables the connector. The cluster authentication preference enables local user login with WebAuthn as a secure fallback in case you need to troubleshoot the single sign-on provider.
- OIDC

resource "teleport_auth_preference" "main" {
  version = "v2"
  metadata = {
    description = "Require authentication via the ${var.oidc_connector_name} connector"
  }

  spec = {
    connector_name   = teleport_oidc_connector.main.metadata.name
    type             = "oidc"
    allow_local_auth = true
    second_factor    = "webauthn"
    webauthn = {
      rp_id = var.teleport_domain
    }
  }
}

- SAML

resource "teleport_auth_preference" "main" {
  version = "v2"
  metadata = {
    description = "Require authentication via the ${var.saml_connector_name} connector"
  }

  spec = {
    connector_name   = teleport_saml_connector.main.metadata.name
    type             = "saml"
    allow_local_auth = true
    second_factor    = "webauthn"
    webauthn = {
      rp_id = var.teleport_domain
    }
  }
}
Next steps
Now that you have configured RBAC in your Terraform demo cluster, fine-tune your setup by reading the comprehensive Terraform provider reference.