Getting Started With Access Controls

In Teleport, any local, SSO, or robot user can be assigned one or more roles. Roles govern access to databases, SSH servers, Kubernetes clusters, Windows desktops, and web apps.

We will start with local users and preset roles, assign roles to SSO users, and wrap up by creating your own role.

Prerequisites

  • A running Teleport cluster. For details on how to set this up, see one of our Getting Started guides. If you are using Teleport Enterprise, see our Enterprise Getting Started guide.

  • The tctl admin tool and tsh client tool version >= 12.1.1. Teleport Enterprise users can download these by visiting the customer portal.

    tctl version

    Teleport v12.1.1 go1.19

    tsh version

    Teleport v12.1.1 go1.19

    See Installation for details.

When running Teleport in production, we recommend that you follow the practices below to avoid security incidents. These practices may differ from the examples used in this guide, which are intended for demo environments:

  • Avoid using sudo in production environments unless it's necessary.
  • Create new, non-root, users and use test instances for experimenting with Teleport.
  • Run Teleport's services as a non-root user unless required. Only the SSH Service requires root access. Note that you will need root permissions (or the CAP_NET_BIND_SERVICE capability) to make Teleport listen on a port numbered < 1024 (e.g. 443).
  • Follow the "Principle of Least Privilege" (PoLP). Don't give users permissive roles when giving them more restrictive roles will do instead. For example, assign users the built-in access,editor roles.
  • When joining a Teleport resource service (e.g., the Database Service or Application Service) to a cluster, save the invitation token to a file. Otherwise, the token will be visible when examining the teleport command that started the agent, e.g., via the history command on a compromised system.
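
As an example of the last point, the sketch below saves a join token to a file and starts an agent that reads the token from that path. The token value, file path, agent role, and --auth-server address are all placeholders for illustration, and the sketch assumes your Teleport version accepts a file path for the --token flag:

# Save the invitation token to a file readable only by root
# (the token value here is a placeholder).
echo "abcd1234efgh5678ijkl9012mnop3456" | sudo tee /tmp/token > /dev/null
sudo chmod 600 /tmp/token

# Reference the token by path so the secret does not appear in the
# agent's command-line arguments or in shell history.
sudo teleport start --roles=node --token=/tmp/token --auth-server=teleport.example.com:443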

To connect to a self-hosted Teleport cluster, log in to your cluster using tsh, then use tctl remotely:

tsh login --proxy=teleport.example.com [email protected]
tctl status

Cluster teleport.example.com

Version 12.1.1

CA pin sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678

You can run subsequent tctl commands in this guide on your local machine.

For full privileges, you can also run tctl commands on your Auth Service host.

To connect to a Teleport Cloud cluster, log in to your cluster using tsh, then use tctl remotely:

tsh login --proxy=myinstance.teleport.sh [email protected]
tctl status

Cluster myinstance.teleport.sh

Version 12.1.1

CA pin sha256:sha-hash-here

You must run subsequent tctl commands in this guide on your local machine.

Step 1/3. Add local users with preset roles

Teleport provides several preset roles: editor, auditor, and access. The editor role authorizes users to modify cluster configuration, the auditor role to view audit logs, and the access role to access cluster resources.
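
To see exactly what a preset role allows, print its definition with tctl get. For example, to inspect the editor role:

tctl get roles/editor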

Invite the local user Alice as cluster editor:

tctl users add alice --roles=editor

Once Alice signs up, she will be able to edit cluster configuration. You can list users and their roles using tctl users ls.

tctl users ls

User                 Roles
-------------------- --------------
alice                editor

You can update the user's roles using the tctl users update command:

tctl users update alice --set-roles=editor,auditor

Once Alice logs back in, she will be able to view audit logs.

Because Alice has two roles, permissions from those roles create a union. She will be able to act as a system administrator and auditor at the same time.
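
After logging back in, Alice can confirm which roles she holds with tsh status. The output below is illustrative and trimmed; the exact fields vary by version:

tsh status

> Profile URL:  https://teleport.example.com:443
  Logged in as: alice
  Roles:        editor, auditor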

Step 2/3. Map SSO users to roles

Next, follow the instructions to set up an authentication connector that maps users within your SSO solution to Teleport roles.

Save the file below as github.yaml and update the fields. You will need to set up a GitHub OAuth 2.0 Connector app. Any member of the GitHub organization octocats who is on the admin team will be able to assume the built-in access role.

kind: github
version: v3
metadata:
  # connector name that will be used with `tsh --auth=github login`
  name: github
spec:
  # client ID of GitHub OAuth app
  client_id: client-id
  # client secret of GitHub OAuth app
  client_secret: client-secret
  # This name will be shown on UI login screen
  display: GitHub
  # Change tele.example.com to your domain name
  redirect_url: https://tele.example.com:443/v1/webapi/github/callback
  # Map github teams to teleport roles
  teams_to_roles:
    - organization: octocats # GitHub organization name
      team: admin            # GitHub team name within that organization
      # map github admin team to Teleport's "access" role
      roles: ["access"]

Create the github resource:

tctl create github.yaml
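
Once the connector exists, SSO users can log in through GitHub by selecting the connector by name (this reuses the example proxy address teleport.example.com from earlier):

tsh login --proxy=teleport.example.com --auth=github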

Step 3/3. Create a custom role

Let's create a custom role for interns. Interns will have access to test or staging SSH servers as readonly users. We will also let them view monitoring web applications and the dev Kubernetes cluster.

Save this role as interns.yaml:

kind: role
version: v6
metadata:
  name: interns
spec:
  allow:
    # Logins configures SSH login principals
    logins: ['readonly']
    # Assigns users with this role to the built-in Kubernetes group "view"
    kubernetes_groups: ["view"]
    # Allow access to SSH nodes, Kubernetes clusters, apps or databases
    # labeled with "staging" or "test"
    node_labels:
      'env': ['staging', 'test']
    kubernetes_labels:
      'env': 'dev'
    kubernetes_resources:
      - kind: pod
        namespace: "*"
        name: "*"
    app_labels:
      'type': ['monitoring']
  # The deny rules always override allow rules.
  deny:
    # deny access to any Node, database, app or Kubernetes cluster labeled
    # as prod, for any user.
    node_labels:
      'env': 'prod'
    kubernetes_labels:
      'env': 'prod'
    kubernetes_resources:
      - kind: pod
        namespace: "prod"
        name: "*"
    db_labels:
      'env': 'prod'
    app_labels:
      'env': 'prod'

Create a role using the tctl create -f command:

tctl create -f interns.yaml

Get a list of all roles in the system:

tctl get roles --format text
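
To put the new role to use, assign it to a user, for example (the username here is hypothetical):

tctl users add intern-bob --roles=interns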

Next steps