Automatically Discover GCP Compute Instances

This guide shows you how to set up automatic server discovery for Google Compute Engine virtual machines.

How it works

The Teleport Discovery Service can connect to GCP and automatically discover and enroll GCP Compute Engine instances that match configured labels. It then executes a script on the discovered instances that installs Teleport, starts it, and joins the cluster.

Prerequisites

  • A running Teleport cluster. If you want to get started with Teleport, sign up for a free trial or set up a demo environment.

  • The tctl and tsh clients.

    Installing tctl and tsh clients
    1. Determine the version of your Teleport cluster. The tctl and tsh clients must be at most one major version behind your Teleport cluster version. Send a GET request to the Proxy Service at /v1/webapi/find and use a JSON query tool to obtain your cluster version. Replace teleport.example.com:443 with the web address of your Teleport Proxy Service:

      TELEPORT_DOMAIN=teleport.example.com:443
      TELEPORT_VERSION="$(curl -s https://$TELEPORT_DOMAIN/v1/webapi/find | jq -r '.server_version')"
    2. Follow the instructions for your platform to install tctl and tsh clients:

      Download the signed macOS .pkg installer for Teleport, which includes the tctl and tsh clients:

      curl -O https://cdn.teleport.dev/teleport-${TELEPORT_VERSION?}.pkg

      In Finder, double-click the .pkg file to begin installation.

      danger

      Using Homebrew to install Teleport is not supported. The Teleport package in Homebrew is not maintained by Teleport and we can't guarantee its reliability or security.

  • A GCP compute instance to run the Discovery Service on.
  • GCP compute instances to join the Teleport cluster, running Ubuntu/Debian/RHEL if making use of the default Teleport install script. (For other Linux distributions, you can install Teleport manually.)
  • To check that you can connect to your Teleport cluster, sign in with tsh login, then verify that you can run tctl commands using your current credentials. For example, run the following command, assigning teleport.example.com to the domain name of the Teleport Proxy Service in your cluster and user@example.com to your Teleport username:
    tsh login --proxy=teleport.example.com --user=user@example.com
    tctl status

    Cluster      teleport.example.com
    Version      18.6.4
    CA pin       sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678

    If you can connect to the cluster and run the tctl status command, you can use your current credentials to run subsequent tctl commands from your workstation. If you host your own Teleport cluster, you can also run tctl commands on the computer that hosts the Teleport Auth Service for full permissions.

Step 1/5. Create a GCP invite token

When discovering GCP compute instances, Teleport uses GCP invite tokens to authenticate SSH Service instances as they join the cluster.

Create a file called token.yaml:

# token.yaml
kind: token
version: v2
metadata:
  # the token name is not a secret because instances must prove that they are
  # running in your GCP project to use this token
  name: gcp-discovery-token
spec:
  # use the minimal set of roles required (e.g. Node, Proxy, App, Kube, DB, WindowsDesktop)
  roles: [Node]

  # set the join method allowed for this token
  join_method: gcp

  gcp:
    allow:
      # The GCP project ID(s) that VMs can join from.
      - project_ids: []
        # (Optional) The locations that VMs can join from. Note: both regions and
        # zones are accepted.
        locations: []
        # (Optional) The email addresses of service accounts that VMs can join
        # with.
        service_accounts: []

Add your instance's project ID(s) to the project_ids field. Add the token to the Teleport cluster with:

tctl create -f token.yaml
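
To confirm that the token was added, list your cluster's join tokens and check for gcp-discovery-token:

tctl tokens ls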

Step 2/5. Configure IAM permissions for Teleport

Create a service account that will give Teleport IAM permissions needed to discover instances.

Go to IAM > Roles in the GCP console and click + Create Role. Pick a name for the role (e.g. teleport-discovery) and give it the following permissions:

  • compute.instances.get
  • compute.instances.getGuestAttributes
  • compute.instances.list
  • compute.instances.setMetadata
  • iam.serviceAccounts.actAs
  • iam.serviceAccounts.get
  • iam.serviceAccounts.list

Click Create.

Go to IAM > Service accounts and click + Create Service Account. Pick a name for the service account (e.g. teleport-discovery) and copy its email address to your clipboard. Click Create and Continue.

Go to IAM and click Grant Access. Paste the service account's email into the New principals field and select your custom role. Click Save.
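
If you prefer the gcloud CLI, you can create the same role, service account, and binding from a terminal. This is a minimal sketch; replace project-id with your GCP project ID:

# Create the custom role with the permissions listed above.
gcloud iam roles create teleport_discovery --project=project-id \
  --permissions=compute.instances.get,compute.instances.getGuestAttributes,compute.instances.list,compute.instances.setMetadata,iam.serviceAccounts.actAs,iam.serviceAccounts.get,iam.serviceAccounts.list

# Create the service account.
gcloud iam service-accounts create teleport-discovery

# Grant the custom role to the service account.
gcloud projects add-iam-policy-binding project-id \
  --member="serviceAccount:teleport-discovery@project-id.iam.gserviceaccount.com" \
  --role="projects/project-id/roles/teleport_discovery"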

If the Discovery Service will run on a GCP compute instance, edit that instance to assign the service account to it and set its access scopes to include read/write access to the Compute API.

Step 3/5. Configure instances to be discovered

Ensure that each instance to be discovered has a service account assigned to it. No permissions are required on the service account. To check if an instance has a service account, run the following command and confirm that there is at least one entry under serviceAccounts:

gcloud compute instances describe --format="yaml(name,serviceAccounts)" instance_name
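
The output should resemble the following (values here are illustrative):

name: instance_name
serviceAccounts:
- email: 1234567890-compute@developer.gserviceaccount.com
  scopes:
  - https://www.googleapis.com/auth/cloud-platform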

Enable guest attributes on instances

Guest attributes (a subset of custom metadata used for infrequently updated data) must be enabled on instances to be discovered so Teleport can access their SSH host keys. Enable guest attributes by setting enable-guest-attributes to TRUE in the instance's metadata.

gcloud compute instances add-metadata instance_name \
  --metadata=enable-guest-attributes=TRUE

If guest attributes are enabled during instance creation, they will automatically be populated with the instance's host keys. If guest attributes were enabled after the instance was created, you can manually add the host keys to the guest attributes with the script below:

Create a file named add-host-keys.sh and copy the following into it:

#!/usr/bin/env bash
for file in /etc/ssh/ssh_host_*_key.pub; do
  read -r KEY_TYPE KEY _ <"$file"
  curl -X PUT --data "$KEY" "http://metadata.google.internal/computeMetadata/v1/instance/guest-attributes/hostkeys/$KEY_TYPE" -H "Metadata-Flavor: Google"
done

Run the following command to add the host keys as part of a startup script:

gcloud compute instances add-metadata instance_name \
  --metadata-from-file=startup-script="add-host-keys.sh"
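
Once the startup script has run, you can confirm that the host keys were written to the guest attributes (hostkeys/ is the namespace used by the script above):

gcloud compute instances get-guest-attributes instance_name \
  --query-path=hostkeys/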

Step 4/5. Install the Teleport Discovery Service

tip

If you plan on running the Discovery Service on a host that is already running another Teleport service (Auth or Proxy, for example), you can skip this step.

Install Teleport on the virtual machine that will run the Discovery Service.

To install a Teleport Agent on your Linux server:

The recommended installation method is the cluster install script. It will select the correct version, edition, and installation mode for your cluster.

  1. Assign teleport.example.com:443 to your Teleport cluster hostname and port, omitting the scheme (https://).

  2. Run your cluster's install script:

    curl "https://teleport.example.com:443/scripts/install.sh" | sudo bash
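
Once the script completes, verify the installation:

teleport version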

Step 5/5. Configure Teleport to discover GCP compute instances

If you are running the Discovery Service on its own host, the service requires a valid invite token to connect to the cluster. Generate one by running the following command against your Teleport Auth Service:

tctl tokens add --type=discovery

Save the generated token in /tmp/token on the virtual machine that will run the Discovery Service.

warning

The Discovery Service exposes a configuration parameter - discovery_service.discovery_group - that allows you to group discovered resources into different sets. This parameter is used to prevent Discovery Agents that watch different sets of cloud resources from colliding with each other and deleting resources created by another service.

When running multiple Discovery Services, you must ensure that each service is configured with the same discovery_group value if they are watching the same cloud resources or a different value if they are watching different cloud resources.

It is possible to run a mix of configurations in the same Teleport cluster meaning that some Discovery Services can be configured to watch the same cloud resources while others watch different resources. As an example, a 4-agent high availability configuration analyzing data from two different cloud accounts would run with the following configuration.

  • 2 Discovery Services configured with discovery_group: "prod" polling data from Production account.
  • 2 Discovery Services configured with discovery_group: "staging" polling data from Staging account.

Assign teleport.example.com:443 to the host and port of the Teleport Proxy Service in your cluster, and gcp-prod to a name that identifies a group of resources that you will enroll:

# teleport.yaml
version: v3
teleport:
  join_params:
    token_name: "/tmp/token"
    method: token
  proxy_server: "teleport.example.com:443"
auth_service:
  enabled: false
proxy_service:
  enabled: false
ssh_service:
  enabled: false
discovery_service:
  enabled: true
  discovery_group: gcp-prod
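
If the Discovery Service runs on its own host, start it after writing the configuration. Assuming Teleport was installed from a package that ships a systemd unit, a typical start looks like:

sudo systemctl enable teleport
sudo systemctl restart teleport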

Create a matcher for the resources you want to enroll.

tip

Dynamic configuration uses Discovery Configs which can be managed using Terraform. See the Terraform discovery_config reference for more information.

Static configuration, while simpler at first, has less flexibility because enrollment changes require edits to teleport.yaml and a restart of the Discovery Service.

Create a Discovery Config resource that uses the same discovery group you configured earlier to enable GCP VM discovery.

Create a file named discovery-gcp-prod.yaml with the following content:

kind: discovery_config
version: v1
metadata:
  name: example-discovery-config
spec:
  discovery_group: gcp-prod
  gcp:
    - types: ["gce"]
      # The IDs of GCP projects that VMs can join from.
      project_ids: []
      # (Optional) The locations that VMs can join from. Note: both regions and
      # zones are accepted.
      locations: []
      # (Optional) The email addresses of service accounts that VMs can join
      # with.
      service_accounts: []
      # (Optional) Labels that joining VMs must have.
      labels:
        "env": "prod" # Match virtual machines where label:env=prod

Adjust the keys under spec.gcp to match your GCP environment, specifically the project IDs, locations, service accounts and labels you want to associate with the Discovery Service.

Create the Discovery Config by running the following command:

tctl create -f discovery-gcp-prod.yaml

Matching instances will be added to the Teleport cluster automatically.

You can update the Discovery Config at any time, and the service will automatically re-apply the changes.
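
After the next discovery poll, you can confirm that matching instances have enrolled by listing nodes with the label used in the example matcher:

tsh ls env=prod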

GCP credentials

The Teleport Discovery Service must have the credentials of the teleport-discovery GCP service account created above in order to authenticate to GCP.

The easiest way to ensure that is to run the Discovery Service on a GCP instance and assign the service account to that instance. Refer to Set Up Application Default Credentials for details on alternate methods.

Auto-discovery labels

Teleport applies a set of default labels to resources on AWS, Azure, and Google Cloud that join a cluster via auto-discovery. See the auto-discovery labels reference.

Advanced configuration

This section covers configuration options for discovering and enrolling servers.

Install multiple Teleport agents on the same instance

When using blue-green deployments or other multi-cluster setups, you might want to access your instances from different clusters.

Teleport supports installing and running multiple agents on the same instance using a suffixed installation, which isolates each installation.

To configure the Discovery Service to use a suffixed installation, edit the Discovery Config and set the spec.gcp.install.suffix key:

kind: discovery_config
# ...
spec:
  gcp:
   - install:
       suffix: "blue-cluster"

Requires agent managed updates to be enabled.

Define the group for Managed Updates

If you are using Teleport Agent managed updates, you can configure the update group so that you can control which instances get updated together.

To set the update group, edit the Discovery Config and set the spec.gcp.install.update_group key:

kind: discovery_config
# ...
spec:
  gcp:
   - install:
       update_group: "update-group-1"

Configure HTTP Proxy during installation

For instances which require a proxy to access the installation files, you can configure HTTP Proxy settings in the Discovery Service.

To set the HTTP proxy settings, edit the Discovery Config and set the spec.gcp.install.http_proxy_settings key:

kind: discovery_config
# ...
spec:
  gcp:
   - install:
       http_proxy_settings:
         https_proxy: http://172.31.5.130:3128
         http_proxy: http://172.31.5.130:3128
         no_proxy: my-local-domain

Use a custom installation script

To customize an installer, your user must have a role that allows list, create, read and update verbs on the installer resource.

Create a file called installer-manager.yaml with the following content:

kind: role
version: v5
metadata:
  name: installer-manager
spec:
  allow:
    rules:
      - resources: [installer]
        verbs: [list, create, read, update]

Create the role:

tctl create -f installer-manager.yaml

role 'installer-manager' has been created

tip

You can also create and edit roles using the Web UI. Go to Access -> Roles and click Create New Role or pick an existing role to edit.

The preset editor role has the required permissions by default.

To customize the default installer script, execute the following command on your workstation:

tctl edit installer/default-installer

After making the desired changes to the default installer, save and close the file in your text editor.

Multiple installer resources can exist and be specified in the gcp.install.script_name section:

Edit the Discovery Config to specify a custom installer script:

kind: discovery_config
# ...
spec:
  gcp:
    - types: ["gce"]
      labels:
        "env": "prod"
      install: # optional section when default-installer is used.
        script_name: "default-installer"
    - types: ["gce"]
      labels:
        "env": "devel"
      install:
        script_name: "devel-installer"
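
Note that the devel-installer resource referenced above must exist before it is used. One way to create it, sketched here with an assumed file name, is to copy the default installer, rename it, and register the modified script:

tctl get installer/default-installer > devel-installer.yaml
# edit devel-installer.yaml: set metadata.name to devel-installer and adjust the script
tctl create -f devel-installer.yaml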

The installer resource has the following templating options:

  • {{ .MajorVersion }}: the major version of Teleport to use when installing from the repository.
  • {{ .PublicProxyAddr }}: the public address of the Teleport Proxy Service to connect to.
  • {{ .RepoChannel }}: Optional package repository (apt/yum) channel name. Has format <channel>/<version> e.g. stable/v18. See installation for more details.
  • {{ .AutomaticUpgrades }}: indicates whether Automatic Updates are enabled or disabled. Its value is either true or false. See Automatic Agent Updates for more information.
  • {{ .TeleportPackage }}: the Teleport package to use. Its value is either teleport-ent or teleport depending on whether the cluster is enterprise or not.

These can be used as follows:

kind: installer
metadata:
  name: default-installer
spec:
  script: |
    echo {{ .PublicProxyAddr }}
    echo Teleport-{{ .MajorVersion }}
    echo Repository Channel: {{ .RepoChannel }}
version: v1

When retrieved for installation, this will evaluate to a script with the following contents:

echo teleport.example.com
echo Teleport-v18
echo Repository Channel: stable/v18
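
To preview the rendered script that instances will download, you can fetch it from the Proxy Service (the endpoint path below is an assumption; adjust the address to match your cluster):

curl https://teleport.example.com:443/v1/webapi/scripts/installer/default-installer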

The default installer will take the following actions:

  • Add an official Teleport repository to supported Linux distributions.
  • Install Teleport via apt or yum.
  • Generate the Teleport config file and write it to /etc/teleport.yaml.
  • Enable and start the Teleport service.

Next steps