

Export Teleport Audit Events with Datadog

Available for: OpenSource, Team, Cloud, Enterprise

Datadog is a SaaS monitoring and security platform. In this guide, we'll explain how to forward Teleport audit events to Datadog using Fluentd.

The Teleport Event Handler is designed to communicate with Fluentd using mTLS to establish a secure channel. In this setup, the Event Handler sends events to Fluentd, which forwards them to Datadog using an API key to authenticate.

Prerequisites

  • Teleport (OpenSource): a running Teleport cluster. For details on how to set this up, see the Getting Started guide. You will also need the tctl admin tool and tsh client tool, version >= 14.0.0; see Installation for details.

  • Teleport Team: a Teleport Team account. If you don't have an account, sign up to begin your free trial. You will also need the Enterprise tctl admin tool and tsh client tool, version >= 13.3.9, which you can download from the Cloud Downloads page.

  • Teleport Enterprise: a running Teleport Enterprise cluster. For details on how to set this up, see the Enterprise Getting Started guide. You will also need the Enterprise tctl admin tool and tsh client tool, version >= 14.0.0, which you can download from your Teleport account workspace.


To check version information, run the tctl version and tsh version commands. For example:

tctl version

Teleport Enterprise v13.3.9 git:api/14.0.0-gd1e081e go1.21


tsh version

Teleport v13.3.9 go1.21

Proxy version: 13.3.9
Proxy: teleport.example.com
  • A Datadog account.
  • A server, virtual machine, Kubernetes cluster, or Docker environment to run the Event Handler. The instructions below assume a local Docker container for testing.
  • Fluentd version v1.12.4 or greater (you can check your installed version as shown below). The Teleport Event Handler will create a new fluent.conf file you can integrate into an existing Fluentd system, or use with a fresh setup.
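A quick check of the Fluentd version, assuming fluentd (or td-agent) is on your PATH:

# Print the installed Fluentd version; use td-agent --version if you installed Fluentd via td-agent
fluentd --version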

The instructions below demonstrate a local test of the Event Handler plugin on your workstation. You will need to adjust paths, ports, and domains for other environments.

  • To check that you can connect to your Teleport cluster, sign in with tsh login, then verify that you can run tctl commands on your administrative workstation using your current credentials. For example:
    tsh login --proxy=teleport.example.com --user=myuser@example.com
    tctl status

    Cluster teleport.example.com

    Version 14.0.0

    CA pin sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678

    If you can connect to the cluster and run the tctl status command, you can use your current credentials to run subsequent tctl commands from your workstation. If you host your own Teleport cluster, you can also run tctl commands on the computer that hosts the Teleport Auth Service for full permissions.

Step 1/6. Install the Event Handler plugin

The Teleport Event Handler runs alongside the Fluentd forwarder, receives events from Teleport's events API, and forwards them to Fluentd.

curl -L -O https://get.gravitational.com/teleport-event-handler-v14.0.0-linux-amd64-bin.tar.gz
tar -zxvf teleport-event-handler-v14.0.0-linux-amd64-bin.tar.gz
sudo ./teleport-event-handler/install

We currently only build the Event Handler plugin for amd64 machines. For ARM architecture, you can build from source.

curl -L -O https://get.gravitational.com/teleport-event-handler-v14.0.0-darwin-amd64-bin.tar.gz
tar -zxvf teleport-event-handler-v14.0.0-darwin-amd64-bin.tar.gz
sudo ./teleport-event-handler/install

We currently only build the event handler plugin for amd64 machines. If your macOS machine uses Apple silicon, you will need to install Rosetta before you can run the event handler plugin. You can also build from source.

Ensure that you have Docker installed and running.

docker pull public.ecr.aws/gravitational/teleport-plugin-event-handler:14.0.0

To allow Helm to install charts that are hosted in the Teleport Helm repository, use helm repo add:

helm repo add teleport https://charts.releases.teleport.dev

To update the cache of charts from the remote repository, run helm repo update:

helm repo update

Ensure that you have Docker installed and running.

Run the following commands to build the plugin:

git clone https://github.com/gravitational/teleport-plugins.git --depth 1
cd teleport-plugins/event-handler/build.assets
make build

You can find the compiled binary in your clone of the teleport-plugins repo at the path event-handler/build/teleport-event-handler.

You will need Go >= 1.21 installed.

Run the following commands on the host where you will run the Teleport Event Handler:

git clone https://github.com/gravitational/teleport-plugins.git --depth 1
cd teleport-plugins/event-handler
go build

The resulting executable will have the name event-handler. To follow the rest of this guide, rename this file to teleport-event-handler and move it to /usr/local/bin.
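For example, assuming the build completed in the current directory:

# Rename the binary and move it onto your PATH
sudo mv event-handler /usr/local/bin/teleport-event-handler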

Step 2/6. Configure the plugin

Run the configure command to generate a sample configuration. Replace mytenant.teleport.sh with the DNS name of your Teleport Team or Teleport Enterprise Cloud tenant:

teleport-event-handler configure . mytenant.teleport.sh:443

Run the configure command to generate a sample configuration. Replace teleport.example.com:443 with the DNS name and HTTPS port of Teleport's Proxy Service:

teleport-event-handler configure . teleport.example.com:443

Run the configure command to generate a sample configuration. Assign TELEPORT_CLUSTER_ADDRESS to the DNS name and port of your Teleport Auth Service or Proxy Service:

TELEPORT_CLUSTER_ADDRESS=mytenant.teleport.sh:443
docker run -v `pwd`:/opt/teleport-plugin -w /opt/teleport-plugin public.ecr.aws/gravitational/teleport-plugin-event-handler:14.0.0 configure . ${TELEPORT_CLUSTER_ADDRESS?}

In order to export audit events, you'll need to have the root certificate and the client credentials available as a secret. Use the following command to create that secret in Kubernetes:

kubectl create secret generic teleport-event-handler-client-tls --from-file=ca.crt=ca.crt,client.crt=client.crt,client.key=client.key

This will pack the content of ca.crt, client.crt, and client.key into the secret so the Helm chart can mount them to their appropriate path.
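To double-check that the secret contains all three files, you can describe it; this lists only key names and sizes, not the values:

# Confirm that ca.crt, client.crt, and client.key are present in the secret
kubectl describe secret teleport-event-handler-client-tls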

You'll see the following output:

Teleport event handler 14.0.0

[1] mTLS Fluentd certificates generated and saved to ca.crt, ca.key, server.crt, server.key, client.crt, client.key
[2] Generated sample teleport-event-handler role and user file teleport-event-handler-role.yaml
[3] Generated sample fluentd configuration file fluent.conf
[4] Generated plugin configuration file teleport-event-handler.toml

The plugin generates several setup files:

ls -l

-rw------- 1 bob bob 1038 Jul 1 11:14 ca.crt
-rw------- 1 bob bob 1679 Jul 1 11:14 ca.key
-rw------- 1 bob bob 1042 Jul 1 11:14 client.crt
-rw------- 1 bob bob 1679 Jul 1 11:14 client.key
-rw------- 1 bob bob  541 Jul 1 11:14 fluent.conf
-rw------- 1 bob bob 1078 Jul 1 11:14 server.crt
-rw------- 1 bob bob 1766 Jul 1 11:14 server.key
-rw------- 1 bob bob  260 Jul 1 11:14 teleport-event-handler-role.yaml
-rw------- 1 bob bob  343 Jul 1 11:14 teleport-event-handler.toml

File(s)                            Purpose
ca.crt and ca.key                  Self-signed CA certificate and private key for Fluentd
server.crt and server.key          Fluentd server certificate and key
client.crt and client.key          Fluentd client certificate and key, all signed by the generated CA
teleport-event-handler-role.yaml   User and role resource definitions for Teleport's Event Handler
fluent.conf                        Fluentd plugin configuration

This guide assumes that you are running the Event Handler on the same host or Kubernetes pod as your log forwarder. If you are not, you will need to instruct the Event Handler to generate mTLS certificates for subjects besides localhost. To do this, use the --cn and --dns-names flags of the teleport-event-handler configure command.

For example, if your log forwarder is addressable at forwarder.example.com and the Event Handler at handler.example.com, you would run the following configure command:

teleport-event-handler configure --cn=handler.example.com --dns-names=forwarder.example.com

The command generates client and server certificates with the subjects set to the value of --cn.

The --dns-names flag accepts a comma-separated list of DNS names. It will append subject alternative names (SANs) to the server certificate (the one you will provide to your log forwarder) for each DNS name in the list. The Event Handler looks up each DNS name before appending it as a SAN and exits with an error if the lookup fails.
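If you want to confirm which SANs ended up on the generated server certificate, you can inspect it with openssl; this assumes openssl is installed and that you run the command from the directory containing server.crt:

# Print the Subject Alternative Name extension of the Fluentd server certificate
openssl x509 -in server.crt -noout -text | grep -A 1 'Subject Alternative Name'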

Step 3/6. Create a user and role for reading audit events

The teleport-event-handler configure command generated a file called teleport-event-handler-role.yaml. This file defines a teleport-event-handler role and a user with read-only access to the event API:

kind: role
metadata:
  name: teleport-event-handler
spec:
  allow:
    rules:
      - resources: ['event', 'session']
        verbs: ['list','read']
version: v5
---
kind: user
metadata:
  name: teleport-event-handler
spec:
  roles: ['teleport-event-handler']
version: v2

Move this file to your workstation (or recreate it by pasting the snippet above) and use tctl on your workstation to create the role and the user:

tctl create -f teleport-event-handler-role.yaml

user "teleport-event-handler" has been created

role 'teleport-event-handler' has been created
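If you want to confirm that both resources exist, you can fetch them back with tctl:

# Verify the user and role created from teleport-event-handler-role.yaml
tctl get users/teleport-event-handler
tctl get roles/teleport-event-handler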

Step 4/6. Create teleport-event-handler credentials

Enable impersonation of the Event Handler user

In order for the Teleport Event Handler plugin to forward events from your Teleport cluster, it needs a signed identity file from the cluster's certificate authority. The teleport-event-handler user cannot request this itself, and requires another user to impersonate this account in order to request a certificate.

Create a role that enables your user to impersonate the teleport-event-handler user. First, paste the following YAML document into a file called teleport-event-handler-impersonator.yaml:

kind: role
version: v5
metadata:
  name: teleport-event-handler-impersonator
spec:
  # SSH options used for user sessions
  options:
    # max_session_ttl defines the TTL (time to live) of SSH certificates
    # issued to the users with this role.
    max_session_ttl: 10h

  # allow section declares a list of resource/verb combinations that are
  # allowed for the users of this role. by default nothing is allowed.
  allow:
    impersonate:
      users: ["teleport-event-handler"]
      roles: ["teleport-event-handler"]

Next, create the role:

tctl create -f teleport-event-handler-impersonator.yaml

Assign the teleport-event-handler-impersonator role to your Teleport user by running the appropriate commands for your authentication provider:

  1. Retrieve your local user's configuration resource:

    tctl get users/$(tsh status -f json | jq -r '.active.username') > out.yaml
  2. Edit out.yaml, adding teleport-event-handler-impersonator to the list of existing roles:

      roles:
       - access
       - auditor
       - editor
    +  - teleport-event-handler-impersonator 
    
  3. Apply your changes:

    tctl create -f out.yaml
  4. Sign out of the Teleport cluster and sign in again to assume the new role.

  1. Retrieve your GitHub authentication connector:

    tctl get github/github --with-secrets > github.yaml

    Note that the --with-secrets flag adds the value of spec.signing_key_pair.private_key to the github.yaml file. Because this key contains a sensitive value, you should remove the github.yaml file immediately after updating the resource.

  2. Edit github.yaml, adding teleport-event-handler-impersonator to the teams_to_roles section.

    The team you should map to this role depends on how you have designed your organization's role-based access controls (RBAC). However, the team must include your user account and should be the smallest team possible within your organization.

    Here is an example:

      teams_to_roles:
        - organization: octocats
          team: admins
          roles:
            - access
    +       - teleport-event-handler-impersonator
    
  3. Apply your changes:

    tctl create -f github.yaml
  4. Sign out of the Teleport cluster and sign in again to assume the new role.

  1. Retrieve your SAML configuration resource:

    tctl get --with-secrets saml/mysaml > saml.yaml

    Note that the --with-secrets flag adds the value of spec.signing_key_pair.private_key to the saml.yaml file. Because this key contains a sensitive value, you should remove the saml.yaml file immediately after updating the resource.

  2. Edit saml.yaml, adding teleport-event-handler-impersonator to the attributes_to_roles section.

    The attribute you should map to this role depends on how you have designed your organization's role-based access controls (RBAC). However, the group must include your user account and should be the smallest group possible within your organization.

    Here is an example:

      attributes_to_roles:
        - name: "groups"
          value: "my-group"
          roles:
            - access
    +       - teleport-event-handler-impersonator
    
  3. Apply your changes:

    tctl create -f saml.yaml
  4. Sign out of the Teleport cluster and sign in again to assume the new role.

  1. Retrieve your OIDC configuration resource:

    tctl get oidc/myoidc --with-secrets > oidc.yaml

    Note that the --with-secrets flag adds the value of spec.signing_key_pair.private_key to the oidc.yaml file. Because this key contains a sensitive value, you should remove the oidc.yaml file immediately after updating the resource.

  2. Edit oidc.yaml, adding teleport-event-handler-impersonator to the claims_to_roles section.

    The claim you should map to this role depends on how you have designed your organization's role-based access controls (RBAC). However, the group must include your user account and should be the smallest group possible within your organization.

    Here is an example:

      claims_to_roles:
        - name: "groups"
          value: "my-group"
          roles:
            - access
    +       - teleport-event-handler-impersonator
    
  3. Apply your changes:

    tctl create -f oidc.yaml
  4. Sign out of the Teleport cluster and sign in again to assume the new role.

Export an identity file for the Event Handler plugin user

The Teleport Event Handler plugin uses the teleport-event-handler role and user to read events. We export an identity file for the user with the tctl auth sign command.

Like all Teleport users, teleport-event-handler needs signed credentials in order to connect to your Teleport cluster. You will use the tctl auth sign command to request these credentials for your plugin.

The following tctl auth sign command impersonates the teleport-event-handler user, generates signed credentials, and writes an identity file to the local directory:

tctl auth sign --user=teleport-event-handler --out=auth.pem

The plugin connects to the Teleport Auth Service's gRPC endpoint over TLS.

The identity file, auth.pem, includes both TLS and SSH credentials. The plugin uses the SSH credentials to connect to the Proxy Service, which establishes a reverse tunnel connection to the Auth Service. The plugin uses this reverse tunnel, along with your TLS credentials, to connect to the Auth Service's gRPC endpoint.

You will refer to this file later when configuring the plugin.

Certificate Lifetime

By default, tctl auth sign produces certificates with a relatively short lifetime. For production deployments, we suggest using Machine ID to programmatically issue and renew certificates for your plugin. See our Machine ID getting started guide to learn more.

Note that you cannot issue certificates that are valid longer than your existing credentials. For example, to issue certificates with a 1000-hour TTL, you must be logged in with a session that is valid for at least 1000 hours. This means your user must have a role allowing a max_session_ttl of at least 1000 hours (60000 minutes), and you must specify a --ttl when logging in:

tsh login --proxy=teleport.example.com --ttl=60060
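As an illustration, a role that permits such long-lived sessions might look like the following. This is a minimal sketch with a hypothetical role name; attach it to your user alongside its existing roles:

kind: role
version: v5
metadata:
  name: long-session-ttl
spec:
  options:
    # Allow certificates and sessions of up to 1000 hours
    max_session_ttl: 1000h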

If you are running the Event Handler on Kubernetes, create a secret from the Teleport identity file. Adjust the file name if you wrote the identity to a path other than auth.pem:

kubectl create secret generic teleport-event-handler-identity --from-file=auth_id=auth.pem

This results in a Kubernetes secret named teleport-event-handler-identity that the Helm chart can mount.

Step 5/6. Install Fluentd output plugin for Datadog

In order for Fluentd to communicate with Datadog, it requires the Fluentd output plugin for Datadog. Install the plugin on your Fluentd host using either gem or, if you installed Fluentd via td-agent, td-agent-gem:

Using Gem

gem install fluent-plugin-datadog

Using td-agent

/usr/sbin/td-agent-gem install fluent-plugin-datadog
Testing Locally?

If you're running Fluentd in a local Docker container for testing, you can adjust the entrypoint to an interactive shell as the root user, so you can install the plugin before starting Fluentd:

docker run -u $(id -u root):$(id -g root) -p 8888:8888 -v $(pwd):/keys -v $(pwd)/fluent.conf:/fluentd/etc/fluent.conf --entrypoint=/bin/sh -i --tty fluent/fluentd:edge

From the container shell:

gem install fluent-plugin-datadog
fluentd -c /fluentd/etc/fluent.conf

Configure Fluentd for Datadog

From the Datadog web UI, generate an API key for Fluentd: from Organization Settings -> Access -> API Keys, click + New Key.

Copy the API key, and use it to add a new <match> block to fluent.conf:

<match test.log>
  @type datadog
  @id awesome_agent
  api_key abcd123-insecure-do-not-use-this
  host http-intake.logs.us5.datadoghq.com

  # Optional parameters
  dd_source teleport
</match>
  • Add your API key to the api_key field.
  • Adjust the host value to match your Datadog site. See their Log Collection and Integrations guide to determine the correct value.
  • dd_source is an optional field you can use to filter these logs in the Datadog UI.
  • Adjust ca_path, cert_path, and private_key_path in the generated fluent.conf to point to the credential files created earlier. If you're testing locally, the Docker command above already mounts the current working directory to /keys in the container.

Restart Fluentd after saving the changes to fluent.conf.
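Optionally, before starting the Event Handler you can verify that Fluentd accepts records over the mTLS channel and forwards them to Datadog. This is a sketch that assumes the defaults in the generated fluent.conf (an HTTPS input on port 8888 with records tagged test.log) and that you run it from the directory containing the generated certificates:

# Post a test record to Fluentd using the generated client credentials
curl --cacert ca.crt --cert client.crt --key client.key \
  -X POST -H 'Content-Type: application/json' \
  -d '{"event":"test","message":"hello from curl"}' \
  https://localhost:8888/test.log

If the test succeeds, the record should appear in the Datadog Logs view shortly afterward.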

Step 6/6. Start the event handler plugin

Earlier, we generated a file called teleport-event-handler.toml to configure the Fluentd event handler. This file includes settings similar to the following:

storage = "./storage"
timeout = "10s"
batch = 20
namespace = "default"

[forward.fluentd]
ca = "/home/sasha/scripts/event-handler/ca.crt"
cert = "/home/sasha/scripts/event-handler/client.crt"
key = "/home/sasha/scripts/event-handler/client.key"
url = "https://localhost:8888/test.log"

[teleport]
addr = "mytenant.teleport.sh:443"
identity = "identity"

If you are connecting to a self-hosted Teleport cluster, set addr in the [teleport] section to the address of your Proxy Service instead:

[teleport]
addr = "teleport.example.com:443"
identity = "identity"

To start the event handler, run the following command:

teleport-event-handler start --config teleport-event-handler.toml

If you are deploying the Event Handler with the Helm chart, use the following template to create teleport-plugin-event-handler-values.yaml:

eventHandler:
  storagePath: "./storage"
  timeout: "10s"
  batch: 20
  namespace: "default"

teleport:
  address: "example.teleport.com:443"
  identitySecretName: teleport-event-handler-identity

fluentd:
  url: "https://fluentd.fluentd.svc.cluster.local/events.log"
  sessionUrl: "https://fluentd.fluentd.svc.cluster.local/session.log"
  certificate:
    secretName: "teleport-event-handler-client-tls"
    caPath: "ca.crt"
    certPath: "client.crt"
    keyPath: "client.key"

persistentVolumeClaim:
  enabled: true

To start the event handler in Kubernetes, run the following command:

helm install teleport-plugin-event-handler teleport/teleport-plugin-event-handler \
  --values teleport-plugin-event-handler-values.yaml \
  --version 14.0.0
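To confirm that the plugin started correctly in Kubernetes, you can tail its logs. The exact Deployment name depends on your Helm release, so adjust it if yours differs from the release name used above:

# Tail the Event Handler logs (assumes the Deployment is named after the release)
kubectl logs deployment/teleport-plugin-event-handler --follow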
Note

This example will start exporting events beginning February 2, 2022:

teleport-event-handler start --config teleport-event-handler.toml --start-time "2022-02-02T00:00:00Z"

The start time can be set only once, on the first run of the tool.

If you want to change the time frame later, remove the plugin state directory that you specified in the storage field of the handler's configuration file.
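For example, with the sample configuration above (storage = "./storage"), you could reset the plugin state as follows. Note that this also discards the record of which events have already been exported:

# Remove the Event Handler state directory so a new start time can be set
rm -rf ./storage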

Once the handler starts, you will see notifications in Fluentd about scanned and forwarded events:

INFO[0046] Event sent id=0b5f2a3e-faa5-4d77-ab6e-362bca0994fc ts="2021-06-08 11:00:56.034 +0000 UTC" type=user.login
...

The Logs view in Datadog should now report your Teleport cluster events.

Troubleshooting connection issues

If the Teleport Event Handler is displaying error logs while connecting to your Teleport Cluster, ensure that:

  • The certificate the Teleport Event Handler uses to connect to your Teleport cluster has not passed its expiration date. The lifetime is set by the --ttl flag of the tctl auth sign command and defaults to 12 hours; if the certificate has expired, re-issue it as shown below.
  • Your Teleport Event Handler configuration file (teleport-event-handler.toml) provides the correct host and port for the Teleport Proxy Service.
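To re-issue an expired identity, impersonate the teleport-event-handler user again. The 720-hour TTL below is only an example and is subject to the max_session_ttl limits discussed earlier:

# Re-issue the Event Handler identity with a longer TTL (example value), then restart the plugin
tctl auth sign --user=teleport-event-handler --ttl=720h --out=auth.pem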

Next steps

  • Read more about impersonation here.
  • While this guide uses the tctl auth sign command to issue credentials for the Teleport Event Handler, production clusters should use Machine ID for safer, more reliable renewals. Read our guide to getting started with Machine ID.
  • To see all of the options you can set in the values file for the teleport-plugin-event-handler Helm chart, consult our reference guide.
  • Review the Fluentd output plugin for Datadog README file to learn how to customize the log format entering Datadog.