Export Teleport Audit Events to Splunk


Teleport's Event Handler plugin receives audit logs from the Teleport Auth Service and forwards them to your log management solution, letting you perform historical analysis, detect unusual behavior, and form a better understanding of how users interact with your Teleport cluster.

In this guide, we will show you how to configure the Teleport Event Handler plugin to send your Teleport audit logs to Splunk. In this setup, the Teleport Event Handler plugin forwards audit logs from Teleport to Splunk's Universal Forwarder, which stores them in Splunk Cloud Platform or Splunk Enterprise for visualization and alerting.

Prerequisites

  • A running Teleport Enterprise cluster, including the Auth Service and Proxy Service. For details on how to set this up, see our Enterprise Getting Started guide.

  • The Enterprise tctl admin tool and tsh client tool version >= 13.0.3, which you can download by visiting your Teleport account.

    tctl version

    Teleport Enterprise v13.0.3 go1.20

    tsh version

    Teleport v13.0.3 go1.20

  • Splunk Cloud Platform or Splunk Enterprise v9.0.1 or above.

  • A Linux host where you will run the Teleport Event Handler plugin and Splunk Universal Forwarder. The Universal Forwarder must be installed on the host.

    If you run the Teleport Event Handler and Universal Forwarder on the same host, there is no need to open a port on the host for ingesting logs. However, if you run the Universal Forwarder on a separate host from the Teleport Event Handler, you will need to open a port on the Universal Forwarder host to traffic from the Teleport Event Handler. This guide assumes that the Universal Forwarder is listening on port 9061.

  • On Splunk Enterprise, port 8088 should be open to traffic from the host running the Teleport Event Handler and Universal Forwarder.

  • Make sure you can connect to Teleport. Log in to your cluster using tsh, then use tctl remotely:

tsh login --proxy=teleport.example.com --user=email@example.com
    tctl status

    Cluster teleport.example.com

    Version 13.0.3

    CA pin sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678

    You can run subsequent tctl commands in this guide on your local machine.

    For full privileges, you can also run tctl commands on your Auth Service host.

Step 1/4. Set up the Teleport Event Handler plugin

The Event Handler plugin is a binary that runs independently of your Teleport cluster. It authenticates to your Teleport cluster and your Splunk Universal Forwarder using mutual TLS. In this section, you will install the Teleport Event Handler plugin on the Linux host where you are running your Universal Forwarder and generate credentials that the plugin will use for authentication.

Install the Teleport Event Handler plugin

Follow the instructions for your environment to install the Teleport Event Handler plugin on your Universal Forwarder host:

Linux:

curl -L -O https://get.gravitational.com/teleport-event-handler-v13.0.3-linux-amd64-bin.tar.gz
tar -zxvf teleport-event-handler-v13.0.3-linux-amd64-bin.tar.gz

We currently only build the Event Handler plugin for amd64 machines. For ARM architectures, you can build from source.
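If the extracted archive places the teleport-event-handler binary in your current directory (archive layouts can vary between releases, so adjust the path if yours differs), move it into your PATH:

sudo mv teleport-event-handler /usr/local/bin/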

macOS:

curl -L -O https://get.gravitational.com/teleport-event-handler-v13.0.3-darwin-amd64-bin.tar.gz
tar -zxvf teleport-event-handler-v13.0.3-darwin-amd64-bin.tar.gz

We currently only build the Event Handler plugin for amd64 machines. If your macOS machine uses Apple silicon, you will need to install Rosetta before you can run the Event Handler plugin. You can also build from source.

Docker:

Ensure that you have Docker installed and running.

docker pull public.ecr.aws/gravitational/teleport-plugin-event-handler:13.0.3

Helm:

To allow Helm to install charts that are hosted in the Teleport Helm repository, use helm repo add:

helm repo add teleport https://charts.releases.teleport.dev

To update the cache of charts from the remote repository, run helm repo update:

helm repo update

From source (Docker):

Ensure that you have Docker installed and running.

Run the following commands to build the plugin:

git clone https://github.com/gravitational/teleport-plugins.git --depth 1
cd teleport-plugins/event-handler/build.assets
make build

You can find the compiled binary within your clone of the teleport-plugins repo at the path event-handler/build/teleport-event-handler.

From source (Go):

You will need Go >= 1.20 installed.

Run the following commands on your Universal Forwarder host:

git clone https://github.com/gravitational/teleport-plugins.git --depth 1
cd teleport-plugins/event-handler
go build

The resulting executable will have the name event-handler. To follow the rest of this guide, rename this file to teleport-event-handler and move it to /usr/local/bin.
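For example, from the teleport-plugins/event-handler directory:

sudo mv event-handler /usr/local/bin/teleport-event-handler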

Generate a starter config file

Generate a configuration file with placeholder values for the Teleport Event Handler plugin. Later in this guide, we will edit the configuration file for your environment.


If you are running the Event Handler binary directly on your Linux host, run the configure command to generate a sample configuration. Replace teleport.example.com:443 with the DNS name and HTTPS port of Teleport's Proxy Service:

./teleport-event-handler configure . teleport.example.com:443

If you are running the Event Handler with Docker, run the configure command via the plugin container. Assign TELEPORT_CLUSTER_ADDRESS to the DNS name and port of your Teleport Auth Service or Proxy Service:

TELEPORT_CLUSTER_ADDRESS=mytenant.teleport.sh:443
docker run -v `pwd`:/opt/teleport-plugin -w /opt/teleport-plugin public.ecr.aws/gravitational/teleport-plugin-event-handler:13.0.3 configure . ${TELEPORT_CLUSTER_ADDRESS?}

If you are deploying the Event Handler with Helm, you'll need the root certificate and the client credentials available as a Kubernetes secret in order to export audit events. Use the following command to create that secret:

kubectl create secret generic teleport-event-handler-client-tls --from-file=ca.crt=ca.crt,client.crt=client.crt,client.key=client.key

This will pack the content of ca.crt, client.crt, and client.key into the secret so the Helm chart can mount them to their appropriate path.

You'll see the following output:

Teleport event handler 0.0.1 07617b0ad0829db043fe779faf1669defdc8d84e

[1] mTLS Fluentd certificates generated and saved to ca.crt, ca.key, server.crt, server.key, client.crt, client.key
[2] Generated sample teleport-event-handler role and user file teleport-event-handler-role.yaml
[3] Generated sample fluentd configuration file fluent.conf
[4] Generated plugin configuration file teleport-event-handler.toml

Follow-along with our getting started guide:

https://goteleport.com/setup/guides/fluentd

The plugin generates several setup files:

ls -l

-rw------- 1 bob bob 1038 Jul 1 11:14 ca.crt

-rw------- 1 bob bob 1679 Jul 1 11:14 ca.key

-rw------- 1 bob bob 1042 Jul 1 11:14 client.crt

-rw------- 1 bob bob 1679 Jul 1 11:14 client.key

-rw------- 1 bob bob 541 Jul 1 11:14 fluent.conf

-rw------- 1 bob bob 1078 Jul 1 11:14 server.crt

-rw------- 1 bob bob 1766 Jul 1 11:14 server.key

-rw------- 1 bob bob 260 Jul 1 11:14 teleport-event-handler-role.yaml

-rw------- 1 bob bob 343 Jul 1 11:14 teleport-event-handler.toml

File(s)                             Purpose
ca.crt and ca.key                   Self-signed CA certificate and private key for Fluentd
server.crt and server.key           Fluentd server certificate and key
client.crt and client.key           Fluentd client certificate and key, all signed by the generated CA
teleport-event-handler-role.yaml    User and role resource definitions for Teleport's event handler
fluent.conf                         Fluentd plugin configuration

We'll re-purpose the files generated for Fluentd in our Universal Forwarder configuration.

Define RBAC resources

The teleport-event-handler configure command generated a file called teleport-event-handler-role.yaml. This file defines a teleport-event-handler role and a user with read-only access to the event API:

kind: role
metadata:
  name: teleport-event-handler
spec:
  allow:
    rules:
      - resources: ['event', 'session']
        verbs: ['list','read']
version: v5
---
kind: user
metadata:
  name: teleport-event-handler
spec:
  roles: ['teleport-event-handler']
version: v2

Move this file to your workstation (or recreate it by pasting the snippet above) and use tctl on your workstation to create the role and the user:

tctl create -f teleport-event-handler-role.yaml

user "teleport-event-handler" has been created

role 'teleport-event-handler' has been created
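If you want to confirm that both resources were created, you can fetch them with tctl:

tctl get users/teleport-event-handler
tctl get roles/teleport-event-handler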

Enable impersonation of the Teleport Event Handler plugin user

In order for the Teleport Event Handler plugin to forward events from your Teleport cluster, it needs signed credentials from the cluster's certificate authority.

The teleport-event-handler user cannot request this itself, and requires another user to impersonate this account in order to request credentials. We will show you how to enable impersonation for your Teleport user so you can retrieve credentials for the Teleport Event Handler.

Create a role that enables your user to impersonate the teleport-event-handler user. First, paste the following YAML document into a file called teleport-event-handler-impersonator.yaml:

kind: role
version: v5
metadata:
  name: teleport-event-handler-impersonator
spec:
  options:
    # max_session_ttl defines the TTL (time to live) of SSH certificates
    # issued to the users with this role.
    max_session_ttl: 10h

  # This section declares a list of resource/verb combinations that are
  # allowed for the users of this role. By default nothing is allowed.
  allow:
    impersonate:
      users: ["teleport-event-handler"]
      roles: ["teleport-event-handler"]

Next, create the role:

tctl create -f teleport-event-handler-impersonator.yaml

Assign the teleport-event-handler-impersonator role to your Teleport user by running the following commands, depending on whether you authenticate as a local Teleport user or via the github, saml, or oidc authentication connectors:

Retrieve your local user's configuration resource:

tctl get users/$(tsh status -f json | jq -r '.active.username') > out.yaml

Edit out.yaml, adding teleport-event-handler-impersonator to the list of existing roles:

  roles:
   - access
   - auditor
   - editor
+  - teleport-event-handler-impersonator

Apply your changes:

tctl create -f out.yaml

Retrieve your github configuration resource:

tctl get github/github --with-secrets > github.yaml

Edit github.yaml, adding teleport-event-handler-impersonator to the teams_to_roles section. The team you will map to this role will depend on how you have designed your organization's RBAC, but it should be the smallest team possible within your organization. This team must also include your user.

Here is an example:

  teams_to_roles:
    - organization: octocats
      team: admins
      roles:
        - access
+       - teleport-event-handler-impersonator

Apply your changes:

tctl create -f github.yaml
Warning

Note the --with-secrets flag in the tctl get command. This adds the value of spec.signing_key_pair.private_key to github.yaml. This is a sensitive value, so take precautions when creating this file and remove it after updating the resource.

Retrieve your saml configuration resource:

tctl get --with-secrets saml/mysaml > saml.yaml

Edit saml.yaml, adding teleport-event-handler-impersonator to the attributes_to_roles section. The attribute you will map to this role will depend on how you have designed your organization's RBAC, but it should be the smallest group possible within your organization. This group must also include your user.

Here is an example:

  attributes_to_roles:
    - name: "groups"
      value: "my-group"
      roles:
        - access
+       - teleport-event-handler-impersonator

Apply your changes:

tctl create -f saml.yaml
Warning

Note the --with-secrets flag in the tctl get command. This adds the value of spec.signing_key_pair.private_key to saml.yaml. This is a sensitive value, so take precautions when creating this file and remove it after updating the resource.

Retrieve your oidc configuration resource:

tctl get oidc/myoidc --with-secrets > oidc.yaml

Edit oidc.yaml, adding teleport-event-handler-impersonator to the claims_to_roles section. The claim you will map to this role will depend on how you have designed your organization's RBAC, but it should be the smallest group possible within your organization. This group must also include your user.

Here is an example:

  claims_to_roles:
    - name: "groups"
      value: "my-group"
      roles:
        - access
+       - teleport-event-handler-impersonator

Apply your changes:

tctl create -f oidc.yaml
Warning

Note the --with-secrets flag in the tctl get command. This adds the value of spec.signing_key_pair.private_key to oidc.yaml. This is a sensitive value, so take precautions when creating this file and remove it after updating the resource.

Log out of your Teleport cluster and log in again to assume the new role.

Export the plugin identity

Like all Teleport users, teleport-event-handler needs signed credentials in order to connect to your Teleport cluster. You will use the tctl auth sign command to request these credentials for your plugin.

The following tctl auth sign command impersonates the teleport-event-handler user, generates signed credentials, and writes an identity file to the local directory:

tctl auth sign --user=teleport-event-handler --out=auth.pem

The plugin connects to the Teleport Auth Service's gRPC endpoint over TLS.

The identity file, auth.pem, includes both TLS and SSH credentials. The plugin uses the SSH credentials to connect to the Proxy Service, which establishes a reverse tunnel connection to the Auth Service. The plugin uses this reverse tunnel, along with your TLS credentials, to connect to the Auth Service's gRPC endpoint.

You will refer to this file later when configuring the plugin.

Certificate Lifetime

By default, tctl auth sign produces certificates with a relatively short lifetime. For production deployments, we suggest using Machine ID to programmatically issue and renew certificates for your plugin. See our Machine ID getting started guide to learn more.

Move the credentials you generated to the host where you are running the Teleport Event Handler plugin.
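For example, assuming you have SSH access to the host (the username and hostname here are placeholders; substitute your own):

scp auth.pem user@forwarder.example.com:~/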

Once the Teleport Event Handler's certificate expires, you will need to renew it by running the tctl auth sign command again.

Step 2/4. Configure the Universal Forwarder

In this step, you will configure the Universal Forwarder to receive audit logs from the Teleport Event Handler plugin and forward them to Splunk. The Event Handler sends audit logs as HTTP POST requests with the content type application/json.

We will assume that you assigned $SPLUNK_HOME to /opt/splunkforwarder when installing the Universal Forwarder.

To find your $SPLUNK_HOME, run the following command to see the location of your Universal Forwarder service definition, which the init system systemd uses to run the Universal Forwarder:

sudo systemctl status SplunkForwarder.service

● SplunkForwarder.service - Systemd service file for Splunk, generated by 'splunk enable boot-start'

Loaded: loaded (/lib/systemd/system/SplunkForwarder.service; enabled; vendor preset: enabled)

Active: active (running) since Fri 2022-10-07 15:57:37 UTC; 2h 18min ago

Main PID: 1772 (splunkd)

Tasks: 53 (limit: 2309)

Memory: 70.8M (limit: 1.8G)

CGroup: /system.slice/SplunkForwarder.service

├─1772 splunkd --under-systemd --systemd-delegate=yes -p 8089 _internal_launch_under_systemd

└─1810 [splunkd pid=1772] splunkd --under-systemd --systemd-delegate=yes -p 8089 _internal_launch_under_systemd [process-runner]

View the file at the path shown in the Loaded: field. Your $SPLUNK_HOME consists of the path segments in the ExecStart line before /bin. In this case, $SPLUNK_HOME is /opt/splunkforwarder/:

ExecStart=/opt/splunkforwarder/bin/splunk _internal_launch_under_systemd
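To print the ExecStart line directly, assuming the service file path shown in the Loaded: field above:

grep ExecStart /lib/systemd/system/SplunkForwarder.service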

Create an index for your audit logs

Create an index for your Teleport audit logs by visiting the home page of the Splunk UI and navigating to Settings > Indexes. Click New Index. Name your index teleport-audit-logs and set the Index Data Type field to "Events".

Creating an Index

The values of the remaining fields, Max raw data size and Searchable retention (days), depend on your organization's resources and practices for log management.

Click Save.

Create a token for the Universal Forwarder

The Universal Forwarder authenticates client traffic using a token. To generate a token, visit the home page of the Splunk UI and navigate to Settings > Data inputs. In the Local inputs table, find the HTTP Event Collector row and click Add new.

Enter a name you can use to recognize the token later so you can manage it, e.g., Teleport Audit Events. Click Next.

Create a Token

In the Input Settings view (above), next to the Source type field, click Select. In the Select Source Type dropdown menu, click Structured, then _json. Splunk will index incoming logs as JSON, which is the format the Event Handler uses to send logs to the Universal Forwarder.

In the Index section, select the teleport-audit-logs index you created earlier. Click Review then view the summary and click Submit. Copy the Token Value field and keep it somewhere safe so you can use it later in this guide.

Prepare a certificate file for the Universal Forwarder

The Universal Forwarder reads its TLS credentials from a single file that contains both an X.509-format certificate and an RSA private key. To prepare this file, run the following commands on the Universal Forwarder host, where server.crt and server.key are two of the files you generated earlier with the teleport-event-handler configure command:

cp server.crt server.pem
cat server.key >> server.pem
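As an optional sanity check, confirm that server.pem begins with a well-formed certificate (openssl reads the first PEM block in the file):

openssl x509 -in server.pem -noout -subject -enddate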

Allow the Universal Forwarder to access the certificate file:

sudo chown splunk:splunk server.pem

Configure the HTTP Event Collector

On your Universal Forwarder host, create a file at /opt/splunkforwarder/etc/system/local/inputs.conf with the following content:

[http]
port = 9061
disabled = false
serverCert = server.pem
sslPassword =
requireClientCert = true

[http://audit]
token =
index = teleport-audit-logs
allowQueryStringAuth = true

This configuration enables the HTTP input, which will listen on port 9061, receive logs from the Teleport Event Handler plugin, and assign them to the teleport-audit-logs index.

Assign serverCert to the path to the server.pem file you generated earlier.

To assign sslPassword, run the following command in the directory that contains fluent.conf:

cat fluent.conf | grep passphrase

private_key_passphrase "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"

Copy the passphrase and paste it as the value of sslPassword.

The token field in the [http://audit] section enables the Universal Forwarder to collect logs from HTTP clients that present a token. Assign token to the token you generated earlier.

allowQueryStringAuth enables the Teleport Event Handler to include the token in a query string, rather than the Authorization HTTP header (the default). This is necessary because the Teleport Event Handler does not currently support custom HTTP headers.
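After you restart the Universal Forwarder at the end of this step, you can smoke-test the input with curl. This is a sketch that assumes you run it from the directory containing the certificates you generated earlier, and that you replace MYTOKEN with your token:

curl --cacert ca.crt --cert client.crt --key client.key \
  "https://localhost:9061/services/collector/raw?token=MYTOKEN" \
  -d '{"event": "test"}'

A JSON success response indicates that the input accepts authenticated traffic.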

Configure TLS

To configure secure communications between the Universal Forwarder and the Teleport Event Handler, create a file called /opt/splunkforwarder/etc/system/local/server.conf with the following content (if this file already exists, add the following field in the [sslConfig] section):

[sslConfig]
sslRootCAPath =

Assign sslRootCAPath to the path of the ca.crt file you generated earlier.

Ensure that the Universal Forwarder can read the CA certificate:

sudo chmod +r ca.crt

Configure an output

Instruct the Universal Forwarder to send the logs it collects to Splunk.

Create a file at the path /opt/splunkforwarder/etc/system/local/outputs.conf with the following content:

[tcpout]
sslVerifyServerCert = true

[httpout]
httpEventCollectorToken =
uri =

Fill in httpEventCollectorToken with the token you generated earlier.

Assign uri to the following, replacing MYHOST with the hostname of your Splunk instance and 8088 with the port you are using for your Splunk HTTP Event Collector.

https://MYHOST:8088

The format of the URL to use will depend on your Splunk deployment. See the list of acceptable URL formats in the Splunk documentation.

Note that you must only include the scheme, host, and port of the URL. The Universal Forwarder will append the correct URL path of the Splunk ingestion API when forwarding logs.
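To confirm that your Splunk instance's HTTP Event Collector is reachable from the Universal Forwarder host, you can query its health endpoint, available in recent Splunk versions (replace MYHOST and the port as above):

curl https://MYHOST:8088/services/collector/health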

Finally, restart the Universal Forwarder:

sudo systemctl restart SplunkForwarder

Step 3/4. Run the Teleport Event Handler plugin

Now that you have configured your Universal Forwarder to receive logs via HTTP and forward them to Splunk, you will ensure that the Teleport Event Handler plugin is configured to authenticate to the Universal Forwarder and your Teleport cluster, then run the Teleport Event Handler.

Complete the Teleport Event Handler configuration

Earlier, we generated a file called teleport-event-handler.toml to configure the Teleport Event Handler plugin. This file includes settings similar to the following:

storage = "./storage"
timeout = "10s"
batch = 20
namespace = "default"

[forward.fluentd]
ca = "/home/ca.crt"
cert = "/home/client.crt"
key = "/home/client.key"
url = "https://localhost:9061/test.log"

[teleport]
addr = "example.teleport.com:443"
identity = "identity"

Update the configuration file as follows.

Change forward.fluentd.url to the following:

url = "https://localhost:9061/services/collector/raw?token=MYTOKEN"

Ensure the URL includes the scheme, host, and port of your Universal Forwarder's HTTP input, plus the URL path that the Universal Forwarder uses for raw data (/services/collector/raw).

Replace MYTOKEN with the token you generated earlier for the Splunk Universal Forwarder. If you are running the Universal Forwarder and Event Handler on separate hosts, replace localhost with your Universal Forwarder's IP address or domain name.

Change forward.fluentd.session-url to the same value as forward.fluentd.url, but with the query parameter key &noop= appended to the end:

session-url = "https://localhost:9061/services/collector/raw?token=MYTOKEN&noop="

For audit logs related to Teleport sessions, the Teleport Event Handler appends routing information to the URL that our HTTP input configuration does not use. Adding the noop query parameter causes the Teleport Event Handler to append the routing information as the parameter's value so the Universal Forwarder can discard it.

Next, edit the teleport section of the configuration as follows:

addr: Include the hostname and HTTPS port of your Teleport Proxy Service or Teleport Enterprise Cloud tenant (e.g., teleport.example.com:443 or mytenant.teleport.sh:443).

identity: Fill this in with the path to the identity file you exported earlier.

client_key, client_crt, root_cas: Comment these out, since we are not using them in this configuration.

If you are deploying with Helm, edit the chart values instead:

address: Include the hostname and HTTPS port of your Teleport Proxy Service or Teleport Enterprise Cloud tenant (e.g., teleport.example.com:443 or mytenant.teleport.sh:443).

identitySecretName: Fill in the identitySecretName field with the name of the Kubernetes secret you created earlier.
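For the Linux-host setup in this guide, the completed file might look like the following sketch. The hostnames, token, and file paths are examples; substitute your own values:

storage = "./storage"
timeout = "10s"
batch = 20
namespace = "default"

[forward.fluentd]
ca = "/home/ca.crt"
cert = "/home/client.crt"
key = "/home/client.key"
url = "https://localhost:9061/services/collector/raw?token=MYTOKEN"
session-url = "https://localhost:9061/services/collector/raw?token=MYTOKEN&noop="

[teleport]
addr = "teleport.example.com:443"
identity = "auth.pem"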

Ensure that the Teleport Event Handler can read the identity file:

chmod +r auth.pem

Start the Teleport Event Handler

Start the Teleport Event Handler as a daemon. To do so, create a systemd service definition at the path /usr/lib/systemd/system/teleport-event-handler.service with the following content:

[Unit]
Description=Teleport Event Handler
After=network.target

[Service]
Type=simple
Restart=on-failure
ExecStart=/usr/local/bin/teleport-event-handler start --config=/etc/teleport-event-handler.toml
ExecReload=/bin/kill -HUP $MAINPID
PIDFile=/run/teleport-event-handler.pid

[Install]
WantedBy=multi-user.target

Enable and start the plugin:

sudo systemctl enable teleport-event-handler
sudo systemctl start teleport-event-handler
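To confirm that the plugin is running:

sudo systemctl status teleport-event-handler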

When you run the start command, you can choose the time from which the Teleport Event Handler begins exporting events. This example starts exporting events from May 5th, 2021:

teleport-event-handler start --config teleport-event-handler.toml --start-time "2021-05-05T00:00:00Z"

You can only set the start time once, when first running the Teleport Event Handler. If you want to change the time frame later, remove the plugin state directory that you specified in the storage field of the handler's configuration file.
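For example, assuming the storage field points to ./storage relative to the directory the plugin runs from, a sketch of resetting the state:

sudo systemctl stop teleport-event-handler
rm -rf ./storage
sudo systemctl start teleport-event-handler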

Once the Teleport Event Handler starts, you will see notifications about scanned and forwarded events:

sudo journalctl -u teleport-event-handler

DEBU Event sent id:f19cf375-4da6-4338-bfdc-e38334c60fd1 index:0 ts:2022-09-21 18:51:04.849 +0000 UTC type:cert.create event-handler/app.go:140

...

Step 4/4. Visualize your audit logs in Splunk

Since our setup forwards audit logs to Splunk in the structured JSON format, Splunk automatically indexes them, so fields will be available immediately for use in visualizations. You can use these fields to create dashboards that track the way users are interacting with your Teleport cluster.

For example, from the Splunk UI home page, navigate to Search & Reporting > Dashboards > Create New Dashboard. Enter "Teleport Audit Log Types" for the title of your dashboard and click Classic Dashboards. Click Create, then, in the Edit Dashboard view, click Add Panel.

In the Add Panel sidebar, click New > Column Chart. For the Search String field, enter the following:

index="teleport-audit-logs" | timechart count by event

Once you click Add to Dashboard you will see a count of Teleport event types over time, which gives you a general sense of how users are interacting with Teleport:

Event Types over Time
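You can build similar panels around other fields. For example, assuming your events include a user field (most Teleport audit events do), this search charts activity per user:

index="teleport-audit-logs" | timechart count by user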

Troubleshooting connection issues

If the Teleport Event Handler is displaying error logs while connecting to your Teleport cluster, ensure that:

  • The certificate the Teleport Event Handler is using to connect to your Teleport cluster is not past its expiration date. The certificate's lifetime is set by the --ttl flag in the tctl auth sign command, and is 12 hours by default.
  • Your Teleport Event Handler configuration file (teleport-event-handler.toml) provides the correct host and port for the Teleport Proxy Service.

Next steps

Now that you are exporting your audit logs to Splunk, consult our audit log reference so you can plan visualizations and alerts.

In this guide, we made use of impersonation to supply credentials to the Teleport Event Handler to communicate with your Teleport cluster. To learn more about impersonation, read our guide.

While this guide uses the tctl auth sign command to issue credentials for the Teleport Event Handler, production clusters should use Machine ID for safer, more reliable renewals. Read our guide to getting started with Machine ID.