Export Teleport Audit Events to Splunk
Teleport's Event Handler plugin receives audit logs from the Teleport Auth Service and forwards them to your log management solution, letting you perform historical analysis, detect unusual behavior, and form a better understanding of how users interact with your Teleport cluster.
In this guide, we will show you how to configure the Teleport Event Handler plugin to send your Teleport audit logs to Splunk. In this setup, the Teleport Event Handler plugin forwards audit logs from Teleport to Splunk's Universal Forwarder, which stores them in Splunk Cloud Platform or Splunk Enterprise for visualization and alerting.
Prerequisites
- A running Teleport cluster version 15.4.22 or above. If you want to get started with Teleport, sign up for a free trial or set up a demo environment.
- The tctl admin tool and tsh client tool. On Teleport Enterprise, you must use the Enterprise version of tctl, which you can download from your Teleport account workspace. Otherwise, visit Installation for instructions on downloading tctl and tsh for Teleport Community Edition.
  Recommended: Configure Machine ID to provide short-lived Teleport credentials to the plugin. Before following this guide, follow a Machine ID deployment guide to run the tbot binary on your infrastructure.
- Splunk Cloud Platform or Splunk Enterprise v9.0.1 or above.
- A Linux host where you will run the Teleport Event Handler plugin and Splunk Universal Forwarder. The Universal Forwarder must be installed on the host.

  Running the Teleport Event Handler and Universal Forwarder on separate hosts

  If you run the Teleport Event Handler and Universal Forwarder on the same host, there is no need to open a port on the host for ingesting logs. However, if you run the Universal Forwarder on a separate host from the Teleport Event Handler, you will need to open a port on the Universal Forwarder host to traffic from the Teleport Event Handler. This guide assumes that the Universal Forwarder is listening on port 9061.

- On Splunk Enterprise, port 8088 should be open to traffic from the host running the Teleport Event Handler and Universal Forwarder.
- To check that you can connect to your Teleport cluster, sign in with tsh login, then verify that you can run tctl commands using your current credentials. tctl is supported on macOS and Linux machines. For example:

  $ tsh login --proxy=teleport.example.com [email protected]
  $ tctl status
  # Cluster teleport.example.com
  # Version 15.4.22
  # CA pin sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678

  If you can connect to the cluster and run the tctl status command, you can use your current credentials to run subsequent tctl commands from your workstation. If you host your own Teleport cluster, you can also run tctl commands on the computer that hosts the Teleport Auth Service for full permissions.
Step 1/4. Set up the Teleport Event Handler plugin
The Event Handler plugin is a binary that runs independently of your Teleport cluster. It authenticates to your Teleport cluster and your Splunk Universal Forwarder using mutual TLS. In this section, you will install the Teleport Event Handler plugin on the Linux host where you are running your Universal Forwarder and generate credentials that the plugin will use for authentication.
Install the Teleport Event Handler plugin
Follow the instructions for your environment to install the Teleport Event Handler plugin on your Universal Forwarder host:
- Linux
- macOS
- Docker
- Helm
- Build via Go
$ curl -L -O https://get.gravitational.com/teleport-event-handler-v15.4.22-linux-amd64-bin.tar.gz
$ tar -zxvf teleport-event-handler-v15.4.22-linux-amd64-bin.tar.gz
$ sudo ./teleport-event-handler/install
We currently only build the Event Handler plugin for amd64 machines. For ARM architecture, you can build from source.
$ curl -L -O https://get.gravitational.com/teleport-event-handler-v15.4.22-darwin-amd64-bin.tar.gz
$ tar -zxvf teleport-event-handler-v15.4.22-darwin-amd64-bin.tar.gz
$ sudo ./teleport-event-handler/install
We currently only build the Event Handler plugin for amd64 machines. If your macOS machine uses Apple silicon, you will need to install Rosetta before you can run the Event Handler plugin. You can also build from source.
Ensure that you have Docker installed and running.
$ docker pull public.ecr.aws/gravitational/teleport-plugin-event-handler:15.4.22
To allow Helm to install charts that are hosted in the Teleport Helm repository, use helm repo add:
$ helm repo add teleport https://charts.releases.teleport.dev
To update the cache of charts from the remote repository, run helm repo update:
$ helm repo update
You will need Go >= 1.21 installed.
Run the following commands on your Universal Forwarder host:
$ git clone https://github.com/gravitational/teleport.git --depth 1 -b branch/v15
$ cd teleport/integrations/event-handler
$ git checkout 15.4.22
$ make build
The resulting executable will have the name event-handler. To follow the rest of this guide, rename this file to teleport-event-handler and move it to /usr/local/bin.
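Whichever installation method you used, you can sanity-check the result by printing the plugin's version. This assumes the binary is on your PATH and supports the version subcommand, which recent releases do:

$ teleport-event-handler version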
Generate a starter config file
Generate a configuration file with placeholder values for the Teleport Event Handler plugin. Later in this guide, we will edit the configuration file for your environment.
- Cloud-Hosted
- Self-Hosted
- Helm Chart
Run the configure command to generate a sample configuration. Replace mytenant.teleport.sh with the DNS name of your Teleport Enterprise Cloud tenant:
$ teleport-event-handler configure . mytenant.teleport.sh:443
Run the configure command to generate a sample configuration. Replace teleport.example.com:443 with the DNS name and HTTPS port of Teleport's Proxy Service:
$ teleport-event-handler configure . teleport.example.com:443
Run the configure command to generate a sample configuration. Assign TELEPORT_CLUSTER_ADDRESS to the DNS name and port of your Teleport Auth Service or Proxy Service:
$ TELEPORT_CLUSTER_ADDRESS=mytenant.teleport.sh:443
$ docker run -v `pwd`:/opt/teleport-plugin -w /opt/teleport-plugin public.ecr.aws/gravitational/teleport-plugin-event-handler:15.4.22 configure . ${TELEPORT_CLUSTER_ADDRESS?}
In order to export audit events, you'll need to have the root certificate and the client credentials available as a secret. Use the following command to create that secret in Kubernetes:
$ kubectl create secret generic teleport-event-handler-client-tls --from-file=ca.crt=ca.crt,client.crt=client.crt,client.key=client.key
This will pack the contents of ca.crt, client.crt, and client.key into the secret so the Helm chart can mount them to their appropriate paths.
You'll see the following output:
Teleport event handler 15.4.22
[1] mTLS Fluentd certificates generated and saved to ca.crt, ca.key, server.crt, server.key, client.crt, client.key
[2] Generated sample teleport-event-handler role and user file teleport-event-handler-role.yaml
[3] Generated sample fluentd configuration file fluent.conf
[4] Generated plugin configuration file teleport-event-handler.toml
The plugin generates several setup files:
$ ls -l
# -rw------- 1 bob bob 1038 Jul 1 11:14 ca.crt
# -rw------- 1 bob bob 1679 Jul 1 11:14 ca.key
# -rw------- 1 bob bob 1042 Jul 1 11:14 client.crt
# -rw------- 1 bob bob 1679 Jul 1 11:14 client.key
# -rw------- 1 bob bob 541 Jul 1 11:14 fluent.conf
# -rw------- 1 bob bob 1078 Jul 1 11:14 server.crt
# -rw------- 1 bob bob 1766 Jul 1 11:14 server.key
# -rw------- 1 bob bob 260 Jul 1 11:14 teleport-event-handler-role.yaml
# -rw------- 1 bob bob 343 Jul 1 11:14 teleport-event-handler.toml
| File(s) | Purpose |
|---|---|
| ca.crt and ca.key | Self-signed CA certificate and private key for Fluentd |
| server.crt and server.key | Fluentd server certificate and key |
| client.crt and client.key | Fluentd client certificate and key, all signed by the generated CA |
| teleport-event-handler-role.yaml | User and role resource definitions for Teleport's event handler |
| fluent.conf | Fluentd plugin configuration |
Running the Event Handler separately from the log forwarder
This guide assumes that you are running the Event Handler on the same host or Kubernetes pod as your log forwarder. If you are not, you will need to instruct the Event Handler to generate mTLS certificates for subjects besides localhost. To do this, use the --cn and --dns-names flags of the teleport-event-handler configure command.
For example, if your log forwarder is addressable at forwarder.example.com and the Event Handler at handler.example.com, you would run the following configure command:
$ teleport-event-handler configure --cn=handler.example.com --dns-names=forwarder.example.com
The command generates client and server certificates with the subjects set to the value of --cn.

The --dns-names flag accepts a comma-separated list of DNS names. It will append subject alternative names (SANs) to the server certificate (the one you will provide to your log forwarder) for each DNS name in the list. The Event Handler looks up each DNS name before appending it as a SAN and exits with an error if the lookup fails.
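To confirm which subjects and SANs ended up in the generated server certificate, you can inspect it with openssl (a standard system tool, not part of the plugin):

$ openssl x509 -in server.crt -noout -text | grep -A1 'Subject Alternative Name'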
We'll re-purpose the files generated for Fluentd in our Universal Forwarder configuration.
Define RBAC resources
The teleport-event-handler configure command generated a file called teleport-event-handler-role.yaml. This file defines a teleport-event-handler role and a user with read-only access to the event API:
kind: role
metadata:
name: teleport-event-handler
spec:
allow:
rules:
- resources: ['event', 'session']
verbs: ['list','read']
version: v5
---
kind: user
metadata:
name: teleport-event-handler
spec:
roles: ['teleport-event-handler']
version: v2
Move this file to your workstation (or recreate it by pasting the snippet above) and use tctl on your workstation to create the role and the user:
$ tctl create -f teleport-event-handler-role.yaml
# user "teleport-event-handler" has been created
# role 'teleport-event-handler' has been created
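You can confirm that both resources now exist in your cluster:

$ tctl get role/teleport-event-handler
$ tctl get user/teleport-event-handler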
Enable issuing of credentials for the Event Handler role
- Machine ID
- Long-lived identity files
With the role created, you now need to allow the Machine ID bot to produce credentials for this role.
This can be done with tctl, replacing my-bot with the name of your bot:
$ tctl bots update my-bot --add-roles teleport-event-handler
In order for the Event Handler plugin to forward events from your Teleport
cluster, it needs signed credentials from the cluster's certificate authority.
The teleport-event-handler user cannot request this itself, and requires another user to impersonate this account in order to request credentials.
Create a role that enables your user to impersonate the teleport-event-handler user. First, paste the following YAML document into a file called teleport-event-handler-impersonator.yaml:
kind: role
version: v5
metadata:
name: teleport-event-handler-impersonator
spec:
options:
# max_session_ttl defines the TTL (time to live) of SSH certificates
# issued to the users with this role.
max_session_ttl: 10h
# This section declares a list of resource/verb combinations that are
# allowed for the users of this role. By default nothing is allowed.
allow:
impersonate:
users: ["teleport-event-handler"]
roles: ["teleport-event-handler"]
Next, create the role:
$ tctl create -f teleport-event-handler-impersonator.yaml
Add this role to the user that generates signed credentials for the Event Handler:
Assign the teleport-event-handler-impersonator role to your Teleport user by running the appropriate commands for your authentication provider:
- Local User
- GitHub
- SAML
- OIDC
- Retrieve your local user's roles as a comma-separated list:

  $ ROLES=$(tsh status -f json | jq -r '.active.roles | join(",")')

- Edit your local user to add the new role:

  $ tctl users update $(tsh status -f json | jq -r '.active.username') \
    --set-roles "${ROLES?},teleport-event-handler-impersonator"

- Sign out of the Teleport cluster and sign in again to assume the new role.
- Retrieve your github authentication connector:

  $ tctl get github/github --with-secrets > github.yaml

  Note that the --with-secrets flag adds the value of spec.signing_key_pair.private_key to the github.yaml file. Because this key contains a sensitive value, you should remove the github.yaml file immediately after updating the resource.

- Edit github.yaml, adding teleport-event-handler-impersonator to the teams_to_roles section.

  The team you should map to this role depends on how you have designed your organization's role-based access controls (RBAC). However, the team must include your user account and should be the smallest team possible within your organization.

  Here is an example:

    teams_to_roles:
      - organization: octocats
        team: admins
        roles:
          - access
  +       - teleport-event-handler-impersonator

- Apply your changes:

  $ tctl create -f github.yaml

- Sign out of the Teleport cluster and sign in again to assume the new role.
- Retrieve your saml configuration resource:

  $ tctl get --with-secrets saml/mysaml > saml.yaml

  Note that the --with-secrets flag adds the value of spec.signing_key_pair.private_key to the saml.yaml file. Because this key contains a sensitive value, you should remove the saml.yaml file immediately after updating the resource.

- Edit saml.yaml, adding teleport-event-handler-impersonator to the attributes_to_roles section.

  The attribute you should map to this role depends on how you have designed your organization's role-based access controls (RBAC). However, the group must include your user account and should be the smallest group possible within your organization.

  Here is an example:

    attributes_to_roles:
      - name: "groups"
        value: "my-group"
        roles:
          - access
  +       - teleport-event-handler-impersonator

- Apply your changes:

  $ tctl create -f saml.yaml

- Sign out of the Teleport cluster and sign in again to assume the new role.
- Retrieve your oidc configuration resource:

  $ tctl get oidc/myoidc --with-secrets > oidc.yaml

  Note that the --with-secrets flag adds the value of spec.signing_key_pair.private_key to the oidc.yaml file. Because this key contains a sensitive value, you should remove the oidc.yaml file immediately after updating the resource.

- Edit oidc.yaml, adding teleport-event-handler-impersonator to the claims_to_roles section.

  The claim you should map to this role depends on how you have designed your organization's role-based access controls (RBAC). However, the group must include your user account and should be the smallest group possible within your organization.

  Here is an example:

    claims_to_roles:
      - name: "groups"
        value: "my-group"
        roles:
          - access
  +       - teleport-event-handler-impersonator

- Apply your changes:

  $ tctl create -f oidc.yaml

- Sign out of the Teleport cluster and sign in again to assume the new role.
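Whichever provider you use, you can verify after signing in again that the impersonator role is active. This check assumes jq is installed, as in the Local User steps above:

$ tsh status -f json | jq -r '.active.roles[]'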
Export the plugin identity
Give the plugin access to a Teleport identity file. We recommend using Machine ID for this in order to produce short-lived identity files that are less dangerous if exfiltrated, though in demo deployments, you can generate longer-lived identity files with tctl:
- Machine ID
- Long-lived identity files
Configure tbot with an output that will produce the credentials needed by the plugin. As the plugin will be accessing the Teleport API, the correct output type to use is identity.
For this guide, the directory destination will be used. This will write these credentials to a specified directory on disk. Ensure that this directory can be written to by the Linux user that tbot runs as, and that it can be read by the Linux user that the plugin will run as.
Modify your tbot configuration to add an identity output.

If running tbot on a Linux server, use the directory output to write identity files to the /opt/machine-id directory:
outputs:
- type: identity
destination:
type: directory
# For this guide, /opt/machine-id is used as the destination directory.
# You may wish to customize this. Multiple outputs cannot share the same
# destination.
path: /opt/machine-id
If running tbot on Kubernetes, write the identity file to a Kubernetes secret instead:
outputs:
- type: identity
destination:
type: kubernetes_secret
name: teleport-event-handler-identity
If operating tbot as a background service, restart it. If running tbot in one-shot mode, execute it now.
You should now see an identity file under /opt/machine-id or a Kubernetes secret named teleport-event-handler-identity. This contains the private key and signed certificates needed by the plugin to authenticate with the Teleport Auth Service.
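As a quick check that tbot produced the expected artifact, list the output, using the path and secret name configured above:

# On a Linux server:
$ ls -l /opt/machine-id/identity
# On Kubernetes:
$ kubectl get secret teleport-event-handler-identity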
Like all Teleport users, teleport-event-handler needs signed credentials in order to connect to your Teleport cluster. You will use the tctl auth sign command to request these credentials.
The following tctl auth sign command impersonates the teleport-event-handler user, generates signed credentials, and writes an identity file to the local directory:
$ tctl auth sign --user=teleport-event-handler --out=identity
The plugin connects to the Teleport Auth Service's gRPC endpoint over TLS. The identity file, identity, includes both TLS and SSH credentials. The plugin uses the SSH credentials to connect to the Proxy Service, which establishes a reverse tunnel connection to the Auth Service. The plugin uses this reverse tunnel, along with your TLS credentials, to connect to the Auth Service's gRPC endpoint.
Certificate Lifetime
By default, tctl auth sign produces certificates with a relatively short lifetime. For production deployments, we suggest using Machine ID to programmatically issue and renew certificates for your plugin. See our Machine ID getting started guide to learn more.
Note that you cannot issue certificates that are valid longer than your existing credentials. For example, to issue certificates with a 1000-hour TTL, you must be logged in with a session that is valid for at least 1000 hours. This means your user must have a role allowing a max_session_ttl of at least 1000 hours (60000 minutes), and you must specify a --ttl when logging in:
$ tsh login --proxy=teleport.example.com --ttl=60060
If you are running the plugin on a Linux server, create a data directory to hold certificate files for the plugin:
$ sudo mkdir -p /var/lib/teleport/api-credentials
$ sudo mv identity /var/lib/teleport/api-credentials
If you are running the plugin on Kubernetes, create a Kubernetes secret that contains the Teleport identity file:
$ kubectl -n teleport create secret generic --from-file=identity teleport-event-handler-identity
Once the Teleport credentials expire, you will need to renew them by running the tctl auth sign command again.
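If you stick with long-lived identity files in a demo environment, you could automate the renewal with a scheduled job. The entry below is a hypothetical sketch: it assumes tctl runs with admin credentials (for example, on a self-hosted Auth Service host) and uses the paths from this guide.

# Hypothetical crontab entry: re-sign the identity file daily with a 24-hour
# TTL, then restart the Event Handler so it picks up the new credentials.
0 0 * * * tctl auth sign --user=teleport-event-handler --ttl=24h --out=/var/lib/teleport/api-credentials/identity && systemctl restart teleport-event-handler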
Step 2/4. Configure the Universal Forwarder
In this step, you will configure the Universal Forwarder to receive audit logs from the Teleport Event Handler plugin and forward them to Splunk. The Event Handler sends audit logs as HTTP POST requests with the content type application/json.

We will assume that you assigned $SPLUNK_HOME to /opt/splunkforwarder when installing the Universal Forwarder.
Finding your $SPLUNK_HOME
To find your $SPLUNK_HOME, run the following command to see the location of your Universal Forwarder service definition, which the systemd init system uses to run the Universal Forwarder:
$ sudo systemctl status SplunkForwarder.service
● SplunkForwarder.service - Systemd service file for Splunk, generated by 'splunk enable boot-start'
Loaded: loaded (/lib/systemd/system/SplunkForwarder.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2022-10-07 15:57:37 UTC; 2h 18min ago
Main PID: 1772 (splunkd)
Tasks: 53 (limit: 2309)
Memory: 70.8M (limit: 1.8G)
CGroup: /system.slice/SplunkForwarder.service
├─1772 splunkd --under-systemd --systemd-delegate=yes -p 8089 _internal_launch_under_systemd
└─1810 [splunkd pid=1772] splunkd --under-systemd --systemd-delegate=yes -p 8089 _internal_launch_under_systemd [process-runner]
View the file at the path shown in the Loaded: field. Your $SPLUNK_HOME will include the filepath segments in ExecStart before /bin. In this case, $SPLUNK_HOME is /opt/splunkforwarder/:
ExecStart=/opt/splunkforwarder/bin/splunk _internal_launch_under_systemd
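Alternatively, you can extract the ExecStart line in one step, using the service file path shown in the Loaded: field:

$ grep ExecStart /lib/systemd/system/SplunkForwarder.service
# ExecStart=/opt/splunkforwarder/bin/splunk _internal_launch_under_systemd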
Create an index for your audit logs
Create an index for your Teleport audit logs by visiting the home page of the
Splunk UI and navigating to Settings > Indexes. Click New Index.
Name your index teleport-audit-logs and assign the Index Data Type field to "Events".

The values of the remaining fields, Max raw data size and Searchable retention (days), depend on your organization's resources and practices for log management.

Click Save.
Create a token for the Universal Forwarder
The Universal Forwarder authenticates client traffic using a token. To generate a token, visit the home page of the Splunk UI. Navigate to Settings > Data inputs. In the Local inputs table, find the HTTP Event Collector row and click Add new.

Enter a name you can use to recognize the token later so you can manage it, e.g., Teleport Audit Events. Click Next.
In the Input Settings view, next to the Source type field, click Select. In the Select Source Type dropdown menu, click Structured, then _json. Splunk will index incoming logs as JSON, which is the format the Event Handler uses to send logs to the Universal Forwarder.
In the Index section, select the teleport-audit-logs index you created earlier. Click Review, then view the summary and click Submit. Copy the Token Value field and keep it somewhere safe so you can use it later in this guide.
Prepare a certificate file for the Universal Forwarder
The Universal Forwarder secures client connections using a file that contains both an X.509-format certificate and an RSA private key. To prepare this file, run the following commands on the Universal Forwarder host, where server.crt and server.key are two of the files you generated earlier with the teleport-event-handler configure command:
$ cp server.crt server.pem
$ cat server.key >> server.pem
Allow the Universal Forwarder to access the certificate file:
$ sudo chown splunk:splunk server.pem
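To sanity-check the combined file, confirm that the certificate still parses and note its validity window (openssl is assumed to be available on the host):

$ openssl x509 -in server.pem -noout -subject -dates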
Configure the HTTP Event Collector
On your Universal Forwarder host, create a file at
/opt/splunkforwarder/etc/system/local/inputs.conf
with the following content:
[http]
port = 9061
disabled = false
serverCert = server.pem
sslPassword =
requireClientCert = true
[http://audit]
token =
index = teleport-audit-logs
allowQueryStringAuth = true
This configuration enables the HTTP input, which will listen on port 9061 and receive logs from the Teleport Event Handler plugin, assigning them to the teleport-audit-logs index.
Assign serverCert to the path of the server.pem file you generated earlier.
To assign sslPassword, run the following command in the directory that contains fluent.conf:
$ cat fluent.conf | grep passphrase
private_key_passphrase "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"
Copy the passphrase and paste it as the value of sslPassword.
The token field in the [http://audit] section enables the Universal Forwarder to collect logs from HTTP clients that present a token. Assign token to the token you generated earlier.
allowQueryStringAuth enables the Teleport Event Handler to include the token in a query string, rather than the Authorization HTTP header (the default). This is necessary because the Teleport Event Handler does not currently support custom HTTP headers.
Configure TLS
To configure secure communications between the Universal Forwarder and the Teleport Event Handler, create a file called /opt/splunkforwarder/etc/system/local/server.conf with the following content (if this file already exists, add the following field in the [sslConfig] section):
[sslConfig]
sslRootCAPath =
Assign sslRootCAPath to the path of the ca.crt file you generated earlier.
Ensure that the Universal Forwarder can read the CA certificate:
$ sudo chmod +r ca.crt
Configure an output
Instruct the Universal Forwarder to send the logs it collects to Splunk.
Create a file at the path /opt/splunkforwarder/etc/system/local/outputs.conf
with the following content:
[tcpout]
sslVerifyServerCert = true
[httpout]
httpEventCollectorToken =
uri =
Fill in httpEventCollectorToken with the token you generated earlier.
Assign uri to the following, replacing MYHOST with the hostname of your Splunk instance and 8088 with the port you are using for your Splunk HTTP Event Collector:
https://MYHOST:8088
The format of the URL to use will depend on your Splunk deployment. See the list of acceptable URL formats in the Splunk documentation.
Note that you must only include the scheme, host, and port of the URL. The Universal Forwarder will append the correct URL path of the Splunk ingestion API when forwarding logs.
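Before restarting, you can have the Universal Forwarder validate and print the effective output settings with its bundled btool utility:

$ sudo /opt/splunkforwarder/bin/splunk btool outputs list --debug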
Finally, restart the Universal Forwarder:
$ sudo systemctl restart SplunkForwarder
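With the forwarder restarted, you can send a test event to the HTTP input directly. This is a hedged smoke test, run from the directory containing the generated certificates; it assumes you kept the default localhost subject, and MYTOKEN is the HEC token from earlier. If your client key is passphrase-protected, add --pass with the passphrase from fluent.conf:

$ curl --cacert ca.crt --cert client.crt --key client.key \
    'https://localhost:9061/services/collector/raw?token=MYTOKEN' \
    -d '{"event": "smoke test"}'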
Step 3/4. Run the Teleport Event Handler plugin
Now that you have configured your Universal Forwarder to receive logs via HTTP and forward them to Splunk, you will configure the Teleport Event Handler plugin to authenticate to the Universal Forwarder and to your Teleport cluster, then run the plugin.
Configure the Teleport Event Handler
In this section, you will configure the Teleport Event Handler for your environment.
- Linux server
- Helm Chart
Earlier, we generated a file called teleport-event-handler.toml to configure the Fluentd event handler. This file includes settings similar to the following:
storage = "./storage"
timeout = "10s"
batch = 20
namespace = "default"
# The window size configures the duration of the time window for the event handler
# to request events from Teleport. By default, this is set to 24 hours.
# Reduce the window size if the events backend cannot manage the event volume
# for the default window size.
# The window size should be specified as a duration string, parsed by Go's time.ParseDuration.
window-size = "24h"
[forward.fluentd]
ca = "/home/bob/event-handler/ca.crt"
cert = "/home/bob/event-handler/client.crt"
key = "/home/bob/event-handler/client.key"
url = "https://fluentd.example.com:8888/test.log"
session-url = "https://fluentd.example.com:8888/session"
[teleport]
addr = "example.teleport.com:443"
identity = "identity"
The sample configuration points at a Fluentd deployment (fluentd.example.com); in the steps below, you will modify it so the Event Handler sends logs to your Splunk Universal Forwarder instead.
Use the following template to create teleport-plugin-event-handler-values.yaml:
eventHandler:
storagePath: "./storage"
timeout: "10s"
batch: 20
namespace: "default"
# The window size configures the duration of the time window for the event handler
# to request events from Teleport. By default, this is set to 24 hours.
# Reduce the window size if the events backend cannot manage the event volume
# for the default window size.
# The window size should be specified as a duration string, parsed by Go's time.ParseDuration.
windowSize: "24h"
teleport:
address: "example.teleport.com:443"
identitySecretName: teleport-event-handler-identity
identitySecretPath: identity
fluentd:
url: "https://fluentd.fluentd.svc.cluster.local/events.log"
sessionUrl: "https://fluentd.fluentd.svc.cluster.local/session.log"
certificate:
secretName: "teleport-event-handler-client-tls"
caPath: "ca.crt"
certPath: "client.crt"
keyPath: "client.key"
persistentVolumeClaim:
enabled: true
Update the configuration file as follows.

Change forward.fluentd.url to the following:
url = "https://localhost:9061/services/collector/raw?token=MYTOKEN"
Ensure the URL includes the scheme, host, and port of your Universal Forwarder's HTTP input, plus the URL path that the Universal Forwarder uses for raw data (/services/collector/raw).
Replace MYTOKEN with the token you generated earlier for the Splunk Universal Forwarder. If you are running the Universal Forwarder and Event Handler on separate hosts, replace localhost with your Universal Forwarder's IP address or domain name.
Change forward.fluentd.session-url to the same value as forward.fluentd.url, but with the query parameter key &noop= appended to the end:
session-url = "https://localhost:9061/services/collector/raw?token=MYTOKEN&noop="
For audit logs related to Teleport sessions, the Teleport Event Handler appends routing information to the URL that our HTTP input configuration does not use. Adding the noop query parameter causes the Teleport Event Handler to append the routing information as the parameter's value so the Universal Forwarder can discard it.
Next, edit the teleport section of the configuration as follows:
- Executable or Docker
- Helm Chart
- addr: Include the hostname and HTTPS port of your Teleport Proxy Service or Teleport Enterprise Cloud account (e.g., teleport.example.com:443 or mytenant.teleport.sh:443).
- identity: Fill this in with the path to the identity file you exported earlier.
- client_key, client_crt, root_cas: Comment these out, since we are not using them in this configuration.
- address: Include the hostname and HTTPS port of your Teleport Proxy Service or Teleport Enterprise Cloud tenant (e.g., teleport.example.com:443 or mytenant.teleport.sh:443).
- identitySecretName: Fill in the identitySecretName field with the name of the Kubernetes secret you created earlier.
- identitySecretPath: Fill in the identitySecretPath field with the path of the identity file within the Kubernetes secret. If you have followed the instructions above, this will be identity.
If you are providing credentials to the Event Handler using a tbot binary that runs on a Linux server, make sure the value of identity in the Event Handler configuration is the same as the path of the identity file you configured tbot to generate, /opt/machine-id/identity.
Ensure that the Teleport Event Handler can read the identity file, adjusting the path below to match the identity value in your configuration:
$ sudo chmod +r identity
Start the Teleport Event Handler
Start the Teleport Event Handler by following the instructions below.
- Linux server
- Helm chart
Copy the teleport-event-handler.toml file to /etc on your Linux server. Update the settings within the toml file to match your environment, making sure to use absolute paths in settings such as identity and storage. Files and directories the plugin uses, such as /var/lib/teleport-event-handler, should only be accessible to the system user that executes the teleport-event-handler service.
Next, create a systemd service definition at the path
/usr/lib/systemd/system/teleport-event-handler.service
with the following
content:
[Unit]
Description=Teleport Event Handler
After=network.target
[Service]
Type=simple
Restart=always
ExecStart=/usr/local/bin/teleport-event-handler start --config=/etc/teleport-event-handler.toml --teleport-refresh-enabled=true
ExecReload=/bin/kill -HUP $MAINPID
PIDFile=/run/teleport-event-handler.pid
[Install]
WantedBy=multi-user.target
If you are not using Machine ID to provide short-lived credentials to the Event Handler, you can remove the --teleport-refresh-enabled=true flag.
Enable and start the plugin:
$ sudo systemctl enable teleport-event-handler
$ sudo systemctl start teleport-event-handler
Choose when to start exporting events
You can configure when you would like the Teleport Event Handler to begin exporting events when you run the start command. This example will start exporting from May 5th, 2021:
$ teleport-event-handler start --config /etc/teleport-event-handler.toml --start-time "2021-05-05T00:00:00Z"
You can only determine the start time once, when first running the Teleport Event Handler. If you want to change the time frame later, remove the plugin state directory that you specified in the storage field of the handler's configuration file.
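A hedged sketch of resetting the export window, assuming the storage location recommended earlier in this guide (substitute whatever your toml's storage field actually points to):

$ sudo systemctl stop teleport-event-handler
$ sudo rm -rf /var/lib/teleport-event-handler/storage
$ sudo systemctl start teleport-event-handler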
Once the Teleport Event Handler starts, you will see notifications about scanned and forwarded events:
$ sudo journalctl -u teleport-event-handler
DEBU Event sent id:f19cf375-4da6-4338-bfdc-e38334c60fd1 index:0 ts:2022-09-21
18:51:04.849 +0000 UTC type:cert.create event-handler/app.go:140
...
Run the following command on your workstation:
$ helm install teleport-plugin-event-handler teleport/teleport-plugin-event-handler \
--values teleport-plugin-event-handler-values.yaml \
--version 15.4.22
Step 4/4. Visualize your audit logs in Splunk
Since our setup forwards audit logs to Splunk in the structured JSON format, Splunk automatically indexes them, so fields will be available immediately for use in visualizations. You can use these fields to create dashboards that track the way users are interacting with your Teleport cluster.
For example, from the Splunk UI home page, navigate to Search & Reporting > Dashboards > Create New Dashboard. Enter "Teleport Audit Log Types" for the title of your dashboard and click Classic Dashboards. Click Create then, in the Edit Dashboard view, click Add Panel.
In the Add Panel sidebar, click New > Column Chart. For the Search String field, enter the following:
index="teleport-audit-logs" | timechart count by event
Once you click Add to Dashboard, you will see a count of Teleport event types over time, which gives you a general sense of how users are interacting with Teleport.
Troubleshooting connection issues
If the Teleport Event Handler is displaying error logs while connecting to your Teleport cluster, ensure that:

- The certificate the Teleport Event Handler is using to connect to your Teleport cluster is not past its expiration date. This is the value of the --ttl flag in the tctl auth sign command, which is 12 hours by default.
- In your Teleport Event Handler configuration file (teleport-event-handler.toml), you have provided the correct host and port for the Teleport Proxy Service.
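To rule out basic connectivity problems, check that the Proxy Service answers on the address in your configuration; the /webapi/ping endpoint returns cluster metadata as JSON:

$ curl https://teleport.example.com:443/webapi/ping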
Next steps
Now that you are exporting your audit logs to Splunk, consult our audit log reference so you can plan visualizations and alerts.
In this guide, we made use of impersonation to supply credentials to the Teleport Event Handler to communicate with your Teleport cluster. To learn more about impersonation, read our guide.
While this guide uses the tctl auth sign command to issue credentials for the Teleport Event Handler, production clusters should use Machine ID for safer, more reliable renewals. Read our guide to getting started with Machine ID.