
Teleport's Event Handler plugin receives audit events from the Teleport Auth Service and forwards them to your log management solution, letting you perform historical analysis, detect unusual behavior, and form a better understanding of how users interact with your Teleport cluster.
In this guide, we will show you how to configure Teleport's Event Handler plugin to send your Teleport audit events to the Elastic Stack. In this setup, the Event Handler plugin forwards audit events from Teleport to Logstash, which stores them in Elasticsearch for visualization and alerting in Kibana.
Prerequisites
- A running Teleport Enterprise cluster, including the Auth Service and Proxy Service. For details on how to set this up, see our Enterprise Getting Started guide.
- The Enterprise `tctl` admin tool and `tsh` client tool, version >= 13.0.3, which you can download by visiting your Teleport account:

  ```code
  tctl version
  # Teleport Enterprise v13.0.3 go1.20
  tsh version
  # Teleport v13.0.3 go1.20
  ```
- Logstash version 8.4.1 or above running on a Linux host. Logstash must be listening on a TCP port that is open to traffic from the Teleport Auth Service. In this guide, you will also run the Event Handler plugin on this host.
- Elasticsearch and Kibana version 8.4.1 or above, either running via an Elastic Cloud account or on your own infrastructure. You will need permissions to create and manage users in Elasticsearch.
We have tested this guide on the Elastic Stack version 8.4.1.
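Before continuing, you can verify connectivity to your cluster with the downloaded tools. A minimal check, assuming a Proxy Service at `teleport.example.com` and a Teleport user named `[email protected]` (both placeholders for your own values):

```code
tsh login --proxy=teleport.example.com --user=[email protected]
tctl status
```

In recent Teleport versions, `tctl` can use the credentials from your `tsh` profile, so this also confirms that the `tctl` commands later in this guide will work from your workstation.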
Step 1/4. Set up the Event Handler plugin
The Event Handler plugin is a binary that runs independently of your Teleport cluster. It authenticates to your Teleport cluster and Logstash using mutual TLS. In this section, you will install the Event Handler plugin on the Linux host where you are running Logstash and generate credentials that the plugin will use for authentication.
Install the Event Handler plugin
Follow the instructions for your environment to install the Event Handler plugin on your Logstash host:
On a Linux host, run:

```code
curl -L -O https://get.gravitational.com/teleport-event-handler-v13.0.3-linux-amd64-bin.tar.gz
tar -zxvf teleport-event-handler-v13.0.3-linux-amd64-bin.tar.gz
```

We currently only build the Event Handler plugin for amd64 machines. For ARM architectures, you can build from source.
On macOS, run:

```code
curl -L -O https://get.gravitational.com/teleport-event-handler-v13.0.3-darwin-amd64-bin.tar.gz
tar -zxvf teleport-event-handler-v13.0.3-darwin-amd64-bin.tar.gz
```

We currently only build the Event Handler plugin for amd64 machines. If your macOS machine uses Apple silicon, you will need to install Rosetta before you can run the Event Handler plugin. You can also build from source.
To run the plugin in a Docker container, ensure that you have Docker installed and running, then pull the plugin image:

```code
docker pull public.ecr.aws/gravitational/teleport-plugin-event-handler:13.0.3
```
To allow Helm to install charts that are hosted in the Teleport Helm repository, use `helm repo add`:

```code
helm repo add teleport https://charts.releases.teleport.dev
```

To update the cache of charts from the remote repository, run `helm repo update`:

```code
helm repo update
```
To build the plugin from source, ensure that you have Docker installed and running, then run the following commands:

```code
git clone https://github.com/gravitational/teleport-plugins.git --depth 1
cd teleport-plugins/event-handler/build.assets
make build
```

You can find the compiled binary within your clone of the `teleport-plugins` repo, at the path `event-handler/build/teleport-event-handler`.
Alternatively, you can build the plugin with Go (version >= 1.20 required). Run the following commands on your Logstash host:

```code
git clone https://github.com/gravitational/teleport-plugins.git --depth 1
cd teleport-plugins/event-handler
go build
```

The resulting executable will have the name `event-handler`. To follow the rest of this guide, rename this file to `teleport-event-handler` and move it to `/usr/local/bin`.
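For example, assuming you are still in the `teleport-plugins/event-handler` directory:

```code
mv event-handler teleport-event-handler
sudo mv teleport-event-handler /usr/local/bin/
```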
Generate a starter config file
Generate a configuration file with placeholder values for the Teleport Event Handler plugin. Later in this guide, we will edit the configuration file for your environment.
Run the `configure` command to generate a sample configuration. Replace `teleport.example.com:443` with the DNS name and HTTPS port of Teleport's Proxy Service:

```code
./teleport-event-handler configure . teleport.example.com:443
```
If you are running the plugin in a Docker container, run the `configure` command as follows instead. Assign `TELEPORT_CLUSTER_ADDRESS` to the DNS name and port of your Teleport Auth Service or Proxy Service:

```code
TELEPORT_CLUSTER_ADDRESS=mytenant.teleport.sh:443
docker run -v `pwd`:/opt/teleport-plugin -w /opt/teleport-plugin public.ecr.aws/gravitational/teleport-plugin-event-handler:13.0.3 configure . ${TELEPORT_CLUSTER_ADDRESS?}
```
If you are deploying the plugin with Helm, you'll need to have the root certificate and the client credentials available as a secret in order to export audit events. Use the following command to create that secret in Kubernetes:

```code
kubectl create secret generic teleport-event-handler-client-tls --from-file=ca.crt=ca.crt,client.crt=client.crt,client.key=client.key
```

This packs the contents of `ca.crt`, `client.crt`, and `client.key` into the secret so the Helm chart can mount them to their appropriate path.
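You can confirm the secret contains all three files before proceeding; the output lists key names and sizes without exposing the contents:

```code
kubectl describe secret teleport-event-handler-client-tls
```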
You'll see the following output:

```code
Teleport event handler 0.0.1 07617b0ad0829db043fe779faf1669defdc8d84e

[1] mTLS Fluentd certificates generated and saved to ca.crt, ca.key, server.crt, server.key, client.crt, client.key
[2] Generated sample teleport-event-handler role and user file teleport-event-handler-role.yaml
[3] Generated sample fluentd configuration file fluent.conf
[4] Generated plugin configuration file teleport-event-handler.toml

Follow-along with our getting started guide:

https://goteleport.com/setup/guides/fluentd
```
The plugin generates several setup files:

```code
ls -l
# -rw------- 1 bob bob 1038 Jul 1 11:14 ca.crt
# -rw------- 1 bob bob 1679 Jul 1 11:14 ca.key
# -rw------- 1 bob bob 1042 Jul 1 11:14 client.crt
# -rw------- 1 bob bob 1679 Jul 1 11:14 client.key
# -rw------- 1 bob bob  541 Jul 1 11:14 fluent.conf
# -rw------- 1 bob bob 1078 Jul 1 11:14 server.crt
# -rw------- 1 bob bob 1766 Jul 1 11:14 server.key
# -rw------- 1 bob bob  260 Jul 1 11:14 teleport-event-handler-role.yaml
# -rw------- 1 bob bob  343 Jul 1 11:14 teleport-event-handler.toml
```
| File(s) | Purpose |
|---|---|
| `ca.crt` and `ca.key` | Self-signed CA certificate and private key for Fluentd |
| `server.crt` and `server.key` | Fluentd server certificate and key |
| `client.crt` and `client.key` | Fluentd client certificate and key, all signed by the generated CA |
| `teleport-event-handler-role.yaml` | User and role resource definitions for Teleport's Event Handler |
| `fluent.conf` | Fluentd plugin configuration |
We'll re-purpose the files generated for Fluentd in our Logstash configuration.
Define RBAC resources
The `teleport-event-handler configure` command generated a file called `teleport-event-handler-role.yaml`. This file defines a `teleport-event-handler` role and a user with read-only access to the `event` API:
```yaml
kind: role
metadata:
  name: teleport-event-handler
spec:
  allow:
    rules:
      - resources: ['event', 'session']
        verbs: ['list', 'read']
version: v5
---
kind: user
metadata:
  name: teleport-event-handler
spec:
  roles: ['teleport-event-handler']
version: v2
```
Move this file to your workstation (or recreate it by pasting the snippet above) and use `tctl` on your workstation to create the role and the user:

```code
tctl create -f teleport-event-handler-role.yaml
# user "teleport-event-handler" has been created
# role 'teleport-event-handler' has been created
```
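To confirm that both resources now exist, you can read them back (output formatting may vary by Teleport version):

```code
tctl get roles/teleport-event-handler
tctl get users/teleport-event-handler
```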
If you are running Teleport on your Elastic Stack host, e.g., you are exposing Kibana's HTTP endpoint via the Teleport Application Service, running the `tctl create` command above will generate an error similar to the following:

```code
ERROR: tctl must be either used on the auth server or provided with the identity file via --identity flag
```

To avoid this error, create the `teleport-event-handler-role.yaml` file on your workstation, then sign in to your Teleport cluster and run the `tctl` command locally.
Enable impersonation of the Event Handler plugin user
In order for the Event Handler plugin to forward events from your Teleport cluster, it needs signed credentials from the cluster's certificate authority. The `teleport-event-handler` user cannot request this itself, and requires another user to impersonate this account in order to request credentials.

Create a role that enables your user to impersonate the `teleport-event-handler` user. First, paste the following YAML document into a file called `teleport-event-handler-impersonator.yaml`:
```yaml
kind: role
version: v5
metadata:
  name: teleport-event-handler-impersonator
spec:
  options:
    # max_session_ttl defines the TTL (time to live) of SSH certificates
    # issued to the users with this role.
    max_session_ttl: 10h
  # This section declares a list of resource/verb combinations that are
  # allowed for the users of this role. By default nothing is allowed.
  allow:
    impersonate:
      users: ["teleport-event-handler"]
      roles: ["teleport-event-handler"]
```
Next, create the role:

```code
tctl create teleport-event-handler-impersonator.yaml
```
Assign the `teleport-event-handler-impersonator` role to your Teleport user by running the following commands, depending on whether you authenticate as a local Teleport user or via the `github`, `saml`, or `oidc` authentication connectors:
If you authenticate as a local user, retrieve your user's configuration resource:

```code
tctl get users/$(tsh status -f json | jq -r '.active.username') > out.yaml
```

Edit `out.yaml`, adding `teleport-event-handler-impersonator` to the list of existing roles:

```diff
 roles:
 - access
 - auditor
 - editor
+- teleport-event-handler-impersonator
```

Apply your changes:

```code
tctl create -f out.yaml
```
If you authenticate via GitHub, retrieve your `github` configuration resource:

```code
tctl get github/github --with-secrets > github.yaml
```

Edit `github.yaml`, adding `teleport-event-handler-impersonator` to the `teams_to_roles` section. The team you will map to this role will depend on how you have designed your organization's RBAC, but it should be the smallest team possible within your organization. This team must also include your user.

Here is an example:

```diff
 teams_to_roles:
 - organization: octocats
   team: admins
   roles:
   - access
+  - teleport-event-handler-impersonator
```

Apply your changes:

```code
tctl create -f github.yaml
```
Note the `--with-secrets` flag in the `tctl get` command. This adds the value of `spec.signing_key_pair.private_key` to `github.yaml`. This is a sensitive value, so take precautions when creating this file and remove it after updating the resource.
If you authenticate via SAML, retrieve your `saml` configuration resource:

```code
tctl get --with-secrets saml/mysaml > saml.yaml
```

Edit `saml.yaml`, adding `teleport-event-handler-impersonator` to the `attributes_to_roles` section. The attribute you will map to this role will depend on how you have designed your organization's RBAC, but it should be the smallest group possible within your organization. This group must also include your user.

Here is an example:

```diff
 attributes_to_roles:
 - name: "groups"
   value: "my-group"
   roles:
   - access
+  - teleport-event-handler-impersonator
```

Apply your changes:

```code
tctl create -f saml.yaml
```
Note the `--with-secrets` flag in the `tctl get` command. This adds the value of `spec.signing_key_pair.private_key` to `saml.yaml`. This is a sensitive value, so take precautions when creating this file and remove it after updating the resource.
If you authenticate via OIDC, retrieve your `oidc` configuration resource:

```code
tctl get oidc/myoidc --with-secrets > oidc.yaml
```

Edit `oidc.yaml`, adding `teleport-event-handler-impersonator` to the `claims_to_roles` section. The claim you will map to this role will depend on how you have designed your organization's RBAC, but it should be the smallest group possible within your organization. This group must also include your user.

Here is an example:

```diff
 claims_to_roles:
 - name: "groups"
   value: "my-group"
   roles:
   - access
+  - teleport-event-handler-impersonator
```

Apply your changes:

```code
tctl create -f oidc.yaml
```
Note the `--with-secrets` flag in the `tctl get` command. This adds the value of `spec.signing_key_pair.private_key` to `oidc.yaml`. This is a sensitive value, so take precautions when creating this file and remove it after updating the resource.
Log out of your Teleport cluster and log in again to assume the new role.
Export the access plugin identity
Like all Teleport users, `teleport-event-handler` needs signed credentials in order to connect to your Teleport cluster. You will use the `tctl auth sign` command to request these credentials for your plugin.

The following `tctl auth sign` command impersonates the `teleport-event-handler` user, generates signed credentials, and writes an identity file to the local directory:
```code
tctl auth sign --user=teleport-event-handler --out=auth.pem
```
The plugin connects to the Teleport Auth Service's gRPC endpoint over TLS. The identity file, `auth.pem`, includes both TLS and SSH credentials. The plugin uses the SSH credentials to connect to the Proxy Service, which establishes a reverse tunnel connection to the Auth Service. The plugin uses this reverse tunnel, along with your TLS credentials, to connect to the Auth Service's gRPC endpoint.

You will refer to this file later when configuring the plugin.
By default, `tctl auth sign` produces certificates with a relatively short lifetime. For production deployments, we suggest using Machine ID to programmatically issue and renew certificates for your plugin. See our Machine ID getting started guide to learn more.
Step 2/4. Configure a Logstash pipeline
The Event Handler plugin forwards audit logs from Teleport by sending HTTP requests to a user-configured endpoint. We will define a Logstash pipeline that handles these requests, extracts logs, and sends them to Elasticsearch.
Create a role for the Event Handler plugin
Your Logstash pipeline will require permissions to create and manage Elasticsearch indexes and index lifecycle management policies, as well as permission to retrieve information about your Elasticsearch deployment. Create a role with these permissions so you can later assign it to the Elasticsearch user you will create for the Event Handler.
In Kibana, navigate to "Management" > "Roles" and click "Create role". Enter the name `teleport-plugin` for the new role. Under the "Elasticsearch" section, under "Cluster privileges", enter `manage_index_templates`, `manage_ilm`, and `monitor`.

Under "Index privileges", define an entry with `audit-events-*` in the "Indices" field and `write` and `manage` in the "Privileges" field. Click "Create role".

Create an Elasticsearch user for the Event Handler
Create an Elasticsearch user that Logstash can authenticate as when making requests to the Elasticsearch API.
In Kibana, find the hamburger menu on the upper left and click "Management", then "Users" > "Create user". Enter `teleport` for the "Username" and provide a secure password.

Assign the user the `teleport-plugin` role we defined earlier.
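If you prefer to script these two steps instead of using the Kibana UI, the Elasticsearch security API can create the same role and user. This is a sketch, not part of the generated setup; it assumes an admin user named `elastic` and an Elasticsearch endpoint at `https://elasticsearch.example.com:9200` (both placeholders):

```code
# Create the teleport-plugin role with the same cluster and index privileges
# described in the Kibana steps above.
curl -u elastic:ELASTIC_PASSWORD -X PUT \
  "https://elasticsearch.example.com:9200/_security/role/teleport-plugin" \
  -H 'Content-Type: application/json' -d '
{
  "cluster": ["manage_index_templates", "manage_ilm", "monitor"],
  "indices": [
    { "names": ["audit-events-*"], "privileges": ["write", "manage"] }
  ]
}'

# Create the teleport user and assign it the role.
curl -u elastic:ELASTIC_PASSWORD -X PUT \
  "https://elasticsearch.example.com:9200/_security/user/teleport" \
  -H 'Content-Type: application/json' -d '
{
  "password": "PASSWORD",
  "roles": ["teleport-plugin"]
}'
```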
Prepare TLS credentials for Logstash
Later in this guide, your Logstash pipeline will use an HTTP input to receive audit events from the Teleport Event Handler plugin.
Logstash's HTTP input can only use a TLS private key in the unencrypted PKCS #8 format. When you ran `teleport-event-handler configure` earlier, the command generated an encrypted RSA key. We will convert this key to PKCS #8.
You will need a password to decrypt the RSA key. To retrieve this, execute the following command in the directory where you ran `teleport-event-handler configure`:

```code
cat fluent.conf | grep passphrase
# private_key_passphrase "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"
```
Convert the encrypted RSA key to an unencrypted PKCS #8 key. The command will prompt you for the password you retrieved:

```code
openssl pkcs8 -topk8 -in server.key -nocrypt -out pkcs8.key
```
Enable Logstash to read the new key, plus the CA and certificate we generated earlier:

```code
chmod +r pkcs8.key ca.crt server.crt
```
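As a quick sanity check, the first line of an unencrypted PKCS #8 key reads `-----BEGIN PRIVATE KEY-----`, with no `RSA` or `ENCRYPTED` qualifier:

```code
head -1 pkcs8.key
# -----BEGIN PRIVATE KEY-----
```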
Define an index template
When the Event Handler plugin sends audit events to Logstash, Logstash needs to know how to parse these events to forward them to Elasticsearch. You can define this logic using an index template, which Elasticsearch uses to construct an index for data it receives.
Create a file called `audit-events.json` with the following content:

```json
{
  "index_patterns": ["audit-events-*"],
  "template": {
    "settings": {},
    "mappings": {
      "dynamic": "true"
    }
  }
}
```
This index template modifies any index with the pattern `audit-events-*`. Because it includes the `"dynamic": "true"` setting, it instructs Elasticsearch to define index fields dynamically based on the events it receives. This is useful for Teleport audit events, which use a variety of fields depending on the event type.
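Logstash will install this template for you later in the guide, but if you want to sanity-check the file first, it matches the body expected by Elasticsearch's index template API, so you can also install it by hand. A sketch, reusing the placeholder endpoint from earlier and the `teleport` user (whose role includes the `manage_index_templates` privilege):

```code
curl -u teleport:PASSWORD -X PUT \
  "https://elasticsearch.example.com:9200/_index_template/audit-events" \
  -H 'Content-Type: application/json' \
  -d @audit-events.json
```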
Define a Logstash pipeline
On the host where you are running Logstash, create a configuration file that defines a Logstash pipeline. This pipeline will receive logs from port 9601 and forward them to Elasticsearch.

Create a file called `/etc/logstash/conf.d/teleport-audit.conf` with the following content:
```code
input {
  http {
    port => 9601
    ssl => true
    ssl_certificate => "/home/server.crt"
    ssl_key => "/home/pkcs8.key"
    ssl_certificate_authorities => [
      "/home/ca.crt"
    ]
    ssl_verify_mode => "force_peer"
  }
}
output {
  elasticsearch {
    user => "teleport"
    password => "ELASTICSEARCH_PASSPHRASE"
    template_name => "audit-events"
    template => "/home/audit-events.json"
    index => "audit-events-%{+yyyy.MM.dd}"
    template_overwrite => true
  }
}
```
In the `input.http` section, update `ssl_certificate` and `ssl_certificate_authorities` to include the locations of the server certificate and certificate authority files that the `teleport-event-handler configure` command generated earlier.

Logstash will authenticate client certificates against the CA file and present a signed certificate to the Teleport Event Handler plugin.

Edit the `ssl_key` field to include the path to the `pkcs8.key` file we generated earlier.
In the `output.elasticsearch` section, edit the following fields depending on whether you are using Elastic Cloud or your own Elastic Stack deployment:
If you are using Elastic Cloud, assign `cloud_auth` to a string with the content `teleport:PASSWORD`, replacing `PASSWORD` with the password you assigned to your `teleport` user earlier. Visit https://cloud.elastic.co/deployments, find the "Cloud ID" field, copy the content, and add it as the value of `cloud_id` in your Logstash pipeline configuration. The `elasticsearch` section should resemble the following:

```code
elasticsearch {
  cloud_id => "CLOUD_ID"
  cloud_auth => "teleport:PASSWORD"
  template_name => "audit-events"
  template => "/home/audit-events.json"
  index => "audit-events-%{+yyyy.MM.dd}"
  template_overwrite => true
}
```
If you are hosting the Elastic Stack yourself, assign `hosts` to a string indicating the hostname of your Elasticsearch host. Assign `user` to `teleport` and `password` to the passphrase you created for your `teleport` user earlier. The `elasticsearch` section should resemble the following:

```code
elasticsearch {
  hosts => "elasticsearch.example.com"
  user => "teleport"
  password => "PASSWORD"
  template_name => "audit-events"
  template => "/home/audit-events.json"
  index => "audit-events-%{+yyyy.MM.dd}"
  template_overwrite => true
}
```
Finally, modify `template` to point to the path of the `audit-events.json` file you created earlier.

Because the index template we will create with this file applies to indices with the prefix `audit-events-*`, and we have configured our Logstash pipeline to create an index with the title `audit-events-%{+yyyy.MM.dd}`, Elasticsearch will automatically index fields from Teleport audit events.
Disable the Elastic Common Schema for your pipeline
The Elastic Common Schema (ECS) is a standard set of fields that Elastic Stack uses to parse and visualize data. Since we are configuring Elasticsearch to index all fields from your Teleport audit logs dynamically, we will disable the ECS for your Logstash pipeline.
On the host where you are running Logstash, edit `/etc/logstash/pipelines.yml` to add the following entry:

```yaml
- pipeline.id: teleport-audit-logs
  path.config: "/etc/logstash/conf.d/teleport-audit.conf"
  pipeline.ecs_compatibility: disabled
```
This disables the ECS for your Teleport audit log pipeline.

If your `pipelines.yml` file defines an existing pipeline that includes `teleport-audit.conf`, e.g., by using a wildcard value in `path.config`, adjust the existing pipeline definition so it no longer applies to `teleport-audit.conf`.
Run the Logstash pipeline
Restart Logstash:

```code
sudo systemctl restart logstash
```

Make sure your Logstash pipeline started successfully by running the following command to tail Logstash's logs:

```code
sudo journalctl -u logstash -f
```
When your Logstash pipeline initializes its `http` input and starts running, you should see a log similar to this:

```code
Sep 15 18:27:13 myhost logstash[289107]: [2022-09-15T18:27:13,491][INFO ][logstash.inputs.http][main][33bdff0416b6a2b643e6f4ab3381a90c62b3aa05017770f4eb9416d797681024] Starting http input listener {:address=>"0.0.0.0:9601", :ssl=>"true"}
```
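At this point, you can optionally smoke-test the listener's mutual TLS setup before starting the Event Handler. This is a sketch, run from the directory containing the generated credentials; it assumes the generated server certificate is valid for the `localhost` name, which the Event Handler configuration in this guide also relies on:

```code
curl --cacert ca.crt --cert client.crt --key client.key \
  -H 'Content-Type: application/json' \
  -d '{"event":"test.smoke"}' \
  https://localhost:9601
```

Logstash's HTTP input typically responds with `ok`; a TLS error usually points to a certificate path or hostname mismatch. Note that Logstash will forward the dummy payload to Elasticsearch along with real events.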
These logs indicate that your Logstash pipeline has connected to Elasticsearch and installed a new index template:

```code
Sep 12 19:49:06 myhost logstash[33762]: [2022-09-12T19:49:06,309][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (8.4.1) {:es_version=>8}
Sep 12 19:50:00 myhost logstash[33762]: [2022-09-12T19:50:00,993][INFO ][logstash.outputs.elasticsearch][main] Installing Elasticsearch template {:name=>"audit-events"}
```
If Logstash fails to initialize the pipeline, it may continue to attempt to contact Elasticsearch. In that case, you will see repeated logs like the one below:

```code
Sep 12 19:43:04 myhost logstash[33762]: [2022-09-12T19:43:04,519][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://teleport:[email protected]:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::ClientProtocolException] 127.0.0.1:9200 failed to respond"}
```
Diagnosing the problem
To diagnose the cause of errors initializing your Logstash pipeline, search your Logstash `journalctl` logs for the following, which indicate that the pipeline is starting. The relevant error logs should come shortly after these:

```code
Sep 12 18:15:52 myhost logstash[27906]: [2022-09-12T18:15:52,146][INFO ][logstash.javapipeline][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/etc/logstash/conf.d/teleport-audit.conf"], :thread=>"#<Thread:0x1c1a3ee5 run>"}
Sep 12 18:15:52 myhost logstash[27906]: [2022-09-12T18:15:52,912][INFO ][logstash.javapipeline][main] Pipeline Java execution initialization time {"seconds"=>0.76}
```
Disabling Elasticsearch TLS
This guide assumes that you have already configured Elasticsearch and Logstash to communicate with one another via TLS.

If your Elastic Stack deployment is in a sandboxed or low-security environment (e.g., a demo environment), and your `journalctl` logs for Logstash show that Elasticsearch is unreachable, you can disable TLS for communication between Logstash and Elasticsearch.

Edit the file `/etc/elasticsearch/elasticsearch.yml` to set `xpack.security.http.ssl.enabled` to `false`, then restart Elasticsearch.
Step 3/4. Run the Event Handler plugin
Complete the Event Handler configuration
Earlier, we generated a file called `teleport-event-handler.toml` to configure the Event Handler plugin. This file includes settings similar to the following:

```toml
storage = "./storage"
timeout = "10s"
batch = 20
namespace = "default"

[forward.fluentd]
ca = "/home/ca.crt"
cert = "/home/client.crt"
key = "/home/client.key"
url = "https://localhost:8888/test.log"

[teleport]
addr = "example.teleport.com:443"
identity = "identity"
```
Update the configuration file as follows.
Change `forward.fluentd.url` to the scheme, host and port you configured for your Logstash `http` input earlier, `https://localhost:9601`. Change `forward.fluentd.session-url` to the same value with the root URL path: `https://localhost:9601/`.

Change `teleport.addr` to the host and port of your Teleport Proxy Service, or the Auth Service if you have configured the Event Handler to connect to it directly, e.g., `mytenant.teleport.sh:443`.
If you are running the Event Handler as an executable or in Docker, update the `[teleport]` section as follows; a sketch of an edited file appears after this list:

- `addr`: Include the hostname and HTTPS port of your Teleport Proxy Service or Teleport Enterprise Cloud tenant (e.g., `teleport.example.com:443` or `mytenant.teleport.sh:443`).
- `identity`: Fill this in with the path to the identity file you exported earlier.
- `client_key`, `client_crt`, `root_cas`: Comment these out, since we are not using them in this configuration.
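Here is a sketch of what the edited file might look like for the executable deployment. The tenant address and file paths are examples, not values generated for you:

```toml
storage = "./storage"
timeout = "10s"
batch = 20
namespace = "default"

[forward.fluentd]
ca = "/home/ca.crt"
cert = "/home/client.crt"
key = "/home/client.key"
url = "https://localhost:9601"
session-url = "https://localhost:9601/"

[teleport]
addr = "mytenant.teleport.sh:443"
identity = "/home/auth.pem"
# client_key, client_crt, and root_cas are commented out because this
# configuration authenticates with the identity file instead.
```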
If you are deploying the Event Handler with Helm, update the chart values as follows:

- `address`: Include the hostname and HTTPS port of your Teleport Proxy Service or Teleport Enterprise Cloud tenant (e.g., `teleport.example.com:443` or `mytenant.teleport.sh:443`).
- `identitySecretName`: Fill in the `identitySecretName` field with the name of the Kubernetes secret you created earlier.
Start the Event Handler
Start the Teleport Event Handler as a daemon. To do so, create a systemd service definition at the path `/usr/lib/systemd/system/teleport-event-handler.service` with the following content:

```ini
[Unit]
Description=Teleport Event Handler
After=network.target

[Service]
Type=simple
Restart=on-failure
ExecStart=/usr/local/bin/teleport-event-handler start --config=/etc/teleport-event-handler.toml
ExecReload=/bin/kill -HUP $MAINPID
PIDFile=/run/teleport-event-handler.pid

[Install]
WantedBy=multi-user.target
```
Enable and start the plugin:

```code
sudo systemctl enable teleport-event-handler
sudo systemctl start teleport-event-handler
```
You can configure when you would like the Teleport Event Handler to begin exporting events when you run the `start` command. This example will start exporting from May 5th, 2021:

```code
teleport-event-handler start --config teleport-event-handler.toml --start-time "2021-05-05T00:00:00Z"
```
You can only determine the start time once, when first running the Teleport Event Handler. If you want to change the time frame later, remove the plugin state directory that you specified in the `storage` field of the handler's configuration file.
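For example, to reset the start time under the systemd setup above (a sketch; the state directory path comes from the `storage` field in your own configuration file, so substitute your value):

```code
sudo systemctl stop teleport-event-handler
rm -rf ./storage   # the directory named in the "storage" field
sudo systemctl start teleport-event-handler
```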
Once the Teleport Event Handler starts, you will see notifications about scanned and forwarded events:

```code
sudo journalctl -u teleport-event-handler
# DEBU Event sent id:f19cf375-4da6-4338-bfdc-e38334c60fd1 index:0 ts:2022-09-21
# 18:51:04.849 +0000 UTC type:cert.create event-handler/app.go:140
# ...
```
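Before moving on to Kibana, you can optionally confirm that events have landed in Elasticsearch by querying the index directly. A sketch, reusing the placeholder endpoint from earlier; note that the `teleport-plugin` role only grants `write` and `manage` on the index, so a search may require an administrative user such as `elastic`:

```code
curl -u elastic:ELASTIC_PASSWORD \
  "https://elasticsearch.example.com:9200/audit-events-*/_search?size=1&pretty"
```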
Step 4/4. Create a data view in Kibana
Make it possible to explore your Teleport audit events in Kibana by creating a data view. In the Elastic Stack UI, find the hamburger menu on the upper left of the screen, then click "Management" > "Data Views". Click "Create data view".
For the "Name" field, use "Teleport Audit Events". In "Index pattern", use
audit-events-*
to select all indices created by our Logstash pipeline. In
"Timestamp field", choose time
, which Teleport adds to its audit events.

To use your data view, find the search box at the top of the Elastic Stack UI and enter "Discover". On the upper left of the screen, click the dropdown menu and select "Teleport Audit Events". You can now search and filter your Teleport audit events in order to get a better understanding of how users are interacting with your Teleport cluster.

For example, we can click the `event` field on the left sidebar and visualize the event types for your Teleport audit events over time:

Troubleshooting connection issues
If the Teleport Event Handler is displaying error logs while connecting to your Teleport Cluster, ensure that:

- The certificate the Teleport Event Handler is using to connect to your Teleport cluster is not past its expiration date. This is the value of the `--ttl` flag in the `tctl auth sign` command, which is 12 hours by default. A sketch for checking the expiration date appears after this list.
- In your Teleport Event Handler configuration file (`teleport-event-handler.toml`), you have provided the correct host and port for the Teleport Proxy Service or Auth Service.
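To check the expiration date of the certificate inside the identity file, you can read it with `openssl`. A sketch; it assumes `auth.pem` is the identity file you exported earlier (`openssl x509` skips the non-certificate PEM blocks in the file):

```code
openssl x509 -in auth.pem -noout -enddate
# notAfter=<expiration timestamp>
```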
Next steps
Now that you are exporting your audit events to the Elastic Stack, consult our audit event reference so you can plan visualizations and alerts.

While this guide uses the `tctl auth sign` command to issue credentials for the Teleport Event Handler, production clusters should use Machine ID for safer, more reliable renewals. Read our guide to getting started with Machine ID.