Managed Updates v2 in EC2 Agents
This guide shows two cloud-init patterns for bringing Linux EC2 instances under Managed Updates v2
and assigning them to an update group with teleport-update by specifying the update group and Proxy Service address.
How it works
This guide assumes you will join the agent to your cluster during first boot with a delegated join method and that your cluster has Managed Updates v2 enabled.
Delegated joins avoid shipping any secret token in user data. You create a named token resource that encodes contextual rules (your AWS account, role ARNs, regions, etc.), and the agent proves its identity to the Auth Service using cloud-issued credentials. The agent can obtain its IAM credentials from any standard source (e.g., an EC2 instance profile, IRSA, or environment variables).
In more detail, here is how IAM join works:
- The Node signs an AWS STS GetCallerIdentity request using its own IAM credentials (e.g., the EC2 instance role).
- The Node sends this pre-signed request to the Teleport Auth Service as part of the join handshake.
- The Auth Service does not call AWS APIs directly with its own credentials. Instead, it simply executes that pre-signed GetCallerIdentity request over HTTPS, and AWS STS returns the identity information (Account ID, ARN, etc.).
- Teleport validates that identity against your token's allow rules.
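To preview the identity that STS will report for your instance, you can run the same call yourself from the EC2 host once the instance profile is attached. This is a minimal sketch assuming the AWS CLI is installed; the account and ARN values are illustrative:
# Uses the instance profile credentials automatically; no extra IAM permissions are needed.
aws sts get-caller-identity
# Example output (illustrative values); the Arn must match an allow rule in your join token:
# {
#     "UserId": "AROAEXAMPLEID:i-0123456789abcdef0",
#     "Account": "123456789012",
#     "Arn": "arn:aws:sts::123456789012:assumed-role/teleport-node-role/i-0123456789abcdef0"
# }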
In this guide, we use the teleport-update binary for installation.
The teleport-update enable command installs Teleport at the version advertised by the cluster and enables
Managed Updates v2 on the host. It also creates the necessary systemd units for Teleport, along with a
teleport-update timer that periodically runs teleport-update update. In addition, it saves most of the
flags you pass in (such as -g, -p, or -b), so that running the command again will update the stored settings
rather than requiring you to re-enter them.
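For example, outside of cloud-init you could enroll a host manually with a command along these lines (a minimal sketch, assuming the example.teleport.sh Proxy Service address and a development update group):
# Install Teleport at the cluster-advertised version and enable Managed Updates v2.
# --proxy (-p) and --group (-g) are persisted, so a later re-run only needs the flags you want to change.
sudo teleport-update enable --proxy example.teleport.sh:443 --group development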
For full Managed Updates v2 instructions, see Managed Updates for Agents (v2).
Every agent belongs to an update group (e.g., development, staging, prod); if no group is configured, the default group is used.
The cluster-side autoupdate_config resource defines when each group may update.
Cloud-hosted and self-hosted clusters use the same schedule model. Self-hosted clusters also set the desired
version via the autoupdate_version resource.
Step 1/5. Create a join token
See the joining documentation for a detailed explanation of the joining process and the supported join methods.
The IAM join method is the recommended way of joining EC2 instances. It offers stronger security guarantees, more granular control over who can join, and is easier to use (the token doesn't need to be short-lived or rotated).
Create a file named token.yaml:
# token.yaml
kind: token
version: v2
metadata:
  name: iam-join
spec:
  roles: [Node]
  join_method: iam
  allow:
    # Allow specific AWS accounts (or restrict by ARN)
    - aws_account: "123456789012"
    - aws_account: "999998880000"
      aws_arn: "arn:aws:sts::999998880000:assumed-role/teleport-node-role/i-*"
Run the following command to create or update the resource:
tctl create -f token.yaml
provision_token "iam-join" has been created
This defines which AWS accounts and roles can join via IAM. You can use aws_account alone
(to allow all roles in that account) or add an aws_arn filter for stricter control.
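After creating the token, you can confirm it is registered, for example:
# List provision tokens; iam-join should be listed with the iam join method and the Node role.
tctl tokens ls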
On AWS, prepare an IAM role that your EC2 instances will assume to prove their identity.
No IAM permissions are required; the join verification only calls sts:GetCallerIdentity.
Example trust policy for EC2:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
This trust policy is required; you can create the role with it using either the AWS Management Console or the AWS CLI (the CLI commands are shown in Step 4).
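If you manage the role with the AWS CLI, you can double-check its trust policy afterwards (assuming the EC2TeleportRole name used in Step 4):
# Print the role's assume-role (trust) policy; it should allow ec2.amazonaws.com to assume the role.
aws iam get-role --role-name EC2TeleportRole --query 'Role.AssumeRolePolicyDocument'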
Step 2/5. Define groups and schedules
To enable managed updates, you need to create an autoupdate_config resource with the enabled mode.
Additionally, in autoupdate_config, you can define update groups (each agent belongs to one) and configure update schedules.
If no group or schedule is defined, the default one is used.
You can either keep the default configuration or define custom groups with a schedule.
For the default setup, create a file called autoupdate_config.yaml containing:
# autoupdate_config.yaml
kind: autoupdate_config
metadata:
  name: autoupdate-config
spec:
  agents:
    mode: enabled
    strategy: halt-on-error
For custom groups, pick simple, meaningful names like development, staging, and production. Then model your rollout windows
and sequencing in autoupdate_config:
# autoupdate_config.yaml
kind: autoupdate_config
metadata:
  name: autoupdate-config
spec:
  agents:
    mode: enabled
    strategy: halt-on-error
    schedules:
      regular:
        - name: development
          days: ["Mon","Tue","Wed","Thu"]
          start_hour: 4 # UTC
        - name: staging
          days: ["Mon","Tue","Wed","Thu"]
          start_hour: 5
          wait_hours: 24 # run a day later than development
This resource controls the update schedule by defining groups, their order, and the upgrade window.
In the example, we define two update groups: development and staging. Upgrades are allowed on Monday, Tuesday,
Wednesday, and Thursday.
The development group starts its upgrade at 04:00 UTC. Once all agents in the development group are upgraded,
the staging group follows at 05:00 UTC, at least 24 hours later (wait_hours: 24).
If any agent in the development group fails to update, the rollout stops before reaching staging. This behavior
is controlled by the halt-on-error strategy.
Run the following command to create or update the resource:
tctl create -f autoupdate_config.yaml
autoupdate_config has been created
Changes to the schedule configuration will take effect for the next version change.
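To review what the cluster currently has stored, you can fetch the resource back, for example:
# Print the stored autoupdate_config resource as YAML.
tctl get autoupdate_config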
Step 3/5. Define the version (self-hosted only)
Self-hosted Teleport users must specify which version their agents should update
to via the autoupdate_version resource. Cloud-hosted Teleport Enterprise users should
skip this step, as the version is managed by the Teleport Cloud team.
The autoupdate_version resource defines the Teleport version that is initially installed during EC2 instance
bootstrap (start_version) and the version agents should update to (target_version).
# autoupdate_version.yaml
kind: autoupdate_version
metadata:
  name: autoupdate-version
spec:
  agents:
    start_version: 18.3.1
    target_version: 18.3.1
    schedule: regular
    mode: enabled
Create or update the autoupdate_version resource:
tctl create -f autoupdate_version.yaml
autoupdate_version has been created
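As with the config, you can read the stored version resource back to verify it, for example:
# Print the stored autoupdate_version resource (self-hosted clusters only).
tctl get autoupdate_version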
Step 4/5. Create an EC2 Instance
You can create the instance either with the AWS CLI or through the AWS web console.
To create the required IAM role and instance profile and launch the EC2 instance with the AWS CLI, follow these steps:
- Create an IAM role with a trust policy that allows EC2 to assume it:
cat > trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
aws iam create-role \
  --role-name EC2TeleportRole \
  --assume-role-policy-document file://trust-policy.json
- Create an EC2 instance profile to use the role:
aws iam create-instance-profile --instance-profile-name EC2TeleportInstanceProfile
aws iam add-role-to-instance-profile \
  --instance-profile-name EC2TeleportInstanceProfile \
  --role-name EC2TeleportRole
- Create a cloud-init.yaml configuration file with the following cloud-init data:
#cloud-config
packages:
  - curl
runcmd:
  # Install Teleport using your cluster's script and define the group by query parameter.
  # Script persists your updater flags (group, proxy, etc.)
  - curl "https://example.teleport.sh:443/scripts/install.sh?group=development" | sudo bash
  # Write agent config (join token) before starting the service.
  - teleport configure --roles node --proxy example.teleport.sh:443 --join-method iam --token iam-join > /etc/teleport.yaml
  # Enable and start Teleport service.
  - systemctl enable --now teleport
- Launch the EC2 instance with the instance profile:
Specify the AMI image ID for the instance and a security group that allows at least outbound traffic.
aws ec2 run-instances \
  --image-id ami-xxxx \
  --instance-type t3.micro \
  --iam-instance-profile Name=EC2TeleportInstanceProfile \
  --security-group-ids sg-xxxx \
  --region us-west-2 \
  --user-data file://cloud-init.yaml
If you use the AWS web console instead, follow the procedure for launching an instance and providing user data: supply the cloud-init directives at launch by entering them in the User data field under Advanced details in the AWS Management Console.
In the example below, the directives create and configure a Teleport node on Amazon Linux 2.
The script installs Teleport and the v2 updater and enables only the SSH service.
The #cloud-config line at the top is required to identify the commands as cloud-init directives.
User-data example with a previously created join token:
#cloud-config
packages:
  - curl
runcmd:
  # Install Teleport using your cluster's script and define the group by query parameter.
  # Script persists your updater flags (group, proxy, etc.).
  - curl "https://example.teleport.sh:443/scripts/install.sh?group=development" | sudo bash
  # Write agent config (join token) before starting the service.
  - teleport configure --roles node --proxy example.teleport.sh:443 --join-method iam --token iam-join > /etc/teleport.yaml
  # Enable and start Teleport service.
  - systemctl enable --now teleport
Step 5/5. Verify on the instance
Once the EC2 instance has started, SSH into it and check the updater status:
teleport-update status
proxy: example.teleport.sh:443
path: /usr/local/bin
base_url: https://cdn.teleport.dev
enabled: true
pinned: false
active:
  version: 18.3.1
  flags: [Enterprise]
target:
  version: 18.3.1
  flags: [Enterprise]
in_window: false
The output shows the proxy address, update group, active and target versions, and whether the host is currently in an update window.
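Because teleport-update persists the flags it was enabled with, you can later move the host to a different group by re-running the enable command; a minimal sketch, assuming a staging group exists in your schedule:
# Re-running enable updates the stored settings (here, only the update group changes).
sudo teleport-update enable --group staging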
Check services:
systemctl status teleport
● teleport.service - Teleport Service
     Loaded: loaded (/usr/lib/systemd/system/teleport.service; enabled; preset: enabled)
    Drop-In: /etc/systemd/system/teleport.service.d
             └─teleport-update.conf
     Active: active (running)
 Invocation: 1725c68591634876afd805b417cf9801
   Main PID: 40848 (teleport)
      Tasks: 17 (limit: 11011)
     Memory: 79.5M (peak: 81.7M, swap: 27.5M, swap peak: 27.5M)
        CPU: 19min 2.548s
     CGroup: /system.slice/teleport.service
             └─40848 /usr/local/bin/teleport start --config /etc/teleport.yaml --pid-file=/run/teleport.pid

journalctl -u teleport -n100 -f
teleport[2246081]: 2025-10-21T20:51:44.154Z INFO [PROC:1] Found an instance metadata service. Teleport will import labels from this cloud instance. pid:2246081.1 type:EC2 service/service.go:1186
teleport[2246081]: 2025-10-21T20:51:44.156Z INFO [PROC:1] Service is creating new listener. pid:2246081.1 type:debug address:/var/lib/teleport/debug.sock service/signals.go:242
teleport[2246081]: 2025-10-21T20:51:44.159Z INFO [PROC:1] Generating new host UUID pid:2246081.1 host_uuid:1fcaf1e0-0fbf-454d-ac13-49965918dc39 storage/storage.go:356
teleport[2246081]: 2025-10-21T20:51:44.166Z INFO [PROC:1] Joining the cluster with a secure token. pid:2246081.1 service/connect.go:532
teleport[2246081]: 2025-10-21T20:51:44.166Z INFO Attempting registration. method:via proxy server join/join.go:388
teleport[2246081]: 2025-10-21T20:51:44.168Z WARN [CLOUD] Could not fetch EC2 instance's tags, please ensure 'allow instance tags in metadata' is enabled on the instance. labels/cloud.go:147
teleport[2246081]: 2025-10-21T20:51:44.541Z INFO Attempting to register with IAM method using region STS endpoint. role:Instance join/join.go:785
teleport[2246081]: 2025-10-21T20:51:44.697Z INFO Successfully registered with IAM method using regional STS endpoint. role:Instance join/join.go:807
teleport[2246081]: 2025-10-21T20:51:44.697Z INFO Successfully registered. method:via proxy server join/join.go:395
teleport[2246081]: 2025-10-21T20:51:44.698Z INFO [PROC:1] Successfully obtained credentials to connect to the cluster. pid:2246081.1 identity:Instance service/connect.go:383

journalctl -u teleport-update -n100 -f
systemd[1]: Starting teleport-update.service - Teleport auto-update service...
teleport-update[160893]: INFO [UPDATER] Teleport is up-to-date. Update window is active, but no action is needed. active_version:18.3.1 agent/updater.go:877
systemd[1]: teleport-update.service: Deactivated successfully.
systemd[1]: Finished teleport-update.service - Teleport auto-update service.
From an admin workstation, log in to the cluster:
tsh login --proxy example.teleport.sh:443
Confirm that the node has joined (it might take up to 15 minutes for the inventory to sync):
tctl inventory ls
Server ID                            Hostname                                    Services Agent Version Upgrader Upgrader Version Update Group
------------------------------------ ------------------------------------------- -------- ------------- -------- ---------------- ------------
1fcaf1e0-0fbf-454d-ac13-49965918dc39 ip-172-31-44-126.us-west-2.compute.internal Node     v18.3.1       binary   v18.3.1          default
Check the Managed Updates v2 status:
tctl autoupdate agents status
Group Name          State Start Time          State Reason    Agent Count Up-to-date
------------------- ----- ------------------- --------------- ----------- ----------
default (catch-all) Done  XXXX-XX-XX 00:00:00 update_complete 1           1