Teleport provides a locking mechanism to restrict access to a computing environment. System administrators can disable a compromised user or prevent access during cluster maintenance.
When a lock is in force, all interactions matched by the lock's target — SSH, database, and Kubernetes connections as well as certificate requests — are rejected. A lock can target the following objects or attributes:
- a Teleport user by the user's name;
- a Teleport RBAC role by the role's name;
- an MFA device by the device's UUID;
- an OS/UNIX login;
- a Teleport node by the node's UUID (effectively unregistering it from the cluster).
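As a sketch of a non-user target, a lock resource can name a role instead of a user. The `role` field and the role name `contractor` below are illustrative; verify the exact `spec.target` field names against the `lock` resource schema for your Teleport version:

```yaml
kind: lock
version: v2
metadata:
  name: block-contractors
spec:
  target:
    role: contractor  # hypothetical role name
  message: "Contractors are locked out during maintenance."
```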
Verify that your Teleport client is connected:
```
$ tctl status
# Cluster  tele.example.com
# Version  7.1.3
# CA pin   sha256:sha-hash-here
```
To try this flow in the cloud, log in to your cluster using tsh, then use tctl remotely:
```
$ tsh login --proxy=myinstance.teleport.sh
$ tctl status
```
Locks are modeled as resources with `kind: lock`. To create a new lock, run the `tctl lock` command:
tctl lock [email protected] --message="Suspicious activity." --ttl=10h
Created a lock with name "dc7cee9d-fe5e-4534-a90d-db770f0234a1".
Note that without specifying `--expires`, the created lock remains in force until explicitly removed with `tctl rm`. Refer to `tctl lock --help` for the list of all supported parameters.
Under the hood, `tctl lock` creates a resource:
```yaml
kind: lock
version: v2
metadata:
  name: dc7cee9d-fe5e-4534-a90d-db770f0234a1
spec:
  target:
    user: [email protected]
  message: "Suspicious activity."
  expires: "2021-08-14T22:27:00Z"  # RFC3339 format
```
`kind: lock` resources can also be created and updated using `tctl create` as per usual; see the Admin Guide for more details.
With a lock in force, all established connections involving the lock's target get terminated while any new requests are rejected.
Errors returned and warnings logged in this situation feature a message of the form:
```
lock targeting User:"[email protected]" is in force: Suspicious activity.
```
If a Teleport node or proxy cannot properly synchronize its local lock view with the backend, it must decide whether to keep relying on the last-known locks. This decision strategy is encoded as one of two modes:
- `strict` mode causes all interactions to be terminated when the locks are not guaranteed to be up to date;
- `best_effort` mode keeps relying on the most recent known locks.
The cluster-wide mode defaults to `best_effort`. You can set the default locking mode via the API or CLI using the `cluster_auth_preference` resource, or via the static configuration file:
Create a YAML file named `cap.yaml`, or get the existing resource using `tctl get cap`:
```yaml
kind: cluster_auth_preference
metadata:
  name: cluster-auth-preference
spec:
  locking_mode: best_effort
version: v2
```
Create a resource:
```
$ tctl create -f cap.yaml
cluster auth preference has been updated
```
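The static-configuration route mentioned above would look roughly like this in the Auth Service's `teleport.yaml`. The `authentication.locking_mode` key is assumed here; verify it against the configuration reference for your Teleport version:

```yaml
# teleport.yaml on the Auth Service host (restart the service after editing)
auth_service:
  authentication:
    locking_mode: best_effort
```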
It is also possible to configure the locking mode for a particular role:
```yaml
kind: role
version: v4
metadata:
  name: example-role-with-strict-locking
spec:
  options:
    lock: strict
```
When none of the roles involved in an interaction specify the mode or when there is no user involved, the mode is taken from the cluster-wide setting.
With multiple potentially conflicting locking modes (the cluster-wide default and the individual per-role settings), a single occurrence of `strict` suffices for the local lock view to become evaluated strictly.
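The precedence rules above can be summarized in a short sketch. This is illustrative pseudologic, not Teleport's actual implementation; the function name and representation of role settings are invented for the example:

```python
def effective_locking_mode(role_modes, cluster_default="best_effort"):
    """Resolve the locking mode for an interaction.

    role_modes: per-role `options.lock` values for the roles involved;
    None means the role does not specify a mode.
    """
    specified = [mode for mode in role_modes if mode is not None]
    if not specified:
        # No user involved, or none of the roles specify a mode:
        # fall back to the cluster-wide setting.
        return cluster_default
    # A single occurrence of strict wins over any best_effort settings.
    return "strict" if "strict" in specified else "best_effort"


print(effective_locking_mode([]))                               # -> best_effort
print(effective_locking_mode([None, "best_effort"]))            # -> best_effort
print(effective_locking_mode([None, "strict", "best_effort"]))  # -> strict
```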