Teleport Kubernetes Operator

The Teleport Kubernetes Operator provides a way for Kubernetes users to manage some Teleport resources through Kubernetes, following the Operator Pattern.

The Teleport Kubernetes Operator is deployed alongside its custom resource definitions. Once deployed, users can use a Kubernetes client like kubectl or their existing CI/CD Kubernetes pipelines to create Teleport custom resources. The Teleport Kubernetes Operator watches those resources and makes API calls to Teleport to reach the desired state.
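
For example, a Teleport user can be declared with a manifest like the following sketch. The apiVersion and kind match the status example shown later on this page; the user name and role list are placeholders, and the exact spec fields mirror the corresponding Teleport user resource.

apiVersion: resources.teleport.dev/v2
kind: TeleportUser
metadata:
  name: myuser
spec:
  roles:
    - access

Applying the manifest (for example with kubectl apply -f myuser.yaml) creates the CR, and the operator then creates or updates the matching user in Teleport.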

Since Teleport version 15, the operator can be deployed in two ways:

  • alongside self-hosted Teleport clusters deployed with the teleport-cluster Helm chart. This deployment method differs from version 14: in version 15 and above, the operator is no longer deployed as a sidecar, so an operator outage cannot affect Teleport's availability.
  • against a remote Teleport instance (such as Teleport Cloud or a cluster deployed with Terraform)

The operator supports multiple replicas within a single cluster by electing a leader with a Kubernetes lease.

Warning

Only one operator deployment should run against a given Teleport cluster. Otherwise, the different operators would conflict with each other, causing instability and non-deterministic behaviour.

Currently supported Teleport resources are:

  • users (TeleportUser)
  • roles
    • TeleportRole creates role v5
    • TeleportRoleV6 creates role v6
    • TeleportRoleV7 creates role v7
  • OIDC connectors (TeleportOIDCConnector)
  • SAML connectors (TeleportSAMLConnector)
  • GitHub connectors (TeleportGithubConnector)
  • provision tokens (TeleportProvisionToken)
  • Login Rules (TeleportLoginRules)
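
As an illustration, a TeleportRole CR (creating a v5 role) could look like the following sketch; the role name, logins, and labels are placeholders, and the spec mirrors the corresponding Teleport role resource:

apiVersion: resources.teleport.dev/v5
kind: TeleportRole
metadata:
  name: myrole
spec:
  allow:
    logins:
      - ubuntu
    node_labels:
      env: dev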

Setting up the operator

If you are self-hosting Teleport using the teleport-cluster Helm chart, follow the guide for Helm-deployed clusters.

If you are hosting Teleport outside of Kubernetes (Teleport Cloud, Terraform, ...), follow the standalone operator guide.
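
As a rough sketch of the Helm-deployed path (the linked guide is authoritative; the release name, namespace, and the operator.enabled value are assumptions about your installation):

helm upgrade teleport-cluster teleport/teleport-cluster \
  --namespace teleport \
  --reuse-values \
  --set operator.enabled=true

The --reuse-values flag keeps the existing chart configuration and only toggles the operator on.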

Control reconciliation with annotations

The operator supports two annotations on CRs:

teleport.dev/keep

This annotation instructs the operator to keep the Teleport resource if the CR is deleted. This is useful if you want to migrate between two resource versions.

For example, to migrate from TeleportRoleV6 to TeleportRoleV7:

  • Annotate the existing TeleportRoleV6 resource with teleport.dev/keep: "true"
  • Delete the TeleportRoleV6 CR; the operator won't delete the associated Teleport role
  • Create a TeleportRoleV7 CR with the same name; the operator will find the existing v6 role and adopt it.
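
A sketch of that migration with kubectl, assuming a v6 role CR named my-role (kubectl accepts the kind name, as in the patch example at the end of this page):

kubectl annotate TeleportRoleV6 my-role teleport.dev/keep="true"
kubectl delete TeleportRoleV6 my-role

Applying a TeleportRoleV7 manifest with the same name then lets the operator adopt the existing role.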

Possible values are "true" or "false" (as strings: Kubernetes annotation values must be strings, not Booleans).

teleport.dev/ignore

This annotation instructs the operator to ignore the CR when reconciling. This means the resource will not be created, updated, or deleted in Teleport.

This also means the operator will not remove its finalizer if you try to delete an ignored CR. The finalizer will stay and the deletion will be blocked until you either patch the resource to remove the finalizer or remove the ignore annotation.

Possible values are "true" or "false" (as strings: Kubernetes annotation values must be strings, not Booleans).
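
For example, the annotation can be set directly in the CR manifest (the resource name is a placeholder; the apiVersion and kind match the user examples on this page):

apiVersion: resources.teleport.dev/v2
kind: TeleportUser
metadata:
  name: myuser
  annotations:
    teleport.dev/ignore: "true"
# [...]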

Look up values from secrets

Some Teleport resources might contain sensitive values. Certain CR fields can reference an existing Kubernetes secret, and the operator will retrieve the value from the secret when reconciling.

Even when sensitive values are stored outside of the CRs, the CRs must still be treated as being as sensitive as the Kubernetes secrets themselves: many CRs configure Teleport RBAC, so someone with permission to edit CRs can become a Teleport administrator and retrieve the sensitive values from Teleport.

See the dedicated guide for more details.

Troubleshooting

The custom resources (CRs) are not reconciled

The Teleport Operator watches for new resources or changes in Kubernetes. When a change happens, it triggers the reconciliation loop. This loop is in charge of validating the resource, checking whether it already exists in Teleport, and making calls to the Teleport API to create, update, or delete the resource. The reconciliation loop also adds a status field to the Kubernetes resource.

If an error happens and the reconciliation loop is not successful, an item in status.conditions will describe what went wrong. This allows users to diagnose errors by inspecting Kubernetes resources with kubectl:

kubectl describe teleportusers myuser

For example, if a user has been granted a nonexistent role, the status will look like:

apiVersion: resources.teleport.dev/v2
kind: TeleportUser
# [...]
status:
  conditions:
  - lastTransitionTime: "2022-07-25T16:15:52Z"
    message: Teleport resource has the Kubernetes origin label.
    reason: OriginLabelMatching
    status: "True"
    type: TeleportResourceOwned
  - lastTransitionTime: "2022-07-25T17:08:58Z"
    message: 'Teleport returned the error: role my-non-existing-role is not found'
    reason: TeleportError
    status: "False"
    type: SuccessfullyReconciled

Here, SuccessfullyReconciled is False and the error is "role my-non-existing-role is not found".

If the status is not present or does not give sufficient information to solve the issue, check the operator logs as described below.

The CR doesn't have a status

  1. Check if the CR is in the same namespace as the operator. The operator only watches for resources in its own namespace.

  2. Check if the operator pods are running and healthy:

    kubectl get pods -n "$OPERATOR_NAMESPACE"
  3. Check the operator logs:

    kubectl logs deploy/<OPERATOR_DEPLOYMENT_NAME> -n "$OPERATOR_NAMESPACE"
    Note

    In case of multi-replica deployments, only one operator instance runs the reconciliation loop. This instance is called the leader and is the only one producing reconciliation logs. The other operator instances are waiting to acquire the lease and log the following:

    leaderelection.go:248] attempting to acquire leader lease teleport/431e83f4.teleport.dev...
    

    To diagnose reconciliation issues, you will have to inspect all pods to find the one reconciling the resources.
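
    Alternatively, you can usually identify the current leader from the coordination lease in the operator's namespace (the lease name matches the one in the log line above); the HOLDER column shows which replica is active:

    kubectl get lease -n "$OPERATOR_NAMESPACE"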

I cannot delete the Kubernetes CR

The operator protects Kubernetes CRs from deletion with a finalizer. It will not allow the CR to be deleted until the Teleport resource is deleted as well; this is a safety measure to avoid leaving dangling resources that could grant unintended access.

Teleport can refuse a resource deletion for several reasons; the most frequent one is that another resource depends on it. For example, you cannot delete a role if it is still assigned to a user.

If this happens, the operator will report the error sent by Teleport in its log.

To resolve this lock, you can either:

  • resolve the dependency issue so the resource gets successfully deleted in Teleport. In the role example, this means removing the role from every user it is assigned to.

  • patch the Kubernetes CR to remove the finalizers. This tells Kubernetes to stop waiting for the operator to complete the deletion and to remove the CR. If you do this, the CR will be removed but the Teleport resource will remain. The operator will never attempt to remove it again.

    For example, if the role is named my-role:

    kubectl patch TeleportRole my-role -p '{"metadata":{"finalizers":null}}' --type=merge

Next steps