Access Multiple Kubernetes Clusters

May 17, 2022 by Daniel Olaogun


Kubernetes is a tool used by many developers and DevOps administrators to deploy and manage containerized applications, and it has become a default tool for container orchestration in many organizations.

To deploy and manage containerized applications in Kubernetes, you must set up a Kubernetes cluster. A Kubernetes cluster is a set of nodes or servers that hosts your Kubernetes workloads. Many organizations set up a single Kubernetes cluster to manage their applications, but in some scenarios, a single Kubernetes cluster isn’t enough, which means organizations will set up multiple Kubernetes clusters. Such scenarios may include a need for increased availability and performance, policy compliance requirements that are region-specific, or a desire to eliminate vendor lock-in.

Shifting from a single Kubernetes cluster to multiple Kubernetes clusters raises new problems, such as managing access and securing the clusters. In this article, you’ll learn more about managing access to multiple Kubernetes clusters at scale.

Use cases for accessing multiple clusters

There are many use cases for accessing multiple clusters, and in this section, you'll look at three of the most common.

Providing managed Kubernetes service

If you’re providing a managed Kubernetes service, such as AWS EKS, you’ll need to manage multiple master nodes while your clients take charge of their worker nodes. Although your clients won’t be involved in managing their master nodes, you and your team will need access to all of them to handle scaling, availability, and backups.

Access to multiple in-house clusters

One of the most popular reasons an organization may have multiple in-house clusters is to isolate apps on the basis of their environment. For instance, all apps in development are deployed in the dev cluster, while apps in the testing stage are deployed in the test cluster, and apps ready to be used by end users are deployed in the production cluster. This approach restricts access to the production environment, and creates stronger security on the production cluster, so that if there’s a misconfiguration on any of the dev or test clusters, users on the production cluster are not affected.

Virtual clusters for developers

Just as you can have multiple clusters based on application environments, Kubernetes administrators can also create virtual clusters. A virtual cluster is spun up inside an existing Kubernetes cluster and shares the host cluster’s resources. Virtual clusters offer many of the benefits of separate clusters, such as restricting users’ access to a specific cluster, which improves security, while being less expensive than running multiple resource-independent clusters. Tools such as vcluster create virtual clusters inside a host cluster; note that MicroK8s, Minikube, and K3s are lightweight standalone Kubernetes distributions for local development rather than virtual clusters.
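As an illustrative sketch (assuming the open-source vcluster CLI is installed and you’re connected to a host cluster; the names "demo" and "team-1" are placeholders), creating and connecting to a virtual cluster might look like this:

```shell
# Create a virtual cluster inside the "team-1" namespace of the host cluster
vcluster create demo --namespace team-1

# Connect to it; this writes kubeconfig credentials for the virtual cluster
vcluster connect demo --namespace team-1
```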

How to manage cluster access at scale

The kubeconfig configuration file is an important aspect of managing access to any Kubernetes cluster. Without this file, you can’t connect to any of your Kubernetes clusters via kubectl in your terminal.

There are three major sections in a kubeconfig file:

Clusters: This contains the endpoint to the Kubernetes cluster API server, as well as a public certificate.

Users: This is a list of users that are permitted to access the Kubernetes API.

Contexts: This maps users and clusters together, allowing one user to connect to multiple clusters, or multiple users to connect to one Kubernetes cluster.

Below is an example of a kubeconfig file:

apiVersion: v1
kind: Config
preferences: {}
current-context: ""
clusters:
- cluster:
    certificate-authority: demo-ca-file-1
  name: demo-cluster-1
- cluster:
    certificate-authority: demo-ca-file-2
    insecure-skip-tls-verify: false
  name: demo-cluster-2
- cluster:
    insecure-skip-tls-verify: true
  name: demo-cluster-3
contexts:
- context:
    cluster: demo-cluster-1
    namespace: development
    user: developer
  name: dev-cluster
- context:
    cluster: demo-cluster-2
    namespace: production
    user: admin
  name: admin-cluster
- context:
    cluster: demo-cluster-1
    namespace: staging
    user: developer
  name: dev-staging-cluster
users:
- name: developer
  user:
    client-certificate: demo-cert-file
    client-key: demo-key-file
- name: admin
  user:
    password: admin-password
    username: admin

When dealing with multiple clusters, you must configure your kubeconfig file with the necessary credentials, as seen in the example above, to enable you to access the clusters.

There are several methods you can use, such as updating the kubeconfig file manually with the new cluster credentials or using the kubectl client to add new credentials to your kubeconfig file.
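A third approach, built into kubectl itself, is to keep one kubeconfig file per cluster and let kubectl merge them through the KUBECONFIG environment variable (the file paths below are illustrative):

```shell
# Point kubectl at several kubeconfig files at once; kubectl merges them
# in order, with earlier files taking precedence on conflicting entries
export KUBECONFIG="$HOME/.kube/config:$HOME/.kube/dev-cluster.yaml"

# Optionally write the merged result out as a single flattened file
kubectl config view --flatten > "$HOME/.kube/merged-config"
```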

For instance, if you want to have access to another cluster, you can use the kubectl command-line client to set the cluster, context, and user, as seen below.

# Set the cluster, context, and user using kubectl command.
# add cluster details
kubectl config set-cluster new-cluster --server=https://&lt;cluster-endpoint&gt; --certificate-authority=new-cluster-ca-file

# add user details
kubectl config set-credentials new-user --client-certificate=new-user-cert-file --client-key=new-user-key-file

# add context details
kubectl config set-context user-cluster --cluster=new-cluster --namespace=default --user=new-user
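Once the context is registered, you can switch to it and confirm that it is active (the context name matches the set-context command above):

```shell
# Make the new context the active one
kubectl config use-context user-cluster

# Verify which context kubectl will use for subsequent commands
kubectl config current-context
```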

Kubectl also has other commands for managing the kubeconfig file credentials, such as deleting and updating user and cluster credentials. When updating user and cluster credentials, setting new credentials in your kubeconfig file with a name that already exists in the file overwrites the previous credentials. To delete credentials, you can use the unset command:

# To delete a user 
kubectl config unset users.<name>

# To delete a cluster
kubectl config unset clusters.<name>

# To delete a context
kubectl config unset contexts.<name>
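Before or after making these edits, you can inspect what the kubeconfig file currently contains:

```shell
# List all contexts; the active one is marked with an asterisk
kubectl config get-contexts

# Print the merged kubeconfig, with certificate data redacted
kubectl config view
```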

While kubectl provides useful commands for managing access to multiple clusters, it also can create new problems. When you create access for multiple users, you must share their user credentials in a secured manner in order to ensure cluster security. Managing access like this also quickly becomes cumbersome when you have many users or have to create multiple user credentials for your clusters.

Another problem with managing clusters at scale is enforcing best security practices on all your clusters. One example is multifactor authentication, which is a security best practice that uses multiple authentication methods, such as requiring a one-time password sent to the user’s verified device in addition to the standard username-password login.

Manually creating and managing users in a single cluster is tedious, and the pain points multiply when you have to manage their access across multiple clusters. When you follow the best practice of creating short-lived certificates for users in your clusters, it quickly becomes an enormous time sink, as you have to frequently generate new certificates when they expire, then share them with the respective users in a secure way. Exporting kubeconfigs becomes another operational challenge.

End users can use kubectx to switch between clusters easily, while kubens makes it just as simple to switch between namespaces.
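For example, using the context and namespace names from the kubeconfig example above:

```shell
# Switch the active context to dev-cluster
kubectx dev-cluster

# Switch the active context's default namespace to staging
kubens staging
```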

It’s clear that relying on kubectl alone isn’t sufficient for access management. To effectively and securely manage access to multiple clusters, you must use a solution with the following characteristics.

  • Excellent security: The solution needs to be able to secure cluster and user credentials. In the event of a security breach, it should also be able to revoke compromised credentials.
  • Easy cluster management at scale: You need a tool that allows you to manage user access to multiple clusters easily. In addition to allowing new users to register on a cluster, the tool should permit you to easily add or remove users from a cluster or clusters, as well as edit their access permissions.
  • Support for incremental adoption: As your team grows, you may find that you need more clusters, which means more cluster access to manage. The solution you choose should have the capacity to handle access to as many clusters as possible, ensuring that you won’t be limited by any inability to handle the level of scale you need.
  • Support for existing clusters: A good solution should provide support for existing clusters. This means that you don’t have to tamper with existing configurations of your clusters to accommodate the solution. Instead, you should be able to integrate it easily into your existing clusters.

Teleport Kubernetes Access for multi-cluster access

You’ve learned about using the kubeconfig file as a tool to manage access to Kubernetes clusters, but you’ve also learned that it’s not a particularly effective solution, especially at scale.

Teleport is a sophisticated tool for managing access to multiple Kubernetes clusters. Teleport also provides SSO authentication, allowing for easy onboarding of users into your clusters. Furthermore, Teleport allows you to enforce security best practices for your clusters. Such practices include the following:

  • Multi-factor authentication: With Teleport, you can require that your users employ multiple authentication mechanisms before they are granted access to the cluster.
  • Short-lived kubeconfig and certificates: These measures help ensure that even if user credentials are compromised, an attacker won’t have access to the cluster for a long period of time.
  • Auditing: In addition to allowing you to see all your servers and connections in real time, Teleport also creates a comprehensive audit log for security and compliance. It records interactive sessions and security events across all your environments, offering you easy insight into exactly what’s happening.

It can become difficult to manage access in a unified way for multiple clusters across different cloud providers and data centers. However, Teleport provides an easy-to-use interface through a single dashboard that allows you to easily manage and control users’ access to your clusters, monitor the activities that each user performs on each cluster, and much more.
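As a sketch of the day-to-day workflow, Teleport’s tsh client can log a user in via SSO and issue short-lived Kubernetes credentials; the proxy address and cluster name below are placeholders:

```shell
# Authenticate against the Teleport proxy (opens an SSO flow)
tsh login --proxy=teleport.example.com

# List the Kubernetes clusters this user can reach through Teleport
tsh kube ls

# Retrieve short-lived credentials for one cluster and update kubeconfig
tsh kube login demo-cluster
```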

Teleport is open source and can be installed using our Helm charts. If you’re using Teleport Cloud, follow our documentation.

# Add the teleport-agent Helm chart to your chart repositories
$ helm repo add teleport https://charts.releases.teleport.dev
$ helm repo update

# Install the Kubernetes agent. It dials back to the Teleport cluster at $PROXY_ADDR
$ CLUSTER='cookie'
$ helm install teleport-agent teleport/teleport-kube-agent \
  --set kubeClusterName=${CLUSTER?} \
  --set proxyAddr=${PROXY_ADDR?} \
  --set authToken=${TOKEN?} \
  --create-namespace --namespace=teleport-agent \
  --set teleportVersionOverride=9.1.2

Teleport also has a unique feature called Trusted Clusters, which provides a seamless way to access multiple clusters. It allows administrators to connect multiple clusters and establish trust between them. One of these clusters, referred to as the root cluster, lets users reach the other clusters through its own proxy server. This means users don’t need to hop between different proxy servers to access different clusters, nor do they need a direct connection to the other clusters’ proxy servers.

Conclusion

Configuring access to multiple clusters can be a daunting task if you don’t have the right tools. In this article, you’ve learned how to use kubectl to manage access in multiple clusters, and you've seen how doing so is a complex process that's difficult to do securely. You’ve also learned about Teleport, which can help you efficiently and securely manage access to your clusters.

Teleport isn’t restricted to Kubernetes clusters, either. You can also use Teleport to manage access to other infrastructure, such as your database, Linux servers, web applications, and more. You can control all aspects of your infrastructure’s access from your Teleport dashboard, reducing operational overhead, enforcing security compliance, and improving productivity tremendously. If you’re looking to take management of your infrastructure access to the next level, give Teleport a try.
