What is a Kubernetes Cluster?

Kubernetes is an open-source container orchestration system for automating software deployment, scaling and management. A Kubernetes cluster is the set of worker nodes (the compute) and the control plane that together run and manage deployed workloads. This webpage will explore some of the history of Kubernetes, break down what a Kubernetes cluster is and go deeper into key Kubernetes topics.

Before Kubernetes: a brief history

Before we can understand Kubernetes, it helps to understand the problems faced by the ‘hyperscalers.’ Originally, companies deployed mainframes in a basement or elsewhere on-premises. As computing workloads scaled, teams started to either build their own data centers or rent capacity. With this change, teams adopted virtualization and virtual machines (VMs) to use their hardware more efficiently. VMs, however, each require a complete operating system, which wastes resources.

As teams moved to the cloud, they faced new limitations and opportunities. This saw the rise of cloud-native architecture: teams started to use containers, leveraging technology such as Docker to deploy apps as microservices. As these services grew, so did the need for automation. Many large tech companies built internal systems to solve these problems. Google’s internal system was Borg, and Google took the learnings and patterns from running containerized applications at scale and released Kubernetes as open source, with version 1.0 arriving in 2015.

Why do you need to use Kubernetes?

As mentioned above, teams adopted Kubernetes to run modern cloud-native microservice architectures more effectively. Beyond being a straightforward way to run containerized applications, Kubernetes has a few other benefits.

  • Multi-Cloud: Because Kubernetes is open source, teams can run it on-prem, in the cloud, or on hosted cloud providers such as AWS EKS or Azure AKS.

  • Stateless Services: Kubernetes is well suited to running stateless services, such as microservices, serverless functions, machine-learning models, and other container images.

  • High Availability: Kubernetes was designed to be highly available; the system self-heals, restarting or replacing pods that fail their health checks.

  • Autoscaling: Teams can leverage resource management for pods and containers to set CPU and memory requests and limits, letting workloads scale easily with demand (see the sketch after this list).

  • Service Discovery and Load Balancing: Kubernetes has built-in service discovery, and Services and Ingress provide load balancing.
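
To make the resource-management point concrete, here is a minimal sketch of a Pod that declares CPU and memory requests and limits. The pod name, container image and values are illustrative assumptions, not taken from this page:

apiVersion: v1
kind: Pod
metadata:
  name: web                  # hypothetical pod name
spec:
  containers:
  - name: web
    image: nginx:1.25        # illustrative container image
    resources:
      requests:              # the scheduler places the pod based on requests
        cpu: "250m"
        memory: "128Mi"
      limits:                # the kubelet enforces these ceilings
        cpu: "500m"
        memory: "256Mi"

Requests tell the scheduler how much capacity to reserve for the pod, while limits cap what the container may consume; autoscalers use the same figures when deciding whether to add or remove replicas.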

Ideal Kubernetes workloads

Ideal workloads for Kubernetes include stateless services such as web apps, microservices, and APIs. These services are designed to scale quickly and efficiently, making them strong candidates for container orchestration.

Kubernetes also excels at scheduling and managing batch jobs, distributed systems, and machine-learning workloads. It can be used to deploy and manage workloads on IoT devices, as well as to run distributed databases such as Cassandra and MongoDB.

How Kubernetes works

Kubernetes is an open-source platform for managing containerized workloads and services. It works by automating the deployment, scaling, and operations of application containers.

Kubernetes is designed to facilitate both declarative configuration and automation. It groups containers that make up an application into logical units for easy management and discovery.

Kubernetes works by deploying containers on nodes across a cluster. Nodes are physical or virtual machines that are connected to the cluster. Kubernetes uses labels and annotations to organize and coordinate the containers across the nodes. It also provides a unified API to allow users to control and automate the deployment and scaling of applications.
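
As a small illustration of label-based organization, labels can be attached to objects and then used to select them with kubectl. The pod name and label below are assumptions, not values from this page:

# attach a label to a running pod (pod name is hypothetical)
$ kubectl label pod web-7d4f9 app=web
# list only the pods carrying that label
$ kubectl get pods -l app=web
# show every pod together with its labels
$ kubectl get pods --show-labels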

The Kubernetes control plane manages the state of the cluster: it deploys and scales containers, schedules workloads, monitors the cluster's health, and provides self-healing to keep applications up and running.

Kubernetes primitives

Kubernetes primitives are the basic building blocks of the system: the foundational objects used to deploy, scale and manage applications on a Kubernetes cluster. They include Pods, Services, Deployments, ConfigMaps, Secrets, and Namespaces.

  1. Pods: Pods are the basic unit of deployment in Kubernetes. A pod is a group of one or more containers and their associated storage, network and other resources.
  2. Deployment: A Deployment is a declarative way to describe how to deploy an application. It allows you to define the desired state of your application, and the Deployment controller will ensure that the application is running in that state (see the sketch after this list).
  3. Service: A Service is an abstraction layer that provides a stable entry point for traffic to a group of pods running in a Kubernetes cluster, allowing you to route traffic to specific applications based on a variety of criteria.
  4. ReplicaSet: A ReplicaSet is a controller that ensures the desired number of replicas of an application are running at any given time. If the number of running replicas falls below the desired number, the ReplicaSet will scale up the application to meet it.
  5. Autoscaling: Autoscaling is a Kubernetes feature that scales an application up or down in response to changes in resource demand.
  6. Namespace: A Namespace is a logical grouping of resources in a Kubernetes cluster. It allows you to organize resources in a way that makes it easier to manage and control access to them.
  7. ConfigMap: A ConfigMap is a set of configuration settings that can be used by applications. It is used to store application configuration settings in a Kubernetes cluster, making it easier to manage and update settings without having to redeploy applications.
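
The sketch below ties several of these primitives together: a Deployment that keeps three replicas of a hypothetical web container running, and a Service that routes traffic to those pods by label. The names, image and ports are illustrative assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # the Deployment's ReplicaSet keeps three pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25     # illustrative image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                  # route traffic to pods carrying this label
  ports:
  - port: 80
    targetPort: 80

Applying a file like this with kubectl apply -f web.yaml asks the control plane to converge the cluster on the declared desired state.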

What is the Kube-Controller-Manager?

Kube-controller-manager is a control plane process that runs on the control plane (master) node of a Kubernetes cluster. It runs the various controllers that handle routine tasks in the cluster, such as node management, endpoints, and namespace management. Those controllers continuously work to make the actual state of the cluster match the desired state.

Kubernetes architecture

The Kubernetes systems diagram is a graphical representation of the components and architecture of Kubernetes. It illustrates the components that make up the Kubernetes system and how they interact with each other.

Kubernetes Systems Diagram

The diagram includes the main components of Kubernetes, such as the Kubernetes control plane (master), nodes, controllers and the API server, along with the supporting networking, storage and monitoring components. It shows the relationships between them, such as the communication between the control plane and the nodes and between the nodes and the controllers. Finally, it illustrates the objects that make up the applications running on the cluster, such as Deployments, Services and Pods.

  • etcd: Etcd is an open-source, distributed key-value store that provides a reliable way to store data across a cluster of machines.
  • Controller Manager: Kubernetes Controller Manager is an important component of Kubernetes that is responsible for managing the core control loops that keep the cluster running.
  • api-server: Kubernetes API Server is the central management entity of the Kubernetes cluster. It is a RESTful web service that provides an interface for users and administrators to interact with the cluster. The API Server is responsible for managing the lifecycle of Kubernetes objects, such as Deployments, Services, and Pods. You interact with it using kubectl, typically by applying YAML or JSON configuration files.
  • kube-apiserver: kube-apiserver is the component that implements the API server, providing a secure, reliable and streamlined way for users to manage and administer their Kubernetes clusters.
  • DNS: Kubernetes DNS is a cluster add-on that provides DNS-based service discovery. It allows services within a Kubernetes cluster to be addressed using a DNS-like syntax, and simplifies service discovery and access within the cluster. Kubernetes DNS is based on the open source CoreDNS server.
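
As a hedged illustration of that DNS naming, a Service named web in the default namespace would typically resolve inside the cluster as web.default.svc.cluster.local; the service and namespace names here are assumptions:

# run a throwaway pod and resolve the service name from inside the cluster
$ kubectl run tmp --image=busybox:1.36 --restart=Never -it --rm -- \
    nslookup web.default.svc.cluster.local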

The Kubernetes ecosystem

Kubernetes is an open-source product, but there are many different variants within the Kubernetes ecosystem. You can run Kubernetes yourself or use a cloud-hosted provider such as Amazon EKS, Microsoft AKS, or Google GKE.

Kubernetes control plane

The Kubernetes control plane is the set of components responsible for managing the Kubernetes cluster. It includes the Kubernetes API Server, the Kubernetes Scheduler, the Kubernetes Controller Manager, and the etcd distributed key-value store. The control plane manages the lifecycle of the cluster, including scheduling, scaling and networking, and provides a secure, unified interface for users and applications to interact with the cluster.


What is kubectl?

kubectl is the command-line tool for accessing the Kubernetes API; it can even open a shell inside a pod. It can be run on a developer’s machine or via a bastion, and it has many configuration flags. This kubectl cheat sheet highlights the key ones.
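
A few everyday kubectl invocations, shown as a sketch; the placeholders in angle brackets are hypothetical names you would substitute:

# list pods in a namespace
$ kubectl get pods -n kube-system
# inspect a pod's state and recent events
$ kubectl describe pod <pod-name>
# stream a container's logs
$ kubectl logs -f <pod-name>
# open an interactive shell inside a pod
$ kubectl exec -it <pod-name> -- /bin/sh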

How to share Kubectl Config?

Once a team rolls out Kubernetes, its members will often need to access the Kubernetes API via kubectl. As the team grows, it’s important to add SSO to kubectl and to secure any other Kubernetes auth methods.
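
A minimal sketch of how kubeconfig contexts are inspected and switched; the context name staging is an assumption. Note that passing long-lived kubeconfig credentials around by hand is exactly the pattern that SSO and short-lived certificates are meant to replace:

# list the clusters/contexts available in the current kubeconfig
$ kubectl config get-contexts
# switch to a different context (name is illustrative)
$ kubectl config use-context staging
# print the active context's config with credentials inlined
$ kubectl config view --minify --flatten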

What is Kubelet?

Kubelet is an agent that runs on each node in a Kubernetes cluster. Its purpose is to manage the lifecycle of containers on the node, making sure that the containers are healthy and running. It does this by communicating with the Kubernetes API server and responding to requests for container operations. It also monitors the resource usage of containers and ensures that the resources allocated to them are within limits set by the cluster administrator.

What is Kube-proxy?

Kube-proxy is a network proxy that runs on each node in a Kubernetes cluster. It maintains the network rules that route traffic for Services to the correct pods and load-balances traffic between pods, which is how other components and workloads in the cluster reach services and pods running on individual nodes.

How do you interact with Kubernetes clusters?

  1. Command Line Interface (CLI): You can use the kubectl command line tool to interact with and manage your Kubernetes clusters.
  2. Dashboard: The Kubernetes Dashboard is a web-based UI that provides an easy way to manage and troubleshoot your Kubernetes clusters.
  3. API: The Kubernetes API provides a programmatic way to interact with clusters, allowing you to automate the management of your deployments (see the sketch after this list).
  4. Third-Party Tools: Some third-party tools and services provide additional ways of interacting with and managing Kubernetes clusters.
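
As a sketch of programmatic access, kubectl proxy can forward authenticated requests to the API server so that plain HTTP clients can call it; the port and namespace below are assumptions:

# open a local, authenticated proxy to the API server
$ kubectl proxy --port=8001 &
# list pods in the default namespace through the REST API
$ curl http://127.0.0.1:8001/api/v1/namespaces/default/pods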

Kubernetes security

As more companies have adopted Kubernetes, it’s become important to secure Kubernetes. Here are four tips for keeping Kubernetes secure and preventing hackers from accessing your cluster.

  1. Implement role-based access control (RBAC): RBAC is a powerful tool for controlling user access to Kubernetes resources. It allows administrators to control who has access to which Kubernetes resources and at what level (a minimal example follows this list).
  2. Harden Kubernetes components: It’s important to ensure the security of Kubernetes components such as the API server, the scheduler and the kubelet. This includes patching, monitoring and hardening the configuration of these components.
  3. Use network policies: Network policies allow administrators to control the communication between pods in a cluster. This helps to restrict access to resources and ensure that unwanted traffic is blocked.
  4. Monitor Kubernetes resources: Monitoring Kubernetes resources and metrics is important for understanding the state of the cluster and for detecting any malicious activities. This includes monitoring for suspicious API calls, resource changes or unauthorized user access.
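
To make the RBAC tip concrete, here is a minimal sketch of a Role that grants read-only access to pods in one namespace, and a RoleBinding that grants it to a hypothetical user named jane; all names are illustrative assumptions:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader            # illustrative role name
  namespace: default
rules:
- apiGroups: [""]             # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane                  # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io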

Easy to get started

Teleport is easy to deploy and use. We believe that simplicity and good user experience are key to first-class security.

Teleport consists of just two binaries.
  1. The tsh client allows users to log in and retrieve short-lived certificates.
  2. The teleport agent can be installed on any server, database, application and Kubernetes cluster with a single command.
Download Teleport
Terminal
# on a client
$ tsh login --proxy=example.com
# on a server
$ apt install teleport
# in a Kubernetes cluster
$ helm install

Try Teleport today

In the cloud, self-hosted, or open source
Get Started
View developer docs