There are several different platforms for managing multiple containerized applications. They’re called container orchestrators, and Kubernetes, an open source project originally created by Google, is the most popular container orchestration platform by far. It helps to deploy, run, and manage large clusters of containerized applications while abstracting away the underlying hardware. This makes developers more efficient by allowing them to focus on what matters: software development.
When Kubernetes is used to deploy applications, a cluster is formed from a combination of worker nodes and the control plane. The worker nodes supply the compute, memory, and storage resources that run your containerized applications, while the control plane manages the worker nodes and makes global decisions for the cluster.
Containers holding the applications are grouped into pods, which run on the worker nodes. Each worker node hosts one or more pods, as well as a kubelet agent for communicating with the control plane.
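As a minimal sketch, a pod that groups two containers might be declared like this (the pod name, container names, and images are illustrative, not from this article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web            # illustrative pod name
spec:
  containers:
  - name: app          # main application container
    image: nginx:1.25
  - name: log-shipper  # sidecar container sharing the pod's network and storage
    image: busybox:1.36
    command: ["sh", "-c", "tail -f /dev/null"]
```

Both containers are scheduled together onto the same worker node, where the kubelet starts and supervises them.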
In this article, you’ll learn about the control plane, what it does, and why it’s essential to your container orchestration. You’ll also get a view of its internal components and how they work.
Why Do You Need the Control Plane?
The control plane is the powerhouse of a Kubernetes cluster. It ensures that every component in the cluster is kept in the desired state. It receives data about internal cluster events, external systems, and third-party applications, then processes the data and makes and executes decisions in response.
The control plane manages and maintains the worker nodes that hold the containerized applications. In order to understand why you need the control plane, you need to take a deep dive into how each of the pieces of the control plane contributes to appropriately managing your cluster.
The control plane not only exposes the layer that deploys the containers, but also manages their lifecycle. There are several key parts to the control plane:
- An API server that transmits data within the cluster and to external services
- A scheduler that assigns pods to nodes based on the resources available across the nodes
- A controller manager and a cloud controller manager that run control loops to keep the cluster in its desired state
- etcd, a persistent data store that keeps cluster configuration and state
All these components typically run together on a node called the control plane node (historically known as the master node).
Components of the Control Plane
In this section, you'll see what the internal pieces of the control plane do and how they work together to manage the Kubernetes cluster.
The API server is the interface that the control plane uses to interact with the worker nodes and external systems. It exposes the Kubernetes API by sharing data via its REST endpoints. The command line interface, web user interface, users, and services communicate with the cluster through the API server.
The main implementation of the Kubernetes API server is the kube-apiserver. You can deploy multiple kube-apiserver instances to balance traffic, as it’s designed to scale horizontally.
etcd is a reliable key-value data store that is used to persistently store all cluster data in a distributed manner. Although it is a separate open source service in the Cloud Native Computing Foundation (CNCF) ecosystem, in Kubernetes, it can only be accessed via the kube-apiserver because of the highly sensitive nature of the information it stores. It holds the configuration used by the worker nodes and other data used to manage the cluster.
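To make etcd's role more concrete, here is a toy in-memory sketch, in Python, of the kind of flat, prefixed keyspace Kubernetes persists. The class, keys, and values below are purely illustrative (real clusters store serialized objects under a `/registry/...` keyspace, and etcd is a distributed service, not a dict):

```python
# Toy key-value store illustrating how cluster state can live under
# hierarchical keys, the way Kubernetes persists objects in etcd.
class ToyKVStore:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

    def list_prefix(self, prefix):
        # etcd supports range reads; "list all pods in a namespace"
        # maps to a prefix query like this one.
        return {k: v for k, v in self._data.items() if k.startswith(prefix)}

store = ToyKVStore()
store.put("/registry/pods/default/web-1", {"phase": "Running"})
store.put("/registry/pods/default/web-2", {"phase": "Pending"})
store.put("/registry/pods/kube-system/dns-1", {"phase": "Running"})

# Listing the "default" namespace's pods is a prefix read.
default_pods = store.list_prefix("/registry/pods/default/")
```

In a real cluster, only the kube-apiserver performs these reads and writes; every other component goes through the API server.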
As the name implies, kube-scheduler assigns newly created pods to worker nodes, distributing the workload across the cluster. It watches for pods that have no node assigned, checks which nodes have the resources each pod requires, and binds each pod to a suitable node.
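This filter-then-score decision can be sketched in a few lines of illustrative Python. The node and pod structures here are made up for the example; the real scheduler weighs many more factors, such as affinity rules and taints:

```python
# Minimal sketch of a scheduling decision: filter out nodes that can't
# fit the pod's resource request, then score the survivors and pick one.
def schedule(pod_request, nodes):
    # Filtering: keep only nodes with enough free CPU and memory.
    feasible = [
        n for n in nodes
        if n["free_cpu"] >= pod_request["cpu"]
        and n["free_mem"] >= pod_request["mem"]
    ]
    if not feasible:
        return None  # the pod stays Pending until resources free up
    # Scoring: here, simply prefer the node with the most free CPU.
    return max(feasible, key=lambda n: n["free_cpu"])["name"]

nodes = [
    {"name": "node-a", "free_cpu": 2.0, "free_mem": 4096},
    {"name": "node-b", "free_cpu": 0.5, "free_mem": 8192},
]
chosen = schedule({"cpu": 1.0, "mem": 2048}, nodes)  # → "node-a"
```

If no node passes the filter, the pod remains unscheduled, which is why pods can sit in a Pending state on an overloaded cluster.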
The control plane runs control loops called controller processes, which watch the state of the cluster and work to move its current state toward the desired state. The kube-controller-manager runs and manages these controller processes. Logically, each controller is separate, but to reduce the complexity of managing them, they are compiled together and run in a single process. They include:
- The node controller watches the nodes and responds when a node goes down.
- The job controller notices one-off tasks and creates pods to run those tasks.
- The endpoints controller populates endpoint objects, joining services to the pods that back them.
- The service account controller and token controller create default service accounts and API access tokens for new namespaces, as you need an account and an API access token to interact with a newly created namespace.
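The reconciliation pattern all of these controllers share can be sketched as follows. This is illustrative Python under simplifying assumptions: real controllers watch objects through the API server rather than comparing plain dicts:

```python
# Sketch of a reconciliation loop: observe current state, diff it
# against desired state, and emit the actions that close the gap.
def reconcile(desired, current):
    actions = []
    for name, replicas in desired.items():
        have = current.get(name, 0)
        if have < replicas:
            actions.append(("create", name, replicas - have))
        elif have > replicas:
            actions.append(("delete", name, have - replicas))
    return actions

desired = {"web": 3, "worker": 2}   # what the cluster should look like
current = {"web": 1, "worker": 3}   # what the cluster looks like now
actions = reconcile(desired, current)
# → [("create", "web", 2), ("delete", "worker", 1)]
```

A real controller runs this loop continuously, so the cluster converges back to the desired state after any disturbance, such as a node failure taking pods with it.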
The cloud-controller-manager is a separate component that connects the cluster to the API of the underlying cloud infrastructure. It runs only the controllers specific to the cloud provider, such as Amazon Web Services, Azure, or Google Cloud Platform. This way, the components interacting with your cloud provider are kept separate from the components that only interact with your cluster. The cloud-controller-manager consists of three controller processes, which are combined into a single process to reduce complexity:
- The node controller watches the state and health of the nodes in the cloud provider, checking every five seconds by default. If a node stops responding, the node controller also checks to see if the node has been deleted.
- The route controller creates routes in the cloud provider.
- The service controller creates, updates, and deletes load balancers in the cloud infrastructure.
Responsibilities of the Control Plane
The control plane is an essential element of the Kubernetes cluster, and manages and controls every component of the cluster. It handles all the operations in the cluster, and its components define and control the cluster's configuration and state data. It configures and runs the deployment, management, and maintenance of the containerized applications. All of the previously mentioned core components that interact with worker nodes are part of the control plane.
The Kubernetes API server, which is the only way to manage the pod configuration information stored in etcd, is also part of the control plane. Every Kubernetes command and instruction goes through the API server. Setting up the control plane is the first thing you do when creating a cluster, and without the control plane, the worker nodes can't start or run—a Kubernetes cluster simply won't function without a control plane.
To work with the control plane, you need to understand how to interact with the REST endpoints exposed by the Kubernetes API server. To learn about control plane communication, check out the official Kubernetes documentation. In practice, you interact with the API server using tools like the kubectl command line client.
The control plane node also monitors the health of containerized applications and interacts with your cloud provider, when applicable, to ensure your containerized applications run smoothly.
Best Practices for Working with the Control Plane
The control plane is an integral part of Kubernetes, and most of the best practices that apply to working with Kubernetes in general also apply to working with the control plane. That said, there are some particularly crucial aspects to keep in mind.
Ensure High Availability of Control Nodes
Ensuring high availability of control plane nodes is critical to running Kubernetes: if the control plane goes down, the cluster can no longer be managed. To protect against this, run replicas of the control plane in multiple failure zones. Cloud providers refer to failure zones as availability zones, which are isolated locations within a region. Set up replicas of your cluster's control plane across multiple availability zones, and replicate each of the control plane components across those zones.
Apply the Principle of Least Privilege
Using a role-based access control (RBAC) policy helps you implement the principle of least privilege. RBAC uses rules to establish what resources a given role is able to access, as well as what actions the role is permitted to perform. Roles are then associated with accounts, giving those accounts the permissions associated with the roles assigned to them. Kubernetes has extensive support for RBAC and allows you to create nuanced policies that ensure users and service accounts have exactly the permissions they need and nothing more. Role-based access control prevents unauthorized access to the control plane.
The Kubernetes documentation on RBAC has more information about the Kubernetes RBAC mechanism and how to configure it for your cluster.
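As a hedged example, a namespaced Role granting read-only access to pods, bound to a single user, might look like the following. The role name, namespace, and user are illustrative; the kinds and API group come from the standard rbac.authorization.k8s.io/v1 API:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader        # illustrative role name
rules:
- apiGroups: [""]         # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]  # read-only: no create, update, or delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: read-pods
subjects:
- kind: User
  name: jane              # illustrative user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Because the verbs list only covers reads, this user can inspect pods in the default namespace but cannot modify them or touch any other resource, which is the principle of least privilege in practice.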
Automate Monitoring of the Control Plane
You need to set up automated collection of metrics and logs from your control plane. Monitoring the activities and health of the control plane is very important, and enables you to quickly troubleshoot and respond to orchestration or scheduling challenges when they arise. Tools like ContainIQ, Datadog, and Prometheus provide insight into the components of the control plane in a cluster, helping you stay abreast of the health of the control plane, its workload, and resource management.
The control plane is an essential part of every Kubernetes cluster. In this article, you took a closer look at how the control plane works and saw how important it is to a Kubernetes cluster. You also looked at the components of the primary node, how they work with each other, and how they relate to the worker nodes. Finally, you studied the functions and benefits of the control plane.
If you're working with containerized applications, ContainIQ offers observability and monitoring for Kubernetes. It allows you to view logs, metrics, and other important data about the state and health of your Kubernetes environment.