Welcome back to the next chapter of Kubernetes 101. I’m your host, Scott Mabe, and I’m here to continue the journey through our blog-based primer that examines the wonders of K8s and how it keeps applications available in the cloud.
We started with a high-level overview of the concept and all of the parts that come together to power your cloud-native apps. When we took that first dive into the world of Kubernetes 101, we learned that K8s works like a powerful locomotive engine that controls and drives the rest of the application train. This train is made up of compute nodes, clusters, containers, pods and more.
Today, we’re starting the first of our more micro-examinations of Kubernetes’ various elements, starting with nodes and clusters.
What are Nodes and Clusters?
Nodes and clusters are the hardware that carries the application deployments, and everything in Kubernetes runs “on top of” a cluster. You can’t have clusters without nodes; the two are symbiotic.
As a reminder from the brief mention of nodes and clusters in our first Kubernetes 101, a node is a server. It’s the smallest unit of computer hardware in Kubernetes. Nodes store and process data.
A node can be a physical computer or a virtual machine (VM). A VM is software that emulates a physical computing environment, with its own operating system (OS) and applications, often running in the cloud.
Running on a cloud such as AWS or Google abstracts the need to set up and maintain these nodes. In many cases, your cloud provider will update the version of Kubernetes running on the nodes for you.
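For example, you can check which Kubernetes version each of your nodes is running with a single kubectl command (this is a generic sketch; the names and versions you see will depend on your own cluster):

```shell
# List every node in the cluster. The VERSION column shows the
# Kubernetes (kubelet) version each node is running, which a managed
# cloud provider will typically keep up to date for you.
kubectl get nodes
```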
A cluster is a group of servers or nodes. Using the same train analogy from our Kubernetes 101 post, the nodes are the individual train cars, such as a tanker or a freight car. The cluster is the body of the train: the connection of all these cars that forms the train itself. Typically, however, we think of the cluster as a whole without paying too much attention to individual nodes.
It’s best practice to create clusters with at least three nodes to ensure reliability and efficiency. Every cluster has one master node, which serves as the unified endpoint for the cluster, and at least two worker nodes. All of these nodes communicate with each other over a shared network to perform operations. In essence, you can treat them as a single system.
Working with a cloud provider takes away the need to manually add nodes to your clusters via the kubeadm command in the event that you need to scale up or down.
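If you do manage your own cluster, adding a worker node with kubeadm looks roughly like this (the IP address, token, and certificate hash below are placeholders; your own control plane generates the real values):

```shell
# On the master node: create a fresh bootstrap token and print the
# complete join command for a new worker.
kubeadm token create --print-join-command

# On the new worker node: run the join command printed above.
# The endpoint, token, and hash here are placeholder examples.
kubeadm join 10.0.0.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash>
```

A managed cloud service hides these steps behind its own scaling controls, which is exactly the convenience described above.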
How do Nodes and Clusters Work?
As we dig deeper into Kubernetes architecture in future blogs, you’ll learn there’s much more under the surface of a cluster and its nodes. For the time being, we’re going to focus on the important role the cluster and its nodes play in the Kubernetes process.
The cluster is the foundation of the K8s objects representing all of your containerized applications. Applications run on top of a cluster, guided by the cluster master node. To stick with our train analogy, the cluster master is the conductor. It keeps things moving by determining what runs on the cluster’s worker nodes. Essentially, the nodes share resources with each other and act as a single powerful machine -- the cluster.
When you deploy an application on that cluster, the cluster master schedules the deployment and distributes the work across the worker nodes. These worker nodes run the services that support the containers inside of them, starting and stopping app activity according to the cluster master’s requests.
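You can watch this scheduling happen yourself. Here is a minimal sketch using kubectl (the deployment name "web" and the nginx image are example choices, not anything specific to this series):

```shell
# Create a simple deployment with three replicas. The cluster
# master's scheduler decides which worker node runs each pod.
kubectl create deployment web --image=nginx --replicas=3

# List the pods with extra detail; the NODE column shows where
# the scheduler placed each one across the worker nodes.
kubectl get pods -o wide
```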
Because the nodes are workers (or in the train analogy, the separate train cars), the cluster master oversees activity on each node, much like a train conductor monitors what’s happening inside each train car.
These worker nodes self-report their status to the cluster master, which determines whether nodes need to be repaired, upgraded or removed from service. The cluster master can shift work across nodes as needed when a node fails or when nodes are added or removed.
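You can see this self-reporting, and take a node out of service yourself, with a few kubectl commands (the node name "worker-node-1" is a placeholder; substitute a name from your own cluster):

```shell
# The STATUS column reflects what each node's kubelet is
# reporting back to the cluster master: Ready or NotReady.
kubectl get nodes

# Inspect one node's reported conditions in detail, such as
# memory pressure, disk pressure and overall readiness.
kubectl describe node worker-node-1

# Take a node out of service for repair: cordon stops new pods
# from being scheduled there, and drain moves existing work to
# the other nodes in the cluster.
kubectl cordon worker-node-1
kubectl drain worker-node-1 --ignore-daemonsets
```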
The Kubernetes engine drives all of this communication between the nodes, the cluster master and the cluster as a whole.
Under the surface, you have other parts that also play a role in making all of this happen at a more micro level. These are pods and the aforementioned containers, the software powering Kubernetes. We’ll discuss these pieces of the puzzle in our next installment of Kubernetes 101.
For now, I’ve given you a look at the physical hardware (nodes and clusters) that supports Kubernetes and how these different servers or VMs come together to ensure app availability in your cloud computing environment.
Learn more about the mysteries of Kubernetes by checking out the other blogs in our Kubernetes 101 series.