Hello friends! Scott Mabe, here again. Are you ready to talk DevOps? Let’s say you overheard someone talking about container orchestration powered by Kubernetes. You hear them discussing pods, clusters, nodes and replication controllers.
It sounds like something out of Alien or Blade Runner, doesn’t it?
I assure you, this is computer science, not SciFi. Learning what Kubernetes is, how it works and why you need it to keep your cloud-native applications running efficiently will seriously change your life.
Ready to dive in? Here’s our take on understanding Kubernetes, one of the big benefits of DevOps.
What is Kubernetes?
First, Kubernetes, also referred to as K8s (starts with K, with eight letters in between, get it?), isn’t a simple concept. It solves many problems within cloud infrastructure and its related applications, but it’s complex. Even developers think so, as noted in this recent article from TechTarget recapping KubeCon + CloudNativeCon North America 2019, which I attended. What a great experience, by the way!
When you think of what you might already know about Kubernetes, containers probably come to mind. You’re on the right track, but Kubernetes isn’t a container. It’s the Google-created, open-source toolkit that manages application container orchestration in public, private and hybrid cloud computing environments to keep applications running — and minimize or eliminate downtime.
When trying to describe Kubernetes in simple terms, it’s sometimes easier to use a familiar point of reference. The internet is full of different analogies. One popular one is a ship with multiple decks and cabins. In fact, the word “Kubernetes” is Greek for “helmsman” or “pilot.” I prefer to liken it to a freight train.
In my example, Kubernetes is the powerful locomotive engine that pilots the rest of the train, which is filled with containers, clusters, pods, nodes and more. All of these make up your cloud-native applications. Kubernetes helps make them run as you intend them to function.
What Makes Kubernetes Work?
All of these pieces of virtual or physical hardware and software come together as the different parts of the train (controller) — and the cargo it carries (workloads) — that Kubernetes powers to drive your applications.
It provides automated application container deployment, scheduling, scaling, maintenance and operation by bringing all of these pieces and parts together to efficiently and effectively launch your applications.
In this next segment of understanding Kubernetes, let’s take a look at what these parts are and how each ties into Kubernetes and container orchestration.
Node Controller/Cluster Master
The node controller is the control-plane component responsible for monitoring each node’s health and managing the cluster’s membership.
It is worth noting that communication between the nodes and the control plane is encrypted using HTTPS (HyperText Transfer Protocol Secure), typically over port 443 or 6443.
The control plane, historically called the cluster master, also hosts the apiserver, which allows you to control your cluster using kubectl. Together, these components monitor the health of Nodes and Pods and can evict Pods from unhealthy Nodes so they can be rescheduled elsewhere.
Nodes & Clusters
At the most basic level, clusters are a grouping of individual machines, virtual or physical, known as nodes, that use a shared network to communicate. In our train analogy, the nodes represent the individual train cars. They’re the hard workers that make up the clusters.
Clusters are Kubernetes’ configuration platform for components, capabilities and workloads. Think of clusters as a connection of all of the tankers and freight cars making up what we call the actual “train.” A cluster has at least one master node and one worker node.
Nodes and clusters are the hardware that carries the cargo, otherwise known as the application deployments.
Containers & Pods
This is where containers come into the picture. This software element in the Kubernetes train functions as a piece of cargo within the freight car, or node. Containers virtually isolate and package applications for use. They share the operating system kernel of the host machines, aka the nodes, rather than each carrying a full OS of their own.
Simply put, they hold all of the components needed to make the application work, including files, libraries, dependencies and more. They’re a great way to bundle and run applications because they can be moved across different platforms, such as from an on-premise server to a cloud computing environment.
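To make that bundling concrete, here’s a sketch of how a container image might be defined. The app, file names and versions are all hypothetical; the point is that the dependencies and application files get packaged together into one portable unit.

```dockerfile
# Hypothetical container image for a small Python web app.
FROM python:3.12-slim

WORKDIR /app

# Bundle the dependencies (libraries) the app needs...
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# ...and the application files themselves.
COPY . .

# The port the app listens on, and the command that starts it.
EXPOSE 8080
CMD ["python", "app.py"]
```

Once built, that same image runs unchanged on a laptop, an on-premise server or a cloud node, which is exactly the portability described above.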
Next up: Pods. Think of a pod as a larger shipping crate that holds the container within the node, or in the case of the freight-train analogy, the freight car. In Kubernetes language, pods represent the processes running on each cluster. Each pod is assigned a unique IP address to differentiate it and prevent conflicts during Kubernetes activity.
Pods can hold a single container or multiple containers and still be considered a pod, with everything inside sharing the same resources and local network. Typically, a pod runs a single instance of an application on a node in your cluster. Pods are ephemeral: when its process completes, or its node fails, a pod is deleted or evicted rather than revived. If the pod was created by a controller, such as a replication controller or ReplicaSet, a replacement pod is created automatically to take its place.
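Here’s a minimal sketch of what a pod definition looks like in practice. The names and image are hypothetical; what matters is the shape: a pod wrapping one container, with the port its application uses.

```yaml
# A minimal, hypothetical pod definition (pod.yaml).
apiVersion: v1
kind: Pod
metadata:
  name: my-app            # hypothetical pod name
  labels:
    app: my-app
spec:
  containers:
    - name: my-app-container
      image: nginx:1.25   # the container image this pod runs
      ports:
        - containerPort: 80   # port the application listens on inside the pod
```

Applying this with `kubectl apply -f pod.yaml` asks the control plane to schedule one instance of the pod onto a node with spare capacity.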
Here’s how I remember how this all works: Containers live inside pods, which run on nodes. Those nodes make up a cluster. All of these things are coordinated by the control plane, the node controller and cluster master.
Putting It All Together
While there are even more pieces that make up the bigger solution, these are the basic parts to help you begin understanding Kubernetes. The next step is to learn how DevOps and Kubernetes drive these components to give you powerful application performance.
Because pods are the smallest deployable units in Kubernetes, they’re at the heart of what this tool does. For each pod, Kubernetes finds a node that has enough open compute capacity to launch the containers related to that pod. To keep things from getting muddled, each pod gets its own IP address, so its applications can use ports without conflicts.
Once Kubernetes launches the pods and their containers, an agent called a kubelet steps in to manage them. If a container fails, this node agent can automatically restart it. It can also replace or kill containers that don’t respond to health checks. Kubernetes is self-healing.
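In practice, you rarely create bare pods; instead you declare a desired state with a controller such as a Deployment, and Kubernetes self-heals toward it. Here’s a hedged sketch with hypothetical names, showing both replication and a health check the kubelet uses:

```yaml
# A hypothetical Deployment: declares "keep three healthy copies running."
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                 # desired state: three pods at all times
  selector:
    matchLabels:
      app: my-app
  template:                   # the pod template each replica is created from
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app-container
          image: nginx:1.25
          livenessProbe:      # the kubelet restarts the container if this check fails
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10
```

If a node fails or a container stops answering its health check, Kubernetes notices that the actual state no longer matches the desired state and launches replacement pods elsewhere.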
Because Kubernetes operates at the container level, not the hardware level, you can tell it to do all of these things (and a lot more) to keep your applications running in your desired state.
What’s more, as this InfoWorld article points out, because Kubernetes is open-source, you avoid cloud computing infrastructure lock-in. You aren’t tied to a single vendor, and can easily move to another infrastructure environment without disrupting your secure applications. It’s a great way of maintaining security and sanity at cloud speed.
The benefits of DevOps and Kubernetes are for real. You can deliver consistent performance and manage containerized applications efficiently, securely and automatically with the magic and power of Kubernetes. Stop being afraid; climb aboard and start being powerful in the cloud.
Topics: Cloud Infrastructure