Kubernetes 101: What are Containers and Pods
Hello, friends. It’s Scott Mabe again, back to discuss more about Kubernetes. Chances are that word makes you immediately think about containers. You’re not wrong to do that, but we need to better define what containers are. Oh, and pods, too.
That’s why I’m here. Today, we’re taking the next step in Kubernetes 101. We’ve covered the concept from a high level and gone a little deeper by examining how nodes and clusters work. Now we’re going to take a closer look inside those nodes and clusters. That’s where you’ll find containers and pods, the software that runs on the K8s hardware environment. This is how your applications come to life in Kubernetes.
This is important to understand as containerized platforms are gaining popularity. In fact, a 2019 study from Gartner predicts that 75% of global organizations will be running containerized applications in production, a huge jump from the 30% Gartner reported in April 2019. As Gartner notes, Kubernetes has become the “de-facto standard for container scheduling and orchestration,” so diving into K8s components will ready you for life on this platform.
What are Pods and Containers?
For this latest lesson, let’s first talk about containers. Containers are a self-contained environment for applications. Using our ongoing Kubernetes 101 train analogy, they are like the cargo inside the train cars: they carry your applications. They are portable and scalable.
What does that mean, exactly? Well, containers allow you to package all of your app code separately from the infrastructure it runs on (the nodes and clusters). This gives you the ability to run your app on any computer that uses a containerization platform; it’s not tied to a specific node or cluster.
And while you can package multiple programs into a container, it’s usually best to keep one program to a container. A common rule of thumb in K8s architecture is that it’s better to have many containers instead of just one large one. Smaller containers are easier to deploy. It’s also easier to diagnose issues with a smaller container than a large one with multiple programs.
Kubernetes, however, doesn’t run containers directly. That’s where pods come in. Pods are the smallest and simplest unit of replication in K8s, a “single instance of an application,” if you will. A pod is a higher-level structure that wraps around one or more containers, like a larger shipping crate inside the train car. It represents the processes running on your cluster of virtual machines (nodes). It’s a group of containers deployed on the same host with shared resources, including memory and storage capacity.
The single-container model is the most common scenario in Kubernetes, since it’s easier to determine the resources an app needs when a pod holds just one container. And because everything in a pod shares resources, the smaller and tighter the pod, the better.
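To make the single-container model concrete, here’s a minimal sketch of a pod manifest. The name and image are placeholders; any containerized app would slot in the same way:

```yaml
# A minimal single-container pod (illustrative names and image).
apiVersion: v1
kind: Pod
metadata:
  name: hello-app
spec:
  containers:
    - name: hello-app
      image: nginx:1.25     # the packaged application, in this case a web server
      ports:
        - containerPort: 80 # the port the app listens on
```

Everything Kubernetes needs to know about the container lives inside the pod spec, which is why the pod, not the container, is the unit K8s works with.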
How do Pods and Containers Work?
Pods are deployed on nodes within a cluster through the Kubernetes engine. Nodes collect these pods, and each pod has a single IP address that’s shared by every container wrapped inside of it. Deployments specify the number of pod replicas needed and launch the application on top of the cluster.
As I mentioned above, pods are a replication unit in Kubernetes. This means Kubernetes can create more copies of a pod as needed. For some applications, it’s completely permissible to run a single-container pod. But when multiple processes need to run together, things get a little more complicated.
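When processes do need to run together, a multi-container pod lets them share resources. Here’s a hedged sketch (the names, images, and paths are all illustrative) of a web server paired with a sidecar that reads its logs through a shared volume:

```yaml
# Two containers in one pod, sharing storage (illustrative example).
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
    - name: shared-logs
      emptyDir: {}                  # scratch space both containers can see
  containers:
    - name: web
      image: nginx:1.25
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx # web server writes logs here
    - name: log-sidecar
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /logs/access.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /logs          # sidecar reads the same files
```

Because both containers live in the same pod, they also share the pod’s IP address and can reach each other over localhost.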
For example, if your application is popular but only running in a single container instance, that instance might not be able to support the demand. When this happens, you can configure the Kubernetes engine to deploy more copies of the pod where the container is located to the cluster as needed. In fact, it’s a best practice to always have more than one copy of a pod running to avoid failures and support app reliability. It also allows you to scale up your app if needed.
When you use Kubernetes to drive this, you get a smooth deployment. K8s lets you set all of the details about the pods you want to replicate on a given node, including the number of pods and a preferred strategy for updating the deployment.
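Those details, replica count and update strategy, live in a Deployment. A minimal sketch, again with placeholder names and image:

```yaml
# A Deployment keeping three copies of a pod running (illustrative).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3                  # number of pod copies to keep running
  selector:
    matchLabels:
      app: hello-app
  strategy:
    type: RollingUpdate        # replace pods gradually during an update
    rollingUpdate:
      maxUnavailable: 1        # never take down more than one pod at a time
  template:                    # the pod to replicate
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello-app
          image: nginx:1.25
          ports:
            - containerPort: 80
```

With `replicas: 3`, Kubernetes keeps three copies of the pod running, and the rolling-update strategy swaps them out one at a time so the app stays available during upgrades.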
As part of this effort, Kubernetes tracks pod health. When it finds a failing one, it can remove the weak link through a tool called a replication controller. This controller can also add new pods to keep your application running at the desired state.
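One common way to tell Kubernetes how to judge pod health is a liveness probe on the container. A sketch, assuming a web app that answers health checks on its root path:

```yaml
# A pod whose container is health-checked via a liveness probe (illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
    - name: probed-app
      image: nginx:1.25
      livenessProbe:
        httpGet:
          path: /              # endpoint Kubernetes polls for health
          port: 80
        initialDelaySeconds: 5 # give the app time to start first
        periodSeconds: 10      # check every 10 seconds
```

If the probe keeps failing, Kubernetes restarts the container, and the controller keeps the overall pod count at the desired state.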
Always keep in mind that pods are not static. They change constantly and aren’t long-running instances. Eventually, every pod will fail and need to be replaced. That’s why we have Kubernetes: it makes sure we always have working pods so there’s no application downtime.
So, that’s a look at the software side of Kubernetes: the elements that run on top of the hardware side of things. For more insight into K8s, check out the other blogs in our Kubernetes 101 series.