
Hello, all of you Kubernetes fans. Believe it or not, we’ve reached the end of our journey into the magical, mystical world of Kubernetes. Over this series of blogs, we’ve looked at how this solution works and examined its parts in detail.
Today, we’re wrapping up our Kubernetes 101 series with a discussion of Kubernetes Deployments. It’s time to fire up the Kubernetes engine and put this train in motion, so let’s take a look at how it all works together to drive your apps in the cloud.
What Is a Kubernetes Deployment?
Kubernetes.io describes a Deployment as something that “provides declarative updates” for pods and ReplicaSets (a ReplicaSet keeps a stable set of identical pods running). It sure sounds techy, doesn’t it? That’s why I’m here to explain it in clearer terms.
Back in the first blog, we examined what Kubernetes is and why you should run your app with it. We used a freight train analogy to describe Kubernetes as the powerful locomotive engine that pilots the rest of the train, which is filled with containers, clusters, pods, nodes and more.
All of these make up your cloud-native applications, and Kubernetes helps them run the way you intend. In simple terms, that’s deployment. It’s how K8s drives all of these parts to make your containers and pods (the smallest deployable units of K8s, which hold your apps) available to your users via nodes (virtual or physical machines) grouped into clusters.
In a Deployment, you declare the desired state in which you want all of the parts of the “train” to work together, and Kubernetes keeps steering the actual state toward it to ensure the strongest, healthiest app availability. The running train is your workload, in essence, and the manifest sketched below is the set of instructions you hand to the engineer.
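To make that concrete, here’s a minimal sketch of a Deployment manifest. The name hello-app, the hello-app:1.0 image and the port are hypothetical placeholders rather than anything from this series; the field layout follows the standard apps/v1 Deployment spec.

```yaml
# Minimal Deployment sketch -- name, image and port are hypothetical placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3                  # desired state: keep three identical pods running
  selector:
    matchLabels:
      app: hello-app           # which pods this Deployment manages
  template:                    # the pod "blueprint" the ReplicaSet stamps out
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello-app
          image: hello-app:1.0     # placeholder container image
          ports:
            - containerPort: 8080
```

Apply it with `kubectl apply -f deployment.yaml`, and Kubernetes compares this declared state with what is actually running, creating or replacing pods until the two match. If a pod (or the node underneath it) fails, the Deployment’s ReplicaSet spins up a replacement on its own.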