Kubernetes 101: Breaking Down a Kubernetes Deployment

Posted by Dustin Keib, Head of Cloud Enablement - Jul 8, 2020

Hello, all of you Kubernetes fans. Believe it or not, we’ve reached the end of our journey into the magical, mystical world of Kubernetes. Over this series of blogs, we’ve looked at how this solution works and examined its parts in detail.

Today, we’re wrapping up our Kubernetes 101 series with a discussion about Kubernetes deployment. Let’s say we’re firing up the Kubernetes engine and putting this train in motion. Let’s take a look and see how it all works together to drive your apps in the cloud.

What Is a Kubernetes Deployment?

Kubernetes.io describes a Deployment as something that “provides declarative updates” for pods and ReplicaSets (a ReplicaSet is a stable group of multiple identical pods). It sure sounds techy, doesn’t it? That’s why I’m here to explain it in clearer terms.

Back in the first blog, we examined what Kubernetes is and why you should run your app with it. We used a freight train analogy to describe Kubernetes as the powerful locomotive engine that pilots the rest of the train, which is filled with containers, clusters, pods, nodes and more.

All of these make up your cloud-native applications, and Kubernetes helps them run the way you intend. In simple terms, that’s deployment: it’s how K8s drives all of these parts to make your containers and pods (your apps; a pod is the smallest deployable unit in K8s) available to your users via nodes and clusters (virtual machines or servers).

In a deployment, you will set the desired state in which you want all of the parts of the “train” to work together to ensure the strongest, healthiest app availability. The running train is your workload, in essence.

The deployment runs multiple copies, or replicas, of your application and replaces any that are unhealthy or have failed. This is how deployments ensure that at least one instance of your application, if not more, is available to respond to requests.
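As a concrete illustration, here is a minimal Deployment manifest. The name `my-app` and the `nginx` image are placeholders; the key line is `replicas: 3`, which declares the desired state Kubernetes will work to maintain:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app             # hypothetical name, for illustration only
spec:
  replicas: 3              # desired state: keep three copies running at all times
  selector:
    matchLabels:
      app: my-app
  template:                # the pod template: what each replica looks like
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: web
        image: nginx:1.19  # the application container image
        ports:
        - containerPort: 80
```

If one of the three pods crashes or its node goes away, the Deployment notices the gap between desired and actual state and starts a replacement automatically.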

How Does a Kubernetes Deployment Work?

When you launch a deployment on a running Kubernetes cluster, the Kubernetes deployment controller manages the process. This controller is a control loop in the Kubernetes control plane: it monitors the state of your cluster and makes or requests changes as needed to move the cluster’s workloads toward your desired state.

Each Kubernetes deployment uses what’s called a “pod template.” This template provides a specification that determines what the pods should look like, what application runs inside of the pod’s containers, and more.

A Kubernetes deployment ensures your clusters are running the desired number of pods and that these pods are available at all times. It also automatically replaces pods that fail or are evicted from their nodes.

You can update any deployment by making changes to the pod template specs. These changes will automatically trigger an update to the current state and create the new desired state, as outlined by the edited template specification.
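For instance, assuming a Deployment named `my-app` with a container called `web` (hypothetical names from a sketch manifest), either of these commands would trigger such an update against a live cluster:

```shell
# Option 1: edit the pod template in your manifest, then re-apply it declaratively
kubectl apply -f deployment.yaml

# Option 2: change just the container image imperatively
kubectl set image deployment/my-app web=nginx:1.21
```

Both approaches change the pod template spec, so both kick off the same rolling update process described below.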

An update works like this:

  1. The deployment triggers the update.
  2. New pods that match the desired state are started.
  3. Old pods are terminated as new pods become healthy.
  4. This process repeats until all outdated pods have been replaced. By default, Kubernetes ensures that at least 75% and no more than 125% of the desired number of pods are up at any given time.
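Those default percentages come from the Deployment’s rolling update strategy, which you can tune in the manifest. This sketch simply makes the defaults explicit:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at least 75% of the desired pods stay up during an update
      maxSurge: 25%         # at most 125% of the desired pod count exists at once
```

Setting `maxUnavailable: 0`, for example, forces Kubernetes to bring each new pod up before taking any old pod down.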

In this scenario, old pods aren’t removed until the deployment has a sufficient number of new pods up and running. If you run into problems, deployment updates can be rolled back. You also can temporarily halt a deployment.
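Assuming a Deployment named `my-app` (a placeholder name), rollbacks and pauses are handled with the `kubectl rollout` family of commands:

```shell
# View the revision history of the Deployment
kubectl rollout history deployment/my-app

# Roll back to the previous revision
kubectl rollout undo deployment/my-app

# Temporarily pause a rollout, then resume it later
kubectl rollout pause deployment/my-app
kubectl rollout resume deployment/my-app
```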

Deployments have a three-phase lifecycle. The progressing state means the deployment is at work, performing tasks such as bringing up pods or scaling them back. The complete state speaks for itself: all pods are running the latest specification and old pods have been removed.

The third phase is known as the failed state. If your deployment is in this phase, it means it has run into issues that have prevented it from completing its specified tasks. These issues can include limit ranges, runtime errors or insufficient quotas or permissions, among other things.

There are various commands you can use to check the state of your Kubernetes deployment or to troubleshoot the cause of a failed state. You also can monitor a deployment’s ongoing progress.
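A few of the most useful ones, again using the placeholder name `my-app` against a live cluster:

```shell
# Watch a rollout until it completes or fails
kubectl rollout status deployment/my-app

# Quick summary of desired vs. ready replicas
kubectl get deployment my-app

# Detailed conditions and events -- handy for diagnosing a failed state
kubectl describe deployment my-app
```

The conditions section of `kubectl describe` output is usually the fastest way to see why a deployment stopped progressing.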

Kubernetes fills an essential need in today’s digital world. You have apps. Users want to use them. You want to be sure they’re healthy and accessible. Using Kubernetes to deploy and manage your containerized workloads helps ensure resiliency and uptime for your digital assets.

We want to be sure you understand all that Kubernetes has to offer, so be sure to check out other blogs in our Kubernetes 101 series. Thanks for reading!

Kubernetes 101: What It Is & Why Apps Need It?
Kubernetes 101: What are Containers and Pods?
Kubernetes 101: What are Nodes and Clusters?
Kubernetes 101: Container Orchestration in the Cloud

Meet the Author

Dustin Keib, Head of Cloud Enablement

Dustin is a software engineer, systems architect, and cloud scalability expert at Onix. His deep understanding of the full SaaS and PaaS stack comes from 20+ years of enterprise IT experience. Dustin is a Certified Google Cloud Solutions Architect, AWS Solutions Architect - Associate, and Puppet Professional and has a deep knowledge of infrastructure automation, containers, and CI/CD system design and implementation.
