Using Kubernetes for Container Orchestration in DevOps

[article]
Summary:
Containerization has replaced virtual machines to a great extent because containers are lightweight and make efficient use of the OS kernel. Docker’s efficient nature helps with software development, testing, delivery, and deployment in a DevOps environment, and all the benefits of Docker also apply to Kubernetes. Let’s explore some of the additional agile and DevOps benefits you can gain by using Kubernetes.

Containerization has replaced virtual machines to a great extent because containers are lightweight and make efficient use of the operating system kernel. Docker is the most commonly used containerization platform, and in fact has become basically synonymous with containerization.

Kubernetes is a container orchestration platform used for automating deployment, scaling, and management of application containers, and while it works with a range of container tools, it is often paired with Docker due to its many benefits.

Docker’s efficient nature helps with software development, testing, delivery, and deployment in a DevOps environment, and all the agile benefits of Docker also apply to Kubernetes. Let’s explore some of the additional agile and DevOps benefits you can gain by using Kubernetes.

Decoupling Controller and Service

Kubernetes implements a service-oriented architecture (SOA) and is an object-oriented framework, which is well suited to agile principles.

A Kubernetes cluster runs containers in isolated units called pods. Kubernetes routes client requests to a service, which forwards them to the pods of the associated application; the service and the controller that manages the pods are each linked to those pods by matching labels.

The service is decoupled from the controller, which makes it possible to make incremental changes to either the controller or the service without having to modify the other, which can make your software more testable and upgradable, a basic agile expectation.

Several different types of Kubernetes controllers are available, including replication controllers, replica sets, stateful sets, and daemon sets.
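As a sketch of this decoupling, consider a hypothetical application labeled "web." The service below selects pods purely by label, so the deployment (the controller) and the service can each be changed without touching the other. The names and image are illustrative assumptions, not from the article:

```yaml
# Hypothetical "web" application: the Service finds pods only via the
# app: web label, so Deployment and Service evolve independently.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # example image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # matches the pod labels above
  ports:
    - port: 80
      targetPort: 80
```

Because the only coupling is the label selector, you could, for example, swap the deployment for a stateful set without the service changing at all.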

No Application Left Behind

With multiple users or teams using the same cluster, it could be a concern that some users or teams are using more than their fair share of resources. Kubernetes uses resource quotas to limit the use of CPU and RAM per namespace.

For example, if a cluster has a capacity of 16 GiB RAM and eight CPU cores, two namespaces (NS1 and NS2) could share the cluster.

Figure 1: Resource quotas divided between namespaces
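A minimal sketch of such a quota, assuming the 16 GiB / 8-core cluster above is split evenly: the resource quota below caps a hypothetical namespace ns1 at half the cluster, and a matching quota in ns2 would claim the other half.

```yaml
# Hypothetical quota giving namespace ns1 half of a
# 16 GiB RAM / 8 CPU-core cluster.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: ns1
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "4"
    limits.memory: 8Gi
```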

Response to Change

One of the principles behind the Agile Manifesto is to welcome changing requirements. Kubernetes provides responses to changing requirements at various levels. 

One example is autoscaling. Kubernetes may be configured to use three different types of autoscaling: a horizontal pod autoscaler, a vertical pod autoscaler, and a cluster autoscaler. The horizontal pod autoscaler responds to changing load by scaling the number of pods in use by an application. If an application starts with a single pod replica serving client load, the number of pods grows as the load increases. Similarly, a cluster autoscaler scales the number of nodes in a Kubernetes cluster as the load changes.
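A horizontal pod autoscaler can be sketched as follows, here targeting a hypothetical deployment named web (an assumed name, not from the article) and scaling between one and ten replicas based on average CPU utilization:

```yaml
# Sketch: scale the hypothetical "web" Deployment between 1 and 10
# replicas, aiming for 80% average CPU utilization across pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```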

Being able to respond dynamically to changing load requirements is a benefit in an agile environment.

Continuous Delivery with Rolling Updates

A running Kubernetes application may need to update the pod specification. For example, an application may need to use a different version or tag of a Docker image, or to run a completely different Docker image for an application. You might expect that an application would need to be stopped to change the pod specification and then restarted after making the updates, interrupting service and the team’s agile flow.

But that’s not the case with Kubernetes. Instead, it supports rolling updates. A running application may be updated without having to stop and restart. Rolling updates help provide continuous integration and continuous delivery of software, with zero downtime, incremental updates of pods, and the ability to roll back to a previous version if needed.

Consider an application running three pod replicas whose pod specification needs to be updated. As illustrated in figure 2, a new version of the application is created first. One pod of the current version stops, and a pod with the updated specification starts in the new version. Incrementally, all running pods are stopped one at a time and replaced by pods running the updated specification.
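The rollout behavior described above can be tuned in the deployment spec. The fragment below is a sketch, assuming a zero-downtime policy: start one new-version pod before stopping an old one, and never drop below the desired replica count.

```yaml
# Fragment of a Deployment spec: rolling-update strategy that keeps
# full capacity while replacing pods one at a time.
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # allow one extra new-version pod during rollout
      maxUnavailable: 0  # never run below the desired replica count
```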

Figure 2: Rolling update of an application

High Availability

In a distributed cluster, multiple pod replicas are run for high availability. If one pod fails, client requests are directed to another pod, and in the meantime the controller starts additional replicas to replace the failed ones.

Fault tolerance and pod failover are built into a Kubernetes application. No downtime is incurred, and the application continues to stay available.

Think of a replication controller that has three pod replicas for a running application, and two of the pod replicas fail due to node failure or some other reason. The controller will start replacement pods, as illustrated in figure 3.
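The scenario in figure 3 can be sketched as a replication controller with a desired count of three; the names and image below are illustrative assumptions. Whenever running pods drop below replicas, the controller starts replacements until the desired count is restored:

```yaml
# Sketch: a ReplicationController that maintains three replicas of a
# hypothetical "web" pod; failed pods are replaced automatically.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc
spec:
  replicas: 3
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # example image
```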

Figure 3: Controller starting replacement pods

High availability is another Kubernetes benefit that enables continuous delivery of software. 

Portability with ConfigMaps

A containerized application would typically have configuration information such as configuration files, command-line arguments, environment variables, and port numbers associated with it. ConfigMaps decouple the configuration artifacts from a pod's containers, making it feasible to make incremental changes to the configuration without affecting the containers. ConfigMaps provide portability to containerized applications.
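A minimal sketch of this decoupling, with an assumed ConfigMap name and keys: the configuration lives in its own object, and the pod merely references it, so configuration values can change without rebuilding the container image.

```yaml
# Hypothetical ConfigMap holding configuration separate from the image.
apiVersion: v1
kind: ConfigMap
metadata:
  name: web-config
data:
  LOG_LEVEL: info
  GREETING: hello
---
# Pod consuming the ConfigMap keys as environment variables.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx:1.25   # example image
      envFrom:
        - configMapRef:
            name: web-config
```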

Decoupling Storage from Volume Mounts

As pods are ephemeral, external data storage may need to be configured. Kubernetes pod specification provides for volume mounts using external storage that may be directly mounted into a pod. Decoupling storage from a pod is another example of how Kubernetes helps teams stay agile, as the storage or pod specification may be incrementally modified independent of the other.
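One common way to express this decoupling is through a persistent volume claim, which requests storage independently of any pod; the pod then mounts the claim by name. The claim name and mount path below are illustrative assumptions:

```yaml
# Sketch: storage is requested via a PersistentVolumeClaim, so the
# storage definition and the pod spec can change independently.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx:1.25   # example image
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-claim
```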

Collaboration

Collaboration among users is essential for agile software development. While standard Docker images and resource definitions are available, most users customize applications and resource definitions to suit their purposes.

Such customized applications could be useful to someone else who needs the same application settings, so customized applications may be packaged into Helm charts. Helm charts are reusable packages of Kubernetes software that may be deployed so that the customized applications packaged in the charts can be used by multiple projects. If Helm charts were not available, users would have to develop software that another user may already have running.
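At the root of every Helm chart is a Chart.yaml file describing the package. A minimal sketch for a hypothetical chart named web (chart name and versions are assumptions) might look like this; the packaged chart could then be installed with a command such as helm install:

```yaml
# Minimal Chart.yaml for a hypothetical reusable chart named "web".
apiVersion: v2
name: web
description: A reusable packaging of the customized web application
version: 0.1.0       # version of the chart itself
appVersion: "1.25"   # version of the application being packaged
```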

Agile principles support being able to make incremental changes to code to adjust to user requirements, giving priority to users, and collaborating with users. Kubernetes was designed to meet the agile needs of modern applications, making it a great choice for automating container orchestration in a DevOps environment.


AgileConnection is a TechWell community.

Through conferences, training, consulting, and online resources, TechWell helps you develop and deliver great software every day.