Configuring a Rolling Update Deployment in Kubernetes

October 15, 2021

Using rolling updates for deployments is widely considered a best practice. Here we explore the topic in depth.

Sudip Sengupta
Solutions Architect

Application lifecycle management is a critical aspect of container orchestration that requires special attention, particularly as workloads and clusters grow in complexity. When deploying a Kubernetes application into production, there are several strategies for staging and updating cluster resources. The right strategy allows seamless updates to the application workload without impacting service availability.

This article delves into deployment strategies for managing the lifecycle of Kubernetes applications, and explains why the rolling update is often the preferred strategy, as it allows seamless updates without impacting application availability.

Kubernetes Deployment Strategies

Kubernetes uses ReplicaSets to ensure high availability by running as many instances of a workload's pods as are needed to maintain the desired performance. A ReplicaSet acts as an intelligent agent that keeps multiple instances of the same pod running across the cluster. Kubernetes deployment strategies leverage ReplicaSets to maintain the desired number of operational pods and to spin up replacement pods when existing ones fail.
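
For context, here is a minimal sketch of a standalone ReplicaSet that keeps two nginx pods running (the name web-replicaset is only illustrative). In practice you rarely create one directly, because a Deployment creates and manages its ReplicaSets for you:

---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-replicaset
  labels:
    app: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: front-end
        image: nginx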

While there are a number of other strategies, the Recreate and Rolling Update deployment strategies are the most popular ones used to stage and update Kubernetes cluster resources. 

  • The Recreate deployment strategy is the simplest form of Kubernetes deployment: it terminates all active pod instances and then spins up new pod versions afresh. Though this strategy remains a popular choice for completely renewing the app state, it is often not recommended for application architectures that need to maintain a consistent steady state (a minimal configuration sketch follows this list).
  • A Rolling Update, on the other hand, gradually replaces pod instances with newer versions, ensuring there are enough new pods to maintain the availability threshold before terminating old pods. Such a phased replacement guarantees a minimum number of available pods at all times, enabling a safe rollout of updates without causing any downtime.
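
For comparison, declaring the Recreate strategy requires only a single field under the Deployment spec. A minimal sketch:

spec:
  replicas: 2
  strategy:
    type: Recreate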

Deployments in Kubernetes create a new ReplicaSet whenever the pod template changes, such as during updates and rollbacks. Kubernetes uses the rolling update as its default deployment strategy, incrementally replacing pod instances without causing downtime.
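
You can observe this behaviour by listing the ReplicaSets that a Deployment owns; during a rolling update, the old ReplicaSet is scaled down while a new one is scaled up. The app=web label below matches the example manifest used later in this article:

$ kubectl get replicasets -l app=web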

Let us learn more about how this works by configuring a rolling update deployment.

Configuring a RollingUpdate Deployment

Deployments in Kubernetes are created by defining their specifications in a YAML definition file. To understand how a rolling update deployment is configured, let us consider a Deployment named deployment-definition with the following specification:


---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-definition
  labels:
    app: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      terminationGracePeriodSeconds: 40
      containers:
      - name: front-end
        image: nginx
        imagePullPolicy: Always
        ports:
        - containerPort: 80

The above manifest defines a Deployment that runs two replicas of an nginx front-end container and pulls the image afresh whenever a pod is created. To ensure that Kubernetes updates this configuration without impacting application availability, we can use the RollingUpdate strategy by additionally including the following under spec:


strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 3
    maxUnavailable: 1
minReadySeconds: 5
progressDeadlineSeconds: 100

To update the deployment configuration, make the changes in the deployment file and apply them with the following command:

$ kubectl apply -f deployment-definition.yml

With the above parameters defined in the deployment-definition specification, the updated YAML would look similar to this:


---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-definition
  labels:
    app: web
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3
      maxUnavailable: 1
  minReadySeconds: 5
  progressDeadlineSeconds: 100
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      terminationGracePeriodSeconds: 40
      containers:
      - name: front-end
        image: nginx
        imagePullPolicy: Always
        ports:
        - containerPort: 80
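
After applying this manifest, you can verify that the strategy settings have taken effect by describing the Deployment; the output includes the StrategyType and RollingUpdateStrategy fields:

$ kubectl describe deployment deployment-definition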

To understand this configuration fully, let us look at what each of these parameters does, followed by a quick worked example:

  • maxUnavailable - the maximum number of pods that can be unavailable during an update. Optional; it can be specified as a percentage or an absolute number and defaults to 25%.
  • maxSurge - the maximum number of pods that can be created beyond the desired replica count during the update. Optional; it can be specified as a percentage or an absolute number and defaults to 25%.
  • minReadySeconds - the minimum time (in seconds) a newly created pod must be ready, without any of its containers crashing, before it is counted as available. Optional; defaults to 0 seconds.
  • progressDeadlineSeconds - the time (in seconds) the Deployment controller waits for the rollout to make progress before marking it as failed. Optional; defaults to 600 seconds.
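
For instance, with replicas: 2, maxSurge: 3, and maxUnavailable: 1 as configured above, the rollout may run up to 5 pods at a time (2 desired + 3 surge), while at least 1 of the 2 desired pods (2 - 1) must remain available throughout the update.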

As per the specified values, the rollout proceeds incrementally until the Deployment reaches its desired state, and old pods are terminated only once enough new pods are available, thereby maintaining zero downtime.
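
To observe this behaviour while an update is in progress, you can watch the pods being replaced (the app=web label matches the manifest above):

$ kubectl get pods -l app=web -w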

Rolling Updates - Post-Configuration Operations

Once the deployment specification is set up, you can use the kubectl command-line tool for the follow-up steps:

  • Once the deployment YAML is configured, launch the deployment by running the command:
    $ kubectl create -f deployment-definition.yml
  • To display the list of running deployments:
    $ kubectl get deployments
  • To replace the existing image with the darwin/rss-php-nginx:v1 version, you can use the command:
    $ kubectl set image deployment/deployment-definition front-end=darwin/rss-php-nginx:v1 --record
  • To update a deployment, make changes to the deployment file and apply them with the kubectl apply command:
    $ kubectl apply -f deployment-definition.yml
  • Resource requests and limits can also be changed using the kubectl set command:

    $ kubectl set resources deployment deployment-definition
    --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi
  • To check the status of an ongoing rollout:
    $ kubectl rollout status deployment/deployment-definition
  • To display the history of rollouts:
    $ kubectl rollout history deployment/deployment-definition
  • If a deployment update does not work as expected, roll back the changes by using the command below (a revision-specific variant follows this list):
    $ kubectl rollout undo deployment/deployment-definition
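
By default, kubectl rollout undo reverts the Deployment to the immediately preceding revision. To return to a specific revision listed in the rollout history, you can pass its number explicitly (revision 1 here is only illustrative):

$ kubectl rollout undo deployment/deployment-definition --to-revision=1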

Benefits of Rolling Updates

Rolling updates allow changes to be integrated gradually, offering flexibility and control within an application lifecycle. Some benefits of using rolling updates for Kubernetes clusters include:

  • Ensures zero downtime, since pod instances of the application keep running even during an upgrade
  • Allows developers to examine the effect of changes in a production environment without disrupting the user experience
  • Requires little or no additional cluster capacity, since only a limited number of extra pods (bounded by maxSurge) run at any time, making it a cost-effective deployment strategy
  • Avoids strenuous manual migration of configuration files, since complex upgrades can be carried out efficiently by making simple changes to the deployment file.

Closing Thoughts

One advantage of a cloud-native environment is its support for a microservices approach, which allows multiple developers to make changes simultaneously. While this is a good thing, frequent releases can easily affect application reliability and uptime. DevOps teams must therefore come up with mechanisms for managing deployments in a way that minimizes risk to the application lifecycle. Moreover, complex workloads require the right methodologies for sustained operational efficiency and application resilience.

To help with this, Kubernetes provides seamless deployments through rolling updates, enabling controlled upgrades without impacting uptime. Because of its benefits of efficient automation and parallelism, using rolling updates for deployments is often considered one of the essential best practices of a DevOps model.

Do you use a rolling update deployment approach for your workloads? Either way, let us know which approach you follow for rolling out updates efficiently.

Article by

Sudip Sengupta

Solutions Architect

Sudip Sengupta is a TOGAF Certified Solutions Architect with more than 15 years of experience working for global majors such as CSC, Hewlett Packard Enterprise, and DXC Technology. Sudip now works as a full-time tech writer, focusing on Cloud, DevOps, SaaS, and Cybersecurity. When not writing or reading, he’s likely on the squash court or playing Chess.
