Readiness probes are one of the management features Kubernetes provides to keep your applications available. Like any health-checking mechanism, probes must be designed to give the cluster accurate startup feedback so that traffic only reaches containers that can serve it. In this article we will look at the readiness probe and how it works alongside the other probe types to manage application startup within a Kubernetes cluster.
Needless to say, these are not the type of probes that will snap a selfie from Mars! These probes are workers: they give a small containerized application a degree of self-awareness and, with it, the ability to self-heal. There are three main types of probes: liveness, startup, and of course readiness.
One is used to manage traffic so that only ready containers receive requests. Another takes a more active approach and “pokes things with a stick” to make sure they are still responsive. These, in combination with a third type, combine into an orchestration that helps keep services available.
The liveness probe is the one with the “stick” we spoke about earlier. This probe is there to gently nudge the application to make sure it responds accordingly. The most common example is a deadlocked process. While a deadlock can be a sign of a significant bug, the liveness probe allows Kubernetes to replace the stuck container with a fresh copy. This, along with other replicas of the application, keeps it available for requests.
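As a sketch, a liveness check for an HTTP application can be declared on the container spec like this (the `/healthz` path and the timings are illustrative assumptions, not part of our exercise):

```yaml
livenessProbe:
  httpGet:
    path: /healthz   # hypothetical health endpoint
    port: 80
  initialDelaySeconds: 10  # wait before the first check
  periodSeconds: 10        # check every 10 seconds
  failureThreshold: 3      # restart the container after 3 consecutive failures
```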
Startup probes in Kubernetes are critical for situations where an application needs an extended “warmup” that would overlap with the point where the other probes become active. That overlap can disrupt the underlying orchestration of the replicas: the liveness probe may keep killing the container before the application ever has a chance to fully start.
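A sketch of how a startup probe buys that warmup time (the numbers are illustrative): `failureThreshold × periodSeconds` sets the maximum startup window, and the liveness and readiness checks are held off until the startup probe first succeeds.

```yaml
startupProbe:
  httpGet:
    path: /
    port: 80
  failureThreshold: 30  # up to 30 attempts...
  periodSeconds: 10     # ...10s apart = 300s allowed for startup
```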
Today we will focus a bit more on the third type, the Kubernetes readiness probe. Similar to the other probes, it checks whether the container is ready to accept traffic. Unlike the liveness probe, which checks that things remain ship-shape, the readiness probe makes sure things are in a state where traffic can start flowing.
Readiness is evaluated per container, but its effect is at the pod level: all of the pod’s containers must be in a ready state before the pod receives any traffic from the service. The startup probe, while similar in mechanics, runs first; while it is active, the liveness and readiness probes are held off.
Let us look at how the readiness probe is used with a hands-on exercise. The tools we will use are minikube, for a local cluster and its dashboard, and NGINX as the example application.
Our application example is simple. We will use NGINX to serve a static web page and check that it answers with a successful response code. Each probe type plays its part: confirming the container has started, that it remains healthy, and that it is ready for traffic, while allowing enough time for each check to complete along the way.
Because these checks happen at the node level, the kubelet runs the probes described in our YAML file, and a pod is only added to the service’s endpoints once its readiness checks pass.
Here is the web.yaml file we will load for the service that includes a section to check for readiness:
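A representative version of that file is below. The replica count, image tag, and probe timings are illustrative assumptions, but the key details match the exercise: nginx serving on port 80 and a readinessProbe deliberately pointed at port 8080.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2            # illustrative; any count shows the same behavior
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /
            port: 8080   # intentionally wrong; nginx is listening on 80
          initialDelaySeconds: 5
          periodSeconds: 5
```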
Note the container specification exposes nginx on port 80. However, the readinessProbe points at the wrong port (8080). This is intentional, so we can observe the failed containers within the dashboard.
If you haven’t already, save the web.yaml file to your local machine. You should have already started your local cluster with the command ‘minikube start’. Rather than use the command line to create the deployment, we are going to use the dashboard (‘minikube dashboard’).
Click the + at the upper right of the dashboard to add a deployment using the code from earlier.
Paste the contents of the web.yaml file, directly.
After uploading the YAML, we immediately see the deployment is in a failed state:
Inspecting the failed pods is easily achieved by clicking the name within the dashboard:
The error message is clear:
“Readiness probe failed: Get "http://172.18.0.6:8080/": dial tcp 172.18.0.6:8080: connect: connection refused”
As noted earlier, we purposefully created a situation where the readiness probe would continuously fail. Our service is actually running on port 80 within the pods. Let’s make a small correction to the code to resolve this and re-deploy. We will change the readinessProbe section to use port 80.
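The corrected readinessProbe section would look like this (the timings remain illustrative):

```yaml
readinessProbe:
  httpGet:
    path: /
    port: 80   # corrected from 8080 to match the container port
  initialDelaySeconds: 5
  periodSeconds: 5
```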
We can delete the deployment from the main dashboard screen. Under deployments, find the one named “web” and use the navigation element to choose the Delete option.
After deleting the application, repeat the steps for creating a new deployment, this time using the corrected code. The upload will show a healthy deployment whose containers were created and whose readiness probes complete.
Using Kubernetes readiness probes for proper management is just one way to build stable services that can heal themselves. In conjunction with the other probe types, they allow the time needed for all resources to be “warmed up” and ready to take traffic without triggering false alerts. They also make sure the application remains active and processing as expected; otherwise, failed containers are replaced and the cycle repeats.
Anthony is a dynamic technologist who takes pride in the breadth of experience he has gained from years in the software development industry. As a pioneer of the DevOps movement, he strives to educate others on the time-saving and stability benefits of proper automation.