Simple Steps To Delete A Pod From A Kubernetes Node

October 7, 2021

Nate Matherson
Co-founder & CEO

Good news! Deleting a pod from a Kubernetes node using kubectl is a straightforward process. Whether you need to troubleshoot issues with a node, complete an upgrade, or reduce the size of your cluster, you will find that deleting pods is not a difficult task.

However, before deleting a pod, you should complete a series of steps to smooth the process for the application. If you rush this process, it may lead to mistakes and application downtime. Let’s dive in!

How to Remove All Pods from a Node At Once 

If your node is running without any stateful pods, or only pods that are non-essential, you can simply remove all of the pods from the node using the kubectl drain command. But first, it is suggested that you double-check the name of the node you are removing pods from, and confirm that the pods on that node can be safely terminated. According to the Kubernetes documentation, the following commands will do the trick:

kubectl get nodes

kubectl get pods -o wide | grep <nodename>

Next, run the following command to drain all of the pods from the node:

kubectl drain <nodename>

You should run the get pods command again to confirm that no pods are still running on the node. Pods that tolerate the node's NoExecute taint, as well as pods managed by a DaemonSet, may have remained on the node.

To force the remaining pods off the node, you can run the drain command again, this time with the --force flag included; --force evicts pods that are not managed by a controller, while DaemonSet-managed pods cannot be evicted by drain at all, so you add the --ignore-daemonsets flag to proceed despite them. Finally, you can use the kubectl delete node <nodename> command to remove the node from the cluster.
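Putting the steps above together, a typical drain sequence might look like the following sketch (the node name my-node is a placeholder for your own node):

```shell
# Confirm the node name and the pods running on it
kubectl get nodes
kubectl get pods -o wide | grep my-node

# Drain the node; --ignore-daemonsets proceeds despite DaemonSet-managed
# pods, and --force evicts pods that are not managed by a controller
kubectl drain my-node --ignore-daemonsets --force

# Verify that nothing is left running on the node
kubectl get pods -o wide | grep my-node

# Remove the node from the cluster once it is empty
kubectl delete node my-node
```

These commands require an active cluster and a configured kubeconfig, so run them against a test cluster first if you are unsure.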

How to Individually Remove Pods From Nodes

For finer control, you can elect to remove pods individually.

But first, as with the previous method, it is suggested that you double-check the name of the node and the pods running on it:
kubectl get nodes

kubectl get pods -o wide | grep <nodename>

Next, use the kubectl cordon <nodename> command to mark the node as unschedulable. Cordoning stops new pods from being scheduled onto the node during the deletion process, or during normal maintenance:

kubectl cordon <nodename>

At this point, you can manually delete pods one at a time using the kubectl delete pod <podname> command.

However, if the pods in question are controlled by a Deployment or ReplicaSet, and you are concerned about running with one less replica, you may want to increase the replica count by the number of pods set to be deleted. Once the new pod is running, you can delete the old pod in question, and then scale the number of replicas back down.

As just one example, if your deployment needs 8 replicas, and one of the replicas is set to be deleted, you could temporarily scale to 9 replicas, and then back down to 8 replicas after the pod has been deleted.

kubectl scale deployment web --replicas=9

kubectl delete pod <podname>

kubectl scale deployment web --replicas=8

For pods controlled by a StatefulSet, you may want to scale up the StatefulSet itself before deletion, so that the remaining replicas can handle the increased demand while the deleted pod is being recreated.
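As a sketch, assuming a StatefulSet named web running 3 replicas (the name and counts are placeholders), the same scale-up, delete, scale-down pattern applies:

```shell
# Temporarily scale the StatefulSet up so the remaining replicas
# can absorb the load while the pod is replaced
kubectl scale statefulset web --replicas=4

# Delete the pod; the StatefulSet controller recreates it with
# the same name and stable identity
kubectl delete pod web-1

# Scale back down once the replacement pod is Ready
kubectl scale statefulset web --replicas=3
```

Note that StatefulSet pods keep their ordinal names (web-0, web-1, and so on), so the recreated pod will reappear under the same name.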

How To Allow Pods Back Onto Nodes

Good news! We have now finished the maintenance on our nodes. Next, you will want to use the kubectl uncordon command to make the node schedulable again.

kubectl uncordon <nodename>

From here, new pods can once again be scheduled onto the node. Note that uncordoning does not move existing pods back; pods will appear on the node as the scheduler places new or rescheduled workloads there.
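To confirm that the node is schedulable again and that pods are landing on it, you can re-run the earlier checks:

```shell
# The node should no longer show SchedulingDisabled in its status
kubectl get nodes

# Watch for pods being scheduled onto the node
kubectl get pods -o wide | grep <nodename>
```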
