
Using Kubectl to Restart a Kubernetes Pod

When something goes wrong with one of your pods, you need to quickly restart the running pod. This tutorial shows you how to use kubectl to do just that.

July 12, 2022
Shingai Zivuku
DevOps Security Engineer

In Kubernetes, a pod is the smallest API object, or in more technical terms, it’s the atomic scheduling unit of Kubernetes. In a cluster, a pod represents a running application process. It holds one or more containers along with the resources shared by each container, such as storage and network.

The status of a pod tells you what stage of the lifecycle it’s at currently. There are five stages in the lifecycle of a pod:

  1. Pending: This state indicates that at least one container within the pod has not yet been created.
  2. Running: All containers have been created, and the pod has been bound to a Node. At this point, the containers are running, or are being started or restarted.
  3. Succeeded: All containers in the pod have been successfully terminated and will not be restarted.
  4. Failed: All containers have been terminated, and at least one container has failed, exiting with a non-zero status.
  5. Unknown: The status of the pod cannot be obtained.
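A pod's phase is reported in its status, and a quick way to check it is with `jsonpath` output. The helper function below is only a sketch (the function name is mine; it assumes kubectl is installed and configured for your cluster):

```shell
# Sketch: print a pod's lifecycle phase (Pending, Running, Succeeded,
# Failed, or Unknown). Takes a pod name and a namespace.
pod_phase() {
  kubectl get pod "$1" -n "$2" -o jsonpath='{.status.phase}'
}
```

For example, `pod_phase shop-5796d5bc7c-2jdr5 service` would print `Running` for a healthy pod.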

Sometimes when something goes wrong with one of your pods—for example, your pod has a bug that terminates unexpectedly—you will need to restart your Kubernetes pod. This tutorial will show you how to use kubectl to restart a pod.

Why You Might Want to Restart a Pod

First, let’s talk about some reasons you might restart your pods:

  • Resource limits aren’t specified, or the software behaves in an unforeseen way. If a container with a 600 MiB memory limit attempts to allocate additional memory, the pod will be terminated with an out-of-memory (OOM) error. In this situation, you must modify the resource specification and then restart the pod.
  • A pod is stuck in a terminating state. This is identified by pods whose containers have all terminated yet the pod object still exists. This usually happens when a cluster node is taken out of service unexpectedly, and the cluster scheduler and controller-manager cannot clean up all the pods on that node.
  • An error can’t be fixed.
  • Timeouts.
  • Mistaken deployments.
  • Requesting persistent volumes that are not available.
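As an illustration of the first bullet, a memory limit like the one described lives in the container spec. The names and values below are hypothetical:

```yaml
containers:
- name: shop
  image: shop:1.0
  resources:
    requests:
      memory: "300Mi"
    limits:
      memory: "600Mi"  # allocations beyond this limit get the container OOM-killed
```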

Restarting Kubernetes Pods Using kubectl

You can use `docker restart {container_id}` to restart a container in the Docker process, but there is no restart command in Kubernetes. In other words, there is no `kubectl restart {podname}`.

Your pod may occasionally develop a problem and suddenly shut down, forcing you to restart it. But there is no straightforward way to restart it, especially if there is no YAML file. Never fear: let’s go over a list of options for using kubectl to restart a Kubernetes pod.

Method 1: kubectl scale

Where there is no YAML file, a quick solution is to scale the number of replicas using the `kubectl scale` command and set the replicas flag to zero:

kubectl scale deployment shop --replicas=0 -n service

kubectl get pods -n service

NAME                    READY   STATUS        RESTARTS   AGE
api-7996469c47-d7zl2    1/1     Running       0          11d
api-7996469c47-tdr2n    1/1     Running       0          11d
shop-5796d5bc7c-2jdr5   0/1     Terminating   0          2d
shop-5796d5bc7c-xsl6p   0/1     Terminating   0          2d

Note that the Deployment object does not manage pods directly; it manages a ReplicaSet object, which is composed of the replica count and the pod template.

Example: Pod Template Used by ReplicaSet to Create New Pods

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: <name>
  labels:
    app: <app_name>
    tier: <tier_name>
spec:
  # change replicas according to your case
  replicas: 2
  selector:
    matchLabels:
      tier: <tier_name>
  template:
    metadata:
      labels:
        tier: <tier_name>
    spec:
      containers:
      - name: <container_name>
        image: <container_image_name>

Once the scale command has set the number of replicas that should be running to zero, the shop pods are terminated and removed:

kubectl get pods -n service

NAME                   READY   STATUS    RESTARTS   AGE
api-7996469c47-d7zl2   1/1     Running   0          11d
api-7996469c47-tdr2n   1/1     Running   0          11d

To restart the pod, set the number of replicas to at least one:

kubectl scale deployment shop --replicas=2 -n service

Check the pods now:

kubectl get pods -n service

NAME                    READY   STATUS    RESTARTS   AGE
api-7996469c47-d7zl2    1/1     Running   0          11d
api-7996469c47-tdr2n    1/1     Running   0          11d
shop-5796d5bc7c-2jdr5   1/1     Running   0          3s
shop-5796d5bc7c-xsl6p   1/1     Running   0          3s

Your Kubernetes pods have successfully restarted.
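The scale-down/scale-up sequence can be wrapped in a small helper so you don’t forget to scale back up. This is only a sketch; the function name is mine, and it assumes kubectl is configured for your cluster:

```shell
# Sketch: restart a deployment by scaling its replicas to zero and back.
# Note: this causes downtime while zero replicas are running.
restart_by_scale() {
  deploy=$1; ns=$2; replicas=$3
  kubectl scale deployment "$deploy" --replicas=0 -n "$ns"
  kubectl scale deployment "$deploy" --replicas="$replicas" -n "$ns"
}
```

For example, `restart_by_scale shop service 2` reproduces the steps above.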

Method 2: kubectl rollout restart

Method 1 is a quicker solution, but the simplest way to restart Kubernetes pods is using the `rollout restart` command.

The controller kills one pod at a time and relies on the ReplicaSet to scale up new pods until all of them are newer than the moment the controller resumed. A rollout restart is the ideal approach to restarting your pods because your application will not be affected or go down.

For rolling out a restart, use the following command:

kubectl rollout restart deployment <deployment_name> -n <namespace>
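Under the hood, `rollout restart` patches the deployment’s pod template with a restart timestamp annotation; because the template changed, the controller performs a normal rolling update. The resulting change looks roughly like this (the timestamp value is illustrative):

```yaml
spec:
  template:
    metadata:
      annotations:
        # added by `kubectl rollout restart`; the changed template
        # triggers a rolling replacement of all pods
        kubectl.kubernetes.io/restartedAt: "2022-07-12T10:00:00Z"
```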

Method 3: kubectl delete pod

Because Kubernetes is a declarative API, deleting a pod with the command `kubectl delete pod <pod_name> -n <namespace>` makes the actual state contradict the desired one.

Kubernetes will automatically recreate the pod to keep it consistent with the desired state, but if the ReplicaSet manages a lot of pod objects, deleting them manually one by one is very troublesome. You can use the following command to delete the ReplicaSet:

kubectl delete replicaset <name> -n <namespace>
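If the pods share a label, you can also delete them in one shot with a label selector rather than deleting the ReplicaSet itself. A minimal sketch (the helper name and the example label are hypothetical; the owning ReplicaSet immediately recreates the deleted pods):

```shell
# Sketch: delete every pod matching a label selector in a namespace,
# instead of deleting pods one by one.
delete_pods_by_label() {
  kubectl delete pod -l "$1" -n "$2"
}
```

For example, `delete_pods_by_label app=shop service` deletes all shop pods at once.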

Method 4: kubectl get pod

Use the following command:

kubectl get pod <pod_name> -n <namespace> -o yaml | kubectl replace --force -f -

Here, since there is no YAML file and the pod object is already running, it cannot simply be deleted or scaled to zero, but it can be restarted with the above command. This command retrieves the YAML of the currently running pod and pipes the output to `kubectl replace --force -f -`, which reads the definition from standard input and recreates the pod, achieving a restart.
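Wrapped as a reusable function, the pattern looks like this (a sketch; the function name is mine, and it assumes kubectl is configured for your cluster):

```shell
# Sketch: force-replace a running pod by piping its live definition
# back into `kubectl replace --force -f -`.
force_replace_pod() {
  kubectl get pod "$1" -n "$2" -o yaml | kubectl replace --force -f -
}
```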

Using ContainIQ To Monitor Pod Restarts

Using ContainIQ, you can monitor, track, and alert on pod restarts.

ContainIQ, a Kubernetes native monitoring platform, can be used to monitor and track pod restarts. Using the Events dashboard, ContainIQ users can search and filter for restarts by pod name. Users can view restarts over time and filter based on date and time.

ContainIQ users can also view the events, metrics, and associated logs surrounding a pod restart:

For example, if a pod is terminated and subsequently restarted because of an OOM issue, you could use ContainIQ to view memory limits for that pod and/or node to determine if the limit was set appropriately. You can also correlate this event with the container logs leading up to the time the restart occurred. It is also possible to set alerts on pods restarting using the New Monitor button.

Users can set alerts, or Slack notifications, on events leading up to the restart and the restarts themselves. Alerts can be set on specific pods, or across all pods, and can be updated or deleted at any time.

You can sign up for ContainIQ here, or book a demo to learn more.


In summary, you were briefly introduced to Kubernetes pods as well as some reasons why you might need to restart them. In general, the most recommended way to ensure no application downtime is to use `kubectl rollout restart deployment <deployment_name> -n <namespace>`.

While Kubernetes is in charge of pod orchestration, it’s no effortless task to continuously ensure that pods always run on healthy, cost-effective nodes that are fully utilized.

ContainIQ is a platform that monitors Kubernetes metrics and events within your cluster, instantly. Its Kubernetes events dashboard can monitor and alert on actions like pods restarting, therefore freeing engineering teams from monotonous management of Kubernetes clusters.

Shingai Zivuku
DevOps Security Engineer

Shingai Zivuku is a DevOps Security Engineer with expertise in secure systems design and project management. He enjoys generating new ideas and devising feasible solutions to broadly relevant problems. In his free time, he has developed and launched multiple apps in the entertainment and health spaces.