Kubernetes ReplicaSet | Tutorial & Best Practices

June 24, 2022

This article shows users how to implement and effectively use ReplicaSets to ensure Kubernetes deployments are stable, scalable, and properly utilized.

Kasper Siig
DevOps Engineer

One of the biggest pain points when running an application in production isn’t keeping it running; it’s ensuring that when something goes wrong, the problem is fixed automatically. Kubernetes as a platform has a variety of built-in tools to help developers accomplish this, chief among them its ability to detect failures.

Kubernetes quickly detects when an application running in a container has failed, and it can report that failure to you or to other parts of the system. This is where ReplicaSets come into play. When you create a ReplicaSet, you are essentially telling Kubernetes that you want a specific pod replicated a certain number of times. For example, if you want four pods running at all times and one of them suddenly crashes, a ReplicaSet will ensure that the crashed pod is removed and will spin up a new, hopefully healthy, one.

In this article, we will expand on that idea to provide more insight into this process; how a ReplicaSet knows which pods it owns; and how you can generally use them in your deployments.

The Anatomy of a ReplicaSet Manifest

Before diving into the use cases of a ReplicaSet and best practices for working with them, it’s important to first understand how they work.

The first thing to note is that they are very much like any other resource in Kubernetes, meaning you have to state the apiVersion, kind, metadata, and spec. You can see an example below of what a ReplicaSet manifest looks like:


apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3

At a high level, this ReplicaSet ensures that three pods, all running a frontend demo application, exist in your environment. If you’ve ever written a manifest file for a simple pod or a deployment, you’ll recognize a lot of the fields. Let’s take a closer look at the first part:


apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
...

The first two fields are simple and static. Specifying the apiVersion and kind tells Kubernetes what kind of resource to create; for a ReplicaSet, they will always be apps/v1 and ReplicaSet. Moving further down to the metadata, you specify the name and labels, which help you maintain a better overview of the resources in your environment.

Moving on in the manifest, you get to the spec field, where the first part is:


...
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
...

Here the replicas field is set to 3, but you can set it to whatever fits your application. Officially, there is no limit to how high this can be set but, of course, you have to keep in mind the underlying resources of your Kubernetes cluster.

The next part in the manifest is the selector field. This is where you specify how the ReplicaSet should recognize the pods it needs to control. In this case, it should match any pods with the label tier: frontend.
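Besides the equality-based matchLabels shown here, selectors also support set-based rules via matchExpressions. The snippet below is a sketch; the second label value (web) is illustrative:

```yaml
spec:
  replicas: 3
  selector:
    matchExpressions:
      # match pods whose "tier" label is either "frontend" or "web"
      - key: tier
        operator: In
        values:
          - frontend
          - web
```

The supported operators are In, NotIn, Exists, and DoesNotExist, which is useful when a single label value isn’t enough to describe the pods you want the ReplicaSet to own.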

Lastly, you need to define the actual template of the pods:


...
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3

This is much like any other pod you would define. First, you specify the metadata field, which in this case is given the label tier: frontend. As you saw earlier, this is what the ReplicaSet looks for when determining which pods it owns, so make sure it matches the selector you wrote earlier in the manifest.

Lastly, you specify the container(s) inside the pod. In this example scenario, we’re using a simple demo frontend developed by Google. If you want to try out this example, you can apply it by running:


$ kubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml

After a while, you should be able to see the ReplicaSet successfully deployed by running:

$ kubectl get rs

NAME       DESIRED   CURRENT   READY   AGE
frontend   3         3         3       4s
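To list the individual pods the ReplicaSet created (each pod name gets a randomly generated suffix), you can filter by the selector label:

```shell
$ kubectl get pods -l tier=frontend
```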

What Exactly is a ReplicaSet?

At this point, you know exactly what a manifest for a ReplicaSet looks like, but you may still be wondering exactly what it is.

On the surface, a ReplicaSet is simply a resource in Kubernetes that maintains a set number of pods, nothing more. In the previous example, you perhaps noticed how much of the manifest file is standard Kubernetes boilerplate. Strip that away, and it reduces to “create this many pods, configured this way.” This makes ReplicaSets a very simple resource to work with, and they are great when you have a simple use case.

However, there are some pitfalls you should be aware of. The most common misunderstanding about ReplicaSets is that they maintain a certain number of identical replicas, but this is not quite true. A ReplicaSet cares about the labels of pods, not the contents of pods. It looks for the labels you’ve specified in the selector field, and nothing more. This can cause issues if you already have pods in your environment that match the selector criteria.

Let’s say you want to deploy the example ReplicaSet above, but you already have a pod in your environment with the label tier: frontend. In this case, the ReplicaSet will acquire ownership of that existing pod and create only two new ones according to its own spec. This leaves you with three pods that aren’t necessarily the same, which can lead to confusion.
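As a sketch, a bare pod like the following would be adopted by the ReplicaSet above, because its label matches the selector even though it runs a completely different image (nginx here, purely for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lone-frontend
  labels:
    tier: frontend   # matches the ReplicaSet's selector
spec:
  containers:
    - name: nginx
      image: nginx:1.25
```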

Conversely, you might try to create bare pods with the same labels as those the ReplicaSet has specified. If you’ve deployed the example above and then deploy individual pods with the label tier: frontend, the new pods will quickly be terminated: the ReplicaSet acquires ownership, determines that there are now too many pods, and deletes the surplus.

Working with ReplicaSets

Now that you have more insight into what a ReplicaSet actually is, it’s time to understand some ways to work with them. This isn’t too complex, as there aren’t many things that can be done with a ReplicaSet.

Scaling ReplicaSets

Scaling a ReplicaSet is fairly easy, as it’s a simple matter of updating the replicas field. You can do it either by modifying your manifest file and re-applying it, or by running the following command to scale your ReplicaSet down to two:


$ kubectl scale --replicas=2 rs/frontend

When you scale a ReplicaSet up, it simply creates more pods from its template. When you scale it down, it follows a set of rules to determine which pods should be deleted first (pending pods, for example, are removed before running ones), which you can read more about in the documentation.

Removing ReplicaSets

The process for deleting a ReplicaSet is fairly straightforward. Deleting the ReplicaSet itself will remove both the ReplicaSet and its pods.


$ kubectl delete rs frontend

It is also possible to remove a ReplicaSet and leave its pods as orphans by adding --cascade=orphan to the delete command.
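For example, using the frontend ReplicaSet from earlier (the orphaned pods keep running and can later be adopted by a new ReplicaSet with a matching selector):

```shell
# delete the ReplicaSet object but keep its pods running
$ kubectl delete rs frontend --cascade=orphan
```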

When to Use a ReplicaSet?

A ReplicaSet isn’t always the right tool for the job. Kubernetes has several other resources with similar functions that work better in certain circumstances.

Kubernetes also has the Deployment resource, which is much like a ReplicaSet in that you set a specific number of replicas and it maintains them; in fact, a Deployment manages ReplicaSets under the hood. The biggest difference between the two is that Deployments also take care of upgrading pods. That means you should choose a ReplicaSet only when you don’t want any automatic upgrading of pods, or when you want to implement your own custom upgrade logic.
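For comparison, a minimal Deployment manifest reusing the same demo image looks almost identical to the ReplicaSet manifest; the Deployment creates and manages a ReplicaSet for you and adds rolling updates on top:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
        - name: php-redis
          image: gcr.io/google_samples/gb-frontend:v3
```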

People also use ReplicaSets to execute batch jobs where they need many pods running the same task. However, best practice is to use a Job instead for batch workloads, as that resource was designed specifically with this purpose in mind; for example, it doesn’t treat a pod that has exited successfully as a failure.
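A minimal Job manifest looks like this (a sketch; the image, name, and command are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-demo
spec:
  completions: 4    # run the task to completion four times in total
  parallelism: 2    # run at most two pods at a time
  template:
    spec:
      restartPolicy: Never   # Jobs require Never or OnFailure
      containers:
        - name: worker
          image: busybox:1.36
          command: ["sh", "-c", "echo processing item && sleep 5"]
```

Unlike a ReplicaSet, the Job counts pods that exit successfully toward its completions and stops once the work is done, rather than replacing them.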

The last main alternative to ReplicaSets is the DaemonSet. A DaemonSet runs one copy of a pod on each node in your cluster and allows those pods to access machine-level functions, which suits workloads like log collectors or node monitoring agents. You could approximate this with a ReplicaSet, but DaemonSets are built for it: they automatically add a pod when a node joins the cluster and remove it when the node leaves.
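A minimal DaemonSet manifest follows the same template pattern as a ReplicaSet but has no replicas field, since the number of pods is determined by the number of nodes (a sketch; fluentd is used here as an illustrative node-level log agent):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      name: log-agent
  template:
    metadata:
      labels:
        name: log-agent
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd:v1.16-1
```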

Final Thoughts

A ReplicaSet is a great way to quickly get a set number of pods running in your environment. It isn’t complicated to set up, and it works simply: it replaces pods that fail.

However, you should always consider whether a ReplicaSet is the right choice. In many scenarios, it won’t be, and more often you should be opting for a deployment instead. Now that you know the capabilities of a ReplicaSet, you can determine for yourself whether it’s needed in your environment.

Whether you decide to go with a ReplicaSet, a deployment, or something else, make sure you’re notified of any errors happening in your environment. ContainIQ’s Kubernetes monitoring platform can be a great ally. The platform can help you monitor events and metrics within your clusters automatically, and its dashboards offer you quick insight into your systems.

Kasper Siig
DevOps Engineer

As a DevOps engineer, Kasper Siig is used to working with a variety of exciting technologies, from automating simple tasks to CI/CD to Docker. In his previous role, Kasper was a DevOps Engineer at CYBOT where he led the migration to Kubernetes in production. He has a Computer Science degree from Erhvervsakademi SydVest (Business Academy South West), located in Denmark.
