
Kubernetes Ephemeral Volumes & Storage | Tutorial


August 23, 2022
Sudip Sengupta
Solutions Architect

In Kubernetes, a volume represents a disk or directory that containers can write data to and read data from, addressing cluster storage needs. Kubernetes supports two volume types, persistent and ephemeral, for different use cases. While persistent volumes retain data irrespective of a pod’s lifecycle, ephemeral volumes last only for the lifetime of a pod and are deleted as soon as the pod terminates.

In this article, we’ll discuss how Kubernetes handles ephemeral storage and learn how these volumes are provisioned in operating clusters.

How Kubernetes Handles Ephemeral Storage

Ephemeral storage is well suited to immutable applications and handles the transient needs of pods running on cluster nodes. Such applications intermittently rely on storage devices but don’t care whether data persists across pod restarts. In addition, pods in Kubernetes leverage ephemeral storage for functions such as caching, scratch space, and logs. Ephemeral storage is also crucial for sharing nonessential data within multi-container pods and for injecting configuration data into a pod.

In a Kubernetes cluster, you manage ephemeral storage by setting resource requests and limits on containers and by applying resource quotas across all non-terminal pods in a namespace. This can be done at both the pod level and the container level for fine-grained storage management. Ephemeral storage is unstructured and shared between all pods running on the node, the container runtime, and other processes managed by the system. While pods use the ephemeral storage framework to specify their transient local needs, Kubernetes relies on those specifications to schedule pods appropriately and to prevent pods from excessively consuming local node storage.
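
As a minimal sketch, a pod spec that requests and caps a container’s local ephemeral storage might look similar to the following; the pod and container names are illustrative:


apiVersion: v1
kind: Pod
metadata:
  name: darwin-limits-demo    # hypothetical pod name
spec:
  containers:
    - name: app
      image: nginx
      resources:
        requests:
          ephemeral-storage: "1Gi"    # scheduler only places the pod on a node with this much free
        limits:
          ephemeral-storage: "2Gi"    # kubelet evicts the pod if usage exceeds this limit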

Ephemeral Storage Options

Kubernetes offers two approaches to laying out the node partitions on which ephemeral storage is deployed. They are:

Root
The root partition holds the logs (/var/log) and kubelet (/var/lib/kubelet) directories and is shared among Kubernetes system daemons, user pods, and the underlying operating system. In such a setup, pods use emptyDir volumes to consume ephemeral storage in the root partition. Pods also utilize the root partition when creating image layers, container-writeable layers, and container logs for transient applications. A root partition is completely ephemeral, so it doesn’t support any performance SLAs, such as guaranteed disk IOPS.

Runtime
Runtimes often use an additional partition for overlay file systems. Kubernetes uses this runtime partition to provide both shared access and isolation as required. Pods store container images and container-writeable layers in the runtime partition by default. When both runtime and root partitions exist, the runtime partition is the default location for writeable storage.

Types of Kubernetes Ephemeral Volumes

To support different use cases, Kubernetes allows the provisioning of different types of ephemeral volumes. Some commonly used ephemeral volumes include:

Generic volumes
Generic ephemeral volumes are provisioned through storage drivers that support persistent storage on Kubernetes. These volumes can be created by any storage driver that supports dynamic provisioning, and they provide a per-pod directory for scratch data that is initially empty. Generic ephemeral volumes can be either local or network-attached and support the typical volume operations offered by the installed driver, such as cloning, snapshotting, and resizing.

CSI volumes
Since version 1.15, Kubernetes has supported CSI drivers for inline ephemeral volumes. These drivers dynamically create volumes and mount them to a pod, so the volumes remain dependent on the pod’s lifecycle. CSI volumes are defined as part of the pod spec and are deleted during pod termination. Examples of CSI drivers for ephemeral inline volumes include:

  • PMEM CSI - A persistent memory driver that provides a hybrid of persistent data storage that is faster than normal SSDs, and ephemeral scratch space with a larger storage capacity than DRAM.
  • Image populator - A storage driver that automatically unwraps a container image to utilize its content as an ephemeral storage volume.
  • Cert-manager-csi - A volume driver that works with the Kubernetes certificate manager to facilitate seamless requesting and mounting of certificate key pairs to pods.

configMap, downwardAPI, secret
These volumes are collectively used to inject different types of data into a pod. They are provisioned as local ephemeral storage and are managed by the kubelet service on each node; a short example follows the list below.

  • A configMap volume injects configuration data into pods; the keys referenced in the volume are presented as files mounted at a path specified in the pod manifest.
  • The downwardAPI volume stores data exposed by the Downward API as read-only files in plaintext format.
  • A secret volume is used to pass sensitive information, such as passwords, private keys, and authentication tokens, into pods.
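
As a simple illustration, the pod below mounts a ConfigMap and a Secret as read-only files; the darwin-config and darwin-secret object names are hypothetical and assumed to exist in the same namespace:


apiVersion: v1
kind: Pod
metadata:
  name: darwin-config-demo    # hypothetical pod name
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
      - mountPath: "/etc/app/config"
        name: config-volume
        readOnly: true
      - mountPath: "/etc/app/secrets"
        name: secret-volume
        readOnly: true
  volumes:
    - name: config-volume
      configMap:
        name: darwin-config    # assumes this ConfigMap exists
    - name: secret-volume
      secret:
        secretName: darwin-secret    # assumes this Secret exists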

emptyDir
This volume is created as soon as a pod initializes and is available for as long as the pod stays non-terminal. All containers running in the pod read and write the same files within the volume, although the volume can be mounted at different paths in each container. Whenever the pod terminates, the data in the emptyDir volume is deleted permanently.
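
As a brief sketch, the pod below shares one emptyDir volume between two containers at different mount paths; all names are illustrative:


apiVersion: v1
kind: Pod
metadata:
  name: darwin-emptydir-demo    # hypothetical pod name
spec:
  containers:
    - name: writer
      image: busybox:1.28
      command: [ "sh", "-c", "date > /cache/started && sleep 1000000" ]
      volumeMounts:
      - mountPath: "/cache"    # the writer sees the volume at /cache
        name: scratch
    - name: reader
      image: busybox:1.28
      command: [ "sleep", "1000000" ]
      volumeMounts:
      - mountPath: "/shared"    # the reader sees the same files at /shared
        name: scratch
  volumes:
    - name: scratch
      emptyDir: {}    # contents are deleted permanently when the pod terminates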

Provisioning Ephemeral Volumes in a Kubernetes Cluster

Ephemeral volumes are specified within a pod specification, simplifying their deployment and management. Kubernetes supports the provisioning of multiple ephemeral volume types in clusters, depending on the workload and use case. In the following sections, we discuss approaches to deploying CSI and generic ephemeral volumes, the two most commonly used types.

CSI Ephemeral Volume

CSI ephemeral volumes are supported by some CSI drivers and require the CSIInlineVolume feature gate to be active; in Kubernetes versions 1.16 and later, the feature gate is enabled by default. A CSI ephemeral volume is managed within the node’s local storage and is created locally after the pod is scheduled. The manifest of a pod that uses CSI ephemeral storage would look similar to:


kind: Pod
apiVersion: v1
metadata:
  name: darwin
spec:
  containers:
    - name: csi-volume-mount
      image: nginx
      volumeMounts:
      - mountPath: "/data"
        name: darwin-csi-volume
  volumes:
    - name: darwin-csi-volume
      csi:
        driver: inline.storage.kubernetes.io    # name of an installed CSI driver that supports inline ephemeral volumes
        volumeAttributes:
          foo: bar    # driver-specific key-value attributes

The csi spec points the pod to the CSI driver and includes the volume attributes. The key-value declarations under the volumeAttributes spec determine the specification of the volume to be deployed by the CSI driver.

CSI driver limitations
Kubernetes determines volume attributes directly from the driver by referencing the volumeAttributes pod spec. This approach can expose parameters that are normally restricted to administrators, allowing non-admin users to set them through an inline ephemeral volume. Administrators can prevent a CSI driver from being used as an inline ephemeral volume by removing Ephemeral from the volumeLifecycleModes list in the driver’s CSIDriver object.
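
As a sketch, a CSIDriver object that only permits persistent use would look similar to the following; the driver name is hypothetical:


apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: darwin.csi.example.com    # hypothetical driver name
spec:
  volumeLifecycleModes:
    - Persistent    # "Ephemeral" is omitted, so inline ephemeral use is rejected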

Generic Ephemeral Volume

The sample manifest for a pod with a generic ephemeral volume would look similar to:


kind: Pod
apiVersion: v1
metadata:
  name: darwin-app
spec:
  containers:
    - name: darwin-frontend
      image: busybox:1.28
      volumeMounts:
      - mountPath: "/scratch"
        name: darwin-scratch-volume
      command: [ "sleep", "1000000" ]
  volumes:
    - name: darwin-scratch-volume
      ephemeral:
        volumeClaimTemplate:
          metadata:
            labels:
              type: darwin-frontend-volume
          spec:
            accessModes: [ "ReadWriteOnce" ]
            storageClassName: "darwin-scratch-storage-class"
            resources:
              requests:
                storage: 1Gi

The above manifest defines a scratch volume named darwin-scratch-volume that is mounted at the /scratch path of a pod named darwin-app. The specification also assigns the volume the ReadWriteOnce access mode and a 1Gi storage request.

PersistentVolumeClaim and Volume Lifecycle

In ephemeral volumes, volume claim parameters are defined within the pod’s volume source. When a pod initializes, the Kubernetes ephemeral volume controller creates a PVC object within the pod’s namespace. Besides ensuring volume binding, the PVC holds the current status of the volume and can be referenced as a data source for volume operations such as snapshotting and cloning.

An ephemeral volume controller ensures that the Kubernetes garbage collector deletes the PersistentVolumeClaim when the pod exits. The controller is also responsible for providing the labels, annotations, and other fields associated with the PVC object. The names and labels of automatically created PVCs are deterministic; the name combines the pod name and the volume name, which makes it easy to search for and sync with these claims.
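
For the darwin-app pod above, for instance, the automatically created claim combines the pod name and the volume name and can be inspected directly:


$ kubectl get pvc darwin-app-darwin-scratch-volume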

Monitoring Ephemeral Storage

Kubernetes supports various tools that monitor capacity and usage of ephemeral volumes. Within active nodes, a volume is usually located in the /var/lib/kubelet or /var/lib/docker directory. One common approach is to use tools such as /bin/df to check disk usage and other metrics in ephemeral storage directories.

To access storage capacity values in a human-readable format, administrators can use the df tool with the -h flag. For instance, to check storage statistics on the /var/lib/ directory, a command similar to the following can be used:


$ df -h /var/lib/

This returns ephemeral storage usage data similar to:


Filesystem  Size  Used Avail Use% Mounted on
/dev/darwin    74G   28G   46G  38% /
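
From the cluster side, administrators can also review a node’s ephemeral storage with kubectl; the Capacity and Allocatable sections of the output include an ephemeral-storage entry (the node name is a placeholder):


$ kubectl describe node <node-name>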

How ContainIQ Can Help Debug Issues With Ephemeral Storage

Using built-in Kubernetes tools and services for monitoring storage works fine for small, non-complex setups, but organizations that operate a complex cluster network of dynamic workloads typically need to leverage a fully managed, automated monitoring solution.

ContainIQ is one such platform; it can be installed with a single command and is designed to let users monitor the health of Kubernetes clusters with pre-built dashboards and easy-to-set alerts.

ContainIQ Dashboard

As a self-service SaaS platform, ContainIQ offers fully managed monitoring of Kubernetes clusters from the kernel or OS level. The platform also offers efficient correlation for easier debugging and can seamlessly monitor a hybrid cloud ecosystem.

For example, ContainIQ users can track Kubernetes events related to ephemeral storage issues and changes, and can alert on events such as “Volume failed to attach” and “Can’t mount the volume to a given path”, in addition to all other events emitted by the API.

ContainIQ offers a free trial and a self-service experience, which can be found here.

Final Thoughts

Ephemeral volumes are designed for applications running in pods that don’t need to persist data across restarts. These volumes are useful for transient pod needs such as caching, logging, and scratch space. As with persistent volumes, the lifecycle of a generic ephemeral volume is managed through a PVC object. With ephemeral volumes, pods can stop and restart gracefully without being restricted by the location of a persistent volume.

While the choice between persistent and ephemeral volumes allows administrators to configure cluster storage based on workload requirements, continuous monitoring is one of the most critical factors in maintaining a distributed cluster’s performance.

To know more about how ContainIQ can help monitor Kubernetes cluster storage, start a free trial here.


Sudip Sengupta is a TOGAF Certified Solutions Architect with more than 15 years of experience working for global majors such as CSC, Hewlett Packard Enterprise, and DXC Technology. Sudip now works as a full-time tech writer, focusing on Cloud, DevOps, SaaS, and Cybersecurity. When not writing or reading, he’s likely on the squash court or playing Chess.
