
Using Kubectl Logs | In-Depth Tutorial

The purpose of this tutorial is to teach readers everything they need to know about the Kubectl logs command. Readers will learn how to use it and the benefits of using the command to improve productivity.

September 2, 2022
Hrittik Roy
Software Engineer

The cloud native industry relies heavily on Kubernetes, a well-liked container orchestrator, to manage and automate container deployments. Kubernetes ensures that your applications are resilient and that users can always access the requested services, but what if you deploy a new object and it crashes?

To find out why, you’ll need to get your hands dirty and use the command <terminal inline>kubectl logs<terminal inline> to follow the log trail your container leaves behind.

In this tutorial, you’ll learn about the logs command using a live application. You can follow along locally on a minikube cluster. By the time you’ve finished, you’ll be able to troubleshoot and debug your application whenever things go haywire.

Understanding kubectl logs

Logging helps you track events in your application, which can be very useful for debugging purposes and understanding why your application is behaving the way it is. These logs can be checked when the application crashes or behaves differently than expected.

Pod Logs

<terminal inline>kubectl logs<terminal inline> lets you print the logs from a running container to your standard output or error stream, and comes with many flags that can help with your specific purpose. For example, the logs can be timestamped, followed in real time, or prefixed with the pod and container name, which makes them very detailed and helps you debug your pods.

To print the logs from a container, use the following command:


kubectl logs pod-name
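
The flags mentioned above can be combined as needed. A couple of illustrative invocations, assuming a pod named <terminal inline>pod-name<terminal inline>: the first streams new log lines as they arrive, and the second prefixes each line with its pod and container name and adds timestamps:


kubectl logs pod-name --follow
kubectl logs pod-name --prefix --timestamps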

Let’s take an example of when logging is helpful. Suppose you have a Flask web app that many users visit, and all the user requests and responses are logged.

Everything’s going well until, suddenly, many users are complaining about latency. In this situation, opening up the logs will help you understand the underlying cause of their problems. You can check for the slowest requests, see which users are making them, and then resolve the issue.

Aggregating all the logs can help you keep track of all the user requests and responses, which will help you understand what happened and why. It also makes it easier to visualize, such as with a Grafana dashboard, to identify potential issues.

Owner Object Logs

Kubernetes is a mature system, so the logs aren’t just limited to pods. The logging mechanism also supports logs from deployment and job objects:


kubectl logs job/job-name
kubectl logs deployment/deployment-name

By default, only the logs from a single pod’s first container are printed to your stdout, but adding the <terminal inline>--all-containers=true<terminal inline> flag prints the logs from every container in the selected pod of the given deployment or job.
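
For example, reading a deployment’s logs with every container included might look like this:


kubectl logs deployment/deployment-name --all-containers=true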

In Kubernetes, an owner is an object that creates and manages dependent objects, while a dependent is an object that relies on another object for its existence (though it can also exist on its own). In this case, the Deployment is the owner and the ReplicaSet it creates is the dependent.

The owner object’s logs help you see the logs of all its dependents, which is sometimes very helpful. When a ReplicaSet is running several pods, it can be hard to identify which pod a user request went to; viewing logs for the entire parent object gives you all the details you need.
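
Another way to see logs from every pod behind a ReplicaSet is to select the pods by label and prefix each line with its source. A sketch, assuming the pods carry the <terminal inline>name=sample-flask<terminal inline> label used in the manifest later in this tutorial:


kubectl logs -l name=sample-flask --prefix --timestamps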

Practical Example of Log Use | Kubectl Logs

In this section, you will build a sample Flask application, then configure it to run on a local minikube cluster so you can begin examining logs with the <terminal inline>kubectl logs<terminal inline> command.

Prerequisites

You’ll need to install the Flask package before you can develop a Flask application, so make sure you have access to Python 3 and pip using the commands below:


hrittik@hrittik:~$ python3 --version
Python 3.8.10
hrittik@hrittik:~$ pip --version
pip 20.0.2 from /usr/lib/python3/dist-packages/pip (python 3.8)

For Windows users, use <terminal inline>python<terminal inline> instead of <terminal inline>python3<terminal inline>; you can find additional information about installing Python and pip in the official documentation for each tool.

Make sure you have installed Docker following the Docker documentation. You will be running this application on a Kubernetes cluster, and you need to package your application inside a container before it can run in the cluster.

For this tutorial, you’ll use minikube to build a local Kubernetes cluster. Follow the minikube documentation to install minikube locally on your own machine.

Finally, you’ll need to have a working knowledge of kubectl, Flask, and Docker.

Flask Application

For this demo, you’ll have a Flask application with two endpoints, which will offer better log visibility. If you’d like to see the source code for the entire application, you can do so in this GitHub Repository.

Import the Necessary Packages

Flask is a lightweight Python web framework enabling you to create web applications quickly and easily. Install Flask using the pip command:


pip install Flask

Import the <terminal inline>Flask<terminal inline> class and the <terminal inline>json<terminal inline> module from the package to use them in your app:


from flask import Flask
from flask import json

Create a Flask Application

The following code creates an instance of the Flask class, passing it the name of the module (<terminal inline>__name__<terminal inline>):


app = Flask(__name__)

Create the Status Endpoint

Flask uses the <terminal inline>@app.route()<terminal inline> decorator to bind a function to a URL.

The healthcheck route is an endpoint that can be accessed by making a GET request to <terminal inline>/status<terminal inline>. When this route is accessed, a response is returned that contains a JSON object with a key of “result” and a value of “OK - healthy”. The status code for this response is 200, which indicates that the request was successful.


@app.route('/status', methods=['GET'])
def healthcheck():
    response = app.response_class(
        response=json.dumps({"result": "OK - healthy"}),
        status=200,
        mimetype='application/json'
    )

    app.logger.info('Status request successful')
    return response

Create the Primary Endpoint

Now you’ll create a primary endpoint containing the default landing page:


@app.route("/", methods=['GET'])
def hello():
    app.logger.info('Main request successful')
    return "Hello World!"

The above code defines a route, “/”, which accepts GET requests. When a user sends a GET request to the “/” route, the “hello” function is executed and the string “Hello World!” is returned.

Start the Flask App

This last block of code starts the development server. It also enables the debug option, which allows you to see full error messages in the browser. The <terminal inline>host<terminal inline> and <terminal inline>port<terminal inline> options tell the server to listen to all requests coming in on port 9000 on any network interface:


if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0', port=9000)

Test the Application

Run the app using the following command:


python main.py

Test the app in your browser at <terminal inline>http://localhost:9000/<terminal inline>. You should see the message “Hello World!”. Visit the <terminal inline>/status<terminal inline> endpoint to see the status of your application and the logs that have been generated on your console.

Logs, application, and endpoint

Requirements File

A <terminal inline>requirements.txt<terminal inline> file is a list of dependencies for a Python project. The content is just a list of package names and version numbers, and the file is usually generated with <terminal inline>pip freeze > requirements.txt<terminal inline>, which tells pip to look at your current environment and write out all the installed dependencies and their versions.

The file comes in handy when creating a Docker container or virtual environment, because it allows you to install all the dependencies listed in the file with one simple command rather than running commands manually, one by one.
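
Putting the two halves together, generating the file and later installing from it looks like this:


pip freeze > requirements.txt
pip install -r requirements.txt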

Building the Docker Image

To run the Flask application in a Docker container, you need a Dockerfile to build a Docker image for the application. You can use the following as your Dockerfile:


# Use python:3.8-alpine as the base image
FROM python:3.8-alpine

# Copy the source code into the container
COPY app/ /app
WORKDIR /app

# Install the dependencies inside the container
RUN pip install -r requirements.txt

# Document that the app listens on port 9000
EXPOSE 9000

# Run the app when the container starts
CMD ["python", "main.py", "--host=0.0.0.0"]

The <terminal inline>docker build<terminal inline> command uses the specified Dockerfile to create a new Docker image. The example below creates the image from the Dockerfile in the current directory and tags it with the name <terminal inline>hrittik/sample-flask<terminal inline>:


sudo docker build -t hrittik/sample-flask .

Testing the Image

The <terminal inline>docker run<terminal inline> command will create a new container from the <terminal inline>hrittik/sample-flask<terminal inline> image.


sudo docker run -d -p 9000:9000 hrittik/sample-flask

The container will be running in the background, as indicated by the <terminal inline>-d<terminal inline> (detached mode) flag, and port 9000 in the container will be mapped to port 9000 on the host machine (<terminal inline>-p 9000:9000<terminal inline>), so you can access the app the same way that you accessed it in your browser previously. This time, though, the content is served from a container!
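
To confirm that the container is up and serving traffic, you can, for example, list the running containers and hit the status endpoint:


sudo docker ps
curl http://localhost:9000/status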

Running Minikube Cluster

Minikube is a tool from the Kubernetes project that makes it easy to run Kubernetes locally. With the <terminal inline>minikube start<terminal inline> command, you can run a single-node Kubernetes cluster on your laptop.


hrittik@hrittik:~$ minikube start
😄  minikube v1.26.0 on Ubuntu 20.04
✨  Using the virtualbox driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🔄  Restarting existing virtualbox VM for "minikube" ...
🐳  Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Because you’re running a minikube cluster locally, you don’t need to upload the image to an artifact repository, and can simply pass the image to your cluster using the following command:


minikube image load hrittik/sample-flask

Running the Application

The manifest defines a deployment named <terminal inline>sample-flask<terminal inline> that creates pods running the <terminal inline>hrittik/sample-flask<terminal inline> container image. The container exposes port 9000, which you can then expose externally so the application is viewable in your browser:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-flask
  labels:
    name: sample-flask
spec:
  selector:
    matchLabels:
      name: sample-flask
  template:
    metadata:
      labels:
        name: sample-flask
    spec:
      containers:
        - name: sample-flask
          image: hrittik/sample-flask
          imagePullPolicy: Never
          ports:
            - containerPort: 9000

After you have created the manifest, you can run it using the following command:


kubectl apply -f manifest.yaml

After your pods have been deployed, you’re ready to access the app. You must forward traffic from port 9000 on your local machine to port 9000 on your deployment. With the port forwarded, you can access the application running in the deployment by visiting <terminal inline>http://localhost:9000<terminal inline> in your web browser.


kubectl port-forward deployment/sample-flask 9000:9000
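
With the port-forward running in one terminal, you can sanity-check the deployment from another terminal before digging into the logs, for example:


curl http://localhost:9000/
curl http://localhost:9000/status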

Monitor Logs with the kubectl Logs Command

As your cluster and pods run, you can watch the logs of your pods, deployments, or jobs. In this tutorial, you’ll focus on deployments and pods only, though it’s worth noting that jobs work the same way as deployments.

To get started, you can use <terminal inline>kubectl logs pod-name<terminal inline> to print all the logs generated by the pod to your standard output or error stream.

In a deployment, there may be many pods. If you’d like to specify a pod, you can first get its name by using <terminal inline>kubectl get pods<terminal inline>, then locate the pod you want to view the logs from and replace <terminal inline>pod-name<terminal inline> with its name. In the below example, the pod’s name is <terminal inline>sample-flask-7cf5758669-hh5l5<terminal inline>.
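
For reference, listing the pods looks like the following; the exact name, hash suffix, and age shown here are illustrative and will differ in your cluster:


hrittik@hrittik:~$ kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
sample-flask-7cf5758669-hh5l5   1/1     Running   0          5m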


hrittik@hrittik:~$ kubectl logs sample-flask-7cf5758669-hh5l5 
 * Serving Flask app 'main' (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: on
 * Running on all addresses (0.0.0.0)
   WARNING: This is a development server. Do not use it in a production deployment.
 * Running on http://127.0.0.1:9000
 * Running on http://172.17.0.3:9000 (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 180-059-963
[2022-07-24 19:35:21,500] INFO in main: Main request successful
127.0.0.1 - - [24/Jul/2022 19:35:21] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [24/Jul/2022 19:35:23] "GET /favicon.ico HTTP/1.1" 404 -
[2022-07-24 19:35:26,326] INFO in main: Main request successful
127.0.0.1 - - [24/Jul/2022 19:35:26] "GET / HTTP/1.1" 200 -
[2022-07-24 19:35:27,005] INFO in main: Main request successful
127.0.0.1 - - [24/Jul/2022 19:35:27] "GET / HTTP/1.1" 200 -
hrittik@hrittik:~$ 

Get Recent Logs

Most of the time, you won’t need all the logs your objects have ever produced. The <terminal inline>--since<terminal inline> flag limits the output to logs newer than a relative duration, such as the last six hours in the example below, while the related <terminal inline>--since-time<terminal inline> flag accepts an RFC3339-compliant timestamp instead:


kubectl logs pod-name --since=6h
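
If you’d rather use an absolute cut-off, <terminal inline>--since-time<terminal inline> takes an RFC3339 timestamp; the timestamp below is only a placeholder:


kubectl logs pod-name --since-time=2022-07-24T19:00:00Z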

Show a Specific Number of Lines of the Log

To see the latest events, you may want to look only at the last few lines of the logs. To do this, replace <terminal inline>pod-name<terminal inline> with the name of your pod, and <terminal inline><nr-of-lines><terminal inline> with a number, such as <terminal inline>10<terminal inline>, to see the last ten lines of your log stream.


kubectl logs pod-name --tail <nr-of-lines>

Accessing the Logs From Other Resource Types

Sometimes, there might be more than one container in your pods. You can get logs of a specific container inside a specific pod by using the following kubectl logs command:


kubectl logs pod-name -c container-name
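
If you’re not sure what the containers in a pod are called, you can list their names first; a quick sketch using JSONPath, with <terminal inline>pod-name<terminal inline> as a placeholder:


kubectl get pod pod-name -o jsonpath='{.spec.containers[*].name}'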

A list of deployments or jobs can be retrieved with <terminal inline>kubectl get deployments<terminal inline> and <terminal inline>kubectl get jobs<terminal inline>.

You can then retrieve the logs of a deployment with the following:


kubectl logs deployment/deployment-name

For jobs, you can use:


kubectl logs job/job-name   

Advanced Logs Search With grep

You can search for very specific occurrences if you pipe the kubectl and grep commands together.

Piping the output of <terminal inline>kubectl logs<terminal inline> into grep searches through all of the logs for a given pod and returns any lines that match the grep string. This is a handy way to troubleshoot errors in your application, as you can quickly search a pod’s entire log output for any lines that contain a specified error string.

For example, if you want to search logs for a specific endpoint or user, you can include it in the search expression:


kubectl logs my-pod | grep search-expression

An example of this command in action would be using it to find all requests that returned a <terminal inline>200<terminal inline> (successful) status code, as below:


hrittik@hrittik:~$ kubectl logs sample-flask-7cf5758669-hh5l5  | grep 200
127.0.0.1 - - [24/Jul/2022 19:35:21] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [24/Jul/2022 19:35:26] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [24/Jul/2022 19:35:27] "GET / HTTP/1.1" 200 -
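
Because the filtering is done by grep itself, all of grep’s usual options apply. For example, you could make the match case-insensitive or simply count the matching lines:


kubectl logs sample-flask-7cf5758669-hh5l5 | grep -i "main request"
kubectl logs sample-flask-7cf5758669-hh5l5 | grep -c 200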

Get Logs of a Specific Pod and All Its Containers with Timestamps

To get logs of a specific pod and all its containers with timestamps, use <terminal inline>kubectl logs<terminal inline> with the timestamps flag enabled:


kubectl logs pod-name --timestamps=true 

This command will return logs with timestamps:


hrittik@hrittik:~$ kubectl logs sample-flask-7cf5758669-hh5l5  --timestamps=true 
2022-07-24T19:33:50.902206138Z  * Serving Flask app 'main' (lazy loading)
2022-07-24T19:33:50.902267839Z  * Environment: production
2022-07-24T19:33:50.902275439Z    WARNING: This is a development server. Do not use it in a production deployment.
2022-07-24T19:33:50.902281739Z    Use a production WSGI server instead.
2022-07-24T19:33:50.902286139Z  * Debug mode: on
2022-07-24T19:33:50.937923019Z  * Running on all addresses (0.0.0.0)
2022-07-24T19:33:50.937949920Z    WARNING: This is a development server. Do not use it in a production deployment.
2022-07-24T19:33:50.937955920Z  * Running on http://127.0.0.1:9000
2022-07-24T19:33:50.937960320Z  * Running on http://172.17.0.3:9000 (Press CTRL+C to quit)
2022-07-24T19:33:50.939451832Z  * Restarting with stat
2022-07-24T19:33:51.532216796Z  * Debugger is active!
2022-07-24T19:33:51.539692655Z  * Debugger PIN: 180-059-963
2022-07-24T19:35:21.500774843Z [2022-07-24 19:35:21,500] INFO in main: Main request successful
2022-07-24T19:35:21.501381047Z 127.0.0.1 - - [24/Jul/2022 19:35:21] "GET / HTTP/1.1" 200 -
2022-07-24T19:35:23.326395583Z 127.0.0.1 - - [24/Jul/2022 19:35:23] "GET /favicon.ico HTTP/1.1" 404 -
2022-07-24T19:35:26.326450694Z [2022-07-24 19:35:26,326] INFO in main: Main request successful
2022-07-24T19:35:26.326713696Z 127.0.0.1 - - [24/Jul/2022 19:35:26] "GET / HTTP/1.1" 200 -
2022-07-24T19:35:27.005894049Z [2022-07-24 19:35:27,005] INFO in main: Main request successful
2022-07-24T19:35:27.006259152Z 127.0.0.1 - - [24/Jul/2022 19:35:27] "GET / HTTP/1.1" 200 -

Final Thoughts

In this piece, you’ve learned why Kubernetes logs are vital, and how you can use them to troubleshoot your application. Once you have mastered it, Kubernetes logging will continue to help you by making it simple to monitor the health of your application and troubleshoot as necessary.

In addition to application logs, you also need cluster-level logging, which will assist you in situations such as node failure or pod removal.

A solution with separate storage and a log lifecycle that isn’t dependent on your nodes, pods, or containers is vital. ContainIQ helps you achieve cluster-level logging, lets you monitor logs from pods, deployments, and other objects, and lets you search logs by pod, message, timestamp, date, or cluster, providing you a single pane of glass.

Sign up for ContainIQ and start your free trial today.

Hrittik Roy
Software Engineer

Hrittik is a writer and a software engineer specializing in cloud native ecosystems. He has worked on many large-scale projects and has experience in both the technical and the business aspects of cloud computing. He is a frequent speaker at conferences and has written numerous articles on software development and distributed systems. In his free time, he likes to go for long walks.
