The cloud native industry relies heavily on Kubernetes, a well-liked container orchestrator, to manage and automate container deployments. Kubernetes ensures that your applications are resilient and that users can always access the requested services, but what if you deploy a new object and it crashes?
To find out why, you’ll need to get your hands dirty and use the command <terminal inline>kubectl logs<terminal inline> to follow the log trail your container leaves behind.
In this tutorial, you’ll learn about the log command using a live application. You can follow along locally on a minikube cluster. By the time you’ve finished, you’ll be able to troubleshoot and debug your application whenever things go haywire.
Understanding kubectl logs
Logging helps you track events in your application, which can be very useful for debugging purposes and understanding why your application is behaving the way it is. These logs can be checked when the application crashes or behaves differently than expected.
<terminal inline>kubectl logs<terminal inline> lets you print the logs from a running container to the standard output or error stream, and it comes with many flags to help with your specific purpose. For example, you can timestamp log lines, follow them as they are written, select a specific container, or prefix each line with the pod or container name, all of which makes the logs more detailed and easier to use when debugging pods.
To print the logs from a container, use the following command:
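In its simplest form, the command takes a pod name (shown here as a placeholder):

```shell
kubectl logs <pod-name>
```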
Let’s take an example of when logging is helpful. Suppose you have a Flask web app, many users visit it, and all the user requests and responses are logged.
Everything’s going well until, suddenly, many users are complaining about latency. In this situation, opening up the logs will help you understand the underlying cause of their problems. You can check for the slowest requests, see which users are making them, and then resolve the issue.
Aggregating all the logs can help you keep track of all the user requests and responses, which will help you understand what happened and why. It also makes it easier to visualize, such as with a Grafana dashboard, to identify potential issues.
Owner Object Logs
Kubernetes is a mature system, so the logs aren’t just limited to pods. The logging mechanism also supports logs from deployment and job objects:
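For example, given a deployment or job name (placeholders here):

```shell
kubectl logs deployment/<deployment-name>
kubectl logs job/<job-name>
```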
By default, only the logs from the first container are printed to your stdout, but using the <terminal inline>--all-containers<terminal inline> flag will list the logs from every container in the given deployment or job.
In Kubernetes, an owner is an object that creates and manages a dependent object; for example, a deployment is the owner of the ReplicaSet it creates. A dependent is an object that references another object as its owner, though it can be orphaned and continue to exist on its own.
The owner object logs help you see all the dependent logs at once, which is sometimes very helpful. With a ReplicaSet, it can be hard to identify which pod a user request went to; logging the entire parent gives you all the details you need.
Practical Example of Log Use
In this section, you will build a sample Flask application, then configure it to run on a local minikube cluster so you can begin examining logs with the <terminal inline>kubectl logs<terminal inline> command.
You’ll need to install the Flask package before you can develop a Flask application, so make sure you have access to Python 3 and pip using the commands below:
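A quick sketch of the version checks (on some systems the interpreter is named <terminal inline>python<terminal inline> rather than <terminal inline>python3<terminal inline>):

```shell
python3 --version
pip3 --version
```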
For Windows users, use <terminal inline>python<terminal inline> instead of <terminal inline>python3<terminal inline>; you can find additional information about installing Python and pip in the official documentation for each tool.
Make sure you have installed Docker following the Docker documentation, as you will be running this application on a Kubernetes cluster, and you need to pack your application inside containers to run in the cluster.
For this tutorial, you’ll use minikube to build a local Kubernetes cluster. Follow the minikube documentation to install minikube locally on your own machine.
Finally, you’ll need to have a working knowledge of kubectl, Flask, and Docker.
For this demo, you’ll have a Flask application with two endpoints, which will offer better log visibility. If you’d like to see the source code for the entire application, you can do so in this GitHub Repository.
Import the Necessary Packages
Flask is a lightweight Python web framework enabling you to create web applications quickly and easily. Install Flask using the pip command:
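The install is a one-liner:

```shell
pip install flask
```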
Import the Flask package to use it in your app:
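A minimal sketch of the import:

```python
# Import the Flask class from the flask package
from flask import Flask
```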
Create a Flask Application
The following code creates the Flask class with the name of the module (<terminal inline>__name__<terminal inline>):
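A sketch of that line, with the import it depends on:

```python
from flask import Flask

# Create the application instance, named after the current module
app = Flask(__name__)
```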
Create the Status Endpoint
Flask uses the <terminal inline>@app.route()<terminal inline> decorator to bind a function to a URL.
The healthcheck route is an endpoint that can be accessed by making a GET request to <terminal inline>/status<terminal inline>. When this route is accessed, a response is returned that contains a JSON object with a key of “result” and a value of “OK - healthy”. The status code for this response is 200, which indicates that the request was successful.
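The healthcheck route described above might look like this (a self-contained sketch; the handler name is an assumption):

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/status")
def healthcheck():
    # Return a JSON body and an explicit 200 status code
    return jsonify(result="OK - healthy"), 200
```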
Create the Primary Endpoint
Now you’ll create a primary endpoint containing the default landing page:
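A self-contained sketch of that endpoint:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # Default landing page
    return "Hello World!"
```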
The above code defines a route, “/”, which accepts GET requests. When a user sends a GET request to the “/” route, the “hello” function is executed and the string “Hello World!” is returned.
Start the Flask App
This last line of code tells your server that it should start itself. It also defines the debug option. Enabling debug mode allows you to see full error messages in the browser. The <terminal inline>host<terminal inline> and <terminal inline>port<terminal inline> options tell the server to listen to all requests coming in on port 9000 on any network interface:
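A sketch of the entry point (the <terminal inline>app.run()<terminal inline> call blocks, so it is guarded to execute only when the file is run directly):

```python
from flask import Flask

app = Flask(__name__)

if __name__ == "__main__":
    # debug=True surfaces full tracebacks in the browser;
    # host="0.0.0.0" listens on every network interface, on port 9000
    app.run(debug=True, host="0.0.0.0", port=9000)
```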
Test the Application
Run the app using the following command:
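Assuming you saved the application as <terminal inline>app.py<terminal inline>:

```shell
python3 app.py
```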
Test the app in your browser at <terminal inline>http://localhost:9000/<terminal inline>. You should see the message “Hello World!”. Visit the <terminal inline>/status<terminal inline> endpoint to see the status of your application and the logs that have been generated on your console.
A <terminal inline>requirements.txt<terminal inline> file is a list of dependencies for a Python project; the content is just package names and version numbers. You can generate it with <terminal inline>pip freeze > requirements.txt<terminal inline>, which tells pip to look at your current environment and write out all the installed packages and their versions.
The file comes in handy when creating a Docker container or virtual environment, because it lets you install all the listed dependencies with one simple command rather than running commands manually, one after another.
Building the Docker Image
To run the Flask application in a Docker container, you need a Dockerfile to build a Docker image for the application. You can use the following as your Dockerfile:
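A minimal sketch of such a Dockerfile (the base image and file layout are assumptions; adjust them to your project):

```dockerfile
FROM python:3.9-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .

EXPOSE 9000

CMD ["python3", "app.py"]
```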
The <terminal inline>docker build<terminal inline> command uses the specified Dockerfile to create a new Docker image. The example below will create the new image from the Dockerfile in the root directory. The new image will be tagged with the name <terminal inline>hrittik/sample-flask<terminal inline>:
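Run the build from the directory that contains the Dockerfile:

```shell
docker build -t hrittik/sample-flask .
```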
Testing the Image
The <terminal inline>docker run<terminal inline> command will create a new container from the “hrittik/sample-flask” image.
The <terminal inline>-d<terminal inline> flag runs the container in the background in detached mode, and <terminal inline>-p 9000:9000<terminal inline> maps port 9000 in the container to port 9000 on the host machine, so you can access the application the same way you accessed it in your browser previously. This time, though, the content is served from a container!
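The command described above looks like this:

```shell
docker run -d -p 9000:9000 hrittik/sample-flask
```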
Running Minikube Cluster
Minikube is a tool from CNCF that makes it easy to run Kubernetes locally. Users can use Minikube to run a single-node Kubernetes cluster on their laptops with the <terminal inline>minikube start<terminal inline> command.
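Starting the cluster is a single command:

```shell
minikube start
```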
Because you’re running a minikube cluster locally, you don’t need to upload the image to an artifact repository, and can simply pass the image to your cluster using the following command:
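One way to do this is minikube's <terminal inline>image load<terminal inline> subcommand:

```shell
minikube image load hrittik/sample-flask
```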
Running the Application
The manifest defines a deployment named <terminal inline>sample-flask<terminal inline> that creates pods running the <terminal inline>hrittik/sample-flask<terminal inline> container image. The container exposes port 9000, which you can externally expose for your application to be viewable in your browser:
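A manifest along these lines would match that description (the replica count and labels are assumptions; <terminal inline>imagePullPolicy: Never<terminal inline> tells the kubelet to use the locally loaded image rather than pulling from a registry):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-flask
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-flask
  template:
    metadata:
      labels:
        app: sample-flask
    spec:
      containers:
        - name: sample-flask
          image: hrittik/sample-flask
          imagePullPolicy: Never
          ports:
            - containerPort: 9000
```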
After you have created the manifest, you can run it using the following command:
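Assuming the manifest is saved as <terminal inline>deployment.yaml<terminal inline>:

```shell
kubectl apply -f deployment.yaml
```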
After your pods have been deployed, you are ready to access the app. You must forward traffic from port 9000 on your local machine to port 9000 on your deployment. With the port exposed, you can access the application running in the deployment by visiting <terminal inline>http://localhost:9000<terminal inline> in your web browser.
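The forwarding described above can be done with:

```shell
kubectl port-forward deployment/sample-flask 9000:9000
```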
Monitor Logs with the kubectl Logs Command
As your cluster and pods run, you can watch your pod, deployment, or job log. In this tutorial, you’ll be focusing on deployments and pods only, though it’s worth noting that jobs work the same way as deployments.
To get started, you can use <terminal inline>kubectl logs pod-name<terminal inline> to send all the logs generated by the pod to the standard output or error stream.
In a deployment, there may be many pods. To target one, first list them with <terminal inline>kubectl get pods<terminal inline>, locate the pod you want to view the logs from, and replace <terminal inline>pod-name<terminal inline> with its name. In the below example, the pod’s name is <terminal inline>sample-flask-7cf5758669-hh5l5<terminal inline>.
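For example (your pod name will differ):

```shell
kubectl get pods
kubectl logs sample-flask-7cf5758669-hh5l5
```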
Get Recent Logs
Most of the time, you won’t need all the logs your objects have ever produced. The <terminal inline>--since<terminal inline> flag restricts output to logs newer than a relative duration such as <terminal inline>1h<terminal inline>, while the related <terminal inline>--since-time<terminal inline> flag accepts an RFC3339-compliant timestamp:
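For example, with the pod from earlier and a placeholder timestamp:

```shell
# Logs from the last hour
kubectl logs sample-flask-7cf5758669-hh5l5 --since=1h

# Logs after an absolute RFC3339 timestamp
kubectl logs sample-flask-7cf5758669-hh5l5 --since-time=2022-05-01T10:00:00Z
```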
Show a Specific Number of Lines of the Log
To see the latest events, you may want to look at only the last few lines of logs. To do this, use the <terminal inline>--tail<terminal inline> flag: replace <terminal inline>pod-name<terminal inline> with the name of your pod, and <nr-of-lines> with a number, such as <terminal inline>10<terminal inline>, to see the last ten lines of your log stream.
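For example:

```shell
kubectl logs sample-flask-7cf5758669-hh5l5 --tail=10
```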
Accessing the Logs From Other Resource Types
Sometimes, there might be more than one container in your pods. You can get logs of a specific container inside a specific pod by using the following kubectl logs command:
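The container is selected with the <terminal inline>-c<terminal inline> flag (both names are placeholders here):

```shell
kubectl logs <pod-name> -c <container-name>
```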
A list of deployments or jobs can be retrieved with <terminal inline>kubectl get deployments<terminal inline> and <terminal inline>kubectl get jobs<terminal inline>.
You can then retrieve the logs of a deployment with the following:
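For example, using the deployment from this tutorial:

```shell
kubectl logs deployment/sample-flask
```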
For jobs, you can use:
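The job name is a placeholder:

```shell
kubectl logs job/<job-name>
```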
Advanced Logs Search With grep
You can search for very specific occurrences if you pipe the kubectl and grep commands together.
Piping <terminal inline>kubectl logs<terminal inline> into grep searches through all of the logs for a given pod and returns any lines that match the grep string. This is a handy way to troubleshoot errors in your application, as you can quickly search through all of a pod’s logs for any lines that contain a specified error string.
For example, if you want to search logs for a specific endpoint or user, you can include it in the search expression:
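For instance, to match only requests to the status endpoint:

```shell
kubectl logs sample-flask-7cf5758669-hh5l5 | grep "/status"
```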
An example of this command in action would be using it to find all <terminal inline>200 successful<terminal inline> requests, as below:
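Matching on the status code might look like this:

```shell
kubectl logs sample-flask-7cf5758669-hh5l5 | grep "200"
```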
Get Logs of a Specific Pod and All Its Containers with Timestamps
To get logs of a specific pod and all its containers with timestamps, use <terminal inline>kubectl logs<terminal inline> with the timestamps flag enabled:
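For example:

```shell
kubectl logs sample-flask-7cf5758669-hh5l5 --timestamps
```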
Each line of the returned logs will be prefixed with an RFC3339 timestamp.
In this piece, you’ve learned why Kubernetes logs are vital, and how you can use them to troubleshoot your application. Once you have mastered it, Kubernetes logging will continue to help you by making it simple to monitor the health of your application and troubleshoot as necessary.
In addition to application logs, you also need cluster-level logging, which will assist you in situations such as node failure or pod removal.
A solution with separate storage and a log lifecycle that is not dependent on your nodes, pods, or containers is vital. ContainIQ is a solution that helps you achieve cluster-level logging, and it also allows you to monitor logs from pods, deployments, and other objects, and to search logs by pod, message, timestamp, date, or cluster, providing you a single pane of glass.
Sign up for ContainIQ and start your free trial today.