
Python Logging: Getting Started, Best Practices, and More

July 5, 2022

Logging is mission-critical for most engineering teams. In this guide, you’ll learn about logging in Python, including how to get started and how to make your log data work for you.

Davis David
Data Scientist, Engineer

The Python logging module provides a flexible framework for tracking different activities or events when the application is running. The logging module can save the events in log files or any other format that allows you to track when a particular event occurred.

Logging is very useful for your applications because it gives you clear information to debug with when a crash occurs. The logging records collected can also provide insights into your application’s performance and where it needs improvement.

Without the information offered by logging, it can be very difficult to know what’s wrong with your application when it goes down.

In this article, you’ll learn everything you need to know to start logging in Python, including the standard logging library, basics of logging in Python, and best practices.

Logging in Python | Basics

Part of the standard Python library, the Python logging module tracks events and writes log messages to a user-configured output. There are multiple ways to format log messages, ensuring that you’re able to include all the information you need. A log message can store events that occur during the normal operation of the application, errors within the application, and warnings regarding a specific event.

Logging in Python has many use cases:

  • Debugging: If something goes wrong, reading the log messages around the error will help you figure out what really happened, as well as point you towards potential solutions.
  • Performance insights: Logging can provide insights about your application’s performance.
    For example, you can track activities in your application such as request rates, load performance, and response time. The details you collect can help you identify operations with unnecessarily long run times that can cause problems in the future, and can inform improvements to the overall performance of your application.
  • Warnings: Logging can be used to record events and send a warning to a logging file, email address, or web server if something unexpected is happening in your application. For example, you might choose to get an email if you’re close to running out of disk space on your server (a sketch of this idea follows below).
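
The following is a minimal sketch of that last idea, using the standard library’s shutil.disk_usage; the 10% threshold and the root path are hypothetical values chosen for illustration.

import logging
import shutil

logging.basicConfig(level=logging.INFO)

# Hypothetical check: warn when less than 10% of disk space remains.
usage = shutil.disk_usage("/")
if usage.free / usage.total < 0.10:
    logging.warning("Low disk space: %.1f%% remaining", usage.free / usage.total * 100)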

Standard Logging Library

Python comes with a built-in logging module, so you don’t need to install any packages to implement logging in your application. All you need to do is import the logging module, then set up a basic configuration by using the logging.basicConfig() method. You then call logging.{level}(message), for example logging.info(), to write a log message.


import logging

logging.basicConfig(level=logging.INFO,filename="log_file.log")

logging.info("The application is running task number 5")

The above example does the following:

  • Imports the logging module.
  • Creates a basic configuration for the default logger.
  • Sets the threshold logging level to INFO.
  • Sets a file called log_file.log where log messages will be saved.

The log messages in log_file.log will read as follows:

INFO:root:The application is running task number 5

The first part of the log message is the logging level (INFO), followed by the name of the default logger (root), and finally the log message itself.

Understand Logging Levels

In the above example, you set the logging level to INFO. Logging levels are labels used to show the importance of the given log message. The Python logging library supports five different logging levels, each associated with a constant value that shows the severity of the log entry.

  • Debug (10): The lowest level of log messages, this level shows information that can help you to diagnose a particular problem in your application.
  • Info (20): This level shows information that indicates the application is running as expected. For example, “new user has registered”.
  • Warning (30): This level shows information that is indicative of future problems. This is not an error, but requires your attention. For example, “Too many requests from IP address 126.45.67.3”, or “Low disk space.”
  • Error (40): This level shows that the application has failed to perform some tasks or functions, such as “File failed to upload.”
  • Critical (50): This level shows information that indicates serious, urgent errors that can cause the application to stop working.

The lowest severity level is debug, and the highest severity level is critical. The default severity level is warning, which means only events at this level and above will be tracked.
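
You can see this default threshold in action with a minimal sketch that uses the root logger and no configuration at all:

import logging

# With no configuration, the root logger's threshold is WARNING,
# so the first two calls produce no output.
logging.debug("Not shown")
logging.info("Not shown")
logging.warning("Shown")   # prints WARNING:root:Shown
logging.error("Shown")     # prints ERROR:root:Shown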

The example below shows how you can configure the logging level to error:


import logging

# create logger
logger = logging.getLogger('logger')

# configure the logging level to ERROR
logger.setLevel(logging.ERROR)

In the example above, the logging level is configured to error. This means that only events at the level of error or above will be tracked.
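
Continuing that sketch, only messages at the error level and above come through; since no handler is attached yet, Python’s last-resort handler prints them to stderr:

logger.warning("Not shown: below the ERROR threshold")
logger.error("Shown on stderr")
logger.critical("Shown on stderr")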

Configuring Python Loggers

Loggers are objects that you can use to create and configure different types of log messages that you want to implement in your application. For example, you can configure one logger to send log messages to a remote machine, and another to send logs to a file. You can also set these loggers at different severity levels, such as debug and warning.

The following example shows how you can create and configure different Python loggers:


import logging
import logging.handlers

# create first logger
first_logger = logging.getLogger('first logger')
first_logger.setLevel(logging.DEBUG)
first_logger.addHandler(logging.FileHandler(filename="first_logger.log"))

# create second logger
second_logger = logging.getLogger('second logger')
second_logger.setLevel(logging.WARNING)
second_logger.addHandler(logging.handlers.SocketHandler(host='localhost', port=8000))

The above example does the following:

  • Creates the first logger, named first logger.
  • Sets its threshold logging level to debug.
  • Creates a file handler that sends log messages to a disk file named first_logger.log.
  • Creates the second logger, named second logger.
  • Sets its threshold logging level to warning.
  • Creates a socket-based handler that sends log messages to the network socket given by host and port.
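
With both loggers configured, a short usage sketch (this assumes a listener is actually running on localhost:8000; if not, the SocketHandler silently drops the records):

# Messages at or above each logger's threshold go to its handler.
first_logger.debug("Written to first_logger.log")
second_logger.warning("Pickled and sent to localhost:8000")
second_logger.info("Dropped: below the warning threshold")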

Understanding Logging Handlers

Logging handlers are responsible for sending the log messages created by loggers to their specified destinations. Logging handlers can also be called targets or writers, depending on the platform.

The Python logging.handlers module has several handlers that you can use in your application. Note that FileHandler and StreamHandler are defined in the core logging module, while the rest live in logging.handlers, which must be imported separately.

FileHandler

FileHandler saves log messages directly to a file. In the following example, the file_handler object will save log messages to a file called log_file.log.


import logging

# Create the Handler for log messages to a file
file_handler = logging.FileHandler("log_file.log")
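
Creating a handler does nothing by itself; it has to be attached to a logger. A minimal sketch (the logger name file_logger is arbitrary):

logger = logging.getLogger("file_logger")
logger.setLevel(logging.INFO)
logger.addHandler(file_handler)

logger.info("This message is written to log_file.log")

The same attach-and-log pattern applies to every handler below.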

HTTPHandler

HTTPHandler lets you send log messages to a web server using either GET or POST semantics.


import logging
import logging.handlers

# sends log messages over HTTP
http_handler = logging.handlers.HTTPHandler(host='127.0.0.1:5000', url='/logs', method='GET')
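
On the receiving side, HTTPHandler transmits the log record’s attributes as URL-encoded key/value pairs. The following is a hypothetical receiver sketch, built only on the standard library, to show what arrives:

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class LogReceiver(BaseHTTPRequestHandler):
    def do_GET(self):
        # With method='GET', the record's attributes are in the query string.
        record = parse_qs(urlparse(self.path).query)
        print(record.get("levelname", ["?"])[0], record.get("msg", ["?"])[0])
        self.send_response(200)
        self.end_headers()

HTTPServer(("127.0.0.1", 5000), LogReceiver).serve_forever()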

SocketHandler

SocketHandler lets you send log messages to TCP/IP sockets.


import logging
import logging.handlers

# sends log messages to a network socket
socket_handler = logging.handlers.SocketHandler(host='125.105.34.4', port=8000)
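
SocketHandler pickles each log record and prefixes it with a four-byte length field, so the receiving end has to reverse that. A minimal receiver sketch (the Python logging cookbook has a fuller version):

import logging
import pickle
import socketserver
import struct

class LogRecordStreamHandler(socketserver.StreamRequestHandler):
    def handle(self):
        while True:
            # Each record arrives as a 4-byte big-endian length
            # prefix followed by a pickled dict.
            header = self.connection.recv(4)
            if len(header) < 4:
                break
            length = struct.unpack(">L", header)[0]
            data = self.connection.recv(length)
            while len(data) < length:
                data += self.connection.recv(length - len(data))
            record = logging.makeLogRecord(pickle.loads(data))
            print(record.levelname, record.getMessage())

server = socketserver.TCPServer(("0.0.0.0", 8000), LogRecordStreamHandler)
server.serve_forever()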

StreamHandler

StreamHandler lets you send log messages to streams such as sys.stdout and sys.stderr. In the following example, log messages will be sent to the console.


import logging
import sys

# send log messages to screen console
stream_handler = logging.StreamHandler(stream=sys.stdout)

StreamHandler is the recommended handler when you’re debugging your application.

SMTPHandler

SMTPHandler lets you send log messages to an email address via SMTP. In the following example, the smtp_handler object will send a log message via email with the subject of “Alert!” to system_admin@example.com.


import logging
import logging.handlers

# send log messages via email
smtp_handler = logging.handlers.SMTPHandler(
    mailhost=("example.com", 8025),
    fromaddr="alerts@example.com",
    toaddrs="system_admin@example.com",
    subject="Alert!")

This logging handler is especially useful, as it allows you to send urgent log messages directly to the person responsible for handling them. This person might be a system administrator, software developer, or security engineer.
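
Since an email for every log message would quickly become noise, a common pattern is to set a high threshold on the handler itself. A sketch using the smtp_handler from above:

logger = logging.getLogger('alerts')
logger.setLevel(logging.DEBUG)

# Only critical messages trigger an email; the handler-level threshold
# does not affect any other handlers attached to the logger.
smtp_handler.setLevel(logging.CRITICAL)
logger.addHandler(smtp_handler)

logger.critical("Database is unreachable")  # sends the email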

Understanding Formatters

Formatter objects are responsible for specifying the layout of log messages in the final output. It’s up to you to decide what you want the output of the log messages to look like. The minimum recommendation for this is to include the date, time, and logging level in your output format.

The logging module provides various log-record attributes that you can implement in your formatter.

In the following example, you will learn how to specify the final output of the log message by including the date, time, name of the logger, and the specified logging level.


import logging

# create logger
logger = logging.getLogger('simple_logger')

# set logging level
logger.setLevel(logging.DEBUG)

# create console handler and set level to debug
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.DEBUG)

# create formatter
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')

# add formatter to console_handler
console_handler.setFormatter(formatter)

# add console_handler to logger
logger.addHandler(console_handler)

# 'application' code
logger.debug('debug message')
logger.info('info message')
logger.warning('warn message')
logger.error('error message')
logger.critical('critical message')

The formatter in the above example uses several log-record attributes.

  • %(asctime)s: A human-readable timestamp showing when the log message was created.
  • %(name)s: The name of the logger object used to log the call.
  • %(levelname)s: The logging level for the message in a text format.
  • %(message)s: The log message itself.

Here is the output in the console:


2022-02-15 21:53:41,135 - simple_logger - DEBUG - debug message

2022-02-15 21:53:41,143 - simple_logger - INFO - info message

2022-02-15 21:53:41,145 - simple_logger - WARNING - warn message

2022-02-15 21:53:41,148 - simple_logger - ERROR - error message

2022-02-15 21:53:41,149 - simple_logger - CRITICAL - critical message
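
The formatter isn’t limited to these four attributes. As a sketch, reusing the logger and console_handler from the example above, you could also include the source file, line number, and function name of each call (the output line shown in the comment is illustrative):

# include where each message was logged from
detailed_formatter = logging.Formatter(
    '%(asctime)s - %(filename)s:%(lineno)d - %(funcName)s - %(levelname)s - %(message)s')
console_handler.setFormatter(detailed_formatter)

logger.error('error message')
# e.g. 2022-02-15 21:54:02,118 - example.py:24 - <module> - ERROR - error message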

Best Practices | Timestamps, Rotations

To get the most out of your Python logs, it’s important to adhere to logging best practices.

Timestamps

Timestamps can include both the date and time, and are important to include in your log messages. Timestamps make it easy to see when an event occurred, and make accessing historical logs much easier.

For example, if you receive a lot of log messages indicating a warning due to high application traffic, the timestamp will allow you to identify if the high traffic is caused by intermittent spikes, or if traffic consistently goes up only at a specific time of day. This will help you to come up with a solution to handle the situation.
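
The %(asctime)s attribute shown earlier produces the default timestamp format; Formatter’s datefmt parameter lets you control it. For example, a sketch using an ISO-8601-style timestamp:

import logging

# %(asctime)s will follow datefmt instead of the default format
formatter = logging.Formatter(
    fmt='%(asctime)s - %(levelname)s - %(message)s',
    datefmt='%Y-%m-%dT%H:%M:%S')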

Rotating Log Files

While at first it may seem easier to save all your logs to a single file, it’s considered best practice to spread your logs across multiple files, especially if you have extensive logs. A single massive log file leads to poor performance, because large files are slower to open, search, and write to. A 500 MB log file will take significantly longer to work with than one that’s limited to 2 MB.

Using a logging handler called RotatingFileHandler can help you rotate your log files. With log rotation, log messages are saved to a file until that file hits a predetermined size, at which point a new file is created. The handler logging.handlers.RotatingFileHandler will rotate log files based on a user-configured maximum size. To enable this, you need to configure the values of two parameters, maxBytes and backupCount.

The maxBytes value is the maximum size of a log file. When the log file is about to reach maxBytes, that file is closed, and a new file is silently opened to receive new log messages. The backupCount value sets how many rotated files are kept, and determines their names. For example, with backupCount set to five and a base log file name of logging_file.log, you would get logging_file.log, logging_file.log.1, and so on, up through logging_file.log.5.

The example below shows you how to use and configure RotatingFileHandler in your application.


import logging
from logging.handlers import RotatingFileHandler

# create logger
logger = logging.getLogger('simple_logger')

# set logging level
logger.setLevel(logging.DEBUG)

# create rotating file handler and set level to debug
handler = RotatingFileHandler('my_log.log', maxBytes=2000, backupCount=10)
handler.setLevel(logging.DEBUG)

# create formatter
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')

# add formatter to rotating file handler
handler.setFormatter(formatter)

# add handler to logger
logger.addHandler(handler)

for _ in range(10000):
  # 'application' code
  logger.debug('debug message')
  logger.info('info message')
  logger.warning('warn message')
  logger.error('error message')
  logger.critical('critical message')

In the above example, the RotatingFileHandler has a maxBytes value of two thousand bytes (2 KB), and the backupCount is ten. This means you can have a total of eleven log files, each with a maximum size of 2 KB.

Using ContainIQ

ContainIQ Metrics Dashboard

It can be challenging to scan and analyze your log messages to find the information you need. If you’re running your Python application in Kubernetes, you can easily manage your cluster-level and application log messages with ContainIQ. The ContainIQ platform can help you and your team automatically monitor your Kubernetes cluster by collecting and storing logs, events, traces, and latencies.

ContainIQ has pre-built dashboards that your engineering team can search, and offers a clear, human-readable view of events, log messages, and timestamps across your clusters. You can also create alerts on specific messages and be notified in selected Slack channels.

ContainIQ stores cluster and application logs for 14 days by default. However, users are able to request longer retention periods if needed.

Final Thoughts

Tracking different activities or events when your application is running is very important. In this article, you learned how to get started with logging by using Python’s standard logging library and best practices relating to logging in Python.

Extracting insights from logs requires effort and planning, and can still be tricky. With ContainIQ, you can effectively manage and analyze Kubernetes cluster logs, application logs, and other metrics. ContainIQ provides a user-friendly way to help you and your team easily visualize and gain actionable insights from your logs. Create an account or book a demo today.

Davis David is the Data Scientist at Binary Institute. He has a background in Computer Science with a degree in Computer Science Engineering from the University of Dodoma. Davis is passionate about artificial intelligence, machine learning, deep learning, and software development. Davis is the co-organizer of AI meet-ups, workshops and events with a passion to build a community of Data Scientists in Tanzania. He is an experienced technical author with bylines on Hackernoon, FreecodeCamp, and others.
