Applications running in production can’t tell you directly what’s going on under the hood, so you need a way to keep track of their behavior. Knowing what’s going on in your code at any given time benefits you from both a technical and a business standpoint. With this information, engineers and product managers can make better decisions about which systems to repair or how to improve user experience (UX).
The most straightforward way to achieve this is with logging. During development, you can instrument the program to emit relevant information at runtime, which is useful for analysis, debugging, and further troubleshooting.
Django is a popular Python web application framework, used by organizations large and small. Python offers a powerful built-in logging module to log events from applications and libraries. The module is flexible and can be easily customized to fit any application. A basic implementation looks like this:
When the program runs, the log records appear on the console, each tagged with its level and logger name.
This article will delve deeper into the logging concepts that every developer should know. You’ll learn how to log significant data, how to route the logs, and how to consolidate them for insights into your Django applications. You’ll also learn best practices to follow.
What Are Logs?
Logs are the records of events that occur while running software. They contain information about the application, system performance, or user activities. Loggers are the objects that a developer interacts with to print out the information. They help you tell the program what to log and how to do it.
Adding logging to your code involves three critical steps:
- Choosing what data to output and where in the code to do so
- Choosing how to format the logs
- Choosing where to transmit the logs (e.g., stdout or syslog)
How to Implement Logging
To implement logging in a Django application, you must consider the following factors.
Log Levels

The Python logging library adds several types of metadata that provide valuable context around log messages. This context helps you diagnose problems and analyze what's happening inside the code while the application runs. For example, log levels define the severity of log events, which lets you segment logs so you see the most relevant messages at any given time.
You can use log levels to help prioritize log messages. For instance, while you're developing an application, <terminal inline>DEBUG<terminal inline> information is most relevant; once the application is running, <terminal inline>INFO<terminal inline> logs can mark notable events.
The Python logging package comes with five logging levels: <terminal inline>critical<terminal inline>, <terminal inline>error<terminal inline>, <terminal inline>warning<terminal inline>, <terminal inline>info<terminal inline>, and <terminal inline>debug<terminal inline>. These levels are denoted by constants with the same name: <terminal inline>logging.CRITICAL<terminal inline>, <terminal inline>logging.ERROR<terminal inline>, <terminal inline>logging.WARNING<terminal inline>, <terminal inline>logging.INFO<terminal inline>, and <terminal inline>logging.DEBUG<terminal inline>, with values of 50, 40, 30, 20, and 10. A level's numeric value determines its severity: the higher the value, the more severe the event.
The generally accepted conventions for applying these levels are as follows:
- DEBUG: <terminal inline>logging.DEBUG<terminal inline> can be used to log detailed information for debugging code in development, such as when the app starts.
- INFO: <terminal inline>logging.INFO<terminal inline> can be used to log information about the code if it is running as expected, such as when a process starts in the app.
- WARNING: <terminal inline>logging.WARNING<terminal inline> can be used to report unexpected behavior that could cause a future problem but isn’t impacting the current process of the application, such as when the app detects low memory.
- ERROR: <terminal inline>logging.ERROR<terminal inline> can be used to report events when the software fails to perform some action, such as when the app fails to save data due to insufficient permissions given to the user.
- CRITICAL: <terminal inline>logging.CRITICAL<terminal inline> can be used to report serious errors that impact the continued execution of the application, such as when the application fails to store data due to insufficient memory.
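To illustrate how a logger's level gates output, here's a small sketch (the logger name "demo" is made up):

```python
import logging

logger = logging.getLogger("demo")    # hypothetical logger name
logger.setLevel(logging.WARNING)      # suppress DEBUG and INFO records
logger.addHandler(logging.StreamHandler())

logger.debug("Not emitted: below WARNING")
logger.warning("Emitted: at or above WARNING")
```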
Loggers

The <terminal inline>logging.Logger<terminal inline> object offers the primary interface to Python’s logging library. These objects include methods for issuing log requests and for querying and modifying their state, as follows:
- <terminal inline>Logger.critical(msg, *args, **kwargs)<terminal inline>
- <terminal inline>Logger.error(msg, *args, **kwargs)<terminal inline>
- <terminal inline>Logger.debug(msg, *args, **kwargs)<terminal inline>
- <terminal inline>Logger.info(msg, *args, **kwargs)<terminal inline>
- <terminal inline>Logger.warning(msg, *args, **kwargs)<terminal inline> (the older <terminal inline>Logger.warn()<terminal inline> alias is deprecated)
In addition, loggers provide the following two options:
- <terminal inline>Logger.log(level, msg, *args, **kwargs)<terminal inline> sends log requests with defined logging levels. When you’re using custom logging levels, this approach comes in handy.
- <terminal inline>Logger.exception(msg, *args, **kwargs)<terminal inline> sends log requests with the logging level <terminal inline>ERROR<terminal inline> and includes the current exception in the log entries. This function should be called from an exception handler.
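A sketch of <terminal inline>Logger.exception()<terminal inline> called from an exception handler (the function and messages are illustrative):

```python
import logging

logger = logging.getLogger(__name__)

def divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        # Logged at ERROR level, with the current traceback appended
        logger.exception("Division by zero")
        return None
```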
Handlers

Logging handlers determine where the logs go, such as the system log or a file. Unless explicitly configured, the logging library falls back to a <terminal inline>StreamHandler<terminal inline> that sends log messages to the console via <terminal inline>sys.stderr<terminal inline>.
Handlers also format log records into log entries using their formatters. The formatter for a handler can be set by clients using the <terminal inline>Handler.setFormatter(formatter)<terminal inline> function. If a handler doesn’t have a formatter, the library’s default formatter is used.
The logging library and its <terminal inline>logging.handlers<terminal inline> module include more than a dozen useful handlers that span a range of use cases (including the ones mentioned above). The most commonly used logging handlers are as follows:
- <terminal inline bold>StreamHandler<terminal inline bold> transmits logs to a stream-like object, such as <terminal inline>sys.stdout<terminal inline> or <terminal inline>sys.stderr<terminal inline> (the default).
- <terminal inline bold>FileHandler<terminal inline bold> redirects log events to a file.
- <terminal inline bold>SysLogHandler<terminal inline bold> routes logs to your system’s syslog daemon.
- <terminal inline bold>HTTPHandler<terminal inline bold> allows you to deliver logs through HTTP.
- <terminal inline bold>NullHandler<terminal inline bold> discards log records, which is helpful for silencing a library’s logs or temporarily halting logging.
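For instance, a <terminal inline>FileHandler<terminal inline> with a custom formatter might be attached like this (the logger name and file path are illustrative):

```python
import logging

logger = logging.getLogger("app")          # hypothetical logger name
handler = logging.FileHandler("app.log")   # hypothetical file path
handler.setFormatter(
    logging.Formatter("%(levelname)s:%(name)s:%(message)s")
)
logger.addHandler(handler)

logger.warning("Disk usage at 90%")  # appended to app.log
```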
Formatters

Log messages from the logging library follow this default format:
<terminal inline><LEVEL>:<LOGGER_NAME>:<MESSAGE><terminal inline>
However, you can customize entries and add more information using <terminal inline>logging.Formatter<terminal inline> objects, which turn log records into string-based log entries.
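A sketch of a custom formatter that adds a timestamp and the logger name (this format string is one common choice, not the only one; the logger name "orders" is made up):

```python
import logging

formatter = logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
handler = logging.StreamHandler()
handler.setFormatter(formatter)

logger = logging.getLogger("orders")   # hypothetical logger name
logger.addHandler(handler)
logger.error("Payment failed")
# e.g. 2024-05-01 12:00:00,123 ERROR orders: Payment failed
```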
Django uses the Python <terminal inline>logging<terminal inline> module by default, which provides different ways to create customized loggers through handlers, formatters, and levels. The logging module is capable of:
- Thread-safe execution
- Categorizing messages via different log levels
- Setting the destination of the logs
- Controlling what to include and what to omit
How to Add Logging in Django
To use logging in your Django project, follow these steps.
Configure the <terminal inline>settings.py<terminal inline> for various loggers, handlers, and filters:
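A minimal sketch of such a <terminal inline>LOGGING<terminal inline> dictionary; the formatter, handler names, file path, and levels are illustrative choices, not the only valid ones:

```python
# settings.py
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "verbose": {
            "format": "{levelname} {asctime} {module} {message}",
            "style": "{",
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "verbose",
        },
        "file": {
            "class": "logging.FileHandler",
            "filename": "django.log",   # hypothetical path
            "formatter": "verbose",
        },
    },
    "loggers": {
        # Route Django's own messages to both handlers
        "django": {
            "handlers": ["console", "file"],
            "level": "INFO",
        },
    },
}
```

Django applies this dictionary with <terminal inline>logging.config.dictConfig()<terminal inline> at startup.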
Restart the server. You’ll see logs on your console or in log files, depending on the configuration.
Best Practices for Logging in Django
Logging is vital to your Django application because it can save you time in crucial situations. Follow these best practices for implementation:
- Create loggers using the <terminal inline>logging.getLogger()<terminal inline> factory function, so that the logging library can manage the mapping of names to instances and maintain a hierarchy of logs. This way, you can use the logger’s name to access it in different parts of the application, and only a set number of loggers will be created at runtime.
- Specify proper log levels to lower the risks of deployment and ensure effective debugging. This helps prevent the flooding of log files with trivial information due to inappropriate settings.
- Format logs correctly so that the system can parse them. This is useful when manually reading the logs isn’t enough, such as for audits or alerts.
- Don’t log sensitive information like passwords, authorization tokens, personally identifiable information (PII), credit card numbers, or session identifiers.
- Use fault-tolerant protocols while transferring logs to avoid packet loss. Secure log data by encrypting it and removing any sensitive information before transferring it.
- Create meaningful log messages so that you can easily tell what happened from the log file.
- Enhance log messages by including additional information.
- Don’t make log messages reliant on the content of prior messages, since the previous messages may not display if they’re logged at different levels.
- Ensure that logs are written asynchronously during log generation. Buffer or queue logs to prevent the program from stalling. Organize logs so that it’s simple to make changes as needed.
- Use a wrapper to shield the program from third-party tools. Use standard date and time formats, add timestamps, use log levels appropriately, and include a stack trace when reporting the error to make logs more human-readable. Include the thread’s name in a multithreaded program.
- Use filters or <terminal inline>logging.LoggerAdapter<terminal inline> to inject local contextual information, or use <terminal inline>logging.setLogRecordFactory()<terminal inline> to inject global contextual information in the log records. Don’t log too much, though, or it might become difficult to extract value.
- If you use <terminal inline>FileHandler<terminal inline> to write logs, the log file will expand over time and eventually take up all your storage space. In the production environment, utilize <terminal inline>RotatingFileHandler<terminal inline> instead of <terminal inline>FileHandler<terminal inline> to prevent this problem.
- When you have many different servers and log files, set up a central log system for all critical messages so you can quickly monitor it for problems.
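As an example of the rotation advice above, a sketch with illustrative size limits, file name, and logger name:

```python
import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger("prod")    # hypothetical logger name
# Rotate once the file reaches ~1 MB, keeping 5 backups (illustrative values)
handler = RotatingFileHandler("prod.log", maxBytes=1_000_000, backupCount=5)
logger.addHandler(handler)

logger.warning("Old entries roll over to prod.log.1, prod.log.2, ...")
```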
Logging configuration in Django is simple, but it can get complicated when dealing with large applications. Aside from the Python logging module, you can also use popular logging tools such as ContainIQ.
Using ContainIQ for Django Applications
Today, many companies are running their Django applications on managed Kubernetes services like GKE and EKS. For those companies, ContainIQ allows users to quickly monitor metrics, logs, traces, and events within a Kubernetes cluster. This helps teams monitor and manage Kubernetes cluster and application health using pre-built dashboards.
ContainIQ for Django Applications on K8s
ContainIQ offers an efficient logging dashboard that automatically collects everything logged within the Django application as well as the Kubernetes system components. You can search through the logs by message, timestamp, or cluster.
Logging can help you improve your application development and end user experience. Because Django is a Python-based framework, Python’s logging system benefits you in a number of ways, and you can implement it fairly easily. Remember to follow the above best practices to simplify your setup and ensure better-quality results.
And if you are using Kubernetes as your container orchestration system, you should also consider incorporating ContainIQ into your logging strategy, because the tool offers multiple features to improve your workflow. To learn more about ContainIQ, check the documentation.