
Docker Logging: Getting Started, Best Practices, and More

Effective logging is a key aspect of successful deployment. This guide will explore logging in Docker, including best practices and how it differs from traditional logging.

July 13, 2022
James Walker
Software Engineer

In recent years, Docker has become one of the most popular tools to run containerized workloads in development and production. A vital part of any deployment is collecting logs from your applications to monitor for bugs, compromises, and misconfigurations, and Docker has integrated support for container logs so you can keep tabs on your environments.

In this article, we’ll explore how Docker logs work, the options for using them, and how they differ from conventional system logs. We’ll also look at the different ways Docker can be configured to store and deliver your logs to a secondary location for long-term retention.

The Importance of Logging

Logs are a critical component of observable applications. Well-placed log statements provide a route for developers and operations teams to follow when inspecting a system’s inner state. These statements function as convenient shortcuts for diagnosing problems ranging from outages to tricky bug reports from users.

Beyond debugging, effective use of logs serves everyone involved in an application’s lifecycle, from performance analysts to legal teams concerned with accountability. In the latter case, detailed logging of a system’s operations could inform an auditor whether external regulatory standards are being met.

For all their utility, the importance of retaining logs can sometimes be overlooked in containerized workflows, where deployments are often short-lived instances of applications. Logs are frequently lost when the container stops, destroying valuable information you could need in the future.

To avoid this, it’s important to spend some time setting up a comprehensive Docker logging strategy. Although logs aren’t written to <terminal inline>/var/log<terminal inline> for automatic rotation by <terminal inline>logrotate<terminal inline>, the long-standing routine most Linux administrators will be familiar with, you can set up equivalent tooling by making full use of Docker’s native logging mechanisms.

Getting Started With Docker Logs

Docker collects logs from the standard output and error (<terminal inline>stdout<terminal inline> and <terminal inline>stderr<terminal inline>) streams of the foreground processes of your containers. These logs are sent to the selected logging driver—more on these below—to be stored persistently.

The Docker CLI includes a built-in log viewer that retrieves the logs saved for your containers:


$ docker logs my_container
Log line 1
Log line 2
Log line 3

Replace <terminal inline>my_container<terminal inline> with the ID or name of one of your containers. If you don’t know the names or IDs, you can get this information by running <terminal inline>docker ps<terminal inline> to list all the containers on your host.
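
If you’re not sure which container you need, <terminal inline>docker ps<terminal inline> lists IDs and names side by side (the container shown here is illustrative):

```shell
$ docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED       STATUS       PORTS     NAMES
f2b7a043a1c9   nginx:latest   "/docker-entrypoint.…"   2 hours ago   Up 2 hours   80/tcp    my_container
```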

Running <terminal inline>docker logs my_container<terminal inline>, with no further arguments, dumps the current contents of the container’s logs into your terminal window. You can continually livestream logs to your terminal by adding the <terminal inline>--follow<terminal inline> flag:


$ docker logs my_container --follow
Log line 1
Log line 2
...

This lets you monitor logs in real time until you stop the stream by pressing Ctrl+C.

The <terminal inline>docker logs<terminal inline> command supports a few additional arguments that provide basic filtering capabilities:

  • <terminal inline bold>--tail 10<terminal inline bold>: Show only the ten latest lines from the log. This is helpful when you’re looking for recent activity in very busy logs. Change <terminal inline>10<terminal inline> to the number of lines you’d like to fetch.
  • <terminal inline bold>--since 30m<terminal inline bold>: Show log lines written in the past thirty minutes. You can use a human-readable relative time string or a precise timestamp in RFC3339 format. The similar <terminal inline bold>--until<terminal inline bold> flag lets you filter to show only logs written before a given time.
  • <terminal inline bold>--timestamps<terminal inline bold>: Prepend each log line with the timestamp it was written at. This is disabled by default because it’s common for applications to include their own timestamps with the logs they write. The flag is useful when you’re inspecting a service that doesn’t follow this convention.
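
These filters can be combined in a single command. For example, to see only the last ten lines written in the past thirty minutes, each prefixed with its timestamp (the output shown is illustrative):

```shell
$ docker logs --tail 10 --since 30m --timestamps my_container
2022-07-13T10:15:04.003423421Z Log line 1
2022-07-13T10:16:22.871004312Z Log line 2
```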

Running <terminal inline>docker logs<terminal inline> is usually your first step when accessing logs written by containers. The command illustrates the primary difference between Docker logs and conventional system logs: you need to use Docker’s tools to access your data, instead of directly opening a file on your host.


Choosing a Logging Driver

Docker uses a modular approach to logging that’s based on the concept of the logging driver. Logging drivers are responsible for storing the logs created by your containers. They determine the on-disk format of the log files, as well as the location they’re written to. Several drivers are included with the default Docker distribution, and more are available as plugins.

The default driver is called <terminal inline>json-file<terminal inline>. It stores your logs as local JSON files within the <terminal inline>/var/lib/docker/containers<terminal inline> directory on your Docker host. The use of JSON means it’s straightforward to parse the logs within your own scripts if you’re adding custom automation. However, JSON is a verbose format, which can quickly create very large log files. The <terminal inline>json-file<terminal inline> driver doesn’t perform automatic file rotation, so over time, logs will eat up your system’s storage capacity.
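
Each line in a <terminal inline>json-file<terminal inline> log is a JSON object that records the message, the stream it arrived on, and a timestamp. A typical entry looks like this (the values are illustrative):

```json
{"log":"Log line 1\n","stream":"stdout","time":"2022-07-13T10:15:04.003423421Z"}
```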

A popular alternative driver is <terminal inline>local<terminal inline>. This also stores data on your host machine, but uses a non-human-readable format to reduce the size of its files. The <terminal inline>local<terminal inline> driver includes automatic log rotation to further reduce the risk of runaway disk usage. It’s recommended you use the <terminal inline>local<terminal inline> driver unless you need the convenience of the JSON format.

Additional bundled log drivers include <terminal inline>syslog<terminal inline> and <terminal inline>journald<terminal inline>. These write all container log lines to the respective daemons running on your host. There are also several ready-to-use integrations with popular log monitoring and observability platforms. Drivers for Fluentd, Amazon CloudWatch Logs, Google Cloud Logging, and other popular solutions ship with Docker, giving you options for long-term, off-device retention of your logs.
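
As an example of shipping logs off the container, you can point a single container at a syslog daemon by selecting the <terminal inline>syslog<terminal inline> driver and supplying the daemon’s address (the address shown is illustrative):

```shell
$ docker run --log-driver syslog --log-opt syslog-address=udp://192.168.0.10:514 example-image:latest
```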

To set the active logging driver, edit your Docker daemon config file at <terminal inline>/etc/docker/daemon.json<terminal inline>. Add or edit the <terminal inline>log-driver<terminal inline> field in this file to specify your chosen driver:


{
    "log-driver": "local"
}

Most drivers offer customizable options. In the case of the <terminal inline>local<terminal inline> driver, the following are available:

  • <terminal inline bold>max-size<terminal inline bold>: Force a log rotation once the current file reaches this size (e.g. <terminal inline>100m<terminal inline>).
  • <terminal inline bold>max-file<terminal inline bold>: Delete excess rotated log files when this many have been created (e.g. <terminal inline>5<terminal inline>).
  • <terminal inline bold>compress<terminal inline bold>: Set to <terminal inline>false<terminal inline> to disable the driver’s automatic compression of logs when they’re rotated.

Options are set in your daemon config file under the <terminal inline>log-opts<terminal inline> field:


{
    "log-driver": "local",
    "log-opts": {
        "max-size": "100m",
        "max-file": "5"
    }
}

Note that all values under <terminal inline>log-opts<terminal inline> must be given as strings, even when they’re numeric.

Apply your changes by restarting the Docker daemon:


sudo systemctl restart docker
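
You can confirm the daemon picked up the change by asking <terminal inline>docker info<terminal inline> for the active driver:

```shell
$ docker info --format '{{.LoggingDriver}}'
local
```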

In addition to the global daemon configuration, you can override the active logging driver on a per-container basis. Supply the <terminal inline>--log-driver<terminal inline> and <terminal inline>--log-opt<terminal inline> flags when you start your container with <terminal inline>docker run<terminal inline>:


docker run --log-driver local --log-opt max-size=100m example-image:latest
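
To check which driver and options a container actually received, inspect its log configuration:

```shell
$ docker inspect --format '{{.HostConfig.LogConfig.Type}}' my_container
local
```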

Selecting a Log Delivery Mode

In addition to letting you select the storage driver, Docker gives you a choice of log delivery modes. The delivery mode determines the strategy used to relay collected container logs to the driver.

Docker defaults to the blocking mode of delivery. When a container writes a log line, it’s immediately sent to the storage driver for persistence, and the container’s process is blocked until the write completes.

This strategy guarantees logs are saved at the point they’re issued. However, it can be a significant performance bottleneck for high-traffic applications that write verbose logs.

Switching to non-blocking delivery permits asynchronous log writes. Docker will temporarily store new lines in an in-memory buffer that’s then flushed through to the storage driver, allowing container execution to continue uninterrupted. The danger with this approach is the possibility of pending log writes exhausting the available buffer. If this happens, you’ll lose some log entries, making it a risky choice in scenarios where logs are used for auditing and compliance.

Enable non-blocking delivery by setting the <terminal inline>mode<terminal inline> log option to <terminal inline>non-blocking<terminal inline>. The in-memory buffer defaults to 1 MB, but you can change the size with the companion <terminal inline>max-buffer-size<terminal inline> option.

To set non-blocking delivery as the daemon default, edit <terminal inline>/etc/docker/daemon.json<terminal inline> as follows:


{
    "log-opts": {
        "mode": "non-blocking",
        "max-buffer-size": "16m"
    }
}

And here’s how you’d enable non-blocking delivery for a single container:


docker run --log-opt mode=non-blocking --log-opt max-buffer-size=16M example-image:latest

Best Practices

Success with Docker logs requires your application, containers, and Docker daemon instances to be configured correctly. Selecting the right storage driver and delivery mode for your use case is one step towards an effective log strategy; making sure your workload is fully compatible with Docker’s logging mechanism is another aspect.

Your containers need to expose their logs over the <terminal inline>stdout<terminal inline> and <terminal inline>stderr<terminal inline> streams, so they can be harvested by Docker. If you’re manually writing to your own log files inside the container, they won’t be accessible by <terminal inline>docker logs<terminal inline>, and will be lost when the container stops. This is one of the common oversights when preparing an existing application for containerization.
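
If an application insists on writing to its own log file, a common workaround, used by the official nginx image among others, is to replace that file with a symlink to the standard output device so Docker can capture the writes. The path below is a hypothetical stand-in for your application’s log file:

```shell
# Replace the app's log file with a symlink to stdout; subsequent writes to
# the "file" are captured by Docker's logging driver instead of being lost.
mkdir -p /tmp/myapp
ln -sf /dev/stdout /tmp/myapp/app.log

# Anything the app writes to its log file now reaches the container's stdout.
echo "request handled" > /tmp/myapp/app.log
```

The same trick with <terminal inline>/dev/stderr<terminal inline> works for error logs.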

Another logging best practice, true of any system, is to only log what you actually need. While it’s important you include enough information to be helpful while debugging, too much verbosity reduces performance, increases storage costs, and can make your logs unusable. The handful of lines that matter will get buried in the noise of mundane events that don’t help you understand your application’s condition. You can reduce the tedium of sifting through logs by using a monitoring platform like ContainIQ, but tooling alone can’t compensate for excessive noise at the source.

A good approach to Docker logging can be summarized as choosing the right storage driver, only emitting useful events via your application’s output streams, and regularly reviewing your configuration. This final step should be used to check logs are being written as intended and contain a useful and usable quantity of data.

What About Docker’s Own Logs?

So far, you’ve examined reading logs created by the applications inside your Docker containers. The Docker daemon writes its own logs too; these are managed as conventional system log files. You’ll find the daemon log by running <terminal inline>journalctl -fu docker.service<terminal inline> on most Linux systems, or by reading the <terminal inline>%LOCALAPPDATA%\Docker\log\vm\dockerd.log<terminal inline> file on Windows hosts.

This log includes details of Docker operations such as container creations, image pulls, and networking changes. You can use it to monitor events that occur within Docker itself, but it won’t include any data from the logs written by your containers.

Final Thoughts

The logging of important events and errors is one of the principal tools developers use to debug issues, analyze performance, and aid operations teams in maintaining compliance. Deploying your system as Docker containers necessitates a unique approach to logging that relies on the logging mechanism integrated with Docker.

You should select the logging driver with the combination of features that works for your application and the method you’ll use to view your logs. Due consideration should be given to the log delivery mode, too, depending on whether your system favors maximum performance or logging integrity.

Setting up these components takes time. Working with adjacent technologies like Kubernetes requires picking up new logging concepts, as well, as each containerization system handles logs differently.

Platforms like ContainIQ can automate log collection from clusters and aggregate your files into easily searchable views of data, providing increased readability and quick access to your logs when it matters the most. This makes your data more useful by conveniently surfacing key insights without the tedium of having to hunt through raw logging streams. To learn more about how ContainIQ can make logging easier, book a demo today.


James Walker is the founder of Heron Web, a UK-based digital agency providing bespoke software development services to SMEs. He has experience managing complete end-to-end web development workflows with DevOps, CI/CD, Docker, and Kubernetes. James also writes technical articles on programming and the software development lifecycle, using the insights acquired from his industry career. He's currently a regular contributor to CloudSavvy IT and has previously written for DigitalJournal.com, OnMSFT.com, and other technology-oriented publications.
