January was a big month for ContainIQ, a Kubernetes monitoring and observability platform. Our engineering team didn’t miss a beat, releasing two new highly requested features in addition to multi-cluster support and namespace filtering.
- Latency by URL path now supported on more kernel versions
- Logs: container level log storage and search
- Multi-cluster support
- Namespace filtering
These new features allow you and your team to monitor, detect, and resolve issues across multiple clusters in your Kubernetes environments.
Latency By URL Path/Endpoint (Expanded Coverage)
Latency is particularly important to monitor: an impediment on a specific path can directly affect a number of things, from end-user experience to revenue. With so much on the line, we wanted to make it simple for users to identify the root cause of these issues and provide visibility down to specific paths or API endpoints.
To achieve this, we used Extended Berkeley Packet Filter (eBPF) technology. By using eBPF, ContainIQ is able to instrument from the kernel directly rather than at the application level, providing our users with latency metrics for all microservices and URL paths without additional configuration or maintenance. We do this by parsing network packets from the socket directly, which lets us report average, p95, and p99 latencies, as well as requests per second.
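To give a feel for what "parsing the packet" means here, the sketch below pulls the URL path out of a plain-text HTTP/1.x request line. This is only an illustration in Python, not ContainIQ's actual eBPF code, and `extract_path` is a hypothetical helper name:

```python
def extract_path(payload: bytes):
    """Pull the URL path out of a plain-text HTTP/1.x request line.

    Returns None for payloads that are not HTTP/1.x requests
    (binary protocols, TLS, malformed data).
    """
    try:
        # The request line is everything before the first CRLF,
        # e.g. b"GET /api/users HTTP/1.1"
        request_line = payload.split(b"\r\n", 1)[0].decode("ascii")
        method, path, version = request_line.split(" ")
    except (UnicodeDecodeError, ValueError):
        return None
    if not version.startswith("HTTP/"):
        return None
    return path
```

In practice an eBPF probe would do this filtering in kernel space on socket buffers; the Python version just shows the shape of the extraction.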
How it Works:
- ContainIQ runs as a DaemonSet and automatically installs the necessary Linux headers on all nodes in your Kubernetes cluster.
- No further configuration is needed for new nodes: Linux headers are automatically installed on any newly created nodes in your K8s cluster.
- As new services are added or removed, ContainIQ will automatically instrument them.
To view latency by path or endpoint, simply click on a given microservice in our dashboard. You will be shown a table with requests per second, average latency, p95 latency, and p99 latency for each URL path or API endpoint in that microservice.
Users can easily search for specific endpoints and filter through latency for the past hour, day, or week.
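The per-path table described above can be approximated offline. As an illustration (this is a minimal sketch, not ContainIQ's actual implementation), the function below aggregates `(path, latency_ms)` samples observed over a time window into requests per second, average, p95, and p99, using a simple nearest-rank percentile:

```python
from collections import defaultdict

def percentile(sorted_samples, pct):
    """Nearest-rank percentile of an ascending-sorted list."""
    if not sorted_samples:
        return None
    k = max(0, int(round(pct / 100 * len(sorted_samples))) - 1)
    return sorted_samples[k]

def summarize(requests, window_seconds):
    """requests: iterable of (path, latency_ms) observed over the window.

    Returns {path: {rps, avg_ms, p95_ms, p99_ms}}.
    """
    by_path = defaultdict(list)
    for path, latency_ms in requests:
        by_path[path].append(latency_ms)
    table = {}
    for path, samples in by_path.items():
        samples.sort()
        table[path] = {
            "rps": len(samples) / window_seconds,
            "avg_ms": sum(samples) / len(samples),
            "p95_ms": percentile(samples, 95),
            "p99_ms": percentile(samples, 99),
        }
    return table
```

For example, 100 requests to `/api/users` over a 50-second window yields 2.0 requests per second, and the p95/p99 columns pick the 95th and 99th slowest samples.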
ContainIQ makes it easy for engineering teams to drill down to exact paths and debug, all in a single platform. ContainIQ’s latency by URL path feature set is now supported on kernel versions 4.2 and higher. If you want to learn more about this feature, you can book a demo here.
Logs: Container Level Log Storage and Search
Logs are important for debugging applications and monitoring infrastructure performance. They provide a detailed look into the way your systems are performing. By implementing logging and monitoring into a single platform, ContainIQ saves engineering teams time and allows for easy correlations.
From previous experience, we understood how tedious it was to endlessly scroll through logs, trying to match up timestamps to warning events, slowdowns, and changes in resources. That's why we built an easy-to-use graphical interface with comprehensive functionality. Users can correlate pod-level logs to Kubernetes events at specific points in time, as well as search through logs in a number of different ways: by pod, message, timestamp, or cluster.
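The filtering model described above can be sketched in a few lines. This is a hypothetical illustration of combining the pod, cluster, message, and time-range filters (the `LogRecord` and `search_logs` names are ours, not ContainIQ API names); a `None` filter means "match everything":

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LogRecord:
    cluster: str
    pod: str
    timestamp: datetime
    message: str

def search_logs(records, pod=None, cluster=None, contains=None,
                start=None, end=None):
    """Return records matching ALL provided filters (None = don't filter)."""
    out = []
    for r in records:
        if pod is not None and r.pod != pod:
            continue
        if cluster is not None and r.cluster != cluster:
            continue
        if contains is not None and contains not in r.message:
            continue
        if start is not None and r.timestamp < start:
            continue
        if end is not None and r.timestamp > end:
            continue
        out.append(r)
    return out
```

Combining filters this way (AND semantics across fields) is what lets a user narrow a slowdown to one pod's messages in one cluster during a specific window.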
ContainIQ automatically collects anything logged inside your cluster, encompassing both your internal applications and Kubernetes system components. You can view logs for all clusters in your environment or on a cluster-to-cluster basis. By default, ContainIQ stores log data for 14 days. Longer-term storage is possible on enterprise-level plans. ContainIQ charges $1.00 per GB of log data ingested.
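As a back-of-the-envelope check on the flat per-GB pricing (the 5 GB/day ingest volume below is a made-up example, not a real customer figure):

```python
def monthly_log_cost(gb_per_day, price_per_gb=1.00, days=30):
    """Estimated monthly cost of log ingestion at a flat per-GB rate.

    Billing is on data ingested, so retention length (14 days by
    default) does not change this number.
    """
    return gb_per_day * days * price_per_gb

# A cluster ingesting 5 GB of logs per day costs about $150/month.
```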
This is the first iteration of our logging feature. Soon, users will be able to set alerts on specific messages and build their queries based on multiple fields such as JSON message, container name, namespace, etc. Stay tuned for more exciting news as we continue to develop this powerful feature.
Multi-Cluster Support
As more of our users deploy multi-cluster environments for improved availability and scalability, multi-cluster support became a popular request. Users needed an efficient way to filter between clusters in their environment and monitor key metrics associated with each.
Our latest multi-cluster feature allows you to do just that. ContainIQ monitors all clusters by default or allows you to filter by specific cluster. Simply navigate to the top left of our navigation bar and select which cluster you would like to filter by.
Users can aggregate multiple clusters from multiple cloud providers, such as GKE, EKS, and AKS, into one view with centralized alerting.
Namespace Filtering
Namespace filtering has been added across all of our dashboards, allowing users to separate events and metrics by specific namespace.
Previously, users were only able to view information about their clusters in aggregate. As we continue to move forward, this feature will eventually roll up into a more comprehensive role-based access system, allowing administrators to grant team members permissions to specific teams, projects, or dev environments.
That’s a wrap for our January update! If you want to learn more about these releases and how ContainIQ can help with your monitoring and observability needs, feel free to book a demo here. You can also sign up for an account and get started today here.
As always, we welcome any suggestions, comments, and feedback. Please feel free to reach out. We’d love to have a chat with you!