Over the last decade, web applications have grown to host millions of users and produce terabytes of data. Users of these applications expect fast responses and 24/7 availability. For applications to be fast and available, they have to respond quickly to increases in load. One way to achieve this is to use a microservices architecture.
In this article, you’ll learn what a microservices architecture entails, how it compares to other software architecture patterns, and the technologies that make it possible. You will also learn about the pros and cons of a microservices architecture and the practices to avoid, and you’ll find resources to further improve your knowledge.
Microservices architecture is a software architecture pattern where each task performed by an application is handled by an independent application called a service. Each service often manages its own database and communicates with other services through events, messages, or a REST API. Microservices ease the task of building software, especially at scale.
Microservices help you achieve the following:
- Increase developer productivity and allow development teams greater autonomy.
- Make it easier to scale applications independently.
- Make it easier to ship new features as the whole application doesn’t have to be redeployed for every change made.
What are Microservices?
Microservices are a set of independent, limited-purpose applications that together form a larger application.
Take, for example, an e-commerce application. The product listing functionality could be packaged as a single service, as could the price computation functionality. When a user selects a product and checks the price, the price computation service will compute discounts and shipping costs. Each service can have its own dedicated database and web server, which allows it to scale according to demand.
Microservices allow a system to be more modular. Instead of building a single large system, you build a set of services, each of which handles one aspect of the system. Before microservices became mainstream, engineering teams built and deployed systems as single cohesive units. This sometimes led to development and scalability bottlenecks, because every change to the application required that the entire application be rebuilt. This meant that developers had less freedom in how changes were made, and required that the entire application be redeployed when scaling to more powerful deployment targets. Microservices solve these problems, enabling you to build fast and scalable systems.
Why are Microservices Important?
The microservices architecture offers some substantial benefits over a monolithic one:
- Microservices allow a system to be modular. Large applications often take hours to rebuild when changes are made, but microservices remove the bottleneck of rebuilding and deploying an entire codebase for every change. When each team maintains its own microservice, it has more autonomy over how changes and deployments are made.
- Because different versions of a service can be used by other services, teams are able to move at different speeds. The versioning can be done using semantic versioning, ensuring that every breaking change is released as a major version. If, for example, there are two services, A and B, and service B’s new deployment contains breaking changes, service A can keep using the previous version of service B until it is ready to move to the newer version.
- Microservices allow for better system-wide resource sharing, because each service is given only the resources it requires.
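The semantic versioning check described above can be sketched in a few lines: before adopting a new release of a dependency, a consumer verifies that it shares the current major version. This is an illustrative sketch, not a real dependency manager; the version strings are made up.

```python
# Illustrative semantic versioning check: a candidate release is safe to
# adopt automatically only if it keeps the same major version (no breaking
# changes) and is not older than the version already in use.

def parse_version(version: str) -> tuple:
    """Split a 'MAJOR.MINOR.PATCH' string into a tuple of integers."""
    major, minor, patch = (int(part) for part in version.split("."))
    return (major, minor, patch)

def is_compatible(current: str, candidate: str) -> bool:
    """True if the candidate shares the current major version and is newer
    or equal; a major bump signals a breaking change and is held back."""
    cur, cand = parse_version(current), parse_version(candidate)
    return cand[0] == cur[0] and cand >= cur

# Service A currently consumes service B at 1.4.2.
print(is_compatible("1.4.2", "1.5.0"))  # True: minor bump, safe to adopt
print(is_compatible("1.4.2", "2.0.0"))  # False: breaking change, hold back
```

In this way, service A can keep using service B at 1.x until its team is ready to migrate to 2.0.0.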
Microservices Architecture Compared to Other Approaches
While microservices architecture is a very popular way to build applications, it’s not the only one. Here’s how it compares to some other common approaches.
Before microservices became mainstream, monolithic architecture was widely used. In a monolithic architecture, all components of a system are built and deployed as a single unit on a single machine.
Monoliths are typically scaled vertically, meaning the compute resources of the deployment target, such as RAM and CPU, are increased. Once vertical scaling reaches its limits, replica machines have to be deployed to meet demand, each of them receiving a portion of the load through a load balancer. As load increases, it becomes increasingly difficult to use a single database, so the database has to be replicated too, and the replicated database has to ensure consistency among the deployed nodes. The tight coupling and consistency overhead that result from running every component as a single unit are the main drawbacks of the monolithic architecture.
Microservices are deployed independently, and each service has its own database. Each service can be scaled to meet demand, and new deployments don’t have to take days to complete.
Service-oriented architecture, or SOA, is an architectural pattern that was proposed to solve the problems created by monolithic systems. In SOA, each service lives on a dedicated layer of the system. It differs from microservices architecture in its reliance on service buses to foster communication between service layers, meaning that all layers use the same avenue of communication.
How Does Microservices Architecture Work?
Microservice applications are composed of a number of smaller applications, each of them handling an individual function of the larger application. Services communicate with other services as needed to fulfill their functions. In a typical setup, an API gateway handles incoming requests from front-end users and passes them to the appropriate services, which then communicate directly with each other as needed.
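As a minimal sketch of that request flow, a gateway can be modeled as a routing table mapping URL prefixes to backend services. The route prefixes and service names below are hypothetical; a real gateway would forward the request over HTTP rather than just naming the target.

```python
# Illustrative API gateway routing: each URL prefix maps to the service
# responsible for it. Unknown paths are rejected at the edge.

ROUTES = {
    "/products": "product-listing-service",
    "/prices": "price-computation-service",
    "/orders": "order-service",
}

def route(path: str) -> str:
    """Return the name of the service that should handle a request path."""
    for prefix, service in ROUTES.items():
        if path.startswith(prefix):
            return service
    raise LookupError(f"no service registered for {path}")

print(route("/products/42"))  # product-listing-service
print(route("/prices"))       # price-computation-service
```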
Benefits and Limitations of Microservices Architecture
Like any architecture pattern, microservices architecture comes with both benefits and limitations.
Benefits of Microservices
Microservices allow for language-agnostic application development. This means that teams can build applications in the language best suited for that application type. A team may use Node.js to build a real-time chat application backend to leverage the streaming capability of Node.js, while another team may use Rust for an image-processing microservice. This can lead to system-wide performance improvements.
Since each service can be built by a dedicated team, each team only has to worry about one part of the system. In microservices architecture, teams have autonomy in how they build and deploy their service. This gives teams the ability to work independently, without worrying that their changes will have a huge impact on the overall state of the system. This also integrates well with the agile philosophy of continuous testing and delivery.
Microservices also foster reuse and collaboration. To keep components consistent across microservices, teams often use shared libraries; this sharing keeps services consistent while still allowing them to remain decoupled. Additionally, microservices allow parts of the system to be reused in other applications. If chat functionality has already been built for one application, and a later application also needs that functionality, the same microservice can be used in the later application.
Every organization wants its product to be scalable in case of usage spikes. Microservices are highly horizontally scalable, and their lightweight nature means each service can scale to meet incoming demand. As more load is introduced to the system, more servers can be added to balance the load.
Microservices are highly fault tolerant. Even if one service goes down, the inherently isolated nature of the architecture means that other services may be largely unaffected. Because individual services are relatively small, downtime can often be remedied quickly, because it’s immediately clear which service the problem originated in.
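One common way to contain such failures is a circuit breaker: after repeated failures calling a downstream service, the caller stops trying and serves a fallback instead of hanging. This is an illustrative sketch under arbitrary assumptions (the failure threshold, the flaky service, and the cached fallback are all invented for the example).

```python
# Illustrative circuit breaker: after a threshold of consecutive failures,
# calls to a downstream service are short-circuited so the caller falls
# back immediately instead of waiting on a dead dependency.

class CircuitBreaker:
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.failures = 0

    @property
    def open(self) -> bool:
        return self.failures >= self.threshold

    def call(self, func, fallback):
        if self.open:
            return fallback()       # downstream presumed unhealthy: skip it
        try:
            result = func()
            self.failures = 0       # any success resets the counter
            return result
        except Exception:
            self.failures += 1
            return fallback()

breaker = CircuitBreaker(threshold=2)

def flaky():
    raise ConnectionError("shipping service is down")

for _ in range(3):
    print(breaker.call(flaky, lambda: "cached shipping estimate"))
print(breaker.open)  # True: further calls skip the network entirely
```

Real implementations usually also reopen the circuit after a cool-down period so the downstream service gets a chance to recover.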
Limitations of Microservices
For all their benefits, microservices also have some limitations. They require more configuration from the operations teams, as different services need to communicate with each other effectively. Additionally, due to the distributed nature of microservices architecture, monitoring and observability tools need to be deployed alongside them.
Tools and Technologies That Make Microservices Possible
Microservices architecture is an architectural pattern, not a framework. It doesn’t force teams to use specific tools and technologies; rather, certain tools and technologies have emerged that fit the pattern well.
Microservices are typically deployed as containers in a cluster, where containers can be provisioned and scaled in response to traffic. Containers communicate through API protocols, and message queues help buffer messages to prevent service overload. Authentication and authorization are deployed as a separate service or services. The following are some of the tools and technologies that fit into this pattern.
Containerization is packaging software into easy-to-replicate units that are isolated from other software on a machine. This is made possible by tools like Docker, while orchestrators like Kubernetes manage containers at scale.
Docker is a tool that allows you to define all the requirements needed to run software—such as the OS, runtime, and source code—in a file called an image, and to build isolated applications called containers from this image. Microservices are typically deployed as Docker containers, which enables teams to add more containers as the load increases.
Kubernetes is a container orchestration tool, which means that it manages how many containers are provisioned, and how they scale in a network. As the load on a particular microservice increases, Kubernetes can provision more containers to meet that load. When the load decreases, it then tears down the extra containers. Kubernetes has three well-known autoscaling methods: the Horizontal Pod Autoscaler (HPA), the Vertical Pod Autoscaler (VPA), and the Cluster Autoscaler.
Cloud providers reduce the financial strain of building microservices at scale. They provide software infrastructure, like virtual machines and container registries, as a service. This often amounts to only a fraction of the cost of running your own infrastructure in data centers. Cloud providers enable organizations to deploy their microservices in a cost-effective manner. Without a cloud provider, an organization looking to function on a global scale would need to have their own data centers in multiple regions, and would also have to manage their infrastructure themselves.
For distributed services to communicate effectively, there needs to be a way to verify a user’s identity and access that user’s data across different services. For this purpose, you can use an auth provider like Okta, or implement your own authentication service as an independent microservice. Either way, a robust, centralized authentication system is essential for the architecture to work.
Continuous Integration and Continuous Deployment (CI/CD) tools enable developers to continuously push code to servers as they develop new features and fix existing bugs. Since microservices are deployed independently, the architecture requires each team to be able to push new features and bug fixes in a quick and effective way. CI/CD tools such as Jenkins, Travis CI, and GitHub Actions effectively serve this purpose.
Infrastructure as Code (IaC)
Infrastructure as code is a paradigm that allows ops teams to define their infrastructure as configuration files, often in YAML. This allows them to quickly spin up services in cloud providers where microservices can be deployed. Common IaC tools include Terraform, Pulumi, and Ansible.
Patterns Used to Implement Microservices
There are certain tools and patterns that are commonly used in an effective microservices implementation. These patterns range from API gateways to access tokens. This section goes over these patterns and how they can be integrated into a microservices architecture.
An API gateway is a service that is used to manage the different microservices in a microservices mesh. API gateways help keep microservices private by exposing only a single public entry point to the internet. They also help with service discovery, load balancing, IP whitelisting, response caching, retries, and rate limiting. Common API gateways include Kong, Ambassador, and Ocelot.
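Rate limiting at the gateway is often implemented as a token bucket: the bucket refills at a fixed rate, and each request spends one token. The sketch below uses arbitrary example values for capacity and refill rate; real gateways track one bucket per client.

```python
# Illustrative token-bucket rate limiter, the kind of check an API gateway
# might apply per client before forwarding a request.
import time

class TokenBucket:
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Refill based on elapsed time, then spend one token if available."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False            # over the limit: gateway returns HTTP 429

bucket = TokenBucket(capacity=3, refill_per_sec=1.0)
print([bucket.allow() for _ in range(5)])  # first 3 pass, the rest are throttled
```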
Access tokens provide authorization to different services from a central auth provider. An organization typically deploys a microservice that handles authentication, and this service communicates with other services to authorize access to different parts of a user’s data.
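A minimal sketch of the idea: the auth service signs a set of claims with a secret shared only with services that need to verify tokens. Real systems typically use JWTs via an established library; the secret, claims, and token format below are invented for illustration.

```python
# Illustrative signed access token: payload is base64-encoded JSON claims,
# followed by an HMAC signature other services can verify with the shared
# secret. Do NOT use this in production; use a vetted JWT library.
import base64, hashlib, hmac, json

SECRET = b"demo-secret-do-not-use-in-production"  # hypothetical shared secret

def issue_token(claims: dict) -> str:
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"{payload.decode()}.{sig}"

def verify_token(token: str):
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                      # tampered or forged token
    return json.loads(base64.urlsafe_b64decode(payload))

token = issue_token({"user": "alice", "scope": "orders:read"})
print(verify_token(token))               # claims round-trip intact
tampered = token[:-1] + ("0" if token[-1] != "0" else "1")
print(verify_token(tampered))            # None: altered signature is rejected
```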
Log aggregation is the act of combining logs from different microservices into a single log file. Tools such as LogDNA, ContainIQ, and Apache Kafka can be used for this purpose. The logs produced can then be analyzed and visualized on a dashboard.
Log aggregation is necessary because microservices are usually deployed in ephemeral containers, which lose their logs when the service restarts. Log aggregation also simplifies log analysis, because all the logs are in a single location. And if you are using Kubernetes, ContainIQ’s logging feature set makes it easy to aggregate and visualize both Kubernetes cluster level and application logs.
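Conceptually, aggregation means tagging each record with its originating service and merging the per-service streams into one chronological view. The service names, timestamps, and messages below are invented; real aggregators consume live streams rather than in-memory lists.

```python
# Illustrative log aggregation: records from several services are tagged
# with their origin and merged into one time-ordered stream.
from heapq import merge

# Each service's logs, already sorted by timestamp.
orders_logs = [(1, "order 1001 created"), (4, "order 1001 paid")]
shipping_logs = [(2, "label printed for 1001"), (5, "order 1001 dispatched")]

def tag(service, logs):
    """Attach the service name so the origin survives aggregation."""
    return [(ts, service, msg) for ts, msg in logs]

# heapq.merge lazily interleaves already-sorted streams by timestamp.
aggregated = list(merge(tag("orders", orders_logs),
                        tag("shipping", shipping_logs)))
for ts, service, msg in aggregated:
    print(f"t={ts} [{service}] {msg}")
```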
Health Check APIs
A health-check API is an API endpoint that reports the operational status of a microservice and its dependencies. It typically returns the following information:
- The status of all dependent downstream services. For example, in an e-commerce application with an orders and shipping service, a health-check API checks the operational status of the orders service when checking the operational status of the shipping service. This is because the shipping service relies on data from the orders service to function.
- The service’s database connection status and its average response time. The result returned from this check is compared with the expected average to determine if the concerned service is fully operational or not.
- The average memory consumption. An average is used to account for spikes that may occur due to memory leaks. Orchestration tools like Kubernetes use the information from the health-check API to determine when to spin up new application deployments or alert the development team of problems.
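As a sketch, the checks above can be combined into a single report payload. The dependency names, response-time threshold, and probe values here are hypothetical; a real service would probe its database and downstream services live.

```python
# Illustrative health-check report for a shipping service that depends on
# an orders service. The 50ms threshold is an assumed acceptable average.

EXPECTED_DB_MS = 50

def health_report(downstream: dict, db_ms: float, memory_mb: float) -> dict:
    """Combine downstream status, database latency, and memory usage into
    the payload a /health endpoint would return."""
    db_ok = db_ms <= EXPECTED_DB_MS
    healthy = db_ok and all(downstream.values())
    return {
        "status": "ok" if healthy else "degraded",
        "downstream": downstream,
        "database": {"connected": True, "avg_response_ms": db_ms, "ok": db_ok},
        "avg_memory_mb": memory_mb,
    }

# Shipping service reporting on its dependencies:
report = health_report({"orders": True}, db_ms=32.0, memory_mb=180.5)
print(report["status"])  # ok
```

An orchestrator polling this endpoint would treat anything other than `"ok"` as a signal to restart the pod or alert the team.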
Microservices Antipatterns to Avoid
Since microservices are an architecture pattern and not a framework, it’s easy to introduce antipatterns into your application. In this section, you will learn about microservices antipatterns you should avoid.
Building Distributed Monoliths
Monoliths are usually scaled by replicating the machine that runs them, so each individual machine runs the entire monolithic application, even though it’s replicated across different data centers or regions. The result is a distributed monolith: although the system is now distributed, every change still requires rebuilding and redeploying the whole application everywhere, so it faces the same deployment bottlenecks as before. Splitting an application into services that must still be deployed in lockstep produces the same antipattern.
Using a Shared Database
Another microservices antipattern is multiple services communicating with a single shared database. This arrangement makes code changes difficult, because multiple teams have to agree on schema changes before they are made, and downstream applications will break if one team changes the database schema without informing the other teams. The best way to avoid this antipattern is to use one database per service and utilize APIs for cross-service communication.
Using Schemas Everywhere
You introduce configuration and coupling issues when you build microservices with rigid schemas shared everywhere, because every schema change then forces a coordinated update in every consumer. The solution is to version your schemas semantically, which allows teams to adopt changes incrementally so that deployments can remain independent.
Spiky Loads Between Services
A microservices architecture requires services to communicate over some sort of protocol to share data and messages. Unfortunately, when many services interconnect, a spike of traffic in one service can cascade into its dependencies and overload a downstream service’s database. To avoid this antipattern, use an API gateway to serve as a request buffer, and use a message broker to queue messages.
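The buffering idea can be sketched with an in-memory queue: producers enqueue requests at burst speed, while the consumer drains them at a rate its database can sustain. `queue.Queue` here stands in for a real broker such as RabbitMQ or Kafka, and the burst size and batch size are arbitrary example values.

```python
# Illustrative message buffering: a burst of requests is absorbed by a
# queue and consumed downstream in small, steady batches.
from queue import Queue

broker = Queue()

# A burst of 100 requests arrives at once...
for i in range(100):
    broker.put({"event": "price_check", "product_id": i})

# ...but the downstream service drains them at its own sustainable pace.
def drain(batch_size: int) -> list:
    batch = []
    while not broker.empty() and len(batch) < batch_size:
        batch.append(broker.get())
    return batch

first_batch = drain(batch_size=10)
print(len(first_batch), broker.qsize())  # 10 processed, 90 still buffered
```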
Hardcoding IPs and Ports
When your microservices mesh is still small, it can be helpful to hardcode IPs and ports. As your architecture scales, though, services can randomly get assigned to new IPs and ports, breaking the connection between applications with hardcoded IPs and ports. One solution to this is to introduce a service discovery tool, which will ensure that the microservices can still communicate, even when IPs and ports change.
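At its core, service discovery is a registry that maps service names to their current addresses, so callers resolve by name instead of hardcoding IPs. The addresses below are placeholders; production systems use tools like Consul, etcd, or Kubernetes’ built-in DNS rather than an in-memory dict.

```python
# Minimal service-registry sketch: services register their address on
# startup, and callers look them up by name on every connection.

registry = {}

def register(name: str, address: str) -> None:
    registry[name] = address          # re-registering updates the address

def resolve(name: str) -> str:
    if name not in registry:
        raise LookupError(f"service {name!r} not registered")
    return registry[name]

register("orders", "10.0.3.17:8080")
print(resolve("orders"))              # 10.0.3.17:8080

# The service restarts on a new node and re-registers; callers are unaffected.
register("orders", "10.0.7.4:8080")
print(resolve("orders"))              # 10.0.7.4:8080
```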
Resources to Build your Skills
This article has given you a good starting point, but building scalable systems using microservices is an incredibly broad topic. If you’d like to learn more about microservices, the following are great resources.
- Introduction to Microservices is a brief introduction to microservices by the Azure Service Fabric team at Microsoft. It goes over the importance and use cases of the microservices architecture.
- Microservices Full Course is a four-hour course that goes in depth on the microservices concept, and looks at how to build microservices using Java.
- Application Development Using Microservices and Serverless is a Coursera course from IBM that covers the basics of microservices, serverless architecture, and container management.
- Microservices Foundations is a LinkedIn learning course that goes over the microservices architecture, advanced concepts, and a rubric for making decisions when adopting microservices architecture.
- Building Microservices by Sam Newman is recommended in many places as the must-read book about microservice design.
- From Monolith to Microservices, also by Sam Newman, is an excellent overview of how to move from a monolithic architecture style to a microservices one.
Microservices architecture is a software architecture pattern where each task of a larger application is packaged as an independent app. When done correctly and used appropriately, it can greatly increase the agility of the team and scalability of the application. In this article, you learned how the architecture is implemented, the components that enable it, and how organizations can benefit from it.
Building microservices is no easy task. You have to consider your container orchestration and monitoring strategy. One way to effectively monitor your Kubernetes metrics, logs, events, and traces is by using ContainIQ. ContainIQ allows you to view and correlate metrics, logs, events, latencies, and traces, and gives your engineering teams a clear view of cluster health with pre-built dashboards and easy-to-set monitors.