When using Kubernetes, the popular container orchestration framework, you have two choices: host it yourself or use a managed Kubernetes service. A managed service saves you from potential headaches of self-hosting by handling most tasks for you, such as updating infrastructure, managing networks, and setting up service discovery and Linux configurations on multiple machines.
Many third-party providers now offer managed Kubernetes hosting services, which can make finding the best platform an uphill task. To help you choose, this article compares eleven popular Kubernetes hosting platforms and start-ups in terms of billing methods, ease of use, integration with existing infrastructure, customer support, and the ability to autoscale.
1. Google Kubernetes Engine
Google Kubernetes Engine (GKE) is a containers-as-a-service (CaaS) platform that allows you to run containers in a Kubernetes environment. This fully managed environment enables deployment, management, and scaling of containerized applications using Google infrastructure. You’ll have to pay $0.10 per cluster for every hour you use GKE and also pay for the worker nodes. (Anthos clusters use pay-as-you-go and subscription-based pricing models.)
Engineers who contributed to Kubernetes built GKE with easy-to-use features, like the ability to add or attach multiple nodes to your cluster and to control configurations.
GKE runs on Google’s infrastructure, so it seamlessly integrates with other Google products if you’re using them. Even if you’re not, GKE integrates perfectly with the Kubernetes dashboard. Google Cloud’s operations suite lets you monitor your logs in one place.
GKE provides you with access to site reliability engineers (SRE), who can help you with issues, like handling runtime errors or maximizing application performance.
With GKE, pods scale horizontally or vertically based on your custom metrics or CPU usage. GKE also has a built-in cluster autoscaler that scales clusters on a per-node-pool basis.
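The horizontal pod autoscaling described above uses the standard Kubernetes HorizontalPodAutoscaler resource, which works the same way on GKE as on any conformant cluster. A minimal sketch, assuming an existing Deployment (the name `web` and the thresholds here are hypothetical):

```yaml
# Hypothetical HPA that keeps average CPU utilization near 60%
# across 2-10 replicas of an assumed Deployment named "web".
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web          # assumed existing Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60
```

You'd apply this with `kubectl apply -f hpa.yaml`; scaling on custom metrics instead of CPU requires a metrics adapter to be installed in the cluster.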
2. Azure Kubernetes Service
Azure Kubernetes Service (AKS) is Azure’s fully managed container orchestration service.
Unlike some other platforms, such as GKE, AKS doesn’t charge cluster management or master node fees. You’re charged only for the network resources and worker nodes that you use.
AKS allows you to deploy and administer containerized applications with ease. Infrastructure management is smooth; AKS elastically provisions resources, so you don’t need to worry about whether they’re being used effectively.
The service includes Azure Advisor, a support tool for optimizing performance and savings that provides guidance based on your configurations. You can also request support through your user account or visit AKS forums.
You can scale the pods in your deployment through vertical or horizontal pod autoscaling, based on selected metrics such as memory and CPU usage.
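The vertical side of this uses the VerticalPodAutoscaler custom resource, which AKS supports once the feature is enabled on the cluster. A rough sketch, assuming an existing Deployment (the name `web` is hypothetical):

```yaml
# Hypothetical VPA that lets the autoscaler adjust the CPU and memory
# requests of an assumed Deployment named "web".
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: web-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web          # assumed existing Deployment
  updatePolicy:
    updateMode: "Auto" # recreate pods with updated resource requests
```

With `updateMode: "Auto"`, pods are evicted and recreated when their recommended requests change, so it's usually paired with a pod disruption budget.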
3. Amazon Elastic Kubernetes Service
Amazon Elastic Kubernetes Service (EKS) is a CaaS solution used to run Kubernetes on Amazon Web Services (AWS). AWS Identity and Access Management (IAM) security policies and Kubernetes namespaces allow you to run multiple applications on a single EKS cluster. EKS bills you $0.10 per hour for each cluster that you create.
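The namespace-based isolation mentioned above is plain Kubernetes rather than anything EKS-specific; a minimal sketch (the namespace name and quota limits here are hypothetical):

```yaml
# Hypothetical namespace plus a quota so one team's application
# can't consume the whole shared cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    pods: "20"
```

On EKS, IAM roles can then be mapped to Kubernetes RBAC roles scoped to each namespace, so each team's credentials only reach their own slice of the cluster.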
For engineers already familiar with Kubernetes, running EKS is easy. You don’t need to maintain your own infrastructure because EKS is a fully managed service. The platform provides you with node management to give you more granular access control.
EKS fully integrates with AWS and provides everything that upstream Kubernetes does. However, you might experience setbacks if you haven’t upgraded to a supported Kubernetes version; for example, you won’t be able to create new clusters on a deprecated version, because Amazon deprecates and limits support for older versions.
For EKS issues, you can file a support ticket or consult the Knowledge Center for solutions.
Although EKS provides an autoscaling feature, it requires some manual configuration: the cluster autoscaler isn’t enabled by default, so you’ll need to set it up yourself.
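One common way to prepare an EKS node group for the cluster autoscaler is through an eksctl config file. A sketch under the assumption that you install the autoscaler itself separately afterward (the cluster name, region, and sizes here are hypothetical):

```yaml
# Hypothetical eksctl config: withAddonPolicies.autoScaler attaches the
# IAM permissions the cluster autoscaler needs; the autoscaler Deployment
# itself must still be installed in the cluster afterwards.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster
  region: us-east-1
managedNodeGroups:
  - name: workers
    instanceType: m5.large
    minSize: 1
    maxSize: 5
    iam:
      withAddonPolicies:
        autoScaler: true
```

Running `eksctl create cluster -f cluster.yaml` against a file like this creates the node group with the min/max bounds the autoscaler will respect.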
4. Rackspace Kubernetes as a Service
Rackspace Kubernetes-as-a-Service (KaaS) APIs allow you to easily access hosted Kubernetes services, and the interface is user-friendly.
With a chat and ticket system, Rackspace KaaS ensures customers get the help they need. You can also call the support team.
Nodes don’t scale automatically, but you can scale your Kubernetes cluster environment at your own pace.
5. DigitalOcean Kubernetes
DigitalOcean Kubernetes is a managed infrastructure-as-a-service (IaaS) platform for deploying and managing Kubernetes. You pay for the cluster resources that you use as well as for the resources you control, like API usage.
The platform’s administration dashboard is intuitive for both experts and those with less experience. You don’t need direct access to instances to deploy a platform because DigitalOcean gives you managed Kubernetes and databases.
DigitalOcean integrates natively with standard Kubernetes toolchains.
While technical support for DigitalOcean users is active, the lack of live support is a drawback.
With this platform, the number of nodes in your cluster adjusts automatically. You can also automate pod creation or destruction by enabling the Horizontal Pod Autoscaler (HPA).
6. Platform9 Managed Kubernetes
Platform9 is a managed software-as-a-service (SaaS) Kubernetes provider for edge, private, and hybrid clouds. With Platform9, you can import your existing AKS and EKS clusters and bring them under Platform9 management, then view the clusters Platform9 creates alongside native Azure clusters, native AWS clusters, and BareOS clusters. Platform9 also provides managed Kubernetes services with IPv6 support for 5G deployments.
The Platform9 Freedom plan provides a basic managed Kubernetes environment. Growth and Enterprise plans offer more advanced capabilities. The price for these two plans ranges from $0 to $160, and a free trial is available.
Platform9 has an interactive graphical user interface (GUI) that makes it easy to deploy clusters or nodes. Kubernetes cluster management is straightforward, whether on-premises, at the edge, or in public clouds. That is because Platform9 can support multiple versions of Kubernetes clusters simultaneously.
Automatic integration with existing infrastructure is one of Platform9’s strengths. Its built-in monitoring integrates with Slack, and it can integrate with your single sign-on (SSO) provider.
Platform9’s peer-support community is still young, so you shouldn’t expect much input from it. Growth plan subscribers get 24/7 customer support and access to a set number of technical support tickets per month.
Platform9 scales worker nodes automatically for clusters created on Azure and AWS, so those clusters grow or shrink based on the workload. However, you can’t edit the autoscaling configuration for Azure clusters created through the platform.
7. Red Hat OpenShift Kubernetes Engine
Red Hat OpenShift Kubernetes Engine is a platform-as-a-service (PaaS) Kubernetes hosting solution. Its sibling, Red Hat OpenShift Container Platform, provides more advanced functionality, such as management of images, applications, and source code, at an additional cost, making it a more expensive choice for individuals. You pay based on variables such as availability zones, cloud configuration, and application node sizing. OpenShift provides a free trial.
Its user-friendly web console allows you to build, scale, or deploy quickly.
Red Hat OpenShift’s integration agility ensures high flexibility and availability, encouraging faster development and deployment. It seamlessly integrates with existing DevOps tools, including CI/CD tools.
OpenShift’s customer support service offers hands-on guidance. If you’re using the OpenShift API, though, you may not get a fully supported client library, since third parties maintain those libraries.
OpenShift’s cluster autoscaler adjusts the size of the cluster automatically to meet your deployment needs. The autoscaler doesn’t manage the control plane nodes.
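OpenShift exposes its cluster autoscaler as a custom resource. A minimal sketch (the node cap here is hypothetical):

```yaml
# Hypothetical ClusterAutoscaler: caps total cluster size; control plane
# nodes are left alone, as noted above.
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default   # OpenShift expects this resource to be named "default"
spec:
  resourceLimits:
    maxNodesTotal: 12
```

Per-machine-set scaling bounds are then set with separate MachineAutoscaler resources that reference individual machine sets.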
8. VMware Tanzu Kubernetes Grid
VMware Tanzu Kubernetes Grid (TKG) is a Kubernetes runtime that allows you to manage containers at scale on the cloud. TKG uses a subscription-based pricing model, and costs start at $995 a year.
It offers customers 24/7 production guidance for Kubernetes, and VMware’s Customer Reliability Engineering Team gives you architectural support. You can contact VMware Pivotal Labs if you want to change how you build your applications.
Tanzu Kubernetes Grid adds more worker nodes as you scale your workload.
9. IBM Cloud Kubernetes Service
IBM Cloud Kubernetes Service (IKS) is a managed container orchestration platform that enables you to automate the deployment, scaling, and management of application containers across single-tenant clusters.
With IKS, you are charged per resource that you use. IBM’s prices for each resource can be fixed, tiered, or metered, and billing increments include hourly and monthly rates. A free trial is available.
Its GUI makes it quick and easy to deploy Kubernetes clusters, and the platform allows you to efficiently monitor your infrastructure. You can also customize the infrastructure to meet your needs or enhance your application’s performance by integrating IBM Cloud services, including Internet of Things (IoT) and Watson APIs.
The platform provides a customer support service as well as online videos and written documentation.
IKS provides an automatic cluster autoscaler that automatically adds and removes worker nodes depending on cluster workload resource requests.
10. Alibaba Cloud Container Service for Kubernetes
Alibaba Cloud Container Service for Kubernetes (ACK) is a fully managed CaaS platform that allows you to run Kubernetes on Alibaba Cloud. ACK offers a pay-as-you-go pricing model. The billing rules and billable items depend on the cluster type you’re using.
You can use the container service console to create, contract, or expand Kubernetes clusters easily and even upgrade Kubernetes clusters with a single click.
ACK seamlessly integrates with Alibaba Cloud Virtual Private Cloud (VPC) resources for a secure, high-performance deployment platform. Integration with Server Load Balancer (SLB) allows you to access containers.
Users of ACK get 24/7 online support and live support. If you have any issues relating to containers, Alibaba Cloud Container Service provides solutions and suggestions.
Horizontal pod autoscaling for deployments is possible with a cluster autoscaling pipeline, so workloads scale automatically. Resource autoscaling ensures that cluster resources meet a workload’s scaling requirements by adding elastic container instances or Elastic Compute Service (ECS) instances to the cluster.
11. Oracle Container Engine for Kubernetes
Oracle Container Engine for Kubernetes (OKE) is a fully managed container orchestration platform that you can use to deploy, manage, and scale your containerized apps. There are no cluster management fees; users pay for the hardware resources their containerized workloads consume, such as worker nodes and storage. OKE offers a subscription-based pricing model.
Deploying Kubernetes clusters in OKE is easy, and OKE can instantly provision the resources that clusters need, such as internet gateways and virtual cloud networks (VCNs).
You can integrate your own tools with OKE to enhance automation, security, or observability. OKE integrates with other Oracle infrastructure, such as the OCI Service Operator and WebLogic Server.
Customers with free and paid accounts can get help from the support chat in the console. However, Oracle support tickets are only available for paid accounts.
With OKE, you can scale Kubernetes clusters and pods automatically.
These managed Kubernetes providers will help you leverage the benefits of Kubernetes without the complexity of managing it yourself. To choose a platform, look for one that meets the specific requirements of your containerized applications.
No matter which platform you choose, you’ll need a monitoring tool for your Kubernetes and hosted applications. ContainIQ allows you to monitor Kubernetes metrics, logs, and events within your cluster in real time so you can spot problems early and improve the performance of your software. ContainIQ works across all managed Kubernetes offerings, and users are able to aggregate data from multiple clusters across multiple cloud providers into one view. For more on what ContainIQ can do for you, check out its documentation.