Kubernetes is most often used with Docker, one of the most popular containerization platforms. Containers became far more prevalent after the launch of the Docker project in 2013, but Kubernetes also works with any container runtime that follows the Open Container Initiative (OCI) standards for container image formats. Because it is open source, Kubernetes can run anywhere: on-premises, in the public cloud, or both. Distributed containerized applications are difficult to manage, and by making them radically easier to operate, Kubernetes became a key part of the container revolution.

In this article, we’ll analyze and compare the Kubernetes features and services offered by the major public cloud providers and how they benefit organizations. To dive deeper into Kubernetes, check out Cloud Academy’s Certified Kubernetes Application Developer (CKAD) Exam Preparation. This learning path includes a combination of courses, exams, and a series of hands-on labs to build first-hand Kubernetes experience working directly in a live cloud environment.

Amazon Elastic Container Service for Kubernetes (Amazon EKS)

Amazon Elastic Container Service for Kubernetes (EKS) is a managed service, made generally available in June 2018, for running Kubernetes on AWS. It is fully compatible with applications that run on any standard Kubernetes environment. Amazon EKS runs a single-tenant Kubernetes control plane for each cluster; the control plane is not shared across clusters.
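As a rough sketch of getting started, a cluster can be created from the AWS CLI as shown below; the IAM role ARN and VPC configuration are placeholders you would replace with your own values:

# Create an EKS control plane (worker nodes are provisioned separately)
aws eks create-cluster --name ${cluster} --role-arn arn:aws:iam::123456789012:role/eks-service-role --resources-vpc-config subnetIds=subnet-aaaa,subnet-bbbb,securityGroupIds=sg-cccc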

Azure Kubernetes Service (AKS)

Azure Kubernetes Service (AKS) is Microsoft’s Kubernetes solution, also made generally available in June 2018. The fully managed AKS makes it easy to deploy and manage containerized applications in a Kubernetes environment. Microsoft had already announced Azure Container Service back in 2016, which supported not only Kubernetes but also Apache Mesos and Docker Swarm, so the company has prior experience in container orchestration.
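For reference, a minimal AKS cluster can be created with a single Azure CLI command; the resource group and cluster names here are placeholders:

# Create a small two-node AKS cluster
az aks create --resource-group ${RS} --name ${cluster} --node-count 2 --generate-ssh-keys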

Google Kubernetes Engine (GKE)

Kubernetes itself was first released to the market by Google in July 2015. Google Kubernetes Engine (GKE) is a managed, production-ready environment for deploying containerized applications and one of the most advanced hosted solutions. GKE lets you set up containerized apps in no time by removing the need to install and manage Kubernetes clusters yourself.
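As a comparable sketch, a basic GKE cluster can be created with one gcloud command; the zone and node count are example values:

# Create a small two-node GKE cluster in a single zone
gcloud container clusters create ${cluster} --zone us-central1-a --num-nodes 2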

Comparing hosted services

We have discussed the Kubernetes services and basic Kubernetes infrastructure provided by the major cloud providers. In this section, we will analyze and compare the key features of these three providers with regard to their Kubernetes offerings.

Within a few weeks, or perhaps days, of writing this article, the exact Kubernetes version offered by any of the platforms will be out of date. Still, how current the version you run is matters more than the version number itself. Comparing the three, Google Cloud offers the most recent release, followed by Microsoft Azure and then AWS. A newer release means bug fixes and security patches arrive sooner on GKE than on Azure or AWS.

GKE is at the top, as it provides fully automated cluster updates. AKS lets you upgrade a cluster with a single command. To upgrade Amazon EKS, a user needs to run several command-line steps, which makes it the most cumbersome of the three; the sketch below illustrates the difference.
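These upgrade commands are a hedged illustration rather than a complete procedure; the version numbers and names are placeholders, and on EKS the worker node groups have to be updated in a separate step:

# Amazon EKS: upgrade the control plane; worker nodes are updated separately
aws eks update-cluster-version --name ${cluster} --kubernetes-version 1.14
# Azure AKS: a single command upgrades the cluster
az aks upgrade --resource-group ${RS} --name ${cluster} --kubernetes-version 1.14
# Google GKE: upgrades can run automatically; a manual master upgrade looks like this
gcloud container clusters upgrade ${cluster} --master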

The kubectl command-line utility is supported by all three platforms, but the command for obtaining cluster credentials differs for each provider:

AWS

aws eks --region ${region} update-kubeconfig --name ${cluster}

Microsoft Azure

az aks get-credentials --resource-group ${RS} --name ${cluster}

Google Cloud Platform

gcloud container clusters get-credentials ${cluster}
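
Whichever provider you log in to, you can check that the credentials landed in your kubeconfig and that the cluster is reachable with standard kubectl commands, for example:

# Show the context written by the login command, then list the worker nodes
kubectl config current-context
kubectl get nodes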

Resource monitoring

For Kubernetes monitoring, Google Cloud provides Stackdriver. Stackdriver monitors the master, the nodes, and all Kubernetes components inside the platform, and integrates logging, with no additional manual steps required from the user. Microsoft Azure has two offerings: Azure Monitor to evaluate the health of containers and Application Insights to monitor Kubernetes components; for the latter, a user needs to configure Istio (a service-mesh solution). AWS has no integrated monitoring solution and relies on third-party tools instead.
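As an example of the extra step on Azure, container monitoring for an existing AKS cluster is typically switched on through the monitoring add-on; this is a sketch with placeholder names, not a complete Azure Monitor setup:

# Enable Azure Monitor for containers on an existing AKS cluster
az aks enable-addons --resource-group ${RS} --name ${cluster} --addons monitoring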

Availability

Google Cloud currently has the broadest regional availability of the three, but once Microsoft Azure launches its announced regions in Latin America and Africa, it will take the lead. AWS, which does not offer the service in Latin America, Africa, or Oceania, falls behind.

Node pools

Node pools allocate different kinds of machines to a cluster for different types of workloads. Database systems, for example, need more RAM and better disks, whereas tasks like machine learning algorithms need more CPU. With node pools we can provide the best resource availability, because a user can specify which pool a service is deployed to on demand.
Google Cloud and AWS are leading in this race, having provided node pool support for the past two years, while Microsoft Azure has yet to deliver node pools after more than a year.
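As a hedged sketch of how node pools map machine types to workloads on GKE (the pool names, machine types, and sizes are example values):

# A high-memory pool, e.g. for database workloads
gcloud container node-pools create db-pool --cluster ${cluster} --machine-type n1-highmem-4 --num-nodes 2
# A CPU-oriented pool, e.g. for compute-heavy jobs
gcloud container node-pools create compute-pool --cluster ${cluster} --machine-type n1-highcpu-8 --num-nodes 2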

Autoscaling

One of the most compelling Kubernetes features is its ability to autoscale nodes up and down so that resources are used on demand. In this way, users get services that are available all the time, while stakeholders keep the infrastructure cost-effective. For fine-tuned resource utilization for specific types of services, autoscaling can be combined with node pools.

In autoscaling, Google Cloud leads with the most mature solution, exposed directly in the interface: a user only specifies the desired VM size and the node range for the node pool, and Google Cloud manages the rest. AWS ranks second because it requires some minor manual configuration. Microsoft Azure has introduced an autoscaler in preview, which is only partially covered by customer support and not available for production use; it may be delivered together with node-pool functionality in the near future.
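On GKE, for instance, enabling the cluster autoscaler amounts to setting a node range on a pool; the pool name and limits below are example values:

# Let the default pool scale between 1 and 5 nodes based on demand
gcloud container clusters update ${cluster} --enable-autoscaling --min-nodes 1 --max-nodes 5 --node-pool default-pool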

High availability

The term “high availability” means your cluster stays available even if something goes wrong. For instance, if your services rely on a single data center and it goes down, your services are interrupted. To keep the Kubernetes endpoint available, all three services spread the master nodes over more than one availability zone, so the endpoint remains reachable even if one zone becomes unavailable. Only Google Cloud has managed to provide full high-availability support for worker nodes as well. This is of course more costly, because the minimum number of worker nodes has to guarantee 99.99% availability.
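On GKE this takes the form of a regional cluster, which replicates the masters and spreads the workers across the zones of a region (so the node count multiplies accordingly); a hedged example:

# One node per zone in the region, with replicated masters
gcloud container clusters create ${cluster} --region us-central1 --num-nodes 1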

Role-based access control (RBAC)

Role-based access control (RBAC) is exposed through the Kubernetes API and lets administrators configure access policies dynamically. All three hosted service providers offer RBAC implementations.
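A quick sketch of what dynamic policy configuration looks like with kubectl; the role name, user, and namespace are placeholders:

# Allow read-only access to pods in the default namespace
kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods --namespace=default
# Bind the role to a specific user
kubectl create rolebinding read-pods --role=pod-reader --user=jane --namespace=default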

Bare-metal clusters

As the name suggests, a virtual machine (VM) is an emulated machine running on top of real hardware. This technique brings a bundle of benefits for a cloud provider: a very large machine can be split into several smaller units and shared among clients for better resource utilization, and VMs are easy to move from one physical machine to another, which improves availability. On the other hand, the virtualization layer adds complexity and some performance overhead compared with running directly on physical, bare-metal hardware. For the time being, only AWS offers bare-metal hardware.

Pricing

GKE and AKS provide cluster management, including the master nodes and the machines running them, free of charge; you are charged only for the resources you consume, such as VMs, bandwidth, and storage. Amazon EKS, on the other hand, charges $0.20 per hour for each deployed cluster on top of the services you use, which adds up to roughly $144 extra per 30-day month per cluster. Keep in mind that AWS charges this even for testing and staging cluster environments.

The following table compares and summarizes the Kubernetes features offered by AWS, Azure, and Google Cloud:

Service | Amazon EKS | Azure AKS | Google GKE
Automatic Updates | On-demand, with manual command-line steps; nodes must be updated separately | On-demand; master and nodes are upgraded together | Automatic for master and nodes
CLI Support | Supported | Supported | Supported
Resource Monitoring | Third-party only | Azure Monitor for containers and Application Insights | Stackdriver (paid, with a free tier)
Availability | U.S., Europe, and Asia; not available in Latin America, Oceania, or Africa | U.S., Europe, Asia, and Oceania; Latin America and Africa expected in Q2 2019 | U.S., Europe, Asia, Oceania, and Latin America; not available in Africa
Autoscaling of Nodes | Yes | In preview | Yes
Node Pools | Yes | No | Yes
High-Availability Clusters | No | In development | Yes
RBAC | Yes | Yes | Yes
Bare-Metal Nodes | Yes | No | No
Cost | $0.20 per hour per cluster, plus resources | Only pay for the VMs running the Kubernetes nodes | Only pay for the VMs running the Kubernetes nodes

Conclusion

Kubernetes has established itself as the most popular container orchestrator and a vital solution for cluster management, and its story does not end there. It keeps gaining market share because it is easiest to deploy as a managed Platform as a Service (PaaS) offering. AWS, Microsoft Azure, and Google Cloud Platform, the most popular cloud providers on the market, have spent the past year competing to claim the best Kubernetes solution. It is hard to predict the future, but Google currently has the advantage of the most mature and cheapest product, while interest in AKS and Amazon EKS continues to grow.

The original version in English appeared on November 12, 2019, on Cloud Academy.

It is republished on this blog with the kind permission of the publisher.
