Azure Kubernetes

Sarthak Agarwal
Mar 4, 2021

What is Kubernetes?

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google's experience running production workloads at scale with best-of-breed ideas and practices from the community.

What can AKS do?

Azure Kubernetes Service (AKS) simplifies deploying a managed Kubernetes cluster in Azure by offloading much of the complexity and operational overhead to Azure. As a hosted Kubernetes service, Azure handles critical tasks for you, like health monitoring and maintenance.

Since the Kubernetes masters are managed by Azure, you only manage and maintain the agent nodes. Thus, as a managed Kubernetes service, AKS is free; you only pay for the agent nodes within your clusters, not for the masters.

You can create an AKS cluster using the Azure portal, the Azure CLI, Azure PowerShell, or using template-driven deployment options, such as Resource Manager templates and Terraform. When you deploy an AKS cluster, the Kubernetes master and all nodes are deployed and configured for you. Additional features such as advanced networking, Azure Active Directory integration, and monitoring can also be configured during the deployment process. Windows Server containers are supported in AKS.
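As a sketch of the CLI path mentioned above, the following Azure CLI commands create a small AKS cluster with the monitoring add-on enabled. The resource group name, cluster name, region, and node count are all placeholders you would adapt:

```shell
# Create a resource group to hold the cluster (name and region are placeholders)
az group create --name myResourceGroup --location eastus

# Create an AKS cluster with three agent nodes and monitoring enabled
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 3 \
  --enable-addons monitoring \
  --generate-ssh-keys

# Fetch credentials so kubectl can talk to the new cluster
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
```

Note that only the agent nodes created here are billed; the managed control plane is handled by Azure.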

Use Cases solved by AKS

We have selected some common use cases to demonstrate Kubernetes’ capabilities. The use cases can be utilized together for different setups.

👉Self-Healing and Scaling Services
For simplicity, the basic K8s workload units can be described as pods and services. A pod is the smallest deployable unit in Kubernetes. A pod can contain several containers that share resources such as networking and storage. Services are the interface that provides access to a set of pods. These services can be internal or public, and can load balance traffic across several pod instances.
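To make this concrete, a minimal Service manifest might look like the following sketch; the name and label selector are hypothetical, and the Service forwards traffic to any pods labeled app: web:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web          # hypothetical service name
spec:
  selector:
    app: web         # matches pods carrying this label
  ports:
    - port: 80       # port exposed by the service
      targetPort: 80 # port the containers listen on
```

Pods matching the selector are automatically added to (and removed from) the Service's load-balancing pool as they come and go.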

👉Pods are mortal: once finished, they vanish from the cluster. A pod can terminate naturally or because of an error. A Deployment is the standard Kubernetes object for creating and maintaining pods. Using a single description file, a developer can specify everything necessary to deploy the pod, keep it running, scale it, and upgrade it.

Kubernetes deployment of Nginx

The figure shows a simple deployment. This creates a pod of Nginx (version 1.7.9) with three replicas. In other words, Kubernetes will manage three Nginx instances; when an instance stops working, Kubernetes will create a new one.
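Since the original figure is an image, here is a sketch of what such a manifest looks like, matching the description above (three replicas of Nginx 1.7.9):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3              # Kubernetes keeps three Nginx pods running
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9   # the version named in the text
          ports:
            - containerPort: 80
```

If any of the three pods dies, the Deployment's controller notices the shortfall and starts a replacement.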

This Deployment can be configured to be auto-scalable with the following command line:

$ kubectl autoscale deployment nginx-deployment --min=10 --max=15 --cpu-percent=80

One of the advantages of K8s is that it’s easy to understand what the platform is doing. In this case, the cluster will run at least 10 Nginx instances, and as many as 15 when CPU utilization exceeds 80 percent of capacity.

Serverless, with Server
Serverless architecture has taken the world by storm since AWS launched Lambda. The principle is simple: just develop the code and don’t worry about anything else. Servers and scalability are handled by the cloud provider, and code just has to be written as functions that handle specific events, from HTTP requests to queue messages.

Vendor lock-in is the major disadvantage of this approach. It is almost impossible to change cloud providers without refactoring most of the code. There are frameworks, such as the Serverless Framework, that seek to standardize function code across clouds. Another option is to use a Kubernetes cluster to create a vendor-free serverless platform. As mentioned above, K8s abstracts away the differences between cloud providers. Two popular frameworks that expose the cluster as a serverless platform are Kubeless and Fission.

Optimized Resource Usage with Namespaces

A K8s namespace is also known as a virtual cluster. Namespaces create virtually separated clusters inside the real cluster. Without namespaces, teams often run separate physical clusters for test, staging, and production. Those extra clusters tend to waste resources, because the test cluster is not under continuous load and staging is only used from time to time to validate a new feature. With namespaces, an operations team can run those environments on the same set of physical machines, sharing capacity according to each workload.

Namespaces are closely related to DNS, because services located within the same namespace are accessible through their names. Namespaces therefore offer a good solution for creating similar environments that locate services by network name: instances in different namespaces find their dependencies by the same short names, without having to account for which namespace they are in.

In addition, namespaces can have resource quotas: each virtual cluster can receive a defined allocation in order to avoid resource competition between namespaces. This is particularly useful for preventing lower-priority environments from competing with production for computing resources. Finally, per-namespace roles and permissions can limit the number of people with access to production environments.
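A quota like the one described above could be sketched as follows; the namespace name and all limits are illustrative:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: staging            # hypothetical environment name
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: staging-quota
  namespace: staging
spec:
  hard:
    requests.cpu: "4"      # total CPU the namespace may request
    requests.memory: 8Gi
    limits.cpu: "8"        # hard ceilings across all pods in the namespace
    limits.memory: 16Gi
    pods: "20"             # cap on the number of pods
```

Once the quota is in place, pods that would push the namespace past these limits are rejected at creation time, so a runaway staging workload cannot starve production.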

Hybrid and Multicloud with K8s

A hybrid cloud utilizes computing resources from a local, conventional data center, and from a cloud provider. A hybrid cloud is normally used when a company has some servers in an on-premise data center and wants to use the cloud’s unlimited computing resources to expand or substitute company resources. A multicloud, on the other hand, refers to a cloud that uses multiple cloud providers to handle computing resources. Multiclouds are generally used to avoid vendor lock-in, and to reduce the risk from a cloud provider going down while performing mission-critical operations.

Both solutions are addressed by Kubernetes Federation. Multiple clusters — one for each cloud or on-premise data center — are created that are managed by the Federation. The Federation synchronizes computing resources, and even allows cross-cluster discovery: virtually any pod can communicate with a pod in another cluster without knowing the infrastructure.

The Federation setup is not simple, and there is a caveat: for obvious reasons, the solution doesn’t work on managed services like Google Kubernetes Engine, Azure Kubernetes Service, or Amazon EKS.

"Kubernetes has the opportunity to be the new cloud platform. The amount of innovation that’s going to come from being able to standardize on Kubernetes as a platform is incredibly exciting - more exciting than anything I’ve seen in the last 10 years of working on the cloud. "

Case Study of Wind River

Wind River Cloud Platform combines a fully cloud native, Kubernetes and container-based architecture with the ability to manage a physically and geographically separated infrastructure for vRAN and core data center sites.

Reducing service providers’ operational burden and costs, the platform delivers single-pane-of-glass, zero-touch automated management of thousands of nodes.
Cloud Platform is a commercially supported version of StarlingX and lends itself to demanding 5G use cases applicable across mission-critical industries.

Wind River has been a long-standing contributor to open source projects. “We are excited to have Wind River as a member of CNCF and we look forward to their contributions and collaboration to drive container technology to the edge,” said Dan Kohn, executive director of the Cloud Native Computing Foundation. “With Wind River Cloud Platform, Wind River is helping to further advance technologies such as Kubernetes at the edge.”
Wind River has for decades provided a backbone for global telecommunications infrastructure, with offerings used by all top telecommunications equipment manufacturers (TEMs).
The company is a leader in the early 5G landscape, powering the majority of 5G RAN deployments. Now with Cloud Platform, Wind River can deliver, directly to service providers, one of the industry’s most advanced cloud native distributed infrastructure solutions for 5G vRAN network deployment.
StarlingX, the open source project on which Cloud Platform is based, is a container-based cloud infrastructure software stack for edge implementations that demand ultra-low latency.

✍ Conclusion:

⚡ Based on our observations, AKS is a proven tool for simplifying container orchestration.
⚡ The various features of Azure Kubernetes Service make it easier for developers to deploy and manage containers.
⚡ Running Kubernetes on Azure provides important benefits such as automatic upgrades.

🌸 Here I have completed the task given by Vimal Daga Sir on the use cases solved by Azure Kubernetes. Hope this article helps you 🌸

Happy Learning ✍️

