Kubernetes
Basics
Downsides of Monolith
Over time, new features and improvements add to code complexity, making development more challenging - loading, compiling, and building times increase with every new update.
However, administration is somewhat easier, as the application runs on a single server, typically a Virtual Machine or a Mainframe.
Being a large, single piece of software that continuously grows, it has to run on a single system that satisfies its compute, memory, storage, and networking requirements.
The hardware of such capacity is not only complex and extremely pricey, but at times challenging to procure.
Since the entire monolith application runs as a single process, scaling individual features of the monolith is almost impossible - internally, it supports only a hardcoded number of connections and operations.
However, scaling the entire application can be achieved by manually deploying a new instance of the monolith on another server, typically behind a load balancing appliance - another pricey solution.
During upgrades, patches, or migrations of the monolith application, downtime is inevitable, and maintenance windows have to be planned well in advance, as disruptions in service are expected to impact clients.
While there are third-party solutions that minimize customer downtime by setting up monolith applications in a highly available active/passive configuration, these introduce new challenges for system engineers, who must keep all systems at the same patch level, and may introduce new licensing costs.
Why Microservices?
Microservices can be deployed individually on separate servers provisioned with fewer resources - only what is required by each service and the host system itself - helping to lower compute resource expenses.
Each microservice is developed and written in a modern programming language, selected to be the best fit for the type of service and its business function.
This offers a great deal of flexibility in matching microservices with specific hardware when required, allowing deployments on inexpensive commodity hardware.
Although the distributed nature of microservices adds complexity to the architecture, one of the greatest benefits of microservices is scalability.
With the overall application becoming modular, each microservice can be scaled individually, either manually or automatically through demand-based autoscaling.
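As an illustration, the documented scaling rule behind Kubernetes' Horizontal Pod Autoscaler - desired replicas = ceil(current replicas * observed metric / target metric) - can be sketched in a few lines of Go. The real controller adds tolerances, stabilization windows, and min/max replica bounds, so treat this as a simplification:

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas applies the proportional scaling rule documented for
// Kubernetes' Horizontal Pod Autoscaler:
// desired = ceil(current * observed / target)
func desiredReplicas(current int, observed, target float64) int {
	return int(math.Ceil(float64(current) * observed / target))
}

func main() {
	// 4 replicas averaging 90% CPU against an 80% target -> scale up to 5.
	fmt.Println(desiredReplicas(4, 90, 80)) // 5
	// Load drops to 20% -> scale down to 1.
	fmt.Println(desiredReplicas(4, 20, 80)) // 1
}
```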
Seamless upgrades and patching processes are other benefits of microservices architecture.
There is virtually no downtime and no service disruption to clients because upgrades are rolled out seamlessly - one service at a time, rather than having to recompile, rebuild and restart an entire monolithic application.
As a result, businesses are able to develop and roll out new features and updates a lot faster, in an agile approach, with separate teams focusing on separate features, thus being more productive and cost-effective.
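A rolling upgrade of the kind described above can be sketched as a loop that replaces instances one at a time, retiring an old copy only after its replacement passes a health check. This Go sketch is a toy, with the instance type and healthy probe invented for illustration, not Kubernetes' actual rollout logic:

```go
package main

import "fmt"

// instance represents one running copy of a microservice.
type instance struct {
	version string
}

// healthy stands in for a real readiness probe; assumed to always
// succeed here, purely for illustration.
func healthy(i instance) bool { return true }

// rollingUpdate replaces instances one at a time, retiring an old
// copy only after its replacement is healthy, so the remaining
// copies keep serving traffic at every step.
func rollingUpdate(fleet []instance, newVersion string) []instance {
	for idx := range fleet {
		candidate := instance{version: newVersion}
		if healthy(candidate) {
			fleet[idx] = candidate // old copy retired only now
			fmt.Printf("replaced instance %d -> %s\n", idx, newVersion)
		}
	}
	return fleet
}

func main() {
	fleet := []instance{{"v1"}, {"v1"}, {"v1"}}
	rollingUpdate(fleet, "v2")
}
```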
Container
Containers are an application-centric method to deliver high-performing, scalable applications on any infrastructure of your choice.
Containers are best suited to deliver microservices by providing portable, isolated virtual environments for applications to run without interference from other running applications.
A container image bundles the application along with its runtime, libraries, and dependencies, and it represents the source of a container deployed to offer an isolated executable environment for the application.
Containers can be deployed from a specific image on many platforms, such as workstations, Virtual Machines, public cloud, etc.
Problem
Containers are a good way to bundle and run your applications.
In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime.
For example, if a container goes down, another container needs to start. Wouldn't it be easier if this behavior was handled by a system?
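At its core, such a system runs a reconciliation loop: compare the desired number of running containers with what is actually observed, and start replacements for any that have died. A toy Go sketch of that idea (the container type and reconcile function are invented for illustration, not any orchestrator's real API):

```go
package main

import (
	"fmt"
	"time"
)

// container is a minimal stand-in for a managed container.
type container struct {
	name    string
	running bool
}

// reconcile compares the desired count of running containers with the
// observed state and starts replacements for any that have died -
// the essence of an orchestrator's self-healing behavior.
func reconcile(desired int, observed []*container) []*container {
	alive := observed[:0]
	for _, c := range observed {
		if c.running {
			alive = append(alive, c)
		}
	}
	for i := len(alive); i < desired; i++ {
		replacement := &container{name: fmt.Sprintf("replica-%d", i), running: true}
		fmt.Println("starting", replacement.name)
		alive = append(alive, replacement)
	}
	return alive
}

func main() {
	fleet := []*container{{"replica-0", true}, {"replica-1", false}} // one crashed
	for i := 0; i < 2; i++ {
		fleet = reconcile(2, fleet)
		time.Sleep(10 * time.Millisecond) // a real loop would poll or watch for changes
	}
}
```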
Solution
Container orchestrators are tools that group systems together to form clusters where the deployment and management of containers is automated at scale, while meeting requirements such as:
Fault-tolerance
On-demand scalability
Optimal resource usage
Auto-discovery to automatically discover and communicate with each other
Accessibility from the outside world
Seamless updates/rollbacks without any downtime.
The clustered systems confer the advantages of distributed systems, such as increased performance, cost efficiency, reliability, workload distribution, and reduced latency.
Most container orchestrators can:
Group hosts together while creating a cluster, in order to leverage the benefits of distributed systems.
Schedule containers to run on hosts in the cluster based on resource availability (see the scheduling sketch after this list).
Enable containers in a cluster to communicate with each other regardless of the host they are deployed to in the cluster.
Bind containers and storage resources.
Group sets of similar containers and bind them to load-balancing constructs to simplify access to containerized applications by creating an interface, a level of abstraction between the containers and the client.
Manage and optimize resource usage.
Allow for implementation of policies to secure access to applications running inside containers.
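As a rough sketch of the scheduling capability above, the toy Go scheduler below places a container on the host with the most free CPU among those that can satisfy its requests. Real schedulers, including Kubernetes', also weigh memory balance, affinity rules, taints, and workload spreading; all names here are invented for the example:

```go
package main

import (
	"errors"
	"fmt"
)

// host models a cluster node and its remaining capacity.
type host struct {
	name         string
	freeCPU      int // millicores
	freeMemoryMB int
}

// schedule picks the host with the most free CPU among those that can
// satisfy the container's resource requests, then reserves capacity.
func schedule(hosts []*host, cpu, memMB int) (*host, error) {
	var best *host
	for _, h := range hosts {
		if h.freeCPU < cpu || h.freeMemoryMB < memMB {
			continue // this host cannot fit the container
		}
		if best == nil || h.freeCPU > best.freeCPU {
			best = h
		}
	}
	if best == nil {
		return nil, errors.New("no host with enough free resources")
	}
	best.freeCPU -= cpu
	best.freeMemoryMB -= memMB
	return best, nil
}

func main() {
	cluster := []*host{
		{"node-a", 500, 1024},
		{"node-b", 2000, 4096},
	}
	h, err := schedule(cluster, 750, 512)
	if err != nil {
		panic(err)
	}
	fmt.Println("scheduled on", h.name) // node-b: node-a lacks the CPU
}
```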
Kubernetes is an open source solution that provides you with a framework to run distributed systems resiliently.
It takes care of scaling and failover for your application, provides deployment patterns, and more.
For example: Kubernetes can easily manage a canary deployment for your system.
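The essence of a canary deployment is routing a small, controlled share of live traffic to the new release before rolling it out fully. Below is a hypothetical Go sketch of such weighted routing; in practice Kubernetes achieves this with Deployments, Services, and label selectors (or a service mesh) rather than application code:

```go
package main

import (
	"fmt"
	"math/rand"
)

// route sends roughly canaryPercent of requests to the canary version
// and the rest to the stable version - the core idea of a canary
// deployment, where a new release sees a small slice of real traffic.
func route(canaryPercent int) string {
	if rand.Intn(100) < canaryPercent {
		return "v2-canary"
	}
	return "v1-stable"
}

func main() {
	counts := map[string]int{}
	for i := 0; i < 1000; i++ {
		counts[route(10)]++ // send ~10% of traffic to the canary
	}
	fmt.Println(counts) // roughly 900 stable / 100 canary
}
```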
Features provided include:
Service discovery and load balancing
Storage orchestration
Automated rollouts and rollbacks
Automatic bin packing (see the sketch after this list)
Self-healing
Secret and configuration management
Batch execution
IPv4/IPv6 dual-stack
Designed for extensibility
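To make "automatic bin packing" concrete, here is the classic first-fit-decreasing heuristic in Go - a simplified stand-in for the resource-based scoring the Kubernetes scheduler actually performs:

```go
package main

import (
	"fmt"
	"sort"
)

// firstFitDecreasing packs container CPU requests onto as few nodes
// as the greedy heuristic finds: sort requests largest-first, place
// each on the first node with room, and open a new node only when
// nothing fits.
func firstFitDecreasing(requests []int, nodeCapacity int) [][]int {
	sort.Sort(sort.Reverse(sort.IntSlice(requests)))
	var nodes [][]int // containers placed per node
	var free []int    // remaining capacity per node
	for _, r := range requests {
		placed := false
		for i := range nodes {
			if free[i] >= r {
				nodes[i] = append(nodes[i], r)
				free[i] -= r
				placed = true
				break
			}
		}
		if !placed {
			nodes = append(nodes, []int{r})
			free = append(free, nodeCapacity-r)
		}
	}
	return nodes
}

func main() {
	// CPU requests in millicores, packed onto 2000m nodes.
	fmt.Println(firstFitDecreasing([]int{1200, 800, 700, 500, 300}, 2000))
	// => [[1200 800] [700 500 300]]
}
```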
History
Kubernetes comes from the Greek word κυβερνήτης, which means helmsman or ship pilot. With this analogy in mind, we can think of Kubernetes as the pilot on a ship of containers.
Kubernetes is also referred to as k8s (pronounced Kate's), as there are 8 characters between k and s.
Kubernetes is highly inspired by the Google Borg system, a container and workload orchestrator that Google has been using for its global operations for more than a decade.
It is an open source project written in the Go language and licensed under the Apache License, Version 2.0.
Kubernetes was started by Google and, with its v1.0 release in July 2015, was donated to the Cloud Native Computing Foundation (CNCF), one of the largest sub-foundations of the Linux Foundation.
For more than a decade, Borg has been Google's secret, running its worldwide containerized workloads in production. Services we use from Google, such as Gmail, Drive, Maps, Docs, etc., are all serviced using Borg.
"Google's Borg system is a cluster manager that runs hundreds of thousands of jobs, from many thousands of different applications, across a number of clusters each with up to tens of thousands of machines".