Kubernetes Overview
With the widespread adoption of containers among organizations, Kubernetes, the container-centric management software, has become the de facto standard for deploying and operating containerized applications and a core part of most DevOps toolchains.
What is Kubernetes?
Kubernetes is an open-source container orchestration tool (in simple terms, a container management tool), originally developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF). It automates many of the manual processes involved in deploying, managing, and scaling containerized applications.
Kubernetes offers a wide array of features that make it a popular choice for container orchestration. Some of its key features include:
- Kubernetes manages the deployment, scaling up, and scaling down of containers, ensuring that applications run smoothly and efficiently.
- It can automatically distribute network traffic across healthy containers (load balancing), improving application availability.
- Kubernetes can detect and replace failed containers or nodes, minimizing downtime (self-healing).
- It allows easy scaling of applications, both vertically (adding more resources to a container) and horizontally (replicating containers).
- Kubernetes provides mechanisms for containers to communicate with each other using Services and DNS (service discovery).
- It enables controlled updates and rollbacks of applications.
- It allows you to specify resource requests and limits for containers; a minimal example follows this list.
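To make a few of these features concrete, here is a minimal sketch of a Deployment manifest. It is illustrative only: the names web and nginx:1.25 are placeholders and the numbers are arbitrary. The replicas field drives horizontal scaling, the strategy block controls rolling updates, and the resources block declares requests and limits.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                        # placeholder name
    spec:
      replicas: 3                      # horizontal scaling: run three identical Pods
      selector:
        matchLabels:
          app: web
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1            # keep most Pods serving traffic during an update
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: nginx:1.25        # placeholder image
              resources:
                requests:              # used by the scheduler to place the Pod
                  cpu: 100m
                  memory: 128Mi
                limits:                # enforced caps on the container
                  cpu: 500m
                  memory: 256Mi

Applying this with kubectl apply -f web-deployment.yaml declares a desired state; Kubernetes then continuously works to keep three healthy replicas running.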
Why do we call it k8s?
The short form 'K8s' is a numeronym derived from the full name:
- 'K' is the first letter of 'Kubernetes'.
- '8' stands for the eight letters between the 'K' and the final 's' ('ubernete').
- 's' is the last letter of 'Kubernetes'.
The abbreviation is commonly used in documentation, discussions, and everyday conversation about Kubernetes because it is more concise and easier to type or say.
Benefits of using k8s
Kubernetes offers many benefits for managing containerized applications. Here are some of the main advantages of K8s:
- Kubernetes allows you to easily scale your application up or down based on demand (see the autoscaling sketch after this list).
- Kubernetes keeps your application highly available by automatically rescheduling and replacing failed containers.
- K8s provides a standardized way to package and deploy applications using containers.
- It optimizes resource allocation and minimizes waste.
- K8s can detect and replace failed containers or nodes, keeping the application available and healthy.
- You can update your applications without downtime using rolling updates.
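As a sketch of the scaling point, the HorizontalPodAutoscaler below would scale the hypothetical web Deployment from the earlier example between 2 and 10 replicas based on average CPU utilization. The target name and thresholds are assumptions for illustration, not recommendations.

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web-hpa                    # illustrative name
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web                      # the Deployment to scale
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70   # add Pods when average CPU use exceeds 70%

You can achieve the same effect manually with kubectl scale deployment web --replicas=5; the autoscaler simply automates that decision.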
Architecture of Kubernetes
Master Node
- API server: The central control plane component that exposes the Kubernetes API and handles requests from users, the CLI (kubectl), and other components.
- etcd: A consistent key-value store that holds all cluster data, including configuration details and the current state of the cluster.
- Controller Manager: Watches the cluster's state (stored in etcd and served through the API server) and ensures that the actual state matches the desired state by creating, updating, and deleting resources as needed.
- Scheduler: Assigns newly created Pods to nodes based on resource requirements, keeping workloads balanced and highly available.
Worker Node
- Kubelet: The primary agent running on each node that communicates with the API server.
- Container Runtime: The software responsible for running containers (for example, containerd or CRI-O).
- Kube-proxy: Maintains network rules on each node so that traffic from inside or outside the cluster can reach the right Pods.
- Pods: The smallest deployable units in k8s; each Pod represents one or more containers that share the same network namespace and storage volumes (see the example below).
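Here is a minimal, illustrative Pod manifest with two containers that share the Pod's network namespace and a volume. The names, images, and paths are placeholders.

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-with-sidecar           # placeholder name
    spec:
      volumes:
        - name: shared-logs
          emptyDir: {}                 # scratch volume shared by both containers
      containers:
        - name: web
          image: nginx:1.25            # placeholder image
          volumeMounts:
            - name: shared-logs
              mountPath: /var/log/nginx
        - name: log-tailer
          image: busybox:1.36          # placeholder sidecar image
          command: ["sh", "-c", "tail -F /logs/access.log"]
          volumeMounts:
            - name: shared-logs
              mountPath: /logs

Because the two containers share one network namespace, they can also reach each other over localhost.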
What is Control Plane?
In Kubernetes, the control plane refers to a set of components that manage the overall state of the cluster. It makes decisions about what should happen to maintain the desired state of the cluster based on the user's configuration.
The control plane components run on a cluster's master nodes and make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events (for example, starting up a new pod when a deployment's replicas field is unsatisfied).
Difference between kubectl and kubelet
kubectl and kubelet are both important components in the Kubernetes ecosystem, but they serve different purposes and have distinct roles:
kubectl:
kubectl is the command-line interface for interacting with Kubernetes clusters, letting users manage and control them through various commands. Some common tasks you can perform with kubectl include (a few example commands follow this list):
- Deploying and managing applications using Pods, Deployments, Services, etc.
- Scaling applications.
- Managing configuration files for Kubernetes objects.
- Debugging applications by accessing container logs.
- Creating, updating, and deleting Kubernetes resources.
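A few representative commands; the file and resource names (app.yaml, web, <pod-name>) are placeholders:

    kubectl apply -f app.yaml                    # create or update the resources defined in app.yaml
    kubectl get pods                             # list Pods in the current namespace
    kubectl logs <pod-name>                      # print a container's logs for debugging
    kubectl scale deployment web --replicas=5    # scale an application
    kubectl delete -f app.yaml                   # remove those resources again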
kubelet:
kubelet is an agent that runs on each node in the cluster. Its primary responsibility is to ensure that containers are running in a Pod. It receives Pod definitions from the Kubernetes control plane (typically from the API server) and takes care of starting, stopping, and maintaining application containers as specified in the Pod manifests.
The kubelet performs the following tasks:
- Pulls container images from container registries if they are not already present on the node.
- Starts the containers and ensures they are healthy.
- Monitors the containers, restarts them if they fail (for example, when a liveness probe keeps failing, as sketched below), and reports their status to the control plane.
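A sketch of how that health checking is commonly configured, assuming a hypothetical container that serves an HTTP health endpoint at /healthz (the name, image, and endpoint are assumptions): the kubelet runs the probe and restarts the container when it fails repeatedly.

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-probe-demo             # illustrative name
    spec:
      containers:
        - name: web
          image: nginx:1.25            # placeholder image
          livenessProbe:
            httpGet:
              path: /healthz           # assumed health endpoint
              port: 80
            initialDelaySeconds: 5     # give the container time to start
            periodSeconds: 10          # probe every 10 seconds
            failureThreshold: 3        # restart after 3 consecutive failures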
Role of API Server
The API server in Kubernetes plays a central role as it acts as the primary control plane component responsible for managing and exposing the Kubernetes API. Here are the key roles and responsibilities of the API server:
Exposing the API:
- The API server exposes the Kubernetes API, which serves as the entry point for all interactions with the cluster.
Authentication and Authorization:
- The API server handles authentication, ensuring that users and applications are who they claim to be, and enforces authorization policies (such as RBAC) that determine what each authenticated identity is allowed to do before letting it interact with the cluster (see the RBAC sketch after this list).
Validation and Admission Control:
- The API server validates incoming requests to ensure they adhere to the schema and business logic defined for each resource type, rejecting invalid or malformed payloads. Admission controllers can then further validate or mutate requests before they are persisted.
Resource Handling:
- The API server handles various resource types and their lifecycle management. It manages the creation, updating, and deletion of resources, ensuring that the desired state specified by users is maintained.
Etcd Interaction:
- The API server interacts with the etcd datastore to store and retrieve cluster configuration data, such as resource definitions, configuration settings, and state information.
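To illustrate the authorization step, here is a sketch of an RBAC Role and RoleBinding that the API server would evaluate when a hypothetical user named jane asks to read Pods. The names and namespace are placeholders.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: pod-reader                 # illustrative name
      namespace: default
    rules:
      - apiGroups: [""]                # "" refers to the core API group
        resources: ["pods"]
        verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: read-pods
      namespace: default
    subjects:
      - kind: User
        name: jane                     # hypothetical user identity
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: pod-reader
      apiGroup: rbac.authorization.k8s.io

With this binding in place, the API server allows jane to get, list, and watch Pods in the default namespace and rejects other requests.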
Conclusion
In summary, Kubernetes, often referred to as K8s, is a powerful container orchestration platform with numerous benefits. Kubernetes' architecture enables robust container orchestration, making it a preferred choice for deploying and managing modern, cloud-native applications at scale. Its modular and extensible design empowers organizations to build resilient, scalable, and portable solutions across various infrastructure environments.