Introduction to Kubernetes (K8s)¶
Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications.
Kubernetes is built for cloud-native applications, which commonly follow the twelve-factor app methodology.
Kubernetes takes care of service discovery, scaling, load balancing, self-healing, leader election, and more, so developers no longer have to build these services into their applications.

As shown in the figure above, a Kubernetes cluster contains two types of nodes:
- Master nodes
- Worker nodes
Master nodes¶
The master node is responsible for managing the whole cluster. It monitors the health of all
nodes in the cluster, stores membership information about the different nodes, plans which
containers are scheduled to which worker nodes, monitors the containers and nodes, and so on.
When a worker node fails, it moves the workload from the failed node to another healthy worker node.
The Kubernetes master is responsible for scheduling, provisioning, configuring, and exposing APIs
to clients. All of this is done by the master node using components collectively called the control plane components.
Four basic components of the master node (control plane):
- API server : is the central component through which all cluster components communicate. The scheduler, the controller manager, and the worker node components all talk to the API server, and the scheduler and controller manager request information from it before taking any action. The API server exposes the Kubernetes API.
- Scheduler : is responsible for assigning your application to a worker node. It automatically decides which pod to place on which node based on resource requirements, hardware constraints, and other factors, finding the optimal node that fulfills the requirements to run the application.
- Controller manager : maintains the cluster. It handles node failures, replicates components, maintains the correct number of pods, and so on. It constantly tries to keep the system in the desired state by comparing it with the current state of the system.
- etcd : is a data store that holds the cluster configuration. It is recommended to back it up, as it is the source of truth for your cluster; if anything goes wrong, you can restore all the cluster components from this stored configuration. etcd is a distributed, reliable, schema-less key-value store.
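To make the scheduler's job concrete, here is a hypothetical pod spec (all names are illustrative): the scheduler only places this pod on a node that can satisfy the declared resource requests and carries the matching label.

```yaml
# Illustrative pod spec: the scheduler considers these resource
# requests and the nodeSelector when choosing a worker node.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app            # hypothetical name
spec:
  nodeSelector:
    disktype: ssd           # only schedule onto nodes labeled disktype=ssd
  containers:
    - name: web
      image: nginx:1.25
      resources:
        requests:
          cpu: "250m"       # the scheduler reserves this much CPU on the node
          memory: "128Mi"
        limits:
          cpu: "500m"
          memory: "256Mi"
```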
Worker node¶
A worker node is nothing but a virtual machine (VM) running in the cloud or on-premises, or a physical server running inside your data center. Any hardware capable of running a container runtime can become a worker node. These nodes expose the underlying compute, storage, and networking to the applications, and they do the heavy lifting of running applications inside the Kubernetes cluster. Together, these nodes form a cluster and run the workloads assigned to them by the master node components, just as a manager assigns tasks to individual team members. This is how fault tolerance and replication are achieved.
Three basic components of the worker node (data plane):
- Kubelet : runs and manages the containers on a node and talks to the API server. The scheduler sets spec.nodeName to the name of the chosen worker node, the kubelet receives a notification from the API server, and it then asks the container runtime (Docker, for example) to pull the images required to run the pod.
- Kube-proxy : load-balances traffic between application components. Also called the service proxy, it runs on each node in the Kubernetes cluster. It constantly watches for new services and creates the appropriate rules on each node to forward traffic for each service to its backend pods.
- Container runtime : runs the containers, e.g. Docker, rkt, or containerd. Once you have a specification that describes the image for your application, the container runtime pulls the image and runs the containers.
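To illustrate what kube-proxy watches for, here is a minimal Service manifest (names are illustrative): kube-proxy programs forwarding rules on every node so that traffic sent to this Service's port reaches one of the matching backend pods.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc             # hypothetical name
spec:
  selector:
    app: web                # pods labeled app=web become the backends
  ports:
    - port: 80              # Service port that kube-proxy forwards
      targetPort: 8080      # container port on the backend pods
```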
Basic concepts¶
Pod (Kubernetes Concept)¶
A pod is the smallest deployable unit in Kubernetes. It represents a group of one or more containers that are
scheduled and managed together.
Pod characteristics:
- Shared environment: Containers within a pod share the same network namespace (they can communicate with each other over localhost) and storage volumes.
- Single IP address: A pod gets a single IP address, and all containers within that pod share it.
- Multiple containers: A pod can have multiple containers, usually working together as a single unit. For example, a main container (e.g., an NGINX web server) serves web content while a sidecar container (e.g., a logging agent like Fluentd) handles logging.
- Lifecycle management: Kubernetes handles the lifecycle of the pod, ensuring that the desired number of replicas is running, restarting containers if needed, and ensuring the pod matches its defined state.
- High-level abstraction: A pod abstracts away container specifics, letting Kubernetes manage the complexity of container scheduling, scaling, networking, and storage.
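The main-container-plus-sidecar pattern described above can be sketched as a manifest like this (image names and mount paths are illustrative): both containers share the pod's network namespace and the emptyDir volume, so the sidecar can read the logs the web server writes.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger     # hypothetical name
spec:
  volumes:
    - name: logs
      emptyDir: {}          # shared scratch volume, lives as long as the pod
  containers:
    - name: web             # main container serving web content
      image: nginx:1.25
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-agent       # sidecar container shipping the logs
      image: fluent/fluentd:v1.16
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
          readOnly: true
```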
Pod Sandbox¶
A pod sandbox sets up the environment for the containers, including network, storage, and DNS settings. Each pod is associated with one sandbox. All containers within the same pod share the same sandbox, meaning they share the same network namespace and IP address.
The sandbox also isolates the resources that belong to a pod from others.
Container (containerd Concept)¶
A container is a runtime instance of a containerized application (such as an individual Docker container).
It is a single, isolated process on the system with its own filesystem, networking, and process tree.
Container characteristics:
- Single process isolation: Containers are isolated environments, each running a single application process. Each container has its own filesystem and, by default, is isolated from other containers.
- Managed by containerd: In Kubernetes, containerd is responsible for creating, starting, stopping, and managing containers on each node. Each pod consists of one or more containers managed by containerd.
- Networking: A container has its own network namespace (unless it is part of a pod, where containers share the same network).
- Single unit: A container is usually thought of as a single unit of an application, typically mapped to one image (e.g., an NGINX server running in a container).