In the last post, I explored setting up a k3s Kubernetes cluster. However, it struck me that we didn’t really explain what Kubernetes is or how it actually works. So, this post will provide some background to make things clearer for future posts.
Throughout this series, we’ll delve into a more theoretical exploration of how Kubernetes and other technologies work.
What is Kubernetes?
According to the official Kubernetes documentation:
Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
Kubernetes is like a superhero for managing your apps. Picture it as your personal team of robots that handle everything from starting, stopping, and scaling your apps to making sure they’re always available and running.
Why Should You Care About Kubernetes?
Alright, let’s get real here. You probably don’t need to host your static website with Kubernetes. But when it comes to playing in the major leagues with large-scale deployments, Kubernetes can be very handy, offering you things like:
- Service discovery and load balancing: Expose containers via DNS or IP, with load balancing to stabilize traffic.
- Storage orchestration: Automatically mount storage systems like local storage or cloud providers.
- Automated rollouts and rollbacks: Describe desired container states for controlled updates.
- Automatic bin packing: Optimize resource utilization by fitting containers onto nodes.
- Self-healing: Restart, replace, or kill unresponsive containers for reliable operation.
- Secret and configuration management: Securely store and manage sensitive data without rebuilding images.
- Batch execution: Manage batch and CI workloads, replacing failed containers as needed.
- Horizontal scaling: Scale applications up or down easily based on demand.
- Extensibility: Add features to your Kubernetes cluster without altering source code.
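Several of these capabilities are themselves configured declaratively. As an illustration of horizontal scaling, a hedged sketch of a HorizontalPodAutoscaler manifest might look like this (all names here are hypothetical, not from this series):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa              # hypothetical name
spec:
  scaleTargetRef:            # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web                # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when avg CPU exceeds 70%
```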
Using or not using Kubernetes depends on the specific use case. For example, if you’re running a small, static website with low traffic, Kubernetes might be overkill and a simpler hosting solution would suffice. However, for complex, distributed applications with high scalability and reliability requirements, such as e-commerce platforms or microservices architectures, Kubernetes can be highly beneficial.
Ultimately, the decision to use Kubernetes should be based on the specific needs and characteristics of your application and infrastructure.
Just remember, not every website needs a nuclear reactor to run, so choose wisely! ☢️
But wait… How does it work? 🏎️
First of all, when you deploy Kubernetes, you get a cluster. A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one node.
Kubernetes works by coordinating a cluster of machines to manage and run containerized applications.
Here’s a simplified overview:
Kubernetes Architecture
A Kubernetes cluster is made up of two types of nodes:
- The worker node(s) host the Pods that are the components of the application workload.
- The control plane node(s) manage the worker nodes and the Pods in the cluster.
Master Node / Control Plane
Controls the cluster and manages its components.
Formed by the following components:
- API Server: Acts as the gateway for users and other components to interact with the cluster.
- Scheduler: Decides which worker nodes should run the pods based on their resource requirements and other constraints.
- Controller Manager: Monitors the cluster state and makes sure everything runs fine by managing various controllers.
- etcd: Stores the configuration data and state of the cluster, serving as the cluster’s memory bank.
Worker Nodes
Host containers and run application workloads.
Common Components
These components run on every node, maintaining running Pods and providing the Kubernetes runtime environment.
- Kubelet: An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
- Kube-proxy: A network proxy that runs on each node in your cluster, maintaining network rules.
- Container runtime: Service in charge of running the containers.
- Optional Addons: DNS, Network Plugins, Web UI, Container Resource Monitoring, …
More info about how these components interact with each other can be found in the Kubernetes official documentation.
Kubernetes Main Objects
Kubernetes objects are persistent entities that represent the state of your cluster. They define what applications or workloads are running, what resources they use, and how they interact with each other.
In Kubernetes, YAML is used to define configurations for objects. It’s human-readable, declarative, and organizes metadata and specifications for each object.
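For example, a minimal object definition (here a hypothetical ConfigMap) shows the top-level layout shared by every Kubernetes object:

```yaml
apiVersion: v1            # which API version the object belongs to
kind: ConfigMap           # what type of object this is
metadata:                 # identifying data: name, namespace, labels, ...
  name: example-config    # hypothetical name
data:                     # kind-specific fields (most kinds use `spec` here)
  greeting: hello
```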
Namespaces 🏠
Namespaces in Kubernetes are a way to divide cluster resources between multiple users, teams, or applications.
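A minimal Namespace manifest might look like this (the name is hypothetical):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-dev   # hypothetical namespace for one team's workloads
```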
Pods 🏃♂️
The smallest deployable units in Kubernetes. A pod encapsulates one or more containers, storage resources, and an IP address.
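A minimal Pod manifest, here running a single Nginx container purely as an illustration, could look like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod        # hypothetical name
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.25  # tag pinned for the example
      ports:
        - containerPort: 80
```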
Pods are the basic building blocks of applications in Kubernetes and are managed as a single entity. More on that in Pods section of the kubernetes official documentation.
Workload Management 🏗️
Refers to the management of applications and tasks within the cluster.
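As a sketch, a hypothetical CronJob that runs a nightly batch task might be defined like this:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report       # hypothetical name
spec:
  schedule: "0 2 * * *"      # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: report
              image: busybox:1.36
              command: ["sh", "-c", "echo generating report"]
          restartPolicy: OnFailure   # re-run failed containers
```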
This includes deploying, scaling, and updating Pods using resources such as Deployments, StatefulSets, DaemonSets, and Jobs. More on that in the Workload Management section of the Kubernetes official documentation.
Volumes 📄
Provide a way to persist data in containers. Volumes can be attached to containers within pods and enable data to survive container restarts or failures.
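A hedged example of a Pod using an emptyDir volume for scratch space (all names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cache-pod          # hypothetical name
spec:
  containers:
    - name: app
      image: nginx:1.25
      volumeMounts:
        - name: cache
          mountPath: /var/cache/app   # where the volume appears in the container
  volumes:
    - name: cache
      emptyDir: {}         # scratch space tied to the Pod's lifetime
```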
Services 👨🍳
Enable communication between different parts of an application within the Kubernetes cluster. A service defines a logical set of pods and a policy for accessing them.
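A sketch of a ClusterIP Service exposing hypothetical Pods labeled `app: nginx`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service      # hypothetical name
spec:
  type: ClusterIP          # internal-only; NodePort/LoadBalancer expose externally
  selector:
    app: nginx             # routes to Pods carrying this label
  ports:
    - port: 80             # port the Service listens on
      targetPort: 80       # port on the container
```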
Kubernetes supports different types of Services, including ClusterIP, NodePort, and LoadBalancer, each providing a different level of network accessibility. More on that in the Services, Load Balancing, and Networking section of the Kubernetes official documentation.
Likewise, Kubernetes supports various types of volumes, including emptyDir, hostPath, persistentVolumeClaim, and cloud storage volumes. More on that in the Storage section of the Kubernetes official documentation.
Ingress 🎟️
Provide external access to services within the Kubernetes cluster. An Ingress defines rules for routing external HTTP and HTTPS traffic to services based on hostnames or paths.
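An illustrative Ingress routing HTTP traffic for a hypothetical hostname to a hypothetical Service:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress           # hypothetical name
spec:
  rules:
    - host: example.com         # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service   # hypothetical Service to route to
                port:
                  number: 80
```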
Other Kubernetes objects
In this article we have covered the main Kubernetes objects, but there are many others that enable various kinds of actions, such as ConfigMap, Secret, Job, CronJob, ServiceAccount, Role, ClusterRole, RoleBinding, ClusterRoleBinding…
What happens exactly when we create an object?
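As a concrete reference for the walkthrough below, a minimal Deployment manifest consistent with it (Nginx image, a desired replica count; names are illustrative) might look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment     # hypothetical name
spec:
  replicas: 3                # desired replica count
  selector:
    matchLabels:
      app: nginx
  template:                  # Pod template the controller stamps out
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
```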
Inside Kubernetes components, the following actions occur when you deploy a Deployment object with the provided YAML configuration:
The API Server receives the YAML configuration file defining the Deployment object, validates the configuration against the Kubernetes API schema, and stores the validated configuration in etcd, the cluster’s distributed key-value store.
The Controller Manager detects the creation of the Deployment object and creates the Pods based on the specified template and desired replica count, leaving their scheduling requirements to the Scheduler.
From then on, the Controller Manager monitors the state of the Deployment and ensures it matches the desired state defined.
The Scheduler then assigns the Pods to suitable worker nodes based on resource availability and other scheduling constraints.
The Kubelet on each worker node sees (via the API Server) that Pods have been scheduled to its node, pulls the specified container image (in this case, Nginx) from the container registry, and creates and manages the lifecycle of the Pods based on the specifications defined in the Deployment’s Pod template.
The container runtime instantiates and runs the Nginx container inside the Pods. Once the container is deployed, the runtime manages its lifecycle, including starting, stopping, and monitoring its health.
Each Pod gets its own unique cluster-wide IP address.
The next step is creating a Service to allow other Pods (or an Ingress) to access this Deployment.
If we want to access it from outside the cluster, an Ingress object should be created.
Conclusions
As you have read in this article, the essence of Kubernetes is that it automates the deployment, scaling, and management of containerized applications, making it easier to run complex applications across a cluster of machines.
Stay connected on my social media channels, and feel free to suggest future post topics!