Static Pods, Manual Scheduling, Labels, and Selectors in Kubernetes: Day 13 of 40daysofkubernetes
Introduction
In Kubernetes architecture, the Kube-Scheduler plays a crucial role in assigning Pods to nodes. When the API Server receives a request to schedule Pods, it passes this request to the Scheduler, which then decides the most efficient node for each Pod. But here's an interesting question: since the Kube-Scheduler is itself a Pod, who schedules the Scheduler Pod? The answer lies in the concept of Static Pods.
In this blog, we will explore Static Pods in depth. Let's get started!
Static Pods
Static Pods are a special type of Pod in Kubernetes that are directly managed by the kubelet on a specific node, rather than being managed by the Kubernetes API Server. They are primarily used for deploying critical system components, like the Kube-Scheduler and Kube-Controller-Manager, especially in environments where Kubernetes is bootstrapping itself.
Key Characteristics of Static Pods:
Directly Managed by kubelet: Unlike regular Pods, Static Pods are created and managed by the kubelet running on each node. The kubelet watches a specific directory on the filesystem for Pod manifest files and creates or deletes Pods based on the presence of these files.
No API Server Involvement: Static Pods do not go through the API Server for their lifecycle management. They cannot be created, updated, or deleted with `kubectl` commands; the only way to change them is to edit their manifest files on the node.
Node-specific: Static Pods are tied to a specific node. If the node goes down, the Static Pods running on it are not rescheduled to other nodes automatically.
Used for Bootstrapping: Static Pods are often used to bootstrap a Kubernetes cluster. For example, the Kube-Scheduler and Kube-Controller-Manager are typically started as Static Pods so that they are available to manage the other components of the cluster.
Static Pod Manifests: The configuration files for Static Pods are typically located in `/etc/kubernetes/manifests`, or in another directory specified in the kubelet's configuration. These manifest files define the desired state of the Static Pods.
Automatic Mirror Pods: When a Static Pod is created, the kubelet automatically creates a corresponding mirror Pod on the API Server. This mirror Pod is informational only; it allows tools like `kubectl` to see that the Static Pod is running, even though the actual Pod is managed by the kubelet.
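To make this concrete, here is a minimal sketch of what a Static Pod manifest can look like; the file path, Pod name, and image below are illustrative, not taken from a real cluster:

```yaml
# /etc/kubernetes/manifests/static-web.yaml (illustrative path and name)
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: web
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
```

As long as this file exists in the kubelet's manifest directory, the kubelet keeps the Pod running; on a node named `node01`, the mirror Pod appears in the API as `static-web-node01`.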
Managing Static Pods
To manage Static Pods, you modify the manifest files located in the `/etc/kubernetes/manifests` directory on the node. Here's how you can do it:
Access the Node: Use Docker to exec into the node (in a kind cluster, each node runs as a Docker container):

```
docker exec -it <node-name> bash
```
Navigate to the Manifests Directory:

```
cd /etc/kubernetes/manifests
ls -lrt
```
You'll see the YAML files for all Static Pods. If you remove the `kube-scheduler.yaml` file and create a new Pod, it will not be assigned to any node because the Kube-Scheduler Pod is no longer running. Let's see an example.
Remove and Restore the Scheduler Manifest:

```
mv kube-scheduler.yaml /tmp/
```
Create a new Pod and observe that it remains in the Pending state. Describing the Pod will show that no node has been assigned to it. When you move the `kube-scheduler.yaml` file back to its original location, the Pod will be scheduled and start running.
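Putting the steps together, the experiment looks roughly like this (the Pod name is illustrative, and the commands assume a running kind cluster, so treat this as a sketch rather than something to copy verbatim):

```shell
# On the control-plane node (inside docker exec):
mv /etc/kubernetes/manifests/kube-scheduler.yaml /tmp/

# From your workstation:
kubectl run nginx --image=nginx
kubectl get pods            # STATUS shows Pending
kubectl describe pod nginx  # Events show no node has been assigned

# Restore the scheduler; the Pod is scheduled shortly afterwards:
mv /tmp/kube-scheduler.yaml /etc/kubernetes/manifests/
kubectl get pods            # STATUS becomes Running
```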
Manual Scheduling
Manual scheduling in Kubernetes involves explicitly assigning a Pod to a specific node without relying on the automated scheduling logic of the Kube-Scheduler. This approach can be useful in scenarios where you need to control exactly where a Pod runs, such as for performance reasons, licensing constraints, or specific hardware requirements.
Example
First, remove the `kube-scheduler.yaml` file from `/etc/kubernetes/manifests`. Then, create a `pod1.yaml` file with the following content:
```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
  nodeName: cka-cluster-worker
```
In this YAML file, the `nodeName` field specifies the node on which we want to schedule our Pod. Apply it using `kubectl`:

```
kubectl apply -f pod1.yaml
```
You can see that our Pod is created and scheduled on the node we specified, even though the Kube-Scheduler is not running, because setting `nodeName` bypasses the scheduler entirely.
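To confirm the placement, check which node the Pod landed on; the `-o wide` output includes a NODE column:

```shell
kubectl get pods -o wide
# The NODE column should show cka-cluster-worker for the nginx Pod
```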
Labels and Selectors
Labels and selectors are fundamental concepts in Kubernetes used to organize and manage resources. Labels are key-value pairs attached to objects, such as Pods, that can be used to identify and group them. Selectors are used to filter and select objects based on their labels.
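In day-to-day use, labels and selectors are most visible through `kubectl`'s `-l`/`--selector` flag. A few illustrative commands (the label keys and values here are assumptions for the example):

```shell
# List only Pods carrying the label app=v1
kubectl get pods -l app=v1

# Equality-based and set-based selectors can be combined
kubectl get pods -l 'app=v1,tier in (backend,frontend)'

# Add or change a label on a running Pod
kubectl label pod nginx env=prod --overwrite

# Show all labels alongside each Pod
kubectl get pods --show-labels
```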
Example
Here’s an example of a Deployment that uses labels and selectors:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  labels:
    tier: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: v1
  template:
    metadata:
      name: nginx
      labels:
        app: v1
    spec:
      containers:
      - name: nginx
        image: nginx:1.23.0
```
Key Sections
Deployment Metadata
Deployment Spec
Pod Template
Deployment Metadata
```yaml
metadata:
  name: nginx-deploy
  labels:
    tier: backend
```

`name: nginx-deploy`: This sets the name of the Deployment to `nginx-deploy`.
`labels: tier: backend`: This label is assigned to the Deployment object itself, indicating that this Deployment is part of the `backend` tier. This label is useful for identifying and grouping the Deployment within the Kubernetes cluster, but it is not used in the Pod scheduling process.
Deployment Spec
```yaml
spec:
  replicas: 3
  selector:
    matchLabels:
      app: v1
```

`replicas: 3`: Specifies that three replicas of the Pod should be running.
`selector.matchLabels: app: v1`: This selector tells the Deployment which Pods it is responsible for managing. It will select and manage all Pods that have the label `app: v1`.
Pod Template
```yaml
template:
  metadata:
    name: nginx
    labels:
      app: v1
  spec:
    containers:
    - name: nginx
      image: nginx:1.23.0
```

`template.metadata.name: nginx`: This sets the name of the Pods created by the Deployment to `nginx`.
`labels: app: v1`: This label is applied to each Pod created by the Deployment. It is crucial because it must match the selector defined in the Deployment spec (`selector.matchLabels: app: v1`).
`spec.containers`: Defines the container specification for the Pods, in this case running the `nginx` container with the image `nginx:1.23.0`.
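A quick way to see how strictly the selector and the template labels must agree is to break the match on purpose. The filename below is an assumption for the example:

```shell
# Edit the Deployment so matchLabels says app: v2
# while the Pod template still carries app: v1, then:
kubectl apply -f nginx-deploy.yaml
# This fails validation: the API Server rejects a Deployment
# whose selector does not match the template labels
```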
Conclusion
In this YAML file, labels and selectors are used to:
Assign meaningful metadata to the Deployment (`tier: backend`).
Define the specific labels for the Pods (`app: v1`).
Use selectors to manage a group of Pods based on their labels (`app: v1`).
Thank you for reading my blog. If you have any queries, please comment, and I will make sure to address them all.