In this guide, we will explain Kubernetes Deployments, StatefulSets, and DaemonSets, and the differences between them.
When deploying an application in Kubernetes, you have several workload controllers to choose from, each designed for specific use cases. The most common controllers are Deployments, StatefulSets, and DaemonSets. If you have ever scratched your head wondering, “What’s the real difference between these controllers?”, this guide is for you.
Understanding the differences between these controllers is crucial for designing scalable and reliable Kubernetes applications. In this guide, we will break down each of them, compare their use cases, and help you decide which one to use for your workload.
Deployments: The Go-To for Stateless Apps
In a Kubernetes environment, frontend Pods are like “disposable actors” in a play: if one crashes, another steps in immediately, and nobody notices. That’s the Deployment controller in Kubernetes!
A Deployment is a Kubernetes controller that manages stateless applications by ensuring a specified number of identical Pods (replicas) are running at all times (traffic distribution across the replicas is handled by a Service). It provides features like:
- Rolling updates and rollbacks (zero-downtime deployments)
- Scaling (increasing/decreasing replicas)
- Self-healing (replaces failed Pods automatically)
When to Use Deployments?
- Stateless apps (e.g., Nginx, React web servers, REST APIs, microservices)
- When you need easy scaling and rolling updates without downtime
- When Pods are interchangeable (no unique identity required)
How to Work with Deployments
Consider a frontend web application where each Pod serves HTTP requests independently. If one Pod fails, another takes its place seamlessly.
Deployment YAML (nginx-deployment.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
The spec.replicas field tells Kubernetes to maintain three running pods at all times. If any pod crashes or is deleted, the Deployment controller will automatically recreate it.
Applying the Deployment
kubectl apply -f nginx-deployment.yaml
This creates and manages 3 replicas of an NGINX web server, ensuring high availability and easy scaling.
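A Deployment by itself does not expose the pods to traffic; to actually spread HTTP requests across the three replicas, you would typically pair it with a Service. A minimal sketch (the Service name nginx-service is our own choice, not part of the original example):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service   # hypothetical name chosen for this sketch
spec:
  selector:
    app: nginx          # matches the labels on the Deployment's pods
  ports:
  - port: 80            # port the Service listens on
    targetPort: 80      # containerPort of the nginx pods
```

Because the selector matches the pod labels, the Service load-balances across whichever replicas are currently running, which is what makes the pods truly interchangeable from a client's point of view.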
Verifying the Deployment
kubectl get deployments
This lists all Kubernetes Deployments in your current namespace, showing their key status information:

Checking Pods
kubectl get pods
This lists all Kubernetes pods in your current namespace, showing their basic status. It shows 3 identical pods (note the shared hash ‘96b9d695’ in their names). Pod names change if the pods are recreated.
Simulate Pod Failure
Let’s delete a pod:
kubectl delete pod nginx-deployment-96b9d695-d4ndd
Now check the pods to see what happens:
kubectl get pods

The selected pod has been deleted, and a new pod has been created (the one with ‘x6n6x’ at the end of its name). This shows that deleting a pod triggers automatic recreation, maintaining 3 replicas. The new pod has a different name but an identical configuration. In other words, Deployments self-heal: they maintain the desired state.
Scaling Deployments
kubectl scale deployment nginx-deployment --replicas=5
This scales the nginx-deployment to 5 replicas. It should create two new pods. Verify this using:
kubectl get pods

Note that two new, identical pods have been created. New pods are exact replicas of the existing ones (same hash, ‘96b9d695’ in this case).
Rolling Update
kubectl set image deployment/nginx-deployment nginx=nginx:1.25
This updates the container image of the nginx container in the nginx-deployment to nginx:1.25.
kubectl rollout status deployment nginx-deployment
This command checks the rollout status of the nginx-deployment. It shows the progress and waits until the rollout either completes or fails. Now check the pods using:
kubectl get pods

Note that the new pods show a different hash (‘6f66f85776’ instead of ‘96b9d695’) and a different AGE, which means the update happened incrementally: old and new pods coexisted temporarily. This is the zero-downtime rolling update of Deployments in action, i.e., requests are served throughout.
To summarize Deployments in Kubernetes:
- Pods are ephemeral – If a Pod crashes, a new one replaces it with a different name.
- Scaling is easy – Just update replica count and apply.
- Updates roll out smoothly without downtime.
- Pods share no unique data, i.e., any Pod can handle any request.
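Rolling updates also record a revision history, so a bad release can be reverted with a single command. A quick sketch using standard kubectl rollout subcommands (run against the nginx-deployment from above):

```shell
# Show the revisions recorded for the Deployment
kubectl rollout history deployment/nginx-deployment

# Revert to the previous revision (e.g., back from nginx:1.25)
kubectl rollout undo deployment/nginx-deployment
```

The undo itself is performed as another rolling update, so reverting is just as downtime-free as the original rollout.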
StatefulSets: Manage Stateful Applications
StatefulSets are useful in cases where the app stores important data (e.g., databases). A StatefulSet is designed for stateful applications where Pods require:
- Stable, unique identities (Pods must keep their names and storage after restarts)
- Persistent storage (each Pod gets its own PersistentVolume)
- Ordered deployment & scaling (Pods start/stop one by one – no chaos!)
When to Use StatefulSets?
- For Databases (e.g., MySQL, PostgreSQL, MongoDB)
- When Pods need stable network identities
- When data persistence and ordered scaling are required
How to Use StatefulSets
Consider you’re running a MySQL cluster where each Pod has its own database storage. If MySQL restarts, it must retain the same identity and data.
Note: If your cluster does not have a dynamic StorageClass, create one using the following commands.
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
and then run
kubectl patch storageclass local-path \
-p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
StatefulSet YAML (mysql-statefulset.yaml)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:oracle
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "password"
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: mysql-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
The YAML configuration above defines a StatefulSet that deploys three replicas of a MySQL database. Unlike standard Deployments, each pod in a StatefulSet has a unique, stable identity (like mysql-0, mysql-1, mysql-2) and maintains its own persistent volume via a volumeClaimTemplate. These volumes are crucial for data durability, ensuring that even if a pod is rescheduled to a different node, its data remains intact.
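One detail to note: spec.serviceName refers to a headless Service that the manifest above does not create. StatefulSets rely on it to give each pod a stable DNS name (mysql-0.mysql, mysql-1.mysql, and so on). A minimal sketch of that Service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql          # must match serviceName in the StatefulSet
spec:
  clusterIP: None      # headless: no virtual IP, per-pod DNS records instead
  selector:
    app: mysql
  ports:
  - port: 3306
```

Apply it alongside the StatefulSet; without it, the stable network identities that make StatefulSets useful for database clustering are not available.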
Applying the StatefulSet
kubectl apply -f mysql-statefulset.yaml
This creates a StatefulSet with 3 MySQL replicas.
Verifying the StatefulSet
kubectl get statefulsets
This command lists all StatefulSets in the current Kubernetes namespace. It shows their names, replica counts, and current status.
Checking Pods
kubectl get pods

Note the ordered naming of the pods, mysql-0, mysql-1, … This shows that the pods are created sequentially.
Check Persistent Volumes
kubectl get pvc
This lists all PersistentVolumeClaims (PVCs) in the current namespace. It shows their status, capacity, and associated volumes.

Note that each pod gets its own persistent volume (the PVC name follows the pattern volumeClaimTemplateName-podName, e.g., mysql-data-mysql-0).
Simulate Pod Failure
kubectl delete pod mysql-1
This deletes the mysql-1 pod. Now, check what happens to pods using:
kubectl get pods
kubectl get pvc

Note that after deletion, a new pod with the same name and the same PVC has been created. This shows that StatefulSet pods are self-healing, and the replacement pod retains the same name (mysql-1) and the same PVC, so its data is preserved.
Verify Data Persistence
kubectl exec mysql-0 -- mysql -uroot -ppassword -e "CREATE DATABASE test_db;"
This creates a test_db database in the mysql-0 pod.
kubectl delete pod mysql-0
This deletes the mysql-0 pod. Note that the pod is automatically recreated (due to StatefulSet).
kubectl exec mysql-0 -- mysql -uroot -ppassword -e "SHOW DATABASES;" | grep test_db

It verifies that the database test_db still exists even after pod recreation, proving data persistence.
To summarize StatefulSets in Kubernetes:
- Pods have stable hostnames & storage. Pod names follow a numbered sequence, e.g., mysql-0, mysql-1, ….
- Pods have persistent storage – each Pod retains its data even if it restarts.
- Scaling is ordered – Pods start/stop one by one. mysql-0 starts first, then mysql-1, and so on.
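Ordered scaling can be observed directly: scaling up creates mysql-3 only after mysql-0 through mysql-2 are ready, and scaling down terminates the highest ordinal first. A sketch against the mysql StatefulSet from above:

```shell
# Scale up: the new pod mysql-3 is created last, in ordinal order
kubectl scale statefulset mysql --replicas=4

# Scale back down: mysql-3 is terminated first
kubectl scale statefulset mysql --replicas=3

# The PVC of a removed pod is retained by default, so scaling back
# up later reattaches the old data:
kubectl get pvc
```

This conservative behavior (ordered startup/shutdown, retained PVCs) is exactly what clustered databases need to avoid split-brain and data loss during scaling events.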
DaemonSets: Run Pods on Every Node
A DaemonSet ensures that a copy of a Pod runs on every node (or selected nodes) in the cluster. It’s typically used for Cluster-wide services (e.g., logging, monitoring agents) and Node-specific tasks (e.g., network plugins, storage daemons).
When to Use DaemonSets?
- Logging agents (e.g., Fluentd, Filebeat)
- Monitoring tools (e.g., Prometheus Node Exporter)
- Network plugins (e.g., Calico, Cilium)
How to Use DaemonSets
Consider that you need to collect logs from every node in your cluster, so you deploy Fluentd as a DaemonSet.
DaemonSet YAML (fluentd-daemonset.yaml)
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  labels:
    app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:latest
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
This YAML manifest defines a DaemonSet to ensure that a copy of a specific pod runs on every node in the cluster. With the hostPath volume, we grant Fluentd access to the node’s logs.
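To run the DaemonSet only on selected nodes rather than every node, you can add a nodeSelector to the pod template. A sketch, assuming nodes carry a hypothetical logging=enabled label (not part of the original example):

```yaml
# goes inside spec.template.spec of the DaemonSet above
nodeSelector:
  logging: "enabled"   # hypothetical label; apply it with:
                       # kubectl label node <node-name> logging=enabled
```

With this in place, the DaemonSet schedules its pod only on nodes matching the label, and automatically adds or removes pods as labels change.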
Applying the DaemonSet
kubectl apply -f fluentd-daemonset.yaml
It deploys Fluentd as a DaemonSet, ensuring one log-collector pod runs per cluster node.
Verifying the DaemonSet
kubectl get daemonsets
This command lists all DaemonSets in the current namespace, showing their name, desired/current pods, and readiness status.
Checking Pods (One per Node)
kubectl get pods

Verify Host-Level Access
kubectl exec fluentd-f4wc7 -- ls /var/log

This lists the contents of the ‘/var/log’ directory inside the ‘fluentd-f4wc7’ pod. It helps inspect log files or verify directory structure within a running container.
Demonstrate Pod Deletion
kubectl delete pod fluentd-f4wc7
This deletes the specified pod, forcing the DaemonSet to recreate it. Let’s verify using:
kubectl get pods -o wide
This lists all pods with node placement details.

Note that after the deletion, a new pod has been created on the same node. The DaemonSet immediately recreates the pod there.
To summarize DaemonSets in Kubernetes:
- One Fluentd Pod runs on every node.
- Kubernetes automatically deploys Fluentd on new nodes; no manual scaling is needed.
- Fluentd acts as a logging agent (with host-level access via hostPath).
- The DaemonSet recreates pods if they are deleted or fail.
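One practical caveat: by default, DaemonSet pods are not scheduled onto tainted nodes such as the control plane. If you want logs from those nodes too, add a toleration to the pod template (a sketch for clusters using the standard control-plane taint):

```yaml
# goes inside spec.template.spec of the DaemonSet
tolerations:
- key: node-role.kubernetes.io/control-plane
  operator: Exists
  effect: NoSchedule
```

Without this, `kubectl get pods -o wide` will show Fluentd pods on worker nodes only.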
| Feature | Deployment | StatefulSet | DaemonSet |
| --- | --- | --- | --- |
| Use Case | Stateless apps (web apps, APIs) | Stateful apps (databases) | Node-level services (logging, monitoring) |
| Scaling | Any number of Pods (instant, parallel) | Ordered scaling (one by one) | One Pod per node (automatic) |
| Pod Identity | No stable hostname | Stable hostname | Random hostname |
| Storage | Ephemeral / temporary | Persistent volumes | Optional |
Conclusion
Use Deployments if your app is stateless and you need easy scaling/updates (e.g., frontend apps, APIs).
Use StatefulSets if your app is stateful (e.g., databases) and needs stable identities.
Use DaemonSets if you need a Pod on every node (logging, monitoring agents, networking).
By choosing the right Kubernetes workload controller, you ensure your applications run efficiently, scale properly, and maintain high availability.