You have got RHEL 10 up and running, and now you are thinking about setting up Kubernetes. With Red Hat pushing Podman as the default container tool and CRI-O (Container Runtime Interface for Open Container Initiative) as the go-to runtime for Kubernetes clusters, this is the perfect time to get familiar with how these pieces fit together.
This guide will cover how to install Kubernetes on RHEL 10 with one control plane node and one worker node.
This isn’t the easiest way to run Kubernetes, as tools like minikube, kind, and Red Hat’s CodeReady Containers offer quicker setup times and lower resource requirements. But this guide is arguably the most relevant if you are targeting enterprise infrastructure.
For Kubernetes specifically, we use CRI-O as the runtime. It was built specifically for Kubernetes, which means it only includes what’s needed, nothing extra. This keeps things fast and secure.
Prerequisites
Before we begin, ensure you have:
- Two RHEL 10 servers (one for the control plane and one for the worker) with root or sudo access
- An active Red Hat subscription (or a trial for testing)
- At least 2 CPUs and 2 GB of RAM on each machine
- Network connectivity between the nodes
For this tutorial, I will call the control plane node k8s-control and the worker node k8s-worker.
Initial System Setup
First, let’s prepare both our future control plane and worker nodes. Run these steps on both nodes. I know it seems tedious, but trust me, getting the foundation right saves headaches later.
Update Your System
First things first, get everything updated:
sudo dnf update -y
Reboot the system if there are any kernel updates:
sudo reboot
Configure Hostnames and Host Files
Set up your hostnames for the control plane as well as the worker node properly.
On the control plane:
sudo hostnamectl set-hostname k8s-control

On the worker node:
sudo hostnamectl set-hostname k8s-worker
Now edit /etc/hosts on both nodes and add entries for both machines so that each node can reach the other by hostname. It will make things easier for the tutorial.
sudo tee -a /etc/hosts << EOF
192.168.1.96 k8s-control
192.168.1.4 k8s-worker
EOF
Replace these IPs with whatever your machines are actually using. You can find them with ip addr or hostname -I.
Disable Swap
Kubernetes really doesn’t like swap. It wants predictable memory behavior, so swap needs to go:
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
The second command comments out the swap entry in /etc/fstab so that swap stays off after reboots.
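To double-check before moving on, free reports swap usage:

```shell
# Swap should now show 0B used and 0B total
free -h | grep -i '^swap'
```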
Configure SELinux
SELinux is great for security, but for a test cluster, it can get in the way. You can set it to permissive mode:
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
In production, you would write proper SELinux policies for Kubernetes instead of disabling it. But for learning and testing, permissive mode works fine.
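You can confirm the change took effect:

```shell
getenforce                             # should now report "Permissive"
grep '^SELINUX=' /etc/selinux/config   # the persistent setting for the next boot
```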
Load Kernel Modules
Kubernetes needs specific kernel modules for networking. Load them now and make sure they load on boot:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
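Once loaded, both modules should show up in lsmod:

```shell
# Each module should appear on its own line
lsmod | grep -E 'overlay|br_netfilter'
```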
Configure Kernel Parameters
Set up some sysctl parameters that Kubernetes requires for networking:
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system

This enables IP forwarding and lets bridge traffic go through iptables, which is essential for pod networking.
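You can read the values back to confirm they were applied (the bridge keys only exist once br_netfilter is loaded):

```shell
# Should report "net.ipv4.ip_forward = 1" after sysctl --system
sysctl net.ipv4.ip_forward
```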
Configure Firewall Rules
Open up the ports Kubernetes needs.
On the control plane node:
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=10250-10252/tcp
sudo firewall-cmd --reload
On the worker node:
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=30000-32767/tcp
sudo firewall-cmd --reload
Port 6443 is for the Kubernetes API server, 2379-2380 are for etcd, and 10250 is for the kubelet. The 30000-32767 range is for NodePort services.
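After reloading, list the open ports to verify the rules stuck:

```shell
sudo firewall-cmd --list-ports
# control plane: 6443/tcp 2379-2380/tcp 10250-10252/tcp
# worker:        10250/tcp 30000-32767/tcp
```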
Installing CRI-O
Now for the container runtime. CRI-O is not in the default RHEL repos, so we need to add the repository first. Run this on both nodes:
CRIO_VERSION=v1.34
cat <<EOF | sudo tee /etc/yum.repos.d/cri-o.repo
[cri-o]
name=CRI-O
baseurl=https://download.opensuse.org/repositories/isv:/cri-o:/stable:/$CRIO_VERSION/rpm/
enabled=1
gpgcheck=1
gpgkey=https://download.opensuse.org/repositories/isv:/cri-o:/stable:/$CRIO_VERSION/rpm/repodata/repomd.xml.key
EOF

Version 1.34 matches up with Kubernetes 1.34. You will want to keep CRI-O and Kubernetes versions aligned.
If version 1.34 packages aren’t available yet for your system, try version 1.32, or check which versions exist by browsing the repository URL above in your browser. You can also use Fedora’s native CRI-O package if you’re comfortable with that approach.
Install CRI-O, enable it, and check that it’s running:
sudo dnf install -y cri-o
sudo systemctl enable --now crio
sudo systemctl status crio
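If crictl is installed (the CRI-O repository typically pulls it in as a dependency), you can talk to the runtime directly as an extra sanity check:

```shell
# Query CRI-O over its socket; should print the runtime name and version
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
```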

Installing Kubernetes Components
Time to install kubelet, kubeadm, and kubectl. These are the core Kubernetes tools you need on every node. Again, these are not in the default repos. Add the Kubernetes repository:
KUBERNETES_VERSION=v1.34
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/$KUBERNETES_VERSION/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/$KUBERNETES_VERSION/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
Now, we can install the Kubernetes packages:
sudo dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

Enable kubelet so it starts on boot (it will restart in a loop until the cluster is initialized; that is expected):
sudo systemctl enable --now kubelet
Initialize the Control Plane
On your control plane node, initialize the cluster by running:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --cri-socket=unix:///var/run/crio/crio.sock
The pod network CIDR specifies the IP range for pod networking. We are using 10.244.0.0/16, which works well with Flannel (the network plugin we’ll install shortly). The cri-socket flag tells kubeadm to use CRI-O.
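The same settings can also be expressed in a kubeadm configuration file, which is easier to keep in version control. This is an optional, equivalent sketch (the file name is arbitrary):

```shell
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16
EOF
sudo kubeadm init --config kubeadm-config.yaml
```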

Note: If you see the “conntrack not found in system path” error, install it using:
sudo dnf install -y conntrack
This will take a minute or two. When it finishes, you will see a bunch of output, including a join command. Save that join command somewhere safe. You will need it to add your worker node.
The output will also tell you to run these commands to set up kubectl for your regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run these commands. This sets up kubectl so you can manage your cluster without sudo.
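A quick sanity check that kubectl can reach the API server:

```shell
# Should print the control plane endpoint and cluster DNS info
kubectl cluster-info
kubectl get nodes
```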
Install CNI (Container Network Interface) Plugin
Your cluster won’t work properly without a network plugin. Pods need to talk to each other across nodes, and that’s what the CNI plugin provides. We will use Flannel because it’s simple and works well:
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
Give it a minute, then check that everything is running:
kubectl get pods -n kube-flannel
You should see the flannel pods running. Also, check your nodes:
kubectl get nodes

Your control plane node should show as Ready now.
Join the Worker Node
Head over to your worker node and run that join command you saved earlier. It should look something like:
sudo kubeadm join 192.168.1.96:6443 --token oz3qhu.wr6nmjz40n8pp664 \
  --discovery-token-ca-cert-hash sha256:f7f1a8dd8b97446d3ff57caa6b9297c7336ff9bf1b19b0c03ad6f9e2fb062577

If you lost the join command, you can generate a new one on the control plane with:
sudo kubeadm token create --print-join-command
After running the join command, head back to your control plane and verify the worker joined:
kubectl get nodes

Both nodes should show as Ready. If the worker is NotReady, give it a minute. It takes a bit for everything to come up.
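If a node stays NotReady for more than a few minutes, the kubelet logs on that node usually explain why:

```shell
# On the affected node, follow the kubelet logs
sudo journalctl -u kubelet -f

# From the control plane, check the Conditions section for the worker
kubectl describe node k8s-worker
```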
Deploy and Test a Sample Application
Let’s make sure everything works by deploying a simple nginx application. This will create a deployment with three replicas and expose it as a service.
Create the deployment file called nginx-deployment.yaml:
cat <<EOF > nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
EOF
Apply it:
kubectl apply -f nginx-deployment.yaml

Watch the pods come up:
kubectl get pods -w
Create a service to expose your nginx deployment:
kubectl expose deployment nginx-deployment --type=NodePort --port=80
Check what port it got assigned:
kubectl get svc nginx-deployment
You will see something like 80:31823/TCP. That 31823 (or whatever number it shows) is the NodePort.
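If you want just the port number, for example in a script, you can pull it out with jsonpath:

```shell
# Prints only the assigned NodePort, e.g. 31823
kubectl get svc nginx-deployment -o jsonpath='{.spec.ports[0].nodePort}'
```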

Check which nodes your pods are running on:
kubectl get pods -o wide
You should see the pods running on your worker node. You may see none scheduled on the control plane, because control plane nodes are tainted by default to keep regular workloads off them.
You can now access nginx using the worker node’s IP address (or the control plane node’s, if a pod was scheduled there) on that port (31823 in this example):
curl http://k8s-worker:31823
Or from your browser if you have network access to the nodes.
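When you are done testing, clean up the demo resources:

```shell
kubectl delete svc nginx-deployment
kubectl delete -f nginx-deployment.yaml
```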

Conclusion
Setting up Kubernetes on RHEL 10 with CRI-O is not as scary as it might seem. Sure, there are a lot of moving parts, but once you understand how they fit together, it all makes sense.
The combination of RHEL 10 and CRI-O gives you a solid, enterprise-ready container platform. Red Hat has spent years refining these tools, and it shows. You get security, stability, and performance without the overhead of Docker’s daemon.
Whether you are building a lab environment to learn or setting up infrastructure for actual workloads, this setup will serve you well. Just remember to keep your system updated, monitor your cluster health, and do not be afraid to dig into the logs when something goes wrong.