How to Install Multi-Node K3s Cluster on Ubuntu 24.04


K3s is a lightweight Kubernetes distribution designed for simplicity and low resource usage. It is ideal for labs, learning Kubernetes, edge deployments, and small environments.

In this guide, we will learn how to install a multi-node K3s cluster on Ubuntu 24.04 step by step. One node will act as the server (control plane) and the other as an agent (worker).

Prerequisites

  • Ubuntu 24.04 installed on two or more machines or VMs
  • A local user with sudo access
  • A stable internet connection on all nodes
  • All nodes reachable over the network
  • Minimum 1 GB RAM per node (2 GB recommended)

Note: For this tutorial, I have set up two Ubuntu machines with Vagrant.

This is the Vagrantfile I have used.

Vagrant.configure("2") do |config|
  config.vm.define "master" do |master|
    master.vm.box = "bento/ubuntu-24.04"
    master.vm.hostname = "master"
    master.vm.network "private_network", ip: "10.10.10.10"
  end

  config.vm.define "worker1" do |worker1|
    worker1.vm.box = "bento/ubuntu-24.04"
    worker1.vm.hostname = "worker1"
    worker1.vm.network "private_network", ip: "10.10.10.11"
  end
end

You can use these commands to access these two machines:

vagrant up
vagrant ssh master
vagrant ssh worker1
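Once both VMs are up, it is worth confirming that they can reach each other over the private network before installing anything. A quick check, using the IPs from the Vagrantfile above (adjust for your environment):

```shell
# From the master VM: ping the worker's private IP
# (10.10.10.11 in the Vagrantfile above).
ping -c 3 10.10.10.11
```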

1) Update System Packages

Always start with updated packages to avoid dependency issues. Run the following command on every node.

sudo apt update && sudo apt upgrade -y

Reboot if the kernel was updated.

sudo reboot
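On Ubuntu, a simple way to tell whether a reboot is actually needed is to check for the marker file the update process leaves behind:

```shell
# Ubuntu creates this marker file when an update requires a reboot
if [ -f /var/run/reboot-required ]; then
    echo "Reboot required"
else
    echo "No reboot needed"
fi
```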

2) Set Hostname

The hostname lets us identify each machine by name instead of by IP. We have already configured hostnames in the Vagrantfile, but if you are running this directly on two separate cloud servers, this step is needed.

For the master node, which acts as the main controller:

sudo hostnamectl set-hostname master


For the worker node, set an identifiable hostname:

sudo hostnamectl set-hostname worker1
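With hostnames set, you may also want each node to resolve the others by name. A minimal sketch, run on every node, assuming the private IPs from the Vagrantfile above:

```shell
# Append name-to-IP mappings so the nodes can reach each
# other by hostname (IPs taken from the Vagrantfile above).
cat <<'EOF' | sudo tee -a /etc/hosts
10.10.10.10 master
10.10.10.11 worker1
EOF
```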

3) Install Multi-Node K3s Cluster on Ubuntu 24.04

On the master node, install the K3s server with this single curl command:

curl -sfL https://get.k3s.io | sh -
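The install script also accepts environment variables that control the installation. For example, you can pin a specific K3s release, and on Vagrant VMs it is often useful to make the server advertise the private-network IP instead of the default NAT interface. A sketch, where the version string and the eth1 interface name are assumptions based on this guide's setup:

```shell
# Pin a release and bind the node to the private IP
# (version, IP, and interface are examples; adjust to your setup).
curl -sfL https://get.k3s.io | \
  INSTALL_K3S_VERSION="v1.30.4+k3s1" \
  INSTALL_K3S_EXEC="--node-ip 10.10.10.10 --flannel-iface eth1" \
  sh -
```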

4) Verify Server Node

Check that the K3s service is running on the master node with this command:

systemctl status k3s

Systemctl output showing K3s Kubernetes service running on server

Check the node status:

sudo k3s kubectl get nodes

K3s Kubernetes cluster nodes status
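The sudo prefix is needed because kubectl reads the root-owned kubeconfig at /etc/rancher/k3s/k3s.yaml. If you prefer running plain kubectl without sudo, one common approach is to copy the kubeconfig to your user (K3s bundles a kubectl binary):

```shell
# Copy the K3s kubeconfig to the default kubectl location
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown "$(id -u):$(id -g)" ~/.kube/config
export KUBECONFIG=~/.kube/config

# Plain kubectl now works without sudo
kubectl get nodes
```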

5) Get the Node Join Token

K3s generates a token that you can pass to every worker node.

sudo cat /var/lib/rancher/k3s/server/node-token

K3s server node token retrieval

Copy this value.

Also note the server’s IP address:

ip a
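If you want to capture the token and IP non-interactively (for example, to script the join step), a sketch — the awk field here assumes the Vagrant private address is the second one listed by hostname -I; adjust for your environment:

```shell
# On the master: capture the join token and server IP
NODE_TOKEN=$(sudo cat /var/lib/rancher/k3s/server/node-token)
# hostname -I lists all addresses; field 2 is an assumption
# based on the Vagrant setup in this guide.
SERVER_IP=$(hostname -I | awk '{print $2}')
echo "$SERVER_IP $NODE_TOKEN"
```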

6) Install K3s Agent (Worker Node)

Run these commands on every worker node. A reminder to first update packages:

sudo apt update && sudo apt upgrade -y

Join the cluster:

curl -sfL https://get.k3s.io | \
K3S_URL=https://<CONTROL_PLANE-IP>:6443 \
K3S_TOKEN=<NODE-TOKEN> \
sh -

Worker node joining k3s control plane using K3S_URL and K3S_TOKEN command

Replace:

  • <CONTROL_PLANE-IP> with the control plane (master) node's IP.
  • <NODE-TOKEN> with the token from step 5.
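With the placeholders filled in, a complete join command for the Vagrant setup in this guide would look like this (the token value is a placeholder; use the one you copied in step 5):

```shell
# Join using the Vagrant master IP; K3S_TOKEN is a placeholder.
curl -sfL https://get.k3s.io | \
  K3S_URL=https://10.10.10.10:6443 \
  K3S_TOKEN=<NODE-TOKEN> \
  sh -
```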

7) Verify Agent Service

Run these commands on the worker node:

systemctl status k3s-agent

It should be in the active (running) state. If it is not, double-check that the token and server URL you passed in the previous step are correct.

k3s-agent Service Status on worker node
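If the agent fails to start, the service logs usually point to the cause (a wrong token, an unreachable server, or a firewall blocking port 6443). A quick way to investigate, assuming the Vagrant master IP from this guide:

```shell
# Inspect recent agent logs on the worker
journalctl -u k3s-agent --no-pager -n 50

# Verify the server's API port is reachable from the worker
nc -zv 10.10.10.10 6443
```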

8) Verify Multi-Node Cluster

Head back to the server (control plane) node and run the following command to check the status of both the control plane and the worker node.

sudo k3s kubectl get nodes -o wide

kubectl get nodes wide output verifying k3s control plane and worker nodes

As you can see, it reports one control-plane node and one worker node, and all nodes are in the Ready state.

Additionally, you might want to label the worker nodes. You can do this with a simple command:

sudo k3s kubectl label node worker1 node-role.kubernetes.io/worker=worker
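If you have several workers, you can label all of them at once by selecting every node that is not a control-plane node. A sketch:

```shell
# Label every non-control-plane node as a worker
for node in $(sudo k3s kubectl get nodes \
    -l '!node-role.kubernetes.io/control-plane' \
    -o jsonpath='{.items[*].metadata.name}'); do
  sudo k3s kubectl label node "$node" node-role.kubernetes.io/worker=worker
done
```

Rerunning kubectl get nodes afterwards should show "worker" in the ROLES column.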

9) Test Cluster Functionality

Now, you can try deploying nginx as a sample deployment.

sudo k3s kubectl create deployment nginx --image=nginx --replicas=4 
sudo k3s kubectl get pods -o wide

k3s multi-node cluster scheduling nginx pods on control plane and worker nodes
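To confirm the deployment actually serves traffic, you can expose it with a NodePort service and curl it from any node. A sketch, using the Vagrant master IP from this guide:

```shell
# Expose the nginx deployment on a NodePort and test it
sudo k3s kubectl expose deployment nginx --port=80 --type=NodePort
NODE_PORT=$(sudo k3s kubectl get svc nginx \
  -o jsonpath='{.spec.ports[0].nodePort}')
curl -s "http://10.10.10.10:${NODE_PORT}" | head -n 4
```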

Conclusion

That’s all for this guide. I hope you found it informative and useful. Feel free to share your feedback and questions in the comments section below.

This setup is perfect for learning Kubernetes, testing workloads, and building a home lab. Building on it, you can try deploying services using YAML manifests, add ingress and load balancer support, and explore Helm and monitoring tools.


