Setting Up a Kubernetes Cluster with Kubeadm: A Comprehensive Guide 2023


In today’s rapidly evolving technological landscape, Kubernetes has emerged as a powerful container orchestration platform. By efficiently managing and automating the deployment, scaling, and management of containerized applications, Kubernetes has become the standard for modern software development. In this comprehensive guide, we will explore the step-by-step process of setting up a Kubernetes cluster using Kubeadm, a popular tool for bootstrapping clusters.

In this blog, we will install a Kubernetes cluster on Ubuntu 22.04 using kubeadm, with a single master node and two worker nodes.

What is the use of Kubeadm in Kubernetes?

Kubeadm is a tool for bootstrapping a minimum viable Kubernetes cluster: it runs preflight checks, generates certificates, brings up the control-plane components, and prints a token-based join command for adding worker nodes.

Provision the VMs required to install Kubernetes using kubeadm.

To provision the VMs for a single-master, two-worker Kubernetes cluster, we use two pieces of software:
VirtualBox, the hypervisor responsible for running our virtual machines, and
Vagrant, an automation tool that spins up VMs on VirtualBox with the specifications you define.
As a prerequisite, you need to install both VirtualBox and Vagrant.

In the Vagrantfile, we set the master node count to one and the worker node count to two. IP addresses start from 192.168.30.*. You can check the rest of the configuration in the file.

Now run vagrant status to list the machines defined in the Vagrantfile.

Then run vagrant up; this will spin up the VMs we need in order to install kubeadm.

SSH into the VMs

You can SSH into a VM from the terminal by running a simple command:

vagrant ssh <node-name>

You can also use other SSH tools such as MobaXterm, PuTTY, or WinSCP. For those, you need the private key file, which Vagrant creates by default under .vagrant/machines/<node-name>/virtualbox/ in the project directory.

Setup Kubernetes cluster using kubeadm

Here are the high-level steps involved in setting up a Kubernetes cluster using Kubeadm:

  1. Install the container runtime (containerd) on all nodes.
  2. Install kubectl, kubeadm, and kubelet on all nodes.
  3. Initialize the kubeadm control-plane configuration on the master node.
  4. Save the join command with the token, which is needed to join the worker nodes to the master.
  5. Install the Weave Net network plugin.
  6. Join the Kubernetes worker nodes to the control plane (master) using the previously saved join command.
  7. Validate all cluster components and nodes.
  8. Deploy a sample nginx application and validate its functionality.

Step1 | Disable swap on all the Nodes

You need to disable swap on all the nodes using the following commands; otherwise, kubeadm will not work properly.

sudo swapoff -a
(crontab -l 2>/dev/null; echo "@reboot /sbin/swapoff -a") | crontab - || true

The @reboot crontab entry makes sure that swap stays off when the system reboots.
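As an alternative to the crontab entry, you can comment out the swap line in /etc/fstab so swap never comes back after a reboot. The sketch below demonstrates the sed edit on a sample file with hypothetical contents, so nothing on the real system is touched; on a real node you would run the same sed against /etc/fstab.

```shell
# Create a sample fstab (hypothetical contents) to demonstrate the edit safely
cat > /tmp/fstab.sample <<'EOF'
UUID=abcd-1234 /         ext4 defaults 0 1
/swap.img      none      swap sw       0 0
EOF

# Comment out any line whose filesystem type is "swap"
# (on a real node: sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab)
sed -i '/\sswap\s/ s/^/#/' /tmp/fstab.sample

cat /tmp/fstab.sample
```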

Step2 | Configure iptables Bridged Traffic on all the Nodes

Run the following commands on all the nodes so that iptables can see bridged traffic.

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system

Verify that the br_netfilter and overlay modules are loaded and that the sysctl variables are set, using the following commands.

lsmod | grep br_netfilter
lsmod | grep overlay

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

Step3 | Install containerd Runtime On All The Nodes

The basic requirement for a Kubernetes cluster is a container runtime. You can use any one of the following container runtimes.

  1. CRI-O
  2. Docker Engine
  3. containerd

We will be using containerd for this setup. There are multiple ways to install containerd; since we are using an Ubuntu image, we will install it with the apt package manager.

To install containerd, run the following commands on all nodes.

sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg

sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update

sudo apt-get install -y containerd.io

Step4 | Install cgroup Drivers.

To interact with control groups (cgroups), the kubelet and the container runtime need a cgroup driver. There are two cgroup drivers.

  1. cgroupfs
  2. systemd

All cluster components should use the same cgroup driver: the init system, the container runtime, and the kubelet should all match.

Check which init system your machine uses; if PID 1 is systemd, use the systemd cgroup driver.

ps -p 1
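If you want just the command name rather than the full `ps` table, a small sketch (the variable name `init_comm` is mine) is:

```shell
# Capture only the command name of PID 1; "systemd" means the systemd
# cgroup driver is the right choice. Fall back to "unknown" if ps cannot
# see PID 1 (e.g. in some containers).
init_comm=$(ps -p 1 -o comm= 2>/dev/null || echo unknown)
echo "PID 1 is: ${init_comm:-unknown}"
```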

Next, configure containerd to use the systemd cgroup driver. Open /etc/containerd/config.toml with vi and replace the file's contents with the following block:

version = 2
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
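Instead of hand-editing the file, a common approach is to generate containerd's full default configuration and flip the SystemdCgroup flag with sed. Because containerd may not be installed where you try this, the sketch below runs the same sed against a sample snippet; on a real node you would first run `containerd config default | sudo tee /etc/containerd/config.toml` and then apply the sed to the real file.

```shell
# Sample of the relevant config.toml section (on a real node this comes
# from `containerd config default`)
cat > /tmp/config.toml.sample <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF

# Flip the cgroup driver to systemd
# (real node: sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml)
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /tmp/config.toml.sample

grep SystemdCgroup /tmp/config.toml.sample
```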

Now, Restart the containerd service

sudo systemctl restart containerd
sudo systemctl status containerd

Step5 | Install kubectl, kubelet and kubeadm on all three Nodes

Install the packages required for the kubeadm installation.

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

Add the apt repository.

echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Update and install the latest version of Kubectl, kubelet, and kubeadm.

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl

If you want to install a specific version instead, pin the package versions.

sudo apt-get install -y kubelet=1.26.1-00 kubectl=1.26.1-00 kubeadm=1.26.1-00

Hold the Kubernetes packages to prevent auto updates.

sudo apt-mark hold kubelet kubeadm kubectl

Step 6 | On Master Node Initialize Kubeadm to Setup Control Plane

Here we have nodes with only private IP addresses and we access the API server over the private IP of the master node.

Run the following steps on the master node.

In this demo, we are using the private IP for our master node.
First, find the IP address of your VM (the master node).
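The init command below relies on a few shell variables. A sketch of how you might set them, using hypothetical example values; on your own master node you would derive IPADDR and NODENAME, e.g. with `hostname -I` and `hostname -s`:

```shell
# Hypothetical example values -- substitute your master node's details
IPADDR="192.168.30.10"      # e.g. $(hostname -I | awk '{print $1}')
NODENAME="master-node"      # e.g. $(hostname -s)
POD_CIDR="10.244.0.0/16"    # must not overlap the node network

echo "advertise=$IPADDR node=$NODENAME cidr=$POD_CIDR"
```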

Run the following init command; it will set up kubeadm advertising the master node's private IP address.

sudo kubeadm init --apiserver-advertise-address=$IPADDR  --apiserver-cert-extra-sans=$IPADDR  --pod-network-cidr=$POD_CIDR --node-name $NODENAME --ignore-preflight-errors Swap

If you are configuring a Kubeadm cluster on Cloud platforms and require access to the master API server via the Public IP of the master node server, the only variation lies in the Kubeadm initialization command for Public and Private IPs.

If you intend to use the public IP of the master node, set IPADDR to the public IP (for example, IPADDR=$(curl -s ifconfig.me)) and then initialize the control plane with the same kubeadm command.

sudo kubeadm init --apiserver-advertise-address=$IPADDR  --apiserver-cert-extra-sans=$IPADDR  --pod-network-cidr=$POD_CIDR --node-name $NODENAME --ignore-preflight-errors Swap

After successfully initializing Kubeadm, you will receive an output containing the location of the kubeconfig file and the join command along with the token.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join <control-plane-ip>:6443 --token e31pbp.hinfrk282r7gppuq \
        --discovery-token-ca-cert-hash sha256:5a385e239d646c25f9887466fc36f545591cead28a5d4a3e6580d9cf1b18f74e

It is crucial to copy and save this information in a file as it will be required for adding the worker node to the master.

To create the kubeconfig in the master and enable interaction with the cluster API using kubectl, execute the following commands based on the output received.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


Next, verify the kubeconfig by running the following kubectl command, which lists all the pods in the kube-system namespace.

kubectl get pods -n kube-system

The expected output should show the two CoreDNS pods in a Pending state. This is normal; they will transition to Running once the network plugin is installed.

Step7 | Install Weaveworks Network Plugin for Pod Networking

To enable pod networking, you need to install a network plugin, since kubeadm does not configure one by default. For this setup, I will use the Weave Net network plugin.

Please note that when executing the following kubectl command, ensure you are in the directory where you have configured the kubeconfig file. You can execute it either from the master node or your workstation, as long as you have connectivity to the Kubernetes API.

To install the Weave Net network plugin on the cluster, run the following command (using the Weave Net release manifest):

kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
root@kubemaster:/home/vagrant# kubectl get ds -A
NAMESPACE     NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   5m19s
kube-system   weave-net    1         1         1       1            1           <none>                   2m2s
root@kubemaster:/home/vagrant# kubectl edit ds weave-net -n kube-system
daemonset.apps/weave-net edited

After some time, if you check the pods in the kube-system namespace, you should see the Weave Net pods and the CoreDNS pods running successfully.

Step8 | Join Worker Nodes To Kubernetes Master Node(Control plane node)

Assuming that containerd, kubelet, and kubeadm utilities are already set up on the worker nodes, we can proceed to join the worker node to the master node using the Kubeadm join command obtained during the master node setup.

If you missed copying the join command, you can recreate the token with the join command by executing the following command on the master node.

kubeadm token create --print-join-command

Remember to use sudo if you are running as a normal user. This command handles the TLS bootstrapping process for the nodes.

sudo kubeadm join <master-ip>:6443 --token j4eice.33vgvgyf5cxw4u8i \
    --discovery-token-ca-cert-hash sha256:37f94469b58bcc8f26a4aa44441fb17196a585b37288f85e22475b00c36f1c61

Once successfully executed, you will see the output indicating that the node has joined the cluster. To verify if the node has been added to the master, execute the following kubectl command from the master node.

kubectl get nodes

Example output,

root@master-node:/home/vagrant# kubectl get nodes
NAME            STATUS   ROLES           AGE     VERSION
master-node     Ready    control-plane   14m     v1.24.6
worker-node01   Ready    <none>          2m13s   v1.24.6
worker-node02   Ready    <none>          2m5s    v1.24.6
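If you want to script the readiness check instead of eyeballing it, you can parse the `kubectl get nodes` output. The sketch below runs against the sample output above, pasted into a file so the snippet is self-contained; on the cluster you would pipe `kubectl get nodes` into the same awk.

```shell
# Sample `kubectl get nodes` output (copied from above)
cat > /tmp/nodes.txt <<'EOF'
NAME            STATUS   ROLES           AGE     VERSION
master-node     Ready    control-plane   14m     v1.24.6
worker-node01   Ready    <none>          2m13s   v1.24.6
worker-node02   Ready    <none>          2m5s    v1.24.6
EOF

# Flag any node whose STATUS column is not "Ready"
awk 'NR>1 && $2 != "Ready" {bad=1} END {print (bad ? "NOT ALL READY" : "ALL READY")}' /tmp/nodes.txt
```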

You can also check the pods running across all namespaces:

kubectl get pods -A
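Step 8 of the overview promised a sample nginx deployment. Below is a minimal manifest sketch (the name nginx-demo, its labels, and the image tag are my choices, not from the original guide); write it to a file, then apply it from the master node with kubectl.

```shell
# Write a minimal nginx Deployment manifest (names/labels are hypothetical)
cat > /tmp/nginx-demo.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
EOF

# On the master node, apply and validate:
#   kubectl apply -f /tmp/nginx-demo.yaml
#   kubectl get pods -l app=nginx-demo
echo "wrote manifest to /tmp/nginx-demo.yaml"
```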
