How to manually build your own K8s cluster from scratch?! Using kubeadm

Mostafa Wael

Using kubeadm, you can create a minimum viable Kubernetes cluster that conforms to best practices. In fact, you can use kubeadm to set up a cluster that will pass the Kubernetes Conformance tests. kubeadm also supports other cluster lifecycle functions, such as bootstrap tokens and cluster upgrades.

The kubeadm tool is good if you need:

  • A simple way for you to try out Kubernetes, possibly for the first time.
  • A way for existing users to automate setting up a cluster and test their application.
  • A building block in other ecosystems and/or installer tools with a larger scope.

You can install and use kubeadm on various machines: your laptop, a set of cloud servers, a Raspberry Pi, and more. Whether you're deploying into the cloud or on-premises, you can integrate kubeadm into provisioning systems such as Ansible or Terraform. [Reference].

Before you begin

You should have at least three servers, each with 2 GB or more of RAM (any less will leave little room for your apps) and 2 or more CPUs, preferably running Ubuntu 20.04 LTS (Focal Fossa).


Set hostnames to your servers

The first step is to set a hostname on each of your servers (nodes) so that you can identify them easily.

  • On the control plane node: sudo hostnamectl set-hostname k8s-control
  • On the first worker node: sudo hostnamectl set-hostname k8s-worker1
  • On the second worker node: sudo hostnamectl set-hostname k8s-worker2

Now, you can log out of your servers and log back in; you will see that the hostname has changed.
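
You can also confirm the change without logging out; hostnamectl with no arguments prints the current static hostname:

hostnamectl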

Ensure full network connectivity among all machines in the cluster

To do this, on all nodes, set up the hosts file so that every node can reach the others using the predefined hostnames.

  • Open the hosts file: sudo vim /etc/hosts
  • Add the following at the end of the file.
<control plane private IP> k8s-control 
<worker 1 private IP> k8s-worker1
<worker 2 private IP> k8s-worker2
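
For example, with hypothetical private IPs (yours will differ), the finished entries might look like:

10.0.1.10 k8s-control
10.0.1.11 k8s-worker1
10.0.1.12 k8s-worker2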

Make sure you have updated the file on all your nodes.

We have used the private IPs instead of the public ones because a public IP may change when the server restarts.

Now, you can log out of your servers and log back in for the changes to take effect.

Install/Enable some kernel modules

We need these modules to be enabled when the server starts.

cat << EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
  • br_netfilter module: required to enable transparent masquerading and to facilitate Virtual Extensible LAN (VxLAN) traffic for communication between Kubernetes pods across the cluster nodes.
  • overlay module: provides OverlayFS, the union filesystem that containerd uses by default to stack container image layers.

But we want these modules loaded right now, without restarting the system. We can do this using the modprobe command.

sudo modprobe overlay 
sudo modprobe br_netfilter
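
You can confirm that both modules are loaded:

lsmod | grep -E 'overlay|br_netfilter'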

Add some network configurations

Kubernetes also needs a few kernel network settings; we will add them to a configuration file under sysctl.d.

cat << EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

Now, run sudo sysctl --system to load those configurations.
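
To confirm the settings took effect, you can query them directly; each should print = 1:

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables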


Install Containerd

Containerd is a container runtime that manages the lifecycle of a container on a physical or virtual machine (a host). It is a daemon process that creates, starts, stops and destroys containers. It is also able to pull container images from container registries, mount storage, and enable networking for a container. [Reference]

To run containers in Pods, Kubernetes uses a container runtime. By default, Kubernetes uses the Container Runtime Interface (CRI) to interface with your chosen container runtime. If you don’t specify a runtime, kubeadm automatically tries to detect an installed container runtime by scanning through a list of known endpoints. If multiple or no container runtimes are detected kubeadm will throw an error and will request that you specify which one you want to use.

See container runtimes for more information.

Note: Docker Engine does not implement the CRI which is a requirement for a container runtime to work with Kubernetes. For that reason, an additional service cri-dockerd has to be installed. cri-dockerd is a project based on the legacy built-in Docker Engine support that was removed from the kubelet in version 1.24. [Reference]

  1. Install the package: sudo apt-get update && sudo apt-get install -y containerd
  2. Create a directory for the configuration file: sudo mkdir -p /etc/containerd
  3. Generate the default configuration with sudo containerd config default and write it to the configuration file with sudo tee /etc/containerd/config.toml. We can combine the two and simply run sudo containerd config default | sudo tee /etc/containerd/config.toml instead.
  4. Restart containerd so that it picks up this configuration: sudo systemctl restart containerd.
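
Putting those steps together, the whole sequence on one node looks like this (the status check at the end is just a sanity test; it should report active (running)):

sudo apt-get update && sudo apt-get install -y containerd
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl status containerd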

Make sure to do these steps on all your nodes.

Install k8s packages

Before installing the packages, we should disable swap, as k8s requires it to be disabled: sudo swapoff -a
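
Note that swapoff -a only disables swap until the next reboot. To keep it off permanently, a common approach (a sketch; review your fstab before editing it) is to comment out the swap line:

sudo sed -i '/ swap / s/^/#/' /etc/fstab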

  1. Install some helper packages for fetching the repository (they may already be available on your system): sudo apt-get update && sudo apt-get install -y apt-transport-https curl
  2. Add the repository signing key: curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
  3. Configure the repository itself by creating the file kubernetes.list with a reference to the k8s repository:
cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list 
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

4. Update local packages: sudo apt-get update
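
If you want to see which package versions the repository offers before pinning one:

apt-cache madison kubelet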

5. Now, install the k8s packages and make sure all the versions are the same: sudo apt-get install -y kubelet=1.24.0-00 kubeadm=1.24.0-00 kubectl=1.24.0-00

More details:

  • kubeadm: the command to bootstrap the cluster.
  • kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
  • kubectl: the command line util to talk to your cluster.

kubeadm will not install or manage kubelet or kubectl for you, so you will need to ensure they match the version of the Kubernetes control plane you want kubeadm to install for you. If you do not, there is a risk of a version skew occurring that can lead to unexpected, buggy behavior. However, one minor version skew between the kubelet and the control plane is supported, but the kubelet version may never exceed the API server version. For example, the kubelet running 1.7.0 should be fully compatible with a 1.8.0 API server, but not vice versa. [Reference]

6. Last but not least, we need to disable automatic updates of those packages. When working with k8s, we prefer to have manual control over the version to avoid conflicts: sudo apt-mark hold kubelet kubeadm kubectl
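
You can verify both the holds and the installed version:

apt-mark showhold        # should list kubeadm, kubectl, kubelet
kubeadm version -o short # should print v1.24.0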

Make sure to do these steps on all your nodes.

Initialize the cluster and set up kubectl access

To create our cluster:

  1. Initialize the cluster using kubeadm, giving the pods a network CIDR, which is just the IP range used by the internal virtual pod network (192.168.0.0/16 is the default range Calico expects): sudo kubeadm init --pod-network-cidr 192.168.0.0/16 --kubernetes-version 1.24.0
  2. Add our Kube configurations, to be able to interact with the cluster:
mkdir -p $HOME/.kube 
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
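
Alternatively, if you are running as the root user, you can point kubectl at the admin config directly (kubeadm prints this same alternative at the end of init):

export KUBECONFIG=/etc/kubernetes/admin.conf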

3. If this command works without problems, your cluster was created properly: kubectl get nodes

You can see that your node status is NotReady; that’s because we haven’t configured our network plugin yet.

4. Install the Calico network add-on: kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml . There are multiple other plugins you can use too.

Calico, from network software provider Tigera, is a third-party plugin for Kubernetes geared to make full network connectivity more flexible and easier. Out of the box, Kubernetes provides the NetworkPolicy API for managing network policies within the cluster. [Reference]
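
You can watch the Calico pods come up in the kube-system namespace; the node flips to Ready once they are all Running:

kubectl get pods -n kube-system --watch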

5. Finally, let’s join our worker nodes to the cluster. We can do this by generating a join command on the control plane node, copying its output, and running it as root on each worker node: kubeadm token create --print-join-command
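
The printed command will look roughly like the following (the address, token, and hash are placeholders; use the exact output from your control plane):

sudo kubeadm join <control plane private IP>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>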

Just wait!

Wait a minute or two, then run kubectl get nodes to see that all your nodes are in the Ready status.
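
The output should look something like this (the names match the hostnames we set earlier):

NAME          STATUS   ROLES           AGE   VERSION
k8s-control   Ready    control-plane   12m   v1.24.0
k8s-worker1   Ready    <none>          3m    v1.24.0
k8s-worker2   Ready    <none>          3m    v1.24.0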

Congratulations! You have manually created your own k8s cluster, nearly from scratch!

I am Mostafa Wael, a DevOps & Cloud engineer, mainly interested in infrastructure, Linux, and software engineering. Don’t forget to hit the Clap and Follow buttons to help me write more articles like this.

Enjoy the feeling of power!
