Environment for comparing several on-premise Kubernetes distributions (K3s, MicroK8s, KinD, kubeadm)

Peter Gillich
FAUN — Developer Community 🐾
9 min read · Jan 3, 2021


There are several Kubernetes on-premise distributions. Some of them are lightweight, others use more VMs or physical nodes.

It’s hard to select the best free solution, so I created a make-based development environment on Ubuntu (and partly Windows) to compare them. See the install description here: https://github.com/pgillich/kind-on-dev

The baseline distribution is kubeadm running in VMs (created by Vagrant), which represents the production environment.

The other distributions are candidates for the developer and CI environments. These distributions are container-based (not VM-based), in order to keep resource consumption low. The developer/CI environment should be as similar to the production environment as possible, in order to catch bugs in an early phase and maintain the same or similar deployments. Multi-node support and fast cluster cleanup are also important.

Minikube, Minishift and Red Hat CRC are ruled out, because they run in a VM, instead of in containers.

There are commercial alternatives, too, for example:

Distributions and components

The environment supports the distributions listed in the title: K3s, MicroK8s, KinD, and kubeadm in VMs (Vagrant).

The components of the deployments are similar:

  • External Load Balancer: MetalLB (not deployed for K3s; addon in MicroK8s)
  • Ingress Controller: Traefik (built into K3s)
  • Helm
  • Kubernetes Dashboard
  • Metrics Server (built into K3s; addon in MicroK8s)
  • Prometheus (with additional exporters and scrapers) and its Alertmanager
  • Grafana (with dozens of pre-installed K8s dashboards)
  • NFS storage (experimental)
  • Ephemeral Containers (only for KinD and Kubeadm/Vagrant; experimental)

The Prometheus-related components (Prometheus and its Alertmanager, Grafana) simulate application workload.

An image proxy/cache was not configured (it would speed up the KinD deployment).

Preparation and deployment

The deployment can run on different hardware. The Vagrant configuration supports VirtualBox and libvirt/KVM hypervisors. The 4 distributions use different backends (K3s: embedded containerd, MicroK8s: containerd, KinD: Docker, kubeadm: VM) and different external address spaces, so they may run in parallel on the same Ubuntu OS (the MicroK8s and KinD MetalLB address spaces are in conflict), for example:

K3s, KinD, kubeadm (Vagrant) on same host

Preparation is described at https://github.com/pgillich/kind-on-dev .

There are a few packages that should be installed on the host machine. The make environment helps with this, too:

  • make install-kubectl (if not installed yet)
  • make install micro (only for MicroK8S, if not installed yet)
  • make install-docker (only for KinD, if not installed yet)
  • make install-kind (only for KinD, if not installed yet)
  • make install-kvm (only for libvirt/KVM)
  • make generate-vagrant (only for Vagrant; always needed)
  • DO_VAGRANT_ALIAS=true make vagrant-install (only for Vagrant, if not installed yet and vagrant will be used from the CLI)
  • make install-helm (if not installed yet)

In the optimal case, only the K8s distribution has to be selected in the main config file:
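
A minimal sketch of such a selection (assuming the main config file is the .env file mentioned later, setting the same K8S_DISTRIBUTION variable that can also be overridden on the make command line):

# .env: hypothetical minimal content; the list of possible values is an assumption
K8S_DISTRIBUTION=kind    # e.g. k3s, microk8s, kind or vagrant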

After the needed installations and configurations, the cluster can be deployed by the below command:

make all 

At the end of the deployment, access information will be printed out, for example:

Example setups

K3s can be deployed in the shortest time. K3s and KinD were deployed faster on a VirtualBox VM than on bare metal. The reason for this strange result is the disk: the VirtualBox VM disk was on an SSD, while the bare-metal OS used an HDD.

Cluster nodes:

  • K3s: 1 (master and worker on same node)
  • MicroK8S: 1 (master and worker on same node)
  • Kind: 4 (1 master, 3 worker)
  • kubeadm in VMs: 3 (1 master, 2 workers)

HW parameters:

  • low-end laptop: Intel Celeron, 2 CPUs, 4 GB RAM, HDD
  • medium laptop: Intel i7-8565U, 8 CPUs, 16 GB RAM, SSD
  • low-end desktop: AMD Phenom II, 4 CPUs, 16 GB RAM, HDD
  • VirtualBox VM on low-end desktop: 4 CPUs, 8 GB RAM, SSD

Deployment times:

Low-end laptop, Lubuntu 16.04

  • K3s (1 node): 6 min
  • Kind (3 workers): 14 min

Medium laptop, Ubuntu 20.04

  • K3s (1 node): 3.5 min
  • MicroK8s (1 node): 2.5 min
  • KinD (1 worker): 4.5 min
  • KinD (3 workers): 6 min
  • kubeadm in KVM VMs (1 master, 2 workers): 10.5 min

Low-end desktop, Ubuntu 18.04

  • K3s (1 node): 4 min
  • Kind (3 workers): 8.5 min
  • kubeadm in KVM VMs (1 master, 2 workers): 23 min

Low-end desktop, Windows 10

  • kubeadm in VirtualBox VMs (1 master, 2 workers): 17.5 min

Low-end desktop, Windows 10, VirtualBox VM, Lubuntu 20.04

  • K3s (1 node): 2.5 min
  • Kind (3 workers): 6.5 min
  • Kind (1 worker): 6 min
  • kubeadm in nested KVM VMs (1 master, 2 workers): 40 min

Applications

The clusters are ready for installing your own applications. Simple examples can be found in my earlier articles:

Detailed comparison

A lot of comparisons can be found on the Internet about the K8s distributions, so this article focuses on the experiences.

It’s possible to run 3 different deployments on the same host, but it needs more HW resources. On weaker hardware, only 2 deployments can run at the same time. It’s possible to start another distribution’s install without changing the .env file, for example:

make all K8S_DISTRIBUTION=k3s

Deployment time

Each make rule waits for the component deployment to finish (for example: until all new Pods are running) before starting the next step. This makes the whole deployment take longer, but I experienced troubles when the component deployments were congested.
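
Such a wait step can be illustrated with a plain kubectl wait call (only a generic sketch, not the repo's exact rule; the namespace is an arbitrary example):

# Block until all Pods of the just-deployed component are Ready, then let the next make target run
kubectl wait --namespace metallb-system --for=condition=Ready pod --all --timeout=300s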

The deployment time depends on the number of nodes. Please keep this fact in mind if you try to compare the deployment times.

K3s

K3s is a tiny K8s distribution with a lot of simplifications. It's packed into one binary together with additional components, for example: CRI plugin (containerd), CNI plugin (Flannel), Ingress Controller (Traefik), Local Path Provisioner, Helm, Metrics Server. Some (alpha) features are disabled. It's possible to enable/disable the optional components with flags, as shown below. Altogether, the whole binary is less than 60 MB.
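
For example, the bundled components can be switched off with the documented --disable server flag (only a sketch; which components to disable depends on the use case):

# Install K3s without the bundled Traefik and Metrics Server
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik --disable metrics-server" sh -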

K3s runs the K8s components in the built-in containerd by default (Docker can also be used). K3s is installed as a systemd service (daemon). After starting the daemon, a few seconds are needed to start the K8s components and a few more seconds to start the Pods (the whole startup can take longer than 1 minute, depending on the HW).
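
The startup progress can be followed with standard commands (a sketch, assuming the default k3s service name created by the official install script):

# Check the K3s daemon, then watch the node and the kube-system Pods become Ready
sudo systemctl status k3s
kubectl get nodes
kubectl -n kube-system get pods --watch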

The component deployments below are skipped:

  • MetalLB (not needed for 1-node cluster)
  • Flannel (already deployed)
  • Traefik (already deployed)
  • Metrics Server (already deployed)
  • Local Path Provisioner (already deployed)

The Traefik dashboard is not enabled by default, so it is enabled after the K3s install.

There was an interesting experience when a host ran 3 different deployments at the same time. K3s detected that disk usage on the image filesystem was over the high threshold (85%) and tried to evict Pods, including the K8s component CoreDNS. This kind of behavior should be investigated more deeply for a production environment.

The default Kubernetes API address:port configuration overlaps with KinD, so the K3s listening port was changed to avoid this issue, as sketched below.
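
A sketch of moving the API port away from the default 6443 (using the documented --https-listen-port server flag; 6444 is only an example value):

# Run the K3s API server on port 6444, so it does not collide with KinD on the same host
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--https-listen-port 6444" sh -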

There are distributions based on K3s:

  • K3d, which can run multiple K3s nodes in one Kubernetes cluster.
  • k3OS, which is a disk/VM image with pre-installed K3s. Raspberry Pi is also supported.

MicroK8s

MicroK8s is a tiny K8s distribution. It has addons, which can be activated from the CLI. If HA is enabled, the CNI is Calico; if HA is disabled, the CNI is Flannel, so this solution automatically disables HA in MicroK8s, roughly as sketched below.
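
A sketch of that step (the ha-cluster addon name is the one used by recent MicroK8s releases; the exact flags may differ between versions):

# Switch HA clustering off so that MicroK8s falls back to the Flannel CNI
microk8s disable ha-cluster
microk8s status --wait-ready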

The addons below are activated, instead of installing the components with kubectl or Helm (see the sketch after the list):

  • dns (CoreDNS)
  • storage
  • metallb
  • metrics-server
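
Enabling these from the CLI looks roughly like this (the MetalLB address range is only an example, not the range used by this environment):

# Activate the built-in addons instead of deploying the components manually
microk8s enable dns storage metrics-server
# The metallb addon asks for an address pool; this range is only an example
microk8s enable metallb:10.64.140.43-10.64.140.49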

MicroK8s has a lot of built-in logic, which is convenient if our use case fits the implemented logic, but it causes headaches if our use case does not fit the MicroK8s concept. The logic is spread across many lines of several source files, which is hard to maintain and document. For example, the HA-based CNI decision (Calico/Flannel) is handled in several places in the source code, see a part of it at https://github.com/ubuntu/microk8s/blob/master/microk8s-resources/actions/disable.cilium.sh

The cluster reset takes a long time (minutes), or never finishes, because of the log message below:

Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]

On Ubuntu 20.04, HA could not be disabled. The microk8s inspect command reported errors (about flanneld and etcd), because those services had exited (with a success code):

$ systemctl status snap.microk8s.daemon-flanneld.service
● snap.microk8s.daemon-flanneld.service - Service for snap application microk8s.daemon-flanneld
Loaded: loaded (/etc/systemd/system/snap.microk8s.daemon-flanneld.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Sat 2021-01-16 18:59:25 CET; 7min ago
Process: 20890 ExecStart=/usr/bin/snap run microk8s.daemon-flanneld (code=exited, status=0/SUCCESS)
Main PID: 20890 (code=exited, status=0/SUCCESS)
jan 16 18:59:25 ubuntu-20 systemd[1]: Started Service for snap application microk8s.daemon-flanneld.
jan 16 18:59:25 ubuntu-20 systemd[1]: snap.microk8s.daemon-flanneld.service: Succeeded.

The workaround was uninstalling MicroK8s (with --purge), installing it again and restarting the computer, roughly as sketched below.
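
A sketch of that workaround with the standard snap commands (no channel is pinned here):

# Remove MicroK8s together with its data, reinstall it, then reboot
sudo snap remove microk8s --purge
sudo snap install microk8s --classic
sudo reboot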

KinD

KinD runs the K8s components in Docker containers. VMs aren't needed, so, similarly to K3s, it has a much lower footprint than a VM-based deployment.

KinD was created for testing Kubernetes itself, but it can also be used for CI and local runs. KinD can handle multiple clusters on one host. It does not have plugins and addons, which at first looks like a disadvantage compared to K3s and MicroK8s. Using plugins and addons is comfortable if the needed config option is supported and well documented; otherwise this nice feature causes headaches if we want a deployment similar to a real Kubernetes cluster.

The KinD deployment takes longer than the K3s deployment because K3s is optimized for a low footprint, rather than for providing as many Kubernetes features as possible.

The latest KinD version (0.9.0) makes the configuration of feature gates easier, because it's enough to set it in one place (in the cluster config), instead of setting flags on half a dozen K8s components. This KinD deployment enables the Ephemeral Containers feature.
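
A sketch of such a cluster config (the featureGates field of the kind.x-k8s.io/v1alpha4 Cluster config; the node list is only an example):

# kind-config.yaml: enable the EphemeralContainers feature gate on all components
cat > kind-config.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
  EphemeralContainers: true
nodes:
- role: control-plane
- role: worker
EOF
kind create cluster --config kind-config.yaml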

Flannel cannot be installed on KinD, because a binary file is missing on the nodes. See more details at: https://medium.com/swlh/customise-your-kind-clusters-networking-layer-1249e7916100

In order to compare consumed resources to K3s, a one-worker KinD cluster (1 master, 1 worker) was installed:

K3s and KinD consumed resources, inside of the cluster

Note: KinD deployment uses more namespaces, because:

  • MetalLB is not needed for K3s
  • Local Path Provisioner is deployed to kube-system namespace on K3s

It looks like KinD uses 2–5x more resources, but it’s not quite true. Let’s see the details…

K3s, consumed Pod resources:

KinD, consumed Pod resources:

Some Pods on KinD are duplicated; this is normal because KinD runs 2 nodes (master, worker). The Pods below are missing from the list of K3s Pods:

  • kube-system, etcd-<CLUSTER>-control-plane
    K3s uses SQLite, instead
  • kube-system, kindnet
    K3s uses built-in Flannel
  • kube-system, kube-apiserver-<CLUSTER>-control-plane
    Built into K3s server
  • kube-system, kube-controller-manager-<CLUSTER>-control-plane
    Built into K3s server
  • kube-system, kube-proxy
    Built into K3s server
  • kube-system, kube-scheduler
    Built into K3s server
  • metallb-system, all:
    not installed on K3s

The K3s server processor and memory usage can be printed out with top:
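
For example, a generic way to point top at the process (only a sketch; the process selection may need adjusting):

# Show only the k3s server process in top
top -p "$(pgrep -fo 'k3s server')"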

It can be seen that the k3s server process consumes ~7% CPU and ~1 GB RAM.

Summary of the K3s vs. KinD resource consumption:

  • Overall: K3s consumes fewer resources.
  • Inside the cluster, K3s uses fewer resources, but outside the cluster, the K3s server consumes ~7% CPU and ~1 GB RAM.
  • KinD uses more resources because there are Deployments and DaemonSets which run Pods on all nodes (KinD runs 2 nodes, K3s runs 1 node).
  • K3s uses fewer resources because it has simplified and reduced built-in components.

Kubeadm on VMs (Vagrant)

This deployment follows the official kubeadm-based Kubernetes install on Ubuntu hosts (tested with 18.04 and 20.04 images). The cluster deployment is managed by Vagrant. The goal of this deployment is installing a K8s cluster from scratch, following the official install guide. There was another expectation: the install steps must be easy to understand (a condensed sketch of the kubeadm steps follows the file list below):

  • install-ubuntu.sh: OS install
  • Vagrantfile: K8s cluster install
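
For reference, the core of the official kubeadm flow that these files follow looks roughly like this (a condensed sketch; the pod CIDR is only an example and the real join command comes from the kubeadm init output):

# On the master VM: initialize the control plane
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# Make kubectl work for the current user
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# On each worker VM: join the cluster with the token printed by kubeadm init
# sudo kubeadm join <MASTER_IP>:6443 --token <TOKEN> --discovery-token-ca-cert-hash sha256:<HASH>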

There are many clever deployment solutions, for example, the suggested Kubespray, but it’s hard to understand because it’s written in Ansible.

Deployment takes longer than with the container-based distributions, because the OS install takes a long time and virtualization has overhead. There is another drawback: dedicated CPU and RAM must be allocated to the VMs, which needs more HW resources.

Several options must be set to make the installed components work together, which needs extra work compared to K3s and KinD; see more details in the Vagrant README.md. This deployment enables the Ephemeral Containers feature.

There are known install issues with the libvirt Vagrant plugin (Ruby compile errors), so a custom Docker image is used to run Vagrant.
