From Minikube to Kind

Asish M Madhu
FAUN — Developer Community 🐾
8 min read · Jun 4, 2021


“Be kind to all”; everybody has their own stories.

Image Courtesy: https://unsplash.com/photos/njGrQxgsp5Y?utm_source=unsplash&utm_medium=referral&utm_content=creditShareLink

“Kubernetes in Docker”, aka Kind, is changing the way one can set up a k8s cluster locally. Using this tool, we can spin up multiple local k8s clusters in a matter of minutes, all running as docker containers. This makes it ideal for local development and testing of kubernetes-based applications.

Below are some of the advantages of using kind:

  1. Developer friendly; it can be used for testing k8s apps, and setups are easy to replicate.
  2. We can spin up multi-node k8s clusters with HA support.
  3. Supports building kubernetes release builds from source. Imagine the advantage of testing an upcoming k8s release locally.
  4. Supports Linux, macOS, and Windows.
  5. Makes docker images easily available to kubernetes clusters.
  6. Good documentation, and the tool is very stable.

Comparing Kind with Minikube

Minikube has been around for many years. It spins up a VM that acts as a single-node K8s cluster, so you need a hypervisor such as VirtualBox to be running. From a DevOps perspective, giving a demo with minikube has a performance cost: it takes up a lot of local system resources and can slow down the machine while you are presenting.

When I tried Kind, the main advantage was its fast boot-up time. Even the initial setup and installation completed in a matter of 2–3 minutes. Kind relies on a nested docker image capable of running systemd, kubernetes, containerd, etc. The kubernetes running inside the docker image uses containerd as its runtime.

Installation

Requirement

  1. Docker (mandatory)
  2. Go (not mandatory). You can instead install kind as a pre-built binary for your distribution; in this article we will go through the installation of kind on Ubuntu 18.04.

Installing Kind on Ubuntu 18.04

user1@asish-lab1:~$ curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.0/kind-linux-amd64 2> /dev/null
user1@asish-lab1:~$ chmod u+x kind
user1@asish-lab1:~$ sudo mv kind /usr/local/bin/
user1@asish-lab1:~$ kind --version
kind version 0.11.0

Kind mainly consists of the command-line interface (kind) and docker images called “node images”, which are capable of running nested containers, systemd, and kubernetes.

Design

Each cluster is identified by docker object labels. To run kubernetes in a container, a suitable node image is used. It consists of a standard base layer, usually Ubuntu, with basic utilities added on top, such as systemd, certificates, mounts, etc.

You can find the node images on Docker Hub under kindest/node.
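As a quick illustration of the label-based identification mentioned above, a small sketch (the label key io.x-k8s.kind.cluster is my assumption about what recent kind releases set on node containers; verify against your version):

```shell
# List kind node containers together with the cluster they belong to.
# Guarded so the command only runs when docker is available; the label key
# is an assumption about current kind releases.
LABEL_KEY="io.x-k8s.kind.cluster"
if command -v docker >/dev/null 2>&1; then
  docker ps --filter "label=${LABEL_KEY}" \
    --format '{{.Names}}\t{{.Label "io.x-k8s.kind.cluster"}}'
fi
```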

Cluster Creation

Each node image runs as a docker container. The container boots to a paused state, with its ENTRYPOINT waiting for SIGUSR1. With this setup, one can log in to the container using docker exec and view what is happening inside the cluster, as seen below in the usage section. Internally, kind mounts volumes and pre-loads saved images, preparing the node. Once the node is booted and the container runtime is ready, kubeadm does the remaining job of bringing up the k8s cluster. Once the cluster is ready, we can connect to it via kubeconfig.
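The boot sequence described above can be probed from the host. A minimal sketch, assuming docker is installed and the default “kind” cluster from the usage section below is running:

```shell
# Show the node container's entrypoint and confirm what runs as PID 1
# inside it (systemd, per the design described above). Guarded on docker
# being available; the container name assumes the default "kind" cluster.
NODE="kind-control-plane"
if command -v docker >/dev/null 2>&1; then
  docker inspect --format '{{json .Config.Entrypoint}}' "$NODE"
  docker exec "$NODE" cat /proc/1/comm
fi
```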

Usage

Creating a K8s cluster

The kind binary performs every action related to kind: creating clusters, deleting clusters, listing clusters, loading images into clusters, and so on.

Creating a cluster in Kubernetes

user1@asish-lab1:~$ kind create cluster
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.21.1) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Have a nice day! 👋
user1@asish-lab1:~$

By default, kind creates a cluster with the name “kind”.

user1@asish-lab1:~$ kind get clusters
kind

Creating multiple Clusters

We can create multiple clusters with custom names from the same local machine, by passing --name as below.

user1@asish-lab1:~$ kind create cluster --name=cluster1
Creating cluster "cluster1" ...
✓ Ensuring node image (kindest/node:v1.21.1) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-cluster1"
You can now use your cluster with:
kubectl cluster-info --context kind-cluster1
Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
user1@asish-lab1:~$ kind create cluster --name=cluster2
Creating cluster "cluster2" ...
✓ Ensuring node image (kindest/node:v1.21.1) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-cluster2"
You can now use your cluster with:
kubectl cluster-info --context kind-cluster2
Have a nice day! 👋
user1@asish-lab1:~$ kind get clusters
cluster1
cluster2
kind

Let us observe what is happening behind the scenes and connect to the docker node image for the respective cluster.

user1@asish-lab1:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
02ca87cb24f0 kindest/node:v1.21.1 "/usr/local/bin/entr…" 2 minutes ago Up About a minute 127.0.0.1:34921->6443/tcp cluster2-control-plane
a3de908d1241 kindest/node:v1.21.1 "/usr/local/bin/entr…" 4 minutes ago Up 3 minutes 127.0.0.1:38767->6443/tcp cluster1-control-plane
74b4e6e6ac1f kindest/node:v1.21.1 "/usr/local/bin/entr…" 26 hours ago Up 4 hours 127.0.0.1:41675->6443/tcp kind-control-plane
# Connecting to cluster2 example
user1@asish-lab1:~$ docker exec -it 02ca87cb24f0 bash
root@cluster2-control-plane:/#
# Inside the node container, the container runtime (CRI) used by K8s is containerd
root@cluster2-control-plane:/# crictl images
IMAGE TAG IMAGE ID SIZE
docker.io/kindest/kindnetd v20210326-1e038dc5 6de166512aa22 54MB
docker.io/rancher/local-path-provisioner v0.0.14 e422121c9c5f9 13.4MB
k8s.gcr.io/build-image/debian-base v2.1.0 c7c6c86897b63 21.1MB
k8s.gcr.io/coredns/coredns v1.8.0 296a6d5035e2d 12.9MB
k8s.gcr.io/etcd 3.4.13-0 0369cf4303ffd 86.7MB
k8s.gcr.io/kube-apiserver v1.21.1 6401e478dcc01 127MB
k8s.gcr.io/kube-controller-manager v1.21.1 d0d10a483067a 121MB
k8s.gcr.io/kube-proxy v1.21.1 ebd41ad8710f9 133MB
k8s.gcr.io/kube-scheduler v1.21.1 7813cf876a0d4 51.9MB
k8s.gcr.io/pause 3.4.1 0f8457a4c2eca 301kB
root@cluster2-control-plane:/# crictl ps
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
cc127273a6853 296a6d5035e2d 5 minutes ago Running coredns 0 3652a8a490c69
99658c4a44963 e422121c9c5f9 5 minutes ago Running local-path-provisioner 0 5ee5ac70676fc
3fedbd55b91df 296a6d5035e2d 5 minutes ago Running coredns 0 378d0944e6518
6fa3c08522eec ebd41ad8710f9 5 minutes ago Running kube-proxy 0 5239f53b3c6a1
6ec653d8066bc 6de166512aa22 5 minutes ago Running kindnet-cni 0 3416e493809bc
404125857bd43 0369cf4303ffd 6 minutes ago Running etcd 0 f513749bdbf7f
a934d84d6eef4 7813cf876a0d4 6 minutes ago Running kube-scheduler 0 3f552f0d9389f
f9e28fd2f34e5 d0d10a483067a 6 minutes ago Running kube-controller-manager 0 ad1ae8fd51090
17623e2871abf 6401e478dcc01 6 minutes ago Running kube-apiserver 0 e71a1f551bbcb

We can use kubeconfig to work with multiple contexts. Note that kubectl config set-context modifies a context's attributes; to actually switch the active context, use kubectl config use-context, as shown later.

user1@asish-lab1:~$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
kind-cluster1 kind-cluster1 kind-cluster1
* kind-cluster2 kind-cluster2 kind-cluster2
kind-kind kind-kind kind-kind
minikube minikube minikube myservice
user1@asish-lab1:~$ kubectl config set-context kind-cluster1
Context "kind-cluster1" modified.
user1@asish-lab1:~$

Deleting a Cluster

You can delete a cluster as seen below.

user1@asish-lab1:~$ kind get clusters
cluster1
cluster2
kind
user1@asish-lab1:~$ kind delete cluster --name cluster2
Deleting cluster "cluster2" ...
user1@asish-lab1:~$

Loading a docker image to k8s cluster

It is pretty easy to make a custom image available to a local k8s cluster. You can use the load option to upload a docker image from the workstation into the container runtime inside the kind node, as below. Here I am loading a local docker image into the “cluster1” k8s cluster.

user1@asish-lab1:~$ kind load docker-image asishmm/myapp1 --name cluster1
Image: "asishmm/myapp1" with ID "sha256:50fef27ce001ec109476af7c814588aeba06b713acabd3124b5c7f7e5e387fe3" not yet present on node "cluster1-control-plane", loading...
user1@asish-lab1:~$ kubectl config use-context kind-cluster1
Switched to context "kind-cluster1".
user1@asish-lab1:~$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* kind-cluster1 kind-cluster1 kind-cluster1
kind-kind kind-kind kind-kind
minikube minikube minikube myservice
user1@asish-lab1:~$ kubectl get pods
No resources found in default namespace.
user1@asish-lab1:~$ kubectl run myapp --image=asishmm/myapp1
pod/myapp created
user1@asish-lab1:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp 0/1 ContainerCreating 0 2s
user1@asish-lab1:~$ kubectl get pods myapp -o jsonpath="{..image}" ; echo
asishmm/myapp1 docker.io/asishmm/myapp1:latest

Building Images

As previously mentioned, kind runs clusters from a node image built on top of a base image. You can create your own custom node image if you have the kubernetes source on your host machine in $GOPATH/src/k8s.io/kubernetes:

kind build node-image

The build might require a minimum of 6 GB on the host machine.
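Putting the two steps together, a minimal sketch of building a custom node image and starting a cluster from it (the --image flag is accepted by both commands; the tag kindest/node:dev and the cluster name dev-cluster are illustrative choices of mine):

```shell
# Build a node image from the Kubernetes source tree and boot a cluster
# from it. Guarded so the commands only run when kind is installed and the
# source tree is actually present where kind expects it.
NODE_IMAGE="kindest/node:dev"
K8S_SRC="${GOPATH:-$HOME/go}/src/k8s.io/kubernetes"
if command -v kind >/dev/null 2>&1 && [ -d "$K8S_SRC" ]; then
  kind build node-image --image "$NODE_IMAGE"
  kind create cluster --image "$NODE_IMAGE" --name dev-cluster
fi
```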

Configuration

We can write a custom kind config and pass it to kind while creating a cluster. This is useful when you want to define a desired state for the test cluster.

Below is a sample config file defining three control-plane nodes and two workers:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: hacluster
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
- role: worker
- role: worker

Use this config file to create a k8s setup with three control-plane (master) nodes and two workers. It took approximately 5 minutes for this to complete.

user1@asish-lab1:~/lab/k8s/kind$ kind create cluster --config multi-node.yaml 
Creating cluster "hacluster" ...
✓ Ensuring node image (kindest/node:v1.21.1) 🖼
✓ Preparing nodes 📦 📦 📦 📦 📦
✓ Configuring the external load balancer ⚖️
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining more control-plane nodes 🎮
✓ Joining worker nodes 🚜
Set kubectl context to "kind-hacluster"
You can now use your cluster with:
kubectl cluster-info --context kind-hacluster
Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
user1@asish-lab1:~/lab/k8s/kind$ kubectl cluster-info --context kind-hacluster
Kubernetes control plane is running at https://127.0.0.1:41541
CoreDNS is running at https://127.0.0.1:41541/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
user1@asish-lab1:~/lab/k8s/kind$ kubectl get nodes -o wide --context kind-hacluster
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
hacluster-control-plane Ready control-plane,master 5m40s v1.21.1 172.18.0.5 <none> Ubuntu 20.10 5.4.0-73-generic containerd://1.5.1
hacluster-control-plane2 Ready control-plane,master 5m1s v1.21.1 172.18.0.6 <none> Ubuntu 20.10 5.4.0-73-generic containerd://1.5.1
hacluster-control-plane3 Ready control-plane,master 3m17s v1.21.1 172.18.0.4 <none> Ubuntu 20.10 5.4.0-73-generic containerd://1.5.1
hacluster-worker Ready <none> 2m2s v1.21.1 172.18.0.8 <none> Ubuntu 20.10 5.4.0-73-generic containerd://1.5.1
hacluster-worker2 Ready <none> 2m2s v1.21.1 172.18.0.7 <none> Ubuntu 20.10 5.4.0-73-generic containerd://1.5.1

We can use this config file in our automation jobs during CI to create a consistent test cluster whenever needed, run the tests, and then tear the cluster down. In this way, k8s clusters become ephemeral objects in the DevOps CI cycle. What a value addition.
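As a hypothetical sketch of such a CI job (the cluster name, config file name, and the test step are all placeholders of mine, not part of kind):

```shell
#!/usr/bin/env bash
# Ephemeral-cluster CI wrapper: create a throwaway kind cluster from the
# checked-in config, run tests against it, and always tear it down.
set -u

CLUSTER="ci-test"          # placeholder cluster name
CONFIG="multi-node.yaml"   # the config file from the Configuration section

run_ci() {
  kind create cluster --name "$CLUSTER" --config "$CONFIG" || return 1
  # Delete the cluster on exit, even if the tests fail.
  trap 'kind delete cluster --name "$CLUSTER"' EXIT
  kubectl cluster-info --context "kind-$CLUSTER"
  # ... run the integration test suite here ...
}

# Only attempt a real run when kind is installed.
if command -v kind >/dev/null 2>&1; then
  run_ci
fi
```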

Challenges

I did not find a way to update an existing kind config without deleting and recreating the cluster. For example, if I wanted to add a new node or make some other change to the config file and apply it, I had to delete the cluster and create it again. Let me know in the comments if there is a better way to update.

Conclusion

I believe this tool has immense value in testing k8s-based applications. Though minikube is mainly used on a local machine for development activities, kind goes beyond that and acts as a good testbed for K8s. If you find this article helpful, please share it with your friends who might get some value out of it.

Reference: https://kind.sigs.k8s.io/



I enjoy exploring various open-source tools, technologies, and ideas related to cloud computing, DevOps, and SRE, and sharing my experience and understanding of the subject.