What’s new in Kubernetes Version 1.20 and How to upgrade to 1.20.x?

Rakesh Jain
FAUN — Developer Community 🐾
13 min read · Jan 13, 2021


Kubernetes 1.20 was released on Dec 8, 2020! They call it “The Raddest Release”.

Kubernetes, as a technology, a platform, and a business model, is growing by leaps and bounds. The Kubernetes team continues to improve the user experience by enhancing the feature set.

This was a much-awaited release this year.

What does this release bring you?

As per their official announcement, “This release consists of 42 enhancements: 11 enhancements have graduated to stable, 15 enhancements are moving to beta, and 16 enhancements are entering alpha.”

For those who are not aware of these development stages:

Pre-alpha: all activities performed on a software project before formal testing. These can include requirements analysis, software design, software development, and unit testing.

Alpha: the alpha phase of the release life cycle is the first phase of software testing. It can be described as an early (typically unstable) version of a program or application.

Beta: the beta phase generally begins when the software is feature-complete but likely to contain a number of known or unknown bugs. It has further stages, such as perpetual beta and open or closed beta.

Release candidate: a release candidate (RC), also known as “going silver”, is a beta version with the potential to be a stable product, which is ready to release unless significant bugs emerge.

Stable release: also called the production release, the stable release is the last release candidate (RC), which has passed all verifications and tests.

Major Highlights from v1.20

A heads-up on the Docker deprecation

Dockershim, the container runtime interface (CRI) shim for Docker, is being deprecated. Support for Docker is deprecated and will be removed in a future release (planned for version 1.22 next year).

This doesn’t affect Kubernetes environments running on cloud platforms such as EKS on AWS, AKS on Azure, Red Hat OpenShift, and so on, as they have already moved to other default container runtimes such as containerd and CRI-O.

Please note that Docker-produced images will continue to work in your clusters with all CRI-compliant runtimes, as Docker images follow the Open Container Initiative (OCI) image specification.

For more details, go through my article, Kubernetes deprecated docker!, or the Kubernetes community blog.

Exec Probe Timeout Handling

A longstanding bug regarding exec probe timeouts that may impact existing pod definitions has been fixed. Prior to this fix, the field timeoutSeconds was not respected for exec probes. Instead, probes would run indefinitely, even past their configured deadline, until a result was returned. With this change, the default value of 1 second will be applied if a value is not specified, and existing pod definitions may no longer be sufficient if a probe takes longer than one second.
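
For example, if an exec probe legitimately needs more than a second, the timeout should now be stated explicitly. A minimal sketch (the script path and values are illustrative, not from the original article):

livenessProbe:
  exec:
    command:
    - /bin/sh
    - -c
    - /opt/healthcheck.sh      # hypothetical check that may take a few seconds
  initialDelaySeconds: 10
  periodSeconds: 30
  timeoutSeconds: 10           # explicitly allow up to 10 seconds; the default is now enforced at 1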

If you just want to keep the previous behavior, set the newly added feature gate ExecProbeTimeout to false.
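
Assuming the kubelet is configured through a KubeletConfiguration file, the gate can be switched off like this (a sketch; the gate can also be passed via the kubelet’s --feature-gates flag):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  ExecProbeTimeout: false   # keep the old behavior of ignoring timeoutSeconds for exec probes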

More details can be found here.

Volume Snapshot Operations are now stable

This feature provides a standard way to trigger volume snapshot operations and allows users to incorporate snapshot operations in a portable manner on any Kubernetes environment and supported storage providers.
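
For instance, with a CSI driver that supports snapshots (plus the external snapshot controller and CRDs installed), you can snapshot a PVC declaratively. A sketch, where the class and claim names are assumptions:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-snapshot
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # hypothetical snapshot class
  source:
    persistentVolumeClaimName: data-pvc             # hypothetical existing PVC to snapshot

Once the snapshot is ready, it can be referenced as a dataSource when creating a new PVC to restore the data.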

Kubectl Debug Graduates to Beta

From now on you can use the kubectl debug command for common debugging tasks.
You should now use kubectl debug instead of kubectl alpha debug.
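
A couple of typical invocations (the pod and node names are illustrative; the ephemeral-container form additionally requires the alpha EphemeralContainers feature gate):

# Attach an ephemeral debug container to a running pod
kubectl debug -it mypod --image=busybox

# Create a debuggable copy of a pod with an extra debug container
kubectl debug mypod -it --image=busybox --copy-to=mypod-debug

# Debug a node via a pod with the host filesystem mounted under /host
kubectl debug node/kworker-rj1 -it --image=ubuntu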

For more information on this go here.

GA: Process PID Limiting for Stability

This feature adds support for configuring the kubelet to limit the number of PIDs a pod can use, which limits a pod’s potential impact on other pods on the node.
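
Assuming the kubelet is driven by a KubeletConfiguration file, the per-pod PID limit is a single field (the value below is an example; tune it for your workloads):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
podPidsLimit: 4096   # each pod on this node may use at most 4096 process IDs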

For more details go here.

Alpha: Graceful node shutdown

Currently, when a node shuts down, pods do not follow the expected pod termination lifecycle and are not terminated gracefully, which can cause issues for some workloads. The GracefulNodeShutdown feature is now in alpha. It makes the kubelet aware of node system shutdowns, enabling graceful termination of pods during a system shutdown.
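
A sketch of enabling it via the kubelet configuration (the durations are examples; since the feature is alpha, the gate must be switched on explicitly):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  GracefulNodeShutdown: true
shutdownGracePeriod: 30s              # total time the node delays shutdown for pod termination
shutdownGracePeriodCriticalPods: 10s  # portion of that time reserved for critical pods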

Alpha with updates: IPv4/IPv6 dual-stack

This allows both IPv4 and IPv6 service cluster IP addresses to be assigned to a single service and also enables a service to be transitioned from single to dual IP stack and vice versa.
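
On a cluster with dual-stack networking and the IPv6DualStack feature gate enabled, a Service can request both families. A sketch (the service and selector names are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ipFamilyPolicy: PreferDualStack   # fall back to single-stack if the cluster can't do both
  ipFamilies:
  - IPv4
  - IPv6
  selector:
    app: my-app
  ports:
  - port: 80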

Beta: API Priority and Fairness

Kubernetes 1.20 now enables API Priority and Fairness (APF) by default. This allows kube-apiserver to categorize incoming requests by priority levels.
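
With APF on, the API server seeds a set of default PriorityLevelConfiguration and FlowSchema objects (in the flowcontrol.apiserver.k8s.io API group) that you can inspect:

kubectl get prioritylevelconfigurations
kubectl get flowschemas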

CronJobs (previously ScheduledJobs)

CronJobs (previously ScheduledJobs) are meant for performing all time-related actions, namely backups, report generation, and the like. Each of these tasks should be allowed to run repeatedly (once a day/month, etc.) or once at a given point in time.

To try this feature out you will need to enable the CronJobControllerV2 feature gate.
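
For instance, a nightly job might look like this (the name, image, and schedule are illustrative); the CronJobControllerV2 gate itself is enabled on the kube-controller-manager, e.g. with --feature-gates=CronJobControllerV2=true:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 2 * * *"          # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: busybox
            args: ["/bin/sh", "-c", "echo running backup"]   # placeholder for a real backup command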

How to upgrade your Kubernetes cluster to v1.20?

Now let’s walk through upgrading your Kubernetes cluster from v1.19.x to v1.20.x.

Prerequisites

A working Kubernetes cluster with at least one master and one worker node.

My Environment

Kubernetes Master Node ->
172.42.42.200 kmaster-rj.example.com/kmaster-rj, Ubuntu 18.04 LTS

Kubernetes Worker Nodes ->
172.42.42.201 kworker-rj1.example.com/kworker-rj1, Ubuntu 18.04 LTS
172.42.42.202 kworker-rj2.example.com/kworker-rj2, Ubuntu 18.04 LTS
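
Before starting, confirm the cluster’s current state; kubeadm upgrades one minor version at a time, so all nodes should already be on v1.19.x. A quick sanity check:

root@kmaster-rj:~# kubectl get nodes          # every node should report a v1.19.x kubelet
root@kmaster-rj:~# kubectl version --short    # shows client and server versions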

Check the newest version available (On Master/Control plane node)

Update the package cache first.

root@kmaster-rj:~# apt update
Hit:2 http://ppa.launchpad.net/bashtop-monitor/bashtop/ubuntu bionic InRelease
Hit:3 http://security.ubuntu.com/ubuntu bionic-security InRelease
Hit:4 http://archive.ubuntu.com/ubuntu bionic InRelease
Hit:5 http://archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:1 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Hit:6 http://archive.ubuntu.com/ubuntu bionic-backports InRelease
Reading package lists... Done
Building dependency tree
Reading state information... Done
142 packages can be upgraded. Run 'apt list --upgradable' to see them.

You will find the latest 1.20 version in the list; it should look like 1.20.x-00.

root@kmaster-rj:~# apt-cache madison kubeadm
kubeadm | 1.20.1-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.20.0-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.19.6-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.19.5-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.19.4-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.19.3-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.19.2-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.19.1-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.19.0-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages

Upgrading control plane nodes

The upgrade procedure on control plane nodes should be executed one node at a time.

Step 1: Upgrade kubeadm (On Master/Control plane node)

root@kmaster-rj:~# apt-get update && apt-get install -y --allow-change-held-packages kubeadm=1.20.1-00
Hit:2 http://archive.ubuntu.com/ubuntu bionic InRelease
Hit:3 http://ppa.launchpad.net/bashtop-monitor/bashtop/ubuntu bionic InRelease
Hit:4 http://archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:5 http://security.ubuntu.com/ubuntu bionic-security InRelease
Hit:6 http://archive.ubuntu.com/ubuntu bionic-backports InRelease
Hit:1 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following held packages will be changed:
kubeadm
The following packages will be upgraded:
kubeadm
1 upgraded, 0 newly installed, 0 to remove and 141 not upgraded.
Need to get 7,708 kB of archives.
After this operation, 160 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.20.1-00 [7,708 kB]
Fetched 7,708 kB in 1s (5,350 kB/s)
(Reading database ... 75793 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.20.1-00_amd64.deb ...
Unpacking kubeadm (1.20.1-00) over (1.19.6-00) ...
Setting up kubeadm (1.20.1-00) ...

Step 2: Verify the kubeadm version (On Master/Control plane node)

root@kmaster-rj:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.1", GitCommit:"c4d752765b3bbac2237bf87cf0b1c2e307844666", GitTreeState:"clean", BuildDate:"2020-12-18T12:07:13Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}

Step 3: Verify the upgrade plan (On Master/Control plane node)

root@kmaster-rj:~# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.19.6
[upgrade/versions] kubeadm version: v1.20.1
[upgrade/versions] Latest stable version: v1.20.1
[upgrade/versions] Latest stable version: v1.20.1
[upgrade/versions] Latest version in the v1.19 series: v1.19.6
[upgrade/versions] Latest version in the v1.19 series: v1.19.6
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
kubelet     3 x v1.19.6   v1.20.1

Upgrade to the latest stable version:

COMPONENT                 CURRENT    AVAILABLE
kube-apiserver            v1.19.6    v1.20.1
kube-controller-manager   v1.19.6    v1.20.1
kube-scheduler            v1.19.6    v1.20.1
kube-proxy                v1.19.6    v1.20.1
CoreDNS                   1.7.0      1.7.0
etcd                      3.4.13-0   3.4.13-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.20.1

_____________________________________________________________________
The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________

The above command checks that your cluster can be upgraded and fetches the versions you can upgrade to. It also shows a table with the component config version states.

Step 4: Choose a version to upgrade to (On Master/Control plane node)

root@kmaster-rj:~# kubeadm upgrade apply v1.20.1
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.20.1"
[upgrade/versions] Cluster version: v1.19.6
[upgrade/versions] kubeadm version: v1.20.1
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.20.1"...
Static pod: kube-apiserver-kmaster-rj hash: ca19d1a3c534805e8f47381caf3063f4
Static pod: kube-controller-manager-kmaster-rj hash: b5e37ec64802ad1ee865a41314ccd01f
Static pod: kube-scheduler-kmaster-rj hash: 092bf93ace9b8cbbba45a9126334ea19
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-kmaster-rj hash: 4a2056f053f1ca1909b86404eb65cd11
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-01-13-09-58-27/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-kmaster-rj hash: 4a2056f053f1ca1909b86404eb65cd11
Static pod: etcd-kmaster-rj hash: 4a2056f053f1ca1909b86404eb65cd11
Static pod: etcd-kmaster-rj hash: 4a2056f053f1ca1909b86404eb65cd11
Static pod: etcd-kmaster-rj hash: 94417d36c0e48fdf5c03655da9543034
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests629701940"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-01-13-09-58-27/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-kmaster-rj hash: ca19d1a3c534805e8f47381caf3063f4
Static pod: kube-apiserver-kmaster-rj hash: e1eb6ececa489105263c8f38b2ad3d85
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-01-13-09-58-27/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-kmaster-rj hash: b5e37ec64802ad1ee865a41314ccd01f
Static pod: kube-controller-manager-kmaster-rj hash: 68d79396812e09a48466beaec532705f
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-01-13-09-58-27/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-kmaster-rj hash: 092bf93ace9b8cbbba45a9126334ea19
Static pod: kube-scheduler-kmaster-rj hash: 9be8cb4627e7e5ad4c3f8acabd4b49b3
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.20.1". Enjoy!

To upgrade the other control plane nodes, follow the same steps, but run kubeadm upgrade node on them instead of kubeadm upgrade apply.

Step 5: Drain the node (On Master/Control plane node)

Prepare the node for maintenance by marking it unschedulable and evicting the workloads:

root@kmaster-rj:~# kubectl drain kmaster-rj --ignore-daemonsets
node/kmaster-rj cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-2vs9l, kube-system/kube-proxy-kmj6b
evicting pod kube-system/coredns-f9fd979d6-fr4fr
pod/coredns-f9fd979d6-fr4fr evicted
node/kmaster-rj evicted

Step 6: Upgrade kubelet and kubectl (On Master/Control plane node)

root@kmaster-rj:~# apt-get update && apt-get install -y --allow-change-held-packages kubelet=1.20.1-00 kubectl=1.20.1-00
Hit:2 http://ppa.launchpad.net/bashtop-monitor/bashtop/ubuntu bionic InRelease
Hit:3 http://archive.ubuntu.com/ubuntu bionic InRelease
Hit:4 http://archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:5 http://security.ubuntu.com/ubuntu bionic-security InRelease
Hit:6 http://archive.ubuntu.com/ubuntu bionic-backports InRelease
Hit:1 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following held packages will be changed:
kubectl kubelet
The following packages will be upgraded:
kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 139 not upgraded.
Need to get 26.8 MB of archives.
After this operation, 1,353 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.20.1-00 [7,948 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.20.1-00 [18.9 MB]
Fetched 26.8 MB in 4s (6,837 kB/s)
(Reading database ... 75793 files and directories currently installed.)
Preparing to unpack .../kubectl_1.20.1-00_amd64.deb ...
Unpacking kubectl (1.20.1-00) over (1.19.6-00) ...
Preparing to unpack .../kubelet_1.20.1-00_amd64.deb ...
Unpacking kubelet (1.20.1-00) over (1.19.6-00) ...
Setting up kubelet (1.20.1-00) ...
Setting up kubectl (1.20.1-00) ...

Step 7: Restart the kubelet (On Master/Control plane node)

root@kmaster-rj:~# systemctl daemon-reload
root@kmaster-rj:~# systemctl restart kubelet
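
Optionally, confirm the kubelet came back up cleanly before uncordoning (a quick sanity check, not part of the official procedure):

root@kmaster-rj:~# systemctl status kubelet --no-pager
root@kmaster-rj:~# journalctl -u kubelet -n 20 --no-pager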

Step 8: Uncordon the node (On Master/Control plane node)

root@kmaster-rj:~# kubectl uncordon kmaster-rj
node/kmaster-rj uncordoned

Step 9: Verify the version (On Master/Control plane node)

root@kmaster-rj:~# kubectl get nodes
NAME          STATUS   ROLES                  AGE    VERSION
kmaster-rj    Ready    control-plane,master   163d   v1.20.1
kworker-rj1   Ready    <none>                 163d   v1.19.6
kworker-rj2   Ready    <none>                 163d   v1.19.6

Upgrade worker nodes

Step 1: Upgrade kubeadm (On worker node)

root@kworker-rj2:~# apt-get update && \
> apt-get install -y --allow-change-held-packages kubeadm=1.20.1-00
Get:2 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Hit:3 http://archive.ubuntu.com/ubuntu bionic InRelease
Hit:4 http://archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:5 http://archive.ubuntu.com/ubuntu bionic-backports InRelease
Hit:1 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Fetched 88.7 kB in 1s (114 kB/s)
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
kubeadm
1 upgraded, 0 newly installed, 0 to remove and 140 not upgraded.
Need to get 7,708 kB of archives.
After this operation, 602 kB disk space will be freed.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.20.1-00 [7,708 kB]
Fetched 7,708 kB in 2s (4,422 kB/s)
(Reading database ... 74981 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.20.1-00_amd64.deb ...
Unpacking kubeadm (1.20.1-00) over (1.18.8-00) ...
Setting up kubeadm (1.20.1-00) ...

Step 2: Upgrade the local kubelet configuration (On worker node)

root@kworker-rj2:~# kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

Step 3: Drain the node (On Master/Control plane node)

root@kmaster-rj:~# kubectl drain kworker-rj2  --ignore-daemonsets
node/kworker-rj2 cordoned

Step 4: Upgrade kubelet and kubectl (On worker node)

root@kworker-rj2:~# apt-get update && apt-get install -y --allow-change-held-packages kubelet=1.20.1-00 kubectl=1.20.1-00
Hit:2 http://archive.ubuntu.com/ubuntu bionic InRelease
Get:3 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Hit:4 http://archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:1 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Hit:5 http://archive.ubuntu.com/ubuntu bionic-backports InRelease
Fetched 88.7 kB in 1s (125 kB/s)
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 138 not upgraded.
Need to get 26.8 MB of archives.
After this operation, 1,353 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.20.1-00 [7,948 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.20.1-00 [18.9 MB]
Fetched 26.8 MB in 4s (6,575 kB/s)
(Reading database ... 74981 files and directories currently installed.)
Preparing to unpack .../kubectl_1.20.1-00_amd64.deb ...
Unpacking kubectl (1.20.1-00) over (1.19.6-00) ...
Preparing to unpack .../kubelet_1.20.1-00_amd64.deb ...
Unpacking kubelet (1.20.1-00) over (1.19.6-00) ...
Setting up kubelet (1.20.1-00) ...
Setting up kubectl (1.20.1-00) ...

Step 5: Restart the kubelet (On worker node)

root@kworker-rj2:~# systemctl daemon-reload
root@kworker-rj2:~# systemctl restart kubelet

Step 6: Uncordon the node (On Master/Control plane node)

root@kmaster-rj:~# kubectl uncordon kworker-rj2
node/kworker-rj2 uncordoned

Step 7: Verify the status of the cluster (On Master/Control plane node)

After the kubelet is upgraded on all nodes, verify that all nodes are available again by running the following command from anywhere kubectl can access the cluster:

root@kmaster-rj:~# kubectl get nodes
NAME          STATUS   ROLES                  AGE    VERSION
kmaster-rj    Ready    control-plane,master   163d   v1.20.1
kworker-rj1   Ready    <none>                 163d   v1.20.1
kworker-rj2   Ready    <none>                 163d   v1.20.1

Great! Our cluster has been upgraded to v1.20.1.

That’s all!

Hope you liked the article. Please share your feedback in the responses section.

