Setup a Free Kubernetes Cluster on Oracle Cloud

Loi Cheng
FAUN — Developer Community 🐾
10 min read · Jun 22, 2021


Learn how to create a full-fledged 4-node permanent cluster. It is always free, not a trial. Unlike other free solutions, this cluster will not sleep or be deleted due to inactivity, and it can be used for small projects.

https://www.oracle.com/cloud/free/

In May 2021, Oracle launched new Arm-based compute instances on Oracle Cloud Infrastructure (OCI) and added some of them to its always-free tier. Combined with the existing free AMD compute instances, this addition makes the already generous always-free tier even more generous.

OCI free instances:

  • 2x AMD instances with 1GB RAM each
  • Up to 4x ARM instances with 4 cores and 24GB RAM split between them

These free machines are sufficient to create a personal Kubernetes cluster. No other cloud provider comes close in its free-tier offerings: AWS instances are only free for 1 year, Google gives just one very small 500MB instance, and Azure is also only free for 1 year.

The Free Kubernetes Cluster

My cluster has 4 nodes:

  • ARM 12GB 2Core — control plane
  • ARM 12GB 2Core — control plane
  • AMD 1GB 1Core
  • AMD 1GB 1Core
4 Free Instances!

This free route definitely requires more manual work to get up and running, compared to a pre-packaged cloud provider Kubernetes service. However, we gain much more knowledge and understanding of Kubernetes through the manual setup process.

Step-by-Step

There are already many guides on Kubernetes, so this article only gives an overview of the steps and highlights the parts that are specific to this OCI cluster setup. References to other guides are provided for the more generic procedures.

This article is more of a procedural guide and is limited in its explanations of how things work or why they are needed; the referenced detailed guides should cover that. Some basic knowledge of SSH, cloud computing, and DNS is needed.

Note that the Kubernetes codebase changes constantly, so the instructions and commands in this guide can become outdated and non-functional in the near future (avoid guides from 2018 or earlier). As of June 2021, the steps below should mostly work.

1. Create an OCI account:

Go to https://www.oracle.com/cloud/ and make an account; the process is fairly straightforward.

2. Create instances

Creating instances is fairly straightforward. I used Ubuntu for all 4 nodes. The 2 free AMD instances have to be the VM.Standard.E2.1.Micro shape. For the ARM instances, I made 2 instances, each with 2 CPUs and 12GB of RAM.

Ubuntu Full (Not Minimal)
Adjustable Ampere ARM Instances

Note on assigning IP addresses: by default, a new instance gets an ephemeral public address. To switch between an ephemeral and a reserved public IP address, first select NO PUBLIC IP in the VNIC section and save the update; the public IP choices then become available. I used reserved addresses for the ARM machines and ephemeral ones for the AMD machines.

Somewhat unintuitive IP address management

3. Get a free domain

While the nodes can be reached directly through their IP addresses, it is much better to assign a domain to them, so the IP addresses can be changed later. Freenom provides free domains that we can use.

Simple DNS configuration

In my DNS settings, I made 2 A records with identical names, pointing to my 2 reserved IP addresses.
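To confirm the records have propagated, something like dig can be used (the domain name here is a placeholder for your own Freenom domain):

## both reserved public IPs should appear in the output
dig +short your.freenomdomain.tk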

4. Clean system firewalls

The Oracle instances come with pre-installed firewall rules that can block some of the networking needed for Kubernetes. To fix this, apply these commands on each machine.

## save existing rules
sudo iptables-save > ~/iptables-rules
## modify rules, remove drop and reject lines
grep -v "DROP" iptables-rules > tmpfile && mv tmpfile iptables-rules-mod
grep -v "REJECT" iptables-rules-mod > tmpfile && mv tmpfile iptables-rules-mod
## apply the modifications
sudo iptables-restore < ~/iptables-rules-mod
## check
sudo iptables -L
## save the changes
sudo netfilter-persistent save
sudo systemctl restart iptables

5. Hosts file

The hosts file on each machine should be modified so the nodes can reach each other by name.

## hosts file
sudo nano /etc/hosts
## add these lines (change the values to match cluster)
private.ip.arm.machine1 your.freenomdomain.tk
private.ip.arm.machine2 your.freenomdomain.tk

6. Install kubeadm on all instances

SSH into each machine, and follow all the steps very carefully in the official kubeadm guide. The procedure should be identical for all the nodes.

For the container runtime, use Docker: scroll all the way down in the guide below and only follow the instructions under “Docker”. Make sure to also follow the steps for setting up the systemd cgroup driver.
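For reference, the Docker daemon configuration for the systemd cgroup driver looks roughly like the sketch below; this mirrors what the official guide described in mid-2021, so check the current guide before relying on it.

## configure docker to use the systemd cgroup driver (sketch based on the official kubeadm guide)
sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker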

7. Set up the control plane with kubeadm

A detailed official guide is available, again with many options to choose from.

kubeadm init

The kubeadm init command sets up the machine as a control plane. A bit of research was needed to figure out the proper command, as running it without any arguments may not work, or may build a cluster with unwanted features.

For this cluster, kubeadm with the args below worked best.

## start k8s control plane
CERTKEY=$(kubeadm certs certificate-key)
echo $CERTKEY
## save your CERTKEY for future use
## replace the addresses with your own
sudo kubeadm init --apiserver-cert-extra-sans=your.freenomdomain.tk,your.reserved.public.ip1,your.reserved.public.ip2 --pod-network-cidr=10.32.0.0/12 --control-plane-endpoint=your.freenomdomain.tk --upload-certs --certificate-key=$CERTKEY

When complete, kubeadm will output some instructions like below.

Your Kubernetes control-plane has initialized successfully!
...
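The output also includes commands for setting up kubectl access for the current user; they should look like these:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config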

8. Add the pod-network

There are a lot of pod-network add-ons to choose from. Weave worked the best for this cluster.

There are a lot of details on installing Weave, but this single command should suffice.

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

It is best to stick with one pod-network. Installing and uninstalling different pod-networks multiple times can break the networking on the cluster, as not all components are automatically removed on uninstall, and the leftovers can create conflicts.
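Before moving on, it is worth checking that the network add-on came up (assuming kubectl is configured on the control plane):

## nodes should eventually report STATUS Ready
kubectl get nodes -o wide
## the weave-net pods in kube-system should be Running on each joined node
kubectl get pods -n kube-system -o wide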

9. Connect all the nodes together

The instructions printed by kubeadm init can be used to join the other nodes. They should look like the commands below.

## connect ARM machines as control plane nodes
kubeadm join your.freenomdomain.tk:6443 --token xxxxxxxxxxx \
    --discovery-token-ca-cert-hash sha256:yyyyyyyyyyyy \
    --control-plane --certificate-key zzzzzzzzzzz

## connect AMD machines as worker nodes
kubeadm join your.freenomdomain.tk:6443 --token xxxxxxxxxxx \
    --discovery-token-ca-cert-hash sha256:yyyyyyyyyyyy

I had to remake my cluster several times to get my desired configuration (e.g., DNS). I used these commands to reset the cluster (they need to be run on each affected machine).

## remove cluster
sudo kubeadm reset
sudo rm -rf /etc/kubernetes
sudo rm -rf /etc/cni/net.d
sudo rm -rf /var/lib/kubelet
sudo rm -rf /var/lib/etcd
sudo rm -rf $HOME/.kube

10. Lens

Lens is a GUI for managing Kubernetes clusters, and is a great complement to kubectl commands. I installed it on my local machine.

Lens UI
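Lens needs a kubeconfig for the cluster. One way to get it onto the local machine is to copy the admin config from a control plane node, for example with scp (the ubuntu user and file paths here are assumptions; adjust them to your setup):

scp ubuntu@your.freenomdomain.tk:~/.kube/config ~/oci-cluster-kubeconfig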

At this point, the Kubernetes cluster should be in good shape. The remaining steps enable hosting web apps.

11. MetalLB

MetalLB is a free load balancer implementation that does not depend on a cloud provider load balancer. To set up MetalLB, a custom config yaml file should be created.

# layer2metallb.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - arm.public.ip1/32 ## replace with your instance's IP
      - arm.public.ip2/32 ## replace with your instance's IP

MetalLB can then be installed with the following commands.

## metallb
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/metallb.yaml
kubectl apply -f layer2metallb.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
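Before continuing, it is worth confirming that MetalLB started correctly:

## the controller and per-node speaker pods should be Running
kubectl get pods -n metallb-system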

The video below provides more detail on MetalLB.

MetalLB and Nginx Ingress

12. Nginx Ingress

Nginx ingress is used together with MetalLB to enable public access to webapps on the cluster.

Helm can be used to install nginx-ingress. The guide below shows how to install helm.
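If the embedded guide is unavailable, one common way to install Helm 3 is the official install script, shown here as a convenience (verify against the Helm docs before running it):

curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version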

A customized yaml file is needed to make nginx ingress work with MetalLB. The default values yaml should first be downloaded.

helm show values ingress-nginx/ingress-nginx > ngingress-metal-custom.yaml

These specific lines in the yaml should be modified:

hostNetwork: true ## change to false
#...
hostPort:
  enabled: false ## change to true
#...
kind: Deployment ## change to DaemonSet
#...

Nginx ingress can then be installed with the commands below.

kubectl create ns ingress-nginx
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx --values ngingress-metal-custom.yaml
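To confirm that the ingress controller is up and wired to MetalLB, check its pods and service; assuming the controller service is left as type LoadBalancer, it should receive an EXTERNAL-IP from the MetalLB address pool:

kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx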

More detailed information can be found in the official NGINX ingress guide.

13. Cert Manager with LetsEncrypt

Cert-manager can automate the process of obtaining TLS certificates, so http sites can be upgraded to https. The setup is fairly generic; these commands worked for this cluster.

## install w manifests
kubectl create ns cert-manager
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.3.1/cert-manager.yaml

It is recommended to also complete the “Verifying the Installation” portion of the official guide below.
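A quick sanity check of the cert-manager install itself:

## the cert-manager, cainjector, and webhook pods should all be Running
kubectl get pods -n cert-manager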

After the verification, the official guide offers many options on what to do next. For this particular cluster, only the following yaml file is needed to create the LetsEncrypt issuer and complete the setup.

#prod-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: your@email.address
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - selector: {}
      http01:
        ingress:
          class: nginx

The file should be applied with kubectl:

kubectl create -f prod-issuer.yaml
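The issuer can then be checked; it should report Ready once it has registered an ACME account with LetsEncrypt:

kubectl get clusterissuer letsencrypt-prod
kubectl describe clusterissuer letsencrypt-prod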

14. Test Page

A second domain should be created from freenom.com to host the webapps. This domain should have A records that point to the public IP addresses of the control plane nodes.

A simple deployment can be created to test the cluster. In the test-tls-deploy.yml below, the host addresses should be replaced to match the newly created domain.

#test-tls-deploy.yml
apiVersion: v1
kind: Namespace
metadata:
  name: test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: test
  name: test-tls-deploy
  labels:
    app: test
spec:
  selector:
    matchLabels:
      app: test
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: nginx
        ports:
        - containerPort: 80
          name: test
        resources:
          requests:
            memory: "100Mi"
            cpu: "100m"
          limits:
            memory: "500Mi"
            cpu: "500m"
      affinity:
        podAntiAffinity: ## spread pods across nodes
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - test
            topologyKey: "kubernetes.io/hostname"
---
apiVersion: v1
kind: Service
metadata:
  namespace: test
  name: test-tls-service
  labels:
    app: test
spec:
  selector:
    app: test
  type: NodePort
  ports:
  - port: 80 ## match with ingress
    targetPort: 80 ## match with deployment
    protocol: TCP
    name: test
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: test
  name: test-tls-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - www.yourwebappdomain.tk
    secretName: test-tls-secret
  rules:
  - host: www.yourwebappdomain.tk
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: test-tls-service
            port:
              number: 80

Use kubectl to apply the deployment:

kubectl apply -f test-tls-deploy.yml

If all went well, the simple nginx website should be accessible from the newly created web address.
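If the page is not served over https right away, note that certificate issuance can take a few minutes; the certificate status and the site can be checked like this:

## the certificate created for the ingress should eventually show READY True
kubectl get certificate -n test
## the site should respond over https once the certificate is issued
curl -I https://www.yourwebappdomain.tk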

Done!

15. Conclusion

At this point, the cluster should work much like a pre-packaged cloud provider cluster. The cluster is predominantly ARM based, so containers should be built for the ARM64 architecture, meaning that the base docker images should be ARM based and any installed binaries should also be ARM builds. The deployment yaml files should also be configured to select ARM machines, for example as shown below. There are 2 AMD machines in the cluster, which can run AMD64 containers; these machines are fairly limited in compute (1GB each, compared to 12GB for the ARM machines), so only very light containers should be scheduled on them.
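For example, one way to pin pods to the ARM nodes is a nodeSelector on the standard architecture label (a sketch only; merge it into the pod template of a Deployment like the one above):

# constrain pods to the ARM64 nodes via the built-in arch label
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64

With that, the always-free OCI cluster is ready to run small, ARM-friendly workloads.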
