How to Forward Kubernetes Logs to Elasticsearch (ELK) Using Fluent-bit and Visualize Them with Kibana

Mohammad jomaa
FAUN — Developer Community 🐾
8 min readSep 20, 2022


Contents:

  1. Introduction.
  2. Deploy an Elasticsearch cluster (ELK) & Kibana.
  3. Deploy Fluent-bit.
  4. Configure Fluent-bit to forward Kubernetes logs to Elasticsearch.
  5. Create a data view in Kibana and view Kubernetes logs.
  6. Conclusion.

Prerequisites:

We assume that the reader has basic knowledge of Kubernetes, Elasticsearch, Kibana, Fluent-bit, Git, and Helm.

Technical prerequisites:

1. Prepare a Kubernetes cluster and access it with the kubectl CLI; to install kubectl, see my article, section 4.1.

2. Install Helm; to install Helm, see my article, section 4.2.

3. Install Git and clone my GitHub repo:

git clone https://github.com/MohammadJomaa/kube_ansible.git

1. Introduction

Kubernetes helps you manage and run container-based applications, and there are many tools you should use alongside Kubernetes for purposes like security, performance, and observability, so that your applications scale and run safely and healthily. For all of these purposes and more, you need a base to start from. That common base is the logs generated by your environment at every level: infrastructure, cluster, and application.

So you need three main components to achieve those goals:

  1. Agent: collects the logs from the nodes, cluster, and applications, and converts and cleans them into a suitable format. Fluent-bit is used to fulfill this goal.
  2. Central log repository: stores the logs for future analysis and investigation; we will use Elasticsearch for this purpose.
  3. Visualization: lets you visualize the data (logs) and create custom dashboards to search, observe, and analyze it; for this, Kibana is used.

In this article, we are going to implement the three main components, with a brief explanation of each tool used, so be ready ^_^.

1.1 What is Elasticsearch?

“Elasticsearch is a distributed document store. Instead of storing information as rows of columnar data, Elasticsearch stores complex data structures that have been serialized as JSON documents. When a document is stored, it is indexed and fully searchable in near real-time — within 1 second. Elasticsearch uses a data structure called an inverted index that supports very fast full-text searches. An inverted index lists every unique word that appears in any document and identifies all of the documents each word occurs in. Elasticsearch also has the ability to be schema-less, which means that documents can be indexed without explicitly specifying how to handle each of the different fields that might occur in a document.” For more details, see the official Elasticsearch documentation.
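To make the inverted-index idea concrete, here is a toy sketch in Bash (purely illustrative; this is not how Elasticsearch implements it internally): each word is mapped to the ids of the documents that contain it, so a full-text lookup becomes a single key lookup.

```shell
# Build a toy inverted index over three tiny "documents".
docs=("the quick fox" "the lazy dog" "quick dog")

declare -A index   # word -> space-separated list of doc ids
for i in "${!docs[@]}"; do
  for word in ${docs[$i]}; do        # intentional word splitting
    case " ${index[$word]} " in
      *" $i "*) ;;                   # doc id already recorded for this word
      *) index[$word]+=" $i" ;;
    esac
  done
done

# Searching is now a single lookup instead of scanning every document.
echo "quick ->${index[quick]}"   # quick -> 0 2
echo "dog ->${index[dog]}"       # dog -> 1 2
```

This is the essence of why Elasticsearch full-text queries stay fast as the document count grows: the scan happens once, at indexing time.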

1.2 What is Kibana?

“Kibana enables you to give shape to your data and navigate the Elastic Stack.

Fig 1.2 Kibana

With Kibana, you can:

  • Search, observe, and protect your data. From discovering documents to analyzing logs to finding security vulnerabilities, Kibana is your portal for accessing these capabilities and more.
  • Analyze your data. Search for hidden insights, visualize what you’ve found in charts, gauges, maps, graphs, and more, and combine them in a dashboard.
  • Manage, monitor, and secure the Elastic Stack. Manage your data, monitor the health of your Elastic Stack cluster, and control which users have access to which features.”

For more details, see the official Kibana documentation.

1.3 What is Fluent-bit?

Fig 1.3 Fluent-bit

“Fluent Bit is a Fast and Lightweight Logs and Metrics Processor and Forwarder for Linux, OSX, Windows, and BSD family operating systems. It has been made with a strong focus on performance to allow the collection of events from different sources without complexity.”
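For a taste of what this looks like in practice, here is a minimal sketch of a Fluent Bit pipeline that tails container log files, enriches them with Kubernetes metadata, and ships them to Elasticsearch. The host, password placeholder, and paths below are illustrative assumptions, not values taken from this article's Helm chart:

```ini
[INPUT]
    Name             tail
    Path             /var/log/containers/*.log
    Tag              kube.*

[FILTER]
    Name             kubernetes
    Match            kube.*

[OUTPUT]
    Name             es
    Match            *
    Host             elastic-search-es-http.elasticsearch.svc
    Port             9200
    HTTP_User        elastic
    HTTP_Passwd      <password>
    tls              On
    tls.verify       Off
    Logstash_Format  On
```

With Logstash_Format On, the es output writes to daily logstash-* indices, which is what we will point Kibana at near the end of the article.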

2. Deploy an Elasticsearch cluster (ELK) & Kibana

2.1 Create a directory for the persistent volume that is used by the Elasticsearch pod to store data:

mkdir /opt/data_es1

2.2 Install custom resource definitions:

# install the Elasticsearch (ECK) operator custom resource definitions
kubectl apply -f "https://download.elastic.co/downloads/eck/2.3.0/crds.yaml"

2.3 Install the operator with its RBAC rules:

kubectl apply -f "https://download.elastic.co/downloads/eck/2.3.0/operator.yaml"                
kubectl create namespace elasticsearch

2.4 Monitor the operator logs:

kubectl -n elastic-system logs -f statefulset.apps/elastic-operator

2.5 Deploy Elasticsearch (ELK) & Kibana

Check out the ElasticKibana.yaml file (Fig 2.5.1); in this file we are creating:

  • Elasticsearch object.
  • PersistentVolume (pv-es2) for elastic data.
  • kibana object.
# if you did not clone the repo, you can do it now
# git clone https://github.com/MohammadJomaa/kube_ansible.git
cd kube_ansible/ansible_kub/k8s/ElasticSearche/

Fig 2.5.1 ElasticKibana.yaml
  • Replace the data path in the file before applying it:

vi ElasticKibana.yaml
Replace
hostPath:
  path: "<<Data_Path>>/data_es1"
with
hostPath:
  path: "/opt/data_es1"
# apply the file
kubectl apply -f ElasticKibana.yaml -n elasticsearch
  • To check the Elasticsearch object that you have created:
# just in case of hostPath, relax permissions on the data directory
sudo chmod -Rf 777 '/opt/data_es1/'
kubectl get elasticsearch -n elasticsearch
Elasticsearch Operator
  • To check the pods in the elasticsearch namespace:
kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=elastic-search' -n elasticsearch
Elasticsearch Pods
  • To check the service that you will use to access Elasticsearch:
kubectl get service  elastic-search-es-http -n elasticsearch
elasticsearch Services

The default type is ClusterIP; you can change it to NodePort:

#to change type of service
kubectl -n elasticsearch edit service elastic-search-es-http
#then change type from ClusterIP to NodePort
Edit the service
Port of service

The Elasticsearch and Kibana username is elastic, and the password can be retrieved as below:

PASSWORD=$(kubectl -n elasticsearch get secret elastic-search-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')
echo $PASSWORD
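Why the base64decode in that go-template? Kubernetes stores Secret values base64-encoded, so the raw .data.elastic field is not the password itself. A quick self-contained illustration with a made-up value:

```shell
# Kubernetes Secret values are base64-encoded; decode to recover the real value.
encoded=$(printf 'my-elastic-pass' | base64)
echo "$encoded"    # bXktZWxhc3RpYy1wYXNz

decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"    # my-elastic-pass
```

The same decoding is what the jsonpath variant later in this article does with an explicit pipe to base64 --decode.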
  • To check it from a browser or the terminal, use this link:
curl -u "elastic:$PASSWORD" -k "https://localhost:<NodePort>"

*- Save this password ($PASSWORD); it will be used in the Fluent-bit section.

For more details:

Elastic: https://www.elastic.co/guide/en/cloud-on-k8s/1.8/k8s-deploy-elasticsearch.html

kibana: https://www.elastic.co/guide/en/cloud-on-k8s/1.8/k8s-deploy-kibana.html

Check Elasticsearch service

2.6 Kibana

We installed Kibana with the previous file, so in this section we will just explore the Kibana pods & services.

  • To get the Kibana object that we created in the previous step:
kubectl -n elasticsearch get kibana
kibana operator
  • Getting the pods for kibana:
kubectl -n elasticsearch get pod --selector='kibana.k8s.elastic.co/name=kibana'
kibana pods
  • To get the service for Kibana:

Please change the type of this service from ClusterIP to NodePort, or if you are in a production environment you can use an Ingress.

kubectl -n elasticsearch get service kibana-kb-http
kibana-kb-http service
  • To change the type of service:
kubectl -n elasticsearch edit service kibana-kb-http
#then change type from ClusterIP to NodePort
Port of kibana-kb-http
  • Print the password of Kibana (same password as Elasticsearch):
kubectl -n elasticsearch  get secret elastic-search-es-elastic-user  -o=jsonpath='{.data.elastic}' | base64 --decode; echo
  • Now you can access Kibana from a browser:
https://<NodeIp>:<kibana-kb-http port>
# user: elastic
# pass: please check the previous step
kibana

3. Deploy Fluent-bit.

Fig 3.1 Fluent-bit
  • To install Fluent-bit we use Helm, as below:
cd kube_ansible/ansible_kub/Helm/fluent-bit/fluent-bit/
# create a namespace for fluent-bit
kubectl create ns fluent-bit
helm upgrade --install fluent-bit . --namespace=fluent-bit
fluent-bit installation
  • (Optional) Get Fluent Bit build information by running these commands:
export POD_NAME=$(kubectl get pods --namespace fluent-bit -l "app.kubernetes.io/name=fluent-bit,app.kubernetes.io/instance=fluent-bit" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace fluent-bit port-forward $POD_NAME 2020:2020
curl http://127.0.0.1:2020

Now you have created Fluent-bit and all the necessary components, like the ClusterRole, ClusterRoleBinding, ServiceAccount, ConfigMap, services, etc.; all of that is done by Helm, and you can inspect the chart for more details.

  • To check all objects in fluent-bit namespace:
kubectl -n fluent-bit get all
fluent-bit namespace
  • To check services & Configmaps:
kubectl -n fluent-bit get configmap,svc
configmap,svc — fluent-bit

4. Configure Fluent-bit to Forward Kubernetes Logs to Elasticsearch

Now let's get to the last part of our article and look at the ConfigMap of Fluent-bit:

kubectl -n fluent-bit edit configmap fluent-bit

You will find something like that in Fig 3.1 or Fig 3.2:

Fig 3.1 fluent-bit ConfigMap

Just replace pass with the $PASSWORD value that we extracted in the previous section:

PASSWORD=$(kubectl -n elasticsearch get secret elastic-search-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')

Fig 3.2 Edit the configmap in fluent-bit

You should restart the pods so they pick up the new configuration:

kubectl -n fluent-bit rollout restart daemonset.apps/fluent-bit
restart daemonset.apps/fluent-bit

Please check the Fluent-bit documentation to understand its parameters and configuration and adapt them to your requirements.

5. Create a data view in Kibana and view Kubernetes logs

  • Open kibana dashboard by browser and click Discover (Fig — 6.1)
Fig — 6.1
  • Click on Create data view (Fig — 6.2)
Fig — 6.2
  • In the index pattern field, type logstash, then click Save data view to Kibana (Fig — 6.3)
Fig — 6.3
  • Now you can enjoy your Kubernetes logs and visualize them as you want using the capabilities of Kibana and Elasticsearch (Fig — 6.4, Fig — 6.5)
Fig — 6.4
Fig — 6.5
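A note on why typing logstash works as the index pattern: when Fluent-bit's es output runs with Logstash_Format On (a common default in Fluent-bit Helm charts, though you should verify it in your own ConfigMap), it names indices after the Logstash daily convention, so a logstash prefix matches them all. A sketch of the naming scheme:

```shell
# With Logstash_Format On, indices are named logstash-YYYY.MM.DD (one per day).
index="logstash-$(date +%Y.%m.%d)"
echo "$index"    # e.g. logstash-2022.09.20
```

If your chart sets a custom Logstash_Prefix, use that prefix in the data view instead.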

6. Conclusion:

In this article we set up the three main components that make logs more useful and easier to work with in troubleshooting, with the ability to keep historical logs for building patterns and dashboards that help with management and decision-making. Fluent-bit collects and cleans the logs on the nodes, then ships them to Elasticsearch, which stores and indexes them; finally, Kibana is used for visualizing the logs and building custom dashboards.

Kindly note that this article is not intended for a production environment; it is just meant to explain the concept and make the idea easy to grasp.

If you find this helpful, please click the clap 👏 button below a few times to show your support for the author 👇
