Deploy Elastic Cloud on Kubernetes with Monitoring

Elastic Cloud on Kubernetes + Observability

Magsther
FAUN — Developer Community 🐾


Introduction

In this post, we will deploy Elastic Cloud on a Kubernetes cluster, along with the Elasticsearch Exporter, a Prometheus exporter for various Elasticsearch metrics.

Deploying Kubernetes

We need a Kubernetes cluster to work with. For a local cluster, this can be done with, for example, Minikube or Kind. In this case, we use Kind.

kind create cluster

Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.24.0) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind

Deploying Kube-Prometheus

Deploying Kube-Prometheus to our Kubernetes cluster can be done via a Helm chart.

The kube-prometheus stack is a collection of Kubernetes manifests, Grafana dashboards, and Prometheus rules combined with documentation and scripts to provide easy to operate end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus Operator.

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "prometheus-community" chart repository
Update Complete. ⎈Happy Helming!⎈

Install the Helm chart.

helm install my-kube-prometheus-stack prometheus-community/kube-prometheus-stack

To access Prometheus, we use port forwarding:

kubectl port-forward svc/my-kube-prometheus-stack-prometheus 9090:9090
Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090

Open a browser and go to: http://localhost:9090/ to see the Prometheus UI.

Deploy Elastic Cloud on Kubernetes

For deploying Elastic Cloud, we will use the excellent quick start guide from Elastic. This installs the custom resource definitions and the operator with its RBAC rules.

The operator automatically creates and manages Kubernetes resources to achieve the desired state of the Elasticsearch cluster. It may take up to a few minutes until all the resources are created and the cluster is ready for use.

kubectl create -f https://download.elastic.co/downloads/eck/2.3.0/crds.yaml
kubectl apply -f https://download.elastic.co/downloads/eck/2.3.0/operator.yaml

Monitor the operator logs:

kubectl -n elastic-system logs -f statefulset.apps/elastic-operator

Deploy the Elasticsearch cluster

cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.3.3
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
EOF

Once it's deployed, you can get an overview of the current Elasticsearch clusters in the Kubernetes cluster, including health, version, and number of nodes, with kubectl get elasticsearch

After a short while the Health status changes to green:

NAME         HEALTH   NODES   VERSION   PHASE   AGE
quickstart   green    1       8.3.3     Ready   3m59s

Getting the Elastic credentials

Before we can make requests to our Elasticsearch cluster, we need to find out the credentials for the elastic user. This can be done with the following kubectl command:

kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'
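For the later steps, it helps to keep the password in a shell variable. A small sketch (the jsonpath + base64 pipeline is an equivalent alternative to the go-template form above; the secret name assumes the quickstart example):

```shell
# Store the elastic user's password for reuse in later requests
PASSWORD=$(kubectl get secret quickstart-es-elastic-user \
  -o jsonpath='{.data.elastic}' | base64 --decode)
echo "$PASSWORD"
```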

Now you can use port forwarding to access Elasticsearch:

kubectl port-forward service/quickstart-es-http 9200

Make a request to localhost:

curl -u "elastic:<your-password>" -k "https://localhost:9200"

The output should look like this:

{
  "name" : "quickstart-es-default-0",
  "cluster_name" : "quickstart",
  "cluster_uuid" : "zltflbdMQ5eeUSd89bZcCQ",
  "version" : {
    "number" : "8.3.3",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "801fed82df74dbe537f89b71b098ccaff88d2c56",
    "build_date" : "2022-07-23T19:30:09.227964828Z",
    "build_snapshot" : false,
    "lucene_version" : "9.2.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}

If you instead want to test this from inside the Kubernetes cluster (with the password stored in the PASSWORD variable), run this command:

curl -u "elastic:$PASSWORD" -k "https://quickstart-es-http:9200"
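One way to run that in-cluster check is a throwaway pod. A sketch, assuming the curlimages/curl image and the quickstart service name from above:

```shell
# Run a one-off curl pod; it is removed again after the command exits
kubectl run es-test --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -u "elastic:$PASSWORD" -k "https://quickstart-es-http:9200"
```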

Elasticsearch Exporter

Now we want to add monitoring to our Elasticsearch cluster.

As mentioned above, Elasticsearch Exporter is a Prometheus exporter for various metrics about Elasticsearch, written in Go.

The exporter fetches information from an Elasticsearch cluster on every scrape.

Deploy the exporter using a Helm chart.

helm install prometheus-elasticsearch-exporter prometheus-community/prometheus-elasticsearch-exporter
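Note that by default the chart points the exporter at http://localhost:9200. To scrape the quickstart cluster instead, the install can be adapted roughly like this (es.uri is the chart's setting for the target cluster; the inline credentials and the es.sslSkipVerify flag are assumptions for this self-signed demo setup):

```shell
# Point the exporter at the quickstart Elasticsearch service
helm upgrade --install prometheus-elasticsearch-exporter \
  prometheus-community/prometheus-elasticsearch-exporter \
  --set es.uri="https://elastic:$PASSWORD@quickstart-es-http:9200" \
  --set es.sslSkipVerify=true
```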

If you now check the Kubernetes services, you will see that it is deployed:

kubectl get services

NAME                                TYPE        CLUSTER-IP     PORT(S)
prometheus-elasticsearch-exporter   ClusterIP   10.96.151.14   9108/TCP

Run the port-forward command on the Elasticsearch Exporter service and open a browser at http://127.0.0.1:9108/metrics:

kubectl port-forward svc/prometheus-elasticsearch-exporter 9108:9108

Here you can see that the exporter fetches information from our Elasticsearch cluster.
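Instead of the browser, you can also spot-check a metric from the command line while the port-forward is running (elasticsearch_cluster_health_status is one of the exporter's standard metrics):

```shell
# Filter the scrape output for the cluster health gauge
curl -s http://localhost:9108/metrics | grep elasticsearch_cluster_health_status
```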

Connect Kube-Prometheus and the Elasticsearch Exporter

Now we need to tell Kube-Prometheus to scrape the metrics that the Elasticsearch Exporter exposes.

Create the ServiceMonitor

To do this, we create a ServiceMonitor resource in Kubernetes.

This allows our existing Kube-Prometheus to scrape the metrics from elastic-exporter.

Create a new file called servicemonitor.yaml with the following content.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: elastic-exporter
  labels:
    release: my-kube-prometheus-stack
spec:
  endpoints:
  - interval: 5s
    port: http
  selector:
    matchLabels:
      app: prometheus-elasticsearch-exporter

Apply the YAML file with kubectl apply -f servicemonitor.yaml

After a little while, the ServiceMonitor is deployed on our Kubernetes cluster.
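Prometheus only picks up the target if the labels line up: the release label on the ServiceMonitor must match the Helm release name of the kube-prometheus stack, and the port name must match a named port on the exporter Service. Two quick sanity checks (resource names assume the examples above):

```shell
# Confirm the ServiceMonitor exists
kubectl get servicemonitor elastic-exporter
# Print the exporter Service's first port name (should match "port: http")
kubectl get service prometheus-elasticsearch-exporter \
  -o jsonpath='{.spec.ports[0].name}'
```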

Kube-Prometheus + Elastic Exporter

Now we can verify that Prometheus is in fact scraping the metrics from our elastic exporter. Again, you can use the port-forward command to do this:

kubectl port-forward svc/my-kube-prometheus-stack-prometheus 9090:9090

Then open the Prometheus UI and click on Status → Targets.

Here we can verify that Prometheus is now scraping metrics from the elasticsearch exporter.
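The same check can be done against Prometheus' HTTP query API (a standard endpoint; the metric name comes from the exporter):

```shell
# Ask Prometheus for the exporter's cluster health metric
curl -s 'http://localhost:9090/api/v1/query?query=elasticsearch_cluster_health_status'
```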

Visualize the data with Grafana

As we mentioned above, Kube-Prometheus comes integrated with Grafana. To visualize the data from our Elasticsearch cluster, Grafana needs Prometheus as a data source.

Log in to Grafana using the default username (admin):

kubectl port-forward svc/my-kube-prometheus-stack-grafana 3000:80

You can retrieve the password using this command:

kubectl get secret my-kube-prometheus-stack-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

Here we can see that the data source is already added.

Add a Grafana Dashboard to visualize the data.

Instead of creating your own dashboard, you can use one created by the community. Go to the Grafana dashboards page (grafana.com/grafana/dashboards) and type Elastic in the search field.

On the Create tab, select Import. Paste the ID of the dashboard you want to import and click Load. Select Prometheus as the data source and click Import.

That’s it
