Monitor Elasticsearch with Kube-Prometheus

Scrape metrics from Elasticsearch

Magsther
FAUN — Developer Community 🐾


In this post, we will see how to monitor Elasticsearch with Kube-Prometheus by scraping the metrics from an Elastic Cloud instance.

We will cover deploying the Elasticsearch Exporter, verifying that it can reach our Elastic instance, and configuring a ServiceMonitor so that Kube-Prometheus can scrape it.

Monitor Elasticsearch with Kube-Prometheus

In the Elastic Cloud on Kubernetes post, we automated the deployment of Elastic Cloud to a Kubernetes cluster.

Prior to that, we also deployed Kube-Prometheus to automatically collect metrics from all Kubernetes components. We saw that it also came with some pre-configured Grafana dashboards.

You can read more about how we deployed this using Terraform here.

Now we want to tell our Kube-Prometheus deployment to scrape the metrics from the Elastic cloud instance, so that we can monitor it properly.

Where to start?

There are five steps that we need to take:

  1. Deploy the Elasticsearch Exporter to Kubernetes.
  2. Verify that the exporter can scrape metrics from our Elastic instance.
  3. Tell Kube-Prometheus to scrape the exporter (using a ServiceMonitor).
  4. Add Prometheus as a data source to Grafana.
  5. Add a Grafana dashboard to visualize the data.

In this post we will do steps 1–4, and cover the last one in an upcoming post.

What is Elasticsearch Exporter?

Elasticsearch Exporter is a Prometheus exporter for various metrics about Elasticsearch. It fetches information from an Elasticsearch instance on every scrape.

I recommend that you read through the documentation to find out how to configure the exporter to best fit with your use case.
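As a sketch of what such configuration might look like, here is a hypothetical values override for the chart. The `es.uri`, `es.all`, and `es.indices` keys are common settings of the prometheus-elasticsearch-exporter chart, but double-check them against the chart's own values.yaml before using them:

```yaml
# values-es-exporter.yaml — hypothetical override file for the
# prometheus-elasticsearch-exporter chart (verify keys against the chart docs)
es:
  uri: http://localhost:9200   # address of the Elasticsearch node to scrape
  all: true                    # export stats for all nodes, not only the connected one
  indices: true                # export per-index stats as well
```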

Elasticsearch Exporter — Helm

We can deploy our exporter using a helm chart. This can be found in the prometheus-community charts repository.

Installing the helm chart manually in a Kubernetes cluster is as simple as running:

```shell
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install [RELEASE_NAME] prometheus-community/prometheus-elasticsearch-exporter
```

Elasticsearch Exporter and Terraform

To deploy the exporter using Terraform, we will create a new file in our module (elastic-cloud).

You can find the code here

Create a new file and call it es-exporter.tf

First we add a new resource (helm_release) and name it es_exporter.

helm_release describes the desired status of a chart in a kubernetes cluster.

A Release is an instance of a chart running in a Kubernetes cluster.

A Chart is a Helm package. It contains all of the resource definitions necessary to run an application, tool, or service inside of a Kubernetes cluster.

```hcl
resource "helm_release" "elasticsearch_exporter" {
  repository = var.helm_chart_repository
  name       = "prometheus-elasticsearch-exporter"
  chart      = "prometheus-elasticsearch-exporter"
  version    = var.elasticsearch_exporter_chart_version
  timeout    = 3600
}
```

In our resource we set:

  • repository — the repository URL where to locate the requested chart.
  • name — the name of the release.
  • chart — the name of the chart to be installed.
  • version — the exact chart version to install. If this is not specified, the latest version is installed.
  • timeout — time in seconds to wait for any individual Kubernetes operation (like Jobs for hooks). Defaults to 300 seconds.

Local vs Remote Elastic cluster

If you are running your Elastic cluster locally (and accessible on localhost:9200), then this is all you need to do.

If you have a remote Elastic cluster, you will need to change the es.uri value in your code, since the cluster requires basic auth.

Make a note of the es.uri argument: it is the address of the Elasticsearch node we will connect to, and it is also where we add our credentials if basic auth is needed.

```hcl
set {
  name  = "es.uri"
  value = replace(ec_deployment.elastic_deployment.elasticsearch.0.https_endpoint, "https://", "https://${ec_deployment.elastic_deployment.elasticsearch_username}:${ec_deployment.elastic_deployment.elasticsearch_password}@")
}
```

To set values in the helm_release resource in Terraform, you use a set block.
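Since the value above embeds a password, a safer variant keeps it out of Terraform's plan output. This is a sketch using set_sensitive, which the Terraform Helm provider supports for exactly this purpose:

```hcl
# Variant: mark the credential-bearing value as sensitive so
# Terraform redacts it in plan/apply output.
set_sensitive {
  name  = "es.uri"
  value = replace(ec_deployment.elastic_deployment.elasticsearch.0.https_endpoint, "https://", "https://${ec_deployment.elastic_deployment.elasticsearch_username}:${ec_deployment.elastic_deployment.elasticsearch_password}@")
}
```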

Variables

In the variables.tf file, we define variables for the helm chart repository and for the elasticsearch exporter chart version.

The variable for helm_chart_repository:

```hcl
variable "helm_chart_repository" {
  type        = string
  default     = "https://prometheus-community.github.io/helm-charts"
  description = "Helm Chart repository"
}
```

The variable for elasticsearch_exporter_chart_version:

```hcl
variable "elasticsearch_exporter_chart_version" {
  type        = string
  description = "Helm repository version of elasticsearch-exporter"
  default     = "4.13.0"
}
```

Since we are using the Helm provider from the Terraform Registry, we will also need to add this to our providers.tf file like this:

```hcl
helm = {
  source  = "hashicorp/helm"
  version = "~> 2.6.0"
}
```
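For context, here is a sketch of how that block typically sits inside a required_providers section in providers.tf (the surrounding layout is assumed, not taken from the original post):

```hcl
terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.6.0"
    }
  }
}
```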

Deploying Elasticsearch Exporter using Terraform

When you add new providers to Terraform, you need to run terraform init to install them.

Terraform Plan

If we now run terraform plan, it should detect our changes.

Terraform Apply

You can apply the terraform plan with terraform apply

After a little while elastic-exporter is deployed to our Kubernetes cluster.

Interacting with the Elasticsearch Exporter

Elasticsearch Exporter is now fetching information from our Elasticsearch instance.

Let’s have a look at what was deployed in the previous step.

Check that the endpoint exposes metrics

Here we use Port Forwarding to access applications in our Kubernetes cluster

```shell
kubectl port-forward prometheus-elasticsearch-exporter-xxx 9108:9108
```
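With the port-forward in place, you can check that the exporter actually serves Prometheus metrics. This is a sketch; metric names such as elasticsearch_cluster_health_status are examples from the exporter and may vary by version:

```shell
# Fetch the metrics endpoint through the port-forward and show a few
# exporter metrics; grep for the elasticsearch_ prefix to see what
# your exporter version exposes.
curl -s http://localhost:9108/metrics | grep '^elasticsearch_' | head
```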

Kubernetes ServiceMonitor

A ServiceMonitor describes the set of targets to be monitored by Prometheus.

The Prometheus resource declaratively describes the desired state of a Prometheus deployment, while a ServiceMonitor describes the set of targets to be monitored by Prometheus.

Source

Create a ServiceMonitor resource in Kubernetes to allow our existing Kube-Prometheus to scrape the metrics from the elastic-exporter.

Create a new file called servicemonitor.yaml with the following content.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: elastic-exporter
  namespace: monitoring
  labels:
    release: kube-prometheus-stack
spec:
  endpoints:
  - interval: 5s
    port: http
  namespaceSelector:
    matchNames:
    - monitoring
  selector:
    matchLabels:
      app: prometheus-elasticsearch-exporter
```
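The selector must match the labels on the exporter's Service. You can inspect them with a standard kubectl invocation (adjust the namespace to wherever you installed the chart):

```shell
# List services and their labels; the exporter's Service should carry
# app=prometheus-elasticsearch-exporter for the ServiceMonitor selector to match.
kubectl get svc --namespace monitoring --show-labels
```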

Add the following to the main.tf file in the elastic-cloud module

```hcl
resource "kubernetes_manifest" "elastic_servicemonitor" {
  manifest = yamldecode(templatefile("${path.module}/templates/servicemonitor.yaml", {
    name    = var.ec_instance_name
    appName = local.elastic_cloud.app
  }))
}
```
  • Here, we added a kubernetes_manifest resource and named it elastic_servicemonitor
  • We specified where servicemonitor.yaml can be found.

The appName is important and needs to match the matchLabels in our YAML file.
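Since templatefile passes in name and appName, the stored template presumably uses placeholders for those values. A hypothetical fragment of templates/servicemonitor.yaml might look like this:

```yaml
# Hypothetical templated fragment — ${name} and ${appName} are the
# variables passed in via templatefile above.
metadata:
  name: ${name}
spec:
  selector:
    matchLabels:
      app: ${appName}
```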

Deploying the ServiceMonitor using Terraform

It’s time to deploy it using Terraform.

Run terraform plan to verify the changes that Terraform detected, then apply them with terraform apply.

After a little while, the ServiceMonitor is deployed on our Kubernetes cluster.

You can verify this with kubectl get servicemonitor

Again, use Port Forwarding to access the Prometheus UI:

```shell
kubectl port-forward svc/kube-prometheus-stack-prometheus 9090:9090 --namespace monitoring
```

Check the Targets page and verify that the exporter is listed.

Prometheus is now scraping metrics from the elastic-exporter.
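To confirm data is flowing, try a query in the Prometheus UI. elasticsearch_cluster_health_status is one of the metrics the exporter exposes (exact names may differ depending on the exporter version); it emits one sample per health color, with 1 marking the current status:

```promql
# All health-status samples; value 1 marks the cluster's current color
elasticsearch_cluster_health_status

# Alert-style check: 1 when the cluster is green
elasticsearch_cluster_health_status{color="green"} == 1
```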

Conclusion

In this article, we used Terraform's Helm provider to deploy the Elasticsearch Exporter, which scrapes metrics from an Elastic instance, to our Kubernetes cluster.

We then added a ServiceMonitor so that our Kube-Prometheus deployment could scrape those metrics. In the next post, we will create a Grafana dashboard to visualize the data.
