Worldwide Load Testing with JMeter and Kubernetes on Google Cloud Platform

Romain Billon
FAUN — Developer Community 🐾
8 min read · May 25, 2021


Photo by NASA on Unsplash

So you just developed a fantastic application. You hosted it on the cloud with a provider, and some people complain about "slowness", as users like to say. You check your application's health and metrics and everything is OK: CPU usage, memory usage, processing time. But hey, latencies seem to be unstable depending on the client's geographic location. You want to prove your theory. What about load testing your application? Sure, but building a geo-localized load injection plan is tough, and performance-as-a-service platforms are pretty expensive for a one-off test.

Two weeks ago, I published an article about a very cool JMeter starter kit on Kubernetes. Building on it, I made geo-localized load testing with JMeter possible, using Google Cloud Platform to provision compute instances in user-defined regions around the world. All compute instances are then clustered into a Kubernetes cluster with ease thanks to k3sup (very nice work from https://github.com/alexellis and the 41 other contributors!).

Today, I'll show you how to use this tool: from writing your JMeter scenario and configuring your Google Cloud Platform project and gcloud, to provisioning the machines, configuring them, and launching a fully observable JMeter performance test stack in Kubernetes, in real time! Exciting, isn't it?

I. Preparing the test

That’s obvious, but before running the load test all around the world on powerful machines in a sweet Kubernetes cluster, you’ll need the test scenario.

So first, you’ll need to clone the repository locally

git clone https://github.com/Rbillon59/global-k8s-load-test

Let's open JMeter (once downloaded and unzipped, run):

cd apache-jmeter-version/bin && ./jmeter &

First, we will create a User Defined Variables element where we will centralize the JMeter properties passed at runtime through the .env file, with some default values. Note the "instanceZone" variable, which will be populated with the GCP zone where the load injector is located, so we will be able to compare response times across locations.
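To make this concrete, here is a minimal sketch of how the .env values could be handed to the test at runtime. The variable names come from the .env shown later; the jmeter invocation itself is illustrative, but the -J flag and the ${__P(name,default)} function used inside the User Defined Variables element are stock JMeter mechanisms.

```shell
# Placeholder values; in the real setup these come from the .env
# and, for instanceZone, from the GCP metadata of the injector.
host=myapp.example.com
port=443
instanceZone=europe-west1-b

# Each -J property is read inside the scenario with ${__P(name,default)}.
JMETER_ARGS="-Jhost=${host} -Jport=${port} -JinstanceZone=${instanceZone}"
echo "jmeter -n -t my-scenario.jmx ${JMETER_ARGS}"
```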

Create a Thread Group with all the needed variables. The thread group defines the test duration (duration), the number of concurrent virtual users (number of threads), and the time over which all virtual users are started (rampup).

Let's create a basic HTTP request to your app endpoint with all the previously set variables. Note the instanceZone inside the request name.

Now let's add a bit of live monitoring to see what is going on during the test. Add a Backend Listener in JMeter and set the InfluxDB URL as follows (to use the one running in the cluster): http://influxdb:8086/write?db=telegraf
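For reference, the key fields of the Backend Listener might look like this. The parameter names below are the stock ones of JMeter's bundled InfluxDB backend listener client; the values are examples matching this setup:

```
influxdbMetricsSender: org.apache.jmeter.visualizers.backend.influxdb.HttpMetricsSender
influxdbUrl:           http://influxdb:8086/write?db=telegraf
application:           my-scenario
measurement:           jmeter
summaryOnly:           false
```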

Finally, set the throughput to 10 requests per second for each open thread.
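In JMeter this is typically done with a Constant Throughput Timer, and one gotcha is that it is configured in samples per minute, not per second. So 10 requests per second per thread becomes a target of 600, assuming the timer is set to "this thread only". A quick sanity check:

```shell
# The Constant Throughput Timer takes a target in samples per MINUTE.
requests_per_second=10
target_per_minute=$((requests_per_second * 60))
echo "Timer target: ${target_per_minute} samples/min"   # 600
```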

The complete scenario should look like this

Save your JMeter scenario inside the just cloned repository at scenario/my-scenario/my-scenario.jmx

The file tree of the repo is the following:

+-- scenario
|   +-- dataset
|   +-- module
|   +-- my-scenario
|   |   +-- my-scenario.jmx
|   |   +-- .env

Lastly, update the .env with the endpoint of your application and the other required settings:

## JMeter variables
# Application endpoint
host=pock8s9glzv4sk-wiremock.functions.fnc.fr-par.scw.cloud
# Application endpoint port
port=443
# Application scheme
protocol=https
# Number of threads per load injectors
threads=50
# Test duration in seconds
duration=600
# Time to open all the threads in seconds
rampup=60
## Provisioning variables
# Type of machine used in GCP
MACHINE_TYPE="e2-standard-4"
# GCP Network created to put the instances in
NETWORK_NAME="k3s-network"
# Network tags to apply to the compute instances to match firewall rules
MACHINE_NETWORK_TAG="k3s"
# Boot disk size (If longer test, think about larger)
MACHINE_DISK_SIZE="10GB"
# GCP zones. Typically which location you want to test
ZONES=("europe-west1-b" "europe-central2-a" "australia-southeast1-a" "northamerica-northeast1-a" "us-west1-a" "asia-east2-a")
# The number of load injectors, based on the number of zones: one injector per zone
nb_injectors=${#ZONES[@]}
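The last line is plain bash: nb_injectors is derived from the length of the ZONES array, so adding or removing a zone automatically adjusts the injector count. A quick check of that mechanism:

```shell
# Same mechanism as in the .env: one load injector per GCP zone.
ZONES=("europe-west1-b" "europe-central2-a" "australia-southeast1-a" \
       "northamerica-northeast1-a" "us-west1-a" "asia-east2-a")
nb_injectors=${#ZONES[@]}
echo "Will provision ${nb_injectors} injectors"   # 6
```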

OK, we have our scenario, let's play!

II. Deploying the cluster

Starting from scratch on GCP

Installing gcloud and kubectl.

I won't reinvent the wheel with yet another tutorial on how to install gcloud; the official documentation is more than enough to get started.

The same goes for kubectl.

Generating the GCP service account

You need to generate a service account that can manage Google Cloud Platform "Compute" resources, and use it to authenticate your local gcloud calls. First, go to the GCP console at https://console.cloud.google.com/ and create a new GCP project.

Then open the side menu and head to “Compute Engine” to enable the Compute API.

Once done, go to "IAM", then "Service accounts", and click "Create".

Now give your account two roles :

Compute admin

Service account user

Be careful, these permissions are powerful.
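The same account can also be created from the CLI. Here is a hedged sketch using real gcloud commands; the project ID, account name, and key file name are placeholders, and the commands are printed rather than executed so you can review them first:

```shell
PROJECT_ID=my-loadtest-project         # placeholder
SA_NAME=loadtest-sa                    # placeholder
SA_EMAIL="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"

# Drop the leading `echo` to actually run these.
echo gcloud iam service-accounts create "$SA_NAME" --project "$PROJECT_ID"
echo gcloud projects add-iam-policy-binding "$PROJECT_ID" \
     --member "serviceAccount:${SA_EMAIL}" --role roles/compute.admin
echo gcloud projects add-iam-policy-binding "$PROJECT_ID" \
     --member "serviceAccount:${SA_EMAIL}" --role roles/iam.serviceAccountUser
echo gcloud iam service-accounts keys create sa-compute.json --iam-account "$SA_EMAIL"
```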

Hit "Done", then click on the three little dots next to the newly created account to open the "Manage keys" menu.

Google Cloud Platform Manage keys menu

Create a new JSON key and download it.

Save the generated key on your device, at the root of the repository (locally).

/!\ Never push private keys to the repository, whether it is public or private!

Adding SSH keys

Lastly, you need to generate an SSH key and add it to the compute instances' metadata in order to be able to install the k3s cluster.

Inside the “Compute” menu, look for “Metadata” and add a public key
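Generating a dedicated key pair is one ssh-keygen away. In practice you would write it to ~/.ssh/gcp-compute (the path used later in this article); the sketch below uses a temp directory just to stay side-effect free:

```shell
# Generate a passphrase-less ed25519 key pair dedicated to the GCP instances.
KEY_DIR="$(mktemp -d)"
KEY_PATH="${KEY_DIR}/gcp-compute"
ssh-keygen -t ed25519 -f "$KEY_PATH" -N "" -C "k3s-loadtest" -q
cat "${KEY_PATH}.pub"   # paste this into Compute Engine > Metadata > SSH keys
```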

That’s it !

Provisioning the instances and creating the k3s cluster

At the root of the repository, you'll find a script called deploy-k3s-clusters.sh

You can run it with the following parameters :

./deploy-k3s-clusters.sh -s <path_to_service_account> -p <gcp_project_id> -k <path_to_private_ssh_key> -u <ssh_username> -j <jmeter_scenario_folder>
  • The -s is the path to the downloaded service account private key, used for the gcloud authentication part
  • The -p is the GCP project ID, which you can find inside the project selection box
  • The -k specifies the SSH private key path used to SSH into the instances (the private counterpart of the public key added to GCP)
  • The -u is the SSH username
  • The -j points to the .env of the JMeter scenario (machine types, zones, etc.). It is the name of the scenario folder, which must match the name of the .jmx file without the extension

You can add -d to delete all instances

Example invocation of the script:

./deploy-k3s-clusters.sh -s sa-compute.json -p worldwide-loadtest -k $HOME/.ssh/gcp-compute -u rbillon -j my-scenario
The script will:

1. Provision the instances

2. Use the k3sup project to deploy k3s on each provisioned instance

3. Deploy the JMeter Kubernetes stack inside the cluster
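Step 2 roughly boils down to k3sup calls like these. The IPs below are placeholders (the real script discovers them via gcloud), and the commands are printed rather than executed:

```shell
SSH_USER=rbillon                  # placeholder, matches the -u flag
SSH_KEY="$HOME/.ssh/gcp-compute"  # the private key from the -k flag
SERVER_IP=203.0.113.10            # first provisioned instance becomes the k3s server
AGENT_IPS=(203.0.113.11 203.0.113.12)

# Drop the leading `echo` to actually run these.
echo k3sup install --ip "$SERVER_IP" --user "$SSH_USER" --ssh-key "$SSH_KEY"
for ip in "${AGENT_IPS[@]}"; do
  echo k3sup join --ip "$ip" --server-ip "$SERVER_IP" --user "$SSH_USER" --ssh-key "$SSH_KEY"
done
```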

Check that everything is OK:

kubectl get pods
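If you'd rather script that check, something like this counts the pods that are not yet Running; the sample output below is made up and stands in for a live kubectl call:

```shell
# Stand-in for live `kubectl get pods` output.
PODS_OUTPUT='NAME            READY   STATUS    RESTARTS   AGE
influxdb-0      1/1     Running   0          2m
grafana-7d9f    1/1     Running   0          2m
telegraf-x2k    0/1     Pending   0          2m'

# Count pods whose STATUS column is neither Running nor Completed.
not_ready=$(printf '%s\n' "$PODS_OUTPUT" | tail -n +2 \
  | awk '$3 != "Running" && $3 != "Completed"' | wc -l)
echo "${not_ready} pod(s) not ready yet"
```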

III. Run the load test !

Use the script start_test.sh at the root of the repository :

source scenario/my-scenario/.env && ./start_test.sh -n default -i ${nb_injectors} -j my-scenario.jmx
  • The -n is for the Kubernetes namespace (beware: if not using default, some resources are namespaced, so you will need to change them manually)
  • -i is the number of load injectors (one per zone). The variable comes from the .env sourced just before
  • -j selects the scenario to run

This script will deploy the JMeter master and worker pods across the Kubernetes nodes. For more information, I invite you to read this more detailed article.

Hint: be patient, the cluster can show pretty high latency because of the worldwide distribution of its nodes!

Expose Grafana with a quick port-forward:

kubectl port-forward grafana-<podId> 3000

And point your favorite browser to : http://localhost:3000

Default credentials are admin and XhXUdmQ576H6e7

Aaand that's it: you can now see the influence of the customers' location on the response time of your application. Here we can see that Australian users get three times the response time of European ones (456 ms vs 155 ms at the 90th percentile).

To free all resources, just add -d to the previous command line:

./deploy-k3s-clusters.sh -s worldwide-load-test-1e242ca2a6eb.json -p worldwide-load-test -k $HOME/.ssh/gcp-compute -u rbillon -j my-scenario -d

Or, more drastically, delete the GCP project.

Conclusion

Here it is: the more you know about your application's behavior, the more you'll be able to tune it. With this bunch of tools, you can set up a geo-localized performance test in minutes. I hope you'll enjoy playing with it. Just beware: the setup trades security for simplicity in several places, so these resources should stay ephemeral if left as is.

If you like my work and want to learn more about performance testing, DevOps, and technical quality, feel free to follow me on Medium, on Twitter, and on LinkedIn.


