How to set up a highly available Pacemaker/Corosync cluster with an HAProxy load balancer

Ajko
FAUN — Developer Community 🐾
Sep 24, 2021


This page explains how to set up a Pacemaker/Corosync cluster as a highly available, scalable resource manager, with HAProxy as the load balancer that distributes traffic to the backends.

Pacemaker/Corosync HA cluster with HAProxy

Technology stack:

HAProxy: HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications.

Reference: https://www.haproxy.org/

Pacemaker: a high-availability cluster resource manager. It provides the basic functions for a group of computers (called nodes or members) to work together as a cluster.

A Pacemaker stack is built on five core components:

  • libQB — core services (logging, IPC, etc.)
  • Corosync — membership, messaging, and quorum
  • Resource agents — a collection of scripts that interact with the underlying services managed by the cluster
  • Fencing agents — a collection of scripts that interact with network power switches and SAN devices to isolate cluster members
  • Pacemaker itself

Reference: https://clusterlabs.org/

Virtual IP: the VIP floats between the virtual machines or bare-metal machines of the Pacemaker cluster. The VIP receives each request, and HAProxy passes it on to the target API/backend.

Getting started

Prerequisites:

  1. First, provision 3 (recommended) VMs or bare-metal machines.
  2. Make sure the nodes can reach each other on TCP/2224 for pcsd and UDP/5405 for Corosync; see the firewall sketch after this list.
  3. Pick one free IP address in your nodes' subnet to serve as the VIP.
  4. Operating system (OS): Ubuntu Focal. Except for the package-installation part, everything here applies to other operating systems as well.
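
If a host firewall is running on the nodes, these ports have to be opened. A minimal sketch assuming ufw, Ubuntu's default firewall frontend (adapt for firewalld or plain iptables):

$ ufw allow 2224/tcp
$ ufw allow 5405/udp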

Installation:

Install the Pacemaker and Corosync packages (on every node)

$ apt install pacemaker corosync pcs fence-agents

Set a password for the hacluster user (on every node)

$ passwd hacluster

Enable and start the pcsd service

$ systemctl start pcsd

$ systemctl enable pcsd

Cluster creation and configuration

Note: the following steps need to be executed on the first node only.

Now we need to authenticate the members of the cluster (ha01, ha02, ha03).

Important: the members here are the hostnames of the cluster nodes and must be resolvable via DNS (or /etc/hosts).
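
If you have no DNS records for the nodes, map the hostnames in /etc/hosts on every node. A minimal sketch with placeholder addresses — substitute your nodes' real IPs:

192.0.2.11 ha01
192.0.2.12 ha02
192.0.2.13 ha03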

$ pcs host auth ha01 ha02 ha03

It will prompt for a username and password; use hacluster as the username, with the password defined in the previous step.

Set up and start the cluster

$ pcs cluster setup <CLUSTER NAME> --start ha01 ha02 ha03 --force
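
For example, with the cluster name demo (the name that shows up in the status output further below):

$ pcs cluster setup demo --start ha01 ha02 ha03 --force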

Enable the cluster on all nodes so it starts at boot

$ pcs cluster enable --all
ha01: Cluster Enabled
ha02: Cluster Enabled
ha03: Cluster Enabled

STONITH

STONITH (Shoot The Other Node In The Head) is a fencing technique that isolates failed or misbehaving nodes from the cluster to avoid disruption and data corruption. It is strongly recommended for production clusters.

Here we are going to disable it for tutorial purposes:

$ pcs property set stonith-enabled=false
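
You can verify the setting afterwards; pcs property show prints the current cluster properties:

$ pcs property show stonith-enabled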

HAProxy configuration

The pcs resource commands below need to be run on the first node only. HAProxy itself, however, must be installed and configured identically on every node, since the cluster can start the HAProxy resource on any of them.

Installing HAProxy

$ apt install haproxy

Creating a resource for the VIP in the cluster

  • VirtualIP: the name we give this resource in the cluster.
  • IPaddr2: the name of the resource script. Its full form is ocf:heartbeat:IPaddr2, where ocf is the resource standard and heartbeat is the OCF namespace the script lives in; pcs lets you omit both when the script name is unambiguous.

$ pcs resource create VirtualIP IPaddr2 ip=<VIRTUAL IP> cidr_netmask=24

Create a resource for HAProxy

This creates a resource bound to the haproxy systemd unit:

$ pcs resource create HAProxy systemd:haproxy

Co-locate HAProxy and the VIP so they always run together on the same node:

$ pcs constraint colocation add VirtualIP with HAProxy score=INFINITY
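
Optionally — this is an addition beyond the original setup — an ordering constraint ensures the VIP is brought up before HAProxy starts, so HAProxy never tries to bind to an address that does not exist yet:

$ pcs constraint order VirtualIP then HAProxy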

Configuring HAProxy

Below is a sample haproxy.cfg.

Edit or create /etc/haproxy/haproxy.cfg; as an example, it enables the HAProxy status page:

global
    chroot /var/lib/haproxy
    daemon
    group haproxy
    pidfile /var/run/haproxy.pid
    user haproxy

defaults
    log 127.0.0.1 local0
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1800s
    timeout server 1800s
    timeout check 10s

listen stats
    bind :9999
    mode http
    stats enable
    stats hide-version
    stats uri /stats
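
The sample above only exposes the stats page. To actually split traffic to your backends, you would add frontend and backend sections; below is a minimal sketch with placeholder server names and addresses, to be adapted to your environment:

frontend http_in
    bind :80
    mode http
    default_backend app_servers

backend app_servers
    mode http
    balance roundrobin
    server app1 192.0.2.21:8080 check
    server app2 192.0.2.22:8080 check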

After modifying haproxy.cfg, restart the HAProxy service on the node where the VIP is running:

$ systemctl restart haproxy
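
Tip: before restarting, you can validate the configuration file; HAProxy's -c flag checks the config without starting the daemon:

$ haproxy -c -f /etc/haproxy/haproxy.cfg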

To open the HAProxy status page, point your browser at the FQDN or IP address of the node where the VIP is running:

http://<NODE OF CLUSTER VIP IS RUNNING>:9999/stats
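
If you are not sure which node currently holds the VIP, the pcs status output below shows where VirtualIP is started; you can also check locally on a node:

$ ip -4 addr show | grep <VIRTUAL IP>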

Check the Pacemaker cluster status

$ pcs status
Cluster name: demo
Cluster Summary:
* Stack: corosync
* Current DC: ha02 (version 2.0.3-4b1f869f0f) - partition with quorum
* Last updated: Fri Sep 24 16:59:16 2021
* Last change: Mon Sep 20 11:59:32 2021 by root via crm_resource on ha01
* 3 nodes configured
* 2 resource instances configured

Node List:
* Online: [ ha01 ha02 ha03 ]

Full List of Resources:
* VirtualIP (ocf::heartbeat:IPaddr2): Started ha01
* HAProxy (systemd:haproxy): Started ha01

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled

Cluster nodes status

$ pcs cluster status
Cluster Status:
Cluster Summary:
* Stack: corosync
* Current DC: ha02 (version 2.0.3-4b1f869f0f) - partition with quorum
* Last updated: Fri Sep 24 17:00:01 2021
* Last change: Mon Sep 20 11:59:32 2021 by root via crm_resource on ha01
* 3 nodes configured
* 2 resource instances configured
Node List:
* Online: [ ha01 ha02 ha03 ]

PCSD Status:
ha03: Online
ha02: Online
ha01: Online

Now you have a fully functional HA cluster with an HAProxy load balancer configured.

In a separate story, I will take a deeper dive into HAProxy configuration.

Have fun…!


