Monitoring HAProxy?

I've got a trio of load balancers running HAProxy that I need to monitor. I found the HAProxy module and installed it. I verified using 'Test AppliesTo' and it found all 3 servers, so I assume that means it's been associated correctly? It's been a few days and the resources are not displaying any HAProxy-related info, nor do I have a dropdown (I don't know the correct term) under the resource itself like there is for CPU, Disks, etc.

Second question: reading the description here, am I correct in assuming that the only stat this module reports is sessions? If so, that's missing a ton of important stats...



13 replies

Yes, the only datapoint that DataSource tracks is sessions. It would seem the active discovery is not returning anything in your case. Navigate to the DataSource (the same place you tested the AppliesTo) and click the "Test Active Discovery" button. If you see no results there, that's your problem. 

This DS uses the HTTP discovery method, meaning that discovery involves pulling up a web page and scraping it for the instances (that's the word you were looking for). In this case, it looks at https://[hostname/ip]/haproxy?stats and scrapes for anything matching the RegEx: <th colspan=2 class=.pxname.>(.*?)</th>. I would start by hitting the stats-enabled frontend on one of your haproxies to see if the page loads. If it does not, you probably need to add that frontend to your haproxy config.
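If you want to sanity-check that regex outside of LM, here's a quick sketch. The HTML fragment below is made up to mimic the stats page; substitute the body of your actual /haproxy?stats response:

```python
import re

# the same pattern the DataSource's discovery uses
PATTERN = re.compile(r'<th colspan=2 class=.pxname.>(.*?)</th>')

# made-up fragment standing in for the real /haproxy?stats HTML
sample = (
    '<th colspan=2 class="pxname">stats</th>'
    '<th colspan=2 class="pxname">mysite</th>'
)

# each match becomes a discovered instance name
print(PATTERN.findall(sample))  # ['stats', 'mysite']
```

If this prints an empty list against your real page, discovery will come up empty too, and the regex (or the stats config) is what needs adjusting.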

It's possible that this used to be enabled out of the box for older versions of haproxy and the newest version of haproxy requires you to explicitly configure it.

Once you get the page loading in your browser, you might need to make some changes to the DS to get the discovery to pull the page correctly. Once that's working, it looks like it shouldn't be hard to get the other stats from the table on that page. You'll just have to get real familiar with RegEx.

I just got haproxy up and running in Docker and I'll take a look today during any free time I have to see what can be done to pull some of the other stats. Did you manually add the haproxy category to your servers or was it discovered? I'm not aware of a PropertySource that auto-discovers haproxy installed on devices, but it wouldn't be the first time there's a PropertySource I'm unaware of.

Ok, I think I have something for you. Using this haproxy.cfg file:

frontend stats
    bind :8404
    mode            http
    log             global
    maxconn 10

    timeout client  100s
    timeout server  100s
    timeout connect 100s
    timeout queue   100s

    stats enable
    stats hide-version
    stats refresh 30s
    stats show-node
    stats uri  /haproxy?stats

frontend mysite
frontend hissite
frontend theothersite


I was able to write a DS that pulls in 56 different datapoints for each frontend. Your mileage may vary. My /haproxy?stats is running on port 80, not 8404 (it's running inside a container where the container runtime remaps 80:8404). Either way, you can add a property to the host called "haproxy.port" to specify a port other than 80 that your stats page is running on. I'll be publishing this to the Exchange shortly, where it will need to undergo code review, but here it is in the meantime:

FYI, instead of scraping the HTML like the old version did, I dove into the JSON version of the data. I don't know if this just wasn't available in previous versions of HAProxy, or if someone thought it was easier to scrape the HTML. Either way, it necessitates a new DS since the collection method changes from WEBPAGE to BATCHSCRIPT. You should be able to import it into your portal without changing the existing HAProxy DS. Once you get it working, you can delete the existing HAProxy DS.
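For anyone curious what "the JSON version of the data" looks like: on recent HAProxy builds (1.8+), appending ;json to the stats URI returns an array of rows, each row being a list of field objects. A rough, self-contained sketch of pulling one datapoint out of that shape — the sample below is a hand-made, heavily trimmed stand-in for a real response, so treat the exact schema as an assumption and check it against your own HAProxy:

```python
import json

# hand-made, trimmed stand-in for GET http://host/haproxy?stats;json
# (real responses carry many more field objects per row)
sample = json.dumps([
    [
        {"field": {"name": "pxname"}, "value": {"type": "str", "value": "stats"}},
        {"field": {"name": "scur"}, "value": {"type": "u32", "value": 3}},
    ],
])

for row in json.loads(sample):
    # index each row's field objects by field name
    fields = {f["field"]["name"]: f["value"]["value"] for f in row}
    print(fields["pxname"], fields["scur"])  # stats 3
```

Once the fields are keyed by name like this, mapping them to datapoints is straightforward, which is most of what the BATCHSCRIPT DS does.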

Stuart, do you happen to have a CSV version of this DataSource?

What do you mean a CSV version?

So, since I run multiple HAProxy processes, each has its own stats page. With the JSON format I was getting weird values, as stats would pull from different processes, so my graph would be bouncing around. I finally figured out that there is a Lua script that can pull stats from all the processes and aggregate them to CSV.

So here's the output my main HAProxy process page shows

And I'm wanting to parse it for the metrics I want

<from my Lua stats page>



i.e., I want to pull the "BILLY_back, FOX1" scur value

I may just do this with Python and SNMP; it seems like a much simpler approach, though it requires code on the servers


15 hours ago, danp said:

so since i run multiple ha processes, each have their own stats page

Is each page at its own address (port)? If so, we should be able to easily modify the discovery and collection scripts to pull from each one.

14 hours ago, danp said:

I may just do this with python and snmp, seems much more of a simple approach, yet requires code on servers

If that's an option, you can try it. If it's pure SNMP, you might try the no-code option of building an SNMP DataSource in LM. 

Yes, each process will require its own stats page.

What we found is that when we ran multiple processes, the master haproxy stats page would pull randomly from one of the running processes. Our stats would read 32 current_sessions, then a second later read 15 current_sessions, so the graph was skewed, when we really needed 32+15 for total sessions.

We used Lua to aggregate the stats as shown here:

Thus it creates a master aggregate page (ours is on port 8880) which dumps the CSV.

I ended up just doing a simple Python script to pull the stats back as a key-value pair and extending SNMP to pull them (it was the easy solution):


import requests
import io
import csv

# URL of the aggregated CSV stats page (redacted in the original post)
r = requests.get('')
f = io.StringIO(r.text)
reader = csv.reader(f, delimiter=',')
for row in reader:
    if row and row[0] == 'BILLY_back':
        # loop body was truncated in the post; presumably it emits the
        # wanted columns as a key-value pair, e.g.:
        print(row)

This returns a clean K-V pair, which can easily be used as an SNMP extension, with the metrics pulled into a very simple DataSource.
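For reference, the "extending snmp" part of that is a one-liner in snmpd.conf via Net-SNMP's extend directive — the extension name and script path here are placeholders for wherever you install your script:

```
# snmpd.conf -- expose the script's stdout under NET-SNMP-EXTEND-MIB
# (name and path are hypothetical; match them to your install)
extend haproxy-stats /usr/local/bin/haproxy_stats.py
```

The script's output then shows up under the nsExtendOutput tables, which an SNMP DataSource can poll without any further code on the collector.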



Cool that it's working. I think it would be pretty easy to modify the existing DS to pull from separate pages, then LM can aggregate if you need it but also show individual stats as well. Is there a programmatic way to discover the addresses of all the pages?

I know all the addresses; they would be 8881-4, with the master aggregated page on 8880.
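Since the range is known, per-process discovery could just enumerate it rather than probe anything. A minimal sketch — the port numbers come from this thread, and the instance naming is my own invention:

```python
MASTER_PORT = 8880                  # Lua-aggregated CSV page
PROCESS_PORTS = range(8881, 8885)   # one stats page per HAProxy process

# each port becomes a discovered instance, e.g. "haproxy-8881"
instances = [f"haproxy-{p}" for p in PROCESS_PORTS]
print(instances)  # ['haproxy-8881', 'haproxy-8882', 'haproxy-8883', 'haproxy-8884']
```

A discovery script like this would let LM track each process as its own instance, with the portal doing the aggregation.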

I can see how we would be able to use the JSON slurper to pull the individual pages, but that just seems like a waste of processes when we can just parse the aggregate page with some sort of web CSV slurper.

Like I said, if it's working, you're done, especially if you're not interested in per-process stats.

@Stuart Weenig Hey Stuart, coming back to this due to some internal changes. Your old DS worked great here. I never thanked you :shame: but thank you for it. So I went looking for it again today and your lmcommunity repo seems to be gone?

Also checked the LM DS repo within the app and can't find any 2_4. Does it still exist? If so, can you point me towards it?


I left LM, lots of things changed. New repo with stuff I'm rebuilding is here.