Yes, each process will require its own stats page.
What we found is that when we ran multiple processes, the master HAProxy stats page would pull randomly from one of the running processes. Our stats would read 32 current_sessions one moment and 15 current_sessions a second later, so the graph was skewed, when what we really needed was 32+15 for the total sessions.
We used Lua to aggregate the stats, as shown here: https://discourse.haproxy.org/t/lua-solution-for-stats-aggregation-and-centralization/27
That creates a master aggregate page (ours is on port 8880) which dumps the stats as CSV.
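For reference, the wiring in haproxy.cfg looks roughly like this; treat it as a sketch only, since the actual Lua file and service name come from the script in the link above (aggregate-stats.lua and lua.aggregate-stats below are placeholders, not the real names):

global
    # load the aggregation script from the linked post (path/name assumed)
    lua-load /etc/haproxy/aggregate-stats.lua

# dedicated frontend that serves the merged CSV on port 8880
frontend stats_aggregate
    bind *:8880
    http-request use-service lua.aggregate-stats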
I ended up just writing a simple Python script to pull the stats back as key-value pairs and extending SNMP to pull them (it was the easy solution):
import requests
import io
import csv

# pull the aggregated CSV from the Lua stats page on port 8880
r = requests.get('http://127.0.0.1:8880/')
f = io.StringIO(r.text)
reader = csv.reader(f, delimiter=',')
for row in reader:
    # column 0 is the proxy name; only report rows for our backend
    if row and row[0] == 'BILLY_back':
        # column 1 is the server name; columns 4 and 22 carry the
        # session count and status fields in our aggregated output
        print(f"{row[1]}_SessCur={row[4]}\n{row[1]}_Status={row[22]}")
This returns clean key-value pairs, which can easily be used as an SNMP extension so the metrics can be pulled into a very simple datasource (see the SNMP sketch after the sample output below):
BACKEND_SessCur=60
BACKEND_Status=4
FOX1_SessCur=15
FOX1_Status=4
FOX0_SessCur=15
FOX0_Status=4
FOX3_SessCur=15
FOX3_Status=4
FOX2_SessCur=15
FOX2_Status=4
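For the SNMP side we just used net-snmp's extend mechanism. A minimal sketch, assuming the script above is saved as /usr/local/bin/haproxy_stats.py (the name and path are yours to pick):

# in snmpd.conf: expose the script's output under NET-SNMP-EXTEND-MIB
extend haproxy-stats /usr/bin/python3 /usr/local/bin/haproxy_stats.py

After restarting snmpd, something like
snmpwalk -v2c -c public localhost NET-SNMP-EXTEND-MIB::nsExtendOutLine
walks the key=value lines above, one per OID row, ready for whatever datasource you graph from.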