Forum Discussion

wfaulk
Neophyte
3 hours ago

Collecting a very large number of datapoints

I need to collect data about CPU P-levels on VMware ESXi hosts.

VMware monitoring in LogicMonitor is structured around a vCenter server resource, with all of its hosts created as instances by Active Discovery. There does not seem to be a way to create a datasource that uses those AD-found hosts as target resources.
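
For reference, this is the shape I mean: the vCenter server is the monitored resource, and script Active Discovery emits its hosts one per line in wildvalue##display-name form. A minimal sketch in Python (hostnames are placeholders; a real collector script would typically be Groovy or PowerShell):

```python
# Minimal sketch of script Active Discovery output:
# one line per instance, "wildvalue##display name".
# Hostnames below are hypothetical placeholders.
hosts = ["esx01.example.com", "esx02.example.com"]

for host in hosts:
    # Each ESXi host becomes an instance *under the vCenter resource*,
    # not a standalone resource that other datasources can target.
    print(f"{host}##{host}")
```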

So I have a script that hits my vCenter and loops through each of a few dozen hosts, each with around 80 CPUs, and each CPU with around 16 P-levels. Multiplied out, that's about 30,000 instances. The script runs in the debug environment in about 20 seconds and completes without error, but the output is truncated and ends with "(data truncated)".
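
In rough outline, the enumeration looks like the sketch below (pyvmomi; VCENTER/USER/PASSWORD are placeholders, and p_level_count() stands in for however the real script reads per-CPU P-level data):

```python
# Rough pyvmomi sketch of the enumeration; not the actual script.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def p_level_count(host):
    return 16  # hypothetical stand-in; in reality this varies by CPU model

ctx = ssl._create_unverified_context()  # lab use only; skips cert checks
si = SmartConnect(host="VCENTER", user="USER", pwd="PASSWORD", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    total = 0
    for esx in view.view:                          # a few dozen hosts
        cpus = esx.hardware.cpuInfo.numCpuThreads  # ~80 per host
        for cpu in range(cpus):
            for p in range(p_level_count(esx)):    # ~16 per CPU
                # one Active Discovery instance per (host, CPU, P-level)
                print(f"{esx.name}.cpu{cpu}.p{p}##{esx.name} CPU{cpu} P{p}")
                total += 1
    view.Destroy()
finally:
    Disconnect(si)
# total lands around (a few dozen) * 80 * 16, i.e. about 30,000 instances
```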

When I try to "Test Active Discovery", it just spins and spins, never completing. I've waited up to about 20 minutes, I think.

It seems likely that this is too much data for LogicMonitor to deal with all at once. However, I don't seem to have any way to make the script's target more precise. It would make more logical sense to collapse some of these instances down to multiple datapoints on fewer instances, but there isn't a set number of P-levels per CPU, and there isn't a set number of CPUs per host, so I don't see any way to do that.
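
To be concrete about the collapse I mean: one instance per (host, CPU), with the P-levels emitted as a fixed set of datapoints in BatchScript-style wildvalue.datapoint=value output. A sketch, assuming a made-up cap of 16 levels and a hypothetical residency_for() measurement:

```python
# Sketch of the collapsed layout: one instance per (host, CPU), with
# P-levels as datapoints p0..p15 in BatchScript-style output.
# This only works if the datasource can define a fixed datapoint list,
# which the hardware doesn't promise; everything below is hypothetical.
MAX_P_LEVELS = 16  # would have to be a guessed upper bound

def residency_for(host, cpu, level):
    return 0.0  # placeholder for the real per-P-level measurement

hosts = {"esx01.example.com": 80}  # host -> CPU count, placeholder data

for host, cpu_count in hosts.items():
    for cpu in range(cpu_count):
        wildvalue = f"{host}.cpu{cpu}"
        for level in range(MAX_P_LEVELS):
            # BatchScript collection format: wildvalue.datapoint=value
            print(f"{wildvalue}.p{level}={residency_for(host, cpu, level)}")
```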

There doesn't seem to be any facility to collect this data in batches.

What can I do?
