Forum Discussion

Eric_Egolf
5 years ago
Solved

Smoothing Datapoints

We have datapoints that are very spiky by nature. In order to see the signal through the noise, so to speak, we need to average roughly 10 datapoints together, effectively smoothing the data. For example, if we took 1-minute polls of CPU Processor Queue or CPU Ready, we would want to plot the average of the past 10 datapoints (something like the sketch just below). If anyone has suggestions on how to do this, or how they approach datasets that are inherently too noisy for threshold-based alerting, I would love to hear about it.
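
For concreteness, here is a rough sketch of the kind of trailing average I mean (PowerShell just as an example; the sample values and window size are placeholders):

```powershell
# Rough illustration only: average the most recent 10 samples of a noisy series.
# $samples stands in for whatever the 1-minute polls are returning.
$samples = @(3, 41, 2, 38, 5, 44, 1, 40, 6, 39, 4, 42)   # spiky example values
$window  = 10

$recent   = $samples | Select-Object -Last $window
$smoothed = ($recent | Measure-Object -Average).Average
"Raw last value = $($samples[-1]); smoothed over last $window = $([math]::Round($smoothed, 2))"
```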


4 Replies

  • My first thread here was an effort to get my head around the API so I could get the kind of functionality I was used to coming from the SCOM world:

    /topic/2281-rest-api-raw-data-processing-using-powershell/

     

    I see dataSources as a timed script event and propertySources as a run-once script event (I normally target one of the Collectors with it to minimize its resource impact and allow it to run only once per day or so). This lets me leverage the API and bend it to my will, not only to gather real-time data but also to make historical comparisons that the interface alone doesn't allow for.

    One of the things I am trying to figure out is how to leverage an Azure script to grab the last hour of performance data on key metrics and add it to the alert emails that get sent to our support staff, so they have an at-a-glance view of how the server has been running leading up to an alert event without having to log into another system to look at it. A rough sketch of that kind of raw-data pull is at the end of this reply.

    I also have reports that I used to run against the metrics collected by SCOM, and now against LM, that let me perform "right-sizing" of VMs in Azure and Hyper-V for our customers to get them maximum performance at minimal cost.
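
    A rough sketch of that raw-data pull, assuming the LMv1 token authentication and the raw-data resource path described in LogicMonitor's REST API documentation (the portal name, IDs, and credentials are placeholders, and the signature construction is worth double-checking against the current docs):

    ```powershell
    # Sketch: pull the last hour of raw data for one datasource instance from the LogicMonitor REST API.
    # $account, $accessId, $accessKey and the device/datasource/instance IDs are placeholders.
    $account   = 'yourportal'
    $accessId  = 'API_ACCESS_ID'
    $accessKey = 'API_ACCESS_KEY'

    $deviceId = 123; $hdsId = 456; $instanceId = 789
    $resourcePath = "/device/devices/$deviceId/devicedatasources/$hdsId/instances/$instanceId/data"

    $end   = [DateTimeOffset]::UtcNow.ToUnixTimeSeconds()
    $start = $end - 3600                                   # last hour, in epoch seconds
    $queryString = "?start=$start&end=$end"

    # LMv1 signature: HMAC-SHA256 of (verb + epoch-ms + body + resourcePath), hex string then Base64.
    $epoch       = [DateTimeOffset]::UtcNow.ToUnixTimeMilliseconds()
    $requestVars = 'GET' + $epoch + '' + $resourcePath     # empty string = no request body on a GET
    $hmac        = New-Object System.Security.Cryptography.HMACSHA256
    $hmac.Key    = [Text.Encoding]::UTF8.GetBytes($accessKey)
    $hex         = ([BitConverter]::ToString($hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($requestVars))) -replace '-').ToLower()
    $signature   = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($hex))
    $headers     = @{ Authorization = 'LMv1 ' + $accessId + ':' + $signature + ':' + $epoch }

    $uri  = "https://$account.logicmonitor.com/santaba/rest" + $resourcePath + $queryString
    $data = Invoke-RestMethod -Uri $uri -Method Get -Headers $headers
    $data   # timestamps and datapoint values for the window, ready to summarize for the alert email
    ```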

  • Cole, are you suggesting that I could create a datasource, say "smoothed CPU", where that datasource would be a PowerShell script datasource? Then, in the manner described in your post, that script would pull the last 10 datapoints from another datasource, say CPU Processor Queue, via the API and then do the averaging/smoothing?

  • If I understand this approach correctly, it may also solve another item I have been looking at for years: comparing the average CPU usage for a period of time... let's say the average of an hour on Monday against the average for the previous Monday, and alerting if it is over that amount by, say, 30%. The same holds true for bandwidth on my 300-odd customer firewalls, where we always seem to find out after a customer calls saying the internet is slow... then a quick check in LogicMonitor shows they are using way more bandwidth than usual. I would much prefer to simply have a datasource called "Internet Usage Increase" that calls these things out (something like the check sketched below).
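
    To make that concrete, the check itself is simple once both hourly averages are in hand (here $currentValues and $lastWeekValues are assumed to be arrays of raw values pulled via API calls like the one sketched in the reply above; the datapoint name is made up):

    ```powershell
    # Alert-worthy if this hour's average is more than 30% above the same window last week.
    $currentAvg  = ($currentValues  | Measure-Object -Average).Average
    $lastWeekAvg = ($lastWeekValues | Measure-Object -Average).Average

    $pctIncrease = if ($lastWeekAvg) { (($currentAvg - $lastWeekAvg) / $lastWeekAvg) * 100 } else { 0 }

    # Emit the number as a datapoint; a normal LM threshold on it (e.g. > 30) raises the alert.
    "UsageIncreasePercent=$([math]::Round($pctIncrease, 1))"
    ```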

  • Exactly correct. If you want to go crazy with metrics as well, you can always add a SQL Express instance to the Collector (or some other network-accessible server) and use it for whatever storage you'd want for anything else, since that's accessible from PowerShell as well. Start tying multiple tools together (the true strength of PowerShell) to make a larger product. So you could start aggregating your datasets, even writing SQL inside the DB that would perform that processing for you. A rough sketch of such a collection script is below.
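
    Putting the pieces together, the collection script for a "smoothed CPU" style DataSource could look roughly like this. Get-LMRawValues is a hypothetical wrapper around the Invoke-RestMethod call sketched earlier, and this assumes the DataSource is configured to interpret the script output as key=value pairs:

    ```powershell
    # Sketch of a scripted DataSource collection: pull the last 10 raw samples of the
    # underlying datapoint via the REST API and emit their average as a new datapoint.
    # Get-LMRawValues is hypothetical; the IDs and datapoint name are placeholders.
    $window = 10
    $values = Get-LMRawValues -DeviceId 123 -DataSourceId 456 -InstanceId 789 `
                              -Datapoint 'ProcessorQueueLength' -Last $window

    if ($values.Count -gt 0) {
        $avg = ($values | Measure-Object -Average).Average
        "SmoothedProcessorQueue=$([math]::Round($avg, 2))"
    }
    else {
        exit 1   # no data returned; let the collector treat this as a failed poll rather than a zero
    }
    ```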