Sample Rate Increase or high-water-mark report
The default sample rate of 1 minute leads to misleading results, particularly on high-bandwidth interfaces. There has to be a way to infer what happens on a 10G interface in between polls: instead of reading only the bit rate at poll time, is there any way to compare the total bytes/packets sent between polls and use the difference to plot the estimated bit rate that would be required to account for the discrepancy? Please see the two attached graphs, one from LogicMonitor at a 1-minute polling rate and the other from a competitor at 10 seconds for the same interface. Although the LogicMonitor sample covers 2 hours and the other covers 20 minutes, you get the idea: they tell two completely different stories. There has to be a better way to read and interpret results from bursty traffic. We are preparing 100G interfaces shortly, and there is no way we can reliably monitor them with LogicMonitor unless there is some way to account for the overall data transmitted and not just the results at a random polling instant.
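
To make the counter-delta idea concrete, here is a minimal sketch (my own, not a LogicMonitor feature) of deriving the average rate between two polls from an interface's 64-bit octet counters (IF-MIB ifHCInOctets / ifHCOutOctets); the function name and numbers are illustrative:

```python
# Sketch only: estimate the average bit rate between two polls from the
# delta of a 64-bit SNMP octet counter. Counter readings and timestamps
# are assumed to come from whatever poller you already run.

COUNTER64_MAX = 2 ** 64  # ifHC* counters are 64-bit and wrap at 2^64

def avg_bits_per_sec(prev_octets, curr_octets, interval_sec):
    """Average bit rate over one polling interval, handling counter wrap."""
    delta = curr_octets - prev_octets
    if delta < 0:                      # counter wrapped (or device restarted)
        delta += COUNTER64_MAX
    return (delta * 8) / interval_sec  # octets -> bits, per second

# Example: two polls 60 s apart on a 10G interface that moved 45 GB
prev, curr = 1_000_000_000, 1_000_000_000 + 45_000_000_000
print(f"{avg_bits_per_sec(prev, curr, 60) / 1e9:.1f} Gbit/s average")  # ~6.0 Gbit/s
```

The same arithmetic gives a true average for any polling interval; what it cannot recover is anything shorter than that interval, which is exactly why a high-water-mark report would still be valuable on top of it.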

Sample Rate Increase

Sample rates of 60 seconds should suffice for most things, but with today's network gear a lot can happen in 1 minute, particularly in high-density 10Gig environments. See the attached graphs showing the same interface over roughly the same period of time; they tell two completely different stories. I don't expect to catch EVERY spike, but it should be possible to increase the polling frequency, or to also query the various counters (including the 5-minute average counters) and extrapolate how fast an interface needed to move to support those calculated averages.
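
As a rough illustration of that extrapolation (my own back-of-the-envelope, with made-up numbers): if the device's 5-minute average says a certain amount of traffic moved, but the 1-minute instantaneous polls only account for part of it, the interface had to run correspondingly faster during the time the polls missed.

```python
# Illustrative sketch: back-calculate the rate an interface needed during the
# un-sampled portion of a window, given the device's reported 5-minute average
# and the traffic the instantaneous polls can explain. Numbers are invented.

def implied_rate_bps(avg_bps, window_sec, sampled_bps, sampled_sec):
    """Rate (bits/s) required during the un-sampled part of the window."""
    total_bits   = avg_bps * window_sec        # what the 5-minute average implies
    sampled_bits = sampled_bps * sampled_sec   # what the polls account for
    return (total_bits - sampled_bits) / (window_sec - sampled_sec)

# 5-minute average of 4 Gbit/s, but two 1-minute polls caught only ~1 Gbit/s:
# the remaining 3 minutes had to average ~6 Gbit/s on this "quiet" interface.
print(f"{implied_rate_bps(4e9, 300, 1e9, 120) / 1e9:.1f} Gbit/s")
```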