Forum Discussion

L33BYT
2 years ago

Data throughput - I need to work out or display/graph total data

We are trying to work out and compare how much data is used by our wifi access points versus the site's traffic in total.

The AP portals are fairly easy, as they report total data used and by which application. However, all the graphing in LogicMonitor shows transfer speeds, not total data.

Is there any graph or datapoint I can create that will show the total traffic transferred over a period of time, so I can subtract the wifi traffic from the total and be left with a value?

2 Replies

  • What you're looking for is InOctets and OutOctets. However, there's a trick. These two datapoints are set as type "counter", which means that the underlying data is read like an odometer. To calculate the "value" of this datapoint, there is a two-step calculation (the same for all counter/derive type datapoints):

    1) The previous poll's reading of the OID is subtracted from the current reading. This gives you the number of bytes since the last time the OID was polled. Side note: I do wish there was a way to stop the calculation here, as this number, particularly in your case, is very useful.

    2) LM then takes it one step further and normalizes the data. Let's look at why this is done, then how. Say you're polling one OID every minute and another OID every 5 minutes. Even if the two OIDs were incrementing at the same rate, the delta between the previous poll and the current poll would be drastically different for each, which makes the data hard to compare. So, as most mathematicians would do, the data is "normalized" by bringing everything into the same unit. That's the "why". The "how" is simply to divide the delta by the number of seconds between the two polls*, so in practice you get the number of bytes per second (see the sketch at the end of this reply).

    All of this is to say that you can undo this calculation by multiplying InOctets/OutOctets by the number of seconds between polls to get back the raw number of bytes. You do this with complex datapoints: 1) create one that holds the poll interval, i.e. the number of seconds between polls, by making a complex datapoint whose value is ##POLLINTERVAL##; 2) create a pair of complex datapoints that multiply InOctets and OutOctets by that poll-interval datapoint. These give you the number of bytes received/sent since the last poll, which you can then graph however you'd like to get total volume (the sketch at the end of this reply walks through the arithmetic).

    *Unless it's changed with the scripted version of the interfaces datasource, there is some possibility of error here. LM assumes the amount of time between two polls is exactly equal to the desired poll interval. This isn't always the case, though, since queueing and latency delays can introduce up to several thousand milliseconds of difference between the desired poll interval and the actual one. So instead of the poll interval being 60.000 seconds, it might be 60.892 seconds, or 59.026 seconds. This difference is minimal and only becomes marginally significant when monitoring interfaces with really high bandwidths (~40+Gbps), and only when those interfaces are highly utilized. For example, if the true utilization were 34.698Gbps and the poll timing were off by 500ms, the calculated utilization would come out around 34.41Gbps, an error of roughly 0.29Gbps, or well under 1%. On a 40Gbps interface, this error is negligible.
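
    To make the arithmetic concrete, here is a minimal sketch of the whole round trip, using made-up counter readings and a hypothetical 60-second poll interval (the real values come from ifHCInOctets/ifHCOutOctets and ##POLLINTERVAL## on your device):

        # Hypothetical illustration of the counter -> rate -> bytes-per-poll round trip.
        POLL_INTERVAL = 60  # seconds between polls (what ##POLLINTERVAL## resolves to)

        # Odometer-style raw counter readings from three successive polls
        readings = [1_000_000, 451_000_000, 2_890_000_000]

        total_bytes = 0
        for previous, current in zip(readings, readings[1:]):
            delta = current - previous               # step 1: bytes since the last poll
            rate = delta / POLL_INTERVAL             # step 2: what InOctets actually reports (bytes/sec)
            bytes_this_poll = rate * POLL_INTERVAL   # the complex datapoint: undo the normalization
            total_bytes += bytes_this_poll

        print(f"Total bytes over the window: {total_bytes:,.0f}")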
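
    And a quick check of the timing-skew error described in the footnote, with assumed numbers (true rate 34.698Gbps, polls landing 500ms closer together than LM assumes):

        # Rough check of the poll-skew error (assumed numbers, not measured data)
        true_rate_gbps = 34.698      # actual utilization
        assumed_interval = 60.0      # LM divides by the configured poll interval
        actual_interval = 59.5       # real gap between polls, 500 ms short

        # The byte delta reflects the actual gap, but gets divided by the assumed one
        calculated = true_rate_gbps * actual_interval / assumed_interval
        print(f"calculated: {calculated:.3f} Gbps, error: {true_rate_gbps - calculated:.3f} Gbps")
        # -> calculated: 34.409 Gbps, error: 0.289 Gbps (well under 1% of a 40Gbps link)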

  • I recently had a similar request: the task was to look at (hundreds of) Cisco routers defined in our instance of LogicMonitor and compare throughput between the primary interface and the backup VLAN interface. The approach I took was slightly different.

    The problem was that the InOctets and OutOctets metrics differed wildly between polling intervals (by default 2 minutes), so computing total throughput as "TotalBps" ((InOctets*8) + (OutOctets*8)) straight from those metrics produced peaks and valleys, regardless of how the two metrics are defined (counter, derive, gauge). So I defined a virtual datapoint "Total95" in one of the graphs as "percent(TotalBps,95)", which returns the 95th-percentile value of all available TotalBps values for the displayed time range, in my case Last Hour, using aggregated data. Now the comparison can be done (a sketch of that percentile calculation follows below).

    Also, the fact that you can plot all of these devices' metrics on a dashboard, save that dashboard into a report, and then fetch the report programmatically via the API and parse it for individual device analysis is awesome. Instead of going to hundreds of devices one by one and risking exceeding the API rate limit, you make a single API call to get the report and then crunch the numbers with the scripting language of your choice. I'm not sure this is the right approach, of course, but it seems to be working so far; the comparison is yielding correct results, as confirmed by the network engineers on prem.
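
    For anyone curious what the percentile step is doing, here is a minimal sketch of a nearest-rank calculation with invented TotalBps samples (LogicMonitor's percent() may interpolate slightly differently):

        import math

        def percentile(samples, pct):
            # Nearest-rank percentile of a list of samples
            ordered = sorted(samples)
            rank = math.ceil(pct / 100 * len(ordered))
            return ordered[rank - 1]

        # One hour of 2-minute polls: TotalBps = (InOctets * 8) + (OutOctets * 8), in bits/sec
        total_bps = [41e6, 38e6, 120e6, 45e6, 39e6, 640e6, 52e6, 47e6, 43e6, 36e6,
                     55e6, 49e6, 41e6, 44e6, 56e6, 40e6, 46e6, 42e6, 48e6, 37e6,
                     44e6, 51e6, 39e6, 43e6, 45e6, 50e6, 42e6, 38e6, 47e6, 41e6]

        print(f"Total95 = {percentile(total_bps, 95) / 1e6:.0f} Mbps")
        # Only the top 5% of samples (here, just the single 640 Mbps spike) are
        # discarded, which is what takes the worst of the peaks out of the comparison.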