Forum Discussion
What you're looking for is InOctets and OutOctets. However, there's a trick: these two datapoints are set as type "counter", which means the underlying data is read like an odometer. To calculate the "value" of these datapoints, there is a two-step calculation (the same for all counter/derive type datapoints):
1) The previous poll's reading of the OID is subtracted from the current reading of the OID. This gives you the number of bytes transferred since the last time the OID was polled. Side note: I do wish there were a way to stop here, as this number, particularly in your case, is very useful.
2) LM then takes it one step further and normalizes the data. Let's look at why this is done and then how it's done. Say you're polling one OID every minute and another OID every 5 minutes. Even if those OIDs were incrementing at the same rate, the difference between the previous poll and the current poll would be drastically different, which makes it hard to compare the data. So, as most mathematicians would do, the data is "normalized" by doing some math that brings everything into the same unit. That's the "why". The "how" is simple: divide the delta by the number of seconds between the two polls*. In practice, that gives you the number of bytes per second.
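To make the two steps concrete, here's a minimal sketch of the arithmetic (variable names and sample values are illustrative only, not anything internal to LM):

```python
def counter_rate(prev_reading, curr_reading, seconds_between_polls, max_value=2**64):
    """Two-step counter math: delta between polls, then normalize to per-second."""
    delta = curr_reading - prev_reading      # step 1: bytes since the last poll
    if delta < 0:                            # counter wrapped past its maximum
        delta += max_value
    return delta / seconds_between_polls     # step 2: normalize to bytes/second


# Example: two readings of a 64-bit octet counter taken one minute apart
prev, curr = 1_000_000_000, 1_450_000_000
rate = counter_rate(prev, curr, 60)          # 7,500,000 bytes/second
```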
All of this is to say that you can undo the normalization by multiplying InOctets/OutOctets by the number of seconds between polls to get the raw number of bytes. You do this with complex datapoints: 1) create one to hold the poll interval (the number of seconds between polls) by making a complex datapoint whose expression is ##POLLINTERVAL##; 2) create a pair of complex datapoints multiplying InOctets/OutOctets by that poll interval datapoint. These give you the number of bytes received/sent since the last poll, which you can then graph however you'd like to get total volume (see the sketch after this paragraph).
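Here's a rough sketch of what those complex datapoints compute, assuming a 1-minute poll interval; the names are placeholders, and in LM itself you'd reference the actual datapoint names and the ##POLLINTERVAL## token rather than Python variables:

```python
# Undoing the normalization: bytes/second * seconds between polls = raw bytes.
poll_interval = 60                    # what the ##POLLINTERVAL## datapoint would hold

in_octets_rate = 7_500_000            # InOctets as reported by LM, in bytes/second
out_octets_rate = 2_000_000           # OutOctets as reported by LM, in bytes/second

in_bytes_this_poll = in_octets_rate * poll_interval    # 450,000,000 bytes received
out_bytes_this_poll = out_octets_rate * poll_interval  # 120,000,000 bytes sent

# Summing these per-poll values over a reporting window gives total volume.
```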
*Unless it's changed in the scripted version of the interfaces datasource, there is some possibility of error here. LM assumes the amount of time between two polls is exactly equal to the desired poll interval. That isn't always the case: queueing and latency delays can introduce up to several thousand milliseconds of difference between the desired poll interval and the actual poll interval, so instead of the interval being 60.000 seconds it might be 60.892 seconds, or 59.026 seconds. The difference is minimal and only becomes marginally significant on interfaces with very high bandwidths (~40+Gbps), and only when those interfaces are highly utilized. For example, if the true utilization were 34.698Gbps and the poll timing were off by 500ms, the calculated utilization would be roughly 34.41Gbps, an error of about 0.29Gbps. On a 40Gbps interface, that's well under 1% and effectively negligible.
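For the curious, here's one way the footnote's numbers can arise, assuming the real gap between polls was 500ms shorter than the 60 seconds LM divides by (the direction of the error flips if the gap runs longer instead):

```python
# Rough model of the poll-jitter error described above.
true_rate_gbps = 34.698
nominal_interval = 60.0          # what LM divides by
actual_interval = 59.5           # the real gap was 500 ms shorter this time

gbits_transferred = true_rate_gbps * actual_interval   # what actually crossed the wire
reported_gbps = gbits_transferred / nominal_interval   # ~34.41 Gbps
error_gbps = true_rate_gbps - reported_gbps            # ~0.29 Gbps, <1% of a 40G link
```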