I am trying to determine how CW computes port utilization in the mini-RMON statistics history view. There is no explanation in the help. In the help for the real-time view, it says that the statistics are calculated using the following formula:
Pkts * (9.6 + 6.4) + (Octets * .8)
----------------------------------
        Interval * 10,000
which makes absolutely no sense to me.
What I really want to know is, if a 100Mb port is completely saturated in one direction (e.g. xmit) and completely quiet in the other (e.g. recv), does the application report this as 50% utilization?
For a full duplex link, the formula is the max of the change between rx and tx * 8 * 100 divided by the time period * ifSpeed. So, for a 100 Mbps link with an ifSpeed of 100,000,000, the utilization over 10 seconds would be:
max(0, 125000000) * 8 * 100
---------------------------
      10 * 100000000
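In Python terms, that max/delta calculation looks roughly like this (a sketch only; the variable names are mine, and it assumes the counters aren't wrapping between samples):

```python
def full_duplex_utilization(delta_rx_octets, delta_tx_octets,
                            delta_seconds, if_speed_bps):
    """Percent utilization of a full-duplex link, busier direction.

    Takes the larger of the rx and tx octet deltas, converts to bits,
    and divides by the bits the link could have carried in the interval.
    """
    busiest = max(delta_rx_octets, delta_tx_octets)
    return busiest * 8 * 100 / (delta_seconds * if_speed_bps)

# 100 Mbps link, rx idle, tx saturated for 10 s (12,500,000 B/s * 10 s):
print(full_duplex_utilization(0, 125_000_000, 10, 100_000_000))  # 100.0
```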
Thanks Joe, but that leads to a couple more questions.
First, when you say "max of the change between rx and tx", this implies to me that the utilization figure measures in only one direction - and doesn't tell you which one it is. Am I understanding this correctly?
Second, I don't understand the use of 125,000,000 byte/s in your example. I would have chalked it up as a typo but your formula does evaluate to 100. Clearly at 100 Mbit/s the theoretical maximum throughput is 12,500,000 byte/s. One of us is misplacing a decimal point (probably me).
The other issue is that mini-RMON does not seem to be using this formula to calculate utilization % in the statistics history. Take these 3 samples for instance:
Interval Start Time       Util %  Total Packets  Total Octets  Broadcasts  Multicasts
5 May 2009 13:22:03 EDT   6.72    628431         492629784     1825        267
5 May 2009 13:27:03 EDT   25.73   2089774        1888742415    1795        273
5 May 2009 13:32:03 EDT   7.72    815833         563679551     2073        274
If I take the second one and plug it into your formula, I don't come up with the 25.73% utilization figure that CW does:
1,888,742,415 * 8 * 100
---------------------------- = 50.37%
300 * 100,000,000
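That arithmetic checks out when run through directly (octet count from the 13:27:03 sample, 5-minute interval):

```python
delta_octets = 1_888_742_415   # Total Octets from the 13:27:03 sample
interval_s = 300               # 5-minute sampling interval
if_speed = 100_000_000         # 100 Mbps

util = delta_octets * 8 * 100 / (interval_s * if_speed)
print(round(util, 2))  # 50.37 -- roughly double the 25.73 CW reports
```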
You're absolutely right. That is why it is recommended to calculate the rx and tx directions of a full duplex link independently. However, when calculating an overall interface utilization, the max/delta formula is typically used.
I'm not sure what MiniRMON is using here. I don't have that code. However, the formula in the help is bogus. My guess is that Total Octets is a sum of tx and rx, but the utilization calculation is only being done for one of the directions.
I finally got clarification on the formula used by MiniRMON Manager. The documentation is clearly wrong. The formula used is:
utilization = ((deltaPkts * 16) + (deltaOctets * 0.8)) / (deltaTime * 100 * (speed / 10000000))
This is based on the formula found in the RMON-MIB for etherStatsOctets, and does produce a valid percentage.
The data MiniRMON is presenting to you for octets and packets, however, are totals, and not deltas.
Oh, as for the number 125000000 in my previous formula, that represents 100 Mbps ethernet fully saturated for 10 seconds. Hence the extra 0.
Nope, that doesn't work either. But that is close to the formula given in the documentation. "Pkts * (9.6 + 6.4)" = "(deltaPkts * 16)". Actually, the formula you gave is off by a factor of 200 from what CW reports. The formula that is in the documentation is exactly what is in the RMON-MIB and is off by a factor of 2 from what CW reports.
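Both discrepancies are easy to reproduce with the 13:27:03 sample, assuming the displayed packet and octet figures are the interval deltas:

```python
pkts, octets = 2_089_774, 1_888_742_415
interval, speed = 300, 100_000_000
cw_reports = 25.73

# Formula as quoted from development:
dev = ((pkts * 16) + (octets * 0.8)) / (interval * 100 * (speed / 10_000_000))
print(round(dev / cw_reports))  # 200 -- off by a factor of 200

# Formula from the documentation / RMON-MIB (the 10,000 assumes 10 Mbps),
# scaled by 10 to account for the 100 Mbps port:
mib = (pkts * 16 + octets * 0.8) / (interval * 10_000)
mib_100m = mib / 10
print(round(mib_100m / cw_reports, 1))  # 2.0 -- off by a factor of 2
```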
I was confused by the magic numbers, 9.6, 6.4 and 0.8. I managed to stumble across what seems like a reasonable explanation of these figures. I will reproduce that information here (in case the link disappears) and include the link below.
The formula seems to be off by a factor of 10, so rewrite the formula in the RMON-MIB as:
Pkts * (96 + 64) + (Octets * 8)
------------------------------- * 100%
       Interval * ifSpeed
96 is the number of bit times (minimum) for the interframe gap.
64 is the number of bits in the preamble + SFD (start frame delimiter).
8 is the number of bits in an octet.
Now this formula begins to make sense. I plucked that explanation from http://www.wifi-forum.com/wf/showthread.php?t=68496
However, it still doesn't match what mini-RMON comes up with. mini-RMON takes the result of that formula and divides by 2. This leads me to believe that the packet and octet counts that are reported and used in the calculation represent xmit + recv, and that the final utilization figure represents an average of the xmit and recv utilization.
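Putting that together reproduces CW's figure for the 13:27:03 sample (assuming the counts are combined xmit + recv deltas on a 100 Mbps full-duplex port):

```python
pkts, octets = 2_089_774, 1_888_742_415
interval, if_speed = 300, 100_000_000

# Corrected RMON formula: 160 bits of per-packet overhead
# (96-bit interframe gap + 64-bit preamble/SFD) plus 8 bits per octet.
util = (pkts * (96 + 64) + octets * 8) / (interval * if_speed) * 100
print(round(util, 2))      # 51.48
print(round(util / 2, 2))  # 25.74 -- matches CW's 25.73 within rounding
```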
Incidentally, mini-RMON is reporting delta figures, not totals. If the figures were totals they would be steadily incrementing, which they are not.
138. 7 May 2009 13:17:10 EDT  10.35  1023823  756661275   1973  267
139. 7 May 2009 13:22:10 EDT  51.5   3999924  3783661308  2115  277
140. 7 May 2009 13:27:10 EDT  25.16  2024947  1847503810  1996  276
141. 7 May 2009 13:32:10 EDT  4.78   499899   348781700   1912  267
This is the formula from the MiniRMON code according to the development team. I don't have that code, so I had to take their word for it. I'm going to try to get the full code and see for myself how this is being calculated.
By the way, when you take your delta time, make sure you're taking the difference in sysUpTime. Since sysUpTime is in TimeTicks (hundredths of a second), if you're taking time as a number of seconds, that would explain the factor-of-100 offset.
The delta time I am using is 300, since that is my sampling interval (5 minutes). Using this figure in the RMON-MIB equation results in 2 * the utilization% that mini-RMON reports.
The formula that I gave above is in effect the same as the formula in the RMON-MIB and in the CW documentation. Multiplying everything by 10 just makes everything make sense. Now I can see that it represents this:
Packet overhead + number of bits xmitted (and/or received)
-------------------------------------------------------------
Number of bits that could have been xmitted (and/or received)
The other thing about the MIB formula is that it uses "interval * 10,000" in the denominator. The 10,000 figure 1) assumes 10Mbit ethernet, 2) is further divided by 100 so that the result of the formula is a percentage figure (rather than multiplying the result by 100) and 3) is off from the interface speed by a factor of 10 as are the figures in the numerator of the formula.
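A quick check that the x10 rewrite is algebraically identical to the MIB's 10 Mbps formula (sample values are arbitrary; the only difference between the two forms is where the factors of 10 and 100 live):

```python
# MIB form: Pkts*(9.6+6.4) + Octets*0.8, over Interval*10,000 (already a %)
# Rewrite:  Pkts*(96+64) + Octets*8, over Interval*ifSpeed, times 100
for pkts, octets, interval in [(628431, 492629784, 300), (100, 1500, 1)]:
    mib = (pkts * (9.6 + 6.4) + octets * 0.8) / (interval * 10_000)
    rewritten = (pkts * (96 + 64) + octets * 8) / (interval * 10_000_000) * 100
    assert abs(mib - rewritten) < 1e-6  # same result, clearer units
```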
Yes, this equates to the same formula that MiniRMON is using (and is much clearer).
The counts being shown in CiscoView are, in fact, raw counters. They represent the values of etherHistoryPkts and etherHistoryOctets at the sample time. I was confused as to WHICH utilization we were talking about. So far, I have been looking at current stats utilization. However, you're looking at historical utilization.
The utilization for RMON history is obtained from the device from the etherHistoryUtilization object. Therefore, this may point to a device bug. I'm certainly seeing one on my 3560 which reports 0% utilization when the math begs to differ. On what device, and what version of code are you testing?
Yes, sorry I wasn't clear about that. I am looking at history, and the lines that I pasted into my posts are directly from that display.
This is a Cat6000 (yes, a 6000, not a 6500) running CatOS 6.4(21). That should be some pretty mature code I would think.
Incidentally, I did run across this description of utilization calculation in the CISCO-STACK-MIB. I was a little confused by the first statement since it seems that miniRMON divides the result by 2.
Ethernet Utilization: (If Full Duplex, multiply by 2)

                         I/O-pkts * (9.6 + 6.4) + (0.8 * I/O-Bytes)
10 Mbps  Ethernet Util = ------------------------------------------
                                    Interval * 10,000
Yep. I just checked the code, and internally, etherHistoryUtilization is calculated as (by the device):
etherHistoryUtilization = ((etherHistoryPkts * 8) + etherHistoryOctets * 8 * 100 * 100) / (historyControlInterval * ifSpeed * duplex)
Where duplex is 1 for half-duplex ports and 2 for full-duplex ports. This is because full-duplex links can both send and receive at ifSpeed.
MiniRMON is not doing any math on the history figures except to divide the etherHistoryUtilization number by 100 to get a percentage.
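As a sanity check, this sketch applies that device-side calculation to the history rows above, with one assumption flagged: the `etherHistoryPkts * 8` term in the quoted line may be a transcription slip, since using the RMON-MIB's 160 bit times of per-packet overhead instead makes the numbers line up with the display:

```python
def ether_history_utilization(pkts, octets, interval_s, if_speed, duplex):
    """Hundredths of a percent, as the device stores it; divide by 100 to display.

    Assumes 160 bits of per-packet overhead (96-bit IFG + 64-bit
    preamble/SFD) rather than the quoted "* 8"; duplex is 1 for
    half-duplex ports, 2 for full-duplex.
    """
    return (pkts * 160 + octets * 8) * 100 * 100 / (interval_s * if_speed * duplex)

# History rows 139 and 141 from above (100 Mbps full duplex, 300 s interval):
print(ether_history_utilization(3_999_924, 3_783_661_308, 300, 100_000_000, 2) / 100)
# ~51.52, vs 51.5 displayed
print(ether_history_utilization(499_899, 348_781_700, 300, 100_000_000, 2) / 100)
# ~4.78, vs 4.78 displayed
```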