I need some help to determine where packets are being dropped in the network for a new QoS configuration I've rolled out.
My test setup runs iperf from two source systems to a single target system. The two flows are marked at different priority levels and, sure enough, my lower-priority traffic gets dropped as expected, so all is looking good so far. What I can't tell is where the data is being dropped.
The data route traverses a 6509 (IOS), a 4507 (IOS) and a 3750, all with QoS enabled, queueing set up, and the appropriate service policies or trust states applied.
I've looked at all the counters along the route and I can see that my traffic is being dropped on the final leg by the queueing, so all good once again. The commands I used:
Cat4507 command  - show interface counter mod x
Cat6500 command  - show counters interface x/x
Cat3750 commands - show platform pm if-numbers
                 - show platform port-asic stats drop port x asic y
However, one thing that came to light is a mismatch between the tx rate at one end of a port channel and the rx rate at the other end. Anybody got any idea why the two don't look the same? I cleared the counters at both ends and left the test running for 10+ minutes, so I expected the 5 minute rates to match:
One end of the port channel:
5 minute input rate 3722000 bits/sec, 4271 packets/sec
5 minute output rate 127230000 bits/sec, 11332 packets/sec

Other end (the 4507R):
5 minute input rate 96120000 bits/sec, 8553 packets/sec
5 minute output rate 2544000 bits/sec, 3263 packets/sec
How come I'm missing 30-odd Mbps of rx traffic on the 4507R? Are the algorithms for calculating the rx rate different between the platforms, or something?
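For what it's worth, my understanding is that the 5 minute rates shown by "show interfaces" are exponentially decayed averages with a time constant equal to the load interval (300 s by default), not true totals, so right after a counter clear the displayed rate lags well behind the actual rate. The sample period and exact mechanics below are my assumptions, but a rough Python model of that behaviour:

```python
import math

def ios_load_average(true_rate_bps, load_interval=300, sample_period=5, duration=600):
    """Approximate model of IOS's exponentially weighted '5 minute rate'.

    Assumption: IOS resamples the byte counters every ~sample_period
    seconds and decays the running average with a time constant equal
    to the load interval. Starts from 0, i.e. counters just cleared.
    """
    avg = 0.0
    decay = math.exp(-sample_period / load_interval)
    for _ in range(duration // sample_period):
        avg = avg * decay + true_rate_bps * (1 - decay)
    return avg

# After 10 minutes (two time constants) the displayed rate has only
# reached 1 - e^-2, i.e. roughly 86% of the true steady-state rate.
shown = ios_load_average(127_230_000)
print(shown / 127_230_000)  # ~0.86
```

So if the two ends were cleared at different moments, or have different load intervals configured, the averages won't line up even with perfectly steady traffic. That said, I'm not certain this fully explains a 30 Mbps gap, hence the question.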