I am having a problem with a large number of output drops occurring on a GigabitEthernet interface.
Hardware info: Catalyst 4506 with SupIV running IOS 12.2(31)SG. Linecard is a WS-X4448-GB-RJ45.
The interface and the connected device are both configured for autonegotiation and are negotiating to 1000/full. I notice the output drops whenever utilization rises above a few hundred kbps. All of the drops are in Tx-Drops-Queue-2, and most of the outbound traffic goes through queue 2. Here are the interface counters (cleared 3 days ago):
Port            InBytes    InUcastPkts    InMcastPkts    InBcastPkts
Gi3/43       1664569971        3669717              0            287

Port           OutBytes   OutUcastPkts   OutMcastPkts   OutBcastPkts
Gi3/43      26552853364       30904440        2013536        2289256

Port          InPkts 64     OutPkts 64  InPkts 65-127 OutPkts 65-127
Gi3/43           719167       11786994        1168599        3725029

Port     InPkts 128-255 OutPkts 128-255 InPkts 256-511 OutPkts 256-511
Gi3/43           393734         1284840         315997         1674191

Port    InPkts 512-1023 OutPkts 512-1023
Gi3/43           306808           783957

Port   InPkts 1024-1522 OutPkts 1024-1522 InPkts 1523-1600 OutPkts 1523-1600
Gi3/43           765699          15952221                0                 0

Port   Tx-Bytes-Queue-1 Tx-Bytes-Queue-2 Tx-Bytes-Queue-3 Tx-Bytes-Queue-4
Gi3/43            20982      26414485135         69943042         68405045

Port   Tx-Drops-Queue-1 Tx-Drops-Queue-2 Tx-Drops-Queue-3 Tx-Drops-Queue-4
Gi3/43                0            94349                0                0

Port  Dbl-Drops-Queue-1 Dbl-Drops-Queue-2 Dbl-Drops-Queue-3 Dbl-Drops-Queue-4
Gi3/43                0                 0                 0                 0

Port     Rx-No-Pkt-Buff  RxPauseFrames  TxPauseFrames  PauseFramesDrop
Gi3/43                0              0              2                0
Any ideas for further troubleshooting are appreciated. Thank you.
Edit: I noticed that the formatting for the command output is not aligned. Is there a better way to paste this information?
I only recently set up some QoS configuration when we started using VoIP between locations, but it was applied only on those specific interfaces. This interface is not related to the VoIP infrastructure. Are you saying I should configure QoS for this interface?
This interface was previously negotiating to 100/Full because the connected device was hard-set to 100/Full. Now both sides are auto and are picking up 1000/Full as expected. There is no way to move traffic away from this link, as the connected device is our ERP system. I don't believe over-utilization is the problem: using SNMP monitoring, I have never seen the interface spike above 100 Mbps (which is now only 10% of the link).
I was seeing the same issue when the device was using 100/Full and I hoped that increasing the line speed would correct the problem. Is it possible that somehow the queues did not "readjust" after the interface started using 1000/Full?
I have used a packet sniffer after creating a SPAN session with this interface as the source. If the interface shows an output rate of, say, 500 pps, should my sniffer be receiving about the same? The sniffer received significantly fewer packets than the interface rate would suggest. For example, the 5-minute output rate is 143 pps, but my sniffer captured only 705 packets in 30 seconds (~23 pps).
I think this is related to a bursty traffic pattern. Bursts can cause output drops even at low average link utilization, because the drops happen in the milliseconds when the burst exceeds the queue's buffer, not over the 5-minute average. If you can shape this bursty traffic somewhere in your network, it could help.
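If there is an IOS router in the path that supports MQC, shaping could be applied there. A hedged sketch follows; the policy name and interface are hypothetical, the rate is only an example, and on the 4500 itself egress queue tuning uses tx-queue subcommands rather than MQC, so verify what your platform and software version actually support:

```
! Hypothetical MQC shaping policy on an upstream IOS router (not the 4500).
! Smooths bursts toward the ERP link; 100 Mbps is an example rate only.
policy-map SMOOTH-ERP-BURSTS
 class class-default
  shape average 100000000
!
interface GigabitEthernet0/1
 service-policy output SMOOTH-ERP-BURSTS
```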
By the way, on the WS-X4448-GB-RJ45 line card, every group of eight ports shares the same resources; it is an 8:1 oversubscribed architecture. Can you check whether any other port in the 3/41-48 group is also experiencing output drops? One thing you can try is to find an unused group of eight ports (1-8, 9-16, and so on) and move the server there to see if that helps.
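To check the rest of the group, the same command that produced your counter output can be run against each port sharing the ASIC:

```
! Check every port in the oversubscribed 3/41-48 group for Tx-Drops-Queue-N.
show interfaces gigabitEthernet 3/41 counters detail
show interfaces gigabitEthernet 3/42 counters detail
! ... repeat through gigabitEthernet 3/48
```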
I am not sure why SPAN did not capture all the packets. You can set "load-interval 30" under the interface so that "show interface" reports a 30-second average rate instead of the 5-minute average, then compare that with the SPAN capture.
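That change is just the load-interval interface subcommand:

```
interface GigabitEthernet3/43
 load-interval 30
!
! Then compare the 30-second rates against the sniffer's packet count:
show interfaces gigabitEthernet 3/43 | include rate
```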
When you enable QoS globally, QoS is enabled by default on the interface. You can verify this with "show qos interface gigabitEthernet 3/43".
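For completeness, the global state can be checked first and then the per-interface state:

```
! Global QoS state on the Catalyst 4500
show qos
! Per-interface QoS state, trust, and queue configuration
show qos interface gigabitEthernet 3/43
```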