07-17-2008 07:00 AM - edited 03-06-2019 12:15 AM
Hello,
I have several ports on a Cat6513 (Sup720) with a high value in the Total output drops counter.
I've seen that there is a bug documented for the Catalyst 3750G that explains this behaviour, but I don't know whether the same applies to the Catalyst 6513.
The port configuration is:
interface GigabitEthernet9/1
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 4,8,30-137,141,144-148,184,185,201-204,206-211
switchport trunk allowed vlan add 297,299,300,309-313,317,323,333,338,351,352
switchport trunk allowed vlan add 360,379-499,1018,1095,1102,1500-1516
switchport trunk allowed vlan add 1519-1521,1708-1710,1712,1716,1722,1725,1726
switchport trunk allowed vlan add 1728,1730-1739,1747,1798,3500,3504,3532,3537
switchport trunk allowed vlan add 3596,3600-3899,3930,3964-3979,3999-4030,4032
switchport trunk allowed vlan add 4033
switchport mode trunk
no ip address
channel-group 2 mode active
GigabitEthernet9/1 is up, line protocol is up (connected)
Hardware is C6k 1000Mb 802.3, address is 0009.11e2.da44 (bia 0009.11e2.da44)
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 255/255, txload 16/255, rxload 12/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, media type is SX
input flow-control is off, output flow-control is off
Clock mode is auto
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:48, output 00:00:40, output hang never
Last clearing of "show interface" counters 9w5d
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 42718965
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 50534000 bits/sec, 7917 packets/sec
5 minute output rate 62992000 bits/sec, 15032 packets/sec
22311122136 packets input, 14777324868038 bytes, 0 no buffer
Received 1537763244 broadcasts (1524004268 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 0 multicast, 0 pause input
0 input packets with dribble condition detected
42625310427 packets output, 25104597170119 bytes, 0 underruns
0 output errors, 0 collisions, 2 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out
The other ports in the same channel don't show the same value:
GigabitEthernet9/12 is up, line protocol is up (connected)
Hardware is C6k 1000Mb 802.3, address is 0009.11e2.da4f (bia 0009.11e2.da4f)
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 255/255, txload 12/255, rxload 10/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, media type is SX
input flow-control is off, output flow-control is off
Clock mode is auto
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:30, output 00:00:40, output hang never
Last clearing of "show interface" counters 4w5d
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 11702
Queueing strategy: fifo
Output queue: 0/40 (size/max)
30 second input rate 40751000 bits/sec, 7936 packets/sec
30 second output rate 50065000 bits/sec, 12920 packets/sec
11839072880 packets input, 7507886006536 bytes, 0 no buffer
Received 743727027 broadcasts (739524176 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 0 multicast, 0 pause input
0 input packets with dribble condition detected
23592763556 packets output, 13392934501997 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out
Does anyone have an idea what it could be?
thanks.
07-17-2008 08:25 AM
Hi,
The counters on Gi9/1 have been accumulating for 9w5d, whereas on Gi9/12 they were cleared 4w5d ago. Wouldn't it be better to "clear counters" on both interfaces at the same time, and to set "load-interval" to 30 seconds on both, before monitoring the utilization more closely?
Also, why do you think Gi9/1 shows 2 interface resets? Could it be because of a full output queue (interface resets can occur when packets queued for transmission are not sent within several seconds), or an interface shutdown?
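For reference, the suggestion above translates to something like the following on both channel members (a sketch; adjust the interface names to match your port-channel):
configure terminal
 interface GigabitEthernet9/1
  load-interval 30
 interface GigabitEthernet9/12
  load-interval 30
 end
clear counters GigabitEthernet9/1
clear counters GigabitEthernet9/12
With both counters cleared at the same moment and both interfaces reporting 30-second rates, the drop counts and load figures become directly comparable.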
07-17-2008 08:59 AM
Output drops are usually caused by insufficient bandwidth, or by no buffer being available at the receiving side (the other end of the connection), possibly due to flow control.
The effect you see occurs because there is a lot of traffic to be sent and no bandwidth available to send it on, so it times out or the transmitter runs out of buffers.
Good Luck
Scott
07-18-2008 01:48 AM
If there is insufficient bandwidth, why does the interface show full reliability:
reliability 255/255, txload 12/255, rxload 10/255
and an empty output queue:
Output queue: 0/40 (size/max) ?
07-18-2008 02:14 AM
I advise you to open a service request with Cisco TAC.
09-12-2008 01:15 AM
What type of linecard is it that has the output drops? Please post the output of "show module".
09-12-2008 01:35 AM
Here is the output of show inventory instead:
show inventory:
NAME: "WS-C6513", DESCR: "Cisco Systems Catalyst 6500 13-slot Chassis System"
PID: WS-C6513 , VID: , SN:
NAME: "WS-C6K-VTT 1", DESCR: "VTT FRU 1"
PID: WS-C6K-VTT , VID: , SN:
NAME: "WS-C6K-VTT 2", DESCR: "VTT FRU 2"
PID: WS-C6K-VTT , VID: , SN:
NAME: "WS-C6K-VTT 3", DESCR: "VTT FRU 3"
PID: WS-C6K-VTT , VID: , SN:
NAME: "WS-C6513-CL 1", DESCR: "CXXXX Clock FRU 1"
PID: WS-C6513-CL , VID: , SN:
NAME: "WS-C6513-CL 2", DESCR: "CXXXX Clock FRU 2"
PID: WS-C6513-CL , VID: , SN:
NAME: "1", DESCR: "WS-X6548-RJ-45 SFM-capable 48-port 10/100 Mbps RJ45 Rev. 5.2"
PID: WS-X6548-RJ-45 , VID: , SN:
NAME: "2", DESCR: "WS-X6548-RJ-45 SFM-capable 48-port 10/100 Mbps RJ45 Rev. 5.2"
PID: WS-X6548-RJ-45 , VID: , SN:
NAME: "3", DESCR: "WS-SVC-SSL-1 1 ports SSL Module Rev. 3.2"
PID: WS-SVC-SSL-1 , VID: , SN:
NAME: "4", DESCR: "WS-X6066-SLB-APC 4 ports SLB Application Processor Complex Rev. 1.8"
PID: WS-X6066-SLB-APC , VID: , SN:
NAME: "5", DESCR: "WS-SVC-FWM-1 6 ports Firewall Module Rev. 4.0"
PID: WS-SVC-FWM-1 , VID: V02, SN:
NAME: "6", DESCR: "WS-SVC-FWM-1 6 ports Firewall Module Rev. 4.0"
PID: WS-SVC-FWM-1 , VID: V02, SN:
NAME: "7", DESCR: "WS-SUP720-3B 2 ports Supervisor Engine 720 Rev. 4.4"
PID: WS-SUP720-3B , VID: , SN:
NAME: "msfc sub-module of 7", DESCR: "WS-SUP720 MSFC3 Daughterboard Rev. 2.3"
PID: WS-SUP720 , VID: , SN:
NAME: "switching engine sub-module of 7", DESCR: "WS-F6K-PFC3B Policy Feature Card 3 Rev. 2.1"
PID: WS-F6K-PFC3B , VID: , SN:
NAME: "8", DESCR: "WS-SUP720-3B 2 ports Supervisor Engine 720 Rev. 4.4"
PID: WS-SUP720-3B , VID: , SN:
NAME: "msfc sub-module of 8", DESCR: "WS-SUP720 MSFC3 Daughterboard Rev. 2.3"
PID: WS-SUP720 , VID: , SN:
NAME: "switching engine sub-module of 8", DESCR: "WS-F6K-PFC3B Policy Feature Card 3 Rev. 2.1"
PID: WS-F6K-PFC3B , VID: , SN:
NAME: "9", DESCR: "WS-X6516-GBIC SFM-capable 16 port 1000mb GBIC Rev. 5.7"
PID: WS-X6516-GBIC , VID: , SN:
NAME: "10", DESCR: "WS-X6516-GBIC SFM-capable 16 port 1000mb GBIC Rev. 5.5"
PID: WS-X6516-GBIC , VID: , SN:
NAME: "switching engine sub-module of 10", DESCR: "WS-F6K-DFC3B Distributed Forwarding Card 3 Rev. 2.3"
PID: WS-F6K-DFC3B , VID: V01, SN:
NAME: "11", DESCR: "WS-X6748-GE-TX CEF720 48 port 10/100/1000mb Ethernet Rev. 2.3"
PID: WS-X6748-GE-TX , VID: V01, SN:
NAME: "switching engine sub-module of 11", DESCR: "WS-F6700-CFC Centralized Forwarding Card Rev. 2.0"
PID: WS-F6700-CFC , VID: , SN:
NAME: "12", DESCR: "WS-X6516-GE-TX SFM-capable 16 port 10/100/1000mb RJ45 Rev. 2.5"
PID: WS-X6516-GE-TX , VID: , SN:
NAME: "switching engine sub-module of 12", DESCR: "WS-F6K-DFC3B Distributed Forwarding Card 3 Rev. 2.3"
PID: WS-F6K-DFC3B , VID: V01, SN:
NAME: "13", DESCR: "WS-X6748-GE-TX CEF720 48 port 10/100/1000mb Ethernet Rev. 2.3"
PID: WS-X6748-GE-TX , VID: V01, SN:
NAME: "switching engine sub-module of 13", DESCR: "WS-F6700-CFC Centralized Forwarding Card Rev. 2.0"
PID: WS-F6700-CFC , VID: , SN:
NAME: "PS 1 WS-CAC-4000W-INT", DESCR: "220v AC power supply, 4000 watt 1"
PID: WS-CAC-4000W-INT , VID: V01, SN:
NAME: "PS 2 WS-CAC-4000W-INT", DESCR: "220v AC power supply, 4000 watt 2"
PID: WS-CAC-4000W-INT , VID: V01, SN:
09-12-2008 03:50 AM
As Scott notes, these could be normal congestion drops. If my math is correct, although you have about 42 million drops, that's against roughly 42 billion output packets: a drop rate of about 0.1%, which shouldn't be of too much concern.
Since it's a gigabit interface (on a LAN?), allowing for a latency of about 1 ms, you could fairly safely double the output queue depth (i.e. to 80). This might reduce your percentage of drops; then again, it might not.
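As a quick sanity check, the drop-rate arithmetic above can be verified directly from the counters in the Gi9/1 "show interface" output posted earlier:

```python
# Counters taken from the "show interface GigabitEthernet9/1" output above
total_output_drops = 42_718_965
packets_output = 42_625_310_427

drop_rate_pct = total_output_drops / packets_output * 100
print(f"drop rate: {drop_rate_pct:.3f}%")  # ~0.100%
```

So roughly one packet in a thousand is being dropped on output, consistent with occasional congestion rather than a hardware fault.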
PS:
Many think drops are bad. Drops can be bad, but they can also be quite normal. A drop is the common signal that TCP stacks use to determine the end-to-end available bandwidth: many TCP stacks will keep increasing their sending rate until they see drops. (NB: TCP can also stop increasing its rate under other conditions.)
09-18-2008 02:46 AM
Your comment is very interesting. I thought about attaching an analyzer to determine the source of the traffic that causes these drops (Cisco TAC told me the drops are due to traffic peaks), but it is not possible ...
Is there any document that specifies the drop percentages above which you should start to worry?
09-18-2008 04:01 AM
I don't recall seeing a single document that lists acceptable drop rates for all the different protocols; the impact of drops varies greatly between protocols. As a general rule, TCP handles drop rates up to about 1% quite well. (TCP can deliver data at much, much higher drop rates, but usually at a much lower transfer rate and perhaps with many retransmissions, which may add to the congestion.)
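The relationship between loss rate and TCP transfer rate can be made concrete with the well-known Mathis steady-state approximation, throughput ≈ (MSS/RTT) · C/√p with C ≈ √(3/2). A sketch (the 1 ms RTT is an assumed LAN value, not something from this thread):

```python
import math

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate, c=math.sqrt(1.5)):
    """Approximate steady-state TCP throughput (Mathis model) in bits/sec."""
    return (mss_bytes * 8 * c) / (rtt_s * math.sqrt(loss_rate))

# Assumed values: 1460-byte MSS, 1 ms LAN round-trip time
for p in (0.0001, 0.001, 0.01):
    bps = mathis_throughput_bps(1460, 0.001, p)
    print(f"loss {p:.2%}: ~{bps / 1e6:.0f} Mbit/s")
```

Note how throughput falls with the square root of the loss rate: going from 0.1% to 1% loss roughly cuts the achievable rate by a factor of √10, which is why the ~0.1% seen on Gi9/1 is usually tolerable.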