03-11-2009 01:27 PM - edited 03-06-2019 04:31 AM
I am working with a customer and we are seeing a number of switches showing a large volume of output drops. This customer currently uses the 4948 series switch for their storage network, so these are the back-end connections between the servers and storage devices. They are on the same VLAN, and these switches are dedicated to this purpose only.
Can anyone lend a hand as to why we would see output drops on some interfaces but not all interfaces? Is this 'normal' on a storage network?
Here is a sample of a show interface on one of the switches.
W4948-04#sh int g1/17
GigabitEthernet1/17 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet Port, address is 0019.e72a.7a90 (bia 0019.e72a.7a90)
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 255/255, txload 11/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, link type is auto, media type is 10/100/1000-TX
input flow-control is on, output flow-control is off
ARP type: ARPA, ARP Timeout 04:00:00
Last input never, output never, output hang never
Last clearing of "show interface" counters 02:09:31
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 24970
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 44172000 bits/sec, 5042 packets/sec
0 packets input, 12535104 bytes, 0 no buffer
Received 0 broadcasts (0 multicast)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 input packets with dribble condition detected
39354545 packets output, 41188424424 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier
0 output buffer failures, 0 output buffers swapped out
W4948-04#
Any and all help is appreciated.
03-11-2009 02:37 PM
Hello,
I see you only have flow-control active for input traffic. I would suggest activating it for output traffic too.
Flow control allows the switch to instruct the server to transmit at a slower rate to avoid packet drops.
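As a rough sketch, the interface configuration would look something like the lines below. Command support varies by platform and IOS version (on some Catalyst switches only the receive direction is configurable, and the server NIC must also honor pause frames), so verify against your 4948's software first:

```
! Hedged sketch - verify these options exist on your platform/IOS version.
! "send on" lets the switch pause the server; "receive on" honors pauses
! from the server. Only the receive direction may be configurable.
W4948-04(config)# interface GigabitEthernet1/17
W4948-04(config-if)# flowcontrol receive on
W4948-04(config-if)# flowcontrol send on
W4948-04(config-if)# end
W4948-04# show interfaces g1/17 flowcontrol
```

The final show command confirms the negotiated state on both directions of the link.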
03-11-2009 05:40 PM
"Can anyone lend a hand as to why we would see output drops on some interfaces but not all interfaces? Is this 'normal' on a storage network? "
Most likely reason: transient congestion. Much depends on where the traffic is flowing and when. Consider three ports of the same bandwidth: if two ports send a burst of traffic toward one port, that port will queue packets; when there are too many to queue, the excess packets are dropped.
Jorge's suggestion of flow-control might help (depends on each end of link supporting it - depends on interswitch connections too).
Your output queue is only 40 packets. Increasing it to perhaps better handle gig speed's BDP would likely decrease the drop percentage.
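A hedged sketch of how you might raise that queue depth is below. Note this is an assumption on my part: the hold-queue command tunes the software output queue, and whether it changes drop behavior on hardware-switched ports depends on the platform, so test on a non-critical port first:

```
! Hedged sketch - hold-queue adjusts the software output queue;
! effect on hardware-forwarded traffic varies by platform.
W4948-04(config)# interface GigabitEthernet1/17
W4948-04(config-if)# hold-queue 200 out
W4948-04(config-if)# end
W4948-04# show interfaces g1/17 | include Output queue
```

After the change, "Output queue: 0/200" in the show output would confirm the new depth, and you can watch whether the drop counter grows more slowly.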
Your drop percentage (if my math is correct) is only about .063% (24,970 drops out of 39,354,545 packets output), which shouldn't be much of an issue.
03-12-2009 08:27 AM
Joseph / Jorge,
Thank you guys for getting back to me. I appreciate your insight.
I am looking at both of your suggestions.
Joseph, I do agree that the drop percentage is extremely small and should not impact production.
03-12-2009 08:50 AM
I manage the network for many customers' data centers, and in about 80% of them I see output drops on the server ports. It does not matter whether the server is connected directly to a Gigabit switch, an Ethernet switch, or an HP blade switch.
What I can say is that this small drop rate does not affect performance at all.