input queue drops on Gigabit interface

Answered Question
Jun 21st, 2010

I am getting input queue drops on a gigabit interface with overrun errors on my 6500. There are no flushes.


CORE#sh int gi 1/39
GigabitEthernet1/39 is up, line protocol is up (connected)
  Hardware is C6k 1000Mb 802.3, address is 0005.7444.5482 (bia 0005.7444.5482)
  Description: Connected to VMWare

  MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
     reliability 255/255, txload 3/255, rxload 93/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 1000Mb/s, media type is 10/100/1000BaseT
  input flow-control is off, output flow-control is off
  Clock mode is auto
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 6d19h, output 00:00:27, output hang never
  Last clearing of "show interface" counters 3w4d
  Input queue: 0/2000/801093/0 (size/max/drops/flushes); Total output drops: 11540
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 368487000 bits/sec, 33005 packets/sec
  5 minute output rate 13371000 bits/sec, 11604 packets/sec
     933309234 packets input, 1242917076535 bytes, 0 no buffer
     Received 274849 broadcasts (49043 multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 801093 overrun, 0 ignored
     0 watchdog, 0 multicast, 0 pause input
     0 input packets with dribble condition detected
     543871433 packets output, 97345861601 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 PAUSE output
     0 output buffer failures, 0 output buffers swapped out


The interface configuration is

interface GigabitEthernet1/39
description Connected to VMWare
switchport
switchport trunk allowed vlan 48,52,53,55,60,79,80,208-218
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
spanning-tree bpduguard enable
!


By default the max input queue size is 2000. I increased it to 3500 using the 'hold-queue 3500 in' command, but the input queue drops are still happening.
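
For reference, the change amounts to this interface configuration (using the command quoted above):

interface GigabitEthernet1/39
 hold-queue 3500 in
!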


How can I fix this?


Thanks in advance.

Ganesh Hariharan Mon, 06/21/2010 - 22:56


Hi,

The output drop rate on the specified interface is around 0.00212%.
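
For reference, that rate comes straight from the counters above:

  11540 total output drops / 543871433 packets output ≈ 0.0000212, i.e. about 0.00212 %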

To eliminate them, try the following:


1. Minimize periodic broadcast traffic, such as routing and Service Advertising Protocol (SAP) updates (if applicable), by using access lists or by other means.
2. Turn off fast switching for heavily used protocols. For example, turn off IP fast switching by using the 'no ip route-cache' interface configuration command (see the sketch below).
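
For illustration only, item 2 would look like this on a hypothetical routed interface (GigabitEthernet2/1 is a made-up example; as the replies below point out, this is not recommended, and it does not apply to a 6500 switchport):

interface GigabitEthernet2/1
 no ip route-cache
!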


Paste the output of 'show buffers' so we can see whether buffer failures are occurring.

Hope to Help !!


Ganesh.H

paolo bevilacqua Mon, 06/21/2010 - 23:17

2. Turn off fast switching for heavily used protocols. For example, turn off IP fast switching by using the 'no ip route-cache' interface configuration command.


That is a very bad and ineffective recommendation that should never be followed. Besides, it is not even supported on a 6500.

Hitesh Vinzoda Tue, 06/22/2010 - 00:17

Hi,


I think if we do this, the traffic would be process switched and every packet would be punted to the CPU. Am I right?


Hitesh Vinzoda

jennyjohn Wed, 06/23/2010 - 00:45

Hi Ganesh,


As I understand it, input queue drops generally occur when a packet is process-switched, so disabling CEF or fast switching doesn't sound like a good idea.

Hitesh Vinzoda Tue, 06/22/2010 - 00:26

Hi,


You are having output drops, not input drops, so increasing the queue size is not a remedy. This symptom appears when the output queue is full. It can be the result of traffic from multiple inbound links being switched to a single outbound link, or of a large amount of bursty traffic arriving on a gigabit interface and being switched out to a 100 Mbps interface.


Your output shows that you are running 100 Mbps on that interface, while the interface is a GigE interface. If possible, make the device on the GigE interface work at 1 Gbps.


Also, monitor this port with tools like MRTG or PRTG to check interface utilization for any bandwidth bottlenecks.


HTH


Hitesh Vinzoda


Pls rate useful posts


Correct Answer
Giuseppe Larosa Tue, 06/22/2010 - 05:35

Hello Jenny,


>> Input queue: 0/2000/801093/0 (size/max/drops/flushes); Total output drops: 11540

>> 543871433 packets output, 97345861601 bytes, 0 underruns


>> Full-duplex, 1000Mb/s, media type is 10/100/1000BaseT


As noted by Hitesh, you have output drops, but they are not very big in comparison to the total output packets: 11540 / 543871433 ≈ 2.1218e-5, which is practically negligible.

This is not an issue with hold queues and not an inbound issue.


The issue may come from oversubscription of the linecard. If this is a 6148 linecard, for example, you may have an 8:1 oversubscription ratio.


The point is that the performance of a group of N ports is limited, so even if a specific port is not running at full rate, it can face output drops when the total use of the ports that share the same hardware resource (ASIC) has reached a threshold.


We faced the same issue on 4548 linecards on the C4500.


If you have a linecard older than the 6748 you can face these issues (even with the 6748, though there the oversubscription should be minimal). Different generations of Ethernet linecards have different oversubscription ratios.
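
As a worked example of what such a ratio means (the 8-ports-per-ASIC grouping here is an assumption for illustration; check the datasheet for your card):

  8 ports x 1 Gbps offered / 1 Gbps shared ASIC uplink = 8:1 oversubscription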


Ganesh: a C6500 can only work in CEF mode, because it is a multilayer switch with CEF-based MLS.
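
If you want to confirm that CEF is in use, a quick check (a sketch; the exact output varies by supervisor and IOS version):

CORE#show ip cef summary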

Hitesh: unless the post has been changed by the original poster, I see

>> Full-duplex, 1000Mb/s, media type is 10/100/1000BaseT


Hope to help

Giuseppe

jennyjohn Wed, 06/23/2010 - 00:44

Hello Giuseppe,


    I don't know how your answer got selected as the correct answer.


    I do see input drops as well as output drops. True, the output drops seem negligible.


    The interface is on a WS-X6548-GE-TX module, and I see a lot of input/output queue drops on many ports on the card. It is true that, with the backplane on a 6500 providing a bandwidth of only 40 Gbps per slot, there will be oversubscription on the WS-X6548-GE-TX module. But I have around 13 unused interfaces on the card, so would there still be oversubscription? Shouldn't I be getting 1 Gig on each port?
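
One way to check whether the shared hardware is actually saturated (a sketch; 'show fabric utilization' applies to fabric-enabled linecards such as the WS-X6548-GE-TX, and the exact syntax may vary by IOS version):

CORE#show fabric utilization all
CORE#show interfaces gi 1/39 counters errors

The first command shows the per-slot fabric channel load; the second shows per-port error counters.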

   

Then is there no solution for this problem?
