
2960 buffers (drops+packet loss due to micro-bursts)

johnelliot6
Level 2

Have a pair of 2960s in a stack; one port (a trunk) connects to another DC, and we are seeing ~5% packet loss and large output drops towards that DC.



#sh interfaces gigabitEthernet 1/0/17 counters errors



Port        Align-Err     FCS-Err    Xmit-Err     Rcv-Err  UnderSize  OutDiscards


Gi1/0/17            0           0           0           0          0       182867



GigabitEthernet1/0/17 is up, line protocol is up (connected)

  Hardware is Gigabit Ethernet, address is a0cf.5b87.ec11 (bia a0cf.5b87.ec11)

  Description: QinQ_to_DC2

  MTU 1998 bytes, BW 100000 Kbit, DLY 100 usec,

     reliability 255/255, txload 41/255, rxload 23/255

  Encapsulation ARPA, loopback not set

  Keepalive set (10 sec)

  Full-duplex, 100Mb/s, media type is 10/100/1000BaseTX

  input flow-control is off, output flow-control is unsupported

  ARP type: ARPA, ARP Timeout 04:00:00

  Last input 6d13h, output 00:00:00, output hang never

  Last clearing of "show interface" counters 04:02:15

  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 183592

  Queueing strategy: fifo

  Output queue: 0/40 (size/max)

  30 second input rate 9047000 bits/sec, 2075 packets/sec

  30 second output rate 16324000 bits/sec, 2309 packets/sec




As you can see, the 30-second rate isn't excessive, but since the drops are all output discards it would appear we are getting hit by the small-buffers/microburst issue (the port only drains at 100Mb/s, so a brief burst from a faster source upstream can overflow the egress buffers even while the 30-second average stays low).

Gi1/0/17 is mapped to asic 0/20



Gi1/0/17  17   17   17   0/20 1    17   17   local     Yes     Yes



Port-asic Port Drop Statistics - Summary


========================================



Port 20 TxQueue Drop Stats: 308277833



And the majority appear to be in Queue 1:



Port 20 TxQueue Drop Statistics

    Queue 0

      Weight 0 Frames 3

      Weight 1 Frames 0

      Weight 2 Frames 0

    Queue 1

      Weight 0 Frames 308240408

      Weight 1 Frames 458

      Weight 2 Frames 0

    Queue 2

      Weight 0 Frames 37898

      Weight 1 Frames 0

      Weight 2 Frames 0

    Queue 3

      Weight 0 Frames 91

      Weight 1 Frames 0

      Weight 2 Frames 0

    Queue 4

      Weight 0 Frames 0

      Weight 1 Frames 0

      Weight 2 Frames 0

    Queue 5

      Weight 0 Frames 0

      Weight 1 Frames 0

      Weight 2 Frames 0

    Queue 6

      Weight 0 Frames 0

      Weight 1 Frames 0

      Weight 2 Frames 0

    Queue 7

      Weight 0 Frames 0

      Weight 1 Frames 0

      Weight 2 Frames 0
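
If I'm reading the numbering right, these ASIC drop stats are 0-based while the "mls qos queue-set output" configuration is 1-based, so I'll cross-check which software queue is actually taking the drops with something like:

#show mls qos interface gigabitEthernet 1/0/17 statistics

which should show the enqueue/drop counts per egress queue and threshold.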




I've done a bit of research, and as we have mls qos configured (we have some ssh/rdp policies in place on access ports), it looks like we need to tweak the buffer allocations on the switch to hopefully mitigate (reduce) these drops.
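
Before changing anything, the plan is to capture the current allocation as a baseline with something along the lines of:

#show mls qos interface gigabitEthernet 1/0/17 buffers
#show mls qos queue-set 1

(the first command shows which queue-set the port belongs to - I'm assuming the default, queue-set 1).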



There appears to be a range of recommendations when it comes to these tweaks. Hoping someone has suggestions on what to set with "mls qos queue-set output" to alleviate the drops (the idea being to start conservative and only get more aggressive if needed). Also, does adjusting the buffers require an outage window?



Our traffic is primarily backup (replication, which is very bursty) and Internet.
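
Purely for illustration - the numbers below are just a commonly quoted starting point, not something I've validated for our traffic, and they assume the drops land in software queue 2 (queue-set numbering being 1-based, ASIC queue 1 above would be queue 2 there) - I was thinking of something along these lines in global config:

mls qos queue-set output 1 buffers 15 40 25 20
mls qos queue-set output 1 threshold 2 3100 3100 100 3200

i.e. give queue 2 a larger slice of the buffer pool and let it borrow more from the common pool, then watch the drop counters before getting any more aggressive. Happy to be corrected on any of that.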



Thanks in advance.

15 Replies

paolo bevilacqua
Hall of Fame

Try removing any and all mls and qos commands.

In my experience that leaves things working smooth and solid, fully using the buffer resources, without the headache of complex trial-and-error configuration.
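
For example - assuming nothing actually relies on the markings or on those ssh/rdp policies - something along the lines of:

conf t
 no mls qos
end

together with removing any per-interface mls qos / service-policy lines; "show mls qos" should then report QoS as disabled, and as I understand it the port is no longer subject to the per-queue buffer carve-up.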

Let us know.
