Excessive ASIC drops when QoS is enabled on 3750

Unanswered Question
Aug 20th, 2008

Have any of you experienced high ASIC drops when QoS is enabled on a 3750?


We have a 3750 stack acting as the core, which connects to a call-recording machine. All of the Voice VLAN traffic is SPANned to the port connected to the server.


When I enable QoS on the switch, I see a lot of ASIC drops. Below are two outputs taken within a span of 2 seconds.


This results in voice cracks.


Port-asic Port Drop Statistics - Summary

========================================

RxQueue 0 Drop Stats: 0

RxQueue 1 Drop Stats: 0

RxQueue 2 Drop Stats: 0

RxQueue 3 Drop Stats: 0


Port 0 TxQueue Drop Stats: 0

Port 1 TxQueue Drop Stats: 199

Port 2 TxQueue Drop Stats: 1112775020

Port 3 TxQueue Drop Stats: 32969


Port-asic Port Drop Statistics - Summary

========================================

RxQueue 0 Drop Stats: 0

RxQueue 1 Drop Stats: 0

RxQueue 2 Drop Stats: 0

RxQueue 3 Drop Stats: 0


Port 0 TxQueue Drop Stats: 0

Port 1 TxQueue Drop Stats: 199

Port 2 TxQueue Drop Stats: 1112937066

Port 3 TxQueue Drop Stats: 32969


When I disable QoS, there are no drops at all. I tried to increase the queue depth and buffers for this port, but it seems to make no difference:


mls qos queue-set output 2 buffers 93 5 1 1

mls qos queue-set output 2 threshold 1 400 400 100 400

mls qos queue-set output 2 threshold 2 400 400 100 400

int gigabitEthernet 2/0/20

queue-set 2
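
For reference, the resulting allocation and the per-port counters can be checked directly on the switch. This is only a sketch; I'm assuming the standard 3750 show commands here, and the exact output format varies by IOS release:

```
! Check the buffer and threshold allocation of queue-set 2
show mls qos queue-set 2

! Check the buffers and drop counters on the SPAN destination port
show mls qos interface gigabitEthernet 2/0/20 buffers
show mls qos interface gigabitEthernet 2/0/20 statistics
```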


Not sure whether my configuration is at fault or whether this is a problem with the Cisco code.


Narayan


andrew.butterworth Fri, 08/22/2008 - 07:45

I believe you are seeing the default behaviour - aggressive drops due to the queue thresholds. This has been posted many times and comes down to 'bug' CSCsc96037. Cisco modified the code to allow more of the common-pool buffers to be used by ports in IOS 12.2(25)SEE1 and later.


Your 'mls qos queue-set output threshold xxxx' parameters can be altered to solve this.


HTH


Andy

royalblues Fri, 08/22/2008 - 12:01

Andy


I am using 12.2(25)SEE3 IP Services but still seem to be hitting the bug.


Configuring "mls qos queue-set output 2 threshold 1 3200 3200 100 3200" did reduce the drops, but I think I need to do it for all the thresholds.


Narayan


andrew.butterworth Fri, 08/22/2008 - 13:38

This is what I currently use as a template when deploying QoS on these switches:


mls qos queue-set output 1 threshold 1 800 800 50 3200

mls qos queue-set output 1 threshold 2 560 640 100 800

mls qos queue-set output 1 threshold 3 800 800 50 3200

mls qos queue-set output 1 threshold 4 320 800 100 800
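
As I understand the parameters (double-check against the configuration guide for your release), the four numbers after the queue ID break down like this:

```
! mls qos queue-set output <set> threshold <queue-id> <drop-th1> <drop-th2> <reserved> <maximum>
!   drop-th1, drop-th2 : WTD drop thresholds, as a percentage of the queue's allocated buffers
!   reserved           : percentage of the allocated buffers guaranteed to this queue
!   maximum            : upper limit the queue can grow to by borrowing from the common pool
!
! e.g. queue 1 below starts dropping at 800% of its allocation and can borrow up to 3200%:
mls qos queue-set output 1 threshold 1 800 800 50 3200
```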


Obviously there are changes to the default DSCP-to-Queue and CoS-to-Queue mappings as well.


If changing that one line fixed some of the issues, then you need to look at the traffic and understand which queues are being used.

The one line you posted changes queue 1 of queue-set 2 to use more of the common-pool buffers; the other three queues won't benefit.


HTH


Andy

royalblues Sat, 08/23/2008 - 04:47

Thanks Andy


I will remember the template


The following seems to work for me now:

mls qos queue-set output 2 threshold 3 800 800 50 3200

mls qos queue-set output 2 threshold 4 900 900 100 3200


We are going to add about 400 more users, at which point I may need to tune these again.


Narayan



Pavel Bykov Mon, 04/27/2009 - 06:50

Andrew, do you know what exactly this does in practice?

For example, "mls qos queue-set output 1 threshold 2 560 640 100 800" will not allocate 800% of the buffers, since drop threshold 2 and drop threshold 3 will max out at 640% of the queue.


And on top of that, 640% will not be reached unless you are forcibly deallocating buffers from some other ports into the common pool, using another queue-set.

bonnardopjl Mon, 07/19/2010 - 09:00
Hi, royalblues!


I think the issue is due to strange default setting of the srr shaper.

Try this one on the egress interface :

srr-queue bandwidth shape 0 0 0 0


Explanation: the default shaping parameters for an interface are:

Shaped queue weights (absolute): 25 0 0 0

Queue 1 is shaped to 1/25th of the bandwidth, in other words 4% of the bandwidth, which is not convenient.

It is necessary to override this value, which is done with "srr-queue bandwidth shape 0 0 0 0".

Since the traffic was mostly real-time traffic, it goes into Q1!
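
You can verify the shaper weights before and after the change with the queueing output for the interface (the field names may vary slightly between releases):

```
show mls qos interface gigabitEthernet 2/0/20 queueing
! Before the change, look for the default:
!   Shaped queue weights (absolute):  25 0 0 0
! After "srr-queue bandwidth shape 0 0 0 0", shaping is disabled on all queues:
!   Shaped queue weights (absolute):  0 0 0 0
```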
