Our main Avaya PBX MedPro LAN cards are connected to a 3750 switch at 100 Mb/s full duplex, running 12.2(25)SEA. This PBX will have a maximum of 300 G.711 calls going through the MedPros, and I've configured Auto QoS and mls qos as shown below on the ports the MedPros are connected to. We ran into trouble with the default Auto QoS parameters when a MedPro gets above 100 calls, and we found out that the port was shaping at 10 Mb/s.
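For what it's worth, a quick back-of-the-envelope check (my own numbers, assuming standard 20 ms G.711 packetization and typical IP/UDP/RTP plus Ethernet framing; actual overhead varies) shows why trouble would start just above 100 calls against a 10 Mb/s shaper:

```python
# Hedged estimate of G.711 per-call bandwidth on the wire.
CODEC_BPS = 64_000          # G.711 payload rate
PPS = 50                    # packets/sec at 20 ms packetization
OVERHEAD_BYTES = 40 + 18    # IP/UDP/RTP (40) + Ethernet header/FCS (18)

per_call_bps = CODEC_BPS + PPS * OVERHEAD_BYTES * 8   # ~87.2 kb/s per call

# How many calls fit before a 10 Mb/s shaper saturates:
calls_at_10mbps = 10_000_000 // per_call_bps

print(per_call_bps, calls_at_10mbps)   # roughly 87200 bps, ~114 calls
```

That puts the 10 Mb/s shaping ceiling at roughly 114 simultaneous calls, which lines up with the drops appearing above 100 calls.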
We added 'priority-queue out' on the MedPro interfaces, which really helped, but we're still having problems, which I think are related to the default input queue parameters. The problem is randomly dropped packets as the call load increases.
I tweaked the following global mls input-queue parameters with some good results, but am still experiencing dropped packets.
The challenge: what would be the best mls input/output queue parameters for a 3750 switch that is expected to handle an average of 10 Mb/s of G.711 traffic per interface, up to 30 Mb/s per interface? (Unfortunately I cannot use another switch, although I do have a 4506 on order.)
Here is the config on the MedPro interfaces, along with the global mls statements that I changed - see the lines marked with *. I moved DSCP 46 to input queue 2 threshold 2 and gave more bandwidth and buffers to queue 2, but I need to do something more with this. I almost feel like turning off Auto QoS completely... would that be a possibility?
mls qos map cos-dscp 0 8 16 26 32 46 48 56
*mls qos srr-queue input bandwidth 50 50
mls qos srr-queue input threshold 1 8 16
*mls qos srr-queue input threshold 2 34 100
*mls qos srr-queue input buffers 33 67
mls qos srr-queue input dscp-map queue 1 threshold 2 9 10 11 12 13 14 15
mls qos srr-queue input dscp-map queue 1 threshold 3 0 1 2 3 4 5 6 7
mls qos srr-queue input dscp-map queue 1 threshold 3 32
mls qos srr-queue input dscp-map queue 2 threshold 1 16 17 18 19 20 21 22 23
*mls qos srr-queue input dscp-map queue 2 threshold 2 33 34 35 36 37 38 39 46
switchport access vlan 126
switchport mode access
srr-queue bandwidth share 10 10 60 20
srr-queue bandwidth shape 10 0 0 0
mls qos trust dscp
power inline never
auto qos voip trust
I think the srr-queue commands on the interface apply to the output queues, and queue 1 is shaped to 10 Mb/s. You are using DSCP 46, which by default is sent via queue 1, and you have also configured 'priority-queue out' so that queue 1 is a priority queue.
Just change the command to "srr-queue bandwidth shape 30 0 0 0" so that your maximum limit is achieved.
I hope only your voice applications are set to DSCP 46; note that any other applications marked DSCP 40 to 47 may also end up in queue 1.
Please let me know if your question was different.
I agree with the comments above. However, it's worth pointing out that the input queues are rarely the source of a problem. Basically they hold packets which are waiting to be switched to an outgoing interface, so for packets to queue on ingress the switch fabric would need to be overloaded, and that rarely happens even if you try to make it happen with packet generators.
You absolutely can turn off autoqos - in my experience once you reach the point where you know why autoqos isn't doing what you need, you are far better off reading the QOS section of the switch manual and configuring it yourself... typically when doing this you will modify the srr statements on the ports, the queue buffer/threshold settings, and the DSCP-output-q tables.
You will need to modify the output scheduling as suggested on the port to the medpro. If you are using the gig uplinks then 10% default should be OK unless you have a lot of other voice kit on the other ports.
Thanks. Once I get the interface output queues set correctly, would you recommend I put the input queue parameters back to default, since as you say the input queues only have problems when the switch fabric is overloaded?
Thanks for your help!
Regarding your comment...
'Just change this " command srr-queue bandwidth shape 30 0 0 0 " -so that your maximum limit is achieved.'
...should it be 3 0 0 0 instead? This is how the configuration guide explains the command:
"This example shows how to configure the weight ratio of the SRR scheduler running on an egress port. Four queues are used, and the bandwidth ratio allocated for each queue in shared mode is 1/(1+2+3+4), 2/(1+2+3+4), 3/(1+2+3+4), and 4/(1+2+3+4), which is 10 percent, 20 percent, 30 percent, and 40 percent for queues 1, 2, 3, and 4. This means that queue 4 has four times the bandwidth of queue 1, twice the bandwidth of queue 2, and one-and-a-third times the bandwidth of queue 3.
Switch(config)# interface gigabitethernet2/0/1
Switch(config-if)# srr-queue bandwidth share 1 2 3 4"
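The shared-mode arithmetic in that config-guide example can be sketched like this (illustrative Python only, just to show the ratio calculation, not anything the switch runs):

```python
# Shared-mode SRR: each queue gets weight / sum(weights) of the port,
# using the weights from the config-guide example above.
weights = [1, 2, 3, 4]
total = sum(weights)

shares = [w / total for w in weights]   # fraction of the port per queue

print(shares)   # queues 1..4 get 10%, 20%, 30%, 40%
```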
Here we are not going to touch the share part, as the bandwidth is already split across the four queues. Since our requirement is priority queueing, we are only going to change the shape part, which sets the bandwidth limit of the first queue. As you said, your load may reach 30 Mb/s, so change the weight from 10 to 30.
Hope it helps
Thanks Vanesh - I confused myself when looking at the config guide and looked at the shared part instead of the shaped part. Here is what I meant to comment on:
Concerning shaping, the config guide says:
"For weight1 weight2 weight3 weight4, enter the weights to control the percentage of the port that is shaped. The inverse ratio (1/weight) controls the shaping bandwidth for this queue. Separate each value with a space. The range is 0 to 65535."
"This example shows how to configure bandwidth shaping on queue 1. Because the weight ratios for queues 2, 3, and 4 are set to 0, these queues operate in shared mode. The bandwidth weight for queue 1 is 1/8, which is 12.5 percent:
Switch(config)# interface gigabitethernet0/1
Switch(config-if)# srr-queue bandwidth shape 8 0 0 0"
So if I read this correctly, the default value of 10 for queue 1 is really 1/10, which is 10% of the port - in our case this equates to 10% of 100 Mb/s, which is 10 Mb/s.
Therefore if I change it to 30, the inverse would be 1/30, which is more like 3%. By using 3, the inverse would be 1/3, or 30%.
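If I have the inverse-weight math right, it works out like this (a rough sketch assuming a 100 Mb/s port as in our case; the function name is mine):

```python
# Shaped-mode SRR: a nonzero weight shapes the queue to (1/weight) of
# the port rate; weight 0 puts the queue in shared mode instead.
PORT_MBPS = 100

def shaped_mbps(weight):
    """Return the shaped rate in Mb/s, or None if the queue is unshaped."""
    return PORT_MBPS / weight if weight else None

print(shaped_mbps(10))  # default weight 10 -> 10 Mb/s
print(shaped_mbps(30))  # weight 30 -> ~3.3 Mb/s (tighter, not looser!)
print(shaped_mbps(3))   # weight 3  -> ~33 Mb/s
```

So raising the weight to 30 would actually squeeze the queue down, while a weight of 3 gives the ~30 Mb/s we're after.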
What do you think?
Hey, thanks man - I just got confused and you are right. So I think we need to configure
"srr-queue bandwidth shape 3 0 0 0"
Please check and share the results.
Jim, I read through this and reread the product documentation. I am very interested to see whether you have observed further results. The documentation seems to indicate that when the priority egress queue is enabled (priority-queue out), weight1 in *both* the srr shape *and* srr share commands is ignored. Queue 1 is serviced exclusively until exhausted. But does that documentation match what you see?
"When you configure the priority-queue out command, the shaped round robin (SRR) weight ratios are affected because there is one fewer queue participating in SRR. This means that weight1 in the srr-queue bandwidth shape or the srr-queue bandwidth share interface configuration command is ignored (not used in the ratio calculation). The expedite queue is a priority queue, and it is serviced until empty before the other queues are serviced."