3750 Srr-queue bandwidth limit problem

Unanswered Question
May 23rd, 2008

As a quick-and-dirty way of limiting outbound traffic on our 3750 switch, I simply applied:

srr-queue bandwidth limit 22

on a FastE uplink. From what I understood, the command should only limit egress traffic from the port to which it is applied. This appears to be the case when the bandwidth is not maxed out. For example, if there is 15 Mbps of outbound traffic going through the port and 15 Mbps coming into it, all traffic passes, as was my intention. However, when outbound traffic goes over ~20 Mbps (my shaping goal; I assumed the other 2 Mbps of difference between the configured limit and the actual output I see was TCP overhead), inbound traffic is restricted to nothing, for all intents and purposes.
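
For clarity, the whole configuration is just this one line under the uplink (the interface name here is illustrative, not my actual port):

```
! Illustrative interface; srr-queue bandwidth limit caps egress
! traffic to roughly the given percentage of the 100 Mbps line rate
interface FastEthernet1/0/1
 srr-queue bandwidth limit 22
```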

Is something misconfigured, is this a bug of some kind, or is something else going on here?

My first thought was CPU overload from the shaping; however, the CPU never goes above 10%.

Any help would be appreciated.

cisco_lad2004 Fri, 05/23/2008 - 07:38

I am not sure I understood your issue correctly, but here is what I have experienced using the same command on a 3560.

When I tried limiting bandwidth to 10 Mb, traffic was only allowed through at a 6 Mb rate. When I moved on to 15 Mb, I got exactly 12 Mb out of it, and, as you might guess, 20 Mb gave me a throughput of 18 Mb. The rate seems to move in increments of 6 Mb.

In the end, I used a different approach. It is long-winded, but it gives me exactly what I need.


caplinktech Fri, 05/23/2008 - 07:53

Hi Sam,

I still think the delta in the numbers is some type of overhead because I am getting 20Mbps on the nose (not a multiple of 6) when set to 22.

The problem I am having is that when the egress traffic hits that limit, the switch blocks inbound traffic as well. But I don't think it is shaping on the sum of the traffic, for two reasons: first, the documentation for the command indicates it applies to egress traffic only, and second, the port will pass 15 Mbps in and 15 Mbps out without any blocking.

I do realize that I could use a policer and a service-policy to accomplish the same thing, but if I could get this command working correctly, its simplicity would make it a nice "tool" to have handy in other situations where multiple ports all need to be shaped to different speeds for whatever reason.
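
For comparison, the policer approach I have in mind would look roughly like this (class/policy names and rates are illustrative; note that policing on these switches is ingress-only, so it goes on the inbound side):

```
! Illustrative MQC policer; requires "mls qos" to be enabled globally
mls qos
!
access-list 100 permit ip any any
!
class-map match-all ALL-TRAFFIC
 match access-group 100
!
policy-map LIMIT-20M
 class ALL-TRAFFIC
  police 20000000 1000000 exceed-action drop
!
interface FastEthernet1/0/1
 service-policy input LIMIT-20M
```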

cisco_lad2004 Fri, 05/23/2008 - 08:05

Hi David

I see what you mean with regard to your issue. I have not experienced this myself.

However, I just managed to find this info:



Usage Guidelines

If you configure this command to 80 percent, the port is idle 20 percent of the time. The line rate drops to 80 percent of the connected speed. These values are not exact because the hardware adjusts the line rate in increments of six.

Note The egress queue default settings are suitable for most situations. You should change them only when you have a thorough understanding of the egress queues and if these settings do not meet your quality of service (QoS) solution.


johgill Sat, 05/24/2008 - 06:51

Your understanding is correct: srr-queue bandwidth limit should idle the tx ring to emulate a slower link. When you use a value of 22, you will actually get an operational bandwidth of 23.81% of link speed. In newer code, you can use "sh mls qos int x/y/z queueing" to see the exact rate you are getting:

3750-1#sh mls qos int fa 3/0/3 queue


Egress Priority Queue : disabled

Shaped queue weights (absolute) : 25 0 0 0

Shared queue weights : 25 25 25 25

The port bandwidth limit : 22 (Operational Bandwidth:23.81)

The port is mapped to qset : 1

This variance is due to a hardware limitation.

The shaping does not utilize the CPU; it is programmed into the port ASIC.

My first thought would be to try this same test with UDP traffic. We know TCP does not like drops and quickly shrinks its transmit window when they occur. Is it possible to try this?
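
Something like iperf would work for generating the UDP stream; for example (the address and rate are just placeholders):

```
# Receiver on the far side of the shaped uplink
iperf -s -u

# Sender: offer ~25 Mbps of UDP for 30 seconds, then compare
# the sent rate against the received rate reported by the server
iperf -c 192.168.1.10 -u -b 25M -t 30
```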


John Gill

caplinktech Sat, 05/24/2008 - 08:01

Hi John, I don't think my IOS version shows the operational bandwidth, but here is my output from that command. As you will see, there is no other QoS running on this particular switchport.

myswitch#sho mls qos int g1/0/24


trust state: not trusted

trust mode: not trusted

trust enabled flag: ena

COS override: dis

default COS: 0

DSCP Mutation Map: Default DSCP Mutation Map

Trust device: none

qos mode: port-based

myswitch#sho mls qos int g1/0/24 que


Egress Priority Queue : disabled

Shaped queue weights (absolute) : 25 0 0 0

Shared queue weights : 25 25 25 25

The port bandwidth limit : 22

The port is mapped to qset : 1

However, I am still unsure whether my problem is being understood correctly. I may have introduced some confusion when I specifically said TCP overhead; I probably should have said IP overhead, as the traffic flowing through the port is both TCP and UDP.

The key here is that the behavior of the switchport with respect to egress traffic is exactly as expected. If egress traffic is at 15 Mbps, everything works great. If egress traffic jumps above ~20 Mbps, it is shaped perfectly to the limited speed. The problem I am experiencing is that while the egress is being shaped (more than ~20 Mbps outbound), the switchport appears to be dropping about 90% of INBOUND traffic as well. While shaping is occurring, it is as if the shaper were acting on the sum of the traffic rather than on egress only. Yet when only the sum exceeds ~20 Mbps, no shaping occurs, which is what has me confused.

Next time it occurs (it won't happen now that it is the weekend and bandwidth usage is way down), I will post an image of the port output to hopefully clarify what I am trying to describe.

johgill Sat, 05/24/2008 - 08:35

The operational bandwidth will be the same as in the output I posted; the field was just added to the command output in a later version of code.

I understand your point about TCP overhead, but I am thinking of the TCP receive window and how it reacts to dropped packets. If you suddenly hit a wall and drop packets, the other side of the connection will slow down. If there are still aggressive flows and the original flow keeps dropping packets, the receiving flow will continue to slow down.

I am saying if you test with UDP instead of TCP, you can get a more realistic idea of what the switch is doing vs. what TCP is doing in reaction to the dropped packets.


John Gill

