QoS 3750 buffers and weighted tail drop

Mark Bowyer
Level 1

From what I understand, you assign buffers to each queue, which are not guaranteed in times of congestion, and then you assign weighted tail drop thresholds, which are guaranteed during times of congestion. My question is: when the documentation talks about congestion, does it mean buffer congestion or bandwidth congestion? And does congestion start when the first threshold is hit, or only when it gets to 100% of the buffers or bandwidth?

Sent from Cisco Technical Support iPad App

7 Replies

Mark Bowyer
Level 1

Another thing I am confused about is the default shaped and shared weights on the interface.

Egress Priority Queue : enabled

Shaped queue weights (absolute) :  25 0 0 0

Shared queue weights  :  25 25 25 25

The port bandwidth limit : 100  (Operational Bandwidth:100.0)

The port is mapped to qset : 1

Does this mean that, by default, output queue 1 on every interface is restricted to 4 Mb/s? And how does the fact that the priority queue is enabled affect that?

Joseph W. Doherty
Hall of Fame

With the default settings shown, Q1 is limited to 4% of the interface's bandwidth.

I believe that when the PQ is enabled, shape/share no longer applies to Q1. (As I recall, the shape ratios for the other queues then no longer account for Q1.)
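
To put rough numbers on that: the shaped weights are inverse ratios, so the default weight of 25 on Q1 works out to 1/25 of the port, i.e. 4% (4 Mb/s on a 100 Mb port). As a sketch only (the interface name is just a placeholder), the defaults you're seeing correspond to something like:

interface FastEthernet1/0/1
 srr-queue bandwidth shape 25 0 0 0      ! Q1 shaped to 1/25 of the port (4%); a weight of 0 means the queue is not shaped
 srr-queue bandwidth share 25 25 25 25   ! unshaped queues share the remaining bandwidth in these proportions
 priority-queue out                      ! Q1 becomes the expedite queue; its shape/share weight is then ignored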

Joseph W. Doherty
Hall of Fame


Buffers may, or may not, be guaranteed.  It depends on device configuration.

WTD values are set so you can't exceed them, but you might have drops due to lack of buffers before you reach them.

Congestion is when the interface is congested, which is any time there's a frame/packet waiting to be transmitted.
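
For example (a sketch only, not a tuned recommendation), both of those knobs live in the queue-set commands: the reserved value is the portion of a queue's buffer allocation that is actually guaranteed, the maximum value lets the queue borrow from the common pool, and the WTD thresholds are percentages of that allocation:

mls qos queue-set output 1 buffers 25 25 25 25
! for queue 2 of queue-set 1: WTD threshold 1 = 50%, threshold 2 = 85%,
! reserved (guaranteed) = 50% of the allocation, maximum = 400% (can borrow from the common pool)
mls qos queue-set output 1 threshold 2 50 85 50 400
!
interface FastEthernet1/0/1
 queue-set 1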

Mark Bowyer
Level 1

OK, so what I found is that I pretty much just want to give video conferencing and VoIP priority over everything else. I assigned the video conferencing and VoIP markings to the ingress and egress priority queues. On ingress, for queues 1 and 2, I set threshold 1 to 50 and threshold 2 to 85, and gave threshold 3 to the VoIP and video conferencing markings, as well as to everything else, which is marked 0 and uses queue 1.

But what I found is that, although I am not restricting the buffers on any of the traffic, and although I had put the real-time traffic in the priority queue, it was still dropping packets when the queue was getting full. So I moved all of the other traffic, marked 0, down to threshold 2 in ingress queue 1, which was at 85. At one site I was still seeing drops until I changed threshold 2 on queue 1 to 16.
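
For reference, roughly what I have on ingress now (reconstructed from memory, and the DSCP values are just what we use here for VoIP and video conferencing, EF and AF41):

mls qos srr-queue input priority-queue 2 bandwidth 10        ! ingress queue 2 is the priority queue
mls qos srr-queue input threshold 1 50 85                    ! queue 1 thresholds (threshold 2 later dropped to 16 at one site)
mls qos srr-queue input threshold 2 50 85                    ! queue 2 thresholds
mls qos srr-queue input dscp-map queue 2 threshold 3 46 34   ! VoIP (EF) and video (AF41) into the priority queue
mls qos srr-queue input dscp-map queue 1 threshold 2 0       ! everything marked 0 into queue 1, threshold 2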

So I guess it goes back to what you just said about the buffers starving out before they get to the threshold. How would you go about troubleshooting these buffer issues? There don't seem to be many commands that tell you anything about buffer usage.

How would you design it? Would you give more buffers to the real-time traffic compared to all the other traffic? But then how can you tell whether the other traffic is suffering?

If you are going from a 100 Mb link that is, say, 5% utilized to a 1 Gb link that is, say, 2% utilized, can there be any congestion? Will everything be sent out straight away without much buffering? Obviously, if the link is fully topped out, then it will buffer.

Joseph W. Doherty
Hall of Fame

On a 3750, you might find disabling QoS works better.  This allows all traffic to share all the buffers supported by the hardware.  Often this is ideal for transient congestion.

If you have sustained congestion, then you can configure 3750 QoS, but getting an optimal QoS configuration on the 3750 architecture is surprisingly difficult (due to buffer resource considerations).

A nice reference on 3750 QoS is: https://supportforums.cisco.com/docs/DOC-8093
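
If you do want to try disabling it, the toggle and a couple of sanity checks look roughly like this (the interface name is a placeholder):

no mls qos                                              ! globally disables QoS; traffic then shares the hardware buffers (this is also the default state)
show mls qos                                            ! should report "QoS is disabled"
show mls qos interface fastEthernet1/0/1 statistics     ! per-queue enqueue/drop counters, handy while QoS is still enabled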

PS:

You can have congestion (transient) even on a 2% utilized gig link.  Conversely, even a fully topped out link might not buffer (or be congested).

Congestion is when a frame/packet must wait to be transmitted.

Congestion is quite common, but depending on its severity, it may not be adverse to the needs of the traffic.

Mark Bowyer
Level 1

So is it also possible that packets could be arriving on ingress more quickly than your egress can handle?

Sent from Cisco Technical Support iPad App

Joseph W. Doherty
Hall of Fame

Yes, it's possible. If ingress and egress support the same bandwidth, you might still see this if the device cannot keep up with the ingress rate.
