
Drops in queue on subinterface without congestion (QoS)

Hello colleagues

The problem is that I have ~7-8 Mbps of traffic on a 65 Mbps link, but my service policy still seems to drop packets from the queue.

In my understanding, a policy map should start dropping traffic only when congestion appears. Is that right?

The same situation occurs on an ASR 1002 and a 7206, IOS versions 15.2(4)S4 and 15.1(4)M7.

 

Here is the QoS configuration on the WAN link:

 

class-map CRITICAL
 match precedence 5
class-map match-any IMPORTANT
 match precedence 3
 match precedence 6
class-map BUSSINESS
 match precedence 1

policy-map SHAPING
 class class-default
  shape average 65000000
  service-policy SP-QOS

policy-map SP-QOS
 class CRITICAL
  priority percent 70
 class IMPORTANT
  bandwidth percent 12
 class BUSSINESS
  bandwidth percent 12
  random-detect dscp-based
 class class-default
  bandwidth percent 6
  random-detect

interface GigabitEthernet0/0/1.9
 encapsulation dot1Q 9
 ip address xxxxxxxxx
 ip nat outside
 ip nbar protocol-discovery
 service-policy output SHAPING

 

And here is what I see in the statistics:

  Service-policy output: SHAPING

    Class-map: class-default (match-any)
      129798770 packets, 40181696005 bytes
      5 minute offered rate 7304000 bps, drop rate 0000 bps
      Match: any
      Queueing
      queue limit 270 packets
      (queue depth/total drops/no-buffer drops) 0/11680/0
      (pkts output/bytes output) 129779213/40165064207
      shape (average) cir 65000000, bc 260000, be 260000
      target shape rate 65000000

      Service-policy : SP-QOS

        queue stats for all priority classes:
          Queueing
          queue limit 512 packets
          (queue depth/total drops/no-buffer drops) 0/0/0
          (pkts output/bytes output) 13484092/3912327041

        Class-map: CRITICAL (match-all)
          13480980 packets, 3911647909 bytes
          5 minute offered rate 422000 bps, drop rate 0000 bps
          Match:  precedence 5
          Priority: 70% (45500 kbps), burst bytes 1137500, b/w exceed drops: 0

        Class-map: IMPORTANT (match-any)
          7721261 packets, 1021340017 bytes
          5 minute offered rate 108000 bps, drop rate 0000 bps
          Match:  precedence 3
          Match:  precedence 6
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 0/0/0
          (pkts output/bytes output) 7715467/1020879486
          bandwidth 12% (7800 kbps)

        Class-map: BUSSINESS (match-all)
          47724414 packets, 15630803933 bytes
          5 minute offered rate 3393000 bps, drop rate 0000 bps
          Match:  precedence 1
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 0/7388/0
          (pkts output/bytes output) 47741826/15628877447
          bandwidth 12% (7800 kbps)
            Exp-weight-constant: 4 (1/16)
            Mean queue depth: 1 packets
            dscp       Transmitted         Random drop      Tail drop          Minimum        Maximum     Mark
                    pkts/bytes            pkts/bytes       pkts/bytes          thresh         thresh     prob
            af11     2990415/861605540      25/32285        645/884994            28            32  1/10
            af12      677310/367870926       0/0             10/5466              24            32  1/10
            af13      154681/48657019        3/1634           4/307               20            32  1/10

        Class-map: class-default (match-any)
          60817716 packets, 19603759092 bytes
          5 minute offered rate 3363000 bps, drop rate 0000 bps
          Match: any
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 0/4292/0
          (pkts output/bytes output) 60837828/19602980233
          bandwidth 6% (3900 kbps)
            Exp-weight-constant: 4 (1/16)
            Mean queue depth: 1 packets
            class       Transmitted         Random drop      Tail drop          Minimum        Maximum     Mark
                    pkts/bytes            pkts/bytes       pkts/bytes          thresh         thresh     prob
            0         3474437/1110326227    269/276936       742/851882            16            32  1/10
            1               0/0               0/0              0/0                 18            32  1/10
            2               0/0               0/0              0/0                 20            32  1/10
            3               0/0               0/0              0/0                 22            32  1/10
            4               0/0               0/0              0/0                 24            32  1/10
            5               0/0               0/0              0/0                 26            32  1/10
            6               0/0               0/0              0/0                 28            32  1/10
            7               0/0               0/0              0/0                 30            32  1/10

26 Replies

I think I understand why there are drops. It looks like RED is working fine and doing some drops in order to avoid congestion.

PS: Nope, I still see the Tail Drop counter growing.

Hi, is that fixed?

I still see a few dropped packets, but not as many as at first. I tuned the parameters a little, according to Akash's advice.

Have you tried removing random-detect?

No, I need random-detect on this queue. But random drops have their own counter in this output (Random drop pkts/bytes), and I'm confused about the tail drops.

So you tried increasing the min and max thresholds, right? You could also look at the following (a rough config sketch follows the link):

  1. queue limit

  2. Exp-weight-constant

http://www.cisco.com/c/en/us/products/collateral/ios-nx-os-software/quality-of-service-qos/white_paper_c11-481499.html 
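For reference, here is a rough sketch of what that tuning could look like on a class that is tail dropping; the queue-limit, threshold, and weighting values below are only illustrative starting points, not recommendations from this thread:

policy-map SP-QOS
 class BUSSINESS
  bandwidth percent 12
  ! deeper class queue so bursts buffer instead of tail dropping (example value)
  queue-limit 128 packets
  random-detect dscp-based
  ! raise the min/max thresholds well above the defaults (example values)
  random-detect dscp af11 64 128
  random-detect dscp af12 56 128
  random-detect dscp af13 48 128
  ! smooth the average queue depth more; your output shows the default of 4 (1/16)
  random-detect exponential-weighting-constant 9

Then check the result with show policy-map interface GigabitEthernet0/0/1.9 to see whether the tail-drop counters stop growing.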

We had an exactly similar issue, but it looks like you need to consider what traffic is going through the class. If it is something like replication or file copying, which sends extreme bursts of packets, then random-detect keeps seeing high utilization and dropping some packets randomly, and the class never reaches its maximum bandwidth. So maybe look at fair-queue and remove random-detect; after that, our link and that class worked 100%.

Thank you!

Yes, there is some bursty traffic... I'll try your suggestion!


Why do you need RED?

As the other posters have already noted, the default WRED parameters are likely insufficient for 65 Mbps. (Often the Cisco defaults appear to be what you would want for a "typical" T1/E1 serial WAN link.)

WRED can be surprisingly difficult to configure correctly.

WRED tail drops happen when the average queue length exceeds the max threshold.  I.e. very much like tail drops on a normal FIFO queue.

However, ideally, you should not see WRED tail drops, just random early drops.  When you see WRED tail drops, you have an issue.

Yes, on most of my links I didn't see any tail drops, just random drops. This link is the one that has this issue. The strange thing is that all my links carry very similar traffic (web, FTP, RDP) and none of the others has tail drops at such low utilization.

 

For my understanding, please correct me if I'm wrong:

The only parameters I see for tuning RED in this situation are the thresholds. So we can increase the max threshold in order to avoid tail drops, but if we increase it too much, we will have a lot of stale traffic sitting in the queue?

In my situation bandwidth utilization is low, so if I see tail drops it means my traffic is so bursty that the tokens I get in one interval are not enough. To avoid that I can increase Bc, which helps a little.

If I turn off RED, there will be no congestion-avoidance mechanism on my link and aggressive non-critical traffic will take a lot of bandwidth; that's why I want RED on class-default. Am I wrong?

 


Are your other links also using a shaper for the same bandwidth?  If so, then yes, for some reason your traffic appears to be more bursty on this interface.

Yes, anytime you increase queue depths you often also increase latency (from more packets waiting in the queue to transmit).

Also yes, increasing Bc will help allow transient bursts to pass, but then you're going "overrate" for longer time intervals.  That might be fine, but I assume there's a reason you're shaping in the first place.  If you're overrate, your WAN vendor might drop the excess traffic or queue it.  The problem with the former is obvious (although the loss of what gets dropped first might not be), but the latter also means you lose control over traffic prioritization.
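If you do experiment with Bc, it is just the optional second argument of shape average (in bits). A minimal sketch, simply doubling the 260000-bit default shown in your output purely as an illustration:

policy-map SHAPING
 class class-default
  ! shape average <cir> <Bc> <Be>, all in bits; Bc/Be doubled here as an example
  shape average 65000000 520000 520000
  service-policy SP-QOS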

Personally, if it's supported, I generally recommend FQ over WRED.  If there's transient congestion within the class, FQ will ensure one monster flow isn't adverse to the other flows within the same class.  Also, FQ should tail drop packets from the congestion-causing flow(s) before the non-congested flows in that class.

BTW, for WRED to do its magic, the traffic has to be the kind that slows its transmission rate when a drop is detected, e.g. TCP.

All important UDP traffic (VoIP) goes to the critical queue, so if any UDP goes to class-default (and WRED), it's not so bad. The main traffic in class-default is TCP, so it's OK.

Why do you prefer FQ instead of WRED? How can it replace WRED?

 

Thank you for the help!


Why do I prefer FQ?  Already explained, I thought.

How do you enable FQ?  Remove the WRED statements from the class and add fair-queue.  (Note, you may need to adjust the queue-limit parameter.)
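A minimal sketch of that change for the child class-default (the queue-limit value is only an example, assuming bursty traffic):

policy-map SP-QOS
 class class-default
  bandwidth percent 6
  ! replace WRED with per-flow queueing
  no random-detect
  fair-queue
  ! optionally deepen the class queue to absorb bursts (example value)
  queue-limit 128 packets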

But in the case of FQ, you must describe all traffic very carefully. I mean:

In the case of WRED, in a perfect world I will not have a monster flow: one random drop and the growing TCP flow decreases its speed. So all flows in one class will get their share in most cases, and there won't be that much fighting for bandwidth.

In the case of FQ, if I do have such a big flow, it will affect all flows in the class, and only a tail drop will decrease that flow's speed. If there are two big flows in one class, they will fight for bandwidth and get tail dropped, so bandwidth utilization will not be as effective.

Am I wrong?


Are you wrong?  Yes and no.

First, with WRED, if there's a monster flow, along with other flows, the packet being dropped is random, i.e. it might not be from the monster flow.

Second, with WRED, the packet being randomly dropped is based on a moving average, so the actual queue size can be much larger than the drop trigger value.  (Interestingly, because it's a moving average, packets can also be dropped when the queue is small [after shrinking].)
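(For reference, the average Cisco's WRED compares against the thresholds is an exponentially weighted moving average, roughly avg_new = avg_old * (1 - 2^-n) + current_depth * 2^-n, where n is the exp-weight-constant; with n = 4, as in the output above, each new sample only carries 1/16 weight.)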

Third, normal WRED drops packets as they are being added to the queue, so that, along with WRED's moving average, can delay a flow "knowing" about the drop.  I.e. A lot more packets can slam into your queue before the flow "knows" to slow.

Lastly, with WRED, it's one queue, so while all the above is happening, other flows are being impacted.

Yes, with FQ the monster flow will be tail dropped, but other concurrent flows should not be.  Often the monster flow, because there's no moving-average delay, may slow faster than when using WRED.

With FQ, other flows are dequeued concurrently, so the monster flow has little impact on their queuing latencies (again unlike WRED).  Also again, other flow packets shouldn't be "mistakenly" dropped, as WRED can do.

In theory, a better WRED is Cisco's FRED, but it's not widely provided across their products.

Interestingly, here's what Van Jacobson has to say about RED: "there are not one, but two bugs in classic RED."

Research RED and you'll find lots and lots of research and variants trying to make it work better, including a revision by Dr. Floyd (the creator of RED); that should be a clue that it's not as simple as it appears.

 

PS:

Some later TCP implementations carefully monitor RTT; if they detect a jump in latency, they slow their flows, so for those you don't want to drop packets at all.

Also note, I spent over 10 years working with different QoS methodologies in a large-scale international (production) network.  In my experience, WRED works better with high-speed links and many flows.  It doesn't work so well at the slow end (i.e. T1/E1) with few flows.  At the slow end, latency is the bigger issue, and FQ manages that better.
