
WRED configuration with CBWFQ

pankajkulkarni
Level 1

Hello All,

I have a query regarding the tail drop behavior when using WRED in conjunction with CBWFQ.

What happens when traffic for any of the precedence values exceeds the max-threshold?

Do the excess packets spill over to class-default, or are they tail-dropped, resulting in retransmissions?

policy-map policy
 class class_test
  bandwidth 4000
  random-detect
  random-detect exponential-weighting-constant 10
  random-detect precedence 0 32 256 100
  random-detect precedence 1 64 256 100
  random-detect precedence 2 96 256 100
  random-detect precedence 3 120 256 100
 class class-default
  fair-queue

Thank you in advance,

Pankaj

6 Replies

jwdoherty
Level 1

Excess packets that match a class do not spill over into class-default. Class-default holds the packets that don't match any explicit prior class; it's sort of a "none of the above" class. (Even if not explicitly defined, CBWFQ still has an implicit class-default.)

WRED tracks an average queue depth. As each packet is added to the class queue, a check is made to see whether the packet should be dropped.
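(As a side note, and from memory of the IOS documentation, so worth double-checking: the exponential-weighting-constant n controls how quickly that average moves. Roughly, average = (old_average * (1 - 2^-n)) + (current_queue_depth * 2^-n), so with your n = 10 the average only moves by about 1/1024 of the difference per packet and reacts slowly to short bursts.)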

In your example you're using precedence-based WRED. So if a packet has a precedence of 2 and the average queue depth is 24, the packet would not be dropped. If the average queue depth were 257, every precedence 2 packet would be dropped (effectively FIFO tail drop). If the average queue depth were between 96 and 256, there is a chance of the packet being dropped. (Straight-line probability: zero drop at 95 and 1% at 256.)
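To put rough numbers on that (this is my understanding of the mark probability denominator, so treat it as approximate): between the min and max thresholds the drop probability ramps up linearly toward 1/denominator. With your precedence 2 line (96 256 100), an average depth of 176 would give roughly (176 - 96) / (256 - 96) * 1/100 = 0.5%, reaching the full 1% right at 256.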

With regard to retransmissions, it generally depends on what the protocol does about dropped packets.

Hope this helps.

BTW:

I've found in practice, especially on low-speed WAN interfaces, that multi-tier tail drops work better than staggered random drops.

e.g.

random-detect precedence 0 63 64 10
random-detect precedence 1 127 128 10
random-detect precedence 2 191 192 10
random-detect precedence 3 255 256 10

There are two situations to consider:

1. During periods of non-congestion.

2. During periods of congestion.

Let us assume there are 3 classes configured, classA (priority 4), classB (priority 3), and classC (priority 2), and each has been allocated 25% of the interface bandwidth. Each class is configured for precedence-based WRED.

1. During a period of non-congestion (the link is not congested, but an individual queue is full).

Let's say the classA queue is FULL, whereas the other classes are empty (0). Will the packets arriving for classA be dropped, or will they be sent to class-default?

Tail-dropping the packets in the above scenario would be sub-optimal since additional interface bandwidth is available. Dropping packets on non-congested interfaces resembles policing more than congestion management.

Using "congestion management" we define the minimum reserved bandwidth for each class during congestion.

2. During a period of congestion.

What happens when the queue is FULL and the interface is experiencing congestion too?

Thanks for the previous reply; I would appreciate any ideas.

Pankaj

Assume for this discussion you're not doing any class traffic shaping or policing, nor are you using an LLQ class.

If the link isn't congested, there shouldn't be any queuing in any class. When there is link congestion, packets queue up in the class queues. When packets are queued in any class, there isn't any additional link bandwidth available, since bandwidth settings set floors, not ceilings.

When multiple classes compete for bandwidth, they share it based on their ratios. Using your example of 25% for all classes: if there were only packets in classA, it would obtain 100% of the link bandwidth. If packets were in two classes, each would get 50% of the link bandwidth. If packets were in three classes, each would get a third of the link's bandwidth.

If you had defined classA to have 20% and classB and classC to each have 10%, any class by itself would get 100% of the link bandwidth. If all three classes had packets, classA would obtain 50% of the link while classB and classC would obtain 25% each. If just classB and classC, each would get 50%. If classA and classB (or classC), classA would get two thirds and classB would get one third.
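Put another way, as a rough rule of thumb (ignoring any shaping or policing): each class with queued packets gets approximately its configured bandwidth divided by the sum of the configured bandwidths of the classes that currently have queued packets, as a fraction of the link.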

As for WRED, its real purpose is to try to better manage a TCP sender's transmission rate and to avoid global synchronization of TCP flow rates.

As for ideas, instead of jumping into managing individual classes and WRED settings, I suggest you try CBWFQ with just a class-default defined that uses FQ. That alone often does very well. (Avoid using FQ in class-default if you have any other classes besides LLQ on non-7500s.)
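Something along these lines is all that takes (a minimal sketch; the policy and interface names are just placeholders for your setup):

policy-map FQ-ONLY
 class class-default
  ! flow-based fair queuing for everything
  fair-queue
!
interface Serial0/0
 ! apply outbound on the congested WAN link
 service-policy output FQ-ONLY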

The next level up would be to mark traffic with IP precedence, since FQ will weight flows (I believe you get ratios of 1:2:4 if you use precedences of 0, 2, and 4), and/or use LLQ for real-time traffic.
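For the LLQ piece, a minimal sketch might look like the below (the class name, the precedence 5 match, and the 256 kbps priority figure are only examples to adapt, not recommendations):

class-map match-all VOICE
 match ip precedence 5
!
policy-map WAN-EDGE
 class VOICE
  ! low latency queue, policed to 256 kbps during congestion
  priority 256
 class class-default
  fair-queue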

Where individual classes can be handy is when dealing with TCP vs. non-TCP traffic. Some non-TCP traffic can't tolerate drops the way TCP can, so mixing both in the same class that takes drops can be a problem. Also, TCP should slow down with drops while non-TCP often doesn't, so if they share a class the non-TCP traffic usually ends up capturing all of the class bandwidth.
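For example, something roughly like this keeps them apart (class names, the NBAR-style protocol matches, and the bandwidth numbers are placeholders; an access-list match would work just as well):

class-map match-all TCP-DATA
 match protocol tcp
!
class-map match-all NON-TCP-DATA
 ! using UDP here as a stand-in for the drop-sensitive non-TCP traffic
 match protocol udp
!
policy-map SPLIT-EXAMPLE
 class TCP-DATA
  bandwidth 2000
  ! WRED only on the class that reacts to drops
  random-detect
 class NON-TCP-DATA
  bandwidth 1000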

If I understand right, without the implementation of WRED for individual classes, during congestion the individual classes would compete for class-default bandwidth in the ratio of allocated bandwidth.

Now, if WRED is configured per class and the number of packets in the queue exceeds the max-threshold, the excess packets would be tail-dropped, causing the source to slow its transmission rate rather than compete for class-default bandwidth.

Pankaj

WRED is a drop management technique used within a class. It doesn't have anything directly to do with the class-default class's interaction with other defined classes or how the classes compete for bandwidth.

Class-default competes for bandwidth against other user defined classes within the policy.

The part that might be confusing is that there's always a class-default; if you don't define it, it's implicit. If it's not explicitly defined, or if you define it without FQ, I believe it defaults to 25% of the available bandwidth.

Yes, when WRED's max-threshold is exceeded, it tail drops just like FIFO. Whether the source will slow depends on the protocol the dropped packets are from. TCP should slow, many other protocols will not.

Joseph,

Thanks for the prompt responses and detailed explanation for my queries.

Appreciate the time and effort spent.

Pankaj
