QoS WRED - MPD theory

avillalva
Level 1

Hi all,

With regards to the Mark Probability when applied to WRED, I was wondering why Cisco has selected 1/10 as the default.

As I understand it, this means that at most 1 in 10 packets will be dropped while the average queue depth is between the min and max thresholds.

It seems to me that the MPD should increase automatically (and exponentially) as the queue depth reaches the max threshold (a little like the amount of energy required to reach the speed of light).

What is the significance of 1/10 packets as it relates to TCP? Why would you jump directly from 1/10 packets to 10/10 packets (once the max threshold is reached)? Why not 1/9, 1/8, 1/7, etc. as you approach the max?

Thanks all, I look forward to an interesting discussion.

Andres

7 Replies

Istvan_Rabai
Level 7

Hi Andres,

I suppose there is no big theory behind why WRED is implemented as it is.

The current algorithm is simple enough to implement, and especially simple for the router to process.

And it achieves the desired result of slowing down some randomly selected TCP flows to avoid congestion.

Cheers:

Istvan

Joseph W. Doherty
Hall of Fame

"With regards to the Mark Probability when applied to WRED I was wondering why Cisco have selected 1/10 as default."

If I recall correctly, that was what was recommended when the idea was published.

"It seems to me that the MPD should increase automatically "

It does. 1/n is what it reaches at max.

"What is the significance 1/10 packets as it relates to TCP? Why would you jump directly from 1/10 packets to 10/10 packets (once the max threshold is reached)??"

Seems to work well to get TCP to reduce its congestion window. Beyond MAX, you then have classical FIFO tail drop.
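
To put rough numbers on that, here is a minimal sketch in Python; the 20/40 thresholds below are just assumed example values, not defaults for any particular platform or queue:

def wred_drop_probability(avg_depth, min_th=20, max_th=40, mpd=10):
    """Approximate per-packet drop probability as a function of average queue depth."""
    if avg_depth < min_th:
        return 0.0                # below the min threshold: no early drops
    if avg_depth > max_th:
        return 1.0                # beyond the max threshold: tail drop, every packet
    # between the thresholds: linear ramp from 0 up to 1/MPD
    return (avg_depth - min_th) / (max_th - min_th) / mpd

for depth in (10, 25, 30, 40, 41):
    print(depth, wred_drop_probability(depth))
# drop probability: 0.0, 0.025, 0.05, 0.1 and then straight to 1.0

The last two values show the jump from 1/10 straight to 100% once the max threshold is passed, which is exactly the behaviour being asked about.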

"Thanks all, I look forward to an interesting discussion. "

Not sure you'll find that here, but there is lots of information on the pluses and minuses of RED on the Internet. You might start with Dr. Floyd's RED web page, i.e. http://www.icir.org/floyd/red.html

Hi,

Thanks for your response. With regards to this particular reply:

"It does. 1/n is at max. "

Since it increases automatically, why would they have a max? Why wouldn't the max just be 1 (100%)?

There are plenty of simple mathematical formulas that will give you a smooth, accelerating growth to 1 (y = x^2, as an example).

Doesn't it make more sense to incrementally increase the drop rate past 1/10 if 1/10 is not sufficient? Why would it just jump to 100%? It's almost like the QoS version of "bugger it... I give up".

Max+1 is 100% drop, for "normal" RED; alternatives have been proposed.

RED drops at 100% at max+1 for several reasons. Buffer space is finite. Large buffers, although they might preclude some drops, increase queuing latency. Not all flows adapt (i.e. slow down) in response to drops (RED is really best suited to TCP-like flows).

The design purpose of RED was to provide something "simple", but better than FIFO tail drop, to improve "goodput". There's research, though, showing that tail drop is better in some situations. This is due to how some TCP variants respond to single-packet drops vs. multi-packet drops.

One problem I've seen with routine RED: if multiple flows share the same queue, RED can drop packets from flows that aren't causing the congestion. (Cisco's flow-based WRED addresses this issue, but it's not as widely supported as the non-flow-based WRED.)

[edit]

BTW, the reason I provided the reference to Dr. Floyd's RED page: I believe she and Dr. Jacobson were the two who came up with and published the idea.

Giuseppe Larosa
Hall of Fame

Hello Andres,

>> What is the significance of 1/10 packets as it relates to TCP? Why would you jump directly from 1/10 packets to 10/10 packets (once the max threshold is reached)? Why not 1/9, 1/8, 1/7, etc. as you approach the max?

The WRED idea is to selectively discard packets before the physical queue is full, to avoid falling back to tail drop.

By providing different thresholds for different traffic classes and different MPDs you can influence the performance of TCP flows based on their IP prec or DSCP.

The drop probability is 0 while the average queue length is below min_threshold, then grows linearly up to 1/MPD as the average approaches max_threshold.
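
A rough sketch of that in Python, just to make the linear ramp and the per-class "weighting" concrete (the DSCP names, thresholds and MPD values below are made-up examples, not IOS defaults):

import random

# Hypothetical per-class WRED profiles: DSCP -> (min_threshold, max_threshold, MPD).
# Classes with higher min thresholds start being dropped later, so they are favoured.
profiles = {
    "af11": (20, 40, 10),
    "af21": (25, 40, 10),
    "af31": (30, 40, 10),
}

def drop_probability(avg_depth, min_th, max_th, mpd):
    if avg_depth < min_th:
        return 0.0                          # below min threshold: no early drops
    if avg_depth > max_th:
        return 1.0                          # beyond max threshold: tail drop
    return (avg_depth - min_th) / (max_th - min_th) / mpd   # linear ramp up to 1/MPD

def should_drop(dscp, avg_depth):
    """Random per-packet early-drop decision for a packet of the given class."""
    min_th, max_th, mpd = profiles[dscp]
    return random.random() < drop_probability(avg_depth, min_th, max_th, mpd)

# With an average queue depth of 35 the lower-priority class is the most likely victim:
for dscp in profiles:
    min_th, max_th, mpd = profiles[dscp]
    print(dscp, round(drop_probability(35, min_th, max_th, mpd), 3))

The random comparison in should_drop is what makes only some of the sessions in a class see a drop at any one time.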

If the drop probability were left to grow in a smoother way, the WRED effect would be reduced:

all TCP sessions that have at least one packet dropped reduce their windows.

This works well when multiple TCP sessions with different QoS marking are present on the link.

Again, the idea is to provide DiffServ services.

Hope to help

Giuseppe

Hi Giuseppe,

Thanks for your response.

Thanks for letting me know that the drop rate is linear.

"all TCP sessions that have at least one packet dropped reduce their windows"

I think I see, so the concept is that if the session is a large one (i.e. more than 10 packets transferred) then the probability of reducing the rate of this session becomes 1. If the session is smaller, reducing the rate is ineffective because it will be over sooner rather than later anyway?

Do you think that would be the basic idea behind the MPD?

Hello Andres,

TCP windows are expressed in bytes and not in packets but this is just a detail here.

WRED helps TCP goodput by avoiding the effects of synchronization among TCP sessions:

With generalized tail drop, all TCP sessions have packets dropped and all of them reduce their windows.

This can cause a sawtooth behaviour in link usage, with reduced performance, because all TCP sessions will increase their windows at the same time; this will likely cause congestion again, the queue will be full again, there will be generalized tail drop again, and so on.

By dropping in a differentiated way for different traffic classes, and randomly within each single class (if the average queue is between the two thresholds), another objective is to maximize link usage and the so-called TCP goodput.

With WRED some TCP sessions will reduce their windows while other sessions are not affected; this allows a higher average (over time) level of link usage.
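
Just to illustrate the synchronization point, here is a toy model (a rough sketch only, with made-up numbers; it is not a faithful TCP or WRED simulation): every flow grows its window by one unit per RTT, and at congestion either every flow halves its window (tail-drop style) or each flow is hit only with some probability (RED style).

import random

def average_utilization(synchronized, n_flows=10, capacity=100, rtts=500, seed=1):
    random.seed(seed)
    cwnd = [capacity / n_flows] * n_flows        # start each flow near its fair share
    samples = []
    for _ in range(rtts):
        cwnd = [w + 1 for w in cwnd]             # additive increase every RTT
        if sum(cwnd) > capacity:                 # congestion point reached
            if synchronized:
                cwnd = [w / 2 for w in cwnd]     # tail drop: every flow backs off together
            else:
                # RED-like: each flow backs off only with some probability this round
                cwnd = [w / 2 if random.random() < 0.3 else w for w in cwnd]
        samples.append(min(sum(cwnd), capacity) / capacity)
    return sum(samples) / len(samples)

print("tail-drop style (synchronized) :", round(average_utilization(True), 2))
print("RED style (desynchronized)     :", round(average_utilization(False), 2))

In this toy run the desynchronized case keeps the link noticeably fuller on average, which is roughly the higher-average-usage effect described above.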

Let me say that it is really difficult to understand the effects of WRED parameters and to test them.

We tried using Chariot emulators of TCP hosts on a 155 Mbps SDH link in lab tests, but the variation in results with the same parameters was comparable to the possible effects of changing the parameters.

Hope to help

Giuseppe
