With regard to the Mark Probability Denominator (MPD) when applied to WRED, I was wondering why Cisco has selected 1/10 as the default.
As I understand it, this means that at most 1 in 10 packets will be dropped while the average queue depth is between the min and max thresholds.
It seems to me that the drop probability should increase automatically (and exponentially) as the queue depth approaches the max threshold. (A little like the amount of energy required to reach the speed of light.)
What is the significance of 1/10 packets as it relates to TCP? Why would you jump directly from 1/10 packets to 10/10 packets (once the max threshold is reached)? Why not 1/9, 1/8, 1/7... etc. as you approach the max?
Thanks all, I look forward to an interesting discussion.
Max+1 is 100% drop, for "normal" RED; alternatives have been proposed.
RED drops at 100% at max+1 for several reasons. Buffer space is finite. Large buffers, although they might preclude some drops, increase queuing latency. Not all flows adapt (i.e., slow down) in response to drops (RED is really best suited to TCP-like flows).
The design purpose of RED was to provide something "simple" but better than FIFO tail drop, to improve "goodput". There's research, though, showing that tail drop is better in some situations, due to how some TCP variants respond to single-packet drops vs. multi-packet drops.
One problem I've seen with routine RED: if multiple flows share the same queue, RED can drop packets from flows that aren't causing the congestion. (Cisco's flow-based WRED addresses this issue, but it's not as widely supported as non-flow-based WRED.)
BTW, the reason I provided the reference to Dr. Floyd's RED page: I believe she and Dr. Jacobson were the two who came up with and published the idea.
>> What is the significance of 1/10 packets as it relates to TCP? Why would you jump directly from 1/10 packets to 10/10 packets (once the max threshold is reached)? Why not 1/9, 1/8, 1/7... etc. as you approach the max?
The WRED idea is to selectively discard packets before the physical queue is full, to avoid falling back to tail drop.
By providing different thresholds and different MPDs for different traffic classes, you can influence the performance of TCP flows based on their IP precedence or DSCP.
The drop probability grows linearly from 0, when the average queue length is at min_threshold, up to 1/MPD as it approaches max_threshold.
If the drop probability were left to grow in a smoother way beyond that, the WRED effect would be reduced:
all TCP sessions that have at least one packet dropped reduce their windows.
This works well when multiple TCP sessions with different QoS markings are present on the link.
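To make the shape of that curve concrete, here is a minimal sketch in Python (not Cisco's actual implementation; parameter names are mine) of the drop probability described above, where `min_th`, `max_th`, and `mpd` stand for the WRED min-threshold, max-threshold, and mark probability denominator:

```python
def wred_drop_probability(avg_qlen, min_th, max_th, mpd):
    """Linear WRED-style drop probability.

    Below min_th the packet is never dropped.  Between the thresholds
    the probability rises linearly from 0 to 1/mpd.  At max_th and
    beyond, every packet is dropped (the 100% tail-drop region).
    """
    if avg_qlen < min_th:
        return 0.0
    if avg_qlen >= max_th:
        return 1.0
    # Linear ramp: fraction of the way between the thresholds, scaled by 1/mpd.
    return (avg_qlen - min_th) / (max_th - min_th) / mpd

# With Cisco's default MPD of 10, the probability just below max_th
# approaches 1/10 = 0.1, then jumps straight to 1.0 at max_th.
```

This makes the discontinuity in the original question visible: the function tops out at 1/MPD and then steps to 1.0, rather than ramping through 1/9, 1/8, and so on.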
Thanks for letting me know that the drop rate is linear.
"all TCP sessions that have at least one packet dropped reduce their windows"
I think I see: so the concept is that if the session is a large one (i.e., more than 10 packets transferred), the probability of reducing the rate of that session becomes 1. If the session is smaller, reducing its rate is ineffective because it will be over sooner rather than later anyway?
Do you think that is the basic idea behind the MPD?
TCP windows are expressed in bytes, not in packets, but that is just a detail here.
WRED helps TCP goodput by avoiding the effects of synchronization across TCP sessions:
with generalized tail drop, all TCP sessions have packets dropped and all of them reduce their windows.
This can cause sawtooth behaviour in link usage, with reduced performance, because all the TCP sessions then increase their windows at the same time; this will likely cause congestion again, the queue fills up again, generalized tail drop occurs again, and so on.
By dropping in a differentiated way for different traffic classes, and randomly within each single class (when the average queue is between the two thresholds), another objective is to maximize link usage and the so-called TCP goodput.
With WRED, some TCP sessions reduce their windows while other sessions are not affected; this allows a higher average (over time) level of link usage.
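One detail implicit in the discussion above: WRED compares an exponentially weighted moving average of the queue depth, not the instantaneous depth, against the thresholds, so short bursts do not trigger drops. A hedged sketch of that averaging (the function name is mine, and the default weighting constant of 9 is what I recall as the IOS default; check your platform):

```python
def update_avg_qlen(avg, instantaneous, n=9):
    """Exponentially weighted moving average of queue depth.

    The weight is 1/2^n, where n is the exponential weighting constant
    (assumed default 9 here).  A large n makes the average react slowly,
    absorbing bursts; a small n tracks the instantaneous depth closely.
    """
    weight = 1.0 / (2 ** n)
    return avg + weight * (instantaneous - avg)
```

With n=9 each new sample contributes only 1/512 of its difference from the running average, which is why the "average queue depth" mentioned throughout this thread can sit between the thresholds even while the instantaneous queue fluctuates widely.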
Let me say that it is really difficult to understand the effects of the WRED parameters and to test them.
We tried using Chariot emulators of TCP hosts on a 155 Mbps SDH link in lab tests, but the variation in results with the same parameters was comparable to the possible effects of changing the parameters.