
WRED + bandwidth allocation algorithm?

kruas-san
Level 1

Dear Gurus!

Please tell me what the bandwidth allocation algorithm is:

I have 3 flows within 1 class-map.

WRED is enabled on the output ATM PVC (the total bandwidth of the PVC is 2000 kbps).

Flows have dscp marks af11(10) af12(12) af13(14).

When the PVC is congested, the flows get the following bandwidth shares on the outgoing interface: AF11 (DSCP 10) - 59%; AF12 (DSCP 12) - 40%; AF13 (DSCP 14) - 1%; plus bulk traffic with DSCP 0.

The total bandwidth allocated to flows 10, 12 and 14 is about 1,500,000 bits per second (75% of 2000 kbps).

All flows have the same packet size - 1400 bytes.

Please tell me how the 7200 calculates the bandwidth allocation between the flows with DSCP 10, 12 and 14.

I have the following

router 7206

IOS (tm) 7200 Software (C7200-JK8S-M), Version 12.2(13), RELEASE SOFTWARE (fc1)

cisco 7206VXR (NSE-1) processor (revision A)

ATM PA - OC3

config:

class-map match-all af11

match ip dscp af11 af12 af13

policy-map gold

class af11

bandwidth percent 65

random-detect dscp-based

random-detect dscp 10 28 45 10

random-detect dscp 12 28 43 10

random-detect dscp 14 28 40 10

class class-default

bandwidth percent 10

................

interface ATM2/0.34 point-to-point

description RBNet

bandwidth 2000

ip address 10.0.4.1 255.255.255.252

pvc rbnet 15/64

vbr-nrt 2000 1500 50

tx-ring-limit 3

encapsulation aal5mux ip

service-policy output gold

Thanks in advance, Andrei


13 Replies

pkhatri
Level 11

Hi,

I'm afraid that there is no deterministic calculation you can use to determine the exact bandwidth each flow within a class will get.

The amount of bandwidth each flow will get will depend on:

- the rate of that flow

- the rate of other competing flows within the same class

- overall congestion state of the interface

- WRED thresholds applicable to each flow

When using class-maps within a service-policy, a single queue is allocated to each non-class-default class. Therefore, scheduling within that one queue is essentially first-in first-out.

You mentioned that you were sending traffic marked with each of the 3 DSCP values but you did not mention how much traffic of each value you were sending. That is going to have a bearing on what traffic actually gets scheduled out.

In short, you can't calculate the percentages for each flow but the considerations I outlined above apply in determining the dynamic behaviour.

Pls do remember to rate posts.

Paresh

Thank you for your answer.

Could you tell me a little bit more.

The rate of each flow is 100 pkts per second.

Thus in sum we have 300 pkts per second.

All packets have the same length 1400 bytes.

There is no other traffic passing through this router (it's a lab router).

The WRED thresholds are:

DSCP   Min threshold   Max threshold   Mark probability denominator
10     28              45              10
12     28              43              10
14     28              40              10

Please explain the logic - how is the bandwidth shared between them?

The thresholds differ very little, but the resulting bandwidth allocation differs greatly.

Thanks, Andrei

Andrei,

You do present an interesting case. Before I comment on the possible reasons that you are seeing what you are seeing, could I ask you to perform an additional test ... What I would like to see is for you to run the same test, but this time, use the same WRED thresholds for each of the DSCP values. Then, measure the throughput achieved for each DSCP value. Once you do that, we are in a better position to analyse this.
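For example, a minimal sketch of equalising the thresholds within the existing policy-map (reusing the class from your original post; 28/40/10 for all three code points is just one possible common setting):

policy-map gold
 class af11
  random-detect dscp 10 28 40 10
  random-detect dscp 12 28 40 10
  random-detect dscp 14 28 40 10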

Pls do remember to rate posts.

Paresh

I have already done that.

For thresholds

DSCP   Min threshold   Max threshold   Mark probability denominator
10     28              40              10
12     28              40              10
14     28              40              10

Under these conditions, each flow got 33% of the bandwidth,

i.e. an equal share.

I can't understand it then, and I can't find anything about it in the literature.

Could you give me a hint about the reason for such behaviour?

Thanks, Andrei

One more question, Andrei. Are these TCP flows or are you just pushing out UDP streams?

Paresh

They are UDP streams (I generate them with rude/crude).

The packet rate precision is good (100 packets per second per flow).

Please tell me your idea.

Andrei

I also tried with ICMP streams and obtained the same results.

Ok Andrei,

Here's my thinking about what is happening...

You are transmitting a total of 300*1400*8 = 3.36Mbps into a circuit that is configured for 2Mbps. Since this traffic is being sent at a continuous rate, that means that the queue is in a state of constant congestion. In fact, the size of the queue is going to be sitting around the maximum of 45 packets pretty much all the time. The only time that space is created in the queue is when:

1. A packet is scheduled out of the queue

2. Packets are dropped due to RED

Now, if the queue is always around the 45 packet mark, that means that once the queue reaches that point, all packets for DSCP 14 are going to be dropped since the queue depth is greater than the maximum threshold of 40 for DSCP 14.

Considering the DSCP 10 traffic now... Both the DSCP 10 and DSCP 12 traffic is operating in the WRED random-drop zone, since the queue size is well above their minimum threshold of 28. Therefore, roughly 1 in 10 packets is dropped for each of these flows (the mark probability denominator of 10 means the drop probability approaches 1/10 as the queue size approaches the maximum threshold). Since very little DSCP 14 traffic makes it into the queue, that amounts to only a couple of packets from each of the other two flows at any one time. Every time WRED drops a packet, space is created in the queue, and if at that point the queue size is less than 43, packets for DSCP 12 are accepted. The queue size is always going to be less than or equal to 45, so slightly more of the DSCP 10 packets are accepted.

I hope that explains the behaviour adequately.

Pls do remember to rate posts.

Paresh

Andrei,

I mentioned my reasoning of the behaviour in that last post. I should also talk about how you get around this.

Firstly, I think that your methodology of choosing thresholds is flawed, which leads to this non-intuitive behaviour.

You configured the following:

random-detect dscp 10 28 45 10

random-detect dscp 12 28 43 10

random-detect dscp 14 28 40 10

All of the DSCP values had different maximum thresholds, which is what causes the problem. You should keep the maximum threshold the same for all of them and tweak the minimum threshold instead.

So I would urge you to try the following:

random-detect dscp 10 28 40 10

random-detect dscp 12 30 40 10

random-detect dscp 14 32 40 10

I'm quite confident that you will get a much different set of results with this.

Pls do remember to rate posts.

Paresh

Dear Paresh

Thank you for helping me.

one more question.

You are absolutely right: packets are dropped, the flows are stationary, and the average queue length oscillates near each flow's maximum threshold.

Now the question.

How are these drops connected with the bandwidth share on the output interface?

The flows are stationary; because of the different thresholds they receive different amounts of drops, but shouldn't the bandwidth share for each flow still be equal?

Thanks, Andrei

I mean the following:

It's like round-robin scheduling between 3 queues.

Packets are dropped by WRED at the tail of the queue,

But from the head of the queue packets are sent to the network.

The streams are continuous, so at every moment there are packets in each queue waiting to be sent.

Andrei,

You have to understand that there is only *one* queue that all three of these flows share. Because that shared queue exceeds 40 packets most of the time, the DSCP 14 (AF13) traffic gets very little bandwidth share.

If you want 3 separate queues, you need to create a different class-map for each DSCP value.
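As a rough sketch of that approach (the class names and per-class percentages below are illustrative assumptions, not values from this thread; they are chosen to stay within the default 75% maximum reservable bandwidth):

class-map match-all af11-only
 match ip dscp af11
class-map match-all af12-only
 match ip dscp af12
class-map match-all af13-only
 match ip dscp af13
!
policy-map gold-3q
 class af11-only
  bandwidth percent 35
  random-detect dscp-based
  random-detect dscp 10 28 40 10
 class af12-only
  bandwidth percent 20
  random-detect dscp-based
  random-detect dscp 12 28 40 10
 class af13-only
  bandwidth percent 10
  random-detect dscp-based
  random-detect dscp 14 28 40 10
 class class-default
  bandwidth percent 10

With one class (and hence one queue) per DSCP value, each flow is scheduled independently by CBWFQ and is guaranteed at least its configured share during congestion, instead of competing inside a single FIFO queue.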

Pls do remember to rate the posts...

Paresh

Thank you Paresh.

Your answer also explains the fact that packets from all flows had the same propagation time (RTT for ICMP).

Thanks again.

My respect,

Andrei