Cisco Support Community

ASR1000 CBWFQ QOS Configuration Example

Configuring QoS can be an extremely complex task, depending on how granularly the user wants to control both in-policy and out-of-policy traffic. In other words: is bandwidth guaranteed, and if so, how is excess bandwidth handled? Many software forwarding platforms today use a two (2) parameter scheduler consisting of "maximum" and "excess".

The ASR1000 series implements a more advanced three (3) parameter scheduler.

The three parameters are:

Min = bandwidth command
Max = shape command
Excess_weight = bandwidth remaining command

Usually the available bandwidth above the guaranteed amount is referred to simply as "excess", but using the description "excess_weight" may make the example below easier to explain.

This becomes important because the same configuration on, say, a 7200VXR series platform may not result in the same traffic handling as on an ASR1000 series device.

Let's take the policy below to illustrate:

policy-map queue-test
     class class1
          bandwidth percent 10
     class class2
          bandwidth percent 20

Now, let's say we offer 100% load to each class.

On the 7200 platform the result would be

100% load ---> class1 excess_weight 10  ---> receives 33% throughput
100% load ---> class2 excess_weight 20  ---> receives 66% throughput
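The 7200 arithmetic above can be sketched as follows. This is a hypothetical helper (the function name and structure are illustrative, not IOS behavior): with a two-parameter scheduler, the excess is shared in proportion to each class's configured bandwidth, which doubles as its excess weight.

```python
def weighted_share(weights, total=100.0):
    """Split `total` in proportion to `weights` (class name -> weight)."""
    weight_sum = sum(weights.values())
    return {name: total * w / weight_sum for name, w in weights.items()}

# Both classes fully loaded; excess shared 10:20.
shares = weighted_share({"class1": 10, "class2": 20})
# class1 -> ~33.3% throughput, class2 -> ~66.7% throughput
```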

On the ASR1000 platform the same configuration would result in

100% load ---> class1 min 10, excess_weight 1  ---> receives 45% throughput
100% load ---> class2 min 20, excess_weight 1  ---> receives 55% throughput

Let's walk through the calculation for the ASR1000 implementation. The "Min" guarantees are serviced first, so class1 gets 10% based on the "bandwidth" command (the "bandwidth" command sets the Min). Similarly, class2 gets 20%, which leaves 70% of the overall bandwidth available and not guaranteed anywhere. In other words, we have 70% in the "excess pool".

This excess is shared among the classes that want bandwidth, based on their "Excess_weight". In the current implementation each class gets an equal share of the excess, which is how the respective throughput values are derived.

Total load: 100%
class1: min = 10
class2: min = 20

The excess is 70%, so with two classes each receives half of it, or 35% of the link. Therefore class1 would get 45% (10 min + 35 excess) and class2 would get 55% (20 min + 35 excess).
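The ASR1000 allocation above can be sketched in a few lines. This is an illustrative helper (not IOS output): min guarantees are serviced first, then the remaining excess is split equally among the classes, reflecting the equal excess_weight each class currently receives.

```python
def asr_allocate(mins, total=100.0):
    """mins: class name -> guaranteed minimum; returns class -> throughput."""
    excess = total - sum(mins.values())          # bandwidth not guaranteed anywhere
    per_class_excess = excess / len(mins)        # equal excess_weight per class
    return {name: m + per_class_excess for name, m in mins.items()}

alloc = asr_allocate({"class1": 10, "class2": 20})
# class1 -> 45.0, class2 -> 55.0
```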

Let's take the above and apply it to a network scenario with real values.

Assume we would like a subrate Gigabit Ethernet link running at 200 Mbps and want to guarantee the following:

class "voice" is guaranteed 10 Mbps of low latency traffic and is policed above that rate
class "data1" is guaranteed 10 Mbps
class "data2" is guaranteed 10 Mbps
class "data3" is guaranteed 80 Mbps
class "default" is guaranteed 10 Mbps

Here is the policy-map configuration:

policy-map shaper
     class class-default
          shape average 200000000
          service-policy queue-test

policy-map queue-test
     class voice
          priority
          police 10000000
     class data1
          bandwidth 10000
     class data2
          bandwidth 10000
     class data3
          bandwidth 80000
     class class-default
          bandwidth 10000

The result of this configuration would be the following throughput for each class if every class were sending at 200 Mbps. We use a 200 Mbps offered load per class to show how all the excess would be consumed if only a single class had data at any given time.

Under the full load for all classes the result would be:

class voice => gets 10 Mbps and is strictly policed above 10 Mbps, with or without congestion
class data1 => gets 30 Mbps
class data2 => gets 30 Mbps
class data3 => gets 100 Mbps
class default => gets 30 Mbps

That gives a total of 190 Mbps allocated to the bandwidth classes and 10 Mbps to the priority class, for a total of 200 Mbps, which the parent shaper allows.

The bandwidth received by each class was calculated based on an equal excess weight for each class and an excess bandwidth of 80 Mbps (200 total - 10 priority - 110 Min). That means each bandwidth (non-priority) class gets 20 Mbps of the excess.
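The same equal-excess arithmetic, applied to the 200 Mbps scenario, can be sketched as follows (an illustrative calculation, with the priority class carved out before the excess is split):

```python
def asr_allocate(total, priority, mins):
    """total/priority in Mbps; mins: class name -> guaranteed Min in Mbps."""
    excess = total - priority - sum(mins.values())   # 200 - 10 - 110 = 80 Mbps
    share = excess / len(mins)                       # equal excess weight: 20 Mbps each
    return {name: m + share for name, m in mins.items()}

alloc = asr_allocate(total=200, priority=10,
                     mins={"data1": 10, "data2": 10, "data3": 80, "class-default": 10})
# data1 -> 30, data2 -> 30, data3 -> 100, class-default -> 30 (Mbps)
```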

In the future, the allocation of excess bandwidth will be configurable, similar to the example below, instead of all classes receiving an equal excess weight:

policy-map queue-test
     class class1
          bandwidth percent 10
          bandwidth remaining percent 10
     class class2
          bandwidth percent 20
          bandwidth remaining percent 20
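Under that future model, the 70% excess from the earlier two-class example would be split 10:20 per the "bandwidth remaining" weights rather than equally. A sketch of that arithmetic (hypothetical helper, assuming weighted sharing of the excess pool):

```python
def weighted_allocate(mins, excess_weights, total=100.0):
    """mins: class -> Min guarantee; excess_weights: class -> 'bandwidth remaining' weight."""
    excess = total - sum(mins.values())              # 100 - 30 = 70
    weight_sum = sum(excess_weights.values())
    return {name: mins[name] + excess * w / weight_sum
            for name, w in excess_weights.items()}

alloc = weighted_allocate({"class1": 10, "class2": 20},
                          {"class1": 10, "class2": 20})
# class1 -> ~33.3%, class2 -> ~66.7%
```

Note that with the excess weights proportional to the Mins, the result matches the 7200's behavior shown earlier.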