WFQ with Hold Queue

snarayanaraju
Level 4
Dear Friends,
I am not able to figure out the relation between WFQ's HOLD QUEUE parameter and the Conversation value.
I tried this concept in my lab and found that the "show interface" command shows both the Hold Queue parameter (under OUTPUT QUEUE) and the Conversation parameter.
I reduced the Hold Queue to its minimum value and observed that packets were dropped when the flows grew beyond the active queues. Then,
I reduced the Conversation value to its minimum and observed the same: packets were dropped when the flows grew beyond the active queues.
My understanding is this: the Hold Queue is used in FIFO and WFQ as the overall maximum, while the aggressive dropping (tail drop or WRED) happens at the CDT. So the HOLD QUEUE should be a higher value than the Conversation value.
Am I correct? Please let me know your views if I am deviating.
Regards and thanks in advance,
sairam

12 Replies

snarayanaraju
Level 4

Hi Friends,

I made an in-depth study of this again and found that the HOLD QUEUE applies to the entire WFQ system, while the CDT applies to a particular queue. As long as the HOLD QUEUE threshold is not exceeded, the CDT plays its role on the individual queues.

Please let me know your expert comments as well.

sairam

Hi peter,

I am not able to view the contents. I am worried about what problem caused your post to be hidden.

Can you please post the reply again in this thread?

sairam

Hello Sairam,

The NetPro forum backend seems to have a problem. I certainly do not like losing my posts. Oh well, nobody's perfect...

Okay, so let's go over it again. You are correct in your second post. The CDT value limits the size of a particular conversation queue, while the hold queue limit is the upper limit on all packets in the WFQ system, in whatever queue. However, the CDT is not just a simple limit on the conversation queue size. When a packet is about to be WFQ-enqueued, this decision process is invoked:

  1. Is the hold queue limit already reached? If yes, drop the packet and do not enqueue it. Otherwise, proceed to the next step.
  2. Is the CDT limit reached for the particular conversation queue into which the packet belongs? If not, enqueue it and end this decision process. Otherwise, proceed to the next step.
  3. Is there a packet in any conversation queue whose WFQ sequence number is higher than the sequence number of the packet in question? If yes, drop the packet with the highest sequence number (it may be in a different queue) and enqueue the current packet. Otherwise, drop the current packet (obviously, it has the highest sequence number itself).

The sequence number I am talking about in step 3 refers to the internal sequence number that is assigned to each packet by WFQ. Note that under certain circumstances, the size of a particular conversation queue may actually exceed the CDT limit, but at the expense of other queues. The idea behind the CDT is that the WFQ tries to throttle down the most aggressive flows, which will obviously have the highest sequence numbers and are thereby the most eligible to be dropped when a CDT on any queue is exceeded.
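To make the three steps above concrete, here is a minimal Python sketch of that decision process. This is my own illustrative model of the algorithm as described in this thread, not Cisco's actual code; the function and return-value names are invented:

```python
def wfq_enqueue(queues, flow_id, seq_num, hold_queue_limit, cdt):
    """Decide the fate of a packet with WFQ sequence number seq_num that
    belongs to conversation queue flow_id. Each queue is modeled as a
    list of the sequence numbers of the packets it currently holds."""
    total = sum(len(q) for q in queues.values())
    conv = queues.setdefault(flow_id, [])

    # Step 1: the hold queue limit applies to the whole WFQ system.
    if total >= hold_queue_limit:
        return "tail-drop"

    # Step 2: under the CDT for this conversation queue -> just enqueue.
    if len(conv) < cdt:
        conv.append(seq_num)
        return "enqueued"

    # Step 3: CDT exceeded - find the highest sequence number anywhere
    # in the system (it may live in a different queue).
    worst_flow = max(queues, key=lambda f: max(queues[f], default=-1))
    worst_sn = max(queues[worst_flow])
    if worst_sn > seq_num:
        queues[worst_flow].remove(worst_sn)  # push out the worst packet
        conv.append(seq_num)
        return "pushed-out-other"
    return "dropped-self"  # the new packet itself has the highest number
```

Note that in this model, if the hold-queue limit is configured lower than the CDT, step 1 always fires first and every drop degenerates into a plain tail drop, so the CDT branch becomes unreachable.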

The details about WFQ are not very well documented, unfortunately. By far the best in-depth description I have read is in the QoS Certification Exam Guide book by Wendell Odom and Michael Cavanaugh from Cisco Press. I highly recommend obtaining one and reading it. The Cisco website provides far less information about the WFQ, sadly.

I hope this answer does not disappear as well.

Best regards,

Peter

Hi Peter,

Thanks for your detailed reply and also for the book you suggested. In fact, I got these details from that very author. It is surprising that Cisco has not documented this subtle information in their configuration guides.

To continue the discussion: in WFQ, I observed that I am able to configure a HOLD QUEUE value lower than the CDT value. For example, I am allowed to configure HOLD QUEUE = 1 and CDT = 4096. Given that the HOLD QUEUE is the superset, or first checkpoint, for the queue limit, why does IOS not report an error when the CDT value is configured higher than the HOLD QUEUE value?

Looking forward to your view, please.

sairam

Hello Sairam,

You are asking a very good question - why does the IOS allow the Hold Queue limit to be lower than the CDT? Honestly, I do not know for sure. One thing comes to my mind: the CDT-based drops are somewhat random in nature - it may be the newly arrived packet that gets dropped, or some other packet that has already been queued, all depending on sequence numbers. From this viewpoint, the CDT-based drops are slightly similar to RED in one aspect - their randomness. The hold queue limit is strict and deterministic, and as far as I know, there is no exception to it. In essence, the hold queue limit behaves as a tail drop. There may be situations where you want the WFQ to strictly drop the packets that are overfilling the WFQ system but do not want the CDT-based drops, i.e., once a packet has been queued, it shall not be subsequently dropped. This behavior can be achieved by exactly what you have configured: the hold-queue limit being lower than the CDT.

I hope the other friends here will also share their views about this.

Best regards,

Peter

Hi Peter

Let us keep the query open and wait for other experts' comments on this too.

BTW, I want to share another observation: when configuring CBWFQ's class-default class, there is no option to configure the CDT. Only the number-of-dynamic-queues option is available, as shown below:

QOS(config-pmap-c)#fair-queue ?
  <16-4096>  Number Dynamic Conversation Queues
 

QOS(config-pmap-c)#fair-queue 4096 ?
 

I am wondering why the option to configure the CDT inside class-default was removed. I know there must be some logic behind this, but I have not found any clues about the reason. I am still working to figure out the concept.

I would greatly appreciate it if you could share your opinion in this regard.

regards,

sairam

Hi Sairam,

I am also looking forward to reading other experts' comments.

Regarding the question about the lack of a CDT in CBWFQ - if you think more deeply about the CDT and its impact, you will see that the CDT - as opposed to a simple queue limit - actually makes room for high-priority packets and helps to throttle the most aggressive flows. In CBWFQ, you have the tools to do this yourself according to your needs - you can sort your traffic by whatever key into your classes, you can assign bandwidth guarantees and policing limits, you can combine every class with WRED to prevent its overfilling, you can set the maximum packet count in an individual class - so you basically have the elementary tools for your CBWFQ classes that allow you to influence your traffic in a way similar to what the CDT did, and more.
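As a hedged sketch of those per-class tools, something like the following could be configured (the policy and class names are invented for illustration, and exact syntax and value ranges vary by IOS release; note also that on many releases queue-limit and random-detect are mutually exclusive within one class, which is why they are shown in different classes here):

```
! Hypothetical policy-map showing the per-class tools mentioned above
policy-map EXAMPLE-CBWFQ
 class BULK-DATA
  bandwidth 256          ! bandwidth guarantee in kbps
  random-detect          ! WRED: drop proactively before the queue fills
 class INTERACTIVE
  bandwidth 128
  queue-limit 40         ! hard cap on packets held in this class
 class class-default
  fair-queue             ! flow-based queueing for everything else
```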

Again, this is not an explanation from a book - this is how I personally see it, and I would love to see this issue discussed by other friends here as well.

Best regards,

Peter

Hi Peter,

Yes, initially I also had the same thought. Then, while trying to understand its behaviour, I ran into some concerns.

Given that CBWFQ allocates provisions for the individually configured queues, each of which uses FIFO as the scheduler inside the queue, let us consider that we have created only one queue, called VOIP, and that all other traffic falls into class-default, which is configured as FAIR QUEUE.

In this case, if I have different traffic flows, ultimately they will all be in class-default. Thus, that many flow queues may be created inside class-default. What is the tool used here to drop the excess flows, as the CDT does in typical WFQ?

I hope I have understood the concept to some level; apologies if I am deviating or confusing the readers.

NOTE: The initial topic (CDT > HQ) is still not clarified. Please help; I have made efforts from my side to get the answer.

Looking forward to hearing from you

regards,

sairam

Hello Sairam,

In this case, if I have different traffic flows, ultimately they will all be in class-default. Thus, that many flow queues may be created inside class-default. What is the tool used here to drop the excess flows, as the CDT does in typical WFQ?

There are several mechanisms here:

  • You can use the command queue-limit to define the maximum size of the entire default queue (encompassing all conversation queues if the WFQ is activated in the default queue)
  • You can use the WRED on the entire default queue to provision for proactive random-based drops before the queue is full

Note that if you know there are some flows that may behave badly, or that require special treatment, you should not put them into the default class but rather create a special class for them and set the QoS provisions separately. The default queue should be considered a best-effort queue. The router will try to give all flows fair treatment but with no special guarantees. If you want such guarantees, then the flow is not supposed to be in the default queue.

Also note that even if there was no CDT, the WFQ mechanism itself would still be fair enough, because its principle is always to service the queue that has been the most neglected (the so-called max-min fairness). What the CDT does is an additional improvement - if a packet arrives that is over the limit but can still be sent in time, then fine - let it go, at the expense of some other packet that would be sent later anyway. The WFQ in the Cisco implementation, as activated directly on an interface, is basically a closed and untweakable solution - you cannot modify its basic behavior - so it tries to be as smart as possible. The CDT in plain WFQ helps to ensure that a higher-priority packet arriving later can still be queued, at the expense of packets in other queues. In CBWFQ you can do this by having a separate class for such packets, whereas plain WFQ had to somehow deal with it alone. The CDT is a fine concept in plain WFQ with many implications. However, with the advent of CBWFQ, it is in a sense superfluous.
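The two mechanisms listed above might look roughly like this in configuration (hypothetical policy names; syntax varies by platform and IOS release, and the two commands are typically used as alternatives rather than together):

```
! Option 1: cap the entire default queue with queue-limit
policy-map LIMIT-DEFAULT
 class class-default
  fair-queue
  queue-limit 96         ! aggregate limit across all conversation queues

! Option 2: WRED for proactive random drops before the queue is full
policy-map WRED-DEFAULT
 class class-default
  fair-queue
  random-detect
```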

NOTE: The initial topic (CDT > HQ) is still not clarified. Please help; I have made efforts from my side to get the answer.

Well, sometimes the simplest answer is the closest to the truth - perhaps we are trying to find a sophisticated answer to a plain fact: the sanity check on the relation between the CDT and HQ simply isn't implemented in IOS. Perhaps the implementors forgot to do it, perhaps they didn't want to, as you can use this to effectively deactivate the CDT... There are several places in the IOS CLI where it is up to you to enter sane values, and IOS will not try to outsmart you and tell you that you are wrong. This might be just one of those.

Best regards,

Peter

Hi Peter,

Thanks, and I must admire your patience in typing such lengthy explanations; they are your valuable thoughts and they help engineers like me.

Also, one more observation I would like to share with you. The output of the "show policy-map interface ser 0/0" command shows some conversation parameters, like below:

QOS#show policy-map interface ser 0/0
Serial0/0

  Service-policy output: SAIRAM

    Class-map: VOIP-MATCH (match-all)
      10 packets, 1040 bytes
      30 second offered rate 0 bps, drop rate 0 bps
      Match: ip precedence 5
      Queueing
        Output Queue: Conversation 137
        Bandwidth 750 (kbps)Max Threshold 299 (packets)
        (pkts matched/bytes matched) 0/0
        (depth/total drops/no-buffer drops) 0/0/0

Even though manually configured classes use only FIFO, IOS still shows these parameters. It seems these WFQ references inside a manually configured class are not actually used by the packet scheduling.

As you said, there may be many things in IOS that need not be given much importance.

These are just thoughts and experiences I would like to share. Thanks for your time.

sairam

Hello Sairam,

You are heartily welcome.

Regarding the show policy-map interface command output you have highlighted, that is logical and correct. CBWFQ is actually built upon WFQ. However, whereas WFQ classifies flows automatically and on its own, in CBWFQ the classification is done by you (or more precisely, by class-maps), and each non-default class is given its own single conversation queue. In essence, each class in a policy-map is given its own queue. Thus, a policy-map is a system of queues. The weight of each queue is calculated as the interface bandwidth divided by the class bandwidth. Now you have all you need for running WFQ on these queues - they have their weights, and there is a mechanism for classifying packets into them. The WFQ does not run inside these queues; rather, it runs using these queues as the only queues it works with.

The only exception here is the default class: within this class, the WFQ also performs its own subsequent classification and uses its own conversation queues to schedule packets.

Therefore, the CBWFQ can be seen as a WFQ in which the automatic classification has been replaced by user-created classes and in which the weights are assigned to classes, not to individual packets. After this, the principle of scheduling packets is the same as with plain WFQ - the packets with the lowest sequence numbers will be served first.
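The scheduling principle above can be sketched roughly in Python (a simplified model invented for illustration; the real WFQ/CBWFQ sequence-number arithmetic is more involved, but the weight formula follows the interface-bandwidth-divided-by-class-bandwidth description above):

```python
import heapq

def schedule(packets, interface_kbps):
    """packets: list of (class_name, class_kbps, size_bytes) in arrival
    order. Returns the service order as indices into the input list:
    the packet with the lowest WFQ sequence number is served first."""
    last_sn = {}  # last assigned sequence number per class queue
    heap = []
    for i, (cls, class_kbps, size) in enumerate(packets):
        w = interface_kbps / class_kbps  # weight = interface bw / class bw
        sn = last_sn.get(cls, 0) + w * size
        last_sn[cls] = sn
        heapq.heappush(heap, (sn, i))
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]
```

The class with the larger bandwidth share gets the smaller weight, so its packets accumulate sequence numbers more slowly and are served sooner.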

You may be interested in reading this article; it contains rough information about CBWFQ:

http://www.cisco.com/en/US/tech/tk39/tk824/technologies_tech_note09186a0080093d62.shtml

Best regards,

Peter

Peter Paluch
Cisco Employee

Hello Sairam,

I do not know if you removed your second comment in this thread - I cannot see it now, but it seems to me that it contained valid information.

Anyway, in Cisco's WFQ, the CDT value applies per conversation queue, while the hold queue limit applies to the total number of packets in the WFQ system. When a packet arrives and is about to be WFQ-enqueued, this decision process is invoked:

  1. Is the Hold Queue limit exceeded? If yes, drop the packet immediately without enqueueing it. Otherwise, proceed to the next step.
  2. Is the CDT value exceeded for the conversation queue into which the packet belongs? If not, enqueue the packet and stop the decision process. If the CDT is exceeded, proceed to the next step.
  3. Is there a packet with a higher WFQ sequence number in any conversation queue? If yes, drop the packet with the highest sequence number (it may be in a different queue) and enqueue the current packet. Otherwise, drop the current packet (obviously, it has the highest sequence number itself).

The sequence number in step 3 refers to the sequence numbers as calculated and internally assigned by the WFQ. You can see that the CDT actually works in a somewhat non-intuitive way. It is not a hard limit on a particular conversation queue's size. Under some circumstances, a particular conversation queue's size may exceed the CDT, however, at the expense of other queues. By this approach, the WFQ throttles the most aggressive flows, as their sequence numbers will be the highest and thus the most eligible for discarding if the CDT is exceeded for any conversation queue.

By far the best description of the WFQ I have encountered can be found in the QoS Exam Certification Guide by Wendell Odom and Michael Cavanaugh. I strongly recommend reading it. Not even official Cisco webpages go into such deep detail.

Best regards,

Peter
