Cisco Support Community

QOS and Queuing

Hello

Please forgive the long read; maybe you already closed the browser when you saw how long this post is :)

I have been investigating this on the internet for a long time, but couldn't find an explanation that clears up my questions. I don't need definitive answers; even your opinions are enough. I will describe the question the way I understand the QoS concept, so please feel free to correct any misconceptions.

The main purpose of QoS is handling traffic flow in case of congestion; additional purposes are shaping or policing specific flows, or marking them even when there is no congestion.

When there is no congestion, there are no software queues. Packets are simply switched between interfaces and placed into the hardware queues. There is no need to prioritize packet flows, since without congestion packets flow out as soon as they arrive (ignore serialization delay in this scenario).

As congestion occurs, packets are dropped. By default, the packet at the tail is dropped without checking whether it is a voice/mission-critical packet or not. To avoid losing critical data, the configured queuing strategy kicks in and activates software queues. These software queues are manually created in Priority Queuing and Custom Queuing, and dynamically created in WFQ, CBWFQ and LLQ in accordance with class maps. They sit in parallel in front of the hardware queue, as in the following diagram:

Q1------ \

Q2------ \

Q3------ --> HQ-------

Q4------ /

Q5------ /

In Custom Queuing and CBWFQ, packets from the software queues are placed into the hardware queue in a round-robin fashion. The advantage of CBWFQ, besides NBAR and so on, is that you can assign the bandwidth limit that can be used per queue, unlike CQ. Here is the first question.

I classify the necessary traffic, say voice 20% bandwidth, SQL 40% and web 15%, with 25% left to class-default for other traffic like routing protocol updates, to which I applied WRED to prevent tail drop. Long story short, I manually defined the traffic that should use my total, say a T1 line at 1.5 Mbps.

So if congestion occurs, my desired packets will still be forwarded without congestion to the hardware queue. Why, then, would I still need to prioritize voice over the others and use LLQ? Is it because if I have too many classes, and thus too many queues, the round-robin processing may take long enough (on top of the 150 ms delay budget already consumed by other interference along the path, distance and hops) before it arrives at the voice queue, causing jitter? Is that why Cisco recommends a maximum of 11 classes?
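As a back-of-envelope sketch of that worry (assuming, purely for illustration, that each competing class sends one 1500-byte packet per round-robin cycle on a T1, which is my own simplified model, not a Cisco figure), the worst-case wait before the voice queue is served again can be estimated:

```python
# Back-of-envelope worst case: a round-robin scheduler sends one
# maximum-size packet from every other class before returning to voice.
# All numbers are illustrative assumptions, not measured values.

LINK_BPS = 1_544_000   # T1 line rate
MTU_BYTES = 1500       # assumed maximum packet size

def worst_case_wait_ms(other_classes: int) -> float:
    """Serialization time of one MTU-size packet per competing class."""
    per_packet_s = MTU_BYTES * 8 / LINK_BPS
    return other_classes * per_packet_s * 1000

for n in (3, 10):
    print(f"{n} other classes -> {worst_case_wait_ms(n):.1f} ms worst-case wait")
```

With ten competing classes, the wait alone approaches half of the usual 150 ms one-way voice delay budget, which is one intuition both for keeping the class count small and for LLQ.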

Above is the question I am 90% sure the answer to is yes, but I wanted to hear your opinions anyway. The real question is this.

The above is why we should give priority to one voice queue; that makes sense. Priority Queuing has 4 queues (high, medium, normal, low). The difference is that if a packet arrives in a higher-priority queue, the round-robin process starts again from the high-priority queue without completing its cycle. So if the high (or high and medium) queues are overwhelmed, processing will never reach the normal and low queues, which is a drawback.

My question is: purely on the basis of prioritization (ignoring NBAR and other functionality), what makes LLQ different from Priority Queuing?

The difference I saw is that you assign "priority percent x", say 15%, to one or two classes, and assign "bandwidth percent" to the remaining classes. Now how will the packets in the prioritized queue be treated? Let's say that 15% of assigned bandwidth is overwhelmed by the prioritized traffic; when will the packets in the class-based queues be processed? Should they wait for the prioritized queue to empty? If yes, that means there is no difference between Priority Queuing and LLQ; if no, what is the difference?

Thanks for reading and spending time on this.

Regards

1 ACCEPTED SOLUTION


14 REPLIES
Super Bronze

Re: QOS and Queuing

"My question is, in prioritization basis (ignore the NBAR and other functionalities) what makes LLQ different over priority queuing?"

There are two differences. First, LLQ provides an implicit policer to avoid starving the non-LLQ queues of all bandwidth. Second, only the LLQ preempts the other queues, whereas in PQ each queue preempts all queues of lower priority.
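A toy simulation illustrates the starvation difference (my own simplified model: the link transmits one packet per tick, and the LLQ policer is modeled as a token bucket at half the link rate; this is a sketch, not IOS behavior):

```python
from collections import deque

TICKS = 100  # toy model: the link transmits exactly one packet per tick

def strict_pq(high_per_tick):
    """Classic PQ: the high queue always preempts, so under a sustained
    overload the low queue is never served."""
    high, low_sent = deque(), 0
    for _ in range(TICKS):
        high.extend("H" * high_per_tick)   # arrivals this tick
        if high:
            high.popleft()                 # high always wins the slot
        else:
            low_sent += 1                  # low sends only on idle ticks
    return low_sent

def llq(high_per_tick, priority_rate=0.5):
    """LLQ sketch: an implicit policer admits priority packets only up
    to priority_rate of the link and drops the rest, so the remaining
    slots still serve the other classes."""
    high, low_sent, tokens = deque(), 0, 0.0
    for _ in range(TICKS):
        tokens = min(tokens + priority_rate, 1.0)
        for _ in range(high_per_tick):
            if tokens >= 1.0:
                high.append("H")
                tokens -= 1.0
            # else: the policer drops the excess priority packet
        if high:
            high.popleft()
        else:
            low_sent += 1
    return low_sent

print(strict_pq(2))  # 0  -> low class completely starved
print(llq(2))        # 50 -> low class still gets half the link
```

With the high class offering twice the link rate, strict PQ never serves the low class, while the LLQ policer caps the priority class and leaves the rest of the link to the others.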

"Main purpose of QOS is handling the traffic flow in case of a congestion, and additional purposes are shaping or policing the specific flows, or marking them although there is no congestion."

Although dealing with congestion, etc., is a common practical aspect of QoS, the concept is broader. If you consider that much of IP is just "best effort", i.e. no service guarantees, QoS attempts to provide service guarantees.

"When there is no congestion, there are no software queues."

It would be a very, very unusual data network that doesn't have congestion. However, much congestion is very brief and doesn't noticeably adversely impact data traffic. Often the software queues don't see the congestion because the hardware queues haven't overflowed into them.

". . . which I applied WRED to prevent tail-drop happening."

You can still have tail drops with WRED. It's generally ineffective with most traffic other than TCP, and even with TCP it's a somewhat difficult technology to use so as to obtain the benefit Dr. Floyd intended. (Perhaps the biggest benefit, for those who activate it, is the "elastic" queue size.)
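For reference, the classic RED decision that WRED applies can be sketched as follows (the thresholds below are illustrative, not IOS defaults; WRED is "weighted" because it runs this curve with a different threshold set per IP precedence or DSCP):

```python
import random

def wred_drop(avg_qlen, min_th=20, max_th=40, max_p=0.1):
    """Textbook RED decision for one packet, driven by the *average*
    queue depth. min_th/max_th/max_p are illustrative values only."""
    if avg_qlen < min_th:
        return False          # below min threshold: never drop
    if avg_qlen >= max_th:
        return True           # at/above max threshold: drop (tail-like)
    # between thresholds: drop probability ramps linearly up to max_p
    p = max_p * (avg_qlen - min_th) / (max_th - min_th)
    return random.random() < p

print(wred_drop(10))   # False: average queue is short, packet admitted
print(wred_drop(45))   # True: average queue is past the max threshold
```

The `avg_qlen >= max_th` branch is exactly the "you can still have tail drops" point: once the average queue passes the maximum threshold, WRED drops everything, just like tail drop.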

"In Custom Queuing and CBWFQ, packets from software queues are placed into hardware queue in a round-robin fashion."

It's more involved than a simple round-robin of packets.

"Is that why Cisco recommends max class amount as 11? "

Haven't seen that. Can you provide a reference? [edit - You don't mean Cisco's current 11 class QoS model, do you?]

Re: QOS and Queuing

Joseph,

Thanks for your brief explanation. I can't remember (I googled but didn't find) the source for that recommendation of 11. It was something like at least 4 and at most 11. Anyway, something is still not clear in my mind about prioritization.

Please correct me if I am wrong, but "bandwidth" and "priority" are two different things here. Let me try to explain.

"First, LLQ provides implicit policiers to avoid starving the non-LLQ queues from all bandwidth"

Awesome! Let's say we have a 1.5 Mbps serial interface. That means if more than 1.5 megabits per second is sent to that interface, its hardware queue will drop packets FIFO-style. So we create, let's say, 4 classes, as follows:

priority percent 15

bandwidth percent 30

bandwidth percent 10

bandwidth percent 20

rest: 25%

Great, now we have created 4 software queues that packets enter before the hardware queue, and these queues are set up so that overall traffic will never exceed 100% of the bandwidth, and the prioritized queue will never go above 15%, while the others can if bandwidth is available. This is done.
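Translating those percentages into absolute rates on a T1 is straightforward arithmetic; a small sketch (the class names are just labels for this example):

```python
LINK_KBPS = 1544  # T1

# The example allocation above; class-default gets the remainder.
classes = {"voice (priority)": 15, "sql": 30, "web": 10, "other": 20}
classes["class-default"] = 100 - sum(classes.values())

def kbps(percent):
    """Convert a bandwidth/priority percent into an absolute rate."""
    return LINK_KBPS * percent / 100

for name, pct in classes.items():
    print(f"{name:16s} {pct:3d}% -> {kbps(pct):6.1f} kbps")
```

So "priority percent 15" on this link is a guarantee (and, under congestion, a policed cap) of about 231.6 kbps for the priority class.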

But now I have the 4 queues I created in my hand, and 1 hardware queue. I told one of them, "You are prioritized." Now how should I place the packets waiting in the software queues into the hardware queue? Here, for instance, is a question I made up.

Each letter is 1 packet. V = Voice, S = SQL Replica, W = Web, P = P2P. Each queue is 4096 bytes long; let's say I used a nice codec so that 6 voice packets fit in the queue, web packets are larger, P2P packets larger still, and so on. These queues are created and filled in such a way that you (the router), as an engine that can process 1.544 megabits per second, will write them into your hardware queue in 1 second without dropping any packets (the dropping phase, if it happened at all, was before entering the software queues).

PQ1: VVVVVV

Q2: SSSS

Q3: PPP

Q4: WWWWW

So the question is, how would you fill the following Hardware Queue ?

HQ: ?????????????????? ->Line

If I told Q1 that I am going to prioritize your packets, then something like following should happen,

HQ: WWSWPSWPSWPSVVVVVV ->Line

If that is true, what happens if a few more voice packets enter the PQ while the other queues are being processed? If the scheduler turns back to voice again and leaves the other queues untouched, that exactly defines Priority Queuing. If it says, "for 6 PQ packets, I will process x packets of that queue and y packets of this queue," then that is what Custom Queuing is. And what does LLQ do?

I am trying to draw a picture of this entire process in my head, which is essential for me to learn. Please do not hesitate to make corrections or describe your whole point of view on this.

Thank you

Re: QOS and Queuing

If you have time, slide 19 of the following presentation describes this issue, but the most critical part of the diagram is missing.

http://www.cisco.com/comm/applications/ecomm/qlm/ccnp/ONT/ONT10S04L05_QLM/player.html

I attached the missing part (in my opinion): there should be a symbol there (square, rectangle, something) that represents the "decision" between the Priority Queue and the CBWFQ scheduler in LLQ. By the way, what is it scheduling?

Super Bronze

Re: QOS and Queuing

Packets are drawn from the LLQ until it's empty.

Gold

Re: QOS and Queuing

One thing to point out here is that these percentages do not police the traffic.

The priority class is policed only when there is contention; when there is none, it can send all it wants. The bandwidth command places no upper limit, even during contention.

Under congestion, all the classes that have the bandwidth option will fight with the "rest" class for a share of that bandwidth. The other issue that makes this very complex is that the "rest" percentage will also include any unused bandwidth from the other classes.
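One way to picture that redistribution is the sketch below: every class first receives min(demand, guarantee), and leftover capacity is then split among still-hungry classes in proportion to their configured percentages. This mirrors the commonly described CBWFQ excess-sharing behavior, not a statement of exact platform behavior (class names and rates are hypothetical):

```python
def share_bandwidth(link_kbps, configured_pct, demand_kbps):
    """Sketch of excess-bandwidth sharing: guarantees first, then
    leftover capacity split among still-hungry classes by weight."""
    got = {c: min(demand_kbps[c], link_kbps * p / 100)
           for c, p in configured_pct.items()}
    leftover = link_kbps - sum(got.values())
    while leftover > 1e-9:
        hungry = {c: p for c, p in configured_pct.items()
                  if got[c] < demand_kbps[c] - 1e-9}
        if not hungry:
            break
        weight_sum = sum(hungry.values())
        given = 0.0
        for c, w in hungry.items():
            extra = min(leftover * w / weight_sum, demand_kbps[c] - got[c])
            got[c] += extra
            given += extra
        leftover -= given
        if given <= 1e-9:
            break
    return got

# "sql" is backlogged, "web" only needs 100 kbps: sql's guarantee plus
# all of web's unused share flows to sql.
print(share_bandwidth(1544, {"sql": 30, "web": 10}, {"sql": 2000, "web": 100}))
```

Note how the class that only offers 100 kbps keeps exactly 100 kbps, and everything it does not use ends up with the backlogged class, which is the "unused bandwidth" effect described above.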

Re: QOS and Queuing

Thanks for clarification Tim, any thoughts about that Prioritization?

Super Bronze

Re: QOS and Queuing

I might understand your confusion.

Before any software queue takes packets, packets are normally sent to the interface's hardware queue, which is FIFO. When/while that queue is full, newly arriving packets are then enqueued into the software queues. When the hardware queue drains such that it wants additional packets, they are drawn from software queues. Any packets within the LLQ are sent first to the hardware queue. When the LLQ is empty, then packets are drawn from the other queues but in proportion to their bandwidth allocations.

The FIFO nature of the hardware queue can allow non-LLQ class packets ahead of LLQ class packets, because neither has been queued within the software queues. Usually the hardware queue isn't very deep, so it's often not a problem. In situations where it might be, you may be able to decrease its depth so that it fills sooner, allowing the software queues to prioritize as desired/expected.

Re: QOS and Queuing

Joseph,

Thanks for your great response. We are getting closer. The key sentence is: "When the LLQ is empty, then packets are drawn from the other queues but in proportion to their bandwidth allocations."

I think the missing link, or the misconception in my mind, is the relationship between "bandwidth" and the "queue", since the queue itself is measured in packets while bandwidth is measured in bps.

Can you explain that relationship?

Thanks

Super Bronze

Re: QOS and Queuing

Yes, queues contain packets, but the dequeue process should account for bytes. Bytes should correspond to the bandwidth proportion defined for the class.

Given

class x

bandwidth percent 10 (or 5, or 40)

class y

bandwidth percent 10 (or 5, or 40)

(and assuming only these two classes competing for bandwidth)

If class x and class y packets are the same size, then the outbound flow should look like xyxyxy, etc.

However if class x packets were twice the size of class y packets, xyyxyyxyy, etc.

Given

class x

bandwidth percent 20 (or 10, or 40)

class y

bandwidth percent 10 (or 5, or 20)

If class x and class y packets are the same size, then the outbound flow should look like xxyxxyxxy, etc.

However if class x packets were twice the size of class y packets, xyxyxyx, etc.
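The interleaving patterns above can be reproduced with a deficit-round-robin sketch, where each class earns byte credit in proportion to its weight every round. DRR is an approximation of byte-based CBWFQ scheduling, not necessarily the exact IOS algorithm, but it yields all four of the sequences described:

```python
from collections import deque

def drr(order, size, weight, quantum, rounds):
    """Deficit round robin: each round a class earns quantum * weight
    bytes of credit and dequeues whole packets while credit remains."""
    queues = {c: deque([size[c]] * 64) for c in order}   # deep backlogs
    deficit = {c: 0 for c in order}
    out = []
    for _ in range(rounds):
        for c in order:
            deficit[c] += quantum * weight[c]
            while queues[c] and deficit[c] >= queues[c][0]:
                deficit[c] -= queues[c].popleft()
                out.append(c)
    return "".join(out)

# equal weights, equal 100-byte packets -> strict alternation
print(drr("xy", {"x": 100, "y": 100}, {"x": 1, "y": 1}, 100, 3))  # xyxyxy
# equal weights, x packets twice as large -> one x per two y
print(drr("xy", {"x": 200, "y": 100}, {"x": 1, "y": 1}, 200, 3))  # xyyxyyxyy
# x given twice the bandwidth, equal sizes -> two x per y
print(drr("xy", {"x": 100, "y": 100}, {"x": 2, "y": 1}, 100, 3))  # xxyxxyxxy
# twice the bandwidth AND twice the size -> back to alternation
print(drr("xy", {"x": 200, "y": 100}, {"x": 2, "y": 1}, 100, 3))  # xyxyxy
```

The last case shows why the dequeue process must account for bytes, not packets: doubling both the weight and the packet size cancels out, leaving a 1:1 packet alternation.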

Re: QOS and Queuing

Joseph,

Thanks for hanging in there with me, I really appreciate it. With your example, I figured out the relation between configured bandwidth and packet placement. Once that missing part was found, the rest fell into place. The confusing part was that there were so many different units to take into consideration: packets for queue measurement, bps for bandwidth, bytes for packets, and most importantly the time interval "per second". That per-second view was why I couldn't fit prioritizing into the picture in my mind. The bandwidth was being divided and shared between queues, e.g. q1 50 kbps, q2 70 kbps, and it is the same second in which q1 and q2 process their packets, so how does prioritizing occur? Then I understood that a second (or whatever time interval) was actually being divided among the queues, with bandwidth assigned accordingly; that is where the terms "first" and "order" come in. A second was the minimum time interval for me to picture this, but it is actually a very long time for a router :)

And the following is the complete picture in my mind

Re: QOS and Queuing

A 1.544 Mbit line can transmit approximately 188.47 KBytes of data in one second without dropping a single packet (assuming it is not affected by factors like serialization). The data packets will be dispatched to the line from the output hardware queue in the same order they arrived at the queue (FIFO). For instance, if 300 KBytes of data are sent toward that interface (from LAN stations over a shared segment, for example) in one second, congestion will occur, since 1.544 Mbit can transmit only 188.47 KBytes of data in one second. In that case, 300 - 188.47 = 111.53 KBytes, i.e. (111.53 x 1024) / MTU = 76 packets, the ones arriving last at the queue, will be dropped (tail drop) by default in every second that the 111.53 KBytes per second (913 Kbits per second) of excess continues (ignoring TCP synchronization to focus on the dropping mechanism only). So it seems safe to conclude that the hardware queue length equals the maximum amount of data in bytes that it can dispatch to the line without dropping packets, divided by the MTU (the unit for the queue is "packets"). So we get a hardware queue 128 packets long for a T1 line.
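The arithmetic above can be checked quickly under the same assumptions (1500-byte MTU, 1 KB = 1024 bytes; note that the 128-packet queue length is this post's reasoning, not a documented IOS rule):

```python
LINE_BPS = 1_544_000   # T1
MTU = 1500             # bytes per packet, as assumed above

bytes_per_sec = LINE_BPS / 8            # 193,000 bytes per second
kb_per_sec = bytes_per_sec / 1024       # ~188.48 KBytes (1 KB = 1024 B)
print(f"line capacity: {kb_per_sec:.2f} KB/s")

offered_kb = 300                        # offered load per second
excess_kb = offered_kb - kb_per_sec     # ~111.5 KB/s cannot fit
drops_per_sec = excess_kb * 1024 / MTU  # ~76 tail-dropped packets/s
print(f"dropped: ~{drops_per_sec:.0f} packets per second")

queue_len = int(bytes_per_sec / MTU)    # ~128 packets
print(f"hardware queue by this reasoning: ~{queue_len} packets")
```

The figures come out as stated: roughly 188.5 KB/s of capacity, about 76 drops per second for a 300 KB/s offered load, and 128 MTU-size packets per second of line capacity.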

Since the hardware queue, which uses FIFO as its queuing strategy by default, cannot be modified to suit our needs, software queues can be implemented in a parallel fashion behind the hardware queue, utilizing router memory.

In CBWFQ, each class has a manually dedicated minimum amount of bandwidth in bits per second (a percentage of bandwidth also translates to a bps value), which will be used to transmit the packets waiting as bytes in a software queue to the hardware queue. Some non-Cisco resources say that CBWFQ guarantees the amount of bandwidth assigned to a class in the policy map, which is correct, but does not guarantee the "order" of the packets placed into the hardware queue; CBWFQ uses a packet-ordering algorithm which is a form of round robin. But Cisco's QLM http://www.cisco.com/comm/applications/ecomm/qlm/ccnp/ONT/ONT10S04L05_QLM/player.html (slide 7) states the opposite. It says the queue with the greater amount of bandwidth configured will have first place for its packets in the hardware queue in a time interval. Then it would be safe to conclude that if the greater amount of your traffic is voice (or any delay-sensitive data), you won't need LLQ. What happens if that winner queue wants to use more bandwidth and violates the remaining 25% left for default traffic? Well, that possibility also exists for the other queues with lower bandwidth configured, even in LLQ. I mean, if the winner queue is prioritized and thus has a maximum limit, then the non-prioritized queue with the next highest bandwidth will take over the "fight for unassigned bandwidth".

For example, let's consider a data stream VSVSVWPPVS, where V = Voice, S = SQL Replica, W = Web, P = P2P, totalling 188.47 KBytes, with each letter being one packet (this is imaginary, just for simplicity's sake, to demonstrate a full bandwidth with a small number of packets, ignoring the 25% default traffic and any reasonable MTU). I put in 10 packets, so each packet is generated by 10% of the interface bandwidth.

VSVSVWPPVS arrives at a CBWFQ-configured interface, in the congested flow direction, in one second. The CBWFQ configuration is "V = bandwidth percent 40", "S = bandwidth percent 30", "P = bandwidth percent 20", "W = bandwidth percent 10". The placement of that stream into the hardware queue would then be "WPPSSSVVVV --> Line".

Re: QOS and Queuing

But what if the type of traffic with the lower amount of bandwidth configured should have the first seats in the hardware queue, just like in today's networks (Voice

Any comments or advice appreciated.

Super Bronze

Re: QOS and Queuing

Perhaps it might further your understanding to look at the purpose of CBWFQ beyond trying to account for how each packet is sequenced or dequeue-scheduled from the CBWFQ queues.

Excluding FQ within a class, each class attempts to mimic a link of some minimum guaranteed capacity.

For instance, if we have class X defined to have 500 Kbps, the minimal performance for traffic in that class would be similar to a link of 500 Kbps. If the average traffic rate "offered" to the class was only 100 Kbps, traffic in that class would be unlikely to see any queuing within that class. If the average traffic rate offered to the class was 1 Mbps, and assuming excess bandwidth was not available, traffic would queue and, usually by default, tail drop within that class queue, since class queues are usually FIFO.

Now if a CBWFQ class can mimic a link of some defined bandwidth, why use LLQ, since it too has a defined bandwidth? Although it guarantees a certain amount of bandwidth, LLQ and "normal" CBWFQ classes do differ slightly. First, if there's overall link congestion, the LLQ policer caps bandwidth by dropping packets, whereas a normal class will attempt to enqueue the excess packets. Second, LLQ preempts the other normal class queues, so there's often less jitter within its class traffic (often important for VoIP). Third, on many platforms, if you use FQ within class-default, the class bandwidth guarantees might break down, but LLQ will not.
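That first difference can be sketched with a toy model (one tick = one service opportunity; purely illustrative, not IOS behavior): a normal class absorbs a sustained burst by queuing, building up delay, while the LLQ policer drops the excess immediately, keeping delay low:

```python
from collections import deque

def normal_class(bursts, rate, qlimit=64):
    """Normal CBWFQ class sketch: arrivals above the service rate are
    queued (FIFO) and tail-dropped only once the class queue is full,
    so a sustained burst builds up a backlog (i.e. queuing delay)."""
    q, dropped = deque(), 0
    for burst in bursts:
        for _ in range(burst):
            if len(q) < qlimit:
                q.append(1)
            else:
                dropped += 1
        for _ in range(rate):
            if q:
                q.popleft()
    return len(q), dropped

def llq_class(bursts, rate):
    """LLQ sketch: under congestion the implicit policer drops anything
    above the priority rate immediately, so no backlog (and no extra
    delay) accumulates in the priority queue."""
    dropped = sum(max(0, burst - rate) for burst in bursts)
    return 0, dropped

# 10 ticks of 5 arrivals against a service rate of 2:
print(normal_class([5] * 10, 2))  # (30, 0): 30 packets sitting in queue
print(llq_class([5] * 10, 2))     # (0, 30): 30 packets policed away
```

Same excess traffic, opposite trade-off: the normal class trades it for delay, the LLQ trades it for loss, which is usually the right choice for voice.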

Re: QOS and Queuing

Thanks for that additional info Joseph :)

252 Views · 20 Helpful · 14 Replies