Apologies in advance for the long post; maybe you already closed the browser when you saw it :)
I have been researching this on the internet for a long time, but I couldn't find an explanation that answers my questions. I don't need definitive answers; even your opinions are enough. I will describe the question the way I understand the QoS concept, so please feel free to correct any misconceptions.
The main purpose of QoS is managing traffic flow during congestion; additional purposes are shaping or policing specific flows, or marking them even when there is no congestion.
When there is no congestion, there are no software queues. Packets are simply switched between interfaces and placed into the hardware queues. There is no need to prioritize packet flows, since without congestion packets flow out as soon as they arrive (ignoring serialization delay in this scenario).
When congestion occurs, packets start being dropped. By default, the packet at the tail is dropped without checking whether it is a voice/mission-critical packet or not. To avoid losing critical data, the configured queuing strategy kicks in and activates software queues. These software queues are created manually in Priority Queuing and Custom Queuing, and dynamically in WFQ, CBWFQ, and LLQ in accordance with class maps. They sit in front of the hardware queue, like the following diagram:
Q1 ------\
Q2 ------- --> HQ ------
Q3 ------/
In Custom Queuing and CBWFQ, packets from the software queues are placed into the hardware queue in a round-robin fashion. The advantage of CBWFQ, besides NBAR support and so on, is that you can assign a bandwidth limit per queue, unlike CQ. Here is the first question:
I classify the necessary traffic, say voice at 20% of the bandwidth, SQL at 40%, and web at 15%, with the remaining 25% left to class-default for other traffic such as routing protocol updates, to which I applied WRED to prevent tail drop. Long story short, I manually defined how the traffic should share my total bandwidth, say a T1 line at 1.5 Mbps.
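For reference, the setup I describe would look something like this in IOS MQC (the class names, match criteria, and interface are just my assumptions, not a tested config):

```
class-map match-all VOICE
 match ip dscp ef
class-map match-all SQL
 match protocol sqlnet
class-map match-all WEB
 match protocol http
!
policy-map MY-CBWFQ
 class VOICE
  bandwidth percent 20
 class SQL
  bandwidth percent 40
 class WEB
  bandwidth percent 15
 class class-default
  random-detect
!
interface Serial0/0
 bandwidth 1544
 service-policy output MY-CBWFQ
```

The remaining 25% falls to class-default implicitly; `random-detect` enables WRED there instead of tail drop.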
So if congestion occurs, my desired packets will still be forwarded to the hardware queue without being dropped; then why would I still need to prioritize voice over the others and use LLQ? Is it because, if I have too many classes and thus too many queues, the round-robin scheduler may take long enough to come back around to the voice queue (on top of the delay caused by other interference along the path, distance, and hops, against the ~150 ms voice delay budget) and cause jitter? Is that why Cisco recommends a maximum of 11 classes?
That is the question I am 90% sure the answer to is "yes", but I still wanted to hear your opinions. The real question is this:
The above is why we should give priority to one voice queue; that makes sense. Priority Queuing has 4 queues (high, medium, normal, low). The difference is that if a packet arrives in a higher-priority queue, the scheduler restarts from the high-priority queue without completing its current pass. So if the high (or high and medium) queues are overwhelmed, the scheduler never gets to the normal and low queues, which is a drawback.
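For context, legacy PQ is configured with a priority list rather than an MQC policy; a minimal sketch (the access-list number, UDP port range for RTP, and SQL port are assumptions):

```
access-list 101 permit udp any any range 16384 32767
priority-list 1 protocol ip high list 101
priority-list 1 protocol ip medium tcp 1433
priority-list 1 default normal
!
interface Serial0/0
 priority-group 1
```

With this, anything matching access-list 101 goes to the high queue and is always served first, which is exactly the starvation behavior described above.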
My question is: in terms of prioritization (ignoring NBAR and other functionality), what makes LLQ different from Priority Queuing?
The difference I see is that you assign "priority percent x", say 15%, to one or two classes, and "bandwidth percent" to the remaining classes. Now how will the packets in the prioritized queue be treated? Let's say the prioritized traffic exceeds that 15% of the assigned bandwidth; when will the packets in the class-based queues be processed? Must they wait for the prioritized queue to empty? If yes, then there is no difference between Priority Queuing and LLQ; if no, what is the difference?
PQ: it starves other traffic while serving packets in its queue. In fact, PQ is so greedy that other traffic may never get serviced as long as there are packets in the PQ.
LLQ: It combines the bandwidth reservation of CBWFQ with PQ.
LLQ uses policing to ensure that the bandwidth configured for the priority queue is not exceeded during congestion; when it is exceeded, LLQ drops the excess packets and then services the non-LLQ queues.
Hence LLQ guarantees bandwidth only up to the contracted amount.
E.g., if an LLQ class has 30 kbps of bandwidth reserved, it can carry a single G.729 call. If a second call comes in, the policer will discard packets and both calls will sound bad. Hence it is important to use CAC to limit the number of calls to what the configured LLQ bandwidth can carry, so the policer does not drop packets.
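In MQC terms, that example would be configured roughly like this (class names and the interface are placeholders; `priority 30` reserves 30 kbps for the low-latency queue and attaches the implicit policer):

```
policy-map MY-LLQ
 class VOICE
  priority 30
 class SQL
  bandwidth percent 40
 class class-default
  fair-queue
!
interface Serial0/0
 service-policy output MY-LLQ
```

Under congestion, VOICE is always served first but is policed to 30 kbps, so SQL and class-default still get their shares, which is the key difference from legacy PQ.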
LLQ traffic uses the DiffServ EF PHB.
The EF PHB has 2 components:
1. Queueing, to provide low delay, jitter, and loss, plus a guaranteed bandwidth.
2. Policing, to prevent EF/LLQ traffic from starving other traffic of bandwidth.
So in summary: PQ will always serve PQ traffic first and will starve other traffic of bandwidth.
LLQ, on the other hand, will serve the LLQ queues up to the bandwidth configured for them and then service the other queues too.
HTH, Please rate useful posts