Question... During times of no congestion on an interface with LLQ configured, is the bandwidth truly reserved for the LLQ queue? Or can other traffic classes use as much bandwidth as they need until there is congestion? For example, we have a FastEthernet interface with an LLQ queue defined for 50% of the bandwidth. Does that mean that even if the traffic matching the LLQ class (voice, for example) is only using 1 Mb/s, there are 49 Mb/s of wasted bandwidth that other traffic classes cannot use? Or does the LLQ bandwidth only come into play if/when there is congestion on the interface?
So what is defined as 'congestion'? Is congestion reaching the interface bandwidth (or configured bandwidth), or is it just exceeding the hardware buffer, AKA the 'tx-ring-limit'? I'm trying to get a good answer on when to expect packets to reach the software queues (CBWFQ, LLQ, etc.). The reason is that I have a Serial T3 interface (shaped down to 10 Mb/s, since that's the CIR) where I keep seeing drops and queued packets in the 'class-default' queue, although I never see the bandwidth reaching the limit when these drops/queues occur. So I'm confused as to why packets are reaching the queue to begin with, since there's plenty of free bandwidth, and also why there are drops.
After further research and working with TAC, it was determined that the reason I am seeing drops on what appears to be an underutilized interface is microbursts. The microbursts are filling up the tx-ring and software queues, which causes the drops that I am seeing. And since microbursts are very short spikes, they don't show up in the SNMP or interface usage averages.
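To make the microburst explanation concrete, here is a rough simulation (not IOS behavior, just the queueing arithmetic) of a 10 Mb/s shaped link with a small output queue. The queue depth, burst length, and packet size are all made-up illustrative numbers. A single 100 ms burst arriving at the upstream T3's 45 Mb/s line rate overflows the queue and causes drops, yet the one-second average utilization stays well under half the shaped rate:

```python
LINK_BPS = 10_000_000            # shaped rate (bits per second), per the post
PKT_BITS = 1500 * 8              # packet size in bits (illustrative)
QUEUE_LIMIT = 64                 # tx-ring + software queue depth, packets (made up)
TICK = 0.001                     # simulate in 1 ms steps

drain_per_tick = LINK_BPS * TICK / PKT_BITS   # ~0.83 packets drained per ms

def simulate(arrivals_per_tick):
    """Feed packets into a finite queue drained at the shaped rate."""
    queue = 0.0
    drops = 0.0
    sent = 0.0
    for arriving in arrivals_per_tick:
        queue += arriving
        if queue > QUEUE_LIMIT:               # queue full: tail drop
            drops += queue - QUEUE_LIMIT
            queue = QUEUE_LIMIT
        tx = min(queue, drain_per_tick)
        queue -= tx
        sent += tx
    return drops, sent

# One second of traffic: idle except a single 100 ms burst arriving at the
# line rate of a faster upstream hop (e.g. a 45 Mb/s T3 feeding the shaper).
burst = 45_000_000 * TICK / PKT_BITS          # ~3.75 packets arriving per ms
arrivals = [burst if 100 <= i < 200 else 0.0 for i in range(1000)]

drops, sent = simulate(arrivals)
avg_util = sent * PKT_BITS / LINK_BPS         # fraction of 1 s link capacity used
print(f"drops={drops:.0f} packets, average utilization={avg_util:.0%}")
```

The averaged counters that SNMP polls see only `avg_util`, so the burst that actually filled the queue is invisible at that time scale.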
I consider you to have congestion anytime a frame/packet needs to be queued.
As your later post describes, congestion can occur on a micro time scale, i.e. multi-second to sub-second, and is often "invisible" to higher-level monitoring tools that sample on multi-minute time scales.
A short-term burst arriving at a high ingress rate can easily overflow a slower egress interface's queue.
The resulting drops may cause a fast sender to slow down or pause (a lot), which lowers its average transmission rate - this also "hides" the issue from bandwidth monitoring.
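The backoff effect can be sketched with a toy additive-increase/multiplicative-decrease sender (a crude stand-in for TCP, with made-up step sizes) feeding the 10 Mb/s egress: the sender briefly peaks above link capacity, a drop halves its rate, and the resulting sawtooth keeps the measured average well below the peaks that caused the drops:

```python
LINK = 10.0        # egress capacity in Mb/s (per the post)
rate = 1.0         # sender's current rate in Mb/s
samples = []
for _ in range(1000):
    samples.append(rate)
    if rate > LINK:        # queue overflows -> drop -> multiplicative decrease
        rate /= 2
    else:
        rate += 0.1        # additive increase per interval (illustrative)

peak = max(samples)
avg = sum(samples) / len(samples)
print(f"peak={peak:.1f} Mb/s, average={avg:.1f} Mb/s")
```

A bandwidth graph built from the averaged samples never shows the link as full, even though the sender repeatedly hit the limit and took losses.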
As per my understanding, the LLQ configuration affects the scheduler's behavior - how the scheduler will dequeue the next packet. In LLQ, the scheduler always checks the LLQ first: if there is a packet in the LLQ, it dequeues it; if not, it picks the next packet from the other non-LLQ queues. With this logic, if there is no traffic in the LLQ, the scheduler will pick continuously from the non-LLQ queues as long as the tx-ring has room. If there is no room, it waits and then applies the same logic again. So if there is no traffic in the LLQ, other classes can utilize the complete available/link bandwidth.
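The dequeue logic described above can be sketched as follows. This is not IOS source code, just the strict-priority selection rule; the class names are invented, and the scan over non-LLQ queues is a simplification of CBWFQ's weighted scheduling:

```python
from collections import deque

def dequeue(llq, other_queues):
    """Return the next packet to hand to the tx-ring, or None if all queues are empty."""
    if llq:                          # strict priority: serve the LLQ first
        return llq.popleft()
    for q in other_queues:           # simplified scan; real CBWFQ weights by bandwidth
        if q:
            return q.popleft()
    return None

voice = deque()                      # LLQ class, currently empty
bulk = deque(["data1", "data2"])     # non-LLQ class (hypothetical)
web = deque(["http1"])               # non-LLQ class (hypothetical)

# With no voice traffic queued, the other classes get every transmit slot.
order = [dequeue(voice, [bulk, web]) for _ in range(3)]
print(order)                         # ['data1', 'data2', 'http1']

# As soon as a voice packet arrives, it jumps to the head of service.
bulk.append("data3")
voice.append("rtp1")
nxt = dequeue(voice, [bulk, web])
print(nxt)                           # 'rtp1'
```

This is why an idle LLQ wastes nothing: the "priority" bandwidth is a scheduling guarantee under contention, not a static carve-out.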