Cisco Support Community
Community Member

when are QoS configs applied?

I've been under the assumption that all QoS policies are only applied during periods of congestion. That is to say, my CBWFQ is only applied during times of congestion. That makes sense to me, since tail dropping only occurs during congestion, and WFQ would only kick in during times of congestion. But what about policing and shaping? Are they exceptions, or are they too only put into effect during periods of congestion? If I have a class map catching Napster traffic and have it policed, does that policing only occur when the output queue is full? Could someone use 80 percent of the bandwidth on a T1 for Napster traffic if there wasn't any other traffic in that pipe? Congestion-only shaping makes even less sense to me; I can't think of how shaping would be beneficial only during times of congestion. If I want to shape traffic from a central site with 3 Mb of bandwidth down to 1.5 Mb when sending to a spoke with only 1.5 Mb of bandwidth, I would think that would be useful all the time. Can anyone offer any clarification?
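For reference, the hub-to-spoke shaping described above could be configured along these lines. This is only a sketch; the policy-map name and subinterface number are placeholders, not from any actual config:

```
! Hypothetical sketch: shape traffic leaving the 3 Mb hub toward a 1.5 Mb spoke.
policy-map SHAPE-TO-SPOKE
 class class-default
  shape average 1536000        ! shape to the spoke's 1.5 Mb access rate
!
interface Serial0/0/0.1 point-to-point
 service-policy output SHAPE-TO-SPOKE
```

The shaper meters all the time; it simply has no packets to delay whenever the offered rate stays under 1536 kbps.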



Hall of Fame Super Gold

Re: when are QoS configs applied?

Yes, policing and shaping are applied all the time. Of course, they won't "kick in" if the offered traffic is smaller than the configured thresholds.
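So in the Napster example, the policer runs regardless of congestion: traffic offered above the policed rate is dropped (or remarked) even on an otherwise idle T1. A minimal sketch of such a policer; the class name, NBAR match, and 128 kbps rate are illustrative assumptions, not a recommendation:

```
! Hypothetical sketch: cap Napster at 128 kbps whether or not the T1 is busy.
class-map match-any NAPSTER
 match protocol napster       ! assumes NBAR recognizes the protocol
!
policy-map POLICE-NAPSTER
 class NAPSTER
  police 128000 conform-action transmit exceed-action drop
!
interface Serial0/0/0
 service-policy output POLICE-NAPSTER
```

With this attached, a Napster flow could never take 80 percent of the T1, empty pipe or not, because the token bucket enforces the rate continuously.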

Hope this helps, please rate post if it does

Community Member

Re: when are QoS configs applied?

OK, let me write this again and maybe you can tell me if my thinking is correct. The only time, say, CBWFQ is put into effect is during times of congestion (I'm still not sure what that threshold is, though; when is an interface considered congested, 200/255, 250/255?). So if my CBWFQ policy gives business-critical apps 25% of available bandwidth, does that mean traffic I've deemed business critical is guaranteed 25% of the output bandwidth during times of congestion, which would lead to fewer drops for that traffic than for, say, bulk data traffic? I've also been looking at the output of the sh int s 0/0/0 command and am a little unclear on what the "Available Bandwidth" counter indicates. It seems to be what is left over after adding up all the percentages of my CBWFQ configuration. Example of show int output:

Hummelstown2811#sh int s 0/0/0
Serial0/0/0 is up, line protocol is up
  Hardware is GT96K with integrated T1 CSU/DSU
  MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec,
     reliability 255/255, txload 251/255, rxload 107/255
  Encapsulation FRAME-RELAY, loopback not set
  Keepalive set (10 sec)
  LMI enq sent  61, LMI stat recvd 61, LMI upd recvd 0, DTE LMI up
  LMI enq recvd 0, LMI stat sent  0, LMI upd sent  0
  LMI DLCI 0  LMI type is ANSI Annex D  frame relay DTE
  FR SVC disabled, LAPF state down
  Broadcast queue 0/64, broadcasts sent/dropped 33/0, interface broadcasts 23
  Last input 00:00:04, output 00:00:00, output hang never
  Last clearing of "show interface" counters 00:10:10
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 8441
  Queueing strategy: Class-based queueing
  Output queue: 0/1000/64/8432 (size/max total/threshold/drops)
     Conversations  0/8/256 (active/max active/max total)
     Reserved Conversations 3/3 (allocated/max allocated)
     Available Bandwidth 108 kilobits/sec
  5 minute input rate 649000 bits/sec, 261 packets/sec
  5 minute output rate 1515000 bits/sec, 250 packets/sec
     165671 packets input, 45751489 bytes, 0 no buffer
     Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     158241 packets output, 115675300 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 output buffer failures, 0 output buffers swapped out
     0 carrier transitions
     DCD=up  DSR=up  DTR=up  RTS=up  CTS=up
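The "Available Bandwidth 108 kilobits/sec" line is consistent with IOS's default rule that only 75% of interface bandwidth is reservable (the max-reserved-bandwidth default): 75% of 1536 kbit is 1152 kbit, and Available Bandwidth is what remains after subtracting the classes' reservations. The three reserved conversations above would then account for 1044 kbit. The actual classes aren't shown here, so the names and per-class numbers below are hypothetical, chosen only so the arithmetic matches:

```
! 1536 kbit * 75% = 1152 kbit reservable by default (max-reserved-bandwidth 75)
! 1152 - (512 + 384 + 148) = 108 kbit, shown as "Available Bandwidth"
policy-map CBWFQ-EXAMPLE          ! hypothetical names and allocations
 class CRITICAL-APPS
  bandwidth 512
 class VOICE-SIGNALING
  bandwidth 384
 class BULK-DATA
  bandwidth 148
!
interface Serial0/0/0
 service-policy output CBWFQ-EXAMPLE
```

So yes, the counter is essentially "reservable bandwidth minus what your CBWFQ classes have already claimed," not raw unused line rate.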
