Ask the Expert: Quality of Service (QoS) on Cisco IOS Routers

ciscomoderator
Community Manager

Welcome to this Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about quality of service (QoS) on Cisco IOS routers with Cisco subject matter expert Akash Agrawal.

A communications network forms the backbone of any successful organization. These networks transport a multitude of applications and data, including high-quality video and delay-sensitive data such as real-time voice. The bandwidth-intensive applications stretch network capabilities and resources, but also complement, add value, and enhance every business process. Networks must provide secure, predictable, measurable, and sometimes guaranteed services. Achieving the required QoS by managing the delay, delay variation (jitter), bandwidth, and packet loss parameters on a network becomes the secret to a successful end-to-end business solution. Thus, QoS is the set of techniques to manage network resources.

Akash Agrawal is a customer support engineer in the Cisco High-Touch Technical Support Center in Bangalore, India, supporting Cisco's major service provider customers in routing and Multiprotocol Label Switching (MPLS) technologies. His areas of expertise include routing, switching, MPLS services, traffic engineering, and QoS. He has been in the networking industry for eight years, including five years in the service provider industry and three years in Cisco HTTS, and is a dual CCIE in the Routing and Switching and Service Provider tracks.

Remember to use the rating system to let Akash know if you have received an adequate response. 

Because of the volume expected during this event, Akash might not be able to answer every question. Remember that you can continue the conversation in the Network Infrastructure Community, under the subcommunity LAN, Routing and Switching, shortly after the event. This event lasts through August 1, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

45 Replies

 

I would also like to share the link below, which explains CBWFQ behavior on the ASR1K, written by a DSE (Distinguished Services Engineer) at Cisco.

 

https://supportforums.cisco.com/document/29201/asr1000-cbwfq-qos-configuration-example

 

-Akash

Nice reference.  Didn't know ASRs were different from 7200s.  Thanks.

Hi,

 

I checked on ISR 151-4.M8, 124-24.T8, and a 7200 (124-24.T5). On each router I see the same behavior as described in the CCO document: bandwidth not consumed by other user-defined classes gets allocated to class-default.

 

Example :

I configured a policy-map with two classes defined, with bandwidth allocations of 400 Mb and 35 Mb; the remaining 565 Mb is allocated to class-default. I also configured WRED in class-default with a max threshold (80) greater than the queue-limit (64), but the queue-limit in the hqf output is still capped at 64.
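For reference, the policy-map configuration behind the outputs below would look roughly like this. It is a sketch reconstructed from the show output; the per-precedence random-detect lines (which would repeat for precedences 0-7) are an assumption based on the displayed thresholds:

```
policy-map test
 class EF
  bandwidth 400000
  fair-queue
 class AF12
  bandwidth 35000
 class class-default
  fair-queue
  random-detect
  ! assumed: the same thresholds applied to every precedence 0-7
  random-detect precedence 0 26 80 1
```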

R31#show policy-map test
  Policy Map test
    Class EF
      bandwidth 400000 (kbps)
      fair-queue
    Class AF12
      bandwidth 35000 (kbps)
    Class class-default
       packet-based wred, exponential weight 9

      class    min-threshold    max-threshold    mark-probablity
      ----------------------------------------------------------
      0       26               80               1/1
      1       26               80               1/1
      2       26               80               1/1
      3       26               80               1/1
      4       26               80               1/1
      5       26               80               1/1
      6       26               80               1/1
      7       26               80               1/1
      fair-queue
R31# 

R31#show hqf int gig0/0         

Interface Number 3 (type 27) GigabitEthernet0/0
 OUTPUT FEATURES 

   blt (0x4B06837C, index 0, fast_if_number 39) layer PHYSICAL
   scheduling policy: WFQ
   classification policy: CLASS_BASED
   drop policy: TAIL
   blt flags: 0x20000    scheduler: 0x49B53B14

   total guarantee percent 0 total remaining perc 0 total bandwidth guarantee 0 total active 0 

   txcount 251 drops 0 qdrops 0 nobuffers 0 flowdrops 0
   qsize 0 aggregate limit/bytes 250000/0 availbuffers 250000
   holdqueue_out 1000 weight 1 perc 0.00 remaining_ratio/perc 0
   visible_bw 1000000 max_rate 1000000 allocated_bw 1000000 vc_encap 0 ecn_threshold NONE
   quantum 1500 credit A 0 credit B 0 backpressure_policy 0
   scheduler_flags 13F
   last_sortq[A/B] 0/0, remaining pak/particles 0/0
   leaf_blt[P] 0x49B53B14 burst packets/bytes[P] 0/0
   leaf_blt[NOTP] 0x49B53B14 burst packets/bytes[NOTP] 0/0
          
     next layer HQFLAYER_CLASS_HIER0 (max entries 256)

     blt (0x4B0682EC, index 0, fast_if_number 39) layer CLASS_HIER0
     scheduling policy: WFQ
     classification policy: FLOW_BASED
     drop policy: WRED

     blt flags: 0x20041    scheduler: 0x49B53AAC
  
     total guarantee percent 0 total remaining perc 0 total bandwidth guarantee 435000 total active 1 
  
     txcount 251 drops 0 qdrops 0 nobuffers 0 flowdrops 0
     qsize 0 aggregate limit/bytes 64/16000 availbuffers 64
     holdqueue_out 0 weight 4 perc 56.50 remaining_ratio/perc 0
     visible_bw 565000 max_rate 1000000 allocated_bw 565000 vc_encap 0 ecn_threshold NONE
     quantum 1537 credit A 721 credit B 0 backpressure_policy 0
     scheduler_flags 13F
     last_sortq[A/B] 68/0, remaining pak/particles 0/0
     leaf_blt[P] 0x49B53AAC burst packets/bytes[P] 0/0
     leaf_blt[NOTP] 0x49B469DC burst packets/bytes[NOTP] 1/60

     WRED: mode 0 byte/packet 3 queue average 0, weight 1/512, 
       Class 0 (hash 0): 26 min threshold, 80 max threshold, 1/1 mark weight
       251 packets output, drops: 0 random, 0 threshold
       Class 1 (hash 1): 26 min threshold, 80 max threshold, 1/1 mark weight
       (no traffic)
       Class 2 (hash 2): 26 min threshold, 80 max threshold, 1/1 mark weight
       (no traffic)
       Class 3 (hash 3): 26 min threshold, 80 max threshold, 1/1 mark weight
       (no traffic)
       Class 4 (hash 4): 26 min threshold, 80 max threshold, 1/1 mark weight
       (no traffic)
       Class 5 (hash 5): 26 min threshold, 80 max threshold, 1/1 mark weight
       (no traffic)
       Class 6 (hash 6): 26 min threshold, 80 max threshold, 1/1 mark weight
       (no traffic)
       Class 7 (hash 7): 26 min threshold, 80 max threshold, 1/1 mark weight
       (no traffic)

       next layer HQFLAYER_FLOW (max entries 256)

       blt (0x4B0680AC, index 0, fast_if_number 39) layer FLOW
       scheduling policy: FIFO
       classification policy: NONE
       drop policy: PARENT_WRED_OR_TAIL
       blt flags: 0x0    scheduler: 0x49B4D174
    
       total guarantee percent 0 total remaining perc 0 total bandwidth guarantee 0 total active 1 
    
       txcount 251 drops 0 qdrops 0 nobuffers 0 flowdrops 0
       qsize 0 aggregate limit/bytes 16/0 availbuffers 16
       holdqueue_out 0 weight 1 perc 0.00 remaining_ratio/perc 0
       visible_bw 565000 max_rate 1000000 allocated_bw 565000 vc_encap 0 ecn_threshold NONE
       quantum 1537 credit A 0 credit B 0 backpressure_policy 0
       scheduler_flags 13F
       last_sortq[A/B] 17/0, remaining pak/particles 0/0
       leaf_blt[P] 0x49B4D174 burst packets/bytes[P] 0/0
       leaf_blt[NOTP] 0x49B4D174 burst packets/bytes[NOTP] 0/0

     blt (0x4B06825C, index 1, fast_if_number 39) layer CLASS_HIER0
     scheduling policy: WFQ
     classification policy: FLOW_BASED
     drop policy: TAIL
     blt flags: 0x20044    scheduler: 0x49B53A44
  
     total guarantee percent 0 total remaining perc 0 total bandwidth guarantee 435000 total active 1 
  
     txcount 0 drops 0 qdrops 0 nobuffers 0 flowdrops 0
     qsize 0 aggregate limit/bytes 64/16000 availbuffers 64
     holdqueue_out 0 weight 6 perc 40.00 remaining_ratio/perc 0
     visible_bw 400000 max_rate 1000000 allocated_bw 400000 vc_encap 0 ecn_threshold NONE
     quantum 1632 credit A 0 credit B 0 backpressure_policy 0
     scheduler_flags 13F
     last_sortq[A/B] 68/0, remaining pak/particles 0/0
     leaf_blt[P] 0x49B53A44 burst packets/bytes[P] 0/0
     leaf_blt[NOTP] 0x49B53A44 burst packets/bytes[NOTP] 0/0

       next layer HQFLAYER_FLOW (max entries 256)

       blt (0x4B0681CC, index 0, fast_if_number 39) layer FLOW
       scheduling policy: FIFO
       classification policy: NONE
       drop policy: PARENT_WRED_OR_TAIL
       blt flags: 0x0    scheduler: 0x49B539DC
    
       total guarantee percent 0 total remaining perc 0 total bandwidth guarantee 0 total active 1 
          
       txcount 0 drops 0 qdrops 0 nobuffers 0 flowdrops 0
       qsize 0 aggregate limit/bytes 16/0 availbuffers 16
       holdqueue_out 0 weight 1 perc 0.00 remaining_ratio/perc 0
       visible_bw 400000 max_rate 1000000 allocated_bw 400000 vc_encap 0 ecn_threshold NONE
       quantum 1500 credit A 0 credit B 0 backpressure_policy 0
       scheduler_flags 13F
       last_sortq[A/B] 0/0, remaining pak/particles 0/0
       leaf_blt[P] 0x49B539DC burst packets/bytes[P] 0/0
       leaf_blt[NOTP] 0x49B539DC burst packets/bytes[NOTP] 0/0

     blt (0x4B06813C, index 2, fast_if_number 39) layer CLASS_HIER0
     scheduling policy: FIFO
     classification policy: NONE
     drop policy: TAIL
     blt flags: 0x20004    scheduler: 0x49B4D1DC
  
     total guarantee percent 0 total remaining perc 0 total bandwidth guarantee 435000 total active 1 
  
     txcount 0 drops 0 qdrops 0 nobuffers 0 flowdrops 0
     qsize 0 aggregate limit/bytes 64/16000 availbuffers 64
     holdqueue_out 0 weight 63 perc 3.50 remaining_ratio/perc 0
     visible_bw 35000 max_rate 1000000 allocated_bw 35000 vc_encap 0 ecn_threshold NONE
     quantum 1500 credit A 0 credit B 0 backpressure_policy 0
     scheduler_flags 13F
     last_sortq[A/B] 68/0, remaining pak/particles 0/0
     leaf_blt[P] 0x49B4D1DC burst packets/bytes[P] 0/0
     leaf_blt[NOTP] 0x49B4D1DC burst packets/bytes[NOTP] 0/0

R31# 

 

HQF: class default is a FIFO queue with a bandwidth reservation equal to: 


A. If no user-defined classes exist with "shape" but without "bandwidth", the leftover "bandwidth" after subtracting all "bandwidth" and "priority" reservations in User Defined classes from (.99 * interface_"bandwidth").


class-default bandwidth = (.99 * interface_"bandwidth") - SUM of all "bandwidth" and "priority" reservations in user-defined classes


B. If user-defined classes exist with "shape" but without "bandwidth", use this formula:


[(.99 * interface_"bandwidth") - SUM(MQC_"bandwidth", MQC_"priority")] / (#_of_classes_with_"shape"_but_without_"bandwidth" + 1)


Note: The formula above can be used when the queueing policy’s attach-point is a physical interface. If the attach point is instead a shaper (ATM PVC, MQC, FRTS), substitute interface_”bandwidth” with the appropriate shape rate. 


C. The “bandwidth” command explicitly configured in Class Default, if one exists.


Note: In HQF, all classes with queueing actions, whether configured with "fair-queue" or not, including class-default, will have an implicit bandwidth reservation equal to at least 1% of interface_"bandwidth". This is a major departure from pre-HQF code because in HQF, class-default and user-defined classes with "shape" will always have a minimum bandwidth reservation, either implicit or explicit.


Note: In HQF, class-default can be converted to a FQ with “fair-queue”, which can co-exist with “bandwidth”, either implicit or explicit.
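As a worked example of formulas A and B with hypothetical numbers (not taken from the outputs in this thread): assume interface_"bandwidth" = 100000 kbps, one user-defined class with "bandwidth 20000", and one with "priority 10000"; for B, the denominator is interpreted as (number of shape-only classes + 1), counting class-default:

```
A (no shape-only classes):
  class-default bandwidth = (.99 * 100000) - (20000 + 10000) = 69000 kbps

B (one class with "shape" but no "bandwidth"):
  class-default bandwidth = [(.99 * 100000) - (20000 + 10000)] / (1 + 1)
                          = 69000 / 2 = 34500 kbps
```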


In HQF, we no longer use the pre-HQF "weight" concept to schedule within a fair queue. This is because in HQF, every class configured with "fair-queue" will have an implicit or explicit "bandwidth" reservation. In other words, all flows within a fair queue will be treated (and therefore scheduled) equally compared to each other. As such, what is the benefit of running "fair-queue" at all in HQF?

Consider a user-defined class in which you're classifying on IP precedence 3. If the class were FIFO, a single src/dest IP flow of precedence 3 could exhaust all 64 default queueing buffers. By adding the "fair-queue" command to coexist with the "bandwidth" reservation, we can now ensure every src/dest IP flow gets at least 16 buffers, where the flow-queue buffers are calculated as (.25 * "queue-limit").
This ensures every flow has a fair chance of being enqueued and not dropped. Each flow queue with a non-zero queue depth is scheduled to the tx_ring in round-robin fashion, with aggregate scheduling based on the class's "bandwidth" reservation, either implicit or explicit.
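A minimal configuration sketch of the point above (class and policy names are hypothetical), combining an aggregate "bandwidth" guarantee with per-flow fairness in a user-defined class:

```
class-map match-all PREC3
 match ip precedence 3
!
policy-map WAN-OUT
 class PREC3
  bandwidth 25000
  fair-queue
  queue-limit 64
!
interface GigabitEthernet0/1
 service-policy output WAN-OUT
```

With queue-limit 64, each flow queue gets .25 * 64 = 16 buffers, matching the "aggregate limit 16" seen at the HQF flow layer in the output earlier in this thread.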

Hello, Akash.

I saw the citation before, but I doubt it refers to HQF.

Yes, it's possible to configure a bandwidth statement together with fair-queue under class-default (ISR2).

 

Class-map: class-default (match-any)  
          3791920878 packets, 3656347126083 bytes
          5 minute offered rate 3798000 bps, drop rate 4000 bps
          Match: any 
          Queueing
          queue limit 128 packets
          (queue depth/total drops/no-buffer drops/flowdrops) 33/28179868/0/0
          (pkts output/bytes output) 3765332970/3901238747012
          bandwidth 35% (5075 kbps)
          police:
              cir 14500000 bps, bc 36250 bytes, be 36250 bytes
            conformed 3695585638 packets, 3525400342408 bytes; actions:
              set-dscp-transmit default
            exceeded 72390953 packets, 97855981428 bytes; actions:
              set-dscp-transmit default
            violated 23944298 packets, 33090817835 bytes; actions:
              set-dscp-transmit default
            conformed 3673000 bps, exceeded 88000 bps, violated 25000 bps
            Exp-weight-constant: 9 (1/512)
            Mean queue depth: 8 packets
            class       Transmitted      Random drop      Tail/Flow drop Minimum Maximum Mark
                        pkts/bytes       pkts/bytes      pkts/bytes   thresh  thresh  prob
            
            0       3765332984/3901238757168 24527383/28712961558 3652485/4115275562         20            60  1/10
            1               0/0               0/0              0/0                 22            40  1/10
            2               0/0               0/0              0/0                 24            40  1/10
            3               0/0               0/0              0/0                 26            40  1/10
            4               0/0               0/0              0/0                 28            40  1/10
            5               0/0               0/0              0/0                 30            40  1/10
            6               0/0               0/0              0/0                 32            40  1/10
            7               0/0               0/0              0/0                 34            40  1/10
          Fair-queue: per-flow queue limit 32 packets

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

Vasilii, what I believe happens is that bandwidth allocations, whether implicit or explicit, set dequeuing scheduling weights, and my understanding is that no non-LLQ flow should be totally starved, except possibly by LLQ.

Basically, if you set a class bandwidth to 25%, its dequeuing scheduling should be such that it will obtain at least 25% of the bandwidth, assuming the other 75% has all been allocated and all classes are trying to obtain their allocated bandwidth or more. (NB: an exception is class-default with FQ, pre-HQF.)

For example, if you have three classes:

policy-map sample
 class A
  bandwidth 25 percent
 class B
  bandwidth 25 percent
 class class-default
  bandwidth 25 percent

If they all want all the bandwidth, each should obtain 1/3 of the bandwidth, because each will have the same dequeuing scheduling weight. If one class had no traffic and the other two still wanted all the bandwidth, each should obtain 50%. If only one class had traffic, it could obtain the full 100%.

In the above, you would obtain the same dequeuing result as long as all three classes had the same bandwidth allocation (although the actual dequeuing scheduling weights could differ).
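For the three-class sample policy above, the expected bandwidth shares work out as follows (assuming the classes are otherwise unconstrained):

```
All three classes congested:         25/(25+25+25) -> 1/3 of the link each
Class A idle, B and default busy:    25/(25+25)    -> 1/2 of the link each
Only one class with traffic:         up to 100% of the link
```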

Hello, Joseph.

Thanks for the hint, but I know how CBWFQ is supposed to work.

My question is not about "if we configure class-default with bandwidth statement", but - "what if we omit bandwidth statement under class-default".

PS: I will appreciate proof-links.


Oh well, if you omit class-default's bandwidth statement, it should default, I believe, to the reserved-bandwidth value (default 25%) for pre-HQF and to 1% for HQF, but I wouldn't guarantee that (especially across different IOS versions; if you want a certain percentage, it's best to define it explicitly).

In any case, whatever the default happens to be (an omitted bandwidth is likely the same as omitting the class class-default explicitly), since the class is always present (explicitly or implicitly defined), other non-LLQ classes shouldn't be able to totally starve it, but LLQ might.

Hello, Joseph.

Thanks for your post, but I need a clear description in some Cisco QoS document.

The only proof link Akash has provided was a really old QoS design document written long before HQF.

stemrikar
Level 1

Limiting the priority queue on a Catalyst interface:

On a 7600 Catalyst card (CEF720 24-port 1000Mb SFP) interface, how do you police/rate-limit the priority queue so that other queues are not starved?

Hi,

 

I think it is related to LAN QoS and is out of scope for this discussion forum.

 

-Akash

stemrikar
Level 1

service policy behavior difference:

CE_N<>PE_N<>P<>PE_F<>CE_F

below policies are applied on PE_N

interface Vlan10
 no ip address
 xconnect 192.168.1.1 10 encapsulation mpls
 service-policy input Policy-2Mb-In
end

interface Vlan20
 vrf forwarding Cust-B
 ip address 10.1.152.57 255.255.255.252
 service-policy input Policy-2Mb-In
end

Q. We are all clear that the policy on VLAN 20 will restrict traffic coming from CE_N.

Does the policy on VLAN 10 restrict traffic coming from CE_F?

I have seen this behavior on a Cisco-device-based network. Sorry, I have lost the logs.

 

 

Hi,

 

I checked in the lab on a 7600 with RSP720-3C-10GE and 7600-ES+20G3CXL and did not see the incoming service-policy matching outgoing traffic on the L2 circuit. I suspect it could be related to a particular hardware limitation. You can also refer to DDTS CSCso41900 for reference. If you are still seeing this issue and have details of the hardware being used, that would be helpful.

 

-Akash

stemrikar
Level 1

MVPN qos:

In a SP network offering MVPN, below policy is used for customer marking

  Policy Map Policy-80MB-In
    Class class-default
     police cir 81920000 bc 15360000 be 30720000
       conform-action set-mpls-exp-imposition-transmit 5
       exceed-action drop
       violate-action drop

 

The unicast packets, when label switched, will bear EXP bit 5.

However, the m-packets (multicast) coming from the same CE will not be label switched. Please explain how QoS will be applied to the native IP (GRE-tunneled) packets in the SP backbone.

If we set IP precedence 5, it overrides set mpls exp 5; so in this scenario, what will be the marking on the label-switched packet?

Hi,

 

You can create another class to match all multicast traffic and set ip prec 5.

 

R1_7606A#show policy-map Policy-80MB-In-Parent
  Policy Map Policy-80MB-In-Parent
    Class class-default
     police cir 81920000 bc 15360000 be 30720000
       conform-action transmit 
       exceed-action drop 
       violate-action drop 
      service-policy Policy-80MB-In-child
R1_7606A#

R1_7606A#sh policy-map Policy-80MB-In-child
  Policy Map Policy-80MB-In-child
    Class MVPN
      set ip precedence 5
    Class class-default
      set mpls experimental imposition 5
R1_7606A#

R1_7606A#show class-map MVPN
 Class Map match-all MVPN (id 4)

   Match access-group name  Multicast-Traffic

R1_7606A#


R1_7606A#show ip access-lists Multicast-Traffic
Extended IP access list Multicast-Traffic
    10 permit ip any 224.0.0.0 15.255.255.255
R1_7606A#
