Ask the Expert: Quality of Service (QoS) on Cisco IOS Routers

ciscomoderator
Community Manager

Welcome to this Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about quality of service (QoS) on Cisco IOS routers with Cisco subject matter expert Akash Agrawal.

A communications network forms the backbone of any successful organization. These networks transport a multitude of applications and data, including high-quality video and delay-sensitive data such as real-time voice. Bandwidth-intensive applications stretch network capabilities and resources, but they also complement, add value to, and enhance every business process. Networks must provide secure, predictable, measurable, and sometimes guaranteed services. Achieving the required QoS by managing the delay, delay variation (jitter), bandwidth, and packet-loss parameters of a network is the secret to a successful end-to-end business solution. Thus, QoS is the set of techniques for managing network resources.

Akash Agrawal is a customer support engineer in the Cisco High-Touch Technical Support Center in Bangalore, India, supporting Cisco's major service provider customers in routing and Multiprotocol Label Switching (MPLS) technologies. His areas of expertise include routing, switching, MPLS services, traffic engineering, and QoS. He has been in the networking industry for eight years, including five years in the service provider industry and three years in Cisco HTTS, and is a dual CCIE in the Routing and Switching and Service Provider tracks.

Remember to use the rating system to let Akash know if you have received an adequate response. 

Because of the volume expected during this event, Akash might not be able to answer every question. Remember that you can continue the conversation in the Network Infrastructure Community, under the subcommunity LAN, Routing and Switching, shortly after the event. This event lasts through August 1, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

45 Replies

Hello.

We are running CBWFQ on ISR2; under each class we configure a queue-limit, plus WRED with thresholds larger than the queue-limit.
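
For reference, a sketch of the kind of configuration in question (class and values are illustrative, drawn from the output below, not our exact config):

policy-map WAN-OUT
 class class-default
  bandwidth 1807
  ! default queue-limit of 64 packets left in place
  queue-limit 64
  ! WRED per IP precedence, with max-thresholds above the queue-limit
  random-detect
  random-detect precedence 0 26 68 5
  random-detect precedence 6 85 136 10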

Q: Does the configured queue-limit affect WRED, or do the thresholds somehow take precedence?

Here is the case:

#sh ver | i IOS
Cisco IOS Software, C3900 Software (C3900-UNIVERSALK9-M), Version 15.1(2)T4, RELEASE SOFTWARE (fc1)

#sh policy-map int s1/0 out  
...
Class-map: class-default (match-any)
      24816524 packets, 14653092687 bytes
      30 second offered rate 20000 bps, drop rate 0 bps
      Match: any 
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/12185/0
      (pkts output/bytes output) 25088479/14797516799
      bandwidth 1807 kbps
        Exp-weight-constant: 6 (1/64)
        Mean queue depth: 0 packets
        class     Transmitted       Random drop      Tail drop          Minimum        Maximum     Mark
                  pkts/bytes     pkts/bytes       pkts/bytes          thresh         thresh     prob
        
        0         2531976/1566836392    477/671839        74/109836            26            68  1/5
        1               0/0               0/0              0/0                 26            68  1/5
        2               0/0               0/0              0/0                 26            68  1/5
        3               0/0               0/0              0/0                 26            68  1/5
        4               0/0               0/0              0/0                 26            68  1/5
        5           14429/3145522         0/0              0/0                 26            68  1/5
        6           10493/583445          0/0              0/0                 85           136  1/10
        7               0/0               0/0              0/0                 26            68  1/5

 

We see "Queue limit" value is default and equal to 64 packets.
But WRED max thresholds are set to 68 and 136 (per precedence).
Total drops = 12185, WRED drops = 477+74 = 551.

So, does this configuration cause a tail-drop problem in the class's queue, or do the configured thresholds implicitly raise the maximum queue length, allowing the queue to grow to 136+ packets?

PS: I found a QoS behavior description for ASRs only (HQF vs. old style), but I'm not sure whether ISR2 behaves the same.

 

Hi,

 

When we configure WRED, the WRED thresholds take precedence over the queue-limit defined for each class.

Here is one example, in which you can see the mean queue depth is much larger than the queue-limit:

        Class-map: HSBC_ENHANCED_1 (match-any)
          264187 packets, 369861800 bytes
          30 second offered rate 5585000 bps, drop rate 2373000 bps
          Match: ip dscp af41 (34) af42 (36) af43 (38)
            264187 packets, 369861800 bytes
            30 second rate 5585000 bps
          Queueing
          queue limit 64 packets  <<<<<<
          (queue depth/total drops/no-buffer drops) 0/111675/24233
          (pkts output/bytes output) 152512/213516800
          bandwidth 2940 kbps
            Exp-weight-constant: 9 (1/512)
            Mean queue depth: 990 packets  <<<<<<<
            dscp     Transmitted       Random drop      Tail drop          Minimum        Maximum     Mark
                      pkts/bytes     pkts/bytes       pkts/bytes          thresh         thresh     prob
            
            af41      176745/247443000   87442/122418800      0/0                683          2049  1/1
            af42           0/0               0/0              0/0                410          1366  1/1
            af43           0/0               0/0              0/0                205           683  1/1

 

 

Below are the stepwise decisions the router makes when performing QoS:

 

1.  When a packet arrives, the router first checks whether it can hand the packet directly to the driver. If it can, there is no need to queue it.

2.  If not, the router checks whether it can buffer the packet. This is determined by the per-class queue-limit, or, when WRED is configured, by the WRED maximum threshold.

3.  If the current queue depth is below that limit, the packet is enqueued; otherwise the packet is dropped, and the drop is counted in both the class's queue drops and the WRED counters.

 

Regards,

Akash

 

In your output the number of WRED drops is lower than the overall drops. That could be because WRED was applied to this class only after some time: the total packet count classified into class-default is 25088479, while the WRED counters show close to 10% of that. Maybe that is why the WRED drop counts are also lower.

 

Class-map: class-default (match-any)
      24816524 packets, 14653092687 bytes
      30 second offered rate 20000 bps, drop rate 0 bps
      Match: any 
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/12185/0
      (pkts output/bytes output) 25088479/14797516799  >>>>>>>>>>>>>
      bandwidth 1807 kbps
        Exp-weight-constant: 6 (1/64)
        Mean queue depth: 0 packets
        class     Transmitted       Random drop      Tail drop          Minimum        Maximum     Mark
                  pkts/bytes     pkts/bytes       pkts/bytes          thresh         thresh     prob
        
        0         2531976/1566836392    477/671839        74/109836            26            68  1/5
        1               0/0               0/0              0/0                 26            68  1/5
        2               0/0               0/0              0/0                 26            68  1/5
        3               0/0               0/0              0/0                 26            68  1/5
        4               0/0               0/0              0/0                 26            68  1/5
        5           14429/3145522         0/0              0/0                 26            68  1/5
        6           10493/583445          0/0              0/0                 85           136  1/10
        7               0/0               0/0              0/0                 26            68  1/5

Hello.

Good catch on the packet numbers, but could we turn to the 7200 document:

http://www.cisco.com/c/en/us/support/docs/routers/7200-series-routers/110850-queue-limit-output-drops-ios.html

and read the following:

The behavior is the same as in corresponding pre-HQF section, with one important exception. In HQF images, random-detect and queue-limit can co-exist in the same User-Defined class (or class class-default) and queue-limit will be enabled and tuned to 64 packets in a default configuration. As such, queue-limit will serve as a maximum current queue size in a random-detect class, therefore providing a mechanism to limit no-buffer drops discussed in the corresponding pre-HQF section. Due to this addition, the configured queue-limit must be at least as large as the random-detect max-threshold, where the random-detect max-threshold will default to 40 packets, or else the parser will reject the configuration.
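
In configuration terms, that passage implies an HQF-compliant policy keeps the queue-limit at least as large as the largest WRED max-threshold, e.g. (a sketch with illustrative values):

policy-map WAN-OUT
 class class-default
  bandwidth 1807
  random-detect
  random-detect precedence 6 85 136 10
  ! queue-limit raised to cover the largest max-threshold (136)
  queue-limit 136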

 

Does this mean that the queue-limit we see in my example is just a "default" setting and a cosmetic bug?

Hello,

 

Though it does not give an error and accepts a max-threshold larger than the queue-limit, I guess the queue length will still be limited to the queue-limit in the case of HQF. You can check the queue-limit for any class in the output of "show hqf int <int>".

If it shows a value equal to the WRED max-threshold, please let me know.

R6_ASR6#show hqf int gig1/0/1 | i limit

   aggregate limit/bytes 4166/0 availbuffers 4166
   mincir 1000000 queue_limit 4166 excess_ratio 1
     aggregate limit/bytes 83/0 availbuffers 83  
     mincir 20000 queue_limit 83 excess_ratio 1  
     aggregate limit/bytes 83/0 availbuffers 83
     mincir 20000 queue_limit 83 excess_ratio 1
     aggregate limit/bytes 166/0 availbuffers 166
     mincir 40000 queue_limit 166 excess_ratio 1
R6_ASR6#

Good day.

One more question.

We configure CBWFQ; all classes use bandwidth + queue-limit; the sum of bandwidth = 75%; class-default is configured with fair-queue only (no bandwidth statement).
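
A sketch of the described policy (class names, percentages, and queue-limits are illustrative):

policy-map WAN-OUT
 class VOICE
  bandwidth percent 40
  queue-limit 64
 class DATA
  bandwidth percent 35
  queue-limit 128
 class class-default
  ! no bandwidth statement; fair-queue only
  fair-queue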

Q: Could this configuration lead to class-default starvation (no bandwidth is guaranteed) when the other classes borrow unallocated bandwidth? Does the behavior depend on the IOS version (before and after HQF)?

PS: we are running a lot of ISRs (12.4(24) and 12.4(15)) and ISR2s (15.1 and 15.2).

 

Hi,

 

As per the CCO document below, if the bandwidth command is not defined in class-default, other classes can borrow unallocated bandwidth and can starve class-default. Are you seeing any different behavior on ISR routers? Are you able to allocate bandwidth to class-default with fair-queue enabled on an ISR router?

http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/WAN_and_MAN/QoS_SRND/QoS-SRND-Book/WANQoS.html

"... an explicit bandwidth guarantee (of 25 percent) must be given to the Best-Effort class. Otherwise, if class-default is not explicitly assigned a minimum bandwidth guarantee, the Scavenger class still can rob it of bandwidth. This is because of the way the CBWFQ algorithm has been coded: If classes protected with a bandwidth statement are offered more traffic than their minimum bandwidth guarantee, the algorithm tries to protect such excess traffic at the direct expense of robbing bandwidth from class-default (if class-default is configured with fair-queue), unless class-default itself has a bandwidth statement (providing itself with a minimum bandwidth guarantee)."
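
In configuration terms, the SRND recommendation amounts to something like the following (a sketch; the 25 percent figure is taken from the quote above, and bandwidth plus fair-queue coexisting in class-default assumes an HQF image):

policy-map WAN-OUT
 class class-default
  ! explicit minimum guarantee so excess traffic in other classes cannot rob class-default
  bandwidth percent 25
  fair-queue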

Regards,

Akash

 

Hello.

Thank you for the link.

But you refer to a really old document; a newer version is available at http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/WAN_and_MAN/QoS_SRND_40/QoSWAN_40.html

Anyway, it doesn't reference an IOS version. I assume it describes pre-HQF behavior.

So, could you please clarify whether it differs between 12.4(24) and 15.x IOS?

---

If we read http://www.cisco.com/c/en/us/support/docs/routers/7200-series-routers/110850-queue-limit-output-drops-ios.html we can see the comparative difference between HQF and pre-HQF, and here it's really clear:

HQF: At all times, regardless of configuration, class class-default in HQF images will always have an implicit bandwidth reservation equal to the unused interface bandwidth not consumed by user-defined classes. By default, the class-default class receives a minimum of 1% of the interface or parent shape bandwidth. It is also possible to explicitly configure the bandwidth CLI in class default.

But the document describes 7200 routers, and I need to know whether the same behavior can be expected on ISR/ISR2.

Thanks.

Hello,

Thanks for the link. I checked further, and there is a very good command to check the bandwidth reservation for different classes in the case of HQF: "show hqf interface <int>".

I tried this on an ASR1K running 15.2(4)S.

To check whether HQF is deployed or not, the command is "show subsys | i hqf".

When there is no bandwidth allocation to class-default, the mincir for class-default is 0, so I guess there is no minimum bandwidth guarantee for class-default even in the case of HQF. But the CCO document is a little confusing; I will check on this and get back to you.

 

R6_ASR6#show subsys | i hqf

hqf_c3pl_account                   Library     1.000.000     
hqf_blt                            Protocol    2.000.001     
hqf_dynamic_if_subsys              Protocol    1.000.001     
hqf_sss_subsys                     Protocol    1.000.001     
hqf_sa_sss_subsys                  Protocol    1.000.001     
hqf_rp_rsvp                        Protocol    2.000.001     
hqf_tunnel_subsys                  Protocol    1.000.001     
hqf_cp_c3pl                        Protocol    1.000.001     
hqf_ui                             Protocol    2.000.001     
hqf                                Protocol    1.000.001 

 

I first configured the below policy-map, without any bandwidth allocation to class-default:

R6_ASR6#show policy-map test

  Policy Map test
    Class EF
      bandwidth 40 (%)
    Class AF12
      bandwidth 40 (%)
    Class class-default
       packet-based wred, exponential weight 4
      
      class    min-threshold    max-threshold    mark-probablity
      ----------------------------------------------------------
      0       -                -                1/10
      1       1000             4200             1/10
      2       -                -                1/10
      3       -                -                1/10
      4       -                -                1/10
      5       -                -                1/10
      6       -                -                1/10
      7       -                -                1/10
R6_ASR6#

 

and the output is as below:

R6_ASR6#show hqf int gig1/0/1

Interface Number 8 (type 27) GigabitEthernet1/0/1
 OUTPUT FEATURES 

   blt (0x7F9DB4EF8208, index 0, qid 7, fast_if_number 9) layer PHYSICAL
   scheduling policy: WFQ (111)
   classification policy: CLASS_BASED (122)
   drop policy: TAIL (141)
   packet size fixup policy: NONE (0)
   blt flags: 0x4800020 (3-params scheduler)
   total guarantee percent 0 total remaining perc 0 total bandwidth guarantee 0 total active 0 

   txcount 397 txqbytes 25949 drops 0 qdrops 0 nobuffers 0 flowdrops 0  
   aggregate limit/bytes 4166/0 availbuffers 4166
   holdqueue_out 0 perc 0.00 remaining_ratio/perc 1
   visible_bw 1000000 max_rate 1000000 allocated_bw 1000000 excess_ratio 1
   mincir 1000000 queue_limit 4166 excess_ratio 1
   ecn_threshold NONE     offset 0 
   backpressure_policy 0
   (max entries 8) (layer flags 0x8)

     next layer HQFLAYER_CLASS_HIER0 (max entries 8)

     blt (0x7F9DB4EF8118, index 0, qid 8, fast_if_number 9) layer CLASS_HIER0 <<<<<< for class-default
     scheduling policy: FIFO (110)
     classification policy: NONE (120)
     drop policy: WRED (142)  <<<<<<<<< drop policy WRED
     packet size fixup policy: NONE (0)
     blt flags: 0x4804020 (3-params scheduler)
     total guarantee percent 8000 total remaining perc 0 total bandwidth guarantee 0 total active 1 
  
     txcount 397 txqbytes 25949 drops 0 qdrops 0 nobuffers 0 flowdrops 0 
     aggregate limit/bytes 4166/0 availbuffers 4166
     holdqueue_out 0 perc 0.00 remaining_ratio/perc 1
     visible_bw 0 max_rate 1000000 allocated_bw 1000000 excess_ratio 1
     mincir 0 queue_limit 4166 excess_ratio 1  <<<<<<<< mincir is 0
     ecn_threshold NONE     offset 0 
     backpressure_policy 0
  
     blt (0x7F9DB4EF7F38, index 1, qid 9, fast_if_number 9) layer CLASS_HIER0
     scheduling policy: FIFO (110)
     classification policy: NONE (120)
     drop policy: TAIL (141)
     packet size fixup policy: NONE (0)
     blt flags: 0x4800120 (3-params scheduler)
     total guarantee percent 8000 total remaining perc 0 total bandwidth guarantee 0 total active 1 
  
     txcount 0 txqbytes 0 drops 0 qdrops 0 nobuffers 0 flowdrops 0
     aggregate limit/bytes 1666/0 availbuffers 1666
     holdqueue_out 0 perc 40.00 remaining_ratio/perc 0
     visible_bw 400000 max_rate 1000000 allocated_bw 400000 excess_ratio 1 
     mincir 400000 queue_limit 1666 excess_ratio 1 <<<<<< minimum bandwidth 400meg to class AF12
     ecn_threshold NONE     offset 0 
     backpressure_policy 0
  
     blt (0x7F9DB4EF8028, index 2, qid 10, fast_if_number 9) layer CLASS_HIER0
     scheduling policy: FIFO (110)
     classification policy: NONE (120)
     drop policy: TAIL (141)
     packet size fixup policy: NONE (0)
     blt flags: 0x4800120 (3-params scheduler)
     total guarantee percent 8000 total remaining perc 0 total bandwidth guarantee 0 total active 1 
  
     txcount 0 txqbytes 0 drops 0 qdrops 0 nobuffers 0 flowdrops 0
     aggregate limit/bytes 1666/0 availbuffers 1666
     holdqueue_out 0 perc 40.00 remaining_ratio/perc 0
     visible_bw 400000 max_rate 1000000 allocated_bw 400000 excess_ratio 1
     mincir 400000 queue_limit 1666 excess_ratio 1 <<<<<< minimum bandwidth 400meg to class EF
     ecn_threshold NONE     offset 0 
     backpressure_policy 0
  
R6_ASR6# 

 

When bandwidth 20% is configured on class-default:


     next layer HQFLAYER_CLASS_HIER0 (max entries 8)

     blt (0x7F9DB4EF8118, index 0, qid 8, fast_if_number 9) layer CLASS_HIER0
     scheduling policy: FIFO (110)
     classification policy: NONE (120)
     drop policy: WRED (142)
     packet size fixup policy: NONE (0)
     blt flags: 0x4804120 (3-params scheduler)
     total guarantee percent 10000 total remaining perc 0 total bandwidth guarantee 0 total active 0 
  
     txcount 427 txqbytes 27840 drops 0 qdrops 0 nobuffers 0 flowdrops 0
     aggregate limit/bytes 833/0 availbuffers 833
     holdqueue_out 0 perc 20.00 remaining_ratio/perc 0
     visible_bw 200000 max_rate 1000000 allocated_bw 200000 excess_ratio 1
     mincir 200000 queue_limit 833 excess_ratio 1  <<<<<<<<< mincir changed to 200meg
     ecn_threshold NONE     offset 0 
     backpressure_policy 0
  

 

Thank you for the clarification.

Could you please confirm the same behavior for ISR2 (IOS 15.x) devices?

I don't have the "show hqf ..." command on any ISR2 device.

 

Please provide the output of "show version" and "show inventory", or you can attach a show tech file.

Hello, Akash.

The command is hidden (no help available).

Thanks. I think show hqf could clarify everything on any IOS platform.

I think the use of FQ is that it doesn't allow a single flow to saturate the class.

Hello Vasilii,

 

I am not sure whether by FQ you are referring to FIFO queueing or fair-queue (without fair-queue, a class queue is FIFO), but I would like to mention that with fair-queue, one flow cannot saturate the class either; the explanation is given below.

 

In HQF, we no longer use the pre-HQF “Weight” concept to schedule within a fair-queue. This is because in HQF, every class configured with “fair-queue” will have an implicit or explicit “bandwidth” reservation. In other words, all flows within a fair-queue are treated (and therefore scheduled) equally compared to each other. As such, what is the benefit of running “fair-queue” at all in HQF?

Consider a user-defined class in which you're classifying using IP precedence 3. If the class were FIFO, a single src/dest IP flow of precedence 3 could exhaust all 64 default queueing buffers. By adding the “fair-queue” command to coexist with the “bandwidth” reservation, we can now ensure every src/dest IP flow gets at least 16 buffers, where the flow-queue buffers are calculated as (0.25 * “queue-limit”).
This ensures every flow has a fair chance of being enqueued and not dropped. Each flow queue with a non-zero queue depth is scheduled to the tx_ring in round-robin fashion, with aggregate scheduling based on the class's “bandwidth” reservation, either implicit or explicit.
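
For instance, a sketch of such a class (names and the bandwidth figure are illustrative; the 16-buffer figure follows from 0.25 * the default queue-limit of 64):

class-map match-all PREC3
 match ip precedence 3
!
policy-map WAN-OUT
 class PREC3
  bandwidth percent 10
  ! fair-queue alongside the bandwidth reservation: each src/dest flow
  ! gets its own flow queue of 0.25 * 64 = 16 buffers
  fair-queue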

 

I hope I was able to help answer your queries.

-Akash

 
