Cisco Support Community

New Member

Interface dropping packets without reaching its full capacity

Hello Everyone,

First of all, I would like to state that I'm far from being a QoS specialist, so I need someone to shed some light on the following issue:

The configuration below is applied to a WAN link, and the interface is dropping packets in the Default class even though the link is running at only half of its capacity; specifically, the drops start as soon as traffic exceeds 1424 kbps. What could be wrong here?

interface FastEthernet0/1.2

bandwidth 4096

encapsulation dot1Q 2

service-policy output ShapingD

policy-map ShapingD

class class-default

  shape average 4096000 40960 40960

  service-policy QoS

policy-map QoS

class Mgmt_Out

  bandwidth 16

  set ip precedence 6

class Voice

  priority 1024

    police 1024000 192000 384000 conform-action set-prec-transmit 5 exceed-action drop  violate-action drop

class Platinum

  bandwidth 1536

  random-detect

     police 1536000 288000 576000 conform-action set-prec-transmit 3  exceed-action set-prec-transmit 3 violate-action set-prec-transmit 3

class Silver

  bandwidth 96

  random-detect

     police 96000 18000 36000 conform-action set-prec-transmit 1  exceed-action set-prec-transmit 1 violate-action set-prec-transmit 1

class Default

  bandwidth 1424

  random-detect

    police 1424000 267000  534000 conform-action set-prec-transmit 0 exceed-action set-prec-transmit 0 violate-action set-prec-transmit 0

2 ACCEPTED SOLUTIONS

Accepted Solutions

Re: Interface dropping packets without reaching its full capacity

As far as I can see, the policer under class Platinum does nothing at all.

Match: ip precedence 3

police 1536000 288000 576000

conform-action set-prec-transmit 3

exceed-action set-prec-transmit 3

violate-action set-prec-transmit 3

It "remarks" conforming, exceeding and violating traffic to ip prec 3 (which is already marked as prec 3).

We can see WRED-drops and taildrops as well in that class.

In order to reduce the taildrops, you may want to remark violating traffic to 0.
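A sketch of that change, reusing the policer numbers from the original posting: only the violate-action differs from the current configuration, and IOS requires re-entering the whole police line (illustrative only, not verified on any particular IOS release):

```
policy-map QoS
 class Platinum
  no police 1536000 288000 576000
  police 1536000 288000 576000 conform-action set-prec-transmit 3 exceed-action set-prec-transmit 3 violate-action set-prec-transmit 0
```

With this, violating Platinum traffic is remarked down to precedence 0 and competes in the default class's WRED profile instead of filling the Platinum queue.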

WRED is enabled by the random-detect command.

If it doesn't meet your requirements, you can disable it (no random-detect) or change the minimum and maximum thresholds:

random-detect precedence <0-7> <min-threshold> <max-threshold> <mark-prob-denominator>

Regards

Rolf

P.S.: WRED starts randomly dropping TCP packets once the queue's min-threshold is reached. This is done to force the end stations to reduce their TCP window size (TCP is rather aggressive about occupying available bandwidth).
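That ramp can be sketched as follows. This is a simplified model of WRED's drop decision for a single precedence class; the 20/40/10 values mirror the precedence-0 defaults visible in the `show policy-map interface` output later in this thread:

```python
def wred_drop_probability(mean_queue_depth, min_th=20, max_th=40, mark_prob_denom=10):
    """Probability that an arriving packet is randomly dropped (simplified WRED model)."""
    if mean_queue_depth < min_th:
        return 0.0          # below min threshold: never drop
    if mean_queue_depth >= max_th:
        return 1.0          # at/above max threshold: every packet is dropped
    # Between the thresholds the probability ramps linearly from 0 up to 1/denominator.
    ramp = (mean_queue_depth - min_th) / (max_th - min_th)
    return ramp / mark_prob_denom

print(wred_drop_probability(10))   # 0.0
print(wred_drop_probability(30))   # 0.05 (halfway up the ramp, capped at 1/10)
print(wred_drop_probability(45))   # 1.0
```

Note how small the probability stays until max-threshold is hit, which is why a low min-threshold on a fast link produces steady "early" drops well before the queue is full.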

Super Bronze

Re: Interface dropping packets without reaching its full capacity

Disclaimer

The  Author of this posting offers the information contained within this  posting without consideration and with the reader's understanding that  there's no implied or expressed suitability or fitness for any purpose.  Information provided is for informational purposes only and should not  be construed as rendering professional advice of any kind. Usage of this  posting's information is solely at reader's own risk.

Liability Disclaimer

In  no event shall Author be liable for any damages whatsoever (including,  without limitation, damages for loss of use, data or profit) arising out  of the use or inability to use the posting's information even if Author  has been advised of the possibility of such damage.

Posting

The configuration below is applied to a WAN link, and the interface is dropping packets in the Default class even though the link is running at only half of its capacity; specifically, the drops start as soon as traffic exceeds 1424 kbps. What could be wrong here?

What's wrong?  Likely you have bursts that are invisible to your load statistics.

You're shaping at 4 Mbps.  A "50% load" across a minute might be caused by sending at a 2 Mbps rate for the whole minute, or by sending at 8 Mbps for 15 of the 60 seconds (and not sending for the remaining 45).  The former wouldn't queue, but the latter would; when queue capacity is exceeded, you'll have drops.  Again, it's important to note that both show as just 50% utilization across one minute.
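The arithmetic behind that example, as a quick sketch (rounding the 4096 kbps shaper to 4 Mbps for simplicity):

```python
# Two traffic patterns that move the same number of bits in one minute.
# A 1-minute counter reports 50% load for each, but only the bursty one
# exceeds the shaper's rate and therefore has to queue (and eventually drop).

shaper_bps = 4_000_000      # shaped rate (~4096 kbps, rounded)
window_s = 60               # measurement window

steady_bits = 2_000_000 * window_s   # 2 Mbps for the whole minute
bursty_bits = 8_000_000 * 15         # 8 Mbps for 15 s, idle for 45 s

assert steady_bits == bursty_bits    # identical 1-minute totals

utilization = steady_bits / (shaper_bps * window_s)
print(f"{utilization:.0%}")          # 50% for either pattern

# During the 15 s burst, bits arrive 4 Mbps faster than the shaper drains:
excess_bits = (8_000_000 - shaper_bps) * 15
print(excess_bits)                   # 60000000 bits must queue or be dropped
```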

In your detailed stats you have RED drops and tail drops.  This indicates bursts that are invisible to your load stats.

If the bursts are indeed transitory, a simple solution is often just to increase the queue settings.  Your drop rate will likely decrease, although transitory latency will increase.

BTW, your device's default values for RED were, I believe, "designed" for T1/E1 bandwidth.  As you have 4 Mbps, they could be increased.  (Also, IMO, the default values are not optimal even for T1/E1.  Further, optimal values for RED can be difficult to "find"; so much so that I often recommend RED not be used at all.)
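A sketch of both suggestions for the default class, with placeholder numbers to be tuned rather than verified recommendations for this platform (on many IOS releases WRED and a per-class queue-limit are alternatives, so the two options are shown separately):

```
policy-map QoS
 class class-default
  ! Option A: keep WRED but raise the thresholds to suit a 4 Mbps link
  random-detect precedence 0 60 120 10
  ! Option B: remove WRED and simply allow a deeper tail-drop queue
  no random-detect
  queue-limit 128
```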

14 REPLIES

Interface dropping packets without reaching its full capacity

Hi Ruben,

I'm not a QoS expert either, but there's one thing I don't understand about the child policy-map "QoS": for the classes Platinum, Silver and Default you have policers whose conform, exceed and violate actions are identical (rewrite the packet precedence to 3/1/0 and transmit). What is the reason for doing so?

Could you please provide the output of show policy-map interface FastEthernet0/1.2 ?

Regards

Rolf

New Member

Interface dropping packets without reaching its full capacity

Hi Fischer,

As far as I know, the intention of this configuration is to lower the precedence of the packets as the traffic exceeds the bandwidth allocated to each class, until it reaches 0. That's exactly why I don't get why the packets are dropped.

  Service-policy output: ShapingD

    Class-map: class-default (match-any)

      20084361 packets, 27984912474 bytes

      5 minute offered rate 431000 bps, drop rate 0 bps

      Match: any

      Traffic Shaping

           Target/Average   Byte   Sustain   Excess    Interval  Increment

             Rate           Limit  bits/int  bits/int   (ms)      (bytes)

          4096000/4096000   10240  40960     40960     10        5120

        Adapt  Queue     Packets   Bytes     Packets   Bytes     Shaping

        Active Depth                         Delayed   Delayed   Active

        -      0         19939900  2008019924 11186829  3113358735 no

      Service-policy :  QoS

        Class-map: Mgmt_Out (match-any)

          15560 packets, 1096499 bytes

          5 minute offered rate 0 bps, drop rate 0 bps

          Match: ip precedence 6  7

            15560 packets, 1096499 bytes

            5 minute rate 0 bps

          Match: access-group 160

            0 packets, 0 bytes

            5 minute rate 0 bps

          Queueing

            Output  Queue: Conversation 137

            Bandwidth 16 (kbps)Max Threshold 64 (packets)

            (pkts matched/bytes matched) 1294/90200

        (depth/total drops/no-buffer drops) 0/0/0

          QoS Set

            precedence 6

              Packets marked 15560

        Class-map: Voice (match-any)

          0 packets, 0 bytes

          5 minute offered rate 0 bps, drop rate 0 bps

          Match: ip precedence  5

            0 packets, 0 bytes

            5 minute rate 0 bps

          Queueing

            Strict Priority

            Output Queue: Conversation 136

            Bandwidth 1024 (kbps) Burst 25600 (Bytes)

            (pkts matched/bytes matched) 0/0

            (total drops/bytes drops) 0/0

          police:

              cir 1024000 bps, bc 192000 bytes, be 384000  bytes

            conformed 0 packets, 0 bytes; actions:

              set-prec-transmit 5

            exceeded 0 packets, 0 bytes; actions:

              drop

            violated 0 packets, 0 bytes; actions:

              drop

            conformed 0 bps, exceed 0 bps, violate 0 bps

        Class-map: Platinum (match-any)

          9561579 packets, 13259511619 bytes

          5 minute  offered rate 261000 bps, drop rate 0 bps

          Match: ip precedence 3

            9561579 packets, 13259511619 bytes

            5 minute rate 261000 bps

          Queueing

            Output Queue: Conversation 138

            Bandwidth 1536 (kbps)

            (pkts matched/bytes matched) 5007974/7152261088

        (depth/total drops/no-buffer drops) 0/69410/0

             exponential weight: 9

              mean queue depth: 0

  class    Transmitted      Random drop      Tail drop    Minimum Maximum  Mark

           pkts/bytes       pkts/bytes       pkts/bytes    thresh  thresh  prob

      0       0/0               0/0              0/0           20      40  1/10

      1       0/0                0/0              0/0           22      40  1/10

      2       0/0               0/0              0/0           24      40  1/10

      3 9492169/13159991239  53364/76487653   16046/23032727    26      40  1/10

      4       0/0               0/0               0/0           28      40  1/10

      5       0/0               0/0              0/0           30      40  1/10

      6       0/0               0/0              0/0           32      40  1/10

      7        0/0               0/0              0/0           34      40  1/10

   rsvp       0/0               0/0              0/0           36      40  1/10

          police:

              cir 1536000 bps, bc 288000 bytes, be 576000 bytes

            conformed 7895691 packets, 10865709422 bytes;  actions:

              set-prec-transmit 3

            exceeded 1242527 packets, 1785126424 bytes; actions:

              set-prec-transmit 3

            violated 423361 packets, 608675773 bytes; actions:

              set-prec-transmit 3

            conformed 235000 bps, exceed 13000 bps, violate 0 bps

        Class-map: Silver (match-any)

          0 packets, 0 bytes

          5 minute offered rate 0 bps, drop rate 0  bps

          Match: ip precedence 1

            0 packets, 0 bytes

            5 minute rate 0 bps

          Queueing

            Output Queue: Conversation 139

            Bandwidth 96 (kbps)

            (pkts matched/bytes matched) 0/0

        (depth/total drops/no-buffer drops) 0/0/0

             exponential weight: 9

             mean queue depth: 0

  class     Transmitted      Random drop      Tail drop    Minimum Maximum  Mark

           pkts/bytes       pkts/bytes       pkts/bytes    thresh  thresh  prob

      0       0/0               0/0              0/0           20      40  1/10

      1       0/0               0/0               0/0           22      40  1/10

      2       0/0               0/0              0/0           24      40  1/10

      3       0/0               0/0              0/0           26      40  1/10

      4        0/0               0/0              0/0           28      40  1/10

      5       0/0               0/0              0/0           30      40  1/10

      6       0/0               0/0              0/0            32      40  1/10

      7       0/0               0/0              0/0           34      40  1/10

   rsvp       0/0               0/0              0/0           36      40  1/10

          police:

              cir 96000 bps, bc 18000 bytes, be 36000  bytes

            conformed 0 packets, 0 bytes; actions:

              set-prec-transmit 1

            exceeded 0 packets, 0 bytes; actions:

              set-prec-transmit 1

            violated 0 packets, 0 bytes; actions:

              set-prec-transmit 1

            conformed 0 bps, exceed 0 bps, violate 0 bps

        Class-map: class-default (match-any)

          10507222 packets, 14724304356  bytes

          5 minute offered rate 149000 bps, drop rate 0 bps

          Match: any

          Queueing

            Output Queue: Conversation 140

            Bandwidth 1424 (kbps)

            (pkts matched/bytes matched) 6322015/9052997661

        (depth/total drops/no-buffer drops) 0/75051/0

             exponential weight: 9

             mean queue depth: 0

  class    Transmitted      Random  drop      Tail drop    Minimum Maximum  Mark

           pkts/bytes       pkts/bytes       pkts/bytes    thresh  thresh  prob

      0 10432171/14616735962  73447/105269524   1604/2298870     20      40  1/10

      1       0/0               0/0              0/0           22      40  1/10

      2        0/0               0/0              0/0           24      40  1/10

      3       0/0               0/0              0/0           26      40  1/10

      4       0/0               0/0              0/0            28      40  1/10

      5       0/0               0/0              0/0           30      40  1/10

      6       0/0               0/0              0/0           32      40  1/10

      7       0/0                0/0              0/0           34      40  1/10

   rsvp       0/0               0/0              0/0           36      40  1/10

          police:

              cir 1424000 bps, bc 267000 bytes, be 534000 bytes

            conformed 7856284 packets, 10915180974 bytes; actions:

              set-prec-transmit  0

            exceeded 1746711 packets, 2509195690 bytes; actions:

              set-prec-transmit 0

            violated 904198 packets, 1299925836 bytes; actions:

              set-prec-transmit 0

            conformed 123000 bps, exceed 14000 bps, violate 0 bps

Interface dropping packets without reaching its full capacity

So we can see WRED-drops for the Platinum and Default classes:

Class-map: Platinum (match-any)

(depth/total drops/no-buffer drops) 0/69410/0

class    Transmitted      Random drop      Tail drop    Minimum Maximum  Mark

pkts/bytes       pkts/bytes       pkts/bytes    thresh  thresh  prob

3 9492169/13159991239  53364/76487653   16046/23032727    26      40  1/10

(53364/9492169 ≈ 0.56%)

Class-map: class-default (match-any)

(depth/total drops/no-buffer drops) 0/75051/0

class    Transmitted      Random  drop      Tail drop    Minimum Maximum  Mark

pkts/bytes       pkts/bytes       pkts/bytes    thresh  thresh  prob

0 10432171/14616735962  73447/105269524   1604/2298870     20      40  1/10

(73447/10432171 ≈ 0.70%)
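Those percentages come straight from the WRED counters above; a quick check of the arithmetic:

```python
# Random-drop rate relative to transmitted packets, per the class counters.
platinum_drops, platinum_tx = 53364, 9492169
default_drops, default_tx = 73447, 10432171

print(f"Platinum: {platinum_drops / platinum_tx:.2%}")      # Platinum: 0.56%
print(f"class-default: {default_drops / default_tx:.2%}")   # class-default: 0.70%
```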

Are you familiar with the concept of WRED congestion avoidance?

Regards

Rolf

New Member

Interface dropping packets without reaching its full capacity

Thanks Fischer,

I'm not familiar with WRED. How can I avoid the early dropping of the packets?

Hall of Fame Super Gold

Interface dropping packets without reaching its full capacity

Please state the router model and IOS version by posting the output of "show version", as should always be done when reporting problems.

New Member

Interface dropping packets without reaching its full capacity

Hi Paolo,

The Router is a 1841, C1841-SPSERVICESK9-M, Version 12.4(23b)

Hall of Fame Super Gold

Interface dropping packets without reaching its full capacity

Update IOS and check again.

New Member

Interface dropping packets without reaching its full capacity

It is also happening on these two other routers:

CISCO2911, C2900 Software (C2900-UNIVERSALK9-M), Version 15.0(1)M6 

Cisco 1841 Software (C1841-ADVIPSERVICESK9-M), Version 12.4(24)T3

I think this is more of a configuration issue.

Hall of Fame Super Gold

Interface dropping packets without reaching its full capacity

You can try reducing the complexity of your policy-map until you get the expected results.


New Member

Interface dropping packets without reaching its full capacity

This is weird. I was reading about how to monitor WRED, and these are the outputs I got:

#sh queueing random-detect

Current random-detect configuration:

#sh queue fastEthernet 0/1

'Show queue' not supported with FIFO queueing.

#sh int fa0/1 | in Queueing

  Queueing strategy: fifo

It seems like WRED is not enabled.

The output of the last command should look something like this:

Queueing strategy: random early detection (WRED)

Re: Interface dropping packets without reaching its full capacity

That's a long story ...

In former times, QoS was configured directly under each interface. Nowadays we use the MQC (Modular QoS CLI).

With MQC, we use the show policy-map interface command. You can see the most interesting lines (regarding WRED) in my second posting.


New Member

Interface dropping packets without reaching its full capacity

Thanks a lot to both Fischer and Joseph. I will try playing with the WRED thresholds and see what happens. It makes a lot of sense, since the graphs are showing peaks of transitory traffic and also peaks of transitory latency.
