QoS Question

johnelliot6
Level 2

 

Hi everyone,

 

We have a CE (Cisco 881) with a 100M connection to the LAN and a shaped 10M connection to the WAN (shaped via policy map, and also by the upstream carrier). We see a lot of drops both on the policy map (sh policy-map interface foo) and on the WAN interface (output drops). Is there any way to mitigate the drops? (Setting the LAN interface to 10M, policing ingress on the LAN to 10 Mbps, etc.?)
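For illustration, the ingress-policing option mentioned above might look roughly like this (a minimal sketch; the LAN interface name and policer values are assumptions for illustration, not from an actual config):

! Sketch only - interface name and values are assumptions.
! Police LAN-facing ingress to roughly the 10M WAN rate, so excess
! traffic is dropped before it ever reaches the WAN shaper.
policy-map LAN-INGRESS-POLICE
 class class-default
  police cir 10000000 conform-action transmit exceed-action drop
!
interface FastEthernet0/1
 service-policy input LAN-INGRESS-POLICE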

 

Cheers.

 

 

6 Replies

mhnedirli
Level 1

Hello, 

Can you share your configuration? You could configure something like the below:

access-list 10 permit any

class-map TRAFFIC

 match access-group 10

!

policy-map PM_TRAFFIC

 class TRAFFIC

   shape average 50000

!

interface f0/0

 service-policy output PM_TRAFFIC

johnelliot6
Level 2

(Apologies - the CE is not an 881; it is an 1841.)

 

Shaping policy below:

ip access-list extended QOS1-PORTS
 permit tcp any any eq 2598
 permit tcp any any eq 443
 permit tcp any any eq 1494

 

class-map match-any QOS1

match access-group name QOS1-PORTS

!

!

policy-map childpolicy1

class QOS1

  priority percent 15

class class-default

  fair-queue

policy-map 10mb1

class class-default

  shape average 10240000

  service-policy childpolicy1

 

...and drops:

#sh policy-map int f0/0     

 FastEthernet0/0

 

  Service-policy output: 10mb1

 

    Class-map: class-default (match-any)

      78618917 packets, 39845268015 bytes

      5 minute offered rate 564000 bps, drop rate 0 bps

      Match: any

      Queueing

      queue limit 64 packets

      (queue depth/total drops/no-buffer drops) 0/432011/0

      (pkts output/bytes output) 78186916/38946371454

      shape (average) cir 10240000, bc 40960, be 40960

      target shape rate 10240000

 

      Service-policy : childpolicy1

 

        queue stats for all priority classes:

         

          queue limit 64 packets

          (queue depth/total drops/no-buffer drops) 0/0/0

          (pkts output/bytes output) 6801662/2109398624

 

        Class-map: QOS1 (match-any)

          6917042 packets, 2281512936 bytes

          5 minute offered rate 45000 bps, drop rate 0 bps

          Match: access-group name QOS1-PORTS

            6917042 packets, 2281512936 bytes

            5 minute rate 45000 bps

          Priority: 15% (1536 kbps), burst bytes 38400, b/w exceed drops: 115380

         

 

        Class-map: class-default (match-any)

          71701887 packets, 37563795414 bytes

          5 minute offered rate 501000 bps, drop rate 0 bps

          Match: any

          Queueing

          queue limit 64 packets

          (queue depth/total drops/no-buffer drops/flowdrops) 0/316631/0/316631

          (pkts output/bytes output) 71385254/36836972830

          Fair-queue: per-flow queue limit 16

 

#sh interface FastEthernet0/0

  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 432010

 

Cheers.

 


Joseph W. Doherty
Hall of Fame

As noted in my original post, increasing your queue depths (or limits) might help.

Notice your queue limit is 64 packets (the default, I believe), and your FQ per-flow queue limits are only 16 packets.

For 10 Mbps, 64 packets is rather shallow. See if your IOS will accept larger values in the policy-map configuration.

johnelliot6
Level 2

Thanks Joseph - can you please provide any links or guidance on how to change the policy map to increase the queue limits, and also the fair-queue limit?

And are there any potential issues if those limits are increased (if they can be, of course)?

 

Cheers.

 

 


Joseph W. Doherty
Hall of Fame

Some IOS releases will accept queue-limit # under the policy map's class statement.

The potential issue is that when you increase queue depths you also increase latency for the packets queued. However, as you're also using FQ, flows with few packets shouldn't be exposed to excessive queuing latency, as they would be if they shared the same queue as a flow with lots of packets.
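For example, a minimal sketch against the policy maps posted earlier (256 is an arbitrary illustrative value, not a recommendation; check which form and range your IOS accepts):

! Sketch only - 256 is an arbitrary example value.
policy-map childpolicy1
 class class-default
  fair-queue
  queue-limit 256
! On some HQF IOS releases the FQ per-flow limit defaults to a quarter
! of the class queue-limit, so raising queue-limit also raises it.
policy-map 10mb1
 class class-default
  shape average 10240000
  queue-limit 256
  service-policy childpolicy1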

Joseph W. Doherty
Hall of Fame


Perhaps, but much depends on the nature of your traffic. It also depends on what your egress architecture is now. (A small single FIFO queue will often be the worst case for drops.)

If drops are due to bursting, increasing queue depths might mitigate.  This might not be possible with your shaper, though.

Whether bursting or sustained volume drops, RED might reduce the number.

Lastly, FQ might also reduce the number of drops.
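As a rough sketch of the RED idea, applied to the child policy from earlier in the thread (whether random-detect can be combined with fair-queue in class-default depends on your IOS release):

! Sketch only - fair-queue plus random-detect support varies by release.
policy-map childpolicy1
 class class-default
  fair-queue
  random-detect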

 

NB: Having interfaces physically set to the available bandwidth can often work better than using a shaper (or policer). However, you might also move congestion to a port where you cannot manage it as well (due to that device's QoS options). For example, the 881 probably has better QoS features than the LAN switch it's connected to. If you run that LAN connection at 10 Mbps, you may need to deal with drops on the LAN switch port that you don't currently see with it running at 100 Mbps.
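A minimal sketch of that alternative (interface name assumed; speed/duplex must be hard-set to match on the far end, and if the physical rate equals the carrier rate the parent shaper may no longer be needed):

! Sketch only - the far end must be hard-set to match.
interface FastEthernet0/0
 speed 10
 duplex full
! With the link physically at 10 Mbps, the queuing policy could be
! applied directly, without the parent shaper.
 service-policy output childpolicy1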
