
Queue-limit with shaping

dan.letkeman
Level 4

Hello,

I have a 2921 on which I am shaping some traffic based on subnet on my LAN. I have applied the shaping policy to the LAN interface in the outgoing direction.

Topology is as follows:

ISP - ASA - ROUTER - LAN

Policy map:

Policy Map shape-lan

  Class tc-class

    Average Rate Traffic Shaping

    cir 5000000 (bps)

    queue-limit 4096 packets
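
(For context, a configuration along roughly these lines would produce the policy above; the ACL and interface names are taken from outputs later in the thread, and the rest is an illustrative sketch rather than the poster's exact configuration:)

class-map match-any tc-class
 match access-group name tc-class-acl
!
policy-map shape-lan
 class tc-class
  shape average 5000000
  queue-limit 4096 packets
!
interface GigabitEthernet0/0
 description LAN
 service-policy output shape-lan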

I am seeing a lot of no-buffer drops on this policy and I am wondering what the best way to solve it is:

Class-map: tc-class (match-any)

   8730680 packets, 10803689863 bytes

   5 minute offered rate 4453000 bps, drop rate 0 bps

   Match: access-group name tc-class-acl

     8730680 packets, 10803689863 bytes

     5 minute rate 4453000 bps

   Queueing

   queue limit 4096 packets

   (queue depth/total drops/no-buffer drops) 2/33819/33819

   (pkts output/bytes output) 8248711/10303551681

   shape (average) cir 5000000, bc 20000, be 20000

   target shape rate 5000000
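
(Side note on the shaper parameters above: with the default Bc, the shaping interval works out to Tc = Bc / CIR = 20,000 bits / 5,000,000 bps = 4 ms, i.e. the shaper releases at most 20,000 bits per 4 ms interval.)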

Should I just be increasing the queue-limit, or should I be changing something else?

Thanks,

Dan.


12 Replies

Joseph W. Doherty
Hall of Fame


What do your buffer stats look like?
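
(The output below looks like it was gathered with the standard buffer statistics command:)

show buffers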

Buffer elements:

     748 in free list (500 max allowed)

     7206263 hits, 0 misses, 617 created

Public buffer pools:

Small buffers, 104 bytes (total 51, permanent 50, peak 98 @ 1w2d):

     46 in free list (20 min, 150 max allowed)

     5451136 hits, 51 misses, 76 trims, 77 created

     0 failures (0 no memory)

Middle buffers, 600 bytes (total 25, permanent 25, peak 46 @ 1w2d):

     24 in free list (10 min, 150 max allowed)

     1531925 hits, 10 misses, 30 trims, 30 created

     0 failures (0 no memory)

Big buffers, 1536 bytes (total 50, permanent 50, peak 59 @ 1w0d):

     49 in free list (5 min, 150 max allowed)

     6356469 hits, 12 misses, 37 trims, 37 created

     0 failures (0 no memory)

VeryBig buffers, 4520 bytes (total 10, permanent 10, peak 11 @ 1w2d):

     10 in free list (0 min, 100 max allowed)

     19 hits, 0 misses, 1 trims, 1 created

     0 failures (0 no memory)

Large buffers, 5024 bytes (total 1, permanent 0, peak 1 @ 1w2d):

     1 in free list (0 min, 10 max allowed)

     0 hits, 0 misses, 86 trims, 87 created

     0 failures (0 no memory)

Huge buffers, 18024 bytes (total 5, permanent 0, peak 5 @ 1w2d):

     5 in free list (4 min, 10 max allowed)

     0 hits, 0 misses, 171 trims, 176 created

     0 failures (0 no memory)

Interface buffer pools:

Syslog ED Pool buffers, 600 bytes (total 133, permanent 132, peak 133 @ 1w2d):

     101 in free list (132 min, 132 max allowed)

     13858 hits, 0 misses

IPC buffers, 4096 bytes (total 2, permanent 2):

     1 in free list (1 min, 8 max allowed)

     1 hits, 0 fallbacks, 0 trims, 0 created

     0 failures (0 no memory)

IPC Medium buffers, 16384 bytes (total 2, permanent 2):

     2 in free list (1 min, 8 max allowed)

     0 hits, 0 fallbacks, 0 trims, 0 created

     0 failures (0 no memory)

IPC Large buffers, 65535 bytes (total 2, permanent 2):

     2 in free list (1 min, 8 max allowed)

     0 hits, 0 misses, 0 trims, 0 created

     0 failures (0 no memory)

Header pools:

Header buffers, 0 bytes (total 768, permanent 768):

     256 in free list (128 min, 1024 max allowed)

     512 hits, 0 misses, 0 trims, 0 created

     0 failures (0 no memory)

     512 max cache size, 512 in cache

     652633530 hits in cache, 0 misses in cache

Particle Clones:

     1024 clones, 0 hits, 0 misses

Public particle pools:

F/S buffers, 1664 bytes (total 1536, permanent 1536):

     1024 in free list (256 min, 2048 max allowed)

     512 hits, 0 misses, 0 trims, 0 created

     0 failures (0 no memory)

     512 max cache size, 512 in cache

     910 hits in cache, 0 misses in cache

Normal buffers, 1676 bytes (total 3840, permanent 3840):

     3840 in free list (128 min, 4096 max allowed)

     34400566 hits, 0 misses, 0 trims, 0 created

     0 failures (0 no memory)

Private particle pools:

HQF buffers, 0 bytes (total 2000, permanent 2000):

     2000 in free list (500 min, 2000 max allowed)

     7926407 hits, 0 misses, 0 trims, 0 created

     0 failures (0 no memory)

IDS SM buffers, 240 bytes (total 128, permanent 128):

     0 in free list (0 min, 128 max allowed)

     128 hits, 0 fallbacks

     128 max cache size, 128 in cache

     0 hits in cache, 0 misses in cache

FastEthernet0/0/0 buffers, 1548 bytes (total 128, permanent 128):

     0 in free list (0 min, 128 max allowed)

     128 hits, 34400566 fallbacks

     128 max cache size, 64 in cache

     829203555 hits in cache, 34400502 misses in cache

GigabitEthernet0/0 buffers, 1664 bytes (total 1024, permanent 1024):

     0 in free list (0 min, 1024 max allowed)

     1024 hits, 0 fallbacks

     1024 max cache size, 442 in cache

     441985102 hits in cache, 0 misses in cache

GigabitEthernet0/1 buffers, 1664 bytes (total 1024, permanent 1024):

     70 in free list (0 min, 1024 max allowed)

     1024 hits, 0 fallbacks

     1024 max cache size, 957 in cache

     800575588 hits in cache, 910 misses in cache

GigabitEthernet0/2 buffers, 1664 bytes (total 1024, permanent 1024):

     0 in free list (0 min, 1024 max allowed)

     1024 hits, 0 fallbacks

     1024 max cache size, 768 in cache

     551000 hits in cache, 0 misses in cache


Those stats look good, which likely means your shaping queue buffers don't count against them.

What does your free memory look like? (You're requesting about 6 MB just for these buffers.)
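
(As a rough sanity check on that figure, assuming MTU-sized packets: 4096 packets x ~1,500 bytes ≈ 6.1 MB, which is presumably where the "about 6 MB" comes from.)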

What's the MTU on the shaped interface?

PS:

BTW, the solution might be decreasing your queue-limit. The no-buffer drops imply you don't have enough memory to support your requested 4096 buffers.
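
(A sketch of commands that would show the memory and MTU information requested above, assuming the LAN interface is GigabitEthernet0/0 as in the output that follows:)

show memory statistics
show interfaces GigabitEthernet0/0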

Memory looks fine.  At least it never went too low.

            Head        Total(b)     Used(b)      Free(b)      Lowest(b)    Largest(b)
Processor   2A67AFC0    320360512    52742812     267617700    51790492     228353424
I/O         D800000     41943040     20036880     21906160     21864320     19549308

MTU:

GigabitEthernet0/0 is up, line protocol is up

  Description: LAN

  Internet address is 10.10.10.1/30

  MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,

     reliability 255/255, txload 5/255, rxload 1/255


Those look good too, so at least to me it's not obvious why you're getting buffer failures.

What should I look for in the show buffers output when I am seeing drops?


I was looking at the "# failures (# no memory)" counters, but all of yours show zeros. You also appear to have ample free RAM.

PS:

This paper, http://www.cisco.com/en/US/products/hw/routers/ps341/products_tech_note09186a0080af893d.shtml, mentions that it's possible to see no-buffer drops past certain interface limits. You might try setting queue-limit down, as both Kishore and I suggested; perhaps try 1024.
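
(A minimal sketch of that change, reusing the policy and class names from earlier in the thread:)

policy-map shape-lan
 class tc-class
  queue-limit 1024 packets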

Thanks.  I will give that a try.  I was watching it today, but there were no drops at all even with the queue-limit at 4096.


My guess would be that today your buffers didn't need to go deep enough to hit the exhaustion condition.

OK, so I added the fair-queue command with 1024 dynamic conversation queues, and now I didn't have any no-buffer drops, just some flowdrops. This could be due to a limited amount of traffic today, but it seemed better with a smaller queue limit and the fair-queue command.
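
(In configuration terms, the change amounts to roughly the following under the shaping class; the queue counts match the output below, but the exact syntax here is a sketch rather than the poster's configuration:)

policy-map shape-lan
 class tc-class
  shape average 5000000
  fair-queue 1024
  queue-limit 1024 packets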

Class-map: tc-class (match-any)

  7403270 packets, 9285635494 bytes

  5 minute offered rate 0 bps, drop rate 0 bps

  Match: access-group name tc-class

    7403270 packets, 9285635494 bytes

    5 minute rate 0 bps

  Queueing

  queue limit 1024 packets

  (queue depth/total drops/no-buffer drops/flowdrops) 0/479/0/479

  (pkts output/bytes output) 7402791/9285046664

  shape (average) cir 5000000, bc 20000, be 20000

  target shape rate 5000000

  Fair-queue: per-flow queue limit 256

              Maximum Number of Hashed Queues 1024

Would this be happening because it's just too congested?

Dan.


dan.letkeman wrote:

OK, so I added the fair-queue command with 1024 dynamic conversation queues, and now I didn't have any no-buffer drops, just some flowdrops. This could be due to a limited amount of traffic today, but it seemed better with a smaller queue limit and the fair-queue command (output as above).

Would this be happening because it's just too congested?

Dan.

That would be my expectation. The big difference between the two types of drops, I believe, is that no-buffer drops would effectively be global tail drops, while flow drops would be specific to a flow or flows; i.e., the latter shouldn't drop packets from flows that aren't the ones creating the congestion.

If your results hold, and you're concerned about these drops, you could experiment a bit by increasing queue-limit until no-buffer drops reappear, and then back off again.
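
(One way to watch for that while experimenting, as a sketch assuming the interface and class names used earlier in the thread, is to re-check the class counters periodically:)

show policy-map interface GigabitEthernet0/0 output class tc-class

and keep an eye on the "(queue depth/total drops/no-buffer drops)" line.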

BTW, drops are often normal on congested interfaces and can actually be "constructive" in that they provide feedback to TCP flows to slow their transmission rate to the available bandwidth. That said, ideally you still want the minimal number of drops needed to provide this feedback, and you only want to target the "offending" congestion-causing flows.

Hi Dan,

Maybe try decreasing your queue limit and see if that helps.

HTH
