
CSCsw73198 - ASR QOS policy-map huge packet drop even with normal amount traffic

I'm seeing the exact same symptoms on our ASR1002s. We effectively have hierarchical QoS configured, i.e. class-based shaping. I don't see any of the shape-rate CIRs being exceeded, yet I see extensive random/tail drops occurring when the physical interface transmit rate is around 35 Mbps (1 Gbps interface). I have a TAC case open, but so far I've heard nothing.

7 Replies

Rudy GAVRON
Level 1

Hi,

Did you get an answer for this TAC case? A solution?

Thank you

The TAC case is closed. The problem centers on a few things: the shape CIR, the queue depth, and the actual rate of packet arrival. Low shape rates with flows serialized from high-speed sources (100 Mbps and higher) are a problem for traffic shaping, and the lower the CIR, the more problematic it becomes. You can verify that packets that don't violate the contract are still being dropped by configuring policing on the ingress interface that receives the packets to be shaped on egress.

To solve the problem, you can tune the shape parameters to send more data per interval and increase the queue length to provide a cushion for bursts. The default Tc on the ASR is 4 ms; try 10 ms, double the size of the Be bucket, increase the queue depth, and tune the WRED thresholds if WRED is in use. It really is a case-by-case thing.
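For reference, a rough sketch of what I mean by the verification policer (the policy name, interface, and rate are placeholders; match the rate to your shaper):

! Names and rate are placeholders. All actions transmit, so nothing is dropped
! on ingress; the exceed/violate counters are only compared against the drop
! counters on the egress shaping policy.
policy-map METER-IN
 class class-default
  police cir 10000000 conform-action transmit exceed-action transmit violate-action transmit
!
interface GigabitEthernet0/0/0
 service-policy input METER-IN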

Thank you very much for your fast answer!

I'll try this next week

I forgot to mention: if you want to use the policer to meter the packet flow, make sure your actions are "transmit" for everything. The goal is simply to compare the exceed/violate packet counts to the number of drops on the shaping policy. I also want to clarify that doubling the Be bucket means making the Be bucket twice the size of the Bc bucket. The configuration I used was:

shape average 10000000 100000 200000
service-policy output queue

policy-map queue
 class VoIP
  priority percent 10
 class bulk
  bandwidth percent 45
  random-detect dscp-based
  random-detect dscp af11 250 375
  queue-limit 500
 class class-default
  bandwidth percent 45
  queue-limit 500
  random-detect dscp-based
  random-detect dscp default 200 375

This example is for a 10 Mbps circuit. You could theoretically triple the Be bucket or go even larger; it works well for TCP flows. I'm still in the process of designing a permanent solution along these lines. The alternative is aggregate shaping instead of per-site shaping, using tri-color marking to rewrite exceed/violate packets into low-bandwidth queues with limited buffer and default WRED settings.
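As a rough illustration of the tri-color marking idea (the rates, DSCP values, and names here are purely hypothetical, not a tested design):

! Hypothetical two-rate policer that re-marks instead of dropping; exceed and
! violate traffic then lands in a low-bandwidth egress class with a small
! queue and default WRED.
policy-map MARK-IN
 class class-default
  police cir 8000000 pir 10000000
   conform-action transmit
   exceed-action set-dscp-transmit af12
   violate-action set-dscp-transmit af13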

Let me know how it works out. I can't guarantee it will stop packet loss altogether, but it will cut it down immensely. Once you sustain the CIR continuously, the queue will always have a high fill level and more drops will occur, but the queue is your troubleshooting aid; at that point you simply need more bandwidth. :)

Hi,

I think that tuning the queue-limit did the trick.

On a 10 Mb/s link we applied: shape average 10000000 100000 (to get Tc = 10 ms).

This change alone did almost nothing.

We then configured bigger queue-limits in the child policies (128 for class-default, up to 1024 for the important classes), and the drops went down.

We still get some drops, but mainly when the link is heavily used.
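Roughly, the relevant lines ended up looking like this (class names and the bandwidth percentage are just illustrative):

! parent shape for the 10 Mb/s link, plus child queue-limits
! (class names and the bandwidth percentage are illustrative)
shape average 10000000 100000
!
policy-map child
 class important
  bandwidth percent 45
  queue-limit 1024
 class class-default
  queue-limit 128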

Thank you, Dominic, for your help.

That's the same result I got; you have to combine the tweaks. If you put the shaping back to 4 ms you'll see your drops increase even with the increased queue depths. I do highly recommend making the excess burst (Be) bucket on the shaper proportionately larger than the committed burst (Bc) to give you that extra transmit capability when larger-than-normal bursts occur after a few intervals of low activity. Other than that, you just have to monitor it and make further tweaks as needed. I'm glad it worked out.
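For example, on the same 10 Mbps circuit, something along these lines would give a Be of three times the Bc (values illustrative):

! Be set to 3x Bc; adjust to your own CIR and Tc
shape average 10000000 100000 300000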
