tail drops

Unanswered Question
Nov 28th, 2007

Is there a rule of thumb when it comes to cleaning up tail drops? Obviously the fewer drops the better, but a deeper queue does introduce latency... I know TCP is pretty resilient, but at what point does it break when you have a customer who just floods their pipe consistently and says it's 'slow'?

After about 20 minutes or so, here is what I ended up with: a drop rate of about 1%.

Class-map: class-default (match-any)
  12901191 packets
  Match: any
  Tail Packets Drop: 139352
  Traffic Shaping
    Average Rate Traffic Shaping
    CIR 11111120 (bps)
  Queue Limit
    queue-limit 96 (packets)
  Output Queue:
    Max queue-limit default threshold: 96
    Tail Packets Drop: 139352
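As a quick sanity check (not part of the original post), the roughly 1% figure follows directly from the two counters above:

```python
# Counters copied from the show policy-map output above.
total_packets = 12901191
tail_drops = 139352

drop_rate = tail_drops / total_packets
print(f"Tail drop rate: {drop_rate:.2%}")  # about 1%, as quoted in the post
```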

mheusing (Cisco Employee) Fri, 11/30/2007 - 01:22

In the end, QoS can only successfully address transient overload conditions. If there is constantly more traffic than bandwidth, drops are inevitable.

Looking at your stats, I would assume a larger queue limit would reduce the tail drops, and adding WRED to class-default generally improves the situation. The idea of WRED is to avoid tail drop altogether by randomly dropping packets when there is too much overload. A dropped packet causes a TCP connection to reduce its window size and thus its throughput. This way WRED tries to regulate the traffic to match the offered interface bandwidth.
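The WRED behaviour described above can be sketched roughly as follows. This is an illustrative model only, with made-up thresholds (`min_th`, `max_th`) and a mark-probability denominator (`mpd`), not IOS internals:

```python
import random

def wred_drop(avg_queue_depth, min_th=20, max_th=40, mpd=10):
    """Decide whether to drop an arriving packet, WRED-style (toy model).

    Below min_th nothing is dropped; between min_th and max_th the drop
    probability ramps linearly up to 1/mpd; at or above max_th every
    packet is tail dropped.  All parameters here are illustrative.
    """
    if avg_queue_depth < min_th:
        return False            # queue is short: never drop
    if avg_queue_depth >= max_th:
        return True             # queue is full: tail drop everything
    # Linear ramp between the thresholds, scaled by 1/mpd.
    drop_prob = (avg_queue_depth - min_th) / (max_th - min_th) / mpd
    return random.random() < drop_prob
```

Because the random drops start early and hit only some flows, TCP senders back off gradually instead of all at once, which is the congestion-avoidance effect the post describes.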

You also configured some shaping to 11 Mbit/s - what is the reason for this?

It would be helpful to see your current QoS config and some information on bandwidth etc. for a more detailed analysis.

Based on the information given, turning on WRED should already improve your situation, assuming most of the traffic is TCP based.

Hope this helps! Please rate all posts.

Regards, Martin

sjamison76 Fri, 11/30/2007 - 05:39

The customer purchased a 10 Mbit/s Metro circuit from us. However, at the default of 10/FULL he never got to 10 Mbit/s. Switching it over to 100/FULL and traffic shaping it down to 11 Mbit/s (it won't allow shaping any lower than that) allowed this policy to take effect. Originally, with the queue limit at the default of 48, he had packet drops over 20%. After configuring the queue limit higher, this dropped to around 1%. Hopefully we can sell him some more bandwidth, but at least now he is getting the speed he is paying for.
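The latency trade-off of the deeper queue is easy to estimate: the worst-case queueing delay is the queue limit times the packet serialization time at the shaped rate. A quick sketch, assuming 1500-byte packets (an assumption, not a figure from the thread):

```python
def max_queue_delay_ms(queue_limit_pkts, avg_pkt_bytes, rate_bps):
    """Worst-case queueing delay (ms) when the queue is completely full."""
    return queue_limit_pkts * avg_pkt_bytes * 8 / rate_bps * 1000

# 96-packet queue of 1500-byte packets drained at the 11 Mbit/s shaped rate.
print(max_queue_delay_ms(96, 1500, 11000000))  # roughly 105 ms
```

So going from the default of 48 to 96 packets roughly doubles the worst-case delay, which is the latency cost mentioned in the original question.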

I don't know what the traffic is, and I really don't care, as long as QoS performs the way it's supposed to at a given rate.

mheusing (Cisco Employee) Wed, 12/05/2007 - 03:07


I would give it a try and turn on WRED and fair queueing for the traffic. Usually this improves the situation.

In your case you need a nested policy, as you are shaping a FE down to 11M. Example:

policy-map Shape11M
 class class-default
  shape average 11000000
  service-policy Q
!
policy-map Q
 class class-default
  fair-queue
  random-detect
!
interface FastEthernet0
 service-policy output Shape11M

This will turn on fair queueing and WRED for your shaper queue, which usually gives better results than the FIFO queueing a shaper uses by default.
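Conceptually, `shape average 11000000` behaves like a token bucket refilled at the CIR: packets are sent only while enough tokens have accumulated, and otherwise wait in the shaper queue (which is where fair queueing and WRED then act). A toy sketch of that idea, with an illustrative default burst size (Tc = 10 ms), not IOS internals:

```python
class TokenBucketShaper:
    """Toy average-rate shaper: a packet conforms (is sent now) only if
    the bucket holds enough tokens, refilled at the CIR in bits/sec."""

    def __init__(self, cir_bps, bc_bits=None):
        self.cir = cir_bps
        # Default burst Bc of CIR * 10 ms, an illustrative assumption.
        self.bc = bc_bits if bc_bits is not None else cir_bps // 100
        self.tokens = self.bc
        self.last = 0.0

    def conforms(self, pkt_bits, now):
        # Refill tokens for the elapsed time, capped at the burst size Bc.
        self.tokens = min(self.bc, self.tokens + (now - self.last) * self.cir)
        self.last = now
        if self.tokens >= pkt_bits:
            self.tokens -= pkt_bits
            return True   # send immediately
        return False      # hold in the shaper queue until tokens accumulate
```

Packets that do not conform are exactly the ones sitting in the shaper queue, so the child policy's fair queueing and WRED decide their order and which of them get dropped.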

Be aware, however, that QoS does not create bandwidth, i.e. 10 Mbit/s will never "feel" like 100 Mbit/s to a customer, no matter which QoS feature you use. I am sure you understand this, and I am crossing my fingers that your customer understands it as well.

Hope this helps! Please rate all posts.

Regards, Martin

sjamison76 Wed, 12/05/2007 - 12:17

They are performing very well now that the queue limit is set to the right size. I will add WRED and see what happens. It's only an internet connection they have, but I need to make sure they at least get the speed they are paying for :) We hope to sell them some more bandwidth once their contract ends...

Thanks for your help!

