Is there a rule of thumb when it comes to cleaning up tail drops? Obviously the lower the better, but that does introduce latency... I know TCP is pretty resilient, but at what point is the breaking point when you have a customer who just floods their pipe consistently and says it's 'slow'?
After about 20 minutes or so, here is what I ended up with: drops are down to about 1%.
In the end, QoS can only successfully address transient overload conditions. If there is constantly more traffic than bandwidth, drops are inevitable.
Looking at your stats, I would assume a larger queue limit would reduce the tail drops, and adding WRED to class-default generally improves the situation. The idea of WRED is to avoid tail drop altogether by randomly dropping packets when there is too much overload. A dropped packet leads to a reduction in the window size of a TCP connection and thus reduced throughput. This way WRED tries to regulate the traffic to match the offered interface bandwidth.
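As a minimal sketch of the two suggestions above (the policy name Q matches the example further down; the queue-limit value of 256 packets is illustrative, not a recommendation):

 policy-map Q
  class class-default
   queue-limit 256
   random-detect

The larger queue-limit absorbs bursts instead of tail-dropping them, while random-detect (WRED) starts dropping probabilistically before the queue fills.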
You also configured some shaping to 11 Mbit/s - what is the reason for this?
It would be helpful to see your current QoS config and some information on bandwidth etc. for a more detailed analysis.
Based on the information given, turning on WRED should already improve your situation, assuming most of the traffic is TCP based.
The customer purchased a 10 meg Metro circuit from us. However, at the default of 10/FULL he never got to 10 meg. Switching it over to 100/FULL and traffic shaping it down to 11 meg (it won't allow traffic shaping any lower than that) allowed this policy to take effect. Originally, with the queue-limit at the default of 48, he had packet drops over 20%. After configuring the queue-limit higher, this dropped to around 1%. Hopefully we can sell him some more bandwidth, but at least now he is getting the speed he is paying for.
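For reference, a sketch of the port-side change described above (the interface name is hypothetical; Shape11M is the 11 Mbit/s shaper policy discussed in this thread):

 interface FastEthernet0/0
  speed 100
  duplex full
  service-policy output Shape11M

Running the port at 100/FULL and shaping down to 11M means congestion happens in the shaper queue, where the QoS policy can manage it, rather than on the physical 10 Mbit/s port.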
I don't know what the traffic is, and really don't care as long as the QoS performs the way it's supposed to at a given rate.
I would give it a try and turn on WRED and fair-queueing for the traffic. Usually this improves the situation.
In your case you need a nested policy, as you are shaping a FE down to 11M. Example:
 policy-map Shape11M
  class class-default
   shape average 11000000
   service-policy Q
 !
 interface FastEthernet0/0
  service-policy output Shape11M
This will turn on fair queueing and WRED for your shaper queue, which usually gives better results than the FIFO queueing used by default in a shaper.
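To see whether the larger queue limit and WRED are actually reducing tail drops once the policy is attached, the policy counters can be checked (the interface name is hypothetical):

 show policy-map interface FastEthernet0/0 output

Watch the queue depth, tail drop, and random drop counters under the shaper class: with WRED working as intended, random drops should replace most of the tail drops.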
Be aware, however, that QoS does not create bandwidth, i.e. 10 meg will never "feel" like 100 meg for a customer no matter which QoS feature you use. I am sure you understand this, and I am crossing my fingers that your customer understands it as well.
They are performing very well now that the queue limit got set to the right size. I will add the WRED and see what happens with it. It's only an internet connection they have, but I need to make sure they at least get the speed they are paying for :) We hope to sell them some more bandwidth once their contract ends...