QoS tail drops on non-congested link

Unanswered Question
Jul 13th, 2010

We've recently upgraded a WAN link to a DS3.  Encapsulation is Frame-Relay.


The link is an MPLS link, with a 2 Mbps guaranteed-delivery ("GoldCAR") rate.


We're seeing utilization on the link up to 20 Mbps, yet we're also seeing tail drops.  We want to match our voice traffic in our policy map (we're doing that successfully) and let all other traffic share the remaining bandwidth.


Here are the relevant policy statements:


class-map match-all MarkGold
match access-group name GoldCARService
class-map match-any Gold-CAR
match ip dscp ef
!
!
policy-map Traffic-Engineering
class Gold-CAR
    priority 1024
class class-default
    fair-queue
policy-map Marking
class MarkGold
  set ip dscp ef

!

interface GigabitEthernet0/0
ip address 1.2.3.2 255.255.252.0
duplex auto
speed auto
media-type rj45
service-policy input Marking

!

interface Serial1/0
no ip address
no ip redirects
encapsulation frame-relay IETF
load-interval 30
dsu bandwidth 44210
scramble
service-policy output Traffic-Engineering
!       
interface Serial1/0.101 point-to-point
bandwidth 44210
ip address 3.3.3.2 255.255.255.252
frame-relay interface-dlci 101 IETF 
!

ip access-list extended GoldCARService
permit udp 4.4.4.0 0.0.0.255 1.1.0.0 0.0.255.255
permit tcp 4.4.4.0 0.0.0.255 1.1.0.0 0.0.255.255
permit udp 4.4.4.0 0.0.0.255 1.1.1.0 0.0.0.255 range 5000 6381



A resulting check against the policy map shows drops on our serial link:


router01#show policy-map int s1/0


Serial1/0


  Service-policy output: Traffic-Engineering


    queue stats for all priority classes:
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/144/0
      (pkts output/bytes output) 3646345/732033091


    Class-map: Gold-CAR (match-any)
      3646489 packets, 732062395 bytes
      30 second offered rate 489000 bps, drop rate 0 bps
      Match: ip dscp ef (46)
        3646489 packets, 732062395 bytes
        30 second rate 489000 bps
      Priority: 1024 kbps, burst bytes 25600, b/w exceed drops: 144
   

    Class-map: class-default (match-any)
      352689830 packets, 34606942932 bytes
      30 second offered rate 7717000 bps, drop rate 4000 bps
      Match: any
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops/flowdrops) 0/3100244/0/3100244
      (pkts output/bytes output) 349589587/34410144009
      Fair-queue: per-flow queue limit 16


Since my offered rates are nowhere near the capacity of the link, why would I get tail drops?


We're thinking of modifying the class-default class map to:


class class-default
    fair-queue 256
    queue-limit 32


But we're unsure of what might be needed to stop the tail drops.


Thanks,


-- Gil

jorge.calvo Wed, 07/14/2010 - 00:41

Hello,


A useful command to see whether a link is really suffering congestion is "show queue", in your case: show queue serial1/0. If you see any active conversations in the output of this command, it means there is congestion at that moment. It is good practice to repeat the command several times to check for active conversations.


I would change two things to reduce your drops:


1. I would make class Gold-CAR match your real CAR with "priority 2048".


2. I would implement WRED on class-default together with a more granular QoS policy (different AFxx allocations for different traffic classes). That way the less important traffic would be discarded randomly, at a 1-in-10 mark probability, before being tail dropped.
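As a sketch of what that more granular policy might look like -- the Business-Data class, its DSCP matches, and the bandwidth percentage are illustrative assumptions, not taken from your config:

```
! Illustrative sketch only -- class name, AF values, and percentage are assumptions
class-map match-any Business-Data
 match ip dscp af31 af32
!
policy-map Traffic-Engineering
 class Gold-CAR
  priority 2048
 class Business-Data
  bandwidth remaining percent 40
  random-detect dscp-based        ! WRED drops higher drop-precedence AF traffic first
 class class-default
  fair-queue
  random-detect                   ! random early drops begin before the tail-drop limit
```

With class-default carrying only the least important traffic, its drops matter far less.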


Hope this helps.

glshillcutt Wed, 07/14/2010 - 06:32

Jorge,


Thanks for the response.  We had already planned on taking the priority queue up to 2048, and did so last night.


The "show queue" command has been deprecated; the "show policy-map interface" command replaces it.


After making the change as proposed, we are still seeing tail drops:


router01#show policy-map interface Serial1/0

Serial1/0

  Service-policy output: Traffic-Engineering

    queue stats for all priority classes:
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 563671/113497333

    Class-map: Gold-CAR (match-any)
      563671 packets, 113497333 bytes
      30 second offered rate 277000 bps, drop rate 0 bps
      Match: ip dscp ef (46)
        563671 packets, 113497333 bytes
        30 second rate 277000 bps
      Priority: 2048 kbps, burst bytes 51200, b/w exceed drops: 0
     

    Class-map: class-default (match-any)
      84524074 packets, 13997108919 bytes
      30 second offered rate 7503000 bps, drop rate 10000 bps
      Match: any
      Queueing
      queue limit 32 packets
      (queue depth/total drops/no-buffer drops/flowdrops) 0/417726/0/417726
      (pkts output/bytes output) 84106348/13956711339
      Fair-queue: per-flow queue limit 8


The total offered rate, at 277,000 + 7,503,000 = 7,780,000 bps, is well below the total available bandwidth of 44,210,000 bps.  It doesn't make sense that we'd have a significant drop rate -- we're not in a congested state.


Any other ideas?


Thanks,


-- Gil

jorge.calvo Wed, 07/14/2010 - 07:16

Hi,


You could increase the fair-queue and queue-limit values and, if possible, apply a more granular QoS template with WRED, so that class-default carries only the less important traffic.


Are you experiencing any other problems apart from the tail drops, such as CRCs, input/output errors on the serial interface, latency spikes, and so on?


If not, a more granular QoS policy should resolve your issue.


Regards.

Joseph W. Doherty Wed, 03/06/2013 - 10:29

Disclaimer


The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.


Liability Disclaimer


In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.


Posting


The queues are probably too shallow for a DS3.  I recommend that the individual flow queue be able to hold about half your bandwidth-delay product (BDP).
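As a rough back-of-the-envelope -- the 40 ms round-trip time is an assumption for illustration, not a figure from this thread, so substitute your own measured RTT:

```python
# Size the per-flow queue from the bandwidth-delay product (BDP).
link_bps = 44_210_000   # DS3 payload rate from the config (dsu bandwidth 44210)
rtt_s = 0.040           # ASSUMED round-trip time; measure yours with ping
mtu_bytes = 1500        # typical serial MTU

bdp_bytes = link_bps / 8 * rtt_s        # bytes in flight at full rate
bdp_packets = bdp_bytes / mtu_bytes     # same, in full-size packets
per_flow_queue = bdp_packets / 2        # "about half your BDP"

print(round(bdp_bytes))       # 221050 bytes
print(round(bdp_packets))     # 147 packets
print(round(per_flow_queue))  # 74 packets -- far above the current per-flow limit of 8
```

Even at a modest 40 ms RTT, a single bursty flow can legitimately have an order of magnitude more packets in flight than the 8-packet per-flow limit shown in the output above.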

diepes Wed, 03/06/2013 - 04:26

I think you are getting the drops due to the fair-queue applied to the default class.

This sets a limit of 8 packets for each flow, even if bandwidth is available.


All the class-default drops are counted as flowdrops, not no-buffer drops.


If you have a single flow that bursts and its queue goes over 8 packets, the excess gets dropped.


>> Fair-queue: per-flow queue limit 8

>> (queue depth/total drops/no-buffer drops/flowdrops) 0/417726/0/417726
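In both of the pasted outputs the per-flow limit tracks the class queue limit at a 4:1 ratio (queue limit 64 -> per-flow 16, queue limit 32 -> per-flow 8), so one way to give bursty flows headroom is to deepen the class queue rather than shrink it.  A sketch -- the 300-packet figure is an assumption to be tuned, not a recommendation from this thread:

```
policy-map Traffic-Engineering
 class class-default
  fair-queue 256           ! more flow queues reduces hash collisions between flows
  queue-limit 300 packets  ! deeper class queue; the per-flow limit scales up with it
```

Re-check "show policy-map interface Serial1/0" afterward to confirm the flowdrops counter stops climbing.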

paul driver Thu, 03/07/2013 - 03:42

Hello,


When you increased the priority rate, your default-queue drop rate increased from 4K to 10K; this seems to suggest the scheduler is continually servicing the priority queue.


Try dropping the priority queue rate to as near your expected traffic rate as possible.


res

Paul


Please don't forget to rate this post if it has been helpful.
