LLQ/PQ dropping conforming traffic

Unanswered Question
Mar 30th, 2010

Are there reasons other than exceeding a queue's configured bandwidth why a priority queue would drop traffic?

I have a shaper configured with a child policy that includes LLQ/PQ and CBWFQ. When I saturate the shaper, the LLQ/PQ experiences bandwidth-exceeded drops even though the rate of traffic in the PQ is below the bandwidth limit; in other words, it is dropping conforming traffic.

As you can see, there are "b/w exceed drops", but the offered rate is only 1229 kbps while the queue is configured for 1350 kbps. I'm 99% sure there is no bursting or other traffic, since this is an isolated lab network. So it looks to me like it's dropping traffic for some reason other than exceeding the bandwidth.

show policy-map int gi0/0
GigabitEthernet0/0

  Service-policy output: TEST-WAN-SHAPE

    Class-map: class-default (match-any)
      253452 packets, 383171894 bytes
      30 second offered rate 24662000 bps, drop rate 10162000 bps
      Match: any
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 119/108582/0
      (pkts output/bytes output) 142459/215355575
      shape (average) cir 15000000, bc 150000, be 150000
      target shape rate 15000000

      Service-policy : TEST-QUEUE-MANUAL

        queue stats for all priority classes:
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 0/2412/0
          (pkts output/bytes output) 9491/14350392

        Class-map: AVVID-Voice (match-any)
          11903 packets, 17997336 bytes
          30 second offered rate 1229000 bps, drop rate 245000 bps
          Match: ip dscp ef (46)
            11903 packets, 17997336 bytes
            30 second rate 1229000 bps
          Priority: 9% (1350 kbps), burst bytes 33750, b/w exceed drops: 2412

<SNIP>

To complicate things, sometimes the test works as expected (i.e., the only drops are from the class-default queue); other times it shows these bandwidth-exceed drops for the PQ.

Here is the relevant config:

policy-map TEST-QUEUE-MANUAL
 class AVVID-Voice
  priority percent 9
 class Tandburg-Video
  bandwidth percent 27
  queue-limit 128 packets
 class OCS-Video
  bandwidth percent 21
  queue-limit 128 packets
 class Media-Signaling
  bandwidth percent 2
 class Net-Mgt
  bandwidth percent 1
 class OCS-Voice
  bandwidth percent 5
policy-map TEST-WAN-SHAPE
 class class-default
  shape average 15000000 150000
  service-policy TEST-QUEUE-MANUAL

Any ideas on why we may be seeing this behavior?

Thanks,

Joe

c2jkeegan Wed, 03/31/2010 - 09:57

So this looks like an issue with the traffic-generation tool, iPerf. When generating more than around 10-15 Mbps of traffic, the streams become very uneven and can be quite bursty.

Running the iPerf server with the "-i 1" option, to report throughput every second, showed this.

Giuseppe Larosa Wed, 03/31/2010 - 10:02

Hello Joe,

Yes, only hardware-based traffic generators like a Smartbits (or newer equivalents) can be set to a precise rate in frames per second.

The presence of short-lived bursts explains the drops.
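To see why short-lived bursts cause drops even when the average rate conforms, here is a minimal sketch (not Cisco code) of a single-rate, two-color token-bucket policer, using the 1350 kbps rate and 33750-byte burst from the show output above; the packet sizes and arrival patterns are made up for illustration:

```python
RATE_BPS = 1_350_000   # policed rate in bits/sec (priority 1350 kbps)
BC_BYTES = 33_750      # burst allowance in bytes (from the show output)

def police(packets, rate_bps=RATE_BPS, bc_bytes=BC_BYTES):
    """packets: list of (arrival_time_sec, size_bytes). Returns drop count."""
    tokens = bc_bytes  # bucket starts full
    last = 0.0
    drops = 0
    for t, size in packets:
        # refill tokens for the elapsed time, capped at the burst size
        tokens = min(bc_bytes, tokens + (t - last) * rate_bps / 8)
        last = t
        if size <= tokens:
            tokens -= size   # conform: packet is sent
        else:
            drops += 1       # exceed: two-color policer just drops
    return drops

# Smooth stream: one 1500-byte packet every 10 ms = 1.2 Mbps, under the rate
smooth = [(i * 0.01, 1500) for i in range(100)]
# Same ~1.2 Mbps average, but delivered as back-to-back 30-packet bursts
bursty = [(i // 30 * 0.3 + (i % 30) * 0.0001, 1500) for i in range(90)]
print(police(smooth), police(bursty))
```

In this sketch the smooth stream sees no drops, but each 45,000-byte burst overflows the 33,750-byte bucket, so packets at the tail of every burst are dropped even though the long-term average is only about 1.2 Mbps.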

Hope to help

Giuseppe

dominic.giardina Sun, 10/13/2013 - 09:34

The burst in the LLQ defaults to 20% of the configured priority rate (i.e., 200 ms worth of traffic). When you consider how a single-rate, two-color policer meters traffic, it becomes clear that the LLQ will police bursts against only 20% of the configured priority rate unless you manually configure the burst up to 100% of it. Note that this policer is only engaged when there is congestion, as seen by the physical interface or by a shaper. This introduces too much randomness into whether priority traffic will be policed or dequeued, which means real-time apps will suffer degraded performance more often than you would like. Set the LLQ burst to 100% of the configured priority rate, and only traffic actually exceeding that rate will drop.
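A quick arithmetic check of the 20% claim against the show output above (assuming the default burst is 200 ms worth of the priority rate, and that "100%" means one full second's worth). The resulting 33,750 bytes matches the "burst bytes 33750" line in the output; the larger value is what you would configure explicitly (e.g., as the optional burst argument to the priority command, such as "priority percent 9 168750" - syntax to be verified on your platform):

```python
rate_bps = 1_350_000                       # priority percent 9 of the 15 Mbps shaper
default_burst = int(rate_bps / 8 * 0.20)   # 20% of one second's traffic, in bytes
full_burst    = int(rate_bps / 8 * 1.00)   # 100% of one second's traffic, in bytes
print(default_burst, full_burst)           # 33750 matches the show output
```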
