Unexplained priority queue drops on 2821

Mark Boolootian
Level 1

I'm pulling my hair out trying to figure out why my priority queue is dropping traffic.

I have what I believe is a very simple QoS configuration. I have a 2821 running 12.4(22)T with the following policy map:

class-map match-all bearer
 match ip dscp ef
class-map match-all signal
 match ip dscp cs3
!
policy-map voice-policy
 class bearer
  priority percent 15
 class signal
  bandwidth percent 10
 class class-default
  fair-queue

This policy map is applied as an outbound service policy on a T-1 interface.
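For reference, that attachment amounts to the following (the interface name is taken from the show output further down):

interface Serial0/0/0:0
 service-policy output voice-policy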

I flood the T-1 with 10 Mb/s of unmarked UDP traffic while I have a couple of pings running that are sending 60-byte CS3- and EF-marked ICMP echo requests to a device on the far end of the T-1. There is no other marked traffic being sent.

What I expect is for the marked traffic to get through the congested link (with some added delay due to serialization). But virtually all of the marked traffic is dropped.
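For what it's worth, a quick sanity check of the numbers (assuming the usual 1.536 Mb/s of usable T-1 payload):

  0.15 x 1536 kbps ~= 230 kbps   -> matches "Priority: 15% (230 kbps)" in the output below
  0.10 x 1536 kbps ~= 154 kbps   -> matches "bandwidth 10% (153 kbps)" in the output below

The pings offer on the order of 1 kb/s each, so both marked classes are far inside their allocations.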

The CPU on the 2821 is near idle. There are no input queue drops.

Here's a picture (fixed-width font required) of the topology:

datasource1
           \
            <--GE--> 2821 <----T1----> 2651XM <---FE--> target
           /
datasource2

datasource1 sends a 10 Mb/s stream of 1400-byte unmarked UDP packets destined to target. datasource2 runs a pair of pings destined to target, one marked EF, the other marked CS3.

Below is the output from 'show policy-map interface' on the T1 interface on the 2821.

Explanations would be greatly appreciated.

---

dev#sh policy-map int ser 0/0/0:0
 Serial0/0/0:0

  Service-policy output: voice-policy

    queue stats for all priority classes:
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/131/131
      (pkts output/bytes output) 3027/266376

    Class-map: bearer (match-all)
      3158 packets, 277904 bytes
      5 minute offered rate 1000 bps, drop rate 1000 bps
      Match: ip dscp ef (46)
      Priority: 15% (230 kbps), burst bytes 5750, b/w exceed drops: 131

    Class-map: signal (match-all)
      1627 packets, 143176 bytes
      5 minute offered rate 1000 bps, drop rate 1000 bps
      Match: ip dscp cs3 (24)
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/97/97
      (pkts output/bytes output) 1530/134640
      bandwidth 10% (153 kbps)

    Class-map: class-default (match-any)
      168667 packets, 213899049 bytes
      5 minute offered rate 3052000 bps, drop rate 1629000 bps
      Match: any
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops/flowdrops) 999/54745/54501/244
      (pkts output/bytes output) 114068/135442123
      Fair-queue: per-flow queue limit 16

6 Replies

Joseph W. Doherty
Hall of Fame

"But virtually all of the marked traffic is dropped. "

Based on what? (In other words, how do you know? You stats only show marked packets being dropped for buffer failure, but not all of them. I.e., drops EF shows 131/3158 = 4%, CS3 shows 97/1627 = 6%)

Sorry, those stats included marked traffic that was sent when the link wasn't congested. Here is what it looks like for a two-minute run. Drops in the priority queue are about 85%.

I must be misunderstanding how this is supposed to work. I thought that the system would protect buffers for the priority queue to ensure that 15% of the T-1 bandwidth would be available for EF-marked packets. That doesn't seem to be the case.

c3-g#show policy-map int ser 0/0/0:0
 Serial0/0/0:0

  Service-policy output: voice-policy

    queue stats for all priority classes:
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/164/164
      (pkts output/bytes output) 31/2728

    Class-map: bearer (match-all)
      195 packets, 17160 bytes
      30 second offered rate 0 bps, drop rate 0 bps
      Match: ip dscp ef (46)
      Priority: 15% (230 kbps), burst bytes 5750, b/w exceed drops: 164

    Class-map: signal (match-all)
      265 packets, 23320 bytes
      30 second offered rate 1000 bps, drop rate 0 bps
      Match: ip dscp cs3 (24)
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/216/216
      (pkts output/bytes output) 49/4312
      bandwidth 15% (230 kbps)

    Class-map: class-default (match-any)
      104226 packets, 149039475 bytes
      30 second offered rate 5886000 bps, drop rate 4969000 bps
      Match: any
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops/flowdrops) 0/87101/86616/485
      (pkts output/bytes output) 17139/24425179
      Fair-queue: per-flow queue limit 16

Ok, I do see the high drop rate, but they still seem to all be no-buffer drops.

QoS should ensure your EF and CS3 packets get sent if they're within their bandwidth allocations, but the no-buffer drops, I believe, are precluding them from making it to their outbound queue.

From your prior stats, your class-default FQ is likely using up all your buffers. For testing purposes, you might try changing class-default to FIFO or tuning your buffers. The latter might not work well since you're sending such a high rate of UDP packets; any rate over your T-1 line rate is going to fill buffers. The former, FIFO, might not use as many buffers and could keep class-default from starving the EF and CS3 classes.
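If it helps, a minimal sketch of the FIFO test (the queue-limit line is optional, the value 128 is just an arbitrary number to experiment with, and exact syntax can vary a bit by release):

policy-map voice-policy
 class class-default
  no fair-queue
  ! optional: deepen the default class queue while testing
  ! queue-limit 128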

Thanks very much for the response, Joseph. You've hit upon exactly where my understanding is lacking.

I thought the creation of a policy map with separate classes created separate output queues. Specifically, I was expecting that there was a priority queue with some set of dedicated buffers that can't be drawn on by other classes.

If the priority queue draws from a common buffer pool and there is no restriction on who can draw from that pool (or how much), then any high rate spew (think DoS) can exhaust the pool and trash the traffic in the priority queue.

I thought this was exactly what QoS was supposed to prevent.

Yes, I too believe that different classes define separate output queues, but as for them having dedicated buffers, I don't recall seeing documentation that says they do, and I suspect they don't. If they don't, you're correct about the exposure to a DoS attack, but there are things we could do to protect the resource. For instance, we could police the inbound interface traffic that's going toward the WAN interface. The policer could be targeted to certain traffic types, like non-TCP, that can also generate this type of situation even when not an intentional DoS attack.
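As a rough sketch of that idea, using UDP as the simplest non-TCP case (the interface name, ACL, policy names, and rate here are only placeholders I'm assuming for illustration):

ip access-list extended UDP-TOWARD-WAN
 permit udp any any
!
class-map match-all non-tcp-flood
 match access-group name UDP-TOWARD-WAN
!
policy-map LAN-IN
 class non-tcp-flood
  police 1000000 conform-action transmit exceed-action drop
!
interface GigabitEthernet0/0
 service-policy input LAN-IN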

[edit]

From your original post:

    Class-map: class-default (match-any)
      168667 packets, 213899049 bytes
      5 minute offered rate 3052000 bps, drop rate 1629000 bps
      Match: any
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops/flowdrops) 999/54745/54501/244
      (pkts output/bytes output) 114068/135442123
      Fair-queue: per-flow queue limit 16

A good question (for Cisco?) might be why there are 999 packets in the class-default FQ when the queue limit is 64. (It might work differently if you weren't using a "T" train version.)

This problem has been resolved. Happily, QoS does work as expected. The problem I was having was due to a bug in 12.4(22)T. A bug ID has been filed: CSCsw98427.

I dropped back to 12.4(15)T1 and the QoS behavior was as I had hoped: marked packets are given priority over unmarked packets, so there is no loss of marked packets even when the serial link is flooded with unmarked UDP traffic.

Thanks to Joseph for his responses. I'm trying to glue the remaining hair back in.
