
Tuning queues in Distributed LLQ (Cisco 7500)

etamara
Level 1

Hi.

I have the following QoS config on a Cisco 7500:

policy-map Oficinas
 class Critical
  priority percent 90
 class Intranet
  bandwidth percent 6

The policy is attached to different 64-kbps serial interfaces with PPP encapsulation.
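For context, each serial is attached along these lines (the interface numbering and the bandwidth statement here are just illustrative):

interface Serial1/0
 bandwidth 64
 encapsulation ppp
 service-policy output Oficinas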

Actually, there is no voice traffic at all, but I didn't find a specific place for posting QoS questions.

My problem is that class Intranet and class-default show drops (above 1%), and when a traffic flow that congests the line, such as HTTP (class Intranet), is sent, the measured transfer time is worse with QoS than with default FIFO queueing. Furthermore, for this single flow, FIFO never shows any drops.
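For what it's worth, the drop figures come from the standard counters (interface name illustrative):

show policy-map interface Serial1/0   ( per-class packet and drop counters for the attached policy )
show interfaces Serial1/0             ( interface-level output drops, used for the FIFO comparison )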

Due to the line bandwidth, the queue limit for the whole interface is 16. Class-default has a default queue-limit of 2, as does class Intranet. I decided to tune the queues of these classes, and I found two options in the CCO docs:

1- "queue-limit" command.

2- "fair-queue queue-limit" command.

The first one defines a queue limit for the class, while the second defines a queue limit for each flow. I guess both activate fair queueing ( per-flow in the second case ).
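In config terms, the two options would look roughly like this (the numeric values below are placeholders, not recommendations):

policy-map Oficinas
 class Intranet
  bandwidth percent 6
  queue-limit 16
 class class-default
  fair-queue
  fair-queue queue-limit 8

( "queue-limit" caps the queue of the whole class, while "fair-queue queue-limit" caps each individual flow's queue within a class that has fair queueing enabled. )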

I ran different tests changing these parameters, but the results seemed confusing. In general, the bigger the queue-limit, the lower the drop level; however, the measured times got worse as drops decreased.

Is this normal? Has anybody faced this problem or can give me a clue about how to tune queues?

Thanks,

Enrique

1 Reply

hadbou
Level 5

On the 7500 you're doing distributed queueing, which means that all the queueing data structures are kept in the DRAM of the VIP. The packets, however, stay in the packet memory (SRAM on the VIP2, SDRAM on the VIP4).

The DRAM consumption depends on the number of classes in the policy-map and on the features enabled. For a CBWFQ policy with 3-4 classes, the overhead is about 12 KB per policy; if "fair-queue" is enabled, each policy can take an additional 75 KB.
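If you want to see how much memory the VIP is actually using, you can run the command on the VIP itself from the RSP (slot number illustrative):

execute-on slot 1 show memory summary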

CPU power and SRAM are also limiting factors on the VIP2-40s. If you have a lot of low-speed subinterfaces and only CBWFQ is needed, a VIP2-40 may be able to handle that. For a higher aggregate link rate and combinations of QoS features such as dTS, police, and dFRF.12, a VIP2-50 or higher is recommended.

Additionally, the size and structure of the routing table should be taken into account. When you enable dCBWFQ, you obviously have to enable dCEF. The latter will cause the FIB table to be downloaded from the RSP into the VIP. The bigger the routing table and the more complex it is, the bigger the FIB table, hence the more memory it'll use on the VIP.
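As a quick sketch (the first line is global configuration, the second an exec command):

ip cef distributed    ( prerequisite for dCBWFQ and the other distributed features )
show ip cef summary   ( rough idea of the FIB size that gets downloaded to each VIP )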