
QoS on GSR multilink E1

a_gougevsky
Level 1

I am seeing threshold drops on a GSR E1 multilink interface for the following class, even though the maximum queue-depth never reaches the queue-limit value. How could this happen?

policy-map xxx
 ..<skipped>..
 class cm-ROUTING
  bandwidth percent 1

router#sh policy-map interface multilink 1
..<skipped>...
  Class-map: ROUTING (match-any) (10715393/14)
    679924 packets, 37763127 bytes
    30 second offered rate 2000 bps, drop rate 0 bps
    Match: qos-group 6 (11332770)
    Match: ip precedence 6 (4148546)
    Class of service queue: 9
    Queue-limit: 8 packets (default)  Threshold drop 561 pkts, 28860 bytes
    Current queue-depth: 0 packets, Maximum queue-depth: 3 packets
    Average queue-depth: 0.000 packets
    Bandwidth: 64 kbps
...

4 Replies

Giuseppe Larosa
Hall of Fame

Hello,

you are asking the router to guarantee 1% of the multilink E1 bandwidth to this class of traffic, which means a very low bit rate for it.

In addition, you should check whether the class uses a tail-drop policy instead of WRED.
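For illustration, a minimal sketch of enabling WRED on this class instead of the default tail drop; the policy-map and class names are taken from the post above, and it is an assumption that this platform accepts random-detect under a bandwidth class:

policy-map xxx
 class cm-ROUTING
  bandwidth percent 1
  ! assumption: platform supports class-based WRED on this class;
  ! WRED starts dropping probabilistically before the queue fills,
  ! instead of tail-dropping everything once the limit is reached
  random-detect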

In the old days, the rule of thumb on other router platforms was to leave up to 25% of the bandwidth to the routing protocols.

1% is really too small a percentage for one or two E1s in a multilink PPP bundle.

You are also challenging the resolution and precision of the traffic meter with such low values.

Until a routing neighborship is formed the link is empty; as soon as it is formed, user traffic can easily saturate the link.

The average size of conforming packets is about 56 bytes (37763127 bytes / 679924 packets ≈ 55.5), and the average size of discarded packets is about 51 bytes (28860 bytes / 561 packets ≈ 51.4).

Hope to help

Giuseppe

Giuseppe,

thank you for your reply!

I think that 25% would be overkill for my network. I have a 3xE1 bundle, and I think that 1% (64 kbps in my case) is good enough for the routing class. Actually, I have 3 BGP sessions established over this link and all of them seem pretty stable, but the threshold-drop counter keeps slowly increasing.

If I saw that the maximum queue depth was at my queue-limit value, or if I had ever noticed the offered rate getting close to the guaranteed bandwidth, I would conclude that my routing traffic did not fit in 1 percent and would just increase the number. However, what I see is different; of course, my reading of the output could be wrong, too.
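(A hedged aside: one way to watch just that counter over time is the standard IOS output filter; the match string simply follows the output shown above.)

router#sh policy-map interface multilink 1 | include Threshold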

OK,

you have three BGP sessions, each sending out a BGP keepalive every 60 seconds (with default timers).
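As a hedged illustration of where the 60-second figure comes from, the BGP keepalive and hold timers can also be set explicitly per neighbor; the AS number and neighbor address below are hypothetical:

router bgp 65000
 ! hypothetical neighbor; 60 s keepalive / 180 s holdtime are the IOS defaults
 neighbor 192.0.2.1 timers 60 180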

However, keep in mind that this bandwidth is not dedicated to the traffic class; it is a minimum guarantee, not a reservation.

So I would suggest using a bigger value, maybe 3%, just to see the difference. This doesn't mean that this bandwidth cannot be used by user traffic when the class doesn't need it.

The scheduler is elastic, not a rigid partition of resources; see the sketch below.
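A minimal sketch of that suggestion, reusing the policy-map and class names from the original post:

policy-map xxx
 class cm-ROUTING
  ! raise the guarantee from 1% to 3% (roughly three times the 64 kbps
  ! shown above); unused bandwidth stays available to other classes
  bandwidth percent 3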

Maybe sometimes the three BGP keepalive packets are placed in the queue at once, and this causes one to be dropped. Of course the sessions stay stable, because only one of many keepalive messages is lost.

In the show output, the maximum queue-depth is just 3 packets for this traffic class.

And you should also think about providing space for a possible routing update.

For this reason it is better to provide a greater percentage of bandwidth to routing traffic, and possibly a deeper queue, as sketched below.
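If the drops come from short bursts rather than a sustained rate, the per-class queue depth could also be enlarged; a sketch assuming the platform accepts queue-limit under this class (16 is an arbitrary example value):

policy-map xxx
 class cm-ROUTING
  bandwidth percent 3
  ! double the default 8-packet queue so a burst of keepalives plus a
  ! routing update can be buffered instead of threshold-dropped
  queue-limit 16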

Or, perhaps more likely, you are dealing with the resolution limits of the traffic meter.

best regards

Giuseppe

> Maybe sometimes the three BGP keepalive packets are placed in the queue at once, and this causes one to be dropped.

Why would the fourth packet be dropped while the queue limit is 8 packets?
