Discards on WAN MultiLink Interface

Kevin Melton
Level 2

I am working at a customer site today. They have an MPLS circuit that is effectively two T1s bundled together on a 3800 Series ISR.

We have been noticing some output drops when we do a "sho interface". They are pretty excessive given that we do not have a lot of load on that interface, and the reliability of the interface is at 255/255.

Here is a sample output from the interface:

GAMPLSRTR01#sho int multi1
Multilink1 is up, line protocol is up
  Hardware is multilink group interface
  Internet address is X.X.X.X/30
  MTU 1500 bytes, BW 3072 Kbit, DLY 100000 usec,
     reliability 255/255, txload 126/255, rxload 26/255
  Encapsulation PPP, LCP Open, multilink Open
  Open: IPCP, loopback not set
  Keepalive set (10 sec)
  DTR is pulsed for 2 seconds on reset
  Last input 00:00:10, output never, output hang never
  Last clearing of "show interface" counters 00:02:21
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 2946
  Queueing strategy: fifo
  Output queue: 22/40 (size/max)
  30 second input rate 321000 bits/sec, 322 packets/sec
  30 second output rate 1520000 bits/sec, 345 packets/sec
     44962 packets input, 5045539 bytes, 0 no buffer
     Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     53031 packets output, 35239499 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 output buffer failures, 0 output buffers swapped out
     0 carrier transitions
GAMPLSRTR01#

This reading was taken just a minute or so after the interface counters were cleared.

What would be the most obvious causes of these drops, given that we are not hammering our MPLS link?

Could it be our configured Class of Service Policies? What else could potentially cause these drops?

Thanks much. I am in desperate need of an explanation today if possible, and am currently not sure what is going on.

K-melton

9 Replies

Giuseppe Larosa
Hall of Fame

Hello Kevin,

>> Queueing strategy: fifo

>> Output queue: 22/40 (size/max)

Packets sit in the output queue waiting to be transmitted; when the max size of the queue (40 packets) is reached, you get output drops caused by tail drop.

Hope to help

Giuseppe

Is there a way to change the size of the output queue?

Thanks!

Hello Kevin,

int type x/y
 hold-queue 4096 out

Verify it first; the maximum size allowed can be lower on your router.
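Applied to the bundle in this thread, that would look something like the sketch below; 4096 is just an example depth (a larger queue trades drops for latency, so do not overdo it):

interface Multilink1
 hold-queue 4096 out

You can confirm the new depth afterwards with "show interfaces Multilink1 | include Output queue".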

Hope to help

Giuseppe

The queueing strategy on the interface changes when we re-apply our Class of Service policies. We had lifted those this morning to see if they were causing the problem.

Here is the output when we re-apply the policy map to the Multilink interface (a generic sketch of such a policy follows the output below):

GAMPLSRTR01#sho int multi1
Multilink1 is up, line protocol is up
  Hardware is multilink group interface
  Internet address is X.X.X.X/30
  MTU 1500 bytes, BW 3072 Kbit, DLY 100000 usec,
     reliability 255/255, txload 76/255, rxload 29/255
  Encapsulation PPP, LCP Open, multilink Open
  Open: IPCP, loopback not set
  Keepalive set (10 sec)
  DTR is pulsed for 2 seconds on reset
  Last input 00:00:34, output never, output hang never
  Last clearing of "show interface" counters 00:00:23
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 12
  Queueing strategy: Class-based queueing
  Output queue: 0/1000/64/0 (size/max total/threshold/drops)
     Conversations  0/3/256 (active/max active/max total)
     Reserved Conversations 4/4 (allocated/max allocated)
     Available Bandwidth 3072 kilobits/sec
  30 second input rate 352000 bits/sec, 342 packets/sec
  30 second output rate 924000 bits/sec, 368 packets/sec
     7734 packets input, 962720 bytes, 0 no buffer
     Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     8523 packets output, 2907859 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 output buffer failures, 0 output buffers swapped out
     0 carrier transitions
GAMPLSRTR01#
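For reference: the thread never shows the actual policy, but a CBWFQ policy that produces output like the above ("Reserved Conversations 4/4" implies four classes holding reservations) is built and attached roughly like this. Every class name and value below is a hypothetical illustration, not the customer's configuration:

class-map match-any VOICE
 match ip dscp ef
class-map match-any CRITICAL-DATA
 match ip dscp af31
!
policy-map WAN-EDGE
 class VOICE
  priority 768
 class CRITICAL-DATA
  bandwidth 1024
 class class-default
  fair-queue
!
interface Multilink1
 service-policy output WAN-EDGE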

Hello Kevin,

You had default FIFO queueing before; now you have applied CBWFQ:

Queueing strategy: Class-based queueing

Output queue: 0/1000/64/0 (size/max total/threshold/drops)

Do you see an improvement? I see there are still some output drops.

Hope to help

Giuseppe

Giuseppe,

Unfortunately, we are still having errors whether we have our CoS policy in place or not.

I wonder if we may have to make some adjustments to how our policy is configured. This is an exorbitant number of errors.

The two physical serial interfaces have 0 errors on them.

Not sure what to try next...

thanks

K-melton

Hello Kevin,

I would consider configuring the two serial interfaces as parallel L3 links with different IP numbering and seeing how the router behaves.
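A rough sketch of that idea follows. The group number, addressing, and removal commands are assumptions (depending on IOS version, bundle membership is configured with "ppp multilink group" or the older "multilink-group"), and the provider side of each T1 would need the matching change:

interface Serial0/0/0:0
 no ppp multilink group 1
 no ppp multilink
 ip address 192.0.2.1 255.255.255.252
!
interface Serial0/1/0:0
 no ppp multilink group 1
 no ppp multilink
 ip address 192.0.2.5 255.255.255.252

With two equal-cost routes toward the provider, CEF then load-shares per destination across the links, and each serial interface can be watched for drops independently.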

Also check the stats of the MLPPP bundle with:

sh ppp multilink

and see if there are errors.

Hope to help

Giuseppe

I performed the "sh ppp multilink" command as you recommended.

Here is the output:

GAMPLSRTR01#sho ppp multilink
Multilink1, bundle name is group6726
  Endpoint discriminator is group6726
  Bundle up for 2w4d, total bandwidth 3072, load 106/255
  Receive buffer limit 24000 bytes, frag timeout 1000 ms
  0/0 fragments/bytes in reassembly list
  0 lost fragments, 3258146 reordered
  0/0 discarded fragments/bytes, 0 lost received
  0x693933 received sequence, 0xA7E88A sent sequence
  Member links: 2 active, 0 inactive (max not set, min not set)
    Se0/1/0:0, since 2w4d
    Se0/0/0:0, since 2w4d
No inactive multilink interfaces

As you can see, there are no errors.

This problem has been baffling us for some time now. The output errors/discards that we are seeing will run excessively for a while and then, like magic, just stop. Then they will start again.

We have tried adjusting our CoS policies to see if it makes any difference. We even attempted taking the policies off altogether, as you will remember from my original post to this forum.

I wonder at this point if it would be worth unbundling the interfaces that make up the multilink and then re-bundling them.

Any recommendations at this point will be welcome.

Thanks

There is nothing to be baffled about. At some point the offered traffic rate exceeds what the queue can accommodate (a burst), and drops occur. That is perfectly normal in WAN networking.

You can increase the queue size a little, but I would not go over 200 or 300 packets.

If you cannot tolerate packet drops, reduce load on the circuit, or buy a faster circuit.
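Note that once a service policy is attached, the per-class queue depth is what governs drops. As a sketch (the policy and class names are hypothetical, matching the earlier illustration rather than the actual configuration):

policy-map WAN-EDGE
 class class-default
  queue-limit 256

With plain FIFO instead, the "hold-queue <n> out" command shown earlier in the thread is the knob to adjust.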
