
4x E1 MLPPP max throughput?

doraemonheng
Level 1

Dear All,

We are experiencing packet loss when pinging from PE1 to CE2 once the traffic goes above 7.2 Mbps.

CE1 - PE1 - PE2 - CE2

The point-to-point link between PE2 and CE2 is a 4x E1 multilink bundle (bandwidth 8192 kbit/s).

Note: there is no packet loss between PE1 and PE2.

We do see the Total output drops counter keep increasing even when the traffic is low. Is this normal behavior, or could it be causing the packet loss when PE1 pings CE2?

Also, what is the maximum throughput the 4x E1 multilink can handle without any packet loss?

PE2#show interfaces multilink 10

Multilink10 is up, line protocol is up

Hardware is multilink group interface

Internet address is 10.19.60.9/30

MTU 1500 bytes, BW 8192 Kbit, DLY 100000 usec,

reliability 255/255, txload 180/255, rxload 107/255

Encapsulation PPP, LCP Open, multilink Open

Open: IPCP, loopback not set

Keepalive set (10 sec)

DTR is pulsed for 2 seconds on reset

Last input 01:17:03, output never, output hang never

Last clearing of "show interface" counters 05:24:17

Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 31000

Queueing strategy: fifo

Output queue: 31/300 (size/max)

30 second input rate 3440000 bits/sec, 1226 packets/sec

30 second output rate 5802000 bits/sec, 1212 packets/sec

22649490 packets input, 598130070 bytes, 0 no buffer

Received 0 broadcasts, 0 runts, 0 giants, 0 throttles

0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort

22368689 packets output, 2467487460 bytes, 0 underruns

0 output errors, 0 collisions, 0 interface resets

0 output buffer failures, 0 output buffers swapped out

0 carrier transitions

Thanks & Regards,

10 Replies

Giuseppe Larosa
Hall of Fame

Hello Doraemon,

the multilink bundle is using FIFO queueing and the queue size is 300:

Output queue: 31/300 (size/max)

You can see you had 31000 output drops out of a total of 22368689 packets sent.

Be aware that multilink PPP has its own overheads so you cannot expect to achieve 8192 kbps of throughput.

E1 also has its own overhead: one 64 kbps DS0 channel (timeslot 0) is used for E1 framing.

You should get close to 4 * 1984 kbps = 7936 kbps, and from this the multilink PPP overhead has to be subtracted.
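As a rough back-of-the-envelope check (the 6 bytes of PPP/multilink header per packet and the 1500-byte packet size below are illustrative assumptions, and HDLC flags/FCS are ignored):

usable payload per E1 = 2048 kbps - 64 kbps (timeslot 0 framing) = 1984 kbps
bundle payload = 4 * 1984 kbps = 7936 kbps
per-packet efficiency with 1500-byte packets = 1500 / (1500 + 6), about 99.6%
realistic IP ceiling = 7936 kbps * 0.996, about 7.9 Mbps

With smaller packets the fixed per-packet overhead weighs proportionally more, so the achievable figure drops further.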

Hope to help

Giuseppe

Hello Giuseppe,

thank you for your input.

We assumed that the overhead for MLPPP is 4 bytes, so the maximum throughput we can get should be around 7.8 Mbps?

But what we are experiencing is that the packet loss occurs once the MLPPP throughput goes above 7.2 Mbps.

The interface configuration is below for reference:

interface Multilink10

ip vrf forwarding mpls

ip address 10.19.60.9 255.255.255.252

max-reserved-bandwidth 100

load-interval 30

mpls netflow egress

no cdp enable

ppp multilink

ppp multilink group 10

no clns route-cache

hold-queue 300 out

Thanks.

Joseph W. Doherty
Hall of Fame

Your packet loss appears to be from queue drops.

"We do see the Total output drops keep increasing even the traffic is low. Is it a normal behavior or does this caused the packet loss when PE1 ping to CE2? "

It might be "normal". There's a good chance you might be seeing the impact of FIFO global synchronized tail drop. See http://en.wikipedia.org/wiki/TCP_global_synchronization for a bit more information. (There's probably some info on this issue on the Cisco site too.)

"Also, what is the maximum throughput can the 4x E1 multilink handle without any packet loss? "

Should be the combined available bandwidth of the 4 links, less overhead. Difficult to impossible to achieve unless you're working with CBR traffic or meet other conditions.

With TCP traffic, certain WAN optimization and packet-shaping products can manipulate the advertised TCP receive window, which can regulate transmission rates. Or, TCP traffic that responds to ECN should be able to utilize the full bandwidth without drops. Otherwise, very careful drop management will maximize "goodput" (i.e. highest utilization with minimum drops).

For non-TCP traffic, situations vary, but often you cannot regulate bandwidth demand.

In your situation, you might try decreasing the size of your output queue (what's your BDP [bandwidth delay product]?). You might try WRED and/or FQ.
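Purely to illustrate those knobs on your bundle (the numbers are placeholders, not tuned recommendations, and interface-level WFQ and WRED are generally an either/or choice):

interface Multilink10
! shrink the FIFO depth back toward your BDP, e.g. the IOS default of 40
hold-queue 40 out
! enable flow-based fair queueing on the interface,
! or use "random-detect" here instead to get WRED with a FIFO queue
fair-queue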

Hi Joseph,

The formula of the BDP as below:

BDP (bytes) = total_available_bandwidth (KBytes/sec) x round_trip_time (ms)

After calculating, it should be around 28000 bytes.
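Plugging in this bundle's numbers (the round-trip time below is just what the 28000-byte result implies, not a measured value):

bandwidth = 8192 kbit/s, roughly 1024 KBytes/sec
round-trip time, roughly 27 ms (implied)
BDP = 1024 KBytes/sec * 27 ms, roughly 27650 bytes, i.e. around 28000 bytes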

Thus, any idea what a good figure for the "hold-queue" would be in this case?

Thanks & Regards,

Since hold-queue counts packets, you would also need to know average packet size. Assuming "typical" 1500 byte packets, hold-queue 19 would satisfy BDP for 28 KB.
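Spelling that out (both the 28 KB BDP and the 1500-byte packet size are approximations from above):

28000 bytes / 1500 bytes per packet = about 18.7, rounded up to 19 packets

On the bundle interface that would be something along the lines of:

interface Multilink10
hold-queue 19 out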

Hi Joseph,

Yes, about half of the packets are 15xx bytes.

From some reading up on the hold-queue, the default hold-queue out is 40; any reason to change it to 19? (I am not sure why the current config is set to 300) : )

Additionally, what is the impact on the network of setting a higher or lower hold-queue? Any good article on this?

Thanks & Regards,

Hello Doraemon,

Joseph was also suggesting either moving to a software scheduler like CBWFQ, which can help, or applying WRED directly on the interface while keeping FIFO queueing.

I would use CBWFQ, which allows you to invoke WRED in class class-default.

This would minimize the FIFO queue; packets would instead be handled by the software queues of CBWFQ.
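A minimal sketch of that approach (the policy-map name is made up, the fair-queue and WRED parameters are left at their defaults, and exact options vary by IOS release):

policy-map MLPPP-OUT
 class class-default
  fair-queue
  random-detect

interface Multilink10
 service-policy output MLPPP-OUT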

To be sure that the packet losses are not also caused by other factors, I would check with

sh ppp multilink

which can also give you stats about errors in multilink PPP (if any).

Hope to help

Giuseppe

Hi Giuseppe,

The packet loss occurs above 7.2 Mbps. I am thinking that is still not too high for the 8 Mbps multilink. Do we really need to apply a software scheduler?

Btw, the PE2 end is using a c7206VXR. There is no info from CE2 as the customer is not going to release it. Is it possible that high CPU utilization on the CE2 router could cause the issue?

As for the ppp multilink command as below:

PE2>show ppp multilink

Multilink10, bundle name is TO-CE2

Endpoint discriminator is TO-CE2

Bundle up for 1w3d, 161/255 load

Receive buffer limit 48000 bytes, frag timeout 1000 ms

5/757 fragments/bytes in reassembly list

38 lost fragments, 151015730 reordered

10/44982 discarded fragments/bytes, 0 lost received

0x60579E received sequence, 0xCCA822 sent sequence

Member links: 4 active, 0 inactive (max not set, min not set)

Se4/1, since 1w3d

Se4/3, since 1w3d

Se4/2, since 1w3d

Se4/0, since 1w3d

No inactive multilink interfaces

Thanks & Regards,

"Some read up on the hold-queue, the default hold-queue out is 40, any reason to change it to 19? (am not sure why current config is set to 300) : ) "

Setting hold-queue to 19 would be optimal for your BDP, but that assumes all packets are 1500 bytes. The default of 40 is reasonable too; 300 seems excessive.
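To put the 300 figure in perspective (assuming full 1500-byte packets; smaller packets would shorten this): 300 packets * 1500 bytes * 8 bits is about 3.6 Mbits, which at roughly 7936 kbps of usable bandwidth works out to around 450 ms of queuing delay if the queue ever fills, far more than the round-trip time implied by your 28 KB BDP.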

"Additional, what is the impact to the network to put higher hold-queue or lower hold-queue? Any good article on this? "

Proper queue depth can be critical for optimal TCP performance. I can't provide a reference to just one article, but in brief, most TCPs do "slow start" and increase the send window toward the receiver's receive window until there are packet drops. If the queue is too small, packets are dropped before the TCP send window reaches the BDP. If the queue is too large, packets will queue as the send window exceeds the BDP. Besides causing queuing latency, when tail drop happens the TCP send window is so large that many packets can be dropped, which often forces TCP back to slow start instead of congestion avoidance.

Usually TCP bandwidth "probing" is seen as a "sawtooth", but with large queues the min/max values can become very large, so much so that average throughput decreases.

Many believe no drops is the ideal, but for many TCP versions the ideal is just enough drops to get TCP to average as close to the maximum bandwidth as possible.

Hi Joseph,

Thank you for the help. : )

Will definitely give it a try once we get the maintenance approval.

Thanks & Regards,
