MLPPP on Serial Line - latency issue

fribert000
Level 1

I just set up two serial 2 Mbit lines in an MLPPP bundle. It works.

Here are the two problems I am facing:

1. When only ONE link of the two links in the bundle is active, a ping shows round-trip times of 165 ms. When we add the second line to the bundle, after approx. 60 seconds, things stabilize and we see round-trip times of 210 ms.

This is not what I expected. I expected some degree of overhead when the second line came into the bundle, but not to this degree. Can this be tuned? Is this normal? Any hints on this?

2. When a line is added to or removed from the bundle, then for about 60 seconds only half of the ping packets make the round trip. After this, it stabilizes. Is 60 seconds a reasonable time for MLPPP to stabilize in such a setup?

Thanks

Johnny

11 Replies

pkhatri
Level 11

Hi Johnny,

A couple of questions:

- are both links the same bandwidth and type?

- can you post the configs you are using?

Thanks,

Paresh

Hi Paresh,

Yes, both ends have two 2 Mbit lines.

Config (same at both ends except the IP address):

interface Multilink1
 ip address x.x.x.x 255.255.255.252
 ppp multilink
 ppp multilink group 1
!
interface serial0/0/0
 bandwidth 2048
 no ip address
 encapsulation ppp
 no fair-queue
 ppp multilink
 ppp multilink group 1
!
interface serial0/0/1
 bandwidth 2048
 no ip address
 encapsulation ppp
 no fair-queue
 ppp multilink
 ppp multilink group 1

Thanks

Johnny

Hi Johnny,

I can answer the second part of your question first. For the other, we need a bit more info.

PPP uses ECHOREQ (Echo Request) and ECHOREP (Echo Reply) to maintain the integrity of the connection. If the router misses 5 consecutive ECHOREPs, it will bring the link down. The default interval for sending these is 10 seconds, so it can take up to 60 seconds for the link to go down. You can tune this by configuring a shorter keepalive at each end with the 'keepalive <seconds>' interface command.
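As an illustration, a minimal sketch of tightening the keepalive on both member links (the 2-second value here is an example, not from the thread, and should match at both ends):

```
interface serial0/0/0
 keepalive 2
!
interface serial0/0/1
 keepalive 2
```

With a 2-second keepalive, five missed replies would bring a dead link down in roughly 10 seconds rather than close to a minute.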

As for your first query, what you are seeing is indeed strange. In fact, with two equal-sized links you should, if anything, see better latency. Could I get you to try something: first bring up the bundle with just the first link and take latency measurements. Then bring up just the second link and measure again. I just want to see whether the links have different latencies.
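One way to run that per-link test during a maintenance window (the target address and repeat count are illustrative):

```
! Step 1: take serial0/0/1 out of service so only serial0/0/0 carries the bundle
interface serial0/0/1
 shutdown
!
! Step 2: from exec mode, measure round-trip times across the bundle:
!   ping x.x.x.x repeat 100
!
! Step 3: 'no shutdown' serial0/0/1, shut serial0/0/0, and repeat the ping
```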

Hope that helps - pls rate the post if it does.

Paresh

Hello,

besides the question about packet reordering, which Paresh is after, one other question: what is your hardware, and what is the CPU load with one/both links enabled? Do you use CEF? I once had a router where the CPU was at its limit, and I got similar results.
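Both points can be checked quickly from the CLI; a sketch (exact output varies by IOS version):

```
! CPU load over the last 60 seconds / 60 minutes / 72 hours
show processes cpu history
!
! Confirm whether CEF is enabled and running
show ip cef summary
```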

Hope this helps! please rate all posts.

Regards, Martin

Hi Paresh and Martin.

I cannot do the individual line tests right now, as the setup is already in production. I will do this as soon as I get a service window. One thing: one router is ip cef enabled, the other is not. Could this be the issue?

Also, I will adjust the keepalive.

Thanks

Johnny

Hi Johnny,

There is no real dependence of multilink on CEF, but in any case the ping packets are destined for the router itself, so they will be process-switched. So I can't imagine that that is the problem...

Paresh

Hello,

ok, I need to reformulate:

If the pings are sent from the router with the MLPPP links, CEF could reduce the CPU load from normal IP forwarding and thus reduce the delay of process-switched packets in the router.

If the ping packets are sourced somewhere else (have you tried this?), CEF could also speed things up.

In any case, I would suggest using CEF on both machines.
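Enabling CEF is a single global command; a sketch of doing it on each router and verifying:

```
configure terminal
 ip cef
end
!
! Verify CEF is now running
show ip cef summary
```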

Hope this helps! Please rate all posts.

Regards, Martin

Thanks

I will try to get a service window tonight (CET) and do the tests: enable CEF first, check the result, then do the individual line tests.

I will get back with the results.

Thanks

Johnny

Have a look at this:

http://www.cisco.com/en/US/products/sw/iosswrel/ps1839/products_feature_guide09186a00801e7ba7.html#wp1027188

Quote:

Multilink PPP requires the configuration of standard CEF. Distributed MLPPP (dMLPPP) requires the configuration of dCEF.

End-Quote.

Though this is from a piece on MPLS, it does not say MPLS Multilink PPP, but simply Multilink PPP.

Regards

Johnny

Result:

ip cef did not do it.

The cause was one of the lines.

Even though both lines were stable and no interface errors were reported, one line sat at approx. 165 ms round-trip and the other at approx. 235 ms. Both are supposedly 2 Mbit lines between Denmark and India, so the next step is to contact the provider for an explanation.

Thanks for your hints

Johnny

This is a common issue with links that have different latencies. The differential delay causes packet re-ordering, and while the MLP protocol does handle that, it increases latency.
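For anyone wanting to observe this on the router, the bundle's reassembly statistics are visible from exec mode (the exact counter names vary somewhat by IOS version):

```
! Look for the reordered / lost-fragment counters on the bundle
show ppp multilink
```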

Paresh
