3 E1s in a PPP Multilink bundle do not use the full available bandwidth

Paolo Bevilacqua Fri, 07/06/2007 - 12:51

Has the router config changed? What is the config of the router on the side that is sending the file?

Paolo Bevilacqua Fri, 07/06/2007 - 14:05

Hi,

I do not see "ppp multilink fragment size 500" under any interface; where did you configure it?

Also, would you send "show controllers e1" to rule out any physical problem? It appears the VWIC modules have not been put in a clocking domain as required.
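If that does turn out to be the issue, on most ISR-class routers the fix is a global command along the lines of "network-clock-participate wic 0" (or "network-clock-participate slot 1", depending on where the VWICs sit), but let's see the controller output first.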

Paolo Bevilacqua Mon, 07/09/2007 - 05:54

Hi,

Your E1 lines are clean, there is no need for "network-clock-participate".

Now, what IOS are you using? Please include "show version".

"ppp multilink fragment size" is available on from 12.2(4)T onwards. With previous releases, you can only configure the "ppp multlink framgment delay" in order to indirectly set the fragments size.

Also please include "show interface", "show buffers".

Paolo Bevilacqua Tue, 07/10/2007 - 12:52

Hi,

No, I didn't see "show version" or "show interface" for both routers. My suspicion is that the fragment size is not set small enough.

Paolo Bevilacqua Tue, 07/10/2007 - 13:29

Hi,

the bundle members are balancing quite well and the input error ratio is reasonable.

The thing is that you cannot set the fragment size directly because you have upgraded to 12.4 mainline, not 12.4T, where the command is supported starting from 12.4(4)T.

If you do not want to upgrade again, try setting "ppp multilink fragment delay 20"
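For reference (rule of thumb only, so treat it as approximate): the resulting fragment size is roughly the member-link bandwidth times the configured delay, so on a 2048 kbps E1 a 20 ms delay corresponds to about 2,048,000 bits/s x 0.020 s / 8 = 5120 bytes per fragment.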

Paolo Bevilacqua Wed, 07/11/2007 - 12:39

I think the fragment size is still too large; that is, the router is not fragmenting at all.

To confirm that, do "clear counters", wait for a little traffic to pass, then divide output bytes by output packets to get an average packet size.
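Something along these lines (the interface name Multilink1 is just an example):

clear counters Multilink1

then let a file copy run for a minute or two, and:

show interfaces Multilink1 | include packets output

Average size = output bytes / output packets; anything close to 1500 bytes means the packets are leaving unfragmented.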

Then either reduce the "fragment delay" until you see an improvement, or upgrade to the latest 12.4T where fragmentation can be set directly.

Paolo Bevilacqua Wed, 07/11/2007 - 15:25

Hi again.

I'm thinking about this all over again.

The thing is that, according to the manuals and to people who have done real testing, there is no need for fragmentation to allow a single TCP session to use all the bandwidth in an MLPPP bundle. And this makes sense to me.

So, would you please try disabling fragmentation altogether:

ppp multilink fragment disable
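Afterwards, a quick "show ppp multilink" should confirm the bundle is still up with all three member links active.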

Then, could you try measuring the performance using iperf? It is a very simple piece of software to use:

http://dast.nlanr.net/Projects/Iperf/
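Roughly speaking (the addresses are placeholders), you run it like this:

iperf -s (on the machine at the far end)

iperf -c <far-end address> -t 30 (on the near-end machine)

If you can, also try a larger TCP window with -w and a few parallel streams with -P, to compare a single session against several.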

I'm very puzzled by all this and suspect it may be an issue with the host OS.

I saw that too. I don't like disagreeing with documentation, but I tried disabling fragmentation and it did not work. I have another multilink with T1 lines that does work, so I have been comparing all my results with it. When I copy files from my laptop to a server on the other end of the T1 multilink, not only do I get fragmentation, I get full bandwidth.

I have also tested each E1 line individually and I get full bandwidth. But as soon as I bundle any 2 or 3 of them, my bandwidth degrades. Sounds like a fragmentation problem to me.

Please comment.

Paolo Bevilacqua Thu, 07/12/2007 - 12:25

Interesting. Possibly something is interfering with TCP in this case. What else is there between the hosts doing the copy?

Can you do the test using iperf? You will need it on both sides of the link.

How did you see that fragmentation was active in the T1 case?

Just to answer your questions:

1: We plugged directly into the DMZ, so no firewall, nothing else, and our router was only sending less than 2 Mbps. We had a monitor on the multilink.

2: I used the same method to calculate packet size during a large file copy, and it varied between 106 bytes and 560 bytes, whereas the E1s stayed fairly consistent at just under 1400 bytes.

I ran another test:

------------------------------------------------------------

Client connecting to 10.13.1.5, TCP port 5001

TCP window size: 16.0 KByte (default)

------------------------------------------------------------

[ 3] local firewall port 34684 connected with India server port 5001

[ ID] Interval Transfer Bandwidth

[ 3] 0.0-10.7 sec 624 KBytes 479 Kbits/sec

traceroute to 10.13.1.5 (10.13.1.5), 30 hops max, 38 byte packets

1 firewall to Us Internal router Internal Serial Interface(10.7.1.2) 1.032 ms 0.574 ms 0.448 ms

2 Internal router External Interface to India(192.168.1.2) 254.841 ms 255.043 ms 258.965 ms

3 10.13.1.5 (10.13.1.5) 254.733 ms 258.659 ms 256.609 ms

It seems to me that the path from the internal router interface to the external router interface is where the slowdown occurs.

vinay_verma80 Tue, 07/10/2007 - 05:24

Hi,

Use 2 or 3 FTP servers (on different machines), try downloading to different machines at the same time, and measure the collective bandwidth.

You will get the result.

regards

Paolo Bevilacqua Tue, 07/10/2007 - 05:37

Hi,

The thing is that multilink PPP is supposed to make the full bandwidth available to a single session.

Otherwise you would just use regular load sharing, without the need for MLPPP.
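For reference, that is also why the bundle is configured as a single logical interface, with the member serials carrying no IP address of their own. A minimal sketch (interface numbers, group number and addressing are placeholders only, not your actual config):

interface Multilink1
 ip address 10.0.0.1 255.255.255.252
 ppp multilink
 ppp multilink group 1
!
interface Serial0/0/0:0
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1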

vinay_verma80 Sun, 07/15/2007 - 14:35

Hi,

The fragmentation command will not be of much help, as by default each packet will be fragmented into as many pieces as there are physical interfaces.

I suppose something is missing.

Queueing strategy: fifo

Output queue: 0/40 (size/max)

5 minute input rate 833000 bits/sec, 79 packets/sec <<<<<<<< (should be no more than about 6 Mbps across the three E1s)

5 minute output rate 37000 bits/sec, 44 packets/sec

131629 packets input, 170700329 bytes, 0 no buffer

Received 0 broadcasts, 0 runts, 0 giants, 0 throttles

0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort

78386 packets output, 10296326 bytes, 0 underruns

0 output errors, 0 collisions, 0 interface resets

0 output buffer failures, 0 output buffers swapped out

0 carrier transitions

Try:

1. Clear the counters.

2. Use different queueing, like WFQ, see what the result is, and then revert back to FIFO (rough sketch below).
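For example (assuming the bundle interface is Multilink1):

interface Multilink1
 fair-queue

and to go back to FIFO afterwards:

interface Multilink1
 no fair-queue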

Also paste the other-end router's statistics and config.

regards

vinay verma

Paolo Bevilacqua Sun, 07/15/2007 - 17:07

That is incorrect. Unless fragmentation is configured, MLPPP will not fragment, as the post above demonstrates.

Also, changing the queueing scheme at random doesn't have much chance of influencing MLPPP behavior.

vinay_verma80 Thu, 07/19/2007 - 09:26

Hi,

Thanks for your remarks.

But I still think that MLP by default fragments each packet into (packet size / number of links).

Quote from Cisco Press:

IP Telephony Self-Study: Cisco QoS Exam Certification Guide, Second Edition, by Wendell Odom, CCIE No. 1624, and Michael J. Cavanaugh, CCIE No. 4516

Chapter: Link Fragmentation and Interleaving (page 488)

MLP, by its very nature, fragments packets. MLP always fragments PPP frames to load balance traffic equitably and to avoid out-of-order packets. Notice that the 1500-byte packet was fragmented into three 500-byte fragments, one for each link. By default, MLP fragments each packet into equal-sized fragments, one for each link. Suppose, for instance, that two links were active; the fragments would have been 750 bytes long. If four were active, each fragment would have been 375 bytes long. And yes, even the 100-byte packet would be fragmented, with one fragment being sent over each link.

The other point you should consider about basic MLP, before looking at MLP LFI configuration, is that the multiple links appear as one link from a Layer 3 perspective. In the figures, R1 and R2 each have one IP address that applies to all three links. To configure these details, most of the interface subcommands normally entered on the physical interface are configured somewhere else, and then applied to each physical interface that will comprise part of the same MLP bundle.

Let me know if I have made a mistake in understanding the topic.

regards

Paolo Bevilacqua Thu, 07/19/2007 - 14:28

Vinay,

Thanks for sharing the source of your information. It may very well be correct. It appears logical that, by default, fragmentation is done by dividing the packet size by the number of links.

However, in this case nothing seems to help, and perhaps we are going around in circles because of some reason that nobody has yet been able to identify.

jwdoherty Wed, 07/18/2007 - 10:14

Possible client BDP issue? What type of clients are transferring the 2 GB file, and what is the latency between them?
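For what it is worth, a back-of-the-envelope check using the numbers already posted: with the default 16 KByte iperf window and the roughly 255 ms round-trip time shown in the traceroute, a single TCP session tops out around 16 KB x 8 / 0.255 s, i.e. about 510 kbps, no matter how much bandwidth the bundle offers, and that is very close to the 479 kbps iperf reported. Repeating the iperf test with a much larger window (-w) or several parallel streams (-P) would show whether the bottleneck is really the window rather than the multilink.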
