The IOS of the router on the other end does not support this change. I am in the process of upgrading that IOS to the latest version.
Has the router config changed? What is the config of the router on the side that is transmitting the file?
I do not see "ppp multilink fragment size 500" under any interface; where did you configure it?
Also, would you send "show controllers e1" to rule out any physical problem? It appears the VWIC modules have not been put in the clocking domain as required.
Your E1 lines are clean, there is no need for "network-clock-participate".
Now, what IOS are you using? Please include "show version".
"ppp multilink fragment size" is available on from 12.2(4)T onwards. With previous releases, you can only configure the "ppp multlink framgment delay" in order to indirectly set the fragments size.
Also please include "show interface", "show buffers".
No, I haven't seen "show version" or "show interface" from either router yet. My suspicion is that the fragment size is not set small enough.
The bundle members are balancing quite well, and the input error ratio is reasonable.
The thing is that you cannot set the fragment size directly because you have upgraded to 12.4 mainline, not 12.4T; the command is supported starting from 12.4(4)T.
If you do not want to upgrade again, try setting "ppp multilink fragment delay 20".
I implemented it on both routers. It has not made a difference. Do you want to see any files during the copy? It has been running for 1.5 hours.
I think the fragment size is still too large, that is, the router is not fragmenting at all.
To confirm that, do "clear counters", wait for a little traffic to pass, then divide output bytes by output packets to get the average packet size.
Then either reduce the "fragment delay" until you see an improvement, or upgrade to the latest 12.4T, where the fragment size can be set directly.
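The check above is just a ratio of two "show interface" counters. A tiny sketch, using the output counters pasted later in this thread as sample figures:

```python
def avg_packet_size(output_bytes, output_packets):
    """Average output packet size from 'show interface' counters,
    best taken shortly after 'clear counters' so the ratio
    reflects only recent traffic."""
    return output_bytes / output_packets

# Sample counters: 10296326 bytes / 78386 packets
print(round(avg_packet_size(10_296_326, 78_386)))  # ~131 bytes
```

An average near the interface MTU (~1400-1500 bytes) during a bulk file copy suggests fragmentation is not happening; an average well below it suggests it is.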
I'm thinking about this all over again.
The thing is that according to the manuals, and to people who have done real testing, there is no need for fragmentation to allow a single TCP session to use all the bandwidth of an MLPPP bundle. And this makes sense to me.
So, would you please try disabling fragmentation entirely:
ppp multilink fragment disable
Then, could you try measuring the performance using iperf? It is very simple software to use:
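For reference, a basic run looks something like this (the address here is just a stand-in for your far-end server; exact flags may vary by iperf version):

```shell
# On the receiving host, start iperf in server mode:
iperf -s

# On the sending host, run a 30-second TCP test toward it:
iperf -c 10.13.1.5 -t 30
```

It will print the measured throughput at the end of the test, which you can compare against the expected bundle bandwidth.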
I'm very puzzled by all this and suspecting it may be an issue with the host OS.
I saw that too; I don't like disagreeing with the documentation, but I tried disabling fragmentation and it did not work. I have another multilink, with T1 lines, that does work, so I have been comparing all my results with it. When I copy files from my laptop to a server on the other end of the T1 multilink, not only do I get fragmentation, I get full bandwidth.
I have also tested each E1 line individually and I get full bandwidth. But as soon as I bundle any two or three, my bandwidth gets degraded. Sounds like a fragmentation problem to me.
Interesting. Possibly something is interfering with TCP in this case. What else is there between the hosts doing the copy?
Can you do the test using iperf? You will need it on both sides of the link.
How did you see that fragmentation was active in the T1 case?
Just to answer your questions:
1: We plugged directly into the DMZ, so no firewall, nothing else, and our router was only sending less than 2 Mbps. We had a monitor on the multilink.
2: I used the same method to calculate packet size during a large file copy, and it varied between 106 bytes and 560 bytes, whereas the E1s stayed fairly consistent just under 1400.
Ran another test:
Client connecting to 10.13.1.5, TCP port 5001
TCP window size: 16.0 KByte (default)
[ 3] local firewall port 34684 connected with India server port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.7 sec 624 KBytes 479 Kbits/sec
traceroute to 10.13.1.5 (10.13.1.5), 30 hops max, 38 byte packets
1 firewall to Us Internal router Internal Serial Interface(10.7.1.2) 1.032 ms 0.574 ms 0.448 ms
2 Internal router External Interface to India(192.168.1.2) 254.841 ms 255.043 ms 258.965 ms
3 10.13.1.5 (10.13.1.5) 254.733 ms 258.659 ms 256.609 ms
It seems to me that the path from the internal router interface to the external router interface is where the slow down occurs.
We are in the process of upgrading to 12.4T and will let you know the progress. Setting the fragment size on both routers is waiting on the remote site upgrade.
Use 2 or 3 FTP servers (on different machines), try downloading to different machines again, and find the collective bandwidth.
You will get the result.
The thing is that multilink PPP is supposed to make the full bandwidth available to a single session.
Otherwise you would be using regular load sharing, without the need for MLPPP.
The fragmentation command will not be much help, as by default each packet will be fragmented into the number of physical interfaces.
I suppose something is missing.
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 833000 bits/sec, 79 packets/sec <<<<<<<< (should be no more than 6 Mbps)
5 minute output rate 37000 bits/sec, 44 packets/sec
131629 packets input, 170700329 bytes, 0 no buffer
Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
78386 packets output, 10296326 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 output buffer failures, 0 output buffers swapped out
0 carrier transitions
1. Clear the counters.
2. Use different queueing, like WFQ; see what the result is, and then revert back to FIFO.
Also paste the other end router's statistics and config.
That is incorrect. Unless fragmentation is configured, MLPPP will not fragment, as the post above demonstrates.
Also, changing the queueing scheme at random doesn't have much chance of influencing MLPPP behavior.