I have a newly configured multilink interface, a bundle of 4 serial E1 Internet links (effective bandwidth ~8 Mb). A VPN tunnel is configured to the other site. Servers at one site are trying to access the other site, and a 40 MB file transfer takes over an hour. The throughput is very low. Do I need to check this with the service provider? Kindly help. Attached is the config.
What are your utilization and your ping times?
If you have high utilization, you can use QoS to make your data transfer faster.
I would normally guess it was an issue with latency, but even with 1000 ms ping times you would still get better than 40 MB in an hour.
I would try another transfer program. You could also try the usual workaround for high latency: break the file into two parts and transfer them at the same time.
Latency is excellent, as it is the same service provider. I tried different protocols such as FTP, SSH, and Oracle; all give the same result. Is this normal? What are my options?
FTP does an OK job most of the time, and if the delay is the same with other software then it isn't the software.
This gets very hard when your utilization is good and you have low latency. In your case, if your ping times were 250 ms you should be able to transfer 40 MB in just over 5 minutes using default settings.
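A rough sketch of where that "just over 5 minutes" figure comes from, assuming a common 32 KB default TCP window (the default is an assumption; it varies by OS):

```python
# TCP throughput estimate when the receive window is the bottleneck:
# with one window in flight per round trip, throughput caps at window / RTT.
window_bytes = 32 * 1024        # assumed default TCP receive window
rtt_s = 0.250                   # 250 ms ping time from the thread

throughput_Bps = window_bytes / rtt_s          # ~128 KB/s ceiling
file_bytes = 40 * 1024 * 1024                  # the 40 MB file
transfer_s = file_bytes / throughput_Bps

print(f"window-limited throughput ~{throughput_Bps/1024:.0f} KB/s")
print(f"40 MB transfer ~{transfer_s/60:.1f} minutes")   # just over 5 minutes
```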
The only other common cause of slow transfers is packet retransmission due to packet loss. You will need to capture the traffic and see if you are seeing retransmissions. Since packet loss would show up in pings across your circuits, or as increasing error counts on the interfaces, I would first check your end devices and make sure they are not running half duplex. You may also have an MTU issue, but that is mostly related to something like a firewall suppressing the ICMP message indicating that you need to fragment the packets.
If you still cannot find an issue, I would load software like iperf and test. It is free and can use various packet sizes and protocols.
Thanks for the reply. I did some troubleshooting, and the inference is below: as the size of the file increases on the multilink, the throughput increases, which should not be the case with FTP. A reply would be greatly appreciated.
Please refer to the previous post by Tim, in which he highlighted several important points that you need to look into to troubleshoot this issue. It clearly points out the areas you need to check to find the root cause.
Let us know the troubleshooting steps and observations you have made on this issue, so we understand what is happening.
As stated earlier by Tim, the throughput you are getting is very low, and you need to troubleshoot to isolate the issue.
This 8 Mb Internet link is on the same service provider end to end, so the latency is around 260 ms, which is better than a dedicated link.
The trace is clean: there are no packet drops or CRC errors. I worked along with the service provider's NOC.
Tests that I did:
1. FTP transfer between the FTP server at the site and an FTP server published on the Internet, traversing the new 8 Mb Internet link. The VPN tunnel is not in the picture here. Downloaded item: a 1 GB file.
Throughput: 250 Kbps
2. FTP transfer between the FTP server at the site and an FTP server at a different location, via the VPN tunnel. Downloaded item: 80 MB. It took half an hour at 55 Kbps.
Do I need to do traffic shaping or change the default window size?
Any help would be greatly appreciated.
Some of the "reduced bandwidth" is the latency of the link: TCP-based protocols have to wait for an ACK before proceeding to the next packet or group of packets.
You should be able to increase the throughput somewhat by adjusting your window size upward, but once the "window-size" group is sent, the sender still has to wait some length of time (at least 250 ms, your latency figure) for an ACK before sending the next group.
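To put a number on how much window that would take: the window needed to keep the link full is the bandwidth-delay product. A quick sketch using the thread's figures (8 Mbps link, 260 ms latency):

```python
# Bandwidth-delay product: the in-flight data (window) needed to fill the pipe.
link_bps = 8_000_000        # ~8 Mb multilink bundle
rtt_s = 0.260               # latency figure from this thread

bdp_bytes = (link_bps / 8) * rtt_s
print(f"window needed to fill the link: ~{bdp_bytes/1024:.0f} KB")
```

That comes out around 254 KB, well above common default windows, which is why one session cannot fill the pipe and parallel transfers add up to more total throughput.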
A semi-easy way to verify this would be to initiate several transmissions, one per host, to the same resource. Add the various throughputs together and see if you're still getting the same total throughput.
There are also FTP clients available that break the file into several chunks, then open a session for each chunk and send them all in parallel.
Another source of reduced throughput would be the firewalls at either or both ends. If you were using something like a 2620 running IOS Firewall, that would be the most likely suspect. Check the throughput spec on your firewalls (and IDS, if you have them).
Try some other test software, like QCheck from Ixia (free after registration). QCheck operates from RAM and eliminates most other client-side bottlenecks (like slow drives).
First, be sure your numbers are correct.
80 MB transferred in half an hour is about 350 kbps, or about 44 KB/s.
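The arithmetic behind that sanity check, spelled out (using decimal megabytes):

```python
# Sanity-check the reported numbers: 80 MB transferred in half an hour.
file_bytes = 80_000_000     # 80 MB (decimal)
seconds = 30 * 60

bytes_per_s = file_bytes / seconds
kbps = bytes_per_s * 8 / 1000
print(f"~{bytes_per_s/1000:.0f} KB/s, ~{kbps:.0f} kbps")
```

That is roughly 44 KB/s or ~356 kbps, not the 55 Kbps quoted earlier, so it is worth confirming how each figure was measured.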
Ignoring that for now: if both of your numbers are calculated the same way, with the same window size and the same FTP program, the only difference is your VPN tunnel.
I am assuming this is an IPsec tunnel. Ensure that the device doing the IPsec encryption can handle the load; most devices require hardware accelerators to accomplish this. Since you are running 12.2, I assume this is not one of the newer ISR routers, which all have small hardware accelerators built in. Even with hardware acceleration you may still have issues with packet fragmentation. Depending on how the VPN is configured, it may fragment and reassemble packets, which greatly increases the load on a router. This type of issue can be rectified by artificially setting the MTU on your multilink slightly lower than the MTU on your serial lines. You can also configure IPsec not to fragment packets.
Check the CPU utilization on both routers. If the utilization is not above 90%, you are OK with the encryption.
It is always best to maximize the window size, but most TCP stacks default to 32 KB, and setting it to 64 KB helps. At 80 MB in 30 minutes, if window size were your limitation it would only be about a 10 KB window, and I don't think it is that small. With 260 ms latency, a 64 KB window would in theory give you about 2000 kbps.
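Both figures in that paragraph can be checked with the same window-limit relation used above, run in both directions (a sketch, using the thread's 260 ms figure):

```python
rtt_s = 0.260   # latency from the thread

# Observed ~44 KB/s implies an effective window of throughput * RTT.
observed_Bps = 44_444                       # 80 MB in 30 minutes
implied_window = observed_Bps * rtt_s
print(f"implied window ~{implied_window/1024:.0f} KB")   # roughly 11 KB

# Ceiling with a 64 KB window at the same RTT:
window_bytes = 64 * 1024
max_kbps = (window_bytes / rtt_s) * 8 / 1000
print(f"64 KB window ceiling ~{max_kbps:.0f} kbps")      # about 2000 kbps
```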
Traffic shaping will only help if your utilization is high. It is used to slow traffic down, not speed it up. You could slow other traffic down so your FTP gets more bandwidth, but if your lines are not 100% utilized it will have no effect.
Let us know what you find and maybe someone will see something.
I agree with the observation about the VPN's involvement in the problem. The MTU on the multilink is 1500 by default. From the configuration posted, you will notice that "no ip unreachables" is enabled on the multilink interface.
For proper MTU negotiation, the router must be allowed to send the "packet too big" message, which is an ICMP unreachable message.
The other issue is the packet fragmentation problem as described. Here is how to confirm it: enable "ip unreachables" on the multilink interface, then ping from the VPN devices with the DF (don't fragment) bit set at various sizes, from ~1100 bytes up to the maximum packet size, and note where you start getting unreachable messages. The end stations being used for testing will try to send the largest packets they can, so fragmentation is very likely once you add on the IPsec header (max header ~80 bytes).
You also need the TCP1323 options enabled for TCP window scaling (the default in Windows is too low and window scaling is off; Linux/Unix varies by flavor), so double the TCP window size from 64 KB to 128 KB to allow scaling. The end stations need to negotiate the MTU automatically to minimize fragmentation; confirm this using ping with the DF bit set, then set the MTU lower on one station (~1300 bytes) for testing and re-confirm throughput.
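A quick sketch of the overhead arithmetic behind that DF-bit ping sweep. The ~80 byte IPsec header is the figure quoted above; actual overhead varies with cipher and mode:

```python
# Estimate the largest packet that crosses the tunnel unfragmented,
# and the matching ping payload size for a DF-bit sweep.
physical_mtu = 1500
ipsec_overhead = 80          # approximate tunnel header, per the post above

tunnel_mtu = physical_mtu - ipsec_overhead       # MTU to set on the tunnel path
# A ping payload excludes the 20-byte IP and 8-byte ICMP headers:
max_ping_payload = tunnel_mtu - 20 - 8

print(f"set the multilink/tunnel MTU to ~{tunnel_mtu}")
print(f"largest DF-bit ping payload that should pass: {max_ping_payload}")
```

That puts the sweep's expected cutoff around a 1420-byte packet (a 1392-byte ping payload); pings above that with DF set should draw the unreachable message once "ip unreachables" is re-enabled.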
This provides some information
Issue resolved. I guess the service provider must have applied some sort of CAR or something at their end. But I still have a persisting issue: I am not able to ping the gateway.