09-22-2009 10:12 AM
I have two routers: a Cisco 2811 with hardware VPN, and an 871 which I am not sure has hardware VPN. I have a high-speed cable connection at both ends. I set up a NAT from site A to site B to test FTP directly and got 1.7M throughput. I adjusted the TCP MSS and ran the same test through DMVPN, which uses IPsec and GRE, and got around 400K. Is that to be expected from the overhead?
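For reference, the raw header overhead of GRE over IPsec is small compared to the drop seen here. A rough per-packet budget, assuming ESP with AES and SHA-1 (exact byte counts vary with the transform set and transport vs. tunnel mode):

```
GRE + outer IP header:           ~24 bytes
ESP header, IV, padding, auth:   ~40-60 bytes
Total:                           ~65-85 bytes on a 1500-byte packet, roughly 5%
```

So header overhead alone would predict throughput in the neighborhood of 1.6M, not 400K; the missing bandwidth has to be coming from somewhere else (fragmentation, CPU, switching path).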
thx again
09-22-2009 10:59 AM
1.7M -> 400k is quite a drop. What have you adjusted your MSS to?
Use a tool like iperf to measure throughput - it is more accurate.
Another thing to remember is that with the 871, although you're using hardware encryption, there is a little more involved with encaps/decaps.
I would try removing the encryption (remove tunnel protection from the tunnel interface) and test it with just plain GRE.
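A minimal sketch of what that test could look like on the tunnel interface (the profile name here is a placeholder, not taken from your config):

```
interface Tunnel0
 no tunnel protection ipsec profile DMVPN-PROFILE
```

That leaves plain GRE running; re-apply `tunnel protection ipsec profile DMVPN-PROFILE` once the test is done.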
09-23-2009 03:48 AM
I will try plain GRE. I changed the MSS to 1450.
09-23-2009 05:28 AM
I would recommend changing the MSS to 1300. Make sure you set it (the MSS) on the internal interface, not the public interface.
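For example, on the 871 the inside interface is often a VLAN interface, so the change would look something like this (interface name assumed, adjust to your topology):

```
interface Vlan1
 ip tcp adjust-mss 1300
```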
09-23-2009 08:27 AM
Changed to 1300 on the inside interface on both sides of the connection. Same result: no better than 400K, versus 1.7M with a direct connection, verified with iperf. So... the issue is one of: GRE, GRE/IPsec, or GRE/IPsec with DMVPN.
My next step is to connect GRE for just two addresses and eliminate everything else. The customer just wants to know what he should expect with IPsec versus straight traffic with no encapsulation, and I can't give him a definitive answer on what the overhead costs. 1/4 seems like a lot to me too.
My understanding is that the 871 does support hardware encryption. He has a 2811 on one end with hardware encryption. He wants to know if changing out to an 1800 or 2811 will improve anything. I am reluctant to tell him that anything will improve at this point.
thx
09-23-2009 08:39 AM
I wouldn't commit to anything at the moment. The first thing to do would be to see if it is the encryption causing the issue. If removing tunnel protection improves the bandwidth, then the issue is most likely due to encryption.
Would be worthwhile checking if the hardware accelerator is enabled: show crypto engine brief
-aun.
PS. If you found this post helpful, please do rate it.
09-23-2009 10:31 AM
thx,
Straight GRE: 1.4M
09-23-2009 10:36 AM
Okay, so it is the encryption causing the issue. I would use iperf to test with larger packets (1200-byte packets) to see how the performance is. 1200-byte packets won't be fragmented and won't cause the classic pps problem of small packets sent at a high rate, which causes latency through the box.
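A UDP iperf run with fixed-size datagrams would look something like this (the server address is a placeholder; `-l` sets the datagram size, `-b` the offered rate):

```shell
# on a host behind the far router
iperf -s -u
# on a host behind the near router
iperf -c 192.168.2.10 -u -l 1200 -b 2M
```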
Also, with any TCP-based applications you may be getting fragmentation, so I would recommend reducing the MSS on the internal interface of the VPN routers to 1300 and the tunnel IP MTU to 1420:
int fa0/0
ip tcp adjust-mss 1300
int t0
ip mtu 1420
Let me know if that helps any.
09-23-2009 11:35 AM
I think this is the real issue. I did a show proc on both routers during the transfer. The remote 871 never went beyond 43% busy. However, the 2811 was at 99% busy during the entire transfer. Something is very wrong with that picture. I did a sh crypto engine brief and it would appear that the hardware encryption is up and running, but it looks to me like it is all running in the processor.
09-23-2009 11:46 AM
Which process is taking up all the CPU?
"show proc cpu | e 0.00"
09-23-2009 03:36 PM
It is not VPN per se. The customer put up a second router on the same Internet connection with plain old site-to-site VPN. The outcome was just as fast as GRE alone. So... the issue is GRE/IPsec, or it has something to do with DMVPN overhead.
09-23-2009 04:24 PM
Okay, last update:
it is not GRE per se
it is not IPsec per se
it is not GRE/IPsec per se
The issue occurs when you run DMVPN. This appears to cause the problem.
All other scenarios allow for 1.4M throughput. Change the config to DMVPN and it is cut to roughly a quarter of that. I have no idea why.
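Not a definitive answer, but when DMVPN alone pushes the 2811 to 99% CPU it is worth confirming that tunnel traffic is being CEF-switched rather than process-switched, and seeing exactly which process is eating the cycles. A few standard IOS show commands for that (tunnel interface name assumed):

```
show ip cef summary
show ip interface Tunnel0 | include switching
show processes cpu sorted | exclude 0.00
```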
Bill