I have two routers: a Cisco 2811 with hardware VPN acceleration, and an 871 which I am not sure has hardware VPN. There is a high-speed cable connection at both ends. I set up a NAT from site A to site B to test FTP directly and got 1.7M throughput. I adjusted the TCP MSS and ran the same test through DMVPN (which uses IPsec and GRE) and got around 400K. Is that drop to be expected from the overhead?
1.7M -> 400k is quite a drop. What have you adjusted your MSS to?
Use a tool like iperf to measure throughput - it is more accurate.
Another thing to remember is that on the 871, even though you're using hardware encryption, there is a little more work involved with the GRE encaps/decaps.
I would try removing the encryption (no tunnel protection on the tunnel interface) and test it with just plain GRE.
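For reference, removing tunnel protection is just a matter of negating it under the tunnel interface. A minimal sketch, assuming the tunnel is Tunnel0 and the profile is named MYPROFILE (both are placeholders; match them to your running config):

```
interface Tunnel0
 ! disable IPsec, leaving plain GRE (mGRE for DMVPN) in place
 no tunnel protection ipsec profile MYPROFILE
```

Re-run the same iperf test afterwards; if throughput jumps back up, encryption is the bottleneck.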
I changed it to 1300 on the inside interface on both sides of the connection. Same result: no better than 400K through the tunnel versus 1.7M on a direct connection, verified with iperf. So... the issue is one of:
GRE alone
GRE/IPsec
GRE/IPsec with DMVPN
My next step is to bring up plain GRE between just two addresses and eliminate everything else. The customer just wants to know what he should expect with IPsec versus straight traffic with no encapsulation, and I can't give him a definitive answer on what the overhead costs. Dropping to a quarter of the direct speed seems like a lot to me too.
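As a rough sanity check on what raw encapsulation overhead alone should cost (byte counts are approximate and vary with cipher, padding, and options):

```
GRE header:              24 bytes  (20 outer IP + 4 GRE)
ESP tunnel mode, approx: 56 bytes  (AES/SHA; varies with cipher and padding)
Overhead on 1500-byte packets: (24 + 56) / 1500 ~= 5.3%
```

So the headers by themselves should cost a few percent, nowhere near 75%. A drop that large points at fragmentation or CPU limits rather than encapsulation overhead.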
My understanding is that the 871 does support hardware encryption. He has a 2811 on one end with hardware encryption. He wants to know if swapping in an 1800 or 2811 will improve anything. I am reluctant to tell him that anything will improve at this point.
I wouldn't commit to anything at the moment. The first thing to do would be to see if it is the encryption causing the issue. If removing tunnel protection improves the bandwidth, then the issue is most likely due to encryption.
Would be worthwhile checking if the hardware accelerator is enabled: show crypto engine brief
Okay, so if it is encryption causing the issue, I would use iperf to test with larger packets (around 1200 bytes) to see how performance holds up. 1200-byte packets won't be fragmented and won't trigger the classic pps problem, where small packets sent at a high rate cause latency through the box.
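For example, using iperf2 syntax (the 10.0.0.2 server address is a placeholder for the far-end host):

```
# on the far-end host
iperf -s

# on the near-end host: a 30-second TCP test,
# then a UDP test with 1200-byte datagrams at 2 Mbit/s
iperf -c 10.0.0.2 -t 30
iperf -c 10.0.0.2 -u -l 1200 -b 2M -t 30
```

If the 1200-byte UDP test runs close to line rate while small packets do not, you're hitting a packets-per-second limit rather than a bandwidth limit.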
Also, with any TCP-based applications you may be getting fragmentation, so I would recommend reducing the MSS on the internal interface of the VPN routers to 1300 and the tunnel IP MTU to 1420:
ip tcp adjust-mss 1300
ip mtu 1420
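For instance, on a typical spoke these two commands land on the LAN interface and the tunnel interface respectively (Vlan1 and Tunnel0 are assumptions; adjust to your config):

```
interface Vlan1
 ! clamp the MSS on the inside interface so TCP payloads fit the tunnel
 ip tcp adjust-mss 1300
interface Tunnel0
 ! leave room for the GRE + IPsec headers
 ip mtu 1420
```

Apply the same settings on both ends so traffic in either direction avoids fragmentation.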
Let me know if that helps any.
I think this is the real issue. I did a show proc on both routers during the transfer. The remote 871 never went beyond 43% busy, but the 2811 was 99% busy for the entire transfer. Something is very wrong with that picture. I did a show crypto engine brief, and it would appear the hardware encryption is up and running, but it looks to me like everything is being handled in the processor.
It is not VPN per se. The customer put up a second router on the same Internet connection with a plain old site-to-site VPN. The outcome was just as fast as GRE alone. So... the issue is either GRE/IPsec together, or it has something to do with DMVPN overhead.
Okay, last update:
It is not GRE per se.
It is not IPsec per se.
It is not GRE/IPsec per se.
The issue occurs only when you run DMVPN; that appears to be the cause. All other scenarios allow 1.4M throughput. Change the config to DMVPN and it is cut to roughly a quarter of that. I have no idea why.