Throughput problems running GRE tunnels over IPsec hardware encryptors

Unanswered Question
Jul 7th, 2007

Hi,

I have set up two hardware encryption devices to encrypt traffic over a LES100 link. As they won't pass multicast, I have set up a GRE tunnel on the routers at either end of the encryptors so I can pass OSPF over the link. Everything came up fine and the routing is all working across the link, but on load testing the throughput was only around 5Mb/sec, whereas before the encryptors/GRE tunnel it was up around 60Mb/sec.

The traffic load on the physical interface was very low, whereas the logical tunnel interface was at a very high percentage. It was as if it only had a low bandwidth to play with, even though the physical interface was 100Mb. I'd much appreciate any suggestions, as this is a precursor to many more similar links and I need to get it up and working ASAP. I'm sure there's something that I'm missing but I can't find what it is. The transmit and receive bandwidth on the tunnel interface on one side shows as 9 (?). This seems to point to the problem, but I don't know why it does that, or whether it is the actual issue. Am I right in thinking the bandwidth command on the tunnel interface only affects the routing calculations? Either way, I tried it and it didn't seem to help. Many thanks.

Configs along these lines both sides:

int fa0/0
 ip address x.x.x.x x.x.x.x

int t0
 ip address x.x.x.x x.x.x.x
 ip mtu 1427
 ip ospf cost 3
 tunnel source fa0/0
 tunnel destination x.x.x.x

Tunnel interface included in OSPF.

The two routers on each side plug into the hardware encryptors, which run an IPsec tunnel between them. All seems OK on these.


Yes, the bandwidth cmd is for OSPF metrics and load calculation only. It doesn't affect the actual throughput. You probably need to change the "ip mtu" value on your tunnel intf. If you're using GRE (the default on int tunnel), set it to 1440 for DES/3DES (if the transform set is in transport mode) or 1420 (tunnel mode). For AES set it to 1424/1404 resp. If NAT is between the tunnel endpoints, decrease the value by 8. If mGRE (i.e. DMVPN) is used, decrease the value by 4. If you're using VTIs (tunnel mode ipsec ipv4), increase the value by 4 (no GRE header in this case) or don't do anything -- the system calculates it itself ;)
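As a sketch, for GRE with 3DES in tunnel mode the tunnel config would look something like this (addresses are placeholders, and your exact MTU depends on the encryptor's overhead):

int t0
 ip address 10.0.0.1 255.255.255.252
 ip mtu 1420
 tunnel source fa0/0
 tunnel destination 192.0.2.2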

Also, if you're using GRE specify "tunnel path-mtu-discovery" (not needed for VTIs).

If it doesn't help, ensure PMTUD is working through the tunnel. If not -- you may find workarounds on CCO ;)

HTH

Oleg Tipisov,

REDCENTER,

CCSI/CCIE

davidmonaghan Mon, 07/09/2007 - 01:11

Hi Oleg,

Many thanks for the reply. I have not been getting any errors on the interfaces with the MTU set at 1427. The throughput went down from around 70meg to 5meg with the tunnels/encryption, and the tunnel interfaces were at full load whereas the underlying physical interface was hardly under load at all. I can't help thinking there is something restricting the bandwidth by default?

1. Whatever MTU you set, you'll not get any errors.
2. Don't look at the interface load values -- they're calculated according to the "bandwidth" set, which defaults to 9Kbps on int tunnel :)
3. Bandwidth is not restricted on the tunnel intfs.
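For what it's worth, since the load percentage is derived from the configured bandwidth, you can set it to match the real path so the counters become meaningful (this is a cosmetic/routing-metric change only, assuming a 100Mb path; the value is in Kbps):

int t0
 bandwidth 100000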

The major cause of low throughput is IP fragmentation. Just follow my recommendations and see what happens. You can also set a smaller MTU (1300) on the client and server which are used to test the throughput of the VPN.

HTH

davidmonaghan Mon, 07/09/2007 - 02:31

Ok, I'll set up a test rig again and try that. I'll let you know how it goes. Thanks.

davidmonaghan Mon, 07/09/2007 - 03:46

Something I should have mentioned earlier but forgot:

When I did lab testing of the setup, I ran 'debug ip icmp' and used the extended ping command with the DF bit set to find the largest MTU size that didn't fragment. This is why I used 1427 (1430 was the largest size that was successful).
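For reference, the sweep described above can be done from exec mode on IOS versions that support the one-line ping options (destination address is a placeholder); otherwise the interactive extended ping dialog offers the same size and DF-bit settings:

ping 10.0.0.2 size 1430 df-bit
ping 10.0.0.2 size 1431 df-bit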

davidmonaghan Fri, 07/27/2007 - 13:28

Hi guys,

Thanks for your input on this. I've learnt a thing or two doing this project! I was getting some fragmentation and I lowered the MTU size as advised, but was still getting very poor throughput: only about 5Mb across 100Mb connections. The problem was that one side of the GRE tunnel was running on a 3550 switch, which can only process-switch GRE traffic and was at near-max CPU utilisation. I ran throughput tests using 2800 routers and got around 80Mb, which was much more acceptable. Dave
