IPSEC/mGRE overhead calculation

Unanswered Question
May 7th, 2008

I believed I had properly accounted for the IPSEC/mGRE overhead in my Tunnel interface settings (IP MTU and MSS), but I was experiencing high CPU utilization (IP Input) due to fragmentation and reassembly.

Below are the overhead calculations I used originally:

* mGRE - 28 bytes (24 for GRE plus additional 4 for DMVPN Key)

* IPSEC - 60 (SHA/AES)

* TCP Header - 20

* IP Header - 20

Total - 1372, which is the number I used as my MSS.
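The arithmetic above can be sketched as follows (this sketch assumes a 1500-byte physical interface MTU, which the post implies but does not state):

```python
# Original overhead calculation from the question.
PHYSICAL_MTU = 1500  # assumed Ethernet MTU, not stated in the post

overheads = {
    "mGRE": 28,        # 24 bytes GRE + 4 bytes for the DMVPN tunnel key
    "IPSEC": 60,       # SHA/AES
    "TCP header": 20,
    "IP header": 20,
}

total_overhead = sum(overheads.values())   # 128 bytes
remainder = PHYSICAL_MTU - total_overhead  # 1372

print(total_overhead, remainder)
```

As the question itself suspects, the open issue is whether 1372 should be used as the MSS or as the tunnel IP MTU.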

Following Best Practice recommendations, I even lowered my MSS number, arriving at the following original Tunnel config:

Tunnel xxxx

ip mtu 1400

ip tcp adjust-mss 1360

I was still experiencing fragmentation and reassembly until I changed to the following:

Tunnel xxxx

ip mtu 1372

ip tcp adjust-mss 1332

What was I missing in my original calculations or did I misunderstand how I would use the resulting number (MTU instead of MSS) from my calculations?



The way I work it out and configure my tunnels is as follows:

Cisco recommends a GRE MTU of 1400, that's cool. GRE tunnel encapsulation requires 24/28 bytes, as you have stated (I always go with 28; it includes some fudge). So the largest packet the GRE tunnel can carry is 1400 - 28 = 1372, not including the GRE encapsulation itself. Don't forget that the Maximum Segment Size is the largest amount of data that can be sent un-fragmented, so the headers have to come off as well: the IP header requires 20 bytes and the TCP header requires 20 bytes, for 40 bytes total.

Great - so now we have:-

28 Bytes - GRE

20 Bytes - IP

20 Bytes - TCP

Total of 68 bytes; 1400 - 68 = 1332. This is the MSS that clients and upstream devices should be setting their MSS to in the TCP handshake.

What would be helpful in some documentation is this: when you set the MTU of the GRE, subtract the overhead of the GRE encapsulation; then subtract the TCP and IP overhead, and what you are left with is what you should set the MSS to.
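That procedure can be sketched as a small calculation (the byte counts are the ones used in this thread; the 28-byte GRE figure includes the 4-byte DMVPN key):

```python
# Work down from the GRE tunnel MTU to the MSS, per the method above.
GRE_MTU = 1400      # Cisco-recommended tunnel MTU
GRE_OVERHEAD = 28   # 24 bytes GRE + 4-byte key (fudge included)
IP_HEADER = 20
TCP_HEADER = 20

# Largest packet the tunnel can carry without fragmenting.
max_tunnel_payload = GRE_MTU - GRE_OVERHEAD                 # 1372

# Then take off the TCP/IP headers to get the segment size.
mss = GRE_MTU - (GRE_OVERHEAD + IP_HEADER + TCP_HEADER)     # 1332

print(max_tunnel_payload, mss)
```

These match the working config in the question: `ip mtu 1372` with `ip tcp adjust-mss 1332`.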
