
DMVPN - MTU size and ICMP Packet fragmentation

w.fuchs
Level 1

Hello,

My customer has a problem with windows logon.

Windows estimates the bandwidth of the link by sending ICMP packets of different sizes.

The ICMP packets are already fragmented by the server (with the DF bit NOT set) if they are larger than 1480 bytes.

If these fragmented packets (with no DF bit set) are then sent across the DMVPN tunnel to the client, the router fragments the already-fragmented packets again, because the maximum MTU on the tunnel is 1436 bytes.

This does not work properly.

We could fix this by setting the maximum MTU to 1436 bytes on all servers, clients, and routers along the path, but that would mean additional work and a change to the existing production systems, which the customer would like to avoid if possible.

Is there any other option to fix this by configuring the router?

My questions are, for example:

- Is there a configuration parameter on the DMVPN routers that can be set so that ICMP packets already fragmented to 1480 bytes are properly re-fragmented to 1436 bytes and reassembled again at the remote router?

- Is there any parameter to configure something like path MTU discovery (as we have for TCP) for the ICMP packets sent by the server?

thanks in advance,

Walter

4 Replies

andrew.prince
Level 10

Walter,

Option 1

Configure an Active Directory GPO to disable the PMTUD and PMTUBH settings on clients and servers.

Option 2

Change the TCP MSS - the router will rewrite the MSS value it sees in any transient TCP SYN or SYN-ACK packet that passes through the device. If you set the MSS to, say, 1420, you will have no issues.
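A minimal sketch of option 2, assuming the mGRE tunnel interface is Tunnel0 (the interface name and MSS value here are examples, not taken from the thread):

```
interface Tunnel0
 ! Rewrite the MSS in transiting TCP SYN / SYN-ACK packets so that
 ! full TCP segments fit inside the 1436-byte tunnel MTU
 ip tcp adjust-mss 1420
```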

Option 3

Configure the DMVPN server to clear the DF bit on any IPSEC traffic on the inside interface.
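If this approach is taken, IOS can clear the DF bit before IPsec encapsulation with the crypto ipsec df-bit command; a sketch (shown here in global configuration, where it applies to all IPsec traffic on the router):

```
! Clear the DF bit on packets before they are IPsec-encapsulated,
! allowing the router to fragment oversized packets
crypto ipsec df-bit clear
```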

Option 4

If you are using GRE over IPsec - enable tunnel path MTU discovery on the GRE tunnel (this basically copies the DF bit into the tunnel encapsulation), then configure option 2.
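A sketch combining options 4 and 2 on the GRE tunnel (interface name and MSS value are illustrative):

```
interface Tunnel0
 ! Copy the inner packet's DF bit into the GRE encapsulation
 ! so PMTUD can work end to end across the tunnel
 tunnel path-mtu-discovery
 ! Option 2: clamp the TCP MSS for transiting flows
 ip tcp adjust-mss 1420
```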

HTH

jdive
Cisco Employee

- PMTUD relies on receiving ICMP messages back, and you cannot send an ICMP error in reply to an ICMP packet --> this cannot be done. Some stacks took the TCP PMTUD approach/logic for UDP as well, but that does not help in your case and is not even relevant here :)

- The key issue here is to find out what is going wrong in the re-fragmentation of the fragments sent by the Windows stack. I am quite surprised that the bandwidth measurement does not detect that its own MTU is not 1500 and therefore sends fragmented ICMP messages - this is quite odd.

- One workaround for this, as a step in troubleshooting the DMVPN fragmentation issue, would be to use virtual reassembly on the router, which would then reassemble and re-fragment the packets. This can have a performance impact and would only be a workaround.
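Virtual fragment reassembly is enabled per interface; a sketch, assuming the tunnel interface is Tunnel0:

```
interface Tunnel0
 ! Reassemble incoming fragments before further processing;
 ! note the potential performance and memory impact
 ip virtual-reassembly
```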

- Getting the troubleshooting done on the fragmentation path is the way to go, I believe.

In my experience, the only reliable way to handle these issues is with ip tcp adjust-mss 1300 (or similar) and ip mtu 1400 on the tunnel interfaces...

There is nothing wrong with the Windows stack; Windows will often send traffic marked with the DF bit set. Also, the built-in Windows firewall will prevent the ICMP "too big" messages from coming back, even if the path from the router is otherwise clear.

So I would suggest the following strategy:

on the tunnel interfaces:

ip tcp adjust-mss 1300

ip mtu 1400

and if you use UDP traffic (only DNS is really big here for Windows),

use the route-map clear-DF-bit option.
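Putting the above together, a sketch of the suggested tunnel configuration plus a route-map that clears the DF bit on traffic entering from the LAN (the interface names are examples, not from the thread):

```
interface Tunnel0
 ip mtu 1400
 ip tcp adjust-mss 1300
!
! Clear the DF bit on inbound LAN traffic so the router may
! fragment large UDP packets (e.g. DNS) for the tunnel
route-map CLEAR-DF permit 10
 set ip df 0
!
interface GigabitEthernet0/0
 description LAN-facing interface
 ip policy route-map CLEAR-DF
```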

-Joe

Thanks a lot for all your responses.

I just got a call from the customer. He did some more testing over the last few days, and it simply turned out to be an IOS bug.

Thanks,

Walter
