ICMP unreachables

Unanswered Question
May 28th, 2008

We have a number of customers using a site-to-site VPN. These customers connect through this VPN to an internal server. After doing packet captures at both the VPN and server ends, I'm noticing the following:

When the server resets its PMTU estimates every ten minutes (in accordance with the PMTUD protocol definition), it sends some large packets to all destinations. The VPN gateway replies with an ICMP "Destination unreachable, fragmentation needed" message for each destination (since the PMTU can differ per destination). The first ICMP message is sent out immediately, but the next one (for another customer destination) only after nearly a second, the third after about two seconds, then four, eight seconds, and so on (0 ms, 1000 ms, 2000 ms, 4000 ms, 8000 ms, etc.).

I was thinking of using the ip icmp rate-limit unreachable command to reduce the interval to, say, 200 ms, but first I need to understand why the VPN responds in this manner. I would expect the VPN to send ICMP unreachables every 500 ms (the default setting). Does anyone know why?
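For reference, a minimal sketch of the change being considered; the 200 ms value is the one mentioned above, and the exact syntax should be checked against your IOS release:

```
! Global configuration mode. Lowers the minimum interval between
! ICMP unreachable messages from the IOS default of 500 ms to 200 ms.
configure terminal
 ip icmp rate-limit unreachable 200
end
```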

michael.leblanc Wed, 05/28/2008 - 11:39

I'm not currently dealing with this issue, but I have some comments.

Why not set the "ip mtu" to an acceptable value on the appropriate interface, and avoid the constant PMTU discovery?

How different are the PMTUs (i.e.: range)?

The multiple "Destination unreachable, fragmentation needed" messages convey that a PMTU issue exists with multiple destinations, but do not necessarily mean that the PMTU to each is different.

Also, I believe that ICMP messages may be generated at the interface level and at subsequent encapsulation levels (e.g., GRE, IPSec) for the same destination, depending on how badly the packet is constrained.

Cisco has an excellent document on PMTU discovery.

Are the "large packets" sent by the server staggered over time (1, 2, 4, 8 sec interval)?

If you have ICMP rate limited now, what settings are you using?

Is ICMP rate limiting configured on the far side, or some other intermediate link?

Getting back to the "ip mtu" for a moment.

We use "ip mtu 1400" on our tunnel interfaces for our IPSec + GRE tunnel(s). We never see PMTU issues. The "ip mtu 1400" accommodates all GRE and IPSec headers (transport or tunnel mode).
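As an illustration, a minimal tunnel configuration along those lines. The interface names and addresses here are placeholders, not taken from the original post:

```
! Sketch of a GRE tunnel interface with a conservative MTU.
! "ip mtu 1400" leaves room for the GRE header (24 bytes including the
! outer IP header) plus IPsec overhead within a 1500-byte physical MTU.
interface Tunnel0
 ip address 10.0.0.1 255.255.255.252
 ip mtu 1400
 tunnel source GigabitEthernet0/0
 tunnel destination 192.0.2.1
```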

Keep in mind that you have not shared any information about your MTU requirements or the interfaces/technologies in use. Unless there is a broad spectrum of MTU requirements, it seems desirable to define an "ip mtu" that would put an end to the ongoing PMTU adjustments.
