We have a couple of 5540s (running 8.0(4)) that we use only for remote-access (client-to-site) VPNs.
In the logs we regularly get entries similar to the following:
2009-02-16 11:59:39 UTC Local3.Info x.x.x.x Feb 16 2009 11:59:39: %ASA-6-602101: PMTU-D packet 1420 bytes greater than effective mtu 1362, dest_addr=a.a.a.a, src_addr=b.b.b.b, prot=TCP
2009-02-16 11:59:48 UTC Local3.Info x.x.x.x Feb 16 2009 11:59:48: %ASA-6-602101: PMTU-D packet 1500 bytes greater than effective mtu 1426, dest_addr=c.c.c.c, src_addr=d.d.d.d, prot=ICMP
When we capture ICMP traffic, we also see messages indicating that packets are dropped because the DF bit is set but fragmentation is required.
Currently we use the default fragmentation settings, but we are planning to configure the parameters below to fix the user problems:
mtu inside 1500 (default)
mtu outside 1380
sysopt connection tcpmss 1300
sysopt connection tcpmss minimum 0 (default)
crypto ipsec df-bit clear-df outside
crypto ipsec df-bit copy-df inside (default)
crypto ipsec fragmentation before-encryption outside (default)
crypto ipsec fragmentation before-encryption inside (default)
I would appreciate your feedback regarding these settings and any other recommendations!
Thanks in advance for your help!
I suggest you change the tcpmss value to 1200.
This leaves you 180 bytes of overhead: TCP/IP/encryption header information + 1200 bytes of data = MTU of 1380.
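The arithmetic above, as a minimal sketch. The 180-byte figure is a rough allowance for TCP/IP headers plus IPsec encryption overhead (the actual ESP overhead varies with cipher, mode, and NAT-T), not an exact number:

```python
# Back out a recommended MSS from the outside MTU and an assumed overhead
# budget. The 180-byte budget is the poster's rough allowance for TCP/IP
# headers plus IPsec encryption overhead, not an exact figure.
OVERHEAD_BUDGET = 180

def max_mss(outside_mtu, overhead=OVERHEAD_BUDGET):
    """Largest TCP MSS whose full segment still fits in the outside MTU."""
    return outside_mtu - overhead

print(max_mss(1380))  # 1380 - 180 = 1200
```

With these assumed numbers, an outside MTU of 1380 gives the suggested MSS of 1200.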
Thanks for your reply!
A couple more questions:
Would changing the maximum segment size to 1200 have any negative performance impact for the users who do not have a problem with fragmentation today?
Is an MTU setting of 1380 a good value for the outside interface?
Thanks again for your help!
OK, to answer your questions:
"Would changing the maximum segment size to 1200 not have any negative performance impact for the users that do not have a problem with fragmentation today. 99% of the time is the fact that a higher MSS is negotiated, with the DF bit set = the fragmentation issue most common. Some apps/services will receive the "fragmentation required" icmp message and IGNORE it. So the easiest way I have found is to have the clients negotiate a lower MSS = lower overall MTU. Then the clients can send what they like with the DF bit set, and ignore ICMP "frag required" messages all day long and it will all work!
"Is the MTU setting of 1380 a good value for the outside interface?" personally - no. If you are directly connecting to say an ATM device then yes, as this is the optimum/recommended MTU size. If you are given an Ethernet RUJ45 connection from your provider I would use an of MTU 1500....BUT take into consideration everything else you are trying to do.
If you have an MTU of 1500 and you run IPsec with IP and TCP:
IPsec header/encryption: 56 bytes
IP header: 20 bytes
TCP header: 20 bytes
That is 96 bytes so far, so the max data payload is 1404 bytes. A client NIC with an MTU of 1500 will take the TCP/IP headers off and try to negotiate an MSS of 1460 bytes = possible fragmentation issue.
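The breakdown above as a sketch. The 56-byte IPsec figure is the poster's estimate (real ESP overhead depends on the algorithm and mode):

```python
# Sum the per-packet overhead described above and compare the resulting
# payload ceiling with the MSS a 1500-MTU client will try to negotiate.
IPSEC = 56   # IPsec header/encryption (poster's estimate; varies in practice)
IP = 20      # IP header
TCP = 20     # TCP header

overhead = IPSEC + IP + TCP          # 96 bytes of total overhead
max_payload = 1500 - overhead        # 1404-byte payload ceiling
client_mss = 1500 - (IP + TCP)       # 1460: what a 1500-MTU NIC negotiates

print(overhead, max_payload, client_mss)
print(client_mss > max_payload)      # MSS exceeds the ceiling -> fragmentation risk
```

Because the negotiated 1460-byte MSS exceeds the 1404-byte ceiling, full-size segments must be fragmented (or dropped if DF is set).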
At the end of the day, in my personal opinion, you have to keep the network running at 100% and healthy; it's no good if the users can send/receive the maximum amount of data per packet if it breaks the network.
What kind of issue is your client having? I'm having an issue where the clients freeze up while they're using the application, even though the tunnel status is up; after a few seconds they're back to normal. This happens intermittently. Cisco suggested setting the DF bit to on and adjusting the tcpmss to 1280. My outside interface is still set to an MTU of 1500. I did all that and I'm still having the same problem. Is your issue similar to mine, and did any of these suggestions help you at all?
A limited number of users were having problems connecting to some web sites and using Outlook. We did not see any clients freezing.
We enabled clear-df on the outside interface and set the TCPMSS to 1260. We did not modify the MTU on either interface. This fixed our issue.
Thanks for your reply. Currently I have the TCPMSS set to 1280, and clear-df on the outside is also enabled. I will try lowering the TCPMSS to 1260; hopefully I'll have better results. I'm also planning on upgrading the ASA to the latest software image, 8.04(23). I currently have 8.0.3(6).
I have to jump in here - let me ask you, why are you upgrading from one ED image to another ED image? What extra functionality does 8.04(23) have that 8.0.3(6) does not?
Using an Early Deployment image could be the reason for your issue - upgrading might solve one issue but generate ten more.
I've had a TAC case open with Cisco for a long time and we've exhausted a tremendous amount of troubleshooting, packet captures, and so forth. Then it got escalated to a higher level, and it was decided to upgrade to the latest image before proceeding with any further troubleshooting steps.
I am currently facing a situation where we are setting up an ASA 5510 as a VPN 3000 Concentrator replacement. We can establish VPN connections to the unit, but connectivity to devices on the inside network does not work - except for Microsoft Exchange (which connects after a long period of time).
After troubleshooting, I have discovered that sending ICMP ping packets from the VPN client will work - provided the packet size is less than 109 bytes. It is the same situation if I try to ping from an inside host to the VPN-attached host. Pings up to 108 bytes will go through, but nothing larger.
MTU is set to 1500 on both the inside and outside interfaces. I set sysopt TCPMSS to the default (1380), but the packets involved are nowhere near this size, so it shouldn't be an MTU issue. I have also set the outside interface to clear the df-bit (per suggestions in this thread) and it has not made a difference.
What is the MTU on the client machines set to? By default, ANY machine the Cisco VPN client is installed on defaults to an MTU of 1300, and ALL subsequent MSS negotiations settle on an MSS of 1260.
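That 1300/1260 pairing is just the usual MSS = MTU - 40 relationship, sketched below (the 1300-byte client default is as described above):

```python
# MSS is the MTU minus the 20-byte IP header and 20-byte TCP header.
def mss_for_mtu(mtu):
    return mtu - 40

print(mss_for_mtu(1300))  # Cisco VPN client adapter default MTU -> MSS 1260
print(mss_for_mtu(1500))  # standard Ethernet MTU -> MSS 1460
```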
The client machines are PCs running WinXP Pro. The Cisco VPN client is running with default configs.
However, these same client PCs have VPN settings to connect to our VPN 3000 concentrators, and they work without any problems. The problem appears to be specific to the tunnel into the ASA.
The ASA can ping to the Outside router, as well as any Inside device, with packet sizes up to 1500, without problems.
When I try to telnet from the VPN client machine to a device on the inside, the connection appears to be established: 'netstat' on the client machine shows an established connection, and running debug ip packet on a router also shows the incoming connection. However, something (the MTU limitation?) prevents the login prompt from reaching the VPN client machine, and the connection eventually times out.
I suspect something in the ASA is the problem, but cannot identify what it may be.
Found the problem. Compression was enabled under the default group policy:
group-policy DfltGrpPolicy attributes
wins-server value 10.200.6.190 10.200.6.191
dns-server value 10.200.6.190 10.200.6.191
vpn-tunnel-protocol IPSec webvpn
ip-comp enable <----------
Changing this to disable made the difference. Disable is the default.
Hope this helps someone.