MTU vs tcp adjust-mss

tato386
Level 6

My server is having trouble exchanging mail with another SMTP server. After much troubleshooting I ran across an article that states this might be caused by a "black-hole" router and MTU/packet-size issues. Sounds weird, but I want to give it a try. My questions are:

1) Should I use "ip mtu xxx" or "ip tcp adjust-mss"? What are the differences between these two commands?

2) Should I apply this to the WAN or LAN of my Internet router?

Thanks,

Diego


Rick,

Thank you for the reply; I do appreciate it. I do have a question regarding Ivan's article. In the section "Network Implications," specifically "Listing 2" ("Clear the don't fragment bit for UDP traffic"): is that config example to be applied to the inside or the outside interface? Just as an FYI, Cisco says that when a router is performing VPN/IPsec encapsulation you want to place the adjust-mss command on the inside interface, since extra overhead is added by encryption when the packet hits the next (outside) interface. Any thoughts or opinions on this as well?

HTH,

Brandon

Brandon

Listing 2 in the article by Ivan is using Policy Based Routing to manipulate the DF bit. PBR is applied on the interface on which the packet arrives, so it would be applied on the inside interface.
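For reference, a PBR configuration along the lines described might look like the sketch below. This is not Ivan's exact Listing 2; the ACL number, route-map name, and interface name are placeholders.

```
! Sketch: clear the DF bit on UDP traffic arriving on the LAN side.
! ACL number, route-map name, and interface are placeholders.
access-list 101 permit udp any any
!
route-map CLEAR-DF permit 10
 match ip address 101
 set ip df 0
!
interface FastEthernet0/0
 description inside (LAN) interface -- PBR acts on inbound packets
 ip policy route-map CLEAR-DF
```

Because PBR evaluates packets as they arrive, the route-map goes on the interface closest to the hosts whose traffic you want to rewrite.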

I am not sure that I understand the logic of "when it hits the next interface being the outside extra overhead is added due to encryption". Certainly going through the outside interface will add extra header and extra overhead of processing. But I do not see how that would impact tcp adjust-mss. If there is some recommendation to place the adjust-mss on the inside I can be comfortable with that. But I do not see how there is any relationship between placement of the adjust-mss and the processing of encryption. With adjust-mss the router is going to look for TCP packets with the SYN bit and will inject a value into the MSS field. I do not see how the processing of encryption impacts that one way or the other.
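The rewrite Rick describes is simple enough to model. A minimal sketch of the clamping behavior (this is an illustration of the idea, not router code; the values are examples):

```python
def clamp_mss(advertised_mss: int, configured_mss: int) -> int:
    """Model of what 'ip tcp adjust-mss' does to a transiting TCP SYN:
    if the MSS option in the SYN exceeds the configured value, the
    router rewrites it downward; otherwise it is left alone."""
    return min(advertised_mss, configured_mss)

# A host on a 1500-byte LAN typically advertises MSS 1460; suppose the
# router is configured with 'ip tcp adjust-mss 1360':
print(clamp_mss(1460, 1360))  # 1360: rewritten in the SYN
print(clamp_mss(1300, 1360))  # 1300: already small enough, untouched
```

Nothing in this operation looks at, or depends on, the encryption that happens later on the outbound path, which is Rick's point.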

HTH

Rick


Rick,

Thanks for your reply. Your explanation here is a thousand times better than the one from the Cisco engineer, whom I could hardly understand to begin with. I agree that I don't see any relationship.

HTH,

Brandon

Rick,

Attached is a screen shot from a Wireshark capture. You will notice frame #719 as having 1460 bytes on the wire. I had previously set the MSS to 1420 on the inside interface before changing it to 1300, and before running a ping test of "ping -f -l 1415 x.x.x.x" from the local host that was having issues sending data. At a size of 1415 I got clean replies from the end system (the remote side of the VPN tunnel). I went down to 1300 just to be safe, per Cisco's recommendation, but I will probably change it to 1375 or something closer to your suggestion. With a datagram size of 1460 in the capture and the MSS at 1420 on the router interface, but clean replies at a size of 1415, it clearly looked like the issue was fragmentation, since the 1420 MSS is higher than the 1415-byte size in the ping test. Any thoughts?

HTH,

Brandon
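The sizes in Brandon's test can be reconciled with a little header arithmetic. This assumes standard 20-byte IP and TCP headers and an 8-byte ICMP echo header (Windows `ping -l` sets the data size, not the datagram size):

```python
IP_HDR, TCP_HDR, ICMP_HDR = 20, 20, 8

# 'ping -f -l 1415': 1415 data bytes + ICMP header + IP header
ping_datagram = 1415 + ICMP_HDR + IP_HDR
print(ping_datagram)  # 1443 -- largest datagram shown to pass unfragmented

# An MSS of 1420 lets hosts build TCP datagrams this large:
tcp_datagram = 1420 + TCP_HDR + IP_HDR
print(tcp_datagram)  # 1460 -- larger than 1443, so these need fragmenting

# The largest MSS that keeps TCP under the verified 1443-byte ceiling:
print(ping_datagram - TCP_HDR - IP_HDR)  # 1403
```

So the numbers are consistent with fragmentation being the culprit: the 1420 MSS produced 1460-byte datagrams on a path verified only up to 1443 bytes.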


Some additional thoughts . . .

The inside interface works fine for ip tcp adjust-mss placement, but keep in mind that all TCP traffic through it is affected. If the router is also doing local LAN routing, that traffic is adjusted too; it still works, but throughput for such traffic might be slightly reduced.

With an outside interface, be careful of what the outside interface is. If the outside interface is a tunnel interface, perfect, but if the outside interface is the tunnel's physical interface, you'll likely not obtain the results you desire since the MSS of interest is now encapsulated.

"ip tcp adjust-mss" is a great feature in that it avoids initial MTU fragmentation processing, at least for TCP, but it should not be relied upon alone. What you want to ensure is that packets larger than the MTU can still transit (either through correct functioning of PMTUD or through physical fragmentation). The reason is that ip tcp adjust-mss only works, I believe, during the initial TCP handshake, and the MTU along a path can change dynamically while a flow is active.

For instance, you have a branch with a dedicated WAN link and a backup VPN link. The main link fails, traffic now flows across the VPN link but with the reduced MTU which ip tcp adjust-mss won't change for existing flows (again assuming I'm correct it only functions during TCP handshake). Of course, you could also use ip tcp adjust-mss across the dedicated WAN link.

"ip mtu" can be used to help ensure an ICMP "fragmentation needed" message is sent back to the host. This helps get around black-hole routers or devices that might hide fragmentation.
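Put together, the two knobs discussed above might be combined on a tunnel interface like this. The tunnel number and sizes are illustrative only, not a recommendation for any particular network:

```
! Sketch: pair 'ip mtu' and 'ip tcp adjust-mss' on a GRE/IPsec tunnel.
interface Tunnel0
 ip mtu 1400              ! oversized DF packets trigger "frag needed" ICMP
 ip tcp adjust-mss 1360   ! clamp MSS in transiting TCP SYNs (1400 - 40)
```

The 40-byte gap between the two values accounts for the 20-byte IP and 20-byte TCP headers, so a full-sized segment just fits the configured IP MTU.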

Hi,

Kindly check the attached document. I've tried to collect all the points about MTU, TCP MSS, and PMTUD in a single document. I feel it is still incomplete, but I believe it will at least get you started.

BR,

Mohammed Mahmoud.

Mohammed, did you recently complete the CCIE?

Congratulations! I'll review your document as soon as time permits.

Hi Paolo,

Thank you very much. I completed my CCIE last November; that's why I wasn't active on the forum during most of 2007. I await your criticism and additions to my document.

BR,

Mohammed Mahmoud.

I do have GRE/IPSec tunnels but the problem is sending email to a particular domain not traffic between private hosts. The traffic in this case is not carried via GRE/IPSec at any time. It is delivered to the destination via a standard T1 circuit to our ISP.

There is ONE domain to which mail delivery constantly fails. Since we deliver mail to hundreds of other domains, I don't think it's our email server. Of course the other guys get email from hundreds of servers without problems, so the issue isn't on their end either. Since the problem seems to be "in the middle," I ran across the MTU angle while researching.

Right now I used a registry tweak to lower the MTU on my email server to 1350. If that doesn't work I will do the MSS thing on the LAN side of my router like you suggested.

Thanks,

Diego

Wow, sorry for the diversion...but I do suggest following up with the MSS stuff as discussed in this thread...a must for GRE/IPSec. Your problem smacks of DNS...like the DNS servers you use cannot find the MX entry of the destination domain...but I'm sure you've checked all the obvious stuff. We just went round and round with a similar issue and actually moved the damn Exchange server to another site...! Some fix.

jedavis
Level 4

I too have been struggling with this problem recently. I have 2 sites at which I manage the network. In the middle of this connection is a VPN that I don't control. I noticed that there was a lot of fragmentation going on, and that Windows hosts were not setting the DF bit thus allowing fragmentation.

After a bit of experimentation, here is what I found:

packet <= 1400 bytes, no fragmentation required.

packet >= 1401 bytes and <= 1476 bytes, packet is silently dropped

packet >= 1477 bytes, “ICMP frag needed but df bit set” returned.

As a work-around, I set tcp adjust-mss on the remote site router interface (facing the VPN, not the LAN) to 1360. However, I think the correct thing to do is fix the problem with PMTUD. I have been working with the person responsible for that equipment, though I don't have access to the configurations.

Given that I am receiving the unreachables only at packet sizes > 1476, I would infer that GRE is being used in conjunction with IPsec transport mode. But what does this behavior tell me? Is tunnel path-mtu-discovery not configured on the GRE tunnel?
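The observed thresholds support that inference with some back-of-envelope arithmetic. This is a hedged reading; the exact IPsec overhead depends on cipher, padding, and mode:

```python
# Interpreting the three behavior bands from the experiment above.
LINK_MTU = 1500
GRE_OVERHEAD = 24          # 20-byte outer IP + 4-byte GRE header

gre_payload_max = LINK_MTU - GRE_OVERHEAD
print(gre_payload_max)     # 1476: above this, GRE encapsulation itself
                           # must fragment, so "frag needed" comes back

# Packets of 1401-1476 bytes fit inside GRE, but the encapsulated packet
# then grows past 1500 once IPsec is added, and is dropped silently
# somewhere that sends no ICMP. If 1400 is the largest size that passes,
# the total GRE + IPsec overhead works out to roughly:
print(LINK_MTU - 1400)     # 100 bytes
```

The silent-drop band is the signature of the black-hole problem: the ICMP that PMTUD needs is generated (or forwarded) only for the pre-encapsulation check, not for the post-IPsec one.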

mbroberson1
Level 3

Use "ip tcp adjust-mss"; try something like 1375 first. If that works, slowly increase the size until it breaks, then back off by 20 or so to allow some cushion. Apply this closest to the source of the traffic; I would first try the inside LAN interface of the router.
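The probe-and-back-off procedure above can be sketched as a search. The `passes` callback is a stand-in for a real test such as a DF-set ping of the matching size; the simulated 1375-byte limit is only an example:

```python
def find_max_mss(passes, lo: int = 536, hi: int = 1460) -> int:
    """Binary-search the largest MSS that still works, modelling the
    'raise it until it breaks, then back off' procedure. `passes(mss)`
    returns True if traffic at that segment size gets through."""
    while lo < hi:
        mid = (lo + hi + 1) // 2   # bias upward so the loop terminates
        if passes(mid):
            lo = mid               # this size works; try larger
        else:
            hi = mid - 1           # too big; shrink the ceiling
    return lo

# Simulated path: segments up to 1375 bytes get through.
best = find_max_mss(lambda mss: mss <= 1375)
print(best)        # 1375
print(best - 20)   # 1355: the value after the suggested 20-byte cushion
```

Binary search just reaches the answer in fewer probes than the linear increase-until-it-breaks walk; the back-off cushion is applied the same way.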

Thanks for the reply, but I think you misunderstood the question. I already have adjust-mss working. I have it set on the outside interface, not the LAN interface; I agree with josephdoherty on that count. My point is that I view the adjust-mss measure as just a work-around for a misconfigured network. I believe the correct solution is to fix the network so that the individual components respond correctly to oversized packets.

Some other things that confuse me about PMTUD. The Cisco documentation that I have read seems to suggest that end hosts typically store path MTU information as a host route. However, I have yet to find anywhere that I can display this information on Windows hosts. I certainly can display the routing table, but I don't see host routes using "route print". Does anyone know how to display discovered path MTU values on Windows hosts?

It also seems to suggest that IOS stores the Path MTU not as a host route but as an interface-wide parameter, but that it can't be displayed unless you use "debug tunnel". Is this true or am I misunderstanding the documentation? It certainly seems possible that different hosts that can be reached through a tunnel interface could have different path MTUs.

One final bit of fog floating around my brain on this issue. While PMTUD is done only by TCP, fragmentation is done at the IP layer. Does the path MTU discovery done by TCP affect other protocols? I mean, if TCP has determined that the path MTU for a particular host is, say, 1400, would the IP stack allow a UDP packet to go out larger than this?

Jeff

I can understand your wanting to get the network correctly configured so that tcp adjust-mss is not needed. But there is a fundamental problem about that. Much of our traffic goes through networks that we do not control. And many of those networks do things that break PMTUD (especially networks that block the ICMP error message that is essential to the functioning of PMTUD). So you can get all the devices in your network correctly configured and working correctly, but you are still at the mercy of devices in other networks which will sometimes break PMTUD.

To answer the other part of your question - there is no connection between what the IP stack of a host does with PMTUD for TCP and what it does for UDP. It may have determined the size to be 1400 for TCP but it will happily send out UDP packets larger than this.

HTH

Rick
