11223 Views · 0 Helpful · 10 Replies

IP MTU and ip tcp adjust mss

Umesh Shetty
Level 1

Hi All,

I am using GRE over IPsec to connect two branch sites. If I have set a TCP MSS of 1360, do I also need to set an IP MTU of 1400 on the tunnel interface? I understand that once the MSS is set to 1360, effectively no TCP packet larger than 1400 bytes will be sent over this interface, so even an IP MTU of 1500 would not be a problem, because most end stations would reduce their segment size due to the ip tcp adjust-mss 1360 command. The only reason I feel the IP MTU of 1400 may still be needed is for UDP packets, which do not undergo the PMTUD process, will not honor the MSS of 1360, and may be sent at larger sizes. Can someone please confirm whether that is exactly why it's needed there?
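For reference, here is a minimal sketch of the configuration being discussed, assuming a point-to-point GRE tunnel with IPsec protection. Interface names, addresses, and the profile name are placeholders, not taken from the original post:

```
interface Tunnel0
 ip address 10.0.0.1 255.255.255.252
 ip mtu 1400
 ip tcp adjust-mss 1360
 tunnel source GigabitEthernet0/0
 tunnel destination 203.0.113.2
 tunnel protection ipsec profile VPN-PROFILE
```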

Thanks in Advance

Regards

Umesh Shetty


10 Replies

Joseph W. Doherty
Hall of Fame

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

Yes, with MSS adjust, the IP MTU shouldn't (generally) matter to TCP traffic, unless it's possible that a transit TCP session's startup didn't cross the MSS-adjusted interface (e.g. when the VPN tunnel is an alternate or backup path).

Also keep in mind that although we normally set the MSS adjust to MTU less 40 bytes, TCP options can increase the overhead, i.e. MSS adjust might not always preclude fragmentation, even for TCP sessions that started across the interface.
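The "MTU less 40" arithmetic works out as below. The caveat about options: if a sending stack does not subtract TCP option bytes from the data it puts in a segment, a full segment plus, say, 12 bytes of timestamp options would come to 1412 bytes and exceed the 1400-byte MTU:

```
IP MTU                      1400
- IP header                  -20
- TCP header (no options)    -20
= advertised MSS            1360
```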

Thanks Joseph,

When using GRE over IPsec I set the MTU to 1400, which allows a buffer of 100 octets for GRE + IPsec + any additional IP options etc. That seems to be the value Cisco recommends for this sort of implementation. Do you feel that it works in most cases? Or have you ever found an MTU of 1400 to still be too high?

Thanks in Advance

Regards

Umesh Shetty

Hi Umesh,

It can happen! I recall a WAN connection where we were using IPsec and the service provider also had a leg in their network running IPsec; this double encapsulation meant 1400 was too high. It essentially came down to a miscommunication of responsibilities.

In most cases 1400 should be enough.


Except in cases like Jamie mentions, which should be addressed by enabling PMTUD for the tunnel traffic, you can usually go even a little larger, such as 1420, but 1400 is a little "safer".

You can also save a few bytes for GRE/IPsec by changing the default mode from tunnel to transport (or it might be the converse). You can also save a few bytes by using, if possible, VTI tunnels, which avoid the GRE overhead. If you use either of these techniques, you can also increase your IP MTU to account for, and take advantage of, the savings.
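A rough sketch of the two options mentioned, assuming IOS syntax; transform-set and profile names are placeholders. Transport mode omits the extra outer IP header (about 20 bytes), and an sVTI carries IP directly over IPsec with no GRE header:

```
! Transport mode for the IPsec transform set
crypto ipsec transform-set TS esp-aes esp-sha-hmac
 mode transport
!
! sVTI instead of GRE over IPsec
interface Tunnel0
 tunnel mode ipsec ipv4
 tunnel protection ipsec profile VPN-PROFILE
```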

Thanks Jamie and Joseph,

Another question that comes to my mind from this discussion is the placement of a policy map for traffic shaping. Suppose I have a FastEthernet interface with a 6 Mbps circuit connected to it and IPsec configured on the tunnel interface: where would you suggest placing the policy map for traffic shaping? Should it be on the tunnel interface, in which case I will have to keep the additional GRE + IPsec headers in mind and keep the shaping rate below the CIR? Or is it better to shape the traffic on the FastEthernet physical interface, letting the GRE + IPsec headers be added first and then shaping it all to the exact CIR?

Thanks in Advance

Umesh Shetty


You could shape either on the tunnel or the physical interface. If you shape on the physical interface, be aware that, by default, the shaper will see all the tunnel traffic as just one flow.

When a vendor sets a logical bandwidth cap, that's usually at L2, but I believe most shapers don't account for L2 overhead. So if you want to be precise, shape a bit slower to allow for that.

A shaper on the physical interface should account for the encapsulation overhead; I'm not sure it does when placed on the tunnel interface.
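A minimal sketch of shaping slightly under the CIR, per the advice above. The policy name and rate are illustrative for the 6 Mbps circuit in question; the exact margin depends on your traffic's average packet size and L2 encapsulation:

```
policy-map SHAPE-WAN
 class class-default
  shape average 5800000   ! a bit under the 6 Mbps CIR to absorb encapsulation/L2 overhead
!
interface Tunnel0
 service-policy output SHAPE-WAN
```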

Thanks Joseph,

I understand the reason we have to shape slower: to account for the L2 overhead that is imposed after the shaping takes effect.

All my traffic is being marked before it reaches the router, so currently I have traffic shaping on the tunnel interface with a nested policy map for class-based queueing, which matches the DSCP values and queues accordingly.

I have read that whenever IPsec and GRE are used, the ToS field from the original IP header gets copied to the new header, and hence the physical interface can use the same marked fields to classify packets.

So I wanted to know whether traffic shaping and nested queueing would work the same way on the physical interface as they do on a tunnel interface. When you say "by default, the shaper will see all traffic as just one flow," how is that different from queueing on a tunnel interface?

The reason I am sceptical about shaping on a tunnel interface is that it is tough to guess the average packet size in the network, which is important when it comes to deciding the shaping rate, and I always need to choose a rate conservatively below the CIR to compensate.

The problem is that sometimes the tunnel interface reaches the shaping rate while the physical interface is well below the contracted bandwidth; these are the times when the packet size is larger than estimated.

If the shaping were done on the physical interface at almost the contracted rate (leaving aside some compensation for the L2 header), I would be sure of utilizing the full bandwidth most of the time, irrespective of the packet size.

Your help is needed to identify the pros and cons of doing it on the tunnel vs. the physical interface.

Thanks in Advance

Umesh Shetty


Yes, most current Cisco routers that host VPN tunnels do copy the original ToS to the encapsulated packet. So, if your QoS is only looking at ToS markings, you can apply QoS at the physical interface. However, say you wanted to use FQ within the same traffic class: again, the physical-interface QoS would see all the encapsulated traffic as just one flow. In that case, though, the qos pre-classify command allows the physical interface to "see" a copy of the original packet's header info.

Often you want to shape both for the local physical interface's egress bandwidth cap and for the tunnel's remote-side bandwidth cap. Using a tunnel shaper is, I believe, a little "cleaner" for shaping to a remote-side bandwidth cap.

Also, when doing QoS at the tunnel, you can often look deeper into the packet, e.g. NBAR classification, which isn't fully available even when using the qos pre-classify command. (QoS at the tunnel "sees" the entire pre-encapsulation packet.)

Whether to shape or implement VPN QoS at the tunnel and/or the physical interface is an "it depends" answer, along with QoS feature support on the device.
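A sketch of the tunnel-side approach described above: qos pre-classify on the tunnel, plus a hierarchical policy where a shaper to the remote-side cap wraps a child queueing policy. Class, policy names, and rates are illustrative, and the VOICE class-map is assumed to exist:

```
interface Tunnel0
 qos pre-classify          ! lets physical-interface QoS match the inner (pre-encap) headers
!
policy-map CHILD-QUEUING
 class VOICE
  priority percent 30
 class class-default
  fair-queue
policy-map TUNNEL-SHAPER
 class class-default
  shape average 2000000    ! remote-side bandwidth cap
  service-policy CHILD-QUEUING
!
interface Tunnel0
 service-policy output TUNNEL-SHAPER
```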

Thanks Joseph,

So if I have a hub site with a 6 Mbps circuit connecting to the MPLS cloud and three spoke sites with 2 Mbps each, on the hub router I configure a tunnel for each branch site and shape each tunnel to 2 Mbps. I hope this is what you mean when you say "Using a tunnel shaper, I believe, is a little 'cleaner' for shaping to a remote-side bandwidth cap."

If I only have 2 sites connected instead of a hub-and-spoke setup, and I do not use WFQ inside the individual classes or any other features like NBAR, I can shape on the physical interface.

I hope my understanding is right here.

Regards

Umesh Shetty



Correct on both counts.

Suppose you had a hub with a 100 Mbps hand-off but a 50 Mbps cap. Also suppose you had 55 branch sites, each with 10 Mbps. Ideally you would want to shape each tunnel to 10 Mbps, so you can manage any congestion toward individual sites, and you would also want to shape the physical interface at 50 Mbps, again so you can manage egress congestion. (Note that ingress to your hub is oversubscribed, which creates another problem, but assume most of the traffic volume is from hub to branch and the aggregate of the branches does not typically exceed 50 Mbps.)

Assuming your traffic contained VoIP, you would want to prioritize it under each tunnel's shaper but you would also want to prioritize it on the physical interface.
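The hub design above could be sketched as two hierarchical policies: a per-tunnel 10 Mbps shaper and a 50 Mbps physical-interface shaper, each with VoIP prioritized inside it. Names, rates, and the VOICE class-map are illustrative assumptions:

```
! Per-branch: cap each tunnel at 10 Mbps, prioritize VoIP within it
policy-map BRANCH-CHILD
 class VOICE
  priority percent 20
policy-map BRANCH-SHAPE
 class class-default
  shape average 10000000
  service-policy BRANCH-CHILD
!
! Hub egress: cap the physical interface at 50 Mbps, prioritize VoIP again
policy-map HUB-CHILD
 class VOICE
  priority percent 20
policy-map HUB-SHAPE
 class class-default
  shape average 50000000
  service-policy HUB-CHILD
!
interface Tunnel1
 service-policy output BRANCH-SHAPE
interface GigabitEthernet0/0
 service-policy output HUB-SHAPE
```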
