Sandeep Singh

 

 

Introduction


New-generation networking equipment such as servers and Nexus switches supports jumbo MTU, allowing a large chunk of data to be sent in one frame. This improves overall network throughput if every device in the path supports the jumbo MTU size. However, if jumbo MTU is not supported by all switches in the path, the result may be packet fragmentation or, in some cases, packet drops. If the DF (Don't Fragment) bit within the IP header is not set by the server, the router fragments the packets and forwards them on. This impacts both the router's performance and the throughput the servers are able to achieve between each other. If the server operating system supports and is configured for Path MTU Discovery, it sets the DF bit in the IP packet header, causing the Nexus to return an ICMP Type 3 Code 4 message (Fragmentation Needed and Don't Fragment was Set). This results in the hosts reducing the maximum packet size they send.
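To see this behaviour in practice, you can send pings with the DF bit set and an oversized payload. This is only an illustration; the destination, payload sizes and exact option names depend on your host OS and NX-OS release:

From a Linux host: ping -M do -s 8972 <destination>   (8972-byte payload plus 28 bytes of IP/ICMP headers, roughly a 9000-byte IP packet)
From NX-OS:        ping <destination> packet-size 9000 df-bit   (check how your release counts headers in packet-size)

If a device in the path cannot carry the packet, you should get a fragmentation-needed error instead of a normal echo reply.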

 

 

 

Topology Description


There are two servers on two different subnets/VLANs, and they want to use jumbo frames to transfer data. The servers are connected to two Nexus 5000 switches, which are Layer 2 only, so the default gateways are on the Nexus 7000 switches.

 

 

 

Limitations


For Nexus 5K releases prior to NX-OS 5.0(2)N1(1), an MTU mismatch is a Type 1 inconsistency, and the vPC secondary peer will disable its vPC member ports. NX-OS 5.0(2)N1(1) added support for Type 2 vPC consistency checks, which reclassified a number of parameters from Type 1 to Type 2 inconsistencies, so downtime is less likely in this case.
The Nexus 5K does not show the configured 9216-byte MTU in the output of commands such as show interface. You can, however, verify that the policy has been applied with the show queuing interface command.
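If vPC is in use, you can also check how the MTU parameter is classified and whether it matches between the peers; the port-channel number below is only an example:

show vpc consistency-parameters global
show vpc consistency-parameters interface port-channel 10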

 

Note: For Layer 2, you cannot set the MTU per interface, only system wide. The MTU is set under a network-qos policy, which can only be attached to system qos.
For Layer 3 you can apply it to a single interface or a group of interfaces; instead of "system qos", use "interface ethx/y".

 

 

MTU Size Considerations

For Layer 3 interfaces, you can configure the MTU to be between 576 and 9216 bytes (even values are required). For Layer 2 interfaces, you can configure the MTU to be either the system default MTU (1500 bytes) or the system jumbo MTU size (which has the default size of 9216 bytes). You can change the system jumbo MTU size, but if you change that value, you should also update the Layer 2 interfaces that use that value so that they use the new system jumbo MTU value. If you do not update the MTU value for Layer 2 interfaces, those interfaces will use the system default MTU (1500 bytes).
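As a small illustration of that last point (the interface and the 9000-byte value are examples only), changing the system jumbo MTU on a Nexus 7K and updating a Layer 2 interface to keep using it might look like this:

system jumbomtu 9000
!
interface ethernet 1/5
  switchport
  mtu 9000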

 

 

Configuration


For jumbo MTU to work properly, it should be configured on all interfaces in the path, including the physical interfaces and the VLAN interfaces (on the Nexus 7K). If a port channel is configured, it should also have jumbo MTU enabled (see the port-channel sketch after the Nexus 7K config below). Make sure that all possible paths between any jumbo-enabled endpoints are jumbo enabled.

 

Nexus 5K config:

policy-map type network-qos JUMBO_MTU
  class type network-qos class-default
    mtu 9216

!

system qos
  service-policy type network-qos JUMBO_MTU

Nexus 7K config:

system jumbomtu 9216

!

interface ethernet x/y
  mtu 9216

!

interface vlan xxx
  mtu 9216
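If a port channel is in the path, it also needs jumbo MTU, as mentioned above. On the Nexus 7K this is a per-interface command (the port-channel number is only an example); on the Nexus 5K the MTU comes from the system-wide network-qos policy shown above, so no per-interface command is needed:

interface port-channel 10
  mtu 9216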

This document was created from the following discussion:
https://supportforums.cisco.com/discussion/12070561/jumbo-support-nexus-n7k-c7009

 

 

Verify

 

Use the command "show interface ethernet" or "show interface ethernet port/slot" to check if the size is set for jumbo mtu.

 

 

Related Information


vPC Best Practices for Nexus 7000 and 5000
vPC Failover Scenarios and Troubleshooting Checklist

 

Comments

There is an annoying behaviour (a reported bug, actually) on the N5K: when you run "show interface", the MTU shown will always be 1500, even though you set the default to 9K using the MQC.

stuart.pannell

Just a quick note on this: I was testing between two N5Ks and had applied the above config, but was only getting an MTU of 1472 with DF set. I could see on the interface counters that I was sending jumbo packets, 30 for each ping test (= 6 packets per ping). I had to add the MTU command under the VLAN interface to increase the MTU. I added the command mtu 9216 as shown:

interface Vlan130
  no shutdown
  mtu 9216
  ip address 10.136.116.7/28

Now I get a maximum packet size of 9164 (9172 with overhead) bytes before fragmentation occurs.

My switches are connected back-to-back over 10 Gb fibre SFP+ LR. The interface counters now show 5 packets per ping test (= 1 packet per ping), as expected. :-)

 

 

 

evelandi

Could you please post the bug ID so that we can check it?
