Jumbo frames and UDP multicast

Unanswered Question
Aug 21st, 2008

Does anyone know if there is an issue with UDP multicast on Jumbo frame blades?

We have a 6500 with a 48-port 10/100/1000 RJ45 EtherModule (WS-X6148A-GE-TX).

Is there a way to disable jumbo frames on the 6500, either globally or on a per-blade / per-port basis?

andrew.burns Fri, 08/22/2008 - 06:06


Jumbo frames can be problematic at the best of times - if you are using UDP multicast with jumbo frames then you'll need to be 100% sure that the sources, the destinations, and every possible path between them are also jumbo-enabled.

By default jumbo is disabled on all Ethernet blades (the default MTU is 1500), but it can be changed using the mtu command. It's done per interface - so to change an entire blade at once you can use the interface range command.
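As a sketch (the slot and port numbers here are just an example; 9216 is the jumbo MTU shown in the output below), enabling jumbo across a whole 48-port blade would look something like:

SWITCH#configure terminal
SWITCH(config)#interface range GigabitEthernet1/1 - 48
SWITCH(config-if-range)#mtu 9216
SWITCH(config-if-range)#end

The interface range command applies the same MTU change to every port in one step instead of configuring 48 interfaces individually.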

You can check using this command:

SWITCH#show interfaces | inc protocol|MTU
GigabitEthernet1/1 is up, line protocol is up (connected)
  MTU 9216 bytes, BW 1000000 Kbit, DLY 10 usec,
GigabitEthernet1/2 is up, line protocol is up (connected)
  MTU 9216 bytes, BW 1000000 Kbit, DLY 10 usec,
GigabitEthernet1/3 is up, line protocol is up (connected)
  MTU 9216 bytes, BW 1000000 Kbit, DLY 10 usec,



pipsadmin Fri, 08/22/2008 - 06:31

So where would you normally enable jumbo frames in a dual-core 6500 design?

andrew.burns Fri, 08/22/2008 - 08:20


Normally, if you're implementing jumbo frames you'll implement them everywhere (on every interface), because a partial rollout can cause drops or fragmentation. In every case where I've seen a partial implementation of jumbo I've seen issues - either traffic being black-holed, or switch performance being impacted by having to fragment. Even within a single VLAN it has to be *every* machine on that VLAN, because UDP has no equivalent of the TCP MSS exchange, which can hide MTU mismatches.
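One quick end-to-end sanity check (a sketch - the address and packet size here are just examples) is an IOS ping with the DF bit set, so oversized packets cannot be fragmented along the way:

SWITCH#ping 10.1.1.100 size 9000 df-bit

If every hop in the path is jumbo-enabled the pings succeed; if anything in the path is still at a 1500-byte MTU the packets are dropped rather than fragmented, which exposes the mismatch before your multicast traffic does.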

Some good (if dated) references are here:




Although they talk about jumbo frames on the Internet, the core messages are still relevant.

I'm not saying don't do it (it does work, and it does make a big difference, especially in HPC environments) - just don't underestimate the implementation effort required.



