
MPLS MTU vs MTU in an MPLS interface

ronmarcojr
Level 1

Hi Everyone,

I've been reading around in the forum trying to figure out what causes giants on an MPLS-enabled interface. We have a network running MPLS designed by Cisco, and the recommendation regarding fragmentation is to set the MTU to 1550 to accommodate the extra labels. As I read through, most of the articles say it should be the mpls mtu command instead. My questions are: 1) Does setting the MTU to 1550 have the same effect as using the mpls mtu command to set a higher value to accommodate the labels? 2) Could this be the reason we are seeing a lot of giants?

I'm posting the config of two neighboring routers and the corresponding show interface output. Hope you guys can enlighten me. Thanks in advance. (I'm deleting some output which I think is not relevant.)

CONFIGURATION

ROUTER 1

interface GigabitEthernet5/2

mtu 1550

wrr-queue bandwidth 70 30
wrr-queue cos-map 1 2 2
wrr-queue cos-map 2 1 3
wrr-queue cos-map 2 2 4
priority-queue cos-map 1 5 6 7
rcv-queue cos-map 1 3 4
mpls label protocol ldp
mpls traffic-eng tunnels
tag-switching ip
mls qos trust dscp
service-policy output Sup720PE-GE-OUT
ip rsvp bandwidth 500000 500000
ip rsvp signalling hello

ROUTER 2

interface GigabitEthernet5/2

mtu 1550
ip address 10.220.0.118 255.255.255.252

wrr-queue bandwidth 70 30
wrr-queue cos-map 1 2 2
wrr-queue cos-map 2 1 3
wrr-queue cos-map 2 2 4
priority-queue cos-map 1 5 6 7
rcv-queue cos-map 1 3 4
mpls label protocol ldp
mpls traffic-eng tunnels
tag-switching ip
mls qos trust dscp
service-policy output Sup720PE-GE-OUT
ip rsvp bandwidth 500000 500000
ip rsvp signalling hello

SHOW INTERFACE

ROUTER1

GigabitEthernet5/2 is up, line protocol is up (connected)
  Hardware is C6k 1000Mb 802.3, address is 0012.01fc.3900 (bia 0012.01fc.3900)
  Description: VAL-ipb-rtr-004:Gi5/2

  MTU 1550 bytes, BW 1000000 Kbit, DLY 10 usec,
     reliability 255/255, txload 2/255, rxload 1/255
  Encapsulation ARPA, loopback not set
 
  Last clearing of "show interface" counters 8w0d
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)

     1070879096 packets input, 982197818027 bytes, 0 no buffer
     Received 2508336 broadcasts (0 IP multicasts)
     0 runts, 501894533 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored

ROUTER2

GigabitEthernet5/2 is up, line protocol is up (connected)
  Hardware is C6k 1000Mb 802.3, address is 0012.01fc.4080 (bia 0012.01fc.4080)
  Description: ***TO VAL-IPB-RTR-003:gi5/2***

  MTU 1550 bytes, BW 1000000 Kbit, DLY 10 usec,
     reliability 255/255, txload 1/255, rxload 2/255
  Encapsulation ARPA, loopback not set

  Last clearing of "show interface" counters 8w0d
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)

     16907297459 packets input, 4550396274896 bytes, 0 no buffer
     Received 2659058 broadcasts (0 IP multicasts)
     0 runts, 334300770 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored

4 Replies

Giuseppe Larosa
Hall of Fame

Hello Ron,

The mtu command sets the generic L2 payload size to 1550, but it also increases the MTU of the L3 routed protocols.

mpls mtu is more specific: it sets how many bytes an MPLS PDU can carry.

In your case it can be wise to also set ip mtu to 1500.

By setting ip mtu to 1500 (the standard value) and mtu to 1550, you leave room for all the label overhead.

for example the following

mtu 1550

mpls mtu 1532

ip mtu 1500

can be a more complete definition
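As a sanity check on the numbers (my arithmetic, not from the original post): 1532 leaves room for eight 4-byte labels on top of a 1500-byte IP packet (1500 + 8 × 4 = 1532), while 1550 covers it at the frame level. You should be able to confirm what the LSR is actually using with show mpls interfaces; the exact output varies by IOS version, so treat this as an illustrative sketch:

Router1# show mpls interfaces GigabitEthernet5/2 detail
Interface GigabitEthernet5/2:
        IP labeling enabled (ldp)
        LSP Tunnel labeling enabled
        MTU = 1532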

Hope to help

Giuseppe

Hi Giuseppe,

That's a very concise clarification. Thanks.

Though just to follow up on the other question regarding the giants: we see this on a lot of P-PE connections, although there don't seem to be any input errors or drops. Some forum threads mention that it is normal in an MPLS implementation. The only time I know giants can occur (other than hardware/software error) is when there are additional bytes in the header, as is the case with trunks and MPLS labels, but a lot of the documents say that giants are discarded by the router. Do you think the MTU setting is causing this? Should we be concerned about this and try to eliminate it?

Thanks again,

Ron

Hello Ron,

thanks for your kind remarks.

I remember we had a similar issue in our first MPLS implementations. After some months of deployment there were very high giants counters but no real impact on user traffic. We explained this with the baby giants concept: the packets are counted as giants, but they are not discarded.

Our P and PE nodes were C7500 and C12000.

As you have written, the baby giant concept derives from the additional overhead on trunk links and/or from the MPLS overhead.
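To put rough numbers on this (my own worked example, not from the thread): a full-size 1500-byte IP packet carrying two MPLS labels produces an Ethernet frame larger than the classic 1518-byte maximum, so it is counted as a giant even though the 1550-byte interface MTU still forwards it:

1500 (IP) + 2 x 4 (labels) + 14 (Ethernet header) + 4 (FCS) = 1526 bytes > 1518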

As long as user traffic is not affected by these counters, you should be fine.

You can demonstrate it is not a real issue if you are able to send packets with an IP size of 1500 bytes within an MPLS VPN.

If an extended ping in a VRF with 1500-byte IP packets succeeds, your network is working well.

There can be more overhead-demanding scenarios, for example carrying MPLS VPN packets inside an EoMPLS pseudowire or inside an MPLS TE tunnel, so you may need to adapt the test to your specific environment.
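The VRF ping test can be run like this (the VRF name and destination address below are placeholders for your own environment; df-bit ensures the packet is not fragmented along the way):

Router1# ping vrf CUSTOMER-A 10.1.1.1 size 1500 df-bit
Type escape sequence to abort.
Sending 5, 1500-byte ICMP Echos to 10.1.1.1, timeout is 2 seconds:
!!!!!

If all five pings succeed with the DF bit set, full-size customer packets are traversing the labeled core without fragmentation.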

Hope to help

Giuseppe

Thanks again Giuseppe, I just needed more information to convince myself and my colleagues that we don't need to worry much about these giants.

Regards,

Ron
