MPLS Packet loss MTU >1937

Unanswered Question
Mar 9th, 2010

Hi folks



I'm hoping someone can help me with the following issue (at least I think it's an issue).


We recently installed a remote PoP, with a service provider Ethernet link (they're using something called Martini tunnels?) providing the WAN link.


This has not yet gone into production, however, as I am seeing packet loss across the WAN link between hosts in our remote PoP and our main center. This only occurs when the hosts at either end are inside an MPLS VRF. The larger the packet size, the greater the packet loss, starting from anything above 1937 bytes.


If I ping between hosts with the don't-fragment bit set, the maximum packet size is 1472. Without the don't-fragment bit set I can ping up to 1937 without packet loss. At 1938 I'm seeing >60% packet loss, which gets progressively worse as I continue increasing the packet size.
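For what it's worth, that 1472-byte ceiling with DF set is exactly what a standard 1500-byte IP MTU somewhere in the path would predict. A minimal sanity-check sketch (the 1500-byte MTU is an assumption; the actual circuit MTU hadn't been confirmed at this point):

```python
# Sanity check of the observed DF-bit ping ceiling, assuming a
# standard 1500-byte IP MTU somewhere in the path (an assumption --
# the provider circuit MTU was not confirmed at this point).

IP_HEADER = 20    # bytes, IPv4 header without options
ICMP_HEADER = 8   # bytes, ICMP echo header

def max_icmp_payload(ip_mtu: int) -> int:
    """Largest ICMP payload that fits in one IP packet without fragmentation."""
    return ip_mtu - IP_HEADER - ICMP_HEADER

print(max_icmp_payload(1500))  # 1472 -- matches the largest DF-set ping that works
```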


In contrast, if the hosts are not inside a VRF I can ping OVER 9000 bytes. Fragmentation appears to be occurring both inside and outside of the MPLS VRF, but the packet loss with large packets inside the MPLS VRF is a concern. I'm wary this could lead to unforeseen problems once the PoP is put into production.


I have checked the interface MTU size on all the devices in the path, but they are all set to jumbo frames with a minimum of 9000 bytes. All interfaces are GigE.


PoP 7204VXR VPNV4 + VRF MTU 9216 < > PoP Edge 2960G MTU 9000 < > {{ Service Provider WAN }} < > Main Edge 3560G MTU 9198 < > Main Core 6509e MTU 9216 VPNV4 < > Main Access 6509e MTU 9216 VPNV4 + VRF


The directly connected interfaces on the 7204 and 6509s are configured for tag switching.



Thanks in advance.

Reza Sharifi Tue, 03/09/2010 - 19:54

Hi,


Draft-Martini is a point-to-point Layer 2 circuit transported by service providers using MPLS with LDP signaling.

Since you increased the physical interface MTU, did you also increase the MPLS MTU to match the physical interface?
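To illustrate why MTU matters so much on a Martini circuit, here is a rough sketch of the core MTU a full-size customer frame needs over an EoMPLS pseudowire. The control word and dot1q tag are assumptions (they depend on the provider's encapsulation), so treat these as typical figures rather than confirmed values for this circuit:

```python
# Back-of-the-envelope core MTU needed to carry a full 1500-byte
# customer IP packet over a Martini (EoMPLS) pseudowire.  The exact
# figures depend on the provider's encapsulation; these are typical
# values, not confirmed for this particular circuit.

CUSTOMER_IP_MTU = 1500   # bytes of customer IP payload
ETH_HEADER = 14          # transported customer Ethernet header
DOT1Q = 4                # only present if the customer frame is tagged
CONTROL_WORD = 4         # optional in draft-martini encapsulation
MPLS_LABELS = 2 * 4      # tunnel label + VC (pseudowire) label

needed = CUSTOMER_IP_MTU + ETH_HEADER + DOT1Q + CONTROL_WORD + MPLS_LABELS
print(needed)  # a core path with a smaller MTU forces drops or fragmentation
```

With these assumptions the core needs roughly 1530 bytes, which is why a circuit limited to ~1526 bytes can be marginal for full-size tagged frames.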


HTH

Reza

tf2-conky Tue, 03/09/2010 - 20:25

Is this with the interface 'tag-switching ip' or the 'mpls mtu' command? I have already tried the mpls mtu command on the PoP 7204VXR, but it made no difference. 9216 seems to be the default on the 6509e's in our main center.


Currently


PoP 7204VXR to WAN


#sh mpls interfaces gigabitEthernet 1/1.1 detail
Interface GigabitEthernet1/1.1:
        IP labeling enabled (ldp)
        LSP Tunnel labeling not enabled
        BGP tagging not enabled
        Tagging operational
        Optimum Switching Vectors:
          IP to MPLS Feature Vector
          MPLS Feature Vector
        Fast Switching Vectors:
          IP to MPLS Fast Feature Switching Vector
          MPLS Feature Vector
        MTU = 9216



PoP 7204VXR to LAN


bdr1.chc#sh mpls interfaces gigabitEthernet 0/1.121 detail
Interface GigabitEthernet0/1.121:
        IP labeling not enabled
        LSP Tunnel labeling not enabled
        BGP tagging not enabled
        Tagging not operational
        Optimum Switching Vectors:
          IP to MPLS Feature Vector
          MPLS Feature Vector
        Fast Switching Vectors:
          IP to MPLS Fast Feature Switching Vector
          MPLS Feature Vector
        MTU = 9216


Main Core 6509 to WAN


#sh mpls interfaces gigabitEthernet 4/13 detail
Interface GigabitEthernet4/13:
        IP labeling enabled (ldp)
        LSP Tunnel labeling not enabled
        BGP tagging not enabled
        MPLS operational
        Optimum Switching Vectors:
          IP to MPLS Feature Vector
          MPLS Feature Vector
        Fast Switching Vectors:
          IP to MPLS Fast Feature Switching Vector
          MPLS Feature Vector
        MTU = 9216


Main Access 6509e to LAN


sh mpls interfaces gigabitEthernet 4/7 detail
Interface GigabitEthernet4/7:
        IP labeling enabled (ldp)
        LSP Tunnel labeling not enabled
        BGP tagging not enabled
        MPLS operational
        Optimum Switching Vectors:
          IP to MPLS Feature Vector
          MPLS Feature Vector
        Fast Switching Vectors:
          IP to MPLS Fast Feature Switching Vector
          MPLS Feature Vector
        MTU = 9216



I'm just waiting to confirm with our service provider, but I believe the max MTU on the WAN circuit is ~1526 bytes (media converter). Should I be adjusting the MPLS MTU accordingly?


Is this right in terms of calculating the overhead for ping testing?


18 bytes = 802.3
4 bytes = dot1q
20 bytes = IP
8 bytes = ICMP

4 bytes = MPLS


1526 - 54 bytes total overhead = 1472 bytes max unfragmented

Giuseppe Larosa Wed, 03/10/2010 - 00:16

Hello Conky,

The 8 bytes of ICMP header are carried within the IP packet, so they don't add to the IP MTU.


The overhead is:


18 bytes = 802.3: you don't need to count this, it's OSI Layer 2
4 bytes = dot1q: you don't need to count this either, also Layer 2


4 bytes for each MPLS label


You should also verify how many labels are needed (the MPLS label stack depth).


You can override the configuration of a single GE port on the C6500 side using:


int gi4/13

 mtu 1500

 mpls mtu 1508



Check whether the same commands are supported on the C7200 GE port.
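The arithmetic behind those suggested values can be sketched out as follows, assuming a 2-label stack (e.g. LDP transport label + VPN label) and assuming the ~1526-byte media-converter figure counts the full frame including FCS:

```python
# Sketch of the relationship behind the suggested "mtu 1500" /
# "mpls mtu 1508" values, assuming a 2-label MPLS stack and that the
# ~1526-byte circuit limit includes Ethernet header and FCS
# (both assumptions -- verify against the provider's actual circuit).

LABEL = 4        # bytes per MPLS label
ETH_HEADER = 14  # untagged Ethernet header on the wire
FCS = 4          # Ethernet frame check sequence

ip_mtu = 1500
labels = 2

mpls_mtu = ip_mtu + labels * LABEL        # room for the IP packet plus label stack
wire_frame = mpls_mtu + ETH_HEADER + FCS  # the frame as it appears on the circuit

print(mpls_mtu, wire_frame)  # 1508 1526
```

Under these assumptions, mpls mtu 1508 lands the largest labeled frame at exactly 1526 bytes on the wire, which is why it pairs with the media-converter limit mentioned earlier.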


Hope to help

Giuseppe

judebryant Fri, 03/12/2010 - 07:08

Hello all,


On the 2960 switch you can configure the interface to have a large MTU; however, there is a system MTU that has limits, so jumbo frames can still be dropped by this box.


I hope I followed the thread correctly enough to provide some useful information.


Regards

Jude Bryant

Pioneer Telephone

tf2-conky Mon, 04/05/2010 - 21:49

Never really resolved this. The ping tests I originally did were from a laptop running Ubuntu. I got completely different results when running the same tests from a laptop running Windows: basically no packet loss, no matter what the packet size, when not specifying the DF bit.


The main thing is I can ping across the WAN circuit (router to router) with packets larger than 1500 bytes with the DF bit set, inside a VRF.


This has been in production now for over a month, and there have been no customer complaints.  *shrugs*
