%MLSCEF-SP-4-FIB_EXCEPTION_THRESHOLD: Hardware CEF entry usage is at 95% capacity for MPLS protocol

Unanswered Question
Apr 5th, 2010

Hello:


I have one Cat6k switch running 12.2(18)SXF13, but I see the following error messages in the log.


Mar 28 21:18:57.155 MEDT: %MLSCEF-SP-4-FIB_EXCEPTION_THRESHOLD: Hardware CEF entry usage is at 95% capacity for MPLS protocol.
Mar 28 22:18:59.260 MEDT: %MLSCEF-SP-4-FIB_EXCEPTION_THRESHOLD: Hardware CEF entry usage is at 95% capacity for MPLS protocol.
Mar 28 23:19:03.572 MEDT: %MLSCEF-SP-4-FIB_EXCEPTION_THRESHOLD: Hardware CEF entry usage is at 95% capacity for MPLS protocol.
Mar 29 00:19:07.028 MEDT: %MLSCEF-SP-4-FIB_EXCEPTION_THRESHOLD: Hardware CEF entry usage is at 95% capacity for MPLS protocol.
Mar 29 01:19:15.348 MEDT: %MLSCEF-SP-4-FIB_EXCEPTION_THRESHOLD: Hardware CEF entry usage is at 95% capacity for MPLS protocol.
Mar 29 02:19:15.892 MEDT: %MLSCEF-SP-4-FIB_EXCEPTION_THRESHOLD: Hardware CEF entry usage is at 95% capacity for MPLS protocol.
Mar 29 03:19:19.844 MEDT: %MLSCEF-SP-4-FIB_EXCEPTION_THRESHOLD: Hardware CEF entry usage is at 95% capacity for MPLS protocol.


------------------ show module ------------------


Mod Ports Card Type                              Model              Serial No.
--- ----- -------------------------------------- ------------------ -----------
  1    6  Firewall Module                        WS-SVC-FWM-1       SAD1206018E
  2    1  Application Control Engine Module      ACE20-MOD-K9       SAD120705B7
  3   48  CEF720 48 port 10/100/1000mb Ethernet  WS-X6748-GE-TX     SAL1219PY4P
  5    2  Supervisor Engine 720 (Active)         WS-SUP720-3B       SAL1202CNVA

Mod MAC addresses                       Hw    Fw           Sw           Status
--- ---------------------------------- ------ ------------ ------------ -------
  1  001e.f72b.239e to 001e.f72b.23a5   4.2   7.2(1)       3.2(7)       Ok
  2  001e.4a6f.d0f0 to 001e.4a6f.d0f7   2.3   8.7(0.22)ACE A2(1.3)      Ok
  3  001f.9e0f.8cac to 001f.9e0f.8cdb   2.7   12.2(14r)S5  12.2(18)SXF1 Ok
  5  001a.2f3c.ab40 to 001a.2f3c.ab43   5.6   8.5(2)       12.2(18)SXF1 Ok

Mod  Sub-Module                  Model              Serial       Hw     Status
---- --------------------------- ------------------ ----------- ------- -------
  3  Centralized Forwarding Card WS-F6700-CFC       SAL1218P68B  4.0    Ok
  5  Policy Feature Card 3       WS-F6K-PFC3BXL     SAL123302XN  1.9    Ok
  5  MSFC3 Daughterboard         WS-SUP720          SAL1202CQ3Q  3.1    Ok




------------------ show mls cef summary ------------------


Total routes:                     523869
    IPv4 unicast routes:          267309
    IPv4 Multicast routes:        3
    MPLS routes:                  256555<<<<<<<
    IPv6 unicast routes:          2
    IPv6 multicast routes:        0
    EoM routes:                   0




Can you please throw some light on this issue? Do you think it is a bug or a hardware issue?


I do see the following bug, which looks a bit similar, but I need some more information.


CSCsm27567
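
For reference, these checks can help confirm whether the FIB TCAM is really filling up and whether the exception has already kicked in (a minimal sketch for a Sup720/PFC3; exact output varies by release):

show mls cef maximum-routes
show mls cef summary
show mls cef exception status

The last command should report the current FIB exception state (TRUE/FALSE) per protocol; as long as it is still FALSE, traffic is still being forwarded in hardware.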

sampusarkar Thu, 04/08/2010 - 04:59

Hello Ganesh:


  Thanks for the reply.



The MLS maximum routes value is 512K:


pz-c6509-1#sh mls cef maximum-routes
FIB TCAM maximum routes :
=======================
Current :-
-------
IPv4 + MPLS         - 512k (default)
IPv6 + IP Multicast - 256k (default)


I can change the MPLS table size. But can you please tell me how MPLS routes are being propagated on customer routers and why this is giving FIB exception errors?
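
For what it is worth, resizing the partition is done in global configuration along these lines (the value is in units of 1K routes, the figure below is purely illustrative, and the new partitioning only takes effect after a reload):

! illustrative value (units of 1K routes); the change requires a reload to take effect
mls cef maximum-routes mpls 300

Bear in mind that all the partitions are carved out of the same FIB TCAM, so growing one of them leaves less room for the others.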


Regards

Arjun

Giuseppe Larosa Thu, 04/08/2010 - 05:16

Hello Arjun,


>> But can you please tell me how MPLS routes are being propagated on customer routers and why this is giving FIB exception errors?


The message advises that the CEF table is nearly full.

This has an impact on the forwarding plane, that is, on how traffic is processed.


This does not mean that BGP routes or MPLS label bindings stop being passed to other devices; that is part of the signalling plane.


It means that if the number of MPLS entries grows beyond the maximum, part of the traffic will be process switched, causing the CPU to go to 100%.



By the way, since you have more than 256,000 MPLS routes, it may be time to implement LDP label filtering.


example of LDP label filtering:


sh run | inc advertise
no mpls ldp advertise-labels
mpls ldp advertise-labels for PREFIX-LDP-out


sh ip access-lists PREFIX-LDP-out
Standard IP access list PREFIX-LDP-out
    10 permit 10.80.0.0, wildcard bits 0.0.255.255 (1718 matches)


The only labels that actually need to be advertised are those of the loopbacks.
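
Put together, the filtering shown above would be configured roughly like this (the list name and the 10.80.0.0/16 range are simply taken from the output above; adjust them to cover your own loopback addresses):

ip access-list standard PREFIX-LDP-out
 permit 10.80.0.0 0.0.255.255
!
no mpls ldp advertise-labels
mpls ldp advertise-labels for PREFIX-LDP-out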


Hope to help

Giuseppe

sampusarkar Thu, 04/08/2010 - 05:44

Hello Giuseppe:


Thanks for the reply. But on this Cat6k we do not run MPLS, so the question is how the MPLS table is being exhausted. What I have found is:


The default for MPLS alone is not 512K. If you refer to this output from "show mls cef maximum-routes":

IPv4 + MPLS         - 512k (default)

the 512K is shared between IPv4 unicast and MPLS combined, with no fixed allocation for either by default. If I add up the IPv4 unicast and MPLS routes, it comes to 523,864 (267,309 + 256,555), while the default 512K works out to 512 * 1024 = 524,288 entries. So at the moment we have practically exhausted the TCAM partition that holds IPv4 unicast and MPLS.

We have therefore decided to increase the size of the MPLS table. But I am not sure how MPLS routes are being propagated on a router where we do not run MPLS; only VRF-Lite is being used.

Regards
Arjun
