Cisco Support Community
Community Member

invalid CEF entries

I have a Cisco 6500 connected to a Cisco 7200 (c7200-is-mz.124-10.bin) via Fast Ethernet. The 7200 terminates RBE and PPPoE DSL sessions with CEF enabled system-wide. This configuration had been working fine for months, and all of a sudden, with no change to the config, I started seeing packet loss to some PPPoE customers; the RBE customers are fine. After investigating, I realized the issue was CEF related, and looking further I found that my PPPoE adjacencies were becoming invalid. I have been using the same config for the PPPoE connections for years, and those customers connect and get an IP via RADIUS with no issues. From the 7200 itself I am able to ping the customer IP with no packet loss. If I step back to the 6500, which connects to the 7200 via Ethernet, I get packet loss on every other packet. I have done some research on this, and the Cisco site states that invalid adjacencies are caused by invalid ARP entries and that the symptom is packet loss on every other packet.
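For context, this is how the symptom typically shows up from the 6500 side (address taken from the thread; the alternating pattern is schematic, but in IOS ping output "!" marks a reply and "." a timeout, so every-other-packet loss looks like this):

```
6500#ping xxx.xxx.108.25 repeat 10
!.!.!.!.!.
Success rate is 50 percent (5/10)
```

Per Cisco's CEF troubleshooting material, this alternating pattern appears because each dropped packet is punted to trigger ARP resolution, which lets the next packet through until the entry goes stale again.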

I have no idea at this point why, all of a sudden, I am getting invalid cached adjacencies when the PPPoE connections have a remote MAC address associated with them. As soon as I turn on CEF I get invalid cached adjacencies for the PPPoE sessions. I would appreciate any input; below are the troubleshooting steps I took.

Thanks, Paul

Here is an example of what I'm seeing:

xxx.xxx.108.25/32, version 3950, epoch 0, attached, connected, cached adjacency to Virtual-Access2.80

0 packets, 0 bytes

via Virtual-Access2.80, 0 dependencies

   invalid cached adjacency

xxx.xxx.108.105/32, version 4086, epoch 0, attached, connected, cached adjacency to Virtual-Access2.77

0 packets, 0 bytes

via Virtual-Access2.77, 0 dependencies

   invalid cached adjacency

debug:

*Mar 9 17:44:20: CEF-Drop: Packet for xxx.xxx.108.105 -- encapsulation
*Mar 9 17:44:20: CEF-Drop: Stalled adjacency for 0.0.0.0 on Virtual-Access2.64 for destination xxx.xxx.108.104
*Mar 9 17:44:20: CEF-Drop: Packet for xxx.xxx.108.104 -- encapsulation
*Mar 9 17:44:20: CEF-Drop: Stalled adjacency for 0.0.0.0 on Virtual-Access2.18 for destination xxx.xxx.108.76
*Mar 9 17:44:20: CEF-Drop: Packet for xxx.xxx.108.76 -- encapsulation
*Mar 9 17:44:20: CEF-Drop: Stalled adjacency for 0.0.0.0 on
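For anyone reproducing this, the drop messages above come from CEF drop debugging. A minimal capture sequence on the 7200 would be roughly the following (standard exec commands on 12.4; enable debugging with care on a loaded box):

```
debug ip cef drops          ! logs CEF-dropped packets (source of the *Mar 9 lines above)
show cef drop               ! per-slot drop counters (Encap_fail, No_route, ...)
show cef not-cef-switched   ! packets punted to the next switching path
show cef table events       ! ring buffer of recent FIB/adjacency changes
undebug all                 ! stop debugging when done
```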

sh cef table events:

CEF table events (storage for 100 events, 8282 events recorded)

+00:00:00.000:                 0.0.0.0/32         ADJ (Vi2.4) incompl.   [OK]

+00:00:00.000:                 0.0.0.0/32         ADJ (Vi2.64) incompl. [OK]

+00:00:00.000:                0.0.0.0/32         ADJ (Vi2.22) incompl. [OK]

+00:00:00.000:                 0.0.0.0/32         ADJ (Vi2.82) incompl. [OK]

+00:00:00.000:                 0.0.0.0/32         ADJ (Vi2.58) incompl. [OK]

+00:00:00.000:                 0.0.0.0/32         ADJ (Vi2.28) incompl. [OK]

+00:00:00.000:                 0.0.0.0/32         ADJ (Vi2.61) incompl. [OK]

+00:00:00.000:                 0.0.0.0/32         ADJ (Vi2.76) incompl. [OK]

+00:00:00.000:                 0.0.0.0/32         ADJ (Vi2.80) incompl. [OK]

CEF Drop Statistics

Slot Encap_fail Unresolved Unsupported   No_route     No_adj ChkSum_Err

RP       1037269           0     229600     417881           0           6

sh cef not-cef-switched:

CEF Packets passed on to next switching layer

Slot No_adj No_encap Unsupp'ted Redirect Receive Options   Access     Frag

RP   321306       0         26     127   165066       0       0       0

sh ip cef fast0/0 detail

IP CEF with switching (Table Version 2528), flags=0x0

2431 routes, 0 reresolve, 0 unresolved (0 old, 0 new), peak 10

94 instant recursive resolutions, 10 used background process

2431 leaves, 125 nodes, 499848 bytes, 7003 inserts, 4572 invalidations

6 load sharing elements, 2256 bytes, 6 references

universal per-destination load sharing algorithm, id 297DE1C5

4(1) CEF resets, 31 revisions of existing leaves

Resolution Timer: Exponential (currently 1s, peak 1s)

1 in-place/0 aborted modifications

refcounts: 41747 leaf, 32256 node

Table epoch: 0 (2431 entries at this epoch)

Adjacency Table has 360 adjacencies

73 IPv4 incomplete adjacencies

0.0.0.0/0, version 2439, epoch 0, cached adjacency xxx.xxx.98.117

0 packets, 0 bytes

via xxx.xxx.98.117, FastEthernet0/0, 0 dependencies

   next hop xxx.xxx.98.117, FastEthernet0/0

   valid cached adjacency

xxx.xxx.14.16/29, version 644, epoch 0, cached adjacency xxx.xxx.98.117

0 packets, 0 bytes

via xxx.xxx.98.117, 0 dependencies, recursive

   next hop xxx.xxx.98.117, FastEthernet0/0 via xxx.xxx.98.117/32

   valid cached adjacency

xxx.xxx.98.116/30, version 591, epoch 0, attached, connected

0 packets, 0 bytes

via FastEthernet0/0, 0 dependencies

   valid glean adjacency

xxx.xxx.98.117/32, version 366, epoch 0, cached adjacency xxx.xxx.98.117

0 packets, 0 bytes

via xxx.xxx.98.117, FastEthernet0/0, 1 dependency

   next hop xxx.xxx.98.117, FastEthernet0/0

   valid cached adjacency
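When comparing valid and invalid entries like the ones above, a useful cross-check is whether the Layer 2 rewrite CEF holds matches what ARP (or the PPPoE session) knows, and whether forcing a rebuild clears the state. These are standard IOS commands; the address and interface are placeholders taken from the thread:

```
show ip cef xxx.xxx.108.25 detail         ! FIB entry and cached-adjacency state
show adjacency Virtual-Access2.80 detail  ! rewrite string CEF will actually use
show ip arp xxx.xxx.108.25                ! what ARP thinks (Ethernet/RBE paths)
clear arp-cache                           ! force ARP re-resolution
clear adjacency                           ! rebuild the CEF adjacency table
```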

5 REPLIES
Hall of Fame Super Gold

invalid CEF entries

Have you tried reloading the router?

Community Member

invalid CEF entries

Yes, I reloaded it, but nothing changed. As soon as I turn on CEF I see issues. The odd thing is that it was working fine for months.

Hall of Fame Super Gold

invalid CEF entries

I would upgrade the IOS on the 7200, preferably to an S version.

Community Member

invalid CEF entries

That still doesn't explain why it was working and then suddenly stopped. Also, I'm not sure if the IOS version above has any known bugs with PPPoE; I don't have access to that info on the Cisco site.

Community Member

invalid CEF entries

After looking at my CEF config, which had been working for months before I saw the problems above, I concluded that it should be working and that I shouldn't be seeing any CEF problems, so I decided to give it one more try.

There were two things I looked at and adjusted/fixed, but I'm not sure which one actually fixed the problem.

1. The CEF troubleshooting guide says that inverse ARP for ATM is responsible for creating the adjacencies. What I think was happening is that a lot of the PPPoE sessions were disconnecting due to the idle timeout, which was set to 14 minutes. Since ATM InARP entries age out after a default of 15 minutes, some PPPoE sessions were reconnecting on different virtual-access interfaces without CEF knowing about it, because the sessions were disconnecting and reconnecting faster than the ATM InARP age-out time.

2. The other possibility: we found a customer who took it upon themselves to configure a /24 and a static IP address from the dynamic pool. Since all the dynamic PPPoE sessions get a /32, I'm not sure if that /24 was causing CEF issues.

I'm not 100% sure which change fixed the problem, but it's working.
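For reference, the timer race described in point 1 can be avoided by keeping the InARP age-out comfortably shorter than the session idle timeout. The sketch below shows the two knobs involved; the values are illustrative, the interface names are hypothetical, and command availability depends on your IOS and PVC encapsulation (InARP applies to aal5snap PVCs, e.g. the RBE side, not to PPPoE itself):

```
interface Virtual-Template1
 ppp timeout idle 840     ! 14-minute PPP idle timeout, as in point 1
!
interface ATM1/0.1 point-to-point
 pvc 0/100
  inarp 10                ! age InARP entries faster than sessions recycle
```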
