
Multilink PPP, IP CEF or Both

nxm
Level 1

I will be installing two 2811 routers with four dedicated (point-to-point) T1 connections.

Is it better (faster) to use Multilink PPP, IP CEF, or can I use a combination of both?


pkhatri
Level 11

Hi,

You will need to enable CEF on your router for other reasons, and not just to provide load-sharing.

However, when comparing load-sharing via CEF with MLPPP, note the following:

- MLPPP will give you a single interface with a bandwidth of 4*T1. Therefore, you can use up the entire 4*T1 for something like a single data transfer.

- MLPPP is CPU-intensive, so you will need to keep an eye on the CPU when you enable it. One option is to use MLPPP but disable fragmentation (which is fine for T1 links).

- with CEF, flows will be load-shared over the 4 links. The disadvantage is that a single flow will never be able to use more than one T1, since CEF will not allow a flow to use multiple physical links. Therefore, if you try to do something like an FTP transfer, you will never get more than one T1's worth of throughput at a time.

Therefore, the recommendation is to go with MLPPP with a word of caution on CPU usage. You will still have to have CEF running on your router but it just won't do load-sharing over these 4 links since you will now have a single multilink virtual interface.
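
To give you an idea, here is a minimal sketch of what the MLPPP bundle configuration could look like on each 2811. The interface numbers, IP addressing and multilink group number are only placeholders - adjust them to your own setup and verify the syntax against your IOS release:

ip cef
!
interface Multilink1
 ip address 192.0.2.1 255.255.255.252
 ppp multilink
 ppp multilink group 1
 ppp multilink fragment disable
!
interface Serial0/0/0
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
!
interface Serial0/0/1
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
!
! repeat for the remaining two serial interfaces

The 'ppp multilink fragment disable' line reflects the suggestion above to run without fragmentation; the far-end router (or the ISP) needs a matching bundle configuration.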

Hope that helps - pls rate the post if it does.

Paresh

Hi,

What are some of the other reasons that CEF should be enabled?

thanks

Well, CEF is a switching mechanism. IOS supports a number of switching mechanisms:

- process-switching - CPU-intensive

- fast-switching - less CPU-intensive

- CEF-switching - highly optimised.

So you are going to be using one of the above, for sure. It's better to enable CEF and to use that since it performs much better than any of the other methods. In fact, in newer IOSes, it should be enabled by default.

CEF is also required for a number of other things like QoS marking and NBAR. The bottom line is that you have so much to gain and so little to lose by enabling CEF.

Enable it using: 'ip cef'
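
For example (the serial interface name is just a placeholder; these are standard show commands and their output varies by IOS release):

Router(config)#ip cef
Router(config)#end
Router#show ip cef summary
Router#show ip interface Serial0/0/0 | include switching

The last command should report that CEF switching is enabled on the interface.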

Hope that helps - pls rate the post if it does.

Paresh

chad patterson
Level 1

      As I am assuming that you are connecting to an ISP from an edge router, you will want to aggregate your T1's for the fastest, most efficient connection. Which makes pkhatri correct: you will definitely want to use both. Use MLPPP to aggregate several T1's, but you have to have the ISP configure this on their end too. There are two ways you can do load-sharing with CEF, per-packet and per-destination, but one is flow-based and neither is nearly as efficient as MLPPP.

     The drawback to per-packet load-sharing is that you will not be able to stream video or use VoIP very effectively (or any UDP-based application). The drawback to per-destination is that it is flow based, and therefore will not utilize all lines efficiently. It will favor the line with the best connection, then when that line is full and slow, it will start to use the second best line, and etc. MLPPP on the other hand will evenly and effectively send packet fragments on all of the lines, and reassemble them at the ISP, so there will be no UDP problems.

     I think a lot of people forget what some of these features are designed for. For instance, CEF is not necessarily optimised for connecting to the ISP from an edge, but it is optimised for routing among managed, interconnected routers.


Chad, several statements in your posting caught my attention.  I would appreciate any additional information you might be able to provide for clarification and my education.

"There are two way with which you can do load-sharing with CEF, per-packet and per-destination, but one is flow-based and neither is nearly as efficient as MLPPP. "

As you might note in my other posting, I would expect CEF per-packet to have less overhead than MLPPP, so could you expand on what you mean by ". . . neither is nearly as efficient as MLPPP"?  Did you have in mind using small fragments so you could transfer a single packet as quickly as possible?

"The drawback to per-destination is that it is flow based, and therefore will not utilize all lines efficiently. It will favor the line with the best connection, then when that line is full and slow, it will start to use the second best line, and etc."

My understanding of CEF is that, in per-destination mode with multiple links, it just basically round-robins the flows.  Could you provide information or references on CEF selecting "best connections" and switching to the "second best" "when that line is full and slow"?

"For instance, CEF is not necessarily optimized for connecting to the ISP from an edge, but it is optimized for routing among managed, interconnected routers."

Could you explain how or when CEF is not optimized for edge to ISP connections?  Are you saying there's something better optimized and/or certain edge to ISP connections for which CEF should be deactivated?

Thanks.

Joseph W. Doherty
Hall of Fame


nxm@san.lacity.org wrote:

I will be installing two 2811 routers with four dedicated (point-to-point) T1 connections.

Is it better (faster) to use Multilink PPP, IP CEF, or can I use a combination of both?

As the others have posted, especially Paresh, you almost always want CEF enabled.  About the only valid reason for turning it off would be if you're debugging an issue that might be caused by CEF.

Normally you would also opt for MLPPP, as its huge advantage is to provide the aggregate of all your links, even to a single flow.  (As also noted by Paresh.)  Basically your 4 T1s should behave much like (but not exactly like) a single 6 Mbps link.

The disadvantage of MLPPP is that it increases overhead on the router to manage it and consumes some of your link bandwidth.  From my experience using MLPPP on a 2811 with 4 T1s, the overhead is usually minimal as long as you don't enable MLPPP fragmentation support (once again also noted by Paresh, though I don't think he mentioned the slight use of your link bandwidth).

Fragmentation can "slice and dice" individual packets across your multiple links, making your 4 T1s even more like a single 6 Mbps link, but it increases the MLPPP processing overhead.

One area where I differ with Paresh: he's correct that CEF normally directs the same flow to just one link, but some CEF implementations also offer packet-by-packet load sharing.  This would spread packets somewhat like MLPPP without fragmentation, and likely with less overhead.  However, it would also likely disrupt a flow's packet sequence, which can create a bunch of performance issues! (I highly recommend you don't pursue this, especially across four links, unless you know all your traffic is insensitive to packet ordering!)  MLPPP guarantees the receiving router will restore a flow's packet sequence before it forwards.

Lastly, since MLPPP treats your 4 physical links as one logical link, it can reduce (very slightly) the resource usage for maintaining routing, i.e. one p2p link to handle logically vs. four of them.
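
As a side note, once the bundle is up you can sanity-check it with the standard show commands (nothing beyond the bundle itself is assumed here): 'show ppp multilink' lists the bundle, its member links and the fragment/reordering counters, and 'show interfaces Multilink1' shows the load on the single logical interface.

Router#show ppp multilink
Router#show interfaces Multilink1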

     Actually CEF has absolutely nothing to do with MLPPP multilink. CEF is enabled by default and does 'process switching', and should only be turned off if the interface has a feature enabled that cannot support CEF, which would be unlikely in most cases.

     CEF is just a routing trouble, and stores information for hops, i.e.., the best next hop in a route to reach a destination on a given network. If you are connected to the ISP, then the ISP gateway IS YOUR ONLY NEXT HOP, and makes the routing table stored in CEF useless because every singly entry in the table will be "ISP gateway". This assumes you have a simple private local subnet consisting of a mere /24 size. If your private network has more than one subnet, then CEF will assist in routing to destinations on the local subnets.

     Lastly a link of  MLPPP link of 4 T1's does make that link behave EXACTLY like a single 6Mbps link. It routes packet fragments all up and down those 4 T1 lines as if it were a single line. One can use CEF to aggregate 4 T1's, but there are only two ways to achieve this, and that is through the load-sharing command applied to each interface.

     There are 2 different load-sharing commands, and they both route packets in different ways: load-sharing per-destination and load-sharing per-packet. If one chooses a per-destination basis, then it is flow based, and it will not efficiently utilize all links, as each destination is a single flow. However, the drawback to using a per-packet basis is that the packets arrive out of order, and UDP-based services such as VoIP and streaming media will not work correctly.

     A smart man would choose MLPPP, but you have to set this up with your ISP in advance, as they have to apply MLPPP on their Cisco router as well. CEF will not interfere with this and will also not offer any advantage to you (unless you apply load sharing).

     CEF load-sharing example:

ip cef
ip cef load-sharing algorithm original
!
interface WAN1
 ip load-sharing per-destination
!
interface WAN2
 ip load-sharing per-destination
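
For comparison, the per-packet form is applied the same way (same placeholder interface names); just keep in mind the out-of-order caveat above:

ip cef
!
interface WAN1
 ip load-sharing per-packet
!
interface WAN2
 ip load-sharing per-packet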


Hi Chad,

Please allow me to comment on some of your statements as I happen to have different views.

CEF is enabled by default and does 'process switching'

This is probably a typo. Cisco differentiates very strongly between process switching and interrupt-context switching. CEF falls into the category of interrupt-context switching. Most definitely, it is not process switching (although it is often implemented in software as code that is executed when routing a packet).

CEF is just a routing trouble, and stores information for hops, i.e..,  the best next hop in a route to reach a destination on a given network.  If you are connected to the ISP, then the ISP gateway IS YOUR ONLY NEXT  HOP, and makes the routing table stored in CEF useless because every  singly entry in the table will be "ISP gateway".

I assume that instead of "trouble", you mean "table". However, CEF is not exactly a table. CEF consists of two components, the FIB and the adjacency table. The FIB is a prefix tree that stores the network addresses (prefixes) from the routing table. The tree has been chosen because it provides one of the most effective data structures for retrieval of information. Note that the FIB is initialized using the contents of the routing table, and apart from a handful of system-specific routes, it does not contain anything that the routing table does not also contain. Nodes in the FIB tree may contain pointers to the adjacency table, pointing directly to the L2 information relevant for routing a packet to or through a particular neighbor. In database terms, the FIB is an index over the adjacency table, with the lookup key being the destination IP address of a packet.

The adjacency table contains frame headers that would be used to deliver packets to or through a particular directly-connected neighbor. This table is constructed using the existing L3/L2 mapping tables, such as the ARP table. For each next hop and each directly connected end host, the router has an entry in its ARP table. Using this information, it can precompute the entire Ethernet header of a frame that would be used to carry packets to or through each next hop router or host. Again, the adjacency table does not hold any entries apart from those derived from existing L3/L2 mapping tables.

I cannot agree with the statement that CEF is useless in the case of just one gateway. Even with just one gateway, the process of routing a packet is more efficient using CEF. Just to compare:

  • In process switching, a packet arrives. You take its destination IP, proceed through the routing table in a linear fashion, performing binary ANDs between the destination IP and the netmask in each row of the table, comparing the result of this AND with the network address in the routing table row. Depending on the size of the routing table, tens or hundreds of thousands of ANDings and comparisons must be performed before a matching row is found. After finding the first match, you take the IP address of the located next hop from the matching routing table entry. If the routing table entry does not indicate the egress interface, you need to perform a recursive lookup using this next hop address again and again until you find a routing table row that also contains the information about the egress interface. Next, you visit the L3/L2 mapping table associated with the egress interface and look up the L3-to-L2 mapping of the most recently found next hop. Then you construct the frame header using the located addressing information, encapsulate the packet and send it out. This process repeats with each incoming packet.
  • In CEF switching, a packet arrives. You take its destination IP and start comparing it bit by bit with the values stored in the FIB tree. At most 32 comparisons are necessary. Using the last (deepest) matching tree node that contained a pointer to the adjacency table entry, you immediately locate the template of the prepared frame header and information about the egress interface in the adjacency table. No recursion is necessary as this is resolved during the buildup of CEF structures. No repetitive lookups in L3-to-L2 tables are performed because the results of such lookups have already been processed into the adjacency table.

The CEF switching is thus noticeably superior to the workable but naive process switching.
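
As a rough illustration, you can look at both structures on an IOS router with the standard show commands (the prefix below is just an example): 'show ip cef' lists the FIB entries, 'show ip cef ... detail' shows how a particular prefix resolves, and 'show adjacency detail' shows the precomputed frame rewrite information held in the adjacency table.

Router#show ip cef
Router#show ip cef 192.0.2.0 255.255.255.0 detail
Router#show adjacency detail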

a link of  MLPPP link of 4 T1's does make that link behave EXACTLY like a single 6Mbps link.

To be precise, the MLPPP bundle consisting of 4 T1's will be slightly slower than a single 6Mbps link because of the additional 4-byte MLPPP fragmentation header inserted into each fragment travelling over a particular physical link in the bundle, consuming some of the bandwidth. It also has to be noted that the fragmentation and reassembly of packets over MLPPP consumes router resources - after a certain number of links in the bundle, your router (its CPU) may become the bottleneck itself, rather than the interfaces.

It will favor the line with the best connection, then when that line is  full and slow, it will start to use the second best line, and etc.

I beg to differ. Even the per-destination load balancing is performed solely by hashing the addressing fields of the packet.

Router#show ip route

Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP

       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area

       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2

       E1 - OSPF external type 1, E2 - OSPF external type 2

       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2

       ia - IS-IS inter area, * - candidate default, U - per-user static route

       o - ODR, P - periodic downloaded static route

Gateway of last resort is not set

     10.0.0.0/24 is subnetted, 1 subnets

C       10.0.0.0 is directly connected, FastEthernet0/0

S    192.0.2.0/24 [1/0] via 10.0.0.14

                  [1/0] via 10.0.0.13

                  [1/0] via 10.0.0.12

                  [1/0] via 10.0.0.11

Router#show ip cef exact-route 1.1.1.1 192.0.2.1     

1.1.1.1         -> 192.0.2.1      : FastEthernet0/0 (next hop 10.0.0.14)

Router#show ip cef exact-route 1.1.1.1 192.0.2.2

1.1.1.1         -> 192.0.2.2      : FastEthernet0/0 (next hop 10.0.0.11)

Router#show ip cef exact-route 1.1.1.1 192.0.2.3

1.1.1.1         -> 192.0.2.3      : FastEthernet0/0 (next hop 10.0.0.12)

Router#show ip cef exact-route 1.1.1.1 192.0.2.4

1.1.1.1         -> 192.0.2.4      : FastEthernet0/0 (next hop 10.0.0.12)

Router#show ip cef exact-route 1.1.1.1 192.0.2.5

1.1.1.1         -> 192.0.2.5      : FastEthernet0/0 (next hop 10.0.0.12)

Router#show ip cef exact-route 1.1.1.1 192.0.2.6

1.1.1.1         -> 192.0.2.6      : FastEthernet0/0 (next hop 10.0.0.14)

Router#show ip cef exact-route 1.1.1.1 192.0.2.7

1.1.1.1         -> 192.0.2.7      : FastEthernet0/0 (next hop 10.0.0.12)

Router#show ip cef exact-route 1.1.1.1 192.0.2.8

1.1.1.1         -> 192.0.2.8      : FastEthernet0/0 (next hop 10.0.0.12)

Router#show ip cef exact-route 1.1.1.2 192.0.2.1

1.1.1.2         -> 192.0.2.1      : FastEthernet0/0 (next hop 10.0.0.13)

Router#show ip cef exact-route 1.1.1.3 192.0.2.1

1.1.1.3         -> 192.0.2.1      : FastEthernet0/0 (next hop 10.0.0.13)

Router#show ip cef exact-route 1.1.1.4 192.0.2.1

1.1.1.4         -> 192.0.2.1      : FastEthernet0/0 (next hop 10.0.0.12)

Note that a changing combination of source/destination IP results in a different hash value, and thus in a different egress interface and next hop. However, repeating the same command several times will always produce the same path:

Router#show ip cef exact-route 1.1.1.1 192.0.2.1

1.1.1.1         -> 192.0.2.1      : FastEthernet0/0 (next hop 10.0.0.14)

Router#show ip cef exact-route 1.1.1.1 192.0.2.1

1.1.1.1         -> 192.0.2.1      : FastEthernet0/0 (next hop 10.0.0.14)

Router#show ip cef exact-route 1.1.1.1 192.0.2.1

1.1.1.1         -> 192.0.2.1      : FastEthernet0/0 (next hop 10.0.0.14)

Router#show ip cef exact-route 1.1.1.1 192.0.2.1

1.1.1.1         -> 192.0.2.1      : FastEthernet0/0 (next hop 10.0.0.14)

Router#show ip cef exact-route 1.1.1.1 192.0.2.2

1.1.1.1         -> 192.0.2.2      : FastEthernet0/0 (next hop 10.0.0.11)

Router#show ip cef exact-route 1.1.1.1 192.0.2.2

1.1.1.1         -> 192.0.2.2      : FastEthernet0/0 (next hop 10.0.0.11)

Router#show ip cef exact-route 1.1.1.1 192.0.2.2

1.1.1.1         -> 192.0.2.2      : FastEthernet0/0 (next hop 10.0.0.11)

Router#show ip cef exact-route 1.1.1.1 192.0.2.2

1.1.1.1         -> 192.0.2.2      : FastEthernet0/0 (next hop 10.0.0.11)

Router#

I think a lot of people forget what some of these features are designed  for. For instance, CEF is not necessarily optimised for connecting to  the ISP from an edge, but it is optimised for routing among managed,  interconnected routers. 

I believe you are inferring this based on a wrong premise. CEF was optimized to provide rapid lookup of frame rewrite information and egress interface, something that you are always interested in, regardless of how many neighbors you have - either a single ISP or dozens of neighboring routers.

Best regards,

Peter

CEF is enabled by default and does 'process switching'

This is probably a typo. Cisco differentiates very strongly between process switching and interrupt-context switching. CEF falls into the category of interrupt-context switching. Most definitely, it is not process switching (although it is often implemented in software as code that is executed when routing a packet).

     OK, you called me out on that one. My statement on that was just plain wrong. CEF isn't process switching. I meant that it basically performs the same function, but that's not how it came out at all. So I admit that was a bad and misleading statement.

     However, as far as the rest of your post is concerned, I'd say it's a bit off topic and misleading. I was responding to the original poster's question as to whether he should use MLPPP or CEF to aggregate 4 T1's. Don't believe me? Then please refer to the original post. Now, whatever it was that I posted was in relation to that topic: MLPPP vs CEF. And if you think they do the same thing, you are wrong. Have you tried to apply them in the real world? I aggregate T1's and ADSL lines all the time.

     Now with MLPPP you get one interface. Packets get fragmented and reassembled in the correct order. How is this significant, you ask? Well, that means you can stream movies and use VoIP services on that network. There may be a slight framing cost, but it is insignificant. When I run a speed test on an MLPPP bundle of 8 bonded T1's, I get a 12Mbps download speed.

     Let's consider using CEF on 8 T1 lines, shall we? First of all, with CEF, you don't get true aggregation: you get 'load-sharing'. You have to specify this at the interface level, by the way. However, there are two forms of load-sharing that CEF can use: per-destination and per-packet. The difference between them is significant.

     When you use the per-destination form of load sharing, your packets are routed by source/destination flows, meaning that when you open a service on the Internet, you will use only one T1 line to transport all of its packets. Of course you will use all 8 T1 lines at the same time for separate services; they just will not be in sync with each other. You can use streaming media and VoIP services with this setup. But when you run a speed test, you will never surpass 1.5Mbps. That does not even come close to 12Mbps!

     When you use the per-packet form of load sharing, your packets are sent down all 8 of the T1 lines, but since they take different paths, they arrive out of order. You CANNOT use streaming media and VoIP services with this setup. When you run a speed test, you will not even come close to 12Mbps.

     You can also use route maps in conjunction with CEF, but guess what: you will still not surpass 1.5Mbps! Have you found a way to surpass these limitations? If so, do tell!

Hi Chad,

Thanks for responding.

However as far as the rest of your post is concerned, I'd say it's a bit off topic and misleading.

Yes, I admit I am off-topic with respect to the topic of the original question - because I reacted to some claims about CEF you made that I found myself in disagreement with. I have openly stated I am going to react to some of your claims, not to the claims of the OP, and I quoted each of your statements I've presented a counterview to - making my response on-topic with your post. It is normal to branch off a discussion in an open threaded forum. I do not think, though, that I was misleading, i.e. leading to wrong conclusions, in my response. From my perspective, I felt some of your statements about CEF were not entirely correct so I reacted.

Best regards,

Peter

I believe you are inferring this based on a wrong premise. CEF was optimized to provide rapid lookup of frame rewrite information and egress interface, something that you are always interested in, regardless of how many neighbors you have - either a single ISP or dozens of neighboring routers.

     Peter, I have to call you out on this. You cannot get around the fact the CEF uses a database (whatever name you want to give it, it is basically a database (and by the way a table is a database)) to find the best next-hop for a packet. This is the main function of CEF. 

Taken from http://www.cisco.com/en/US/docs/ios/12_1/switch/configuration/guide/xcdcef.html#wp1000937

The two main components of CEF are as follows:

  • Forwarding information base (FIB)— CEF uses an FIB to make IP destination prefix-based switching decisions. The FIB is conceptually similar to a routing table or information base. It maintains a mirror image of the forwarding information contained in the IP routing table. When routing or topology changes occur in the network, the IP routing table is updated, and those changes are reflected in the FIB. The FIB maintains next-hop address information based on the information in the IP routing table. In the context of CEF-based MLS, both the Layer 3 engine and the hardware-switching components maintain an FIB.
  • Adjacency tables—Network nodes in the network are said to be adjacent if they can reach each other with a single hop across a link layer. In addition to the FIB, CEF uses adjacency tables to store Layer 2 addressing information. The adjacency table maintains Layer 2 addresses for all FIB entries. As with the FIB, in the context of CEF-based MLS, both the Layer 3 engine and the hardware-switching components maintain an adjacency table.

     Please refer to this documentation to see how CEF uses ADJACENCY TABLES to store MAC addresses, in conjunction with a FIB to find NEXT-HOP ADDRESS INFORMATION.

     Now Peter, I don't know what kind of relationship you have with your ISP, but if they allow you to skip their router (which is the thing that is ADJACENT to your edge router) and connect directly to 8.8.8.8 (Google) as a next-hop, then can you please give me their phone number?!

Hi Chad,

Peter, I have to call you out on this. You cannot get around the fact  the CEF uses a database (whatever name you want to give it, it is  basically a database (and by the way a table is a database)) to find the  best next-hop for a packet. This is the main function of CEF.

Am I saying anything different? The result of routing a packet is encapsulating it into a properly addressed frame and sending it out the appropriate interface towards the next hop which is identified by the L2 addressing information in the frame. That is why I wrote that "CEF was optimized to provide rapid lookup of frame rewrite information and egress interface". Taken from

http://www.cisco.com/en/US/tech/tk827/tk831/technologies_white_paper09186a00800a62d9.shtml#express

"Cisco Express Forwarding, also uses a 256 way data structure to store forwarding and MAC header rewrite information"

Please refer to this documentation to see how CEF uses ADJACENCY TABLES to store MAC addresses, in conjunction with a FIB to find NEXT-HOP ADDRESS INFORMATION.

The FIB does not need to store next-hop address information because it does not need it to perform packet routing. I am openly stating that the documentation is not precise on this point. The next hop address is just the first step in locating the frame header rewrite information, and is discarded afterwards. During the construction of the FIB and adjacency table, this second step of translating next hop IP addresses into L2 addresses has already been done, so there is no further use for next hop IP addresses in the FIB. The FIB stores the destination networks only. From the same document indicated above:

Cisco Express Forwarding uses a trie, which means the actual information being searched for is not in the data structure; instead, the data is stored in a separate data structure, and the trie simply points to it. In other words, rather than storing the outbound interface and MAC header rewrite within the tree itself, Cisco Express Forwarding stores this information in a separate data structure called the adjacency table.

Best regards,

Peter

Am I saying anything different? The result of routing a packet is encapsulating it into a properly addressed frame and sending it out the appropriate interface towards the next hop which is identified by the L2 addressing information in the frame. That is why I wrote that "CEF was optimized to provide rapid lookup of frame rewrite information and egress interface".

     Peter, you are saying something different. In the context of this discussion, you have been arguing that CEF is optimized for routing packets between an edge router and an ISP. But it is not; it is optimised for processing packets. OK, so CEF knows where the best next-hop is faster than process-switching, great. It still only has one hop to choose from! Therefore it is not optimised for routing from an edge router to an ISP, like say MLPPP is. It also cannot choose efficiently between 8 different equal-cost lines that are only one hop away from the ISP, because they are only one hop away (but CEF will make that decision faster, LOL).

Chad,

you have been arguing that CEF is optimized for routing packets between an edge router and an ISP

I believe I have written that CEF is optimized for rapid lookup of the information necessary to perform packet routing, and that the number of hops it can choose from is irrelevant to CEF's performance.

Please note that by saying CEF, I am not talking about load-sharing whatsoever. When I talk about CEF, I talk about performing the basic routing using the FIB/ADJ. I am not talking about "spreading" the traffic through several links. That is an added value of CEF, but it is not its defining property. This may be the point where we diverge in our understanding.

OK, so CEF knows where the best next-hop is faster than process-switching, great. It still only has one hop to choose from! Therefore it is not optimised for routing from an edge router to an ISP, like say MLPPP is.

I do not believe that putting CEF and MLPPP into this kind of relation - "CEF is not optimised for routing from an edge router like MLPPP is" - is appropriate. In doing this, you are trying to compare and contrast two totally different things. CEF is a realization of the routing function. MLPPP is a link layer technology with no relation to routing whatsoever. Even with MLPPP running on the edge router, you need routing, as the internal LAN interfaces of the edge router are on different networks. Whether CEF's load balancing (or sharing) mechanisms can replace MLPPP - no, I don't think they can. They can do load balancing, but they do it differently, not equivalently to MLPPP.

Best regards,

Peter
