Bond DS3's on 7206

Answered Question
Jun 27th, 2009

I'm trying to create a bundle using 3 x DS3 on a 7206 in order to have a backbone link larger than 100 Mbps. Due to the limitations of MLPPP I cannot do this with a multilink bundle when the DS3s are clear-channel, and these cards are not PA-2T3+, so I can't channelize them into T1s and bond over multiple MLPPP links.


How can I accomplish this task on a 7206 NPE300 with 3 x PA-2T3? I have six of these in a ring configuration over microwave.

Overall Rating: 5 (1 rating)
Giuseppe Larosa Sat, 06/27/2009 - 23:26

Hello Paul,

I would use the three links as distinct L3 links.

I don't think MLPPP can scale at these speeds.

It is much less work for the router to use a routing protocol and see three parallel links.

Flow-based CEF load balancing will do the rest of the job.


You can use OSPF or EIGRP.

You will have three subnets and for the routing protocol they will be three LAN segments.

For each destination the router will see three possible next-hops.
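A minimal IOS sketch of that layout; the /30 addressing, serial interface numbers, and EIGRP AS number are illustrative assumptions, not taken from the thread:

```
! Three clear-channel DS3s as independent L3 links (addressing is an example)
interface Serial1/0
 ip address 10.0.1.1 255.255.255.252
!
interface Serial1/1
 ip address 10.0.2.1 255.255.255.252
!
interface Serial2/0
 ip address 10.0.3.1 255.255.255.252
!
router eigrp 100
 network 10.0.0.0
 no auto-summary
```

With equal metrics on all three links, EIGRP installs three equal-cost routes and CEF load-shares across them (per destination by default).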


Hope to help

Giuseppe


pgio911 Mon, 06/29/2009 - 06:14

Giuseppe,


Thanks for the response.


We are currently running this config using EIGRP and a single DS3 and it works great. So you are saying that simply adding the two new DS3s as individual L3 links and using CEF will get the desired result?


Would you suggest using CEF per packet or per destination load balancing?


Thanks again,


Paul

Giuseppe Larosa Mon, 06/29/2009 - 07:32

Hello Paul,

Use the default per-destination load balancing.


Per-packet load balancing has several drawbacks.
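Per-destination is the CEF default, so normally nothing needs to be configured; if it was previously changed, it can be restored on each path like this (interface name is illustrative):

```
interface Serial1/0
 ip load-sharing per-destination
```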


Hope to help

Giuseppe



pgio911 Tue, 06/30/2009 - 11:46

Giuseppe,


I have this configuration set up on the bench with 3 x DS3 using HDLC encapsulation, EIGRP, and IP CEF on both 7206s. There is also an FE in each.


I'm using two JDSU Ethernet testers to test the throughput between them, and I can only push 54 Mbps before losing frames and receiving OoS errors.


IP CEF shows all three DS3s as available paths between the FE on each router, and EIGRP shows all routes between the two 7206s.


Any ideas as to why I can't get more than 54 Mbps between these two on the bench?

nick.mueller Tue, 06/30/2009 - 21:25

I am not personally familiar with JDSU Ethernet testers, but do they utilize multiple different IP addresses?


Because with per-destination load balancing, you might not be utilizing all available links fully unless they do.

pgio911 Wed, 07/01/2009 - 06:32

We tested this using both the JDSU tester and iperf from two separate connected laptops. The JDSU units send traffic between them based on IPs that are static or received via DHCP. These units were connected to an FE card on each 7206.


When you do a "show ip cef" you see that all three DS3 addresses exist as paths to the destination IP address assigned to the FE card. EIGRP shows this as well.


It would appear that the algorithm used to "load-balance" is not distributing the packets in a way that allows them to use all the available paths.

Correct Answer
nick.mueller Wed, 07/01/2009 - 08:17

> It would appear that the algorithm used to "load-balance" is not distributing the packets in a way that allows them to use all the available paths.


Right, that is exactly correct and expected due to per-destination load balancing.


Based on your description of the JDSU, and definitely in the case of two laptops with iperf, you have exactly one source-destination pair (based on IP address).


The per-destination load-balancing algorithm uses a hash function to choose a single path for traffic between a source-destination pair. All traffic between the pair will use this path (i.e., one of your DS3 interfaces).


If you had two sets of laptops, then depending on how the pairs hash, each iperf session might use a different interface.


You can verify this by looking at the interface statistics after running iperf; one of the interfaces will have much more traffic than the others.
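Two ways to check this on the router (interface names and test addresses are placeholders): compare per-interface traffic rates, and ask CEF which path it selects for a given source/destination pair:

```
show interfaces Serial1/0 | include rate
show interfaces Serial1/1 | include rate
show interfaces Serial2/0 | include rate
! Show which of the equal-cost paths the CEF hash picks for one flow
show ip cef exact-route 192.168.1.10 192.168.2.10
```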


Now when you add a bunch of servers and clients to the mix, there will be multiple sources and destinations, so multiple links will be used, but no single "session" can exceed the maximum bandwidth of whichever link the hash chooses.


If there are only a limited number of source-destination pairs, you will have imbalances. In that case, you could try per-packet load balancing by entering interface configuration mode and adding the command "ip load-sharing per-packet". You *must* add this command on all paths to the destination (i.e., all three DS3 interfaces).
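Applied to all three DS3 paths, the change would look like this (interface numbers assumed for illustration):

```
interface Serial1/0
 ip load-sharing per-packet
interface Serial1/1
 ip load-sharing per-packet
interface Serial2/0
 ip load-sharing per-packet
```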


Per-packet load balancing can lead to out-of-order delivery, so applications sensitive to delivery order (e.g., VoIP) may have issues. It may also lead to higher CPU utilization.

pgio911 Wed, 07/01/2009 - 19:36

Nick,


Great feedback... Thanks!


My JDSU units are capable of sending up to 1 Gbps of traffic over 16 streams, so I will set up the test that way and let you know the results, but what you are saying makes perfect sense based on my results.


Thanks much...!


Paul

pgio911 Fri, 07/17/2009 - 09:05

Nick,


Just FYI... We bench tested this solution using multiple streams from the JDSUs and several laptops, and everything worked perfectly.


I was able to max out the 135 Mbps of the 3 x DS3s using numerous streams.


Your input was very helpful and much appreciated!


Thanks,


Paul

paolo bevilacqua Sun, 06/28/2009 - 14:31

Ask the SP for an OC-3 instead. They should be happy, as they will be saving a card on the mux, and you will have an easier life and better performance.

pgio911 Mon, 06/29/2009 - 06:16

This is legacy microwave: a private six-hop ring consisting of 4 clear-channel DS3s.

Joseph W. Doherty Mon, 06/29/2009 - 03:23

BTW, an NPE300 is a bit light to support 150 Mbps (duplex). (The NPE300 PPS rate is about the same as a 3825's.) Much would depend on your average packet size and other processing overhead, such as MLPPP. (If you do find you can run MLPPP, MLPPP fragmentation, on vs. off, might impact CPU utilization.)


