
Bond DS3's on 7206

pgio911
Level 1

I'm trying to create a bundle using 3 x DS3 on a 7206 in order to have a backbone link that is larger than 100 Mbps. Due to the limitations of MLPPP I cannot do this using a multilink bundle when the DS3s are clear-channel, and these cards are not PA-2T3+ so I can't break them into T1s and bond over multiple MLPPP links.

How can I accomplish this task on a 7206 NPE-300 with 3 x PA-2T3? I have 6 of these in a ring configuration over microwave.


13 Replies

Giuseppe Larosa
Hall of Fame

Hello Paul,

I would use the three links as distinct L3 links.

I don't think MLPPP can scale at these speeds.

For the router it is much less work to use a routing protocol and see three parallel links.

Flow-based CEF load balancing will do the rest of the job.

You can use OSPF or EIGRP.

You will have three subnets and for the routing protocol they will be three LAN segments.

For each destination the router will see three possible next-hops.
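
A minimal sketch of what I mean, assuming the PA-2T3 ports are Serial1/0, Serial1/1, and Serial2/0 and using placeholder /30 addressing (the other 7206 would mirror this):

ip cef
!
interface Serial1/0
 description DS3 #1 to remote 7206
 ip address 10.0.1.1 255.255.255.252
!
interface Serial1/1
 description DS3 #2 to remote 7206
 ip address 10.0.2.1 255.255.255.252
!
interface Serial2/0
 description DS3 #3 to remote 7206
 ip address 10.0.3.1 255.255.255.252
!
router eigrp 100
 network 10.0.0.0
 no auto-summary

With three equal-cost EIGRP routes, CEF installs all three serials as next hops and shares flows across them per destination by default.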

Hope to help

Giuseppe

Giuseppe,

Thanks for the response.

We are currently running this config using EIGRP and a single DS3 and it works great. So you are saying that simply adding the 2 new DS3s as individual L3 links and using CEF will get the desired result?

Would you suggest using CEF per packet or per destination load balancing?

Thanks again,

Paul

Hello Paul,

Use the default per-destination load balancing.

Per-packet load balancing has several drawbacks.

Hope to help

Giuseppe

Giuseppe,

I have this configuration set up on the bench with 3 x DS3 using HDLC encapsulation, EIGRP, and IP CEF on both 7206s... There is also an FE in each.

I'm using two JDSU Ethernet testers to test the throughput between them, and I can only push 54 Mbps before losing frames and receiving OoS errors.

IP CEF shows all three DS3s as available paths between the FE on each router, and EIGRP shows all routes between the two 7206s.

Any ideas as to why I can't get more than 54 Mbps between these two on the bench?

I am not personally familiar with JDSU Ethernet testers, but do they use multiple different IP addresses?

Because with per-destination load balancing, you might not be utilizing all the available links fully unless they do.

We tested this using both the JDSU testers and iperf from two separately connected laptops. The JDSU units send traffic between them based upon IPs that are static or received via DHCP. These units were connected to an FE card on each 7206.

When you do a show ip cef you see that all three DS3 addresses exist as paths to reach the destination IP address that is assigned to the FE card. EIGRP shows this as well.

It would appear that the algorithm used to "load-balance" is not distributing the packets in a way that allows them to use all the available paths.

> It would appear that the algorithm used to "load-balance" is not distributing the packets in a way that allows them to use all the available paths.

Right, that is exactly correct and expected due to per-destination load balancing.

Based on your description of the JDSU, and definitely in the case of two laptops with iperf, you have the situation where you have exactly one source-destination pair (based on IP address).

The per-destination load-balancing algorithm uses a hashing function to choose a single path for traffic between a source-destination pair. All traffic between the pair will use this path (i.e. one of your DS3 interfaces).

If you had two sets of laptops, depending on how the pairs hash in the algorithm, you would possibly use a different interface for each iperf session.

You can verify this by looking at the interface statistics after running iperf - one of the interfaces will have a lot more traffic than the others.
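
As a rough illustration (addresses and interface numbers are placeholders), something like this would show which DS3 a given pair hashes to and how the load is actually spread:

show ip cef exact-route 192.168.1.10 192.168.2.10
show interfaces Serial1/0 | include rate
show interfaces Serial1/1 | include rate
show interfaces Serial2/0 | include rate

The exact-route output names the outgoing interface CEF picks for that source/destination pair, and the per-interface rate counters should show one serial carrying nearly all of the iperf traffic.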

Now when you add a bunch of servers and clients to the mix, there will be multiple sources and destinations. Hence multiple links will be used, but no "session" can exceed the max bandwidth of whatever link the hashing algo chooses.

If there are only a limited number of source-destination pairs, you will have imbalances. In that case, you could try per-packet load balancing. This can be changed by entering interface configuration mode and adding the command "ip load-sharing per-packet". You *must* add this command on all paths to the destination (i.e. all three DS3 interfaces).
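
As a sketch, assuming the three DS3s are Serial1/0, Serial1/1, and Serial2/0, that change would look like:

interface Serial1/0
 ip load-sharing per-packet
!
interface Serial1/1
 ip load-sharing per-packet
!
interface Serial2/0
 ip load-sharing per-packet

(ip load-sharing per-destination puts an interface back to the default behaviour.)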

Per-packet can lead to out-of-order delivery, so if you have applications sensitive to delivery order (e.g. VoIP), you may have issues. Per-packet may also lead to higher CPU utilization.

Nick,

Great feedback... Thanks!

My JDSU units are capable of sending up to 1 Gbps of traffic over 16 streams, so I will set up the test that way and let you know the results, but what you are saying makes perfect sense based upon my results.

Thanks much...!

Paul

Nick,

Just FYI... We bench tested this solution using multiple streams from the JDSUs and several laptops and everything worked perfectly.

I was able to max out the 135 Mbps of the 3 x DS3s using numerous streams.

Your input was very helpful and much appreciated!

Thanks,

Paul

paolo bevilacqua
Hall of Fame

Ask the SP for an OC-3 instead. They should be happy as they will be saving a card on the mux, and you will have an easier life and better performance.

This is legacy microwave and is a private six-hop ring consisting of 4 clear channel DS3s.

Joseph W. Doherty
Hall of Fame

BTW, an NPE-300 is a bit light to support 150 Mbps (duplex). (The NPE-300 PPS rate is about the same as a 3825.) Much would depend on your average packet size and other processing overhead, such as MLPPP. (If you do find you can run MLPPP, MLPPP fragmentation, on vs. off, might impact CPU utilization.)
