I'm trying to create a bundle using 3 x DS3 on a 7206 in order to have a backbone link larger than 100 Mbps. Due to the limitations of MLPPP, I cannot do this with a multilink bundle when the DS3s are clear-channel, and these cards are not PA-MC-T3s, so I can't channelize them into T1s and bond over multiple MLPPP links.
How can I accomplish this on a 7206 NPE-300 with 3 x PA-2T3? I have six of these routers in a ring configuration over microwave.
> It would appear that the algorithm used to "load-balance" is not distributing the packets in a way that allows them to use all the available paths.
Right, that is exactly correct, and it's the expected behavior with per-destination load balancing.
Based on your description of the JDSU test set, and certainly in the case of two laptops running iperf, you have exactly one source-destination pair (based on IP addresses).
The per-destination load-balancing algorithm uses a hash function to choose a single path for traffic between a given source-destination pair. All traffic between that pair will use that one path (i.e., one of your DS3 interfaces).
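If you want to see which path the hash picks for a particular pair, CEF will tell you directly. A quick sketch (the addresses here are just examples; substitute a real source and destination from your network):

```
Router# show ip cef exact-route 10.0.0.1 10.0.1.1
```

The output names the exact output interface and next hop that flows between those two addresses will take.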
If you had two pairs of laptops, then depending on how the addresses hash, each iperf session might land on a different interface.
You can verify this by checking the interface statistics after running iperf: one of the interfaces will show far more traffic than the others.
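For example, a quick way to compare the three links after a test run (interface names are examples; use your actual DS3 interfaces):

```
Router# show interfaces Serial1/0 | include rate
Router# show interfaces Serial1/1 | include rate
Router# show interfaces Serial2/0 | include rate
```

Each command prints the 5-minute input/output rates in bits/sec, so the imbalance between links will be obvious at a glance. You can also clear the counters first with `clear counters` for a clean before/after comparison.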
Now, when you add a bunch of servers and clients to the mix, there will be multiple sources and destinations, so multiple links will be used; but no single flow can exceed the bandwidth of whichever link the hash function chooses for it.
If there are only a limited number of source-destination pairs, you will see imbalances. In that case, you could try per-packet load balancing: enter interface configuration mode and add the command "ip load-sharing per-packet". You *must* add this command on all paths to the destination (i.e., all three DS3 interfaces).
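A minimal sketch of that change, applied to all three member links (interface names are examples; note that per-packet load sharing requires CEF to be enabled):

```
Router(config)# ip cef
Router(config)# interface Serial1/0
Router(config-if)# ip load-sharing per-packet
Router(config-if)# interface Serial1/1
Router(config-if)# ip load-sharing per-packet
Router(config-if)# interface Serial2/0
Router(config-if)# ip load-sharing per-packet
```

Remember that the routers on the far end of the links make their own forwarding decisions, so you'll want the same configuration on both ends of each DS3 to balance traffic in both directions.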
Per-packet load balancing can lead to out-of-order delivery, so if you have applications sensitive to packet ordering (e.g., VoIP), you may have issues. Per-packet may also lead to higher CPU utilization.