Distributed Load Sharing in OSPF

Unanswered Question
Jun 8th, 2009


I have a scenario where a 7604 (upstream) is connected to a 7200 (downstream) over two Ethernet links, with OSPF configured on both links. The Ethernet links pass through a switch and terminate as sub-interfaces on a single physical interface of the 7604 and of the 7200.

Both Ethernet links are provisioned at 10 Mbps each, and peak traffic to the downstream location is 16 Mbps.

Now, when traffic flows from upstream to downstream, it is not load-shared evenly over the two Ethernet links: one link carries 10 Mbps and the other carries 6 Mbps, so traffic on the first link suffers heavy packet drops.

But the OSPF cost on both Ethernet links is the same, and the routing entry for the downstream's loopback on the upstream shows both Ethernet links (traffic always flows to the downstream's loopback).

Since the 7604 does not support per-packet load sharing configuration on the interface, I am not able to configure it.

Is there any way I can configure distributed load sharing over the two Ethernet links?



Paolo Bevilacqua Mon, 06/08/2009 - 23:43

That is normal; CEF load sharing does not guarantee that links will be equally utilized.

There is no technology that allows that, and your best remedy is to upgrade to a faster link, or to use QoS to prioritize traffic.

csco10716389 Tue, 06/09/2009 - 02:36

OSPF can load-share across up to 16 equal-cost paths. Kindly check the cost on each interface. If possible, kindly share the config.
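For reference, a minimal sketch of the checks involved; the interface names and OSPF process ID here are assumptions, so adjust them to the actual config:

```
! Verify the OSPF cost on each sub-interface (costs must match for ECMP)
show ip ospf interface GigabitEthernet0/0.10
show ip ospf interface GigabitEthernet0/0.20

! Confirm both paths are installed for the downstream loopback
show ip route

! Raise the ECMP limit if needed (classic IOS defaults to 4 paths)
router ospf 1
 maximum-paths 16
```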

Paolo Bevilacqua Tue, 06/09/2009 - 03:59

Load sharing means that, statistically, you should have similar utilization on multiple links, not perfectly the same, because OSPF doesn't measure traffic.

Please make sure you understand how things work before answering questions above your competence.

Joseph W. Doherty Tue, 06/09/2009 - 04:29

Normally, CEF will round-robin traffic flows, so it's normal, at any instant, to see multiple links with different loads. If you configure CEF per-packet, it will round-robin the individual packets of a traffic flow, and link load balancing would be much better. Using per-packet, though, does expose your traffic to sequencing issues (although with only two like links, between two devices, likely minimal impact).
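For platforms that do support it, per-packet CEF is enabled per interface. A minimal sketch, with the interface names assumed:

```
! Enable per-packet CEF load sharing on both parallel links
interface GigabitEthernet0/0.10
 ip load-sharing per-packet
!
interface GigabitEthernet0/0.20
 ip load-sharing per-packet
```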

What's not clear, nor am I sure of with a 7600, is whether CEF per-packet is even an offered option on LAN Ethernet ports (you didn't describe your Ethernet cards or sup). It (CEF per-packet) might be supported if you ran Ethernet off a WAN-type module, i.e. FlexWAN or SIP-200/400.

Another possible option: I believe later IOS images for 7600s support a limited form of OER. OER can dynamically balance multiple flows across multiple links.

Paolo Bevilacqua Tue, 06/09/2009 - 04:49

I do not agree on the minimal impact with only two links. All it takes is one out-of-sequence packet to confuse and reset a PC's TCP session. That leads to retransmissions and ultimately more traffic.

The L3 switches do not offer per-packet load sharing in an attempt to preserve the network's sanity.

It's amazing to see how people fail to grasp the above, and this matter has to be discussed again and again.

Joseph W. Doherty Tue, 06/09/2009 - 05:13

"I do not agree on the minimal impact with only two links. All it takes is one out-of-sequence packet to confuse and reset a PC's TCP session. That leads to retransmissions and ultimately more traffic."

My understanding is an out-of-sequence TCP packet will generate a dup ACK, and that most TCP implementations look for 3 dup ACKs before considering a packet lost and retransmitting the "lost" packet.

So, with only two links, alternating packets, of equal bandwidth and delay, I wouldn't expect the out-of-sequence delivery to cause most TCP implementations a problem.

Other non-TCP traffic can, of course, also be packet-sequence sensitive, but most will tolerate some reordering since, by design, IP itself doesn't guarantee sequenced delivery. So, like TCP, all IP applications should tolerate some out-of-sequence delivery.

I do agree that most don't understand this issue well, and it's easy to hit the point where there's enough out-of-sequence delivery to cause a major performance problem. For that reason, most should avoid intentionally causing out-of-sequence delivery.


If you can provide a packet trace, or other documentation, where just one out-of-sequence TCP packet forced a PC TCP session to reset, I would be curious to know of it, the TCP stack version, and its configuration, that showed such behavior.

Paolo Bevilacqua Tue, 06/09/2009 - 05:40

I don't have the traces, and I do not debate theoretically. But I've seen it happen.

I invite anyone who wants to learn networking to do a proper test himself, with and without concurrent traffic, looking at transfer rates, total packets sent, etc.

Usually the amount of stuff learned, not necessarily in the specific matter, is worth the effort.

Joseph W. Doherty Tue, 06/09/2009 - 06:45

I don't doubt you've seen it happen. However, learning networking via testing, without knowing theory, can often mislead. Not knowing the theory, yet obtaining expected results, might just be "lucky".

Knowing theory, and obtaining unexpected results, is when you really learn. Either you really don't know the theory or you've found an environment problem.

Paolo Bevilacqua Tue, 06/09/2009 - 10:55

We can agree to disagree.

I've found that theoretically inclined people are often poor at real networking, as they often disconnect from the true state of things.

I would not like to work with anyone that is unable to run and learn from a lab.

Joseph W. Doherty Tue, 06/09/2009 - 11:52

"I've found that theoretically inclined people are often poor at real networking, as they often disconnect from the true state of things."

Hmmm, never had the chance to work with a good engineer then? I realize they often are in very short supply. Perhaps you've only encountered those that think about the science, not the practical application. Or, perhaps you've only encountered those afraid to put their "posterior" on the line for a working system.

As there's a real difference between an engineer and a scientist, there's also a real difference between an engineer/architect and a mechanic/technician in their outlooks. However, I'm not one to knock other outlooks, because all have their usefulness when properly utilized.

Paolo Bevilacqua Tue, 06/09/2009 - 12:17

I did have many chances to meet good engineers, especially during my long tenure at Cisco. And some poor ones also.

Consistent with my attitude, let me put this in a very practical way.

I could not care less about labels like engineer/scientist/technician/architect, although I'm well aware of their supposed meaning.

I've been in this industry for more than 20 years now, and I've found that the "best" people at designing and running networks have a humble personality, are willing to get their hands dirty with practicalities, put practice before theory, and do not put a title before their name. A reputation perhaps, but never a title.

Instead, I've often seen that people with a heavy theoretical background have issues learning quickly, retaining knowledge, and keeping their feet on the ground. Perhaps they became consumed by the long years spent on books.

That is my experience and opinion; anybody is entitled to differ from me.

Joseph W. Doherty Tue, 06/09/2009 - 15:43

(BTW, apologies to the original poster, and other readers, as Paolo and I go off beyond the original question.)

I agree there are good and bad people doing almost anything, including networking. I also agree many of the traits you note are indeed attributes of a good practitioner. I also agree about position titles, although I was discussing outlooks; a different thing. There can be a major difference between a title, such as "network engineer", and whether that person has an engineering outlook.

At least to me, you seem to be implying theoretical knowledge is an impediment to providing a functional and practical network, and perhaps that's been your experience. (From what you describe, regardless of position titles, it sounds like you've had experience with scientist outlooks, not engineer outlooks. The essence of engineering is turning science into practice.)

My experience, 30 years this month, has all been within businesses delivering or correcting production systems. Personally, I haven't often had the chance to work with what we used to call "Ivory Tower" types; instead, I most often work with "trade school" folk, i.e. those with no formal education in the field beyond what they learn on the job or perhaps while pursuing a certificate. There's nothing wrong, I believe, with entering the IT profession without formal study in the field, but for those who really want to be an "engineer" in more than title, I think at least some theoretical knowledge is another important and necessary attribute of a good practitioner.

huangedmc Tue, 06/09/2009 - 20:14

Wow I learned a lot about OSPF & CEF by reading the comments above.

Seriously, don't you guys find it odd that the OP would have packet drops w/ only 10 - 16Mbps of traffic on 7200 & 7600 routers?

Unless he has really old interfaces that only do 10M...

Perhaps that's where the problem is.

Paolo Bevilacqua Tue, 06/09/2009 - 21:16


That happens because the OP has 10 Mbps circuits, but the offered traffic is more than that.

The router is not at fault in that.

Joseph W. Doherty Wed, 06/10/2009 - 04:53

Addendum: regarding "CEF will round-robin traffic flows", although it appears to function as such, technically it is not truly round-robin for flows. There's a hash involved, and traffic between the same IP SRC/DEST pair will take the same path.
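If you want to see which path the hash picks for a particular pair, later IOS releases offer a verification command; the addresses below are placeholders:

```
! Show the exact path CEF's hash selects for this source/destination pair
show ip cef exact-route 10.1.1.1 10.2.2.2
```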

Additional details can be found here:


and here:


From the 1st reference:

"There are a few scenarios where per-packet load balancing is more advisable, e.g. the majority of traffic is between two hosts. "

From the 2nd reference:

"Packets for a given source-destination host pair might take different paths, which could introduce reordering of packets. This is not recommended for Voice over IP (VoIP) and other flows that require in-sequence delivery."

Since TCP doesn't strictly require in-sequence delivery, it might not be obvious how easily per-packet load sharing can degrade your end-to-end performance even while your link utilization balance is fantastic.

Although I still believe per-packet, if your equipment supported it, shouldn't cause an issue in the situation you've described, it's very easy for the network to change and cause an issue later. So, unless you really, really need to use this technique, I recommend against using it. If you do use it, you need to somehow document that it's in use, along with the possible issues it might cause.

arun kumar Wed, 06/10/2009 - 21:40

hi all,

thanks for your responses. Since the customer is not ready to upgrade the bandwidth at this point, the only option left is to use EtherChannel on the switch and bundle the two Ethernet links.
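For what it's worth, a minimal switch-side sketch of such a bundle (the interface numbers and channel-group ID are assumptions, and the far side must be configured to match):

```
! Bundle the two physical ports into one logical link
interface range FastEthernet0/1 - 2
 channel-group 1 mode on
!
! Hash on source and destination IP so flows spread across members
port-channel load-balance src-dst-ip
```

Note that `port-channel load-balance` is a global command on Catalyst switches, and the available hash options vary by platform.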



Paolo Bevilacqua Wed, 06/10/2009 - 22:08

Note that EtherChannel, just like CEF, doesn't guarantee equal link utilization.

It may work better, or it may not. Or it may work fine one day, but poorly another.

These are all statistical (or stochastic, if you want) methods, without feedback from actual circuit utilization.

Joseph W. Doherty Thu, 06/11/2009 - 03:50

I had thought about suggesting EtherChannel, but wasn't sure whether it would be an option on the 7200. Paolo's note about its effectiveness compared to CEF is quite correct.

However, unlike CEF, which (except perhaps on the 4500) doesn't offer variations of its hashing algorithm, many switches provide different attribute options for the EtherChannel hash, which can affect how well it suits your traffic. Even so, to quote Cisco:

"Hash based mechanisms, for example, EtherChannel or Cisco Express Forwarding (CEF), were devised so that flows would be statistically distributed, based on mathematical functions, among different paths. However, the traffic rate for a flow is not considered, and it is possible for multiple high bandwidth flows to be sent on an already congested link while other links remain underutilized. Finally, link bonding technologies, such as Inverse Multiplexing over Asynchronous Transfer Mode (ATM-IMA) and Multilink Point to Point Protocol (MLPPP), try to slice packets so that each packet is simultaneously sent on multiple links.

Link bonding technologies generally place a substantial load on the fragmenting device and the reassembly device, and are sensitive to intramember-link delay variation. Finally, the remote peer must be the same type of device, limiting the use of multiple WAN providers. "

Link bonding will also allow a single flow to obtain more than one link's bandwidth. Link bonding also preserves flow packet sequence, unlike packet-by-packet. (Not 100% certain, but there might be 3rd-party Ethernet imuxes.)
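For comparison, the MLPPP bonding mentioned in the quote looks roughly like this on a pair of serial links (the address and interface numbers are assumptions); it keeps packets in sequence but loads the router CPU:

```
! Logical bundle interface carries the IP address
interface Multilink1
 ip address 10.0.0.1 255.255.255.252
 ppp multilink
 ppp multilink group 1
!
! Member links join the same multilink group
interface Serial0/0
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
!
interface Serial0/1
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
```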

OER/PfR technology supports dynamic link loading, although it is slow to react. It works well if, for example, there's one flow consuming all the bandwidth on its link over several minutes; it can push other traffic to a different link.


"the only option left is to use EtherChannel on the switch and bundle the two Ethernet links."

Have you discounted packet-by-packet as an option?

Joseph W. Doherty Sun, 06/14/2009 - 05:17

Addendum 2:

From another post, just discovered the command:

"ip cef load-sharing algorithm {original | tunnel [id] | universal [id] | include-ports {source [id] | [destination] [id] | source [id] destination [id]}}"

Seems to be supported on later IOSs.

"Load-Balancing Algorithms for Cisco Express Forwarding Traffic

The following load-balancing algorithms are provided for use with Cisco Express Forwarding traffic. You select a load-balancing algorithm with the ip cef load-sharing algorithm command.

• Original algorithm: The original Cisco Express Forwarding load-balancing algorithm produces distortions in load sharing across multiple routers because the same algorithm was used on every router. Depending on your network environment, you should select either the universal algorithm (default) or the tunnel algorithm instead.

• Universal algorithm: The universal load-balancing algorithm allows each router on the network to make a different load sharing decision for each source-destination address pair, which resolves load-sharing imbalances. The router is set to perform universal load sharing by default.

• Tunnel algorithm: The tunnel algorithm is designed to balance the per-packet load when only a few source and destination pairs are involved.

• Include-ports algorithm: The include-ports algorithm allows you to use the Layer 4 source and destination ports as part of the load-balancing decision. This method benefits traffic streams running over equal cost paths that are not load shared because the majority of the traffic is between peer addresses that use different port numbers, such as Real-Time Protocol (RTP) streams. The include-ports algorithm is available in Cisco IOS Release 12.4(11)T and later releases. "

If your platform/IOS supports it, changing CEF's hash algorithm might improve your overall per-destination load balancing.
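A sketch of changing the algorithm globally (which choice is best depends on the traffic mix):

```
! Hash on L4 ports as well as IP addresses (12.4(11)T and later)
ip cef load-sharing algorithm include-ports source destination
```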

