
Load balancing in EIGRP

moses12315
Level 1

I configured load balancing across unequal-cost paths with EIGRP, using variance 2.

I have two links to a remote router (64 kbps and 256 kbps). Initially only the 256 kbps link was up, and ping latency was 19-20 ms. After I configured load balancing (variance 2, with traffic share counts of 19 for the 256 kbps link and 8 for the 64 kbps link), ping latency rose to 30-32 ms.

At first I was surprised, but then I reasoned as follows.

Say 256 Kb of data has to travel to the remote router.

Scenario 1 (only the 256 kbps link is up):

The delay is 1 second.

Scenario 2 (only the 64 kbps link is up):

The delay is 4 seconds.

Scenario 3 (both links are up, carrying 66% on the 256 kbps link and 34% on the 64 kbps link):

The 256 kbps link carries about 170 Kb, which takes under 1 second. The 64 kbps link carries about 86 Kb, which takes about 1.5 seconds. So in total the data needs about 1.5 seconds to reach its target!
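The three scenarios above can be checked with a quick serialization-delay calculation (a minimal sketch; the function name and the 256 Kb burst size are just the values from this thought experiment):

```python
# Serialization delay: time to clock a given number of bits onto a link.
def serialization_delay(bits, link_bps):
    return bits / link_bps

BURST = 256_000  # 256 Kb of data, as in the scenarios above

only_fast = serialization_delay(BURST, 256_000)   # scenario 1: 1.0 s
only_slow = serialization_delay(BURST, 64_000)    # scenario 2: 4.0 s

# Scenario 3: split 66% / 34% across the two links. Both links transmit
# in parallel, so the burst is done when the slower share finishes.
split = max(serialization_delay(0.66 * BURST, 256_000),
            serialization_delay(0.34 * BURST, 64_000))  # ~1.36 s
```

The split case is dominated by the 34% share on the slow link, which is why adding the 64 kbps link can make the transfer slower than using the 256 kbps link alone.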

Conclusion: using only the 256 kbps link gives me better performance than load balancing across both links.

Is that normal? If so, what is the purpose of load balancing across unequal-cost paths?

Thanks a lot

Moses

3 Replies

paolo bevilacqua
Hall of Fame

Moses, what you are not considering is that each flow (defined by a set of source/destination addresses and ports) sticks to one path only. This is sometimes called per-destination load balancing, even though, as we just saw, more than the destination goes into defining a flow.

Without that, using per-packet load balancing, you would get out-of-order arrival due to the large latency differential between the links, which is a very bad thing.
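The per-flow behavior can be illustrated roughly like this (a sketch only; the real CEF hash is internal to IOS, and `pick_path` is a hypothetical name):

```python
import hashlib

def pick_path(src, dst, sport, dport, paths):
    """Hash the flow key to one of the available paths. The same flow
    always maps to the same path, so its packets stay in order."""
    key = f"{src}|{dst}|{sport}|{dport}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return paths[digest % len(paths)]

paths = ["256kbps-link", "64kbps-link"]

# Every packet of this flow takes the same link, fast or slow:
first = pick_path("10.0.0.1", "10.0.1.1", 40000, 80, paths)
again = pick_path("10.0.0.1", "10.0.1.1", 40000, 80, paths)
```

Which link a given flow lands on is effectively arbitrary, which is the non-determinism described below.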

Consequently, with unequal-cost load balancing, you never know whether a given flow will take the fast path or the slow one. That is often inappropriate precisely because it isn't deterministic.

In practice, unequal-cost load balancing is never done on professionally designed networks. What you can do in your case is advertise certain destinations with a higher metric on one link, so you know what goes where, or use policy-based routing to the same effect but with more flexibility.

royalblues
Level 10

Personally, I haven't seen anyone configure unequal-cost load balancing with EIGRP (except in labs).

Note that the latency you mention also depends on the distance between the two endpoints, and that propagation delay is usually the most significant component. What you are describing is the serialization delay, which depends on the link bandwidth and the packet size.

When you configure unequal-cost load balancing, traffic is distributed across the links in inverse proportion to their metrics, so the better path carries more traffic. But with the default CEF per-destination switching, a particular flow always uses one path.

Narayan

Mohamed Sobair
Level 7

Moses,

Load sharing in EIGRP across unequal-cost paths is determined by the path metrics, in which link bandwidth is a major component.

How does the variance command decide which paths to load-share across? A path qualifies when its metric is less than the variance multiplier times the feasible distance of the successor (and the path satisfies the feasibility condition), so the variance you need is roughly the feasible successor's metric divided by the successor's feasible distance, rounded up. Even then, traffic is not balanced equally among the paths: it is shared in inverse proportion to the path metrics, so better paths carry more. This is what makes EIGRP unique as a routing protocol that does unequal-cost load balancing while taking the speed/bandwidth of each link into account. Because the 64 kbps link is slower than the 256 kbps link, it carries a correspondingly smaller share of the traffic.

To get unequal-cost load sharing with traffic spread in proportion to the metrics, add the command traffic-share balanced under the EIGRP routing process.
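The qualification rule and metric-proportional sharing can be sketched as follows (the function names are mine, and the traffic-share-count rounding is only an approximation of IOS behavior, not the exact algorithm):

```python
def qualifies(path_metric, reported_distance, successor_fd, variance):
    """A non-successor path is used for unequal-cost load balancing when
    it satisfies the feasibility condition (its reported distance is below
    the successor's feasible distance) and its metric is under
    variance * (successor's feasible distance)."""
    return (reported_distance < successor_fd
            and path_metric < variance * successor_fd)

def share_counts(metrics):
    """Approximate traffic-share counts: inversely proportional to the
    metric, so the lower-metric (better) path carries more traffic."""
    worst = max(metrics)
    return [round(worst / m) for m in metrics]

# Illustrative metrics: successor path (metric 1000, e.g. via 256 kbps)
# and a worse path (metric 1800, reported distance 900) with variance 2.
ok = qualifies(path_metric=1800, reported_distance=900,
               successor_fd=1000, variance=2)
counts = share_counts([1000, 1800])  # better path gets the larger count
```

With these illustrative numbers the slower path qualifies, and the share counts come out roughly 2:1 in favor of the better path.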

HTH

Mohamed
