As we know, the EIGRP default K values are K1=1, K2=0, K3=1, K4=0, K5=0. I find the default setting makes the metric unreadable; it is hard to tell the hop count from the metric. I was considering changing the K values to K1=0, K2=0, K3=1, K4=0, K5=0. With this setting, the metric should be a multiple of 256 (256 * delay/10 * (hops+1)) because all of the interfaces are FastEthernet. Any comments on this setting? Is there any impact?
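For reference, a quick sketch of the classic EIGRP composite metric (the K5=0 case) shows where those multiples of 256 come from. This is an illustration under assumed FastEthernet defaults (100,000 kbps bandwidth, 100 microseconds of delay per hop), not a full implementation:

```python
# Sketch of the classic EIGRP composite metric (K5 = 0 simplification).
# Assumed FastEthernet defaults: bandwidth 100,000 kbps, 100 usec delay per hop.

def eigrp_metric(bw_kbps, total_delay_usec,
                 k1=1, k2=0, k3=1, k4=0, k5=0, load=1, reliability=255):
    bw = 10**7 // bw_kbps            # inverse-bandwidth term (scaled)
    delay = total_delay_usec // 10   # delay counted in tens of microseconds
    metric = k1 * bw + (k2 * bw) // (256 - load) + k3 * delay
    if k5 != 0:                      # reliability term only applies when K5 != 0
        metric = metric * k5 // (reliability + k4)
    return metric * 256

# Default K values: a connected FastEthernet route
print(eigrp_metric(100_000, 100))          # 28160
# K1=0 (delay only): 2560 per FastEthernet hop of delay
print(eigrp_metric(100_000, 100, k1=0))    # 2560
print(eigrp_metric(100_000, 300, k1=0))    # 7680 (three hops' worth of delay)
```

With K1=0 the metric is just 256 * (accumulated delay / 10), which is why it comes out as 2560 times the number of FastEthernet hops.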
If all of the interfaces have the same bandwidth, then changing the K values to eliminate bandwidth would have little impact. But I also do not see that it accomplishes much, since bandwidth would just be a constant in the equation. And if the network ever changes so that any of the links have a different bandwidth, it does have an impact, and I believe that impact would be mostly negative.
For a network of homogeneous media like yours, the metric reduces to a hop count, and I guess changing the K values in this manner (neglecting bandwidth) would make the composite metric more readable, i.e. smaller: a local route, for example, would go from 28160 to 2560 (though practically I am not sure how useful this decrease remains once the delays of consecutive hops accumulate). I don't think this would have any other impact.
Sorry Rick, I didn't see your reply.
It is not a problem. Both posts are close in time and I am sure that we were both working on answers at about the same time. And I note that again we share a common outlook in answering a question.
Thanks Mohammed and Rick.
In my network there are multiple links of unequal bandwidth, and we need to tune the routes manually and frequently. We are currently using OSPF as the routing protocol. In an OSPF network I can change the interface cost easily, and the route metric is also readable. But OSPF is weak at unequal-cost load balancing, so we are considering migrating to EIGRP. Actually, I was seeking an efficient way to tune routes. If I set only K3=1, then changing the interface delay should influence route selection, and I would like to tune routes in that way. But I am afraid there is some impact I have not considered. Advice from both of you is important to me.
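If you do go down that path, the tuning itself is just the interface delay plus the K values under the EIGRP process. A hedged sketch (the AS number and interface name are placeholders): note that the delay command takes tens of microseconds, and that K values must match on all EIGRP neighbors or the adjacency will not form.

```
router eigrp 100
 metric weights 0 0 0 1 0 0   ! TOS K1 K2 K3 K4 K5 -> delay-only metric
!
interface FastEthernet0/0
 delay 20                     ! units of 10 usec, i.e. 200 usec on this link
```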
Think well before you move from OSPF to EIGRP or anything else.
If you believe (as old documentation may have led you to) that EIGRP will share traffic over links of unequal speed in a proportional manner, please do a lab test first and share the results here!
Well Paolo has a point.
The variance will definitely bring the alternate, less preferred routes into your routing table.
That table is responsible for creating the CEF table, which routers use to forward a packet out an interface.
What I wonder is how CEF picks this up and load-balances the traffic in a proportional manner. All it can do is either per-destination or per-packet load balancing.
Indeed. But in theory, nothing prevents the CEF code, in per-destination mode, from calling back into EIGRP and being returned some sort of 'preference' for choosing one link over another.
All theory until someone does a good test for real!
Sorry if I won't be the one to do that, as I have had my share (that's the appropriate word) of disrupting traffic and packet counting in recent years.
I do not have a test to offer. But I did ask this question of a senior Cisco engineer at the recent Networkers conference. The answer that I received is that CEF manages the unequal load balancing with an adaptation of its normal load-sharing algorithm. In normal (equal-cost) load sharing, CEF hashes a packet to determine which path to use (with two paths there are two outcomes, with three paths there are three outcomes, etc.). With unequal-cost load sharing, CEF hashes a packet into a number of buckets that reflects the variance (with a variance of 2 there are two buckets for the path with the better metric and one bucket for the path with the worse metric), so that the traffic load is proportional.
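To make the described behavior concrete, here is a toy model of that bucket scheme (not actual CEF code; the interface names and metrics are invented), in which each path receives hash buckets in inverse proportion to its metric:

```python
# Toy model of proportional hash-bucket allocation (illustration only,
# not real CEF internals). Paths with a lower (better) metric receive
# more buckets, so hashed flows split across them roughly in proportion.

def build_buckets(path_metrics):
    worst = max(path_metrics.values())
    buckets = []
    for path, metric in sorted(path_metrics.items()):
        buckets.extend([path] * (worst // metric))  # lower metric -> more buckets
    return buckets

def pick_path(buckets, flow_key):
    # per-destination mode: hash the flow identifier into a bucket
    return buckets[hash(flow_key) % len(buckets)]

# Hypothetical delay-only metrics: Fa0/1 is twice as "expensive" as Fa0/0
paths = {"Fa0/0": 2560, "Fa0/1": 5120}
print(build_buckets(paths))  # ['Fa0/0', 'Fa0/0', 'Fa0/1'] -> roughly a 2:1 split
```

This is only the scheme as Rick relayed it; how IOS actually sizes the bucket table when many paths pass the variance check is exactly what the rest of the thread is questioning.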
Thanks for the information Rick.
But what I could not understand is the creation of 2 buckets with a variance of 2. A variance of 2 can lead to more than two paths in the routing table (say 5), so what happens in that case?
A test is indeed required to confirm this.
Very nice information indeed, but I agree with Narayan: the last paragraph about how CEF handles unequal load sharing has a confusing part. If the variance is 2, there can be many routes passing that variance check, say 5 as Narayan said. Does the statement mean that the paths with the worse metrics will all share the second bucket (which would not match the theoretical proportions)?
Anyway, why not cut to the chase, agree on a topology, and run a proper test to answer our doubts?