I have a router (router A) running iBGP with my two other Internet gateway routers (routers B and C), which have multiple uplinks to different/same Internet providers. Below is a sketch of my network.
\\/\/// \\/\/// <--- uplinks to ISPs
@---(ibgp)--@ <---- Internet G/Ws (B & C)
\ (ibgp) /
@ <---- router A
As of now I am taking full Internet routing tables from my uplink providers on routers B and C and advertising the same tables to my downstream iBGP peer (router A). Router A has less bandwidth capacity and memory than my gateways B and C, so during an uplink flap or similar mishap my downstream iBGP peering with router A goes down and takes a long time to come back up and re-install the full Internet routing table. To avoid this situation, I am thinking of sending just default routes from my gateways to router A and configuring iBGP multipath (2 paths) on router A itself, so that an uplink mishap has no impact on router A at least.

At the same time I don't want per-packet load balancing, because that is bad for delay-sensitive traffic and is also CPU intensive. I am using CEF, and the load balancing used here is the default "per-destination". With CEF, I understand that load balancing is per-pair or per-session (source & destination based) when the source and destination are present in the routing table as well as in the CEF table, but how can I achieve per-session load balancing with only default routes from both my IGWs? I don't want one of router A's links to the IGWs to be fully utilized while the other sits unused, which would defeat the purpose of load sharing. I am also thinking of using the CEF "universal" load-sharing algorithm to bring more granularity. I also hope that per-packet load sharing won't get activated automatically with default routes, on the grounds that the CEF table has no specific route for the destination and the path lookup is punted to the CPU.
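For reference, here is a minimal sketch of the setup I have in mind (IOS syntax; the AS number 65000 and neighbor address 10.0.0.1 for router A are placeholders):

```
! On routers B and C: send only a default route to router A
router bgp 65000
 neighbor 10.0.0.1 remote-as 65000
 neighbor 10.0.0.1 default-originate
 neighbor 10.0.0.1 prefix-list DEFAULT-ONLY out
!
ip prefix-list DEFAULT-ONLY seq 5 permit 0.0.0.0/0

! On router A: install both iBGP defaults for load sharing
router bgp 65000
 maximum-paths ibgp 2
!
! Optionally, the universal algorithm seeds the CEF hash per router
ip cef load-sharing algorithm universal
```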
I would be very thankful if the experts on this forum could throw some light on this issue and, if possible, suggest a better solution.
Site of Origin (SoO) applies to VPNv4 routes, as it is a special type of route-target extended community. So it may apply to your scenario if most routes are VPNv4 routes and you are providing MPLS L3 VPN services.
About dividing the cluster in two parts by changing the cluster-id on the two route-reflector servers RB and RC: this can help in case you want to ensure IP or VPN connectivity when two events happen together: client Rj loses its iBGP session with RB and client Rk loses its iBGP session with RC.
In this case, if the cluster-ids are different, RB and RC are allowed to exchange routes, and Rj will see Rk's routes with a cluster-list of [RC RB] and an originator-id of Rk.
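A sketch of the split-cluster setup described above (IOS syntax; AS number, cluster-ids and client address are placeholders):

```
! On RB: give each reflector its own cluster-id so RB and RC
! accept each other's reflected routes
router bgp 65000
 bgp cluster-id 0.0.0.2
 neighbor 10.0.0.1 route-reflector-client

! On RC
router bgp 65000
 bgp cluster-id 0.0.0.3
 neighbor 10.0.0.1 route-reflector-client
```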
We tested this some years ago with positive results in a lab context.
To be noted that the BGP attributes cluster-list and originator-id allow safe tracing of iBGP advertisements even in the classic IPv4 unicast address family, without requiring SoO.
Even though it is sometimes stated that these two attributes (cluster-list and originator-id) are not used in the iBGP best-path choice, our tests showed that they are: a shorter cluster-list is preferred, meaning the advertisement has been reflected fewer times. This is reasonable.
Hope to help
If you have added the iBGP multipath commands and you have two default routes, you are fine as long as there is a direct link between RB and RC.
Actually, CEF load balancing considers the two possible next-hops and performs per-session load balancing as you have correctly described, even when using two default routes.
For CEF, what actually counts is identifying the next-hops, so that it can prepare CEF entries for efficient packet rewrite. CEF uses topology information to build CEF table entries, and a default route 0.0.0.0/0 is as legitimate an entry as any other; it is simply the least specific entry in the table.
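You can verify this on RA with CEF show commands (IOS; the source and destination addresses below are placeholders, and the output is device-specific so it is not shown here):

```
show ip cef 0.0.0.0/0 detail
! should list both next-hops for the default route

show ip cef exact-route 192.0.2.10 198.51.100.20
! shows which next-hop this particular source/destination pair hashes to
```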
To be noted that in case of failure of the direct link between RB and RC, you are at risk of a routing loop if the iBGP session between RB and RC reroutes via RA.
You can solve this in the following ways:
adding another direct link between RB and RC for redundancy
enabling MPLS on all links between RA, RB and RC; in this latter case, when the direct link fails, RA becomes a P node on the LSPs between RB and RC, which solves the possible routing-loop issues
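The second option can be sketched as follows (IOS syntax; interface names are placeholders, to be applied on the inter-router links of RA, RB and RC):

```
! Enable LDP-based MPLS forwarding on the RA-RB, RA-RC and RB-RC links,
! so traffic between RB and RC can be label-switched through RA
ip cef
!
interface GigabitEthernet0/0
 mpls ip
!
interface GigabitEthernet0/1
 mpls ip
```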
Hope to help
CEF load balances based on source/destination IP addresses, and by default it uses per-destination load sharing. So with two default routes, as long as the source/destination pairs differ, both uplinks from router A will be used.
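To make the behavior concrete, here is a small illustrative sketch in Python of how per-destination (per-session) load sharing distributes flows. This is not the actual CEF hash; the next-hop addresses and the seed (standing in for the "universal" algorithm's per-router ID) are assumptions for illustration:

```python
import ipaddress

NEXT_HOPS = ["10.0.0.2", "10.0.0.3"]  # hypothetical next-hops via RB and RC


def pick_next_hop(src: str, dst: str, seed: int = 0) -> str:
    """Deterministically map a source/destination pair to one next-hop.

    Every packet of the same pair hashes to the same next-hop (so packet
    order within a session is preserved), while different pairs spread
    across both default-route next-hops. The seed mimics the 'universal'
    algorithm's per-router ID, which varies the hash between routers to
    avoid polarization.
    """
    s = int(ipaddress.ip_address(src))
    d = int(ipaddress.ip_address(dst))
    bucket = (s ^ d ^ seed) % len(NEXT_HOPS)
    return NEXT_HOPS[bucket]


# Same pair always takes the same next-hop (per-session stickiness)
assert pick_next_hop("192.0.2.10", "198.51.100.20") == \
       pick_next_hop("192.0.2.10", "198.51.100.20")
```

The key point mirrored from CEF: the hash input is the address pair, not the route, so two equal default routes are enough to share load across both uplinks.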