We have two T1's going to our ISP. We would like to use both of them at the same time. What is the best way to do this? I was thinking of making 2 static routes and also having the ISP make 2 static routes pointing back to us. Any other suggestions? What's the MLPPP stuff?
RIP, IGRP, EIGRP, and OSPF will automatically do equal-cost load balancing if the metrics are the same.
So... 2 full T1s with the same routing protocol on both (same admin distance), or static routes with the same metric, should load balance automatically.
here is an example routing table entry
O IA 10.1.1.0/24
[110/39173] via 10.1.1.1, 01:10:43, GigabitEthernet5/5
[110/39173] via 10.1.1.2, 01:10:43, GigabitEthernet3/1
same metric, same admin dist.
even though this is a gig link the same principle applies
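As a hedged sketch of what that looks like in practice (interface names and the next-hop addresses are made up for illustration): two equal-cost static defaults install both paths, and with a routing protocol, equal-cost multipath is on by default, with maximum-paths controlling how many parallel routes get installed.

```
! Two equal-cost static default routes, one per T1
! (next-hop addresses are hypothetical).
ip route 0.0.0.0 0.0.0.0 Serial0 192.0.2.1
ip route 0.0.0.0 0.0.0.0 Serial1 192.0.2.5
!
! Or with a routing protocol -- equal-cost paths load
! balance automatically; maximum-paths caps how many.
router ospf 1
 network 10.0.0.0 0.255.255.255 area 0
 maximum-paths 2
```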
Thanks. That was what I thought. I just wanted to make sure that there wasn't some kind of high-level way of doing this.
You can use Cisco CEF to load-balance across the T1s as well. All you need is 2 static routes, e.g.:
ip route 0.0.0.0 0.0.0.0 S0
ip route 0.0.0.0 0.0.0.0 S1
and you need to enable CEF by typing "ip cef" at the global config prompt.
Do you have 1 router or 2? If just one, you can use MLPPP. That is how we bundle T1s. If you need more help, email me. Here is an example from one of my routers:
multilink virtual-template 1
!
interface Ethernet0/0
 ip address 192.168.115.1 255.255.255.0
 no ip redirects
 no cdp enable
 standby 1 ip 192.168.115.3
 standby 1 priority 105
 standby 1 preempt
!
interface Virtual-Template1
 ip unnumbered Ethernet0/0
 no ip mroute-cache
 ppp authentication chap
!
interface Serial0
 no ip address
 ip unnumbered Ethernet0/0
 ppp authentication chap
I'm not sure about the new routers, but I tried to implement MLPPP on some older routers and the CPU utilization tripled. Had to back it off. As far as dynamic load balancing goes, yes, they will balance automatically, but it will be somewhat skewed if you are using fast switching (ip route-cache). To balance packet by packet you would need to disable this, but that will also be very CPU intensive, since the router will then need to process-switch every packet. About the only advice I've been given to load balance leased lines 100% is to use hardware such as an IMUX or IMA. Just my 2 cents.
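For what it's worth, disabling fast switching as described above is just a per-interface command; a hedged sketch (interface names assumed):

```
! Assumed interface names. Disabling the route cache forces
! process switching, so packets alternate across the equal-cost
! paths -- at the price of much higher CPU load.
interface Serial0
 no ip route-cache
!
interface Serial1
 no ip route-cache
```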
Anytime you can get divergent routing, go for it; however, if you're using the same ISP for both then you're going to want to know what you're using the T1s for: video/voice (real-time) or large packet transfers.
There are a couple of load-balancing methods available to you, and these have changed in recent code versions. The old way was per-destination load balancing only. The problem there was that caching could allow uneven load distribution if one conversation was typically more load-intensive than the next. Then came per-packet load balancing. This also had caching issues, which is why they backed off to process switching for this method. Your problem here was out-of-sequence packets, which cause trouble for voice and video. Then came Multilink PPP (I won't even tackle multichassis Multilink PPP).
MLPPP creates a logical virtual-access interface that makes your two T1s look like a single 3 Mb full-duplex pipe. I've encountered two problems here. First, despite the fact that Cisco is employing PPP, a multiprotocol encapsulation, their support of IPX is kludgy. You end up having to use Loopback interfaces (yes, a logical interface on top of a logical interface) with commands such as ipx-ppp client. Very weird. You also get to enjoy such caveats as virtual-access interfaces that never go away and are sometimes never used again, for seemingly no reason -- thus consuming memory and index space.
Then Cisco came up with Cisco Express Forwarding. CEF deploys a routing-table-like and route-cache-like implementation utilizing FIBs (forwarding information bases) and an adjacency table. This is supposed to address some of the per-packet and per-destination load-balancing issues previously experienced. Thus, by using CEF load balancing, you should be able to load balance both T1s without having to worry about the logical interfaces used by MLPPP implementations. My understanding is that you'll still be subject to uneven balancing if you go with per-destination load balancing, since it still pins a given conversation to one T1.
Lately (as of 12.05T, though if you can help it, don't use T codes (free warning)), Cisco has made improvements to MLPPP such that the virtual-access interfaces (which you really didn't have control over if you tried more than one on a box) are now replaced with multilink interfaces (similar to their multichassis MLPPP). This seems to be a better implementation; however, if you don't go to 12.05T, you'll have to upgrade to 12.1 LD codes, which require more memory.
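A hedged sketch of that newer multilink-interface style (interface numbers and the address are made up, and the exact multilink-group syntax varies by IOS version):

```
! Bundle interface -- carries the IP address for the pair of T1s.
interface Multilink1
 ip address 192.0.2.1 255.255.255.252
 ppp multilink
 ppp multilink group 1
!
! Member links -- no IP of their own, just PPP plus bundle membership.
interface Serial0
 encapsulation ppp
 no ip address
 ppp multilink
 ppp multilink group 1
!
interface Serial1
 encapsulation ppp
 no ip address
 ppp multilink
 ppp multilink group 1
```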
As far as processor loads go, I haven't seen huge changes in the loading. This is, of course, dependent on the amount of process-switched traffic.
Just one more thing regarding CEF. By default, CEF does load balancing per destination. If you feel that you need to do load balancing per packet, you have to enable it with the command "ip load-sharing per-packet" at the interface level.
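A hedged sketch of that (interface names assumed; apply it on both T1 interfaces so traffic in each direction is balanced per packet):

```
! Assumed interface names. With "ip cef" enabled globally,
! this switches CEF from its per-destination default to
! per-packet load sharing on each T1.
interface Serial0
 ip load-sharing per-packet
!
interface Serial1
 ip load-sharing per-packet
```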
Actually, I'm in a similar situation, but as a WAN service provider. In my understanding, MLPPP is better from a performance standpoint, and you don't have to worry about two different IP addresses. Cisco's white paper, "Alternatives for High Bandwidth Connections Using Parallel T1/E1 Links", gives you a general idea of each technology.
We're planning to start providing MLPPP services to some specific regional customers in the Seattle area as beta trials, rather than the load balancing between two circuits which you mentioned.
As you can easily imagine, all of this depends entirely on your provider. You should check with your provider first to see what kinds of options are available. You may not have multiple options. Good luck, and please post the outcome of your effort for the rest of us.