We have a couple hundred remote sites, all running T1s to the Internet on 1841s. Each 1841 creates a site-to-site IPsec tunnel (non-GRE) to our VPN concentrator (Nortel) at the corporate office. One site requires additional bandwidth and has ordered a second T1 from AT&T. We don't know how to utilize both T1s with IPsec, as the Nortel concentrator only allows one IP.
I have read up on Multilink PPP, but I am confused about what needs to be done. Does AT&T have to be involved?
We cannot change our topology at the corporate office or add additional hardware, and we'd prefer not to use an IMUX. All of the configs I've seen show the leased lines running in parallel to another router, which is really not how we are set up. Again, all of our lines are public.
I am not sure that I can address all of your questions without knowing more about your situation. In general, AT&T would not need to be involved in running two T1s as MLPPP. AT&T is just providing two T1 circuits; the handling of MLPPP is done on the two end devices.
Where do the T1s terminate at the corporate office? Assuming that they both terminate on the same Cisco router there, it should not be difficult to configure MLPPP. The beauty of this solution is that the individual T1s do not get IP addresses; the multilink interface gets the single IP address. The routers and MLPPP will balance the traffic over both links. I do not see this as a significant change in topology or hardware for the corporate end (or the remote end, assuming that the 1841 has the hardware for two T1s).
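To make that concrete, here is a minimal sketch of what the MLPPP side could look like in IOS. The interface numbers and addresses are assumptions, not taken from your network, and the exact multilink-group syntax varies slightly across IOS versions:

```
! Remote 1841 -- interface numbers and addresses are placeholders
interface Multilink1
 ip address 192.0.2.1 255.255.255.252
 ppp multilink
 ppp multilink group 1
!
interface Serial0/0/0
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
!
interface Serial0/0/1
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
!
! Corporate router -- mirror configuration on its two T1 ports,
! with the other end of the /30 on its Multilink1 interface
```

Note that the serial interfaces carry no IP addresses at all; routing, the IPsec peer address, and any crypto map all reference the single multilink interface.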
I believe that there are multiple possible solutions that would provide what you want. IPsec with GRE is one of them. I have implemented many VPNs using IPsec with GRE, and it works quite well for the customers for whom I have done it. I have done this in quite a few instances where there were two (redundant) paths from the remote site to the HQ. One of the advantages of IPsec with GRE is that you can run a routing protocol over the GRE tunnels, so you can do load sharing over the links and, perhaps more importantly, detect a problem (with the link itself or with something upstream on the physical path) that makes one of the links non-functional, and automatically and quickly shift all traffic to the link that is still working. However, in your case where both T1s will originate on the same device and terminate on the same device, I am not sure that I would go for the IPsec with GRE solution.
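For reference, one common way to build IPsec with GRE on IOS looks roughly like the sketch below (one tunnel per T1, with a routing protocol running over the tunnels). Every name, key, and address here is hypothetical, and your head end would need to terminate GRE, which a Nortel concentrator does not:

```
! Hypothetical names and addresses throughout
crypto isakmp policy 10
 encr 3des
 authentication pre-share
crypto isakmp key MYSECRET address 198.51.100.1
!
crypto ipsec transform-set TS esp-3des esp-sha-hmac
!
crypto ipsec profile GRE-PROT
 set transform-set TS
!
interface Tunnel0
 ip address 10.255.0.1 255.255.255.252
 tunnel source Serial0/0/0
 tunnel destination 198.51.100.1
 tunnel protection ipsec profile GRE-PROT
!
! A second Tunnel1 sourced from the other T1 would be configured the
! same way, and the routing protocol balances across both tunnels
router eigrp 100
 network 10.255.0.0 0.0.255.255
 no auto-summary
```

The routing protocol's hello/hold timers over the tunnels are what give you the fast failure detection described above.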
I think that there may be a possible solution using the two T1s as independent links. There are two aspects of the configuration of the 1841 that may impact this. One is the choice of interface on the 1841 used for IPsec peering, and the other is how routing is implemented on the 1841 (and on the 7206). To implement this you would need to make sure that the 1841 specifies an address for IPsec peering that is not on either outbound physical interface; I would probably suggest a loopback interface on the 1841. I do not know how your routers are currently set up, but I assume that the IPsec configuration on the 1841 specifies a peer on the Nortel, so the remote address is not dependent on the corporate 7206. It may very well be that the 1841 is currently using its outbound T1 interface as its peering address. If you configure the 1841 to use some other address for peering, then you can configure routing on the 1841 to use both T1s to reach the remote peer address, and configure the 7206 to use both T1s to reach the 1841's peering address.
And I believe that the multilink solution may also be very workable and have a few advantages, especially in the aspects of load balancing and of failover in case of failure of one of the links.
To answer your other questions: yes, adding MLPPP will tie up two T1 ports on the 7206. It will also require configuring a multilink virtual interface on the 7206. Depending on how routing on the 7206 is set up, it may also require some changes to the routing logic there.
And, based on my understanding of your situation, yes any of the solutions would require some degree of change on the 7206.
There are some things about your environment that we need to understand better so that we can give you better suggestions. Your original post indicated that there were several hundred remote sites each running a T1. Now we understand that they aggregate into a DS3 which is not channelized. Help us understand how this is being done. If it is a single non-channelized DS3 is there a single IP address on that interface? How do you access many remote sites over the DS3 if it has a single IP address?
Perhaps if you supplied some detail about how the remote routers are configured, it would help us figure out what advice to offer. Right now several of the assumptions I have made seem not to be valid, and until I understand the environment better I am not sure what to suggest.
We do MLPPP all the time. AT&T must get involved and bond the two T1s together. They may or may not do this for you, since their support model won't let them do it until Q4 of '05 (or so I am told). They can, but on a one-off basis; check with your account rep. Basically, they give you one IP address (for both T1s), which you'd assign to the multilink interface, not the individual CSUs. Once this is done, you point your tunnel from the multilink address toward your head-end 7206 address and negotiate ISAKMP and IPsec the same way as usual. You do not need a channelized DS3, since this is basically a big fat pipe to the Internet with many tunnels within it.
I do like the idea of a GRE tunnel, though. It allows routing protocols to be used, which makes your life easier for management and failover purposes. Hope this helps.
Basically, I created a loopback interface and set it to one of the LAN IPs provided by AT&T. We left the WAN IPs on each serial interface. I applied the crypto map to each T1 (serial interface). To get it working, I added "crypto map myvpn local-address Loopback2".
This command tells the tunnel to establish with the address of Loopback2. IP CEF is enabled and ip load-sharing per-packet is set on both interfaces. We are seeing very even load sharing. It's sweet!
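Putting the pieces described above together, the relevant parts of the 1841 config would look roughly like this. All addresses here are placeholders standing in for the AT&T-assigned LAN and WAN IPs, and the interface numbers are assumed:

```
! Sketch of the setup described above; addresses are placeholders
ip cef
!
interface Loopback2
 ip address 203.0.113.10 255.255.255.255
!
interface Serial0/0/0
 ip address 192.0.2.1 255.255.255.252
 ip load-sharing per-packet
 crypto map myvpn
!
interface Serial0/0/1
 ip address 192.0.2.5 255.255.255.252
 ip load-sharing per-packet
 crypto map myvpn
!
crypto map myvpn local-address Loopback2
!
! Two equal-cost default routes so CEF balances across both T1s
ip route 0.0.0.0 0.0.0.0 192.0.2.2
ip route 0.0.0.0 0.0.0.0 192.0.2.6
```

The two equal-cost static routes are what let CEF per-packet load sharing spread the IPsec traffic over both links, while the loopback keeps the tunnel's peer address stable regardless of which T1 a given packet takes.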