Giuseppe Larosa Thu, 06/18/2009 - 23:44
User Badges:
  • Super Silver, 17500 points or more
  • Hall of Fame, Founding Member

cmadiam82 Fri, 06/19/2009 - 01:51

Many thanks Giuseppe.


I will try to talk to our provider.

Joseph W. Doherty Fri, 06/19/2009 - 02:55
User Badges:
  • Super Bronze, 10000 points or more

Another option (preference?) with some MPLS providers, besides those mentioned by Giuseppe, might be ATM IMA.

paolo bevilacqua Fri, 06/19/2009 - 03:07
User Badges:
  • Super Gold, 25000 points or more
  • Hall of Fame, Founding Member

ATM IMA wastes bandwidth and needs additional hardware, so it should be avoided.

Joseph W. Doherty Fri, 06/19/2009 - 03:59
User Badges:
  • Super Bronze, 10000 points or more

I'm not too keen on it either. However, you're often stuck with what the provider is willing to do or can offer. (Another possible expense: I believe ATM/IMA cards/modules are often supported only with a non-base IOS image on some routers.)


The one advantage of IMA is that it seems to tax the processor less, since the muxing is done in hardware, whereas MLPPP is done in software.


Also on options: neither of us suggested that, with CEF, an actual bonding technique might not be needed unless single flows really need the aggregated bandwidth. (For those thinking of CEF per-packet load sharing: across 4 links it could be very bad for typical TCP implementations, not to mention the possible impact on some other non-TCP traffic types.)
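
For illustration only (the prefix and interface names below are placeholders, not from the original poster's setup), four equal-cost routes to the same destination are enough for CEF's default per-destination load sharing to spread flows across the links with no bundle at all:

ip route 10.0.0.0 255.0.0.0 Serial0/0
ip route 10.0.0.0 255.0.0.0 Serial0/1
ip route 10.0.0.0 255.0.0.0 Serial0/2
ip route 10.0.0.0 255.0.0.0 Serial0/3

Per-destination load sharing keeps each flow on one link, so packet reordering is avoided; the trade-off is that a single flow can never exceed one E1's worth of bandwidth.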

paolo bevilacqua Fri, 06/19/2009 - 04:01
User Badges:
  • Super Gold, 25000 points or more
  • Hall of Fame, Founding Member

What you say is all correct.


The thing is that nowadays few people are happy with the single-session speed of a T1, and invariably true aggregation is asked for. Not surprisingly, this has been the single most frequently asked Cisco question since the '80s.

Joseph W. Doherty Fri, 06/19/2009 - 04:10
User Badges:
  • Super Bronze, 10000 points or more

I agree, but today, with LANs commonly running at 100 Mbps or more, I haven't found many users happy with "only" 8 Mbps (the 4 E1s).


In some places it's nice to see that Ethernet WAN hand-offs are becoming more common, but even where lots of WAN bandwidth is possible (if practical, i.e. affordable), WAN acceleration devices are an interesting option too. Of course, this is getting a bit far from the original poster's question.


[edit]

Oh, also on the question of aggregation: in this particular instance, note the diagram. This appears to be a hub site connected to an MPLS cloud. Assuming the remote sites have only a single E1 (or so), single flows are going to be limited by the remote site's bandwidth. There's still an advantage to aggregation, as it avoids single-link bandwidth contention, but not as much benefit unless the other sites have more than a single link too.

cmadiam82 Fri, 06/26/2009 - 15:17

(WAN acceleration devices are an interesting option too.) We are now using Cisco WAAS. Someone told me that we are the first in our country to use this... Hope so.... Hehehe...


Anyway, I've already talked to our provider, and they told me that they will be using VRFs. As of now we have 31 sites, and they are planning to break those 31 sites into 4 groups, so that each E1 (of the 4 E1s) will carry 8 sites. The reason they gave for that kind of plan is that there's no single point of failure, as there would be if we bundled those four E1s. If this is the case, is there any possibility that I can configure our router with a floating route to the 3 remaining E1s, as a standby route, in case the E1 assigned to a group of routers fails?


Thanks again guys....

Giuseppe Larosa Sat, 06/27/2009 - 04:37
User Badges:
  • Super Silver, 17500 points or more
  • Hall of Fame, Founding Member

Hello Chester,

the provider is probably suggesting that you divide your sites into 4 VRFs: each group of 8 sites will point to a single E1.


If so, this can be considered a poor design, because if one E1 fails you lose 8 sites.



First of all, it is possible to use MPLS VPN and have a bundle as the access link to the VRF.

We tested multilink PPP for a customer and it worked well (actually we had more issues trying to use the multilink PPP bundle as the backbone MPLS link, but that is a different story, caused by a bug).
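
As a sketch of what such a bundle might look like on the CE side (interface numbers and addressing here are only examples, not from the actual deployment), each E1 member link is placed into the same multilink group:

interface Multilink1
 ip address 192.0.2.1 255.255.255.252
 ppp multilink
 ppp multilink group 1
!
interface Serial0/0
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
!
interface Serial0/1
 encapsulation ppp
 ppp multilink
 ppp multilink group 1

(On some older IOS releases the per-interface command is "multilink-group 1" instead.)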


I understand that they may not feel confident with bundling technologies, but they are widely used in MPLS VPN/VRF contexts as well.


Be careful


Hope to help

Giuseppe



cmadiam82 Tue, 06/30/2009 - 20:51

Hi Giuseppe,


What do you mean by this (it is possible to use MPLS VPN and to have a bundle as the access link to the VRF.)?


Thanks...

Chester

Joseph W. Doherty Sun, 06/28/2009 - 03:03
User Badges:
  • Super Bronze, 10000 points or more

Like Giuseppe, I find what the MPLS vendor suggests rather odd. You write that they suggest this to avoid a single point of failure, but it will likely create one, not eliminate one. If your remote sites are in different VRFs, the only way to reach each would be via the "dedicated" E1 at the hub site, unless each remote site is in more than one VRF too.


One feature of MLPPP, or other channel bundling, is that it tends to allow the failure of one or more links while still keeping the logical path available. If you really wanted to avoid a single point of failure, you could have two hub routers, each with dual E1s in dual channels.
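
On the floating route idea asked about earlier: a floating static route is simply a static route with a higher administrative distance, so it stays out of the routing table until the primary route goes away. A sketch only (the prefix and interfaces are made up, and this helps only if the provider will actually carry that group's traffic over the surviving E1):

! primary route, default administrative distance of 1
ip route 10.1.0.0 255.255.0.0 Serial0/0
! floating backup, administrative distance 250, installed only if Serial0/0 is down
ip route 10.1.0.0 255.255.0.0 Serial0/1 250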

Rick Morris Mon, 06/22/2009 - 10:59
User Badges:
  • Silver, 250 points or more

All good answers so far. I had to do something similar with 2 E1s at our Mexico site, and we ran BGP because our provider would not multilink the lines together.


Along with that, to get load sharing, I had to add the following config:

ip load-sharing per-packet


We peer with the vendor's loopback, and if you choose this route, make sure you use ebgp-multihop in your config, create a loopback on your side, and set the update source.
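
A rough sketch of that kind of setup (all addresses and AS numbers below are invented for illustration): peer from a local loopback to the provider's loopback, allow the extra hop, and give the router one path to the peer's loopback per E1:

interface Loopback0
 ip address 203.0.113.1 255.255.255.255
!
router bgp 65001
 neighbor 198.51.100.1 remote-as 65000
 neighbor 198.51.100.1 ebgp-multihop 2
 neighbor 198.51.100.1 update-source Loopback0
!
! two equal-cost paths to the peer's loopback, one per E1
ip route 198.51.100.1 255.255.255.255 Serial0/0
ip route 198.51.100.1 255.255.255.255 Serial0/1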


Sorry if this is too much info on the config side; it is just a little different with multiple E1s in this type of BGP config.


There are some other little caveats to remember, but until you know what you are doing they really do not matter just yet.


paolo bevilacqua Mon, 06/22/2009 - 11:20
User Badges:
  • Super Gold, 25000 points or more
  • Hall of Fame, Founding Member

Check your file transfer performance with and without per-packet, and let us know :)
