
connecting 60Gb backbone between 2 buildings

Eby Mani
Level 1

What is the best and least expensive way to build a 60Gb backbone network between two buildings that are less than 400 metres apart?

Building A has the following,

  1. About 10 servers with 1000BASE-T connectivity
  2. Five 48-port Gigabit stackable switches with 4 x 1G SFP uplinks
  3. 200 workstations

Building B has the following,

  1. About 5 servers with 1000BASE-T connectivity
  2. 40 TB of NAS storage in total
  3. Four 48-port Gigabit stackable switches with 4 x 1G SFP uplinks
  4. 130 workstations

As of now, the connectivity between A and B is 1Gb over 6-core OM3 multimode fibre.

Option 1

Get two 4506-E chassis with Sup7L-E, a 12-port 10GbE (SFP+) module and a 24-port 1GbE module. Aggregate six 10GbE links over 12-core OM4 multimode fibre.

Option 2

Get two 4900Ms with the 8-port half card and TwinGig modules for uplinking from the stackable switches. But there is some bottleneck related to a connectivity issue, which I don't remember.

Is there a better solution that is relatively simple and inexpensive? Or should we go with OC-192/768?

Thanks in advance.

15 Replies

vmiller
Level 7

How much fiber is available between the two sites?

Have you looked at this?

http://www.cisco.com/en/US/prod/collateral/modules/ps5455/ps6575/prod_brochure0900aecd803a53ea.pdf

Marvin Rhoads
Hall of Fame

What is it that makes you believe 60 Gbps is necessary? Very, very few applications or use cases can take advantage of that amount of bandwidth. Do you have data from your current 1 Gbps connection that indicates you need 60 times that?

To do anything with 10 Gbps will take some investment. The best 10 Gbps price points from Cisco right now are to be had from the Nexus 5k series. What are your stackable switch models? They will likely be your chokepoint; how much depends on their capabilities.

The CWDM solution vmiller posted about above would require not only single-mode fiber but also wavelength-specific transceivers and terminating equipment.

Leo Laohoo
Hall of Fame

Hmmmm ... I'd use Nexus 2K for each site with a Nexus 5K controlling them.  

If you are doing VRF, then use a pair of 6500Es with Sup2T, "joining" them together using VSS.
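
If it helps, the VSS conversion on a Sup2T pair is only a handful of lines; everything below (domain number, VSL port-channel, interface numbers) is just a placeholder you'd adjust to your own chassis:

  ! chassis 1 (repeat on chassis 2 with "switch 2", Port-channel2 and "switch virtual link 2")
  switch virtual domain 100
   switch 1
  !
  interface Port-channel1
   switch virtual link 1
   no shutdown
  !
  interface TenGigabitEthernet5/4
   channel-group 1 mode on
   no shutdown
  !
  ! then convert from privileged EXEC (this reloads the box):
  ! switch convert mode virtual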

For 10Gb ports, would you consider 40Gb? If yes, then you can use the WS-X6904-40G line cards. If 40Gb ain't your cup of tea, then you can use the WS-X6816-10G. For SFP ports, you can choose either the WS-X6824-SFP-2T or the WS-X6848-SFP-2T. If you want copper, then the WS-X6848-TX-2T is the Data Centre copper line card.

Eby Mani
Level 1

Many thanks,

@vmiller

We have 2 pairs left; we're planning to put in 12-core OM4 so it can support full 10G.

I guess CWDM is for those with limited fibre over long distances. Is the 2.4Gb maximum speed per fibre or per wavelength? And compared to the cost of, say, a 24-port 10G 4900M, 4500-X or Nexus 5xxx switch, is a CWDM solution relatively inexpensive?

@Marvin

The workstations manipulate uncompressed video, and the videos are stored on NAS drives across the building; copying them over the 1Gb backbone takes a long time. All the switches are non-Cisco managed L2 switches with a 96Gb backplane and 20Gb of stacking bandwidth.

@leolaohoo

40G line cards are currently out of the question due to budget. However, if 100G cards were available for the price of one 40G card, then why not?

--------------------------

The plan is to upgrade servers, storage, etc. to 10G, and maybe some critical workstations to 10G, but not all. I'm considering non-Cisco products, but when it comes to warranty support/replacement, nothing beats Cisco where I live.

And in general, I haven't read any 10G aggregation tests; what would be the average bandwidth?

Thanks again.

However, if 100G cards were available for the price of one 40G card, then why not?

Currently a 100 Gbps line card is cost-prohibitive. The per-port price should be coming down in about 18 months.

CWDM is really not well-suited for connecting two buildings' common LAN infrastructure. You will have to introduce costly equipment that reduces your performance and scalability.

I'd definitely lean strongly towards a Nexus 5K / 2K solution. Depending on your physical cabling for the workstations, it may even be better to have them all go into the 2K FEXes and essentially look like data center servers. A Nexus solution also gives you greater flexibility for potential SAN ports.
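
To give a feel for the FEX model, attaching a 2K to its parent 5K is only a few lines of NX-OS; the FEX number and fabric ports below are purely illustrative:

  feature fex

  fex 101
    description Building-A-access-FEX

  interface Ethernet1/17-18
    switchport mode fex-fabric
    fex associate 101
    channel-group 101

  interface port-channel101
    switchport mode fex-fabric
    fex associate 101

Once the FEX comes online, its ports show up on the 5K as Ethernet101/1/x and are configured centrally from there.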

The WS-X6xxx cards Leo mentioned are all Catalyst 6k-based and would require putting a new chassis in each building, probably Sup 2T-based. While I'm a big fan of the 6k as a core switch, it's really not the most economical or scalable for 10 Gbps solutions - especially in comparison to the Nexus line. Have a look at this data sheet.

WDM is targeted at networks that are fiber-constrained, or at long-haul, high-bandwidth networks where optical amplification is required.

Generally speaking, the CWDM wavelengths are bit-rate agnostic, so they can handle 1G, 10G, 40G, 100G, etc. As Marvin mentions, it adds cost to your network because your optical signals need to be tuned to specific wavelengths (more costly CWDM transceivers in your equipment, or outboard, expensive wavelength converters), plus the optical multiplexers/demultiplexers at the fiber ends that put the individual optical wavelengths onto the fiber and take them off. Each wavelength can be thought of as a virtual fiber, because what you put in at one end comes out the other the same way (no bit rate or format changes), just like an individual fiber link.

Hope this helps!

Tom

Not as expensive as one might think. BUT, I missed the part about multimode. My bad.

rayframe1
Level 1

You mention looking at a 4900M; we have 60 Gbps of transport in a channel-group between two 4900Ms, using 6 of the TenGig interfaces. The channel-group should go up to 80 Gbps using slot 1. Slots 2 and 3 have limitations with TenGig. The possible distance depends on the type of SFPs and fiber.

It's a practical solution using existing equipment rather than going cutting edge. Otherwise, 40 Gbps and 100 Gbps interfaces are available, but they may get more costly, as they won't work with older hardware.
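
Just to illustrate, the bundle itself is an ordinary LACP EtherChannel in IOS; the interface and Port-channel numbers below are only examples:

  interface range TenGigabitEthernet1/1 - 6
   description 60G backbone to the other 4900M
   switchport mode trunk
   channel-group 1 mode active
  !
  interface Port-channel1
   switchport mode trunk

Keep in mind the channel hashes traffic per flow, so a single copy session still tops out at one 10G member link; the 60 Gbps is aggregate.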

I agree the 4900M is the most cost-effective 10 Gbps Cisco solution after the Nexus. Some folks are more comfortable with it as it runs the same IOS you may be familiar with. It does have hardware and slot limitations; but then all boxes do, just at different points.

Nexus 5k is a very solid platform though and more growth-capable going forward. Cisco's pricing on it is very competitive.

I'd recommend talking to your account manager or reseller and asking for a two-scenario proposal. Then you can make an informed decision based on both cost and performance.

Have you heard of the new 4500X?

It's the first of the 4500 family that will support VSS. This model is based around the Sup7E supervisor card.

Eby Mani
Level 1

@Tom

Thanks, very nice explanation. When using CWDM for, say, 40G or 100G, you need to have the output from the 40/100G cards connected to the network port of the CWDM. Am I correct?

@Raymond

We don't have a 4900M yet; we're only looking at that option. Regarding the slot 2 and 3 TenGig interfaces, could you provide a URL for that? I remember reading something on the Cisco website about this a few months back, but now I'm unable to locate it.

@Marvin

You seem to be a Nexus guy; how different are the hardware and OS compared to standard Cisco gear for us lesser mortals? If I'm correct, is NX-OS using QNX at its core?

@leolaohoo

The 4500-X is mentioned in my previous post along with the other potential solutions.

General question: do the Nexus (with the L3 daughter card), 4900M and 4500X switches support a ring-like network, in case I want to add a third site?

Hi Eby,

I'm not really pro-Nexus in many things, but for your application it seems the best fit. NX-OS is a bit more Linux-y and has some nice features like hitless in-service software upgrade (ISSU). Many commands are the same, and there are several guides out there giving one a quick-reference sort of translation (example).

The 4500X data sheet looks promising, but it is even more cutting edge hardware-wise. The support area on cisco.com is pretty much empty. (link) I'd ask Cisco about what's non-blocking, port limitations, etc. Is that where you want to stake your future?

I've read that Cisco's margins are lower on Nexus, so the price-performance ratio is better from the customer perspective. Plus, the architecture is truly next generation rather than based on the workhorse 6500 chassis, which dates back over 10 years now. OTOH, the Nexus still doesn't offer every single feature that we've come to take for granted on the tried and true 6500. OTOOH, it has some still-wet-paint features like OTV and vPC that the Catalyst series doesn't offer.

Re a 3rd site, I'd think of it more like a three-data-center campus area network as opposed to a metro network (a somewhat dated concept in 2012), and think more of an Ethernet design with spanning tree than a SONET or RPR network with ring technology considerations. Any of the ring-based technologies are going to force you to introduce a "layer" of equipment just to support that technology while doing nothing to move your performance forward per se. This point holds for the CWDM solutions as well.
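
As a rough sketch of that approach, the three cores would just run Rapid PVST+ (or MST) with the root pinned at your primary site; the VLAN range here is only a placeholder:

  ! core switch at the primary site
  spanning-tree mode rapid-pvst
  spanning-tree vlan 1-100 root primary
  !
  ! core switch at the second site (backup root)
  spanning-tree mode rapid-pvst
  spanning-tree vlan 1-100 root secondary

With three sites you can either run a full triangle and let spanning tree block one leg, or keep layer 2 loop-free and route between the sites.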

Eby,

The network port of the CWDM filter is connected to the fibers that interconnect your buildings (a CWDM filter on each end of the backbone fibers). The output of your equipment's optical interface(s) (whether 1G or 100G) is connected to the client-side ports of the CWDM filter (each port is designated for an individual wavelength). The wavelength of your equipment's optical interfaces must match the CWDM's input port wavelengths.
