
Switch advice

husycisco
Level 7

Hello,

I need a switch with a maximum of 8 ports and 2x 10 Gig fiber ports (or GBIC slots). It can be Cisco or non-Cisco. I found the following, but I don't know whether its GBIC slots support 10 Gig GBICs: http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps9967/ps9971/data_sheet_c78-502486.html

Since I need 100+ of them, low price is also a concern.

Thanks


xcz504d1114
Level 4

That Linksys switch does not support 10 Gigabit. When you start talking about 10 Gig, you need to start looking at the E-series Cisco switches: 3560-E, 3750-E, etc. The 10 Gig slots are going to be full-sized GBICs, not SFPs. I don't know of any 8-port switch with a 10 Gig backbone from any vendor, but I will admit the majority of my knowledge is Cisco.

The biggest cost savings I achieve for my company come from selecting the right device to do the right job. I would start by asking the question, "Why do we need 10 gig backbones?" Especially 10 gigs per 8 ports. Also keep in mind the GBICs for 10 Gig are not cheap; they cost quite a bit more than the switch you linked.

An example of bandwidth utilization in a real-world environment: in one of my datacenters, I have somewhere around 160 servers, most living in IBM blade chassis, plus 2 6509s, 2 6513s, 2 4510s, and a host of 3750s. Nowhere in my datacenter am I running 10 gig connections; I simply don't have the need for it.

I have another datacenter that supports well over 2500 live camera feeds; none of them are high definition, but they do have exceptional resolution. I do run 10 gig there. It is all supported by 6 4510s, all with interconnecting 10 gig fiber; the edge switches are 3750s that run EtherChanneled 2 gig connections. I currently utilize about 10% of my 10 gig connections, load balanced across all of them. I'm expecting to double my camera count to over 5000 high-resolution cameras, all streaming and recording, and at that point I expect to utilize 40% of my 10 gig connections.

Also keep in mind that your limitation will not be your port speed, it will be your TCP window size. For instance, a Windows XP machine using the default TCP window size with 1 ms of latency will NEVER exceed half of a 1 gig connection... Assuming you have all 8 ports in use, and all 8 users are CONSTANTLY using their half-a-gig capability, you still only need 4 gigs of bandwidth.
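To put rough numbers on that ceiling (a sketch assuming the classic 64 KB maximum window, i.e. no TCP window scaling; exact XP defaults vary by adapter): a sender can only have one window's worth of data in flight per round trip, so throughput tops out at window size divided by RTT.

# Rough single-flow TCP throughput ceiling: throughput <= window / RTT.
# Assumes a 64 KB (65,535-byte) window, the largest possible without TCP
# window scaling; the RTT values are illustrative only.

def tcp_ceiling_mbps(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on single-flow TCP throughput, in megabits per second."""
    return (window_bytes * 8) / rtt_seconds / 1_000_000

window = 65_535
for rtt_ms in (1, 5, 20):
    print(f"RTT {rtt_ms:>2} ms -> at most {tcp_ceiling_mbps(window, rtt_ms / 1000):,.0f} Mbps per flow")

# RTT  1 ms -> at most 524 Mbps per flow  (roughly half of a gigabit port)
# RTT  5 ms -> at most 105 Mbps per flow
# RTT 20 ms -> at most  26 Mbps per flow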

I can go on for hours about this kind of stuff, so let us know if you have any specific questions.

HTH,

Craig

Craig,

Very well argued!

Rgds, Ingolf

Craig,

That response deserves 5 points. Thank you for your time. Here is some more info.

I wish I had an environment like yours, somewhere like a datacenter where physical locations were not a concern.

As you can see in the attachment, which was prepared with advanced AutoCAD techniques, we have buildings that have to be connected to a distribution point, which will then connect to a metro Ethernet POP. Since we need to keep digging to a minimum and price is a concern (there are approximately 200+ buildings like the ones in the drawing), a star topology appears to be out of the question.

To prevent bottlenecks, I need 2x 10 Gig uplinks; the other ports will be gigabit ports. I can post the project to discuss best practices further, the route the fiber should follow, etc.

All networks have their challenges, including mine. The picture you put up there is, in all honesty, typical; it is cheaper to build a ring. In fact, at my primary location, we have 11 miles of fiber connected just like that.

Keep in mind that just because the physical layout is a ring doesn't mean you can't make it a star! Unless you are pulling zip cord (a generic term for 2-strand fiber), you can use clever patch panel work to create a star topology. If you're paying for the digging, you might as well pull 24 strands or more.

I see your concern about bottlenecking though, and with 200 buildings averaging 8 ports a building, we're talking about roughly 2000 users? Do these users currently, or in the near future, have a need for VoIP? If so, make sure whatever switch you choose supports PoE and, more importantly, voice VLANs (that can save you money: cost per port is a bit higher, but you need half as many ports because the PC plugs into the phone).

Another concern I would have is spanning tree... Please don't daisy-chain 200 switches :) Default spanning-tree timers for PVST+ are built for a depth of 7 switches, and the numbers cannot go high enough to support 200 switches deep :) "7 switches deep" is a bit of a misnomer, especially when you start looking at redundancy: the calculation is done based on the WORST possible depth, not your operating depth. What appears to be a depth of 1 could actually be a depth of 6; there are some good examples on the Cisco site that can show you a visual of that.
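Here is the idea in miniature (a sketch; the 8-switch ring is a made-up example, not your topology): in a healthy ring, traffic takes the short way around, but once a link fails or spanning tree blocks it, frames may have to traverse almost the whole ring, so the worst-case depth is close to the switch count.

# Worst-case depth in a ring: plain BFS hop counts before and after a link break.
from collections import deque

def hop_distances(adjacency, start):
    """Hop count from `start` to every reachable node, via breadth-first search."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in adjacency[node]:
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

n = 8
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
print(max(hop_distances(ring, 0).values()))   # 4 hops while the ring is intact

ring[0].remove(n - 1)                         # break the link between switch 0 and switch 7
ring[n - 1].remove(0)
print(max(hop_distances(ring, 0).values()))   # 7 hops: the worst-case depth STP must be sized for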

My next question is, what are the users doing with their bandwidth? Is it 80% internet? If so, I'm assuming now that you have a 10 gig internet pipe to fill up that backbone, and I would like to buy ISP services from you :)

I would definitely look at having some sort of fiber aggregation point; Cisco makes a 12-port fiber switch, so put those at key locations to help with the spanning-tree issues. I'm also a big fan of pushing L3 out as far as I can: if you have a building with 40 users, that's a good time to spend a little extra money on a 3750 and build an L3 distribution point.

To give you an example of what you can do with just a 1 gig backbone: my 160-server datacenter serves over 150 locations across southern Oklahoma (T1s, DS3s, MPLS, very diverse). My 11-mile fiber ring (partially starred out, not nearly what I want it to be yet) is 1 gig. My spanning-tree depth is 15. We have terabytes of file shares and terabytes of e-mail; we support VoIP to every desktop, multimedia departments, GIS (global mapping and terraforming project data, very heavy on bandwidth), etc. I have approximately 2200 users on my 1 gig fiber ring.

Here is the usage on my cores; both have a single 1 gig connection to them, going opposite ways on the ring. This supports my 2 Internet DS3s and my datacenter for everyone on the fiber ring.

Core A:

MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,

reliability 255/255, txload 7/255, rxload 3/255

Core B:

MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,

reliability 255/255, txload 4/255, rxload 2/255

Granted, that is a real-time snapshot grabbed at one interval in time, so here is the peak usage for today:

Core A TX: 152 Mb

Core A RX: 125 Mb

Core B TX: 47 Mb

Core B RX: 77 Mb
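For reference on reading those figures: the txload/rxload values in "show interface" output are fractions of the configured bandwidth (here BW 1000000 Kbit, i.e. 1 Gbps), averaged over the load interval (5 minutes by default). A quick conversion to Mbps, using the snapshot numbers above:

# Convert IOS load fractions (x/255) into megabits per second.
def load_to_mbps(load_numerator: int, bandwidth_kbit: int) -> float:
    return load_numerator / 255 * bandwidth_kbit / 1000

for core, tx, rx in (("Core A", 7, 3), ("Core B", 4, 2)):
    print(f"{core}: tx ~{load_to_mbps(tx, 1_000_000):.0f} Mbps, rx ~{load_to_mbps(rx, 1_000_000):.0f} Mbps")

# Core A: tx ~27 Mbps, rx ~12 Mbps
# Core B: tx ~16 Mbps, rx ~8 Mbps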

Good design upfront will save you time and money in the end. We all have budgets, and it never seems like enough; just make sure the sacrifices you make are the right ones.

HTH,

Craig

I wanted to add real quick that 95% of your users don't need a gig connection (look at my burst usage above). My database programmers say they do... but they really don't :) So where you can cut back on putting gig to the desktop, it might again be a good time to throw in something with L3 capabilities to keep that STP issue under control.

Now, sometimes we make silly design decisions for political reasons; that could be a driving force behind the gig connections too.

Wouldn't it be nice if they always listened to us engineers?

Edit: I'm all over the place (it's been a very busy day at work today). Also remember you can put 72 strands of fiber in the ground for later use and only terminate what you need; again, an initial cost saver that could save you thousands later.

Craig

"I wanted to add real quick, that 95% of your users, don't need a gig connection . . ."

Is that anything like 95% of users will never need more than 640 KB RAM, or more than 10 Mbps (shared) Ethernet? LOL

PS:

BTW, the answer should really depend on cost/benefit.

Everything is definitely subjective. To put a devil's advocate spin on your same sentence, would you agree that users don't need 100000000000 Terabyte connections? I chose the word "don't", not "never"; future prospects should always be in your mind. A year ago my engineers were telling me "we will never need to go to IPv6, NAT will save us forever"... lo and behold, today we actually have a need that IPv6 would solve.

But you put it more elegantly: "the answer should really depend on cost/benefit".

Craig

". . . would you agree that users will don't need 100000000000 Terabyte connections?"

Nope.

I suspect that when you also add that the cost of this bandwidth connection (with current technology) has many, many more zeros behind the first number, users might change their minds. If they still want to go ahead, and their check is good, ensure you get a percentage as a fee. I've always wanted to retire to my own (purchased) island. Wonder if Australia is for sale? ;)

"Wonder if Australia is for sale?"

Mate, I can be your "agent". I know someone who knows "someone" who can sell you anything that fell off the back of a truck. ANYTHING!

Huseyin, hope you caught Craig's "Keep in mind that just because the physical layout is a ring doesn't mean you can't make it a star! Unless you are pulling zip cord (a generic term for 2-strand fiber), you can use clever patch panel work to create a star topology. If you're paying for the digging, you might as well pull 24 strands or more." If the extra fiber is available along the ring, then by patching a "star" as Craig suggests, you might no longer need 10 gig. It is likely worthwhile to price it out both ways.

Leo Laohoo
Hall of Fame

The WS-C3560E-12SD has 12 1 Gb SFP ports and 2 10 Gb ports, while the WS-C3560E-12D has 12 10 Gb ports. The 10 Gb ports support the TwinGig converters.

Thanks guys, great information.

In each building I need only 2 ports, apart from the 2 up/downlinks. We use PLC (powerline communication) devices that can propagate IP connectivity over electricity lines at up to 205 Mbps to 32 or 64 clients. One port is required for that PLC gateway, and one for an AudioCodes media gateway for VoIP. I want an infrastructure that is able to offer 100 Mbps to customers; that's why the uplinks are my only concern at the moment. But the Cisco switch that Leo posted is way too pricey to buy 200+ of. Maybe they can be used at distribution: 10 Gig uplinks to the core and 1 Gig at distribution, or 2 Gig using EtherChannel.

"Another concern I would have is spanning-tree..." exactly. That ring topology will travel max 35 switches and will end at the distribution switch that it started. How would STP behave?

As a matter of fact, Cisco solutions appear pricey and HP is more affordable; that's why I'm planning to work with HP in the distribution and access layers. But HP introduces new kinds of configuration to me, such as GVRP instead of VTP. Plus, how will HP behave in an STP scenario 35 switches deep? How would an HP distribution layer and its protocols (especially GVRP) interoperate with a Cisco core switch pair?

In the Cisco product portfolio, anything that offers 10 gig is going to be on the more expensive side of the ledger.

Candidate (L3) switches that offer just dual 10 gig ports are the 3560E and 3750E series, the 49(2or4)8-10gig switches, the ME 4924-10GE, and/or an "inexpensive" 4500 or 6500 with a supervisor that offers dual 10 gig (e.g. the 4500 Series Supervisor Engine II-Plus-10GE (L2), the 4500 Series Supervisor Engine V-10GE, or the 6500/7600 Supervisor Engine 32 with 2-Port 10 Gigabit Ethernet). The price of the switch will be impacted (much) by the number and type of other switch ports. (Least expensive might be the 3560E-24TD.)

If you need the other ports to be gig fiber, besides the option of using a 3560E-12SD (as mentioned by Leo), you can also stack a 3750-E with one or more 3750G-12Ss. (Likely more expensive than a 3560E-12SD, but it also provides stack capabilities with regard to redundancy and port density.)

As other posters have noted, if the ring is as deep as you say, you might consider running several or all of the L3 switches as routers and minimizing or eliminating the STP requirement.

35 switches... man, you are going to make me go back to STP math school... No matter what, you are going to have to change your STP timers. It's a give-and-take situation: either speed up your BPDU transmission and increase processor load, or increase the age time and slow down convergence.
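To make that trade-off concrete (a sketch using the standard IEEE/PVST+ defaults; the "stretched" numbers are illustrative, not a recommendation): classic, non-rapid STP can take roughly max-age plus twice the forward delay to recover from an indirect failure, so stretching the timers to cover a deeper topology also stretches the recovery window.

# Approximate worst-case recovery time for classic (non-rapid) STP after an
# indirect link failure: max_age + 2 * forward_delay.
def worst_case_convergence_seconds(max_age: int, forward_delay: int) -> int:
    return max_age + 2 * forward_delay

profiles = {
    "defaults (sized for ~7 switches)": {"hello": 2, "max_age": 20, "forward_delay": 15},
    "stretched for a deeper topology":  {"hello": 2, "max_age": 40, "forward_delay": 30},
}

for name, t in profiles.items():
    recovery = worst_case_convergence_seconds(t["max_age"], t["forward_delay"])
    print(f"{name}: hello {t['hello']}s -> worst-case recovery ~{recovery}s")

# defaults (sized for ~7 switches): hello 2s -> worst-case recovery ~50s
# stretched for a deeper topology:  hello 2s -> worst-case recovery ~100s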

Here is an article about PVST (not MST or Rapid STP):

http://www.cisco.com/en/US/tech/tk389/tk621/technologies_tech_note09186a0080094954.shtml

When you start talking about a multi-vendor environment, a lot of concerns start to arise. I have some previous experience throwing HP in the trash... ahem, I mean working with HP... I think that says enough :) BUT I will say in HP's defense, my environment is just not meant for their target market; yours very well might be.

Being in a position that influences many different factors, I actually view manpower and downtime as having a monetary value; my organization generates an average of $20,000 a minute, so downtime, mean time to repair, customer support, etc. are all very important to me. My engineers' and admins' ability to support, troubleshoot, and deploy equipment in a timely fashion is very high on my priority list. Granted, not all organizations place such a high value on those items, and that's where some long-term cost can be saved. What I'm getting at is that the purchase price is NOT the only factor to consider; IT in general is seen as non-revenue-generating and high-expense, but businesses can't operate without us today.

In terms of GVRP vs. VTP, it's no different: the concepts are completely the same, so learn the syntax and you are golden (for the record, the same thing applies to Juniper). I have never tried to configure GVRP on Cisco before; I had separated HP and Cisco with an L3 domain.

Back to spanning tree (again, I'm sorry if I'm all over the place). Cisco supports PVST+ by default; HP supports MST (if I remember correctly) by default. Cisco will detect the mismatch and consider it a separate spanning-tree instance; I'm not sure how HP will treat it, so your best option is just to configure the Cisco to use MST. With MST, a switch takes its region configuration, calculates a hash, and sends it to its neighbor; as long as the hash matches, you are in the same region. The things that must match are the MST region name, the revision number, and the VLAN-to-instance mappings; if any of those don't match, you will have a spanning-tree region split, which could lead to loops. Dump everything into instance 0 (the Cisco default), set the name and revision the same, and you should be OK. Cisco MST supports 20 hops by default; this is configured with the global command "spanning-tree mst max-hops x" and can go up to 255. Max-age is set to 20 seconds by default (you should consider each hop a 1-second delay), and the hello (transmit) time is 2 seconds (speeding up the transmit time can reduce the required age time, but it increases processing load).
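A simplified picture of that region-matching logic (a sketch only: real MST computes an MD5 digest over the full 4096-entry VLAN-to-instance table as defined in 802.1s, not this toy comparison, and the switch names here are made up):

# MST region matching, simplified: two switches form one region only when the
# region name, revision, and the digest of the VLAN-to-instance mapping all match.
import hashlib

def region_identity(name: str, revision: int, vlan_to_instance: dict) -> tuple:
    # VLANs not explicitly mapped fall into instance 0 by default.
    mapping = tuple((vlan, vlan_to_instance.get(vlan, 0)) for vlan in range(1, 4095))
    digest = hashlib.md5(repr(mapping).encode()).hexdigest()
    return (name, revision, digest)

cisco_core = region_identity("CAMPUS", 1, {})   # everything mapped to instance 0
hp_edge    = region_identity("CAMPUS", 1, {})   # identical settings
hp_typo    = region_identity("CAMPUS", 2, {})   # revision differs

print(cisco_core == hp_edge)   # True  -> one MST region, one set of spanning trees
print(cisco_core == hp_typo)   # False -> region boundary, effectively a split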

It sounds like you have A LOT of things to consider; don't let my Cisco bigotry sway you from the right decision. Without knowing your budget, number of facilities, fiber layout, fiber strands, fiber lengths between facilities, current infrastructure, user needs, user functions, Internet bandwidth, and a whole host of other variables, you're not going to get a GOOD answer.

Quite honestly, I think you are on the right track and you are considering the right things. I think you have a lot of footwork to do, and I think you are the BEST person to answer the needs of your current project, as you know the requirements better than anyone here.

Good luck on the long hours :) Let us know if there is something we can help with,

Craig
