
Switch stack design question

nickbrown1968
Level 1

Hi, I'm evaluating some LAN design proposals and am seeing two distinct designs emerge. The switches are to be located across a pair of comms cabinets.

Some designs propose a single stack of six 3750-X switches split across the two cabinets. Others propose a pair of stacks - one in each cabinet - with HSRP/VRRP configured across the pair of stacks. I'm trying to weigh up the pros and cons of each approach.

The single stack design seems the simplest, but are there any drawbacks? Any assistance appreciated.

Cheers, Nick.

13 Replies

Reza Sharifi
Hall of Fame

Hi Nick,

One of the benefits of VSS (6500 series) and stacking (2960, 3750 series) is that you don't have to worry about STP, HSRP, VRRP, etc., because the entire stack logically looks like one device. If you go with 2 different stacks, then you are introducing these protocols back into your design.
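For illustration, here is a minimal sketch of the first-hop redundancy configuration a two-stack design would add back. The VLAN number and addressing are made up, not taken from your design:

  ! Stack A - intended HSRP active router for the user VLAN
  interface Vlan10
   ip address 192.168.10.2 255.255.255.0
   standby 10 ip 192.168.10.1
   standby 10 priority 110
   standby 10 preempt

  ! Stack B - HSRP standby for the same VLAN
  interface Vlan10
   ip address 192.168.10.3 255.255.255.0
   standby 10 ip 192.168.10.1
   standby 10 priority 100
   standby 10 preempt

With a single stack there is none of this to maintain; the one SVI on the stack is the default gateway.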

I personally like the one stack design for core/distro, as it is simpler to troubleshoot and it provides all the redundancy you need.

HTH

Joseph W. Doherty
Hall of Fame


Usually the single stack is preferred, as it does simplify things.  However there are still some advantages to splitting devices.

One issue with stacks has been that upgrading them causes downtime.  The latest IOS images, I think, now support rolling upgrades - I can't say how well they work.

Another issue with stacks is how they move traffic around on their ring.  The original 3750 series placed everything on their ring.  The later -E/-X models use their ring much better, at least for unicast.

If the stack decides to go nuts, as one stack you lose everything.  With two stacks, one stack might continue to function.

There might be a couple more advantages to splitting your stack, but normally you would opt for using a single stack.

devils_advocate
Level 7

The pros of using a single stack are that it's cheaper (less hardware and less cooling/power) and you can utilise link aggregation from your access switches, which effectively takes spanning tree out of the mix (if configured correctly).
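As a rough sketch (interface numbers are made up, and this assumes an IOS release that supports cross-stack LACP; otherwise channel-group mode on would be used), a cross-stack EtherChannel from an access switch to two different members of the single stack looks something like this:

  ! Access switch side: one uplink to stack member 1, one to stack member 2,
  ! bundled into a single LACP port-channel
  interface range GigabitEthernet0/49 - 50
   switchport mode trunk
   channel-group 1 mode active

  ! 3750-X stack side: the matching ports on two different stack members
  interface range GigabitEthernet1/0/1 , GigabitEthernet2/0/1
   switchport trunk encapsulation dot1q
   switchport mode trunk
   channel-group 11 mode active

Because both physical links are members of one port-channel, spanning tree sees a single logical uplink and neither link is blocked.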

The cons of using a single stack are that you have no redundancy if the entire stack were to go down, although connecting access switches to different stack members and using multiple power supplies on different power feeds can somewhat mitigate this.

By using two stacks, you can uplink each of your access switches to a separate stack and allow spanning tree to take care of the redundant links, but this has the downside of physical links sitting there doing nothing.
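If the two-stack option is chosen, the usual approach (sketched here with made-up VLAN numbers) is to pin the spanning-tree root so it is predictable which uplink blocks:

  ! Stack 1 - primary root for the user VLANs
  spanning-tree mode rapid-pvst
  spanning-tree vlan 10,20 root primary

  ! Stack 2 - backup root for the same VLANs
  spanning-tree mode rapid-pvst
  spanning-tree vlan 10,20 root secondary

One uplink from each access switch then sits blocked until the path through stack 1 fails, which is the "links doing nothing" trade-off mentioned above.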

Budget permitting, it would be worth looking at the 6500s and VSS. This would cost a fair amount more, but it would allow you to use both chassis and provide much better resiliency.

In my experience, the chance of a whole stack going down is rare (although not impossible); it tends to be individual switches that fail. But if the budget is there, get two stacks.

With the 3750-X series, you have several different stacking cable lengths, which is something you should consider in this design.

You have 1m, 3m, and 50cm stacking cables.

If it were me, I would choose two stacks of three 3750-X switches each (although I would choose 3850s if I had the choice).

Now, how is your network topology set up?

If you have access switches off these two stacks, you can run two EtherChannels from each access switch (one to each stack), or a single uplink to each, for redundancy.

Then you configure your FHRP, and set up any redundancy you want for the outside.
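As an example of the "redundancy for the outside" piece, the stack that is HSRP active can track its firewall-facing uplink, so the other stack takes over if that link drops. The track object number, interface and decrement below are made up, and this assumes the decrement is larger than the HSRP priority gap:

  ! Watch the (hypothetical) firewall-facing uplink
  track 1 interface GigabitEthernet1/0/24 line-protocol

  ! Lower the HSRP priority on the user VLAN if that uplink goes down
  interface Vlan10
   standby 10 track 1 decrement 20

With preempt enabled on both stacks, the standby stack becomes active as soon as the tracked interface goes down.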

You can actually get 5m, 7m and 10m stack cables from a company in the States, although these are not made by Cisco, so it's likely Cisco would not support them if you had an issue.

We are having to order a few of the 7m ones due to a single stack being needed between two cabinets on one of our vessels.

Ah, that's interesting to know. I would order several replacements then, just to be safe.

If it were me, I would have two of those 7m stacking cables run at the same time, and obviously only have one connected. That way if you had a failure, it wouldn't be as bad. Just an idea.

Leo Laohoo
Hall of Fame

Here's my $0.02 ...

1.  If you are thinking about stacking, you may want to reconsider your choice of the 3750X against the 3650 and 3850.  The 3850 has the same price as the 3750X, and the 3650 is priced at the same level as the 3560.

2.  You said "cabinets".  I've got a strange feeling you are talking about putting your "stack" of switches in a DC and hooking up your servers to the stack.  If this is the case, then you seriously need to reconsider your options, as the 3750X, 3650 and 3850 aren't designed for DC work.  These models do not have the deep buffers needed to support high-speed, continuous and hitless data traffic from servers.  If you are doing servers, you'll need to consider the Nexus 2K with 5K/6K and 7K.

3.  How many switches in a stack?  In our case, it was A LOT CHEAPER to get a 4510R+E bundle (including supervisor card and PoE line cards) than a stack of 3750X.  Plus, the 4510R+E needs LESS power than a stack of 3750X/3650/3850.

Really, the 4510 was cheaper than a 3750 stack?

I did a comparison for a recent project where I looked at both the 4500 and the 6500 series for the distribution switches, and it still worked out cheaper to get a stack of 8x 3750s. I think this was because we needed 16x 10Gb uplinks at this layer and also over 150 Ethernet ports, so using 8x 24-port X-series switches fulfilled both of these requirements within the budget. I would have much preferred not to have end user devices patched into the switches which are doing the inter-VLAN routing, but budget dictated that I didn't have enough money to separate them and have a dedicated distribution switch.


Really, the 4510 was cheaper than a 3750 stack?

Yes, it might be.  You do need to carefully compare what you're getting.

Within the 4500 series, a chassis that supports supervisor redundancy adds to the cost, and a second sup especially so.  Adding full L3 adds to either option's cost, especially if you buy L3 licenses for every stack member.

The 4500 provides a true fabric between cards, but the older cards/chassis only provide 6 Gbps per slot.  As Leo mentions, the 4500 generally has more buffer capacity.

My experience has been that, for a user edge device with redundancy, stacks of 2 or 3 3750s were good value.  "Today", stacks of 4 are often borderline value.  For larger stacks, you should probably consider a chassis.

Note:

When I write "today", keep in mind that comparative value of devices may change.  I designed a large campus where we used 3750Gs stacks (occasionally even stacks of nine) in user closets and 3750E for (sort of) ToR for servers.  At the time, I considered a 4500 unsuitable for multiple gig (this pre- sup6 with 6 Gbps per slot chassis) and 6500s too expensive for an edge device, especially user edge).

Today I've been using stacks of 3750Xs, but generally only up to about 3 in a stack.  If I need 4 or more, we'll often use a 4507R or 4510R with 24Gbps line cards.  (Few host devices really need 48Gbps line cards.)

I did a comparison for a recent project where I looked at both the 4500 and the 6500 series for the distribution switches, and it still worked out cheaper to get a stack of 8x 3750s.

It doesn't matter whether you are stacking the 3750X or the older models.  If you are going to stack 7 or more switches, then get a chassis or break your stack into two.  Don't attempt to stack 7 or more.  In some cases, you will get a lag between commands.  This lag can give you the impression that you've crashed the stack.

If you need to stack 7 or more, get a chassis, like the 4510R+E.

nickbrown1968
Level 1

You've all given me a few things to consider. To clarify a few points:

- I'm not a network expert by any means which is why I'm not doing the design myself.

- All the submitted proposals follow a collapsed access/distribution/core layer design using 6 switches. There are no servers in the estate; the LAN is to serve around 150 workstations and VoIP phones only. The only other major network components are the firewalls protecting the edge to the Internet.

From a technical POV it appears that the biggest issue with a single stack is if it goes "down" then the whole of the LAN is lost. Obviously a single switch failure could be caused by many things and would only affect devices attached to that switch, but how exactly does a whole stack go "down"?

Am I right in thinking that in a two-stack design only one of the stacks would own the HSRP virtual IP address at any time, meaning that all traffic to/from the Internet via the firewalls would go via that stack only?

Thanks for all the responses.

From a technical POV it appears that the biggest issue with a single stack is if it goes "down" then the whole of the LAN is lost.

Wow.  This is an "Armageddon" scenario.

Off the top of my head, there are only two reasons why you'd lose your entire stack in a single blow:  power, or an IOS-caused crash.

If you lose a single switch and you've got multiple uplinks, then only the clients on that switch lose contact.

FYI:  Investigate IOS version 12.2(55)SE8.  If you need a 15.X version, then read up on 15.0(2)SE4.  Don't even bother with the rest.
