Design questions

Unanswered Question
Oct 9th, 2013

Hi all,


I'm looking to implement some changes to our network to introduce a distribution layer instead of just the Core/Access design we have now, and wanted to get some thoughts on a couple of things:


In a Catalyst 3750G stack with only two members, if the stack cable breaks but both switches remain operational, would this cause a MAC flap, since there is an EtherChannel across the two switches? If so, would a three-member stack prevent this problem, as there would be a 'double hop' connection to the third switch?


Whereabouts should I be feeding my data centre into the layers? Should I be looking at getting some high-speed, resilient switches and having these fed in as an access layer, or should I connect them directly into the distribution or core layer?


Are there best-practice recommendations on the number of access layer switches, ports, or amount of bandwidth you should be connecting to a distribution switch?


Just to add, I've read through the best practice design document, so I saw the parts about 20:1 access-to-distribution and 4:1 distribution-to-core oversubscription, but I wanted to know more about whether there are limits on the number of switches you should stack together in the distribution layer.


Regards,


Ryan                  

bgroves Wed, 10/09/2013 - 09:07

As far as your stack of two 3750Gs goes, you mention concern about the stack cable breaking, which leads me to ask: are you using both stack ports to create a stack ring of these two switches, or do you just have one stack cable between them?

Are your access switches and core meshed to this stack of 3750s?


Do you have a Visio and some rough port counts? It's hard to suggest equipment without knowing the scope of the data center; where my data center may justify numerous 7010s, yours may be a good fit for stackable switches in your distro layer.


Then there is layer 8 to contend with....... budget!

A BOM that management at a large corporation might see as an acceptable cost of doing business, in line with best practice, may cause the CIO of a smaller firm to have a cardiac arrest, or be a career-limiting event for the engineer proposing it. :-)

Ryan Heseltine Wed, 10/09/2013 - 09:19

Hi Brian,


Currently I have a stack of 3 x 3750G switches which are all interconnected via stack cables, so I presume that if, for instance, the cable from stack member 1 to stack member 2 breaks, stack member 1 can still talk to stack member 2 via stack member 3. In that case a broken cable doesn't cause any downtime. However, if I had just two members in the stack instead (this is purely theoretical at the moment), would I be increasing the risk of a broken stack cable causing an outage by taking down the EtherChannel, or does it do something clever and just disable one of the ports in that instance?


A brief description of our network at the moment: 7 x 3560 48-port 100 Mb access layer switches that connect into a core layer stack of 3 x 3750G 24-port 1 Gb switches. In addition to this we also have 2 x 48-port 1 Gb Dell switches that connect one of our virtual environments into the 3750G stack.


The majority of our servers, plus 5 VMware ESX hosts, are all plugged into our 3750G stack at the moment. I'm looking to expand this as we're running out of space, but it also seemed like a good time to redesign the original network to bring it in line with best practices and ensure it's scalable in the future.


What I'm trying to work out now is how to split our current network into blocks that we can repeat over and over for growth, or whether I'm fine to just expand on what we have at the moment.


Ryan

bgroves Wed, 10/09/2013 - 09:55

First off, with just two switches you can use all four stack ports and make them a ring of two, so switch 1 and switch 2 have two stack cables between them. So if you wanted to grow your stack of three and break it up into two stacks of two (buying another switch, obviously), you'd just cable both stack ports on switch 1 to switch 2 in each stack and still have all the advantages a stack ring offers. Then diversify your core-to-distro connections, as well as the distro-to-access connections, across both distro stacks.
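
If it helps, here's a minimal sketch of the 3750 commands I'd use to confirm a stack ring is actually closed before and after re-cabling (output details vary by IOS version, so treat this as a starting point rather than gospel):

show switch
! lists the members, their roles (master/member) and priorities
show switch stack-ports
! shows stack port 1 and 2 per member; both should report Ok on every switch in a full ring
show switch detail
! adds the neighbour each stack port is cabled to, handy for spotting a half ring

If any stack port shows Down while the cables are in place, you're effectively running a half ring and one further failure can split the stack.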


Remember, stacking is not spanning tree; with stack ports a loop is good, and they are designed to work that way.


I think the idea of a repeatable design is great; I do the same thing with my access switches.

Adopt one good design and replicate as required; that's much more supportable than if every new server access farm is unique.


One thing I'd suggest is to break up your access switches, be they stacks or chassis, into pairs and diversify those ESX hosts across the pair. Sure, a stack of G's is very resilient, but if you can break, say, a stack of 7 into two stacks of 4 and 3, it'll pick up that minuscule extra bit of reliability at no real expense.


Depending on budget, get yourself a pair of 6Ks (Sup720 would be fine) as distro, or a pair of stacks of some higher-end stackables, and house the SVIs there.

Then build pairs of layer 2 access switches, either chassis or decent stackable switches (X's with power stacking might be a good fit), mesh everything, make VTP transparent, micromanage the heck out of your spanning tree, and you'll have a very respectable design IMHO.
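
To put that into a concrete (and purely illustrative) snippet, each distro switch would carry something along these lines; the VLAN number, addresses and HSRP group below are made-up placeholders, not anything from your network:

vtp mode transparent
spanning-tree mode rapid-pvst
! make this switch the root for the VLANs it gateways; use "root secondary" on its partner
spanning-tree vlan 10 root primary
!
interface Vlan10
 description example gateway SVI for a server VLAN
 ip address 10.0.10.2 255.255.255.0
 ! first-hop redundancy shared with the partner distro switch
 standby 10 ip 10.0.10.1
 standby 10 priority 110
 standby 10 preempt

The access switches then stay pure layer 2 and just trunk that VLAN up to both distro boxes.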


Then diversify servers across the pair, depending on your server failover strategy.


Now, when your network gets bigger, you might consider collapsing the core and distro onto VDCs on 7Ks...

Joseph W. Doherty Wed, 10/09/2013 - 11:07

Disclaimer


The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.


Liability Disclaimer


In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.


Posting


I'm looking to implement some changes to our network to introduce a distribution layer instead of just the Core/Access design we have now, and wanted to get some thoughts on a couple of things:

You believe you need a distribution layer because  . . .


The reason I ask: many believe all networks "must" have 3 layers, each with its dedicated "purpose", but often the real answer is "it depends".


Among other things, the 3 layer design goes back to hubs and routers, but with inexpensive L3 switches available on edges, "smaller" networks can actually be rather large while still using just 2 layers.

In a Catalyst 3750G stack with only two members, if the stack cable breaks but both switches remain operational, would this cause a MAC flap, since there is an EtherChannel across the two switches? If so, would a three-member stack prevent this problem, as there would be a 'double hop' connection to the third switch?

That would be bad, as the EtherChannel would fail when one logical device becomes two. Even in a two-member stack, that's one of the reasons both stack cables should be used (increased bandwidth is another good reason).
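
For illustration, a cross-stack bundle on a 3750 stack looks something like the below; port and channel-group numbers are placeholders, and note that older 12.2 images only allowed mode "on" for cross-stack bundles, while more recent code supports LACP ("mode active"):

! one member link on stack switch 1, one on stack switch 2
interface GigabitEthernet1/0/25
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 11 mode active
!
interface GigabitEthernet2/0/25
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 11 mode active
! Port-channel11 is created automatically and inherits the trunk settings

With the stack ring intact, the far end sees one logical switch behind Port-channel11; if a two-member stack ever splits, those two links suddenly terminate on two separate logical switches, which is exactly the failure described above.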


Whereabouts should I be feeding my data centre into the layers? Should I be looking at getting some high-speed, resilient switches and having these fed in as an access layer, or should I connect them directly into the distribution or core layer?

Most enterprise-class switches today are rather high speed. Nexus-like cut-through switching is nice, but at gig bandwidths, how often do you need the slight latency improvement? Resiliency, though, considering the impact of lost network service on many users, should often be a high priority.


In smaller networks, I like to connect to an expanded core to avoid another point of link congestion and to also avoid additional switch hop latency.


Just to add, I've read through the best practice design document, so I saw the parts about 20:1 access-to-distribution and 4:1 distribution-to-core oversubscription, but I wanted to know more about whether there are limits on the number of switches you should stack together in the distribution layer.

Those are just rules of thumb, basically for setting oversubscription ratios on your uplinks. On higher bandwidth links the ratios can often be reduced, because the edge hosts don't actually transfer any more traffic than they would with a smaller bandwidth host connection.
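
To put rough, purely illustrative numbers on those ratios: a fully used 48-port gigabit access switch represents 48 Gb of edge capacity, so a 2 x 1 Gb uplink EtherChannel gives 48:2, i.e. 24:1, close to the 20:1 guideline, while a single 10 Gb uplink drops it to 4.8:1. At the next tier, if four such access switches each bring 2 Gb of uplink into a distribution switch (8 Gb total) and that distribution switch has a 2 x 1 Gb bundle to the core, you're at 8:2 = 4:1, the distribution-to-core figure. In practice most edge ports sit nearly idle, which is why the higher ratios rarely hurt.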


PS:

An example of a "small" but fast and resilient design that might be a model for you would be a pair of 4500-Xs (or a pair of 4500s) in VSS configuration with edge 3750/3650/3850 stacks. Each stack would have, at minimum, a dual-link EtherChannel to the VSS core. (Bandwidth between edge and core can be selected by the number of links in the EtherChannel and/or gig vs. 10 gig uplinks.)
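
As a rough sketch only (domain, port-channel and interface numbers are placeholders, and the authoritative steps are in the 4500-X VSS configuration guide), pairing the two boxes looks roughly like this on the first chassis:

switch virtual domain 100
 switch 1
!
! the virtual switch link (VSL) between the pair
interface Port-channel63
 switch virtual link 1
!
interface range TenGigabitEthernet1/15 - 16
 channel-group 63 mode on
!
! on the second chassis: "switch 2", Port-channel64 with "switch virtual link 2",
! then "switch convert mode virtual" on both to merge them into one logical switch

Once converted, each edge stack just runs a normal EtherChannel (like the earlier example) with one member link landing on each VSS chassis, so losing either core box leaves the bundle up.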


Servers should also have EtherChannel, either to the VSS core or to a dedicated server stack.
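
A server-facing bundle is the same idea in miniature; something like this (VLAN, port and channel numbers invented), with the server itself doing matching 802.3ad/LACP teaming:

interface GigabitEthernet1/0/10
 switchport mode access
 switchport access vlan 10
 channel-group 20 mode active
!
interface GigabitEthernet2/0/10
 switchport mode access
 switchport access vlan 10
 channel-group 20 mode active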
