Question about design - use patch-panels or switches in racks?

Jun 18th, 2009

I'm trying to decide between two designs: should I use patch panels in the 19" server racks and put the access-layer switches in stand-alone telecom racks,

or should I install 2-3 access-layer switches in every server rack?

The first version looks more correct, but then I have to use MANY MORE cables, organizers, etc. - all of which eat up space in the data center.

Has anyone seen any design guides that address this question?


Joseph W. Doherty Thu, 06/18/2009 - 05:27

2 or 3 access layer switches in every server rack? Considering typical stand-alone rack switches offer 24 or 48 ports, how many servers per rack, or how big are these racks?! Or were your intended access-layer switches only 4-port switches?! (laugh)

The main disadvantage of rack switches is that you either run out of ports or have too many. Since server hosts often push bandwidth harder than user hosts, you also have the issue of how much aggregate bandwidth you need to move out of an individual rack. Both issues are better addressed by your suggestion of running patch panels from each rack back to a network switch rack.

You're correct that all the cables, and their organization, have a cost, but this has to be weighed against switch costs. Assuming you intend to use enterprise switches that offer gig to the servers and perhaps 10-gig uplinks, there's a cost to the extra switches needed when you deploy them in each rack. Switches per rack would also forfeit the advantages you might find with a chassis switch in a dedicated switch rack. Further, depending on the performance you need to provide, do you want to be limited by the uplink bandwidth from each rack, or pay for high bandwidth from each rack that usually sits underutilized? For example, 48 gigabit-attached servers behind a single 10-gig uplink are oversubscribed 4.8:1.

Besides "looking right", the foregoing is often why you see patch panels in server racks, etc.

One design I found interesting used 3750s "stacked" across racks (I suppose you could do the same with the newer 2975s).
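To make the cross-rack stack idea concrete, here is a minimal sketch of the 3750 StackWise commands involved. The member numbers and priority value are illustrative assumptions, not from the original post:

```
! Illustrative 3750 StackWise configuration (global config mode);
! member numbers and priorities are examples only.

! Prefer the switch in the first rack as stack master
switch 1 priority 15

! Renumber the member in the second rack so numbering follows rack order
switch 2 renumber 2
```

After the stack reloads, `show switch` (in privileged EXEC mode) displays each member's number, role, and priority, so you can verify which physical rack holds the master.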

Collin Clark Thu, 06/18/2009 - 05:39

I'm choosing to give you the short answer: 2 stacked switches in each rack. If you want the long version I can post it :-)

I have seen a couple different solutions to this problem.

1. Racks have no patch panel or switch; straight pulls back to the switches. (worst idea)

2. Patch panel in each rack running to a distribution panel, where the switches are patched in.

3. Per-row switching. At the end of each server row there is a distribution switch (3750 or 6509); each row then connects back to the core.

4. Per-rack switching, with either a 24- or 48-port switch serving the local rack. 3750s have been stacked horizontally across racks this way to ease management.

You could try looking at the data center design SRNDs.

Leo Laohoo Thu, 06/18/2009 - 15:00

There are two ways of doing this:

1. As per Collin's suggestion, put an L2/L3 switch in each rack and run either fiber (better) or copper (depending on your budget) back to the distro or core infrastructure.

2. Copper patch panel in each rack, terminating at the distro or core infrastructure.

Option #2 is the most affordable, but if you have a very large data centre it could get out of hand fairly quickly, or just plain messy. This option is OK for a small data centre.

Option #1 is my choice for a true data centre, because if you run out of fiber patch ports you can always use DWDM to expand.
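For option #1, the top-of-rack switch's connection back to the distro typically looks something like the sketch below. This is only an illustration under assumed details: the interface names, the two-link LACP bundle, and the use of dot1q trunking are my examples, not part of the original suggestion:

```
! Illustrative top-of-rack uplink to the distribution layer;
! interface names and channel numbers are examples only.
interface range GigabitEthernet1/0/49 - 50
 description Uplink to distro
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode active    ! bundle both links with LACP
!
interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport mode trunk
```

Bundling two uplinks this way gives both extra bandwidth out of the rack and redundancy if one fiber run fails.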

Before I forget, Cisco has released the new WS-C2350 L2 switch. It has the same forwarding rate as the 3750 and is positioned as a "top-of-rack, server aggregation switch for the data center". It has 48 x 10/100/1000BaseT ports and 2 x 10Gb uplinks (supporting TwinGig converters).

Cisco Catalyst 2350 Series Switches

Hope this helps.

Collin Clark Mon, 06/22/2009 - 07:24

Cisco missed the ball on the 2350. 48 ports on a top-of-rack switch? How many U is that server rack? And only LAN Lite IOS? That doesn't fit their Layer 3-at-the-edge concept.
