
Wiring Question for Data Center

alaporte
Level 1

I work in what I would consider a small/mid-sized data center. We use two 6513s as the core/distribution for ~25 racks of servers.

My question concerns cabling the servers to the core. Currently, long patch cords run directly between the 6513s and each server. It's functional, but it's a mess.

I'm trying to figure out the best way to clean up the mess and make it look professional.

Most people seem to suggest 2 different ways to accomplish this:

1) Install switches in each rack and run fiber from the core to the rack. Wire each server to the switch in the rack.

2) Install 24/48 port patch panels between the core area and the racks.

I'm wondering what people think of these ideas, and whether there are any other suggested ways of accomplishing this.

Andy

6 Replies

Not applicable

I think installing switches in each rack, running fiber from the core to each rack, and then wiring each server to the in-rack switch would be the better option. This keeps each rack's cabling self-contained and separates it from collisions and other unwanted factors.

So, as far as I am concerned, the first option is a good one.

ariel.friedman
Level 1

We tried installing switches in server racks and ran into the problems you normally get when using smaller switches, such as reduced flexibility in port types and shorter MTBF, as well as physical problems like fitting a switch into a server enclosure. Not only did we have to install set-back brackets, we also lost access to the switches when the server enclosure was locked.

We decided on the second approach. While the 6500s are still pretty packed with patch cables, it is more manageable and we get all the benefits of the 6500s.

Ari

pwwiddicombe
Level 4

Another vote for the second one. We run multiple 48-port panels to each rack cluster within the data center, and patch near the 6509s and again near the server area.

dwalsh
Level 1

Hi Andy,

Here's something that we used to do where I worked:

We had 6509s with three or four 48-port blades each, servicing roughly 150 to 200 phones. I had four switches in total, one on each of four floors. So this would be roughly similar to your DC environment, only we were servicing longer horizontal runs and phones, not servers -- but the idea is the same (i.e. high-density cabling issues).

Lord knows that when you're plugging 48 cables into one of those blades, it can get pretty crowded. And since we don't yet know how to alter the laws of physics that determine space requirements, we have to search for alternatives.

Back to my environment: on three of the four floors, we just wired straight from the patch panel (which ran to floor locations) to the switch. Quite a mess when you're running 48 cables into one blade! However, this is the traditional way, and it's what we did. My cabling guy (very smart fella) suggested something else. At the time I was too chicken to do it on the other floors, but I did agree to try it on one floor. Here's what we did:

He ran Cat5 (the standard at the time) in 48-cable bunches from an adjacent wall into the switch. The cables had RJ-45 connectors so they could plug straight in, and they were all nice and neat. On the other end, they plugged into a series of punch-down blocks (like the ones you see in a phone room for structured telephone cabling). These, in turn, were cross-connected to floor locations on another punch-down block that ran to the floor locations.

Now, whenever we wanted to make a connection live, we simply connected the correct Cat5 jumper wire from one punch-down block to the other. You never touch the actual ports on the switch; they just stay where they are. All alterations are done on the punch-down blocks. This keeps things nice and neat, and there's no fiddling with cables in the switch area. Any time you need to put in a new blade, you just harness up 48 more cables (we called them pigtails) and put them in the new blade.

NOTE: You could do the exact same thing with patch panels instead of punch-down blocks, but at higher densities it's a bit easier to use the blocks, and they take up much less space.
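To make the bookkeeping side of this concrete, here's a minimal sketch of how you might document the pigtail/jumper/run chain (all names are hypothetical and just for illustration; this models the record keeping, not any real tool we used):

```python
# Hypothetical cross-connect records. Pigtails and floor runs are punched
# once and never move; only the jumpers between the two punch-down blocks
# change when a connection is made live or moved.

# Permanent pigtails: switch port -> switch-side block position
pigtails = {f"6509-3/{p}": f"BLOCK-A/{p}" for p in range(1, 49)}

# Fixed horizontal runs: floor-side block position -> floor location
floor_runs = {f"BLOCK-B/{p}": f"floor-jack-{p}" for p in range(1, 49)}

# Jumpers: the only layer that changes for moves, adds, and changes
jumpers = {"BLOCK-A/1": "BLOCK-B/17"}   # switch port 1 feeds floor jack 17

def trace(switch_port):
    """Follow a switch port through pigtail -> jumper -> floor run."""
    block_a = pigtails.get(switch_port)
    block_b = jumpers.get(block_a)
    return floor_runs.get(block_b)

print(trace("6509-3/1"))   # floor-jack-17
print(trace("6509-3/2"))   # None -- no jumper punched yet
```

If you keep the jumpers table accurate, you get the "never have to trace" advantage below for free.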

ADVANTAGES:

* Very neat cable design at the switch side.

* Never have to squeeze patch cables in and out.

* Easy to trace cables (though it's better to document them so you never have to trace them).

* Makes moves, adds, and changes (particularly adds) very easy.

DISADVANTAGES:

* Not sure that you can do it with Cat6.

* You have to get a punch-down tool and actually punch cables (not too bad, though, after you do a few).

* You need to make sure you don't degrade the rating of the cable by improperly terminating it (i.e. leaving insufficient twist at the termination).

Anyway, I haven't had a need to do this in a while and I no longer work at the same place, but my biggest concern would be whether this approach meets the Cat6 spec. I'm not sure about that, but your cabling person could probably tell you.

I'm not a big fan of decentralizing the switches to remote locations. It can become cumbersome and difficult to manage if you end up with a lot of them. It also doesn't scale well and can lead to port waste (i.e. you have 24 servers in one cabinet on one switch, and then along comes number 25; you now have to buy another 12- or 24-port switch to service the need, with either 11 or 23 ports going to waste -- not good).
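To put numbers on that port-waste point (these are just the figures from my example above):

```python
# Toy calculation of stranded ports when one cabinet outgrows its rack switch.

servers = 25               # the 25th server shows up
rack_switch_ports = 24     # the existing rack switch is full
overflow = servers - rack_switch_ports   # one server needs a home

for new_switch_ports in (12, 24):
    stranded = new_switch_ports - overflow
    print(f"add a {new_switch_ports}-port switch -> {stranded} ports stranded")
# add a 12-port switch -> 11 ports stranded
# add a 24-port switch -> 23 ports stranded
```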

Good luck. Let us know how you make out. I'd be glad to go into more detail if the above isn't explained well enough.

Regards,

Dave

steju
Level 1

Hi Andy,

In my opinion, both ideas have pros and cons. I will try to list all the important ones that come to mind right now (though I might miss some things).

Solution 1:

+ Fewer cables between racks (obviously)

+ No single point of failure (if one of the 6513s fails, it is still going to be painful, even if we assume each server is connected to both and you load-balance somehow) - although 6513s have proven to be quite stable and have a lower chance of failing completely than a smaller switch (e.g. a 3750)

- Possible bottleneck at the link(s) between the 6513s and the rack switches - say you have 48 100 Mbps ports; that's 4.8 Gbps of worst-case offered load, so you'd need close to 5 Gbps of uplink to feel safe, which might be hard to achieve with smaller switches (see the back-of-the-envelope sketch after these lists)

- Costly (cables and fiber are cheap, while switches are not-so-cheap, especially if you plan a fully-redundant solution with each server connecting to two different switches)

Solution 2:

+ No bottlenecks as before (assuming that the fabrics of the 6513s have been properly sized)

+ You won't have defective ports in patch panels (the ports in the 6513s can still break, but they are less likely to break than the ports in smaller switches)

+ Overall cheaper solution

- Thick runs of cable between racks (most likely you'll have to run them overhead, so you need ladder raceways and the like)

- Failure of one of the core switches would be very painful

- Slightly less scalable (ha - what happens if you eventually exceed the number of available ports on the 6513s? Buy another one?)
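As a rough sanity check on the bottleneck point under Solution 1, here's a back-of-the-envelope sketch (the 48 x 100 Mbps and 4 Gbps figures are just the example values from above, not a recommendation):

```python
# Worst-case oversubscription for a rack-switch uplink. This assumes every
# access port transmits at line rate simultaneously; real server traffic is
# normally far below that, which is why some oversubscription is acceptable.

access_ports = 48
access_speed_gbps = 0.1          # 100 Mbps access ports
uplink_gbps = 4.0                # e.g. four aggregated 1 Gbps links

offered = access_ports * access_speed_gbps   # 4.8 Gbps worst case
ratio = offered / uplink_gbps
print(f"worst-case offered load: {offered:.1f} Gbps, "
      f"oversubscription: {ratio:.1f}:1")
# worst-case offered load: 4.8 Gbps, oversubscription: 1.2:1
```

A 1.2:1 worst-case ratio would be quite conservative for server access; the practical question is whether a smaller rack switch can actually aggregate that much uplink.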

So if I were implementing a new network, I would personally go with the second solution, unless the traffic generated by the servers is expected to stay below 50% of link capacity on average (which is not really likely to happen), or the number of servers is so large that two 6513s won't be enough (in which case I would have to implement a two-layer hierarchy).

Hope this is useful,

Steju

P.S. I assumed you don't need inter-VLAN routing, port security, and that kind of stuff - those would change the problem completely, as best design practices recommend (at least) a two-layer design.

oguarisco
Level 3

Hi Andy,

I'd definitely go for the second one: you keep the benefits of having modular, performant, and redundant switches for the server farm without introducing another layer of complexity (redundancy, MTBF, performance, modularity, ...) between the switches and the servers.

Omar