Cable management for a C6513

mlambdin
Level 1

Hi all... I am implementing dual 6513s with slots 9-13 loaded with 48-port 10/100/1000 cards. I came to the realization that if I manage my cables equally from the left and right (24 in from each side), I would not be able to easily hot-swap the fan tray if I ever needed to (the cables coming in from the left would all need to be pulled to get the tray out).

I am considering bringing all my Cat5 patches in from the right side of the rack. Anyone else do this? How manageable is it to have 48 cables coming across a single slot? Any advice? If you have pictures you can send, I would love to see some options. Thanks.

12 Replies

Nick Egloff
Level 1

To be honest, I just try to split them equally across the sides on my 6513s. The MTBF on the fan tray is so high that I think it's more likely that you'll retire the chassis before the fan tray dies. :-)

Now watch mine die because I said that. :-)
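To put a rough number on that MTBF point: no actual fan-tray figure is quoted anywhere in this thread, so the 500,000 hours below is purely an assumed placeholder, but the back-of-envelope shape of the argument holds either way.

```python
import math

# Back-of-envelope odds of a fan-tray failure over a chassis lifetime.
# ASSUMPTION: the 500,000-hour MTBF is a placeholder, not a published figure.
mtbf_hours = 500_000
service_years = 5
hours_in_service = service_years * 365 * 24   # about 43,800 hours

# Simple exponential failure model: P(failure) = 1 - exp(-t / MTBF)
p_failure = 1 - math.exp(-hours_in_service / mtbf_hours)
print(f"~{p_failure:.1%} chance of a fan-tray failure in {service_years} years")
```

With those assumed numbers it prints "~8.4% chance of a fan-tray failure in 5 years", which is low but not so low you'd want 100+ cables between you and the tray.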

In addition, I'm pretty sure, based on the last time I had one powered on with the fan tray out (when I implemented them about a year ago), that it will not stay up long without the fan tray in. Meaning: if you try to hot-swap it without taking the chassis down, it will probably shut itself down anyway.

The cable management that Cisco provides with these really only has enough room for about 24 Cat5e/6 cables; maybe 30, but definitely not 48.

You might be able to replace the standard management and 'dress' everything in from the right by making your own cable management somehow, sized larger than the standard piece. Either that, or some sort of swing arm on the right that you can strap the cables to, then unplug and swing back if you ever have to replace the card; if you're going to have to make something anyway. :-)

gpulos
Level 8

If you take care to 'dress' your cables neatly by making bundles of 12, you should be able to bring them out the right-hand side without too much 'ugly' cabling.

When I do a dressing like this, I tend to remove the Cisco-provided plastic cable management piece and use a Panduit duct on the side of the chassis/rack.

(As stated previously, there is not enough room in that plastic piece to route all 48 cables neatly.)

You may wish to invest in some Panduit to run down the right side(s) of your 6513s so you have enough room to place all your cables and cover them neatly.

Gents: Cat5's PVC jacket stiffens up over time, so moving the left-side cable plant later can be impossible, or risks breaking delicate RJ-45 ends. This is how we do all of our 6500s, and I highly recommend doing yours this way.

I once had to schedule downtime to replace a cable-buried fan module... no fun (it was 100+ connections). Try coordinating a good time to do all that!

Hello friends,

We are facing the same problem. Imagine that you have 2 or 3 modules of 48 RJ-45 ports each, loaded 70-80% per module with Cat5e patch cords!

And you need to label each cable!
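One small thing that takes the sting out of the labeling chore is generating all the label text ahead of time. A throwaway sketch, assuming a made-up "SW1-S09-P01" naming scheme; substitute whatever convention your site already follows:

```python
# Throwaway label generator for 48-port blades in slots 9-13, as in the
# original post. ASSUMPTION: the "SW1-S09-P01" scheme is just an example.
def labels(switch: str, slot: int, ports: int = 48) -> list[str]:
    return [f"{switch}-S{slot:02d}-P{p:02d}" for p in range(1, ports + 1)]

for slot in range(9, 14):          # slots 9 through 13
    for label in labels("SW1", slot):
        print(label)               # feed the output to your label printer
```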

Anyway, I have a question: does Cisco recommend a particular rack brand for these 6500 switches? They need at least 750 mm of width in order to manage the side cables.

Or can you advise which brand is good for that?

Thanks in advance

Abd Alqader Haj Hamad

We've got quite a few of these in production, and the "half left, half right" solution just won't work if you use Sup720s... you forget about the high-speed fan trays on the left-hand side.

Our cleanest solution so far is to use Carlyle Hydra cable harnesses (http://carlyle-inc.com), which have 12 Cat6 cables in a quick-connect block at the head, with the length of the cable run bound by webbing. They have a metal stand-off with rods which replaces the plastic fingers and takes the weight off the connectors.

As to the racks, what are you looking for: two posts or four?

I'm looking for an enclosure rack with a special cooling system.

Check out the Panduit racks and cabinets. They have units that are tailored to fit the 6500 series, with cooling ducts to ensure proper and safe airflow in the cabinets.

bjw
Level 4

You want to consider staggering the RJ-45 module blades between other modules so as to add space between concentrations of blade patches, especially if the rack and 6513 are already in place and retrofitting the cable management and/or the whole cabinet is just not feasible, economically or resource-wise.

bhedlund
Level 4

If you are implementing dual 6513 switches for redundancy purposes, then you should not need to worry about how fast you can swap a fan tray.

When the fan tray fails, the switch will power itself down in 5 minutes. When this happens, your servers will switch their active NIC over to the other chassis.

Unless you have spare fan trays sitting next to your switches and you happen to be standing in the datacenter when you get the page that your fan tray has failed (if you get it at all), I doubt you will be able to hot-swap a fan tray before the switch powers itself down.
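On the "if you get it at all" point: if you don't have environmental alerting wired up, a minimal polling sketch along these lines can at least get you a warning. This assumes Netmiko is installed, SSH is enabled on the switch, and the host/credentials are placeholders; the exact output wording of "show environment status" also varies by IOS version.

```python
# Minimal sketch: poll a Catalyst 6500 for fan-tray status so a failed tray
# pages you before the chassis powers itself down.
# ASSUMPTIONS: Netmiko installed, SSH enabled; host/credentials are placeholders.
from netmiko import ConnectHandler

switch = {
    "device_type": "cisco_ios",
    "host": "10.0.0.1",      # placeholder management IP
    "username": "admin",     # placeholder
    "password": "changeme",  # placeholder
}

with ConnectHandler(**switch) as conn:
    output = conn.send_command("show environment status")

# Flag any fan line that isn't reporting OK; wording varies by IOS version.
for line in output.splitlines():
    if "fan" in line.lower() and "ok" not in line.lower():
        print(f"ALERT: {line.strip()}")
```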

This is why you have invested in a redundant infrastructure... so you don't have to worry about these things.

If you have to worry about how fast you can run to the datacenter and swap a fan tray, then you have larger network design issues you should be more concerned about.

Hope this helps. Please rate this post ;)

-Brad

nagel
Level 1

We went with the Amphenol blades, which give us a concentration of 96 ports per blade with a single cable coming out the front of the module and plugging into the patch panels in the back of the rack. You really might want to consider this option, as it solves all of your issues (the worst thing that happens is you have to remove one large cable), and the look is way cool.

armando
Level 1

This kind of implementation is what I have in our datacenter. Don't do it on one side only; it will make the cabling difficult. You might want to go on both sides.

Chatsworth has some good ideas and racks; we use their MegaFrames and cable management. I do have an idea of what I did for fiber and copper; email me and I will send some pics if need be.

Brad,

That is a pretty cavalier attitude... Even if the "whole switch" goes down and you have a redundant switch, who's to say a critical backup of an Oracle DB or Exchange won't fail...

We all know things are never that easy. Getting every server dual-homed is not always an option. I doubt many "very well designed" datacenters can just drop a primary Catalyst 6500 switch and not experience some failure for longer than it takes RSTP or HSRP to recover.
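If you want to know what that recovery gap actually is rather than guessing, a rough sketch like this can time the outage during a scheduled failover test. It assumes a Unix-like host with ping available, and 10.0.0.254 is a placeholder for your HSRP virtual IP:

```python
# Rough outage timer for a scheduled HSRP/RSTP failover test: ping the virtual
# IP once a second and report how long each gap actually lasted.
# ASSUMPTIONS: Unix-like host with ping; 10.0.0.254 is a placeholder VIP.
import subprocess
import time

VIP = "10.0.0.254"
outage_start = None

while True:
    alive = subprocess.run(
        ["ping", "-c", "1", "-W", "1", VIP],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    ).returncode == 0
    if not alive and outage_start is None:
        outage_start = time.time()              # gap begins
    elif alive and outage_start is not None:
        print(f"outage lasted ~{time.time() - outage_start:.1f}s")
        outage_start = None                     # gap over
    time.sleep(1)
```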

I think it's always best to leave your fan blade accessible, and that means all cables flow in from the right. I have never had an issue doing it this way... and guess what, guys:

The few times I have ever had a 6500 issue, it's ALWAYS been the FAN BLADE!
