
Hardware Questions on Catalysts

nccgeorges
Level 1

I have a campus design with two locations for fault tolerance in an internal LAN.

There are two 6509 cores and four 6509 distribution switches attached to the cores. The distribution switches are fully meshed to the cores, and all the links are fibre. The links from distribution connect to my access layer switches in the MDFs.

We have another 6509 that serves as a server farm switch attached to our 6509 cores. The hosts belong to their own VLAN. There are four 1 Gb links meshed to the cores.

We fully populated the server switch with approximately 64 hosts. We want to add more hosts to it.

Adding another 6509 is cost prohibitive. It was suggested that I add a 3750 or a cheaper Catalyst switch and uplink it to the server farm 6509.

Why is this a bad idea? What are good alternatives? If I swapped out a blade on the server farm switch for a higher-density blade, how do I check the links for oversubscription?

Can someone tell me, from a high level, why the different Catalysts justify such a huge price differential? What is the big difference between a 6509 and a 4506, and between these switches and smaller ones such as the 3750? What is the benefit of purchasing the larger chassis compared to buying smaller individual Catalysts and daisy-chaining them?

4 Replies

lgijssel
Level 9

The 6500 series is top of the line in every aspect, with the 4500 only slightly trailing. Selecting the correct device for an application includes skipping functionality that you may never need. The sole task of a core is to switch traffic as fast as possible; not many features are needed there. This implies that a switching backbone should only be a 6500 when a 4500 cannot handle the traffic. There are still reasons to select a 6500, growth expectations for example. You can always get the highest performance from the 6500, but then I ask you: do you really need that huge performance, or are we overdimensioning?

A similar approach can be taken for server switches. Many server farms consist of a few mission-critical servers and many others that are less vital. Personally, I think it is financial suicide to make your server-to-core uplink 100% non-congesting. One should rather attach only the critical stuff congestion-free and apply a sensible overbooking (oversubscription) for the rest. Are there times when all servers on your network are fully utilized? Probably not.
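
To make that overbooking concrete, here is a rough worked example using the numbers from your post (and assuming each server has a single 1 Gb NIC, which you would need to confirm):

    64 servers x 1 Gb each = 64 Gb potential server bandwidth
    4 x 1 Gb uplinks to the cores = 4 Gb uplink capacity
    oversubscription (overbooking) ratio = 64 / 4 = 16:1

For general-purpose servers that rarely transmit at line rate, a ratio like that is often tolerable; for the mission-critical servers you would dedicate uplink capacity so their effective ratio is much lower.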

One very big advantage of modular switches like the 4500/6500 is that you can be very flexible in setting up the overbooking by assigning (fiber) links to either the core or the devices. A fixed-configuration switch does not offer the same flexibility. (I would never advise copper uplinks to the core in any serious enterprise backbone.)

Personally, I would prefer a 4500 as the server switch because it offers pretty cost-effective Gb ports.

One could imagine, though, that a single 3750 would fit in well to connect the bits and pieces that are there in every datacenter, at least for test or temporary servers.

The bottom line is that you could give the matter some thought and re-arrange your server farm in more or less the way described here. If you feel the above does not completely apply to your network, let us know.

Regards,

Leo

glen.grant
VIP Alumni

Unless there is some undue utilization, 64 hosts sounds like very few for a 6509. It sounds to me like you could add quite a few more hosts before you have to worry about overloading it, even if it is a server farm. We have many more than that running in a server farm environment and we don't see any undue pressure on the switch or the gig uplinks. If they are worried about everything being on one VLAN, add another VLAN up above and just trunk down to the new VLAN; if you are worried about the uplinks, then EtherChannel another gig uplink (see the sketch below). Why does someone think you need to add another switch?

When you say you fully populated a 6509, what do you mean with only 64 hosts? Did you run fiber from everything to the 6509s? Without knowing the layout of the 6509 it's hard to speculate; it is just hard to imagine fully populating a 6509 with only 64 hosts.
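
If you decide to bundle the uplinks, here is a minimal native IOS sketch (interface numbers and the VLAN are placeholders for illustration; "desirable" negotiates the bundle via PAgP):

    interface range GigabitEthernet1/1 - 2
     description EtherChannel uplink to core
     switchport
     switchport trunk encapsulation dot1q
     switchport mode trunk
     channel-group 1 mode desirable

    vlan 20
     name new-server-vlan

After that, "show etherchannel summary" confirms that both links joined the bundle, and the new VLAN is carried down the trunk automatically.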

How many hosts do you have on your 6509? How do you test to see if a switch is oversubscribed?

We have 8 slots of 48 ports each, most of which are servers running through a Sup2 card. I believe the simple show system command shows how busy the backplane is on the switch. Even with that many ports it never shows much over 5%.
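
For completeness, these are the places I would look (from memory, so verify against your supervisor and software version):

    CatOS:       show system                        (backplane/bus utilization and peak)
                 show mac <mod/port>                (per-port frame counters)
    Native IOS:  show catalyst6000 traffic-meter    (backplane utilization)
                 show interfaces GigabitEthernet1/1 counters

For the oversubscription question on the uplinks specifically, watch the load and output drops on each uplink interface over time; sustained output drops on the gig uplinks are the usual symptom of an oversubscribed link.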
