Cisco Support Community

New Member

Suggestions for core switch selection for a DC/Campus site.

Hi,

My company is currently relocating its primary site, including its data centre, to a new location. As part of this process we wish to upgrade our core switching environment, replacing our 2 x 4507Rs and 6 x WS-4948s (server/core) and 11 x 3560s (user block), as we are severely limited by the 3 Gbps backplane on the 4507s, which are configured as a collapsed core. I am looking for some high-level suggestions.

Requirements:

Data Centre:


227 x 1 Gbps ports for servers

100 x 100 Mbps ports for DRAC/management

User Access:

288-384 x 1 Gbps PoE ports.

288 of the data ports terminate in the data centre.

96 of the data ports will terminate on 2 remote switches (connected by OM3 fibre).

My initial thinking was to get 2 x Nexus 7Ks and 2 x 3750s and have everything patched straight into the two collapsed cores, given the huge switching fabric those boxes have, but gotcha #1 was the PoE requirement for the 288 user ports.

So now I am thinking something like:

2 x N7Ks (configured with 3 x 1 Gbps line cards)

2 x 6500s (configured with a Sup32 and 3 x 48-port 1 Gbps PoE line cards)

2 x 3560s for DRAC/management (from spares)

This would nicely separate the campus environment from the data centre environment, but the cost is obviously much higher...
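To make the intent concrete, the access-to-core wiring I have in mind would look roughly like the sketch below on each campus 6500 (purely illustrative; the interface numbers, channel-group ID and VLAN list are placeholders):

! Hypothetical uplink bundle from one campus 6500 up to N7K-1
interface range GigabitEthernet 1/1 - 2
 description Uplink to N7K-1
 switchport
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30
 channel-group 10 mode active
! a second bundle to N7K-2 would mirror this for redundancy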


I have read through the DC3 IP Datacenter Network design, but I have to accommodate campus user access and not blow the project budget out of the water.

Any suggestions would be great.

Thanks

  • LAN Switching and Routing
2 REPLIES

Re: Suggestions for core switch selection for a DC/Campus site.


Hi,

To design a network you should first understand the application capacity, the number of users hitting those applications, and the types of traffic in the network (data, voice or video). With these in mind you can design the backbone for future growth: if the network is expected to grow, you need to deploy high-end switches at the core with enough backplane capacity to handle the load.

The 6500 chassis offers each card slot a connection to a 32 Gbps shared bus, plus one or possibly two fabric channels per slot, where the fabric channels run at either 8 Gbps or 20 Gbps. "Fabric-enabled" cards have one or two fabric channel connections, of either the 8 Gbps or 20 Gbps type.

The fabric is supplied either on its own card (the older 8 Gbps channels, 256 Gbps total) or on the Sup720 (the newer 20 Gbps channels, 720 Gbps total).
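If you want to verify what an existing chassis is actually doing, a Sup720-based 6500 has a few read-only commands that expose the fabric state (a sketch; exact output varies by IOS release):

show fabric status            (per-module fabric channel status and speed)
show fabric utilization       (ingress/egress load on each fabric channel)
show fabric switching-mode    (bus, crossbar or dCEF mode per module)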

Check out the link below for more on the 6500 series switches:

http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps708/prod_white_paper0900aecd80673385.html

The Cisco Nexus 7000 is built to support future 40 Gbps and 100 Gbps Ethernet.

Check out the link below for more information on the Cisco Nexus 7000 series switches:

http://www.ciscounity.info/en/US/solutions/collateral/ns340/ns517/ns224/nexus_deployment_report.pdf

I would also suggest gathering suggestions here, but having a network consultant produce the actual design against your requirements, in order to avoid blunders.

Hope to Help !!

Ganesh.H

Remember to rate the helpful post

Hall of Fame Super Gold

Re: Suggestions for core switch selection for a DC/Campus site.

I'm not familiar with the Nexus line yet, so I'll stick to the 6500.

If you get, say, a pair of 6500E chassis and use the VS-Sup720, the two chassis will form a single logical switch (VSS), much like a 3750 or 2975 stack. I'd say you want to take advantage of multiple 10 Gb links, so each chassis would have one 6708 (8 ports of 10 Gb per blade). For your line cards, you'll probably want to think about the 6748 (48 x 1 Gb ports per blade), which goes well with the Sup720's 20 Gbps fabric channels. For the DRAC/iLO ports you can use the low-end 6148.
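The VSS piece looks roughly like the following (a sketch only; the domain number and VSL interface are placeholders, and the migration finishes with the "switch convert mode virtual" exec command on both chassis):

! On chassis 1 (mirror on chassis 2 with "switch 2" and its own VSL bundle)
switch virtual domain 100
 switch 1
!
interface Port-channel 1
 description Virtual Switch Link (VSL)
 switch virtual link 1
 no shutdown
!
interface TenGigabitEthernet 5/4
 channel-group 1 mode on
 no shutdown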

For access, I'd recommend the 6500E with dual Sup32 cards, and then use the 6148 with the PoE daughter card.
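On the user-facing ports themselves, a typical PoE access-port template would be something like this (module/port range and VLAN number are placeholders):

! PoE user ports
interface range GigabitEthernet 2/1 - 48
 switchport
 switchport mode access
 switchport access vlan 10
 power inline auto
 spanning-tree portfast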
