My company is currently relocating its primary site, including its data centre, to a new location. As part of this move we want to upgrade our core switching environment, which is currently 2 x 4507Rs and 6 x WS-4948s (server/core) plus 11 x 3560s (user block), as we are severely limited by the 3 Gbps backplane on the 4507s, which run in a collapsed core configuration. I am looking for some high-level suggestions. Our port requirements are:
227 x 1 Gb ports for servers
100 x 100 Mb ports for DRAC/management
288-384 x 1 Gb PoE ports
288 of the data ports terminate in the data centre.
96 of the data ports will terminate on 2 remote switches (connected by OM3 fibre).
My initial thinking was to get 2 x Nexus 7Ks and 2 x 3750s and simply patch everything into the two collapsed cores, given the huge switching fabric those boxes have, but gotcha #1 was our PoE requirement for the 288 user ports.
So now I am thinking something like:
2 x N7Ks (configured with 3 x 1 Gb line cards)
2 x 6500s (configured with a Sup32 and 3 x 48-port 1 Gb PoE line cards)
2 x 3560s for DRAC/management (from spares)
This would nicely separate the campus environment from the data centre environment, but the cost is obviously much higher...
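As a quick sanity check on this second option, the proposed line-card counts can be tallied against the requirements above. This is a rough calculation: the 48-port-per-card figure for the N7K 1 Gb cards is my assumption, since only the 6500 cards are explicitly specified as 48-port.

```python
# Rough port-count check for the proposed chassis layout.
# Assumption: all line cards are 48-port (only the 6500 PoE cards
# are explicitly specified as 48 x 1 Gb in the proposal).
PORTS_PER_CARD = 48

# 2 x N7K, each with 3 x 1 Gb line cards -> server/core ports
n7k_ports = 2 * 3 * PORTS_PER_CARD
print(n7k_ports)   # 288 ports vs. 227 servers required

# 2 x 6500, each with 3 x 48-port PoE cards -> user ports
poe_ports = 2 * 3 * PORTS_PER_CARD
print(poe_ports)   # 288 ports: the low end of the 288-384 range
```

On these numbers the N7K pair would leave roughly 60 spare server ports, while the 6500 pair only just covers the bottom of the 288-384 PoE range, so growing to 384 user ports would mean additional cards or chassis slots.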
I have read through the DC3 IP Datacenter Network design, but I have to accommodate campus user access as well without blowing the project budget out of the water.
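For completeness, if the N7K pair does go ahead, the usual approach is to run the two chassis as a vPC pair so the 6500s (or any downstream switches) can dual-home over a single port-channel with no spanning-tree blocked links. A minimal NX-OS sketch for one of the pair follows; the domain ID, keepalive addresses, and interface numbers are all placeholders, not from an actual design:

```
feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1

! vPC peer-link between the two N7Ks
interface port-channel 1
  switchport mode trunk
  vpc peer-link

interface Ethernet1/1-2
  switchport mode trunk
  channel-group 1 mode active
```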
Any suggestions would be great.