Cisco Support Community

New Member

Datacenter Network Design


I am beginning to put together some ideas on which Cisco products I will use for the network. There will be two ISPs to start, running BGP. I will be providing both managed and unmanaged servers on this network.

I was going to get two 6509s with dual Sup1A/MSFC in each and one 48-port 10/100 card in each to start. As my routers I was going to use GSR 12008s with the 8FE cards, because initially my uplinks from the ISPs will be Cat5e Ethernet.

So it would look something like this:

  ISP1          ISP2
    |             |
  GSR 12008   GSR 12008
    |             |
  6509 ------- 6509
    |             |
  2950G         2950G

Will the hardware configs be OK for what I am trying to do? Each customer's servers will be in their own VLAN, and I am getting a block of around 4000 IPs from ARIN. The managed servers will be connected to the 2950Gs with dual uplinks to each 6509 for a lot of redundancy. The unmanaged servers will only uplink to one 6509, as they are a much lower-cost item. Can someone please share their thoughts, and feel free to ask questions. I just want to make sure I am buying the correct things.
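To illustrate the one-VLAN-per-customer idea, here is a minimal IOS-style sketch. The VLAN ID, name, interface, and addressing below are made-up examples for illustration, not part of the original post:

```
! Hypothetical sketch: one VLAN per customer (IDs/addresses are examples)
vlan 110
 name customer-acme
!
! Routed interface for that customer's subnet (on the MSFC)
interface Vlan110
 ip address 10.1.10.1 255.255.255.0
!
! Access port for one of that customer's servers
interface FastEthernet3/1
 switchport mode access
 switchport access vlan 110
```

Each additional customer would get its own VLAN ID and SVI along these lines.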


New Member

Re: Datacenter Network Design


The diagram and explanation of what you want to implement sound a little "expensive" to me. Is there a specific reason for wanting the GSR 12000 series routers to handle your routing, instead of letting the 6509s handle it?

Also, are all of your unmanaged servers going to be on their own switch, or will they be mixed in with the managed servers? If the latter, how do you propose offering redundancy to the managed servers on the same switch? Unless you're talking about plugging the unmanaged servers directly into the 6500s, it seems like you're spending more on hardware than you actually need. If you do plan on letting the 12000s handle the routing after all, why not use 4507Rs with Sup III/IVs instead of the 6509s?

If you could provide a little more insight into exactly how you want your infrastructure to look and act, it would be most appreciated. Hope this helps. Thanks.

- Matt

New Member

Re: Datacenter Network Design

I think Matt has some very good points. My initial reaction was that your design was overkill, and very, very expensive. Of course, all of this really depends on the amount of traffic these servers are going to take, and hence the load on the network. You should really look at the traffic you think these servers will generate, including the types of applications running on them. That said, a little overkill never hurts, especially if you foresee the number of servers or the traffic increasing significantly in the near future.

New Member

Re: Datacenter Network Design

Sup1As are ancient; you want to run at least Sup IIs.

My take would be to replace the GSRs with 7206s with NPE-G1 (3 x 10/100/1000 ports each) and spend the money you save on the 6500. You could do without edge routers entirely and run the ISPs directly into the 6500 - the disadvantage is DoS attacks and the like, where you don't have a box upstream to protect your core.
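Either way, the dual-ISP eBGP setup the original poster described would look roughly like this bare-bones sketch. The AS numbers and prefixes below are documentation/private-range examples, not real assignments; your actual ARIN block and assigned ASN would go here:

```
! Hypothetical sketch: dual-homed eBGP (example ASNs and prefixes only)
router bgp 64512
 ! One eBGP session per upstream ISP
 neighbor 192.0.2.1 remote-as 64496
 neighbor 198.51.100.1 remote-as 64499
 ! Advertise your ARIN-assigned block (example prefix shown)
 network 203.0.113.0 mask 255.255.255.0
```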

Buy Sup IIs, or preferably Sup 720s, and why not go with 10/100/1000 cards - the 6548-GE-TX is fabric-enabled and not much more expensive than 10/100. I'd get just one 6513 chassis to start; with redundant power and Sups you're covered.

Look at private VLANs - otherwise what happens when you have 4000 customers?
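For reference, private VLANs let many customers share one primary VLAN and subnet while staying isolated from each other, so you don't burn a VLAN ID (and SVI) per customer. A minimal IOS-style sketch, with example IDs and addresses only:

```
! Hypothetical sketch: private VLANs (IDs/addresses are examples)
vlan 100
 private-vlan primary
 private-vlan association 101
vlan 101
 private-vlan isolated
!
! Customer server port: isolated host in the primary VLAN
interface FastEthernet3/1
 switchport mode private-vlan host
 switchport private-vlan host-association 100 101
!
! Routed interface maps the isolated VLAN back to the primary SVI
interface Vlan100
 ip address 10.0.0.1 255.255.240.0
 private-vlan mapping 101
```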

What other services do you want to be able to provide? Firewalling? Load balancing?

You should look at 3560s for server connectivity - much more scope for rate-limiting and so on. You are going to be charging for bandwidth, right?
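As a rough illustration of the rate-limiting point, ingress policing on a 3560 looks something like the sketch below. The class/policy names, ACL, and the 10 Mbps rate are made-up examples:

```
! Hypothetical sketch: per-port policing on a 3560 (names/rates are examples)
mls qos
!
! Match all customer traffic on the port
access-list 101 permit ip any any
!
class-map match-all CUST-TRAFFIC
 match access-group 101
!
! Police to 10 Mbps with an 8 KB burst; drop the excess
policy-map LIMIT-10MBPS
 class CUST-TRAFFIC
  police 10000000 8000 exceed-action drop
!
interface FastEthernet0/1
 service-policy input LIMIT-10MBPS
```

A per-customer policy like this is what makes tiered bandwidth billing practical.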