Can I know what would be the best design for a data center? I will be using 3750G and 3560G switches, and I'm planning to add a 6500 in the future. By the way, what is the best solution for an enterprise campus: L3 to the access layer, or the traditional design with L3 between core and distribution and L2 between distribution and access? Hope to hear from all of you guys!
There really isn't a "best" way to do it because it entirely depends on your requirements, and each company's requirements will be different.
Similarly with the campus design. I have designed both, i.e. L2 access-layer and L3 access-layer, and each has its advantages/disadvantages.
It's up to the designer to match the set of requirements to a design that will meet those requirements.
Cisco have a lot of design guides at http://www.cisco.com/go/srnd - it's worth having a look.
Best design depends very much on what you are trying to do - is your data centre going to be "real" servers, or lots of VMware? If virtual, will you be looking to move the servers around via VMotion? Will you be adding in features like WAAS or load balancing?
Will it be a multi-tenant data centre, or just for your own use?
If it is a real data centre, you need to be looking at 6500s as a minimum.
You need to look at simplifying the L2 topology as much as you possibly can - give spanning tree as simple a job as possible. Stacking is good - StackWise on 3750s, VSS on 6500s, vPC on Nexus.
If doing lots of virtualisation, remember that you may want decent mobility of a VM. That could mean a VM popping up anywhere. That would rule out the use of L3. It also means you need to look at trying to keep uplinks as simple as possible.
Using VSS for core switches plus EtherChannel can effectively give you a star topology at L2 - even when dual-homed. That means there is nothing for STP to handle for you.
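To make that concrete, here is a rough sketch of what the access-switch side of a multichassis EtherChannel to a VSS core might look like (interface numbers and the channel-group number are made up for illustration):

```
! On the access switch, bundle one uplink to each physical VSS chassis
! into a single EtherChannel - the VSS pair looks like one logical switch,
! so STP sees a single loop-free link rather than a redundant triangle.
interface range GigabitEthernet1/0/49 - 50
 description Uplinks to VSS core (one link per chassis)
 switchport mode trunk
 channel-group 1 mode active    ! LACP negotiation
!
interface Port-channel1
 switchport mode trunk
```

Because both physical links are active members of one port-channel, a chassis or link failure is handled by EtherChannel rather than by a spanning-tree reconvergence.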
Thanks for your reply. Here are my answers/inquiries.
is your data centre going to be "real" servers, or lots of VMware?
- Yes, my data center will have a lot of VMware. Can I know how VMotion works?
Will you be adding in features like WAAS or load balancing?
-We have WAAS right now but I haven't used it. Maybe you can suggest on how to put it on network correctly.
If it is a real data centre, you need to be looking at 6500s as a minimum.
- So, you mean to say that a 6500 would be the minimum, instead of using the 3560 and 3750?
stacking is good - stackwise on 3750s, VSS on 6500. vPC on Nexus.
- How does the performance differ between StackWise on the 3750, VSS on the 6500, and vPC on the Nexus?
If doing lots of virtualisation, remember that you may want decent mobility of a VM. That could mean a VM popping up anywhere. That would rule out the use of L3.
- Can I know what is meant by a VM popping up anywhere, and how this affects the use of L3?
No offense, but this sounds like quite a job, for which you'd need a consultant on hand. The number of your questions and the breadth of the inquiry suggest to me that you will end up in big trouble if you do not have a professional consultant on hand.
This forum of volunteers can do a lot of helping, but in the end we cannot be online full-time and assist with the inevitable problems you will encounter in such a big project.
I agree with Ingolf on this. Designing a data centre is not a trivial thing at all, and just answering the questions in your last post could take about 10 pages!
I posted a link in my original reply to Cisco's design docs for data centres, and they cover VMware in these docs.
Again, no offense intended, but from the type of questions you are asking it's clear you should either:
1) do a lot of reading up - i.e. see the design link, or
2) as Ingolf suggests, hire a network consultant that you can work alongside and learn from.
I am now in agreement with the others - GET HELP!
I will briefly summarise a few bits though.
VMotion basically allows you to move a virtual machine from one physical server to another. That means any physical system that may need to support a particular VM needs to have the VLAN(s) for that VM trunked to it. It reduces the chances of being able to use L3 to the access layer. This is linked to the comment about a VM popping up anywhere.
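As a rough illustration of what that trunking requirement means on the switch side, a hypothetical ESX host-facing port on a 3560/3750 might look like this (VLAN IDs and the interface number are examples only):

```
! Every switch port that connects an ESX host which might receive a
! VM via VMotion must carry all of that VM's VLANs as an 802.1Q trunk.
interface GigabitEthernet1/0/10
 description ESX host uplink
 switchport trunk encapsulation dot1q      ! required on 3560/3750
 switchport mode trunk
 switchport trunk allowed vlan 10,20,100   ! VM VLANs + VMkernel VLAN
 spanning-tree portfast trunk              ! host port, skip listening/learning
```

The need to span the same VLANs across every candidate host is exactly why a routed (L3) access layer becomes awkward: the VM's subnet would terminate at one access switch and could not "follow" the VM elsewhere.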
There are lots of options for WAAS - you can insert it inline, you can use WCCP, you can use PBR, or you can use an ACE to intercept traffic and aim it at the WAE. All of these are design decisions that you need to make based on what services you are offering.
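For the WCCP option, a minimal interception sketch on the router between the LAN and the WAN could look like the following (interface names and the direction of redirection are illustrative; check against your own topology):

```
! WCCPv2 interception for WAAS uses service groups 61 and 62:
! 61 catches client-to-server traffic, 62 catches the return direction.
ip wccp 61
ip wccp 62
!
interface Vlan10
 description LAN-facing interface
 ip wccp 61 redirect in     ! redirect traffic arriving from clients
!
interface Serial0/0
 description WAN-facing interface
 ip wccp 62 redirect in     ! redirect traffic arriving from the WAN
```

The WAE registers with the router over WCCP and receives the redirected flows transparently, so no client or server reconfiguration is needed.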
If it is a real data centre, then I would not look below a 6500; the other option - the Nexus 7000 - is significantly more expensive. For a small, single-tenant data centre the 6500 is probably the better choice.
Of course, we are all thinking of a major data centre - you may be using the term to describe something that is basically a step up from what we used to call a server room.
I will repeat the most important point - get help.
"Can i know what will be the best design for data center?"
As the other posters have noted, best design depends much on what your data center needs to support.
Although I like the 3560/3750 series, the G (gig) copper port variants could be somewhat "lightweight" for a gig-bandwidth data center, especially in the core, unless we're dealing with a very small data center. The 3750G-12S model variant might be the best pick within the 3560/3750 series for a core and/or distribution role, both for its wire-speed performance and for its special SDM templates (and additional TCAM resources). BTW, the "big brother" 3560-E/3750-E series offers (about) wire-speed performance, and the 3750-E provides StackWise Plus, with 2x the bandwidth of the 3750's StackWise along with being more intelligent in how the stack ring is utilized. (NB: the 4900 series also offers wire-speed and/or high performance.)
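As a quick illustration of the SDM template point: on the 3560/3750 you pick a template that biases the TCAM toward the role you need (a reload is required for it to take effect). For example:

```
! Reallocate TCAM toward L3 unicast routes for a core/distribution role.
! "sdm prefer" has other templates (e.g. vlan, access) for other roles.
Switch(config)# sdm prefer routing
Switch(config)# end
Switch# reload     ! the new template only takes effect after a reload
Switch# show sdm prefer    ! verify the active template afterwards
```

On a switch acting purely as an L2 access switch you would typically leave the default (or a VLAN-biased) template instead.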
"Planning to have 6500 in the future."
Such can be a very suitable platform for core and/or distribution; however that's assuming hardware is properly selected (i.e. the best match of supervisor[s] and line cards for your requirements).
"Btw, what is the best solution in campus enterprise, is it L3 access mode or traditional (Core-Dist)L3 and (Dist-Access)L2?"
The current design vogue is L3 to the edge, but I don't think its advantages always outweigh the additional cost; here too, much depends on requirements.
BTW, I believe a "traditional" design, e.g. 3-tier, somewhat overlooks the capacity of current-generation L3 and L2 switches. So once again, depending on your requirements, there might be interesting design possibilities. That is, don't lock yourself into a traditional design approach just because it's traditional, but consider a design, whether traditional or not, that serves your data center requirements.
Also, as the other posters have noted, these forums are not really the place to assist someone in designing a data center; if you need additional guidance, you would likely be better served by contracting for it.
There is a difference between data center design and enterprise campus design.
For enterprise campus design, there are different approaches to achieve what you are looking for. What hardware is in use? What is the total throughput you are looking for? What is the number of users at the access layer? Do you require a fully redundant scenario? Do you require rapid convergence? Do you have QoS requirements?
As for the L3 and L2 scenarios, Cisco has different approaches, and each has its own objectives. One option is to run L3 from the access layer up to the core layer; the second option is to have an L2 access layer, leaving L3 between the distribution and core. Each of those implementations has its own advantages.
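A side-by-side sketch of what those two options mean for an access-switch uplink (interface numbers, VLAN IDs, and addresses are examples only):

```
! Option 1 - routed (L3) access: the uplink is a routed point-to-point
! link, so STP plays no role between access and distribution.
interface GigabitEthernet1/0/49
 description Routed uplink to distribution
 no switchport
 ip address 10.1.1.1 255.255.255.252

! Option 2 - switched (L2) access: the uplink is an 802.1Q trunk; the
! default gateways (SVIs with HSRP/VRRP) live on the distribution pair,
! and STP manages the redundant paths.
interface GigabitEthernet1/0/49
 description Trunk uplink to distribution
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 10,20
```

Roughly speaking, the routed-access option gives faster, ECMP-based convergence but pins each VLAN to one access switch, while the L2 option lets a VLAN span multiple access switches at the cost of relying on STP and FHRP behaviour.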
I highly recommend looking at Cisco Campus Design Guide for more details.