Recently, someone I know was tasked with creating a design for a next generation server farm at a new data center. He provided a drawing that depicted a switched access layer, a distribution layer and a core -- the typical Cisco hierarchical model.
The parts I thought were strange were the following:
1. No routed connection between the pair of distribution layer switches. Instead, he shows only a 40G L2 EtherChannel between them.
2. No L3/routed connection between the pair of core switches. Instead, he shows only an 80G L2 EtherChannel between them.
3. He also extended the switching domain all the way up to the core with 40G L2 trunks that are dual-homed from each distribution switch to each of a pair of core switches.
4. No L3 routed connections at all between the distribution layer and the core.
This design approach looks like it violates some very basic internetworking design principles. Am I wrong?
Attached is the drawing he provided. Where it says "network core", he provides a separate drawing that shows two 6509s connected to each other with an 80-G etherchannel.
In a data center, you are better off keeping an L2 connection between the distribution switches. Some applications might need L2 adjacency to work, and if a server farm is split across many access switches, you don't want to end up with a split network. It can also be useful if you plan on using some service modules.
I would invite you to read the Cisco data center design guide 2.1.
For the same reason, if you plan on having many distribution switches, you might need L2 between them. It all depends on your applications.
Please rate all helpful posts.
> This design approach looks like it violates some very basic internetworking design principles. Am I wrong?
I don't see how this violates the design principles. While the drawing does not mention any L3 VLANs, it is very likely the distribution and core switches will have plenty of them.
I can picture the L2 inter-switch links being trunked, as opposed to L3 inter-switch links, but with SVIs representing the L3 subnetting.
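Just to illustrate what I mean (the VLAN number and addresses here are made up, not taken from the drawing), the physical links can stay L2 trunks while routing still happens on the switch via an SVI:

```
! Hypothetical sketch: inter-switch link stays an L2 trunk...
interface Port-channel1
 switchport
 switchport mode trunk
!
! ...while L3 is provided by an SVI instead of a routed port
interface Vlan100
 description Hypothetical routed VLAN between switches
 ip address 10.1.100.2 255.255.255.0
```

So the absence of routed ports in the diagram does not automatically mean the absence of routing.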
I believe this diagram should be accompanied by another diagram that explains how the routing is going to take place.
This can't be something to present to a customer as a final draft.
That's just my point: there are no L3 connections shown -- at all. And I can't read this guy's mind, nor can the client, so we have no way of knowing what he's thinking. All we have to go on is the diagram he has provided.
There should be routed connections between each of the distribution switches as well as between the core switches.
The distribution switches should also be dual-homed to the core switches with ROUTED connections, not L2 trunks. Why extend the switched environment all the way up to the core? There should be no L2 connections linking the distribution and core switches. This would create a huge L2-loop/STP domain with no L3 isolation.
All this is basic but it is not shown. That is why I say it violates basic design principles.
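For comparison, a routed uplink between a distribution switch and a core switch would look something like this (the interface and the /30 subnet are hypothetical, just to show the idea):

```
! Hypothetical routed (L3) uplink from distro to core:
! no switchport = no trunk, no STP, no possibility of an L2 loop
interface TenGigabitEthernet1/1
 description Routed uplink to Core-1 (example addressing)
 no switchport
 ip address 10.0.0.1 255.255.255.252
```

With links like these, the failure domain at the core is bounded by routing, not spanning tree.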
If this is the final draft (not a work in progress), then this drawing does not meet any design guideline.
Not only is the Layer 3 design missing, but the L2 allocation as well. Is he planning to run everything on VLAN 1?
I want to give him the benefit of the doubt; this person must have another drawing with more information.
The purpose of this diagram was to show the general architecture and design, not specific configurations. So the fact that there are no VLANs shown, or what mode of STP is going to be running, is OK. We know they will be there in the future.
This diagram was to show switch hardware, modules, and the connections between them.
My problem is not just with what he does NOT show, but with what he DOES show, like L2 trunks between the distribution and core layers. That is just dead wrong. So another drawing lurking somewhere does not solve the problem.
I don't understand how it can be dead wrong. If you had 6500s running in hybrid mode, that's how the switches would interconnect (L2 trunks).
Many companies, even when using Native IOS, still implement L2 inter-switch links, because that's the design they are used to. BTW, I'm referring to large companies, not just small shops.
I agree with Edison on this. There is nothing inherently "wrong" with this design as such, although without more information it is difficult to say exactly what is being proposed.
The L2 40Gbps channel between the distribution switches leads me to believe that the L3 VLAN interfaces for the server VLANs will be on the distribution switches. Now, if you want to run HSRP between the 2 distro switches, then you need an L2 channel between them, unless you want HSRP to traverse the access layer.
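Something along these lines on distro switch 1 (VLAN, addresses and priorities are invented for the example), with the HSRP hellos riding the L2 channel between the two distro switches:

```
! Hypothetical HSRP setup for a server VLAN on distro switch 1;
! distro switch 2 would run the same group with a lower priority
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1
 standby 10 priority 110
 standby 10 preempt
```

If the inter-distro link were L3-only, those hellos would have to go down through the access switches instead, which is exactly what you want to avoid.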
As for L2 to the core, well, Cisco has changed design recommendations a few times on this. It used to be "switch in the core for speed," although with the advent of L3 switches that no longer holds as true. But to say it is "dead wrong" may be overstating it a bit :). It would be fair to question his choice of L2, but there may be very good reasons for it.
It's also not clear from the Visio what the data centre core consists of and what else is attached to it. I would investigate the possibility of using L3 links to the core, which would allow automatic load-balancing across the uplinks and would constrain STP to the distro/access layers.
Dominic also makes a valid point in that having the access layer connected to the distro layer with L2 links allows for future use of service modules in the distro switches.
IMHO, extending the switched environment up to the core is absolutely preposterous these days. Forgive me for being so uncompromising about this. A switched core was the recommendation years ago, before the advent of CEF and dCEF. Switching was faster than routing back then, but not anymore.
In fact, the trend now is to mitigate and minimize the switched environment even further by deploying EIGRP or OSPF at the access layer (L3 isolation) to create a routed access layer.
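In a routed-access design, each access switch's uplinks become L3 and the access switch runs the IGP itself, so STP never leaves the box. A rough sketch of what that looks like on an access switch (interface, process number, and addressing all hypothetical):

```
! Hypothetical routed access-layer switch: the uplink is L3,
! so the STP domain ends at this switch
interface TenGigabitEthernet1/1
 description L3 uplink to distribution (example addressing)
 no switchport
 ip address 10.0.1.2 255.255.255.252
!
router ospf 1
 network 10.0.1.0 0.0.0.3 area 0
 passive-interface default
 no passive-interface TenGigabitEthernet1/1
```

EIGRP at the access layer would achieve the same isolation with an equivalent configuration.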
Therefore, linking the distribution layer with the core using L2 trunks is, to me, completely senseless and dangerous. An L2 loop could wipe out the entire data center, as opposed to just isolating the outage to the access layer.
As for the L2 trunks between distro switches 1 and 2, of course they make perfect sense. They must exist. My problem was not that they exist; it was that the routed connections between distro 1 and 2 do not exist. That is a basic requirement.
As for the L2 connections between core 1 and 2, they should not exist.
I think this is a good discussion...
When creating a network design, the number one goal is meeting the customer's needs and requirements.
We don't know this customer's requirements, so stating that it is dead wrong can be a bit dangerous.
I agree -- I don't see how this violates the design principles. While the drawing does not mention any L3 VLANs, it is very likely the distribution and core switches will have plenty of them.
I believe another diagram should be provided that explains how the routing is going to take place.
Give him the benefit to explain this design in more detail.
Why are you using 6 kW and 4 kW power supplies in a data center? Any new type of PoE modules? ;) And how do you get 40G out of 4 ports on a 6708 if it is oversubscribed on the fabric by a factor of two? :(
The distribution layer is oversubscribed. "80G etherchannel" is a misnomer -- you can't get 80G out of a 40G fabric.
One last point: let's just hope you never need to add any additional modules to your access layer :) All the 6506E slots have been filled up. It's odd that 6509Es were not used here -- three extra slots, almost no additional cost, and very little additional data center space required.
I didn't create this design. Scroll up so you can get the whole picture. I agree with you, by the way... I have a lot of problems with the design...
Hi, could you upload the drawing in an older Visio format? This version may not be viewable to everyone (as in my case).
Without having seen the design drawing, I can imagine a very valid reason to have L2 links between core and distribution: it provides the option to channel interfaces using EtherChannel (and from the discussion I understand this is exactly what is done).
By using EtherChannel and then limiting the L2 (EtherChanneled) trunk to just one VLAN (an inter-route/backbone VLAN that carries only routing-protocol and routed traffic), you get more bandwidth and still effectively have an L3 link. Without this approach you would have multiple L3 interfaces that all participate in the routing protocol and have the same cost, so traffic would load-balance across them. That could mean a longer convergence time than the L2 design with EtherChannel and a limited VLAN set.
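To make that concrete (the port-channel number, VLAN 999, and the addressing are my own invented example, not anything from the drawing):

```
! Hypothetical: the channel trunks only VLAN 999, a dedicated
! backbone VLAN whose SVI carries the routing protocol traffic
interface Port-channel10
 switchport
 switchport mode trunk
 switchport trunk allowed vlan 999
!
interface Vlan999
 description Backbone/inter-route VLAN (example addressing)
 ip address 10.0.99.1 255.255.255.0
```

Pruned to a single VLAN like this, the trunk behaves much like a single fat routed link, with one IGP adjacency instead of one per physical interface.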
If the intent is to have a full L2 trunk carrying all VLANs, I do agree this violates the basic design rules.
I do agree with the other repliers that without additional info it is impossible to make a good judgement; thus, I think stating the design is dead wrong is jumping to conclusions.
I would request more detailed info before sending this design to the bin.
Hope this helps,