Cisco Support Community

Datacentre design

Guys, what is the recommended design between two data centres? Should we span Layer 2, or should Layer 3 be used? Secondly, how about the DMZ and firewalls? I need an opinion, as it's a hot topic these days.


Datacentre design


Data centre applications typically require Layer 2 extension, and this is recommended when you have a definitive backup data centre.

So interconnecting two data centres with an extended Layer 2 domain is the usual recommendation; it gives better support for applications that need high availability and mobility.
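As a minimal sketch (the port-channel number, interface roles, and VLAN range are hypothetical), the classic way to do this over dark fibre or a point-to-point pseudowire is simply to trunk the extended VLANs on the DC edge switches at both ends:

    ! DC edge switch, same at both sites (illustrative values)
    vlan 100-110
    interface port-channel10
      description DCI trunk to remote data centre
      switchport
      switchport mode trunk
      switchport trunk allowed vlan 100-110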

I didn't get your second question about the DMZ and firewalls. Could you elaborate a bit more?



Datacentre design

Your DMZ will host servers that are public facing, and you can use a firewall to separate these zones. Are you looking for some specific recommendations?



Datacentre design


The majority of setups span Layer 2 across multiple data centres, as Mohamed said. The main reason I have seen is application requirements (for example, one server in DC-1 and another in DC-2, basically clustered); this provides a kind of HA against both physical box failure and site failure.

As far as the DMZ is concerned, it really depends on the requirement, mostly on whether you have public-facing servers. You can place the front-end servers within a segregated zone (sometimes called a firewall complex) and protect them even from corporate LAN users by using out-of-band (OOB) DMZ management. This approach helps you secure your front-end environment against security threats from your internal network as well.

Hope this helps.


Re: Datacentre design

Hi there,

What edge devices will you be building your Data Center Interconnect on? Nexus, Catalyst 6500, or something else? Or do you have a service provider offering for Layer 2 extension (e.g. VPLS or EoMPLS)?

Would you like any design guidance on the product and service selection?



Sent from Cisco Technical Support iPhone App

Re: Datacentre design

Do you have high availability requirements or "mission critical" applications? What other business requirements do you have in terms of DR and continuity?

I tend to steer clear of layer 2 bridging between data centres - I have seen some horrific failures (including one that incurred a $1+ million SLA penalty) and there are plenty of reasons not to do it, including:

  • Vendors telling their customers to limit broadcast domains for over 20 years - why has it suddenly changed?
  • Your new super large broadcast domain could be redefined as a super large failure domain - an event in one DC can bring both crashing down
  • Traffic tromboning problems
  • Split brain problems at the services layer (firewalls, load balancing etc)

I am a firm believer in designing environments that can support proper application architecture that can scale out, rather than relying on layer 2 clustering (or layer 2 guest migrations). This method involves disparate sites connected at layer 3 and using application delivery controllers (local and global load balancing) to provide your redundancy. I personally cannot sell a customer a solution that involves the word "high availability" or "mission critical" if it involves layer 2 sprawl between data centres, and only poorly designed applications require it (did you know even Windows Server 2008 supports clustering across layer 3 routed domains??).

If you *must* do layer 2 interconnections between data centres for whatever reason, the OTV technology found on the Nexus 7000 is by far the most rational and sane way of doing it (in my opinion, of course).
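For anyone curious what that looks like, here is a minimal OTV sketch for NX-OS on the Nexus 7000; the join interface, multicast groups, site VLAN/identifier, and extended VLAN range are illustrative only and have to be adapted to your own environment:

    feature otv
    otv site-vlan 99                    ! VLAN used for OTV site adjacency
    otv site-identifier 0x1             ! must be unique per data centre
    interface Overlay1
      otv join-interface Ethernet2/1    ! L3 uplink towards the DCI core
      otv control-group 239.1.1.1       ! multicast group for the OTV control plane
      otv data-group 232.1.1.0/28       ! SSM range for multicast data traffic
      otv extend-vlan 100-110           ! VLANs stretched between the DCs
      no shutdown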

Re: Datacentre design

Just to add to the above posts:

OTV is a good option for LAN extension and data centre interconnect; however, you need to take the WAN/MAN connectivity into consideration from an L3 point of view, as you will have asymmetrical routing issues once you start moving devices between the DCs, for example with vMotion.

One of the ways that can help overcome this issue is LISP - a rough sketch is below.
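Just to give a feel for it, a very rough LISP sketch on NX-OS (the EID prefix, locator, and map-server/map-resolver addresses are placeholders, and a real host-mobility design needs more than this):

    feature lisp
    ip lisp itr-etr                     ! this node acts as both ITR and ETR
    ip lisp database-mapping 10.1.0.0/16 192.0.2.1 priority 1 weight 100
    ip lisp itr map-resolver 203.0.113.10
    ip lisp etr map-server 203.0.113.10 key example-key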

Hope this helps.

Re: Datacentre design

Good insight, Nikolas. I agree with you. Personally, if clustering is required, I would keep it intra-site and use a load balancer inter-site for HA.

Spanning L2 across DCs has become a hot topic these days, and although many justifications are given, I personally prefer to stick with L3 as well. As you mentioned, OTV is by far the most rational option as it relies on IP. Just wanted to let you know that even the 6500's Sup 2T offers OTV - of course it needs 6800/6900 line cards and DFC4, I believe.

Large L2 domains are a pain in the butt. They also don't offer the flexibility that L3 offers, and services like VPLS need an established MPLS infrastructure, hence more dependencies.

Would love to hear more on this topic

To the original poster: in regard to the DMZ, you can have a couple of them, split as below (a firewall sketch follows the list):

Public DMZ: all your web servers and public-facing stuff

Private DMZ: your internal stuff

Management DMZ: your jump hosts, etc.

Shared DMZ: where you share resources with extranet partners, etc.

Services DMZ: where you have your antivirus servers, Citrix, etc.
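As a rough illustration of that kind of segmentation on an ASA (the interface numbers, zone names, security levels, and addresses are made up for the example):

    interface GigabitEthernet0/1
     nameif public-dmz
     security-level 50
     ip address 192.0.2.1 255.255.255.0
    !
    interface GigabitEthernet0/2
     nameif mgmt-dmz
     security-level 80
     ip address 198.51.100.1 255.255.255.0
    !
    ! allow only HTTPS from the outside to the public web server
    access-list OUTSIDE_IN extended permit tcp any host 192.0.2.10 eq https
    access-group OUTSIDE_IN in interface outside

Each zone then gets its own interface (or sub-interface) and policy, so a compromise of a public-facing server does not automatically expose the management or internal segments.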



Re: Datacentre design

Marwan - You are absolutely correct. Whilst OTV resolves some of the outbound route tromboning problems, depending on the design you will still end up with tromboning/asymmetrical routing for the inbound traffic. This presents another problem in that it can (once again, depending on the design) cause state issues in stateful devices (load balancers, firewalls, etc.). Yes, LISP is nice in theory, but why do we require these "fixes" for a problem that only exists because some salespeople believe it's a fantastic idea to do live guest migrations between sites? That is generally the main driving factor behind L2 data centre interconnects.

Kishore - Thanks for that info, I wasn't aware that OTV was available on the Sup-2T - that is awesome news.

Datacentre design

Well, OTV, LISP, and other new network mobility and DCI technologies will, I believe, be a requirement for near-future networks, with everything mobile and virtualised; that is when you start looking for smooth and flexible technologies to address both the L2 and L3 issues and requirements.

By the way, OTV is supported on the ASR1K as well.

