ASK THE EXPERT - SERVER NETWORKING DESIGN PRINCIPLES

Unanswered Question
May 18th, 2007

Welcome to the Cisco Networking Professionals Ask the Expert conversation. This is an opportunity to discuss with Cisco expert Chris O'Brien your design architectures for secure, robust server farm deployments and associated network technologies. Chris has been with Cisco Systems for six years. Prior to Cisco, he was an application developer in the health care industry. While at Cisco, he has been engaged in data center design best practices focusing on the integration of blade servers, virtual machines and network services. Currently, he is developing data center solutions in the context of ISV application environments.

Remember to use the rating system to let Chris know if you have received an adequate response.

Chris might not be able to answer each question due to the volume expected during this event. Our moderators will post many of the unanswered questions in other discussion forums shortly after the event. This event lasts through June 1, 2007. Visit this forum often to view responses to your questions and the questions of other community members.

dangal.43 Fri, 05/18/2007 - 12:55

What are the best options available for data center security and HA design?

chrobrie Mon, 05/21/2007 - 18:05

A well-designed data center must avoid any single point of failure and provide predictable traffic convergence in order to meet service level agreements. We have several design documents that expand on this design goal as well as the integration of security throughout the network. I would begin by reviewing the Solution Reference Network Design Guides (SRNDs) at www.cisco.com/go/srnd. Within the Data Center section you will find many documents addressing HA and security concerns, but I would highly recommend the "Data Center Infrastructure Design Guide" for Layer 2 and 3 solutions and "Integrating Security, Load Balancing and SSL Services" for HA and security services higher in the stack.

The guide titled "Server Farm Security in the Business Ready Data Center Architecture v2.1" documents DDoS mitigation, SSL and NIDS deployments in the data center. Security is a goal achieved by leveraging the network, the endpoints and sound policies at every layer in the enterprise. Thank you for your question.

Jon Marshall Fri, 05/18/2007 - 13:03

Hi Chris

Hopefully this question fits into the subject. My question is quite long so please bear with me.

We currently have a fairly standard data centre design: 2 WAN routers connecting at L3 to 2 x 6509 distribution switches, which are responsible for all inter-VLAN routing. They are connected to each other with an L2 trunk. They also contain FWSMs and a CSM-S.

The server access layer is made up of 4 6509 switches running CatOS, with all servers dual homed to a pair of the switches.

In the last year or so we have started to move to HP blade systems and we connect the systems directly into our distribution switches as per Cisco & HP recommended best practice.

This raises a couple of issues

1) Port capacity is running out on our distribution switches, as we are already using service modules and we keep needing more Ethernet ports for blade system uplinks. I have read some of the SRNDs and it seems there are 2 solutions to this:

i) Deploy another pair of 6509s alongside the distribution switches to house the service modules, and use the main distribution switches purely for Ethernet port capacity for the blade systems.

ii) Deploy another pair of distribution switches alongside the first pair.

Replace the routers with a "core" pair of 6509's and then connect the distribution 6500's with L3 links to this new pair of core switches. We could also terminate our WAN links on these new core 6500's.

You could still have access-layer 6500's connected at L2 to the distribution switches.

Option ii is definitely more scalable in that you can keep adding pairs of distribution switches if needed, but also more expensive in that you would need a pair of service modules per pair of distribution switches.

Question 1 (finally !!!)

What are enterprises doing in light of these blade systems in data centres? I know virtualization on the servers is one option, but we are not there yet as a company.

Question 2

We are looking to migrate to 2 new data centres which will be within 25 km of each other, so we have the ability to run active/active data centres.

Certain applications such as Oracle RAC require L2 adjacency between the data centres.

With the old design a layer 2 connection between the 2 data centres would be relatively easy, running from dcentre1 distribution switch pair to dcentre2 distribution switch pair.

However, with the option ii) design above it becomes more complex. If you connect your "core" switches together you are not going to have L2 adjacency between server VLANs, as the core switches are connected to the distribution switches with L3 links.

You can connect one pair of distribution switches to another in the other datacentre and this will give you L2 adjacency but then you need more connections for the other switches.

What are the possible solutions/scenarios for this?

Hope this all made sense

Jon

chrobrie Mon, 05/21/2007 - 19:02

Hi Jon, I have seen more interest in deploying Option i lately. We refer to that as a "Service Chassis" design, where integrated network services are housed in a pair of Catalyst 6500s. The main driver for this is to create more uplink capacity in the distribution switches. Option ii does not exclude the use of Option i. The Service Chassis model may still leverage the multi-tier design model you described above, where a pair of core routers provides an L3 transport fabric for multiple distribution blocks in the data center. In fact, a data center core is one of our fundamental design recommendations to provide for future DC growth.

I see most customers connecting their blade switches directly to the distribution layer. However, some have opted to use their existing access layer switches by employing the trunk failover feature available on Cisco blade switches to provide data center scale without extending their Layer 2 domains.
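For reference, here is a minimal sketch of the trunk failover (link state tracking) configuration on a Cisco blade switch. The port ranges are assumptions and will vary by chassis model:

link state track 1
!
! Uplinks toward the distribution layer form the upstream group
interface range GigabitEthernet0/17 - 20
 link state group 1 upstream
!
! Server-facing ports are brought down automatically if all upstream links fail,
! so the dual-homed server NICs fail over to the second blade switch
interface range GigabitEthernet0/1 - 16
 link state group 1 downstream

With this in place the Layer 2 domain stays contained within the blade enclosure while still providing deterministic failover for the servers.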

As for question 2, I would have a look at the "Data Centers High Availability Cluster Design Guide" (www.cisco.com/go/srnd, under Data Center). This document discusses the extension of clusters and provides detailed designs for LAN extension between data centers.

Wilson Samuel Tue, 05/22/2007 - 10:23

Hi Jon,

From my point of view, please find my answers below:

Q1. What are enterprises doing in light of these blade systems in data centres? I know virtualization on the servers is one option but we are not there yet as a company.

Ans. In my humble opinion, you should go ahead with blade servers with integrated blade switches, which would give you better utilization of space, energy (in terms of power and air conditioning) and port space (all the blade servers are interconnected within the blade chassis).

Finally, the blade chassis can be directly connected to the core 6509s with Gig modules.

However, please note one word of caution here: never, ever go with a server-brand blade switch. For example, HP BladeServers offer HP blade switches to connect all the blade servers in a chassis and then uplink.

Q.2 We are looking to migrate to 2 new data centres which will be within 25 km of each other, so we have the ability to run active/active data centres.

Certain applications such as Oracle RAC require L2 adjacency between the data centres.

With the old design a layer 2 connection between the 2 data centres would be relatively easy, running from dcentre1 distribution switch pair to dcentre2 distribution switch pair.

Ans. You may very well connect both data centers at L2 using an OC-3 or higher link with fiber GBICs, and can easily avoid the problem faced in option ii.

Hope that helps,

Please rate if it helps,

Kind Regards,

Wilson Samuel

bhedlund Tue, 05/22/2007 - 19:33

Jon,

As for Question 2 and establishing L2 adjacency:

You can certainly accomplish this over a DC-core to DC-core L3 connection using EoMPLS, L2TPv3, or even VPLS. You will be able to scale your data centers with additional distribution blocks without having to fuss with your back-end data center link.
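As a rough illustration, a port-based EoMPLS attachment on each data center core could look something like the sketch below; the addresses, interface numbers and VC ID are assumptions, not a recommendation for your environment:

! Loopback used as the pseudowire endpoint (assumed addressing)
interface Loopback0
 ip address 10.255.255.1 255.255.255.255
!
! MPLS/LDP runs on the core-facing link between the two data centers
interface TenGigabitEthernet1/1
 ip address 10.0.12.1 255.255.255.252
 mpls ip
!
! Attachment circuit toward the distribution block; the whole port is
! cross-connected to the remote core's loopback using VC ID 100
interface GigabitEthernet2/1
 xconnect 10.255.255.2 100 encapsulation mpls

The mirror-image configuration would go on the remote core, and the IGP only needs to carry the loopback addresses for the pseudowire to come up.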

Hope this helps.

Please give the post a quick rating. ;)

-Brad

chinnari20 Mon, 05/21/2007 - 05:24

Hi Chris,

Please give me a solution for my problem, as it is very urgent for me.

At the headquarters we bought a Cisco 2801 router, and for all 9 branch sites we bought Cisco 1841 routers. We are connecting through leased lines, and the ISP is channelizing all 9 sites' lines in their premises and providing a single leased line to HQ.

So, my question: can I connect to all 9 sites at a time from the HQ system using a VPN configuration with the Cisco 2801 router?

Please clarify my doubt.

Thanks

Phani

chrobrie Tue, 05/22/2007 - 06:42

Hi Phani,

The 2801 and 1841 platforms should support the configuration you described above. I would also like to suggest the use of application acceleration solutions such as our Wide Area Application Services (WAAS) to improve your end users' experience. Visit www.cisco.com/go/srnd, where you can reference our Wide Area Network and Branch deployment guides for best practices related to each. Thanks.
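To give you an idea of the hub side, here is a minimal GRE-over-IPsec sketch for the 2801 (one tunnel per branch, repeated for each of the 9 sites). All addresses, keys and interface names below are placeholders, not your actual values:

! Phase 1 policy and pre-shared key for branch 1
crypto isakmp policy 10
 encryption aes
 authentication pre-share
 group 2
crypto isakmp key BRANCH1KEY address 203.0.113.11
!
crypto ipsec transform-set TSET esp-aes esp-sha-hmac
!
! Encrypt only the GRE traffic to branch 1
ip access-list extended GRE-TO-BRANCH1
 permit gre host 198.51.100.1 host 203.0.113.11
!
crypto map VPNMAP 10 ipsec-isakmp
 set peer 203.0.113.11
 set transform-set TSET
 match address GRE-TO-BRANCH1
!
! GRE tunnel to branch 1; carries routed traffic and a routing protocol if desired
interface Tunnel1
 ip address 172.16.1.1 255.255.255.252
 tunnel source 198.51.100.1
 tunnel destination 203.0.113.11
!
! WAN interface toward the ISP leased line
interface FastEthernet0/0
 ip address 198.51.100.1 255.255.255.252
 crypto map VPNMAP

Each branch 1841 would mirror this with its own tunnel, peer address and crypto map entry added under VPNMAP.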

chinnari20 Wed, 05/30/2007 - 02:27

Hi Chrobrie,

We have bought routers: 1 Cisco 2801 for the central site and 9 Cisco 1841s for the 9 branch offices. As per our meeting with the ISP, they are going to provide 128 kbps leased lines for the 9 branch offices, channelize these 9 lines in their premises, and provide one 2 Mbps line to the central site. I must also connect 18 remote sites to the central-site 2801 router using dial-up or GPRS; for this we have also bought VPN client software from Cisco.

So, my questions are:

1. Can I connect all these sites at a time from the central site? If yes, how?

2. If so, which type of solution must I use?

a. IPsec VPN solution.

b. IPsec VPN with GRE tunnel

3. If possible, please provide a configuration example similar to this setup.

Please provide me a solution; it will be helpful for me. Thanks in advance.

mohammedrafiq Tue, 05/22/2007 - 08:50

Hi chris,

I have to upgrade my uplinks from 1 Gig to 10 Gig on the 6500 switches in our data centers. Will my non-E-series chassis support the 10 Gig module if I upgrade to a Sup720, Cat IOS and fan tray 2?

---

sh ver

ROM: System Bootstrap, Version 12.0(3)XE, RELEASE SOFTWARE

BOOTFLASH: MSFC Software (C6MSFC-JSV-M), Version 12.1(8b)E11, EARLY DEPLOYMENT RELEASE SOFTWARE (fc1)

W2-6509-1-msfc uptime is 3 years, 25 weeks, 3 days, 10 hours, 59 minutes

System returned to ROM by power-on

System restarted at 05:18:11 GMT Wed Nov 26 2003

System image file is "bootflash:c6msfc-jsv-mz.121-8b.E11.bin"

cisco Cat6k-MSFC (R5000) processor with 57344K/8192K bytes of memory.

Processor board ID SAD0406031D

R5000 CPU at 200Mhz, Implementation 35, Rev 2.1, 512KB L2 Cache

Last reset from power-on

Bridging software.

X.25 software, Version 3.0.0.

SuperLAT software (copyright 1990 by Meridian Technology Corp).

TN3270 Emulation software.

55 Virtual Ethernet/IEEE 802.3 interface(s)

123K bytes of non-volatile configuration memory.

4096K bytes of packet SRAM memory.

16384K bytes of Flash internal SIMM (Sector size 256K).

Configuration register is 0x102

Regards,

thomas.chen Tue, 05/22/2007 - 15:51

Chris,

One of our lines of business needs to deploy an HPC application for a compute intensive research application. How can we determine the network requirements and whether we need an Infiniband fabric, or if high speed GE/10GE will suffice?

Thanks,

Tom

chrobrie Thu, 05/24/2007 - 04:44

Hi Tom,

I would look to see what type of IPC traffic is generated by the application. Typically, if the application generates heavy IPC traffic and is sensitive to latency, an InfiniBand fabric may be your best choice. 10 Gigabit Ethernet is generally deployed in HPC environments where the application processes are loosely coupled and more independent in nature. Profiling the application would be the first step. I would also leverage the application developers, who probably have previous deployment experience you could benefit from.

Thank you for the question,

Chris

vramanaiah Sun, 05/27/2007 - 07:06

Hi,

1) We are currently migrating a data center which has a CSM to the ACE.

We used to have multiple client/server VLAN pairs in bridged mode (de facto a single context in the CSM).

Now we want to do the same with the ACE in a single context. Unfortunately, it doesn't seem to have the gateway command, unlike the CSM. This is forcing us to eat up the context licenses on the ACE. Why was this removed from the ACE, or is there a way out which I am not aware of?

2) Does the ACE support active/active failover like the FWSM? I can see datasheets talking about it, but I couldn't confirm this from the configuration guides.

3) This one is not related to the ACE, but in general I see that QoS isn't supported by the FWSM. So I am wondering how customers worldwide are deploying IPT server farms behind the FWSM and still deploying QoS. One of the solutions we are providing our customers now is to add a pair of firewalls for other apps (with oversubscription) and a pair of firewalls for IPT (without oversubscription). Is there any other way out?

chrobrie Tue, 05/29/2007 - 08:03

Hi

1) You can configure up to 8 bridged virtual interfaces (BVIs) in a single context. The mac-sticky enable interface command will ensure the proper forwarding of traffic to the appropriate gateway. A brief configuration sketch is included at the end of this reply.

2) The ACE supports the configuration of multiple fault-tolerant groups. Each of these groups contains an active and a standby context. Check out this link for a more detailed explanation: http://www.cisco.com/en/US/products/hw/modules/ps2706/products_configuration_guide_chapter09186a0080683a15.html#wp1000829

3) I have generally seen dedicated firewalls for IPT environments, which is what you are deploying. That being said, my lab is looking to deploy an IPT environment and conduct testing with Oracle and MS applications in the near future. Hopefully, I will have a more detailed answer and design for you at that time.
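As promised above, here is a minimal single-context bridged-mode sketch for the ACE; the VLAN numbers and addressing are assumptions, and access lists/service policies are omitted for brevity:

! Client-side and server-side VLANs bridged together in one bridge group
interface vlan 100
  description client-side
  bridge-group 1
  mac-sticky enable
  no shutdown
interface vlan 200
  description server-side
  bridge-group 1
  no shutdown
!
! The BVI supplies the bridge group's IP address within the context
interface bvi 1
  ip address 10.10.10.5 255.255.255.0
  no shutdown

Up to 8 such bridge groups (and their BVIs) can coexist in the same context, so the client/server VLAN pairs do not each need their own context.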

Take care, Chris

richard.loubier Tue, 05/29/2007 - 06:52

Hi,

We are currently evaluating the possibility to use different technologies to implement disaster tolerant services. Things like VMWare VMotion, distributed clusters on multiple sites, etc.

Actually, we have 3 buildings linked by dark fibers. Each site is composed of 2 stacks of Catalyst 3750s in its core. Routing is used between all the sites (no VLAN extension).

The problem is that many clustering technologies use L2 networks instead of L3. So we are asked to extend the VLANs to all the sites.

So, here are my questions:

a) What are the possibilities to extend VLANs (or use any appropriate technology) with the current hardware?

b) What would be the scalability issues (if we extend more than 1 VLAN)?

c) What would be your recommendations for future hardware and network design (is it preferable to go to an all-switching design, replace hardware with a beefier setup, plan to use a specific technology, etc.)?

d) We also use 10 Mbps LANE links with remote sites, and eventually these sites could be involved in the disaster recovery process. Is 10 Mbps usable, or do we absolutely need to upgrade these links to higher capacity before implementing any technology related to extending VLANs?

Thanks,

Richard
