HP Blade 3120G best practice

Unanswered Question
Feb 5th, 2009

Hello,

I would like to know if someone can give me some advice about blade, 3120G switch, and core switch configuration. We have 16 blades in an enclosure, with 4 NICs on each one. Each NIC is connected to one of the four 3120G switches.

At the moment, we only want to use 2 NICs to publish our servers. The two switches are in stack mode, and each one is connected with 2 Gigabit uplinks going to two Catalyst C4006 switches.

We know that we have to configure how we use each NIC, but I want to know what we need to do with the 4 uplinks. If possible, we would like to use all 4 links at the same time, using something like EtherChannel. My question is: what is really the difference between that configuration and spanning-tree? As I understand it, we would have to remove spanning-tree??

About the core switch configuration: the core switches are connected together with 2 uplinks on the SUP module.

Thank you, and sorry for my poor English!

branfarm1 Thu, 02/05/2009 - 13:06

Hi there. Hopefully I can be of some help here.

First off, you can use the NIC Teaming feature of the HP ProLiant Support Pack to configure your server NICs as either one virtual NIC with 4 connections, or two virtual NICs with 2 connections each. I use NIC teaming heavily in my installations and it works beautifully.

As for your uplinks: if I understand you right, you have 4 switches "stacked" into two switches, with each stacked switch having two uplinks, and each uplink going to one of the 2 core switches. In this case, you absolutely need spanning-tree to shut down the links that form loops. You can only use EtherChannel if all the bundled links run between the same pair of switches. If these are the new HP blade switches that support VBS (Virtual Blade Switch), you'll have to consult the VBS documentation to find out how to configure them.
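For reference, if both uplinks from a blade switch did terminate on the same core switch (or on a single VBS stack), bundling them would look roughly like this. This is a minimal sketch only; the interface numbers, channel-group number, and LACP mode are assumptions, not taken from this thread:

```
! On the blade switch: bundle two uplinks that go to the SAME
! upstream switch. EtherChannel is not valid if the two links
! land on two different, non-stacked core switches.
interface range GigabitEthernet1/0/25 - 26
 channel-group 1 mode active      ! LACP; "on" for static bundling
!
interface Port-channel1
 switchport mode trunk
```

A matching channel-group must be configured on the corresponding core-switch ports, or the bundle will not come up.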

In my installs I have two blade switches per enclosure, with one uplink to each core switch and a link between the blade switches. Mine is a classic "box" topology, and I use STP to block the link between the blade switches until one of the uplinks fails.
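In a box topology like that, a common way to make STP block the inter-switch link (rather than an uplink) is to pin the STP root at the core. A hedged sketch, where the VLAN ID and priority value are assumptions:

```
! On the primary core switch: force it to win the root election
! so the blocked port ends up on the blade-to-blade link.
spanning-tree mode rapid-pvst
spanning-tree vlan 10 priority 4096
!
! On the secondary core switch: next-best priority, so it takes
! over as root if the primary fails.
spanning-tree vlan 10 priority 8192
```

With the root fixed at the core, both uplinks stay forwarding and only the redundant link between the blade switches is held in the blocking state.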

Hope that helps!

yann.boulet Fri, 02/06/2009 - 05:33

Thank you for your reply,

Just to clarify: for the moment we only use 2 switches, "stacked". The first one is for NIC1 and the second one is for NIC2. On the server NIC side we would like fault tolerance using HP Teaming, which means that, if we don't configure anything more, the first NIC on each server will only use the first switch. On the two "stacked" switches, which become only one, I would like to load-balance the traffic of NIC1 across all 4 uplinks (2 per switch). Those switches are connected to 2 different core switches, which means: NIC1, SWITCH1, CORE SWITCH1. I want to know if it's possible to have NIC1 --> SWITCH1 or 2 --> CORE SWITCH 1 or 2 dynamically, thanks to the internal stack. The problem will be on the core switches, which cannot be stacked.

The reason I don't want to use STP is that there would be no traffic on the blocked ports, and I want to use all of the links at the same time.

I can draw something to show you the physical connections.

Thanks

branfarm1 Fri, 02/06/2009 - 07:38

A diagram would be helpful.

Also, keep in mind that STP will only block a link if it is part of a loop. If there are no loops in your topology from the L2 perspective, then STP will run without blocking any links.
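You can confirm which ports, if any, STP is actually blocking with a quick show command. A sketch, with the VLAN ID assumed:

```
! Run on each switch; ports that close a loop show a blocking
! role/state (e.g. "Altn BLK" in Rapid PVST+). If every port
! reads "Desg FWD" or "Root FWD", STP is not blocking anything.
show spanning-tree vlan 10
```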

You also might consider reviewing the Data Center Infrastructure 2.5 Design Guide (http://www.cisco.com/application/pdf/en/us/guest/netsol/ns107/c649/ccmigration_09186a008073377d.pdf). It has a lot of helpful information for planning data center topologies.
