Configuring Multiple VLANs on a Dell Blade Server with Cisco 3032 Switch Blades

Answered Question
Aug 20th, 2008

We recently purchased a Dell PowerEdge M1000e blade enclosure with four Cisco 3032 blade switches.


We have 2 4507 core switches.


The Blade Servers will have multiple NICs that may or may not be on separate VLANs. For example:


Server-1: NIC-1 will have an IP of 172.17.4.20, NIC-2 will have an IP of 172.17.5.20, and NIC-3 will have an IP of 172.17.102.5.


Server-2: All NICs will have IPs in the 172.17.100.0 255.255.252.0 network.


Server-3: NIC-1 will have an IP of 172.17.4.30, NIC-2 will have an IP of 172.17.6.00, and NIC-3 will have an IP of 172.17.102.20.


I have a total of four Cisco 3032 blade switches that I want to trunk and channel to the 4507s in a failover design, so that if I lose one of the 4507s or a 3032 I will not lose connectivity to any of the blade servers.


Would appreciate a discussion on the best design for this.


Correct Answer by branfarm1

There's a good whitepaper on Cisco blade switches and the Dell enclosure here:


http://www.cisco.com/en/US/prod/collateral/switches/ps6746/ps8742/ps8764/white_paper_c07-443792.pdf

branfarm1 Wed, 08/20/2008 - 10:27

Hi there. I use something similar, albeit with only two blade switches in a chassis.


You can pretty much think of the blade switches as standalone switches -- so draw your network diagram as if you had four 2960s (or some other access switch of your choice).


For my implementation, all of my blade switches are connected to each core switch in a standard triangle topology. I use STP priorities to ensure that the primary core switch is the STP root for each VLAN, and I disable the internal links between the blade switches. Each switch carries all of the VLANs (or at least has the ability to do so), and I trunk the switch-server links and let the NIC drivers handle the teaming and trunking.
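
To make that concrete, here's a rough sketch of the relevant IOS config. The VLAN IDs (which I've assumed here match your third octets) and the port numbers are placeholders -- check the port mapping on your own chassis:

! On the primary 4507: lowest priority wins the STP root election
spanning-tree vlan 4,5,6,100,102 priority 4096

! On the secondary 4507: next-best priority, so it takes over if the primary fails
spanning-tree vlan 4,5,6,100,102 priority 8192

! On each 3032: trunk one uplink to each core (uplink port numbers assumed)
interface GigabitEthernet0/17
 description Uplink to Core-1
 switchport mode trunk
 switchport trunk allowed vlan 4,5,6,100,102
!
interface GigabitEthernet0/18
 description Uplink to Core-2
 switchport mode trunk
 switchport trunk allowed vlan 4,5,6,100,102

! Shut down the blade-to-blade links so the topology stays a clean triangle
interface GigabitEthernet0/23
 shutdown

With the primary core as root, STP blocks one uplink on each blade switch; if the primary core or an active uplink fails, the blocked link takes over.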


This setup has provided fantastic redundancy and fault tolerance for me. Essentially, you'll end up with each server having four physical network connections. Depending on the capabilities of the NIC drivers, you can do load balancing across all of the links or just fault tolerance (where only one link is active and the others are in standby mode).


Hope that helps,


--Brandon

dohogue Wed, 08/20/2008 - 11:06

Thanks.


So let me know if I'm thinking about this correctly.


Server-1 NIC-1 will communicate over port 1 of all four 3032 switch modules


Server-1 NIC-2 will communicate over port 2 of all four 3032 switch modules


etc., etc.


Is that correct?



branfarm1 Wed, 08/20/2008 - 11:46

I believe each server NIC gets mapped to its own switch module:


Server1 - NIC 1 -- Port 1 of 3032A1

Server1 - NIC 2 -- Port 1 of 3032A2


I believe access to the B fabric requires an extra mezzanine card in each server. Once you have that, the connections will be:


Server1 - NIC 3 -- Port 1 of 3032B1

Server1 - NIC 4 -- Port 1 of 3032B2
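
As a rough sketch, the switch-side config for one of those internal server-facing ports could look like this (the interface number is an assumption -- on these blade switches the internal ports are the low-numbered interfaces -- and the allowed VLAN list should match whatever that NIC carries):

interface GigabitEthernet0/1
 description Server1 NIC 1
 switchport mode trunk
 switchport trunk allowed vlan 4,5,102
 spanning-tree portfast trunk

Trunking the internal port lets the NIC driver tag traffic for whichever VLANs that server needs, and portfast keeps the port from sitting through STP's listening and learning states whenever the server link bounces.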
