
Help with mezzanine card choice

tohoken
Level 1

I am having a hard time figuring out which mezzanine card to use in our half-width blade servers.  We currently run vSphere 4 on rack-mount servers with three four-port NICs installed, for a total of 12 interfaces available to ESX.  We do this for redundancy and increased bandwidth.  If I use the 10G mezzanine card, it looks like it only presents two 10G NICs to the ESX server.  Even though the bandwidth is great, this is a far cry from the 12 NICs we have for redundancy.  The virtual NIC card looks good because it can present up to 128 "physical" interfaces to the ESX server.  I need interfaces for my LAN traffic, my iSCSI SAN traffic, and my console ports.  Which would be my best option?

Thanks for any help you can provide.

Ken


6 Replies

jdrurycisco
Level 1

Ken,

In vSphere 4, having more than one physical NIC assigned to a vSwitch increases redundancy, but it does not increase the bandwidth available to any single VM.  If four physical 1GB NICs are assigned to a virtual switch, a VM whose traffic is pinned to one uplink is capped at 1GB even while the other NICs sit idle, because the default teaming policies balance VMs across uplinks rather than striping one VM's traffic across them.  With a 10GB NIC you won't hit that ceiling until a single workload peaks at 10GB.  VMware vSwitches provide concurrent bandwidth, not aggregate bandwidth.
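As a purely illustrative sketch (not VMware code; the uplink names and port ID are made up), the default "route based on originating virtual port ID" teaming pins each VM port to one uplink, which is why the bandwidth is concurrent rather than aggregate:

```python
# Illustrative only: mimics "route based on originating virtual port ID"
# teaming, where each VM port is pinned to a single uplink.

def pin_uplink(vm_port_id: int, uplinks: list[str]) -> str:
    """Pick the uplink for a VM port the way port-ID teaming does."""
    return uplinks[vm_port_id % len(uplinks)]

four_1g = ["vmnic0", "vmnic1", "vmnic2", "vmnic3"]   # 4 x 1GB uplinks
two_10g = ["vmnic0", "vmnic1"]                       # 2 x 10GB uplinks

# A single busy VM (port ID 7 here) always lands on one uplink, so its
# ceiling is that uplink's speed -- 1GB in the first case, 10GB in the second.
print(pin_uplink(7, four_1g))   # -> vmnic3
print(pin_uplink(7, two_10g))   # -> vmnic1
```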

From a redundancy perspective, each of your 10GB NICs on the UCS platform is pinned to the IOMs in the chassis.  Each IOM has 4 ports, so there are 8 total ports the NICs can use.  If you let the UCS system handle the port pinning, it will ride through IOM port failures and migrate the traffic to another IOM port.  Also, each IOM is connected to a different 61xx fabric interconnect, allowing an entire leg of the system to fail without disrupting traffic to your servers.  In a failure scenario you could lose a full 6120, one IOM, and three of the ports on the surviving IOM and still have an active link.

The Palo adapter can present virtual 1GB interfaces to your VMs, but it still relies on the two 10GB links to connect to the chassis, so there is no real advantage from a redundancy perspective.

On my demo system I have pulled a full leg of the system without causing problems for my ESX hosts.  After using ESX hosts with 12+ 1GB NICs, the 10GB solution is amazing.  There are considerable performance benefits for iSCSI/NFS storage and overall vMotion responsiveness.

Hope this helps.

Jeff

Jeff,

Thanks for the information.  It was helpful for me.  I question redundancy, though.  With only 2 interfaces being presented to the ESX server, I would need to use one for my LAN and one for my iSCSI SAN traffic.  In doing so I would not have redundancy or failover should an interface go down.  Am I just not getting it?

Ken,

With the 10GB solution you assign both 10GB NICs to a single vSwitch.  You can then use port groups/VLANs to segregate your iSCSI traffic onto its own isolated VLAN.  Prior to 10GB, the best practice was to isolate iSCSI traffic onto its own physical network for full bandwidth.  With 10GB, iSCSI can exist on the same physical adapter as your production traffic, using port groups/VLANs to isolate it.  I am running iSCSI storage in this configuration and have had no issues with iSCSI traffic.  With DCE I am actually getting much better iSCSI performance.
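For anyone who wants to script that layout, here is a minimal pyVmomi sketch (not from this thread; the host address, credentials, vmnic names, and VLAN ID 100 are placeholder assumptions):

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Assumed host, credentials, and adapter names -- adjust for your environment.
si = SmartConnect(host="esx01.example.com", user="root", pwd="secret",
                  sslContext=ssl._create_unverified_context())
host = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True).view[0]
netsys = host.configManager.networkSystem

# One vSwitch backed by both 10G uplinks.
vss_spec = vim.host.VirtualSwitch.Specification()
vss_spec.numPorts = 128
vss_spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic0", "vmnic1"])
netsys.AddVirtualSwitch(vswitchName="vSwitch1", spec=vss_spec)

# iSCSI stays isolated on its own VLAN (ID assumed) via a dedicated port group.
pg_spec = vim.host.PortGroup.Specification()
pg_spec.name = "iSCSI"
pg_spec.vlanId = 100
pg_spec.vswitchName = "vSwitch1"
pg_spec.policy = vim.host.NetworkPolicy()
netsys.AddPortGroup(portgrp=pg_spec)

Disconnect(si)
```

The vSwitch's NIC teaming then gives you failover between the two fabric uplinks, while the port group keeps iSCSI on its own VLAN.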

Thanks,

Jeff

stechamb
Level 1

Some thoughts to help you

1)  Do you need FC?  If so, forget Oplin/Intel and think Menlo/Qlogic/Emulex or Palo/Cisco

2)  Don't worry about redundancy; it's very well catered for with just two NICs connecting to separate redundant fabrics with a consistent configuration (e.g. same VLANs).

3)  Having fewer NICs is a Good Thing, not a Bad Thing.

4)  If you choose Menlo with the 2 x 10GbE, you will most likely create one vSwitch with two uplinks and port groups for each of your VLANs - COS, the various VM networks, VMotion, etc. (see the sketch after this list).

5)  You can only apply QoS (guarantees and other settings) to the virtual interface, not per VLAN, so if you want to get fancy then Palo is your option.

6)  Most UCS customers are quite happy with 2 x 10GbE NICs ... it's just different, and better.

:-)

Any Qs?
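To flesh out point 4, here is a hedged pyVmomi sketch of creating one port group per VLAN on that single two-uplink vSwitch; the VLAN names and IDs are invented placeholders, and `netsys` is the host's network system obtained as in the earlier sketch:

```python
from pyVmomi import vim

# Placeholder VLAN plan -- substitute your real VLAN names and IDs.
VLANS = {"COS": 10, "VMotion": 20, "VM-Prod": 30, "iSCSI": 100}

def create_port_groups(netsys, vswitch_name="vSwitch1", vlans=VLANS):
    """Create one port group per VLAN on the shared two-uplink vSwitch.

    `netsys` is a vim.host.NetworkSystem (host.configManager.networkSystem).
    """
    for name, vlan_id in vlans.items():
        spec = vim.host.PortGroup.Specification()
        spec.name = name
        spec.vlanId = vlan_id
        spec.vswitchName = vswitch_name
        spec.policy = vim.host.NetworkPolicy()
        netsys.AddPortGroup(portgrp=spec)
```

Traffic separation then happens per port group/VLAN, while both 10GbE uplinks stay available to every port group for failover.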

Steve,

Thanks for the information.  In my situation I have LAN traffic and iSCSI traffic.  My iSCSI SAN is on a physically isolated network and not accessible from the LAN.  So with the Menlo card I would have to present one interface for my LAN and one interface for my iSCSI SAN to the ESX server.  That would leave me with no failover options, wouldn't it?  Or do I just have ESX use both interfaces and let the 6120 take care of the physical connections to my LAN and SAN?  Thanks for your help.

Welcome to unified fabric.

You _could_ (but shouldn't?) use two vSwitches with one vNIC uplink each, put the iSCSI port group on just one vSwitch, and rely on CNA fabric failover in UCS for redundancy, but this doesn't improve redundancy or bandwidth over the single-vSwitch design, so why bother?

Well, you might want to bother if you connect a specific border port to an iSCSI-only network: then you can use pin groups to statically pin the iSCSI vNIC to the iSCSI border port.

Or, if you didn't do the pinning and just used the standard one-vSwitch/many-port-groups implementation, and your border ports connect to asymmetrical L2 (normal LAN on one border port, iSCSI on another), then you could put the 6100s in switch mode and let them work out where the traffic needs to go.

If you are thinking of jumbo frames, you could set this for all traffic (what's the harm?) or on a per-vNIC basis if you use the two-vSwitch implementation.
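If you do enable jumbo frames, a hedged pyVmomi sketch of raising the MTU on the vSwitch and on the iSCSI vmkernel NIC is below (the vSwitch and vmkernel device names are assumptions, and `netsys` is the host's network system as in the earlier sketches); remember the QoS system class upstream in UCS also has to permit the larger MTU.

```python
from pyVmomi import vim

def enable_jumbo_frames(netsys, vswitch_name="vSwitch1", vmk_device="vmk1"):
    """Raise the MTU to 9000 on a standard vSwitch and an iSCSI vmkernel NIC.

    `netsys` is a vim.host.NetworkSystem; the vSwitch and vmk device
    names here are placeholders for your own environment.
    """
    # Reuse the vSwitch's current spec so its uplinks and port count are
    # preserved, changing only the MTU.
    vswitch = next(v for v in netsys.networkInfo.vswitch if v.name == vswitch_name)
    spec = vswitch.spec
    spec.mtu = 9000
    netsys.UpdateVirtualSwitch(vswitchName=vswitch_name, spec=spec)

    # The vmkernel interface carrying iSCSI needs the larger MTU as well.
    nic_spec = vim.host.VirtualNic.Specification()
    nic_spec.mtu = 9000
    netsys.UpdateVirtualNic(device=vmk_device, nic=nic_spec)
```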
