
UCS/VMware build question

work
Level 1

Hi,

I am looking for some clarification on how the Palo VIC presents its vNICs, how to tell/control which vNICs map to which physical ports, and how best to configure my environment for VMware.

Hardware Summary:

B250 (full width) blades with 2 Palo VIC cards each

5108 chassis with dual FEX (dual uplinks from each FEX)

2 x 6120 fabric interconnects with 2 x 10 Gbit uplinks each to a pair of Nexus 5020s

Software Summary:

UCS 1.3.0 boot loader, 4.1(3)N2(1.2d) kernel/system version

VMware vSphere 4.1 Enterprise Plus

According to my reading, both Cisco and VMware recommend against hard bandwidth limits on the interfaces, preferring shares with burst capability instead.

As such, my desired build is as follows:

For each of the 4 x 10 Gbit interfaces available to the B250 blade, I want to have 2 vNICs (1 for the VM distributed vSwitch [6 Gbit minimum shares, 802.1q trunk], 1 for iSCSI [4 Gbit minimum shares, access port/non-802.1q trunk]). This means my Service Profile needs to define 8 vNICs in total.

Q1: How do I ensure that I get 1 'data' and 1 'iSCSI' vNIC per physical port?

Q2: How do I define a policy to set the minimum bandwidth for each vNIC?

Q3: How do I control/ensure the correct bind order so that I can script my vSphere configuration?

The VMware build will consist of a distributed vSwitch containing all 'data' vNICs, with Network I/O Control enabled and shares set to manage the allocations for the vMotion, console, and data port groups. The iSCSI vNICs will each get a dedicated VMkernel port.
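To make the intended layout concrete (and eventually scriptable, per Q3), here is a minimal illustrative Python sketch that captures the eight-vNIC plan as plain data. The vNIC names, fabric placement, and share values are placeholders of my own, not anything UCSM or vSphere consumes directly:

# Illustrative data model of the desired 8-vNIC layout (2 vNICs per 10 Gbit port).
# Names, fabric mapping, and share values are placeholders, not a UCSM/vSphere API.
PHYSICAL_PORTS = ["port-1", "port-2", "port-3", "port-4"]  # 4 x 10 Gbit on the B250


def build_vnic_plan():
    plan = []
    for i, port in enumerate(PHYSICAL_PORTS):
        fabric = "A" if i % 2 == 0 else "B"  # assumed alternating fabric placement
        plan.append({
            "name": f"data-{i}",
            "port": port,
            "fabric": fabric,
            "role": "vm-data",          # joins the distributed vSwitch
            "dot1q_trunk": True,
            "min_share_gbit": 6,
        })
        plan.append({
            "name": f"iscsi-{i}",
            "port": port,
            "fabric": fabric,
            "role": "iscsi",            # gets a dedicated VMkernel port
            "dot1q_trunk": False,       # access port, no 802.1q trunk
            "min_share_gbit": 4,
        })
    return plan


if __name__ == "__main__":
    for vnic in build_vnic_plan():
        print(vnic)

The point is just to pin down the per-port pairing and minimum shares in one place before translating them into a Service Profile and a dvSwitch build script.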

Many thanks in advance for any help.


2 Replies

adambaum1
Level 1

Our setup is not the same, but the principles may apply since we also have the VIC. To ensure that we had the ports set up correctly, I provisioned six vNICs in UCSM for each host. I assigned the first vNIC to Fabric A and the second vNIC to Fabric B (do not check the "Fabric Failover" box). Repeat for the other four vNICs. This will give you 4 vNICs per fabric. Note down the MAC addresses and which fabric they are attached to.

Next, when you are in vCenter configuring your hosts, go into Network Adapters and match up the vmnics with the MAC addresses so you know how they map out. When configuring your vSwitches, just be sure to add one vmnic from Fabric A and one vmnic from Fabric B. Do this for the Service Console, VMkernel, and virtual machine switches.

If you do this in a single-VIC environment, it will ensure that your vSwitches are attached to both fabrics. I am guessing that you can do the same thing with two VICs, since each card may show up independently in UCSM.
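If you would rather script the vmnic-to-MAC matching than eyeball it in the client, a rough sketch along these lines can dump the mapping for every host. It uses the pyVmomi Python bindings (a later tool than the vSphere 4.1 bits discussed here, so treat the tooling as an assumption), and the vCenter hostname and credentials are placeholders:

# Sketch: list vmnic -> MAC mappings for every host via pyVmomi (pip install pyvmomi).
# Hostname and credentials are placeholders; tighten SSL verification for production.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience only
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name)
        for pnic in host.config.network.pnic:
            # pnic.device is the vmnic name (e.g. vmnic0); pnic.mac is its MAC
            print(f"  {pnic.device}  {pnic.mac}")
    view.DestroyView()
finally:
    Disconnect(si)

Compare that output against the MAC addresses you noted in UCSM and you know exactly which vmnic sits on which fabric before you script anything.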

As for bandwidth management and such, check out http://bradhedlund.com/. His recent posts are about UCS networking.

Adam

Thanks Adam,

I have found from VMware that the new vSphere 4.1 physical-NIC load-balancing algorithm won't work properly in a lot of situations, due to the hard-coded 75% load condition (which will rarely be reached when sharing with storage traffic).

I found a document (http://www.vmware.com/pdf/vsphere4/r41/vsp_41_config_max.pdf) which lists maximum NIC counts, and it seems to indicate a limit of 4 x 10 Gig interfaces for all of the listed network cards (notably, the Palo/enic chipset is not listed).

Having said that, Brad Hedlund's blog is simply amazing - thanks for pointing me in his direction.

Breckan.
