New UCS Deployment - What am I missing

derekdoucette
Level 1

I am getting ready to roll out a new deployment of UCS that I have quasi-inherited partway through.  I have experience with HP blades and Juniper gear, but this is my first soup-to-nuts UCS rollout.  I just want to make sure I have all my bases covered and nothing seems completely out of whack.

I have the following:

2 x Nexus 5596

2 x 6248 FI

14 x 5108 chassis, each with

8 x B200 blades

Each Nexus will have 1 x 10G to each FI, in two port channels, for 40G of server ports, with all needed VLANs trunked to the 5108s.
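
A quick sanity check on that uplink math (a minimal Python sketch; it assumes each FI terminates one port channel containing one 10G link from each 5596, i.e. a vPC on the Nexus side):

```python
# Sanity check on the uplink bandwidth described above.
# Assumption: each FI has one port channel with one 10G member
# from each Nexus 5596 (a vPC on the Nexus pair).
nexus_count = 2
fi_count = 2
link_gbps = 10

total_links = nexus_count * fi_count                 # 4 x 10G links overall
aggregate_gbps = total_links * link_gbps             # 40G toward the FIs
per_port_channel_gbps = aggregate_gbps // fi_count   # 20G per FI port channel

print(f"{total_links} links, {aggregate_gbps}G aggregate, "
      f"{per_port_channel_gbps}G per port channel")
```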

I will be setting up ESX 5.1 on each of the B200s.

Each ESX server will have 8 vNICs:

- 2 x Management

- 2 x VM Network VLANs

- 2 x vMotion

- 2 x iSCSI datastore

The iSCSI VLAN connects to a 3PAR array that is also connected to the 5596s.

Questions to consider (here is where you all come in):

- Should I utilize pinning?  How does that change the HA setup?  Can I pin certain VLANs to certain links?

- Is there a reason to use an iSCSI HBA if not booting from SAN?  If so, should I follow the hardware or software model for setting up ESX?

- What am I forgetting?

The circumstances of this inheritance allow me to talk to the sales guys a bit, but I wanted to see what is being done in the field while I cut through the tape.

4 Replies

Daniel Laden
Level 4

- Should I utilize pinning?  How does that change the HA setup?  Can I pin certain VLANs to certain links?

Let the system perform pinning dynamically.  If an uplink fails, the traffic will move to another link; this will not occur with static pinning.  A default UCS install has all VLANs on all uplinks, since it assumes the upstream network is the same everywhere.  If you need to carry certain VLANs over explicit links, you will need to review L2 Disjoint Networks.

- Is there a reason to use an iSCSI HBA if not booting from SAN?  If so, should I follow the hardware or software model for setting up ESX?

iSCSI boot is supported, but I don't believe it is supported with the HP SAN; NetApp and EMC are supported.  If you are not booting from iSCSI, iSCSI HBAs are not needed.  This should be called out in the interoperability matrix.

http://www.cisco.com/en/US/products/ps10477/prod_technical_reference_list.html

UCS 2.0(1) iSCSI Boot [updated]

https://supportforums.cisco.com/docs/DOC-18756

- What am I forgetting?

UCS has a lot of knobs; buy your sales guys/gals some treats so they will stick around.

Maybe review the Cisco Validated Designs.

http://www.cisco.com/en/US/netsol/ns743/networking_solutions_program_home.html

Keep the firmware and BIOS in the same family.

Reference the interoperability matrix frequently.  It is probably the first thing TAC will reference if there is an issue.

Thank You,

Dan Laden

Cisco PDI Data Center

Want to know more about how PDI can assist you?

http://www.youtube.com/watch?v=4BebSCuxcQU&list=PL88EB353557455BD7

http://www.cisco.com/go/pdihelpdesk

Thanks Daniel,

The pinning information is good.  I don't believe we will need to split the links; they are already segmented via VLANs.

We are not booting from iSCSI, but I was wondering whether the iSCSI vNIC provides features similar to a hardware iSCSI card, such as offloading; otherwise I'll stick with the software layout.

I took a look at the validated systems, but the only VMware one I could find had to do with ESX 3, which is very outdated.  In the past I have split the vNICs as above to make sure certain functions did not starve out others, causing errors or poor performance.  I don't know if that is as much the case today.  Would it make as much sense to just put all traffic into one big vSwitch with port groups for each function, or should I break out vmkernel-specific traffic to separate vSwitches on dedicated vNICs?

With the B200, I would only get a max of 20G to the server with the single CNA, right?  I don't know that it makes sense to try to logically separate the traffic since it all comes in over the same link, and they all show up as 10G links.  With HP blades I would set limits on the vNICs, but I don't believe I have that ability with UCS; I would have to use QoS to get the same results, correct?  If that is the case, I think it would make sense to just have 2 vNICs with 1 vSwitch and then port groups for each function where I previously had dedicated links.  That would simplify the configuration and remove components that are no longer needed with the new technologies.
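
For illustration, a minimal pyVmomi sketch of that consolidated layout (the host name, credentials, vmnic names, and VLAN IDs below are all hypothetical placeholders, not values from this deployment):

```python
# Sketch: one standard vSwitch on two UCS vNICs, with one port group
# per traffic type instead of dedicated vNIC pairs.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esxi01.example.com", user="root", pwd="password",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
host = view.view[0]                                  # first ESXi host found
netsys = host.configManager.networkSystem

# One vSwitch backed by both vNICs presented by UCS (names assumed)
vss_spec = vim.host.VirtualSwitch.Specification()
vss_spec.numPorts = 256
vss_spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic0", "vmnic1"])
netsys.AddVirtualSwitch(vswitchName="vSwitch1", spec=vss_spec)

# One port group per function; VLAN IDs are placeholders
for name, vlan in [("Management", 10), ("VM-Network", 20),
                   ("vMotion", 30), ("iSCSI", 40)]:
    pg = vim.host.PortGroup.Specification()
    pg.name = name
    pg.vlanId = vlan
    pg.vswitchName = "vSwitch1"
    pg.policy = vim.host.NetworkPolicy()
    netsys.AddPortGroup(portgrp=pg)

Disconnect(si)
```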

We are not booting from iSCSI, but I was wondering whether the iSCSI vNIC provides features similar to a hardware iSCSI card, such as offloading; otherwise I'll stick with the software layout.

-  The iSCSI HBAs are only used during the boot process.

I took a look at the validated systems, but the only VMware one I could find had to do with ESX 3, which is very outdated.

-  You may not find one for the HP SAN, but the following links cover implementing a FlexPod deployment with ESXi 5:

http://www.cisco.com/en/US/docs/unified_computing/ucs/UCS_CVDs/flexpod_50_M3.html

http://www.cisco.com/en/US/solutions/ns340/ns414/ns742/ns743/ns1050/landing_flexpod.html

In the past I have split the vNICs as above to make sure certain functions did not starve out others, causing errors or poor performance.  I don't know if that is as much the case today.  Would it make as much sense to just put all traffic into one big vSwitch with port groups for each function, or should I break out vmkernel-specific traffic to separate vSwitches on dedicated vNICs?

-  You will want to look at using the Nexus 1000V.  The Nexus 1000V 2.1(1)SV2.1(1.1a) release changed the licensing model: the Essential Edition is free of charge, with community support or optional TAC support.  With the N1K, you will want to put all your links in a port channel and use QoS queueing to ensure bandwidth is available for the various types of traffic (see the rough sketch after the link below).

http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/white_paper_c11-704041.html
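
As a rough illustration of that idea (the percentages below are hypothetical and would be tuned per environment), QoS queueing gives each traffic class a minimum guarantee of the shared port-channel bandwidth instead of a dedicated pair of vNICs:

```python
# Illustrative only: minimum-bandwidth guarantees on a shared port channel.
total_gbps = 20                      # e.g. 2 x 10G uplinks in one port channel

minimum_share_pct = {                # hypothetical class shares
    "management": 5,
    "vmotion": 20,
    "iscsi": 35,
    "vm-traffic": 40,
}

for traffic_class, pct in minimum_share_pct.items():
    guaranteed = total_gbps * pct / 100
    print(f"{traffic_class:11s} guaranteed >= {guaranteed:4.1f} Gb/s, "
          f"can burst toward {total_gbps} Gb/s when other classes are idle")
```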

With the B200, I would only get a max of 20G to the server with the single CNA, right?

-  It depends on the hardware.  A B200 M3 with a VIC 1240 or 1280 can get additional bandwidth, as can a B200 M1/M2 with a VIC 1280 (see the quick arithmetic after the spec sheet links below).

http://www.cisco.com/en/US/prod/collateral/ps10265/ps10280/B200M3_SpecSheet.pdf

http://www.cisco.com/en/US/prod/collateral/ps10265/ps10280/spec_sheet_c17-644236.pdf
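
The quick arithmetic behind those additional-bandwidth options (this assumes the IOMs and cabling expose all of the adapter lanes; confirm against the spec sheets above):

```python
# Aggregate per-blade bandwidth by adapter option: 10G per lane, two fabrics.
lanes_per_fabric = {
    "VIC 1240": 2,                    # 2 x 10G to each fabric
    "VIC 1280": 4,                    # 4 x 10G to each fabric
    "VIC 1240 + port expander": 4,    # expander adds 2 lanes per fabric
}

for adapter, lanes in lanes_per_fabric.items():
    aggregate_gbps = lanes * 10 * 2   # lanes x 10G x fabrics A and B
    print(f"{adapter:26s} -> {aggregate_gbps} Gb/s aggregate per blade")
```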

With HP blades I would set limits on the vNICs, but I don't believe I have that ability with UCS.

- UCS itself has limited QoS: it can classify CoS on a per-vNIC basis, and one can also rate-limit bandwidth on a vNIC.  The N1K is much more capable.

http://bradhedlund.com/2011/03/08/cisco-ucs-networking-videos-in-hd-updated-improved/

(See part 11.)

Hope this helps.

Thank You,

Dan Laden

Cisco PDI Data Center

Want to know more about how PDI can assist you?

http://www.youtube.com/watch?v=4BebSCuxcQU&list=PL88EB353557455BD7

http://www.cisco.com/go/pdihelpdesk

I confirmed that we have the VIC 1240 in the M3s, so in fact we have 40G (4 x 10G) of links from each chassis.  The N1K would be the Cadillac solution, I believe, following the layout by Brad H at

http://bradhedlund.com/2009/08/11/cisco-ucs-nexus-1000v-design-palo-virtual-adapter/

However, I don't know if that is going to be an option at this stage in the game.  I'll probably end up more like

http://bradhedlund.com/2009/07/05/cisco-ucs-vmware-vswitch-design-cisco-10ge-virtual-adapter/ with the exception of using iSCSI instead of FC.

Based on his design, it was 2 links for VM traffic and 2 links for ESX traffic.  I assume I would put the datastore on the same ESX links in a separate port group.  I was just wondering if people did that or broke out a few separate vNICs.  I'll be posting over on the VMware forums as well to see what they have done with their environments.
