Setup UCS Auto-Deploy using PXE and dual vNICs

ronbuchalski
Level 1

We are trying to set up Auto Deploy for UCS B200 M3 blade servers.  Our chassis is connected to dual 6248 Fabric Interconnects.  We successfully got this working when the blades were identified by MAC address on the DHCP server (Infoblox).  However, a server may attempt to PXE boot from either NIC, and therefore present two different MAC addresses, and the DHCP server cannot map two MAC addresses to one IP address.  We then had the idea of using the blade's GUID/UUID as a unique client identifier, since it is the same no matter which NIC is used.

We have tried to set this up, but have been unsuccessful.  The blade sends out its GUID using DHCP option 97, but the DHCP server only looks for the client ID in DHCP option 61.  We have not been able to determine how, or whether, the blade server can send its GUID via DHCP option 61, and Infoblox tells us that their server cannot be configured to accept DHCP option 97 as a client identifier.
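For context on why the two options are not interchangeable: option 61 (RFC 2132) is a type octet followed by an arbitrary identifier (typically the MAC), while option 97 (RFC 4578) is a type octet of 0 followed by the 16-byte machine GUID, sent by PXE firmware in SMBIOS mixed-endian byte order. A minimal sketch (generic DHCP option parsing, not UCS- or Infoblox-specific) showing how the two values decode differently:

```python
import uuid

def parse_dhcp_options(options: bytes) -> dict:
    """Parse the TLV option area of a DHCP packet (the bytes after the magic cookie)."""
    opts, i = {}, 0
    while i < len(options):
        code = options[i]
        if code == 0:        # pad option, single byte
            i += 1
            continue
        if code == 255:      # end option
            break
        length = options[i + 1]
        opts[code] = options[i + 2 : i + 2 + length]
        i += 2 + length
    return opts

def guid_from_option_97(value: bytes) -> str:
    """DHCP option 97 (RFC 4578): one type octet (0 = UUID) followed by the
    16-byte GUID.  PXE firmware sends it in SMBIOS (mixed-endian) byte
    order, which uuid.UUID(bytes_le=...) decodes."""
    if len(value) != 17 or value[0] != 0:
        raise ValueError("not a UUID-type option 97 value")
    return str(uuid.UUID(bytes_le=value[1:]))
```

A server that only inspects option 61 will see the NIC-specific MAC-based identifier and never the stable GUID carried in option 97, which is exactly the mismatch described above.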

Has anyone encountered and resolved this situation?  Surely it isn't unique, having a blade server with two NICs.

Thanks in advance for your response.

 

Ron Buchalski

 


7 Replies

fr.mueller
Level 1

Hi Ron,

I am currently setting up an Auto Deploy environment, but I only use one NIC per host for boot from SAN.  Because of the auto-failover capability in UCS, I don't think I need a second NIC.

Why do you use a second NIC?  Am I missing something, or is it just to have redundant management in vCenter?

Frank

Our current UCS deployment maps each NIC to a specific fabric interconnect, so choosing a single NIC per host, bound to a single fabric interconnect, could be a problem if that fabric has connectivity issues.

 

This is why you should set the "hardware failover" flag in the vNIC definition.  If your vNIC is attached to fabric A and A fails, you are automatically switched to fabric B.

Yes, that's an option we're going to look at.  It will require a change to our template to accommodate it, but it should solve the issue for us.

Thank you,

-rb

 

I think the workaround is known, but the main question is why the UUID/GUID approach is not working, or how to make it work.

vvvinayak
Level 1

If one uses the "hardware failover" flag, then we may have to use a single NIC for management.  In that situation, vSphere will show a warning (the yellow exclamation) stating that management has no redundancy.

Why not use multiple vNICs?

One for Auto Deploy PXE boot, connected e.g. to fabric A, with the failover flag set

Two vNICs for management, one per fabric, without the failover flag set

I also see many customers using the standard vSwitch for management / vMotion / storage, and a DVS or Nexus 1000V for general VM data traffic.
