With Michael Petrinovic
Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about the Cisco Unified Computing System with Cisco expert Michael Petrinovic. Any configuration or troubleshooting question can be asked.
Michael Petrinovic is a customer support engineer in the Customer Advanced Engineering group at Cisco. He supports data center and unified computing solutions through product testing and early field trials of pre-release hardware and software. His support of advanced and emerging technologies provides valuable product direction, quality, support readiness, and knowledge to Technical Services. Petrinovic is currently focused on the Cisco Unified Computing System and Cisco Nexus 1000V. In his five years at Cisco, he worked in the Cisco Technical Assistance Center supporting Cisco routing and switching architectures before moving into data center and server virtualization technologies. He holds CCIE certification #25330 in Routing & Switching.
Remember to use the rating system to let Michael know if you have received an adequate response.
Michael might not be able to answer each question due to the volume expected during this event. Remember that you can continue the conversation on the Data Center sub-community discussion forum shortly after the event. This event lasts through August 10, 2012. Visit this forum often to view responses to your questions and the questions of other community members.
I don't know if this is the right venue to reach you, but I have a question about UCS operation. I read on a blog that for each vEth port on Fabric Interconnect A there is a replica of this port on Fabric Interconnect B, so when there is a fabric failure on the A side, all ports on A will be started on B without the server noticing it. Is this true? If yes, is the reverse also true (for every active vEth port on Fabric Interconnect B, there is a passive replica on A)?
Thanks for the great question. An important part of troubleshooting is understanding what is and isn't normal. Obviously, to do that, you need to understand how the system works.
In terms of the replica vEth port, this depends on whether you have configured the vNIC within the service profile to utilize fabric failover. Assuming that you have enabled this feature on the vNIC interface, then yes, your understanding is correct: there is a passive interface on the opposite Fabric Interconnect. If the Fabric Interconnect through which the active interface is connected fails (for whatever reason), the vNIC fails over to the other Fabric Interconnect, and that pre-provisioned passive interface becomes active. This is true for both A to B and B to A.
Below is sample output where you can see the pre-provisioned interface on a test service profile I created.
I only enabled fabric failover on vNIC1. You can see that vNIC1 is provisioned on both Fabric A and Fabric B. However, vNIC1 on Fabric A (identified as vEth808) has a state of Active/Primary, while vNIC1 on Fabric B (identified as vEth809) has a state of Passive/Backup.
I didn't configure this feature on vNIC2. As you can see, it is only provisioned on Fabric B and has a status of No Protection/Unprotected.
A few things to note in relation to fabric failover:
1) Fabric failover is only available for vNIC interfaces. It is NOT supported with vHBA interfaces.
2) It is ONLY supported on the Cisco Virtual Interface Cards and the Cisco Menlo adapters (M71KR).
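If you want to double-check the failover state from the command line, you can do so from the Fabric Interconnect's NX-OS shell. Below is a sketch; the server reference 1/1 (chassis 1, slot 1) is just an example, and the exact output format can vary between UCSM releases:

```
UCS-A# connect nxos
UCS-A(nxos)# show service-profile circuit server 1/1
```

For each vNIC, the output lists the fabric, the corresponding vEth interface, and its current state, so a vNIC with fabric failover enabled should appear on both fabrics, with one side Active/Primary and the other Passive/Backup.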
Please let me know if this answers your question, or if you have any follow-up questions.
Thanks Michael! That was of great help. I'll be back soon, because I'm investigating how UCS works and how it fits into our environment, and hopefully I can convince the decision makers to buy one.
I am receiving the error message "enm source pinning failed" when attempting to deploy Disjoint Layer 2 uplinks. Any suggestions on how to resolve this issue?
Thank you. I'm looking forward to your response.
The error message "ENM source pinning failed" means that the UCS Fabric Interconnect (running in End Host Mode) is not able to pin the particular vNIC to an uplink interface. The most common reason this error occurs is when a VLAN assigned to a vNIC is not configured or available on one of the uplink interfaces. This prevents the UCS from pinning the vNIC traffic to an uplink, and therefore produces the error you describe.
With disjoint L2 designs, you also want to ensure that:
1) Both Fabric Interconnects are operating in End Host Mode.
2) Each VLAN used by a vNIC is assigned to at least one uplink interface or port channel.
3) All VLANs carried by a given vNIC are available on the same uplink interface or port channel, since a vNIC can only be pinned to a single uplink.
Please verify that you are following the above requirements. If you continue to have issues, please provide me with the output of:
cae-syd-ca1-A(nxos)# show platform software enm internal event-history errors
1) Event:E_DEBUG, length:102, at 796222 usecs after Thu Aug 2 11:47:21 2012
 [enm_sif_state_change] [Veth819]: processing sif state 1 - no suitable interface found to pin SIF
2) Event:E_DEBUG, length:81, at 796212 usecs after Thu Aug 2 11:47:21 2012
 [enm_find_best_active_bif] [Veth819]: No suitable interface found to pin SIF
This aids in troubleshooting, as it highlights the reason for the failure. In this example, you can see that Veth819 is failing as there isn't a suitable interface found to pin the server interface (SIF) to.
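Two additional commands can help narrow this down, again from the NX-OS shell. VLAN ID 10 below is just an example; substitute one of the VLANs assigned to the failing vNIC:

```
UCS-A(nxos)# show pinning server-interfaces
UCS-A(nxos)# show platform software enm internal info vlandb id 10
```

The first command shows which uplink, if any, each vEth is currently pinned to. The second shows which uplink interfaces carry that VLAN; if no uplink is listed as a member, the VLAN has not been made available on any uplink, which is exactly the condition that produces the pinning failure.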
Further details regarding Disjoint L2:
Please let me know if this helps.
I have been trying to install an operating system to FlexFlash on a C220 M3, but when I run through the installation, I do not see the correct partition/drive. Can you kindly advise why this is happening and what the proper way to do the install is?
To begin with, let me quickly give some background into the Cisco Flexible Flash Card.
The Cisco Flexible Flash card is pre-installed with three software bundles, each on its own preconfigured virtual drive (VD). The fourth VD allows you to install an OS or embedded hypervisor. The VDs are configured as follows:
The Hypervisor (HV) VD is a pre-configured 3 GB partition on the Cisco FlexFlash SD card called “HV Hypervisor_0”. Due to the size of the partition, it is intended for ESXi installations only.
What is important to understand is that each of these four VDs can be separately enabled or hidden from the host. The default is for all VDs to be hidden. To enable the VDs and make them visible to the host, you need to access the CIMC. If you haven't set up the CIMC on your C-Series server, please follow these instructions:
Once you have gained access to the CIMC, navigate as follows: Server tab > Inventory > Storage tab > Cisco FlexFlash > Configure Operational Profile.
You will then be presented with the following screen:
You want to ensure that you check the box for HV (hypervisor) so that it is enabled and therefore visible to the host. After saving your changes, with the FlexFlash still highlighted, you can also select the "Virtual Drive Info" tab, where you should see that HV has "Host Accessible" set to "true". If not, ensure that you saved the previous changes.
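If you prefer to do this from the CIMC CLI rather than the GUI, the equivalent steps look roughly like the sketch below. The controller name FlexFlash-0 and the exact value string are assumptions on my part, so please verify the syntax against the CIMC CLI configuration guide for your firmware release:

```
server# scope chassis
server/chassis# scope flexflash FlexFlash-0
server/chassis/flexflash# scope operational-profile
server/chassis/flexflash/operational-profile# set virtual-drives-enabled "HV"
server/chassis/flexflash/operational-profile*# commit
```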
Once this has been completed, don't forget to place the hypervisor virtual drive higher in the boot priority order than the physical disks. This ensures that after you have installed your operating system, the server attempts to boot from the HV VD first, rather than from the physical HDDs.
If you have successfully enabled the HV VD, when you attempt to install an operating system (in my example, ESXi 5.0; note that in my example output I have all virtual drives enabled to illustrate what you would see from the installer), you should be able to see the HV VD partition:
You can then select this HV Hypervisor_0 partition and proceed through the normal installation.
Some additional information on Cisco's FlexFlash can be found via the following document:
Please let me know if this helps resolve your issue. If not, don't hesitate to ask follow up questions.
Is native end-to-end FCoE supported now by Cisco UCS?
Blade > FCoE > FI > FCoE > FCoE storage
Blade > FCoE > FI > FCoE > Nexus > FCoE > FCoE storage
Let me answer your question in two parts, based on the scenarios that you provided.
Of these scenarios, only the first one is currently possible. However, the second scenario will be possible as of the next major release - 2.1 - which is scheduled to be released later this year.
1) Direct Attach FCoE to Fabric Interconnect (supported)
The Fabric Interconnects support direct-attach FCoE arrays, which utilize the FCoE storage port type on the Fabric Interconnects. To configure this type of connection, please refer to the following document:
2) Northbound FCoE from Fabric Interconnect to another Nexus switch (not yet supported)
This scenario is not yet supported, as there is no FCoE available on the northbound uplink interfaces.
The technical reason is that the Fabric Interconnects don't (yet) support VE FCoE ports, whereas the N5K/N7K switches do. As a result, you are still required to use native Fibre Channel interfaces to connect to the N5K/N7Ks, which can then connect on to an FCoE storage array.
The next major release of UCS software will support FCoE northbound out of the standard Ethernet uplinks.
Hope that helps.
I have been trying to move a couple of blades to another chassis, but every time I move the blades, the OS is deleted. There is no scrub policy applied, the local disk policy is set to RAID 1, and the "Protect Configuration" feature is active.
We shut down the OS and the server safely, then swap the blades.
Can you help me with this please?
Michael might not be able to respond to your question since this event ended today. However, you can ask this question in the community. I see that you're from Venezuela, and I want to let you know that you can take advantage of our Spanish forum to ask questions in your own language as well. You might want to ask this question in the Spanish forum at
Also, we will have a live webcast in Spanish on UCS next Tuesday. This will be a great place to ask your data center questions as well. You can register at