Does anyone out there have iSCSI MPIO working successfully with the Nexus 1000v? I have followed the Cisco guide to the best of my understanding and have tried a number of other configurations without success: vSphere still shows the same number of paths as it shows targets.
The Cisco document states the following:
Before starting the procedures in this section you must know or do the following:
• You have already configured the host with one port channel that includes two or more physical NICs.
• You have already created VMware kernel NICs to access the SAN external storage.
• A VMware kernel NIC can only be pinned or assigned to one physical NIC.
• A physical NIC can have multiple VMware kernel NICs pinned or assigned to it.
What does "A VMware kernel NIC can only be pinned or assigned to one physical NIC" mean in regard to the Nexus 1000v? I know how to pin to a physical NIC with a standard vDS, but how does that work with the 1000v? The only thing related to "pinning" I could find inside the 1000v was with port channel sub-groups. I tried creating a port channel with manual sub-groups, assigning sub-group-id values to each uplink, then assigning a pinning id to my two VMkernel port profiles (and directly to the vEthernet ports as well). But that didn't seem to work for me.
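For reference, here is a minimal sketch of the manual sub-group pinning approach described above. The interface names, VLAN ID, and profile names are placeholders I've chosen for illustration, not values from the actual environment, so substitute your own:

```
! Ethernet (uplink) port profile with a manual-mode channel group
port-profile type ethernet ISCSI-UPLINK
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 100
  channel-group auto mode on sub-group manual
  no shutdown
  system vlan 100
  state enabled

! Assign each physical uplink to its own sub-group
interface Ethernet3/1
  sub-group-id 0
interface Ethernet3/2
  sub-group-id 1

! vEthernet port profile for one iSCSI VMkernel NIC,
! pinned to sub-group 0 (the second profile would use pinning id 1)
port-profile type vethernet ISCSI-A
  vmware port-group
  switchport mode access
  switchport access vlan 100
  capability iscsi-multipath
  pinning id 0
  no shutdown
  system vlan 100
  state enabled
```

Note the `capability iscsi-multipath` command on the vEthernet profile; per the Cisco documentation this is what marks the profile for iSCSI multipath use, and omitting it is a common reason path counts don't increase.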
I can ping both of the iSCSI VMkernel ports from the upstream switch and from inside the VSM, so I know Layer 3 connectivity is there. One odd thing, however, is that I only see one of the two VMkernel MAC addresses bound on the upstream switch. Both addresses show bound from inside the VSM.
In regards to the uplink pinning (manual subgroups): Are your VMKs for each iSCSI profile different VLANs?
What you'll find is that it will only pin to separate uplinks if the iSCSI port-profile used is within the same VLAN, i.e., using the same port-profile for two different iSCSI VMKs. I ran into this problem when I had to use two iSCSI VLANs / two port-profiles for my MPIO setup with a CLARiiON. It's actually a problem in the FLARE code that forced us into the setup we're in today, but it just so happens to work against the logic the 1000v uses for its iSCSI pinning.
Thank you for your response. I believe my configuration matches what you mention in your posting. I have a few more details over in the VMware community site. I think the issue may be related to the fact that our iSCSI storage (EqualLogic) only has a single IP address. For some reason, that is not an issue for iSCSI multipath with VMware standard switches, but it may be for iSCSI multipath with the Nexus 1000v.
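For completeness, on the ESXi side MPIO against a single-portal array like the EqualLogic still depends on binding each iSCSI VMkernel NIC to the software iSCSI adapter, regardless of which virtual switch backs the VMkernel ports. A sketch of that binding (adapter and vmk names below are examples, not values from this environment):

```
# Bind each iSCSI VMkernel NIC to the software iSCSI adapter
# (vmhba33, vmk1, and vmk2 are example names; substitute your own)
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Verify the bindings, then rescan so new paths are discovered
esxcli iscsi networkportal list --adapter=vmhba33
esxcli storage core adapter rescan --adapter=vmhba33
```

On ESX/ESXi 4.x (the 1000v era for this thread) the equivalent binding command was `esxcli swiscsi nic add -n vmk1 -d vmhba33`. If the bindings are missing, vSphere will show one path per target even when the upstream pinning is correct.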