UCS customer, running ESXi 5.1 with a standard vSwitch; no distributed switch nor Nexus 1000V (N1k)
NFS with multiple shares on the same NetApp filer
ESXi and NetApp are on the same VLAN and IP subnet
With the standard load-balancing setup (route based on originating virtual port ID), the VMkernel interface actually sends all traffic to a single NFS share over one outbound interface (VIC-1240)
Statement from VMware:
It is also important to understand that there is only one active pipe for the connection between the ESX server and a single storage target (LUN or mountpoint). This means that although there may be alternate connections available for failover, the bandwidth for a single datastore and the underlying storage is limited to what a single connection can provide. To leverage more available bandwidth, an ESX server has multiple connections from server to storage targets. One would need to configure multiple datastores with each datastore using separate connections between the server and the storage. This is where one often runs into the distinction between load balancing and load sharing. The configuration of traffic spread across two or more datastores configured on separate connections between the ESX server and the storage array is load sharing.
Question from the customer:
With e.g. two NFS shares on the same NetApp filer, how can load balancing be changed to make use of the second uplink interface? If the same VMkernel interface is used for both NFS shares, all the traffic flows over one interface.
We know that IP hash isn't possible; the FI doesn't support LACP.
After many discussions, the solution is the following (special thanks to Ramses Smeyers):
One vSwitch with two uplink interfaces, vnic0 and vnic1:
Create multiple vmk interfaces for NFS, each one in a different IP subnet (see the sketch after this list), with a matching interface per subnet on the NetApp side:
Int 1: 10.0.0.0/24 for NFS share no. 1
Int 2: 10.0.1.0/24 for NFS share no. 2
Int 3: 10.0.2.0/24 for NFS share no. 3
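As an illustration, the vmk interfaces could be created from the ESXi shell roughly like this. This is a minimal sketch, assuming vSwitch0 already exists with the two UCS vNICs (seen by ESXi as vmnic0/vmnic1) as uplinks; the port group names NFS-1/NFS-2/NFS-3 and the host IPs are hypothetical, not from the original setup:

    # Assumed: vSwitch0 exists with vmnic0/vmnic1 (the UCS vNICs) as uplinks
    # Create one port group per NFS subnet on the vSwitch
    esxcli network vswitch standard portgroup add --portgroup-name=NFS-1 --vswitch-name=vSwitch0
    esxcli network vswitch standard portgroup add --portgroup-name=NFS-2 --vswitch-name=vSwitch0
    esxcli network vswitch standard portgroup add --portgroup-name=NFS-3 --vswitch-name=vSwitch0

    # Create one vmk interface per port group, each in its own subnet
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name=NFS-1
    esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.0.0.11 --netmask=255.255.255.0 --type=static
    esxcli network ip interface add --interface-name=vmk2 --portgroup-name=NFS-2
    esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.0.1.11 --netmask=255.255.255.0 --type=static
    esxcli network ip interface add --interface-name=vmk3 --portgroup-name=NFS-3
    esxcli network ip interface ipv4 set --interface-name=vmk3 --ipv4=10.0.2.11 --netmask=255.255.255.0 --type=static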
On UCS-B: source port distribution (originating virtual port ID)
On NetApp: LACP/port-channel (see the sketch below)
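For completeness, the two settings might look roughly as follows. This is a sketch under the assumption of a 7-Mode NetApp filer; the interface names e0a/e0b, the ifgrp name, and the filer IPs are hypothetical:

    # ESXi: keep the default originating-virtual-port-ID load balancing on the vSwitch
    esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=portid

    # NetApp (7-Mode): LACP interface group with IP-based load balancing
    ifgrp create lacp ifgrp0 -b ip e0a e0b
    ifconfig ifgrp0 10.0.0.5 netmask 255.255.255.0
    # Additional subnets can be served with alias addresses on the same ifgrp
    ifconfig ifgrp0 alias 10.0.1.5 netmask 255.255.255.0
    ifconfig ifgrp0 alias 10.0.2.5 netmask 255.255.255.0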
On the NFS mount command on the ESXi server, you can only specify the destination IP address. ESXi automatically chooses the vmk interface that is in the same subnet as the destination (I assume here L2 adjacency between server and storage, which was a VMware requirement up to v5). Because the different vmk interfaces have different source ports (virtual port IDs), perfect distribution over the outbound interfaces vnic0 and vnic1 is achieved.
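Putting it together, each datastore is then mounted against a destination IP in a different subnet. A minimal sketch, where the filer IPs, volume paths, and datastore names are assumptions:

    # Each datastore targets a filer IP in a different subnet, so ESXi
    # picks a different vmk (and hence a different uplink) per datastore
    esxcli storage nfs add --host=10.0.0.5 --share=/vol/nfs1 --volume-name=NFS-DS1
    esxcli storage nfs add --host=10.0.1.5 --share=/vol/nfs2 --volume-name=NFS-DS2
    esxcli storage nfs add --host=10.0.2.5 --share=/vol/nfs3 --volume-name=NFS-DS3

    # Verify which vmk interface reaches each target
    vmkping -I vmk1 10.0.0.5
    vmkping -I vmk2 10.0.1.5
    vmkping -I vmk3 10.0.2.5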