Cisco Support Community

NFS Best Practice (load balancing)

I have a tricky question and need your advice.

UCS customer, using ESXi 5.1 with standard vSwitches; no distributed switch and no Nexus 1000V.

NFS with multiple shares on the same NetApp filer.

ESXi and the NetApp are on the same VLAN and IP subnet.

With the standard load-balancing policy (route based on originating virtual port ID), the VMkernel interface actually sends all traffic for a single NFS share over one outbound interface (VIC 1240).

Statement from VMware:

It is also important to understand that there is only one active pipe for the connection between the ESX server and a single storage target (LUN or mountpoint). This means that although there may be alternate connections available for failover, the bandwidth for a single datastore and the underlying storage is limited to what a single connection can provide. To leverage more available bandwidth, an ESX server has multiple connections from server to storage targets. One would need to configure multiple datastores with each datastore using separate connections between the server and the storage. This is where one often runs into the distinction between load balancing and load sharing. The configuration of traffic spread across two or more datastores configured on separate connections between the ESX server and the storage array is load sharing.

Question from the customer:

With, for example, 2 NFS shares on the same NetApp filer, how can load balancing be changed to make use of the second uplink interface? If the same VMkernel interface is used for both NFS shares, all the traffic flows over one interface.

We know that IP hash isn't possible: the Fabric Interconnect doesn't support LACP.

1 REPLY

NFS Best Practice (load balancing)

After much discussion, the solution is the following (special thanks to Ramses Smeyers):

VMware side:

One vSwitch with two uplink interfaces, vnic0 and vnic1.

Create multiple VMkernel (vmk) interfaces for NFS, each in a different IP subnet:

                vmk1: 10.0.0.0/24

                vmk2: 10.0.1.0/24

                vmk3: 10.0.2.0/24
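From the ESXi 5.1 command line, those vmk interfaces could be created roughly as follows. This is only a sketch: the port-group names and host IP addresses are illustrative, not from the thread, and vSwitch0 is assumed to be the vSwitch carrying vnic0/vnic1.

```shell
# Sketch only: port-group names and IPs are assumptions.
# Port group for the first NFS subnet on the existing standard vSwitch
esxcli network vswitch standard portgroup add --portgroup-name=NFS-A --vswitch-name=vSwitch0

# VMkernel interface in that port group, addressed in 10.0.0.0/24
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=NFS-A
esxcli network ip interface ipv4 set --interface-name=vmk1 \
    --ipv4=10.0.0.11 --netmask=255.255.255.0 --type=static

# Repeat with NFS-B/vmk2 (10.0.1.0/24) and NFS-C/vmk3 (10.0.2.0/24)
```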

Then, on the NetApp side:

                Int 1: 10.0.0.0/24 for NFS share no. 1

                Int 2: 10.0.1.0/24 for NFS share no. 2

                Int 3: 10.0.2.0/24 for NFS share no. 3

On the UCS B-Series side: source-port-based distribution.

On the NetApp side: an LACP port channel.
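If the filer runs Data ONTAP 7-Mode, the LACP port channel with one address per NFS subnet could be sketched like this. Interface-group name, member ports, and addresses are all assumptions; a clustered ONTAP system would use different commands.

```shell
# Sketch for Data ONTAP 7-Mode; names and addresses are illustrative.
# LACP interface group over two physical ports, IP-based load distribution
ifgrp create lacp ifgrp0 -b ip e0a e0b

# One address per NFS subnet on the same interface group
ifconfig ifgrp0 10.0.0.21 netmask 255.255.255.0
ifconfig ifgrp0 alias 10.0.1.21 netmask 255.255.255.0
ifconfig ifgrp0 alias 10.0.2.21 netmask 255.255.255.0
```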

In the NFS mount command on the ESX server, you can only specify the destination IP address. ESX automatically chooses the vmk interface that is in the same subnet as the destination (I assume L2 adjacency between server and storage here, which was a VMware requirement up to version 5). Because the different vmk interfaces occupy different originating virtual ports, traffic is distributed evenly over the outbound interfaces vnic0 and vnic1.
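The three mounts could then be sketched as below. The target IPs, export paths, and datastore names are illustrative; the point is that each datastore targets a different filer address, so ESXi binds each mount to the vmk interface in the matching subnet.

```shell
# Sketch only: each datastore targets a different NetApp IP, so each mount
# uses a different vmk interface (and therefore a different uplink).
esxcli storage nfs add --host=10.0.0.21 --share=/vol/nfs1 --volume-name=ds-nfs1
esxcli storage nfs add --host=10.0.1.21 --share=/vol/nfs2 --volume-name=ds-nfs2
esxcli storage nfs add --host=10.0.2.21 --share=/vol/nfs3 --volume-name=ds-nfs3
```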
