10Gbps performance issues with Nexus1000V

hleschin
Level 1

Hello,

We are currently preparing the implementation of several ESX hosts with 10 Gbps NICs. The ESX hosts are connected via 10 Gbps SFP+ short-range optics to Nexus 5548UP switches with 2x FEX C2232PP (each ESX host is connected to a different FEX, and both FEXes are connected to the same 5548UP via a 40 Gbps port channel).

The ESX hosts have the Nexus 1000V, version 4.2.1.SV1.4a, installed.

During some vMotion performance tests, the transfer rate between two ESX hosts was only 5-6 Gbps. But when we move the ESX NICs from the Nexus 1000V to a vSwitch, the transfer rate is 9-10 Gbps. No rate limiting is configured on the Nexus 1000V.

What could be the reason for that behaviour?

Regards

Hendrik

6 Replies

Robert Burns
Cisco Employee

I need some additional information:

1. What model of 10G NICs are used in your hosts, and which driver version?

2. Which uplink channeling method are you using on the 1000v? (MAC pinning, LACP port channel, etc.)

3. Which exact build of ESX?

Thanks,

Robert

Hi Robert,

1. What model of 10G NICs are used in your hosts, and which driver version?

Intel 82599EB
Driver: ixgbe
Version: 2.0.84.8.2-10vmw-NAPI
Firmware version: 0.9-3

2. Which uplink channeling method are you using on the 1000v? (MAC pinning, LACP port channel, etc.)

Currently we are testing with only one NIC per host. For production we will use 2x 10G NICs per host. In our other production Nexus 1000V environments (with dual 1 Gbps NICs per host) we use channeling via MAC pinning. It has been very stable, and we plan to use it for the 10 Gig servers as well (see the configuration sketch below).

3. Which exact build of ESX?

VMware ESXi 5.0.0 Releasebuild-474610 (3.0)
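
For illustration, a minimal MAC-pinning uplink port-profile on the Nexus 1000V looks roughly like this (the profile name and VLAN range are placeholders, not the actual configuration):

  port-profile type ethernet UPLINK-10G
    vmware port-group
    switchport mode trunk
    switchport trunk allowed vlan 10-20
    channel-group auto mode on mac-pinning   ! MAC pinning instead of an LACP channel
    no shutdown
    state enabled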

Thanks Hendrik

Hendrik,

During your tests, do you see consistent results? Is the vSwitch always 9-10 Gbps while the 1000v is doing 5-6 Gbps? If you have any data and/or test results, please share them. vMotion can be very bursty at times, and testing from within the hypervisor is not a good measure of performance (as noted by VMware). I'd like to see you test from VM to VM and see what your results are.

Tests I usually run are:

VM -> VM (same VLAN/subnet, different hosts)

VM -> VM (different VLAN/Subnet, different hosts)

Test both scenarios using the same single NIC on both a vSS and the 1000v. I find that Linux VMs and wget make a decent test since there's little overhead, but there are other options; a rough sketch follows below.
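
As a sketch of such a test (the IP address, file path, and file size are placeholders):

  # On the serving VM (assumes a web server is already exporting /var/www/html):
  dd if=/dev/zero of=/var/www/html/testfile.bin bs=1M count=10240

  # On the receiving VM, pull the file and discard it; wget reports the transfer rate:
  wget -O /dev/null http://192.168.10.11/testfile.bin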

Regards,

Robert

Robert,

we did the performance test with a large VM that uses 124 GB of RAM, and we repeated it several times. When we move the VM to the other host, we see a 40-50% longer transfer time when we use the Nexus 1000V.

These are the test results:

Transfer time with vSwitch: min. 2:05 min, max. 2:10 min

Transfer time with Nexus 1000V: min. 3:10 min, max. 3:34 min

In the vCenter screenshot I attached, you can see the difference between such a vMotion transfer via vSwitch and via Nexus 1000V. The first 3 curves show vMotion transfers via vSwitch, the last 2 curves transfers via the Nexus 1000V.

I don't see anything noticeable regarding high CPU load or memory usage on the VSMs during the vMotion. On the Nexus 5548UP I can see many TX pause frames during the transfers, but this happens for both the Nexus 1000V and the vSwitch.
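
(For reference, the pause-frame counters on the 5548UP can be checked with something like the following; the interface number is a placeholder:)

  show interface ethernet 1/1 flowcontrol   ! per-port RxPause / TxPause counters
  show interface ethernet 1/1               ! detailed counters also list Rx/Tx pause frames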

I have to open a TAC case for this issue; I will update this discussion when I get new information.

Thanks+Regards

Hendrik

Did you ever get to the bottom of this problem?

Sent from Cisco Technical Support iPhone App

We had very close contact with Cisco development about this. In the end it was a bug in the Nexus 1000V software related to the MAC pinning feature. It has been fixed since version 1.5.1.

Hendrik
