Currently we are preparing an implementation with several ESX hosts with 10 Gbps NICs. The ESX hosts are connected via 10 Gbps SFP+ short-range optics to Nexus 5548UP switches with 2x FEX C2232PP (each ESX host is connected to a different FEX; both FEX are connected to the same 5548UP via a 40 Gbps port channel).
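For reference, a 40 Gbps FEX uplink of this kind is typically built as a 4x10G fabric port channel. A minimal sketch of such a configuration on the 5548UP (the FEX number and interface range are placeholders, not our actual values):

```
! Hypothetical fabric port channel from the 5548UP to one C2232PP FEX (4x 10G = 40G)
interface port-channel100
  switchport mode fex-fabric
  fex associate 100

interface ethernet 1/1-4
  switchport mode fex-fabric
  fex associate 100
  channel-group 100
```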
On the ESX hosts, Nexus 1000V version 4.2.1.SV1.4a is installed.
During some vMotion performance tests, the transfer rate between two ESX hosts was only 5-6 Gbps. But when we move the ESX NICs from the Nexus 1000V to a vSwitch, the transfer rate is 9-10 Gbps. No rate limiting is configured on the Nexus 1000V.
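To rule out policing, these are the kinds of checks one can run on the VSM (a sketch; the port-profile name is a placeholder):

```
! On the Nexus 1000V VSM: confirm no QoS policing is attached to the uplink profile
show running-config port-profile UPLINK-10G
show policy-map
```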
Which uplink channeling method are you using on the 1000v? (MAC pinning, LACP port channel, etc.)
Currently we are testing with only one NIC per host. For production we will use 2x 10G NICs per host. In our other productive Nexus 1000V environments (with dual 1 Gbps NICs per host) we use channeling via MAC pinning. It works very stably, and we plan to use it for the 10-gig servers too.
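For context, MAC pinning on the Nexus 1000V is enabled in the Ethernet uplink port profile. A minimal sketch (profile name and VLAN IDs are placeholders):

```
! Hypothetical uplink port profile on the Nexus 1000V using MAC pinning
port-profile type ethernet UPLINK-10G
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30
  channel-group auto mode on mac-pinning
  no shutdown
  state enabled
```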
During your tests, do you see consistent results? Is the vSwitch always at 9-10 Gbps while the 1000v is doing 5-6 Gbps? If you have any data and/or test results, please share them. vMotion can be very bursty at times, and testing from within the hypervisor is not a good measure of performance (as noted by VMware). I'd like to see you test from VM to VM and see what your results are.
Tests I usually run are:
VM -> VM (same VLAN/subnet, different hosts)
VM -> VM (different VLAN/subnet, different hosts)
Test both scenarios using the same single NIC on both a vSS and the 1000v. I find using Linux VMs and wget is a decent test, as there's little overhead, but there are other options.
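As a sketch of that kind of wget test, assuming two Linux VMs where the serving VM (10.0.10.11 here, a placeholder address) already runs a web server with /var/www/html as its web root:

```
# On the serving VM: create a large test file in the web root
dd if=/dev/zero of=/var/www/html/testfile bs=1M count=10240    # ~10 GB

# On the client VM: download it, discarding the data; wget reports the throughput
wget -O /dev/null http://10.0.10.11/testfile
```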
We did the performance test with a large VM that uses 124 GB of RAM, and we repeated it several times. When we move the VM to the other host, we see a 40-50% higher transfer time when we use the N1kV.
These are the test results:
Transfer time with vSwitch: min. 2:05 min, max. 2:10 min
Transfer time with N1kV: min. 3:10 min, max. 3:34 min
In the vCenter screenshot I attached, you can see the difference between such a vMotion transfer via vSwitch and via Nexus 1000V. The first three curves show a vMotion transfer via vSwitch, the last two curves a transfer via Nexus 1000V.
I don't see anything noticeable regarding high CPU load or memory usage in the VSMs during the vMotion. On the Nexus 5548UP I can see many TX pause frames during the transfers, but this happens for both the N1kV and the vSwitch.
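For anyone reproducing this, these are the kinds of commands behind those observations (the interface ID is a placeholder):

```
! On the Nexus 5548UP: per-interface pause-frame counters
show interface ethernet 1/1 flowcontrol

! On the Nexus 1000V VSM: CPU and memory utilization
show system resources
```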
I will open a TAC case for this issue and will update this discussion when I get new information.