I have a customer having HA issues with VMware running the Nexus 1000v, and I am trying to troubleshoot some odd behavior. A cluster was installed, and later the customer, on their own, split it up within the UCS: they created another vCenter server, removed half the blades from the original vCenter and moved them to the new one, then built 2 new 1000v VSMs using the current config as a template. They ran for 60 days until the license grace period ended, so with TAC assistance they removed 6 licenses from the original VSM installation, put them back into the license pool, and sent them to TAC for rehost to the new host-id. So now they have a 2nd 1000v pair running, but still one UCS domain, and ALL VLANs are the same. They said they ran fine before the grace period expired, BUT only after installing the new license file did they enable monitoring for HA to use SRM in VMware. HA is constantly rebooting VMs due to timeouts, yet this is a very small environment: a single UCS domain, 2 FIs, 2 chassis, 6 blades per chassis, with 5 live ESXi hosts and 1 spare doing nothing. The original cluster that my colleagues and I installed continues to run to this moment with zero issues; the 2nd cluster still has what appear to be "network" issues.
Question: Nexus 1000v best practice is to use 3 VLANs for packet/control/mgmt, and mgmt is usually the same as vCenter's for ease. What about a 2nd 1000v VSM pair? Can it use the SAME VLANs for packet/control/mgmt without issues? What about multicast frames for MAC address table updates, and the real-time updates from the VSM to the VEMs: could these conflict when received by the 2nd pair? Not seeing the traffic, I don't know, and I need to hear from someone who does. If putting in a 2nd VSM pair, should the packet/control VLANs at minimum be created again, unique to the 2nd pair? And if so (I have to ask, because the customer will), why does the 1st pair still work on the same VLANs but the 2nd does not? I know it's a mess, and this is the Reader's Digest version to say the least, but I think I've got all the truth out of them now and am left with this question. I've never installed multiple VSM pairs using the same VLANs, as there's no need in most properly designed implementations.
Any input? I'm working on this right now, so assistance is appreciated while I continue to search for this type of design information.
In terms of the question on control, management, and packet: the management VLAN can easily be shared, especially if using the same management subnet to administer these VSMs. The control and packet VLANs can also be the same, but you need to ensure that each VSM pair you deploy uses a different domain ID (so the VEMs are not confused about which VSM is their parent; otherwise it leads to interesting problems). I've seen many customers use a single control/packet VLAN. That said, the official recommendation is as follows:
* We recommend using one VLAN for control traffic and a different VLAN for packet traffic.
* We recommend using a distinct VLAN for each instance of Cisco Nexus 1000V (different domains)
I'd first inspect their setups and identify the exact configuration in use by the two N1Kv deployments. If they are using the same control/packet VLAN, ensure that each deployment has a different domain ID. This is really important.
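As a rough sketch of what that looks like (the VLAN numbers and domain IDs below are made up for illustration), the `svs-domain` configuration on each pair can share VLANs as long as the domain ID differs:

```
! VSM pair #1 (illustrative values)
svs-domain
  domain id 10
  control vlan 100
  packet vlan 101
  svs mode L2

! VSM pair #2 - same control/packet VLANs can work,
! but the domain id MUST be unique per VSM pair
svs-domain
  domain id 20
  control vlan 100
  packet vlan 101
  svs mode L2
```

You can confirm what each VSM is actually using with `show svs domain` on both pairs.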
Cisco IT uses the shared control/packet/management VLAN model for all VSMs in a single L2 domain. In our largest L2 domain we currently have 8 VSM pairs with unique domain IDs sharing the same VLANs without issue. Those 8 VSM pairs are split between two different vCenters and support 300+ ESX hosts and roughly 4k active VMs.
Thanks guys, this is what I was wanting to hear. I had read about the domain IDs being tagged on control packets and made sure immediately that they had unique domain IDs, and surprisingly they do. However, I am seeing the same modules in BOTH VSM pairs when I do a show run. The VEMs listed are identical, including MAC addresses, so I am thinking they built the first one and copy-pasted the config into the 2nd pair; the first one was completed, and its registered VEMs now show up in each one. I see a conflict here, and why they have issues with traffic.
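For anyone hitting the same symptom: the copied-over VEM registrations show up as `vem` stanzas in the running config. A hedged cleanup sketch (the module number and UUID below are illustrative, not from this customer's config):

```
! On the 2nd VSM pair, compare registered modules against reality
show module
show running-config | section vem

! A stale entry copied from the other VSM's config looks like:
! vem 3
!   host vmware id 422f8c3a-0000-0000-0000-000000000001

! Remove the module mappings that belong to the other pair's VEMs
configure terminal
  no vem 3
```

Verify afterwards with `show module` on both VSM pairs; each VEM should appear in exactly one pair's output.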