vPC error on N7K after connecting them across Datacentres
Brief description. We have two pairs of N7Ks at two datacentres. These N7Ks are vPC'd and were working fine at the DCs until last week, when we connected the N7Ks to each other using some 'dark fibre'. So Nexus A in DC A connects to Nexus A in DC B, and the same for the Nexus Bs. The point-to-point connectivity is L3 and sits in a separate OSPF VRF.
Once we connected the Nexus cores to each other, everything seemed OK. Routing tables looked fine, and nothing indicated any issues.
We tried a link failover test on the Nexus Bs and straight away saw the message below in the logs on the far-end Nexus B. The peer link had not physically dropped, and when we checked the vPC everything looked OK.
We have had no subsequent issues, but have not retested yet. Any ideas why connecting an L3 link into a vPC domain and then disconnecting it would throw up this error?
In domain 1, VPC peer keep-alive receive has failed
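For anyone hitting the same message, a first-pass set of checks would look something like the sketch below. This assumes a fairly standard setup where the vPC peer keepalive runs over mgmt0 in the management VRF; adjust the VRF name if yours uses a dedicated keepalive VRF instead.

```
! Verify the keepalive source/destination, VRF, and last send/receive status
show vpc peer-keepalive

! Confirm overall vPC state and that the peer link is up
show vpc

! Check for type-1/type-2 mismatches possibly introduced by the new L3 config
show vpc consistency-parameters global

! Make sure the keepalive VRF still has a route to the peer address
! (replace "management" with your keepalive VRF if different)
show ip route vrf management
```

If the keepalive destination became reachable via the new dark-fibre path (even transiently, through the new OSPF VRF or a route leak), a failover on that path could explain a keepalive receive failure while the peer link itself stays up, so comparing the keepalive VRF's routing table before and after the test is worth doing.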