I ran into a strange situation and need help from you experts.
I have two N7Ks at the core running vPC+, and below them N5Ks, also in vPC+, with fabric extenders connected to them. N7K-1 connects to a 6500 over a Layer 3 interface, with a default route pointing toward the 6500. Because I have only one link toward the 6500 (from N7K-1), I have pointed a default route from N7K-2 to N7K-1.
The strange part I want to highlight: whenever I shut the SVI for any subnet on N7K-1, users in that subnet can no longer reach the 6500, even though the corresponding SVI on N7K-2 is up and holds the HSRP active role.
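A minimal sketch of the routing setup described above; the interface numbers and IP addresses are assumptions for illustration, not taken from the original post:

```
! N7K-1: Layer 3 uplink to the 6500 (interface and addressing assumed)
interface Ethernet1/1
  no switchport
  ip address 10.0.0.1/30

ip route 0.0.0.0/0 10.0.0.2      ! default route toward the 6500

! N7K-2: default route pointing at N7K-1 (peer address assumed)
ip route 0.0.0.0/0 192.168.1.1   ! 192.168.1.1 = N7K-1
```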
Remember that HSRP in vPC has active/active forwarding behavior: both peers forward traffic destined to the HSRP virtual MAC.
The issue is that when the SVI is up only on the second peer, that SVI is the gateway for the network, so the second switch has to do the packet forwarding.
The problem comes from the vPC loop-prevention rule: if a switch receives a packet from its vPC peer over the peer-link, and reaching the outside network would require sending that traffic back toward the same peer, the forwarding is not allowed. It becomes a chicken-and-egg dilemma.
To solve this, keep the SVI up on the peer that has the connection to the outside network (which makes sense: if SW1 goes down, your outside connectivity goes with it, and the SVI on the second peer would just be a black hole), or connect the 6500 on a vPC member port instead of an orphan port.
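The second option above (connecting the 6500 as a vPC member rather than an orphan port) could look roughly like this on both N7K peers; the port-channel, vPC, and interface numbers are assumptions for illustration:

```
! On both N7K peers: bundle the 6500-facing link into a vPC member port
interface port-channel10
  switchport
  switchport mode trunk
  vpc 10                           ! makes Po10 a vPC member port

interface Ethernet1/10
  switchport
  channel-group 10 mode active     ! LACP toward the 6500
```

With the 6500 dual-homed as a vPC member, either N7K can forward traffic to it directly, so the peer-link loop-prevention rule no longer blackholes the flow when one SVI is shut.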
Topology & Design:
Two ACI fabrics
Stretching VLANs using OTV
Both fabrics advertise BD subnets into the same routing domain
Some BDs (i.e., VLANs) are stretched, and some are not.
Endpoints can move between fabrics.
VMware Trunk Port Group is supported from ACI version 2.1
VMM integration must be configured properly
The ASA device package must be uploaded to the APIC
The ASAv version must be compatible with the ACI version and the device package version
Contents:
Topology & Design
Traffic flow within the same fabric
Endpoint moves to Fabric-2
Bounce Entry Times Out
Traffic Black-holed
Summary
Solution
Appendix
In the previous articles on ACI Automation, we used Postman/Newman a...