My setup is two servers connected via a switch (which does not understand VLANs); essentially I have only one data VLAN, 104. The whole setup was working, but then I tried upgrading to 4.2.1(4a) and now I have lost connectivity to that VLAN (VMs and the VSM mgmt0).
Maybe the problem is with the native VLAN on that trunk (here it says number 1):
I can see that you have configured the control VLAN as 5, management as 104, and packet as 10.
The control VLAN is where most of the VSM-to-VEM communication takes place; it is what allows the VSM to see the VEM (as in your output of show module).
In your initial configuration, you correctly defined the control and packet VLANs as system VLANs. However, the management VLAN should also be a system VLAN to ensure that it is always forwarding (even after a reboot of the VSM, before it comes back up), since this is an important VLAN for managing the VSM itself. This mainly matters when the VSM is hosted behind one of its own VEMs; in that scenario, if you don't define the management VLAN as a system VLAN, you can run into the traffic disruption you have experienced.
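For reference, here is a minimal sketch of what the port-profiles could look like with the management VLAN added as a system VLAN. The VLAN numbers (control 5, packet 10, management 104) come from your setup; the profile names are just examples:

port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 5,10,104
  ! control (5), packet (10), and management (104) all marked as system VLANs
  system vlan 5,10,104
  no shutdown
  state enabled

port-profile type vethernet vsm-mgmt
  vmware port-group
  switchport mode access
  switchport access vlan 104
  ! the management VLAN is marked as a system VLAN here as well
  system vlan 104
  no shutdown
  state enabled

Note that system vlan can only be added to a port-profile while it carries the VLAN in its allowed/access list, and the change should be made on both the uplink and the vEthernet profile so the VLAN forwards end to end before the VSM is up.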
Further details regarding system VLANs and their use can be found in previous posts on the forums.