I am trying to migrate to a Nexus 1000V vDS, but only VMs in a system VLAN can forward traffic. I do not want to make my voice VLAN a system VLAN, but that is the only way I can get a VM in that VLAN to work properly. I have a host with its vmk in the L3Control port group, and from the VSM, "show module" shows VEM 3 with an "ok" status. I currently have only one NIC under vDS control. My VMs using the VM_Network port group work fine and forward traffic normally, but when I put a VM in the Voice_Network port group I lose communication with it. If I add VLAN 5 as a system VLAN on my Uplink port profile, the VMs in Voice_Network work properly. My understanding is that you should not create a system VLAN for every VLAN, and should reserve them for critical management functions, so I would rather not make it one. Below is my N1KV config. The upstream switch is a 2960-X with "switchport mode trunk" configured. Am I missing something that is preventing VLAN 5 from passing over the Uplink port profile?
port-profile type ethernet Unused_Or_Quarantine_Uplink
  vmware port-group
  shutdown
  description Port-group created for Nexus1000V internal usage. Do not use.
  state enabled
port-profile type vethernet Unused_Or_Quarantine_Veth
  vmware port-group
  shutdown
  description Port-group created for Nexus1000V internal usage. Do not use.
  state enabled
port-profile type vethernet VM_Network
  vmware port-group
  switchport mode access
  switchport access vlan 1
  no shutdown
  system vlan 1
  max-ports 256
  description VLAN 1
  state enabled
port-profile type vethernet L3-control-vlan1
  capability l3control
  vmware port-group L3Control
  switchport mode access
  switchport access vlan 1
  no shutdown
  system vlan 1
  state enabled
port-profile type ethernet iSCSI-50
  vmware port-group "iSCSI Uplink"
  switchport mode trunk
  switchport trunk allowed vlan 50
  switchport trunk native vlan 50
  mtu 9000
  channel-group auto mode active
  no shutdown
  system vlan 50
  state enabled
port-profile type vethernet iSCSI-A
  vmware port-group
  switchport access vlan 50
  switchport mode access
  capability iscsi-multipath
  no shutdown
  system vlan 50
  state enabled
port-profile type vethernet iSCSI-B
  vmware port-group
  switchport access vlan 50
  switchport mode access
  capability iscsi-multipath
  no shutdown
  system vlan 50
  state enabled
port-profile type ethernet Uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 1,5
  no shutdown
  system vlan 1
  state enabled
port-profile type vethernet Voice_Network
  vmware port-group
  switchport mode access
  switchport access vlan 5
  no shutdown
  max-ports 256
  description VLAN 5
  state enabled
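For reference, when a non-system VLAN is not being forwarded, a few standard N1KV checks help narrow down whether the VLAN is actually programmed on the uplink (command names are stock VSM/VEM CLI; the module number below is just the VEM 3 from this setup):

```
! On the VSM: confirm VLAN 5 exists and is allowed on the trunking uplink
show vlan id 5
show interface trunk
show port-profile name Uplink

! On the ESXi host itself, via the VEM agent:
vemcmd show port
vemcmd show trunk
```

If "show vlan id 5" comes back empty, the VLAN was never created on the VSM, which would produce exactly this symptom even with the trunk allowed list set correctly.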
I ended up failing over to my other VSM and then did a shutdown / no shutdown on Ethernet 3/8, and it started working. I am not sure whether it was the failover or the shut/no shut that actually fixed it, but everything is working now. Thanks again for helping with this.
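For anyone hitting this later, the workaround described above amounts to the following on the VSM (interface and module numbers are the ones from this thread; "system switchover" is the standard command to fail over to the standby VSM):

```
! Fail over to the standby VSM
system switchover

! Then, from the newly active VSM, bounce the affected VEM uplink
configure terminal
 interface ethernet 3/8
  shutdown
  no shutdown
```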
Can you reproduce the issue by reloading the upstream physical switch? I have an open case with TAC linked to bug CSCuj82788. The main symptom is that the vmnic (Ethx/y) is reported as DOWN both in vCenter and in the Nexus 1000V's "show interface ethx/y" output. The consequence is the same as in your case: no VLANs are forwarded except system VLANs. However, the link appears UP in "esxcli network nic list" and on the physical switch side. A simple shut / no shut on the physical switch fixes the situation.
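To spot the link-state mismatch described above, you can compare the three views of the same link; the commands are standard ESXi, N1KV, and Catalyst IOS CLI (the Catalyst interface name below is hypothetical; use whichever port faces the host):

```
# On the ESXi host: link state as the hypervisor sees it
esxcli network nic list

! On the Nexus 1000V VSM: link state as the VSM sees it
show interface ethernet 3/8 brief

! On the upstream Catalyst, the shut / no shut workaround
configure terminal
 interface GigabitEthernet1/0/1
  shutdown
  no shutdown
```

If esxcli and the Catalyst both show the link up while the VSM shows it down, that matches the bug behavior described here.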