I configured VXLAN by the book (Cisco Nexus 1000V VXLAN Configuration Guide, Release 4.2(1)SV1(5.1)), but there is a problem.
There are two ESX hosts with four VMs (two VMs on each ESX). Each VM has one NIC, and that NIC is assigned to a port-profile configured for access to the same VXLAN bridge-domain. There is connectivity between VMs on the same ESX, but no connectivity between VMs hosted on different ESX hosts. In other words, L2 connectivity works between VMs on the same ESX but not between VMs on different ESX hosts.
The Nexus 1000V VSM is installed on a Nexus 1010 appliance and manages two VEMs through L3 control interfaces.
VSM version is 4.2(1)SV1(5.1) and VEM feature level is 4.2(1)SV1(5.1).
Bridge-domain is VXLAN-5001 with segment id 5001 and group address 220.127.116.11
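For reference, a minimal sketch of how such a bridge-domain is defined on the VSM (segment ID and group address as stated above; this assumes the segmentation feature is already enabled):

```
feature segmentation
!
bridge-domain VXLAN-5001
  segment id 5001
  group 220.127.116.11
```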
The port-profile for the VMK VXLAN interface is properly configured for access to VLAN 588 (the "transport" VLAN for VXLAN) and capability vxlan.
VLAN 588 is allowed on all uplinks on both sides (Nexus and physical switch).
The port-profile for the VMs is properly configured for access to the bridge-domain.
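A minimal sketch of the two port-profiles described above (the profile names are hypothetical; the VLAN, bridge-domain, and capability are as stated):

```
port-profile type vethernet VMK-VXLAN
  capability vxlan
  switchport mode access
  switchport access vlan 588
  vmware port-group
  no shutdown
  state enabled
!
port-profile type vethernet VM-VXLAN-5001
  switchport mode access
  switchport access bridge-domain VXLAN-5001
  vmware port-group
  no shutdown
  state enabled
```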
I created a monitor session for VLAN 588 on the upstream switch (Cisco 6513 with IOS 12.2(18)SXF14) and didn't see any multicast, unicast, or other traffic. According to the documentation, I should first see an IGMP join, then multicast, and then unicast traffic between the two VMK interfaces.
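The monitor session on the 6513 was along these lines (the destination interface here is hypothetical):

```
monitor session 1 source vlan 588 both
monitor session 1 destination interface GigabitEthernet1/1
```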
Here is MAC address table for bridge-domain VXLAN-5001:
Nexus1000V-VSM-1# sh mac address-table bridge-domain VXLAN-5001
Bridge-domain: VXLAN-5001
MAC Address     Type    Age  Port   IP Address  Mod
---------------+-------+----+------+-----------+---
0050.56a3.0009  static  0    Veth6  0.0.0.0     3
0050.56a3.000a  static  0    Veth7  0.0.0.0     3
0050.56a3.0007  static  0    Veth4  0.0.0.0     4
0050.56a3.0008  static  0    Veth5  0.0.0.0     4
Total MAC Addresses: 4
As you can see, the IP Address column shows no proper destination IP addresses (all entries are 0.0.0.0).
Good hint, but it seems that is not the problem...
The Cat ports connecting the VEMs support jumbo frames, and their MTU is set to 9216 B.
I saw that the MTU on the Ethernet interfaces of the VEMs was set to 1500 B, so I changed the uplink port-profile and set the MTU first to 1550 B and then to 9000 B (the maximum), but it still isn't working.
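The MTU change was made on the uplink (Ethernet-type) port-profile, roughly like this (the profile name is hypothetical; 9000 B is the maximum the 1000V accepts):

```
port-profile type ethernet SYSTEM-UPLINK
  mtu 9000
```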
I'm not using vCloud director, just VMware vSphere 4.1 (vCenter Server with VUM, vCenter Client and two ESX hosts).
After a little research I found something strange. I set up an SVI on the Cat in VLAN 588 (the "transport" VLAN for VXLAN), and when I ping the VMkernel interface (the one with capability vxlan) with a packet size of more than 1500 B and the DF bit set, I get no reply. My Cat ports and uplink port-profiles are configured for jumbo frames. Is it possible to change the MTU of the VMkernel interface?
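The test from the Cat looked roughly like this (the SVI address and ping target are hypothetical; older IOS releases may require the interactive extended ping instead of the one-line form):

```
interface Vlan588
 ip address 10.10.88.1 255.255.255.0
!
! ping the VMkernel interface with a 1600-byte packet and DF set
ping 10.10.88.10 size 1600 df-bit
```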
If I recall correctly, you don't need to modify the VMK interface. The VEM will automatically do that for you if the traffic being transported is VXLAN. As long as the uplink ports are all set for 1550 or greater, it should be OK.
Since you are using a Cat 6513, the IGMP querier should be on by default, but if you could check to be sure, that would help.
The configuration guide for VXLAN is very poor: there is no information about the need for an IGMP querier or an L3 multicast device when the VXLAN VMkernel interfaces are connected at L2 (VLAN 588 that I mentioned in the first post).
After adding a router to VLAN 588 (the "transport" VLAN for VXLAN) with IP multicast enabled, everything works fine.
I didn't enable an IGMP querier on the Cat 6513 because it is a high-production component, and it was much easier for me to put a router in VLAN 588.
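For completeness, the fix amounts to putting a multicast-enabled L3 interface in the transport VLAN; on an IOS router or switch this looks roughly like the following (the address is hypothetical):

```
ip multicast-routing
!
interface Vlan588
 ip address 10.10.88.254 255.255.255.0
 ip pim sparse-dense-mode
```

With PIM enabled, this interface also acts as the IGMP querier for VLAN 588, which is what the VMK interfaces need in order to join the VXLAN multicast group.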