In reviewing the documentation, it appears that, within the same subnet, the following would occur:
Switch mode: a packet from chassis A to chassis B would take 1 hop. It is only 1 hop instead of 2 because both chassis are connected to both fabric interconnects, so the packet is sent directly from FI A to chassis B without having to go through FI B.
End host mode: the documentation I have seen indicates it would be three hops (chassis A to FI A to Nexus to FI B to chassis B), but that does not look right. Shouldn't this be the same as switch mode, in that both chassis are connected directly to the FIs, each FI has redundant access to both chassis, and therefore it would be only 1 hop?
End host mode - server to server within the same chassis, or to another chassis on the same fabric side, is only 1 hop away. If you need to communicate with a NIC on the other fabric, the traffic has to traverse northbound and can be 3 hops.
End host versus switch mode does not affect the number of hops. In both modes, each IOM in each chassis functions only as a mux and connects to a single FI, so all blade-to-blade traffic always travels from the chassis up to the fabric interconnect before any switching is performed, and is then forwarded based on whether the destination is on the same FI or the other FI. In your example it would be 3 hops in both modes.
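Under the assumptions above (each IOM acts purely as a mux tied to one FI, so the first switching hop is always at the FI), the hop count can be sketched as a toy function. The function name and fabric labels here are illustrative only, not part of any Cisco tool or API:

```python
def blade_to_blade_hops(src_fabric: str, dst_fabric: str) -> int:
    """Toy model of UCS blade-to-blade hop counts.

    Assumes each IOM is a mux connected to a single FI, so switching
    always happens first at the FI. Cross-fabric traffic must also
    traverse the upstream switch: FI A -> upstream Nexus -> FI B.
    """
    if src_fabric == dst_fabric:
        return 1  # one switching hop at the shared FI
    return 3      # source FI, upstream switch, destination FI

# Same fabric side (even across chassis): 1 hop
print(blade_to_blade_hops("A", "A"))
# Opposite fabric sides: 3 hops, in end host mode and switch mode alike
print(blade_to_blade_hops("A", "B"))
```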
Here is a simplified example of how a packet from one blade is forwarded to another blade:
The OS or hypervisor on the blade sends the frame out a single interface. This interface was configured in UCSM to be associated with either FI A or FI B (ignoring fabric failover), and the frame is forwarded from the NIC to the IOM associated with the appropriate FI.
The IOM always forwards the frame to the FI it is connected to; no switching is performed, even if the destination happens to be another blade in the same chassis associated with the same fabric interconnect.
The FI receives the frame from the IOM and then determines where to send it.
In Switch mode, it forwards the frame just like a normal switch running PVST+.
In end host mode, the FI is allowed to make some assumptions about the upstream network: that it already knows the MAC addresses of all devices connected through it (so it has no need to, and will not, learn MAC addresses from the uplinks), that all uplinks for each VLAN are connected to the same L2 domain (so it only needs to listen on a single uplink for broadcasts), and that no uplinks are ever blocked by STP on any VLAN. With Disjoint L2 configured it is a little more complicated.

In either case, if the destination MAC in the frame shows up on any server interface, the frame is sent to the IOM in the appropriate chassis, then to the correct blade's NIC, and then to the correct vNIC. If the source and destination are on different FIs, the FI will not have an entry in its CAM table for the destination MAC, and the frame is forwarded to the upstream switch out whichever interface the source vNIC is pinned to. You can check which interface that is with the show pinning... commands in NX-OS mode on each FI. From there, the upstream network handles the rest until the frame arrives at the other FI, which sends it to the correct IOM, then NIC, then vNIC.
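The forwarding decision described above can be sketched as a small simulation. This is a simplified illustration of end host mode behavior only; the class, field names, interface names, and MAC addresses are invented for the example:

```python
class EndHostModeFI:
    """Simplified model of a fabric interconnect in end host mode.

    MACs are learned only from server-facing interfaces. Any unknown
    destination is assumed to live upstream and is sent out the uplink
    that the *source* vNIC is pinned to (no MAC learning on uplinks).
    """

    def __init__(self, pinning):
        self.cam = {}           # destination MAC -> server interface
        self.pinning = pinning  # source vNIC -> pinned uplink

    def learn(self, mac, server_if):
        self.cam[mac] = server_if

    def forward(self, src_vnic, dst_mac):
        if dst_mac in self.cam:
            return self.cam[dst_mac]   # local blade: down to its IOM
        return self.pinning[src_vnic]  # unknown: out the pinned uplink

# Hypothetical topology: vnic0 pinned to uplink Eth1/31,
# one local blade MAC learned on server port Eth1/1/1.
fi_a = EndHostModeFI(pinning={"vnic0": "Eth1/31"})
fi_a.learn("00:25:b5:00:00:02", "Eth1/1/1")

print(fi_a.forward("vnic0", "00:25:b5:00:00:02"))  # local blade
print(fi_a.forward("vnic0", "00:25:b5:00:00:99"))  # unknown -> pinned uplink
```

Note how the absence of uplink MAC learning is what forces cross-fabric traffic upstream: the destination on the other FI never appears in this FI's CAM table, so the frame always leaves via the pinned uplink.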
The only time end host versus switch mode affects the number of hops is when one FI loses all of its uplinks. In switch mode, packets can then travel directly between the two FIs, which is 2 hops for blade-to-blade traffic and one extra hop for blades communicating with the outside. In end host mode, the server interfaces are shut down when there are no uplinks, so it remains 1 hop between blades on the remaining interfaces.
None of this matters much in practice, since Ethernet switch mode is not recommended and should be used only in special circumstances where end host mode will not work.
The VMware Trunk Port Group is supported from ACI version 2.1
VMM integration must be configured properly
The ASA device package must be uploaded to the APIC
The ASAv version must be compatible with the ACI and device package versions
In the previous articles in the ACI Automation series, we used Postman/Newman as the REST API tool to automate the ACI configuration.
In this article I’m going to discuss on usin...
One of the first steps in building your ACI Fabric is to go through Fabric Discovery. While Fabric Discovery is usually a straightforward process, there are various issues that may prevent you from discovering an ACI switch. This article wil...