In the release notes for version 4.2(1)SV1(4a) I found a limitation regarding vEthernet trunk interfaces: 256 vEth trunks are supported for the whole DVS, with a maximum of 8 vEth trunks per host.
For a new Nexus 1000V (N1kv) environment we need more than 8 vEth trunks per host. In our N1kv test environment we currently have 2 hosts running ESXi 4.1, each with 25 vEthernet trunk and 19 vEthernet non-trunk interfaces. Because this configuration works, the limit of 8 vEth trunks per host doesn't seem to be hard-coded. So far we don't see any issues with these 25 vEth trunks.
But what is the reason for the limit (or recommendation) of only 8 vEth trunks per host?
Let me add that in this particular N1kv environment we will not have more than 4 hosts.
It's for a large firewall centralization and virtualization project. The firewall VMs there control access between several hundred server VLANs. It's much easier for us and the firewall team to manage 1-2 vEth trunk interfaces per firewall VM instead of ~100 vEth access interfaces/port-profiles per VM.
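For illustration, a vEthernet trunk port-profile on the Nexus 1000V for such a firewall VM could look roughly like the sketch below. The profile name and VLAN range are made-up examples, not taken from the environment described above:

```
port-profile type vethernet FW-TRUNK
  switchport mode trunk
  switchport trunk allowed vlan 100-499
  vmware port-group
  no shutdown
  state enabled
```

Each firewall VM vNIC attached to this single port-group then carries all the listed server VLANs, instead of needing one access port-profile per VLAN.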
- VMware Trunk Port Group is supported starting with ACI version 2.1
- VMM integration must be configured properly
- The ASA device package must be uploaded to the APIC
- The ASAv version must be compatible with the ACI and device package versions
In the previous articles on ACI automation, we used Postman/Newman as the REST API tool to automate the ACI configuration.
In this article I'm going to discuss usin...
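As a point of comparison with the Postman/Newman approach, the same APIC REST authentication step can be sketched in plain Python. This is a hedged example: the APIC hostname and credentials are placeholders, and the actual HTTP call (commented out) would require a reachable APIC and the `requests` package.

```python
import json

# Placeholder APIC address -- replace with your controller's URL.
APIC_HOST = "https://apic.example.com"

def build_login_payload(username: str, password: str) -> str:
    """Build the JSON body for the APIC aaaLogin REST endpoint."""
    return json.dumps(
        {"aaaUser": {"attributes": {"name": username, "pwd": password}}}
    )

login_url = f"{APIC_HOST}/api/aaaLogin.json"
payload = build_login_payload("admin", "secret")

# Against a live APIC the call would look like (needs 'requests'):
#   import requests
#   resp = requests.post(login_url, data=payload, verify=False)
#   token = resp.json()["imdata"][0]["aaaLogin"]["attributes"]["token"]
# The returned token is then sent as the APIC-cookie on later requests.
print(login_url)
print(payload)
```

The same request body is what a Postman collection would send to the `aaaLogin.json` endpoint; scripting it directly just removes the Postman/Newman dependency.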
One of the first steps in building your ACI fabric is fabric discovery. While fabric discovery is usually a straightforward process, various issues can prevent you from discovering an ACI switch. This article wil...