I don't believe the packet-buffer size on the F1 cards is published externally, sorry. What I can say is that it is smaller than on an M1 linecard, as F1 modules are optimised for minimal latency.
Traffic received on an F1 module that needs to be routed is sent over an internal port-channel that, by default, consists of ports on all M1 modules present in the VDC. Routed traffic is therefore automatically load-balanced across the available M1 forwarding engines, similar to EtherChannel hashing.
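The flow-pinning idea behind that load-balancing can be sketched as follows. This is purely illustrative: real N7K hardware uses its own hash inputs and algorithm, and the module names and header fields below are made up for the example.

```python
# Illustrative sketch only: EtherChannel-style hashing pins each flow to
# one member by hashing packet header fields, so a given flow always
# lands on the same M1 forwarding engine.
def pick_member(src_ip: str, dst_ip: str, members: list) -> str:
    # XOR the last octets of source and destination addresses, then take
    # the result modulo the number of members (forwarding engines).
    key = int(src_ip.split(".")[-1]) ^ int(dst_ip.split(".")[-1])
    return members[key % len(members)]

engines = ["module-3", "module-4"]  # hypothetical M1 modules
# The same flow always hashes to the same forwarding engine.
print(pick_member("10.0.0.1", "10.0.0.2", engines))  # → module-4
```

The point is that the hash is deterministic per flow, so packets of one flow stay in order while different flows spread across the M1 modules.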
You can manually configure which M1 modules are used with the 'hardware proxy layer-3 routing ..' command; the following section of the config guide has some more info:
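For reference, a sketch of what that configuration might look like. The module numbers are placeholders and the exact keyword set should be verified against the configuration guide for your NX-OS release:

```
! Illustrative only -- restrict F1 proxy routing to specific M1 modules
N7K(config)# hardware proxy layer-3 routing use module 3,4
```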
- VMware Trunk Port Group is supported from ACI version 2.1
- VMM integration must be configured properly
- The ASA device package must be uploaded to the APIC
- The ASAv version must be compatible with the ACI and device package versions
In the previous articles of the ACI Automation series, we used Postman/Newman as the REST API tool to automate ACI configuration.
In this article, I'm going to discuss usin...
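The Postman/Newman workflow drives the same REST API that any HTTP client can call. As a minimal sketch, APIC authentication is a POST to `/api/aaaLogin.json` with an `aaaUser` body; the hostname and credentials below are placeholders:

```python
import json

# Placeholder hostname -- substitute your APIC address.
APIC = "https://apic.example.com"
LOGIN_URL = f"{APIC}/api/aaaLogin.json"

def login_payload(username: str, password: str) -> str:
    """Build the JSON body the APIC login endpoint expects."""
    return json.dumps(
        {"aaaUser": {"attributes": {"name": username, "pwd": password}}}
    )

# With an HTTP client such as requests (not executed here), the returned
# token is sent as the APIC-cookie on subsequent requests:
#   resp = requests.post(LOGIN_URL, data=login_payload("admin", "secret"))
#   token = resp.json()["imdata"][0]["aaaLogin"]["attributes"]["token"]
print(login_payload("admin", "secret"))
```

Any tool that can build this request and carry the cookie forward can automate the same configuration that Postman/Newman does.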
One of the first steps in building your ACI Fabric is to go through Fabric Discovery. While Fabric Discovery is usually a straightforward process, various issues can prevent an ACI switch from being discovered. This article wil...