My level of account access won't allow me to view the link you have provided.
Here's what I'm trying to achieve:
We have a file system that is accessed by many clients via a proprietary protocol. To support generic client access, we mount the file system using our own driver and share it as a Samba share. The systems hosting this Samba share are used for client access.
So, we could potentially have multiple machines hosting the same file system via Samba.
I would like to be able to present this architecture as an N+1 resilience model, which would require load balancing and failover on the machines hosting the Samba mount. IOS SLB seems like the right approach on paper, especially if you substitute Samba for FTP as an example.
One concern I do have is whether throughput is constrained by utilizing SLB. Do you have any comment on this?
I will look to see if I can dig up any L4 LB papers elsewhere in the meantime.
Based on this, L4 LB is still the only thing you can do.
Regarding the performance of IOS SLB on a 6500 with Sup720-3BXL, our tests have shown it can handle ~22K connections per second at 98% CPU and ~2M concurrent connections at 99% CPU. You should not go above 500 configured real servers.
Regarding data rate, actual performance in a customer network can vary based on the amount of connection overhead in the traffic profile. Connections with only one data packet suffer the worst throughput, because the connection setup and teardown packets are processed in software, compared to only one or two packets that are hardware accelerated. As the ratio of data packets to control packets improves, so does the throughput. On the Cat6k, data packets for dispatch-mode reals are hardware accelerated, whereas data packets that must be server-NAT'ed or client-NAT'ed must all go to the MSFC for processing. In the latter case, therefore, a faster MSFC CPU improves the data rate as well as the connection setup/teardown rate.
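As a rough sketch, an IOS SLB configuration balancing SMB (TCP 445) across two Samba hosts might look like the following. The addresses, names, sticky timer, and the choice of dispatch mode are all illustrative; note that dispatch mode (no "nat server" under the server farm) keeps data packets hardware accelerated, but requires each real server to answer for the virtual IP, e.g. on a loopback interface:

```
ip slb serverfarm SAMBA-FARM
 ! no "nat server" here => dispatch mode; reals must own the VIP locally
 real 10.1.1.11
  inservice
 real 10.1.1.12
  inservice
!
ip slb vserver SAMBA-VIP
 virtual 10.1.1.100 tcp 445
 serverfarm SAMBA-FARM
 sticky 3600
 inservice
```

Sticky ensures a given client keeps hitting the same Samba host, which matters for SMB session state; legacy clients using NetBIOS over TCP would also need a vserver on port 139.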
Topology & Design:
Two ACI fabrics
Stretching VLANs using OTV
Both fabrics are advertising BD subnets into same routing domain
Some BDs (i.e., VLANs) are stretched, but some are not.
Endpoints can move between the two fabrics.
VMware Trunk Port Group is supported from ACI version 2.1
VMM integration must be configured properly
ASA device package must be uploaded to APIC
ASAv version must be compatible with ACI and device package version
Topology & Design:
Traffic flow within same fabric
Endpoint moves to Fabric-2
Bounce Entry Times Out
Traffic Black-holed
Summary
Solution
Appendix:
In the previous articles of ACI Automation, we are using Postman/Newman a...
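Instead of Postman/Newman, the same APIC REST calls can be scripted directly. Below is a minimal Python sketch of the standard APIC aaaLogin request; the controller hostname and credentials are placeholders, and only the stdlib is used:

```python
# Minimal sketch of logging in to an APIC via its REST API (aaaLogin),
# the same call Postman/Newman collections typically start with.
# Host and credentials below are hypothetical placeholders.
import json
import urllib.request


def build_login_payload(user: str, pwd: str) -> dict:
    """Return the JSON body expected by POST /api/aaaLogin.json."""
    return {"aaaUser": {"attributes": {"name": user, "pwd": pwd}}}


def apic_login(apic_host: str, user: str, pwd: str) -> str:
    """POST the login payload and return the session token from the reply."""
    body = json.dumps(build_login_payload(user, pwd)).encode()
    req = urllib.request.Request(
        f"https://{apic_host}/api/aaaLogin.json",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    # Requires a reachable APIC; the token is then sent back as the
    # APIC-cookie header on subsequent requests.
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["imdata"][0]["aaaLogin"]["attributes"]["token"]
```

The returned token is passed as the `APIC-cookie` on later queries, which is what Postman handles for you behind the scenes.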