Given a data center architecture designed to avoid STP while still providing a high level of availability, the lack of GEC on the ArrowPoints means that a link failure leaves two choices in the failure model: a) use two parallel links and be forced to introduce STP with UplinkFast into the network, or b) fail over to the redundant ArrowPoint unit.
Support for GEC/FEC means I'd have access to a less painful initial fail-over option before resorting to failing over the entire box.
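For context, this is a minimal sketch of the kind of GEC bundle being discussed, as it would look on a Catalyst-style IOS switch (interface numbers and channel-group ID are illustrative; the point of the thread is that the CSS side lacks an equivalent):

```
! Illustrative only: a two-port Gigabit EtherChannel (GEC) bundle
interface range GigabitEthernet0/1 - 2
 channel-group 1 mode on
!
interface Port-channel1
 switchport mode trunk
```

With a bundle like this, a single cut or bad cable only reduces capacity on the Port-channel rather than forcing a failover of the whole unit.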
Right, FEC or GEC can survive a single link failure, but it cannot avoid session-state loss. FEC and GEC are Layer 2 technologies; when one link in the channel breaks, some in-transit data will be lost. If a critical application is running on the platform, your customer will be unhappy.
So I suggest that when a customer needs a robust e-business platform, you design redundant L7 switches and deploy stateful failover.
I do have redundant CSS units. But even that failover is going to risk a few stray packets.
It will work fine failing over to the other unit, and I have no issues with leaving it at that. But if the only problem is a cut/bad cable, I'd just as soon minimize the actions needed in order to recover.
"L7 switch dedicates Layer 4-7 function performance,Not low layer data throughput.
Could you tell me what's the significance CSS support FEC or GEC? Thanks."
Could you explain this a little further? If I am using the CSS as a simple Layer 3 load-balancing switch to distribute high-volume website traffic across a web server farm, why would low-level data throughput not be a priority over (or at least along with) the higher-layer functionality? Is there a way to increase the bandwidth without using FEC, or is this switch simply not designed for this purpose, meaning I need to buy a Gigabit Ethernet load balancer?
I ask because I have serious bandwidth issues on the CSS 11050 I'm using in a performance/scalability lab: when I use it to load-balance web services serving heavy XML content, my FE NICs are maxed out.
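To put rough numbers on the bottleneck described above (link speeds and server count are illustrative assumptions, not measurements from the lab):

```python
# Back-of-the-envelope throughput estimate for a web farm whose
# Fast Ethernet NICs are saturated. All figures are assumptions.

FE_LINK_MBPS = 100    # Fast Ethernet NIC on each web server
GE_LINK_MBPS = 1000   # a single Gigabit Ethernet uplink
SERVERS = 8           # hypothetical farm size

# Aggregate demand if every FE NIC in the farm is saturated:
farm_demand_mbps = SERVERS * FE_LINK_MBPS  # 8 * 100 = 800 Mb/s

# A single FE uplink on the load balancer is an 8x bottleneck here,
# and even one GE uplink is 80% consumed at full load -- which is
# why bundling links (FEC/GEC) or a GE-capable balancer matters.
print(farm_demand_mbps)                 # 800
print(farm_demand_mbps / GE_LINK_MBPS)  # 0.8
```

The exact numbers will differ per deployment, but the shape of the argument is the same: aggregate server-side NIC capacity quickly exceeds any single uplink on the balancer.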
Topology & Design:
Two ACI fabrics
Stretching VLANs using OTV
Both fabrics are advertising BD subnets into the same routing domain
Some BDs (i.e., VLANs) are stretched, but some are not.
Endpoints can move betwee...
VMware Trunk Port Group is supported from ACI version 2.1
VMM integration must be configured properly
ASA device package must be uploaded to APIC
ASAv version must be compatible with ACI and device package version
Article outline: Topology & Design; Traffic flow within the same fabric; Endpoint moves to Fabric-2; Bounce entry times out; Traffic black-holed; Summary; Solution; Appendix.
In the previous articles on ACI Automation, we have been using Postman/Newman a...
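The same first step that a Postman/Newman collection performs against the APIC can be sketched directly in Python. This is a minimal sketch of the documented `aaaLogin` REST authentication; the hostname and credentials are placeholders, and the `login` helper (which needs the third-party `requests` package and a reachable APIC) is shown for illustration only:

```python
import json

APIC_HOST = "apic.example.com"  # placeholder hostname

def login_url(host):
    # aaaLogin is the APIC's REST authentication endpoint
    return f"https://{host}/api/aaaLogin.json"

def login_payload(user, pwd):
    # Request body format expected by aaaLogin
    return json.dumps({"aaaUser": {"attributes": {"name": user, "pwd": pwd}}})

def login(host, user, pwd):
    # Illustration only: requires the 'requests' package and a live APIC.
    import requests
    resp = requests.post(login_url(host),
                         data=login_payload(user, pwd),
                         verify=False)
    # The session token comes back in the aaaLogin attributes
    return resp.json()["imdata"][0]["aaaLogin"]["attributes"]["token"]
```

Subsequent queries then pass the returned token as a cookie, which is exactly what the Postman collections automate.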