As Brian pointed out, TAC will only "support" deployments with a maximum of 20 chassis. Our engineering team decided on this limit as the appropriate number of chassis to connect in a single UCS domain. Keep in mind that although a UCS deployment is redundant in almost every aspect, I personally still wouldn't want a failure domain any larger than 20 x 8 blades.
We also have UCS Central coming out later this year, which will allow you to manage multiple UCS domains (clusters) from one interface. UCS Central was created to address scaling beyond the 20-chassis limit as well as multi-site deployments.
Currently, 20 chassis at full connectivity (with 4-port IOMs) will consume 80 ports for server interfaces. That leaves you 16 remaining ports to divide up between FC uplinks, Ethernet uplinks, appliance ports, direct-connect storage, SPAN interfaces, FEXs for rack-server integration, etc. As you can see, there are many other uses for your available ports. If the 6296 is too many ports, then the 6248 might be more appropriate.
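The port budget above can be sketched quickly. This assumes a 96-port 6296 fabric interconnect with all four IOM links cabled per chassis, as described in the thread:

```python
# Port budget on a 6296 fabric interconnect (96 unified ports),
# assuming every chassis connects all 4 IOM links.
TOTAL_PORTS = 96
CHASSIS = 20
LINKS_PER_IOM = 4

server_ports = CHASSIS * LINKS_PER_IOM   # ports consumed by chassis connectivity
remaining = TOTAL_PORTS - server_ports   # left over for uplinks, FC, FEX, SPAN, etc.

print(server_ports, remaining)  # 80 16
```

The same arithmetic with a 48-port 6248 and 2-link IOMs (40 server ports, 8 remaining) shows why the smaller FI gets tight quickly.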
Robert, I see your point, but your example raises an interesting scenario. What you described is a 2:1 oversubscription (OS) ratio from server to ToR FI for both Ethernet and FC. Those 16 remaining ports cannot all be allotted to FC, of course, because you need Ethernet uplinks. You could use 8 for Ethernet and 8 for FC. That would give you a 10:1 OS ratio from server to LAN and 12.5:1 from server to SAN. That's not too bad, depending on the requirements of course, but it is pushing it.
What makes this interesting is that the 6296 is the densest FI, yet you can only get a 10:1 OS ratio to the LAN and SAN with 160 servers. Stated otherwise, if you did need a 2:1 OSR AND you want to get the most out of the FI and UCSM by populating it with 20 chassis, these OSRs are the best you can get. You can't even do 2:1 with the 6248.
Now, with a 6248, the best OSR to the ToR FI you can get with 20 chassis is 4:1. However, that would leave you with 8 available unified ports for BOTH Ethernet and FC - let's say 4 for each. That equates to a 20:1 OSR from server to LAN and 25:1 from server to SAN. Furthermore, unless you have a very expensive 1:1 OSR at the core using F2 modules (assuming the UCS FIs are uplinked directly to a core, and that core is a Nexus 7K), you would have to add another 4:1 OSR contributed by the core switch itself. That is a total of 80:1 from server to core.
By the way, I figure offered load on converged links by assuming 5G of Ethernet and 5G of FC per server. That is the default ETS setting on the UCS and Nexus products, so it makes sense to do it that way. Of course the bandwidth of each traffic class will fluctuate, but you have to put the stakes down somewhere.
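The OSR numbers in the posts above can be reproduced with a short sketch. This assumes the 5G/5G ETS split per server, 20 chassis x 8 blades = 160 servers, 10G Ethernet uplinks, and 8G FC uplinks (the uplink speeds are my assumptions, chosen to match the ratios quoted in the thread):

```python
def osr(offered_gbps: float, uplink_gbps: float) -> float:
    """Oversubscription ratio = total offered load / aggregate uplink bandwidth."""
    return offered_gbps / uplink_gbps

SERVERS = 20 * 8          # 160 blades
ETH_PER_SERVER = 5        # Gb/s, default ETS split on a 10G converged link
FC_PER_SERVER = 5         # Gb/s

eth_load = SERVERS * ETH_PER_SERVER   # 800 Gb/s offered Ethernet
fc_load = SERVERS * FC_PER_SERVER     # 800 Gb/s offered FC

# 6296 scenario: 8 x 10G Ethernet uplinks, 8 x 8G FC uplinks
print(osr(eth_load, 8 * 10))   # 10.0  -> 10:1 server to LAN
print(osr(fc_load, 8 * 8))     # 12.5  -> 12.5:1 server to SAN

# 6248 scenario: 4 x 10G Ethernet uplinks, 4 x 8G FC uplinks
print(osr(eth_load, 4 * 10))   # 20.0  -> 20:1 server to LAN
print(osr(fc_load, 4 * 8))     # 25.0  -> 25:1 server to SAN

# Stack the core's own 4:1 OSR behind the 6248's 20:1 Ethernet ratio
print(osr(eth_load, 4 * 10) * 4)   # 80.0  -> 80:1 server to core
```

Ratios multiply at each tier, which is why a modest 4:1 at the FI and 4:1 at the core compound into 80:1 end to end once the server-to-FI oversubscription is included.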
VMware Trunk Port Group is supported starting with ACI version 2.1
VMM integration must be configured properly
The ASA device package must be uploaded to the APIC
The ASAv version must be compatible with the ACI and device package versions
In the previous articles on ACI automation, we used Postman/Newman as the REST API tool to automate the ACI configuration.
In this article, I'm going to discuss usin...
One of the first steps in building your ACI Fabric is to go through Fabric Discovery. While Fabric Discovery is usually a straightforward process, there are various issues that may prevent you from discovering an ACI switch. This article wil...