Welcome to the Cisco Networking Professionals Ask the Expert conversation. This is an opportunity to learn about the architecture and design of Cisco Unified Computing with Cisco expert Mike Frase. Mike is one of 16 Cisco distinguished support engineers. He joined Cisco in 1994 as one of the first CCIE-certified escalation engineers in the Cisco Technical Assistance Center. He has provided leadership in many new technology areas, including storage area network technologies, for which he acquired his second CCIE certification, in Storage Networking. More recently he has led the way in NX-OS support for technical services teams worldwide, and he is currently specializing in data center/consolidated I/O architectures and designs. Mike holds CCIE certification number 1189 and is also a VMware Certified Professional.
Remember to use the rating system to let Mike know if you have received an adequate response.
Mike might not be able to answer each question due to the volume expected during this event. Our moderators will post many of the unanswered questions in other discussion forums shortly after the event. This event lasts through July 3, 2009. Visit this forum often to view responses to your questions and the questions of other community members.
The blade servers are orderable with the UCS. They now include a UCS B200-M1 two-socket server that features Xeon 5500 Series processors. Later, a UCS B250-M1 Extended Memory Server will be available as a full-slot blade. More details on the blade servers can be found at http://www.cisco.com/en/US/prod/ps10265/rack_mount_promo.html
When will the product documentation be published? I'm sure I'd have some more informed questions if I had the chance to read that first :)
* Could you explain a bit the licensing structure of UCS, especially the Fabric Interconnect? If I buy the 20-port FI, can I use all 20 ports as FCoE ports off the shelf, or do I need extra licenses?
* How many chassis can I connect to one pair of Fabric Interconnects (one UCS domain), not theoretically but practically with current software releases?
* Is there any communication/synchronization between two UCS domains, or are they completely separate? Is there a MoM (manager of managers) mode?
There are two types of 10 GigE ports on the 6100 Interconnect switch: host-facing ports (toward the blades), all of which are FCoE enabled, and uplink ports to the core data network. There are also Fibre Channel ports that connect the 6100 to the SAN for the FC protocols. The host ports will be sold in a license model; at the time of this post I am uncertain what the bundles will be (as in groups of ports; I will try to find out if that is set yet). At FCS, connecting an FCoE target array is not supported; that would come in a follow-on release.
For the 20-port 6100 Interconnect, with full subscription to each chassis and the use of expansion modules for uplinks: five chassis.
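As a rough sketch of the port budget behind that number (an assumption on my part, not stated in the post: "full subscription" means four server-facing links per chassis, and uplinks live entirely on expansion modules, so all 20 fixed ports can face chassis):

```python
# Port-budget sketch for a 20-port 6100 Fabric Interconnect.
# Assumptions (illustrative, not an official sizing rule):
#  - all 20 fixed ports are available as server-facing ports,
#    because uplinks use expansion modules
#  - full subscription = 4 x 10GE links per chassis
FIXED_PORTS = 20
LINKS_PER_CHASSIS = 4

max_chassis = FIXED_PORTS // LINKS_PER_CHASSIS
print(max_chassis)  # 5
```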
At FCS it is one UCS domain, with no manager of managers. Synchronization to a backup site through the import/export tools in the UCS Manager system is supported.
1) On the Fabric Interconnect, can I poll the interface load of a physical (!) FCoE port towards a chassis using SNMP?
2) If yes, can I see/poll the different FCoE 'classes' on a link with SNMP (i.e. the SAN part, the LAN part, other priorities), or does UCS Manager provide me this information?
3) If no, I assume I can only see the logical server port instances on my fabric interconnect, not the physical links. In this case, what mechanism is put in place to protect me from oversubscribing my FI-FEX connection? (I assume I am running in switching mode, not pinning mode.) Suppose I have 4 servers with 2 Gb SAN and 2x10GE uplinks (the minimum). What prevents me from inserting a new server in the chassis and deploying a 4 Gb SAN server? (i.e. 4x2 = 8 + 4 = 12 > 10GE; for redundancy purposes, I need to be able to keep working if one of the 10GE links should fail)
To answer your question on polling the load on an interface: querying the IF-MIB OIDs (with the physical Ethernet index) will give you the total stats (FCoE + Ethernet). You can see/poll vfc stats using the IF-MIB; the index for the OID would be the vfc index (not the physical Ethernet interface to which the vfc is bound). Answer to question 3 coming soon.
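A minimal sketch of how those polls line up. The OID bases are the standard IF-MIB high-capacity octet counters; the ifIndex value in the usage line is a made-up placeholder (in practice you would discover the real ifIndex of the physical Ethernet port, or of the vfc, via an ifDescr/ifName walk):

```python
# Standard IF-MIB (RFC 2863) 64-bit octet counter OID bases.
IF_HC_IN_OCTETS = "1.3.6.1.2.1.31.1.1.1.6"    # ifHCInOctets
IF_HC_OUT_OCTETS = "1.3.6.1.2.1.31.1.1.1.10"  # ifHCOutOctets

def counter_oids(if_index: int) -> dict:
    """Return the in/out octet counter OIDs to poll for one ifIndex.

    Use the physical Ethernet port's ifIndex for total stats
    (FCoE + Ethernet), and the vfc's own ifIndex for the FCoE share.
    """
    return {
        "in": f"{IF_HC_IN_OCTETS}.{if_index}",
        "out": f"{IF_HC_OUT_OCTETS}.{if_index}",
    }

# Example with a placeholder ifIndex (not a real value from any device):
print(counter_oids(42)["in"])   # 1.3.6.1.2.1.31.1.1.1.6.42
```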
Question 3 response: the interconnect switch of UCS provides a no-drop, lossless fabric service, and whether the interconnect is in switching mode or pinning mode, the PFC pause is still enforced to prevent overdriving the FEX connections. The placement of a blade in the UCS chassis determines which FEX link the 10 Gb interface on the mezzanine card takes out to the 6100 interconnect switch: one port on the mezzanine card goes to the right, the other port to the left. In an active/active connection you may then have 5 Gb and 5 Gb (10 Gb total across both links) to the application; in active/standby, 10 Gb to the application, active on one side only. You can use 1, 2, or 4 of the FEX links to the 6100 switch (10, 20, or 40 Gb). If you populate all 8 slots of the UCS chassis with blades and also have all 8 FEX-to-6100 cables in use, then you have 10 Gb to each blade. If you had only 2 FEX links turned up, it would obviously be only 20 Gb available and your oversubscription higher.
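The arithmetic above can be sketched as a small helper. This mirrors the post's example numbers only; it is an illustration, not a supported-configuration calculator, and the assumption that each FEX link is a 10GE link carrying an even share of blade traffic is mine:

```python
# Oversubscription sketch for the FEX-to-6100 connections.
# Assumptions (mirroring the post's example, not an official model):
#  - each FEX link is 10 Gb
#  - every blade can demand gbps_per_blade toward the fabric
def oversubscription(fex_links: int, blades: int, gbps_per_blade: float) -> float:
    """Ratio of aggregate blade demand to FEX uplink capacity."""
    uplink_gbps = fex_links * 10
    demand_gbps = blades * gbps_per_blade
    return demand_gbps / uplink_gbps

# 8 blades at 10 Gb each over all 8 FEX-to-6100 cables: no oversubscription.
print(oversubscription(8, 8, 10))  # 1.0
# Same blades over only 2 FEX links (20 Gb available): 4:1 oversubscribed.
print(oversubscription(2, 8, 10))  # 4.0
```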
Maybe an easy question: can I create an EtherChannel bundle between two redundant UCS 6100 fabric interconnects (running in cluster) and a pair of C6500s running VSS?
What types of servers can the Cisco UCS 6100 Series Fabric Interconnect support?
The interconnect cables for the cluster ports are not used for any data traffic. They are used for control plane traffic only, specifically cluster heartbeat messages, both to detect a failure and to run the cluster election protocol.
This confirms my previous information. However, if the cluster interconnects don't carry any user traffic, it means there must be some kind of uplink tracking. Can you confirm? Suppose each Fabric Interconnect connects upstream to a C6500 VSS with 2x10GE and to a Brocade SAN with 2x4Gb. What happens if one Fabric Interconnect loses ALL its LAN upstream interfaces, but keeps (!) its SAN interfaces (or vice versa)? Will this trigger a cluster switchover? (uplinks look like this: \/\/)
Or is it mandatory that the uplinks are split across both units? (uplinks look like this: /\/\)
For a high-availability L2 design, you would be required to have ISL port-channels from each of the 6100 interconnect switches to the core C6500s. The same is true for the FC port-channels: links to both the A-SAN and the B-SAN. So the answer is that links need to be split across both 6100 interconnects.
The 6100 switches can run in a pinning mode, where no spanning tree runs in the interconnect switches and links into the UCS behave like edge ports. The other mode is to run in a switched spanning-tree mode; this could be used, for example, if the UCS was in an HPC cluster with data staying within the compute nodes of the UCS.