07-16-2014 02:33 AM - edited 03-01-2019 11:45 AM
Hello Cisco Community,
we are using our UCS (version 2.21d) for ESX hosts. Each host has 3 vNICs as follows:
Currently the UCS is connected to the access layer (Catalyst 6509) and we are migrating to Nexus (vPC). As you know, Cisco UCS Fabric Interconnects can handle Layer 2 traffic themselves, so we are planning to connect our UCS Fabric Interconnects directly to our new Layer 3 Nexus switch.
Has anyone connected UCS directly to a Layer 3 switch? Is there anything we have to pay attention to? Are there any recommendations?
thanks in advance
best regards
/Danny
07-23-2014 12:03 AM
For example: one chassis, 2 uplinks between IOM and FI, no port channel; odd blade slots are statically pinned to uplink no. 1, even slots to uplink no. 2. If uplink 1 fails (IOM or FI port issue), all the odd slots lose that path!
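The odd/even behaviour described above follows from round-robin static pinning. A minimal sketch (the round-robin rule is the documented default; the helper name is mine):

```python
def pinned_uplink(slot: int, num_uplinks: int = 2) -> int:
    """Return the 1-based IOM uplink a blade slot is statically pinned to
    (round-robin pinning, no port channel between IOM and FI)."""
    return (slot - 1) % num_uplinks + 1

# With 2 uplinks and no port channel, losing uplink 1 takes down the path
# for every slot pinned to it -- i.e. all the odd slots of an 8-slot chassis.
failed_uplink = 1
affected = [slot for slot in range(1, 9) if pinned_uplink(slot) == failed_uplink]
print(affected)  # [1, 3, 5, 7]
```

With a port channel between IOM and FI, traffic is hashed over all member links instead, so a single link failure only reduces bandwidth.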
We are using port channels for the connections between IOM and FI and between FI and Catalyst/Nexus.
The uplinks carry both VLANs in a port channel, so logically it is one "cable". IOM 1 of each chassis is connected to Fabric A in a port channel, and IOM 2 of each chassis is connected to Fabric B in a port channel.
If only one cable fails, the oversubscription changes and the remaining cable gets busier. If a complete IOM fails, OK, that chassis will fail over to the other fabric and Layer 2 traffic will go outside. I agree. But this is a failover scenario.
In this case, our Layer 3 switch / router can handle this traffic, too. :)
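On the Nexus side, the FI uplink port channel would typically land in a vPC so that one logical port channel spans both Nexus peers. A minimal sketch of one peer's configuration (all interface numbers, VLAN IDs, and keepalive addresses here are assumptions, not taken from the thread; the vPC peer needs the matching configuration):

```
! Hedged example: Nexus vPC member toward UCS FI-A
feature lacp
feature vpc

vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1

interface port-channel101
  description vPC uplink to UCS FI-A
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30
  vpc 101

interface Ethernet1/1
  description to FI-A uplink port
  switchport mode trunk
  channel-group 101 mode active
```

The FI uplink ports are then placed in a matching LACP port channel in UCS Manager, so each fabric interconnect sees the vPC pair as a single upstream switch.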
07-23-2014 11:31 PM
just fyi.
I have a statement from our Cisco SE that in a typical access/aggregation design, it is recommended to connect UCS directly to the aggregation layer.
07-24-2014 04:59 AM
Can you please send me this statement by email!
- You should also disclose the design of your DVS to the community! With hardware failover, you essentially cut off any failover / load balancing at the DVS level.
- It seems that you don't need FC, because there this strange design would not apply; hardware failover only applies to Ethernet vNICs, not to vHBAs!
07-24-2014 07:17 AM
The statement is in German (translated):
"In normal access/aggregation designs I would even say it is *recommended* to connect UCS to the aggregation switches (instead of the access switches). The reason is that the uplinks from the FIs can be heavily loaded, and as a rule you want to connect these uplinks to high-performance ports."
In a spine/leaf design it is clear: only connect to the leaves.
;)
For FC, we have 4 vHBAs per host (2 redundant SAN fabrics x 2 UCS fabrics = 4).
Yes, we use hardware failover only for Ethernet vNICs, and there is no load balancing (because each host has only one vNIC connected to each dvSwitch).
(3 dvSwitches: 2 server VLANs + 1 management VLAN)
If you want, I can draw the complete design we are using.