We have two 6120s connected to two UCS 5108s in two separate rooms in our building. We have just completed a 2nd formal server room and will be moving one of the 6120/5108 pairs into that room. Currently, the 6120s are cross-connected (1Gig copper) to each other, and all of the 10Gig connections are appropriately connected.
The issue is that the two server rooms are more than 100m apart, so there are no copper connections between the rooms. We have switches in each room connected via 10Gig fiber.
Our VAR, who originally installed this, sent an e-mail saying that these 1Gig copper cross connects must not pass through a switch. He has left to get married and will be gone for a few weeks.
We are not moving this until November, but I'm not willing to wait a couple of weeks until the engineer returns to get the answer to this question. I have looked on CCO and other than identifying the 1Gig ports as "cross-connect" or "cluster", I can't find anything out.
I would think that they would work through a switch, perhaps in their own non-routed VLAN or something.
Will this work? Is there a design guide on CCO that shows it?
No - a switch in between is not supported and will not work.
We run bonding on the two links, and the software for heartbeat, sync, etc. is built on that assumption (amongst other things).
If this is an Ethernet connection, why can't it be connected through a switch? In the case I'm looking at, it would be:
 <-1Gig copper-> [3750E Switch] <-10Gig fiber-> [Nexus 7010] <-1Gig copper-> 
Is it a latency issue, or what? The latency of the path above would not be much higher than that of a direct connection.
Can you reference a document that defines this requirement? I could not find it on CCO.
Can any of the 10Gig ports be used for the heartbeat? That would not be a problem for us. The only problem is that we don't have copper between the two rooms since they are over 100 meters apart.
What you are asking for is an enhancement which I fully agree with.
Latency is not the issue here. It is how the software manages the L1-L2 links on the FIs.
They are bonded internally. The software does not expect any other traffic (CDP, BPDUs, or anything else that shows up when you connect to a switch, amongst other things). Connect a switch in between and bonding breaks. You could potentially think of using a single link to get around bonding, or of explicitly turning on a channel-group on the switch, etc.
But the fact of the matter is that it has neither been tested/blessed nor advertised as a possible workaround.
So currently it won't work, and *if* it does, you are in uncharted territory where support, if something breaks, is going to be an issue.
If you look at the User Guide, pages 63 and 65 have references to it -
The L1 ports on both fabric interconnects are directly connected to each other.
The L2 ports on both fabric interconnects are directly connected to each other.
To use the cluster configuration, the two fabric interconnects must be directly connected together using Ethernet cables between the L1 (L1-to-L1) and L2 (L2-to-L2) high availability ports, with no other fabric interconnects in between.
Note: I would have written "with no other *switches* in between" rather than "fabric interconnects" above, but that's me.
Is there any issue using a 1Gig fiber to 1000BaseT transceiver (media converter)? If not, we would just use a pair (or two pairs) of fiber on each end of the link.
Could you explain a little bit more about your application? I'm interested to understand your motives for such a configuration. Will you be uplinking the chassis in each location to both interconnects?
Here is our topology as best I can explain without a diagram.
Currently, in *each* of two separate rooms (a server room and a temporary room) we have a:
6120 Fiber Interconnect
5108 UCS Blade server chassis with FEX
Each 6120 has two copper (I think they are called Twinax) connections to the FEX in the 5108 in the same room.
The same 6120 has dual 10Gig fiber connections to the FEX in the 5108 in the other room.
Also, the 6120s have two 1Gig copper connections to each other. I call this the heartbeat connection; I would like to know the proper name for it.
Anyway, we are moving one of the pairs of 6120/5108 from the temporary room into a 2nd server room. That server room is more than 100 meters from the first server room, so we do not have copper connectivity.
I believe the correct fix will be to use 1000BaseT to 1G fiber transceivers (media converters).
So to answer your question, yes both chassis are uplinked to both interconnects.
Does that make sense?
You can use a media converter between the Fabric Interconnects because it operates at L1; however, any kind of switch or other device in between the Fabric Interconnects is not supported.
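For what it's worth, once the L1/L2 links are recabled through the media converters, you can sanity-check the HA cluster from the fabric interconnect CLI with the `show cluster state` command under local-mgmt. The hostname and the exact output below are only illustrative:

```
UCS-A# connect local-mgmt
UCS-A(local-mgmt)# show cluster state
A: UP, PRIMARY
B: UP, SUBORDINATE
HA READY
```

If the cluster links are down or degraded you would see "HA NOT READY" (or similar) instead; `show cluster extended-state` gives more detail on the individual links.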
As you can see, I asked the question here first before having Jose open the ticket formally with TAC.
It is good to see you answering the question in both places.
As soon as we get the appropriate SFPs, we can put this in place between the existing rooms, which are currently connected with copper, and run it that way until the 1st week of November, when one of the 6120/5108 pairs moves to the new room, which is fiber only.
We are also planning to implement a similar kind of configuration. Can anybody confirm whether we can use media converters to form the cluster between two FIs that are more than 100 meters apart?
If anybody can provide information regarding this, it would be much appreciated, as we are in the middle of planning our server room relocation and want to plan the move in a phased manner.
Thanks in advance!!