I am in the midst of designing a network that will include Oracle RAC. I am using Dell M1000e chassis, and I am good to go on the front-end production design: Cisco 3130G blade switches EtherChanneled back to a redundant 6513 core.
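For context, the uplink side of that front-end design looks roughly like this on each 3130G (a minimal sketch only; the port numbers, channel-group ID, and descriptions are placeholders, not my real config):

    ! Sketch: 3130G uplink EtherChannel to one 6513 core
    interface range GigabitEthernet1/0/17 - 20
     description Uplink to Core6513-A
     switchport mode trunk
     channel-group 10 mode active   ! LACP
    !
    interface Port-channel10
     description EtherChannel to Core6513-A
     switchport mode trunk
    !
    port-channel load-balance src-dst-ip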
However, I am wondering about the best way to handle the Oracle Private Interconnect. I don't want my E2 and E3 interfaces spread between two core 6513s, because I don't want the latency. The Oracle Private Interconnect is basically a big memory dump from one machine to another: bandwidth is a big deal, and latency is an even bigger one.
My first option would be to put E2 in CoreA and E3 in CoreB and then set up a fat trunk between the cores (1Gbps links right now; 10Gbps not yet purchased) with EtherChannel. The problem is that this would not give 2Gbps to the actual servers (you can't LACP across two separate switches), and I am worried about the latency of the extra hop between the cores.
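To make that scenario concrete, the inter-core link would be something like this on CoreA, mirrored on CoreB (again just a sketch; the module/port numbers, channel ID, and the OPI VLAN number are made up):

    ! Sketch: fat trunk between the 6513 cores carrying the OPI VLAN
    interface range GigabitEthernet3/1 - 4
     description Trunk to Core6513-B
     switchport
     switchport mode trunk
     switchport trunk allowed vlan 200   ! hypothetical OPI VLAN
     channel-group 20 mode active
    !
    interface Port-channel20
     switchport
     switchport mode trunk
     switchport trunk allowed vlan 200

Even with that in place, every interconnect flow between an E2-attached node and an E3-attached node crosses that inter-core channel, which is exactly the hop I'm worried about.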
The second scenario is to use the Cisco 3130G switches with their stacking capability; the stack interconnect runs at 24Gbps. What I don't know about the 3130Gs, though, is the latency between stack members, and how they do on availability. This needs to be a five-nines solution.
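One thing in the stack's favor, if I understand StackWise correctly: the stack is managed as a single logical switch, so a cross-stack EtherChannel to each server should be possible, which would get around the 2Gbps limitation from the first scenario (I believe older 3750-family IOS only supported mode on, not LACP, for cross-stack channels, so the release notes are worth checking). Roughly, with placeholder port and channel numbers:

    ! Sketch: cross-stack EtherChannel, one member port on each stack member
    interface range GigabitEthernet1/0/5, GigabitEthernet2/0/5
     switchport mode trunk
     channel-group 5 mode active
    !
    ! Handy for checking stack membership, roles, and the channel:
    ! show switch detail
    ! show etherchannel summary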
Any thoughts on the best way to design the Oracle Private Interconnect? For what it's worth, after spending a week on Oracle's website, their recommendation is that the OPI use dedicated, standalone switches with a minimum of 1Gbps connections.