Oracle Private Interconnect Questions

Unanswered Question
Oct 20th, 2009

I am in the midst of designing a network that will have Oracle RAC in it. I am using a Dell M1000 chassis, and I am good to go on the front-end production design, with Cisco 3130G blade switches EtherChanneled back to a redundant 6513 core.

However, I am wondering about the best way to handle the Oracle Private Interconnect. I don't want my E2 and E3 interfaces spread between two core 6513s because I don't want the latency. The Oracle Private Interconnect is basically a big memory dump from one machine to another: bandwidth is a big deal, and latency is an even bigger one.

One option would be to put E2 in CoreA and E3 in CoreB and then set up a fat trunk between the cores (1Gbps right now; 10Gbps not yet purchased) with EtherChannels. The problem is that this would not allow 2Gbps to the actual servers (you can't run LACP across separate switches), and I am worried about the latency of the hop between cores.

The second scenario is to use the Cisco 3130G switches with their stacking ability. The uplink between stack members is 24Gbps. However, what I don't know about the 3130G is the latency between stack members, and also the availability. This needs to be a five-nines solution.

Any thoughts on the best way to design the Oracle Private Interconnect? Also, after having spent a week on Oracle's website: they recommend that the OPI use dedicated, standalone switches with a minimum of 1Gbps connections.



Jon Marshall Tue, 10/20/2009 - 05:23


From previous implementations I have been involved with, we always used dedicated switches; if memory serves me right, we used Catalyst 4948 switches in the data centre.

The Oracle Private Interconnect doesn't need access to any other networks, so dedicated switches are a good way to go.


jfraasch Tue, 10/20/2009 - 11:45

Thanks for the response. One other question: how do the NICs on the Oracle Private Network talk to each other? By that I mean, if ServerA has two NICs, do they send traffic back and forth? I think the second NIC is only used for failover, no?

Jon Marshall Tue, 10/20/2009 - 16:44


When we deployed it, the servers had an active and a standby NIC, so only one was in use at any one time.
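
On Red Hat, an active/standby pair like that is typically built with Linux bonding in active-backup mode. A minimal sketch, assuming hypothetical interface names (eth2/eth3) and addressing:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0  (example; names/IPs are assumptions)
DEVICE=bond0
IPADDR=192.168.100.11
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
# active-backup: only one slave carries traffic; the other takes over on link failure
BONDING_OPTS="mode=active-backup miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth2  (ifcfg-eth3 is identical but DEVICE=eth3)
DEVICE=eth2
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

With this mode the two NICs never send traffic simultaneously, which matches the failover-only behaviour described above.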


jfraasch Wed, 10/21/2009 - 03:12

There was no worry about bandwidth back to the servers? I ask because our Oracle DB guys here are saying they need 2Gbps to the server.

This means LACP, which means using either switch stacks or putting both NICs into a single core (and I don't want that single point of failure). Do you know if there are any bandwidth errors (cache fusion, really)?

We are talking about four DB servers running Red Hat.


Jon Marshall Wed, 10/21/2009 - 06:00


"Do you know if there are any bandwidth errors (cache fusion really)"

Not sure I understand what you are asking.

As for the 2Gbps: if you really do need that and you still want redundancy, then yes, the way to do it would be cross-stack EtherChannel, which the 3750s support.
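
To sketch what that looks like on the switch side, a cross-stack EtherChannel on a 3750 stack might be configured roughly as follows (interface numbers, channel-group number, and VLAN are assumptions for illustration):

```
! Member ports live in different stack members (Gi1/0/1 on switch 1,
! Gi2/0/1 on switch 2), so the bundle survives the loss of either switch.
interface range GigabitEthernet1/0/1, GigabitEthernet2/0/1
 switchport mode access
 switchport access vlan 100
 channel-group 10 mode active   ! LACP; older IOS may require "mode on" cross-stack
!
interface Port-channel10
 switchport mode access
 switchport access vlan 100
```

The matching Red Hat bond would then use `mode=802.3ad` rather than active-backup, so both NICs carry traffic and the server sees the full 2Gbps.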


jfraasch Wed, 10/21/2009 - 07:01


Thanks. That's exactly the conclusion I have come to.

For future reference, in case anyone else comes across the Oracle Private Interconnect question, I have posted the recommendations I made to my bosses.

It is a list of the pros and cons of each solution.

Remember, this is a solution for blade server chassis (in our case the Dell M1000), but I think it will be helpful for those who come across this problem in the future.

The one option I did NOT show is the InfiniBand solution. This was not a requirement from the DB team, so I stayed with Ethernet-only solutions.


