Bandwidth factor

Unanswered Question
Dec 6th, 2008

Why has Cisco withdrawn its InfiniBand HCAs from the market? From what I've seen so far, there's quite a lot of ambiguity, if not inconsistency, in the InfiniBand space involving Cisco, HP, Mellanox, IBM, etc.

A few weeks ago I read on www.mellanox.com about their 80Gbps InfiniBand fabric, and I think I remember reading about a 120Gbps InfiniBand fabric as well. All of that has been withdrawn, and now they're claiming a 40Gbps InfiniBand fabric, with an HCA in a PCI-Express 2.0 x8 form factor. Each PCI-Express 2.0 lane carries 500MB/s per direction, so an x8 card gives us about 32Gbps of bandwidth, not the full 40Gbps they claim. Why can't they give us an x16 form factor HCA?
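
As a quick sanity check on that arithmetic (a rough sketch; it assumes 500MB/s of usable payload per lane per direction, i.e. PCI-Express 2.0's 5GT/s signaling after 8b/10b encoding, and ignores protocol overhead):

    # PCI-Express 2.0 bandwidth estimate, per direction.
    # Assumes 500 MB/s usable per lane (5 GT/s after 8b/10b encoding),
    # ignoring packet/protocol overhead.
    mb_per_lane = 500      # MB/s per lane, per direction
    lanes = 8              # x8 slot

    total_mb_s = mb_per_lane * lanes        # 4000 MB/s
    total_gbps = total_mb_s * 8 / 1000      # 32.0 Gbps

    print(f"PCIe 2.0 x{lanes}: {total_mb_s} MB/s = {total_gbps:.0f} Gbps per direction")

So even before protocol overhead, an x8 slot tops out around 32Gbps per direction, which is why a 40Gbps link can't be fully fed from it.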

Given all this, it's quite possible that there is 200+Gbps InfiniBand fabric that has not been exposed to our world.

Here's an interesting idea:

The new Intel Core i7 CPU interfaces with memory through its integrated memory controller at an aggregate bandwidth of 25.6GBps, or a little over 200Gbps, and a single-socket configuration requires the X58 chipset. The Core i7 CPU can't operate in multi-socket configurations. New Xeons based on the Nehalem architecture could provide even more bandwidth, given that they would have two or more QPI links.

If Intel put in the effort to integrate one or more InfiniBand HCAs onto their motherboards, each offering 200+Gbps of bandwidth and each interfacing directly with the memory controller, that would open up the possibility of server virtualization hypervisors that string together the VMs they host by distributing those VMs across physical machines. The implications of that for software such as Microsoft SQL Server would be enormous. Instead, Microsoft has artificially limited its hypervisor so that it provides only 4-way SMP to the VMs it hosts.
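
For reference, the 25.6GBps figure works out like this (a rough sketch, assuming the Core i7's triple-channel DDR3-1066 memory configuration):

    # Rough check of the Core i7 aggregate memory bandwidth figure.
    # Assumes triple-channel DDR3-1066 (64-bit channels at ~1066.67 MT/s).
    mt_per_s = 1066.67       # million transfers/s per channel (nominal DDR3-1066)
    bytes_per_transfer = 8   # 64-bit wide channel
    channels = 3

    gb_per_s = mt_per_s * bytes_per_transfer * channels / 1000   # ~25.6 GB/s
    gbps = gb_per_s * 8                                          # ~204.8 Gbps

    print(f"{gb_per_s:.1f} GB/s aggregate = {gbps:.1f} Gbps")

That's the sense in which a 200+Gbps HCA hanging directly off the memory controller would be roughly matched to the memory subsystem.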
