I want to attach an iSCSI SAN to Ethernet switches. Which approach would be better: one 10G link (or two for redundancy), or bonding multiple 1G interfaces? Does anyone know of any pros or cons of using 10G interfaces?
I don't have exact numbers for you (mainly because it varies with the hardware and specific configuration options), but in practice, with correct planning, you can achieve most if not all of the throughput you need. Some things to consider, though:
- You can't really achieve any sort of per-packet load balancing on an EtherChannel that I know of. The best you can do is hash on TCP/UDP port numbers, and even that is iffy. Even if you could do true per-packet load balancing, I'd question the performance, given the packet-reordering issues and overhead. Forwarding decisions are generally made by whatever hash algorithm you choose, and that obviously isn't going to utilize your separate links efficiently if you have a few abnormally large flows.
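On many Cisco switches, for example, about the only knob you get is which fields feed the hash. A rough sketch of what that looks like (illustrative only; the exact command and the available hash options vary by platform and software version):

```
! Illustrative Cisco IOS example - available options vary by platform
port-channel load-balance src-dst-ip      ! hash on source/destination IP (a common default)
! port-channel load-balance src-dst-port  ! hash on L4 ports, where the platform supports it
```

Note that this is a global setting on most platforms, not per-port-channel, so it affects every EtherChannel on the switch.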
- Similarly, if you're running per-flow load balancing, you aren't going to get more than 1Gb/s of throughput to any given host, because the hashing algorithm pins each flow (and, depending on the hash inputs, each host pair) to a single member link. This may or may not be an issue, but it's definitely food for thought.
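To make the single-host ceiling concrete, here's a toy sketch of how a src-dst-ip style hash pins traffic to one member link. The hash function and link count are made up for illustration; real switches use their own (usually XOR-based) hardware hash, but the consequence is the same:

```python
import hashlib

LINKS = 4  # hypothetical 4 x 1G EtherChannel

def pick_link(src_ip: str, dst_ip: str) -> int:
    """Pick a member link from a hash of the src/dst IP pair,
    mimicking a src-dst-ip EtherChannel hash (illustrative only)."""
    key = f"{src_ip}-{dst_ip}".encode()
    return int.from_bytes(hashlib.md5(key).digest()[:4], "big") % LINKS

# Every flow between the same initiator and target hashes to the same
# member link, so that pair can never use more than one 1G link's worth
# of bandwidth, no matter how many links are in the bundle.
a = pick_link("10.0.0.10", "10.0.0.50")
b = pick_link("10.0.0.10", "10.0.0.50")
assert a == b
```

Hashing on L4 ports instead can spread multiple iSCSI sessions between the same pair of hosts across links, but a single TCP session is still stuck on one link.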
- If you're bonding multiple 1Gb links together, make sure the switch ports you use aren't all served by the same ASIC, or you may overwhelm it. Depending on the hardware, you may need to spread the links across separate ASICs/port groups so you aren't overloading one. This is where buffering/queue limitations can bite you.
In my opinion, if the environment can support it and you're going to need significantly more than a couple of gigabits to a given box, 10GbE is the best choice from a performance/utilization standpoint as well as from a manageability standpoint.