iSCSI - 1G or 10G Ethernet

szymonpol
Level 1

Hi,

I want to attach an iSCSI SAN to Ethernet switches. Which approach would be better: one 10G link (or two for redundancy), or multiple bonded 1G interfaces? Does anyone know of any pros or cons of using 10G interfaces?


3 Replies

ryan.lambert
Level 1

Depends on a few things, I think. Some of them being...

How dense is the usage going to be? For example: is it a virtualized host carrying multiple guests, or is it just one machine with relatively low I/O? For some applications you may be able to get away with a pair of 1G NICs.

Price per port with 10G is something to consider, so my suggestion would be: if you don't require it, don't use it up. On the flip side, I wouldn't want to bond together 4, 5, 6, 7, 8 different 1G links just to achieve decent throughput, either. For me it is less of a performance thing and more of a management/operational issue. I might feel differently about trying to achieve 2 Gbps in a redundant fashion, though, if I felt the platform wasn't going to scale much past that.
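To make that trade-off concrete, here's a rough back-of-envelope sketch in Python. The guest count, IOPS, block size, and overhead factor below are invented placeholders, so plug in your own measurements before drawing any conclusions:

```python
# Back-of-envelope iSCSI bandwidth sizing; all example numbers are hypothetical.

def iscsi_bandwidth_gbps(iops: int, block_size_kib: int, overhead: float = 1.05) -> float:
    """Rough sustained bandwidth for an iSCSI workload, in Gbps.

    `overhead` approximates TCP/IP/iSCSI header cost (~5% is a guess, not a spec).
    """
    bytes_per_sec = iops * block_size_kib * 1024 * overhead
    return bytes_per_sec * 8 / 1e9

# Hypothetical virtualized host: 40 guests averaging 150 IOPS at 32 KiB each.
total_iops = 40 * 150
need = iscsi_bandwidth_gbps(total_iops, 32)
print(f"Estimated sustained need: {need:.2f} Gbps")  # ~1.65 Gbps with these numbers

# A pair of 1G links covers this on paper but leaves little headroom for bursts;
# a 10G link (plus a second for redundancy) removes the ceiling entirely.
```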

For instance, we use dual 10G NICs in an active/active fashion with MPIO on our vSphere hosts for iSCSI redundancy and performance, but we are supporting several dozen guests per host with varying levels of throughput/IO. It makes sense for us from both a bandwidth and a cabling-simplicity standpoint.
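To illustrate why MPIO behaves differently from link bonding, here's a minimal Python sketch of a round-robin path-selection policy. The path names are made-up vSphere-style identifiers and nothing here calls a real vSphere API; the point is just that successive I/Os alternate paths, so both links carry traffic even toward a single target:

```python
# Minimal round-robin MPIO sketch; path names are illustrative only.
from itertools import cycle

paths = ["vmhba64:C0:T0:L0", "vmhba65:C0:T0:L0"]  # two 10G paths, active/active
next_path = cycle(paths)

def issue_io() -> str:
    """Round-robin policy: each outstanding I/O takes the next path in turn."""
    return next(next_path)

tally = {p: 0 for p in paths}
for _ in range(1000):
    tally[issue_io()] += 1
print(tally)  # ~500 I/Os per path: both links are used for one and the same target
```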

Lower-I/O, more static, non-virtualized devices that happen to use SAN storage are perfectly OK on a diverse set of 1G NICs.

I do not think you will see a tangible benefit from a 10G link over a 1G link for something that barely pushes 5-10 megs of iSCSI traffic, assuming everything is properly configured. If you're not sure, you can either start high or start low and scale to the appropriate port speed if the hardware is available; whichever is most comfortable for you. Same type of deal if you're just teaming links together to squeeze a couple of extra gigabits out and going to 10G doesn't make sense (hardware costs): you'll probably be OK, with the exception of some cabling headaches.

Ryan,

thanks for your reply; it was very helpful. But my question goes a little further. Let's say, hypothetically, that we have a virtualized host carrying multiple guests and we are able to saturate those links. I got a similar question from a customer: which is better for connecting a SAN in terms of the IP network, multiple 1G links or a 10G link? More or less we're talking about pure interface differences (buffers, queues, etc.). In which scenario can we push more data: over several (let's say ten) 1G links, or over 10G?

Got it. I don't have any exact numbers for you (mainly because it varies with hardware and specific configuration options), but here are some other things to think about.

In practice, with some correct planning, you can achieve most, if not all, of the throughput you need. Some things to consider, though:

- You cannot really achieve any sort of per-packet load balancing on an EtherChannel that I know of. The best I believe you can do is hashing on TCP/UDP port numbers, and even that is iffy. Even if you could do true per-packet load balancing, I'd question the performance, given reordering issues and overhead. Generally, forwarding decisions are made per flow based on the hash you choose, and this obviously is not going to utilize your separate links efficiently if you have abnormally large flows (there's a small sketch of this after the list).

- Related to the above: if you're running per-flow load balancing, you aren't going to achieve more than 1 Gbps of throughput to any given host, because of the hashing algorithm used. This may or may not be an issue, but it's definitely food for thought.

- If you're bonding multiple 1G NICs together, make sure that if those links sit on the same ASIC you don't overwhelm it. Depending on the hardware you're using, you may need to split them up across separate ASICs so you aren't overloading one. This is where buffering/queue limitations may show up.
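As a rough illustration of the hashing point in the first bullet, here's a toy Python model of per-flow EtherChannel link selection. The hash function, addresses, and ports are invented for the example and don't correspond to any specific switch's algorithm; the behavior it demonstrates (one flow always landing on one member) is the part that carries over:

```python
# Toy model of per-flow hashing on a 4-member EtherChannel; purely illustrative.
import hashlib

def member_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                n_links: int = 4) -> int:
    """Pick a member link for a flow via a deterministic hash of its 4-tuple."""
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % n_links

# One iSCSI session is one TCP flow (initiator -> target on port 3260):
# every frame hashes to the same member, so that session tops out at a
# single 1G link no matter how many links are in the bundle.
flow = ("10.0.0.11", "10.0.0.50", 51234, 3260)
print({member_link(*flow) for _ in range(10)})  # always the same single member

# Many distinct flows do spread across members...
flows = [("10.0.0.11", "10.0.0.50", 51200 + i, 3260) for i in range(16)]
print(sorted(member_link(*f) for f in flows))
# ...but any single elephant flow is still pinned to one 1G member.
```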

It's my opinion that if the environment can support it and you're going to require significantly more than a couple of gigabits to a given box, 10G seems the best choice from a performance/utilization standpoint as well as from a manageability standpoint.
