More and more of our customers are asking for iSCSI switches for their data centers, and the only requirement ever mentioned is “non-blocking/line rate.” The 2960-S (for example) would work for this at Layer 2, but I've heard concern from various sources (both outside and within Cisco) that these are not viable for a data center iSCSI application.
The concerns always revolve around buffering capability, and I've been pointed consistently toward a Nexus 5K or Catalyst 2360 solution. Let's focus on the 2360 for now. Its datasheet - http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps10920/datasheet_c78-599610.html - is almost identical to the 2960-S's from a performance perspective. One notable difference is that the 2360 can support 4x 10-GE uplinks (versus 2x 10-GE for the 2960-S); the other is the mention of “Dynamic Buffer Allocation - all ports are allocated reserved buffer space, and additional buffering is allocated from a shared pool."
The 48-port (non-PoE) 2960-S with 10-GE uplinks lists at $6,995, while the comparable 48-port 2360 comes in at $8,695. I recognize the extra two 10-GE ports are worth something (bringing the uplinks close to 1:1 subscription), but I need some serious additional detail on the buffering benefit in an iSCSI environment. We've sold 2960-S switches into iSCSI roles in the past, and they've always worked fine - the iSCSI traffic in most data centers never comes close to pushing the capacity of the switch (so, in my opinion, the dynamic buffering would never come into play).
Simply put - I need more detail as to why a 2360 or Nexus 5000 would be a "better" iSCSI switch, when most of the traffic between servers and storage within the data center would never cross the 10-GE uplinks anyway. Moving forward, I’d like to be able to have the discussion and explain to the customer what the potential risks/benefits are between these two solutions (2960-S vs. a 2360, for example) in an iSCSI deployment. There have to be more detailed docs around that explain why a particular switch will better fit the bill - if it’s buffering, then HOW much buffering would fit the bill, etc.
This is what I can’t seem to find. Just an hour ago, our team received a customer support issue with the 3750-X family (again, line-rate/non-blocking) reporting dropped frames (and worse performance) after enabling jumbo frames.
Thanks in advance!
For a switch to provide reliable operation within a Dell EqualLogic SAN infrastructure, the following features must be available:
Non-Blocking backplane design
A switch should be able to provide the same amount of backplane bandwidth to support full duplex communication on ALL ports simultaneously.
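The arithmetic behind "non-blocking" can be sketched quickly. The port counts below are illustrative, matching the 48x 1-GE plus 4x 10-GE layout discussed in this thread:

```python
def required_fabric_gbps(gig_ports: int, ten_gig_ports: int) -> int:
    """Full duplex means every port can transmit and receive at line rate
    at the same time, so a non-blocking fabric must carry twice the sum
    of all port speeds."""
    return 2 * (gig_ports * 1 + ten_gig_ports * 10)

# 48x 1-GE access ports plus 4x 10-GE uplinks:
print(required_fabric_gbps(48, 4))  # 176 (Gbps)
```

If a vendor's quoted switching fabric figure is below that number, the switch is oversubscribed under worst-case load.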
Support for Inter-Switch Linking (ISL) or Dedicated Stacking Architecture
ISL support is required to link all switches in the SAN infrastructure together. Non-stacking switches should support designating one or more ports (through Link Aggregation Groups) for inter-switch links.
For stacking switches, the use of stacking ports for ISLs is assumed. The switch should provide at least 20 Gbps of full-duplex bandwidth.
Support for creating Link Aggregation Groups (LAG)
For non-stacking switches, the ability to bind multiple physical ports into a single logical link for use as an ISL is required. Switch should support creating LAGs of at least 8x 1Gbps ports or at least 1x 10Gbps port.
Note: Non-stacking switches with more than three EqualLogic Arrays could exhibit some performance reduction.
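On Cisco IOS switches such as the 2960-S, the LAG requirement above maps to an EtherChannel. A hedged sketch (the interface numbers and channel-group number are hypothetical; `mode active` selects LACP):

```
! Bundle the last two gig ports into an LACP EtherChannel for the ISL
interface range GigabitEthernet1/0/47 - 48
 switchport mode trunk
 channel-group 1 mode active
!
! The logical port-channel interface carries the inter-switch link
interface Port-channel1
 switchport mode trunk
```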
Support for active or passive Flow Control (802.3x) on ALL ports.
Switches must be able to actively manage “pause” frames received from hosts, or they must passively pass all “pause” frames through to the target arrays.
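On Catalyst switches, the "actively manage pause frames" behavior above corresponds to enabling receive flow control on the host- and array-facing ports. A hedged sketch (interface range is hypothetical; most Catalyst platforms honor received pause frames but do not originate them):

```
! Honor 802.3x pause frames received from hosts and arrays
interface range GigabitEthernet1/0/1 - 44
 flowcontrol receive desired
```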
Support for Rapid Spanning Tree Protocol (R-STP)
For SAN infrastructures consisting of more than 2 non-stacking switches, R-STP must be enabled on all ports used for ISLs. All non-ISL ports should be marked as “edge” ports or set to “portfast”.
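In IOS terms, that requirement looks roughly like the following (a sketch; interface numbers are hypothetical, and `rapid-pvst` is Cisco's per-VLAN implementation of RSTP):

```
! Enable Rapid Spanning Tree globally
spanning-tree mode rapid-pvst
!
! Host/array-facing ports (non-ISL) go straight to forwarding
interface range GigabitEthernet1/0/1 - 44
 spanning-tree portfast
```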
Support for Jumbo Frames
Not a requirement, but desirable. Many storage implementations can take advantage of Jumbo Frames. Jumbo frames may not provide any performance increases depending on the application and data characteristics.
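On the 2960-S and similar Catalysts, jumbo frames are enabled globally rather than per interface. A hedged sketch (the command below takes effect only after a reload, and the larger MTU must be configured end to end - hosts, switches, and arrays - or fragmentation and drops can result):

```
! Raise the system MTU for jumbo frames (requires a reload)
system mtu jumbo 9000
```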
Ability to disable Unicast Storm Control
iSCSI in general, and Dell EqualLogic SANs in particular, can send packets in a very “bursty” profile that many switches misdiagnose as a virus-induced packet storm. Since the SAN should be isolated from general Ethernet traffic, that possibility is non-existent. Switches must always pass Ethernet packets regardless of bandwidth utilization.
Adequate Buffer Space per switch port
The Dell EqualLogic SAN solution makes use of the SAN infrastructure to support inter-array communication and data load balancing on top of supporting data transfers between the hosts and the SAN. For this reason, the more buffer space per port that a switch can provide the better.
Due to the multitude of buffer implementations used by switch vendors, Dell cannot provide definitive guidelines as to how much is enough, but this should not be an issue for most Enterprise-class switch vendors' solutions.
Thanks for the replies, folks - still hoping for some deeper analysis on the specific Cisco switches mentioned above, though. Also, I'm unclear as to why there is a focus on ISL - Cisco has deprecated this proprietary protocol in favor of standardized 802.1Q VLAN tagging, which all of these switches support for trunking between them.
I have no SAN knowledge, so I'll take Victor's word seriously.
That being said, don't use the 2XXX-series switches because they won't support ISL.
If the "ISL" means Inter-Switch Link - the proprietary trunk encapsulation used on selected Cisco switches - then I agree with Leo. The 2960 and 2960-S switches do not support ISL trunk encapsulation. Their only supported encapsulation type on trunks is IEEE 802.1Q. The ISL is supported on 3550, 3560 and higher switches.
I believe that the 2960 and 2960-S switches both support ISL.
No they don't. I am certain 2960/2960S do NOT support ISL. As far as I know, only the 2900XL can support both ISL and 802.1Q trunking. 2940/2950/2955, 2970, 2960/2960G/2960S support only 802.1Q.
I too am curious about the buffers on the 2360. This is the only definitive thing I have found:
From the 2360 Q&A:
Q. How much egress buffering is available on the Cisco Catalyst 2360 Series switch?
A. The Cisco Catalyst 2360 Series switch has 2 MB of egress buffering; 1 MB is reserved. The egress buffering amount includes all the Ethernet interfaces (except Fa0). The other 1 MB is in a common pool available for any interface to use when experiencing congestion.
The maximum egress buffer a single interface can consume is 360 KB.
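Some back-of-the-envelope math on those figures (a rough sketch that ignores framing overhead) shows how long a single 1-GE egress port could absorb a line-rate burst before the 360 KB cap is exhausted, and how the frame size affects how many frames fit:

```python
BUFFER_BYTES = 360 * 1024        # per-interface egress cap from the Q&A
LINE_RATE_BPS = 1_000_000_000    # 1-GE egress drain rate

# Longest burst the port can soak up while draining at line rate
drain_ms = BUFFER_BYTES * 8 / LINE_RATE_BPS * 1000
print(round(drain_ms, 2))  # 2.95 (ms)

# Far fewer jumbo frames than standard frames fit in the same buffer -
# one reason jumbo frames can make drops worse on shallow-buffered switches
print(BUFFER_BYTES // 1500, BUFFER_BYTES // 9000)  # 245 40
```

Under three milliseconds of burst absorption per port is not much headroom when multiple initiators converge on one array port.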
It's curious that the SMB line seems to have better buffer specs than the 2360 (although I would never recommend these for carrying IP storage traffic):
All numbers are aggregate across all ports, as the buffers are dynamically shared:
2 @ 8 Mb
2 @ 8 Mb
2 @ 8 Mb
2 @ 8 Mb
I haven't worked with a 2960-S, nor am I familiar with the 2360. Looking at both of their configuration guides, the 2960-S's QoS features seem similar to those of the 3560/3750 series, while the 2360's might also be similar but with lobotomized configuration support.
As a guess, the 2360's simplified QoS configuration support might work better than a 2960-S's default configuration. Both appear to be able to use both dedicated and shared buffers, although the latter's usage can be modified by configuration. Whether these differences really matter depends on actual traffic patterns and usage.
As to choosing between them, I would be interested in the total available buffer space and how much of it can be provided to a single port, so that congestion toward a port or set of ports can be better managed.
Both, I believe, can do dynamic buffer sharing, although on the 2960-S the sharing behavior itself appears to be configurable.
Assuming both switches are similar in their buffering capabilities, and the 2960-S's extra configuration options aren't needed, the choice might come down to the "traditional" factors: cost difference, whether we need two or four 10-gig ports, and whether we need switch stacking.
The only other limiting factor I found on the 2960-S switches I deployed for an EqualLogic and DataDomain disaster recovery site was the limited number of port channels.