Cisco Support Community

New Member

How to approach performance tuning on NSS 326 family?

We are in the process of evaluating an NSS 326 Smart Storage unit to house experimental research data for a number of our scientific groups. On paper the unit looks like a good match: 7.3 TB in a RAID 6 configuration, two Gigabit NICs, support for NFS, iSCSI, and MS networking, a small package size, and a very attractive price. If the evaluation goes well, a number of NSS 326 units could be used in place of a large centralized NAS server.

So far our testing has disclosed what may be a limiting factor for our use: large data transfers.

It is not unusual for group research data to be on the order of up to tens of gigabytes in size. Think ISO images, Clonezilla snapshots of systems, VM images, and so forth. Our test data reproduces these sorts of typical transfer sizes.

When running large data transfers, we are noticing an interesting behavior when monitoring I/O with iftop and the native Resource Monitor: we see short bursts of high transfer rates followed by roughly equal periods of no data transfer at all.

This behavior suggests that the NSS 326 is write-back caching the data in local memory up to some high-water mark and then blocking until memory has drained down to some fairly low-water mark. The resulting average transfer rate is much lower than one would expect this device to be capable of. We have noticed this effect on both NFS and iSCSI transfers, so we are fairly certain that it is a lower-level OS limitation.
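One way to make this burst/stall pattern measurable from an NFS client is to time each chunk of a large sequential write individually. The sketch below is illustrative and not from the original thread; the function name and path are hypothetical, and it should be pointed at a file on the mounted NSS share. With write-back caching, early chunks complete at memory speed and later chunks stall while dirty pages are flushed:

```python
import os
import time

def chunked_write_throughput(path, chunk_mb=64, chunks=8):
    """Write a large file in chunks and time each chunk.

    With write-back caching, early chunks complete at memory speed
    and later chunks stall while dirty pages drain to disk, which
    shows up as a high-then-near-zero throughput pattern per chunk.
    """
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    rates = []
    with open(path, "wb") as f:
        for _ in range(chunks):
            start = time.monotonic()
            f.write(chunk)
            elapsed = time.monotonic() - start
            rates.append(chunk_mb / max(elapsed, 1e-9))  # MB/s for this chunk
        f.flush()
        os.fsync(f.fileno())  # force remaining dirty pages to stable storage
    os.remove(path)
    return rates
```

Comparing per-chunk rates against the average reported by iftop would show whether the stalls account for the low overall throughput.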

Has anyone else seen this particular behavior and/or does anyone have any thoughts about potential performance tuning to optimize for large file transfers?

2 REPLIES
New Member

Re: How to approach performance tuning on NSS 326 family?

How much data are you talking about in a single write? I think what you are seeing is the major increase in parity calculations required by RAID 6. RAID 5 is going to do much better in this arena; I haven't had trouble with large writes on RAID 5 yet. If you switch your setup and re-test, I would like to hear how it went.

For writing, RAID 5 is best. RAID 6 would be better for large reads.

Sounds like you have some major I/O overhead.
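To make the parity-cost argument concrete: RAID 5 keeps one parity block per stripe, computed as a byte-wise XOR of the data blocks, while RAID 6 maintains a second, independent parity (typically Reed-Solomon), roughly doubling the per-write parity work. This toy sketch (not from the thread; function names are made up for illustration) shows the RAID 5-style XOR parity and how it rebuilds a lost block:

```python
from functools import reduce

def xor_parity(blocks):
    """RAID 5-style P parity: byte-wise XOR across the data blocks of a stripe."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def reconstruct(surviving, parity):
    """Rebuild a single lost block by XOR-ing the survivors with the parity."""
    return xor_parity(surviving + [parity])

# Three data blocks in a stripe; lose the last one and rebuild it.
data = [b"\x01\x02", b"\x0f\x0f", b"\xff\x00"]
p = xor_parity(data)
assert reconstruct(data[:2], p) == data[2]
```

Every write must update this parity; RAID 6 repeats the exercise for its second (Q) parity, which is where the extra write overhead comes from.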

New Member

Re: How to approach performance tuning on NSS 326 family?

Sorry for the long delay in response; I was out on a much needed vacation. ^_^

So far we have only been running test cases using typical "production" data; duplicating images of an internal Linux ISO/source mirror host being one of the test cases. Some of our real research data contains a mix of both directories with large numbers of very small files and very large multi-gigabyte files.

RAID 6 was initially selected for our test case because it provides protection against a double drive failure, something we have had problems with in the past. I will need to repeat tests with RAID 5 to be sure, but I suspect that the performance problems we are seeing are not so much associated with parity computation as with some other factor.

The current NSS 300 data sheet lists 1 GB of RAM and a Linux 2.6 kernel, so I was wondering about kernel tuning and/or how write buffering is handled. In any case, it does appear that writes are cached up to some memory limit, after which they are blocked until the cache is drained.
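On a stock Linux 2.6 kernel, the cache-then-block behavior is governed by the VM dirty-page thresholds. Assuming the NSS firmware exposes the standard sysctl interface (which it may not), something like the following fragment would make writeback start earlier and smooth the burst/stall pattern; the values here are illustrative, not recommendations:

```
# /etc/sysctl.conf fragment -- generic Linux 2.6 VM tuning, values illustrative
vm.dirty_background_ratio = 5     # start background writeback at 5% of RAM dirty
vm.dirty_ratio = 15               # block writers once 15% of RAM is dirty
vm.dirty_expire_centisecs = 1500  # flush pages that have been dirty > 15 s
```

These map to /proc/sys/vm/* and can be applied with `sysctl -p`, though an appliance firmware may reset them at boot.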
