Yesterday I set up our new NSS4000. I updated the firmware to 1.16-3 (Mon, 08 Jun 2009 14:12:52 +0000) and created a RAID 5 array with four Western Digital WD15EADS 1.5TB drives. After the array completed, one of the drives was marked as failed. I removed and reinserted it and added it back to the array, and it failed again a few minutes later, so I replaced the drive and added the new one to the array. That was at 14:10 yesterday, and the rebuild is only at 55% over 24 hours later. I don't see the WD15EADS drive on the approved list specifically, but the WDS10 and WDS20 are both on there.
Is this a normal rebuild time for 1.5TB drives? I am going to delete the array and recreate it, since there isn't any data on the system yet, but I would like to know whether there is a problem with these drives in the NSS4000 before I put it into production.
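For what it's worth, here is a quick back-of-the-envelope estimate from the numbers given (55% done after roughly 24 hours), assuming the rebuild progresses at a roughly constant rate; real rebuilds often slow down under load, so treat this as a lower bound:

```python
# Rough rebuild-time estimate, assuming a roughly linear rebuild rate.
hours_elapsed = 24.0   # time since the rebuild started (from the post)
percent_done = 55.0    # progress reported by the NSS4000 UI

total_hours = hours_elapsed / (percent_done / 100.0)
remaining_hours = total_hours - hours_elapsed

print(f"Estimated total rebuild time: {total_hours:.1f} h")
print(f"Estimated time remaining:     {remaining_hours:.1f} h")
```

By that math the rebuild would take a little under two days total, with almost 20 hours still to go.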
I have seen this behavior twice before with TB-class drives in a RAID 5. I don't think the RAID level matters much; it's just the fact that we are addressing a lot of space. One word of caution: drives that are not on the approved drive list should really be avoided. That is not a sales plug or me preaching Cisco literature. Granted, drives that are not on the list will work, spin up, and all that fun stuff, but most of the RAID failures I see involve non-approved drives. Another thing to keep in mind is that the NSS devices can only be configured for up to 4TB. That is not to say you cannot add a larger RAID: you can install four 2TB drives, but it does not mean you can safely create a 6TB RAID 5 structure.
My guess would be that one drive is not playing nice with the NSS and is having a hard time being accessed and addressed. Add the fact that the array is over the 4TB limit, and I am not really surprised it is taking a long time.
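To put a number on the 4TB point: usable RAID 5 capacity is (n − 1) × drive size, since one drive's worth of space goes to parity. A quick sketch (the 4TB cap here is the NSS limit described above):

```python
def raid5_usable_tb(num_drives, drive_tb):
    """Usable capacity of a RAID 5 array: one drive's worth goes to parity."""
    return (num_drives - 1) * drive_tb

NSS_LIMIT_TB = 4.0  # configurable-volume limit on the NSS devices

usable = raid5_usable_tb(4, 1.5)
status = "over" if usable > NSS_LIMIT_TB else "within"
print(f"4 x 1.5TB in RAID 5 = {usable}TB usable ({status} the {NSS_LIMIT_TB}TB limit)")
# Four 2TB drives would give 6TB usable, also over the limit.
```

So four 1.5TB drives in RAID 5 yield 4.5TB usable, which is already past the 4TB mark.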
My personal rule of thumb: I always build RAIDs with identical drives, same model and size, and never mix manufacturers. That is just me talking, no one else; it's what I have learned over the years through bad experiences.
Hope that helps.