24-bay NAS recommendations

Stefan75

Hello

I've been using USB disks to store my data (25TB)... but juggling a pile of 3/4/5TB disks is becoming annoying.
I read about modern NAS/RAID tech and I think it's what I need.

I found a used 24-bay hot-swap SATA server (around $500):
- Supermicro X7DBE
- 16GB ECC memory
- 9650SE-24M8 controller
- redundant PS

I currently have 7x3TB WD green and 8x3TB Seagate (ex USB disk).
My plan is to build a hardware RAID 6 array to get about 40TB of usable space.
I want to be able to add more 3TB drives when needed.

Is the 9650SE still reliable enough (given its age and the lack of further updates)?
Or is it better to get an HBA and do software RAID / ZFS?

http://louwrentius.com/the-hidden-cost-of-using-zfs-for-your-home-nas.html
After reading that I'm leaning toward HW RAID.

What do you think?

thanks
Stefan
 
Regardless of which brand you go with, you definitely want hard drives branded for NAS use, which are rated to handle the intense vibration and workload of a NAS or server environment. Normal desktop hard drives, for example our BarraCuda, are not rated to work in large enclosures with the number of drives you're considering. They don't have the workload rating for it, as they're only rated for 8 hours a day x 5 days a week of use and up to 55TB of data per year, and they don't have firmware coded to handle vibration in those situations.

NAS drives (current Seagate branding: IronWolf & IronWolf Pro) are rated for 24x7 use, with the standard IronWolf rated for up to 180TB of data per year and the IronWolf Pro for up to 300TB per year. The larger capacities (4TB and over for IronWolf, all capacities for IronWolf Pro) also have RV (rotational vibration) sensors built in, on top of the firmware tuning, to help them work in enclosures with many drives and in RAID arrays. The sheer vibrational forces of that many hard drives all spinning in close proximity can cause performance issues and will wear down a normal desktop-grade drive quickly.

Here is a video on choosing the right drive for the right job if you'd like to check it out.
 
$500 sounds high for that server. I'd look for an E5 v1 (X9-series mobo) with 32+GB of RAM, and go with an HBA for ZFS.

Is it a Supermicro chassis, too? You'll need to be careful, because Supermicro's older expander backplanes do NOT support 3TB+ HDDs. The -A & -TQ backplanes use direct connections, so this isn't an issue for them. I've seen good info about this at servethehome.com, but there are probably other resources.

LSI's -16e HBAs seem to be the cheapest per port, and they'll connect easily to -A backplanes once you route the 4 cables back into the case. These HBAs provide 4x SAS ports of 4 lanes each, which means you can connect 16 HDDs without an expander.

IOW, there are a lot of variables, and you'll definitely need to do your research to get good results on a budget.
 
Hello

I've been using USB disks to store my data (25TB)... but it's becoming annoying (3/4/5TB disks).
I read about modern NAS/RAID tech and I think it's what I need.

I found a used 24bay hotswap SATA server (around $500):
- Supermicro X7DBE
- 16GB ECC memory
- 9650SE-24M8 controller
- redundant PS
*snip*
What do you think?

thanks
Stefan

That hardware is ancient AF; that is not a good deal.
 
If you go with consumer drives, just keep them spinning 24/7. I think most of the failures are caused by repeated spin-up/spin-down cycles.

I'm running a Supermicro 826 with 2.5" 4TB Seagate drives pulled from external enclosures; it's been going for a year now with no issues. With an Intel 710 enterprise SSD as cache, you can't even tell how slow the drives are.
 
I skipped out on ZFS and hardware RAID and used SnapRAID instead. It's actually pretty slick: I can spin down the array when not actively using it, so only the drive containing the requested data needs to spin up while the rest sleep. That seemed like a better option than something that stripes the data across all the disks and requires them all to spin all the time, like ZFS or a HW RAID adapter. http://www.snapraid.it/
 
I skipped out on ZFS and hardware RAID and used SnapRAID instead. It's actually pretty slick: I can spin down the array when not actively using it, so only the drive containing the requested data needs to spin up while the rest sleep. That seemed like a better option than something that stripes the data across all the disks and requires them all to spin all the time, like ZFS or a HW RAID adapter. http://www.snapraid.it/
Hmm, I'm starting to like SnapRAID.
But how long will it take to build the 5TB parity for 25TB of data spread over 8 disks?
And how long does every change take after writing a few gigs... minutes/hours/days?
 
Hmm, I'm starting to like SnapRAID.
But how long will it take to build the 5TB parity for 25TB of data spread over 8 disks?
And how long does every change take after writing a few gigs... minutes/hours/days?


It's fast, as in minutes. An incremental parity sync of up to a couple hundred gigs (I don't usually write much) usually takes less than 5 minutes. Most of my data is pretty static, so I set up a nightly scheduled task that runs a snapraid sync to recalculate parity for any new media added to the drive pool. It only builds parity for data that's actually there and that has changed since the last sync.
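
For anyone curious, here's a minimal sketch of that kind of setup; the drive letters, paths and schedule below are made-up placeholders, so adjust them for your own layout:

    # snapraid.conf -- example layout (all paths hypothetical)
    parity    E:\snapraid.parity
    2-parity  F:\snapraid.2-parity
    content   C:\snapraid\snapraid.content
    content   D:\array\snapraid.content
    data d1   D:\array\
    data d2   G:\array\
    # ...one "data" line per drive in the pool

and a nightly task to recalculate parity at 3 AM:

    schtasks /create /tn "SnapRAID Sync" /tr "C:\snapraid\snapraid.exe sync" /sc daily /st 03:00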
 
I'm building the SnapRAID double parity now... reading from 8 USB3 disks (VeraCrypt encrypted + NTFS) and writing to 2 USB3 disks (NTFS).
The average r/w performance is about 47MB/s per disk (ETA 27h)... a total of 8 x 47 ≈ 376MB/s read seems quite nice for USB3 (5Gbps) ;)
I had to disable the Windows 10 'Windows Search' and 'Superfetch' services because they were interfering with disk access (100% activity for nothing)!
Can't wait to test the incremental update time.

Sooner or later I'll get a 12-24 bay SATA case.
 
Keep us posted Stefan.

In my case, my first home media server has 42TB, using Hitachi 6TB disks plus Seagate 3TB and 2TB disks.

I'm running FlexRAID, and just like SnapRAID it only runs when you tell it to.

I have 2 parity drives.

It's been 7 years and it's still going strong; I've only had to replace 4 hard drives and 1 power supply.

It's on 24/7.
 
This thread is still alive, but I'm primarily responding to a 13-day-old post, so... half a necro?

Regardless of which brand you go with, you definitely want hard drives branded for NAS use, which are rated to handle the intense vibration and workload of a NAS or server environment. Normal desktop hard drives, for example our BarraCuda, are not rated to work in large enclosures with the number of drives you're considering. ... The sheer vibrational forces of that many hard drives all spinning in close proximity can cause performance issues and will wear down a normal desktop-grade drive quickly.

I get that this is the Seagate rep in the thread here, but the best quantifiable data I've ever seen presented on the topic disagrees with essentially everything stated. Backblaze, an online backup company that currently has over 86,000 drives in service in their facility, releases drive reliability stats on their blog fairly often - most recently October 27th. They regularly cram 45 SATA drives into their custom-built 4U chassis, and they keep very detailed statistics on drive reliability over time. Their blog posts are always a good read. Anyway, I bring it up because their entire business is built around using consumer-grade drives en masse, and they have stuck with consumer-grade drives (as opposed to NAS or enterprise drives) because the price premium has *not* correlated with an effective increase in drive reliability. Their most-deployed drive is the ST4000DM000, a basic 4TB Barracuda, and they have literally 33,000 of them. That's not to say they don't fail - they have a 3% annualized failure rate - but that's not out of line with any of the other drives they use. They only buy the non-basic drives if they are available cheaper than the basic drives, or are similar in price but available in larger quantity.
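
For anyone unfamiliar with the metric, Backblaze computes the annualized failure rate as failures per accumulated drive-year. A quick back-of-envelope sketch in Python (the failure count here is made up purely to illustrate the math):

    # AFR as Backblaze defines it: failures / (total drive-days / 365)
    def annualized_failure_rate(failures, drive_days):
        # fraction of drives expected to fail per drive-year of operation
        return failures / (drive_days / 365)

    # illustrative numbers only: 33,000 drives running a full year, 990 failures
    print(f"{annualized_failure_rate(990, 33_000 * 365):.1%}")  # -> 3.0%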

The only drive they've ever had a real problem with was the 3TB ST3000DM001, purchased around the timeframe of the 2011 Thailand floods. That had nothing to do with the 'consumer' nature of the drive and more to do with that specific drive model. Backblaze eventually had to purge that drive from their datacenter because of its abnormally high failure rate. My company had to do the same, actually, after multiple rapid failures of that model caused us to lose an array.
 
As for the Backblaze data, the ST4000DM001/ST4000DX000 has been problematic for the last few reports as well, with more than 10% failure rates. IIRC, Backblaze pods also have significant vibration protection built in. Of the Supermicro chassis that I've seen, they looked less than impressive on the vibration-protection front (by default).

IMO, a cost-effective route to 24-drive hot-swap is to buy any random Supermicro 24-bay server (as long as it has a SAS2 backplane). Gut the interior, because they usually come with ancient Xeons or Opterons. Then buy a newer Supermicro Platinum PSU, unless you're okay with the noise.
 
Yeah, they don't have many of those drives - 550-ish total for the two models together. They obviously ran into some reason not to buy 30k of them, though I'll be honest, I don't remember reading anything specific. They do buy a lot of Seagate drives in general. I should also say that despite my comment decrying the idea of Seagate promoting its NAS-specific drives, my favorite drives *are* NAS-specific - HGST Deskstar NAS drives in particular, at 4TB and up. I say this knowing they're owned by WD, but I still prefer these specific drives and get them whenever I (or my customers) can afford it.
 
I'm going to stick with USB disks and SnapRAID for a while. The incremental parity update only takes about 5-10 minutes.
SnapRAID has also simplified my backup routine (connect/mirror 8 disks)... and freed up the 20TB of disk space that was previously dedicated to backups.
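
For anyone following along, the basic SnapRAID workflow is just a couple of commands (the scrub settings shown are a commonly suggested example, not gospel):

    snapraid diff               # preview what changed since the last sync
    snapraid sync               # update parity for new/changed files
    snapraid scrub -p 5 -o 30   # check 5% of the array, only data not scrubbed in the last 30 days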
 
Backblaze has admirably maintained over the years a desire to keep their subscription service costs low for their customers, and in the past that has resulted in them using large amounts of consumer drives as stated. It is important to note their current and future plans, however, which do include investing heavily in larger capacity drives engineered for more enterprise & NAS use. Here is an interesting read regarding this: Yes, Backblaze Just Ordered 100 Petabytes of Hard Drives.
 
Backblaze has admirably maintained over the years a desire to keep their subscription service costs low for their customers, and in the past that has resulted in them using large amounts of consumer drives as stated. It is important to note their current and future plans, however, which do include investing heavily in larger capacity drives engineered for more enterprise & NAS use. Here is an interesting read regarding this: Yes, Backblaze Just Ordered 100 Petabytes of Hard Drives.

Yep, Backblaze is buying larger-capacity drives engineered for enterprise and NAS use. However, they're not buying them because they're engineered for enterprise and NAS use (higher reliability); they're buying them because they're larger capacity and, in the case of helium-filled drives, lower power. Anyone can see this themselves: go to Newegg, look up hard drives, pick 4TB and sort by price, and you'll see a bunch of consumer-grade drives that are the cheapest. Now change it to 10TB drives, sort by price, and whaddya know, they don't really *make* el-cheapo 10TB and 12TB drives; the NAS and enterprise models are essentially the only things on offer. Obviously Backblaze doesn't buy their drives from Newegg, but the example holds.

Even in the article you linked, the word 'reliable' or anything like it doesn't make an appearance. The only time they mention enterprise drives is an update to the comparison they've been running between 8TB consumer and enterprise models, and if you click on MY link (posted 21 days later, on October 27th), you'll see that the update is that, thus far, there isn't any difference in failure rate.

Your whole original post can be summed up as "using large numbers of consumer drives in an enclosure is a bad idea, they won't be reliable in that environment, use NAS/enterprise-specific drives instead!" and my response was simply a rebuttal to that assertion. I'm certainly not saying that folks should avoid buying NAS- or enterprise-class drives on principle; I'm saying they should avoid paying anything extra for them, because the only actual evidence I've ever seen posted says they're not any more reliable than their regular consumer-grade brethren. Obviously there are warranty differences and such that may justify the price difference for some folks, but that's not the argument you made. You said "The sheer vibrational forces of that many hard drives all spinning in close proximity can cause performance issues and will wear down a normal desktop-grade drive quickly," and I'm saying that isn't borne out by the evidence.
 
I decided to get the empty SC846 case with the BPN-SAS-846A backplane and the super-noisy 900W PSU.

For a few weeks I checked eBay for Supermicro X9/X10 boards, but getting a matching CPU and RAM seemed tricky.
In the end I got an old HP DL380 G6 (24GB) and added a Fujitsu D2607 (flashed to IT mode).

I used an HP SAS expander and 6 SFF-8087 cables in the SC846.
Had to use a PCIe x16 mining riser to power the expander.
Found a 3-port PWM fan controller from China; it works great.

Opening the 10 USB enclosures was so much fun (!! NOT !!)... I hate clip systems.
But the joy of sliding each disk/caddy into its slot made up for it ;)

I'm running a Windows Server 2012 trial off an old SSD.
SnapRAID performance is much better now... getting >100MB/s per disk (USB was ~50MB/s).

I can even get full disk speed (~150MB/s) over two parallel gigabit LAN connections using SMB Multichannel. :)
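
If anyone wants to confirm Multichannel is actually kicking in (rather than the copy just bursting from cache), SMB 3.0 on Server 2012 / Windows 8+ exposes it via PowerShell; run these on the client during a transfer:

    Get-SmbMultichannelConnection                    # one row per TCP connection SMB is using
    Get-SmbClientNetworkInterface                    # NICs SMB considers eligible for multichannel
    Get-SmbConnection | Select ServerName, Dialect   # dialect should report 3.x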

Total cost (~$466):
- SC846 $270
- DL380 $90
- Fujitsu D2607 $35
- HP expander $25
- PCIe x16 card $10
- SFF-8087 cables $30
- PWM fan controls $6

Todo:
- quiet down the SC846
- run an external SAS cable between the cases

Result:
It was quite a lot of work, research, waiting, hassle, time, etc.
But now I have a highly scalable solution, and I learned a lot in the process.
 
Good job, Stefan! This almost feels like a forum first: you asked a few questions, went off and did your own follow-up research, found a solution you like AND posted a summary?!? Mind blown!
 