What to do with my new 15x 4TB HGST (and other goodies)

BeardyDude (n00b, joined Oct 22, 2015, 3 messages)
Hey chaps, long-time reader, first-time poster!

Ok, my NAS is currently this:

12 x 1.5TB Samsung (and a 13th floating elsewhere in the house)
Norco 4220 (Rackmount case with 20x SATA on backplanes)
2 x IBM M1015 flashed to IT mode

2 x quad-core Xeon E5505
Tyan S7002 (dual CPU, Intel 6x SATA, Dual LAN, etc)
32 GB ECC Reg DDR3

2 x 16GB SSDs mirrored as a boot drive

I also have a 128GB SLC and 2 x 32GB SLC SSDs lying around that I've never gotten around to integrating into the system, plus at least one spare 120GB MLC.

All of that is running ZFSguru (hmmmm), with the drives in two vdevs of 6 x 1.5TB in RAIDZ2, for 12TB of usable storage. Not ideal, I know - I had 8 drives on a hardware RAID 5 card originally, bought another 6 plus the M1015s, and did a bit of shuffling. Anyway.
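
(For reference, that layout amounts to two striped RAIDZ2 vdevs, built along these lines - 'tank' and the daN device names are placeholders:)

  # first 6-disk RAIDZ2 vdev
  zpool create tank raidz2 da0 da1 da2 da3 da4 da5
  # second 6-disk vdev striped into the same pool
  zpool add tank raidz2 da6 da7 da8 da9 da10 da11

Usable space works out as 2 vdevs x 4 data disks x 1.5TB = 12TB.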

The Samsungs have been flawless for 7 years, but the pool has been full for a year, and I just got a shiny deal on some new spinny things - HGST MegaScale 4000 (not the B), fifteen of them, 4TB each, at £19.75 per TB all in, which I was pretty pleased with. The 7000s at Backblaze seem to be doing alright too.

I'm also not happy with ZFSguru (generally glitchy, progress on new features and versions is glacial, I'm not getting on with BSD that well, it's not as 'straight' a BSD as promised, and system updating is iffy, among other things), so I'm not getting much ancillary use from the box (just the shares) and am looking at alternatives.

Currently, my plan is this:

Unplug the 2 x boot drives, stick in a FreeNAS USB stick, and plug in 10 x HGST (plus the 12 Samsungs = the 22 drives the system can support).

From the FreeNAS USB, create a new ZFS pool from the 10 HGSTs in RAIDZ2, mount it, and copy everything over from the Samsungs (rough commands sketched after this list).

Unplug the Samsungs, plug in the other 5 x HGST, plug in a 120GB MLC SSD, and install Ubuntu 14.04, then ZFS, then probably napp-it (seems an ideal 'soft touch' for some ZFS GUI management on Ubuntu). Import the pool, then create a new RAIDZ1 from the 5 x HGST for iSCSI targets, temporary dump space, etc.

Figure out how to put the SLC SSDs to use!

Use Webmin/Cloudmin to manage sharing, iSCSI targets, etc., and containers/VMs where easy (host system where not) for Transmission, Sick Beard, Tvheadend, and a bunch of streaming stuff.

The plan is, in the event of a failed drive in the larger Z2, to scrap the smaller Z1 array, pinch drives from it as needed, and rebuild it smaller.
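
Roughly what I have in mind, command-wise. Pool names and the daN/sdX device names below are placeholders (I'd actually use gptid/by-id labels), so treat this as a sketch rather than gospel. On the FreeNAS stick:

  # new 10-disk RAIDZ2 pool on the HGSTs
  zpool create bigtank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9
  # snapshot the old pool and replicate the lot across
  zfs snapshot -r tank@migrate
  zfs send -R tank@migrate | zfs recv -Fdu bigtank
  # once verified, release the new pool so Ubuntu can import it cleanly
  zpool export bigtank

Then on the Ubuntu 14.04 box (ZFS on Linux came from the zfs-native PPA in those days, IIRC):

  sudo add-apt-repository ppa:zfs-native/stable
  sudo apt-get update && sudo apt-get install ubuntu-zfs
  sudo zpool import bigtank
  # 5-disk RAIDZ1 scratch pool from the remaining HGSTs
  sudo zpool create scratch raidz1 sdb sdc sdd sde sdf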

So, my questions are -

1) Anyone spot any grave mistakes?

2) Anyone got any better ideas? I'm up for convincing, nothing set in stone!

3) Anyone got any tips on how to do some file-level mirroring in the above setup?
- I'd like some network share locations to be stored on both the larger Z2 and smaller Z1 arrays for extra redundancy - e.g. photos. I do back up (selectively), but I'm a bit lazy in that regard.

4) Oh yes, and how to put the SLC SSDs to good use - I'm thinking the 128GB for L2ARC and a 32GB for SLOG? Worth having a mirrored SLOG nowadays? Or the other 32GB for the L2ARC, since the 128GB is older and I think slower? (ATP Velocity S2 vs TS32GSSD500)

5) InfiniBand/FC or something to hook up the most-used desktops - the most-used desktop is actually within 10 feet of the NAS, and there's a load of cheap faster-than-GigE hardware floating around. What could I rig up cheaply as a second, super-fast link just for that desktop? I have no idea when it comes to IB/FC!
 
It depends exactly what you'll be doing, but I think you'd probably be making it more complicated than necessary by adding FC/IB. Just stick with GbE, unless you have some extreme use case you didn't mention?

To maximise the size of your new pool, you could export the current pool and then shut down and remove the two current boot drives, giving you 12 slots free in your chassis rather than 10.
Install the new OS on USB sticks or something (even if it's just temporary, until you copy the data off the old pool). Import your old pool and create a new 12 x 4TB RAIDZ2.
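
Something like this, with placeholder names - the key bit is exporting before you shut down so the import on the new OS is clean:

  zpool export tank    # on the old ZFSguru install, before shutdown
  # ...pull the boot SSDs, add the new drives, boot the new OS...
  zpool import tank
  zpool create bigtank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11
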
The other 3 new drives, whatever. Maybe keep one as a cold spare, and with the others take a routine copy of stuff you care about from the main pool and rotate them off site as a backup?
Bear in mind, afterwards you will have 12 spare 1.5TB drives that you could use for other things (perhaps a giant RAID 10? :cool:)

Regarding SLOG: your SSD is unlikely to fail, but you would see a write performance hit if it did. Depends whether that is an issue for you? Just for home file shares and stuff, probably not. Standard practice in production AFAIK is to mirror it. It doesn't need much space, so yes, use the 32GB ones for that.
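
Adding it later is a one-liner anyway - something like this, with placeholder device names:

  # mirrored SLOG on the two 32GB SLCs
  zpool add tank log mirror ada1 ada2

And log devices can be taken back out with zpool remove if you change your mind.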
 
I'd consider doing 7 mirror vdevs, which would still give 28TB of usable space & excellent performance.

For #3, set up your filesystems so you can use zfs send/recv.
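
For example (dataset and snapshot names made up):

  zfs snapshot bigtank/photos@2015-10-22
  zfs send bigtank/photos@2015-10-22 | zfs recv scratch/photos
  # subsequent runs only send the delta
  zfs snapshot bigtank/photos@2015-10-29
  zfs send -i @2015-10-22 bigtank/photos@2015-10-29 | zfs recv scratch/photos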
 

Adding the 10 already assumed I'd unplugged the boot mirror - I'd need to add another controller to plug 12 in! I was also thinking ahead: if in future I decide to expand, I can buy another 5 drives, create an identical 10-drive vdev, and stripe the two, same as I have now with 2 x 6 - but this time with all 20 bays filled! Down the road, perhaps!

Ok, so the mirrored 32GB for the ZIL sounds like a plan. I just got a deal on some Samsung 100GB SSDs, PM825, with power-loss protection but MLC - what do you reckon, those vs the SLC without PLP?

And any experience with L2ARC? I was thinking of using the single 128GB SLC for that purpose - PLP and mirroring aren't much benefit there, right, since losing an L2ARC doesn't actually break anything?
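
As I understand it, it'd just be a case of the following (placeholder device name), since cache devices can come and go freely:

  # single cache device - losing it only costs warm cache, not data
  zpool add bigtank cache ada3
  # and it can be pulled back out at any time
  zpool remove bigtank ada3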

The 13 x 1.5TB Samsungs will probably be going back on eBay to repair my bank balance :) They're holding their price quite well, as well they should - not a single failure in 7 years! Just like the 8 x 400GB Samsungs they themselves replaced :) Darn annoyed Seagate bought them.


Can I do a send/recv on a dataset rather than a whole pool, then? Pulling a sensitive dataset from the main pool to the secondary - nice idea! Though I'm thinking I might 'KISS' and use rsync on a cron job :)
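
Something like this in crontab, I guess (paths are placeholders, and the trailing slashes matter to rsync):

  # 3am nightly one-way mirror of the photos share
  0 3 * * * rsync -a --delete /bigtank/photos/ /scratch/photos-mirror/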

As for the many-mirrors approach... nah. I want my other 20TB :) (The 10-drive Z2 plus the 5-drive Z1 gives me roughly 48TB usable, vs 28TB from mirrors.) It's not that sensitive stuff anyway, and the chances of SOME data being lost are about the same or higher than with a Z2: assuming I don't lose three drives overnight (plus resilvering), I'll keep everything, whereas a mirror vdev could be lost if I lost 2 drives in one go. I've heard mirrors are good for performance, but from past experience the Z2 will be pushing more than my LAN can take anyway!
 