Backup NAS, FreeNAS and transfer of data

DangerIsGo

2[H]4U
Joined
Apr 16, 2005
Messages
3,000
Currently I have a Norco 4020 case with 20 HDDs running Win 7 with Drive Bender (since I missed WHS/Drive Extender so much!)

Currently, out of the 45TB available, 13TB is used, with another 13TB as duplicates. (That's 26TB utilized, for the math-illiterate folk here ;)

What I'd like to do in the near future, once all the drives are the same (right now, only 8 need to be changed), is move to a software RAID solution like FreeNAS, which has really caught my eye, to better utilize my space.

My idea was to have every row in my NAS chassis be an array, so that I could use the current set of disks to transfer my data over to each newly created array.
(e.g. Remove 4 drives from the Drive Bender pool and use them to create an array in FreeNAS.
Transfer some data.
Remove 4 more drives. Create a new array. And so on...)

Is this the best way of accomplishing this, or is there a better way? My thinking was that since I've heard a very large array rebuild can take on the order of a week or two, a smaller set of arrays would be more beneficial, and I could just pool them together.
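The staged migration described above can be sketched in ZFS terms. This is only a sketch: the pool name `tank`, the device names `da0`-`da7`, and the 4-wide RAIDZ2 layout are all placeholders, not a recommendation.

```shell
# Stage 1: pull 4 drives from the Drive Bender pool and create the
# first pool with one vdev (4-wide RAIDZ2, purely as an example).
zpool create tank raidz2 da0 da1 da2 da3

# ...copy a first batch of data onto 'tank', freeing up 4 more drives...

# Stage 2: grow the pool by adding the next 4 freed drives as a
# second vdev. Note: 'zpool add' grows a pool by whole vdevs; an
# existing RAIDZ vdev cannot be widened by adding single disks.
zpool add tank raidz2 da4 da5 da6 da7
```

One caveat with this approach: data already written stays on the old vdevs, so the pool ends up unevenly filled until it is rewritten.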


This brings me to my next part...a backup NAS. I'm petrified of losing my data, which has taken me years and years to collect. I don't want to store it in the cloud (so no need to suggest this), so once this build is 100% complete, I was looking at adding a secondary, backup NAS, built in a similar fashion to this one; the OS/software is still up in the air. Maybe a different solution, in case an update bombs the primary? Who knows.
Would this be considered a sound investment?

Finally, my last bit: what would be the best way of transferring data between them? I'm also going to have a separate Windows machine which houses several Windows applications needed to control some data on the NAS box.
Would gigabit Ethernet be best, or would a dedicated SAS card work better?
Do SAS cards/drivers work with FreeNAS?
Could enough overhead be produced to slow the machine/network if traffic is too high?
(I'm going to guess I would do Windows <-> P. NAS <-> S. NAS, where the P. NAS would have dedicated connections to the Windows box and the secondary NAS.)

Thoughts on all this nonsense? Thanks!
 
I would recommend 10GbE if you have the funds. I rsync between two boxes (FreeNAS 9.3 and an Ubuntu 14.04 box with some really old 2TB drives in md RAID), and 100MB/s makes for some long waits until the Ubuntu box gets shut off again. Supposedly FreeNAS 10 adds InfiniBand support, so you could pick up some of those cards now and wait a few months for it to release.
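For reference, the box-to-box sync above is just a plain rsync invocation; something along these lines is a typical starting point (the hostname and paths here are made up, and on FreeNAS-to-FreeNAS setups, ZFS replication via zfs send/recv is the native alternative):

```shell
# Mirror a dataset from the FreeNAS box to a local backup pool.
# -a preserves permissions/times, -H keeps hard links, --partial
# lets an interrupted transfer resume instead of restarting files,
# and --info=progress2 (rsync >= 3.1) shows whole-transfer progress.
rsync -aH --partial --info=progress2 \
    freenas:/mnt/tank/media/ /mnt/backup/media/
```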

I have been very happy with FreeNAS. I like your approach for migration, if your data is segmented in such a way that you can make it work.
 
I'll have to check Spiceworks. My take is that FreeNAS is pretty accessible & performs decently. While it's updated fairly regularly, I'm not sold on its security compared to DIY on BSD/Linux/whatever.

Interesting that your link puts BSD at the bottom of the list...
 
You'll have 20x 3TB drives to work with?

Yes, that's correct.

I'm active on Spiceworks, where a lot of users are building DIY NASes. FreeNAS gets clobbered by people there. If you're looking for free, FreeBSD seems to be the NAS OS of choice. Here's a link that might be helpful for looking at options: http://mangolassi.it/topic/6233/open-storage-operating-systems-for-sam-sd

That's a very interesting article. I'll have to look more into this Linux vs. BSD debate for software RAID and what solutions are out there (free vs. not free). He did mention that Solaris isn't free, but I just checked Oracle's site and it is. Is there another version that isn't?

As far as the 10GbE goes, I'll read more into that. I also don't want it to be crazy expensive, and from what I've seen, card prices aren't terrible.
 
He did mention that Solaris isn't free, but I just checked Oracle's site and it is. Is there another version that isn't?

Options in the Solarish world:
Oracle Solaris - commercial OS, free for demo and development

Based on Illumos (the free Solaris fork):
NexentaStor - commercial software, free up to 18TB raw, but commercial use prohibited
OmniOS - free, optionally with a commercial support option
OpenIndiana - free, but only dev editions with a desktop option
SmartOS - free, for cloud and KVM use
 
OK. You need to store 13TB on 20x 3TB. Goals are "to better utilize my space" & reduce being "petrified of losing my data."

Run 3x vdevs, each with 6x HDs, in RAIDZ2. Use #19 as a hot spare & keep #20 on the shelf as a cold spare.
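That layout, sketched as a zpool command (the pool name and the `da0`-`da18` device names are placeholders; the real names come from `camcontrol devlist` or the FreeNAS GUI):

```shell
# 3 RAIDZ2 vdevs of 6 drives each (12 data + 6 parity), plus a hot spare.
zpool create tank \
    raidz2 da0  da1  da2  da3  da4  da5  \
    raidz2 da6  da7  da8  da9  da10 da11 \
    raidz2 da12 da13 da14 da15 da16 da17 \
    spare da18

# Let the hot spare resilver in automatically when a drive dies.
zpool set autoreplace=on tank
```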

Mirrors would give 10x data HDs - same as your Drivebender, right? - while my proposed Z2 delivers 12x. You could get ~10% more space in a 2x 7+Z3 setup, but 7x data HDs are less efficient than 8x & you'd have no spares.

Z2 delivers strong redundancy: after an initial drive failure, the odds that 2 more failures both hit the degraded vdev are about 7.4%. With 10 mirrors, a 3rd failure (after 2 in different mirrors) has an 11.1% chance of pool loss. See this for info on those odds & the "use mirrors" counterargument.
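Those percentages check out under the simplifying assumption that each further failure strikes one of the surviving drives uniformly at random:

```python
from fractions import Fraction

# 3x 6-wide RAIDZ2 = 18 pool drives. After one failure, the degraded
# vdev has 5 survivors out of 17 remaining drives; the pool dies only
# if the next TWO failures both land in that same vdev.
z2_loss = Fraction(5, 17) * Fraction(4, 16)
print(f"RAIDZ2, 2 more failures: {float(z2_loss):.1%}")   # 7.4%

# 10x 2-way mirrors = 20 drives. Suppose two drives have failed in
# DIFFERENT mirrors (no data lost yet): 18 drives remain, and exactly
# 2 of them are the lone partners of the already-failed drives.
mirror_loss = Fraction(2, 18)
print(f"Mirrors, 3rd failure: {float(mirror_loss):.1%}")  # 11.1%
```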
 
So I looked into 10GbE solutions, and I found some things I have questions about. I see there are optical and copper connections.

I really haven't found any inexpensive fiber cards, so I'm going to leave fiber out.
For copper, I found SFP+ (direct-attach) and Cat6/7 (10GBASE-T) connections.

I just want something that is low-profile (since one will be going in a 2U case), won't break the bank, and works under Linux/BSD.

OK. You need to store 13TB on 20x 3TB. Goals are "to better utilize my space" & reduce being "petrified of losing my data."

Run 3x vdevs, each with 6x HDs, in RAIDZ2. Use #19 as a hot spare & keep #20 on the shelf as a cold spare.

Mirrors would give 10x data HDs - same as your Drivebender, right? - while my proposed Z2 delivers 12x. You could get ~10% more space in a 2x 7+Z3 setup, but 7x data HDs are less efficient than 8x & you'd have no spares.

Z2 delivers strong redundancy: after an initial drive failure, the odds that 2 more failures both hit the degraded vdev are about 7.4%. With 10 mirrors, a 3rd failure (after 2 in different mirrors) has an 11.1% chance of pool loss. See this for info on those odds & the "use mirrors" counterargument.

Right now I have 13TB of data. That data is quickly and constantly expanding ;)
I read your link on 'why use mirroring instead of raidz' and I must say I'm starting to agree with it. I've been scouring the internet recently, and that's pretty much the overall consensus.
I don't know how I'm going to set up my second, backup NAS, but for this one, what I think I'll consider is 10 mirrored vdevs. As he said, "don't be greedy. 50% storage efficiency is plenty." That's what I currently have with Drive Bender/replication, so it's not like it's going to make a difference. If a drive fails, I don't have to worry about losing the entire pool, and at no point will I ever have to worry about losing the *entire* pool. (I say it like that because if two drives fail in the same vdev, then that data is toast - but that's the same as how it is now.)
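For what it's worth, that 10-mirror layout as a zpool command looks like this (device names are placeholders). Each `mirror daX daY` pair is an independent vdev, ZFS stripes writes across all of them, and resilvering a mirror only has to copy one drive's worth of data:

```shell
# 10 two-way mirrors = 50% storage efficiency, fast resilvers.
zpool create tank \
    mirror da0  da1  mirror da2  da3  mirror da4  da5  \
    mirror da6  da7  mirror da8  da9  mirror da10 da11 \
    mirror da12 da13 mirror da14 da15 mirror da16 da17 \
    mirror da18 da19
```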

I guess my thing is...will I notice an improvement if I go to an open-source solution (Linux/BSD/FreeNAS, etc., with mirrored vdevs) from my current Windows solution (JBOD-like pool with replication enabled; secondary copy stored on a random other drive)?
 
If a drive fails, I don't have to worry about losing the entire pool, and at no point will I ever have to worry about losing the *entire* pool. (I say it like that because if two drives fail in the same vdev, then that data is toast - but that's the same as how it is now.)
I have some bad news for you - that's not how ZFS works. Lose any vdev, and the whole pool goes bye-bye.

...will I notice an improvement if I go to an open-source solution (Linux/BSD/FreeNAS, etc., with mirrored vdevs) from my current Windows solution (JBOD-like pool with replication enabled; secondary copy stored on a random other drive)?
I don't know about Drive Bender, but my experience is that ZFS is far superior to both WHS & StableBit DrivePool. Just opinion, but I - and my data - stand by it.
 