Inexpensive ZFS or BTRFS build

Sure, if you're going to run FreeBSD you're going for ZFS, not btrfs.
While we're at it, do you want to run a RAID-Z (RAID-5) array (all 4 HDDs) or two mirror arrays (RAID-1)?
 
Unlike Windows, driver support for Unix (BSD or Solaris) is more limited.
There are also stability and performance reasons to use only the controllers
that are mainstream for ZFS storage servers, and these are LSI controllers.

In professional storage boxes you will find nothing else; it does not matter whether you buy HP,
Dell, Oracle, or SuperMicro. So follow this suggestion. Cheapest are OEM versions of the
LSI 9211, like the Dell H200 or IBM M1015, which you may need to reflash with the original LSI IT firmware.

To begin with, you can start with onboard SATA/AHCI.

I would also start with a web-managed appliance. If you want to use BSD, start with FreeNAS.
The other option is Solaris/OmniOS, where ZFS comes from and where ZFS/OS integration is best.
For those you can try my napp-it web UI, see http://www.napp-it.org/doc/downloads/napp-it.pdf

If you are familiar with ZFS principles you may switch to BSD and the CLI. With napp-it you can use
the CLI together with the web UI, which you should not do with FreeNAS.

btw
STH is servethehome, a forum that specializes in storage and server hardware
 
It's not like the zfs and zpool commands are hard, and you also have the ZFS scripts that you can run via periodic.conf. I don't really see a need to discourage a full OS, since it's pretty much straightforward.
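A minimal sketch of the periodic.conf approach mentioned above, using FreeBSD's stock periodic scripts (the threshold value is just an example):

```shell
# /etc/periodic.conf -- enable FreeBSD's built-in ZFS housekeeping
daily_status_zfs_enable="YES"           # include `zpool status` in the daily report
daily_scrub_zfs_enable="YES"            # scrub pools from the daily periodic run
daily_scrub_zfs_default_threshold="35"  # days between scrubs of each pool
```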
 
So you're going to claim that any frontend covers all that?
So you're going to claim that any frontend covers all that?
It's just a small file server at home; FreeBSD handles it fine with pretty much default settings and the essential commands. All the basic information is covered in the Handbook and is more than enough to get it going in a working state. You can do overly complicated setups, but that more than likely isn't the goal in this case, so I fail to see the need.
 
Sure, if you're going to run FreeBSD you're going for ZFS, not btrfs.
While we're at it, do you want to run a RAID-Z (RAID-5) array (all 4 HDDs) or two mirror arrays (RAID-1)?

I wanted 2-drive redundancy. I was just doing some research and found some worrisome advice from another source, which suggests:


Do not use raidz1 at all on > 750 GB disks.
Do not use less than 3 or more than 7 disks on a raidz1.
If thinking of using 3-disk raidz1 vdevs, seriously consider 3-way mirror vdevs instead.
Do not use less than 6 or more than 12 disks on a raidz2.
Do not use less than 7 or more than 15 disks on a raidz3.
Always remember that unlike traditional RAID arrays, where the number of disks increases IOPS, in ZFS it is the number of vdevs, so going with shorter-stripe vdevs improves a pool's IOPS potential.

Since I have 4 TB drives, I fall outside any recommended practice here....

What do you suggest?
 
I wanted 2-drive redundancy. I was just doing some research and found some worrisome advice from another source, which suggests:


Do not use raidz1 at all on > 750 GB disks.
Do not use less than 3 or more than 7 disks on a raidz1.
If thinking of using 3-disk raidz1 vdevs, seriously consider 3-way mirror vdevs instead.
Do not use less than 6 or more than 12 disks on a raidz2.
Do not use less than 7 or more than 15 disks on a raidz3.
Always remember that unlike traditional RAID arrays, where the number of disks increases IOPS, in ZFS it is the number of vdevs, so going with shorter-stripe vdevs improves a pool's IOPS potential.

Since I have 4 TB drives, I fall outside any recommended practice here....

What do you suggest?

With 4x4TB drives you are best off going with two mirror vdevs (essentially like RAID 10).
So, create a single zpool with 2 drives in a mirror (this will be the first vdev), then add a new vdev to it with the second 2 drives in a mirror. The individual vdevs are essentially striped together. Create them both at the same time, though: if you create the first, put data on it, then create the second, ZFS will NOT migrate data evenly between the 2 vdevs. ZFS is AWESOME, but expanding existing zpools is kind of its main weakness, in this regard, in my opinion.
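For reference, a dry-run sketch of what that looks like on the command line. The pool name "tank" and the FreeBSD-style device names ada1..ada4 are assumptions; adjust them to your system. Creating both mirror vdevs in one command keeps data balanced across them from the start.

```shell
#!/bin/sh
# Dry-run sketch: one zpool built from two mirror vdevs in a single command.
# Pool name "tank" and device names ada1..ada4 are assumptions.
run() { echo "+ $*"; }   # dry-run helper; change `echo "+ $*"` to `"$@"` to execute
run zpool create tank mirror ada1 ada2 mirror ada3 ada4
run zpool status tank    # confirm the two-mirror layout
```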

If you could get 2 more drives, I would then say go with RaidZ2 across all 6.

I run 6x2TB w/RaidZ2 plus a hot spare (I should probably have just gone with RaidZ3, but oh well.)
I need to expand my setup because I am out of space!!


Also, if you prefer Linux, try ZoL -- ZFS on Linux. I have been using it for a few years now with zero issues. Had some drives fail and swapped them out; all rebuilds went smoothly. http://zfsonlinux.org/ There are packages for most major Linux distros (I use Ubuntu, so you just add the PPA, then install it, and it builds the kernel modules and everything on each new version of ZFS or the Linux kernel itself.) Very slick.
 
With 4x4TB drives you are best off going with two mirror vdevs (essentially like RAID 10).

I have 4x2TB drives. That's what triggered this build; I was looking for a low-cost way to put them to use. I think your point still holds, though. Not sure I want to invest in 2 more drives for this build. I haven't yet bought the 3 items I discussed with diizzy above, but the PCIe SATA card I'm looking at only lets me add 2 more drives total, including my OS drive, and the enclosure that comes stock with the T20 doesn't have room for any more drives.

I should have planned the whole thing out more thoroughly before buying anything... I've only bought the T20 so far besides the 4x2TB drives.
 
Well, if you have 4 HDDs you have several options.

RAID-1 (mirroring) * 2 --> Two separate arrays (available space will be equal to one HDD each)
RAID-10 (striped+mirrored) --> Single array (available space will be equal to two HDDs)
RAID-Z (RAID-5) --> Single array (available space will be equal to three HDDs)
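To put rough numbers on those options for 4x2TB drives, here is a quick back-of-the-envelope script (drive-label TB, before filesystem overhead; raidz2 is included for comparison even though it isn't in the list above):

```shell
#!/bin/sh
# Usable capacity for 4 x 2 TB drives under each layout (rough).
DISKS=4; SIZE_TB=2
echo "mirrors (RAID-10ish): $(( SIZE_TB * DISKS / 2 )) TB"      # half the disks hold copies
echo "raidz1 (RAID-5ish):   $(( SIZE_TB * (DISKS - 1) )) TB"    # one disk of parity
echo "raidz2 (RAID-6ish):   $(( SIZE_TB * (DISKS - 2) )) TB"    # two disks of parity
```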

People are running this on low-powered HP MicroServers (myself included) and whatnot, so it surely doesn't need to be expensive to work well. Again, it's home usage, not a data center.
 
You may use a small SSD in a USB case to boot the OS (or a very reliable 32GB USB stick).
Then you can use the 4 disks for a pool.

The other option is a SAS controller with an external connector and an external SAS JBOD case.
 
Why would he need an external case when everything fits inside?
http://homeservershow.com/forums/index.php?/topic/7449-dell-t20/
He has 4 3.5" slots and 2 2.5"...

@ facesnorth
The case is perfectly fine...

* Place the 2.5" HDD where the ODD slot is, and connect it (the boot drive) to the first SATA port on the motherboard.
* Place two of the 2TB HDDs in the lower drive slots and connect them to the last ports on the motherboard, leaving one port unpopulated.
* Place the remaining HDDs in the upper drive slots and connect them to the Senda card.
* Done

@ All
Again, please keep in mind that this is for home usage, not a data center or similar.
Budget is an issue; yes, the ASMedia card isn't as fast as the LSI HBAs, but without getting a new case only two additional HDDs can be fitted securely, and the NIC is a gigabit one, so the bottleneck is going to be the network interface anyway.
He can always get an LSI HBA later if he needs to expand, and move to another case or get an external case for additional HDDs without any issues (what _Gea mentioned).
...and yes, given what I mentioned above, I do think getting an LSI HBA and cables (about 100 + cabling) is a waste given the current setup, compared to a SATA card that handles everything fine and costs about a fourth of that.
As for the performance concerns, http://www.techpowerup.com/forums/threads/asmedia-asm1061-perfomance.158215/#post-2605379
The write speed of the SSD bottlenecks at ~200 MB/s, so I'm going to say it performs well enough to saturate two mechanical drives.
 
To be clear, it doesn't really matter which PCIe card I buy as long as it has the ASMedia ASM1061 chipset, right?

So I could for example choose:

http://www.amazon.com/Optimal-Shop-...e=UTF8&qid=1453219444&sr=1-4&keywords=ASM1061
http://www.amazon.com/Express-Adapt...e=UTF8&qid=1453219444&sr=1-3&keywords=ASM1061
http://www.amazon.com/mSATA-Express...e=UTF8&qid=1453219444&sr=1-7&keywords=ASM1061
etc?

I'd like to order one through Prime to get it faster (the T20 just arrived). Most of these have iffy reviews, which is to be expected. I'd like to avoid one that isn't recognized half the time I boot up, if possible, though I'll just have to see what I end up with. I'm also leaning towards one with external ports, so I'd configure it to use 1 internal port and let my free port remain external for easy access.
 
As an Amazon Associate, HardForum may earn from qualifying purchases.
I have 4x2TB drives. That's what triggered this build; I was looking for a low-cost way to put them to use. I think your point still holds, though.
Yes, point still stands.

Well, if you have 4 HDDs you have several options.

RAID-1 (mirroring) * 2 --> Two separate arrays (available space will be equal to one HDD each)
RAID-10 (striped+mirrored) --> Single array (available space will be equal to two HDDs)
RAID-Z (RAID-5) --> Single array (available space will be equal to three HDDs)

Well, first of all, it is a TERRIBLE idea to use single-parity RAID on disks > 1TB or so. Second, when you use ZFS and create two mirror vdevs in the same zpool, you are essentially creating a RAID 10 mirror/stripe. There would not be much point in making two separate zpools, unless you really wanted to partition your data that way. Plus, a single zpool with 2 vdevs would be ~2x as fast.
 
@ facesnoth
Yes, get this one in that case.
http://www.amazon.com/IO-Crest-Port...e=UTF8&qid=1453225423&sr=1-1&keywords=ASM1061

@ extide
RAID-5 is very common and widely used; it's a tradeoff, as with everything else. It's been used for years just fine. You have to remember that RAID != backup. I've been using it for several years just fine, as long as you keep backups, which you should do regardless.

RAID 5 is very commonly used in enterprise, with smaller drives that have significantly lower URE rates. Running RAID 5 on 2TB drives that are ALSO not enterprise drives is just ASKING for your data to go away.

Yeah, RAID isn't a backup, but why design a solution that sucks?
 
So what should I do? Is there anything I can do with my 4 disks? Should I be buying 2 more 2TB disks?
 
@ facesnorth
You don't need the external ports, you're going to use the internal ones only. :)

@ extide
Again, these aren't enterprise setups... That said, a lot of us are doing this, including myself, just fine. Yes, I've had drive failures and replaced HDDs just fine.
 
@ facesnorth
That'll do just fine, just ignore cantalup in this case...
If you just want it to serve as a file server, FreeNAS is a decent way to go; if you want to learn more, give FreeBSD a spin. It's not hard and you'll have a lot more options available.

@ cantalup
Please be more constructive in your replies; anyone can post random spam.
And feel free to prove your point rather than replying with "..." and "$$$".

I can be objective,
and I have already explained:
go to STH and many will discuss it with you.

I am not spamming, but I don't want to argue without end.

_Gea will help you understand, since he is good with all-in-one setups and napp-it.
 
I don't know what sth forum is.

I'm looking for specific suggestions. I wouldn't know which HBA card to pick, and a few suggestions made in this thread have turned out not to be good ones, so I'm looking for things that are clearly consensus.

Not really sure how to determine what's a good or crappy SATA card. Also not sure how to differentiate good from crappy enterprise gear.

The servethehome forum.

I can explain in detail; go to STH and many will help you, including me, to understand your needs.
STH focuses on storage and server parts (home users and SOHO).

The ASM SATA cards are crappy;
their speed is very slow.
A Marvell card is better than ASM.


If you have storage, you:
1) need to back up
2) need to recover

When you do that with a crappy SATA card, the transfer rate is always slow and takes much more time.

Go with onboard Intel SATA + a 6G LSI HBA card (can be bought for 35 to 50 bucks).
You will understand later when something happens and you need a quick backup or recovery...

ASM is NOT enterprise gear, just a cheap SATA chip on a PCIe 1x card.

One of my motherboards has that chip; I disabled it due to many issues with slow performance.

You want to use ZFS? That is good, since BTRFS RAID5/6 is not stable enough.

You can pick any OS to run ZFS.
I am running ZFS on Linux on my 4 machines; 2 of them run 24/7.

Go to the STH forum and you will understand how to build the best machine for you,
since used enterprise parts are not expensive compared to brand-new consumer parts.

I started with new consumer parts 10 years ago, and moved to used enterprise parts that cost the same as or less than new consumer parts.
The build should last 5-7 years if possible: no hardware updates, just HDDs.
 
A ZFS storage server is much more than the zfs and zpool commands.

You only need to look at the ZFS administration guides from Oracle:
https://docs.oracle.com/cd/E26505_01/pdf/E37384.pdf


Well said, _Gea :)

That makes it clear to the average/starting user that ZFS only looks simple at the zfs/zpool-plus-scripting level;
actually, much more can be explored in ZFS :D

I am going to stop replying, since this thread seems to be headed to neverland ......
 
I checked out the site; I don't really see how it's any better than over here. It seems a lot less fleshed out. I've always gotten the help I've needed over here, so I'm not sure why I'd need to go to another site.
 
I said before that I'd never heard of STH and don't know what it is.

edit: never mind, I didn't read the next message.



Thanks, reading through the rest of your message now.

Just for an intro: http://www.servethehome.com/current-lsi-hba-controller-features-compared/

This was my journey into storage :p

1) HW RAID: Adaptec, series 3XXX
2) Moved to software RAID: onboard SATA plus PCIe 1x SATA cards with two or four ports (ASM was one of them; I don't remember the other). This was not good when doing recovery and backups, so I disabled the ASM and the extra PCIe 1x SATA card, just used onboard SATA, and was OK with that. I also put the Adaptec RAID 3XXX back in.
3) Moved to HBAs: a Dell H200 and an M1015, flashed to IT firmware. Running smoothly, and backup and recovery are faster than before.

I was using mdraid (Linux software RAID), and moved to ZFS on Linux and BTRFS RAID 0/1.


I'll stop posting; I believe that's enough simple info and understanding.

good luck for your build
 
I checked out the site; I don't really see how it's any better than over here. It seems a lot less fleshed out. I've always gotten the help I've needed over here, so I'm not sure why I'd need to go to another site.

Post your thread there first....

I know STH is better at answering storage DIY questions.
 
@ facesnorth
You're asking someone who makes rather non-constructive claims ;-)

That said, the ASM controller only has 2 channels, so the jumpers are used to select the external or internal connector(s). It's also faster than the Marvell controllers (someone needs to do their homework), which are available as 2ch and 4ch controllers; they have quirky AHCI protocol support at best.

You'll have one free SATA port; I mentioned a possible way to connect all drives here:
http://hardforum.com/showpost.php?p=1042091805&postcount=53
I left one port on the mobo free as I intended to balance the setup, mainly for logical reasons. You can use all the connectors on the mobo and use one on the ASM card if you prefer that.
 
Raid 5 is very commonly used in enterprize, with smaller drives that have significantly lower URE rates. Running raid 5 on 2TB drives that are ALSO not enterprize drives is just ASKING for your data to go away.

Yeah raid isn't a backup, but why design a solution that sucks?

extide, what should I be doing in your opinion? I don't want to spend more than a few hundred total, and want to make use of the existing 4x2TB drives that I have.

I've already bought the T20 and now an 8GB stick of ECC RAM, which is most of the money I wanted to spend. I still need to get an OS drive and a SATA extender.

I was ultimately looking to create a RAID 6 type setup with 2 parity disks. Is this possible with 4 disks on ZFS?
 
@ facesnorth
That said, the ASM controller only has 2 channels, so the jumpers are used to select the external or internal connector(s). It's also faster than the Marvell controllers (someone needs to do their homework), which are available as 2ch and 4ch controllers; they have quirky AHCI protocol support at best.

You'll have one free SATA port; I mentioned a possible way to connect all drives here:
http://hardforum.com/showpost.php?p=1042091805&postcount=53
I left one port on the mobo free as I intended to balance the setup, mainly for logical reasons. You can use all the connectors on the mobo and use one on the ASM card if you prefer that.

OK, I was confused about why you said I'd have one free on the motherboard; now I understand. If that's the best way to do it, then I can do it that way (using the 2 on the card...).
 
It's mostly a matter of preference; also, if you want to mirror the boot drive later on, it's better to have the pair on the same controller.
 
Am I going to be able to create a RAID 6 type setup with 2 parity disks? Is this possible with 4 disks on ZFS?

I didn't really want to do RAID 5 or 1 or 10.
 
No, you need at least 5 (storage) drives.
You can do RAID-Z (RAID 5), RAID 1 x 2, or RAID 10 given your current number of HDDs.
This is not a limitation of ZFS.
 
extide, what should I be doing in your opinion? I don't want to spend more than a few hundred total, and want to make use of the existing 4x2TB drives that I have.
Extide prefers 2 mirrors to Z1.

Mirrors will give better performance and a faster rebuild, but you'll have the capacity of 2 HDDs, not 3.
 
extide, what should I be doing in your opinion? I don't want to spend more than a few hundred total, and want to make use of the existing 4x2TB drives that I have.

I've already bought the T20 and now an 8GB stick of ECC RAM, which is most of the money I wanted to spend. I still need to get an OS drive and a SATA extender.

I was ultimately looking to create a RAID 6 type setup with 2 parity disks. Is this possible with 4 disks on ZFS?

Am I going to be able to create a RAID 6 type setup with 2 parity disks? Is this possible with 4 disks on ZFS?

I didn't really want to do RAID 5 or 1 or 10.

OK, IMHO you should do two mirror vdevs in a single zpool.

So you have 4 drives and want double parity. You CAN do this with RaidZ2, but you will end up with less performance than with two mirror vdevs, and the same amount of space.

If you really want to do a RAID 6 type setup (RaidZ2) then you really ought to have 6 disks; that way you have 4 for data and 2/3 space efficiency.
With 4 drives you can do 1/2 efficiency, either with two mirrors or with RaidZ2 (less perf, though). Single parity (RAID5/RaidZ1) is possible (3/4 efficiency), but I would highly suggest avoiding it.

So yeah, do two mirror vdevs in a single zpool -- then you will have a single 4TB volume with two drives' worth of redundancy.

I guess a 4-disk RaidZ2 setup does have ONE advantage -- you can lose ANY two disks and be OK, vs. two mirror vdevs, where you can only lose one from each mirror pair. ZFS best practices say 6+ drives for RaidZ2, though, I believe.


So yeah, use your T20, your RAM, and your 4x2TB drives, then get something cheap/small for the OS and be done with it. I hear that case has 2x 2.5" bays, so you can get a small SSD or a laptop HDD for the OS.

EDIT: So, in reviewing the thread, it appears that the T20 only has 4 SATA ports. Is that true? If so, you will need a cheap PCIe SATA card to plug in all the drives. Can you verify that there are only 4 ports?
 
RAID-Z is going to be fine; the CPU is speedy, so it won't be a bottleneck if you need to resilver.
In your case I'd probably go for 2 separate mirror arrays rather than RAID 10, unless you need the space and are willing to use RAID-Z(1).

FWIW, I've been running RAID-Z (4x HDDs in 3 arrays) for 7+ years now without issues, but do keep backups of what's really important.

Doing some quick math: if you max out your array, it'll take slightly less than 6 h to replace a HDD if the average write speed is 100 MB/s.
That isn't going to be any faster regardless of the array, as the HDD will be the bottleneck.
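Spelling that estimate out (the 2 TB disk size and the 100 MB/s sustained write speed are the assumptions above; a real resilver only copies allocated data, so a partly full pool finishes sooner):

```shell
#!/bin/sh
# Rough resilver-time estimate: data on the replaced disk / sustained write speed.
# Worst case assumed: a completely full 2 TB disk, written at 100 MB/s.
DISK_GB=2000; SPEED_MBS=100
SECS=$(( DISK_GB * 1000 / SPEED_MBS ))   # 2,000,000 MB / 100 MB/s = 20,000 s
echo "~$(( SECS / 3600 )) h $(( SECS % 3600 / 60 )) min"   # just under 6 h
```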
 
A traditional resilver (without the Oracle Solaris sequential-resilver improvements) is not sequential but highly IOPS sensitive. IOPS scale with the number of vdevs (on reads, 2x per mirror vdev). This means that a RAID-10 of 4 disks has 2x the write IOPS and 4x the read IOPS of a single disk or a RAID-Z of 4 disks.

This may not be relevant on a home setup with 4 disks, but it may become important with many disks.
https://blogs.oracle.com/roch/entry/sequential_resilvering
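In rough numbers, assuming ~100 IOPS per disk (a typical 7,200 rpm figure; the assumption is mine, not from the post above):

```shell
#!/bin/sh
# IOPS scale per vdev, not per disk. Mirror reads can be served by either
# side of the mirror, hence the extra factor of 2 on reads.
DISK_IOPS=100   # assumed figure for a 7200 rpm HDD
echo "raidz of 4 disks (1 vdev): ~${DISK_IOPS} write / ~${DISK_IOPS} read IOPS"
echo "2 mirror vdevs:            ~$(( 2 * DISK_IOPS )) write / ~$(( 4 * DISK_IOPS )) read IOPS"
```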
 
In your case I'd probably go for 2 separate mirror arrays rather than RAID 10 unless you need the space and are willing to use RAID-Z(1).

What do you mean by this? Are you saying to create two zpools with a single mirror vdev in each, instead of one zpool with 2 mirror vdevs in it? Why?

It's not really RAID 10, but similar, in that it's a stripe of two mirrors.

My suggestion is still: one ZPOOL with 2 mirror VDEVs (of 2 drives each)
 