ZFS build - feedback requested

Slogger

Over the last few days I have been planning a server build to run FreeBSD + ZFS. I have used FreeBSD since 1996 which is why I am choosing this implementation of ZFS. I have been reading many ZFS posts on this site as well as a couple of other forums regarding the hardware to choose. Below is the equipment I am considering:

Case: Fractal Design Define XL Black
Mobo: Gigabyte GA-870A-UD3
CPU: AMD Phenom II X6 1075T
RAM: 2 x F3-10666CL9D-8GBXL G.Skill 2x4GB PC-10666 (16GB)
NIC: 1 x Intel Gigabit CT Desktop Adapter PCIe
PSU: Seasonic X-650
OS Drive: 1 x Corsair Force Series 60GB SSD
ZFS: 6 x 2TB Samsung F4 HD204UI

My goal is to run it in RAIDZ2, giving me about 8TB of usable space. Does anyone have any suggestions or issues with any of this hardware in a similar setup? I have tried to avoid hardware that has been mentioned as problematic.

My primary concerns are whether the motherboard, CPU and RAM are a good combination to run this on, and the power requirements (should I use a lower-rated PSU?). Power consumption is always an issue of course, but I would like good performance.

Any input would be appreciated :)
 
@Slogger

Heya, nice build; it's pretty close to mine. Do note there were some firmware issues with those Samsung drives, so just double check to make sure yours are fine.

There is only one part I don't really "get": the SSD as OS drive. Other than giving nice fast reboot speeds, I don't think you'll get a lot out of it. Better to use an old/spare/cheap drive (or two in a mirror) for the OS and use the SSD as cache. I'm not using an SSD in my build at home since I don't care that much about the speed (it's a storage server ^^).

But maybe you got a reason for it.
 
I would go for an Asus motherboard and get ECC memory.

And you should mirror the OS drive. You don't need an SSD for the OS drive; use it as cache for ZFS instead.
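
Adding an SSD as an L2ARC read cache later is just one command; a minimal sketch, assuming a pool named tank and the SSD showing up as ada6 (both names are placeholders):

# add the SSD as an L2ARC (read cache) device to an existing pool
zpool add tank cache ada6
# confirm the cache device shows up under the pool
zpool status tank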
 
With this processing power I would even consider buying two Intel NICs and aggregating them...
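
On FreeBSD that aggregation would be done with a lagg(4) interface; a rough /etc/rc.conf sketch, assuming the two Intel ports show up as em0/em1, the switch speaks LACP, and the address is made up:

# aggregate em0 and em1 into one LACP trunk at boot
ifconfig_em0="up"
ifconfig_em1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 192.168.1.10 netmask 255.255.255.0"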
 
16GiB memory is very good! ECC would be even better, but if you make this a dedicated ZFS box you can do without ECC unless the data is absolutely crucial and you need enterprise-level data security. In other words, you would want ECC more on your workstations than on your ZFS server. ZFS can correct many memory errors; your workstation likely runs Windows and as such is very fragile to memory corruption, and could corrupt your data outside of ZFS's control. So if you go ECC, put ECC on your workstation(s) first.

I would use a different method for your system disk; a pair of good USB sticks can do, for example one 4GB and one 8GB, so that both flash devices are unlikely to fail at the same time. Or install directly to your pool.

What kind of FreeBSD installation are you looking at? Regular FreeBSD means a UFS system disk. This is something you should avoid, since mixing UFS and ZFS is bad for your memory consumption; UFS will steal memory from ZFS. The best is a ZFS-only system, with a Root-on-ZFS system disk. My ZFSguru distro can install such a system, or alternatively you can use the mfsBSD ZFS install script.
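
For the curious, a hand-rolled Root-on-ZFS install boils down to roughly the following, done from a live/install environment; the pool name, partition and dataset layout here are only placeholder examples, and the installers mentioned above automate all of this:

# create the pool on a prepared freebsd-zfs partition and mark the boot dataset
zpool create -o altroot=/mnt zroot ada0p3
zfs create -o mountpoint=/ zroot/ROOT
zpool set bootfs=zroot/ROOT zroot
# ...extract the FreeBSD base system into /mnt, then point the loader at ZFS:
echo 'zfs_load="YES"' >> /mnt/boot/loader.conf
echo 'vfs.root.mountfrom="zfs:zroot/ROOT"' >> /mnt/boot/loader.conf
echo 'zfs_enable="YES"' >> /mnt/etc/rc.conf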

You may not need the Intel NIC; I would try the onboard gigabit NIC first. If that gives you decent performance, you can save the money for an Intel adapter in your Windows workstations instead, since Windows is very driver-sensitive; Windows only uses third-party drivers, unlike Linux/BSD which also write their own drivers.

A 300W power supply should be enough for that system, and idling around 40W should be your target; you probably won't achieve that with a 650W PSU. Generally your system should idle at around 20% of the PSU's rated capacity, so with a 40W idle that works out to a 5 x 40W = 200W power supply. That would be optimal for power consumption, but probably not for spin-up current; the system might need around 200W just to boot up and spin up the disks.

You should connect your SSD to the onboard SATA to be TRIM-capable. Have you thought about a controller? Chipsets offer at most 6 SATA ports, and with the SSD you would need 7.
 
Very interesting and useful posts.

I was considering ECC memory but here in Australia ECC ram is a lot more expensive than regular memory. My workstation is a Mac Pro and has 12GB ECC memory in it.

After reading through the thread I think I will remove the SSD and add two mirrored 7200rpm drives. How would speed compare running the system off USB sticks?

I had totally forgotten to check if that motherboard had an integrated GPU. The GA-880GA-UD3H looks good. The motherboard has 6 x SATA 6Gb/s and 2 x SATA 3Gb/s connectors. I was going to connect the OS drive(s) to the 2 SATA 3Gb/s connectors and the 6 storage drives to the others.

When it comes to installing FreeBSD I had not yet worked out exactly how I would do it. I didn't realise ZFS and UFS didn't get along. I will have to read up on setting the system up with root on ZFS.
 
16GiB memory is very good! ECC would be even better, but if you make this a dedicated ZFS box you can do without ECC unless the data is absolutely crucial and you need enterprise-level data security. In other words, you would want ECC more on your workstations than on your ZFS server. ZFS can correct many memory errors; your workstation likely runs Windows and as such is very fragile to memory corruption, and could corrupt your data outside of ZFS's control. So if you go ECC, put ECC on your workstation(s) first.

wish i saw that before i made my build ^^
 
The motherboard has 6 x SATA 6Gb/s and 2 x SATA 3Gb/s connectors. I was going to connect the OS drive(s) to the 2 SATA 3Gb/s connectors and the 6 storage drives to the others.

I don't think this will work as planned: the 6x SATA ports are from the southbridge and are just fine; however, the additional two are from some third-party chip ("GIGABYTE SATA2 chip") and I doubt BSD or Solaris like that one... you would have to check with sub.mesa, gea or some other guru.
 
The GA-870A-UD3 motherboard has 6 native SATA 6Gbps ports from the chipset, and 2 FakeRAID ports offered by "GIGABYTE SATA RAID", which is a rebranded Silicon Image or JMicron controller. They should work, but you should consider them of much lower quality and with potential quirks, such as having to avoid writing to the last sector or you may not be able to boot. This kind of FakeRAID is known for a lot of BIOS boot issues.

Gigabyte also isn't a brand known for low power consumption, and their DualBIOS software feature can overwrite data on your HDDs without you ever having given consent to such an action; beware! You absolutely do not want the DualBIOS feature to overwrite your HDDs; check or email Gigabyte to find out whether this is the case.

You can also consider using the 6 standard SATA ports plus an HBA for additional ports. I would definitely use SSD(s) for your new ZFS box, but you can decide to buy those later when they have a supercapacitor (Intel G3, Marvell C400, SandForce SF2500) and are faster as well. The 6 onboard SATA ports would be TRIM-capable in AHCI mode; a lot of HBAs are not.

Having two mechanical disks in a mirror just for your 200MB system disk is quite an inefficient choice for power consumption. Consider using no system disk at all and installing directly to your pool instead: a lot simpler, less power, fewer SATA cables used, and cheaper as well.

Performance of your system disk is irrelevant if the box is going to be used as a fileserver. In fact you may want to put /tmp and /var on a memory filesystem so the system disk wouldn't be accessed at all after booting.
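
FreeBSD has rc.conf knobs for exactly that (mdmfs-backed memory disks); a small sketch, with sizes that are just guesses to tune for your box:

# /etc/rc.conf - keep /tmp and /var in RAM so the system disk stays idle
tmpmfs="YES"
tmpsize="128m"
varmfs="YES"
varsize="64m"
populate_var="YES"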
 
A Phenom will likely use more power than an Athlon. I went with an Athlon II X4 635 (because it was 15 bucks more than an X2) and it's more than overkill. ;) Just something to think about.

I just built mine last month and went with:
ASUS M4A79XTD EVO - 2x PCIe x8 (physical x16 slots) plus 2x PCIe x1
Older chipset, but everything seems to be working fine.
The previously mentioned X4 635
4GB ECC RAM
3x Samsung F4 2TB

I'm using the Realtek NICs on both the NAS box and my i7 rig. I put in an Intel at some point, but it seems to max out the old 80GB drive I'm running OS X from anyway.
 
One more thing to consider:
do you want to use it as a NAS only?

Personally, I use more than one OS. With a modern computer, you can do this with virtualization. The main problem: if you want to virtualize several systems including Windows and a ZFS storage OS, you need a system capable of running a type-1 hypervisor like ESXi or Xen, and capable of I/O virtualization to pass the disk controller and disks through to the ZFS OS.

With most desktop systems this is not doable, including this 100 Euro/$ Gigabyte mainboard. The cheapest mainboards capable of VT-d hardware virtualization are about 160 Euro/$, like a Supermicro X8SIL-F.

Just something to consider.
I would never want to be without this feature.

Gea
 
A Phenom will likely use more power than an Athlon.
The Phenom X6 is remarkably efficient when stressed; adding two cores didn't raise power under load that much (it was in either the AnandTech or THG review, I don't remember which). The idle power is less good though; some results were a solid 10W higher than other AMD offerings. Still, it's not that bad, and if you need a lot of processing power the X6 can give it to you. Compression/encryption sort of stuff would thrive on this CPU.

But really slick will be Bulldozer, if AMD can get it to the shelves fast enough. That would be AMD's first high-k metal gate (HKMG) CPU, which should have very low idle power drain. Intel has used HKMG from 45nm onward, and we saw a huge reduction in idle power for 45nm Core 2 versus 65nm Core 2, so the same could be true for AMD. That said, most recent AMD chips already do fairly well at idle power consumption.
 
Personally, i use more than one OS. With a modern computer, you can do this with virtualization.
Is it worth the trouble? Going for two separate boxes (workstation/server) seems so much simpler. And you don't need specific hardware (Vt-d) and don't have to share resources with your storage box.

On the other hand, it would be very slick to use both on the same system, preferably with internal network that can far exceed the limitations of ordinary gigabit. The problem is that I've yet to see this play nicely! To me, virtualization seems like an unnecessary risk added to your storage that you can easily remove and go for dedicated boxes. One less problem to worry about or able to cause headaches when it's not working or having some issue.

What about doing it the other way, and using a ZFS OS as host and running Virtualbox on there. Not great as desktop OS but it could offload some tasks for you to the ZFS server while still being in a Windows environment.
 
What about doing it the other way, and using a ZFS OS as host and running Virtualbox on there. Not great as desktop OS but it could offload some tasks for you to the ZFS server while still being in a Windows environment.

this is what i plan on doing. running xen in a jail (so i don't accidentally break anything while setting up xen) and virtualize what i need in there. preferably dedicate a vm for timemachine backup for my macbook, and a few other things. i've never setup xen on freebsd but i have a friend of mine who knows the installation like the back of his hand. if i get frustrated i'll just use virtualbox. cuz thats pretty simple. :p
 
I thought FreeBSD could function as DomU (guest) but not as Dom0 (host)? Xen is really cool though, i hope i can integrate that sometime in my ZFSguru project. :)
 
Is it worth the trouble? Going for two separate boxes (workstation/server) seems so much simpler. And you don't need specific hardware (Vt-d) and don't have to share resources with your storage box.

On the other hand, it would be very slick to use both on the same system, preferably with internal network that can far exceed the limitations of ordinary gigabit. The problem is that I've yet to see this play nicely! To me, virtualization seems like an unnecessary risk added to your storage that you can easily remove and go for dedicated boxes. One less problem to worry about or able to cause headaches when it's not working or having some issue.

What about doing it the other way, and using a ZFS OS as host and running Virtualbox on there. Not great as desktop OS but it could offload some tasks for you to the ZFS server while still being in a Windows environment.

hello sub.mesa
The point is, when buying a new server, you can add this as an option. It is also absolutely trouble-free on hardware that can run ESXi and VT-d with supported SAS controllers like LSI 1068 or 2008. I haven't tried it with FreeBSD, but I do not expect more trouble with it and passthrough than with any other virtualized guest.

I use it on all newer installations at work and at home. No need to have several boxes for server tasks beside a desktop machine.

The other way, using a ZFS OS (or any other full-featured OS as base + a type-2 hypervisor like VirtualBox), is far below the feature set of ESXi in terms of supported guest systems, overall performance or stability.

Xen with a dom0 ZFS OS would be a dream. But it's not available, and the usability of Xen is currently far below ESXi or XenServer. Sun/Oracle cancelled dom0 efforts on Solaris; I know there are efforts to reintegrate it in OpenIndiana - eventually.

Gea
 
I thought FreeBSD could function as DomU (guest) but not as Dom0 (host)?

yeah i misunderstood something about xen from a friend of mine earlier, he added me to a conf call earlier (gets pretty boring at work) and i asked him about it. vbox here i come.

hardware that can run ESXi and VT-d with supported SAS controllers like LSI 1068 or 2008. I haven't tried it with FreeBSD, but I do not expect more trouble with it and passthrough than with any other virtualized guest.

The other way, using a ZFS OS (or any other full-featured OS as base + a type-2 hypervisor like VirtualBox), is far below the feature set of ESXi in terms of supported guest systems, overall performance or stability.

Xen with a dom0 ZFS OS would be a dream. But it's not available, and the usability of Xen is currently far below ESXi or XenServer. Sun/Oracle cancelled dom0 efforts on Solaris; I know there are efforts to reintegrate it in OpenIndiana - eventually.

Gea

apparently virtualbox can do hardware emulation? or am i misunderstanding something here too. i've used ESX less times than i have fingers on one hand. and i agree with you as far as using type 2 hypervisors leads to less performance etc.. but if you're just joe schmoe doing this in their home for small things, i don't think it would matter too much.. unless you're a performance addict
 
yeah i misunderstood something about xen from a friend of mine earlier, he added me to a conf call earlier (gets pretty boring at work) and i asked him about it. vbox here i come.



apparently virtualbox can do hardware emulation? or am i misunderstanding something here too. i've used ESX less times than i have fingers on one hand. and i agree with you as far as using type 2 hypervisors leads to less performance etc.. but if you're just joe schmoe doing this in their home for small things, i don't think it would matter too much.. unless you're a performance addict

From Wikipedia:
Hardware emulation
VirtualBox supports both Intel's hardware virtualization VT-x and AMD's AMD-V.
Hard disks are emulated in one of three disk image formats: a VirtualBox-specific container format, called "Virtual Disk Image" (VDI), which are stored as files (with a .vdi suffix) on the host operating system; VMware Virtual Machine Disk Format (VMDK); and Microsoft Virtual PC VHD format. A VirtualBox virtual machine can, therefore, use disks that were created in VMware or Microsoft Virtual PC, as well as its own native format. VirtualBox can also connect to iSCSI targets and to raw partitions on the host, using either as virtual hard disks. VirtualBox emulates IDE, SCSI, SATA and SAS controllers to which hard drives can be attached.

The point is that VT-x allows several guests to share a CPU and RAM; it does not allow direct hardware access, for example direct disk access. This is not a problem for a usual guest OS, but it's a huge problem for a ZFS storage OS: you lose not only performance but also the ability to import such a pool in another system (the disks are not ZFS-formatted), disk failure handling within ZFS, SMART monitoring within ZFS, and control over synchronous writes. To solve this you need not only hardware emulation, you need I/O virtualization to allow guests full control of real hardware like the disk controller and disks, not only access to an emulated disk on an emulated controller. This was introduced by Intel as VT-d (much more sophisticated than VT-x alone).
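
For reference, the "raw partitions on the host" bit from that Wikipedia text is set up roughly like this in VirtualBox (VM name, controller name and paths are made-up examples); even then the guest only talks to an emulated controller, so you still lose the SMART and controller-level control described above:

# create a VMDK descriptor that passes a whole raw disk to a VM (run as root)
VBoxManage internalcommands createrawvmdk -filename /vms/rawdisk1.vmdk -rawdisk /dev/ada2
# attach it to an existing VM's emulated SATA controller
VBoxManage storageattach "storagevm" --storagectl "SATA" --port 0 --device 0 --type hdd --medium /vms/rawdisk1.vmdk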

It is not a question of a little bit better or worse; it is a question of possible or not.

Gea
 
which is why i asked, thanks for the clarification ;) how much of a loss do you think could be expected if running a guest in vbox on top of FreeBSD w/ ZFS?
 
The Phenom X6 is remarkably efficient when stressed; adding two cores didn't raise power under load that much (it was in either the AnandTech or THG review, I don't remember which). The idle power is less good though; some results were a solid 10W higher than other AMD offerings. Still, it's not that bad, and if you need a lot of processing power the X6 can give it to you. Compression/encryption sort of stuff would thrive on this CPU.

Idle power is mostly what I was considering. The OP didn't mention much as far as specific use, but I would guess that most home servers spend the large majority of their time at idle. I'm admittedly not very familiar with the x6, however.

I agree that for most home uses Virtualbox would be sufficient. ESXi would be slick to play around with, but unless you already know that you need it, you probably don't. I almost think it would still be wise to have an ESXi test box on a separate machine anyway.
 
which is why i asked, thanks for the clarification ;) how much of a loss do you think could be expected if running a guest in vbox on top of FreeBSD w/ ZFS?

If your base OS is already a ZFS OS, and you are looking at the differences besides the performance aspect (only the FreeBSD host runs at full speed), you have to compare vbox vs ESXi.

I do not use vbox, but I suppose the main advantages of ESXi are:
best overall performance
no overhead for the base OS (the ESXi core is about 70MB in size)
-> nearly all resources are available for guests
a more stable base OS
better management functions
better resource control between guests (CPU, RAM)
memory over-commitment (assign more memory to guests than you have physically)
virtual network switch management
supported guest systems (nearly everything runs on ESXi)

conclusion:
a full-featured base OS + a type-2 hypervisor like vbox is OK if you primarily need the base OS and the guests are not as important.

a type-1 bare-metal hypervisor is needed if you have several guests of roughly the same importance, or if you need high availability and stability.

and lastly:
ESXi is quite easy:
-boot the ESXi installer CD and install ESXi on a small boot drive (10 min)
-connect to the ESXi server with a browser from Windows and install the vSphere management tool; all management tasks are done from this Windows app
-create a new VM or import a VM and start it via the vSphere application

Gea
 
]|[ Mar']['in ]|[ said:
I'd like to know how the OP's build goes, as that is pretty much exactly what I want to build for myself.

Build a server based on bare-metal ESXi?

Part 1:

- You need certified or known-to-work hardware
- Download the free ESXi 4.1 boot ISO from VMware
- Boot from the ISO and install to a boot disk (USB or SATA; 4 GB for ESXi only, 16-32 GB for an All-In-One)

Installation is simple, with no special questions or knowledge needed.
If you get problems, your hardware is not supported.

That's it, ESXi is up and running. Think of ESXi like the firmware of a WLAN router; it's just 80 MB in size.
You cannot set anything locally beside network settings, keyboard, language and the admin password.

To manage your ESXi server you need a Windows computer. Point your web browser at the IP of your ESXi system to download and install the free Windows management software, the vSphere client.

Start the vSphere client and connect to your ESXi server. You can now create, manage and remote-control your virtual machines.


Gea
 
I was considering ECC memory but here in Australia ECC ram is a lot more expensive than regular memory.

Are you sure? Looked into Kingston modules?

Consider importing ECC RAM from the States. In Norway ECC is cheap (5-10% more expensive than non-ECC), but we have to wait 5-7 weeks for the modules to be made and arrive by boat (go figure!).
 