First ZFS upgrade some questions

DataMine

OK, I am new to the ZFS data storage thing and have a few questions, and yes, I have googled it before asking. Sizes are rounded for easy math.
-The unit is mainly used for storage and will be accessed over a 2x1Gbps bonded adapter by up to 10 machines, but mostly 5 (2 XBMC units, 1 laptop, and 2 desktops).
-Also, I'm currently running Ubuntu + ZFS; would it be better to run ESXi + a ZFS OS + Ubuntu instead?

-RAID-Z vs RAID-Z2 and RAID-Z3: is there much overhead, and will it hamper any VMs I have stored on those partitions? I know it can be done, but I found no real specs on how much of an issue it is. Should I just set up a RAID 0 with 2 500GB Blue disks and back up to the RAID-Z array, or can I run the VMs directly off the ZFS pool?

-Setting up a zpool: is it possible to blend multiple RAID-Z levels into one large pool like you can with LVM2? For example, RAID-Z 8x2TB (14TB) and RAID-Z2 8x1.5TB (9TB) in one large pool for 23TB of storage.

-ZFS layout: I currently have 14 2TB disks and am getting 3 more soon, for a total of 17 2TB disks. All data on these disks is backed up to individual 750GB-1TB disks. How should I set up my RAID pools? Since these disks are all mixed, I can think of a few different setups. The data is important, but since it is all backed up I am not too worried about it failing; I have run an 8x2TB LVM2 span with no RAID for over 2 years without issue. I plan on using these 8 disks in my new build.

1 = 17x2TB in RAID-Z for about 32TB of usable storage + 1 hot spare added later.
2 = 17x2TB in RAID-Z2 for about 30TB of usable storage.
3 = 17x2TB in RAID-Z3 for about 28TB of usable storage.
4 = 9x2TB Green 5400 RPM disks in RAID-Z + 8x2TB regular 7200 RPM disks in RAID-Z + a shared hot spare added later, giving me about 30TB of usable space. Does mixing 5400 and 7200 RPM disks have a huge impact on transfers?

Here are my rig specs.

Server (new build)

OS: Ubuntu 12.04 / Linux Mint 14, LXDE as desktop; uses 140MB of RAM booted with all listed programs
Case: Rackable U4 16 Bay Sata case $160 (reused from last build) + Dual 2.5" Hard drive PCI adapter for holding OS drive and maybe cache SSD later $8
CPU: Intel Xeon L5520 @ 2.27GHz (4 physical cores, 8 threads), 30°C-35°C, $84 (for 2, only 1 in use)
HeatSink: Supermicro SNK-P0035AP4 $48 (for 2, only 1 in use)
Motherboard: TD240 $52
SATA Card: IBM ServeRAID BR10i PCIe 8-port SAS/SATA controller (44E8690) $26
SATA Cables: 2x 1m SFF-8087 to SATA breakout cables, $16 each
Memory: 16 GB (4x4GB) PC3-8500 ECC $47
PSU: 80 Plus 500-600 watt, not picked out yet
OS HDD: Seagate Momentus 5400.2 ST96812AS 60GB 2.5" 5400 RPM hard drive, $15, had it lying around
Storage HDD: 17x2TB
Server Manufacturer: Custom done by Me
Graphics: Integrated
Other Software-
-Webmin 1.620 (remote admin and terminal)
-Transmission (torrent)
-Jdownloader (file downloader)
-Filezilla (FTP file Downloader)
-ProFTP (FTP server)
-mdadm (Software Raid)
-GNOME Disk Utility 3.0.2 (downgraded from 3.6; I really hate the new interface)
-ZFS on Linux (ZFS)
-HandBrake (Video Encoder)
-XChat (IRC)
-VirtualBox (Virtual Machine)
-Samba (File Share)

My old build (not with ZFS)
OS: Ubuntu 12.04 (upgraded from 10)
Case: Rackable U4 16 Bay Sata case $160 (reused from last build)
CPU: AMD ATHLON X2 4450e 2.3ghz $19
HeatSink: bubble cooler $15
Motherboard: ASUS M2N-SLI Deluxe AM2 NVIDIA nForce 570 SLI MCP ATX AMD $99
Sata Card: 3Ware 9500S 12 Port (PCI-X in PCI port) $50 + 2 port PCI-E x2 $15 each
Sata Cables: 16x24" standard cables $10
Memory: 4 GB (2x2GB) $45 (upgraded from 2GB in 2012 at 4x512MB)
PSU:?
OS HDD: 1TB (os and download folder)
Storage HDD:
-8x2TB (14.4TB LVM2 no raid, all data had a backup)
-5x1.5TB (5.41TB no backup mdadm raid5 on 3Ware PCI)
-880 GB from OS disk for downloading in LVM2 volume, never expanded

Server Manufacturer: Custom done by Me
Graphics:
Other Software-
-Webmin 1.5 (remote admin and terminal)
-Transmission (torrent)
-Jdownloader (file downloader)
-Filezilla (FTP file Downloader)
-ProFTP (FTP server)
-mdadm (Software Raid)
-LVM2 (logical volume management)
-GNOME Disk Utility 3.0.2
-ZFS on Linux (ZFS) (test run install for later upgrade)
-HandBrake (Video Encoder)
-XChat (IRC)
-VirtualBox (Virtual Machine)
-Samba (File Share)
 
Why the slow memory, is it cheaper? Why not 2x8GB 1333 or 1600MHz?

Yes a zpool is made up of 1 or several vdevs and you can have one filesystem spanning all the drives. I wouldn't go below RAIDZ2.
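
To make that concrete, here is a rough sketch of the 8x2TB + 8x1.5TB example from the question (the pool name 'tank' and the sdX device names are placeholders; zpool add needs -f here because the two vdevs use mismatched RAID-Z levels):

# create the pool with the 8x2TB disks as a RAID-Z vdev
zpool create tank raidz sda sdb sdc sdd sde sdf sdg sdh
# add the 8x1.5TB disks as a second, RAID-Z2 vdev (-f overrides the mismatched replication level warning)
zpool add -f tank raidz2 sdi sdj sdk sdl sdm sdn sdo sdp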
 
16GiB RAM is on the low side for this class. Always buy the maximum capacity DRAM modules you can afford; buying 4GiB modules means a potential RAM upgrade is going to be very pricey. Buy 2 times 8GiB if you want to save money. As it looks now, you have an overkill excess of CPU power, but limited RAM quantity. ZFS likes it pretty much the other way around. In most cases you would want to disable HyperThreading, by the way. Video render farms and other 'crunch' applications are the main exception to this.
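
For what it's worth, on ZFS on Linux the ARC size can be capped or raised with the zfs_arc_max module option; a minimal sketch, assuming you want to give the ARC roughly 8GiB (the value and the conventional /etc/modprobe.d/zfs.conf location are just examples, adjust to your RAM):

# as root: reserve about 8GiB (in bytes) for the ARC
echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf
# takes effect the next time the zfs module is loaded (reboot, or reload the module)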

I'm missing information on the harddrives themselves. Are they 4K sector harddrives? The ZFS pool configurations you listed are not optimal for modern 'Advanced Format' harddrives sporting 4K sectors. In many cases you want multiple vdevs instead of one big and slow RAID-Z3 with a flotilla of harddrives. Multiple vdevs of 6 or 10 disks in RAID-Z2 are optimal configurations.
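
To make that concrete with the 17 disks mentioned above, a rough sketch of one such layout: a 10-disk RAID-Z2 vdev plus a 6-disk RAID-Z2 vdev plus a hot spare, with ashift=12 forced for 4K-sector drives. 'tank' and the sdX names are placeholders; /dev/disk/by-id names are safer in practice:

zpool create -o ashift=12 tank \
  raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj \
  raidz2 sdk sdl sdm sdn sdo sdp
# 17th disk as a hot spare; roughly (8+4) x 2TB = 24TB usable before overhead
zpool add tank spare sdq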

If you choose RAID-Z you might want to utilise an SSD to fix the slow random read performance. This only works effectively if your ZFS box is going to be used 24/7. If you reboot often, the SSD caching is not interesting to you.
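
If you do go that route, the SSD is added as an L2ARC cache device; a one-line sketch with placeholder pool and device names (losing a cache device never endangers the pool):

zpool add tank cache /dev/disk/by-id/ata-ExampleSSD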

What is a 'Hard drive PCI adapter'? Just something passive to store a 2.5" harddrive at the location of a PCI slot, or are you actually using a PCI slot on your motherboard?

Finally, please be aware that ZFS on Linux is in a different class of software (beta) than the Solaris and BSD implementations of ZFS. Though gaining stability and maturity, I do not recommend ZFS on Linux at this time. One alternative you might look into is to build one system dedicated to ZFS NAS, and run all your other software on a different box. This greatly simplifies your setup and allows you to make use of BSD / Solaris ZFS implementations without a hard time getting your software up and running just because everything has to run on one machine. I would consider separating storage and applications, both in software and in hardware.

Cheers,
- sub
 
If you are serving several clients at the same time, you need to avoid a single vdev. Instead, use two RAID-Z2 or RAID-Z3 vdevs. IOPS will suffer with only one vdev, which makes it slow when serving many clients.
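
Once a pool is built you can watch how the load spreads across the vdevs; for example (pool name is a placeholder):

zpool iostat -v tank 5   # per-vdev read/write operations and bandwidth, refreshed every 5 seconds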
 
Yeah, I went to a LAN party with my single-vdev zpool of 10x2TB disks in RAIDZ2, and got killed serving movies (10-12 clients at once) over a single gigabit line. I had to limit it to 4 clients to actually be able to saturate gigabit.

(some of that was also probably using DC++ on Wine)
 
Why the slow memory, is it cheaper? Why not 2x8GB 1333 or 1600MHz?

Yes a zpool is made up of 1 or several vdevs and you can have one filesystem spanning all the drives. I wouldn't go below RAIDZ2.

Memory: 16 GB (4x4GB) PC3-8500 ECC $47
The cheapest price I could find for 2x8GB was around $90 for 1066 and around $120-140 for 1333 to 1600. If you have a cheap source, please tell me. I can still add 4 more modules if I need to.

16GiB RAM is on the low side for this class. Always buy the maximum capacity DRAM modules you can afford; buying 4GiB modules means a potential RAM upgrade is going to be very pricey. Buy 2 times 8GiB if you want to save money. As it looks now, you have an overkill excess of CPU power, but limited RAM quantity. ZFS likes it pretty much the other way around. In most cases you would want to disable HyperThreading, by the way. Video render farms and other 'crunch' applications are the main exception to this.
-1, I'm cheap, and 2, this unit will be doing some video converting.
I'm missing information on the harddrives themselves. Are they 4K sector harddrives? The ZFS pool configurations you listed are not optimal for modern 'Advanced Format' harddrives sporting 4K sectors. In many cases you want multiple vdevs instead of one big and slow RAID-Z3 with a flotilla of harddrives. Multiple vdevs of 6 or 10 disks in RAID-Z2 are optimal configurations.
-All units are 4K hard drives.
If you choose RAID-Z you might want to utilise an SSD to fix the slow random read performance. This only works effectively if your ZFS box is going to be used 24/7. If you reboot often, the SSD caching is not interesting to you.
-Unit will be running 24/7
What is a 'Hard drive PCI adapter'? Just something passive to store a 2.5" harddrive at the location of a PCI slot, or are you actually using a PCI slot on your motherboard?
-It just lets me mount the 2.5" drives in a PCI slot opening on the case; no PCI or PCIe slot on the motherboard is used.
Finally, please be aware that ZFS on Linux is in a different class of software (beta) than the Solaris and BSD implementations of ZFS. Though gaining stability and maturity, I do not recommend ZFS on Linux at this time. One alternative you might look into is to build one system dedicated to ZFS NAS, and run all your other software on a different box. This greatly simplifies your setup and allows you to make use of BSD / Solaris ZFS implementations without a hard time getting your software up and running just because everything has to run on one machine. I would consider separating storage and applications, both in software and in hardware.
-I did think about setting up a separate box for each item; I do have a lot of lower-end components for the task, but that would add:
1 torrent box (25-30 watts)
1 firewall box (20 watts)
At .27 cents a kilowatt-hour in FL, figuring a low end of 30 additional watts and a high of 50, I'm looking at an additional $26-43 a year, which does not sound like much but adds up quickly.
I am thinking of going straight to ESXi or Proxmox and adding Ubuntu and OpenIndiana + napp-it.
 
Finally, please be aware that ZFS on Linux is in a different class of software (beta) than the Solaris and BSD implementations of ZFS. Though gaining stability and maturity, I do not recommend ZFS on Linux at this time.
I would disagree; ZoL is no less stable than OI or any of the Solaris offshoots. The commercial version of Solaris is in another class, but all of the other open-source platforms carrying ZFS are no more stable than ZoL is.
 
If you need max IOPS (very many clients) then you should use only mirrors.

I think you should at least use two vdevs. Create a zpool out of two RAID-Z2 vdevs and do some testing; simulate the workload. If two vdevs do not suffice, use three vdevs. If that does not suffice, use only mirrors (nothing will be faster than mirrors).
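
For reference, an all-mirrors pool is just a string of two-disk mirror vdevs; a rough sketch with placeholder device names, using 16 of the 17 disks and keeping one as a spare (about 16TB usable):

zpool create tank \
  mirror sda sdb  mirror sdc sdd  mirror sde sdf  mirror sdg sdh \
  mirror sdi sdj  mirror sdk sdl  mirror sdm sdn  mirror sdo sdp
zpool add tank spare sdq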

Adding an SSD (L2ARC) will not help too much in your case, I believe. SSDs are good for caching, but if you have many users accessing lots of different media files, the cache will not help much.
 
Memory: 16 GB (4x4GB) PC3-8500 ECC $47
The cheapest price I could find for 2x8GB was around $90 for 1066 and around $120-140 for 1333 to 1600. If you have a cheap source, please tell me. I can still add 4 more modules if I need to.

If it's $47 for the whole 16GB then I guess it's a good deal, and if you need to replace the modules later it will not be a big problem; I thought it was 4x $47. I'm in France so I can't really help you. I just ordered an 8GB 1600MHz stick myself for €65; I spent several days finding that deal, and slower memory was only marginally cheaper.
 
Nope, not at all. I wanted to keep this entire upgrade under $300.
€65 for 8GB of 1600 is about $85 US, so that's what I was finding here. $47/16GB is less than $3 per GB, so not bad.
 