ZFS file server Build

Shockey

Hello everyone,

I've been planning and researching for a while now.

Posting to see what others think and looking for feedback on my build below. I need some information to settle a few decisions.


Uses:
  • Target for backups
  • Media collection (Blu-ray/DVD & music)
  • Torrent/NZB/CouchPotato/SickBeard/Headphones download box
  • Handbrake encoding of my Blu-ray/DVD collection
  • VMware datastore for an ESX server
  • Virtualized Plex Media Server for streaming over the internet (possibly XBMC as well)

Operating System Choices

OpenIndiana, with a virtualized CentOS install for Plex.

ESX install.

I'm seriously considering just using ESXi and doing an all-in-one type setup.

One thing that is unclear: if I were ever to go from a virtual OpenIndiana install to a physical OpenIndiana install, would I be able to re-import the pool?



ZFS RAID Level:
Either RAIDZ2 or striped mirrored vdevs.

I think I'll just go with RAIDZ2, as I don't think I will be needing the performance of striped mirrors (a rough sketch of both layouts follows below). What is everyone else's opinion for a home file server?

I did purchase a CrashPlan unlimited plan during Black Friday, as I am well aware that RAID is not a backup. I will be uploading important files for redundancy purposes.
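For reference, here is roughly what the two candidate layouts would look like at pool-creation time. This is only a sketch: the pool name (tank) and the c0t0d0-style disk names are placeholders, and a real build would use the actual device IDs.

Code:
# 8 disks as one RAIDZ2 vdev: usable space of 6 disks, survives any 2 failures
zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0

# the same 8 disks as striped mirrors: usable space of 4 disks, better random IO
zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0 \
    mirror c0t4d0 c0t5d0 mirror c0t6d0 c0t7d0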


And the Hardware
x1 - Norco 2424 Case
x1 - Supermicro X9SRL-F
x1 - Intel Xeon Quad-Core Processor E5-2609
x4 - Corsair 8GB CMV8GX3M1A1600C11
x2 - IBM ServeRAID M1015 (flashed with IT firmware)
x1 - Intel 240GB MLC L2ARC cache SSD
x4 - 0.5m 30AWG Internal Mini SAS 36-pin (SFF-8087)
x1 - Corsair 750W ATX
x8 - Hard drives (already purchased)

No boot device selected yet, as I need some more info (ESX vs. OpenIndiana).


Thanks for the help/reading
 
Got almost the same purpose/setup built, but not running yet, so...
You probably meant the Norco 4224, right?
The 4-port SCU SATA controller on the motherboard cannot be passed through, so use it for the local datastore/boot device.
What info do you need for the boot device? I just used an Intel 320 40GB SSD for the ESXi & OI install. For other VMs I have 2 Samsung 840 Pros waiting, but you can also use one larger SSD for everything.
I would suggest running everything first and then buying L2ARC or ZIL devices if you're going to need them (they can be added to the pool later; see the sketch below). I'm probably going to add 32GB more RAM and a ZIL if things turn out to be too slow.
0.5m cables are too short.
Since you have a 24-bay case, make sure your PSU is capable of running it (and I don't mean just the wattage). There is a debate going on in a separate thread.
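To back up the "buy it later" point: cache and log devices do not have to exist when the pool is created. A minimal sketch, assuming a pool named tank and placeholder device names:

Code:
# an L2ARC (cache) or ZIL/SLOG (log) device can be added to a live pool at any time
zpool add tank cache c0t8d0
zpool add tank log c0t9d0

# and a cache device can be taken out again if it turns out not to help
zpool remove tank c0t8d0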
 
If you're going for a single-socket 2011 setup, I'd pick the E5-1620 instead, that's for sure. No reason to pay for a CPU capable of running in a 2-way setup if you're not using it.
 
If you're going for a single-socket 2011 setup, I'd pick the E5-1620 instead, that's for sure. No reason to pay for a CPU capable of running in a 2-way setup if you're not using it.
They're the same price. The question is whether you need more power (and 50W more TDP) or not.
I went with the 2620 as I wanted more cores at lower power, and it is also cheaper than the single-socket one.
 
Server CPU and Server Motherboard, but no ECC RAM. Seems like a strange omission. I wouldn't build a file server without ECC.
 
There is no reason to build a socket 2011 system if you are only going to use 8GB of RAM.

I just ordered the following:
Supermicro X9SRL-F (Socket 2011 - 8 Dimm Slots)
NO CPU (I am deciding if I want the E5-1620 or E5-2620) - I want the higher clock speed of the 1620 vs. the 2620, but I am not sure if the 6 cores of the 2620 will benefit me more overall for ZFS. I am still thinking that the clock speed is better on the 1620.

Anyway, I also ordered a 32GB kit of Kingston Registered DDR3-1600 for $224.00. The same amount of unbuffered would have been around $340-ish on Newegg, no kidding.

Link: http://www.amazon.com/gp/product/B0080K6BI2/ref=oh_details_o01_s01_i00

The reason I went with socket 2011 for my ZFS build is the ability to handle a gargantuan amount of RAM; as it becomes cheaper I will throw more at it. DDR3 is not going away any time soon. Also the number of PCIe lanes: I am going to need more than 16. I am running 10Gb/s Ethernet (8 lanes), and I might throw in an M1015 or two as I grow it (8-16 lanes more). So I need more lanes.

And you might want to change that RAM to ECC, OP. The chances of bit flips are actually very high in a 24/7 system, as shown by a huge study released by Google not too long ago.

And if you are not running ESX, I would get a simple 8GB USB 2.0 flash drive to boot your system off.
 
They're the same price. The question is whether you need more power (and 50W more TDP) or not.
I went with the 2620 as I wanted more cores at lower power, and it is also cheaper than the single-socket one.

Do you think the 6-core chip would be a better performer for ZFS (regardless of OS) due to more cores, versus the faster-clocked quad-core 1620 part?
 
Why not go with a Xeon LGA 1155 system?

I can't speak for the OP, but I chose 2011 over 1155 for these reasons:

-After all the math, number crunching, and deal couponing (lol), and the price differences in equivalent RAM (reg vs. unb ECC), the cost was nearly the same, with the 2011 part coming out about $100.00 more.

-1155 MAX is 32GB RAM
-2011 MAX is way more than this haha
-The 2011 board I purchased supports 64GB unbuffered or 256GB registered. Never say never when it comes to more RAM; you just never know.
-2011 is a bigger number so it means bigger is better (just kidding)
-Has a ton of lanes over 1155, for adding in HBAs, NICs, etc...

I don't know really; the breaking point for me was the RAM, I guess.
 
All of what you listed becomes even more compelling when the 2011 Ivys come out... unfortunately, with AMD falling flat at the high end, all the pressure is off of Intel, and Ivy Bridge-E now looks like a Q3 '13 release.
 
There is no reason to build a socket 2011 system if you are only going to use 8GB of RAM.
He did write x4 - Corsair 8GB CMV8GX3M1A1600C11.
But I definitely agree, ECC is a must-have.
I went with low voltage.
Do you think the 6-core chip would be a better performer for ZFS (regardless of OS) due to more cores, versus the faster-clocked quad-core 1620 part?
I really don't know. I think there is also a debate going on about this. All I do know is that it's going to be more than enough either way, and I wanted to make it run at the lowest power possible. 90W at idle with no HDDs (3x M1015 and 6 fans). Which reminds me:
@Shockey, you're going to need a CPU cooler as well. I bought a Supermicro SNK-P0050AP4.
 
Got almost the same purpose/setup built, but not running yet, so...
You probably meant the Norco 4224, right?
The 4-port SCU SATA controller on the motherboard cannot be passed through, so use it for the local datastore/boot device.
What info do you need for the boot device? I just used an Intel 320 40GB SSD for the ESXi & OI install. For other VMs I have 2 Samsung 840 Pros waiting, but you can also use one larger SSD for everything.
I would suggest running everything first and then buying L2ARC or ZIL devices if you're going to need them. I'm probably going to add 32GB more RAM and a ZIL if things turn out to be too slow.
0.5m cables are too short.
Since you have a 24-bay case, make sure your PSU is capable of running it (and I don't mean just the wattage). There is a debate going on in a separate thread.

Yes, I meant the Norco 4224 case. Typo :p

I'm still debating whether to go with an all-in-one setup or just virtualize a CentOS install on top of OpenIndiana.

The boot drive info request was about whether, if I ever decided to change my configuration, I would have to recreate the pool or could import it on a new install of OpenIndiana (example: go from all-in-one to separate physical boxes, 2 ESX servers and a ZFS file server).

As for ZIL/cache devices: already purchased, but I will wait and see what performance is like. I may be able to rig them into another system that still has a mechanical drive installed in it.

Thanks for the heads-up on the cables. I would have missed that most likely :p

If you're going for a single-socket 2011 setup, I'd pick the E5-1620 instead, that's for sure. No reason to pay for a CPU capable of running in a 2-way setup if you're not using it.

I will have to look into this further, as the 1620 seems more difficult to find for a comparable price.

Hmmm. Did some more research. I'm on the fence again.


Server CPU and Server Motherboard, but no ECC RAM. Seems like a strange omission. I wouldn't build a file server without ECC.

I've seriously read a lot of threads over the past few months about this very topic. I am reconsidering, seeing as I am investing quite a bit of money into this setup and I believe in future-proofing.

Why not go with a Xeon LGA 1155 system?

The mobo supports up to an 8-core CPU if needed. (Future-proofing, very important to me.)
RAM:
ZFS loves RAM, and it certainly is cheap and getting cheaper by the day. I didn't want to max myself out at 32GB, so I went with a board and chip that support an insane amount. Also, the PCIe lanes for expandability had me sold. A couple of HBAs and a few quad-port Intel network cards :D and I'll be golden!!!



Thanks for the feedback everyone!!!! :)
 
Unless you are deduping, there is no ZIL cache needed. Even with dedup, etc., given that you can slam 256GB of RAM into that 2011 board, haha, I'd just get more RAM for the SSD money. A ZIL SSD is good for very heavily-hit servers, where RAM is better used for block IO etc. and dedup tables can be stored on the SSD.
 
Greetings

Why not get the X79S-UP5 board instead? It's cheaper and has Xeon and ECC support, and the onboard C606 SAS controller gives you 8 SAS/SATA ports, which means you need one less M1015 board. My board is going to be used for a gaming PC, but I did a test install with Solaris 11.1 and it appears everything installed correctly, with the solitary exception of the USB3 controller.

Cheers
 
The X79S-UP5 is coming in at $300, the X9SRL at $275.

It only has 3 useful PCIe slots.

And it says it's limited to 64GB of RAM; not sure what that is about though.
 
Greetings

Why not get the X79S-UP5 board instead? It's cheaper and has Xeon and ECC support, and the onboard C606 SAS controller gives you 8 SAS/SATA ports, which means you need one less M1015 board. My board is going to be used for a gaming PC, but I did a test install with Solaris 11.1 and it appears everything installed correctly, with the solitary exception of the USB3 controller.

Cheers

Yeah, I also looked at that board very closely; however, I chose not to get it because it supports only unbuffered DIMMs, not registered, which is a huge turn-off on a platform (socket 2011) that is supposed to support a high RAM cap.

The X79S-UP5 is coming in at $300, the X9SRL at $275.

It only has 3 useful PCIe slots.

And it says it's limited to 64GB of RAM; not sure what that is about though.

And for the above poster, this is due to the inability to support registered DIMMs. With unbuffered ECC DIMMs the max you can get per DIMM is 8GB; 8GB x 8 slots = 64GB. What a fuckin' blow for what could otherwise be a good board.
 
I got this cooler due to the design and internal ducting of my case that is coming
[attached photos of the case]

Were you gonna finish your statement?
 
Greetings

Why not get the X79S-UP5 board instead? It's cheaper and has Xeon and ECC support, and the onboard C606 SAS controller gives you 8 SAS/SATA ports, which means you need one less M1015 board. My board is going to be used for a gaming PC, but I did a test install with Solaris 11.1 and it appears everything installed correctly, with the solitary exception of the USB3 controller.

Cheers

Not enough PCIe slots for my liking, plus the RAM limitation. I know this is a home system, but expandability without overhauling is important to me.

Forgot to add the new hardware I will be purchasing:

Kingston 32GB (4 x 8GB) 240-pin Registered DDR3-1600 (the $224 kit). Thank you for the heads-up on this :)
 
Unless you are deduping, there is no ZIL cache needed. Even with dedup, etc., given that you can slam 256GB of RAM into that 2011 board, haha, I'd just get more RAM for the SSD money. A ZIL SSD is good for very heavily-hit servers, where RAM is better used for block IO etc. and dedup tables can be stored on the SSD.

I thought zil was never used for the dedup table, L2ARC is used for the dedup table.
 
I thought zil was never used for the dedup table, L2ARC is used for the dedup table.

Sorry, you are right. But yes... change my reply to: an L2ARC disk is not needed unless you are deduping, and even then I would just add RAM.
 
So I got myself a nice E5-1620 for $311 today. I'm so stoked. It has the same single-thread performance as the 1240V2 on 1155, costs a few bucks more, and supports TONS more RAM.
 
Sorry, you are right. But yes... change my reply to: an L2ARC disk is not needed unless you are deduping, and even then I would just add RAM.

I do not agree with this. L2ARC absolutely makes sense if you have a latency-sensitive application whose working set doesn't fit in RAM, and expanding your RAM to the point that your working set DOES fit is cost-prohibitive.
The DDT is only allowed up to 25% (IIRC) of your RAM anyway.
 
I thought zil was never used for the dedup table, L2ARC is used for the dedup table.

I thought RAM was used for the dedupe table? Isn't that why people warn against using it? I hear it gobbles up all the RAM and can cause performance issues.
 
The DDT can be in RAM or L2ARC; it's best in RAM though.

The issue is it uses up approx. 320 bytes per disk block, and if you're mainly using 4K or 8K blocks, that is costly; if it's all backup data using 128K blocks, you will be fine.

You can get away with going over 25% of RAM for the DDT, but then you're going to be starving other things that need RAM.

I think it was, if you stuck to roughly the 25% rule, an 800-gig zvol using 4K blocks would require 150 gigs of RAM to fit the DDT fully in RAM; if it didn't fully fit, then you get additional L2ARC lookup latency for every write.
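To put the 320-bytes-per-block figure in perspective, here is a rough back-of-the-envelope sizing (illustrative numbers only, using the per-block estimate quoted above):

Code:
# 1 TiB of unique data at 128K recordsize:
#   2^40 / 2^17 = ~8.4M blocks  ->  8.4M x 320 B  ~=  2.7 GB of DDT
# the same 1 TiB at 4K blocks:
#   2^40 / 2^12 = ~268M blocks  ->  268M x 320 B  ~=  86 GB of DDT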
 
I do not agree with this. L2ARC absolutely makes sense if you have a latency-sensitive application whose working set doesn't fit in RAM, and expanding your RAM to the point that your working set DOES fit is cost-prohibitive.
The DDT is only allowed up to 25% (IIRC) of your RAM anyway.

Two letters..... "IF". And the OP doesn't appear to need one, as it was not disclosed.
 
I thought RAM was used for the dedupe table? Isn't that why people warn against using it? I hear it gobbles up all the RAM and can cause performance issues.

Yes, RAM is used for dedup, but you can manually tell ZFS to use an SSD instead.

However, dedup should not be used for your massive porn collection; rather, you should only dedup stuff like an office-document dataset where, say, 10 people hit it every day, all day, constantly working on documents, and where duplicated PDF files can easily eat precious and costly disk space over time. Dedup is truly an enterprise feature, and most home users would never be able to justify its usage; of course your needs and mileage may vary, and there is NO rule that says dedup isn't appropriate for home users. Just be advised of the cost you will incur if you let dedup run out of control.

This is all information I am gathering from my studies of ZFS en masse.
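On the "only dedup where it pays off" point: dedup is a per-filesystem property, so it can be switched on selectively. A minimal sketch, assuming a pool named tank and hypothetical filesystem names:

Code:
# turn dedup on only for the dataset where duplicates are common
zfs set dedup=on tank/office-docs

# leave it off (the default) for bulk media
zfs get dedup tank/media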
 
Ordered the case and it came today. Noticed I ordered the wrong cables.

Just found the correct spec:

Six internal SFF-8087 Mini SAS connectors support up to twenty-four 3.5" or 2.5" SATA (II or III) or SAS hard drives.


Anyone know the pin count for the cable that's compatible with the Norco backplane?
 
Ordered the case and it came today. Noticed I ordered the wrong cables.

Just found the correct spec:

Six internal SFF-8087 Mini SAS connectors support up to twenty-four 3.5" or 2.5" SATA (II or III) or SAS hard drives.


Anyone know the pin count for the cable that's compatible with the Norco backplane?

All SFF cables should be based on industry-standardized IEEE stuff. I wouldn't worry about cabling. If it has a fan-out connector, then any SFF cable should work. The question is making sure you do not order SATA cables for a SAS disk, because they will not fit.
 
So here are some results from RAID testing 8 Western Digital drives using the onboard C206 controller. All drives are on 3Gbps SATA II ports.

Not sure what is actually useful to the community here, but I will post everything text-wise...

Code:
SGURU-benchmark, version 1
Test size: 32.000 gigabytes (GiB)
Test rounds: 3
Cooldown period: 2 seconds
Sector size override: 4096 bytes
Number of disks: 8 disks
disk 1: gpt/disk1.nop
disk 2: gpt/disk2.nop
disk 3: gpt/disk3.nop
disk 4: gpt/disk4.nop
disk 5: gpt/disk5.nop
disk 6: gpt/disk6.nop
disk 7: gpt/disk7.nop
disk 8: gpt/disk8.nop


* Test Settings: TS32; SECT4096; 
* Tuning: none
* Stopping background processes: sendmail, moused, syslogd and cron
* Stopping Samba service

Now testing RAID0 configuration with 8 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	587 MiB/sec	606 MiB/sec	583 MiB/sec	= 592 MiB/sec avg
WRITE:	987 MiB/sec	960 MiB/sec	953 MiB/sec	= 967 MiB/sec avg
raidtest.read:	84	107	83	= 91 IOps ( ~6006 KiB/sec )
raidtest.write:	84	115	86	= 95 IOps ( ~6270 KiB/sec )
raidtest.mixed:	74	79	75	= 76 IOps ( ~5016 KiB/sec )

Now testing RAIDZ configuration with 8 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	627 MiB/sec	594 MiB/sec	626 MiB/sec	= 615 MiB/sec avg
WRITE:	729 MiB/sec	709 MiB/sec	670 MiB/sec	= 703 MiB/sec avg
raidtest.read:	61	60	62	= 61 IOps ( ~4026 KiB/sec )
raidtest.write:	70	68	70	= 69 IOps ( ~4554 KiB/sec )
raidtest.mixed:	57	59	59	= 58 IOps ( ~3828 KiB/sec )

Now testing RAIDZ2 configuration with 8 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	602 MiB/sec	583 MiB/sec	605 MiB/sec	= 597 MiB/sec avg
WRITE:	585 MiB/sec	585 MiB/sec	548 MiB/sec	= 573 MiB/sec avg
raidtest.read:	59	59	59	= 59 IOps ( ~3894 KiB/sec )
raidtest.write:	63	63	73	= 66 IOps ( ~4356 KiB/sec )
raidtest.mixed:	58	59	57	= 58 IOps ( ~3828 KiB/sec )

Now testing RAID1 configuration with 8 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	552 MiB/sec	500 MiB/sec	583 MiB/sec	= 545 MiB/sec avg
WRITE:	102 MiB/sec	98 MiB/sec	98 MiB/sec	= 99 MiB/sec avg
raidtest.read:	83	73	75	= 77 IOps ( ~5082 KiB/sec )
raidtest.write:	73	65	68	= 68 IOps ( ~4488 KiB/sec )
raidtest.mixed:	50	52	51	= 51 IOps ( ~3366 KiB/sec )

Now testing RAID1+0 configuration with 8 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	582 MiB/sec	579 MiB/sec	588 MiB/sec	= 583 MiB/sec avg
WRITE:	462 MiB/sec	488 MiB/sec	467 MiB/sec	= 472 MiB/sec avg
raidtest.read:	82	79	81	= 80 IOps ( ~5280 KiB/sec )
raidtest.write:	79	81	79	= 79 IOps ( ~5214 KiB/sec )
raidtest.mixed:	67	67	68	= 67 IOps ( ~4422 KiB/sec )

Now testing RAIDZ+0 configuration with 8 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	458 MiB/sec	473 MiB/sec	468 MiB/sec	= 467 MiB/sec avg
WRITE:	672 MiB/sec	659 MiB/sec	657 MiB/sec	= 663 MiB/sec avg
raidtest.read:	64	64	64	= 64 IOps ( ~4224 KiB/sec )
raidtest.write:	69	69	70	= 69 IOps ( ~4554 KiB/sec )
raidtest.mixed:	62	63	63	= 62 IOps ( ~4092 KiB/sec )

Now testing RAID0 configuration with 4 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	398 MiB/sec	401 MiB/sec	400 MiB/sec	= 400 MiB/sec avg
WRITE:	486 MiB/sec	489 MiB/sec	483 MiB/sec	= 486 MiB/sec avg
raidtest.read:	90	74	74	= 79 IOps ( ~5214 KiB/sec )
raidtest.write:	93	78	77	= 82 IOps ( ~5412 KiB/sec )
raidtest.mixed:	68	67	66	= 67 IOps ( ~4422 KiB/sec )

Now testing RAID0 configuration with 5 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	444 MiB/sec	444 MiB/sec	449 MiB/sec	= 446 MiB/sec avg
WRITE:	618 MiB/sec	615 MiB/sec	590 MiB/sec	= 608 MiB/sec avg
raidtest.read:	96	94	97	= 95 IOps ( ~6270 KiB/sec )
raidtest.write:	99	99	100	= 99 IOps ( ~6534 KiB/sec )
raidtest.mixed:	73	71	72	= 72 IOps ( ~4752 KiB/sec )

Now testing RAID0 configuration with 6 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	505 MiB/sec	501 MiB/sec	502 MiB/sec	= 503 MiB/sec avg
WRITE:	696 MiB/sec	738 MiB/sec	748 MiB/sec	= 727 MiB/sec avg
raidtest.read:	85	79	80	= 81 IOps ( ~5346 KiB/sec )
raidtest.write:	90	80	81	= 83 IOps ( ~5478 KiB/sec )
raidtest.mixed:	73	73	74	= 73 IOps ( ~4818 KiB/sec )

Now testing RAID0 configuration with 7 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	571 MiB/sec	589 MiB/sec	587 MiB/sec	= 582 MiB/sec avg
WRITE:	855 MiB/sec	804 MiB/sec	864 MiB/sec	= 841 MiB/sec avg
raidtest.read:	81	91	81	= 84 IOps ( ~5544 KiB/sec )
raidtest.write:	83	91	83	= 85 IOps ( ~5610 KiB/sec )
raidtest.mixed:	75	74	74	= 74 IOps ( ~4884 KiB/sec )

Now testing RAIDZ configuration with 4 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	256 MiB/sec	240 MiB/sec	261 MiB/sec	= 252 MiB/sec avg
WRITE:	343 MiB/sec	358 MiB/sec	322 MiB/sec	= 341 MiB/sec avg
raidtest.read:	62	61	62	= 61 IOps ( ~4026 KiB/sec )
raidtest.write:	65	64	65	= 64 IOps ( ~4224 KiB/sec )
raidtest.mixed:	60	59	59	= 59 IOps ( ~3894 KiB/sec )

Now testing RAIDZ configuration with 5 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	360 MiB/sec	349 MiB/sec	348 MiB/sec	= 352 MiB/sec avg
WRITE:	433 MiB/sec	591 MiB/sec	440 MiB/sec	= 488 MiB/sec avg
raidtest.read:	58	58	60	= 58 IOps ( ~3828 KiB/sec )
raidtest.write:	63	71	71	= 68 IOps ( ~4488 KiB/sec )
raidtest.mixed:	57	56	57	= 56 IOps ( ~3696 KiB/sec )

Now testing RAIDZ configuration with 6 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	482 MiB/sec	468 MiB/sec	451 MiB/sec	= 467 MiB/sec avg
WRITE:	693 MiB/sec	555 MiB/sec	528 MiB/sec	= 592 MiB/sec avg
raidtest.read:	64	60	63	= 62 IOps ( ~4092 KiB/sec )
raidtest.write:	75	66	76	= 72 IOps ( ~4752 KiB/sec )
raidtest.mixed:	58	58	57	= 57 IOps ( ~3762 KiB/sec )

Now testing RAIDZ configuration with 7 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	527 MiB/sec	514 MiB/sec	512 MiB/sec	= 518 MiB/sec avg
WRITE:	645 MiB/sec	616 MiB/sec	609 MiB/sec	= 623 MiB/sec avg
raidtest.read:	59	62	62	= 61 IOps ( ~4026 KiB/sec )
raidtest.write:	66	76	75	= 72 IOps ( ~4752 KiB/sec )
raidtest.mixed:	57	58	58	= 57 IOps ( ~3762 KiB/sec )

Now testing RAIDZ2 configuration with 4 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	230 MiB/sec	208 MiB/sec	202 MiB/sec	= 213 MiB/sec avg
WRITE:	231 MiB/sec	250 MiB/sec	254 MiB/sec	= 245 MiB/sec avg
raidtest.read:	56	56	56	= 56 IOps ( ~3696 KiB/sec )
raidtest.write:	62	60	61	= 61 IOps ( ~4026 KiB/sec )
raidtest.mixed:	53	41	53	= 49 IOps ( ~3234 KiB/sec )

Now testing RAIDZ2 configuration with 5 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	294 MiB/sec	295 MiB/sec	284 MiB/sec	= 291 MiB/sec avg
WRITE:	339 MiB/sec	358 MiB/sec	399 MiB/sec	= 366 MiB/sec avg
raidtest.read:	56	46	56	= 52 IOps ( ~3432 KiB/sec )
raidtest.write:	59	47	60	= 55 IOps ( ~3630 KiB/sec )
raidtest.mixed:	40	38	54	= 44 IOps ( ~2904 KiB/sec )

Now testing RAIDZ2 configuration with 6 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	379 MiB/sec	392 MiB/sec	371 MiB/sec	= 381 MiB/sec avg
WRITE:	460 MiB/sec	588 MiB/sec	397 MiB/sec	= 482 MiB/sec avg
raidtest.read:	65	81	80	= 75 IOps ( ~4950 KiB/sec )
raidtest.write:	69	81	79	= 76 IOps ( ~5016 KiB/sec )
raidtest.mixed:	45	48	47	= 46 IOps ( ~3036 KiB/sec )

Now testing RAIDZ2 configuration with 7 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	476 MiB/sec	465 MiB/sec	476 MiB/sec	= 472 MiB/sec avg
WRITE:	549 MiB/sec	465 MiB/sec	504 MiB/sec	= 506 MiB/sec avg
raidtest.read:	59	58	58	= 58 IOps ( ~3828 KiB/sec )
raidtest.write:	65	62	62	= 63 IOps ( ~4158 KiB/sec )
raidtest.mixed:	57	56	57	= 56 IOps ( ~3696 KiB/sec )

Now testing RAID1 configuration with 4 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	341 MiB/sec	351 MiB/sec	357 MiB/sec	= 350 MiB/sec avg
WRITE:	117 MiB/sec	126 MiB/sec	118 MiB/sec	= 121 MiB/sec avg
raidtest.read:	79	74	71	= 74 IOps ( ~4884 KiB/sec )
raidtest.write:	73	70	67	= 70 IOps ( ~4620 KiB/sec )
raidtest.mixed:	46	58	57	= 53 IOps ( ~3498 KiB/sec )

Now testing RAID1 configuration with 5 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	419 MiB/sec	415 MiB/sec	415 MiB/sec	= 417 MiB/sec avg
WRITE:	105 MiB/sec	104 MiB/sec	103 MiB/sec	= 104 MiB/sec avg
raidtest.read:	72	76	78	= 75 IOps ( ~4950 KiB/sec )
raidtest.write:	67	72	73	= 70 IOps ( ~4620 KiB/sec )
raidtest.mixed:	59	48	49	= 52 IOps ( ~3432 KiB/sec )

Now testing RAID1 configuration with 6 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	484 MiB/sec	495 MiB/sec	539 MiB/sec	= 506 MiB/sec avg
WRITE:	104 MiB/sec	105 MiB/sec	141 MiB/sec	= 117 MiB/sec avg
raidtest.read:	77	73	93	= 81 IOps ( ~5346 KiB/sec )
raidtest.write:	71	66	89	= 75 IOps ( ~4950 KiB/sec )
raidtest.mixed:	48	49	62	= 53 IOps ( ~3498 KiB/sec )

Now testing RAID1 configuration with 7 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	537 MiB/sec	572 MiB/sec	541 MiB/sec	= 550 MiB/sec avg
WRITE:	103 MiB/sec	108 MiB/sec	103 MiB/sec	= 105 MiB/sec avg
raidtest.read:	75	65	75	= 71 IOps ( ~4686 KiB/sec )
raidtest.write:	67	59	66	= 64 IOps ( ~4224 KiB/sec )
raidtest.mixed:	59	56	60	= 58 IOps ( ~3828 KiB/sec )

Now testing RAID1+0 configuration with 4 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	382 MiB/sec	408 MiB/sec	402 MiB/sec	= 398 MiB/sec avg
WRITE:	233 MiB/sec	255 MiB/sec	258 MiB/sec	= 248 MiB/sec avg
raidtest.read:	72	75	71	= 72 IOps ( ~4752 KiB/sec )
raidtest.write:	69	74	70	= 71 IOps ( ~4686 KiB/sec )
raidtest.mixed:	61	60	60	= 60 IOps ( ~3960 KiB/sec )

Now testing RAID1+0 configuration with 6 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	497 MiB/sec	491 MiB/sec	491 MiB/sec	= 493 MiB/sec avg
WRITE:	411 MiB/sec	346 MiB/sec	342 MiB/sec	= 367 MiB/sec avg
raidtest.read:	82	76	76	= 78 IOps ( ~5148 KiB/sec )
raidtest.write:	82	74	76	= 77 IOps ( ~5082 KiB/sec )
raidtest.mixed:	67	66	67	= 66 IOps ( ~4356 KiB/sec )

Now testing RAIDZ+0 configuration with 8 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	458 MiB/sec	477 MiB/sec	477 MiB/sec	= 252 MiB/sec avg
WRITE:	660 MiB/sec	646 MiB/sec	663 MiB/sec	= 341 MiB/sec avg
raidtest.read:	63	63	63	= 61 IOps ( ~4026 KiB/sec )
raidtest.write:	70	69	69	= 64 IOps ( ~4224 KiB/sec )
raidtest.mixed:	62	61	62	= 59 IOps ( ~3894 KiB/sec )

Now testing RAID0 configuration with 1 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	145 MiB/sec	138 MiB/sec	126 MiB/sec	= 137 MiB/sec avg
WRITE:	140 MiB/sec	114 MiB/sec	109 MiB/sec	= 121 MiB/sec avg
raidtest.read:	75	74	81	= 76 IOps ( ~5016 KiB/sec )
raidtest.write:	66	66	70	= 67 IOps ( ~4422 KiB/sec )
raidtest.mixed:	40	41	52	= 44 IOps ( ~2904 KiB/sec )

Now testing RAID0 configuration with 2 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	243 MiB/sec	256 MiB/sec	255 MiB/sec	= 251 MiB/sec avg
WRITE:	250 MiB/sec	256 MiB/sec	296 MiB/sec	= 267 MiB/sec avg
raidtest.read:	71	66	71	= 69 IOps ( ~4554 KiB/sec )
raidtest.write:	69	65	68	= 67 IOps ( ~4422 KiB/sec )
raidtest.mixed:	58	55	59	= 57 IOps ( ~3762 KiB/sec )

Now testing RAID0 configuration with 3 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	326 MiB/sec	318 MiB/sec	314 MiB/sec	= 319 MiB/sec avg
WRITE:	386 MiB/sec	356 MiB/sec	357 MiB/sec	= 366 MiB/sec avg
raidtest.read:	76	82	83	= 80 IOps ( ~5280 KiB/sec )
raidtest.write:	77	85	87	= 83 IOps ( ~5478 KiB/sec )
raidtest.mixed:	66	64	66	= 65 IOps ( ~4290 KiB/sec )

Now testing RAIDZ configuration with 2 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	128 MiB/sec	129 MiB/sec	149 MiB/sec	= 135 MiB/sec avg
WRITE:	110 MiB/sec	130 MiB/sec	132 MiB/sec	= 124 MiB/sec avg
raidtest.read:	67	68	71	= 68 IOps ( ~4488 KiB/sec )
raidtest.write:	65	64	68	= 65 IOps ( ~4290 KiB/sec )
raidtest.mixed:	56	40	44	= 46 IOps ( ~3036 KiB/sec )

Now testing RAIDZ configuration with 3 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	222 MiB/sec	224 MiB/sec	213 MiB/sec	= 220 MiB/sec avg
WRITE:	242 MiB/sec	240 MiB/sec	221 MiB/sec	= 235 MiB/sec avg
raidtest.read:	56	61	59	= 58 IOps ( ~3828 KiB/sec )
raidtest.write:	60	64	64	= 62 IOps ( ~4092 KiB/sec )
raidtest.mixed:	56	56	39	= 50 IOps ( ~3300 KiB/sec )

Now testing RAIDZ2 configuration with 3 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	142 MiB/sec	142 MiB/sec	150 MiB/sec	= 145 MiB/sec avg
WRITE:	88 MiB/sec	100 MiB/sec	96 MiB/sec	= 95 MiB/sec avg
raidtest.read:	85	71	79	= 78 IOps ( ~5148 KiB/sec )
raidtest.write:	71	63	70	= 68 IOps ( ~4488 KiB/sec )
raidtest.mixed:	45	66	66	= 59 IOps ( ~3894 KiB/sec )

Now testing RAID1 configuration with 2 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	215 MiB/sec	216 MiB/sec	194 MiB/sec	= 208 MiB/sec avg
WRITE:	110 MiB/sec	111 MiB/sec	112 MiB/sec	= 111 MiB/sec avg
raidtest.read:	71	69	68	= 69 IOps ( ~4554 KiB/sec )
raidtest.write:	66	66	63	= 65 IOps ( ~4290 KiB/sec )
raidtest.mixed:	56	56	54	= 55 IOps ( ~3630 KiB/sec )

Now testing RAID1 configuration with 3 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	301 MiB/sec	281 MiB/sec	300 MiB/sec	= 294 MiB/sec avg
WRITE:	116 MiB/sec	105 MiB/sec	109 MiB/sec	= 110 MiB/sec avg
raidtest.read:	71	71	71	= 71 IOps ( ~4686 KiB/sec )
raidtest.write:	66	66	66	= 66 IOps ( ~4356 KiB/sec )
raidtest.mixed:	57	45	56	= 52 IOps ( ~3432 KiB/sec )

Now testing RAIDZ+0 configuration with 8 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ:	462 MiB/sec	472 MiB/sec	486 MiB/sec	= 252 MiB/sec avg
WRITE:	667 MiB/sec	643 MiB/sec	623 MiB/sec	= 341 MiB/sec avg
raidtest.read:	64	63	65	= 61 IOps ( ~4026 KiB/sec )
raidtest.write:	69	68	70	= 64 IOps ( ~4224 KiB/sec )
raidtest.mixed:	62	61	62	= 59 IOps ( ~3894 KiB/sec )

Done

It would seem apparent that my best bang-for-the-buck configuration, performance- and redundancy-wise, with these drives and this controller is going to be RAIDZ1, referred to as RAIDZ in this particular benchmark suite.

ZFSguru isn't mature yet, but it looks VERY promising. And what I love about ZFS is that I can change operating systems across the board and simply import my vdevs and pools into any other host OS that runs ZFS.
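That also covers the OP's earlier question about moving a pool from a virtual to a physical install: the pool travels with the disks. A minimal sketch, assuming the pool is named tank:

Code:
# on the old install, cleanly release the pool
zpool export tank

# on the new install (or new OS), scan the attached disks and bring it back
zpool import tank

# if the old host died before exporting, the import can be forced
zpool import -f tank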

Lastly, I will point out that I am very impressed with the performance of these 5900RPM drives, as shown here...

Now testing RAID0 configuration with 1 disks: cWmRzmId@cWmRzmId@cWmRzmId@
READ: 145 MiB/sec 138 MiB/sec 126 MiB/sec = 137 MiB/sec avg
WRITE: 140 MiB/sec 114 MiB/sec 109 MiB/sec = 121 MiB/sec avg
raidtest.read: 75 74 81 = 76 IOps ( ~5016 KiB/sec )
raidtest.write: 66 66 70 = 67 IOps ( ~4422 KiB/sec )
raidtest.mixed: 40 41 52 = 44 IOps ( ~2904 KiB/sec )

This is not bad for a single disk running at 5900 RPM with a 64MB cache on a SATA II connection.
 
Did anyone replace the fans in the case with 120mm fans?

If so, which ones?

Cables arrived today. I'll hook it all up and post pics in a bit :D
 
After messing with the IBM M1015 card trying to flash it to IT firmware: when trying it in my Supermicro motherboard, I received the "ERROR: Failed to initialize PAL. Exiting program" error after I would wipe the card and try to flash the new firmware. So after trying multiple guides and different suggestions from Google that others have tried and succeeded with, I just switched it to another system and continued the flash in my HTPC, which worked flawlessly :D
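For anyone hitting the same thing, the usual crossflash sequence from the common guides looks roughly like the sketch below, run from a DOS boot disk. Treat it as an outline only: the firmware/BIOS file names depend on the package you download, and the SAS address shown is a placeholder for the one printed on the card's sticker. The PAL error is a known quirk of the DOS flasher on some UEFI boards, which is why doing it in another box works.

Code:
REM wipe the SBR, then erase the existing IR firmware, then reboot
megarec -writesbr 0 sbrempty.bin
megarec -cleanflash 0
REM after the reboot, flash the IT firmware (the -b BIOS ROM is optional)
REM and restore the SAS address from the card's sticker
sas2flsh -o -f 2118it.bin -b mptsas2.rom
sas2flsh -o -sasadd 500605bxxxxxxxxx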

I have it all set up with ESXi booting from USB, OI installed on the SSD (local datastore), and napp-it installed.

I followed the all-in-one guide, created a pool, and then created a folder. My pool shows 21.8TB when I click on the Pools menu. When I go into the ZFS folders menu, my pool (named data) shows 15.2 and my folder size shows 13.7.

Why can't my folder use up the whole 21.8TB of the pool? :confused:
 
I followed the all-in-one guide, created a pool, and then created a folder. My pool shows 21.8TB when I click on the Pools menu. When I go into the ZFS folders menu, my pool (named data) shows 15.2 and my folder size shows 13.7.

Why can't my folder use up the whole 21.8TB of the pool? :confused:

You nearly can, if you use a Raid-0 without a pool reservation (overflow protection).
Problem: the Pools menu shows the output of zpool list, which is raw disk capacity without counting redundancy.

If you have, for example, a Raid-Z2 you must subtract 2 redundancy disks.
If you use overflow protection, you set a pool reservation that lowers the capacity available to filesystems (-10%), added to a small built-in reservation that keeps the pool working even at a 100% fill rate.

The folder menu shows only the usable capacity from zfs list.

If you use the new napp-it 0.9 you will also see all disk, vdev and pool values in GB and the lower GiB (you may know that a kilobyte is not 1000 bytes but 1024), to have the same units as a lot of disk tools.
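You can see the same difference from the console, using the pool name 'data' from the posts above:

Code:
# raw capacity, parity disks included (what the Pools menu shows)
zpool list data

# usable capacity after redundancy and reservations (what the folder menu shows)
zfs list data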
 
You nearly can, if you use a Raid-0 without a pool reservation (overflow protection).
Problem: the Pools menu shows the output of zpool list, which is raw disk capacity without counting redundancy.

If you have, for example, a Raid-Z2 you must subtract 2 redundancy disks.
If you use overflow protection, you set a pool reservation that lowers the capacity available to filesystems (-10%), added to a small built-in reservation that keeps the pool working even at a 100% fill rate.

The folder menu shows only the usable capacity from zfs list.

If you use the new napp-it 0.9 you will also see all disk, vdev and pool values in GB and the lower GiB (you may know that a kilobyte is not 1000 bytes but 1024), to have the same units as a lot of disk tools.

The overflow protection explains it, now that I see it laid out. I knew I'd lose two disks for redundancy purposes; I planned for this. :)

Is it harmful to turn off the overflow protection?

Thanks for the explanation!!!
 
The overflow protection explains it, now that I see it laid out. I knew I'd lose two disks for redundancy purposes; I planned for this. :)

Is it harmful to turn off the overflow protection?

Thanks for the explanation!!!

It is just a reservation on the pool itself that prevents a filesystem from eating up all the space accidentally.
If you fill your pool to near 100%, all filesystems can become extraordinarily slow.

In fact you should avoid going over about 80%: either enforce it with a reservation or check regularly.
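For reference, both checks are one-liners from the console; a small sketch using the pool name 'data' from above:

Code:
# see which filesystem carries the reservation and how big it is
zfs get -r reservation,refreservation data

# keep an eye on the fill rate (CAP column) if you do drop the reservation
zpool list data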
 