ZFSGuru Build

vraa

Gawd
Joined
Mar 20, 2007
Messages
598
8 drives - RAID-Z2 - Must be AMD
What parts? What 8087 controller - opinions?
Will run ZFS v28
Must use 2TB drives
Performance not an issue
 
I actually gutted my old machine, did a hybrid match-up between a lot of the parts I had, and created a small monster.

I want to thank a local Houston supplier who helped me source all the parts (and some replacements) within 20 minutes of being notified!

I had an old PC (my desktop gaming rig) which was getting short of breath. I got Dead Space 2 and, while playing, felt somewhat incomplete. I enjoyed Civilization 5; after I swapped the E6300 (1.86GHz C2D) for an E8400 (3.0GHz C2D), the time it took for turns decreased dramatically, but that still did not make playing large maps enjoyable. There is also that Bad Company game, which had some issues in heavy firefight scenes. Don't get me wrong, I love my PS3 (I don't have an Xbox), but I am tired of playing MAG. Plus I need something to pull up my Excel fast... a PC is still a requirement, even though Android and tablets dominated CES. The gaming rig also had an ATi Radeon 5750 1GB. The motherboard was a Gigabyte DS3 P35 with 4x 1GB Patriot sticks (for a total of 4GB of RAM, which yes, I know is tiny for ZFS, but... it works, and it's just storage, not speed).

For this build I used the motherboard, RAM, CPU, and video card mentioned above, and added a Norco RPC-450 case, 8x 2TB Hitachi HDS722020ALA330s, and 3 Seagate 7200.10 500GB drives. The Seagates are in a mirror for the ZFS root pool, and the 8 Hitachis are in RAID-Z2 for a 12TB array.
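As a sanity check on that 12TB figure: RAID-Z2 spends two disks' worth of space on parity, so usable capacity is roughly (disks − parity) × disk size. A minimal sketch (the function name is mine, not from any ZFS tool, and it ignores metadata overhead):

```python
def raidz_usable(disks: int, size_tb: float, parity: int) -> float:
    """Approximate RAID-Z usable space: (disks - parity) * per-disk size."""
    return (disks - parity) * size_tb

# 8x 2TB in RAID-Z2 (two parity disks) -> 12 TB usable, matching the build above
print(raidz_usable(8, 2.0, parity=2))
```

The same formula with `parity=1` covers plain RAID-Z, and `parity=3` covers RAID-Z3.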

Now, the motherboard only has 6 SATA ports onboard. I am using four of them: three for the Seagates and one for the DVD burner. At first I was about to buy an Intel somethingsomethingsomething with two 8087 connectors so I could buy the fan-out cables and hook all 8 Hitachis up to the same controller (and then go through the bullshit of disconfiguring and configuring everything). No thanks. Instead I got 2x Promise SATA300 TX4 cards for $60 each.

Now this is where I am at

physicaldisks.png


smartstatus.png


zfs12tb.png


zfsroot.png


Thanks for the cropping idea!

Sorry, these benchmarks are incomplete; I got tired and wanted to sleep.
ZFSGURU-benchmark, version 1
Test size: 1.000 gigabytes (GiB)
Test rounds: 3
Cooldown period: 2 seconds
Sector size override: default (no override)
Number of disks: 8 disks
disk 1: gpt/2000-1
disk 2: gpt/2000-2
disk 3: gpt/2000-3
disk 4: gpt/2000-4
disk 5: gpt/2000-5
disk 6: gpt/2000-6
disk 7: gpt/2000-7
disk 8: gpt/2000-8

* Test Settings: TS1;
* Tuning: KMEM=6g; AMIN=2g; AMAX=3g;
* Stopping background processes: sendmail, moused, syslogd and cron
* Stopping Samba service

Now testing RAID0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ: 109 MiB/sec 108 MiB/sec 108 MiB/sec = 108 MiB/sec avg
WRITE: 136 MiB/sec 126 MiB/sec 132 MiB/sec = 131 MiB/sec avg

Now testing RAIDZ configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ: 104 MiB/sec 105 MiB/sec 104 MiB/sec = 105 MiB/sec avg
WRITE: 89 MiB/sec 86 MiB/sec 88 MiB/sec = 87 MiB/sec avg

Now testing RAIDZ2 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ: 109 MiB/sec 112 MiB/sec 107 MiB/sec = 109 MiB/sec avg
WRITE: 69 MiB/sec 73 MiB/sec 73 MiB/sec = 72 MiB/sec avg

Now testing RAID1 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ: 134 MiB/sec 90 MiB/sec 87 MiB/sec = 104 MiB/sec avg
WRITE: 19 MiB/sec 18 MiB/sec 17 MiB/sec = 18 MiB/sec avg

Now testing RAID1+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ: 97 MiB/sec 101 MiB/sec 94 MiB/sec = 97 MiB/sec avg
WRITE: 55 MiB/sec 58 MiB/sec 56 MiB/sec = 56 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ: 112 MiB/sec 102 MiB/sec 113 MiB/sec = 109 MiB/sec avg
WRITE: 76 MiB/sec 80 MiB/sec 78 MiB/sec = 78 MiB/sec avg

Now testing RAID0 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ: 107 MiB/sec 107 MiB/sec 107 MiB/sec = 107 MiB/sec avg
WRITE: 116 MiB/sec 129 MiB/sec 130 MiB/sec = 125 MiB/sec avg

Now testing RAID0 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ: 114 MiB/sec 114 MiB/sec 103 MiB/sec = 110 MiB/sec avg
WRITE: 125 MiB/sec 118 MiB/sec 121 MiB/sec = 122 MiB/sec avg

Now testing RAID0 configuration with 6 disks: cWmRd@cWmRd@cWmRd@
READ: 108 MiB/sec 108 MiB/sec 107 MiB/sec = 107 MiB/sec avg
WRITE: 131 MiB/sec 120 MiB/sec 130 MiB/sec = 127 MiB/sec avg

Now testing RAID0 configuration with 7 disks: cWmRd@cWmRd@cWmRd@
READ: 106 MiB/sec 106 MiB/sec 115 MiB/sec = 109 MiB/sec avg
WRITE: 123 MiB/sec 125 MiB/sec 122 MiB/sec = 123 MiB/sec avg

Now testing RAIDZ configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ: 110 MiB/sec 106 MiB/sec 106 MiB/sec = 107 MiB/sec avg
WRITE: 72 MiB/sec 76 MiB/sec 72 MiB/sec = 73 MiB/sec avg

Now testing RAIDZ configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ: 114 MiB/sec 103 MiB/sec 103 MiB/sec = 107 MiB/sec avg
WRITE: 83 MiB/sec 79 MiB/sec 78 MiB/sec = 80 MiB/sec avg

Now testing RAIDZ configuration with 6 disks: cWmRd@cWmRd@cWmRd@
READ: 100 MiB/sec 114 MiB/sec 108 MiB/sec = 107 MiB/sec avg
WRITE: 79 MiB/sec 81 MiB/sec 81 MiB/sec = 80 MiB/sec avg

Now testing RAIDZ configuration with 7 disks: cWmRd@cWmRd@cWmRd@
READ: 99 MiB/sec 101 MiB/sec 101 MiB/sec = 100 MiB/sec avg
WRITE: 87 MiB/sec 87 MiB/sec 88 MiB/sec = 88 MiB/sec avg

Now testing RAIDZ2 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ: 106 MiB/sec 106 MiB/sec 109 MiB/sec = 107 MiB/sec avg
WRITE: 46 MiB/sec 45 MiB/sec 44 MiB/sec = 45 MiB/sec avg

Now testing RAIDZ2 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ: 104 MiB/sec 108 MiB/sec 104 MiB/sec = 106 MiB/sec avg
WRITE: 53 MiB/sec 53 MiB/sec 54 MiB/sec = 53 MiB/sec avg

Now testing RAIDZ2 configuration with 6 disks: cWmRd@cWmRd@cWmRd@
READ: 102 MiB/sec 98 MiB/sec 99 MiB/sec = 100 MiB/sec avg
WRITE: 62 MiB/sec 61 MiB/sec 60 MiB/sec = 61 MiB/sec avg

Now testing RAIDZ2 configuration with 7 disks: cWmRd@cWmRd@cWmRd@
READ: 100 MiB/sec 103 MiB/sec 104 MiB/sec = 103 MiB/sec avg
WRITE: 68 MiB/sec 62 MiB/sec 67 MiB/sec = 65 MiB/sec avg

Now testing RAID1 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ: 92 MiB/sec 92 MiB/sec 94 MiB/sec = 93 MiB/sec avg
WRITE: 22 MiB/sec 22 MiB/sec 22 MiB/sec = 22 MiB/sec avg

Now testing RAID1 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
READ: 92 MiB/sec 92 MiB/sec 91 MiB/sec = 92 MiB/sec avg
WRITE: 17 MiB/sec 17 MiB/sec 17 MiB/sec = 17 MiB/sec avg

Now testing RAID1 configuration with 6 disks: cWmRd@cWmRd@cWmRd@
READ: 93 MiB/sec 97 MiB/sec 88 MiB/sec = 93 MiB/sec avg
WRITE: 23 MiB/sec 25 MiB/sec 18 MiB/sec = 22 MiB/sec avg

Now testing RAID1 configuration with 7 disks: cWmRd@cWmRd@cWmRd@
READ: 88 MiB/sec 93 MiB/sec 91 MiB/sec = 90 MiB/sec avg
WRITE: 15 MiB/sec 22 MiB/sec 16 MiB/sec = 18 MiB/sec avg

Now testing RAID1+0 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
READ: 97 MiB/sec 102 MiB/sec 104 MiB/sec = 101 MiB/sec avg
WRITE: 52 MiB/sec 51 MiB/sec 51 MiB/sec = 51 MiB/sec avg

Now testing RAID1+0 configuration with 6 disks: cWmRd@cWmRd@cWmRd@
READ: 112 MiB/sec 108 MiB/sec 104 MiB/sec = 108 MiB/sec avg
WRITE: 56 MiB/sec 55 MiB/sec 54 MiB/sec = 55 MiB/sec avg

Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
READ: 105 MiB/sec 105 MiB/sec 105 MiB/sec = 107 MiB/sec avg
WRITE: 74 MiB/sec 83 MiB/sec 76 MiB/sec = 73 MiB/sec avg

Now testing RAID0 configuration with 1 disks: cWmRd@cWmRd@cWmRd@
READ: 95 MiB/sec 96 MiB/sec 96 MiB/sec = 96 MiB/sec avg
WRITE: 81 MiB/sec 86 MiB/sec 76 MiB/sec = 81 MiB/sec avg

Now testing RAID0 configuration with 2 disks: cWmRd@cWmRd@cWmRd@
READ: 102 MiB/sec 112 MiB/sec 102 MiB/sec = 105 MiB/sec avg
WRITE: 107 MiB/sec 107 MiB/sec 109 MiB/sec = 108 MiB/sec avg

Now testing RAID0 configuration with 3 disks: cWmRd@cWmRd@cWmRd@
READ: 105 MiB/sec 105 MiB/sec 116 MiB/sec = 108 MiB/sec avg
WRITE: 121 MiB/sec 127 MiB/sec 131 MiB/sec = 126 MiB/sec avg

Now testing RAIDZ configuration with 2 disks: cWmRd@cWmRd@cWmRd@
READ: 109 MiB/sec 105 MiB/sec 109 MiB/sec = 108 MiB/sec avg
WRITE: 41 MiB/sec 41 MiB/sec 40 MiB/sec = 41 MiB/sec avg

Now testing RAIDZ configuration with 3 disks: cWmRd@cWmRd@cWmRd@
READ: 103 MiB/sec 104 MiB/sec 104 MiB/sec = 104 MiB/sec avg
WRITE: 62 MiB/sec 63 MiB/sec 64 MiB/sec = 63 MiB/sec avg

Now testing RAIDZ2 configuration with 3 disks: cWmRd@cWmRd@cWmRd@
READ: 104 MiB/sec 98 MiB/sec 99 MiB/sec = 100 MiB/sec avg
WRITE: 28 MiB/sec 28 MiB/sec 28 MiB/sec = 28 MiB/sec avg

Now testing RAID1 configuration with 2 disks: cWmRd@cWmRd@cWmRd@
READ: 100 MiB/sec 98 MiB/sec 100 MiB/sec = 100 MiB/sec avg
WRITE: 43 MiB/sec 42 MiB/sec 41 MiB/sec = 42 MiB/sec avg

Now testing RAID1 configuration with 3 disks: cWmRd@cWmRd@cW
* ERROR during "dd write"; got return value 15

What else would you guys like to see?
 
Bloody fantastic. Not sure what I did, but it worked:

su
ee /etc/rc.conf

Changed the hostname from zfsguru.bsd to zfsguru-vraa.bsd
Restart
Then voila
img20110205173238.jpg
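For reference, the whole change above boils down to a single line in /etc/rc.conf (ee is just the editor used; any editor works):

```shell
# /etc/rc.conf -- FreeBSD reads this at boot; change the hostname variable
hostname="zfsguru-vraa.bsd"
```

On FreeBSD you can also apply it immediately with `hostname zfsguru-vraa.bsd` as root instead of rebooting; the rc.conf line just makes it stick across restarts.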
 
I believe Civ 5 is one of the most multithreaded games in existence, meaning performance basically scales with CPU cores; a quad-core at the same frequency could perform up to twice as well (shorter turn times).

Especially late in the game there is a lot of pressure on the CPU, less so on the GPU, I believe. So a quad-core would be recommended for this game.

The Intel SASUC8i would be a lot better though, since that Promise card is PCI. This will hurt your performance; it would be very hard to saturate gigabit. If the gigabit Ethernet is on PCI as well, your performance will be really bad.

PCI Express means data doesn't have to wait its turn the way it does on the shared-access PCI bus. With RAID, you want dedicated links to your storage devices for optimal performance.

Regarding your boot problem: where is the system disk connected? Can you connect the system disk(s) to the onboard controller? Is your onboard controller set to AHCI?
 

I reinstalled ZFSGuru -- I think it had to do with setting copies=n, where n is anything but the default, on the boot pool.

They are AHCI and the boot pool drives are connected to the onboard controller

I didn't get the Intel cards because the local supplier didn't have any, plus they were twice as much. Otherwise I agree about the performance; in the benchmarks you can see ~1.06Gbit / 8 ≈ 130MB/sec, × 0.80 ≈ 105MB/sec, which is about the max my benchmarks show.
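That back-of-the-envelope PCI math can be written out explicitly. The 133 MB/s figure is the nominal 32-bit/33 MHz PCI bus rate, and the 80% efficiency factor is the same rough assumption used above, not a measured value:

```python
# Shared-PCI-bus arithmetic behind the numbers above (nominal figures, not measured)
PCI_BUS_MBS = 133.0      # 32-bit / 33 MHz PCI: ~1.06 Gbit/s / 8 bits per byte
EFFICIENCY = 0.80        # assumed arbitration/protocol overhead on a shared bus
GIGABIT_MBS = 1000 / 8   # 1 Gbit/s Ethernet payload ceiling in MB/s

usable = PCI_BUS_MBS * EFFICIENCY
print(f"usable PCI bandwidth: {usable:.0f} MB/s")          # ~106 MB/s
print(f"share per disk, 4 drives: {usable / 4:.1f} MB/s")  # one TX4 fully loaded
print(f"gigabit ceiling: {GIGABIT_MBS:.0f} MB/s")
```

Every card on the classic PCI bus shares that ~106 MB/s, which is why the benchmark plateaus land just above 100 MB/s regardless of disk count.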
 
Perhaps you enabled compression on the root filesystem; that is not supported. Compression can be active on the usr and var filesystems, though, which is the default.

Generally I recommend not changing your boot/system filesystem; just let it be! Especially on a production box you would be safer using separate USB sticks if you want to experiment around; if you want the system to just work again, you boot the original stick instead. Do not connect two or more bootable devices at the same time, though! Only swap them while the system is powered off.
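Before touching anything, you can see where compression is currently active with the standard zfs(8) property commands; `zroot` below is a placeholder pool name, not necessarily what ZFSGuru names yours:

```shell
# List the compression property on every dataset in the pool (read-only, safe)
zfs get -r compression zroot

# Enable compression on a data filesystem only; leave the boot/root dataset alone
zfs set compression=on zroot/usr
```

The first command is harmless to run; the second changes a property, so double-check the dataset name first.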
 