I initially wanted to build my fileserver around an expensive Areca ARC-1882ix-24 controller. Then I read lots of threads about ZFS, which seems much safer against all the silent data corruption problems than hardware RAID.
So I changed my mind and rethought my build.
First question: am I missing something? Is there still any benefit to this kind of expensive hardware RAID setup over a cheaper and safer software RAIDZ, which needs neither an expensive controller nor expensive TLER enterprise drives while offering self-healing? Does hardware RAID give better performance or anything else?
As data integrity is my priority, I don't think I have much choice, but I would like to be sure.
UPDATE
The config I first thought of:
mobo: X9SCM-iiF
cpu: xeon E3-1270 V2
ram: 4*8GB DDR3 ECC
hba: 3*M1015
network: X540-T2
ssd: intel 335 240GB (boot/ZIL/L2ARC to be defined)
hdd: 24*5K4000
case: RM424pro
psu: seasonic P-760
os: ? (OpenIndiana, OpenSolaris, FreeNAS, Nexenta, ???)
Updated config:
mobo: X9SRH-7TF or X9SRL-F
cpu: xeon E5-1620
ram: x*16GB DDR3 ECC registered
hba: 2 or 3*M1015
network: none or X540-T2
boot: SLC USB key
ssd: ZIL intel 320 80GB + L2ARC intel 520 480GB
hdd: 24*5K4000
case: RM424pro
psu: seasonic P-760
os: OpenIndiana+napp-it
cooling: kama stay
Questions:
About RAM size: I wonder if replacing this LGA1155 system with an LGA2011 one would be worth it?
I could use a lot more RAM than 32GB, and I read that ZFS loves RAM.
But I also saw reports of problems with too much RAM.
So what's the best amount of RAM to avoid those freezes?
Is going above 32GB of RAM worth it? (128GB would be required for the "1GB of RAM per 1TB of HDD" ratio, and more could always be useful.)
Also, is deduplication worth it? Is it safe? (It seems impossible to "undedupe" data.)
It seems to need a lot of CPU power and a lot of RAM to work properly, so I wonder.
I won't use deduplication.
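For scale, here is a rough back-of-envelope for dedup's RAM appetite. The ~320 bytes per DDT entry and the 128 KiB average block size are rule-of-thumb assumptions, not exact figures:

```python
# Rough estimate of the RAM needed by the ZFS dedup table (DDT).
# Assumptions (rules of thumb, not measured): ~320 bytes per DDT
# entry, one entry per unique block, 128 KiB average block size.

def ddt_ram_gib(pool_tib, avg_block_kib=128, bytes_per_entry=320):
    """Approximate DDT size in GiB for a full pool of `pool_tib` TiB."""
    blocks = pool_tib * 1024**3 / avg_block_kib  # KiB in pool / KiB per block
    return blocks * bytes_per_entry / 1024**3

# 24 * 4 TB in RAIDZ3 -> 21 data disks, roughly 76 TiB usable.
print(round(ddt_ram_gib(76), 1))  # -> 190.0 GiB of DDT for a full pool
```

With numbers like that, the DDT would spill out of ARC onto L2ARC (or disk) long before the pool fills, which is exactly why dedup gets a bad reputation on big home pools.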
About CPU power: what are ZFS's needs (with/without deduplication)?
There is also the E3-1220L with its tiny 17W TDP, but I'm afraid it would bottleneck performance.
Also, an LGA2011 setup would allow a 6-core CPU.
About the M1015: I read in a thread that there are different revisions (46M0861 & 46M0831); which one is the right one, and how can I tell them apart?
How do I cool them properly, as they seem to get very hot?
120mm fans blowing on the PCI cards should do it, but unfortunately I couldn't find a mounting setup that doesn't eat PCI slots (I thought of placing the fans "top-down").
I noted the (discontinued) Kama Stay, but it still requires one PCI slot.
I hope the 3*120mm fan wall will be enough, as I can't find the Kama Stay or anything similar.
About SSDs, ZIL, L2ARC & HDDs: is it worth it?
A dedicated ZIL device seems very dangerous to use, as losing it could mean losing all the data stored on the HDDs. Is the performance increase worth the risk of such a weak link?
How do you install the OS? On a third SSD? Could the ZIL SSD double as the boot drive, since only about 8GB are needed for the ZIL (from what I read)?
The 5xxxRPM non-TLER HDDs should run quietly enough and give sufficient performance, I hope.
I also read that ZFS doesn't work well with 4K-sector HDDs, but I can't find any 4TB non-4K HDD, so how should I proceed? Is there a way to build a proper RAIDZ out of 4TB HDDs?
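On the "only 8GB" point, here is a sketch of the usual SLOG sizing rule: the device only has to hold a few seconds of incoming synchronous writes (roughly two transaction groups). The 10-second window and the saturated-10GbE worst case are assumptions:

```python
# Back-of-envelope SLOG (dedicated ZIL device) sizing. The SLOG only
# needs to buffer a few seconds' worth of incoming synchronous writes
# (roughly two transaction groups, ~5-10 s by default), so even a
# saturated 10GbE link needs only a small slice of an 80GB SSD.

def slog_size_gib(link_gbit=10, seconds=10):
    """Worst case: the network link saturated with sync writes for `seconds`."""
    bytes_per_sec = link_gbit / 8 * 1e9  # 10 Gbit/s -> 1.25 GB/s
    return bytes_per_sec * seconds / 1024**3

print(round(slog_size_gib(), 1))  # -> 11.6 GiB for 10 s of full 10GbE
```

So partitioning a small slice of the SSD for the SLOG and leaving the rest unused (or for boot, at the cost of mixing workloads) is at least arithmetically plausible.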
ashift=12
About L2ARC: it seems very useful for boosting the IOPS of slow 5xxxRPM drives, and safer than a ZIL since it's only a read cache, but how does it cope when the pool itself reads faster?
What if the L2ARC SSD can't read as fast as the 24 drives can? (And can't even write data as fast as the HDDs can read it.)
I don't think there is any consumer SSD that can reach the 1GB/s read (let alone write) speed of a 24-HDD array...
Still wondering if a lower-throughput L2ARC can bottleneck a large array on sequential reads.
Can ZFS use SGPIO from the backplanes to spot failed drive(s)?
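A quick sanity check on that throughput gap; the per-drive and SSD figures below are my rough assumptions, not benchmarks:

```python
# Comparing raw sequential throughput: a wide spindle array vs one SSD.
# Assumed figures (approximate, for illustration only): ~130 MB/s
# outer-track sequential per 5K4000, ~500 MB/s for one SATA SSD.
# Note: an L2ARC *miss* is served straight from the pool, not funneled
# through the SSD, so a slower L2ARC should not cap sequential reads.

def pool_seq_mbps(n_drives, per_drive_mbps=130):
    """Idealized aggregate sequential throughput of the spindles."""
    return n_drives * per_drive_mbps

print(pool_seq_mbps(24))        # -> 3120 MB/s raw from 24 spindles
print(pool_seq_mbps(24) // 500)  # -> 6: one SSD is ~6x slower on paper
```

The on-paper gap is real, but since cache misses bypass the L2ARC entirely, the SSD can only add bandwidth for hot data, not subtract it from the pool.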
About the OS: which one should I use?
What are the pros/cons of the different ZFS-capable OSes?
OpenIndiana+napp-it sounds like a nice choice.
About the RAIDZ configuration: what's the best layout for 24 drives?
I read that vdevs should use a specific number of drives, like 10 for best performance, but that would leave a third vdev of only 4 HDDs.
Also, what kind of RAIDZ?
- 24 HDDs in a single RAIDZ2/RAIDZ3?
- 2*10 in RAIDZ2 + 4 in RAIDZ?
- 3*8 in RAIDZ1/RAIDZ2?
- 2*12 in RAIDZ2?
Is the performance drop significant if I don't make 10-drive vdevs? I don't want to lose too much disk space, as the main goal is storage.
I will probably make a 24-HDD RAIDZ3.
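To compare raw usable space across those layouts, a quick sketch with 4 TB drives (simplified: it ignores slop space, padding, and the TB-vs-TiB difference):

```python
# Usable capacity of the candidate 24-drive layouts, with 4 TB drives.
# Simplified model: usable = (drives - parity) * drive size per vdev.

DRIVE_TB = 4

def usable_tb(vdevs):
    """vdevs: list of (drives_in_vdev, parity_level) tuples."""
    return sum((n - p) * DRIVE_TB for n, p in vdevs)

layouts = {
    "1 x 24 RAIDZ3":           [(24, 3)],
    "2 x 12 RAIDZ2":           [(12, 2), (12, 2)],
    "3 x 8 RAIDZ2":            [(8, 2)] * 3,
    "2 x 10 RAIDZ2 + 4 RAIDZ": [(10, 2), (10, 2), (4, 1)],
}
for name, vdevs in layouts.items():
    print(f"{name}: {usable_tb(vdevs)} TB usable")
```

The single RAIDZ3 gives the most space (84 TB) but only one vdev's worth of IOPS; the 3x8 RAIDZ2 gives the least space (72 TB) but three vdevs' worth of IOPS and better resilver behavior. That trade-off is the real decision here.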
-----------------------------------------------------------------------------------------------------------------------------
Second build:
I also want to renew my multi-purpose Windows desktop config (internet, photo editing, light games, download box...).
The problem is that while using Windows, required for apps/games, I won't benefit from ZFS's strengths and self-healing capabilities, which annoys me since data could get corrupted before it is ever written to the fileserver. Also, I would get the RAID5 write-hole problem with hardware RAID5...
I read about someone who ran ZFS inside a VM but lost his data because of a flush problem in the VM layer.
Is there any way to get a safe RAID setup under Windows, with checksums and all?
Some sort of automatic PAR/checksum system, plus a weekly scrub.
I know I could also write directly to the ZFS server over the network and even boot from it, but that would mean leaving the fileserver online 24/7, consuming power and making noise (probably more than the quiet desktop config I want), as it would never idle.
Any ideas for this problem?
I still have to choose between an all-in-one box or two separate configs.
The config I have thought of so far:
mobo: X9SCM-iiF
cpu: E3 1270 V2
heatsink: CR95C
ram: 16/32GB DDR3 ECC
network: X540-T2
video: passive 7750 or 7770
sound: xonar stx
ssd: intel 335 240GB
raid: areca ARC-1223 8i
hdd: 4*WD RED 3/4TB in RAID5
case: RSV-L4000
fans: 3*noctua NF-P12 for the middle 120mm fanwall
backplane: SK-34A-S2, removing the stock 80mm and using the fanwall for cooling instead
psu: seasonic P-660
os: windows7
in case of two configs:
desktop config:
mobo: X9SCM-iiF or X9SRH-7TF
cpu: E3-1270V2 or E5-1620
ram: 16GB DDR3 ECC or registered
network: X540-T2 or none
video: 7750/7770
sound: xonar stx/st
ssd: intel 335 240GB
case: RSV-L4000
fans: 3*NF-P12
psu: seasonic P-520 or P-660 (unsure about safe position of a fanless psu into a 4U case)
os: windows7
5.25" drive bays: LTO6, SATA CF reader, 2.5" racks
ZFS config:
mobo: X9SCM-iiF or X9SRH-7TF
cpu: E3-1270V2 or E5-1620
ram: 4*8GB DDR3 ECC or 2/4*16GB DDR3 ECC registered
network: X540-T2 or none
ssd: ZIL 2*intel 320 80GB mirrored, L2ARC 2*intel 520 480GB striped
case: RSV-L4000
fans: 3*NF-P12
backplane(s): 1/2*CSE-M35T-1 black
hba: M1015 or none
hdds: 5/10*5K4000 raidz2
psu: seasonic P-520 or P-660
os: OI+napp-it
5.25" bays: 6*2.5" to 1*5.25" rack
in case of all-in-one:
all-in-one:
mobo: X9SRH-7TF
cpu: E5-1620
ram: 3*16GB DDR3 ECC registered
video: 7750/7770
sound: xonar st
ssd: 2*335 240GB boot mirrored, 2*320 80GB ZIL mirrored, 2*520 480GB L2ARC striped
case: RSV-L4000
fans: 3*NF-P12
backplane(s): 1/2*CSE-M35T-1 black
hba: M1015
hdds: 5/10*5K4000 raidz2
psu: seasonic P-520 or P-660
os: OI+napp-it
5.25" bays: 6*2.5" to 1*5.25" rack, LTO6, SATA CF reader.
About compatibility: does the fanless Nofan CR95C fit in this case?
Does the SK-34A-S2 fit in the RSV-L4000, in front of the fan wall?
Or do you know a better rackmount case that also has a 3*120mm mid fan wall that could cool the SK-34A-S2 backplane with its stock fan removed?
Still searching for alternative 4U cases with a 3*120mm fan wall.
About the network: I thought of a 10GbE link between these two configs; the X540-T2 seems to be the card to buy, but what about the switch?
I would need something with at least 8-12 1Gb ports and 4 10Gb ports.
I noted some interesting 24*1Gb + 4*10Gb switches in the $1000 price range from Cisco/HP/Netgear, like the HP E2910-24G, the SG500X-24-K9-NA or the GSM7328S-200NAS.
What about the noise? They seem to have built-in fans.
About backups, I thought of an LTO loader, but it seems tapes are subject to the same silent data corruption problem.
Is there a way to back up data safely without building a costly second ZFS server?
edit:
I just realized the X9SCM-iiF has only two PCIe x8 and two PCIe x4 slots; is that going to cause problems or bottlenecks?
- for the M1015 (+8*5K4000)
- for the X540-T2 10GbE NIC
- for the Areca ARC-1223 8i
- for the 7750/7770
The problem would only occur with the all-in-one on the LGA1155 platform.
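A quick bandwidth check, conservatively assuming PCIe 2.0 rates for every slot (the per-device needs below are my rough assumptions):

```python
# PCIe bandwidth sanity check for the X9SCM-iiF slots (2x x8, 2x x4).
# PCIe 2.0 gives ~500 MB/s usable per lane per direction (after 8b/10b
# encoding overhead); treated here as the worst case for all slots.

LANE_MBPS = 500  # approximate usable MB/s per PCIe 2.0 lane

def slot_mbps(lanes):
    """Usable one-direction bandwidth of a slot with `lanes` lanes."""
    return lanes * LANE_MBPS

# Rough per-card needs (assumptions):
#   M1015 + 8 x 5K4000: 8 * ~130 MB/s  = ~1040 MB/s -> fits in x4
#   X540-T2 (2 x 10GbE): 2 * 1250 MB/s = ~2500 MB/s -> wants x8
#   ARC-1223 + 4 HDDs:   well under 1000 MB/s       -> fits in x4
print(slot_mbps(4), slot_mbps(8))  # -> 2000 4000
```

So on these figures, only the X540-T2 genuinely needs an x8 slot; the HBA and RAID card would be comfortable in the x4 slots, and the GPU's slot placement is the main squeeze in the all-in-one.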