2nd part of this thread:
1st ZFS build....PART 2
$1585.23-CHASSIS: 1 x SuperMicro SC847E16-R1400LPB [instead of SC846E16-R1200B; thanks packetboy]
$ 960.00-HDD: 12 x Samsung HD204UI F4EG
SSD:
RAM: See updated RAM list & note2.
M/B: See updated M/B list & note2.
CPU: See M/B note3 below.
UPS:
$ 164.41-Other: [154.41 + 10 ship] See Parts & Cables below for detail.
--------
$2709.64 = TOTAL
1. Easily expandable: by adding more drives to fill the storage chassis & grow the ZFS pool/vdev, by replacing drives with larger capacity ones, by cascading more chassis, or by doing all three.
2. Data availability through redundancy & a robust filesystem that can handle all that low level management.
3. Complete storage pool encryption [and hopefully even the L2ARC/ZIL cache drives can be encrypted].
3.NOTE: As stated by Ruroni in POST #3, L2ARC is not encrypted; alas!
4. Make the data easily available to users.
Planning on building a ZFS storage solution & working around some limitations, mainly keeping in mind the budget & future expansion needs so that further down the road it doesn't become another complete buildup. This is the first of what will be [hopefully] a two part post. This post will be mostly about the hardware that I will be using as basis for this build.
I have been putting off building a ZFS solution for over a year now, but recently a drive crash and the loss of quite a lot of irreplaceable personal data put an end to that. [Embarrassingly enough, only a few months ago I moved all of this data off to a new drive, as I thought the old drive that contained it might croak any day. Surprisingly enough [actually not], it was the new drive that bit the dust & the old one is still chugging along; SIGH!]
My excuses for stalling the ZFS project, although valid, were mainly waiting for 1TB-platter drives to hit the market [so I could use 2-platter 2TB drives instead of the current 3-platter ones, which are more prone to failure] & for SandyBridge-E/LGA 2011. Alas, they're both [probably] only a quarter away. I didn't want to buy components that are about to be made obsolete in just a few months & have no viable or easy future upgrade path, other than completely replacing the entire system.
I have a basic idea of what I need for this project; I will be listing them below. I have broken down the list by components, so that it becomes easier for readers to go directly to that specific section to read the relevant information. Post suggestions, comments or experience you might have had with these components in a ZFS build or just in general. The only main components I haven't decided on yet are motherboard/CPU [the last one depends entirely on the motherboard I'll be getting] & a UPS.
Thank you all for your time & help. I will be updating this post with what component I am going to be getting, so that anyone reading won't have to read the whole thread just to get to an answer, in case it becomes that long indeed.
Motherboard/CPU
NOTE1: The main sticking point is whether to go for a UP or DP system. Not sure if I'm going to be doing mirrors or RAIDZ3. Initially I wanted to do mirrors as they're simple enough not to stress the CPU too much [not exactly, as I later learned.] I have seen single quad-core Xeons hit the dirt when accessing a 20-24 drive mirror [so forget RAIDZ anything,] as ZFS still has to do checksumming for both reads/writes & a bunch of other fancy stuff that even a hardware RAID card doesn't. I even plan to encrypt the entire ZFS pool.
NOTE2: I don't have any good options when it comes to picking a M/B. I sent an email to SuperMicro asking if they have any Socket LGA 2011 boards in the planning/design/testing phase. Thanks to the corporate email address that I used, 4 days later I got a call from their business sales department. They asked a bunch of questions about the nature of my business & then forwarded me the spec sheet & pictures of a new LGA 2011 board under testing that basically has [almost] everything I wanted, except support for x8 & Kingston RAM; also, it's a UP M/B.
SuperMicro boards seem to support only a few RAM models from a handful of vendors! Samsung, Micron & Hynix are the most supported on most of their boards; these RAM modules are hard to find or very expensive. Kingston, one of the more ubiquitous manufacturers & a cheaper option, is sorely missing from support on SM M/Bs. Even though the 5500/5520 chipsets support SDDC with x8 modules, I could not find a single SM board that does! So I'm stuck planning to buy the more expensive x4 RAM. Lazy BIOS programmers at SM?
NOTE3: CPU - Seems AMD CPUs are really slower than the current Intel offerings. For example, the Opteron 6128 [8 core 2.0GHz] is about 58% slower than the Xeon E3-1230 [4 core 3.2GHz] & about 3% slower than the Xeon X3440 [4 core 2.53GHz].
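Those gaps can be sanity-checked against the PassMark scores listed below. A rough sketch [the scores are the ones quoted in this post, & the benchmark DB changes over time, so treat the exact percentages loosely]:

```python
# Recomputing "percent slower" from the PassMark CPU Mark scores quoted below.
scores = {
    "Opteron 6128": 5105,   # 8 core 2.0GHz
    "Xeon E3-1230": 8211,   # 4 core 3.2GHz
    "Xeon X3440": 5266,     # 4 core 2.53GHz
}

def pct_slower(slow, fast):
    # How much slower `slow` is, as a percentage of the faster score.
    return (fast - slow) / fast * 100

print(round(pct_slower(scores["Opteron 6128"], scores["Xeon X3440"]), 1))   # 3.1
print(round(pct_slower(scores["Opteron 6128"], scores["Xeon E3-1230"]), 1)) # 37.8
```

The ~3% X3440 gap reproduces; the E3-1230 gap comes out smaller than 58% with this convention, so that figure likely came from a different score snapshot or a different way of taking the ratio.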
=Socket LGA 2011
*Unfortunately, the SM sales rep asked me not to disclose the details or forward the spec sheet; otherwise I would post the PDF here
=Socket LGA 1155
240 - Xeon E3-1230 3.2GHz 80W 32nm Quad/8MB /8,211*
350 - Tyan S5512WGM2NR
*The SuperMicro sales rep told me that SM is skipping LGA 1155 M/Bs with SAS2 & going straight to LGA 2011 boards, so that they can be the first to market; yeah I know, doesn't make any sense to me either!
=Socket LGA 1156 / 3420
240 - Xeon X3440 2.53GHz 95W 45nm Quad/8MB /5,266*
280 - Supermicro X8SI6-F [ MBD-X8SI6-F-O ] /SOLARIS 10u8
/1.50v=KVR1333D3D4R9S/8GHB /all x4
=Socket LGA 1366 / 5500
235 - Xeon E5606 2.13GHz 80W Quad/8M /*
390 - Xeon E5620 2.40GHz 80W Quad+HT/12M [5.86GT QPI] [no VT-d] /*
395 - Supermicro X8DTL-6F [MBD-X8DTL-6F-O] /SOLARIS 10u8
/1.35v=HMT31GR7BFR4A-H9, M393B1K70CH0-YH9 /1.50v=HMT31GR7BFR4C-H9, M393B1K70CH0-CH9 /all x4
406 - Supermicro X8DTL-6 [MBD-X8DTL-6-O] no-IPMI /SOLARIS 10u8
/1.35v=HMT31GR7BFR4A-H9, M393B1K70CH0-YH9 /1.50v=HMT31GR7BFR4C-H9, M393B1K70CH0-CH9 /all x4
=Socket LGA 1366 / 5520
525 - Supermicro X8DTH-6F [ MBD-X8DTH-6F-O ] /no-SOLARIS
/1.50v=KVR1333D3D4R9S/8GHB /all x4
515 - Supermicro X8DTH-6 [ MBD-X8DTH-6-O ] no-IPMI /no-SOLARIS
/1.50v=KVR1333D3D4R9S/8GHB /all x4
450 - Supermicro X8DT6-F [ MBD-X8DT6-F-O ] /SOLARIS 10u7
/1.35v=HMT31GR7BFR4A-H9 /1.50v=HMT31GR7BFR4C-H9, M393B1K70CH0-CH9 /all x4
433 - Supermicro X8DT6 [ MBD-X8DT6-O ] no-IPMI /SOLARIS 10u7
/1.35v=HMT31GR7BFR4A-H9 /1.50v=HMT31GR7BFR4C-H9, M393B1K70CH0-CH9 /all x4
463 - Supermicro X8DA6 [ MBD-X8DA6-O ] no-IPMI /SOLARIS 10u6
/1.35v=HMT31GR7BFR4A-H9, M393B1K70CH0-YH9 /1.50v=HMT31GR7BFR4C-H9, M393B1K70CH0-CH9, KVR1333D3D4R9S/8GHB /all x4
-Socket G34
260 - Opteron 6128 2.0GHz 115W Octo/12M /5,105*
* PassMark - CPU Mark @ cpubenchmark.net
RAID 10 [striped mirror]
+Able to expand a mirrored pool/existing vdev by adding new drives & without resilvering [correct me if I'm wrong on either point]
+100% redundancy
+Best performance of all the ZFS RAID, if not being hit by CPU/system IOPS ceiling [hmm, needs more verification]
+Much faster resilvering as no parity calculation is needed
+Drive capacity expansion is much simpler [I can disconnect 1 set of drives, connect the larger ones & rebuild the mirror, then do it again for the other set]
?Cheap/low power quad core CPU able to handle FS task [not sure about this anymore]
-100% redundancy at the cost of 1/2 the storage capacity
RAIDz3
+Only 15% or so storage space used for redundancy in a 22 drive zpool
-Slower than RAID 10 [hmm, also needs more verification]
-Unable to expand capacity to existing vdev by adding new drives [wait what happened to block pointer rewrite functionality?]
-Very slow resilvering
-Cheap/low power quad core CPU is unable to handle FS tasks
So, whether to use ZFS mirrors or RAIDZ, I'll leave the details for the 2nd thread.
?Q. Has anyone done tests comparing the performance/scaling of the underlying hardware & these two types of ZFS RAID [mirror & Z3]?
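To put numbers on the capacity trade-off above, a quick sketch assuming a 22 drive pool of 2.0TB drives [ignores ZFS metadata overhead & TB/TiB conversion]:

```python
# Usable capacity: striped mirrors vs a single RAIDZ3 vdev.
drives, size_tb = 22, 2.0

mirror_usable = drives * size_tb / 2      # half the raw capacity is redundancy
raidz3_usable = (drives - 3) * size_tb    # 3 drives' worth of parity
raidz3_overhead_pct = 3 / drives * 100    # the "15% or so" figure above

print(mirror_usable)                  # 22.0 TB
print(raidz3_usable)                  # 38.0 TB
print(round(raidz3_overhead_pct, 1))  # 13.6
```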
Although I am leaning towards an Intel M/B + CPU, I am open to suggestions for AMD components if there is a significant price/performance advantage. As I see it currently, both DP & UP have some benefits and negatives:
-DP-
+I can start with 1 CPU now & later add another when I start cascading to more storage chassis
+Won't have to replace motherboard/CPU to accommodate future growth
+Might be cheaper in the long run, when compared to replacing the whole system [or maybe not]
+3 channel/6 RAM slots
+Intel SDDC/AMD Chipkill
+RAS feature [at least some]
-Socket LGA 1366 only as no LGA 1155 DP M/B
-Socket LGA 1366 about to be made irrelevant with Socket LGA 2011/SandyBridge-E next quarter
?Initially more expensive than UP M/B + CPU [not sure about this either]
-UP-
+Cheaper than DP, initially
+Socket LGA 1155/SandyBridge based stuff is newer than LGA 1366/Nehalem based DP systems
-No RAS features whatsoever
-2 channel/4 RAM slots only
-No expandability to accommodate future growth, will have to replace M/B & CPU completely
?Probably will hit the I/O limit even with simple mirror type ZFS when using 20-24 drives [I highly doubt it can handle RAIDZ3]
+Motherboard must be compatible with Solaris Express or one of the derivatives based off its dead cousin [OpenSol; RIP].
+Motherboard must be compatible with SuperMicro SC846E16-R1200B chassis
+SAS2 6.0 Gbps controller [preferably LSI SAS2008 or better]
+SAS controller must support Initiator/Target mode [as I will only be doing software/ZFS RAID]
+Intel SDDC [or AMD Chipkill if AMD motherboard]
+IPMI 2.0 + IP-KVM with remote ISO mounting capability
+2 PCIe 2.0 x8/x16 slots [well, the more the better]
+2 Gbit Ethernet, capable of teaming [until I can get 10G Ethernet cards]
?Q. How many disks are supported in I/T mode? [LSI 1068e & the SAS2008-based LSI 9211-8i support up to 122, so I've heard, but not sure]
?Q. How much bandwidth does the controller have to the M/B, not the ports? [LSI 1068e has 2000 MB/s via x8 PCIe 1.0/1.1]
?Q. How much real throughput can the controller handle between the ports & the motherboard? [most RAID cards can only do 1 GB/s or less]
?Q. How many IOPS is the SAS controller rated for? [LSI 1068e was rated for up to 144,000 IOPS, if I remember correctly]
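The host-link numbers in these questions follow from PCIe lane math [per-direction rates; 2.5 GT/s with 8b/10b encoding gives 250 MB/s per lane, PCIe 2.0 doubles it]:

```python
# Per-direction PCIe link bandwidth, after 8b/10b encoding overhead.
MB_PER_LANE = {"1.x": 250, "2.0": 500}

def link_bw_mb(gen, lanes):
    return MB_PER_LANE[gen] * lanes

print(link_bw_mb("1.x", 8))  # 2000 -- the LSI 1068e figure mentioned above
print(link_bw_mb("2.0", 8))  # 4000 -- an SAS2008 HBA in a PCIe 2.0 x8 slot
# 24 spinning drives at an [assumed] ~120 MB/s sequential each is ~2880 MB/s,
# so the host link becomes the ceiling once a chassis fills up.
```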
Would prefer a SuperMicro M/B, as getting multiple parts from the same vendor would simplify warranty & support, but I'd rather get something better price/performance wise, no matter the manufacturer.
UPS
NOTE1: Absolutely no idea whatsoever. I wanted to get 3-4 decent consumer grade UPSes & cascade them together, but everyone said not to. Basically I don't want to shell out several K for an enterprise level UPS; heck, I don't even need something like that.
NOTE2: Currently I have an APC RS1500 LCD. I tried testing it today with my current system to see if it works & it failed; the computer shuts off as soon as I pull the cable from the wall & then constantly powers up and shuts down! My current system is a Q9550/GA-EP45-UD3P + 24" LCD, which draws 200W at idle [135W without the LCD].
+180.00 - Cyber Power 900W LCD UPS [1500VA]
+A few small network appliances & the ZFS storage chassis should last long enough [~10 minutes] at full load for a proper shutdown
+UPS will not turn itself or the system back on until at least 15-20% charged after depletion [so as not to crash the system from a subsequent brownout]
+UPS power outage alarm can be turned off [don't want to wake up the whole neighborhood when on battery]
HDD + SSD
NOTE1: 8-10 + 2 spare SAMSUNG HD204UI F4EG 2.0TB drives for now [already bought 9]
I am thinking of getting 3-4 [2 ZIL + 1-2 L2ARC] cheap & small consumer grade SSDs. These will be MLC based & in the 40-80GB range, all depending on the price. The OCZ Solid 3, OCZ Agility 3 & OCZ Vertex 3 seem to have nice read/write & IOPS [both sustained & random] numbers! Haven't decided which one yet, as I don't know much about the current gen [esp. OCZ model] drives. These cheap SSDs don't need to have any SuperCaps, as I will have a UPS to go with this build.
?Q. What is the difference between OCZ Solid 3, Agility 3 & Vertex 3 drives?
?Q. Suggestion for any other SSD brand/model for my build, that has 3-5 years warranty? [def will be needing the longer warranty]
Chassis
NOTE1: I decided on the SuperMicro SC846E16-R1200B. The SC846E26-R1200B has dual SAS 6.0 Gbps ports, but also costs an extra $100-$150. Though I could always use the extra bandwidth when I start cascading to other chassis, I don't think the 2 SAS ports can be teamed [like Ethernet ports] by connecting them both to the same controller; I think they're only for failover.
Even with a DP motherboard I don't think I need a 1200W PSU! Any chassis with a SAS 6.0 Gbps backplane & a 900W PSU would do, not to mention it could have saved $100-$150 on the price. Unfortunately SuperMicro doesn't have any E16/E26 chassis with a 900W PSU.
4U chassis that can hold 20-24 drives are almost nonexistent. Both the Habey & NORCO ones are so barebones that I would have to spend a lot of money on a good quality redundant PSU + HBA cards, which can easily add another $1K. Not to mention they usually have 4-6 separate backplanes with a separate SAS connector for each; no onboard SAS controller will have enough ports for these.
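On the 900W-vs-1200W question, a back-of-envelope power budget [all per-component wattages here are my assumptions for this class of hardware, not measured values]:

```python
# Worst case is simultaneous spin-up of every drive; steady state is far lower.
drives = 24
spinup_w = 25    # assumed: ~2A @ 12V plus 5V rail during 5400rpm spin-up
active_w = 8     # assumed: HD204UI-class drive, read/write
platform_w = 155 # assumed: one quad-core Xeon [95W] + board, RAM & fans [60W]

worst_case_w = drives * spinup_w + platform_w
steady_w = drives * active_w + platform_w

print(worst_case_w)  # 755 -- within a 900W budget even without staggered spin-up
print(steady_w)      # 347
```

Backplanes that support staggered spin-up lower the worst case further.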
1585.23 @ wiredzone.com = SuperMicro SC847E16-R1400LPB / part# CSE-847E16-R1400LPB [free shipping]
+3 year warranty!!!
+Redundant PSU [don't have to spend extra money on buying 2 quality PSU's]
+Single port SAS2 6.0 Gbps backplane connection [cabling just got simpler]
+2 extra SAS ports built into the backplane for cascading.
+12 extra hot swap drive bays in the back.
+Only costs 20% more for 50% more capacity than the SC846E16-R1200B.
-Expensive! The 2 backplanes alone will run $700-800, about half the price of this chassis!!!
-Working on the 12 drives in the back has got to be tough, esp. if your server cabinet doesn't have a rear door!
?Q. How many chassis can be cascaded together? [manual doesn't mention that]
?Q. Heard that drives pop out easily from this chassis; should I get the front bezel with lock to prevent that?
?Q. Any way to put 4 SSDs total inside the chassis, as I want to use all 24 hot swap bays for drives? [2 can be mounted with optional drive trays]
?Q. If anyone has other suggestion for chassis, feel free to post it.
1234.79 @ wiredzone.com = SuperMicro SC846E16-R1200B / part# CSE-846E16-R1200BP [free shipping, which usually runs $150-200; yay!!!]
+3 year warranty!!!
+Redundant PSU [don't have to spend extra money on buying 2 quality PSU's]
+Single port SAS2 6.0 Gbps backplane connection [cabling just got simpler]
+2 extra SAS ports built into the backplane for cascading.
-Expensive! Backplane by itself is ~600 dollars, half the price of this chassis!!!
?Q. How many chassis can be cascaded together? [manual doesn't mention that]
?Q. Heard that drives pop out easily from this chassis, should I get the front bezel with lock to prevent that?
?Q. Any way to put 4 SSDs total inside the chassis, as I want to use all 24 hot swap bays for drives? [2 can be mounted with optional drive trays]
?Q. If anyone has other suggestion for chassis, feel free to post it.
RAM
NOTE1: I'd prefer to get x8 modules, as they're 33%-50% cheaper than x4 modules, as long as I can get SDDC working [well, most new Intel chipsets now work with both types of module, except for a few things.] Mainly I want the demand & patrol scrubbing capability of SDDC. So I've heard that RAM errors are quite common; the numbers bandied around range between 1 bit error/hour/GB of RAM [10^−10] & 1 bit error/century/GB of RAM [10^−17].
NOTE2: Even though the Intel 5500 & 5520 chipsets support x8 [quad ranked] modules, even with SDDC, it seems that SuperMicro M/Bs don't! So I'll have to fork out 2x-3x as much for the x4 [single/dual ranked] modules! Oh, & did I mention Kingston server RAM support is wholly missing on the SM boards too [they're more ubiquitous & cheaper than other brands]?
Initially I'd like to start off with 24GB, seeing as how cheap RAM is nowadays and of course the M/B permitting.
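The two error-rate bounds in NOTE1 are seven orders of magnitude apart; here is what each implies for a 24GB system [straight multiplication, taking a century as 100 x 8766 hours]:

```python
# Expected bit errors per year for 24 GB of RAM at the two quoted rates.
gb = 24
hours_per_year = 8766

rate_high = 1.0                          # 1 bit error / hour / GB (pessimistic)
rate_low = 1.0 / (100 * hours_per_year)  # 1 bit error / century / GB (optimistic)

errors_high = rate_high * gb * hours_per_year  # ~210k errors/year
errors_low = rate_low * gb * hours_per_year    # ~0.24 errors/year

print(round(errors_high))    # 210384
print(round(errors_low, 2))  # 0.24
```

Either way, ECC with patrol scrubbing is what catches single-bit errors before they accumulate into uncorrectable multi-bit ones.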
@newegg.com
389.99 - x4 Kingston 24GB 240-Pin ECC Registered DDR3 1333 KVR1333D3D4R9SK3/24G [3 x 8GB]
259.99 - x4 Kingston 16GB 240-Pin ECC Registered DDR3 1333 KVR1333D3D4R9SK2/16G [2 x 8GB]
+152.99 - x4 Kingston 8GB 240-Pin ECC Registered DDR3 1333 KVR1333D3D4R9S/8GHB
+149.99 - x4 SAMSUNG 8GB 240-Pin ECC Registered DDR3 1333 M393B1K70CH0-CH9
+149.99 - x4 SAMSUNG 8GB 240-Pin ECC Registered DDR3 1333 M393B1K70CH0-YH9 /1.35v
+124.99 - x4 Kingston 8GB 240-Pin ECC Registered DDR3 1333 KVR1333D3D4R9S/8G
+89.99 - x8 Kingston 8GB 240-Pin ECC Registered DDR3 1333 KVR1333D3Q8R9S/8G
@fticomputer.com +ship
+131.48 - x4 Hynix 8GB 240-Pin ECC Registered DDR3 1333 HMT31GR7BFR4C-H9
+131.48 - x4 Hynix 8GB 240-Pin ECC Registered DDR3 1333 HMT31GR7BFR4A-H9 /1.35v
++79.74 - x8 Kingston 8GB 240-Pin ECC Registered DDR3 1333 KVR1333D3LQ8R9S/8G /1.35v
http://www.buy.com/prod/kingston-kvrt.aspx?Item=N221238858.html
+RAM needs to be compatible with motherboard.
?Q. Should I look to get LV/1.35v DDR3 or stick to the regular stuff? [heard LV DIMMs can't be used in dense configurations, as in fewer modules]
Parts & Cables
@ wiredzone.com
42.24 - 2 x [$21.12] MCP-220-84701-0N Supermicro 3.5"/2x2.5" System hard drive tray
61.55 - 1 x 4U, Front Bezel Cover, for SC846, SC848
@ newegg.com
07.98 - 2 x [$3.99] Rosewill 12" PWM Splitter Model RCW-FPS-401
01.99 - 1 x StarTech 8" EPS 8 Pin Power Extension Cable Model EPS8EXT
09.98 - 2 x [4.99] NORCO C-SFF8087-4S Discrete to SFF-8087 (Reverse breakout) Cable
19.99 - 1 x NORCO C-SFF8087-D SFF-8087 to SFF-8087 Internal Multilane SAS Cable - OEM
09.68 - Shipping
- 1 or 2 heatsink brackets, in case the M/B doesn't come with one
P.S. Please forgive any typos or omissions; it's very late here.
20110816_1733: Link to 2nd part of this thread
20110814_1710: added UPS section; edited list of RAM, M/B models; also added [some] officially supported make/model of RAM for each M/B
20110807_1642: typos corrected