1st ZFS build....PART 1

taroumaru

2nd part of this thread:
1st ZFS build....PART 2

$1585.23-CHASSIS: 1 x SuperMicro SC847E16-R1400LPB [instead of SC846E16-R1200B; thanks packetboy]
$ 960.00-HDD: 12 x Samsung HD204UI F4EG
SSD:
RAM: See updated RAM list & note2.
M/B: See updated M/B list & note2.
CPU: See M/B note3 below.
UPS:
$ 164.41-Other: [$154.41 + $10 shipping] See Parts & Cables below for detail.
--------
$2709.64 = TOTAL


1. Easily expandable: grow the ZFS pool/vdevs by adding more drives to fill the storage chassis, by replacing drives with larger capacity ones, by cascading more chassis, or by doing all three.
2. Data availability through redundancy & a robust filesystem that can handle all that low level management.
3. Complete storage pool encryption [and hopefully even the L2ARC/ZIL cache drives can be encrypted].
3. NOTE: As stated by Ruroni in POST #3, the L2ARC is not encrypted; alas!
4. Make the data easily available to users.

Planning on building a ZFS storage solution & working around some limitations, mainly keeping the budget & future expansion needs in mind so that further down the road it doesn't turn into another complete rebuild. This is the first of what will [hopefully] be a two part post. This post will be mostly about the hardware that I will be using as the basis for this build.

I have been putting off building a ZFS solution for over a year now, but recently a drive crash and the loss of quite a lot of irreplaceable personal data has put an end to that [embarrassingly enough, I moved all of this data off to a new drive only a few months ago because I thought the old drive containing it might croak any day. Surprisingly enough [actually not], it was the new drive that bit the dust & the old one is still chugging along; SIGH!]

My excuses for stalling the ZFS project, although valid, were mainly waiting for 1TB-platter drives to hit the market [so that I could use 2-platter 2TB drives instead of the current 3-platter ones, which are more prone to failure] & for SandyBridge-E/LGA 2011. Alas, they're both [probably] only a quarter away. I didn't want to buy up components that are about to be made obsolete in just a few months & have no viable or easy future upgrade path, other than completely replacing the entire system.

I have a basic idea of what I need for this project; I will be listing it below. I have broken the list down by component, so that it's easier for readers to jump directly to a specific section for the relevant information. Post any suggestions, comments or experience you might have had with these components in a ZFS build or just in general. The only main components I haven't decided on yet are the motherboard/CPU [the latter depends entirely on the motherboard I'll be getting] & a UPS.

Thank you all for your time & help. I will be updating this post with the components I end up getting, so that anyone reading won't have to go through the whole thread just to get an answer, in case it does become that long.

Motherboard/CPU

NOTE1: The main sticking point is whether to go for a UP or DP system. Not sure if I'm going to be doing mirrors or RAIDZ3. Initially I wanted to do mirrors as they're simple enough not to stress the CPU too much [not exactly, as I later learned.] I have seen a single quad-core Xeon hit the dirt when accessing a 20-24 drive mirror [so forget RAIDZ anything,] as ZFS still has to do checksumming for both reads & writes & a bunch of other fancy stuff that even a hardware RAID card doesn't. I even plan to encrypt the entire ZFS pool.
NOTE2: I don't have any good options when it comes to picking a M/B. I sent an email to SuperMicro asking if they have any Socket LGA 2011 boards in the planning/design/testing phase. Thanks to the corporate email address I used, 4 days later I got a call from their business sales department, asking a bunch of questions about the nature of the business & then forwarding me the spec. sheet & pictures of a new LGA 2011 board under testing that basically has [almost] everything I wanted, except support for x8 & Kingston RAM; also, it's a UP M/B.

SuperMicro boards seem to support only a few RAM models from a handful of vendors! Samsung, Micron & Hynix are the most widely supported across their boards; these RAM modules are hard to find or very expensive. Kingston, one of the more ubiquitous manufacturers & a cheaper option, is sorely missing from SM M/B support lists. Even though the 5500/5520 chipsets support SDDC with x8 modules, I could not find a single SM board that does! So I'm stuck planning to buy the more expensive x4 RAM. Lazy BIOS programmers at SM?

NOTE3: CPU - It seems AMD CPUs are simply slower than the current Intel offerings. For example, going by the PassMark scores below, the Xeon E3-1230 [4 core 3.2GHz] scores about 60% higher than the Opteron 6128 [8 core 2.0GHz], which is itself about 3% slower than the Xeon X3440 [4 core 2.53GHz].

=Socket LGA 2011
*Unfortunately, I was asked by the SM sales rep not to disclose the details or forward the spec. sheet; otherwise I would post the PDF here.


=Socket LGA 1155
240 - Xeon E3-1230 3.2GHz 80W 32nm Quad/8MB /8,211*
350 - Tyan S5512WGM2NR
*The SuperMicro sales rep told me that SM is skipping LGA 1155 M/Bs with SAS2 & going straight to LGA 2011 boards, so that they can be first to market; yeah I know, it doesn't make any sense to me either!

=Socket LGA 1156 / 3420
240 - Xeon X3440 2.53GHz 95W 45nm Quad/8MB /5,266*
280 - Supermicro X8SI6-F [ MBD-X8SI6-F-O ] /SOLARIS 10u8
/1.50v=KVR1333D3D4R9S/8GHB /all x4

=Socket LGA 1366 / 5500
235 - Xeon E5606 2.13GHz 80W Quad/8M /*
390 - Xeon E5620 2.40GHz 80W Quad+HT/12M [5.86GT QPI] [no VT-d] /*
395 - Supermicro X8DTL-6F [MBD-X8DTL-6F-O] /SOLARIS 10u8
/1.35v=HMT31GR7BFR4A-H9, M393B1K70CH0-YH9 /1.50v=HMT31GR7BFR4C-H9, M393B1K70CH0-CH9 /all x4
406 - Supermicro X8DTL-6 [MBD-X8DTL-6-O] no-IPMI /SOLARIS 10u8
/1.35v=HMT31GR7BFR4A-H9, M393B1K70CH0-YH9 /1.50v=HMT31GR7BFR4C-H9, M393B1K70CH0-CH9 /all x4

=Socket LGA 1366 / 5520
525 - Supermicro X8DTH-6F [ MBD-X8DTH-6F-O ] /no-SOLARIS
/1.50v=KVR1333D3D4R9S/8GHB /all x4
515 - Supermicro X8DTH-6 [ MBD-X8DTH-6-O ] no-IPMI /no-SOLARIS
/1.50v=KVR1333D3D4R9S/8GHB /all x4

450 - Supermicro X8DT6-F [ MBD-X8DT6-F-O ] /SOLARIS 10u7
/1.35v=HMT31GR7BFR4A-H9 /1.50v=HMT31GR7BFR4C-H9, M393B1K70CH0-CH9 /all x4
433 - Supermicro X8DT6 [ MBD-X8DT6-O ] no-IPMI /SOLARIS 10u7
/1.35v=HMT31GR7BFR4A-H9 /1.50v=HMT31GR7BFR4C-H9, M393B1K70CH0-CH9 /all x4

463 - Supermicro X8DA6 [ MBD-X8DA6-O ] no-IPMI /SOLARIS 10u6
/1.35v=HMT31GR7BFR4A-H9, M393B1K70CH0-YH9 /1.50v=HMT31GR7BFR4C-H9, M393B1K70CH0-CH9, KVR1333D3D4R9S/8GHB /all x4

=Socket G34
260 - Opteron 6128 2.0GHz 115W Octo/12M /5,105*

* PassMark - CPU Mark @ cpubenchmark.net


RAID 10 [striped mirror]
+Able to expand a mirrored pool by adding new mirror vdevs & without resilvering the existing ones [correct me if I'm wrong on either point]
+100% redundancy
+Best performance of all the ZFS RAID levels, if not being hit by a CPU/system IOPS ceiling [hmm, needs more verification]
+Much faster resilvering as no parity calculation is needed
+Drive capacity expansion is much simpler [I can disconnect 1 set of drives, connect the larger ones & rebuild the mirror, then do it again for the other set; see the zpool sketch after these lists]
?Cheap/low power quad core CPU able to handle FS tasks [not sure about this anymore]
-100% redundancy at the cost of 1/2 the storage capacity

RAIDz3
+Only 15% or so storage space used for redundancy in a 22 drive zpool
-Slower than RAID 10 [hmm, also needs more verification]
-Unable to expand the capacity of an existing vdev by adding new drives [wait, what happened to block pointer rewrite functionality?]
-Very slow resilvering
-Cheap/low power quad core CPU is unable to handle FS tasks
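
To make the two layouts concrete, this is roughly what they look like as zpool commands [just a sketch; 'tank' & the disk names are placeholders, not my actual devices]:

# striped mirrors: start with 3 pairs, grow later by adding another mirror pair
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0
zpool add tank mirror c1t6d0 c1t7d0

# RAIDZ3: one wide vdev; capacity only grows by replacing every drive with a bigger one
zpool create tank raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0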

So, whether to use ZFS mirrors or RAIDZ, I'll leave the details for the 2nd thread.

?Q. Has anyone done tests comparing the performance/scaling of the underlying hardware & these two types of ZFS RAID [mirror & Z3]?

Although I am leaning towards an Intel M/B + CPU, I am open to suggestions for AMD components if there is a significant price/performance advantage. As I see it currently, both DP & UP have some benefits and negatives:
-DP-
+I can start with 1 CPU now & later add another when I start cascading to more storage chassis
+Won't have to replace the motherboard/CPU to accommodate future growth
+Might be cheaper in the long run, when compared to replacing the whole system [or maybe not]
+3 channel/6 RAM slots
+Intel SDDC/AMD Chipkill
+RAS feature [at least some]
-Socket LGA 1366 only, as there are no LGA 1155 DP M/Bs
-Socket LGA 1366 about to be made irrelevant with Socket LGA 2011/SandyBridge-E next quarter
?Initially more expensive than UP M/B + CPU [not sure about this either]

-UP-
+Cheaper than DP, initially
+Socket LGA 1155/SandyBridge based stuff is newer than LGA 1366/Nehalem based DP systems
-No RAS features whatsoever
-2 channel/4 RAM slots only
-No expandability to accommodate future growth, will have to replace M/B & CPU completely
?Probably will hit an I/O limit even with simple mirror-type ZFS when using 20-24 drives [I highly doubt it can handle RAIDZ3]

+Motherboard must be compatible with Solaris Express or one of the derivatives based off its dead cousin [OpenSol; RIP].
+Motherboard must be compatible with SuperMicro SC846E16-R1200B chassis
+SAS2 6.0 Gbps controller [preferably LSI SAS2008 or better]
+SAS controller must support Initiator/Target mode [as I will only be doing software/ZFS RAID]
+Intel SDDC [or AMD Chipkill if AMD motherboard]
+IPMI 2.0 + IP-KVM with remote ISO mounting capability
+2 Pci-E 2.0 x8/x16 slots [well, the more the better]
+2 Gbit Ethernet, capable of teaming [until I can get 10G Ethernet cards]
?Q. How many disks are supported in I/T mode? [the LSI 1068e & the SAS2008-based LSI 9211-8i support up to 122, so I've heard, but not sure]
?Q. How much bandwidth does the controller have to the M/B, not the ports? [the LSI 1068e has 2000 MB/s over x8 PCIe 1.0/1.1]
?Q. How much real throughput can the controller handle between the ports & the motherboard? [most RAID cards can only do 1 GB/s or less]
?Q. How many IOPS is the SAS controller rated for? [the LSI 1068e was rated for up to 144,000 IOPS, if I remember correctly; see the back-of-the-envelope numbers below]
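
Some rough math on those last three questions [my own back-of-the-envelope numbers, assuming an x8 PCIe 2.0 HBA like the SAS2008 & ~130 MB/s sequential per HD204UI]:
PCIe 2.0 x8 link to the M/B: 8 lanes x ~500 MB/s usable = ~4,000 MB/s
One SAS2 wide port [SFF-8087]: 4 lanes x ~600 MB/s usable = ~2,400 MB/s
24 drives streaming: 24 x ~130 MB/s = ~3,100 MB/s
So with a single-link expander backplane, the wide port, not the PCIe slot, is the likely sequential ceiling.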

Would prefer a SuperMicro M/B, as getting multiple parts from the same vendor would simplify warranty & support, but I'd rather get something better price/performance wise, no matter the manufacturer.

UPS

NOTE1: Absolutely no idea whatsoever. I wanted to get 3-4 decent consumer-grade UPSes & cascade them together, but everyone said not to. Basically I don't want to shell out several K for an enterprise-level UPS; heck, I don't even need something like that.
NOTE2: Currently I have an APC RS1500 LCD. I tried testing it today with my current system to see if it works & it failed; the computer shuts off as soon as I pull the cable from the wall & then constantly powers up and shuts down! My current system is a Q9550/GA-EP45-UD3P + 24" LCD, which draws 200W at idle [135W without the LCD].

+180.00 - Cyber Power 900W LCD UPS [1500VA]

+A few small network appliances & the ZFS storage chassis should be able to run long enough [~10 minutes] at full load for a proper shutdown
+UPS will not turn itself back on or the system until at least 15-20% charged after depletion [so as not to crash system from a subsequent brownout]
+UPS power outage alarm can be turned off [don't want to wake up the whole neighborhood when on battery]

HDD + SSD

NOTE1: 8-10 + 2 spare SAMSUNG HD204UI F4EG 2.0TB drives for now [already bought 9]

I am thinking of getting 3-4 [2 ZIL + 1-2 L2ARC] cheap & small consumer-grade SSDs. These will be MLC based & in the 40-80GB range, all depending on price. The OCZ Solid 3, OCZ Agility 3 & OCZ Vertex 3 seem to have nice read/write & IOPS [both sustained & random] numbers! I haven't decided which one yet, as I don't know much about the current-gen [esp. OCZ] drives. These cheap SSDs don't need to have any SuperCaps as I will have a UPS to go with this build.
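
For reference, attaching them to the pool later is simple enough [a sketch; 'tank' & the device names are placeholders]:

zpool add tank log mirror c2t0d0 c2t1d0    # mirrored ZIL/SLOG devices
zpool add tank cache c2t2d0 c2t3d0         # L2ARC; cache devices can't be mirrored & [per POST #3] aren't encrypted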

?Q. What is the difference between OCZ Solid 3, Agility 3 & Vertex 3 drives?
?Q. Suggestions for any other SSD brand/model for my build that has a 3-5 year warranty? [I'll definitely be needing the longer warranty]

Chassis

NOTE1: I decided on the SuperMicro SC846E16-R1200B. The SC846E26-R1200B has dual SAS 6.0 Gbps ports, but also costs an extra $100-$150. Though I could always use the extra bandwidth when I start cascading other chassis, I don't think the 2 SAS ports can be teamed [like Ethernet ports] by connecting them both to the same controller; I think they're only for failover.

Even with a DP motherboard I don't think I need a 1200W PSU! Any chassis that has a SAS 6.0 Gbps backplane & a 900W PSU would do, not to mention it could have saved $100-$150 on the price. Unfortunately SuperMicro doesn't have any E16/E26 chassis with a 900W PSU.

4U Habey chassis that can hold 20-24 drives are almost nonexistent. Both the Habey & NORCO ones are so barebones that I would have to spend a lot of money on a good quality redundant PSU + HBA cards, which together can easily add another 1K. Not to mention they usually have 4-6 separate backplanes with a separate SAS connector for each; no on-board SAS controller will have enough ports for those.

1585.23 @ wiredzone.com = SuperMicro SC847E16-R1400LPB / part# CSE-847E16-R1400LPB [free shipping]
+3 year warranty!!!
+Redundant PSU [don't have to spend extra money on buying 2 quality PSU's]
+Single port SAS2 6.0 Gbps backplane connection [cabling just got simpler]
+2 extra SAS ports built into the backplane for cascading.
+12 extra hot swap drive bays in the back.
+Only cost 20% extra for an increase of 50% more capacity over the SC846E16-R1200B.
-Expensive! 2 backplanes will run up to 700-800 dollars, about half the price of this chassis!!!
-Working on the 12 drives in the back has got to be tough, esp. if your server cabinet doesn't have a rear door!
?Q. How many chassis can be cascaded together? [manual doesn't mention that]
?Q. Heard that drives pop out easily from this chassis; should I get the front bezel with lock to prevent that?
?Q. Any way to put 4 SSDs total inside the chassis, as I want to use all 24 hot swap bays for drives? [2 can be mounted with the optional drive trays]
?Q. If anyone has other suggestion for chassis, feel free to post it.

1234.79 @ wiredzone.com = SuperMicro SC846E16-R1200B / part# CSE-846E16-R1200BP [free shipping, which usually runs $150-200; yay!!!]
+3 year warranty!!!
+Redundant PSU [don't have to spend extra money on buying 2 quality PSU's]
+Single port SAS2 6.0 Gbps backplane connection [cabling just got simpler]
+2 extra SAS ports built into the backplane for cascading.
-Expensive! Backplane by itself is ~600 dollars, half the price of this chassis!!!
?Q. How many chassis can be cascaded together? [manual doesn't mention that]
?Q. Heard that drives pop out easily from this chassis; should I get the front bezel with lock to prevent that?
?Q. Any way to put 4 SSDs total inside the chassis, as I want to use all 24 hot swap bays for drives? [2 can be mounted with the optional drive trays]
?Q. If anyone has other suggestion for chassis, feel free to post it.

RAM

NOTE1: I'd prefer to get x8 modules, as they're 33%-50% cheaper than x4 modules, as long as I can get SDDC working [well, most new Intel chipsets now work with both types of module, except for a few things.] Mainly I want the demand & patrol scrubbing capability of SDDC. From what I've heard, RAM errors are quite common; the numbers bandied around range from 1 bit error/hour/GB of RAM [~10^-10] to 1 bit error/century/GB of RAM [~10^-17]; a quick conversion follows NOTE2.
NOTE2: Even though the Intel 5500 & 5520 chipsets support x8 [quad ranked] modules, even with SDDC, it seems that SuperMicro M/Bs don't! So I'll have to fork out 2x-3x as much for the x4 [single/dual ranked] modules! Oh, & did I mention Kingston server RAM support is wholly missing on the SM boards too [even though they're more ubiquitous & cheaper than other brands]?
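
Quick conversion of the NOTE1 error-rate range [my arithmetic, taking 1 GB = 8 x 2^30 bits]:
1 error/hour/GB = 1 / [8.6 x 10^9 bit-hours] ~ 1.2 x 10^-10 errors per bit-hour [the pessimistic end]
1 error/century/GB = the above / ~876,000 hours ~ 1.3 x 10^-16 errors per bit-hour [the optimistic 10^-16-10^-17 end]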

Initially I'd like to start off with 24GB, seeing as how cheap RAM is nowadays and of course the M/B permitting.

@newegg.com
389.99 - x4 Kingston 24GB 240-Pin ECC Registered DDR3 1333 KVR1333D3D4R9SK3/24G [3 x 8GB]
259.99 - x4 Kingston 16GB 240-Pin ECC Registered DDR3 1333 KVR1333D3D4R9SK2/16G [2 x 8GB]
+152.99 - x4 Kingston 8GB 240-Pin ECC Registered DDR3 1333 KVR1333D3D4R9S/8GHB
+149.99 - x4 SAMSUNG 8GB 240-Pin ECC Registered DDR3 1333 M393B1K70CH0-CH9
+149.99 - x4 SAMSUNG 8GB 240-Pin ECC Registered DDR3 1333 M393B1K70CH0-YH9 /1.35v
+124.99 - x4 Kingston 8GB 240-Pin ECC Registered DDR3 1333 KVR1333D3D4R9S/8G
+89.99 - x8 Kingston 8GB 240-Pin ECC Registered DDR3 1333 KVR1333D3Q8R9S/8G

@fticomputer.com +ship
+131.48 - x4 Hynix 8GB 240-Pin ECC Registered DDR3 1333 HMT31GR7BFR4C-H9
+131.48 - x4 Hynix 8GB 240-Pin ECC Registered DDR3 1333 HMT31GR7BFR4A-H9 /1.35v

++79.74 - x8 Kingston 8GB 240-Pin ECC Registered DDR3 1333 KVR1333D3LQ8R9S/8G /1.35v
http://www.buy.com/prod/kingston-kvrt.aspx?Item=N221238858.html

+RAM needs to be compatible with motherboard.
?Q. Should I look to get LV/1.35v DDR3 or stick to the regular stuff? [I've heard LV DIMMs can't be used in dense configurations, i.e. fewer modules per channel]

Parts & Cables
@ wiredzone.com
42.24 - 2 x [$21.12] MCP-220-84701-0N Supermicro 3.5"/2x2.5" System hard drive tray
61.55 - 1 x 4U, Front Bezel Cover, for SC846, SC848
@ newegg.com
07.98 - 2 x [$3.99] Rosewill 12" PWM Splitter Model RCW-FPS-401
01.99 - 1 x StarTech 8" EPS 8 Pin Power Extension Cable Model EPS8EXT
09.98 - 2 x [4.99] NORCO C-SFF8087-4S Discrete to SFF-8087 (Reverse breakout) Cable
19.99 - 1 x NORCO C-SFF8087-D SFF-8087 to SFF-8087 Internal Multilane SAS Cable - OEM
09.68 - Shipping
- 1 or 2 heatsink brackets, in case the M/B doesn't come with one


P.S. Please forgive any typos or omissions, it's very late here.

20110816_1733: Link to 2nd part of this thread
20110814_1710: added UPS section; edited list of RAM, M/B models; also added [some] officially supported make/model of RAM for each M/B
20110807_1642: typos corrected
 
1234.79 @ = SuperMicro SC846E16-R1200B

Why not do the SC847 (double-sided drives)...for 30% more you can have 36 (45 if you build your head end in an external case) drive capacity. That's what I'm using and have been extremely happy (except the lack of SAS multiplexing..but pretty sure the expander in the SC846 is implemented the same way).
 
It's clear you've spent a lot of time thinking about this. I am a ZFS n00b, and only recently placed an order for the hardware required for an all-in-one build.

I do wonder, since you've spent so much time and are going to spend so much money, why you are choosing OCZ SSDs? I am under the impression that their failure rate is much higher than Intel's. An SSD failing in this build with data not yet written to disk is one of the few ways you can actually lose data. I know - you are going to have the machine on battery backup - but this does not solve the problem of the SSD failing.

Also, you make some claims about ZFS RAIDZ performance. I understand that with ZFS there are many data calculations required of the CPU to error-correct. I also understand that with any level of RAIDZ, you introduce additional parity calculations. However, an SB quad-core Xeon seems like it should be able to handle RAIDZ calculations for a few simultaneous users without a problem.

Are you saying that this is a problem only with large disk sets and RAIDZ3, where parity calculations are larger? Or are you saying this is due to the extra CPU load from encryption?

I have not heard anyone else suggest that a modern quad-core Xeon is incapable of handling the FS load from a medium size RAIDZ array. It seems to me like the disks would be overwhelmed long before the CPU would be.

As I said, I am a ZFS n00b - I'm just trying to learn more here so if you please, enlighten me!

edit: I'm going to quote the typical response I hear when asking about CPU requirements for ZFS:

"For light usage (I'm assuming you want to use this kind of thing for at
home), it should be fine. You should have no problems serving via CIFS
or NFS or even iSCSI. RAM is a bit light, but if you are not doing
anything else which requires much RAM, you should be OK.

Without Dedup or Compression turned on, ZFS isn't that much of a CPU
pig. Particularly for serving small numbers of disks. RaidZ[123] will
consume slightly more CPU load that mirroring, but not so much as I'd
notice in a config such as yours.

Remember with RaidZ[123], you only get the IOPS equivalent of a single
drive (about 100/s for typical Sata drives), and random I/O performance
(throughput) of about the same (i.e. as one disk). However, you should
get streaming read/write speeds close to that of the number of data
disks (i.e. N-1 for RaidZ1, N-2 for RaidZ2, etc.). So, for things like
being a home media server, you'll easily keep up with a Gbit Ethernet.
For doing things like compiling over NFS/CIFS, the disks are going to be
your bottleneck." -Erik Trimble
 
Why not do the SC847 (double-sided drives)...for 30% more you can have 36 (45 if you build your head end in an external case) drive capacity. That's what I'm using and have been extremely happy (except the lack of SAS multiplexing..but pretty sure the expander in the SC846 is implemented the same way).
You're right, the SC847E16-R1400LPB is only 20% more! Thanks for the suggestion. The only catch would be having to get all LP PCIe expansion cards, which can be expensive. Yes, the 2 SAS connectors cannot be multiplexed/teamed; they're only used for failover. It would be awesome if that could be done, so that you would get twice the theoretical bandwidth when you start cascading multiple chassis.

Working on drives in the back must be quite hard too. You'd probably need a server cabinet with rear doors to be able to access the rear drives more easily. What kind of server cabinet/rack do you have these in? What would you recommend?

B.T.W. how loud are the chassis fans? I've heard people describe these SM chassis as extremely loud; if that's indeed true, what can you do to dampen/reduce the noise?

Did you ever start a thread when you built your system with that chassis? It would be great to read up on that & get some more info as to how you specced the hardware for your build. It might just help me spec out my system.

Great post!

I believe I read that data in the SSDs will not be encrypted... yep. At least on the L2ARC. http://hardforum.com/showthread.php?t=1587905
The first bump in the road & surely not the last till this setup is completed and up-n-running. Thank you for the info; somehow that little detail slipped past my 'reading' fingers.

It's clear you've spent a lot of time thinking about this. I am a ZFS n00b, and only recently placed an order for the hardware required for an all-in-one build.

I do wonder since you've spent so much time and are going to spend so much money, why you are choosing OCZ SSDs? I am under the impression that their failure rate is much higher than Intel. An SSD failing in this build with data not yet written to disk is one of the few ways you can actually lose data. I know - you are going to have the machine on battery backup - but this does not solve the problem of the SSD failing.

Also, you make some claims about ZFS RAIDZ performance. I understand that with ZFS there are many data calculations required of the CPU to error-correct. I also understand that with any level of RAIDZ, you introduce additional parity calculations. However, a SB quad core xeon seems like it should be able to handle RAIDZ calculations for a few simultaneous users without a problem.

Are you saying that this is a problem only with large disk sets and RAIDZ3, where parity calculations are larger? Or are you saying this is due to the extra CPU load from encryption?

I have not heard anyone else suggest that a modern quad core xeon is incapable of handling the FS load from a medium size RAIDZ array. It seems to me like the disks would be overwhelmed long before the CPU would be.

As I said, I am a ZFS n00b - I'm just trying to learn more here so if you please, enlighten me!

edit: I'm going to quote the typical response I hear when asking about CPU requirements for ZFS:

"For light usage (I'm assuming you want to use this kind of thing for at
home), it should be fine. You should have no problems serving via CIFS
or NFS or even iSCSI. RAM is a bit light, but if you are not doing
anything else which requires much RAM, you should be OK.

Without Dedup or Compression turned on, ZFS isn't that much of a CPU
pig. Particularly for serving small numbers of disks. RaidZ[123] will
consume slightly more CPU load that mirroring, but not so much as I'd
notice in a config such as yours.

Remember with RaidZ[123], you only get the IOPS equivalent of a single
drive (about 100/s for typical Sata drives), and random I/O performance
(throughput) of about the same (i.e. as one disk). However, you should
get streaming read/write speeds close to that of the number of data
disks (i.e. N-1 for RaidZ1, N-2 for RaidZ2, etc.). So, for things like
being a home media server, you'll easily keep up with a Gbit Ethernet.
For doing things like compiling over NFS/CIFS, the disks are going to be
your bottleneck." -Erik Trimble
When it comes to H|Forum topics, I think we're all pretty much n00bs here. Some, like me, more so than others.

It's not really so much money; the budget is 2800-3000, that's it! Half of that is going into that SM chassis [which is why I'd have preferred a 900W, several hundred dollars cheaper one with a SAS2 backplane.]

It's not specifically OCZ SSDs, but rather consumer-grade MLC-based SSDs. Why? Because they're cheaper & generally have larger capacity than enterprise [or Intel] drives; in most cases these [the 3rd-gen SSDs] are way faster than the most affordable enterprise [or Intel] drives too.

As for the potential of a ZIL SSD failing, I mentioned "I am thinking of getting 3-4 [2 ZIL + 1-2 L2ARC]"; basically the ZIL SSDs will be mirrored, to prevent data loss from a single ZIL failure/corruption!

An L2ARC SSD needs fast read speed & can get by with so-so write speed, which is the opposite of an SSD used for ZIL, which needs fast write speed & can get by with so-so read speed. Most affordable Intel SSDs don't stack up in either category. BTW the 3 OCZ SSD models I've listed are all SATA3/6Gbps, capable of reads & writes around ~75/80-550 MB/s & in the 20k-80k IOPS range; all this for a mere $110-$130 [60GB models]!

First of all, encryption, compression & dedup are all optional, as long as whatever system I build can handle them. Second, I did mention upgrading to 10GigE if/when it becomes affordable. Third, I also mentioned that I will be expanding capacity by:
1. Adding more drives to fill out the chassis
2. Replacing current drives with larger capacity ones [the warranty on these drives expires in about 3 years & it wouldn't make sense to keep them around in a production environment; you can be sure larger capacity drives will be the norm when I replace them in 3 years' time. See the sketch after this list.]
3. Cascading to more chassis [one of the main reasons for forking up so much money for a single backplane & SAS2 capable SM chassis]
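
Roughly what 2. & 3. look like on the ZFS side [a sketch; pool & device names are placeholders]:

# 2. swap each drive in a vdev for a bigger one; once the last one finishes resilvering, the vdev grows
zpool set autoexpand=on tank
zpool replace tank c1t0d0 c3t0d0    # repeat for every drive in the vdev, waiting out each resilver

# 3. drives in a cascaded chassis just show up as more devices; add them as new vdevs
zpool add tank mirror c4t0d0 c4t1d0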

When I do any of the things listed above, you can be sure any system specced for my current 10-12 drive ZFS mirror/RAIDZ will become the bottleneck.

As mentioned in the OP, I've seen a ZFS striped mirror of only 20-24 drives barely able to hit half the bandwidth of a 10G InfiniBand link. I think RAIDZ/Z2/Z3 will probably be worse because of all the extra parity calculations that need to be done. Not to mention I'm not even sure a 24-drive RAIDZ3 is doable or even recommended.

Don't forget that not only parity & checksumming, but everything else, from dedup, compression & encryption to all other filesystem-related work, has to be done on the CPU. Suddenly you start to realize just how much more processing-intensive ZFS is & why even simple striped mirroring can push quad cores to the limit.

As you mentioned encryption: only if I go with an LGA 1155/SB-based CPU with AES-NI will the CPU not be taxed much by encryption. Go with LGA 1156/1366 [Nehalem] without any dedicated AES hardware & everything again gets put on the CPU.

Nonetheless, I would actually like to see some benchmarks [rather than several-year-old posts] performed on similarly sized arrays [20-24 disks] with striped mirrors & RAIDZ2/3.

If you come across any such posts/threads, please post them here or PM me. Similarly if I come across any such post/thread, I will definitely let you know.
 
I've been checking out some SSDs for my current build and came across this Wintec FileMate 33121301 drive. Not sure if it's the best for what you're trying to achieve, but it's probably the cheapest ZIL drive around. Two at $170 shipped isn't bad. I may give them a test run when/if the prices drop. (I am out of budget atm! :p)

Anyone try this drive?
http://www.newegg.com/Product/Product.aspx?Item=N82E16820161375
 
Thx for the link dizzy! I just purchased one from glcomp along with some 8087 cables from superbiz. :D
 
Thanks for the thorough write up. Let us know some benchmarks when you get to it :), perhaps with performance issue with and without the SSD caches.

My current ZFS setup [that I'm experimenting with] gets so-so performance [50MB/s read or write] over CIFS with 5 7200RPM Hitachis, but no cache yet. Curious how much the cache will help you [and maybe help me].

I noticed Intel sells little 20GB SLC SSD's for $100, wonder if they would make a good ZIL drive in R1.
 
The non scientific way of benchmarking...

dd if=/dev/zero of=temp1.txt bs=1024k count=4096
4096+0 records in
4096+0 records out
4294967296 bytes transferred in 16.264823 secs (264064801 bytes/sec)

dd if=temp1.txt of=/dev/null bs=1024k count=4096
4096+0 records in
4096+0 records out
4294967296 bytes transferred in 13.657068 secs (314486782 bytes/sec)

SMB performance goes above 70Mbyte/s, will probably be a lot better with Samba 3.6 since it introduces SMB2.

Specs:
Mobo: Intel DG45ID
CPU: Xeon E3110
RAM: 8Gb
HBA: IBM M1015 (flashed)
Additional controllers: 2 * ASMedia ASM1061 2-port PCIe (not currently used)
HDDs: 4 * Hitachi 7K1000.B 1TB (RAID-Z)
OS: FreeBSD 9.0-CURRENT #1: Tue Jun 7 00:01:33 CEST 2011

//Danne
 
NOTE: The main sticking point is whether to go for a UP or DP system. Not sure if I'm going to be doing mirror or RAIDZ3. Initially I wanted to do mirrors as they're simple enough not stress the CPU too much [not exactly, as I later learned.] I have seen single Quad core Xeon's hit the dirt when accessing a 20-24 drive mirror [so forget RADZ anything,] as ZFS still has to do checksumming for both reads/writes & a bunch of other fancy stuff that even a hardware RAID card doesn't. I even plan to encrypt the entire ZFS pool.

If you are that concerned with CPU, stick with the 1366 DP chipset. Then you can always drop the second Xeon in there. I would perhaps stick with the lower-TDP chips (maybe 80 or 95W rather than the higher wattage ones). ZFS really doesn't utilize that much CPU in the overall scheme of things, so you should be fine with one multi-core CPU. Especially if you go all-in-one, because you're essentially limited to two vCPUs for OI anyhow via ESXi for stability reasons.

So, whether to use ZFS as mirror or RAID, I'll leave the details for the 2nd thread.

What is your IOPS need? If you're not IOPS bound, I'd stick with RAIDZ2. Believe it or not, RAIDZ2 compares with mirrors in terms of redundancy; Z3 beats both. For home use, I don't believe you're going to need the IOPS of mirrors, so I would stick with stripes of RAIDZ2. Also, do NOT stripe 24 drives together in a single RAIDZ3. Use 3 stripes of 8 drives in Z2 or 4 stripes of 6 drives in Z2. That will give you the performance advantage of multiple vdevs (like striping 3 or 4 drives together). It will also allow you to expand the array in 6 or 8 drive increments.
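
A sketch of the 4 x 6-drive Z2 layout [disk names are placeholders], built up in 6-drive increments:

zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
zpool add tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
# ...two more 6-drive raidz2 vdevs the same way; each vdev nets 4 drives of data + 2 of parity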


?Q. Has anyone done tests comparing the performance/scaling of the underlying hardware & these two types of ZFS RAID [mirror & Z3]?

Yes, there are numerous benchmarks on the internet. Again, forget about benchmarks. What are you using it for and why do you need the IOPS?


Although I am leaning towards Intel M/B + CPU, I am open to suggestions for AMD components if there is significant price/performance advantage. As I see it currently, both DP/UP has some benefits and negatives:
-DP-
+I can start with 1 CPU now & later add another when I start cascading to more storage chassis
+Won't have to replace motherboard/CPU to accomodate future growth
+Might be cheaper in the long run, when compared to replacing the whole system [or maybe not]
+3 channel/6 RAM slots
+Intel SDDC/AMD Chipkill
+RAS feature [at least some]
-Socket LGA 1366 only as no LGA 1155 DP M/B
-Socket LGA 1366 about to be made irrelevant with Socket LGA 2011/SandyBridge-E next quarter
?Initially more expensive than UP M/B + CPU [not sure about this either]

-UP-
+Cheaper than DP, initially
+Socket LGA 1155/SandyBridge based stuff is newer than LGA 1366/Nehalem based DP systems
-No RAS features whatsoever
-2 channel/4 RAM slots only
-No expandability to accommodate future growth, will have to replace M/B & CPU completely
?Probably will hit I/O limit even with simple mirror type ZFS when using 20-24 drives [i highly doubt can handle RAIDZ3]

Just get the 1366 based chipsets if you want the option of 2nd processor. However, like I said before - you are limited to 2 vCPU's if you're doing all-in-one on the same machine.

+Motherboard must be compatible with Solaris Express or one of the derivatives based off its dead cousin [OpenSol; RIP].
+Motherboard must be compatible with SuperMicro SC846E16-R1200B chassis
+SAS2 6.0 Gbps controller [preferably LSI SAS2008 or better]
+SAS controller must support Initiator/Target mode [as I will only be doing software/ZFS RAID]
+Intel SDDC [or AMD Chipkill if AMD motherboard]
+IPMI 2.0 + IP-KVM with remote ISO mounting capability
+2 Pci-E 2.0 x8/x16 slots [well, the more the better]
+2 Gbit Ethernet, capable of teaming [until I can get 10G Ethernet cards]
?Q. How many disk supported in I/T mode? [LSI 1068e & LSI 9211-8i based on SAS2008 supports up to 122, so I've heard, but not sure]
?Q. How much bandwidth does the controller have to the MB, not the ports [LSI 1068e has a 2000 MB/s based on x8 PCIe 1.0/1.1]
?Q. How much real throughput can the controller handle, between the ports & the motherboard [most RAID cards can only do 1 GB/s or less]
?Q. How many IOPS is the SAS controller rated for? [LSI 1068e was rated for up to 144,000 IOPS, if I remember correctly]

If you want all of those, just get the SM server motherboards. the ones with LSI2008 can easily be flashed to IT mode. There was a small glitch where if they were being used on a LSI SAS backplane, they reported double drives but that was fixed in a firmware upgrade.

NOTE: 8-10 + 2 spare SAMSUNG HD204UI F4EG 2.0TB drives for now [already bought 9]

I would prefer the Enterprise Seagate constellation drives but they are 30% more.

I am thinking of getting 3-4 [2 ZIL + 1-2 L2ARC] cheap & small consumer grade SSD's. These will be MLC based & in the 40-80GB range, all depending on the price. OCZ Solid 3, OCZ Agility 3 & OCZ Vertex 3 seems to have nice read/write & IOPS [both sustained & random] numbers! Haven't decided which one yet, as I don't know much about the current gen [esp. OCZ model] drives. These cheap SSD's don't need to have any SuperCap's as I will have a PSU to go with this build.

Do NOT use a cheap MLC as a ZIL log. It's not the supercap issue (although that helps). It's that the hammering that the ZIL log does can wipe out a low-end SSD drive in a couple months.

1585.23 @ wiredzone.com = SuperMicro SC847E16-R1400LPB / part# CSE-847E16-R1400LPB [free shipping]
+3 year warranty!!!
+Redundant PSU [don't have to spend extra money on buying 2 quality PSU's]
+Single port SAS2 6.0 Gbps backplane connection [cabling just got simpler]
+2 extra SAS ports built into the backplane for cascading.
+12 extra hot swap drive bays in the back.
+Only cost 20% extra for an increase of 50% more capacity over the SC846E16-R1200B.
-Expensive! 2 backplanes will run up to 700-800 dollars, about half the price of this chassis!!!
-Working on the 12 drives in the back got to be tough, esp if your server cabinet doesn't have a rear door!
?Q. How many chassis can be cascaded together? [manual doesn't mention that]
?Q. Heard that drives pop out easily from this chassis, should I get the front bazel with lock to prevent that?
?Q. Anyway to put 4 total SSD's inside the chassis, as I want to use all 24 hot swap bays for drives [2 can mounted with optional drive trays]
?Q. If anyone has other suggestion for chassis, feel free to post it.

The 847 is a good chassis. Forget the warranty. A) have yet to find a chassis/backplane go bad (power supply does) B) dealing with SM is ok, but still a pain.

You definitely don't need/want the E26 for home use. Most businesses don't even dual-connect. You can use the 12 drives in the back for system drives, log drives, etc. Don't forget you can attach 4 2.5" drives or 2 3.5" drives in the chassis.

Doors do not open randomly. Only when you move/touch the chassis. Not a problem.
You can easily connect several chassis together cascaded. A better option would be to drop another 9211e card and hook additional chassis to additional SAS 6g busses.



Initially I'd like to start off with 24GB, seeing as how cheap RAM is nowadays and of course the M/B permitting.

ECC RAM is good. Not cheap, but better to run. ZFS will use as little as 4GB (no dedup, etc) to as much RAM as you will throw at it.

One note: why in the world do you want to encrypt the data? There is little functional use in encrypting it unless you have a reason to do so. Besides, you aren't running things so locked down that someone who came in and got access to your machine wouldn't also have access to the data store. You would have to have everything so encrypted & password protected that the encryption would actually be beneficial; I doubt that's the case. It would also mean anything accessing said content has to be able to handle whatever encryption you have in place. Not a good idea...
 
OP edited: added UPS section; edited list of RAM, M/B models; also added [some] officially supported make/model of RAM for each M/B

Here's packetboy's build

part 1
http://hardforum.com/showthread.php?t=1508468

can't find part 2

but here's part 3
http://hardforum.com/showthread.php?t=1539643

Search results in case you're interested in other threads he started.

http://hardforum.com/search.php?searchid=17770683

Thanks for the links; I didn't want to search through hundreds of packetboy's posts.

I've been checking out some ssd's for my current build and came across this Wintec FileMate 33121301 drive. Not sure if its the best for what you're trying to achieve but it's probably the cheapest zil drive around. Two at $170 shipped isn't bad. I may give them a test run when/if the prices drop. (I am out of budget atm! :p)

Anyone try this drive?
http://www.newegg.com/Product/Product.aspx?Item=N82E16820161375

Sustained write & random IOPS on that drive suck, big time. Also, I'm going to go with much cheaper but larger capacity consumer MLC SSDs with SATA 6.0 ports. But if you're looking for enterprise-quality SLC drives for ZIL: a few posts above, hotcrandel posted about a new 20GB Intel SLC SSD, which is only $10 more than the Wintec 8GB model you posted about. You might want to look at those.

I'm not sure how well this will work in Solaris, but it might be worth having a look at.

Motherboard (be sure to grab a CPU with integrated graphics)
http://www.newegg.com/Product/Product.aspx?Item=N82E16813131725

Add some controllers (I'm not sure if you can have several but I don't see why it wouldn't work) and flash them.
http://www.glcomp.com/ibm-system-x-serveraid-m1015-sas-sata-controller
http://hardforum.com/showthread.php?t=1612482

//Danne

Thanks for the links; I'm looking to save the several K I'd have to spend on HBA cards by going the SAS-controller-embedded-on-the-M/B route. Also, consumer M/Bs don't have ECC or RDIMM support [or both]. Without RDIMM support, higher RAM density is impossible.

Thanks for the thorough write up. Let us know some benchmarks when you get to it :), perhaps with performance issue with and without the SSD caches.

My current ZFS setup [that i'm experimenting with] has so-so [50MB/s read or write] over cifs with 5 7200RPM hitachi's, but no cache yet. Curious to how much the cache will help you [and maybe help me].

I noticed Intel sells little 20GB SLC SSD's for $100, wonder if they would make a good ZIL drive in R1.

You're welcome. I figured since I'm going to be asking others for help/opinions, why not make it detailed & comprehensive so others looking for a similar setup in the future can use this post as a guide.

Even though I'm not getting any SSDs currently [running almost over budget], once everything is set up I will be running benchmarks at various RAIDZ levels to gauge my hardware/software limitations & do them again once I add SSDs. When I do that I will definitely add benchmarks [which will be posted in the 2nd part/thread; I will link that post at the top of the OP here]. I will be testing both in-system & over-network sustained & random reads/writes [both IOPS and transfer speed].
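
[For the in-system sequential numbers it will probably be something along the lines of the dd test Danne posted, with a file larger than RAM so the ARC doesn't skew the results, plus zpool iostat; pool/path names below are placeholders:]

dd if=/dev/zero of=/tank/test/big.bin bs=1024k count=65536    # 64 GiB sequential write
dd if=/tank/test/big.bin of=/dev/null bs=1024k                # sequential read back
zpool iostat -v tank 5                                        # per-vdev throughput & IOPS, run from another shell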

Yes, those Intel drives came out recently, though I'm not sure what kind of program/erase cycle rating Intel's 34nm SLC has [definitely greater than 10K]. However, the small capacity of these drives means you'll run through them quite a lot faster. Moreover, these small SLC drives [besides the high price] have very low write speed & IOPS. Nonetheless, a very good suggestion [price/performance] if someone is looking for SLC SSDs for ZIL.

The non scientific way of benchmarking...

dd if=/dev/zero of=temp1.txt bs=1024k count=4096
4096+0 records in
4096+0 records out
4294967296 bytes transferred in 16.264823 secs (264064801 bytes/sec)

dd if=temp1.txt of=/dev/null bs=1024k count=4096
4096+0 records in
4096+0 records out
4294967296 bytes transferred in 13.657068 secs (314486782 bytes/sec)

SMB performance goes above 70Mbyte/s, will probably be a lot better with Samba 3.6 since it introduces SMB2.

Specs:
Mobo: Intel DG45ID
CPU: Xeon E3110
RAM: 8Gb
HBA: IBM M1015 (flashed)
Additional controllers: 2 * ASMedia ASM1061 2-port PCIe (not currently used)
HDDs: 4 * Hitachi 7K1000.B 1TB (RAID-Z)
OS: FreeBSD 9.0-CURRENT #1: Tue Jun 7 00:01:33 CEST 2011

//Danne

Is there any utility for Sol or any of the OSol derivatives that can measure random/sustained read/write IOPS with different size & queue depth [similar to ATTO, HD Tune & AS SSD] along with CPU usage?

If you are that concerned with CPU, stick with the 1366 DP chipset. Then you can always drop the second xeon chip in there. I would perhaps stick with the lower tdW chips (maybe 80 or 95W chips rather than the higher wattage ones). ZFS really doesn't utilize that much cpu in the overall scheme of things, so you should be fine with one multi-core cpu. Especially if you go all-in-one because you're essentially limited to two cores to OI anyhow via ESXi for stability reasons.

What is your IOPS need ? If you're not IOPS bound, I'd stick with RaidZ2. Believe it or not, raidz2 compares with mirrors in terms of redundancy. Z3 beats both. For home use, I don't believe you're going to need the IOPS of mirrors. So, I would stick with stripes of Raidz2. Also, do NOT stripe 24 drives together in a raid z3. Use 3 stripes of 8 drives in z2 or 4 stripes of 6 drives in z2. That will give you the performance advantage of multiple vdevs (like striping 3 or 4 drives together). It will also allow you to expand the array in 6 or 8 drive increments.

Yes, there are numerous benchmarks on the internet. Again, forget about benchmarks. What are you using it for and why do you need the iops ?

Just get the 1366 based chipsets if you want the option of 2nd processor. However, like I said before - you are limited to 2 vCPU's if you're doing all-in-one on the same machine.
Kinda figured as much over the past week of reading & following up on a lot of M/B, chipset & CPU specifications. No, I'm not going to be doing an all-in-one, because I will be expanding this storage solution by daisy chaining additional chassis on an as-needed basis, in which case I might very well become limited by CPU/IOPS & SAS2 link bandwidth. Also, this will not only hold my personal files [media or otherwise]; I will use it to store & serve VM images [the VMs will run on separate machines, however].

As mentioned several times before, I've seen 20-drive mirrors [of just 20 GB] hit the CPU/IOPS barrier on quad-core Xeons with SSDs for both L2ARC & ZIL; using a single 10 Gb link, that machine could barely use 60% of the available bandwidth, & that's without even using dedup or encryption. Something I'd like to avoid.

Since I don't have any good choice of M/B & CPU combo right now, I'll wait for the newer LGA 2011 M/Bs [the SM sales rep told me their new LGA 2011 boards will be released in several weeks, possibly as early as the end of this month]. So for now I'm going to dismantle my personal computer & put it into the SM chassis to run tests. My current system is composed of a GA-EP45-UD3P M/B, Q9550 CPU & 8GB of DDR2 1066 RAM [the 16GB of OCZ I bought initially for this board didn't work, so I had to return it.]

When I have expanded to multiple chassis I'd like the whole system to sustain performance for 2x10 Gbps Ethernet controllers [minimum of one 10 Gbps connection]. Currently I want the performance of 4x1.0 Gbps [or at least two 1.0 Gbps] Ethernet connections.

I haven't decided on a specific RAID level yet. I thought of mirrors as they're easier to expand, by adding as little as a 2-drive mirror vdev to the pool. The larger the Z2 array [in number of drives], the more it falls off on the reliability side compared to a mirror. Still, I would love to maximize the available storage space compared to a mirror; after all, it makes no sense to spend almost 4k on drives only to lose half of it to mirroring.

Doing 6-drive sets of RAIDZ2 [a 5-drive Z2 vdev + 1 spare each] is going to cost me 18 drives for parity/spares, which is as much as a 36-drive mirror setup without any of the mirror's benefits [namely easier expansion & much better performance]! 36 drives total: 6 sets of Z2 x 5 drives [12 of which are parity] + 6 spares [1 per Z2 vdev] = 18 drives for data & 18 for parity/spares. Or, if I skip the spare drives, I can do 6x 6-drive Z2 & lose only 1/3 to parity. Which is why I wanted to do RAIDZ3, but doing that on 28 drives [2 sets of z3 with 14 data + 3 parity + 1 spare drive] doesn't give me an even number when doing the 128 KiB / # of drives calculation.
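
To spell out that last calculation [the usual rule of thumb with ZFS's default 128 KiB recordsize, which gets split across the data drives of a RAIDZ vdev]:
128 KiB / 16 data drives = 8 KiB per disk [a clean power of two, e.g. a 19-drive Z3 vdev]
128 KiB / 14 data drives ~ 9.14 KiB per disk [doesn't divide evenly, hence the problem]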

Actually I haven't found any benchmarks with this particular [846/847] SM chassis, 24-36 drives & ZFS. Since you mentioned there are plenty, maybe I should look again.

If you want all of those, just get the SM server motherboards. the ones with LSI2008 can easily be flashed to IT mode. There was a small glitch where if they were being used on a LSI SAS backplane, they reported double drives but that was fixed in a firmware upgrade.
Any idea what kind of IOPS the LSI2008 is rated for? How about how many drives it supports in I/T mode? This will basically tell me how many chassis I will be able to daisy chain together.

I would prefer the Enterprise Seagate constellation drives but they are 30% more.
I already have 12 SAMSUNG drives, so I don't want to mix & match different make/model as that might have unforeseen consequences.

Do NOT use a cheap MLC as a ZIL log. It's not the supercap issue (although that helps). It's that the hammering that the ZIL log does can wipe out a low-end SSD drive in a couple months.
Which is why a large enough MLC drive with a good controller [so it does wear leveling properly] & 3-5 year warranty will be used.
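
Back-of-the-envelope on the wear concern [my assumptions: ~3,000 P/E cycles for current consumer MLC & a controller that wear-levels properly]:
60 GB x ~3,000 cycles = ~180 TB of total writes
~100 GB/day of synchronous writes -> roughly 5 years
~1 TB/day of sustained heavy writes -> roughly 6 months, which is where the 'couple of months' warning comes from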

The 847 is a good chassis. Forget the warranty. A) have yet to find a chassis/backplane go bad (power supply does) B) dealing with SM is ok, but still a pain.
A) These 1400W PSUs are going for as much as $220 + shipping, each!
B) Figures, even with the corp email it took them 4 days to get back to me.

Do you know if this particular PSU from the SM 847 chassis is Active PFC? If it is, it won't work with the APC RS1500 UPS unit I currently have.

You definitely don't need/want the E26 for home use. Most businesses don't even dual-connect. You can use the 12 drives in the back for system drives, log drives, etc. Don't forget you can attach 4 2.5" drives or 2 3.5" drives in the chassis.

Doors do not open randomly. Only when you move/touch the chassis. Not a problem.
You can easily connect several chassis together cascaded. A better option would be to drop another 9211e card and hook additional chassis to additional SAS 6g busses.

ECC ram is good. Not cheap, but better to run. ZFS will use as little as 4gb (no dedup, etc) to as much ram as you will throw at it.
No, I wanted to know if someone was able to use the second connection to effectively double the available bandwidth, instead of using it just for failover. Anyway, since I ordered the SC847 chassis, I can have the 2nd group of 4 SAS connectors from the M/B controller hooked up directly to the rear backplane, instead of daisy-chaining it from the front backplane. That will mitigate bandwidth problems somewhat down the road when I expand to more chassis, although it will double the number of internal cables. Oh well, you can't win everything I guess.

Since I don't have a server cabinet with a lockable door yet, I ordered the front bezel. I don't want someone walking into the room, going "ohhh, shiny blinky lights; I wanna touch it" & popping some drives out.

Adding HBA cards will end up costing extra, along with the new chassis & drives, if/when I were to add more. So I would rather keep the initial & later expansion costs within whatever budget I'll end up with.

One note, why in the world do you want to encrypt the data ? There is little functional use to encrypt the data unless you have a reason to do so. Besides, you aren't running things secure that if someone came in and got access to your machine that had access to the data store. You would have to have things so encrypted, password protected etc that encryption would be beneficial. Doubt thats the case. That would also mean anything accessing said content has to be able to handle whatever encryption you have in place. Not a good idea. . . .
Since I'm building this & will have the storage space, I will be storing sensitive customer-related information. Since I don't want anyone besides me & whoever needs this data to have access to it, encryption can act as an access-control mechanism, on top of the security itself.

I am still not entirely clear on how ZFS handles encryption or, to be more precise, what happens to the encrypted data once it leaves the FS/computer it's on. But I will be delving into this topic in more detail in the next thread, which will be the 2nd part to this one. [This thread was mainly about hardware & estimating requirements before buying. Part two will be about the software/OS, RAID levels & benchmarks.]
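
For what it's worth, in Solaris 11 Express encryption is a per-dataset property that can only be set at creation time, so the setup would look something like this [a sketch; dataset names are placeholders]:

zfs create -o encryption=on tank/secure    # prompts for a passphrase by default [keysource=passphrase,prompt]
zfs get encryption,keysource tank/secure   # verify

Note that ZFS decrypts on read, so anything served out over CIFS/NFS leaves the box as plaintext; the encryption only protects data at rest.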

As you seem to have experience with ZFS systems, I hope you will post in the other thread too. Thank you for the help.
 