Homebrew ZFS SAN project

The Spyder

A few months ago I started looking into storage solutions for a virtualization project at work. After much research and speaking with several very knowledgeable people, I decided it was worth the initial investment to put together a test rig as a proof of concept. I was very impressed with ZFS and what it had to offer. Gea's napp-it made it seem like a piece of cake to implement, and I worked with a Sun admin who swears by it. I ordered similar hardware for myself to replace my aging WHS box. Due to some delays, my home gear got here first, and I built the following:

Supermicro X8SIL-F-O
4x 4GB Kingston DDR3 unbuffered ECC
Intel Xeon X3430
IBM ServeRAID BR10i
1x Intel Pro/1000 PT dual-port NIC
10x Hitachi 2TB CoolSpin drives
1x Crucial C300 64GB cache drive
1x 640GB WD Blue main storage drive
Supermicro 5x 3.5" hot-swap drive chassis
OCZ 600W power supply

[photo]


So far (knock on wood) this has been running flawlessly with OpenSolaris and napp-it under vSphere 5 as my all-in-one host. It is running 3 VMs plus the OpenSolaris guest without breaking a sweat.


Back to the proof-of-concept machine. I located a 34-bay chassis with hot-swap power supplies at a local e-waste recycler and decided it fit the bill. Sadly, this would come back to bite me later. The initial config was as follows:

Supermicro X8DAH+-F-O
16GB Kingston DDR3 ECC
Xeon E5420 2.5GHz quad-core
2x LSI SAS9201-16i controllers
10x 300GB 15K SAS hard drives
10x 1TB WD RE3 hard drives
2x quad-port Intel NICs
2x dual-port Intel NICs

After weeks of waiting, everything finally showed up and I got the system assembled.
20x WD RE3 1TB:
[photo]

Nothing like power tools to help with removing and reinstalling 20 drive cages...
[photo]

16 with 4 hot spares :)
[photo]

[photo]

[photo]

[photo: network adapters]

[photo: hard disks]

Now, that 34-bay case biting me? Well... let's just say I should have bought a Supermicro 900b. I have run into problem after problem with it. First, the motherboard would not fit; it hits one of the exhaust fan brackets. Solution: remove the mounting bracket. Second, three of the motherboard standoffs do not line up. Solution: a grinder and plastic standoffs. Third, it required custom SFF-8087 to SFF-8470 cables at $29 x 8... Fourth, the two bottom trays for the system drives have the wrong drive sleds. Solution: drill new holes in the sleds. Fifth, I did not like the cooling setup, so a custom 3x 120mm fan bracket has been made and is waiting to be installed. The final blow came today, and it was completely my fault: it is SATA only, so my SAS drives were useless. Solution: sell the SAS drives and buy six 2TB Hitachi drives.

For now, it is burning in over the weekend, and Monday I will start benchmarking. Hopefully the rest of my memory, the cache SSD, the log SSD, and the rails will show up next week. I am planning on running an OpenSolaris / OpenIndiana / FreeNAS / Nexenta benchmark gauntlet on this.
Drive setup:
8x 1TB RAID 10 (striped mirrors)
8x 1TB RAIDZ2 storage
4x 1TB hot spares (2 per pool)
2x 250GB mirrored OS
2x 120GB SSD cache/log
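
For the curious, here is roughly how I plan to carve that up in ZFS once everything shows up. Pool names and the c#t#d# device IDs are placeholders, and with only two SSDs I am leaning toward one for cache and one for the log rather than mirroring either, so treat this as a sketch rather than the final commands:

Code:
# "RAID 10" pool: four striped 2-way mirrors, plus its 2 hot spares
zpool create fastpool \
  mirror c1t0d0 c1t1d0 \
  mirror c1t2d0 c1t3d0 \
  mirror c1t4d0 c1t5d0 \
  mirror c1t6d0 c1t7d0 \
  spare c1t16d0 c1t17d0

# RAIDZ2 bulk storage pool, plus its own 2 hot spares
zpool create storpool \
  raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
  spare c2t8d0 c2t9d0

# One SSD as read cache (L2ARC), one as a dedicated log device
zpool add fastpool cache c3t0d0
zpool add fastpool log c3t1d0

# The 2x 250GB OS mirror is just the rpool created at install time
zpool status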

Questions and suggestions are always welcome.

PS: if anyone would like this 34-bay chassis, I have 4 weeks before it gets racked and would happily buy a Supermicro to replace it. www.servercases.com still sells them and their parts.
A standard ATX board would fit easily, and all you would need is an HP expander plus a controller card. It has a 900W redundant power supply with 2x 8-pin and 1x 24-pin connectors.
 
"It is SATA only. My SAS drives were useless. Solution: Sell the SAS drives and buy 6 2tb Hitachi drives."

I did not understand this part. What is SATA only? The LSI cards accept your SAS disks without problems.
 
What are you going to do with all those interfaces? LACP? If so, why not bump up to 10GbE?
(and nice build - I was just struck by all the network interfaces)
 
Nice build. Looks good other than the SAS issue. These SFF-8088 to SFF-8087 cables (http://www.monoprice.com/products/p...=10254&cs_id=1025411&p_id=8197&seq=1&format=2) might be a bit cheaper. Probably not worth returning/replacing now, but something for the future.

Are you planning on mirroring the rpool drive? It might be worth it for the added reliability.

The cables are actually SFF-8087 to SFF-8470, so those won't work here, sadly.

Yes on the rpool. I need to figure out why it lost the 2nd 250GB drive; it is installed and, at last check, detected by the BIOS.
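
Once that second 250GB drive reappears, attaching it to the root pool only takes a couple of commands. A rough sketch, with the device names as placeholders (the installgrub step applies to the GRUB-based x86 releases):

Code:
# Attach the second disk to the existing root pool to create a mirror
zpool attach rpool c4t0d0s0 c4t1d0s0

# Watch the resilver complete
zpool status rpool

# Make the new disk bootable too (x86, GRUB-based OpenSolaris/OpenIndiana)
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4t1d0s0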
 
What are you going to do with all those interfaces? LACP? If so, why not bump up to 10GbE?
(and nice build - I was just struck by all the network interfaces)

I actually do not need the two dual-port cards that are in there; they will be moved to another box here on Monday. I am providing two redundant 2Gb links to the switch and a redundant 1Gb link to each server. The switch has a 2Gb trunk dedicated to the SAN VLAN. Should be fun to play with.
 
Subscribed

You should put on some Prolimatech Megahalems with some 120mm Delta 1212UHEs.
 
Sadly, those are too tall; we tried several heatsinks and they all hit. We have a custom 3x 120mm fan bracket being installed, and I will be removing the stock Intel 60mm fan. I should have time to make a small duct off the 120mm fan that will sit in front of it.
 
I actually do not need the two dual-port cards that are in there; they will be moved to another box here on Monday. I am providing two redundant 2Gb links to the switch and a redundant 1Gb link to each server. The switch has a 2Gb trunk dedicated to the SAN VLAN. Should be fun to play with.
this doesn't make sense. why the dedicated links to each server? you're losing switch redundancy for those servers and relying completely on the cards in the chassis.

just do two 4Gb trunks, one for each switch, and if you must, use initiator access control or NFS ACLs for each server.
 
i don't understand. you say 2 trunks from the ZFS box to the switch, that is 4 ports. then you say 1 redundant link to each server, so i'm presuming you mean the other 4 ports won't be trunked/802.3ad/LACP etc.

what is the point of the links to the switch and then the separate 4 links? i don't get it.
 
Bad wording, sorry. Bit busy around here :) I ended up changing how I was going to do the LACP from the initial config and ended up with extra ports. The number of servers has also been reduced and the available switch ports halved.
In my head I have it laid out as follows:
ZFS server: 2x 4Gb trunks -> HP switch
HP switch -> 2x 2Gb trunks to each server (6 total, set up for failover)

This seems the most logical to me with the gear I have (two 1800-24G switches and three ESX servers).
If you have any suggestions, let me know!
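
For reference, here is roughly how one of those 4Gb trunks would be built on the Solaris side with dladm link aggregation. The interface names and address are placeholders, and the ipadm lines assume Solaris 11 / OpenIndiana (older OpenSolaris would use ifconfig instead):

Code:
# Bundle four gigabit ports into one LACP aggregation for the SAN trunk
dladm create-aggr -L active -l e1000g0 -l e1000g1 -l e1000g2 -l e1000g3 aggr1

# Verify link state and LACP negotiation with the HP switch
dladm show-aggr -x aggr1

# Put an address on the aggregation
ipadm create-ip aggr1
ipadm create-addr -T static -a 10.10.10.5/24 aggr1/v4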
 
lol ah ok yeah that makes sense.

if you're doing iSCSI though, i would recommend not trunking the file server. instead, use a portal with 8 targets and set the multipath policy to round robin on the ESX server.

do test both, but i've seen better performance in my own testing when not trunking the file server. i do trunk the ESX servers though, as testing gave better performance with that end trunked.
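
for what it's worth, the round robin part on ESXi 5 is just a couple of esxcli commands per LUN. something like this, with the naa ID as a placeholder:

Code:
# list iSCSI devices and their current path selection policy
esxcli storage nmp device list

# switch a LUN to round robin across all of its paths
esxcli storage nmp device set --device naa.600144f0XXXXXXXX --psp VMW_PSP_RR

# optional: rotate paths every IO instead of the default 1000
esxcli storage nmp psp roundrobin deviceconfig set --device naa.600144f0XXXXXXXX --type iops --iops 1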
 
It was purchased some time ago and on sale.

If I built this server again, I would use a Supermicro chassis, an X9 board, and a newer Xeon, and pick a board that would not require the $500 LSI controllers.

Check out www.zfsbuild.com for an outdated yet similar setup.
 
Not to hijack your thread, but what do you think of this board and controller?

http://www.newegg.com/Product/Product.aspx?Item=N82E16813182253
3x http://www.newegg.com/Product/Product.aspx?Item=N82E16816117157
 
Despite your challenges, it looks like this thing is coming together! I'd have to agree with your Supermicro comment, though. I used a Norco RPC-4224 for my build and I've never been overly happy with the build quality (I'm a perfectionist), so at some point I'm probably going to rebuild with some of the Supermicro servers when I can afford it.

My idea is to use a 2U server with SFF disk slots in the front, and then a separate 4U front-and-back JBOD case for the disks. This way I can cram the server node with SSDs and simply dual-SAS-link to the JBOD.
 
You are not happy with the Norco 4224 build quality? What is the problem? I am about to buy one of them now.

Your main idea sounds neat. I want to do a similar thing: have the Norco 4224 as a JBOD with a Chenbro SAS expander, and connect that to my PC with an LSI 2008 card. What cables would I use to connect them? SFF-8087? How many?
 
Update:
Personal home server has been upgraded to some extent (more on this once I get home).

Benchmarking has begun. Initial results are below:
16x 1TB drives in a "RAID 10" setup
18GB RAM
E5420 2.5GHz quad

Bonnie is showing 945MB/s read and 888MB/s write. Ordering some SSDs hopefully soon to add as L2ARC and log devices. Should be interesting!
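
For anyone who wants to compare numbers, the test is a bonnie++ style sequential run; something like the command below, with the pool path as a placeholder and the file size set to roughly twice RAM so the ARC can't serve it all:

Code:
# Sequential throughput test: -d test directory, -s total file size in MB,
# -n 0 skips the small-file phase, -u runs it as the given user
bonnie++ -d /fastpool/bench -s 36000 -n 0 -u root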
 
Solaris 11 (text install) is showing 988/767.
Solaris 11 (GUI install) is showing 945/888.

Both are hitting right around 27,000 IOPS in IOMeter.
This is only for ~6 VMs, so I believe I should be good :)
Photos in a bit. I will be testing OpenIndiana and Nexenta over the next few days.
 