Talk to me about a no-storage "head unit" for my ZFS arrays ...

cbyunta88

n00b
Joined
Sep 22, 2011
Messages
35
I have built quite a few large ZFS fileservers with DAS over the years, and they all look about the same - the computer itself is a 4U, 36-drive chassis, and then we attach two, three, or four of the 4U, 45-drive Supermicro JBOD chassis to it.

So, a bunch of big 4U devices, and the first one of them has a bunch of storage (36 disks) in it. All is well.

But now we're going to build some more, and I am thinking about NOT having any disks in the main system - that is, just getting a 1U or 2U "head unit" with no spinning disks in it, only a small chassis for the motherboard, boot SSDs, and cache SSDs - and putting ALL of the storage in the attached JBODs.

The nice thing about this setup is that I get nice hot-swap 2.5" trays on the front of the system for lots of SSDs - a boot mirror and maybe 4 or 6 SSDs for SLOG, etc. I also get a built-in CD-ROM for system loads and updates, which is really handy. With the 36-drive 4U chassis, we have zero slots for SSDs - they have to sit inside the chassis, not hot-swap - and we also end up with a lame USB CD-ROM in the rack.
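For context, hooking those front-bay SSDs up as a mirrored SLOG is a one-liner - a sketch, with a made-up pool name ("tank") and made-up device names:

  # attach two front-bay SSDs as a mirrored SLOG (pool and device names are hypothetical)
  zpool add tank log mirror /dev/disk/by-id/ata-SSD_A /dev/disk/by-id/ata-SSD_B

  # confirm the log vdev shows up
  zpool status tank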

Is this how folks do it? Any downsides to this?

The only downside I can think of is that we are "spending" 2U of rack space and "spending" some wattage on non-storage equipment, but it's not that much ...

Any reason not to do this ?

Thanks.
 
Interesting. Can you briefly share some of your experiences? What are your recommendations if one were to build the kind of system you describe? Hardware? Best practices, etc.?
 
What are you trying to GAIN by not doing it how you are now? What you explained is the same thing set up differently, with no gains but more risk, since more drives are cable-attached instead of in hot-swap trays in the main system - slightly more risk, not much ;) The only real benefit I see is being able to easily swap out your system without touching the drives.

I am doing the same thing, really. I use the 4U Supermicro (the 36-bay hot-swap one, I think) for my system/mobo/etc., and then another one for just the hot-swap bays, backplane, and power. The "system" one has trays for 2.5" drives, so I run my SSDs hot-swap in there. My goal is to have 4 or fewer SSDs "inside" so things are MUCH easier to swap and work on.

I had the "system" in an SM 3U 16-bay until I decided, why not go with the 36 for more density - it's only SLIGHTLY taller.

My SM 3U and 4U 1200W and 900W Gold and Platinum power supplies use about 20-35W more than a SeaSonic 550W Gold, which is sized more appropriately for my number of drives and makes sense.

Doing it like this, the system chassis is going to run ZFS; on the externals I may run HW RAID with passthrough in ESXi, so I can have various hot-swap configurations and file systems.
 
"The only real benefit I see is being able to easily swap out your system without touching the drives."

Yes, that's the idea - if we have a failing RAM/CPU/SAS card or whatever, we don't need to send a tech out to open things up and do surgery - we can just swap in a different head unit and be done with it.
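At the ZFS level, that swap should be nothing more than an export/import - roughly like this, assuming the pool is named "tank":

  # on the old head, if it is still alive
  zpool export tank

  # on the replacement head, after cabling the JBODs back up
  zpool import -d /dev/disk/by-id tank

  # if the old head died without exporting cleanly, force the import
  zpool import -f tank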

The other advantage is having nice hot-swap, front-loaded SSDs. I know you can buy adapters, but then I have uneven blocks of disks (instead of three 12-disk arrays, I have two 12-disk arrays and one oddball array with 6 or 8 drives in it, and if you are using raidz3, that's just a waste).
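To be concrete, the even layout is just three identical raidz3 vdevs - a sketch, with made-up device names:

  # three uniform 12-disk raidz3 vdevs (36 disks total; names are hypothetical)
  zpool create tank \
    raidz3 d01 d02 d03 d04 d05 d06 d07 d08 d09 d10 d11 d12 \
    raidz3 d13 d14 d15 d16 d17 d18 d19 d20 d21 d22 d23 d24 \
    raidz3 d25 d26 d27 d28 d29 d30 d31 d32 d33 d34 d35 d36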

So those are the two benefits - easy hot-swap of the entire system, and easy SSD access - up to 16 SSDs in the front, so we can add more cache later if we want to, or maybe put in a hot spare for the boot mirror, etc.
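And adding cache later from a spare front bay is non-disruptive - something like this, with a made-up device name:

  # add another L2ARC device to the live pool
  zpool add tank cache /dev/disk/by-id/ata-SSD_C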
 
It sounds like it would be a bit of an advantage for you to do so.
You would be buying an extra case and power supply, and the rack space and power plugs they take up are the only real disadvantages.
 
It boggles my mind that people still insist on loading software from CDs when SM, Dell, HP, and IBM all have virtual CD capability and ISO mounting. Ditch that disposable-media mindset, man! LCISOCreator, Folder2ISO (and others) are your friend, friend!
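On a Unix box, the folder-to-ISO step is a single command too; for example (file and folder names made up):

  # roll a folder of updates into an ISO you can mount as IPMI virtual media
  genisoimage -o updates.iso -R -J -V UPDATES ./updates/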
 
A storage head without disks + storage nodes with disks can give advantages:
- capacity is not limited by drive bays
- performance scales over nodes, since you are not limited by a single link/expander (you can use multiple SAS or network links in parallel)
- availability, if you do ZFS raid over storage nodes
- redundant nodes can be placed remotely / in the next building
- HA or rapid manual failover with a second head

If you use network-based connectivity, use direct links or redundant switches.

The last option requires MPIO SAS configurations if you use local SAS disks in external JBOD boxes, or SAN-type network connectivity between head and nodes over AoE, IB, FC, or iSCSI.

See the options at http://napp-it.org/configurations_en.html
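For the iSCSI flavor, the initiator side on a Linux head is roughly this (target IP and IQN are made-up examples):

  # discover the targets a storage node exports
  iscsiadm -m discovery -t sendtargets -p 192.168.10.20:3260

  # log in; the LUNs then show up as local block devices for zpool create
  iscsiadm -m node -T iqn.2011-09.example:node1.lun0 -p 192.168.10.20:3260 --login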
 
I'm doing a similar thing on a smaller scale. My "head unit" is one of these, mounted on the back side of the rack:
[photo: rackmount 2U eATX chassis with 700W redundant power supply]

http://www.supermicro.com/products/chassis/2U/823/SC823MTQ-R700LP.cfm

I then reverse the airflow of the fans on the PSU and chassis and load it up with low-profile HBAs. I originally used the Dell H200 external, but have since started experimenting with the LSI 9202-16e:
[photo: LSI 9202-16e HBA]

http://www.ebay.com/itm/Dell-LSI-SA...036?pt=LH_DefaultDomain_0&hash=item463369d6ec

Paired with a board with 7 PCIe slots, you have the ability to run 28 external SFF-8088 ports (4 per card) BEFORE expanders.

Rather than using the big, expensive Supermicro JBOD chassis, I am using barebones Dell C2100s with the two SFF-8087 backplane ports run straight out a rear PCIe slot to an SFF-8088 adapter. You can't beat a 12-bay JBOD with dual PSUs AND its own BMC for $150!
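Once it's cabled up, you can sanity-check that the HBA actually sees the enclosure and drives with the LSI utility, e.g.:

  # list LSI controllers, then dump everything controller 0 sees
  sas2ircu LIST
  sas2ircu 0 DISPLAY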
 
"It boggles my mind that people still insist on loading software from CDs when SM, Dell, HP, and IBM all have virtual CD capability and ISO mounting. Ditch that disposable-media mindset, man! LCISOCreator, Folder2ISO (and others) are your friend, friend!"


Wake On LAN and automated installers too. :)
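WOL is one command from any box on the same segment (MAC address is made up):

  # kick a powered-off head awake over the LAN
  wakeonlan 00:25:90:ab:cd:ef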
 