Upgrade home storage... what would you do?

Discussion in 'SSDs & Data Storage' started by farscapesg1, May 16, 2016.

  1. farscapesg1

    farscapesg1 2[H]4U

    Messages:
    2,575
    Joined:
    Aug 4, 2004
    Ugh.. been going back and forth on this for a while and just can't make a decision. I'm currently running an installation of OpenIndiana with the following:
    A pool of six 3TB drives in mirrored pairs, shared for general storage (music, video, documents, etc.)
    A pool of six 320GB drives in mirrored pairs, shared via iSCSI as a VMware datastore
    A pool of four 240GB SSDs in RAIDZ1, shared via iSCSI as a VMware datastore
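    Roughly, that layout in zpool terms (a sketch only; the cXtYdZ device names are hypothetical placeholders, not the actual ones):

```shell
# Sketch of the three pools above; device names are placeholders.
# General storage: six 3TB drives as three mirrored pairs
zpool create tank \
  mirror c1t0d0 c1t1d0 \
  mirror c1t2d0 c1t3d0 \
  mirror c1t4d0 c1t5d0

# iSCSI VM datastore: six 320GB drives as three mirrored pairs
zpool create vmsata \
  mirror c2t0d0 c2t1d0 \
  mirror c2t2d0 c2t3d0 \
  mirror c2t4d0 c2t5d0

# iSCSI VM datastore: four 240GB SSDs in RAIDZ1
zpool create vmssd raidz1 c3t0d0 c3t1d0 c3t2d0 c3t3d0
```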

    This is currently being run as a standalone server in a Norco 4224 case, using a Supermicro X8SIL-F, Xeon X3430, 16GB RAM, two M1015 storage cards, and a QLogic 2462 providing fiber for two ESXi hosts (only one is currently in use).

    Due to some issues with the OS (which has been limping along), I need to rebuild, and I'm currently looking at the following options.
    1) I have a spare X8SIL-F, Xeon X3440, and 32GB RAM. I can get everything set up (using OmniOS/Linux/FreeBSD/whatever). The downside is the motherboard's PCIe slot limitation. With the QLogic card and two M1015 controllers I'm out of slots and stuck with only the two onboard NICs, unless I replace the two M1015s with something like an LSI 9201-16i, which is too pricey right now.

    2) I have two unused Dell R710 servers (one with a single X5550 and 64GB RAM, the other with dual X5650s and 64GB RAM). I would need to pick up an external HBA (looking at an LSI 9201-16e) and a couple of 8088-to-8087 adapter brackets so I can repurpose the Norco case as a JBOD enclosure for the drives (I already have a JBOD board to control the power).

    With either option I'm leaning towards moving back to an "all-in-one" setup so I can run the essential VMs alongside the storage (storage, AD, vCenter, and maybe Plex).

    I guess the pros/cons to weigh are:

    X8SIL/x3440
    • Pros = lower power (about 80-90W average usage), storage and processing in same box
    • Cons = 3 PCIe slots max, limited to 32GB RAM, only 2 onboard NICs
    R710
    • Pros = 4 PCIe slots, 4 onboard NICs, dual-proc support, 64GB RAM (have memory to increase up to 96GB), able to run additional virtual systems
    • Cons = no 3.5" storage, needs external SAS cabling, more power (170W with the X5650s or 140W with the single X5550) plus additional power to run the hard drives in a JBOD case.
     
    Last edited: May 20, 2016
  2. ST3F

    ST3F Limp Gawd

    Messages:
    181
    Joined:
    Oct 19, 2011
    Change: Supermicro X8SIL-F, Xeon X3430, 16GB RAM
    for: Supermicro X9SRL, Xeon E5-2670 ($60 on eBay) or E5-2630L ($120 on eBay) ... and the 96GB ECC REG from the R710
    OS:
    -> ESXi + OmniOS + napp-it for iSCSI
    or
    -> W2K12R2 + Hyper-V + ZoL under the latest Debian or Ubuntu

    or:

    Change the R710's X5650 CPUs
    for L5630s
     
  3. Blue Fox

    Blue Fox [H]ardForum Junkie

    Messages:
    11,697
    Joined:
    Jun 9, 2004
    Why not use a SAS expander with a single M1015? That gets you the 24 ports you need and only uses a single PCIe slot.
     
  4. farscapesg1

    farscapesg1 2[H]4U

    Hmm, trying to avoid spending too much extra on this changeover, and I'm more than happy with the X5650 procs (dual procs, 64GB RAM, 8 drives, 4-port Intel NIC, and fiber card stays around 175W under load). Honestly, the X8SIL/X3430 combo handles everything I need from a storage standpoint, just with the PCIe slot restrictions.

    I thought about that, but don't the expanders (at least the HP one) "downgrade" to 3Gb/s with SATA disks? Considering one of my storage pools is all SSDs, wouldn't that bottleneck them? Also, don't the expanders take a PCIe slot anyway for power? I guess you can use those PCIe x1-to-x16 adapters, but I can't remember if there's enough room in the Norco 4220 (1st edition, no fan-wall replacement) with the X8SIL-F board.

    For now, I picked up an LSI 9201-16e card (locally for about half the price of the HP expanders on eBay, $30) and two 8088-to-8087 brackets. I already had some 8088 SAS cables from work, so I've only put about $90 into the upgrades needed to roll it into my host. Added the LSI card to my host and bumped the RAM from 64GB to 96GB, and it's still running under load at only about 200W. Just waiting on the brackets to cable everything up.

    Worst case, I decide I don't like the all-in-one setup (ran it before on older hardware and split it off) and I look at the expander route to move back to a dedicated box... as long as there isn't a disk performance hit with the expander and the SATA3 SSDs. Is the HP expander still the "go-to" option, with a mining PCIe board for the power?
     
  5. Blue Fox

    Blue Fox [H]ardForum Junkie

    I think the HP one does that; not sure about the Intel, though. The Intel expander does not require a PCIe slot for power. There's actually a really easy solution to your bandwidth dilemma anyway. Since the Intel expander is only 24 ports total and you need 4 of those for the uplink, run one cable from the M1015 straight to your 4 SSDs and the other to the SAS expander, which then connects to your other 20 bays.
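    The back-of-the-envelope math behind that split (link rates are the usual SATA figures, ignoring encoding overhead; the 3Gb/s expander behavior is the reported HP quirk, not confirmed here):

```shell
# Aggregate SSD bandwidth in Gb/s under each cabling option (assumed rates).
echo $(( 4 * 6 ))   # 4 SSDs directly on one M1015 port at SATA3 6Gb/s each -> 24
echo $(( 4 * 3 ))   # same 4 SSDs behind an expander negotiating 3Gb/s each -> 12
```

    Hanging the SSDs straight off the M1015 sidesteps the question of whether the expander would halve their link rate.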
     
  6. farscapesg1

    farscapesg1 2[H]4U

    Duh.. for whatever reason I had it in my head that I needed both ports connected to the expander. I'll look into picking one up, as I've decided the all-in-one option isn't going to work out for me anyway. The little bit of power savings doesn't justify the hassle of getting both hosts to talk to it and the limitations it imposes on the hypervisor side (not being able to restart the host without taking all the storage down). Really wish my 3rd R710 was a Gen 2 model and a tad quieter. Are the HP SAS expanders still the cheapest/easiest route to go?

    Now to decide if I should just stick with my current 4Gb fiber connection.. or move to 10Gb since I have two Intel X520-DA2 cards sitting here. I would just need to pick up an X520-DA1 (or compatible) and another direct-attach cable. I'm not sure which 10Gb cards are compatible for direct-connecting two ESXi hosts back to an OmniOS/ZFS server. I definitely don't have the funds for a 10Gb switch :( Technically it isn't like I really "need" the extra performance, but the FC cards were definitely nice compared to my original attempt at iSCSI with just two 1Gb NICs.
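    For a direct-attach X520 link, the OmniOS side would come down to something like this (a sketch; the ixgbe0 interface name and 10.10.10.x addresses are assumptions, not from this thread):

```shell
# Hypothetical sketch: point-to-point 10Gb link from OmniOS to an ESXi host.
dladm show-phys                                # confirm the X520 ports show up (ixgbe)
ipadm create-if ixgbe0                         # plumb the first 10Gb port
ipadm create-addr -T static -a 10.10.10.1/30 ixgbe0/v4
# On the ESXi side, give the vmkernel port on the X520 the peer address
# (10.10.10.2/30) and point NFS/iSCSI traffic at 10.10.10.1.
```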

    Would prefer to move to NFS for my datastores too, but that means picking up a SLOG device (Intel S3500 maybe, S3700, or something else?) since ESXi forces sync writes on ZFS no matter the config. Yeah, I know, you should use sync anyway.. been playing Russian roulette for a couple of years since I have everything on UPS and back up the VMs.
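    If the NFS move happens, the SLOG part boils down to a couple of commands (pool, dataset, and device names here are hypothetical):

```shell
# Hypothetical sketch: add a dedicated log device (e.g. an S3700) so ESXi's
# forced sync writes land on the SLOG instead of the main vdevs.
zpool add tank log c4t0d0
zpool status tank                  # the device should appear under "logs"
zfs set sync=standard tank/nfs_ds  # honor sync writes (the safe default)
# zfs set sync=disabled tank/nfs_ds  # the "Russian roulette" option
```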