[H]ard Forum Storage Showoff Thread

Discussion in 'SSDs & Data Storage' started by EnderW, Jan 1, 2015.

  1. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    8,729
    Joined:
    Jun 13, 2003
    Just now?

    I haven't seen a commercial mass storage system using 3.5" drives in nearly a decade...

[bad anecdote, but I'm seriously under the impression that the industry's move to 2.5" drives happened some time ago, and with SSDs I don't see 3.5" drives coming back]
     
  2. Machupo

    Machupo Gravity Tester

    Messages:
    4,797
    Joined:
    Nov 14, 2004
I haven't had a 3.5" setup to compare with, but I haven't had any issues whatsoever with responsiveness. Plex loads quickly, no buffering; bulk file transfer through Samba and Time Machine works like a charm. No idea what the actual quantitative data says (other than seeing xfers @ >100MB/sec in Windows), but it passes the "technology is invisible" test the wife usually throws at everything I build.

Yeah, if I hadn't just bought a family trip to NZ, I would have probably gone that route, just for another 25% of space. I'm getting the 4TB drives at $65/per, though; haven't seen the 5TBs anywhere near that.
     
  3. b3nno

    b3nno n00b

    Messages:
    19
    Joined:
    Nov 19, 2014
    What OS are you running on your NAS? And what RAID level?
     
  4. Machupo

    Machupo Gravity Tester

    Messages:
    4,797
    Joined:
    Nov 14, 2004
    FreeNAS, Z2 in the current build, may go to Z3 with the new build due to tripling the # of drives in the array.
     
  5. b3nno

    b3nno n00b

    Messages:
    19
    Joined:
    Nov 19, 2014
Okay, good to know... running FreeNAS myself, and planning to populate a chassis with ST4000LM024's:
    VcPbDDY.jpg
    snipsnip:
    VLtD3FG.jpg
    n7ukq35.jpg
    MxIgFek.jpg
Chopped it in half, put a 200W Shuttle power supply in the 5.25" bay.
    For now, 1m 8087 cables going directly from the backplane to internal controllers in the server below.
Been running four ST4000LM024's in RAIDZ for a couple of months for testing, and they've been running smoothly so far.
     
  6. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    27,595
    Joined:
    Oct 29, 2000
I thought the ZFS manual said never to use more than 12 drives in a single vdev, as it has reliability implications, and instead to use multiple vdevs per pool.

Personally, I run mine as two RAIDZ2 vdevs with 6 drives in each.
     
  7. Machupo

    Machupo Gravity Tester

    Messages:
    4,797
    Joined:
    Nov 14, 2004
Good point -- sorry, had a brain fart there for a sec -- going to do 3x vdevs at Z2 each (3x 8+2 drives mashed together).
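
Back-of-the-envelope usable space for those layouts (a rough Python sketch; it ignores ZFS metadata/slop overhead, and the 4TB drive size is just my shucked ones):

Code:
# Rough usable capacity for a pool of identical RAIDZ vdevs.
# Ignores ZFS metadata, slop space, and TB vs TiB rounding.
def raidz_usable_tb(vdevs, drives_per_vdev, parity, drive_tb):
    return vdevs * (drives_per_vdev - parity) * drive_tb

# Two 6-drive RAIDZ2 vdevs (the 4TB size is just an example here):
print(raidz_usable_tb(2, 6, 2, 4))   # 32 TB, survives 2 failures per vdev

# The planned 3x (8+2) RAIDZ2 layout with 4TB drives:
print(raidz_usable_tb(3, 10, 2, 4))  # 96 TB usable

Going Z3 on the 10-wide vdevs instead would cost one more drive's worth (4TB) per vdev -- 84TB usable.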
     
    Zarathustra[H] likes this.
  8. TeeJayHoward

    TeeJayHoward Limpness Supreme

    Messages:
    9,411
    Joined:
    Feb 8, 2005
    We use a ton of them at work, from just about every major manufacturer. Hitachi, Dell/EMC, Supermicro, HPE 3PAR, you name it. Looking at how quickly the data drops off, I wish we could double our capacity at minimum. We're currently utilizing many, many petabytes, and could use exabytes of storage easily. 3.5" drives are alive and well, at least in the telecom industry.
     
    IdiotInCharge likes this.
  9. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    27,595
    Joined:
    Oct 29, 2000
    Fair enough I guess.

I'm not an IT professional, so I don't have a clue what the pros are using.

    That being said, nearly a decade? 2008 feels like it was just yesterday.

    All I did was blink, and here we are in 2018.

    Nothing of importance can possibly have changed since then. :p

    If I close my eyes and don't think too hard about it, my neutral time state is still ~1996 :p


    More seriously though, why has the market moved to 2.5" drives? What has the appeal been?

Whenever I have looked into it, they have had higher seek times and lower max capacities. Sure, 3.5" drives take more space and use more power than 2.5" drives, but according to my calculations they still provide high enough performance and capacity to more than make up for this.
     
    Last edited: Apr 8, 2018
    IdiotInCharge likes this.
  10. TeeJayHoward

    TeeJayHoward Limpness Supreme

    Messages:
    9,411
    Joined:
    Feb 8, 2005
I want to say it was speed related, back when. For a given density of rack units, you could have shorter seek times and greater write speed with the smaller drives (24x 2.5" vs 12x 3.5"). I know "more spindles" was the mantra for anything database related pre-SSDs. I'm not really sure why we're sticking with it these days. Seems to me that we're limited by the number of chips you can put on a 2.5" solid-state drive for speed right now, so a larger form factor would make sense.

Personally, I'd like to see a lot more RDMA-type tech. There's not much point to having local storage anymore with the kinds of access times you can get (600Gb/s @ 0.5ms now, somewhere in the Tb/s @ ns range in the next 5 years). Boot off an embedded chip, load the OS into RAM, and keep everything else in the storage row in the datacenter. Heck, given the rate at which networks are progressing, I could see a return to the Cray era, where we separate memory, storage, and processing into different physical systems.
     
    Zarathustra[H] and IdiotInCharge like this.
  11. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    8,729
    Joined:
    Jun 13, 2003
I think it has to do with the ability to stack so many 2.5" drives side by side when installed vertically in a 2U chassis.

As for SSDs, we can already get 2TB in the M.2 format, and Intel has their 'ruler' form factor coming, which I think will be perfect. Probably 16TB per module just with today's technology.

For the future -- I see 'compute units' expanding, but memory will likely stay tightly coupled with CPUs. And given that booting from the network is most certainly a thing, I'd bet that only a local hypervisor would be needed for the compute modules -- and hell, that could be a USB stick or even a modern flash device. Sony's XQD format, used by high-end stills and video cameras, is straight-up PCIe and plenty fast, while also being rugged like an SSD.
     
  12. Deadjasper

    Deadjasper [H]ard|Gawd

    Messages:
    1,627
    Joined:
    Oct 28, 2001
    2.5" drives use less power. When you're running 10's or even 100's of these, the savings add up quick.
     
  13. TeeJayHoward

    TeeJayHoward Limpness Supreme

    Messages:
    9,411
    Joined:
    Feb 8, 2005
There's no real power savings. 2.5" drives use a bit more than half the power of 3.5" drives (0.51A vs 0.9A) and are racked almost exactly twice as densely (8 drives per rack unit vs 4), so per rack unit they come out slightly worse. The additional heat generated by the denser 2.5" shelves means the CRAC unit has to work harder to cool the same number of rack units worth of storage, which actually increases the overall energy usage. It really doesn't make sense to stick with the 2.5" format for much longer.
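
Quick sanity check on that (a sketch; I'm assuming the quoted currents are 12V-rail draw, which is a guess):

Code:
# Per-rack-unit power, from the numbers above.
RAIL_V = 12.0  # assumption: quoted amps are on the 12V rail
configs = {'2.5"': (0.51, 8), '3.5"': (0.90, 4)}  # (amps/drive, drives/U)
for name, (amps, per_u) in configs.items():
    print(f"{name}: {amps * RAIL_V * per_u:.1f} W per U")
# 2.5": 49.0 W/U vs 3.5": 43.2 W/U

So the denser 2.5" shelf actually draws (and dissipates) a bit more per rack unit before the CRAC even enters the picture.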

    Personally, I think PCI-E NGSFF is our future:
    SSG-1029P-NMR36L.jpg
     


  14. FLECOM

    FLECOM Modder(ator) & [H]ardest Folder Evar Staff Member

    Messages:
    15,569
    Joined:
    Jun 27, 2001
Nah, bought most of them online (maybe a handful in store). Was fun shucking them all; have enough USB3>SATA bridges and 12V 1.5A AC adapters for a couple lifetimes hehe

I made a small array with 6x 4TB 2.5" 5400 RPM drives I shucked from Seagate externals ($100 at Costco), have them in a RAID6, and they have been performing great so far for what I needed... performance is actually better than I thought it would be (I guess the smaller platters, meaning less distance for the heads to travel, make up a bit for the slower spindle speed).
     
  15. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    27,595
    Joined:
    Oct 29, 2000

    Interesting.

I guess one should also mention that the largest-capacity 3.5" drives tend to be more than double the capacity of the largest 2.5" drives, so to match total capacity you'd need more than twice as many 2.5" drives.

If the rack density is only 2x that of 3.5" drives, then 2.5" comes out behind on maximum capacity per rack.
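
Rough numbers (a sketch; the ~14TB 3.5" and ~5TB 2.5" maximums are my guesses for current drives, not gospel):

Code:
# Raw capacity per rack unit at the densities quoted above.
max_tb = {'3.5"': 14, '2.5"': 5}  # assumed current max capacities
per_u  = {'3.5"': 4,  '2.5"': 8}  # drives per rack unit
for ff in max_tb:
    print(f"{ff}: {max_tb[ff] * per_u[ff]} TB per U")
# 3.5": 56 TB/U vs 2.5": 40 TB/U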
     
  16. TeeJayHoward

    TeeJayHoward Limpness Supreme

    Messages:
    9,411
    Joined:
    Feb 8, 2005
Yup. I think that's a big reason why commercial SANs and NAS setups still use 3.5" drives for low-speed, low-cost, high-capacity storage. However, if NGSFF takes off, that won't be the case for much longer. At 16TB per drive and 32 drives per unit, it's denser than 3.5", and faster than 2.5", while being on par with current SSD pricing. Most of the places I work in use tiered storage solutions: based on access patterns, data is shuffled around between solid state, spinning rust, and even straight RAM. For the next decade or so, there will probably be a place for all the drive types to co-exist. I personally don't see it lasting much longer than that, though. By 2030, I'd be surprised if the 3.5" format's still around.
     
  17. Machupo

    Machupo Gravity Tester

    Messages:
    4,797
    Joined:
    Nov 14, 2004
    So, 20 months into NAS operations (nearly 100% uptime) with these shucked 4TB drives, I finally got my first drive with uncorrectable errors. Resilvering has been going for four hours; 4% complete, lol.

    Edit: Resilver complete (72 hours later) and we are back at double redundancy :)
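
For the curious, the naive linear extrapolation from that first progress report (resilver speed varies with pool load and data layout, so this is a sanity check at best):

Code:
# Linear resilver ETA from "4% complete after four hours".
hours_elapsed, pct_complete = 4, 4
print(f"naive ETA: {hours_elapsed / (pct_complete / 100):.0f} h")  # ~100 h
# Actual came in at 72 h.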
     
    Last edited: Feb 8, 2019
    gigatexal likes this.
  18. Machupo

    Machupo Gravity Tester

    Messages:
    4,797
    Joined:
    Nov 14, 2004
    Well, you've inspired me :)

Building a 20TB M.2 NAS to play around with (SATA, not NVMe, because it is just for media).

    Yeah.... This makes me feel like I am just lighting money on fire for fun, really, but hey... Science. Or something.
     
  19. IceDigger

    IceDigger [H]ardForum Junkie

    Messages:
    10,500
    Joined:
    Feb 22, 2001
    For great justice!

    We need pics when done!
     
  20. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    27,595
    Joined:
    Oct 29, 2000

Tell me about it. It took me a while to get over how much money I lit on fire when I upgraded my 12x 4TB WD Reds to my 12x 10TB Seagate Enterprise drives.

    It was a lot of cash. I'm glad I did though, as I would have been out of storage by now if I hadn't.

    Hopefully the 10TB drives will be enough storage for another 4-5 years.
     
    gigatexal and IceDigger like this.
  21. Machupo

    Machupo Gravity Tester

    Messages:
    4,797
    Joined:
    Nov 14, 2004
First look at the new SFF NAS (1U, 9"-deep case).

    Mirrored 250GB boot drives (total overkill)
    10x 2TB data drives (will be in Z2)

    If I can figure out how to fit a PCIe HBA back on top of the motherboard (right riser instead of left riser), there is room for quite a bit of expansion. Could theoretically fit 30x data drives in this setup (keeping the mirrored boot drives).

    1u_nas.jpg
     
    86 5.0L, mrwizardno2, Angry and 2 others like this.
  22. gigatexal

    gigatexal [H]ardness Supreme

    Messages:
    7,120
    Joined:
    Jun 22, 2004
Dude! What are those? I want to do a silent setup just like it.
     
  23. Angry

    Angry Limp Gawd

    Messages:
    465
    Joined:
    Feb 27, 2006
I just saw those drives and the price and wanted some, but they only had the 500GB. I guess you saw the same deal I did.
Love the setup.

    Please do tell about those adapters.
     
    gigatexal likes this.
  24. Machupo

    Machupo Gravity Tester

    Messages:
    4,797
    Joined:
    Nov 14, 2004
    gigatexal likes this.
  25. gigatexal

    gigatexal [H]ardness Supreme

    Messages:
    7,120
    Joined:
    Jun 22, 2004
They're 331 EUR per drive here in the EU :-(
     
  26. Deadjasper

    Deadjasper [H]ard|Gawd

    Messages:
    1,627
    Joined:
    Oct 28, 2001
  27. Machupo

    Machupo Gravity Tester

    Messages:
    4,797
    Joined:
    Nov 14, 2004
Every 1-star review (except for one) was for the 2-drive RAID adapter (which requires that your mobo have SATA port-multiplication capability). The single 1-star review for this quad pass-through adapter was from a guy trying to use an NVMe drive (these adapters are SATA-only). The star average doesn't usually tell the whole story ;)

There is a 3-star review for this quad adapter where the guy mentions that his SATA power connector broke off. So, I will be gentle!
     
    kirbyrj and IdiotInCharge like this.
  28. Trimlock

    Trimlock [H]ardForum Junkie

    Messages:
    15,054
    Joined:
    Sep 23, 2005
    Stink "amazon" reviews mean jack shit.

    Doing a setup like this is converting a single M.2 drive to a single SATA. It may not increase density and will be fairly expensive. Would probably be cheaper and easier to use plain SATA SSD's.

    Although these do have a single power connector for 4 devices, so simplification is going for it.
     
  29. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    8,729
    Joined:
    Jun 13, 2003
    Would you mind sharing the rest of the system for curious minds?

    :D
     
  30. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    27,595
    Joined:
    Oct 29, 2000

    I'm curious what you use that for. An SSD NAS seems pretty nuts to me, unless you are doing some pretty heavy duty nearline stuff.
     
  31. Machupo

    Machupo Gravity Tester

    Messages:
    4,797
    Joined:
    Nov 14, 2004
    In short: absolutely nothing that needs it.

I was bored with the current iteration (somewhere up in this thread). When I get bored, weird things happen, as shown by a long history of projects in the SFF subforum, lol.

Ok, I'll throw myself a bone: moving another system from its own case into a 1U enclosure in the main rack. But that is pretty weak justification, hah!
     
    mrwizardno2 and Zarathustra[H] like this.
  32. Machupo

    Machupo Gravity Tester

    Messages:
    4,797
    Joined:
    Nov 14, 2004
    Yuck! Over the 10x 2TB, I averaged out to 219 USD shipped each. That was just low enough to get past my personal cost barrier for silly projects :p
     
  33. Machupo

    Machupo Gravity Tester

    Messages:
    4,797
    Joined:
    Nov 14, 2004
    Once I get it through initial sea trials I will post a full write-up :)
     
  34. Machupo

    Machupo Gravity Tester

    Messages:
    4,797
    Joined:
    Nov 14, 2004
    Did a little reorganization:
    - cut the PCBs on the quad adapters down to 120mm in length (instead of 140mm)
    - drilled corner holes in the "far end" to line up with the ones at the connector end
    - inverted the second set of drives

Now have 10x 2TB data drives and 2x 250GB OS drives in the space of a thick 3.5" HDD :D (with standoffs it is a full 1U; would need to order some custom-height standoffs to fit a 2nd inverted drive set in there).

    triple_stack1.jpg

    triple_stack2.jpg
     
  35. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    27,595
    Joined:
    Oct 29, 2000

As cool as this is, it almost feels like a bit of a shame to limit M.2 drives to SATA bandwidth.

If I were to ever use M.2 drives in my NAS (that time may some day come), I'd probably be looking at getting some sort of server board with a crazy number of PCIe lanes instead.
     
  36. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    8,729
    Joined:
    Jun 13, 2003
I mean, 550MB/s per drive realized -- assuming that they're fairly resilient, you can likely roll a single-parity stripe, and just four would obliterate 10Gbit.

Sure, 40Gbit is probably coming down in price on eBay, but still :D
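
Napkin math, treating a single-parity stripe as reading at (n-1)x the per-drive speed and ignoring protocol overhead (both simplifications):

Code:
# Array sequential throughput vs. Ethernet line rates.
DRIVE_MBPS = 550      # realized per-drive sequential speed, as above
drives, parity = 4, 1 # single-parity stripe (assumption)
array_mbps = (drives - parity) * DRIVE_MBPS  # 1650 MB/s
for gbit in (1, 10, 40):
    link_mbps = gbit * 1000 / 8  # line rate in MB/s
    state = "saturated" if array_mbps >= link_mbps else "headroom left"
    print(f"{gbit:>2} GbE ({link_mbps:.0f} MB/s): {state}")
# 10GbE (1250 MB/s) gets obliterated; 40GbE still has headroom.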
     
    Deadjasper and Zarathustra[H] like this.
  37. Machupo

    Machupo Gravity Tester

    Messages:
    4,797
    Joined:
    Nov 14, 2004
Just doing teamed GbE right now. Next upgrade will be to SFP+ or QSFP+ (also for no good reason :p)
     
  38. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    27,595
    Joined:
    Oct 29, 2000
I have a direct 10GBase-T copper link between my desktop and my NAS. It's been great.

Before it I used a Brocade SFP+ fiber direct link and it was garbage. It's turned me off from fiber for life.
     
  39. PigLover

    PigLover [H]ard|Gawd

    Messages:
    1,171
    Joined:
    Jul 11, 2009
All that SSD love over 1GbE LAGs? Like running on a blown hamstring...

Nice touch on the mounting, though. I do hope you have direct airflow over those things; they will get plenty hot packed in that densely.