[H]ard Forum Storage Showoff Thread

Just now?

I haven't seen a commercial mass storage system using 3.5" drives in nearly a decade...

[bad anecdote, but I'm seriously under the impression that the industry's move to 2.5" drives happened some time ago, and with SSDs I don't see 3.5" drives coming back]
 
How do you find the array performs with them compared to traditional 3.5" drives?

I haven't had a 3.5" setup to compare with, but haven't had any issues whatsoever with responsiveness. Plex loads quickly, no buffering; bulk file transfer through samba and time machine works like a charm. No idea what the actual quantitative data says (other than seeing xfers @ >100MB/sec in windows), but it passes the "technology is invisible" test the wife usually throws at everything I build.

I've since thrown those 4TB drives away and replaced them with 5TB drives, only to realize the 5TBs are great at read speeds but awful at write speeds. Those went away too, and now I have an 8x2.5" hole in my case with no drives to put in it.

Yeah, if I hadn't just bought a family trip to NZ, I would probably have gone that route, just for another 25% of space. I'm getting the 4TB drives at $65 apiece, though; haven't seen the 5TBs anywhere near that.
 
Another year and no issues with the drives. Just bought another 24 of them and am rebuilding into a custom short depth 2U case. Should triple my storage density (TB/L).

What OS are you running on your NAS? And what RAID level?
 
FreeNAS, Z2 in the current build, may go to Z3 with the new build due to tripling the # of drives in the array.
 
Okay, good to know.. running FreeNAS myself, and planning to populate a chassis with ST4000LM024's:
[build photos]

Chopped it in half, put a 200W Shuttle power supply in the 5.25" bay.
For now, 1m 8087 cables going directly from the backplane to internal controllers in the server below.
Been running four ST4000LM024's in RAIDZ for a couple of months for testing; they've been running smoothly so far.
 
FreeNAS, Z2 in the current build, may go to Z3 with the new build due to tripling the # of drives in the array.

I thought the ZFS manual said to never use more than 12 drives in a single vdev, as it has reliability implications, and instead use multiple VDEV's per pool.

Personally I run mine as two RAIDz2 VDEV's with 6 drives in each.
 
I thought the ZFS manual said to never use more than 12 drives in a single vdev, as it has reliability implications, and instead use multiple VDEV's per pool.

Personally I run mine as two RAIDz2 VDEV's with 6 drives in each.

Good point -- sorry, had a brain fart there for a sec -- going to do 3x vdevs at z2 each (3x 8+2 drives mashed together)
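For anyone curious about the math on that layout, a quick back-of-the-envelope (Python; the 4TB drive size is just an assumption based on the drives mentioned earlier, and real usable space will be a bit lower after ZFS overhead and the TB-vs-TiB difference):

# Back-of-the-envelope usable capacity for a 3x RAIDZ2 (8+2) layout.
drive_tb = 4              # assumed drive size
vdevs = 3
drives_per_vdev = 10      # 8 data + 2 parity per RAIDZ2 vdev
parity_per_vdev = 2

raw_tb = vdevs * drives_per_vdev * drive_tb
usable_tb = vdevs * (drives_per_vdev - parity_per_vdev) * drive_tb
print(f"raw: {raw_tb} TB, usable before ZFS overhead: {usable_tb} TB")
# raw: 120 TB, usable before ZFS overhead: 96 TB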
 
I haven't seen a commercial mass storage system using 3.5" drives in nearly a decade...
We use a ton of them at work, from just about every major manufacturer. Hitachi, Dell/EMC, Supermicro, HPE 3PAR, you name it. Looking at how quickly the data drops off, I wish we could double our capacity at minimum. We're currently utilizing many, many petabytes, and could use exabytes of storage easily. 3.5" drives are alive and well, at least in the telecom industry.
 
Just now?

I haven't seen a commercial mass storage system using 3.5" drives in nearly a decade...

[bad anecdote, but I'm seriously under the impression that the industry's move to 2.5" drives happened some time ago, and with SSDs I don't see 3.5" drives coming back]

Fair enough I guess.

I'm not an IT professional, so I don't have a clue what the Pro's are using.

That being said, nearly a decade? 2008 feels like it was just yesterday.

All I did was blink, and here we are in 2018.

Nothing of importance can possibly have changed since then. :p

If I close my eyes and don't think too hard about it, my neutral time state is still ~1996 :p


More seriously though, why has the market moved to 2.5" drives? What has the appeal been?

Whenever I have looked into it, they have had higher seek times and lower max capacities. Sure, 3.5" drives take more space and use more power than 2.5" drives, but according to my calculations they still provide high enough performance and capacity to more than make up for this.
 
More seriously though, why has the market moved to 2.5" drives? What has the appeal been?
I want to say it was speed related, back when. For a given density of rack units, you could have shorter seek times and greater write speed with the smaller drives (24x 2.5" vs 12x 3.5"). I know "more spindles" was the mantra for anything database related pre-SSDs. I'm not really sure why we're sticking with it these days. Seems to me that we're limited by the number of chips you can put on a 2.5" solid state drive for speed right now, so a format increase would make sense.

Personally, I'd like to see a lot more RDMA-type tech. Not really much point to having local storage anymore with the kinds of access times you can get (600Gb/s@µs now, somewhere in the Tb/s@ns range in the next 5 years). Boot off an embedded chip, load the OS into RAM, and keep everything else in the storage row in the datacenter. Heck, given the rate at which networks are progressing, I could see a return to the Cray era, where we separate memory, storage, and processing into different physical systems.
 
I think it has to do with the ability to stack so many 2.5" drives side by side when installed vertically in a 2U chassis.

As for SSD's, we can already get 2TB in the M.2 format, and Intel has their 'ruler' form-factor coming, which I think will be perfect. Probably get 16TB per module just with today's technology.

For the future- I see 'compute units' expanding, but memory will likely be tightly coupled with CPUs. And given that booting from network is most certainly a thing, I'd bet that only a local hypervisor would be needed for the compute modules- and hell, that could be a USB stick or even a modern flash device. Sony's XQD format, used by high-end stills and video cameras, is straight up PCIe and plenty fast, while also being rugged like an SSD would be.
 
2.5" drives use less power. When you're running 10's or even 100's of these, the savings add up quick.
 
2.5" drives use less power. When you're running 10's or even 100's of these, the savings add up quick.
There's no real power savings. 2.5" drives use almost exactly half the power of 3.5" drives (0.51A vs 0.9A), and are racked almost exactly twice as densely (8 drives per unit vs 4). Furthermore, the additional heat generated by 2.5" drives means the CRAC unit has to work harder to cool the same number of rack units worth of storage, which actually increases the overall energy usage. It really doesn't make sense to stick with the 2.5" format for much longer.
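Running the numbers from that post (assuming the quoted currents are the 12V spindle figures and ignoring the 5V rail, so take it as rough):

# Per-rack-unit draw using the figures quoted above.
volts = 12.0
amps_25, drives_per_u_25 = 0.51, 8   # 2.5" drives
amps_35, drives_per_u_35 = 0.90, 4   # 3.5" drives

watts_per_u_25 = volts * amps_25 * drives_per_u_25
watts_per_u_35 = volts * amps_35 * drives_per_u_35
print(f'2.5": {watts_per_u_25:.1f} W per U, 3.5": {watts_per_u_35:.1f} W per U')
# 2.5": 49.0 W per U, 3.5": 43.2 W per U -- the 2.5" shelf actually draws a bit more per U.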

Personally, I think PCI-E NGSFF is our future:
[image: Supermicro SSG-1029P-NMR36L NGSFF server]
 

[attached image]
lol, didn't see that picture until now.. hope you bought them at the store like I did. Pretty sure the lady thought I was nuts when I asked how many 8TB WD Easystore drives they had in the back and she said there were 14 of them. But the best reaction from her was when I said I'd take them all.. she kept asking me if I was sure I needed that many external drives. :p At the end of the day I didn't need them and actually gave away and sold a bunch of them to people. But hey, it made my dream come true seeing it say 101TB free.. honestly never thought I'd see 100TB of storage be considered affordable this soon.

Nah, bought most of them online (maybe a handful in store). Was fun shucking them all; have enough USB3>SATA bridges and 12V 1.5A AC adapters for a couple lifetimes, hehe

I find it interesting that 2.5" drives are moving into the mass storage market now.

I've always been a little bit uncomfortable with them.

How do you find the array performs with them compared to traditional 3.5" drives?

When I recently upgraded my ZFS box, I went with 12x 10TB 7200rpm Seagate Enterprise drives. I wanted to make sure that the pool was responsive.

If 10k rpm drives still existed in the top size tiers, I would have seriously considered them.

I made a small array with 6x 4TB 2.5" 5400 RPM drives I shucked from Seagate externals ($100 at Costco), have them in a RAID6 and they have been performing great so far for what I needed... performance is actually better than I thought it would be (I guess the smaller platters, with less distance for the heads to travel, make up a bit for the slower spindle speed).
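For what it's worth, the basic latency math backs that up -- a rough sketch (Python, just illustrating the rotational-latency difference; seek times aren't modeled):

# Average rotational latency is the time for half a revolution.
def avg_rotational_latency_ms(rpm):
    return (60_000 / rpm) / 2   # 60,000 ms per minute / rpm = ms per revolution

for rpm in (5400, 7200):
    print(f"{rpm} rpm: {avg_rotational_latency_ms(rpm):.2f} ms avg rotational latency")
# 5400 rpm: 5.56 ms, 7200 rpm: 4.17 ms -- about 1.4 ms apart, and the shorter seeks
# on the smaller platters claw some of that back.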
 
There's no real power savings. 2.5" drives use almost exactly half the power of 3.5" drives (0.51A vs 0.9A), and are racked almost exactly twice as densely (8 drives per unit vs 4). Furthermore, the additional heat generated by 2.5" drives means the CRAC unit has to work harder to cool the same number of rack units worth of storage, which actually increases the overall energy usage. It really doesn't make sense to stick with the 2.5" format for much longer.

Personally, I think PCI-E NGSFF is our future:
View attachment 66118


Interesting.

I guess one should also mention that the largest capacity 3.5" drives tend to be more than double the capacity of the largest capacity 2.5" drives, so in order to get the same amount of total capacity, you'd need more than twice as many.

And if the rack density is only 2x that of 3.5" drives, then needing more than 2x as many drives means more rack space for the same max capacity.
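A quick illustration with made-up but plausible drive sizes for the time (the per-U densities are from the post above):

import math

target_tb = 1000            # arbitrary raw-capacity target
tb_35, per_u_35 = 14, 4     # 3.5": assumed max drive size / drives per rack unit
tb_25, per_u_25 = 5, 8      # 2.5": assumed max drive size / drives per rack unit

for name, tb, per_u in (('3.5"', tb_35, per_u_35), ('2.5"', tb_25, per_u_25)):
    drives = math.ceil(target_tb / tb)
    print(f"{name}: {drives} drives, {math.ceil(drives / per_u)} rack units")
# 3.5": 72 drives, 18 U
# 2.5": 200 drives, 25 U -- nearly 3x the drives and ~40% more rack space.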
 
Interesting.

I guess one should also mention that the largest capacity 3.5" drives tend to be more than double the capacity of the largest capacity 2.5" drives, so in order to get the same amount of total capacity, you'd need more than twice as many.

And if the rack density is only 2x that of 3.5" drives, then needing more than 2x as many drives means more rack space for the same max capacity.
Yup. I think that's a big reason why commercial SANs and NAS setups still use 3.5" drives for low-speed, low-cost, high-capacity storage. However, if NGSFF takes off, that won't be the case for much longer. At 16TB per drive and 32 drives per unit, it's denser than 3.5, and faster than 2.5, while being on par with current SSD pricing. Most of the places I work in use tiered storage solutions - Based on data access patterns, it's shuffled around between solid state, spinning rust, and even straight RAM. For the next decade or so, there will probably be a place for all the drives to co-exist. I personally don't see that lasting for much longer than that, though. By 2030, I'd be surprised if the 3.5 format's still around.
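(To make the tiering idea concrete, here's a toy sketch -- not any vendor's actual policy, just the general shape of shuffling data by access recency:)

from datetime import datetime, timedelta

def pick_tier(last_access: datetime, now: datetime) -> str:
    # Hotter data goes to faster media; the thresholds here are made up.
    age = now - last_access
    if age < timedelta(hours=1):
        return "RAM / NVMe cache"
    if age < timedelta(days=30):
        return "SSD tier"
    return "spinning rust (or archive)"

now = datetime.now()
print(pick_tier(now - timedelta(minutes=5), now))   # RAM / NVMe cache
print(pick_tier(now - timedelta(days=90), now))     # spinning rust (or archive)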
 
^ Very nice! How have those fat 4TB 2.5" Seagates been holding out so far?
Did you shuck them from externals (eg. Backup Plus)?

So, 20 months into NAS operations (nearly 100% uptime) with these shucked 4TB drives, I finally got my first drive with uncorrectable errors. Resilvering has been going for four hours; 4% complete, lol.

Edit: Resilver complete (72 hours later) and we are back at double redundancy :)
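For the curious, 72 hours for a 4TB member works out to a fairly modest average rate -- rough math below, assuming most of the disk had to be rewritten (ZFS only resilvers allocated blocks, so the real figure depends on how full the pool is):

bytes_rewritten = 4e12    # assume roughly the whole 4TB drive
hours = 72
mb_per_s = bytes_rewritten / (hours * 3600) / 1e6
print(f"{mb_per_s:.1f} MB/s average")   # ~15.4 MB/s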
 
Yup. I think that's a big reason why commercial SANs and NAS setups still use 3.5" drives for low-speed, low-cost, high-capacity storage. However, if NGSFF takes off, that won't be the case for much longer. At 16TB per drive and 32 drives per unit, it's denser than 3.5, and faster than 2.5, while being on par with current SSD pricing. Most of the places I work in use tiered storage solutions - Based on data access patterns, it's shuffled around between solid state, spinning rust, and even straight RAM. For the next decade or so, there will probably be a place for all the drives to co-exist. I personally don't see that lasting for much longer than that, though. By 2030, I'd be surprised if the 3.5 format's still around.

Well, you've inspired me :)

Building a 20TB m.2 NAS to play around with (sata, not NVME, because it is just for media).

Yeah.... This makes me feel like I am just lighting money on fire for fun, really, but hey... Science. Or something.
 
Well, you've inspired me :)

Building a 20TB m.2 NAS to play around with (sata, not NVME, because it is just for media).

Yeah.... This makes me feel like I am just lighting money on fire for fun, really, but hey... Science. Or something.
For great justice!

We need pics when done!
 
Well, you've inspired me :)

Building a 20TB m.2 NAS to play around with (sata, not NVME, because it is just for media).

Yeah.... This makes me feel like I am just lighting money on fire for fun, really, but hey... Science. Or something.


Tell me about it. It took me a while to get over how much money I lit on fire when I upgraded my 12x4TB WD Reds to my 12x10TB Seagate Enterprise drives.

It was a lot of cash. I'm glad I did though, as I would have been out of storage by now if I hadn't.

Hopefully the 10TB drives will be enough storage for another 4-5 years.
 
First look at the new SFF NAS (1u, 9" depth case).

Mirrored 250GB boot drives (total overkill)
10x 2TB data drives (will be in Z2)

If I can figure out how to fit a PCIe HBA back on top of the motherboard (right riser instead of left riser), there is room for quite a bit of expansion. Could theoretically fit 30x data drives in this setup (keeping the mirrored boot drives).

[photo of the 1U NAS build]
 
First look at the new SFF NAS (1u, 9" depth case).

Mirrored 250GB boot drives (total overkill)
10x 2TB data drives (will be in Z2)

If I can figure out how to fit a PCIe HBA back on top of the motherboard (right riser instead of left riser), there is room for quite a bit of expansion. Could theoretically fit 30x data drives in this setup (keeping the mirrored boot drives).

View attachment 150816
Dude! What are those? I want to do a silent setup just like it.
 
First look at the new SFF NAS (1u, 9" depth case).

Mirrored 250GB boot drives (total overkill)
10x 2TB data drives (will be in Z2)

If I can figure out how to fit a PCIe HBA back on top of the motherboard (right riser instead of left riser), there is room for quite a bit of expansion. Could theoretically fit 30x data drives in this setup (keeping the mirrored boot drives).

View attachment 150816

I just saw those drives and saw the price and wanted some, but they only had the 500GB. I guess you saw the same deal I did.
Love the setup.

Please do tell about those adapters.
 
Stink reviews.

Every 1-star review (except for one) was for the 2-drive raid adapter (which requires that your mobo have SATA port multiplication capability). The single one-star review for this quad pass-through adapter was by a guy trying to use an nvme drive (these are sata drives). The star average doesn't usually tell the whole story ;)

There is a 3-star review for this quad adapter where the guy mentions that his SATA power connector broke off. So, I will be gentle!
 
Stink reviews.

Stink "amazon" reviews mean jack shit.

Doing a setup like this is converting a single M.2 drive to a single SATA. It may not increase density and will be fairly expensive. Would probably be cheaper and easier to use plain SATA SSD's.

Although these do have a single power connector for four devices, so there's some simplification going for them.
 
First look at the new SFF NAS (1u, 9" depth case).

Mirrored 250GB boot drives (total overkill)
10x 2TB data drives (will be in Z2)

If I can figure out how to fit a PCIe HBA back on top of the motherboard (right riser instead of left riser), there is room for quite a bit of expansion. Could theoretically fit 30x data drives in this setup (keeping the mirrored boot drives).

View attachment 150816


I'm curious what you use that for. An SSD NAS seems pretty nuts to me, unless you are doing some pretty heavy duty nearline stuff.
 
I'm curious what you use that for. An SSD NAS seems pretty nuts to me, unless you are doing some pretty heavy duty nearline stuff.

In short: absolutely nothing that needs it.

I was bored with the current iteration (somewhere up in this thread). When I get bored, weird things happen, as shown by a long history of projects in the SFF sub forum, lol.

Ok, I'll throw myself a bone: moving another system from its own case into a 1u enclosure in the main rack. But that is pretty weak justification, hah!
 
Did a little reorganization:
- cut the PCBs on the quad adapters down to 120mm in length (instead of 140mm)
- drilled corner holes in the "far end" to line up with the ones at the connector end
- inverted the second set of drives

Now have 10x 2TB data drives and 2x 250GB OS drives in the space of a thick 3.5" hdd :D (with standoffs it is a full 1U; would need to order some custom height standoffs to fit a 2nd inverted drive set in there).

[photos of the stacked M.2 adapters]
 
Did a little reorganization:
- cut the PCBs on the quad adapters down to 120mm in length (instead of 140mm)
- drilled corner holes in the "far end" to line up with the ones at the connector end
- inverted the second set of drives

Now have 10x 2TB data drives and 2x 250GB OS drives in the space of a thick 3.5" hdd :D (with standoffs it is a full 1U; would need to order some custom height standoffs to fit a 2nd inverted drive set in there).

View attachment 151769

View attachment 151770


As cool as this is, it almost feels like a bit of a shame to limit M.2 drives to SATA bandwidth.

If I were to ever use M.2 drives in my NAS (that time may some day come) I'd probably be looking at getting some sort of server board with a crazy number of PCIe lanes instead.
 
Just doing teamed GbE right now. Next upgrade will be to SFP+ or QSFP+ (also for no good reason :p)
 
Just doing teamed GbE right now. Next upgrade will be to SFP+ or QSFP+ (also for no good reason :p)

I have a direct 10G BaseT copper link between my desktop and my NAS. It's been great.

Before it I used a Brocade SFP+ fiber direct link and it was garbage. Has turned me off from fiber for life.
 
Just doing teamed GbE right now. Next upgrade will be to SFP+ or QSFP+ (also for no good reason :p)
All that SSD love over 1Gbe LAGs? Like running on a blown hamstring...

Nice touch on the mounting though. I do hope you have direct airflow through those things. They will get plenty hot packed in that densely.
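Rough numbers on why the LAG is the bottleneck (the per-SSD figure is an assumed ballpark for SATA drives, and a single transfer usually hashes onto just one link of a LAG anyway):

def gbit_to_mbyte_per_s(gbit):
    return gbit * 1000 / 8   # ignores protocol overhead

links = {"1x GbE": 1, "2x GbE LAG": 2, "SFP+ (10GbE)": 10, "QSFP+ (40GbE)": 40}
ssd_pool_mb_s = 10 * 500     # 10 SATA SSDs x ~500 MB/s each (assumption)

for name, gbit in links.items():
    print(f"{name}: ~{gbit_to_mbyte_per_s(gbit):.0f} MB/s")
print(f"SSD pool, best-case sequential: ~{ssd_pool_mb_s} MB/s")
# Even 10GbE (~1250 MB/s) falls well short of what ten SATA SSDs can stream in aggregate.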
 