[H]ard Forum Storage Showoff Thread

Factum

2[H]4U
Joined
Dec 24, 2014
Messages
2,466
As I understand it, flash is not good for long-term storage. Bit rot or something like that. For me, the best long-term storage has been those little 2.5" SAS enterprise HDDs. I can set them on the shelf and forget about them. Have never had one fail to read when needed.

I can only speak for enterprise-grade stuff, but all the data I have seen says the opposite, even going back to the Google study from 2016... failure rate is much lower than with HDDs.
 

Factum

2[H]4U
Joined
Dec 24, 2014
Messages
2,466
Yes, the failure rate is much better, but data retention is another thing altogether.

We run RAID6 and have for years, never had any data loss...and we are talking HUGE amounts of data.
There is a reason we are ditching spindles as fast as we can.
 

mnewxcv

[H]F Junkie
Joined
Mar 4, 2007
Messages
8,783
Storage for chromebook/Ubuntu dual boot.
 

Attachments

  • 20200412_115424.jpg

ae00711

n00b
Joined
Feb 28, 2015
Messages
10
As I understand it, flash is not good for long-term storage. Bit rot or something like that. For me, the best long-term storage has been those little 2.5" SAS enterprise HDDs. I can set them on the shelf and forget about them. Have never had one fail to read when needed.

IIRC, that only applies if you unplug an SSD and let it sit for quite some time
 

Deadjasper

2[H]4U
Joined
Oct 28, 2001
Messages
2,246
IIRC, that only applies if you unplug an SSD and let it sit for quite some time

Yup. As long as it's regularly powered up all is good. It's when you park it on the shelf that the trouble lies.

Having said this, I have an old Dell RD1000 drive and 5 80GB cartridges. It's been sitting idle for at least 4 years. Wanted to see if the data was still there. It was. I don't remember how long data lasts on an SSD, but I remember reading it wasn't anywhere near as long as on a spinner. Of course, it was some time ago when I read this; things might have changed.
 

Phelptwan

Supreme [H]ardness
Joined
Jul 20, 2002
Messages
6,669
Latest rendition of server up and running:
Gigabyte x370 gaming k7
AMD 2700x
32gb FlareX (cause I can)
10 HGST Deskstar NAS 6tb
6 Seagate Ironwolf NAS 6tb

5WbYLti.jpg

One bigass pool, because I have a redundant server;
StorPool.jpg
 

ND40oz

[H]F Junkie
Joined
Jul 31, 2005
Messages
12,587
Are you just using storage spaces? How's performance with parity these days?
 

Phelptwan

Supreme [H]ardness
Joined
Jul 20, 2002
Messages
6,669
Are you just using storage spaces? How's performance with parity these days?
Yes, just Storage Spaces (no parity, just a big span). I get sustained transfer speeds anywhere between 200 and 250 MB/s. Not sure how that would compare to hardware RAID or ZFS.
 

ND40oz

[H]F Junkie
Joined
Jul 31, 2005
Messages
12,587
My performance has always been abysmal with parity SS. Read is fine, but write is just god awful

Yeah, I was hoping the latest changes to 2019 had changed that. Thinking of trying a 4x8TB and 4x4TB parity pool and seeing how it works.

Yes, just Storage Spaces (no parity, just a big span). I get sustained transfer speeds anywhere between 200 and 250 MB/s. Not sure how that would compare to hardware RAID or ZFS.

Ah, got you. How are you backing it up to the redundant server, just robocopy or something?
 

Phelptwan

Supreme [H]ardness
Joined
Jul 20, 2002
Messages
6,669
Ah, got you. How are you backing it up to the redundant server, just robocopy or something?

I have a batch file that launches OpenVPN, runs a robocopy /MIR script, and then kills the OpenVPN connection once completed. I set it up as a scheduled task.
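For readers who want to replicate that setup, here is a rough sketch of the same flow in Python rather than a batch file. The paths, config file name, and retry flags are placeholders, not the poster's actual script:

```python
import subprocess

def mirror_cmd(src, dst):
    # robocopy /MIR mirrors src onto dst (copies new/changed files, deletes extras);
    # /R and /W limit retries so one locked file doesn't stall the whole job
    return ["robocopy", src, dst, "/MIR", "/R:2", "/W:5"]

def run_backup(src, dst, vpn_config):
    """Bring the VPN up, mirror, then always tear the VPN down."""
    vpn = subprocess.Popen(["openvpn", "--config", vpn_config])
    try:
        # robocopy exit codes 0-7 mean success, so don't use check=True here
        subprocess.run(mirror_cmd(src, dst))
    finally:
        vpn.terminate()
```

Scheduling it is then just a Task Scheduler entry pointing at the script, same as with the batch file.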

Redundant server:
Gigabyte GA-F2A88X-D3H
AMD A6-6400k
16gb Generic ram
9211-16i in IT mode
11 Shucked WD EasyStore 10tb drives (white labels)

Forgot to add: the redundant server is off-site. Both locations have 1Gb fiber.
 

IdiotInCharge

NVIDIA SHILL
Joined
Jun 13, 2003
Messages
14,679
Yes, just storage spaces (no parity, just a big span). I get sustained transfer speeds anywhere between 200-250mb/s.
Single drive speeds... are single drive speeds ;)

Not sure how that would compare to Hardware RAID or ZFS.
That depends significantly on the RAID configuration. Even SS will do a decent RAID1, etc., and RAID1 would theoretically double your read speeds while maintaining write speeds.
 

ae00711

n00b
Joined
Feb 28, 2015
Messages
10
Ran out of space on my old SFF/SSD NAS, and my Avoton mobo finally bit the dust (the dreaded BMC/novideo issue). Instead of just getting a new mobo, I decided it was time for an upgrade; the old build had lasted 5 years through a couple of storage pool iterations.

New SFF NAS:
Sharing a twin-ITX 2U chassis with my edge device
Supermicro X10SDV-6C+-TLN4F
64GB DDR4-2133 ECC
Pico PSU 150
970 evo plus 250gb boot drive / cache (yeah, way overkill)
9305-24i
3x vdevs of Z2 goodness (each 8x 2TB WD Blue m.2 SATA drives) + 2 hot spares

31 TiB available



View attachment 218544
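The 31 TiB figure in the quoted build roughly checks out: each 8-wide RAID-Z2 vdev gives 6 drives of data, and ZFS reserves a bit for slop space and metadata. A back-of-envelope sketch, not exact ZFS accounting:

```python
TB, TiB = 10**12, 2**40

vdevs, width, parity, drive_tb = 3, 8, 2, 2
data_bytes = vdevs * (width - parity) * drive_tb * TB
print(data_bytes / TiB)  # ~32.7 TiB of raw data capacity
# minus ZFS slop space and metadata overhead, ~31 TiB usable is in the right ballpark
```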

forgot to ask, and sorry if you've mentioned this already.. what's the power draw, idle, logged into OS?
 

Machupo

Gravity Tester
Joined
Nov 14, 2004
Messages
5,480
forgot to ask, and sorry if you've mentioned this already.. what's the power draw, idle, logged into OS?

Unfortunately, I'm on the other side of the world from the system and didn't think to write it down anywhere. It ran fine off a 60W Pico PSU when I had it sharing a brick with my edge device. I switched it to the 150 because I thought the spontaneous rebooting issue I was having might have been related to pulling too much power, but it turned out to be a bug in FreeNAS 11.3 (described in a post somewhere in this thread regarding temperature SMART reporting on SSDs), and I never got around to swapping it back onto the 60W PSU.
 

longblock454

2[H]4U
Joined
Nov 28, 2004
Messages
2,248
Revive!

Going through some boxes during downtime, I ran across three M1015 cards I no longer use, decommissioned on my last upgrade thanks to the availability of >14TB spinners. It brought back memories of this thread, which is in need of some new attention/posts!

Any noteworthy upgrades since May?
 

Deadjasper

2[H]4U
Joined
Oct 28, 2001
Messages
2,246
I'm slowly upgrading my server farm. I started out with mostly SM X8 MBs. Now I'm down to 2 servers left with X8DTH-F MBs, which are probably the best X8 boards SM ever made, so I'll keep those. I have an X9SRH-7F on the way that will go in a backup chassis, and also a Xeon E5 2680 v4 on the way. Thinking about converting the system it's going in (X10SRL-F) into my everyday driver.

Recently ordered a rear window for a SM SC825TQ chassis so I could upgrade the MB from an old X8DTU-F to an X9DRW-IF. I should not have bought this chassis way back when; I didn't realize it was designed for a proprietary MB.

Also like to mention that if you need Supermicro chassis parts or any other non-commodity part, chances are they will be much cheaper direct from SM. The cheapest I could find the window via Google was $89 plus shipping; the price direct from SM was $29.59 delivered. You gotta jump through a few hoops, as SM has no system for the little guy to order parts, so they use the RMA system. It's easy once you know what to do, and they are very helpful and friendly on the phone and over email.

Also like to mention that only commodity items are available in their eStore, no parts.
 

86 5.0L

Supreme [H]ardness
Joined
Nov 13, 2006
Messages
7,071
Got a 34TB array I might upgrade to a 56TB array with a new 10-gig enclosure, but my current stuff is working well.
 

xtream1101

n00b
Joined
Dec 17, 2014
Messages
9
longblock454 I have not been here in a while, but got an email notification about this thread because of your post, and damn, it brings back some memories. I posted in this thread way back in the day (https://hardforum.com/threads/h-ard-forum-storage-showoff-thread.1847026/#post-1041328562)

My setup has changed a lot since then, so here is what I have now...

Server 1:
Freenas Server for my important data
Intel Xeon E5-2630 v3 @ 2.40GHz
64GB memory
ZFS array with 8x 8TB HDDs in mirrored pairs
10Gb networking

Server 2:
Proxmox server
Running a mix of LXCs & VMs depending on the workload
Intel Xeon E3-1225 v6 @ 3.30GHz
32GB memory
ZFS array with 4x 1TB SSDs in mirrored pairs
10Gb networking

Server 3:
Proxmox server
Running a mix of LXCs & VMs depending on the workload
2x Intel Xeon X5690 @ 3.47GHz
96GB memory
ZFS array with 4x 1TB SSDs in mirrored pairs
10Gb networking

Server 4:
OpenMediaVault
media storage/playback/downloading
Chenbro NR40700 4U 48-bay (top load)
Intel i5-9600K @ 3.70GHz
Quadro P2000 (for transcoding)
32GB memory
100GB SSD for OS
2x 1TB SSDs in ZFS mirror for app data
Main storage array uses SnapRAID and MergerFS
19x 16TB HDDs
3x 10TB HDDs
3x 8TB HDDs


I also have 24x 4TB HDDs and a handful of 6TB HDDs I need to do something with.
 

Zarathustra[H]

Extremely [H]
Joined
Oct 29, 2000
Messages
33,809
I have to admit, upgrading the old pool always makes me a little giddy.

I got a pair of new old stock 280GB Optane 900p's to replace my aging slow SATA S3700 SLOG drives:

1626736375943.png


I'm a little bit surprised how long the replace resilver takes for log drives, though. Since they essentially only ever store a second's worth of write data, I expected this to be quick! I guess it has to read through the entire pool just to be sure, for some reason.
 

almighty

Limp Gawd
Joined
Apr 20, 2006
Messages
344
I have to admit, upgrading the old pool always makes me a little giddy.

I got a pair of new old stock 280GB Optane 900p's to replace my aging slow SATA S3700 SLOG drives:

Where did you get the drives and at what cost? Been trying to find a pair for under $400 total.
 

Zarathustra[H]

Extremely [H]
Joined
Oct 29, 2000
Messages
33,809
Where did you get the drives and at what cost? Been trying to find a pair for under $400 total.

I picked up a couple (280GB, U.2 version) from a Taiwanese Newegg marketplace seller for $229 each. A little more than your target price, but I was happy with it. I am overhauling the server, so it is easier to do it now than later.
 

Zarathustra[H]

Extremely [H]
Joined
Oct 29, 2000
Messages
33,809
Side note:

This upgrade is turning out to be a little bit of a pain. A U.2 drive is SIGNIFICANTLY thicker than any of my SATA SSDs, and will not fit in the 2.5" trays I had for them...

Going to have to get creative when it comes to mounting...

That said, I have to admit I am impressed with Intel on these.

Not only did the drives come with the required cables (which are expensive, and which I bought in advance and can't return because I expected to have to supply them myself), but they also came with screws to install them. They even included 5 screws per drive, just in case you lose one.

That was a very nice touch, and I wish more hardware vendors would do it.
 
Last edited:

Spartacus09

[H]ard|Gawd
Joined
Apr 21, 2018
Messages
1,871
Latest rendition of server up and running:
Gigabyte x370 gaming k7
AMD 2700x
32gb FlareX (cause I can)
10 HGST Deskstar NAS 6tb
6 Seagate Ironwolf NAS 6tb

View attachment 238256

One bigass pool, because I have a redundant server;
View attachment 238255

Oooo, what case is that? Looks bigger than the Antec Twelve Hundreds I'm using right now, and I need an extra bay.

Side note:

This upgrade is turning out to be a little bit of a pain. A U.2 drive is SIGNIFICANTLY thicker than any of my SATA SSDs, and will not fit in the 2.5" trays I had for them...

Going to have to get creative when it comes to mounting...

That said, I have to admit I am impressed with Intel on these.

Not only did the drives come with the required cables (which are expensive, and which I bought in advance and can't return because I expected to have to supply them myself), but they also came with screws to install them. They even included 5 screws per drive, just in case you lose one.

That was a very nice touch, and I wish more hardware vendors would do it.
I got a quad set of HGST SAS SSDs that had the same issue; they were 15mm-thick bois and ran HOT AF (50-60°C with no airflow, just sitting in the case).
Ended up getting this Supermicro adapter that worked great: https://www.supermicro.com/en/products/accessories/mobilerack/CSE-M14TQC.php
It was loud AF tho, so I stuck a Noctua fan resistor on it to reduce the speed and can barely hear it now. They don't have any U.2 options yet, unfortunately.
 

Zarathustra[H]

Extremely [H]
Joined
Oct 29, 2000
Messages
33,809
Oooo, what case is that? Looks bigger than the Antec Twelve Hundreds I'm using right now, and I need an extra bay.


I got a quad set of HGST SAS SSDs that had the same issue; they were 15mm-thick bois and ran HOT AF (50-60°C with no airflow, just sitting in the case).
Ended up getting this Supermicro adapter that worked great: https://www.supermicro.com/en/products/accessories/mobilerack/CSE-M14TQC.php
It was loud AF tho, so I stuck a Noctua fan resistor on it to reduce the speed and can barely hear it now. They don't have any U.2 options yet, unfortunately.

Yep. These were 15 mm as well.

I wound up removing the bracket from the case and bending back the drive dividers to make them fit:

PXL_20210720_003549683.jpg


Sustained sync writes over NFS over 10Gig Ethernet hover between 1.0 and 1.2 GB/s.

Not bad. No need to disable sync writes for performance on non-critical file systems anymore!

Only thing I don't like about the U.2 form factor is how easily the connectors become disconnected. They are worse than SATA power. I'd install the drives, then do some minor cable management, and they would become disconnected again.
 

Spartacus09

[H]ard|Gawd
Joined
Apr 21, 2018
Messages
1,871
Yep. These were 15 mm as well.

I wound up removing the bracket from the case and bending back the drive dividers to make them fit:



Sustained sync writes over NFS over 10Gig Ethernet hover between 1.0 and 1.2 GB/s.

Not bad. No need to disable sync writes for performance on non-critical file systems anymore!

Only thing I don't like about the U.2 form factor is how easily the connectors become disconnected. They are worse than SATA power. I'd install the drives, then do some minor cable management, and they would become disconnected again.
Nice, I'm way too cheap to go U.2 yet. My stuff is pretty much all media, so very little needs anything fast, and the stuff that does? I've got 512GB of ramdisk available :D
I'm still rolling the X10 with the V4 xeons but it works well for what I'm doing.
 

Zepher

[H]ipster Replacement
Joined
Sep 29, 2001
Messages
19,380
Migrated my Plex Server into my 20 year old YY-0221 Cube Case and purchased a couple of ICY Dock 5in3 Hard Drive bays.
I currently have 8 drives + an SSD for 96TB of storage.

IMG_1197.JPEG


Motherboard side, Asus Sabertooth Z87 with an i7 4770, 16GB ram, 2.5GbE NIC, LSI card in IT mode, and a new Noctua 92mm Chromax Cooler
IMG_1195.JPEG


Hard Drive and PSU side, I can fit up to 20 drives in the machine, but running all the cables would be a pain since it's so tight already.
IMG_1196.JPEG


This was the machine back in 2006 or so, might have been 2TB across the 8 drives.
Also this was the time when SATA drives had both SATA power and Molex power connectors. That is why 3 of the drives look like they aren't plugged in to power.
8-drives-2.jpg
 
Last edited:

robbiekhan

Limp Gawd
Joined
Apr 13, 2004
Messages
464
With Gigabit internet I decided to ditch the bulk of my media storage drives and go 99% solid state, since anything I need can be downloaded quickly or just streamed. So now my storage is an 8TB Samsung SATA SSD, a 1TB Samsung NVMe, and a 5TB USB 3 WD Passport drive I use to synchronise the 8TB SSD and a 1TB Windows VHD system image on a monthly basis. Once the 8TB starts to fill up, I'll get another 8TB SSD and replace the 5TB USB HDD with the new SSD in a USB 3.1 caddy.

It's quite a joy not hearing the spinning of a few WD Reds in the case now, near total silence
 

SamirD

Supreme [H]ardness
Joined
Mar 22, 2015
Messages
5,088
With Gigabit internet I decided to ditch the bulk of my media storage drives and go 99% solid state, since anything I need can be downloaded quickly or just streamed. So now my storage is an 8TB Samsung SATA SSD, a 1TB Samsung NVMe, and a 5TB USB 3 WD Passport drive I use to synchronise the 8TB SSD and a 1TB Windows VHD system image on a monthly basis. Once the 8TB starts to fill up, I'll get another 8TB SSD and replace the 5TB USB HDD with the new SSD in a USB 3.1 caddy.

It's quite a joy not hearing the spinning of a few WD Reds in the case now, near total silence
So what did you do with the drives? Just keeping them as backup? (that's what I would do.)
 

robbiekhan

Limp Gawd
Joined
Apr 13, 2004
Messages
464
So what did you do with the drives? Just keeping them as backup? (that's what I would do.)
I had no need for them anymore, really. There were 13TB in total; I wiped them over the course of a week (DBAN, natch) and sold them on Facebook Marketplace, where they were picked up within a day or two of the ads going up lol.

Once the 5TB USB drive is too small for my backup needs I'll do the same with that too as mentioned and then be 100% solid state for storage and backup!
 

SamirD

Supreme [H]ardness
Joined
Mar 22, 2015
Messages
5,088
I had no need for them anymore, really. There were 13TB in total; I wiped them over the course of a week (DBAN, natch) and sold them on Facebook Marketplace, where they were picked up within a day or two of the ads going up lol.

Once the 5TB USB drive is too small for my backup needs I'll do the same with that too as mentioned and then be 100% solid state for storage and backup!
Pretty cool. (y) Be sure to post up if you ever run into any issues with the SSDs, as I don't think there's been a lot of testing done on long-term data retention for SSDs that are heavy on reads.
 

robbiekhan

Limp Gawd
Joined
Apr 13, 2004
Messages
464
Pretty cool. (y) Be sure to post up if you ever run into any issues with the SSDs, as I don't think there's been a lot of testing done on long-term data retention for SSDs that are heavy on reads.

Hmm, I can only speak from my own experience in the reads/writes department, as I have been steadily upgrading the OS SSDs on the same Windows install via cloning for years, ever since SSDs became a mass-market thing. The last SATA SSD I had on this Windows install was an Intel 730 series 480GB (Skulltrail controller), and that had over 250TB of total writes and much more in reads. Intel's product page rates it at 70GB of writes per day, for reference, and the drive was pretty much in 24/7 use (the PC never turns off). Intel SSD Tools was telling me it still had over 90% health after 5 years of usage by the time I sold it and moved to NVMe.
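For scale, the numbers in that post work out to roughly double the rated workload. This is just arithmetic on the figures stated above, nothing more:

```python
total_writes_tb = 250  # reported lifetime host writes
years = 5

# average daily writes over the drive's time in service
gb_per_day = total_writes_tb * 1000 / (years * 365)
print(round(gb_per_day))  # ~137 GB/day, roughly double the 70 GB/day rating
```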

I had also read about long-term data retention on SSDs when not powered up, and apparently it's been a talking point for years even though no SSD maker mentions it on their product pages, so I just could not believe that leaving an SSD off for, say, 2 months could result in data loss. I've had old SSDs in the past that I simply plugged in via a SATA-to-USB adapter, only used every few months, with no problems. Likewise, PCs at work and laptops that were upgraded to SSDs sat on a shelf for months on end yet still booted and were usable when needed, so I was always a bit dubious about the data retention claims, at least as far as decent brand/model SSDs go.
 

SamirD

Supreme [H]ardness
Joined
Mar 22, 2015
Messages
5,088
Hmm, I can only speak from my own experience in the reads/writes department, as I have been steadily upgrading the OS SSDs on the same Windows install via cloning for years, ever since SSDs became a mass-market thing. The last SATA SSD I had on this Windows install was an Intel 730 series 480GB (Skulltrail controller), and that had over 250TB of total writes and much more in reads. Intel's product page rates it at 70GB of writes per day, for reference, and the drive was pretty much in 24/7 use (the PC never turns off). Intel SSD Tools was telling me it still had over 90% health after 5 years of usage by the time I sold it and moved to NVMe.

I had also read about long-term data retention on SSDs when not powered up, and apparently it's been a talking point for years even though no SSD maker mentions it on their product pages, so I just could not believe that leaving an SSD off for, say, 2 months could result in data loss. I've had old SSDs in the past that I simply plugged in via a SATA-to-USB adapter, only used every few months, with no problems. Likewise, PCs at work and laptops that were upgraded to SSDs sat on a shelf for months on end yet still booted and were usable when needed, so I was always a bit dubious about the data retention claims, at least as far as decent brand/model SSDs go.
Yeah, I too have heard about these retention issues, but I think it may be more related to bit rot issues than flat out data loss. Either way, another 5 years or so and all these unknowns will be pretty well known. :)
 

Zarathustra[H]

Extremely [H]
Joined
Oct 29, 2000
Messages
33,809
Pretty cool. (y) Be sure to post up if you ever run into any issues with the SSDs, as I don't think there's been a lot of testing done on long-term data retention for SSDs that are heavy on reads.

I don't think this will be an issue at all.

Mostly, unless you do something stupid, like use QLC drives, write endurance in most cases is a problem of the past.

I just took some Samsung 840 Pro SATA SSDs, which had been used as cache devices in my storage pool under near-constant writes for over 8 years, out of my server, and they still had 35% life left according to SMART stats. They would have lasted ~12 years had I gone for gold.
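The ~12-year figure follows from a straight linear extrapolation of the SMART wear number, assuming the write rate stays constant:

```python
years_in_service = 8
life_left_pct = 35

# 65% of rated endurance consumed in 8 years -> total lifetime at the same rate
total_years = years_in_service / ((100 - life_left_pct) / 100)
print(round(total_years, 1))  # ~12.3 years
```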

And those drives were from before Samsung started using 3D NAND which drastically improved write endurance.

For normal use storage drives, anything modern and TLC should do the trick.

For heavy read write anything modern and MLC should be fine.

These things will simply become obsolete before they run out of write endurance.

One real problem with using Flash for long term storage is that cells lose charge over time, so data which is written once and left for a long time could theoretically be lost, but modern SSD controllers take this into account and use idle time to scan and read and rewrite cells that have lost too much charge before data is lost.

I wouldn't use an SSD (or USB stick) as a long term offline backup though. If it sits disconnected from power for a long time, the cell states will degrade, and the controller won't be powered up to maintain them. This is almost guaranteed to result in data loss if enough time passes.
 

robbiekhan

Limp Gawd
Joined
Apr 13, 2004
Messages
464
My 8TB SSD is a Samsung 870 QVO, so QLC 3D V-NAND or whatever the terminology is for them. It has an 80GB write cache, though, so as long as I'm not writing a chunk of data larger than 80GB in one go, it won't slow down to around 120MB/s write speed during that operation. In any benchmark I run, the read and write speeds are over 500MB/s and match Samsung's stated speeds. Granted, the lower-capacity models have a much smaller cache, and as such the write speed drops even further if that smaller buffer is maxed.
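As a sketch of what that cache behavior means for one sustained transfer, using the rough speeds from this post (the drive's real cache-folding behavior is more complicated than this two-speed model):

```python
cache_gb = 80              # SLC-style write cache capacity
fast, slow = 0.53, 0.12    # GB/s in-cache vs post-cache, approximate

def write_seconds(total_gb):
    # time to write total_gb in one burst: cache absorbs the first 80GB,
    # anything beyond that goes at the slower direct-to-NAND rate
    in_cache = min(total_gb, cache_gb)
    return in_cache / fast + max(total_gb - cache_gb, 0) / slow

print(round(write_seconds(60)))   # fits in cache, full speed the whole way
print(round(write_seconds(200)))  # the 120GB overflow dominates the total time
```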

Granted, the 870 QVO is the 2nd-gen QLC vs the previous ones, so it does have some improvements. I was iffy about getting it initially and spent days watching various reviews and benchmarks to determine that it was the right choice. The only other alternative was a Sabrent 8TB NVMe, but that is over double the price!
 

SamirD

Supreme [H]ardness
Joined
Mar 22, 2015
Messages
5,088
I don't think this will be an issue at all.

Mostly, unless you do something stupid, like use QLC drives, write endurance in most cases is a problem of the past.

I just took some Samsung 840 Pro SATA SSDs, which had been used as cache devices in my storage pool under near-constant writes for over 8 years, out of my server, and they still had 35% life left according to SMART stats. They would have lasted ~12 years had I gone for gold.

And those drives were from before Samsung started using 3D NAND which drastically improved write endurance.

For normal use storage drives, anything modern and TLC should do the trick.

For heavy read write anything modern and MLC should be fine.

These things will simply become obsolete before they run out of write endurance.

One real problem with using Flash for long term storage is that cells lose charge over time, so data which is written once and left for a long time could theoretically be lost, but modern SSD controllers take this into account and use idle time to scan and read and rewrite cells that have lost too much charge before data is lost.

I wouldn't use an SSD (or USB stick) as a long term offline backup though. If it sits disconnected from power for a long time, the cell states will degrade, and the controller won't be powered up to maintain them. This is almost guaranteed to result in data loss if enough time passes.
Yep, and this is pretty much what I've read (and seen on here) about the loss over time. It will be interesting to see if that changes in the future and if just idling is enough to keep things 100% over many years.
 

ND40oz

[H]F Junkie
Joined
Jul 31, 2005
Messages
12,587
I take back some of the info in my last post! I had to fact-check myself lol and looked at the Samsung website again; the 870 QVO uses MLC chips:
"Samsung V-NAND 4bit MLC" - https://www.samsung.com/uk/memory-storage/sata-ssd/ssd-870-qvo-sata-3-2-5-inch-8tb-mz-77q8t0bw/

So better than I thought lol. For some reason I thought the QVO models were Q because they used QLC flash storage.

"4-bit MLC" is QLC.

https://www.anandtech.com/show/15887/the-samsung-870-qvo-1tb-4tb-ssd-review-qlc-refreshed
 