[H]ard Forum Storage Showoff Thread

Factum

2[H]4U
Joined
Dec 24, 2014
Messages
2,319
As I understand it, flash is not good for long-term storage. Bit rot or something like that. For me, the best long-term storage has been those little 2.5" SAS enterprise HDs. I can set them on the shelf and forget about them. Have never had one fail to read when needed.
I can only speak for enterprise grade stuff, but all the data I have seen says the opposite, going back to the Google study from 2016... failure rate is much lower than with HDDs.
 

Factum

2[H]4U
Joined
Dec 24, 2014
Messages
2,319
Yes, failure rate is much better but data retention is another thing altogether.
We have run RAID6 for years and never had any data loss... and we are talking HUGE amounts of data.
There is a reason we are ditching spindles as fast as we can.
 

ae00711

n00b
Joined
Feb 28, 2015
Messages
10
As I understand it, flash is not good for long-term storage. Bit rot or something like that. For me, the best long-term storage has been those little 2.5" SAS enterprise HDs. I can set them on the shelf and forget about them. Have never had one fail to read when needed.
iirc, that only applies if you unplug an SSD and let it sit for quite some time
 

Deadjasper

[H]ard|Gawd
Joined
Oct 28, 2001
Messages
1,816
iirc, that only applies if you unplug an SSD and let it sit for quite some time
Yup. As long as it's regularly powered up, all is good. It's when you park it on the shelf that the trouble starts.

Having said this, I have an old Dell RD1000 drive and five 80GB cartridges. It had been sitting idle for at least 4 years, and I wanted to see if the data was still there. It was. I don't remember how long data lasts on an SSD, but I remember reading it wasn't anywhere near as long as on a spinner. Of course, that was some time ago; things might have changed.
 

Phelptwan

Supreme [H]ardness
Joined
Jul 20, 2002
Messages
6,548
Latest rendition of server up and running:
Gigabyte X370 Gaming K7
AMD Ryzen 7 2700X
32GB G.Skill Flare X (because I can)
10x HGST Deskstar NAS 6TB
6x Seagate IronWolf NAS 6TB

5WbYLti.jpg

One bigass pool, because I have a redundant server:
StorPool.jpg
 

ND40oz

[H]F Junkie
Joined
Jul 31, 2005
Messages
11,881
Are you just using Storage Spaces? How's performance with parity these days?
 

Phelptwan

Supreme [H]ardness
Joined
Jul 20, 2002
Messages
6,548
Are you just using Storage Spaces? How's performance with parity these days?
Yes, just Storage Spaces (no parity, just a big span). I get sustained transfer speeds anywhere between 200-250MB/s. Not sure how that would compare to hardware RAID or ZFS.
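
One crude way to get comparable numbers between setups is robocopy's own ending summary, which reports average throughput; a quick sketch (the paths here are hypothetical):

@echo off
rem Copy a folder of large files onto the pool with unbuffered I/O (/J,
rem recommended for big sequential transfers) and skip the job header (/NJH);
rem the summary at the end reports average speed in Bytes/sec and MegaBytes/min
robocopy "C:\bigfiles" "D:\speedtest" /J /NJH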
 

ND40oz

[H]F Junkie
Joined
Jul 31, 2005
Messages
11,881
My performance has always been abysmal with parity SS. Read is fine, but write is just god-awful.
Yeah, I was hoping the latest changes to 2019 had fixed that; thinking of trying a 4x8TB and 4x4TB parity pool to see how it works.
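
If it helps anyone, a rough sketch of spinning up a parity pool to test with, as a batch file calling the stock Storage Spaces cmdlets (the FriendlyName values are placeholders, and it grabs every poolable disk, so only run it on a test box):

@echo off
rem Pool every disk Windows reports as poolable (run from an elevated prompt)
powershell -NoProfile -Command "New-StoragePool -FriendlyName 'TestPool' -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks (Get-PhysicalDisk -CanPool $true)"
rem Carve a single parity space out of the whole pool
powershell -NoProfile -Command "New-VirtualDisk -StoragePoolFriendlyName 'TestPool' -FriendlyName 'ParitySpace' -ResiliencySettingName Parity -UseMaximumSize"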

Yes, just Storage Spaces (no parity, just a big span). I get sustained transfer speeds anywhere between 200-250MB/s. Not sure how that would compare to hardware RAID or ZFS.
Ah, got you. How are you backing it up to the redundant server, just robocopy or something?
 

Phelptwan

Supreme [H]ardness
Joined
Jul 20, 2002
Messages
6,548
Ah, got you. How are you backing it up to the redundant server, just robocopy or something?
I have a batch file that launches OpenVPN, runs a robocopy /MIR script, and then kills the OpenVPN connection once it completes. I set it as a scheduled task.
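
For anyone curious, a minimal sketch of what such a batch file can look like (the paths, .ovpn config name, and remote share below are placeholders, not my actual setup):

@echo off
rem Bring up the VPN tunnel to the off-site box (config name is hypothetical)
start "" "C:\Program Files\OpenVPN\bin\openvpn.exe" --config "C:\scripts\offsite.ovpn"

rem Give the tunnel a moment to come up before copying
timeout /t 30 /nobreak

rem /MIR mirrors the source to the destination, deleting remote files that
rem no longer exist locally; retry twice, wait 10s between retries
robocopy "D:\Pool" "\\192.168.100.10\Pool" /MIR /R:2 /W:10 /LOG:C:\scripts\mirror.log

rem Tear the tunnel down once the copy finishes
taskkill /IM openvpn.exe /F

Scheduling it nightly is then a one-liner:

schtasks /Create /TN "OffsiteMirror" /TR "C:\scripts\mirror.bat" /SC DAILY /ST 02:00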

Redundant server:
Gigabyte GA-F2A88X-D3H
AMD A6-6400K
16GB generic RAM
LSI 9211-16i in IT mode
11x shucked WD EasyStore 10TB drives (white label)

Forgot to add, the redundant server is off-site. Both locations have 1Gb fiber.
 

IdiotInCharge

[H]F Junkie
Joined
Jun 13, 2003
Messages
14,497
Yes, just Storage Spaces (no parity, just a big span). I get sustained transfer speeds anywhere between 200-250MB/s.
Single drive speeds... are single drive speeds ;)

Not sure how that would compare to Hardware RAID or ZFS.
That depends significantly on the RAID configuration. Even SS will do a decent RAID1, etc., and RAID1 would theoretically double your read speeds while maintaining write speeds.
 

ae00711

n00b
Joined
Feb 28, 2015
Messages
10
Ran out of space on my old SFF/SSD NAS, and my Avoton mobo (which had lasted 5 years through a couple of storage pool iterations) finally bit the dust (the dreaded BMC/no-video issue). Instead of just getting a new mobo, I decided it was time for an upgrade.

New SFF NAS:
Sharing a twin-ITX 2U chassis with my edge device
Supermicro X10SDV-6C+-TLN4F
64GB DDR4-2133 ECC
Pico PSU 150
Samsung 970 EVO Plus 250GB boot drive / cache (yeah, way overkill)
LSI 9305-24i
3x RAIDZ2 vdevs (each 8x 2TB WD Blue M.2 SATA drives) + 2 hot spares

31 TiB available



View attachment 218544
Forgot to ask, and sorry if you've mentioned this already... what's the power draw at idle, logged into the OS?
 

Machupo

Gravity Tester
Joined
Nov 14, 2004
Messages
5,179
Forgot to ask, and sorry if you've mentioned this already... what's the power draw at idle, logged into the OS?
Unfortunately, I'm on the other side of the world from the system and didn't think to write it down anywhere. It ran fine off a 60W pico PSU when I had it sharing a brick with my edge device. I switched it to the 150 because I thought the spontaneous rebooting issue I was having might have been related to pulling too much power, but it turned out to be a bug in FreeNAS 11.3 (described in a post somewhere in this thread regarding SMART temperature reporting on SSDs), and I never got around to swapping it back onto the 60W PSU.
 