[H]ard Forum Storage Showoff Thread

Factum

2[H]4U
Joined
Dec 24, 2014
Messages
2,466
As I understand it, flash is not good for long term storage. Bit rot or something like that. For me, the best long term storage has been those little 2.5" SAS enterprise HD's. I can set them on the shelf and forget about them. Have never had one fail to read when needed.

I can only speak for enterprise grade stuff, but all the data I have seen says the opposite, going back to the Google study from 2016...failure rate is much lower than with HDDs.
 

Factum

2[H]4U
Joined
Dec 24, 2014
Messages
2,466
Yes, failure rate is much better but data retention is another thing altogether.

We run RAID6 and have for years, never had any data loss...and we are talking HUGE amounts of data.
There is a reason we are ditching spindles as fast as we can.
 

mnewxcv

[H]F Junkie
Joined
Mar 4, 2007
Messages
8,269
Storage for chromebook/Ubuntu dual boot.
 

Attachments

  • 20200412_115424.jpg

ae00711

n00b
Joined
Feb 28, 2015
Messages
10
As I understand it, flash is not good for long term storage. Bit rot or something like that. For me, the best long term storage has been those little 2.5" SAS enterprise HD's. I can set them on the shelf and forget about them. Have never had one fail to read when needed.

iirc, that only applies if you unplug a SSD and let it sit for quite some time
 

Deadjasper

[H]ard|Gawd
Joined
Oct 28, 2001
Messages
1,862
iirc, that only applies if you unplug a SSD and let it sit for quite some time

Yup. As long as it's regularly powered up, all is good. It's when you park it on the shelf that trouble arises.

Having said this, I have an old Dell RD1000 drive and 5 80GB cartridges. It's been sitting idle for at least 4 years. Wanted to see if the data was still there. It was. Don't remember how long data lasts on an SSD but I remember reading it wasn't anywhere near as long as a spinner. Of course it was some time ago when I read this. Things might have changed.
 

ND40oz

[H]F Junkie
Joined
Jul 31, 2005
Messages
12,071
Are you just using storage spaces? How's performance with parity these days?
 

Phelptwan

Supreme [H]ardness
Joined
Jul 20, 2002
Messages
6,591
Are you just using storage spaces? How's performance with parity these days?
Yes, just storage spaces (no parity, just a big span). I get sustained transfer speeds anywhere between 200-250 MB/s. Not sure how that would compare to hardware RAID or ZFS.
 

ND40oz

[H]F Junkie
Joined
Jul 31, 2005
Messages
12,071
My performance has always been abysmal with parity SS. Read is fine, but write is just god awful

Yeah, I was hoping the latest changes to Server 2019 had fixed that; thinking of trying a 4x8TB and 4x4TB parity pool to see how it works.

Yes, just storage spaces (no parity, just a big span). I get sustained transfer speeds anywhere between 200-250 MB/s. Not sure how that would compare to hardware RAID or ZFS.

Ah got you, how are you backing it up to the redundant server, just robocopy or something?
 

Phelptwan

Supreme [H]ardness
Joined
Jul 20, 2002
Messages
6,591
Ah got you, how are you backing it up to the redundant server, just robocopy or something?

I have a batch file that launches OpenVPN, runs a robocopy /mir script, and then kills the OpenVPN connection once completed. I set it as a scheduled task.
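For anyone wanting to set up something similar, the batch file described above would look roughly like this. Note this is a hypothetical sketch, not the actual script — all the paths, the .ovpn config name, and the remote address are placeholders:

```bat
@echo off
rem Hypothetical sketch of a VPN-then-mirror backup job.
rem All paths and the remote share address are placeholders.

rem Bring up the VPN tunnel with a saved client config
start "" "C:\Program Files\OpenVPN\bin\openvpn.exe" --config "C:\vpn\backup-site.ovpn"

rem Give the tunnel a moment to establish
timeout /t 30 /nobreak

rem Mirror the local array to the off-site server.
rem /MIR deletes remote files no longer present locally -- a true mirror.
robocopy D:\storage \\192.168.100.2\backup /MIR /R:2 /W:5 /LOG:C:\logs\backup.log

rem Tear down the VPN once the copy completes
taskkill /IM openvpn.exe /F
```

Pointed at by a Task Scheduler job, this gives a hands-off nightly off-site mirror.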

Redundant server:
Gigabyte GA-F2A88X-D3H
AMD A6-6400K
16GB generic RAM
9211-16i in IT mode
11 shucked WD EasyStore 10TB drives (white labels)

Forgot to add, the redundant server is off-site. Both locations are on 1Gb fiber.
 

IdiotInCharge

NVIDIA SHILL
Joined
Jun 13, 2003
Messages
14,710
Yes, just storage spaces (no parity, just a big span). I get sustained transfer speeds anywhere between 200-250 MB/s.
Single drive speeds... are single drive speeds ;)

Not sure how that would compare to Hardware RAID or ZFS.
That depends significantly on the RAID configuration. Even SS will do a decent RAID1 etc, and RAID1 would double your read speeds while maintaining write speeds, theoretically.
 

ae00711

n00b
Joined
Feb 28, 2015
Messages
10
Ran out of space on my old SFF/SSD NAS, and my Avoton mobo finally bit the dust (the dreaded BMC/novideo issue). Instead of just getting a new mobo, I decided it was time for an upgrade; the old board had lasted 5 years through a couple of storage pool iterations.

New SFF NAS:
Sharing a twin-ITX 2U chassis with my edge device
Supermicro X10SDV-6C+-TLN4F
64GB DDR4-2133 ECC
Pico PSU 150
970 EVO Plus 250GB boot drive / cache (yeah, way overkill)
9305-24i
3x vdevs of Z2 goodness (each 8x 2TB WD Blue m.2 SATA drives) + 2 hot spares

31 TiB available




forgot to ask, and sorry if you've mentioned this already.. what's the power draw, idle, logged into OS?
 

Machupo

Gravity Tester
Joined
Nov 14, 2004
Messages
5,313
forgot to ask, and sorry if you've mentioned this already.. what's the power draw, idle, logged into OS?

Unfortunately, I'm on the other side of the world from the system and didn't think to write it down anywhere. It ran fine off a 60W pico PSU when I had it sharing a brick with my edge device. I switched it to the 150 because I thought the spontaneous rebooting issue I was having might have been related to pulling too much power, but it turned out to be a bug in FreeNAS 11.3 (described in a post somewhere in this thread regarding temperature SMART reporting on SSDs), and I never got around to swapping it back onto the 60W PSU.
 

longblock454

[H]ard|Gawd
Joined
Nov 28, 2004
Messages
1,942
Revive!

Going through some boxes during downtime, I ran across three M1015 cards I no longer use, decommissioned on my last upgrade thanks to the availability of >14TB spinners. It brought back memories of this thread, which is in need of some new attention/posts!

Any noteworthy upgrades since May?
 

Deadjasper

[H]ard|Gawd
Joined
Oct 28, 2001
Messages
1,862
I'm slowly upgrading my server farm. Started out with mostly SM X8 MBs. Now I'm down to 2 servers left with X8DTH-F MBs, which are probably the best X8 boards SM ever made, so I'll keep those. I have an X9SRH-7F on the way that will go in a backup chassis. I also have a Xeon E5-2680 v4 on the way. Thinking about converting the system it's going in (X10SRL-F) into my everyday driver.

Recently ordered a rear window for a SM SC825TQ chassis so I could upgrade the MB from an old X8DTU-F to a X9DRW-IF. I should not have bought this chassis way back when; didn't realize it was designed for a proprietary MB.

Also like to mention: if you need Supermicro chassis parts or any other non-commodity part, chances are they will be much cheaper direct from SM. The cheapest I could find the rear window via Google was $89 plus shipping; the price direct from SM was $29.59 delivered. Gotta jump through a few hoops, as SM has no system for the little guy to order parts, so they use the RMA system. It's easy once you know what to do, and they are very helpful and friendly on the phone and over email.

Also like to mention that only commodity items are available in their eStore, no parts.
 

86 5.0L

Supreme [H]ardness
Joined
Nov 13, 2006
Messages
7,038
Got a 34TB array I might upgrade to a 56TB array with a new 10GbE enclosure, but my current stuff is working well.
 

xtream1101

n00b
Joined
Dec 17, 2014
Messages
9
longblock454 I have not been here in a while, but got an email notification about this thread because of your post and damn it brings back some memories. I posted in this thread way back in the day (https://hardforum.com/threads/h-ard-forum-storage-showoff-thread.1847026/#post-1041328562)

My setup has changed a lot since then, so here is what I have now...

Server 1:
Freenas Server for my important data
Intel Xeon E5-2630 v3 @ 2.40GHz
64GB memory
ZFS array with 8x 8TB HDDs in mirrored pairs
10Gb networking

Server 2:
Proxmox server
Running a mix of lxc's & VM's depending on the workload
Intel Xeon E3-1225 v6 @ 3.30GHz
32GB memory
ZFS array with 4x 1TB SSDs in mirrored pairs
10Gb networking

Server 3:
Proxmox server
Running a mix of lxc's & VM's depending on the workload
2x Intel Xeon X5690 @ 3.47GHz
96GB memory
ZFS array with 4x 1TB SSDs in mirrored pairs
10Gb networking

Server 4:
OpenMediaVault
media storage/playback/downloading
Chenbro NR40700 4U 48-bay (top load)
Intel i5-9600K @ 3.70GHz
Quadro P2000 (for transcoding)
32GB memory
100GB SSD for OS
2x 1TB SSDs in ZFS mirror for app data
Main storage array uses SnapRAID and MergerFS
19x 16TB HDDs
3x 10TB HDDs
3x 8TB HDDs
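
For anyone curious how a SnapRAID + MergerFS array like this hangs together: each drive stays a plain, individually mounted filesystem; SnapRAID computes parity on a schedule, and mergerfs pools the data mounts into one big tree. A config would look roughly like this — the mount points, drive names, and options here are made up for illustration, not this poster's actual layout:

```
# /etc/snapraid.conf -- hypothetical example
# Parity lives on dedicated drives at least as large as the biggest data drive
parity /mnt/parity1/snapraid.parity
2-parity /mnt/parity2/snapraid.parity

# Content files track the array state; keep several copies
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content

# Each data drive is its own filesystem
data d1 /mnt/disk1/
data d2 /mnt/disk2/
data d3 /mnt/disk3/

exclude *.tmp
exclude /lost+found/
```

mergerfs then presents the data mounts as one filesystem via an fstab line along the lines of:

```
/mnt/disk* /mnt/storage fuse.mergerfs defaults,allow_other,use_ino,category.create=mfs,minfreespace=50G 0 0
```

Unlike ZFS, parity is only as current as the last `snapraid sync`, which is typically run from cron — a good fit for mostly-static media, less so for hot data.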


I also have 24x 4TB HDDs and a handful of 6TB HDDs I need to do something with.
 