[H]ard Forum Storage Showoff Thread

aight....
update to my setup...
Case: Corsair 750D with 6x 3-bay cages
OS: Server 2019 Standard (Essentials role is gone.. im sad)
CPU: Intel i5-6600K
MoBo: Gigabyte GA-Z170XP-SLI
RAM: 4x 8GB DDR4
GFX: Onboard Intel HD 530
PSU: Corsair HX 620W
OS Drive: 128GB Samsung SSD
Storage Controllers: HP H220 8i feeding an HP AEC-83605/HP2 36-port 12Gb SAS expansion board (761879-001)
Hot Swap Cages: 3x ICY DOCK 6x 2.5" SATA/SAS HDD/SSD hot swap
Storage Pool 1: SSD
Storage Pool 2: SATA, with 500GB SSD + 256GB NVMe cache
So...
15x 2TB SATA
3x 4TB SATA
256GB NVMe cache
1x 120GB Samsung SSD - OS
21x 500GB SSD
all quiet, compact, 10Gb network.....
Corsair 750d with 6x3 bay cages by Jeffrey Riggs, on Flickr
Icy Dock 3x - 6x2.5" hot swap by Jeffrey Riggs, on Flickr
SSD Pool by Jeffrey Riggs, on Flickr
SATA Pool - NVME Cache by Jeffrey Riggs, on Flickr
 
Nice setup! Ended up reselling mine and used the funds to get an Intel SAS expander which works way better for me.
 
aight....
update to my setup...
Case: Corsair 750D with 6x 3-bay cages
OS: Server 2019 Standard (Essentials role is gone.. im sad)
CPU: Intel i5-6600K
MoBo: Gigabyte GA-Z170XP-SLI
RAM: 4x 8GB DDR4
GFX: Onboard Intel HD 530
PSU: Corsair HX 620W
OS Drive: 128GB Samsung SSD
Storage Controllers: Dell 5100 8i HBA / H220 36i HBA
Hot Swap Cages: 3x ICY DOCK 6x 2.5" SATA/SAS HDD/SSD hot swap
Storage Pool 1: SSD
Storage Pool 2: SATA, with 500GB SSD + 256GB NVMe cache
So...
15x 2TB SATA
3x 4TB SATA
256GB NVMe cache
1x 120GB Samsung SSD - OS
21x 500GB SSD
all quiet, compact, 10Gb network.....
Corsair 750d with 6x3 bay cages by Jeffrey Riggs, on Flickr
Icy Dock 3x - 6x2.5" hot swap by Jeffrey Riggs, on Flickr
SSD Pool by Jeffrey Riggs, on Flickr
SATA Pool - NVME Cache by Jeffrey Riggs, on Flickr
Holy crap that is sexy
 
in a datacenter... that entire rack uses ~3.5kW
Interesting. So is that the datacenter rack and you just put your equipment in, or your whole rack? And how does physical access to it work? I've always wondered if it was possible to take our servers and just put them in a datacenter.
 
Usually the datacenter owns the racks and you put your equipment in; in this case I have a deal with them to rent a small space in the corner, and that's my rack.
 
Usually the datacenter owns the racks and you put your equipment in; in this case I have a deal with them to rent a small space in the corner, and that's my rack.
Nice! That's pretty reasonable. :)
 
Ran out of space on my old SFF/SSD NAS, and my Avoton mobo finally bit the dust (the dreaded BMC/no-video issue). Instead of just getting a new mobo, I decided it was time for an upgrade; the old setup had lasted 5 years through a couple of storage pool iterations.

New SFF NAS:
Sharing a twin-ITX 2U chassis with my edge device
Supermicro X10SDV-6C+-TLN4F
64GB DDR4-2133 ECC
Pico PSU 150
970 EVO Plus 250GB boot drive / cache (yeah, way overkill)
9305-24i
3x vdevs of Z2 goodness (each 8x 2TB WD Blue m.2 SATA drives) + 2 hot spares

31 TiB available
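
For anyone curious where the 31 TiB comes from, here's the rough math in Python; the ~5% ZFS overhead figure is my own ballpark assumption, not something measured on this pool:

# Rough usable-capacity math for 3x 8-wide RAIDZ2 vdevs of 2TB drives.
# The 5% allowance for ZFS metadata/allocation overhead is an assumption.
TB = 10**12
TIB = 2**40

vdevs = 3
drives_per_vdev = 8
parity_per_vdev = 2            # RAIDZ2
drive_bytes = 2 * TB           # 2TB WD Blue

data_drives = vdevs * (drives_per_vdev - parity_per_vdev)
raw = data_drives * drive_bytes
print(f"raw data capacity: {raw / TIB:.1f} TiB")          # ~32.7 TiB
print(f"after ~5% overhead: {raw * 0.95 / TIB:.1f} TiB")  # ~31 TiB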



edge_2U.jpg
 
Usually the datacenter owns the racks and you put your equipment in; in this case I have a deal with them to rent a small space in the corner, and that's my rack.

I looked into co-location once but found it to be prohibitively expensive. (Like, way more expensive than just renting something that includes the server as well, which made no sense to me at all...)

Instead I stash my remote backup server in a friend's basement and he stashes his in mine.
 
I looked into co-location once but found it to be prohibitively expensive. (Like, way more expensive than just renting something that includes the server as well, which made no sense to me at all...)

Instead I stash my remote backup server in a friend's basement and he stashes his in mine.
It's definitely cheaper than renting servers. My 48U rack, with 30A 120V service and 100Mbit unmetered (with a /26), is ~$700 a month last I checked. My 4U 24-bay Supermicros only use about 3A each and would probably cost $500+ a month per server if not colocated. Paying someone else for 150TB of storage gets to be very expensive. You also don't have to get a full rack: a single server (depending on power) can often be colocated for ~$100, and 10U runs about $300 (with 10A). Not that bad when you consider that the electricity to run and cool it makes up half of the price.
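
To put the "electricity to run and cool" half of that into rough numbers, here's what one of those ~3A (120V) boxes costs to run continuously; the $/kWh rate and the cooling multiplier below are my assumptions, not figures from the post:

# Ballpark monthly power cost for a single ~3A @ 120V server.
amps = 3.0
volts = 120.0
hours = 24 * 30
rate = 0.13          # assumed $/kWh, pick your own
cooling = 2.0        # assumed: roughly 1W of cooling per 1W of IT load

kwh = amps * volts * hours / 1000
print(f"{kwh:.0f} kWh/month, ~${kwh * rate:.0f} power, ~${kwh * rate * cooling:.0f} with cooling")
# ~259 kWh/month, ~$34 power, ~$67 with cooling: most of a ~$100/month single-server colo fee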

I should update my post seeing as I'm at ~500TB currently...
 
Supermicros only use about 3A each and would probably cost $500+ a month per server if not colocated.
Damn, electricity where you live is expensive. This pulls about 10-11A (120V) under full load and our electric bill for the whole house is only like $150/month:

IMG_0571.jpeg


It idles at around 9A, so your 3A per server estimate is actually pretty much spot on.

edit: Ignore me. I see what you were getting at now. Doh.
 
Damn, electricity where you live is expensive. This pulls about 10-11A (120V) under full load and our electric bill for the whole house is only like $150/month:

It idles at around 9A, so your 3A per server estimate is actually pretty much spot on.

edit: Ignore me. I see what you were getting at now. Doh.

damn, my electric bill is $300-$400/mo and I have LED bulbs everywhere and home automation to turn lights on and off...
 
Ran out of space on my old SFF/SSD NAS, and my Avoton mobo finally bit the dust (the dreaded BMC/no-video issue). Instead of just getting a new mobo, I decided it was time for an upgrade; the old setup had lasted 5 years through a couple of storage pool iterations.

New SFF NAS:
Sharing a twin-ITX 2U chassis with my edge device
Supermicro X10SDV-6C+-TLN4F
64GB DDR4-2133 ECC
Pico PSU 150
970 EVO Plus 250GB boot drive / cache (yeah, way overkill)
9305-24i
3x vdevs of Z2 goodness (each 8x 2TB WD Blue m.2 SATA drives) + 2 hot spares

31 TiB available




stupid question: iirc, generally speaking, ppl stay away from WD Blue HDDs for RAID arrays etc.. but given these are SSDs.. any issues? Any drop-outs?
It's a wildly different way of doing a file server, and I love it!
 
stupid question: iirc, generally speaking, ppl stay away from WD Blue HDDs for RAID arrays etc.. but given these are SSDs.. any issues? Any drop-outs?
It's a wildly different way of doing a file server, and I love it!

No issues with the drives themselves; they have been rock solid. The only funny issue was FreeNAS 11.3 having what appears to be a SMART temperature reporting bug when using the SAS3008; there is a workaround, and SMART temperature reporting is disabled for SSDs in 11.3U2.
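
If anyone else hits that, one quick way to sanity-check what the controller is reporting is to ask smartctl directly. A minimal sketch, assuming smartctl is installed and the SSDs show up as /dev/da*; the device list is just an example, and this isn't the actual FreeNAS workaround:

import subprocess

# Print whatever SMART temperature lines each drive reports, bypassing the GUI.
# Device names are illustrative; adjust for your own /dev/daX layout.
devices = ["/dev/da0", "/dev/da1", "/dev/da2"]

for dev in devices:
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Temperature" in line:
            print(dev, line.strip())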
 
No issues with the drives themselves; they have been rock solid. The only funny issue was FreeNAS 11.3 having what appears to be a SMART temperature reporting bug when using the SAS3008; there is a workaround, and SMART temperature reporting is disabled for SSDs in 11.3U2.

I assume you were 'forced' to go with that LSI card because you needed the number of ports? I checked the card after I posted - quite an expensive HBA!
 
I assume you were 'forced' to go with that LSI card because you needed the number of ports? I checked the card after I posted - quite an expensive HBA!

It depends. I just picked up a Quanta 24-bay 2.5" drive server that uses a mezzanine card. I picked up a Quanta SAS3008 HBA for $29.09 with free shipping.

https://www.ebay.com/itm/8-Port-12G...614978&hash=item41f2fde91e:g:x~sAAOSwSK9eTYXy

This is the server. Came with a dual 10G NIC too. Seller accepted $225

https://www.ebay.com/itm/QUANTA-D51...540209?hash=item23d3479931:g:JqoAAOSw1dVehMSQ

It's a Xeon E5-26xx v3/v4 server.
 
stupid question: iirc, generally speaking, ppl stay away from WD Blue HDDs for RAID arrays etc.. but given these are SSDs.. any issues? Any drop-outs?
It's a wildly different way of doing a file server, and I love it!
Blue spinners are bottom-barrel; they will work, but they lack the features that Red spinners have for playing nice in a RAID.

Blue SSDs are also bottom-barrel, but for SSDs that means something different: no RAM cache, no capacitor to ensure in-flight writes complete if power is lost, lower endurance, a smaller SLC cache, etc. Stuff that might be important for a SAN or a database server running in a cluster that demands high performance, but generally meaningless on a read-focused storage server.
 
Blue spinners are bottom-barrel; they will work, but they lack the features that Red spinners have for playing nice in a RAID.

Blue SSDs are also bottom-barrel, but for SSDs that means something different: no RAM cache, no capacitor to ensure in-flight writes complete if power is lost, lower endurance, a smaller SLC cache, etc. Stuff that might be important for a SAN or a database server running in a cluster that demands high performance, but generally meaningless on a read-focused storage server.

Yep -- really not much is needed for storage; most mainboards have an M.2 slot if you really want to be ridiculous and add a cache. One could say that SSDs are 100% unnecessary for storage (and probably be right), but I was aiming for interesting/different SFF implementations. :)
 
Yep -- really not much is needed for storage; most mainboards have an M.2 slot if you really want to be ridiculous and add a cache. One could say that SSDs are 100% unnecessary for storage (and probably be right), but I was aiming for interesting/different SFF implementations. :)

We have gone all-flash storage (Tier 3 datacenters, so power loss is not really a worry) simply because of the speed.
Spindle storage is dying; we only have backup on spindles now.
 
One could say that SSDs are 100% unnecessary for storage (and probably be right)
If the only goal is to store data that can then be re-read on demand, then yeah; but by that logic spinners are unnecessary too and we'd use tape!

Also, for write-intensive workloads spinners can be desirable, as they're even more of a value versus SSDs designed for constant writes, especially when you start moving up to cloud interconnect speeds of 40Gbps+. For many / most uses, the only reason to pay for better flash anywhere is the associated warranty or service agreement. Write-intensive scenarios where low-latency, high-speed reads are also desirable are the main exception.
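
To give a sense of scale for the write-intensive case, here's roughly how fast a sustained 40Gbps stream eats SSD endurance; the 10 PB-written rating is an assumed example, not a specific drive:

# Sustained-write math: 40Gbps against an assumed 10 PB write-endurance budget.
gbps = 40
bytes_per_day = gbps / 8 * 1e9 * 86400   # ~432 TB/day
endurance_bytes = 10e15                  # assumed 10 PBW rating (pool aggregate)

print(f"~{bytes_per_day / 1e12:.0f} TB written per day")
print(f"assumed 10 PBW of flash exhausted in ~{endurance_bytes / bytes_per_day:.0f} days")
# Spinners don't carry a comparable write-endurance limit, hence the value argument.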
 
No way I would trust my data to tape again. Been there done that and lost big time more than once. Never again.
 
If the only goal is to store data that can then be re-read on demand, then yeah; but by that logic spinners are unnecessary too and we'd use tape!

Also, for write-intensive workloads spinners can be desirable, as they're even more of a value versus SSDs designed for constant writes, especially when you start moving up to cloud interconnect speeds of 40Gbps+. For many / most uses, the only reason to pay for better flash anywhere is the associated warranty or service agreement. Write-intensive scenarios where low-latency, high-speed reads are also desirable are the main exception.

Oh, there are a lot of other benefits besides raw speed.
Power usage and lifespan are better, failure rates are better, you don't have drives failing after an Inergen release, etc.
Spindles are dying; I guess in 5 years not only will our storage be all flash... but so will our backup.
 
As I understand it, flash is not good for long-term storage. Bit rot or something like that. For me, the best long-term storage has been those little 2.5" SAS enterprise HDDs. I can set them on the shelf and forget about them. Have never had one fail to read when needed.
 