The [H]ard Forum Storage Showoff Thread - Post your 10TB+ systems

Mad respect for this thread and you enthusiasts. I spent a long time looking into how best to turn my old PC into a NAS on the cheap, and after finding that Windows XP limits drives to 2TB, I purchased four 2TB hard drives and ran them individually until a friend of mine gave me a copy of Windows Server 2003. When I ran those four 2TBs in RAID 5, it had no issue recognizing the array as 5.5TB, which suggests I should have gone with the 3TB drives. I wondered for a while whether my 2005 motherboard would also have difficulty recognizing drives larger than 2TB; I now suspect that's not the case. If I'd gone with the 3TBs in RAID 5, combined with the 2x 80GB IDEs in RAID 1 and the 2x 400GB IDEs in RAID 0, I would have just made it into this illustrious group :(
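A quick sanity check on that 5.5TB figure, for anyone wondering where it comes from (a rough sketch, assuming 2TB decimal-rated drives and single-parity RAID 5):
Code:
# RAID 5 across four drives leaves three drives' worth of capacity usable.
# Drives are sold in decimal TB; the OS reports binary TiB, hence ~5.5 "TB".
echo "3 * 2 * 10^12 / 2^40" | bc -l    # ~5.46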
 

Or you could've gone with something like FreeNAS or unRAID and had no worries about Winderz limitations. Both are free, open-source NAS platforms that easily solve these problems. Either distro is very user friendly, and I've used both.
 
Very nice, but I can't believe you have a 90TB RAID 6.
Lose three drives and everything is gone.

If I could do triple parity I would, but honestly Hitachi drives have proven to be extremely reliable, and I am pretty much an expert when it comes to recovering an array. I find it very unlikely that three disks would fail within so short a period of time that I couldn't get anything off them in order to recover the array.

I have recovered an Areca array before where a disk was failing as soon as it was inserted into a slot, before you could even get into the BIOS utility. I am not really worried. Honestly, if three or more drives really failed at once, it would more likely mean a lot more had failed due to some sort of spike/PSU blowout, etc.

My 1TB Seagate drives, with <700 days of use each, have had 5-6 failures over the years. My 24x2TB Hitachi array has had 0 failures in 900 days of use, and my 30x3TB Hitachi array has had 0 failures in 400 days of use/powered-on time.
 
Sorry for quoting a post from September, but how loud is that Supermicro chassis (SC846TQ-R1200B)?

I have two SC846s in my rack and they run cooler than the Norco and not much louder, with the right setup/mobo that can control the fans.

It has three levels you can choose from in the BIOS; on the highest performance setting it's pretty damn loud! But in quiet mode I can have it in a working environment.

I have seen machines with that 1200 watt PSU, and dude, that thing is *loud*. The loudest thing I have found is the PSU. If you want to reduce the noise *a ton*, go buy the 920 watt 94% efficiency PSU. It is super quiet compared to the others and is what I use in mine. They can be bought used on eBay for around $80.
 
Or you could've gone with something like FreeNAS or unRAID and had no worries about Winderz limitations. Both are free, open-source NAS platforms that easily solve these problems. Either distro is very user friendly, and I've used both.
I don't trust those kinds of OSes. Not to say they're bad, but a free OS doesn't sound like a good deal to me.
 
I follow the idea of "you get what you pay for." They might be free, but from my understanding they either have a steep learning curve or lack the functionality of a commercial OS.
 
What can I say, your loss. :)

Personally, I've been running around 5 servers with OpenIndiana and napp-it for the last 3 years, and another one on Linux for the last 12, with no problems...

FreeNAS and unRAID are not OSes themselves; they are just frontends that make managing things easier. FreeNAS runs on top of FreeBSD (Unix), and unRAID, I think, uses Linux. The same goes for OpenIndiana (a spin-off of Solaris, an enterprise OS) and napp-it.

I trust them more than Windows, since software RAID in those systems has been in use a lot longer than the one in Windows.

Matej
 
I follow the idea of "you get what you pay for." They might be free, but from my understanding they either have a steep learning curve or lack the functionality of a commercial OS.

I don't want to preach too much, so I'll keep it short. I suggest trying FreeNAS in a VM. Make your own decision; you may be surprised. There is a load of awesome free software out there, such as VirtualBox, purpose-specific *nix (e.g. FreeNAS), 7-Zip, every web browser I can think of, and the list goes on. I get where you're coming from with the "you get what you pay for" mentality, as I definitely like nice things, but it doesn't apply to everything in life (thankfully!).
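If anyone wants to try the VM route, something along these lines works with VirtualBox's command line (a minimal sketch; the VM name, sizes, and ISO filename are placeholders, not a tested recipe):
Code:
# Create a FreeBSD-type VM for FreeNAS and give it RAM, a CPU, and a small virtual disk
VBoxManage createvm --name FreeNAS-test --ostype FreeBSD_64 --register
VBoxManage modifyvm FreeNAS-test --memory 4096 --cpus 1
VBoxManage createhd --filename FreeNAS-test.vdi --size 8192
VBoxManage storagectl FreeNAS-test --name SATA --add sata --controller IntelAHCI
VBoxManage storageattach FreeNAS-test --storagectl SATA --port 0 --device 0 --type hdd --medium FreeNAS-test.vdi
# Attach the installer ISO and boot
VBoxManage storageattach FreeNAS-test --storagectl SATA --port 1 --device 0 --type dvddrive --medium FreeNAS-x.y.iso
VBoxManage startvm FreeNAS-test
The GUI does the same thing with a few clicks, of course.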
 
Finally broke the 10TB barrier.

16.5 TB Total advertised storage
12.5 TB Total usable storage

Chassis: Norco 4020
Motherboard: Asrock Extreme4 Z77
CPU: Intel i7 3770S
RAM: 2x 4GB Corsair Vengeance DDR3-1600
Hard Disks: 6 x Western Digital WD20EARX, other 500GB to 2TB drives for less important files.
Controller: Supermicro AOC-USAS2-L8i
OS: Server 2012 Standard
UPS: CyberPower OR1500LCD

I'm currently using Storage Spaces Parity with acceptable results. 5 drives with parity and one hot spare. 450MBps read / 75MBps write.
 
Well folks, I hate you all! I ended up purchasing some 3TB drives.

Total advertised: 18TB
Total usable: 12TB

Total external drives and other drives in use: 5TB

Here is a picture. Yes, it is all self-contained. There's more than adequate cooling, and it houses 6x 3TB drives and 2x 750GB laptop drives. Best part: all the components except the drives cost me a total of $300, thanks to deals and the fact that I studied in Atlanta, about 20 minutes away from Fry's.

th_IMG_0846.jpg


The external enclosure houses 1x 1TB and 1x 2TB drives. The 1TB is currently being used as a scratch drive for my downloads with SAB and Transmission. Once done, the completed files are moved to the ZFS store.

th_IMG_0848.jpg


There is no vSphere client for OSX, so I use my trusty Windows laptop to manage the server.

th_IMG_0849.jpg



In summary, I have the ZFS pool shared to Windows, OS X, and Linux. Lubuntu and OpenIndiana run on the ESXi server, with OpenIndiana handling the storage. Lubuntu handles SAB and Transmission, and I can control both from either laptop since the daemons run on Lubuntu. I plan to expand my setup to include a server running an ANSYS and MATLAB backend along with some proprietary CFD code. Display will then be handled by the laptops working in conjunction to process the data and show it across all three screens, with the Mac driving the controls. So, this first venture into ZFS is a very important milestone for me.
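For anyone curious how the cross-platform sharing side of a setup like this looks on OpenIndiana, it's roughly the following (a sketch with hypothetical pool, dataset, and device names, not metril's actual layout):
Code:
# Six-disk raidz pool with one dataset shared over both SMB/CIFS and NFS
zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
zfs create tank/media
zfs set sharesmb=on tank/media    # for the Windows and OS X clients
zfs set sharenfs=on tank/media    # for the Lubuntu VM
This assumes the SMB server service is already enabled on the OI VM; napp-it handles most of this from its web UI anyway.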
 

I agree pictures are definitely a plus since this is a showoff thread, but at the bare minimum there needs to be relevant information.

Well folks, I hate you all! I ended up purchasing some 3TB drives.

Total advertised: 18TB
Total usable: 12TB

Total external drives and other drives in use: 5TB

This post tells us nothing about the system specs; the last line, "Total external drives and other drives in use", hints that the recently purchased 3TB drives could be configured with some other hardware... metril (and others), don't leave us hanging.

I'd say if you are going to post your setup, at least put in a little effort.
 
Long time lurker, first time poster.

22 TB raw
16.5 TB useable

Chassis: Norco 4020
PSU: Corsair TX650M
Motherboard: Supermicro X9SCM-F-O
CPU: Intel Xeon E3-1220
RAM: 4x Crucial 4 GB ECC DDR3-1066
HBA: M1015 and USASLP-L8i (LSI 1068)
Hard Disks: 8 x 2TB RAIDZ2, 2 x 1.5TB mirror, 1 x 3TB, 2 x 80GB mirror
SSDs: 2 x OCZ Vertex 30 GB, 2 x Samsung 830 128 GB
OS: ESXi 5.1, OI + napp-it for storage, various other VMs

I wanted to try out the stock fans before replacing them (terrible idea). To make it usable while waiting for the new fans and 120mm fan wall, I made a wall out of cardboard and stuck in some fans I had lying around.
one
two

Then came the new fans/wall.
three
four
five
 
I follow the idea of "you get what you pay for." They might be free, but from my understanding they either have a steep learning curve or lack the functionality of a commercial OS.

How about I sell you a copy of any of the *NIX OSes for $100? You'll still get a GREAT OS, you'll feel better about it, and I'll have scotch money. Deal? :D

In all honesty, companies/sites such as Google, Facebook, etc. all run on free *NIX OSes.
 
Just realized I haven't updated my setup in this thread recently. Previously, it looked like this:
img1185wt.jpg


Well, that was taking up valuable garage space and most of the hardware was ancient, so I upgraded. Here's my file server now:
IMG_1820.jpg


And a pic of the inside:
IMG_1821.jpg


And what the back side looks like: (old pic, but still accurate)
IMG_1712.jpg


Specs:
Case: Fractal Design Define Mini
PSU: Corsair TX650 V2
Motherboard: SuperMicro X9SCM-F-O
CPU: Intel Xeon E3-1230
RAM: 2x 8GB (2x4GB) Kingston ECC Unbuffered DDR3 Model KVR1333D3E9SK2/8G
Controller: Onboard
Hard Drives:
OS
• 1x 16GB USB Flash Disk
Primary storage pool
• 4x Seagate Barracuda 7200.14 ST3000DM001 7200RPM/64MB/3TB (raidz)
Storage pool backup
• 2x Seagate Barracuda XT ST33000651AS 7200RPM/64MB/3TB (jbod)
Operating System: Solaris 11

Looks like it's time for another upgrade:
Code:
root@solaris:/pool# df -h|grep T|grep -v rpool
backup                 5.4T   5.3T        43G   100%    /backup
pool                   8.0T   7.3T       725G    92%    /pool
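If the upgrade means growing the pool, the usual route with raidz is to add another whole vdev rather than extra disks to the existing one (a sketch with hypothetical device names):
Code:
zpool list pool                                    # confirm how full it really is
zpool add pool raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0   # new raidz vdev alongside the old one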
 
Just added 2x3TB, so I figured it was time to delurk and post mine, which could probably be characterized as low-fi and cheap(ish) as compared to most of the fine rigs here. I'm a grad student, so I've just been snagging used and on sale gear for a long time and cobbling it all together.

2x3TB + 6x1.5TB + 6x500GB = 18TB advertised
2x2.73TB + 6x1.36TB + 6x465.76GB = 16.41TB formatted

Lian Li 2000b
PC P&C 750w
ASRock A770DE+
AMD Sempron 145 Sargas 2.8GHz
4GB DDR2-800
Sapphire Radeon HD 5550 1GB
3 generic SATA cards (PCI)
SuSE 12.2 installed on a separate 120GB drive
Controlled via KVM or Logitech k400 (just got this and it is sweet)

Everything is formatted ext4, with each drive paired in RAID 1 using plain software RAID (which saved me from worrying about the specs of the SATA cards I already had/inherited). Each pair of drives serves as a bucket for a different kind/genre of media. This is simply a Samba file server for myself and my lady friend (and she barely uses it), so I don't really experience any lag or trouble on my home network. It provides the files for my HTPCs in the living room and bedroom (Mac Minis with Plex) and acts as the "TV" in the office (file browser and VLC till I find a frontend that SuSE and I like). A rough sketch of how one such mirrored pair goes together follows; then on to the pics...
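For reference, one mirrored pair set up this way looks roughly like the following (hypothetical device names; not necessarily the exact commands used here):
Code:
# Create a two-disk mirror, format it ext4, and mount it as one media "bucket"
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 -L media0 /dev/md0
mkdir -p /srv/media0
mount /dev/md0 /srv/media0
mdadm --detail --scan >> /etc/mdadm.conf    # persist so the array assembles at boot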

Pre 3TB drives (cleaner wires):
serverold.jpg

Lian Li 3x5.25" to 4x3.5" expander:
additionside.jpg

additionfront.jpg

Everything installed:
servernew.jpg

Back in its home:
serverhome.jpg


After my qualifying exams (whenever those will be :() I'll probably be swapping out the OS drive for an SSD and adding in 2-4 more 3TB drives (depending on monies). I'll redo all the wiring then, and probably rearrange things a bit to get the system out from under the table.

...someday rack mount...some day...
 
What are you storing on your server? And how many clients are hitting it?
Who are you asking? If it's me, only one or two clients are on my box at a time. I'm storing a couple .ISOs, some pictures, and a ton of video - mostly TV shows and cartoons I ripped for a home theater setup I've got planned once I graduate. I've also got my homework on there in case I forget to toss it on a thumbdrive before school, a resume, and a few other documents. Oh, and backups from my wife's laptop before I formatted it. Every once in a while she remembers something else she wanted saved. It's been over a year now, too...
 
Well, since I finally have 10TB usable in one array, I'll add mine. Just a home file server and occasional Linux workstation.

Ubuntu 12.04 w/ Gnome
Intel Core 2 Duo E6300 @ 3GHz
Promise SuperTrak EX 8350 (shitty: firmware support sucks, there's no SMART passthrough, and it had issues with the Samsung HD204UI showing a bad sector in the middle of the drive, on every single drive, at the same sector. I just use it in JBOD mode now and it doesn't give me any issues.)
2 * 320GB RAID 1, OS
7 * 2TB RAID 6, storage
1 * 2TB WD Green non-RAID-friendly drive, used for offline backups; spends most of its time unplugged.

server1.jpg

server2.jpg

server3.jpg


Sorry, the flash really brings out the dust; I just cleaned it not that long ago :(. Wiring police, go crazy, I don't care :p
 
Finally managed to join this group. Starting with an HP MicroServer running OI/napp-it with 4x WD30EFRX drives in RAIDZ1, and once all the data is copied over to them, I will also be running 2x WD10EADS in a mirror.

The OS is on an 8GB USB drive.

Now that the basics are up, I will be adding 16GB of ECC RAM and two VMs on VirtualBox to support Windows compatibility and Linux development.
 
At the moment I have a simple 14TB RAID 5 array (8x 2TB):
Antec rackmount case, AMD quad-core, 16GB DDR3 RAM, HighPoint 2680 card.

I'll be upgrading in Q1 2013:

DL140 G3, 2x quad-core, 12GB RAM, 2x 1TB R4 boot (RAID 0), 1x 60GB SSD, RAID card (thinking of an HP P222?)

SFF-8088 connection to:

Norco RPC-4224,
HP SAS expander,
Atom-based board to power the expander,
Corsair 850 watt PSU

Starting with 13x 2TB in RAID 5 (the P222 does not support RAID 6 :( ),
so it will be 24TB.

I'll be adding more drives with OCE (online capacity expansion) later on.




If you can think of a better RAID card for me to use, please go to this topic.


(I'll post pics of the current server soon)
 
...running OI...
...Virtual Box...
My experience with running VBox on an OI host has been terrible; I had multiple severe issues over the course of a couple of months. It made me go the ESXi way and I never looked back. Anyway, if your OI starts doing weird things, you know why.
 
I found out something: RAID 5 can only go up to 16TB on a NAS. My 7x3TB doesn't fit.

I have leftover space where I had to set it to use iSCSI.
 
I found out something: RAID 5 can only go up to 16TB on a NAS. My 7x3TB doesn't fit.

I have leftover space where I had to set it to use iSCSI.

What?

What RAID? Hardware or software?
If hardware, which hardware?
If software, which software?

As far as I know, there is no such limit, or at least it's far beyond 16TB...

Matej
 
I just completed a complete upgrade on my server. I originally had 8x 1.5TB drives in a Norco chassis. I have added an additional 16x 3TB disks and upgraded everything else in the server.

For this upgrade I decided to use the Habey ESC-4242C 4U chassis. It is very similar to the Norco RPC-4224, with some minor upgrades that are worth the extra money in my opinion. The Habey case has a 3x 120mm fan mount compared to the 4x 80mm in the Norco, and the HDD trays seem to be more durable. I also like the backplanes better in the Habey.

53TB usable space - 60TB total advertised

OS - Fedora 17 x86_64
Case - Habey ESC-4242C 4U Storage Server Chassis (24 disk hot-swap)
System Board - Asus Z9PE-D8 dual socket LGA2011
CPUs - Dual Intel XEON E5-2640 6-core
Memory - CORSAIR Vengeance 64GB (8 x 8GB)
GPU - Nvidia EVGA 8800GTS
PSU - SeaSonic Platinum-1000
Cooling - 2x Dynatron R17 CPU HSF, 3x 120mm silent typhoon, 2x 80mm Enermax Magma

Disks and storage controllers:
OS drive - 120GB OCZ Vertex 3 SSD
Data drives - 16x 3TB Seagate - mdadm raid6
Backup drives - 8x 1.5TB Seagate - areca hw raid5
LSI 9201-16i SAS HBA
Areca ARC-1220 Sata Raid controller


Full album here.

IMG_0145.JPG


server.JPG
conky.png
 
The problem is not with ext4 itself but with the e2fsprogs tools.
https://ext4.wiki.kernel.org/index.php/Ext4_Howto said:
NOTE: Although very large filesystems are on ext4's feature list, current e2fsprogs still limits the filesystem size to 2^32 blocks (16TiB for a 4KiB block filesystem). Allowing filesystems larger than 16T is one of the very next high-priority features to complete for ext4.
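That limit is just arithmetic, and newer e2fsprogs (1.42 and later) added the 64bit feature, which should lift the cap; a quick sketch (hypothetical device name):
Code:
# 2^32 blocks x 4 KiB per block = 16 TiB ceiling with 32-bit block numbers
echo "2^32 * 4 / 2^30" | bc      # prints 16 (TiB)
# With e2fsprogs 1.42+, the 64bit feature allows filesystems past 16 TiB:
mkfs.ext4 -O 64bit /dev/md0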
 