The [H]ard Forum Storage Showoff Thread - Post your 10TB+ systems

I want like 24 of those in raid 10. :D Would come up to around 110TB of usable space. I honestly don't even know what I'd do with all that, I have 19TB total between my 3 arrays and it's more than I need... for now.

Download all the porn!
 
Just got done adding in my 9th drive. Next one I'll make it a RAID6 just to give me a bit more of a buffer.

Code:
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3       462G  7.2G  455G   2% /
tmpfs           392M  428K  392M   1% /run
dev              10M     0   10M   0% /dev
shm             2.0G     0  2.0G   0% /dev/shm
cgroup_root      10M     0   10M   0% /sys/fs/cgroup
/dev/md0         15T  2.4T   13T  17% /mnt/storage
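When the tenth drive goes in, the reshape should look roughly like this (just a sketch, assuming the array is RAID5 today; the new drive's device name and the backup-file path are placeholders):

Code:
# Add the new disk, then reshape the array from RAID5 to RAID6.
# /dev/sdX is a placeholder for whatever the tenth drive enumerates as;
# the backup file just has to live somewhere outside the array itself.
mdadm --add /dev/md0 /dev/sdX
mdadm --grow /dev/md0 --level=6 --raid-devices=10 --backup-file=/root/md0-reshape.bak
cat /proc/mdstat    # watch the reshape progress

The reshape takes a long time on an array this size, but it stays online and usable while it runs.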
 
Alright, just finished (re) building my storage box for the nth time. Hopefully this time it will stay for a little while:

15268538436_a8ac02e4ed_b.jpg


15105014177_e9190723e4.jpg
15288425961_80d5ed94a6.jpg
15104992598_819599393b.jpg


Please disregard the ghetto wire management and duct tape. I was in a rush to finish the job (the server also serves as my MythTV backend, and my mother-in-law-to-be wanted to watch the news...)

I will get back in there and clean it up some time soon :p

It is an ESXi all-in-wonder box with the following specs:

Case: Norco RPC-4216
CPUs: Dual Xeon L5640 (6 cores + HT each, 2.27GHz plus turbo)
Motherboard: SuperMicro X8DTE
RAM: 96GB 1333MHz Registered ECC

It has several guests on it. The storage is handled by FreeNAS, to which I have assigned 72GB of the RAM and 6 cores. It has two IBM M1015s, flashed to IT mode and passed through via direct I/O, connected to the backplane.

There is a single ZFS pool with two 6-drive RAIDZ2 vdevs, configured as below (disregard the stickers on the front, I haven't updated them yet):

Code:
NAME
zfshome
  raidz2-0
    Western Digital RED 4TB
    Western Digital RED 4TB
    Western Digital RED 4TB
    Western Digital RED 4TB
    Western Digital RED 4TB
    Western Digital RED 4TB
  raidz2-1
    Western Digital RED 4TB
    Western Digital RED 4TB
    Western Digital RED 4TB
    Western Digital RED 4TB
    Western Digital RED 4TB
    Western Digital RED 4TB
logs
  mirror-2
    Intel S3700 100GB (underprovisioned to 15GB)
    Intel S3700 100GB (underprovisioned to 15GB)
cache
  Samsung 850 Pro 128GB
  Samsung 850 Pro 128GB

So, while we are talking a total of 48TB in spinners and 456GB in SSDs, the available storage is 32TB, or 29.1TiB (if my math is correct).
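For anyone wanting to replicate the layout, building a pool like this from scratch looks roughly like the following (a sketch only; the daX names are placeholders for however your drives enumerate, and in practice you would reference them by gptid/label rather than raw device node):

Code:
# Two 6-drive RAIDZ2 vdevs, a mirrored SLOG and two L2ARC devices.
# Each RAIDZ2 vdev keeps 4 of its 6 drives for data, hence 2 x 4 x 4TB = 32TB usable.
zpool create zfshome \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11 \
    log mirror da12 da13 \
    cache da14 da15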

Not as exotic as many of the other systems in here, but it works for me! :D
 
what's stupid about 10 4TB WD Red's?

That packaging is odd, but that's about it.

You need a hug today EnderW?:D

I like teasers and all, but I think his point is, we want to see cool and interesting server solutions, not really unboxings. I don't feel as strongly about it as EnderW does, but I see where he is coming from.

As far as I am concerned, at least he could have spelled out a [H] with them first :p

If i hadn't bought mine bit by bit and gradually expanded my system, I would have :p
 
Zarathustra[H];1041110029 said:
I like teasers and all, but I think his point is, we want to see cool and interesting server solutions, not really unboxings. I don't feel as strongly about it as EnderW does, but I see where he is coming from.

I understand the point too, but I'll play the other side of the card (just for arguments sake).

This entire thread is a "showoff" thread, that's the whole point of the entire thread. Getting all butt-hurt about someone posting 10 HD's in the packaging is just funny. They were just showing off. Someone jealous much?

There needs to be a new saying around here. From this moment forward we shall refer to what EnderW was doing as [E]mo, or simply [E].

:p

Some examples:

  • That guy was totally being an [E]-kid.
  • [E]mo people always act like anyone else cares.
  • [E] jeans look funny.
  • That kid is so [E] I can't tell what sex it is.
 
I understand the point too, but I'll play the other side of the card (just for arguments sake).

Getting all butt-hurt about someone posting 10 HD's in the packaging is just funny. They were just showing off. Someone jealous much?


From him posting a single word, you were able to come up with things like "butt-hurt" and a whole thing about how we need a new saying around here?

Looking at google trends, it appears "butt-hurt" reached its peak around 2012.

People posting yet another image of hard drives in cardboard boxes to a thread titled "Post your 10TB+ systems" is somewhat odd. Drives in a cardboard box do not constitute a "system".

Look at the post a few before this one. Nice pics of his system and some details about it, not just a pic of a cardboard box and some drives.
 
what's stupid about 10 4TB WD Red's?

That packaging is odd, but that's about it.

You need a hug today EnderW?:D
My comment was not directed at Wibla's post; it was about the fact that I had to scrap the new thread I was working on because someone posted a reply too soon.
My post originally contained a link to the new thread.
 
I understand the point too, but I'll play the other side of the card (just for arguments sake).

This entire thread is a "showoff" thread, that's the whole point of the entire thread. Getting all butt-hurt about someone posting 10 HD's in the packaging is just funny. They were just showing off. Someone jealous much?

There needs to be a new saying around here. From this moment forward we shall refer to what EnderW was doing as [E]mo, or simply [E].

:p

Some examples:

  • That guy was totally being an [E]-kid.
  • [E]mo people always act like anyone else cares.
  • [E] jeans look funny.
  • That kid is so [E] I can't tell what sex it is.
wow what the fuck are you talking about?
http://en.wikipedia.org/wiki/Jumping_to_conclusions
 
Yeah,

I really think personal attacks are not helpful here.

Can we keep it to a discussion on the topic?
 

Wow...

http://en.m.wikipedia.org/wiki/Joke
http://en.m.wikipedia.org/wiki/Comedy
http://en.m.wikipedia.org/wiki/Teasing
http://en.m.wikipedia.org/wiki/Emo#Stereotypes

Look ma, I can post somewhat relevant wikipedia links too!

...and now that I have met or exceeded my harassment limit, any future posts will be storage related.

Zarathustra[H]: how much did you pay for that chassis? How do you like the hotswap bays and sled construction? I have been wary of Norco in the past because of their history of questionable backplanes; got any closeups of the trays and backplane?

Edit: What PSU is that?
 
I see a new thread was started but it's closed now, why is that?

Since I already posted a pic of my stuff, here's a pic of the box it came in:



:D Sorry, I had to.
 
Zarathustra[H]: how much did you pay for that chassis? How do you like the hotswap bays and sled construction? I have been wary of Norco in the past because of their history of questionable backplanes; got any closeups of the trays and backplane?

The Norco RPC-4216 (the 16-bay version; there is also a 4220 without the 5.25" bays but with 20 caddies) cost me $348.07 shipped from Amazon. Mine wound up having a bend in the back wall and a defective fan, neither of which I noticed until after I was well into my build, so I didn't want to return it for a new one. I complained to Amazon and they gave me a $69.61 refund, so in total I paid $278.46.

I was OK with it, as I had a spare 80mm fan to replace the bad one, and the bend is very slight. It makes the I/O shield sit a little unevenly, but it is purely cosmetic, at the back of the server, and I very rarely spend a lot of time looking at it... :p

As far as the case goes, I am very happy with it. Biggest downside - to me - was that it didn't have a consumer level of finish. If you get one, wear gloves or something during the install. There are lots of sharp edges where you least expect them. (I wound up looking like I'd been in a fist fight, and had to wipe blood off of my motherboard)

With the 120mm fan divider (and 3 Yate Loons) installed, it is very reasonable noise-wise for a server, especially when the fan controller spins the fans down to 40% of max speed (which it does more or less permanently now that it is fall). By no means whisper quiet, but as a reference level, I can hear the WD RED drives seeking over the fan noise.

I don't have any closeups of the backplanes (I haven't had them out). I may be able to take a pic of them next time I shut it down to do something. Which side do you want to see? From the front (caddy side) in?

I DO have pics of the caddy.

15076586552_8d4e1e5d79_z.jpg
15076944165_d8d97d4abe_z.jpg


I understand they used to ship with better caddies that had an airflow shutter lever to close off airflow through unused bays. These new ones must be a cost-savings update; they lack that airflow switch. Either way, I think they work well.

I don't have a whole lot of backplane/caddy experience to compare them to, only my previous server (an HP DL180 G6, which only lasted a month because it was louder than a jet engine).

By comparison I like the action of the Norco caddies better. They move in and out smoothly with little to no resistance; the HP ones would occasionally bind up a little. Unlike the HP caddies, the Norco ones also have mounting holes for 2.5" drives, which is great for my L2ARC and SLOG/ZIL drives.

I can't speak much to the backplane quality, but I haven't had any issues in the short time I have owned it. I did read some older articles suggesting Norco backplane problems, but nothing recent. It seems the older backplane revisions didn't deal well with power regulation on high-RPM 4TB drives when those first came out, while newer revisions appear fine. Either way I am probably OK, since I use RED drives, which are essentially low-power 5400rpm GREEN drives + NAS magic, at least from a power consumption perspective.

Edit: What PSU is that?

It is a consumer PSU, an Antec Earthwatts Platinum Series EA-550.

I got it because it was reasonably priced, about the right wattage for what I needed, and 80 Plus Platinum (to minimize electric bill conversations with the special person in my life :p )

I actually bought it for my old server (before the HP jet engine), which was built around consumer parts (AMD FX-8350), and when I decided to ditch the HP mistake I already had it, and it provided sufficient power.

The Supermicro X8DTE board I selected had two EPS12V connectors, but this power supply only had one plug. Since I wasn't planning on installing any video cards, I found an adapter that converted one of the two 12V PCIe power plugs to a second EPS12V plug, and it seems to work well.
 
I see a new thread was started but it's closed now, why is that?

Since I already posted a pic of my stuff, here's a pic of the box it came in:



:D Sorry, I had to.

No idea why. I posted a placeholder for the #1 spot and he got butt-hurt and closed it :rolleyes: :confused:
 
Let me get things back on track with...

"TK-421"
Stormtrooper-inspired build for a Star Wars fanatic and all-around film buff.

Advertised space: 40 TB
Usable space: 32 TB

Disk configuration: 10x 4TB in single RAIDZ2 vdev

Operating system: FreeNAS 9.2.1.7

Supermicro X10SL7-F
Xeon E3-1220V3
Crucial 2x8GB ECC CT2KIT102472BD160B
SeaSonic SSR-450RM
CM Storm Stryker
3x AMS DS-346TL backplanes


dZSq4OAl.jpg


The SAS controller is flashed to IT mode, and FreeNAS boots from USB. Getting 100 MB/sec writes and the Plex server is working well, so mission accomplished. Two drive bays and four SATA ports are still free if 32TB proves to be too little space :eek:
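For reference, a quick and dirty local sequential write test on the pool looks something like this (just a sketch; the dataset path is a placeholder, and on ZFS you want compression off on the test dataset, since /dev/zero compresses to nearly nothing and inflates the number):

Code:
# Write 8 GiB in 1 MiB blocks, let dd report the throughput, then clean up.
# /mnt/tank/test is a placeholder path - point it at your own dataset.
dd if=/dev/zero of=/mnt/tank/test/ddtest.bin bs=1048576 count=8192
rm /mnt/tank/test/ddtest.bin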
 
Hey everybody, here is my setup at home:

CPU: Intel Core i7-2600K @ 4.0GHz @ 1.30V (HT disabled)
Cooling: Scythe Mugen 3 Rev.B
RAM: 2x 4GB G.Skill DDR3 @ 1600MHz @ 9-9-9-24
Board: Asus P8P67 Pro Rev 3.0 Bios 1850
GPU: PowerColor HD5870 PCS
HDDs: 8x 3TB Hitachi Deskstar 7K3000 (Raid6); 8x 6TB WD Red WD60EFRX (Raid6); 1x 2TB Hitachi Deskstar 7K2000 (Temp); 1x Extrememory XLR8 Express SSD 120GB (System); 1x OCZ Agility3 SSD 240GB (Games)
Controller: LSI MegaRAID SAS 9260-16i/SGL (16x Sata3/SAS)
NIC: 1Gbit/s Intel Gigabit Onboard
NIC2: Broadcom 5720 Dual-Port 1Gbit/s
BD-RE: LG BH16NS40
PS: Enermax Modu87+ 800W
Case: Lian Li PC-A77FB
OS: Windows 7 Ultimate x64 SP1


So it is 74TB total capacity, and 50TB usable space.


Yj4UP0E.jpg


j4eYei3.jpg
 
Long-time lurker here, posting my setup, which was inspired by reading [H]ard|Forum. I started some years ago with a relatively small system, got the first 4U case two years ago, and have gradually added hard drives and cases :)

IMG_2152.jpg

StarTech server rack with three 4U cases. Top to bottom: Norco RPC-4224, Norco RPC-4224 and Ri-Vier RV4324-03A.

IMG_2135.jpg

Top case, which holds the motherboard and the Areca 1882iX-24 controller

IMG_2142.jpg

The other two cases just contain a PSU, a Chenbro SAS expander and hard disks

Total storage is 42x 4TB and 12x 3TB, all in 6-disk RAID5 volumes, totalling 204TB of raw disk capacity. I prefer "smaller" volumes to allow for easier migration whenever I might upgrade, or in case a RAID set gets degraded.
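Rough math on what that works out to in usable space (assuming the 42x 4TB drives form seven 6-disk volumes and the 12x 3TB drives form two, each volume giving up one disk to parity):

Code:
awk 'BEGIN {
    raw    = 42*4 + 12*3          # 204 TB raw
    usable = 7*(5*4) + 2*(5*3)    # 5 data disks per 6-disk RAID5 volume
    printf "raw: %d TB, usable: %d TB\n", raw, usable
}'

Under that assumption it comes out to roughly 170TB usable.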
 
It's running Windows 7, and yes, it is 95% media storage. The system is also used to remux Blu-rays into MKV, as I use XBMC and prefer that over Blu-ray ISOs.
The setup is not storage-efficient, as I allocated each array to a specific use. For example, I keep an empty 6x 4TB array where I can copy the contents of a degraded array before attempting to rescue it.
 
Wow that is impressive! But Windows 7? At least use a real server operating system. :p
With a max of 4 users there is no need for a real server OS. The only non-server parts in my setup that have caused problems are the Norco backplanes; one of them actually caught fire :eek:
 
With a max of 4 users there is no need for a real server OS. The only non-server parts in my setup that have caused problems are the Norco backplanes; one of them actually caught fire :eek:

Yikes!

What happened?

(and I agree with using server OSes for server tasks, much better long-term stability... In fact I have enough of a FOSS bias that I would even say that nothing with "Windows" or "Microsoft" in its name belongs on a server, ever :p )
 
Windows Server 2012 is actually pretty dang good for Windows-based networks. Hyper-V and SMB3 are simply fantastic.
 
Zarathustra[H];1041129522 said:
Yikes! What happened?
Apparently the PSU had shut down. Not aware of the root cause, I switched it back on, then smelled something burning and actually saw a small fire on the Norco backplane. Result:
IMG_1045.jpg

The supplier had never heard of problems with Norco backplanes, but has started selling his own brand of cases, claiming they are equipped with reliable backplanes ;)

Failing backplanes are another reason why I use 6-disk RAID arrays: with the 4-disk backplanes, I avoid having more than one disk of any given array on a single backplane. Not too long ago another backplane failed (not so dramatically), degrading three RAID5 arrays. I shut down the server, replaced the faulty backplane, and recovered all three arrays. Even with the burned backplane, no data was lost.
 
Apparently the PSU had shut down. Not aware of the root cause, I switched it back on, then smelled something burning and actually saw a small fire on the Norco backplane. Result:
*snip*
The supplier had never heard of problems with Norco backplanes, but has started selling his own brand of cases, claiming they are equipped with reliable backplanes ;)

Failing backplanes are another reason why I use 6-disk RAID arrays: with the 4-disk backplanes, I avoid having more than one disk of any given array on a single backplane. Not too long ago another backplane failed (not so dramatically), degrading three RAID5 arrays. I shut down the server, replaced the faulty backplane, and recovered all three arrays. Even with the burned backplane, no data was lost.

Ouch! Did that take anything out with it, like drives?

Norco seems to have a really bad track record for backplanes. First time I've heard of one crapping out THAT badly, though. :D
 
Hi,
I changed some of my server hardware:
  • SSD: RAID1: 2x 256GB SanDisk Ultra Plus 256GB
  • HDD: RAID5: 6x 2TB Western Digital Caviar Green WD20EZRX
    RAID5: 5x 2TB Western Digital Caviar Green WD20EARS + 1x 2TB Western Digital Caviar Green WD20EARX
    RAID1: 2x 1TB Western Digital Scorpio Blue WD10JPVT + WD IcePack
  • Logical volumes: RAID1 256GB (VMware Datastores)
    RAID5 Write Back 9.09TB (RDM on Synology VM)
    RAID5 Write Back 9.09TB (RDM on Synology VM)
    RAID1 1TB (RDM on Synology VM)
  • VMs: XPEnology DSM 5.0 (DLNA, CIFS, Cloud Station, Mail server, MariaDB, IP Cameras), Windows 2012 Server Essentials (Anywhere Access, IIS Web Server, IIS Reverse Proxy, Veeam Backup, rtl1090, FR24 feeder), OpenSUSE 12.3 (Zabbix 2.2.2)

Accessories:

I got a bad surprise while putting the server together: the RAID card was too long. A friend managed to manufacture two mounting plates to install the RAID card on top of the rear case fan:





Still some stuff to do: one of the plates is missing a thread, and I should be receiving a USB 3.0 > 2.0 header, fan resistors, new SAS > SATA cables and a battery for the RAID controller.
 
Ouch! Did that take anything out with it, like drives?

Norco seems to have a really bad track record for backplanes. First time I've heard of one crapping out THAT badly, though. :D

From what I've read, older revs of the backplanes didn't have the caps to keep up with power distribution on the newer 7200+ RPM 4TB drives.

A lot of people have documented burnouts when they upgraded their older Norcos to larger drives requiring more power.

I haven't heard nearly as much of this lately though, so I'm hoping that is something they've fixed. That, and I run all 5400rpm REDs in mine, so they shouldn't stress the power distribution as much.
 
Current inventory:
49x2TB Seagate SAS
19x3TB Seagate SATA
3x320GB Western Digital SATA
2x160GB Western Digital SATA
2x256GB Samsung 840 Pro SATA SSD
1x128GB Samsung 840 Pro SATA SSD
1x300GB Western Digital VelociRaptor SATA

Failed inventory:
7x HP 300GB 10K SAS 2.5" (In various states of failure)
3x3TB Seagate SATA (In various states of failure)
12x2TB Seagate SAS (In various states of failure)
2x40GB Maxtor IDE (Untested in years)

I've also thrown away another two or three 3TB Seagate SATA drives over the years. Dunno why I didn't just keep 'em for the pile. I'd love to pick up another 24-bay chassis and make use of the 3TB drives, but finances don't allow for it at the moment. Maybe in the future.

IMG_2640_zpsf7f6a77f.jpg


Both chassis are identical, snapshot'd and rsync'd nightly:
Code:
[root@nas ~]# zpool status pool
  pool: pool
 state: ONLINE
status: The pool is formatted using a legacy on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on software that does not support
        feature flags.
  scan: scrub repaired 0 in 5h7m with 0 errors on Sat Sep 27 15:07:01 2014
config:

        NAME                        STATE     READ WRITE CKSUM
        pool                        ONLINE       0     0     0
          raidz2-0                  ONLINE       0     0     0
            scsi-35000c50034f36cff  ONLINE       0     0     0
            scsi-35000c50034eb58bb  ONLINE       0     0     0
            scsi-35000c50034f44577  ONLINE       0     0     0
            scsi-35000c50034e85e4b  ONLINE       0     0     0
            scsi-35000c50034f422b7  ONLINE       0     0     0
            scsi-35000c50034e85c3f  ONLINE       0     0     0
            scsi-35000c50040cf0c4f  ONLINE       0     0     0
            scsi-35000c500409ae567  ONLINE       0     0     0
            scsi-35000c500409946ff  ONLINE       0     0     0
            scsi-35000c5003c95a907  ONLINE       0     0     0
            scsi-35000c50034fbe17b  ONLINE       0     0     0
            scsi-35000c50034f3dfc7  ONLINE       0     0     0
          raidz2-1                  ONLINE       0     0     0
            scsi-35000c50034f3cc5f  ONLINE       0     0     0
            scsi-35000c50034f3e81f  ONLINE       0     0     0
            scsi-35000c50034ea0857  ONLINE       0     0     0
            scsi-35000c50034ff6167  ONLINE       0     0     0
            scsi-35000c50034f3decf  ONLINE       0     0     0
            scsi-35000c50034f421c7  ONLINE       0     0     0
            scsi-35000c50034f3daeb  ONLINE       0     0     0
            scsi-35000c50034ff1b8b  ONLINE       0     0     0
            scsi-35000c50034f42db7  ONLINE       0     0     0
            scsi-35000c50034f3d3ab  ONLINE       0     0     0
            scsi-35000c50034e011d3  ONLINE       0     0     0
            scsi-35000c5003c95abdf  ONLINE       0     0     0

errors: No known data errors
[root@nas ~]# df -h|grep -v tmpfs|grep -v oot
Filesystem               Size  Used Avail Use% Mounted on
pool                      36T   14T   22T  39% /pool
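The nightly sync boils down to something like this (a simplified sketch, not the exact script; "nas2" is a placeholder for the standby box's hostname, and snapshot pruning is left out):

Code:
# Take a dated recursive snapshot on the primary, then mirror the live
# data over to the standby chassis.
zfs snapshot -r pool@nightly-$(date +%Y%m%d)
rsync -aH --delete /pool/ nas2:/pool/

zfs send/receive would be the more ZFS-native way to move the data between the two boxes, but plain rsync over SSH keeps it simple.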
 
158TB SERVER

8 X 5TB TOSHIBA = 40TB
8 X 4TB TOSHIBA = 32TB
20 X 3TB TOSHIBA, SEAGATE, HITACHI = 60TB
9 X 2TB HITACHI = 18TB
2 X 4TB SEAGATE = 8TB


They arrive!
9qbfxy.jpg


Nakedly exposed these virgins
28clkxw.jpg


Preparing for rackmount
34zkqr8.jpg


Areca 1800X detected the HDs, woohooo
245hk5y.jpg


Top and bottom rackmounts are empty JBODs. Still have 8 tray slots left, and I'll be upgrading the 2TB drives in the near future, ditching all the CRAPASS SEAGATE DRIVES
2sb7m86.jpg
 