The [H]ard Forum Storage Showoff Thread - Post your 10TB+ systems

Status
Not open for further replies.
I can finally post here :D

Specs:
CPU- Intel Core i7-2600K @ 4.7 GHz
Mobo- Asus Sabertooth P67
Ram- G.Skill RipJaw 1600
OS Drive- 1TB WD Black
Data- 5x 2TB Samsung F4 (HD204UI)
PSU- Antec Neo ECO 520w
CPU Cooler- H50
Fans- 6x 120mm Slip Stream 1200RPM , 1x Antec 250mm
Case- Antec 1200
OS- Ubuntu 10.10, CK-patched kernel


This is my first attempt at wiring this many damn drives up. I think I did a pretty good job for just having spare cables lying around, as well as only zip ties...

Should have around 8-10 TB of usable space. I haven't decided how to set up RAID yet; all I've done is partition everything out and get them ready.
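For reference, here's a quick back-of-the-envelope sketch (my own arithmetic, not from the post) of what the common RAID levels would give on those 5x 2TB F4s, which is where an 8-10TB estimate comes from:

```python
# Usable capacity of 5x 2TB drives under common RAID levels.
# Illustrative only; real formatted capacity will be slightly lower.
drives, size_tb = 5, 2

options = {
    "JBOD / RAID 0": drives * size_tb,         # no redundancy
    "RAID 5":        (drives - 1) * size_tb,   # one drive of parity
    "RAID 6":        (drives - 2) * size_tb,   # two drives of parity
    "RAID 10":       (drives // 2) * size_tb,  # mirrored stripes; 5th drive idle
}
for level, tb in options.items():
    print(f"{level}: {tb} TB usable")
```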

img20110508220244.jpg


edit: Funnily enough, this is reply 1200 to the thread, and the case used is an Antec 1200... I just find that funny.
 
looks like 1201 to me :)

Looks like you can still fit in a few more drives, and you could manage the cables a bit better, but other than that, good work and welcome to the big boys' club.

I once booted my machine up with an HD on the carpet.

You must have one crazy carpet; mine are not made of metal, and as such, drives don't seem to spark when they are on it.

I have had my PC on the carpet a few times before, while I was modding my case etc.
 
;) I said reply #1200, not post #1200 (it was post 1201, due to the OP, which is not a reply)


And yeah, I can fit 3 more drives in it as configured, and could get another HDD cage to fit an additional 3. I need to work on the wiring a bit; just glad the sucker is up and running.
 
Damn Redheaded Step Child! ;)

How are you thinking of configuring the drives?
 
Damn Redheaded Step Child! ;)

How are you thinking of configuring the drives?

No RAID. All important data is in three places minimum, and I'm getting something like Mozy/Carbonite to put it on as well. The rest of the data is reproducible, so we don't have an actual reason for RAID at all. That is what the Red Headed Step Child has been up to ;D
 
Intel Xeon E3-1230 Quad-core CPU
SuperMicro X9SCM-F Motherboard
16GB (4x 4GB) SuperTalent DDR3-1333 ECC RAM
Corsair AX850 850W PSU
Norco RPC-4224 24-bay 4U Case
8GB Patriot Xporter XT Boost flash drive for hypervisor storage
320GB Samsung Spinpoint F4 HDD for OS storage
12x 2TB Hitachi Deskstar 5K3000 HDDs (1 to be added)
2x IBM ServeRAID BR10i SAS Controller w/ LSI IT firmware (1 to be added)
OpenIndiana b148 w/ ZFS v28 running on a VMware ESXi 4.1 host

Dell PowerConnect 2724 24-Port Gigabit Switch
Actiontec MI424WR Verizon FiOS Router with 45/40 connection


 
@zeroARMY: Nice.
You do realize that if you add the 3rd BR10i it will only connect at PCIe x4, right?
Does the BR10i support that?
 
Most controllers should be able to adapt to fewer lanes if the slot the controller is plugged into has fewer lanes than the controller normally uses. It's just better to make sure beforehand instead of getting a nasty surprise after spending all your money on hardware, only to find out it isn't compatible...

It's nice to see the BR10i will be able to work in the x4 slots (PCIe SLOT 5 & 6 on the X9SCM). The only potential impact that this will have is on throughput. But if he's not after high throughput but rather just (low cost) connectivity to all his HDDs, he should be fine.
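A back-of-the-envelope check of why the x4 slot is mostly a throughput question (my assumptions, not from the thread: PCIe 1.x carries roughly 250 MB/s per lane, the BR10i is a PCIe 1.x x8 card, and a 2TB 5400rpm drive streams around 120 MB/s sequentially):

```python
# How many drives it takes to saturate a PCIe 1.x link of a given width.
LANE_MBPS = 250   # approx. usable per-lane bandwidth, PCIe 1.x
HDD_MBPS = 120    # rough sequential speed of a 2TB 5400rpm drive (assumed)

def saturating_drives(lanes):
    """Number of drives streaming sequentially that would fill the link."""
    return lanes * LANE_MBPS / HDD_MBPS

print(f"x4 link tops out near {4 * LANE_MBPS} MB/s "
      f"(~{saturating_drives(4):.0f} drives streaming at once)")
print(f"x8 link tops out near {8 * LANE_MBPS} MB/s")
```

So even at x4, you'd need around eight drives all streaming sequentially at full speed before the link becomes the bottleneck, which matches the "fine unless you're after high throughput" conclusion.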
 
Figured I might as well post my setup here since I am past the 10 TB point...

Soo....

Mobo is SM x8si6
CPU is an Intel i3-540
Case - Norco 4224
Raid card - Areca 1280ml
8 GB Kingston Ram
OS Server 2008 R2
Virtualized WHS V1 for backup etc.
Virtualized WHS v2 for trial

Main array = 8x 3TB Hitachis in RAID 6, so about 18 TB usable
2nd array = 5x 2TB Hitachis in RAID 5, so 8 TB usable
OS drives are 250GB in RAID 1
Drive for security = 750 GB
2x 2TB WD Greens for pass-through to WHS V1, so 4 TB
and another 2x 2TB Green drives for misc storage, 4 TB total

So total advertised storage is about 43 TB, while actual usable storage is about 35 TB.
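That tally can be sanity-checked in a few lines (array layout taken from the post; the per-array redundancy counts are just the standard ones for each RAID level):

```python
# Tally raw ("advertised") vs usable capacity for the arrays listed above.

def usable(n_drives, size_tb, lost_drives):
    """Usable TB for an array that loses `lost_drives` to redundancy."""
    return (n_drives - lost_drives) * size_tb

arrays = [
    # (drives, size in TB, drives lost to redundancy)
    (8, 3.0, 2),    # main array, RAID 6
    (5, 2.0, 1),    # second array, RAID 5
    (2, 0.25, 1),   # OS mirror, RAID 1
    (1, 0.75, 0),   # security DVR drive
    (2, 2.0, 0),    # WD Greens passed through to WHS V1
    (2, 2.0, 0),    # misc storage Greens
]

raw = sum(n * s for n, s, _ in arrays)
use = sum(usable(n, s, p) for n, s, p in arrays)
print(f"advertised: {raw} TB, usable: {use} TB")
# advertised: 43.25 TB, usable: 35.0 TB
```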

Excuse the crappy pics; they were taken with my iPhone... I can't seem to find a memory card for my camera.

Here is a pic of the server in my rack. Excuse the wiring mess; I keep meaning to get it cleaned up.
IMG_0682.JPG


Here is a shot of the internally mounted 2.5" drives in RAID 1
IMG_0683.JPG


A shot of the Security DVR drive on the other side
IMG_0688.JPG


And finally a shot... well, kind of... of the rack it calls home.

IMG_0694.JPG
 
This is mine. I have a total of 15 TB among 6 HDDs. I just ordered 4 more 3 TB drives, and I am going to add a SATA card for the extra drives as well.

 
Hey! everyone's gotta start somewhere.

I personally think this thread should drop the 10TB condition, we'd see a lot of interesting and promising systems I'm sure.
 
Hey! everyone's gotta start somewhere.

I personally think this thread should drop the 10TB condition, we'd see a lot of interesting and promising systems I'm sure.
Given that the pr0n I want to see has more to do with number of spindles than number of terabytes, I concur. 10TB is less than 4 disks these days. 100TB is [H]ard.
 
perhaps the "cost of entry" should be increased to >= 20TB

i.e. greater than the storage that can be achieved with only on-board SATA ports (six) and the currently largest available hard drive (3TB)

6 * 3TB = 18TB

and before anyone moans, that would be me out of the game...

though, for true server p0rn, perhaps the "cost of entry" should be the requirement for an add-in drive card, be it a simple HBA or a hardware RAID controller. The >= 20TB rule is easier and would require the use of an HBA or HW RAID anyway
 
Well, you could put ZFS on top of hardware raid :p

but fine, accounting for raid level then...

Personally, I think that makes things too complicated.

It also favors people who are not that [H]ardcore. Someone could have some epic RAID 10 setup and be beaten by someone else with half the number of drives in JBOD on some $10 SATA card or onboard SATA ports.

And if it's a must, at least list both, but again, too complicated.
 
Keep the rules as they are today, just up the entrance number to 20TB. This should minimize any confusion.
 
Thinking about it some more the minimum entry should be 25TB

Why?

Because some motherboards have eight on-board SATA ports.

8 * 3TB = 24TB

Perhaps start a "well hard" thread for those with >= 25TB?
 
Thinking about it some more the minimum entry should be 25TB

Why?

Because some motherboards have eight on-board SATA ports.

8 * 3TB = 24TB

Perhaps start a "well hard" thread for those with >= 25TB?


Yeah, but there are plenty of <25TB arrays that do have fancy RAID controllers. Not everyone is using 3TB drives. There are some pretty sick 750GB, 1TB, 1.5TB, and 2TB drive arrays that are less than 25TB but still interesting. I agree that having 10TB in your main rig just by throwing a couple of big storage drives onto the onboard ports is kinda lame, but as with any "Show us pics of your ______" thread, there are some boring systems you scroll through quickly, and some awesome ones you will look at for a while. You're certainly right in saying that 10TB doesn't mean nearly as much as it did 27 months ago, but I don't think the solution is to determine how much storage is available with the highest-capacity drives (which many of us don't even use, since 2TB still has the best bang for the buck) on some goofy hypothetical system that doesn't have a fast boot drive or any optical drives.

I'd suggest that if the 10TB limit is re-evaluated (which I feel is somewhat unnecessary), it should be made to require an add-on controller, or a minimum number of drives.
 
I suppose a minimum number of drives would be the best alternative compromise; however, a capacity threshold is something everyone understands.

I agree with you on flicking through the boring ones
 
IMHO 10TB is still way above and beyond what normal people have and could be a pretty serious system. Maybe turn it up to 15.
8 spindles in the array seems like a good requirement. 9 drives requires a fairly serious case as well; although it can fit in some desktops, it's more than desktop boards support.
 
IMHO 10TB is still way above and beyond what normal people have and could be a pretty serious system.

True... True. But if you're just normal, you're not [H] :D

I'm betting that a large percentage of people here are well above anything average or considered normal.
 
Just make it 25TB with 10 drives in one system as the entry requirement; after that, anything counts towards overall size.

And yeah, 10TB is silly; now even my router qualifies (2x 2TB + 4x 1.5TB) :D
 
Currently just over 12TB

Case: Antec 1200
PSU: Corsair CX600 (Soon to be upgraded to TX750 V2 or similar)
Motherboard: Gigabyte GA-890FXA-UD5
CPU: AMD Phenom II X6 1055T 95W
RAM: 2x 4GB SuperTalent + 2x 4GB G.Skill RipJaw DDR3 1333MHz (Yes, yes, I know it's not ECC; this will be fixed soon when I get some money)
GPU: Not sure, some old PCIe card.
Controller Cards: 2 x BR10i flashed to LSI 3081 IT firmware, Adaptec 39160 SCSI
No Optical Drive: USB external used for any installs.
Tape Drive: HP Ultrium 448 LTO-2
Hard Drives:
6x 1.5TB Samsung HD153WI, main filestore - RAID-Z1
8x IBM 2.5" 73GB 10K SAS drives held in a SuperMicro CSE-M28E1 rack, ESXi VM store. 4-disk vdev, mirrored
1 x IBM 3.5" 146GB 15K SAS drive, not used for anything yet, possibly ZIL I guess?
Samsung 1.5TB, 1TB and 500GB drives as backup all in USB enclosures
Samsung 250GB bootdrive for ESXi and local diskstore for PFSense and OpenIndiana VMs
Operating System: ESXi 4.1 hosting various VMs, mainly a pfSense firewall/router and OpenIndiana used as a filer with Gea's napp-it installed for ease of management.

Pics to follow when my new cables arrive on Monday; it's an absolute mess at the moment, and I ordered reverse breakouts by accident instead of normal ones. Doh!
 
I agree, even if that would kick me out of the list ;) 10TB is possible with 4 drives and a Windows machine - BORING! I would say increase the entry limit to 25TB. It was way more exclusive in the beginning!
 
I don't see the need to make a change. Not sure how up to date the list is, but a 20TB minimum means we would barely have a top 20.

20TB isn't a whole lot, but some spend their money on quality, like RAIDing out systems and such, rather than just pure storage, as it's generally harder to utilize that much capacity anyway.

I have started working on building SAS 15K RAID arrays, where even hitting 1TB would be costly, especially when I'm running them in RAID 10.
 
48 TB of disks.
35.2 TB usable.
Single case.

My home server upgrade has been completed.
I'm happy now ... and should be for quite a while. :D

Case: Norco RPC-4224
PSU: Seasonic M12ii 80+ Bronze 620W
MB: ASUS P5K-V
CPU: Intel E6800 Core 2 Duo
RAM: 8GB (2x 4GB) 6400
NIC: Intel Dual Gigabit
SAS HBA: Supermicro AOC-USAS2-L8i in IT mode
SAS Expander: Chenbro CK23601
HDD Pool #1: 12x 2TB WDC WD20EARS Green in RAID-Z2
HDD Pool #2: 12x 2TB Samsung HD204UI in RAID-Z2
HDD Boot: 80GB Seagate Barracuda 7200.7
Cooling: 3x Noctua NF-P12, 2x Noctua NF-R8
OS: Solaris Express 11


Phase 1. This server started out quite some time ago, with the same boot HDD, as a P4 system with 4x 500GB HDDs in RAID 5. It ran WinXP, patched to allow software RAID. 2TB raw, ~1.3TB usable.

Phase 2. The motherboard and CPU were updated to the present models and given 2GB RAM. 4x 1.5TB WD HDDs were added in RAID 5. The existing 500GB drives were put onto two cheapo PCI SATA cards as mirrored pairs. The system was housed in an Antec P183 and ran Win2003 Server. 8TB raw, ~5TB usable.

Phase 3. Six months ago. The same motherboard and CPU, now with 8GB RAM, the Supermicro HBA and 6x new 2TB WD Green HDDs, were transplanted into the Norco with the Seasonic PSU and Noctua fans. The 4x 500GB WD drives were retired and an Intel dual gigabit NIC was added. The OS was replaced with Solaris. The 2TB drives became a Z2 pool. The 1.5s became mirrored pairs. 18TB raw, ~9.7TB usable.

Phase 4. Last week the Chenbro CK23601 finally arrived. Got 12x new Samsung 2TB HDDs, which became a new Z2 pool. The 1.5TB WDs went into other machines. Got 6x new WD Green 2TB HDDs and they, along with the existing WD 2TB drives, became a second Z2 pool. 48TB raw, 35.2TB usable.


Since I don't really need this much storage right now, and since my backup solution is only about once a month and has <20TB capacity, I'm contemplating making the two Z2 pools a mirror. Right now though, I'm liking seeing two network drives of 17.6TB.
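A rough sketch of where those numbers come from (drive sizes in decimal TB as sold; attributing the gap down to the reported 17.6TB per pool to units and ZFS overhead is my guess, not something stated in the post):

```python
# Raw vs. usable capacity of two 12-drive RAID-Z2 pools of 2TB disks.
DRIVE_BYTES = 2e12                         # "2TB" as the drive is sold
POOLS, DRIVES_PER_POOL, PARITY = 2, 12, 2  # RAID-Z2 loses 2 drives per vdev

raw_tb = POOLS * DRIVES_PER_POOL * DRIVE_BYTES / 1e12         # 48 TB of disks
data_tib_per_pool = (DRIVES_PER_POOL - PARITY) * DRIVE_BYTES / 2**40

print(f"raw: {raw_tb:.0f} TB")
print(f"per pool after parity: {data_tib_per_pool:.1f} TiB")
# The reported 17.6TB per pool (35.2 total) is likely this ~18.2 TiB
# figure less a few percent of ZFS metadata/reservation overhead.
```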

imag1109.jpg


imag1108.jpg
 
Nice rig DeadlyOne. I am starting to think I should have gotten the RPC-4224 instead of the RPC-4220 :eek:
 
Nice rig DeadlyOne. I am starting to think I should have gotten the RPC-4224 instead of the RPC-4220 :eek:

Thanks. I've been very happy with how things have gone together in this case.

With the replacement fans it's no louder than my old desktop and drive temperatures have been very reasonable. I will mask off the holes and around the cables in the fan wall bracket before it goes back in the cupboard though.

Mounting the boot HDD was the only somewhat problematic task. It is just screwed to the side of the case (just out of view at the bottom of the pic; you can see the old IDE cable snaking down toward it).

Norco could improve it by including a 3.5"/2.5" drive bracket for the side of the case, or at least putting in a few screw holes with the right spacing for a HDD.
 
I have 3920GB in my main system plus a 1TB external usually attached to it. That, among most people I know, is quite excessive, but here at [H] it's nothing. 10TB is fine for a limit; as has been said earlier, if someone does something "boring", just skim over it.
 