The [H]ard Forum Storage Showoff Thread - Post your 10TB+ systems

@Sulfuric

Nice build. :D Is that a 20-drive RAID6? :eek: I'm thinking of something similar using Seagate Barracuda LPs, but I was gonna split the drives into two 10-drive RAID 6 arrays, just to be extra safe.

Hold on a sec...24TB usable? So that is two 10-drive RAID 6 arrays? Also, how come your first four devices report different drive sizes to the rest of the drives? Controller discrepancy? What are those first four attached to, the on-board SATA controller? Does that mean your boot drive is IDE?

hehe, all those guesses from a /dev listing... :p
 
Is that a 20-drive RAID6?
Yes. 16 on the 2340 and 4 on the motherboard.

Hold on a sec...24TB usable? So that is two 10-drive RAID 6 arrays?
It's one 20-drive RAID 6. The 1.5TB drives are roughly 1.36TB each after formatting; 1.36 x 18 usable drives (after subtracting the 2 parity drives) leaves roughly 24TB usable. Also, I'm not sure why the size reported on the controller card is different. A-D are on the motherboard, F-U are on the controller, and E is the system drive, which is also SATA.
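
Spelled out, the back-of-the-envelope math is just:
Code:
20 drives - 2 parity = 18 data drives
18 x ~1.36TB (formatted) = ~24.5, call it roughly 24TB usable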

Hope that answers your questions.

Code:
Creation Time : Fri Oct 2 03:09:35 2009
Raid Level : raid6
Array Size : 23440915456 (22355.00 GiB 24003.50 GB)
Used Dev Size : 1465057216 (1397.19 GiB 1500.22 GB)
Raid Devices : 20
Total Devices : 20
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Tue Oct 6 22:12:52 2009
State : clean, recovering
Active Devices : 20
Working Devices : 20
Failed Devices : 0
Spare Devices : 0

Status is still showing "recovering" from the final expansion. I had to do it in parts due to the original card failure.
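
For anyone curious, each expansion stage was just an mdadm grow; something along these lines (device names and md number here are placeholders, not my exact commands):
Code:
# add the new disks as spares, then reshape to the bigger device count
mdadm /dev/md0 --add /dev/sdq /dev/sdr /dev/sds /dev/sdt
mdadm --grow /dev/md0 --raid-devices=20
# reshape progress shows up here
cat /proc/mdstat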
 
@Sulfuric
What filesystem are you using? You've basically built my future setup, and I've been contemplating whether to use ext3 or XFS for the build.
 
What filesystem are you using? You've basically built my future setup, and I've been contemplating whether to use ext3 or XFS for the build.

Originally I wanted to go with ZFS and RAID-Z, but decided against it once it was clear I couldn't build the whole array at once. I ended up using XFS and I think it was the better choice.

Also I'm pretty sure you mean ext4 since ext3 has a 16TB limit.
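
For what it's worth, making the XFS filesystem on the md device is a one-liner; something like this (device name and stripe numbers are examples - match su to your md chunk size and sw to your data-disk count):
Code:
# su = md chunk size, sw = number of data disks (20 drives - 2 parity)
mkfs.xfs -d su=64k,sw=18 /dev/md0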
 
enjoy losing all your data as soon as one drive dies and another 2 die during rebuilds.
24 drives is a lot for one array.

I'd post my server but it's only up to 4TB. You people are freaks.
 
enjoy losing all your data as soon as one drive dies and another 2 die during rebuilds.
24 drives is a lot for one array.

I'd post my server but it's only up to 4TB. You people are freaks.

lol, no. The likelihood of 3 drives falling over is kinda small, even with a 20-drive array.

@Colonel_Panic

I've used JFS to great effect with my workstation's 3.2TB array; it's a good, solid filesystem. I'm also currently using ext4 under Ubuntu on this work machine's external drive, and it's also been solid.

@Sulfuric

If you were gonna use ZFS, would you have deployed OpenSolaris or used it with FUSE under Linux?
 
what if there is a crazy lightning storm and your whole house catches on fire....

damn you'll need RAID 666 to be safe from that!


stop being lame little jealous people and either make constructive comments about people's builds or stay out...

RAID 6 is sweet, I am looking at my 4TB RAID 5 array and wanting more..

congrats dude
 
Yeah seriously. I think RAID 6 is fine for 20 drives, although any more and I would want something like raidz3 (triple-parity RAID), which ZFS supports. If you have a good PSU with (PFC?) support or whatever, the PSU dying should not take out your drives. I had a RAID 6 rebuild complete just fine even when another drive was getting read errors during the rebuild process, which I replaced a day or two later after the rebuild.
 
What if your PSU goes, or a drive falls out of the array while it is rebuilding? :eek: :(

lol, ffs :p If your PSU goes (and doesn't fry your drives) you replace it. Linux software RAID is rock bloody solid - it'll simply carry on from where it left off. If it does fry your drives, it's your own fault for buying an el cheapo PSU.
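
Seriously, after the replacement PSU goes in it's just a case of letting md pick the array back up; roughly this (assuming your arrays are in mdadm.conf):
Code:
# reassemble whatever arrays mdadm can find, then check where the resync/rebuild is at
mdadm --assemble --scan
cat /proc/mdstat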

If one drive falls out of the array during a rebuild, it'll rebuild using the second parity - RAID 6 keeps two independent parities per stripe, so it can survive two missing drives.

If you're worried about a lightning strike, get a UPS or a surge protector. Or both.

Yeah seriously. I think RAID 6 is fine for 20 drives, although any more and I would want something like raidz3 (triple-parity RAID), which ZFS supports. If you have a good PSU with (PFC?) support or whatever, the PSU dying should not take out your drives. I had a RAID 6 rebuild complete just fine even when another drive was getting read errors during the rebuild process, which I replaced a day or two later after the rebuild.

No need for triple-parity RAID, my friend. Just split the array into more manageable sizes - 30 drives, for example, would be split into two 15-drive RAID 6 arrays. It lowers your exposure.
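
For example, something like this instead of one giant array (drive names are purely illustrative):
Code:
# two independent 15-drive RAID 6 arrays instead of one 30-drive monster
mdadm --create /dev/md0 --level=6 --raid-devices=15 /dev/sd[b-p]
mdadm --create /dev/md1 --level=6 --raid-devices=15 /dev/sd[q-z] /dev/sda[a-e]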


@sulfuric

I'd also suggest a weekly scrub of your array, just to be safe.
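
Something like this in a weekly cron job would do it (assuming the array is md0 - some distros already ship a checkarray script that wraps the same thing):
Code:
# kick off an md consistency check ("scrub"); progress shows up in /proc/mdstat
echo check > /sys/block/md0/md/sync_action
cat /proc/mdstat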
 
lol, ffs :p If your PSU goes (and doesn't fry your drives) you replace it. Linux software RAID is rock bloody solid - it'll simply carry on from where it left off. If it does fry your drives, it's your own fault for buying an el cheapo PSU.

Even quality PSUs can go up in smoke. :eek: ;)
 
Yep, but quality PSUs don't take your computer with them. ;)

Actually, any PSU can take your computer with it. I just hope it never happens. Anyway, enough thread derailment. I can't wait to build a 40TB server sometime next year, for which I will also be using RAID 6.
 
If you were gonna use ZFS, would you have deployed OpenSolaris or used it with FUSE under Linux?

I had it working on a 4-drive array with no issues using FUSE in Linux. I haven't messed with Solaris much, so I wasn't too comfortable going that route.
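
The pool setup under zfs-fuse was nothing special; roughly this (the pool name and device names here are just for illustration):
Code:
# one raidz vdev across the 4 drives, then check it came up clean
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
zpool status tank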

Did you know you could have used an HX520? Or did you have that 750TX lying about anyway?

I've seen a lot of people using lower-wattage PSUs on 20-drive arrays, but I went with the Corsair 750W to be on the safe side. Plus it was on sale.

To all the other comments if something happens it happens. I can't worry about all the what ifs.
 
My current rig:

AMD Phenom II X4 955 BE 3.20GHz
MSI 790FX-GD70 Motherboard
4 x 2GB Corsair Dominator PC3-12800 DDR3-1600 9-9-9-24
Zotac GeForce 8800 GT 512MB AMP!
Coolermaster Real Power Pro 1000W
Pioneer BDR-203BK BD-R Drive

Storage:

1 x 160GB WD Caviar Blue 7200 RPM 16MB
1 x 500GB Maxtor 7200 RPM 32MB
4 x 1.5TB WD Caviar GP 5400 RPM 32MB
4 x 1.5TB Samsung EcoGreen F2 5400 RPM 32MB
2 x 2TB WD Caviar Green 5400 RPM 32MB
1 x 2TB WD RE4-GP 5400 RPM 64MB
 
Pics or shens!

I'm currently debating whether to go for 16x or 12x 1.5TB Samsung F2 for the 10x500GB box upgrade... :D
 
this time I have to agree with Miguel,
PICS or it didn't happen!

blank stats of systems we could post in some diff thread :p

p.s.
when this thread was forming there was a thought that every pic should include a paper sheet with your nickname on it ...

Wilba, start small ... just 10x :D

it's a mess to move from those small drives
next time I'll have to choose between moving to higher density or leaving the old drives be, I'm choosing the second
after cleaning out 24x750, selling most of them and getting sick of having 30+ 1.5s tossed around my room, I'm not gonna do this again! :D I think :D
 
this time I have to agree with Miguel,
PICS or it didn't happen!
Why "this time"? Are my posts that unrelatable? :p

Also, I didn't say "it didn't happen". (Just messing with you, we're cool, OK?)

p.s.
when this thread was forming there was a thought that every pic should include a paper sheet with your nickname on it ...
lol

I seriously thought that rule only applied to the "for sale" section... It does seem a good idea, though, as one might be tempted to visit the nearest ISP and snap a couple of photos...

Now, who was it that was coming with me to the exclusive Google datacenter tour? :D

it's a mess to move from those small drives
:eek: From that description, I'm very glad I'm starting my NAS at a point where the smallest drive I'll fit in it will be 1TB... NOT funny.

However, if you have enough ports available, given the ratio of newer-to-older drives (1:3 on a 500GB-to-1.5TB migration, and even better if you go with 2TB drives), it shouldn't have to be THAT much of a pain...

Rebuilding and expanding arrays and partitions, though, is bound to cause severe headaches if you have only a few ports available... But even then, most of the time you could go the "change one, rebuild" route, and only expand the partition once at the end, right?
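
I'm picturing something like this per drive, with the grow saved for the very end (device names and mount point invented, and assuming mdadm plus an XFS filesystem):
Code:
# repeat for each old drive: swap it for a bigger one and let the array rebuild
mdadm /dev/md0 --fail /dev/sdc --remove /dev/sdc
mdadm /dev/md0 --add /dev/sdc
# once every member has been replaced, grow the array and then the filesystem, once
mdadm --grow /dev/md0 --size=max
xfs_growfs /mnt/storage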

Oh, and btw, if you don't want to be tossing drives around, I'll be happy to pay P&P for them, and toss them around... :D;)

Cheers.

Miguel
 
the 10x500GB box is only used for backups, so I'll just destroy the array, remove the old drives and put in the new drives, then make a new array and copy the data over again.. not a big problem.
 
lucky bastards :D
Miguel for not having to deal with moves from 200/250->500/750->1.5+
and
Wilba for not having to deal with this at all

guys, I envy you
 
the 10x500GB box is only used for backups, so I'll just destroy the array, remove the old drives and put in the new drives, then make a new array and copy the data over again.. not a big problem.
Wait... You have a 5TB system dedicated for backups? Damn, that's an [H] right there...

lucky bastards :D
Says the guy who has spent a truckload of money on HDDs...

As I usually say, if you have that kind of money, then don't complain about what problems that kind of money can bring you... lol (Joking, of course)

Miguel for not having to deal with moves from 200/250->500/750->1.5+
Well, granted that's not the funniest thing to do. But that's why I'm going with WHS for my build... :D Changing drives seems to be so much easier when compared to other RAID schemes (yes, I know WHS doesn't do RAID).

Cheers.

Miguel
 
lucky bastards :D
Miguel for not having to deal with moves from 200/250->500/750->1.5+
and
Wilba for not having to deal with this at all

guys, I envy you

Just yank your old drives, create the new array and restore from backups. You do have backups, don't you? :p
 
Just yank your old drives, create the new array and restore from backups. You do have backups, don't you? :p
That is actually very good advice. Even more so if the controller can actually recognize the array on all the yanked drives when they are re-attached to it...

As I usually say, the KISS approach is usually the best. Of course you first have to actually know what the simplest method is... :p

Cheers.

Miguel
 
after cleaning out 24x750, selling most of them and getting sick of having 30+ 1.5s tossed around my room, I'm not gonna do this again! :D I think :D

I'm curious, why the need to clean drives from an enormous RAID, especially given that you still hold a number of them? It seems that even if someone bought, say, 20 of your drives from an array that couldn't sustain the loss of 4 drives, your data would be unrecoverable. Or is it just a healthy dose of paranoia?
 
mostly because it's not a single array here, it's a variety of 10s, 5s, 6s ... usually with the minimum drives required; I had to provide variable safety/size, not one massive array with all the eggs in it
so moving from multiple RAIDs with variable backups is a total bitch
 
Also I'm pretty sure you mean ext4 since ext3 has a 16TB limit.

ext4 currently has a 16TB limit due to e2fsprogs not supporting anything larger. A guy I know at Red Hat has recently been doing e2fsprogs dev testing on >16TB, so we should see stable support out by year's end, I'd guess.
 
The latest addition to our system

2 SANs...... 9TB apiece.
IMAG00111.jpg
 
pic looks like it was taken in an office building. That would be awesome if it was for a home setup. Going through this thread makes me want to build a server and merge my 6 external drives lol...sigh need money first.
 
Those are for office use, but that room is my personal area in the office. Those are solely for my data backups and program ISOs. Good stuff.
 
It's a bit of a mix of new and old, and DEF not as impressive as some of the boxes in this thread... but I'm proud of it.

16.0TB Unformatted
11.4TB After formatting & RAID 6

Case: Norco 3216
Motherboard: Tyan Thunder LE (S2510) w/ ServerWorks ServerSet III-LE Chipset
CPU: Dual Intel Pentium III-S @ 1.13GHz (512K cache each)
Memory: 3072MB PC133/ECC SDRAM
Network: 2x Integrated Intel Ethernet Pro 100
Video: Integrated ATI Rage XL VGA
OS Drive: 40GB 7200RPM PATA (Slim Maxtor) (ext3)
Array Drives: 16x Western Digital Caviar Black
mdadm: 2x RAID6 Arrays w/ 8 drives each (~5.7TB usable each array) (XFS)
OS: Slackware (Thinking about WHS... )

Sorry no Pics... but some df

Code:
root@Magnus:~# df -h /dev/md/0 /dev/md/1
Filesystem Size Used Avail Use% Mounted on
/dev/md/0 5.7T 4.3T 1.4T 75% /backups
/dev/md/1 5.7T 2.8T 2.9T 49% /storage
root@Magnus:~#

Cheers
 
this is supposed to be a showoff thread, not a bitch session. let's see some pics
 
Stay tuned for pics of my latest project next week.. 16x1.5TB upgrade...:D
 
bumpsy
updated some of the boxes, rebuilt some ...
resetting a few numbers: 82TB for the total and 33TB for a single box
 
My server.... =] I love being a geek. I have 11 1.5TB Samsung 5400RPM drives and 1 1.5TB WD Green. The Green is the system drive. I haven't gotten the power cables yet, so I couldn't hook up the last 4 drives. As soon as I get the stuff in from FrozenCPU I'll update the post.


12 1.5TB = 18TB advertised.

SKOL1.jpg

SKOL2.jpg

SKOL3.jpg
 
The systems posted in this topic inspire me and, at the same time, make me feel completely inadequate.

Not sure whether I should start building my own 10tb system ...or go hide under a table, in the fetal position.

Decisions. Decisions.
 
I personally went with the under the table approach. I did happen to get comfortable, and fall asleep. While I was asleep I had a dream of a 10TB+ server, and it was awesome. So when I woke up, I was motivated to start working on my first one, it's not done yet tho.
 
actually, making multiple 10ish+ TB boxes isn't as fun as you would think ... so after having, making, and re-rebuilding them dozens of times, it starts to seem like a job, not an entertainment device ;P
 