The [H]ard Forum Storage Showoff Thread - Post your 10TB+ systems

Status
Not open for further replies.
SAS supports multiple HBAs, however, so you could have two computers attached to an expander.

Right, but the expander unit itself is incapable of running on its own. It's like those old-school computers that had a secondary tower full of SCSI drives and a fat, four-hundred-dollar (they were expensive) SCSI cable connecting it to the host computer. The storage tower isn't truly a computer/system, just a mass of hard drives, cooling, a PSU, and either a controller or an expander.
 
SAS supports multiple HBAs, however, so you could have two computers attached to an expander.

Is that a spec thing? Mine, and others I've seen, only have one SAS input and one SAS output, in case you want to daisy-chain them. And since they have to be attached to a RAID controller if you want to RAID them (mine is hooked up via an Adaptec 5085 so it can run RAID6), which machine would manage it? It's not like a NAS or SAN device; it has no processing for its own filesystem or anything.

Actually, that is interesting: if I have it in RAID6 on my Adaptec 5085 and plug it into the external port on my Areca 1680ix-24, would it be portable, like, say, a USB drive? Or is it tied to a single system, or at least to another Adaptec card?
 
Actually, that is interesting: if I have it in RAID6 on my Adaptec 5085 and plug it into the external port on my Areca 1680ix-24, would it be portable, like, say, a USB drive? Or is it tied to a single system, or at least to another Adaptec card?

No experience here with SAS, but I'm almost 100% certain that you need either the same card or a card with the same chipset/firmware. RAID arrays aren't swappable between dissimilar controller manufacturers.
 
Is that a spec thing? Mine, and others I've seen, only have one SAS input and one SAS output, in case you want to daisy-chain them. And since they have to be attached to a RAID controller if you want to RAID them (mine is hooked up via an Adaptec 5085 so it can run RAID6), which machine would manage it? It's not like a NAS or SAN device; it has no processing for its own filesystem or anything.

Actually, that is interesting: if I have it in RAID6 on my Adaptec 5085 and plug it into the external port on my Areca 1680ix-24, would it be portable, like, say, a USB drive? Or is it tied to a single system, or at least to another Adaptec card?
I believe it is part of the spec. Here is an example of something that supports it: http://www.supermicro.com/products/accessories/mobilerack/CSE-M28E2.cfm
As for the details of how it works, I'm not completely certain; however, I believe you have to store the data in a file system that supports such a thing (say, like DFS or Lustre... though Lustre is a bit different). With RAID it gets tricky, though. You would need similar RAID cards to read the RAID array. So it is portable, but at the same time, it isn't.

Yours truly needs a better job to play with these kinds of things (I actually have more storage at home than the main SAN at work). Anyone hiring? :p
 
Great systems and pictures, folks, keep 'em coming!

Fileserver 1: 4.0TB (RAID5 3.66TB) *RETIRED*
fileserver1.jpg

Case: Coolermaster Stacker
PSU: Dual 480W Antec Neopower
Motherboard: Asus P5WD2-WS
CPU: Intel D840 Extreme Edition
RAM: Corsair 2x512MB
GPU: Asus 7800GTX
Raid Controller: Dual RocketRAID 2220 PCI-X (cross-controller RAID)
Hard Drives: 8x 250GB Maxtor 7Y250M0 + 8x 250GB Maxtor 7Y250F0
Operating System: WinXP Pro 32-bit

Fileserver 2: 5.1TB (RAID6 4.38TB) *RETIRED*
fileserver2.jpg

Case: Coolermaster Stacker
PSU: 480W Antec Neopower + 750W Thermaltake CM
Motherboard: Asus P5WD2-WS PRO
CPU: Intel Core2Duo x6800
RAM: Corsair 2x512MB
GPU: Asus 7800GTX
Raid Controller: Areca 1160 PCI-X 256MB
Hard Drives: 16x 320GB Seagate Barracuda 7200.10 ST3320620AS
Operating System: WinXP Pro 32-bit

Fileserver 3 (Upgrade): 12.0TB (RAID6 10.25TB) *RETIRED*
fileserver3.jpg

Case: Coolermaster Stacker
PSU: 700W Seasonic M12
Motherboard: Intel D975XBX
CPU: Intel Core2Duo x6800
RAM: Corsair 4x512MB
GPU: ATI Radeon x1900
Raid Controller: Areca 1280ML PCI-e 2GB
Hard Drives: 16x 750GB Seagate Barracuda 7200.10 ST3750640AS
Operating System: WinXP Pro 32-bit

Fileserver 4: 18TB (RAID6 16.11TB)
fileserver4.jpg

Case: Supermicro SC846TQ-R900B
PSU: Redundant 900W Supermicro
Motherboard: Asus P5WD2-WS PRO
CPU: Intel Core2Duo e6750
RAM: Corsair 4x512MB
GPU: Some PCI card
Raid Controller: Areca 1280ML PCI-e 2GB
NIC: Dual Intel Dualport Gigabit
Hard Drives: 24x 750GB Seagate Barracuda 7200.10 ST3750640AS
Operating System: Windows Server 2003 32-bit

Fileserver 5: 18TB (RAID6 16.11TB)
fileserver5.jpg

Case: Lian-Li PC-A77b
PSU: 700W Seasonic M12
Motherboard: Asus P5n32e Sli Plus
CPU: Intel Core2Duo x6800
RAM: Corsair 4x1GB
GPU: Dual Asus 8800GTX (not anymore because cooling is a bitch)
Raid Controller: Areca 1261ML PCI-e 2GB
Hard Drives: 13x 1500GB Seagate Barracuda 7200.11 ST31500341AS
Operating System: WinXP Pro 32-bit

Fileserver 6: ??? (Fileserver 4 needs to go, it's noisy)
Okay, so now what!? 16x 2TB?
 
@d3vy: Nice setups, I had CM Stacker-based fileservers for a long time too :) Is your total storage then 36TB, with Fileserver 4 + Fileserver 5?
 
@d3vy: Nice setups, I had CM Stacker-based fileservers for a long time too :) Is your total storage then 36TB, with Fileserver 4 + Fileserver 5?

Thanks, the Stacker is (was) a great case to start with. And yes, total raw space is 36TB, 30TB formatted.
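Most of the raw-vs-formatted gap in numbers like these is RAID parity plus decimal-vs-binary units. A quick sketch of the math (the drive count and size below are example figures, not anyone's exact build):

```shell
#!/bin/sh
# Example figures only: 16 drives of 1500 GB (decimal GB, as advertised), RAID6.
drives=16
size_gb=1500

# RAID6 spends two drives' worth of capacity on parity.
usable_gb=$(( (drives - 2) * size_gb ))

# Drives are sold in decimal GB (10^9 bytes); the OS reports binary TiB (2^40).
usable_tib=$(awk -v g="$usable_gb" 'BEGIN { printf "%.2f", g * 1e9 / 1024^4 }')

echo "raw: $(( drives * size_gb )) GB, usable: ${usable_gb} GB (~${usable_tib} TiB)"
```

Filesystem overhead shaves off a bit more, which is why reported totals land slightly below even this.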

I'm looking for new ideas to build a high capacity "silent" setup, so if anyone has some.... Storage is addictive :D
 
23.16 TB Total
11.16TB Biggest Single

SPIDEROF1 - 6TB

Case - Coolermaster Centurion 590
PSU - Antec NeoPower 550
Motherboard - ASUS K8N-LR
CPU - Opteron 165
RAM - 2GB
Controller Cards: 3Ware 9500S-8, 3Ware 9650SE-8LP
Optical Drives: None
Hard Drives: 8x SAMSUNG HD753LJ (750GB)
Battery Backup Units: APC 1500VA
Operating System: OpenFiler 2.3

Shared image: SPIDERWHS on the left, SPIDEROF1 on the right.



SPIDERWHS - 11.16TB
Case - Coolermaster Stacker
PSU - PC Power & Cooling S61EPS
Motherboard - ASUS K8N-LR
CPU - Opteron 165
RAM - 2GB
Controller Cards: 3Ware 9550SXU-8L
Optical Drives: None
Hard Drives:
5x ST31000340AS - SD1A (Seagate 7200.11 1TB)
4x ST31500341AS - CC1H (Seagate 7200.11 1.5TB)
2x Hitachi 80GB
Battery Backup Units: APC 1500VA
Operating System: Windows Home Server PP2



SPIDERESX - 6TB
Case - SUPERMICRO CSE-743T-645B
PSU - 645W Came with case
Motherboard - SUPERMICRO MBD-X7DVA-E-O
CPU - Intel Xeon E5430 x 2
RAM - 8GB - 4x2GB A-DATA FB-DIMM DDR2
Controller Cards: Perc 5i
Optical Drives: None
Hard Drives:
4x ST3500641AS (Seagate 7200.7 500GB)
4x WD10EACS - (WD Green 1TB)
Battery Backup Units: APC 1500VA
Operating System: VMWare ESXi 3.5



The primary role of the servers is DVD storage for the MyMovies VMC application. SPIDEROF1 is an iSCSI target for ESXi VMs.
SPIDERESX is my VMWare ESX test lab and also hosts the server OS used by the media centers/extenders.
SPIDERWHS is storage for my Media Center network.

Images got made2owned.
 
How long does it take for you guys to copy data over when you upgrade? Do you do it over a network or just toss the drives in an upgraded machine and use the built in tools to copy over?
 
How long does it take for you guys to copy data over when you upgrade? Do you do it over a network or just toss the drives in an upgraded machine and use the built in tools to copy over?

Well, you can do things like OCE, etc., but I didn't trust it enough, so I just copied my data to two work machines over the network. Luckily I work somewhere that has petabytes of storage and some servers just lying around unused, so I used those.

Most of our servers use lame 3ware controllers, and those in RAID6 are really slow. I offloaded to two servers at once when I moved my 11TB array to upgrade to 20TB; I had about 6TB of data at the time. It took 2 days to copy my data over to the two servers (simultaneously). The load average was 10-15 on the 3ware machines and ~1 on mine.

Copying the data back took less than 24 hours and the load average on my machine writing the data back was ~1.5 due to the CPU usage of rsync/ssh.
 
I use OCE as I don't have access to anything with enough storage. When I was filling the server initially, I had data spread between many machines, and was able to nearly max out the 2 aggregated gigabit lines on the server.
 
How long does it take for you guys to copy data over when you upgrade? Do you do it over a network or just toss the drives in an upgraded machine and use the built in tools to copy over?

My machine isn't quite big enough to be in this thread (9TB, sigh) but I used "zfs send | zfs recv" to move everything from the old pool to the new one. It took about 10 hours to move 1.4 TB, severely limited by the one PCI bus in use (42 MB/s read from old mirrored pool, 64 MB/s write to new raidz2 pool). I'm planning another round of upgrades this summer, and will certainly post in this thread again when I have a SAS expander so I can use my new controller... not to mention another 6 1TB disks.
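For reference, a pool-to-pool migration with zfs send/recv is roughly this; it's a non-runnable sketch (pool and snapshot names are placeholders, and it obviously needs real ZFS pools):

```shell
# Snapshot the old pool recursively, then stream it into the new one.
# -R replicates the whole dataset tree with its properties; -F lets the
# receiving pool roll back to match the incoming stream.
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs recv -F newpool
```

Because it's one sequential stream, it tends to run at whatever the slower of the two pools (or the bus between them) can sustain, as the 42/64 MB/s figures above illustrate.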
 
17.12TB

Case: Cooler Master Stacker
Power Supply: 2x Corsair TX 650W
Mainboard: Asus P5K WS
RAM: Corsair XMS2 DIMM 2 x 1GB
Graphic card: NVidia Geforce FX5200 PCI
Sound: Onboard
OS: Windows Server 2003 R2 Standard Edition
RAID Controller:
* AMCC 3ware Escalade 9550SX-8LP, PCI-X 133MHz
* AMCC 3ware Escalade 9650SX-16ML, PCIe

RAID5
* 4 x Samsung 500GB HD502IJ SATA

RAID6
* 15 x Seagate 1TB ST31000340AS SATA

Single HDD:
* 1 x Samsung 120GB SV1203N IDE

miniimg0020cr2.jpg


miniimg0021hq0.jpg
 
Well, you can do things like OCE, etc., but I didn't trust it enough, so I just copied my data to two work machines over the network. Luckily I work somewhere that has petabytes of storage and some servers just lying around unused, so I used those.

Most of our servers use lame 3ware controllers, and those in RAID6 are really slow. I offloaded to two servers at once when I moved my 11TB array to upgrade to 20TB; I had about 6TB of data at the time. It took 2 days to copy my data over to the two servers (simultaneously). The load average was 10-15 on the 3ware machines and ~1 on mine.

Copying the data back took less than 24 hours and the load average on my machine writing the data back was ~1.5 due to the CPU usage of rsync/ssh.

Sounds like some tuning of those 3ware boxen is in order?

15 hours copying 3.5TB of data from one 3ware 9500S-12 array to another via the network, both running RAID5... limiting factor? The Marvell network card in the receiving box (monster), later upgraded to an Intel gig-e NIC.
 
If the connection were the issue, the load average on the 3ware boxes would not be 10-15. The boxes showed decent write speed in RAID6 (150-200MB/sec) when doing small writes (less than 5GB), but large writes went really slowly. Writing across the entire RAID array averaged less than 35 megabytes/sec (less than a single disk).
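A quick way to sanity-check sequential write speed like that is a dd run that bypasses the page cache, so the number reflects the array rather than system RAM (the mount path is hypothetical):

```shell
# Write 10 GB straight to the array; dd reports throughput when it finishes.
# oflag=direct bypasses the page cache so you measure the controller and
# disks, not memory.
dd if=/dev/zero of=/mnt/array/ddtest bs=1M count=10240 oflag=direct
rm /mnt/array/ddtest
```

Writing several times the controller's cache size (10 GB here) is what exposes the slow sustained rate that small 5 GB bursts hide.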

Also, we just lost 55TB of arrays on a 3ware controller. We have since switched to LSI for our new machines. See this thread:

http://forums.2cpu.com/showthread.php?p=762341

Needless to say, 3ware has left a really, really bad taste in my mouth, and I recommend against them any chance I get. This is the second array we have lost to this specific issue, and about the sixth or seventh array we have lost to 3ware (out of 100 or so).
 
I guess I was unclear; I was hinting at tuning the 3ware controllers, not the connection. There are certain things you really should do to get better performance on Linux...
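For what it's worth, the Linux-side knobs usually mentioned for 3ware arrays are read-ahead, the I/O scheduler, and queue depth; something along these lines (the device name is a placeholder, and all of it needs root):

```shell
# Bump read-ahead on the array device (value is in 512-byte sectors).
blockdev --setra 16384 /dev/sdX

# The deadline scheduler often behaves better than cfq on hardware RAID.
echo deadline > /sys/block/sdX/queue/scheduler

# Allow more in-flight requests so the controller's queue stays full.
echo 512 > /sys/block/sdX/queue/nr_requests
```

These settings don't persist across reboots, so they typically go in an init script.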
 
Damn, your builds are wicked sick, but I must say I preferred the old 4+ disk thread, because more people were able to qualify.
 
Yeah, pretty sick. I wish I could join, but honestly I have no use for more than 2 terabytes. Kudos to you guys that do, though. Really makes me want to join!

What backplanes do you guys like? I went ahead with some Athena Power; wish I had consulted you guys first though, heh! Why didn't I PM bluefox? He's a cool dude. Heh!
 
I am only on RAID0, so yeah ;). Though you didn't answer my question, lol :p

If you can afford it, an offsite system. Otherwise you can burn DVDs, but burning doesn't always produce good discs.
 
Yeah, pretty sick. I wish I could join, but honestly I have no use for more than 2 terabytes. Kudos to you guys that do, though. Really makes me want to join!

What backplanes do you guys like? I went ahead with some Athena Power; wish I had consulted you guys first though, heh! Why didn't I PM bluefox? He's a cool dude. Heh!
If you just want to use your 5.25" bays with 3.5" drives, you can pick up a bunch of the Lian Li non-hot-swap bay things. The 3-in-2 one is under $25 (which is considerably cheaper than anything hot-swap). Here are some examples (which can be found cheaper if you shop around): http://www.frozencpu.com/cat/l3/g/c241/s611/list/
 
If you just want to use your 5.25" bays with 3.5" drives, you can pick up a bunch of the Lian Li non-hot-swap bay things. The 3-in-2 one is under $25 (which is considerably cheaper than anything hot-swap). Here are some examples (which can be found cheaper if you shop around): http://www.frozencpu.com/cat/l3/g/c241/s611/list/

Yeah, I saw, though I want hot swap; I had a SCSI server back in the day and loved that ability :). Is Athena Power a good company/backplane? I kinda already bought one, haha :D
 
Yeah, I saw, though I want hot swap; I had a SCSI server back in the day and loved that ability :). Is Athena Power a good company/backplane? I kinda already bought one, haha :D
I'm not a big fan of them personally. If I were to buy one, it would be the Supermicro one (but it is probably double the price).
 
What backplanes do you guys like?

Something that I cannot afford. :(

What is my backup plan? Pray that nothing goes down. I live dangerously. :eek:


I use these made by IcyDock. They can be a bit expensive, but they are a snap to use and easy to work with.

http://www.newegg.com/Product/Product.aspx?Item=N82E16817994028

17-994-028-08.jpg


Yes, they can be a bit pricey, but it's worth it for the fan, the hot swap, and the ease of use.

PS: the picture is wrong; it's actually shown on the wrong side. Rotate it 90 degrees.
 
Well I have multiple systems. I will start with my biggest system first.

First is an N7700 NAS.
Hard Drives -
7 x WD10EADS 1TB SATA 3.0Gb/s Hard Drive - OEM


Second, the PC I am currently using:
Case P180B
PSU Corsair 620HX
Motherboard Asus P5W DX
CPU Q6600 (overclocked to 3.0GHz)
RAM 4GB of Corsair XMS2 (4x 1GB sticks)
GPU 8800GTX
Optical Drives - Sony Optiarc 18X DVD±R DVD Burner
Hard Drives -
1 x Western Digital Raptor WD1500ADFD 150GB 10000 RPM SATA 1.5Gb/s Hard Drive - OEM
3 x WD10EACS 1TB SATA 3.0Gb/s Hard Drive - OEM
Operating System Vista Ultimate 64-bit


Third, my Popcorn Hour A110.
Hard Drives -
1 x Seagate Barracuda 7200.10 ST3750640AS 750GB 7200 RPM SATA 3.0Gb/s Hard Drive - OEM

Oh yeah, fourth: a WD 320GB drive in an enclosure, which was my old computer's download drive.

Wow, looking back at my old invoices:
The Seagate drive cost $339.99 when I bought it
Raptor - $219.99
The old WD drives cost $259.99, $234.99, and $189.99
And my new NAS drives cost $105.25 each

Hard drive prices sure went down over time. Now for my story. The PC I built started with a Raptor drive and the 750GB Seagate drive: the Raptor for the OS, programs, and installed games; the Seagate for the stuff I download. Later on my download drive got full, so I bought a new 1TB drive and started moving downloads to it. Again my 1TB drive got full, and I bought another 1TB drive. That drive was almost half full when my download drive broke. I RMA'd the Seagate drive and bought another 1TB drive as my download drive. Later my second 1TB drive got full, and then my download drive got full too. I was running out of SATA ports on my motherboard, so I decided to build a NAS system, since I have a job and money to spend. Before that, I got my Seagate back from RMA and used it in a Popcorn Hour I bought; the Popcorn Hour is for watching downloaded movies on my TV. The NAS server I built stores the stuff I download on my computer, which is mostly console games.

So I have a bit more than 10TB of space. I will take pictures with my camera phone when I get home, if pictures are needed.


Here are the pictures:

photo0022q.jpg

photo0023b.jpg

photo0024p.jpg

photo0025f.jpg
 
Seems like the houkouonchi.net hotlinking does not work.

I should upload them somewhere else.
 
Darn, I saw the Icy Dock; it's the same price as the Athena I bought. The reviews were generally good, so hopefully I won't get a dud. I really liked the on/off button and layout on the Athena Power. I mean, is it crappy? Should I return it? Or can I live with it?
 
Phew, just made it!


-Main Rig (9.890TB, 8.967TB Formatted)
-MacBook Pro (200GB, 181.3GB Formatted)
-60GB USB External Drive (54.4GB Formatted)

***Total: 10.150TB Advertised, 9.2027TB Formatted

Main Rig (Will eventually become a WHS when I build a "real" gaming rig in ~2-3 years - trying to hold out until Haswell, but we'll see...)

Intel Core 2 Quad Q6600 @ 3.4GHz air
eVGA GTX 260 896MB ~14% OC
ASUS P5Q WS
8GB OCZ PC2-6400
Supermicro AOC-SAT2-MV8
Cooler Master Centurion 590 with 5in3s (currently using 2, but expandable to 3)
1x 500GB WD 5000AAKS
1x 640GB WD 6400AAKS
5x 750GB: 3x Seagate ST3750640AS, 1x Samsung HD753LJ, 1x WD 7500AAKS
2x 1000GB: 2x WD 10EACS
2x 1500GB: 2x Seagate ST31500341AS
610w PCP&C S61EPS
Vista Ultimate x64 SP1



Here it is, not very imposing from the outside...


sc05874.jpg




Jury-rigged fans. May improve the mounting a bit this summer when I have access to better tools. For now, tape, Blu-Tack, and twist ties will have to do. Forced cooling is critical with the 5in3s, but temps are OK with these suckers howling away (my attempts at sleeping are less successful).



sc05883.jpg



DVD drive will be replaced with another 5in3 (5x2TB, doubling capacity) eventually, with the 3.5 to 5.25 converter at the top, and some SSDs crammed in somewhere.



sc05878.jpg




Cable management. With this many drives, you do what you can.... Could be worse.



sc05852.jpg



sc05859.jpg




Foam padding to cut down on sound and vibrations. The cages didn't line up with any holes in the case, but it's a (very) snug fit. As long as this thing isn't tipped upside down, they won't budge.



sc05857.jpg





The most space-efficient way of storing HDDs possible:




0829081518.jpg





Arguably the cheapest (~$70) case for the number of drives it can hold. A bit of bending was required to fit the 5in3s, but nothing major. Maxed out, it will have one more 5in3, the single 1in1, and probably a couple of RAIDed SSDs, for a total of 16 3.5" drives and 2 2.5" drives. With 2x PCI-E x8/x16 8-port SATA cards (no GPU) and 2x PCI-E x1 4-port cards, along with an external expansion rack and a beefier PSU for 22 more drives, the total theoretical number of drives this rig can scale to is 40. Pretty impressive.




0829081616.jpg



My Computer, using that space well... Expecting it to be <500GB free within the month. Hopefully I can hold out on expanding until the 2TBs drop to a reasonable price. This rig is all about storage density.


ycomputerhdds.png










And, rounding out that 10TB:

MacBook Pro (Merom Santa Rosa)
2.4 GHz T7700
GeForce 8600M GT 256MB GDDR3
2x 2GB Patriot DDR2 5300 SODIMM
Hitachi Travelstar 7K200 (what a pain it was to upgrade this sucker...)
LED Backlit 15" Matte LCD
Vista Home Premium x64 SP1 (Primary OS)



Nothing really special here. I upgraded the RAM and HDD after getting it (Apple was charging ~$800 for a 4GB upgrade back in summer '07 and didn't even offer a 200GB 7200rpm option - lulz).


Default pic - we all know what they look like...
acbookpro.jpg



Well, I hope you like it. A big thanks to houkouonchi for the image hosting, and congrats to the rest of you for breaking the 10TB barrier!




 
Sweet rig. Good economical design, but your photos sent me scrolling on a 22" monitor! It's sort of hard to deal with images that big.
 
Is this any better? Sorry, it looked fine to me at 19x12 :p

Yeah they were fine for me too but most people run crappy low resolution. My lowest resolution monitors are 2560x1600 and my highest resolution monitor is 3840x2400. It pains me to see people running at 1024x768 or 1280x1024.
 
Yeah they were fine for me too but most people run crappy low resolution. My lowest resolution monitors are 2560x1600 and my highest resolution monitor is 3840x2400. It pains me to see people running at 1024x768 or 1280x1024.

Some people, like me, are at work using laptops and Dell Minis.

And then there are people like me who are too lazy to bring in their 30" monitor to use at work.
 
Some people, like me, are at work using laptops and Dell Minis.

And then there are people like me who are too lazy to bring in their 30" monitor to use at work.

For that matter, do most of you with hi-res screens really use your browser at anywhere close to full-screen? I find keeping it to rough paper-dimensions makes for a much better reading experience, and keeps wasted space to a minimum.
 
I run my browser at full screen on my 30's. I don't really need all 3 of them but I'm at least always using 2 of the 3.
 
37.5 TB

Dell PowerVault MD1000s connected to Dell PowerEdge 1950s
Redundant Dell PSUs
Dell dual-socket motherboard
2x Intel Xeon quad-cores at 2GHz
16GB RAM
Dell PERC 5/E RAID Controllers
Dell-branded SATA drives with interposers
APC 5000VA 208V 30A with 2x 120V step-down transformers
Windows Server 2003 R2 Enterprise

I own an offsite backup business.

6.jpg
7.jpg
 