The [H]ard Forum Storage Showoff Thread - Post your 10TB+ systems

Total Storage: 36TB + 2TB + 64GB + 64GB
Usable Storage: 30TB (RAID 6 + hot spare) + 1TB (RAID 1, home) + 64GB (RAID 1 Linux MD OS drive)

http://blog.crcthree.com/images/beast_1.jpg
http://blog.crcthree.com/images/beast_2.jpg
http://blog.crcthree.com/images/beast_3.jpg
http://blog.crcthree.com/images/beast_4.jpg

Pyrodex, what filesystems are you using for your partitions? Especially with everything I'm reading about EXT3 being hellish on an unclean restart, EXT4 not being suitable for anything over 16TB, etc.
 
I am using EXT4 on all the filesystems. We've been using it at work on a few large-scale systems that house Oracle databases and large-scale file storage without issues. We have a few past 16TB, but most of them are broken out into smaller filesystems. My system is all contained in LVM, so I can break them out into smaller volumes as needed and grow them on the fly.
 

Cool, it's good to hear from somebody that's actually using it without any issues :eek:) You may well see the appearance of a "copy-cat" system when I decide to drop WHS and get back to where I prefer my systems to be ... open-sauce ;)
 
I am really pro-JFS myself. I can't trust ext4 for >16TiB filesystems due to what I have heard on the mailing lists, and the only other 'stable' options on Linux are XFS/JFS (although ZFS might be a real option very soon). XFS has good performance, but due to past issues I just can't trust it for data reliability. Even on my 36TB filesystem, JFS takes only about 15 minutes for an fsck, and so far it's been super reliable; I haven't lost any data on the 3-4 systems using it with >2TiB filesystems. Only one system is >16TiB and is actually using more than 16TiB as well:

Code:
root@dekabutsu: 01:24 AM :~# df -H /data
Filesystem             Size   Used  Avail Use% Mounted on
/dev/sdd1               36T    22T    14T  62% /data
root@dekabutsu: 01:24 AM :~# df -h /data
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdd1              33T   20T   13T  62% /data
root@dekabutsu: 01:24 AM :~#
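Side note on the two df outputs above: they only disagree on units. `df -H` counts in powers of ten (SI), `df -h` in powers of two (binary). A quick illustrative check:

```python
# df -H reports SI units (1 TB = 10^12 bytes); df -h reports binary
# units (1 TiB = 2^40 bytes). The same nominal 36 TB shows up smaller in TiB.
size_bytes = 36 * 10**12          # capacity roughly as df -H displays it

tib = size_bytes / 2**40          # convert to binary terabytes (TiB)
print(f"{tib:.1f} TiB")           # -> 32.7 TiB, which df -h rounds to 33T
```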
 
Upgraded my mythbox to 16TB of storage, bringing my total to 107.4TB.

New Mythbox

Total Storage: 16 TB

Norco RPC 4020 Case
Intel Xeon X3440 2.53 GHz
4GB DDR3 1066 Memory
Nvidia Geforce 8600GT
Areca ARC-1220 Raid controller
8x 2TB Hitachi 7200 RPM (RAID 6)

Back of the machine showing tuner cards/vid:


Top of the machine with cover off (getting ready to replace old box)


Old and new box in the same shot:


This is the box I use for all my TV. It has 2 GenPix DVB-S tuners for HD sat, 1 Nexus DVB-S for non-HD sat, and 2 pcHDTV cards for OTA HD. It can record 5 things at once and gets non-recompressed sat streams from Dish Network.
 
Here's an update on my stuff - the Dockmaster FS 20000 shown earlier was an epic fail. The mainboard (ASUS) was really crap - and because of time limitations due to a big customer project, I switched back to my external RAID boxes.

In late December I started a new approach - this (3rd) time was really successful :) I'd kept the Yeong Yang case from the last project and bought a new board, CPU, RAM, HDs and a new LSI controller.

Anyway - here's the system:

Total Storage: 17.66TB

"KARINKA"
Yeong Yang Cube
AMD Athlon X2 245e
6GB ECC RAM Kingston/Hynix
Gigabyte GA-MA770T-UD3
Old crappy ATI PCI graphics card - terminal only ;)
Intel SASUC8i / patched to IT firmware
1x 160GB WD Caviar Blue IDE drive -> syspool
6x 1TB Hitachi 24/7 drives (7200 rpm) -> mediapool
4x 1TB WD Caviar Green drives (5400 rpm) -> storage (private/company data)
4x 1.5TB WD Caviar Green drives (5400 rpm) -> backup
+
1x 1.5TB Samsung external HD USB2/FW400 -> backup, too - will be placed at my parents home
System: Nexenta Core 3.0.1


karinka.jpg


karinka2.jpg



The next system is a pretty small one, just for Debian and Intranet purpose :)

"HOTARU"
SuperMicro 19" 1U Server
Intel Celeron 2.4GHz
1.5GB RAM
1x160GB WD IDE drive
System: Debian Lenny

hotaru.jpg


hotaru2.jpg



The last 24/7 system in my basement is this nice old IBM Netvista IPCop :D

"SKULD"
IBM Netvista 28xx Series
Pentium-M 266MHz
256MB RAM
40GB Hitachi IDE 2.5" HD
System: IPCop 1.4.21

skuld.jpg


Oh! Not to forget: this is an Elgato EyeTV Netstream DVB-S2 box...

dvbs2.jpg


...which allows me to stream TV, including HD, to my office:

5323365046_1ee06b0476_z.jpg


5322760495_d0a60de450_z.jpg


5322760683_aa1f963002_z.jpg


(Looks a bit messy on this photo - check my flickr account for more photos :))

Ok back to the basement... here are 2 photos showing the complete rack...

rack.jpg


... and the complete installation...

complete.jpg


PS: And YES - I know that I have to sort those cables some day ;)
 
Ah, the disk temps are about 20-25 degrees. It's an Antec 300 case; I just drilled new holes and made sure the HDDs are nearly touching, but there is still airflow between them, and in front of them are 3 three-speed fans.
 
Yay I can post here finally!

Antec SX-1040 Case
Antec TruePower Quattro 850W
AMD Athlon 64 X2 6400+
1x1TB Hitachi 7200rpm for OS (Server 2008 R2) and mirrored storage
Areca ARC-1880i RAID Controller w/ BBU
8x2TB Hitachi 7K2000 in RAID 6

Pic is kinda crappy because it's from a phone:

2q0vtrp.jpg
 
Did you put a fan in front of those 2 hard drive banks to keep them cool? They really don't have that much airflow around them.
 
Any handicap on the list for form factor? j/k :D Sorry for the poor quality of the pictures, they were taken with my cell phone.

MKVWhore, my media center.
12.16TB internal/total.

2011-01-14140157.jpg


Sugo 06 case w/ generic 300w SFX psu
Zotac H55ITX-a-e motherboard
MPX-3132 Mini PCIe RAID card (replaces onboard 802.11n card)
Intel i3-530 w/ retail heatsink
4GB (2x2GB) Kingston 1.35v DDR3-1333
2x Intel G2 80gb SSD (RAID 1) OS Drive
6x Western Digital 2TB (WD20EARS) Data Drives
Windows 7 Pro

Additional hardware soon to come:
Ceton InfiniTV4 (in the mail right now, finally!!!)
Prolimatech Samuel 17 (maybe -- this is up for debate).

I'm going to add the cooling fans once I get another 90-degree bent SATA cable (you can see the temporary orange one in the photos).
- 2x 80mm will be stacked vertically on the left side of the case (intake) to cool down the Ceton card and the RAM.
- 1x 120mm will be on top of the case (intake), blowing cool air between the data drive bank and onto the RAM/motherboard.

Pics:

Top down look before the PSU went in:
2011-01-14130740.jpg


Close-up of the MPX-3132 that made this possible:
2011-01-14130814.jpg


Chopped up PSU:
choppedpsu.jpg


Top down photo after the PSU was in:
2011-01-14132242.jpg


Left side:
2011-01-14132259.jpg


Right side:
2011-01-14132320.jpg



-----

MediaCentercapacity.jpg
 
Really sweet build in a tight space. I'll give you props for the form factor. Think what you could do with 3TB drives!

Question: what kind of temps are you seeing on the hard drives? It doesn't look like there is much room for airflow. I'd expect the drives in the middle get pretty hot?
 
Yeah, there is about 5mm between drives, so the ones in the middle are getting warm to the touch (haven't checked with my IR keychain gizmo, so I don't have an exact temp). It's not too bad right now as my house is usually at about ~55ºF during the winter, but I'll definitely have to put a fan in for the summer. I've got a Scythe 12x120mm fan that will sit just inside the top cover over all of the drives, blowing down; hopefully that will solve any issues.
 
The drives will report their own temps. All you need is a program that can read them. There are dozens of free apps out there. It's more accurate than your "IR keychain gizmo" because it's reporting the temperature inside the drive - where it matters.
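On Linux, smartmontools (`smartctl -A /dev/sdX`) is the usual way to get at those drive-reported temps. A rough sketch of pulling the value out of that output - the sample table below is made up, and the exact column layout can vary by drive:

```python
# Made-up sample of the common ATA attribute table printed by `smartctl -A`.
SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
194 Temperature_Celsius     0x0002   166   166   000    Old_age   Always       -       35 (Min/Max 21/45)
"""

def drive_temp(smartctl_output):
    """Return the Temperature_Celsius raw value from `smartctl -A` text, or None."""
    for line in smartctl_output.splitlines():
        if "Temperature_Celsius" in line:
            fields = line.split()
            # RAW_VALUE is the 10th whitespace-separated column; the
            # "(Min/Max ...)" text that may follow it is ignored.
            return int(fields[9])
    return None

print(drive_temp(SAMPLE))  # -> 35
```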
 
Problem with short legs?
So do I; damn near impossible finding a chair that fits, they are all made for long-legged people :(

:D Yeah - with my height of 1.73m I'm a bit short ;) But this pedestal is mainly for long coding sessions: feet up, lean back, and hack the PHP code into vim...
 
HW Monitor is giving me 30ºC for the front and back drives and 35ºC for the middle drives. Room ambient is 18ºC, case ambient is 24ºC, and the proc (i3-530 @ 1.0v) is 39ºC when playing back 1080p .mkvs from the middle drives.


edit:
Just did a couple hours of burn-in -- CPU topped out at 1.2v/50ºC, HDDs spread from 41-44ºC... I'll keep running burn-ins to see if it goes any higher.
 
Given the compact case and limited airflow, that's not too bad at all! High side of acceptable at 40+, but not too bad.

It'll be interesting to see how it holds up this summer when ambient is a bit higher (18C? I take it you like sweaters - or perhaps a Snuggie?).
 
lol, yeah -- sweaters/hoodies are the way of life up here. Below zero (F) last night + 12.5' ceilings in a 170yr old house = way too much $$$ to heat all the way up ;)

I'll get the fans all installed in the next couple weeks and we'll see how much that helps. MTF
 
Now that the build is complete, I'm happy to post :D

Total Storage: 27.1 TB

Case: Norco RPC-4224 (new power button design)
PSU: Corsair AX750 750 watt (80 Plus Gold, single rail)
Motherboard: Asus P6T WS Professional
CPU: Xeon E5620
RAM: 12GB (3x4GB) Mushkin Enhanced Proline ECC PC3 10600
GPU: EVGA EN8400GS Silent
Controller: Areca ARC-1880ix-24-4G ECC
OS: Windows Server 2008 R2 Datacenter
System HDD: 2x 300GB Hitachi Ultrastar 15K RPM (SAS Hardware RAID1 on the motherboard)
RAID HDD: 4x 2TB Samsung F4, 4x WD 2TB EARS, 5x 1.5TB Seagate, 1x 2TB hot spare
BBU: APC 850 watt 1500VA UPS, Areca BBU eventually

I use the server for two purposes primarily: a storage server, and a Hyper-V server for running test and development virtual machines (TFS & test images). Because of the cost, my only backups currently are the RAID arrays themselves, but I intend to duplicate the box and keep it at another house when money allows.

I have all eight 2TB drives in one RAID 6 array, and the 1.5TB drives in a RAID 6 as well. In the near future, I intend to purchase 4 more 2TB drives, expand the array to 12 drives, and leave the last two rows (three when the 1.5s are upgraded) for a RAID 6 array full of 3TB drives, unless the smart people here tell me that is a bad idea. When complete, that'll be 60TB in this box.
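For anyone checking the math on those arrays: RAID 6 costs two drives' worth of parity per array, so usable space (in marketing TB, before formatting overhead) works out roughly like this - a sketch, not exact controller numbers:

```python
def raid6_usable_tb(drives, size_tb):
    """Usable capacity of a RAID 6 array: two drives' worth goes to parity."""
    return (drives - 2) * size_tb

# The arrays described above (sizes in marketing TB, before filesystem overhead):
print(raid6_usable_tb(8, 2.0))    # current 8x 2TB array   -> 12.0
print(raid6_usable_tb(12, 2.0))   # after adding 4 drives  -> 20.0
print(raid6_usable_tb(12, 3.0))   # a 12x 3TB array        -> 30.0
```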

Also, I'm torn on my motherboard selection - questioning whether there was a better option I should have chosen. Thoughts?
 
Without pictures you haven't posted your build yet... ;)

On the MB question: the P6T WS Pro has two drawbacks. It uses crappy Realtek LAN chips and it lacks remote management (IPMI). The LAN problem can be overcome with add-on cards that use Intel-based server LAN chips. After running a server like this for a while you'll really wish you had IPMI (or maybe not - but only because you don't know what you are missing; once you've run a server ->with<- IPMI you'll never want to be without it again).

Assuming you want to stay with an X58-based build, SuperMicro's X8STE would have been about right here. Intel LAN and IPMI both.

No worries in the end - you've put together a fine build, and if these are the only nits to be picked it's a pretty good day.
 
What exactly are the negatives of the Realtek LAN chip? (I ask after just having a 5.4TB network transfer fail after ~2TB was transferred, needing a reboot to correct.) I'm open to getting a different MB, but I'm not sure the SM unit fits my needs. I like IPMI (I manage major servers at work and am familiar with it), but that MB would be missing the SAS ports for the system drives, and a TPM header.

I'll get some pics up in a few...
 
I'll second most of that. I purchased the X8ST3-F and am really happy I did. I deal with servers in a data center, and IPMI is a huge plus should I ever put this machine in a colocation space. If you only have it at home it's not as big a deal, but I understand these server boards play much nicer with RAID cards and such.

That board ran me $300 though, so it's definitely not the cheapest.
 
Amount of total storage (if posting multiple systems): approx 33TB

WHS: 11.89TB total

Case Lian Li V1000
PSU PC Power & Cooling 700W
Motherboard Jetway ATOM-GM1-330
CPU Atom 330
RAM 2GB DDR2-800
GPU (if discrete) Onboard
Controller Cards (if any) Supermicro AOC-SAT2-MV8, Adaptec 4-port card
Hard Drives (include full model number) See Picture
driveslist.PNG

Battery Backup Units (if any) APC XS 1500
Operating System MS WHS

WHS, what is there to say? Boring WHS box.

photo1.jpg


Amount of storage in the following system: approx 22TB


Case Norco RPC-4220
PSU Corsair TX850W
Motherboard Supermicro X8SIL-F-O, LGA 1156
CPU Xeon X3440
RAM 8GB DDR3 ECC 1333
GPU (if discrete) Onboard
Controller Cards (if any) Chenbro CK12803 SAS expander
RAID Controller RocketRaid 4320
Optical Drives No!
Hard Drives (include full model number)
-OS: WD2500BEKT, 250GB 7200rpm 2.5" laptop drive
-RAID 0: WD6400AAKS, 640GB 7200rpm 3.5", qty 3
-RAID 5: Samsung HD204UI, 2TB 5400rpm 3.5", qty 10
-USB: 2TB Samsung Story Station, qty 2
Battery Backup Units (if any) APC XS 1500
Operating System MS Windows 2008 R2 Datacenter w/ Hyper-V

The unit is primarily used for file storage and backups of other systems. Windows 2008 R2 is installed with the Hyper-V role. It is configured as a domain controller and currently runs 3 other virtual machines. I am working on developing a backup plan that will utilize my WHS box.

IMG_4990.JPG

IMG_5005.JPG

IMG_5045.JPG

2011-01-21%2011.54.25%20(Custom).jpg


Some Benchmarks

osbench.PNG

OS Drive
raid0bench.PNG

Raid 0 Array
raid5bench.PNG

Raid 5 array
 
Interested in how those Samsungs with Advanced Format are going to do on hardware RAID...

The array finished initializing last night; I'm pulling some large files over to it now from another array, and it is going at about 88-100MB/s. Nothing over the top or too insane, but it will be good enough for storage. For sure loads faster than the WHS box - that was as slow as 5MB/s to 45MB/s.

I'm going to pull everything over from my WHS box over the next week and power it down. My plan is to use the WHS box to back up the new server, in conjunction with using the new server to back up my laptop to the external drives. I'm looking to use something like Backup Exec.
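Rough numbers on how long a pull like that takes, assuming a sustained 90MB/s and decimal units throughout:

```python
def transfer_hours(tb, mb_per_s):
    """Hours to move `tb` terabytes (10^12 bytes) at a sustained rate in MB/s (10^6 bytes)."""
    return tb * 10**12 / (mb_per_s * 10**6) / 3600

# The 5.4TB transfer mentioned earlier in the thread, at ~90MB/s:
print(f"{transfer_hours(5.4, 90):.1f} h")   # -> 16.7 h
```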
 
As far as I am aware, the issue with ext4 is that while the filesystem itself will go up to 1EiB, the tools will not go past 16TiB. So the question is: how do you both administer a volume that is greater than 16TiB?
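For what it's worth, the 16TiB figure is just the arithmetic of 32-bit block numbers in the then-current e2fs tools combined with the default 4KiB block size:

```python
block_size = 4096        # default ext4 block size in bytes (4 KiB)
max_blocks = 2**32       # 32-bit block numbers in the tools at the time

limit = block_size * max_blocks
print(limit // 2**40, "TiB")   # -> 16 TiB
```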
 
My tiny 26TB file server!!
This file server is for HD and Blu-ray movies.

Mainboard ZOTAC H55-ITX WiFi USB3
CPU Core i3-530
RAM G.Skill PI 4GB 1600MHz CAS 8
HDD Seagate LP 2TB x13
PSU Seasonic X series 650W
SATA controller Supermicro AOC-SASLP-MV8


That really, really is amazing! :eek:
 