picture can be found here - http://hardforum.com/showpost.php?p=1033848171&postcount=54
Total Storage: 36TB + 2TB + 64GB + 64GB
Usable Storage: 30TB (RAID 6 + hot spare) + 1TB (RAID 1, home) + 64GB (RAID 1 (Linux md) OS drive)
http://blog.crcthree.com/images/beast_1.jpg
http://blog.crcthree.com/images/beast_2.jpg
http://blog.crcthree.com/images/beast_3.jpg
http://blog.crcthree.com/images/beast_4.jpg
Pyrodex, what filesystems are you using for your partitions? Especially with everything that I'm reading about EXT3 being hellish on an unclean restart, EXT4 not being suitable for anything >16TB, etc.

I am using EXT4 on all the filesystems. We've been using it at work on a few LARGE scale systems that house Oracle databases and large-scale file storage without issues. We have a few past 16TB, but most of them are broken out into smaller filesystems. My system is all contained in LVM, so I can break them out into smaller volumes for my needs and grow them on the fly as required.
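For anyone curious what "grow them on the fly" looks like in practice, here's a dry-run sketch of the usual LVM + ext4 workflow. The VG/LV names are made up for illustration; the real commands need root and free space in the volume group:

```shell
# Hypothetical volume group / logical volume names -- substitute your own.
VG=vg_data
LV=media

# Dry run: print the two commands that do an online grow.
# 1) Extend the logical volume by 500G:
echo "lvextend -L +500G /dev/${VG}/${LV}"
# 2) Grow the ext4 filesystem to fill the LV (ext4 supports online resize):
echo "resize2fs /dev/${VG}/${LV}"
```

Because ext4 can resize while mounted, the volume never has to come offline for the grow.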
root@dekabutsu: 01:24 AM :~# df -H /data
Filesystem Size Used Avail Use% Mounted on
/dev/sdd1 36T 22T 14T 62% /data
root@dekabutsu: 01:24 AM :~# df -h /data
Filesystem Size Used Avail Use% Mounted on
/dev/sdd1 33T 20T 13T 62% /data
root@dekabutsu: 01:24 AM :~#
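Side note on the two readouts above: `df -H` uses powers of 1000 while `df -h` uses powers of 1024, so the same array shows as 36T or 33T. Quick check of the conversion:

```shell
# 36 TB (decimal, as drives are marketed and as df -H reports),
# expressed in TiB (the binary units df -h reports):
awk 'BEGIN { printf "%.1f\n", 36e12 / (1024^4) }'
# prints 32.7 (which df -h rounds up to 33T)
```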
New Mythbox
Total Storage: 16 TB
Norco RPC-4020 case
Intel Xeon X3440 2.53 GHz
4GB DDR3-1066 memory
NVIDIA GeForce 8600 GT
Areca ARC-1220 RAID controller
8x 2TB Hitachi 7200 RPM (RAID 6)
Back of the machine showing tuner cards/vid:
Top of the machine with cover off (getting ready to replace old box)
Old and new box in the same shot:
This is the box I use for all my TV. It has 2 GenPix DVB-S tuners for HD sat, 1 Nexus DVB-S for non-HD sat, and 2 pcHDTV cards for OTA HD. It can record 5 things at once and gets non-recompressed sat streams from Dish Network.
Here is mine: 15TB running Windows Server 2008, dual core, 4GB RAM
Temps of the disks?
How did you mount so many close together like that?
Yay I can post here finally!
Antec SX-1040 Case
Antec TruePower Quattro 850W
AMD Athlon 64 X2 6400+
1x1TB Hitachi 7200rpm for OS (Server 2008 R2) and mirrored storage
Areca ARC-1880i RAID Controller w/ BBU
8x2TB Hitachi 7K2000 in RAID 6
Pic is kinda crappy because it's from a phone:
Did you put a fan in front of those two hard drive banks to keep them cool? They really don't have much airflow around them.
Really sweet build in a tight space. I'll give you props for the form factor. Think what you could do with 3TB drives!
Question: what kind of temps are you seeing on the hard drives? Doesn't look like there is much room for airflow. I'd expect that the drives in the middle are pretty hot?
Yeah, there is about 5mm between drives, so the ones in the middle are getting warm to the touch (haven't spotted it with my IR keychain gizmo, so don't have an exact temp). It's not too bad right now as my house is usually at about ~55ºF during the winter, but I'll definitely have to put a fan in for the summer. I've got a Scythe 12x120mm fan that will be just inside the top cover over all of the drives blowing down, hopefully that will solve any issues.
Problem with short legs?
So do I. Damn near impossible finding a chair that fits; they are all for long-legged people.
The drives will report their own temps. All you need is a program that can read them. There are dozens of free apps out there. It's more accurate than your "IR keychain gizmo" because it's reporting the temperature inside the drive, where it matters.
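On Linux the same in-drive sensor is exposed as SMART attribute 194; here's a sketch of pulling it out of `smartctl -A` style output. The sample line is pasted in so this runs without a real drive; normally you'd pipe from `smartctl -A /dev/sda` (smartmontools):

```shell
# One attribute row in the format smartctl -A prints (194 = drive temperature).
SAMPLE='194 Temperature_Celsius 0x0002 157 157 000 Old_age Always - 38 (Min/Max 21/45)'

# Column 10 of the attribute row is the raw value, i.e. degrees C.
echo "$SAMPLE" | awk '$1 == 194 { print $10 " C" }'
# prints: 38 C
```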
HW Monitor is giving me 30ºC for the front and back drives and 35ºC for the middle drives. Room Ambient is 18ºC, Case Ambient is 24ºC and proc (i3-530 @ 1.0v) is 39ºC when playing back 1080p .mkvs from the middle drives.
edit:
just did a couple hours of burn-in -- CPU topped out at 1.2V/50ºC, HDDs spread from 41-44ºC... I'll keep running burn-ins to see if it goes any higher
Given the compact case and limited airflow, that's not too bad at all! High side of acceptable at 40+, but not too bad.
It'll be interesting to see how it holds up this summer when ambient is a bit higher (18ºC? I take it you like sweaters, or perhaps a Snuggie?).
21.12TB in a single chassis
Without pictures you haven't posted your build yet...

Now that the build is complete, I'm happy to post.
Total Storage: 27.1 TB
Case: Norco RPC-4224 (new power button design)
PSU: Corsair AX750 750 watt (80 Plus Gold, single rail)
Motherboard: Asus P6T WS Professional
CPU: Xeon E5620
RAM: 12GB (3x4GB) Mushkin Enhanced Proline ECC PC3 10600
GPU: EVGA EN8400GS Silent
Controller: Areca ARC-1880ix-24 (4GB ECC cache)
OS: Windows Server 2008 R2 Datacenter
System HDD: 2x 300GB Hitachi Ultrastar 15K RPM (SAS Hardware RAID1 on the motherboard)
RAID HDD: 4x 2TB Samsung F4, 4x WD 2TB EARS, 5x 1.5TB Seagate, 1x 2TB hot spare
BBU: APC 850 watt 1500VA UPS, Areca BBU eventually
I use the server primarily for two purposes: as a storage server, and as a Hyper-V server for running test and development virtual machines (TFS & test images). Because of the cost, my only backups currently are the RAID arrays themselves, but I intend to duplicate the box and keep it at another house when money allows.
I have all eight 2TB drives in one RAID 6 array, and the 1.5TB drives in a RAID 6 as well. In the near future, I intend to purchase 4 more 2TB drives, expand the array to 12 drives, and leave the last two rows (three when the 1.5s are upgraded) for a RAID 6 array full of 3TB drives, unless the smart people here tell me that is a bad idea. When complete, that'll be 60TB in this box.
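For anyone checking the math on that expansion plan: RAID 6 spends two drives' worth of capacity on parity, so usable space is (drives - 2) x drive size. A quick sanity check in raw decimal TB, ignoring filesystem overhead:

```shell
# RAID 6 usable capacity: (n - 2) * drive_size
echo "8x2TB  RAID 6: $(( (8  - 2) * 2 )) TB usable"
echo "12x2TB RAID 6: $(( (12 - 2) * 2 )) TB usable"
echo "12x3TB RAID 6: $(( (12 - 2) * 3 )) TB usable"
```

So the planned 24 bays (12x 2TB plus 12x 3TB) come to 60TB raw, or about 50TB usable across the two arrays.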
Also, I'm torn on my motherboard selection, questioning whether there was a better option I should have chosen. Thoughts?
Without pictures you haven't posted your build yet...
On the MB question: the P6T WS Pro has two drawbacks. It uses crappy Realtek LAN chips and it lacks remote management (IPMI). The LAN problem can be overcome with add-on cards that use Intel-based server LAN chips. After running a server like this for a while you'll really wish you had IPMI (or maybe not, but only because you don't know what you are missing; once you've run a server *with* IPMI you'll never want to be without it again).
Assuming you want to stay with an X58-based build, SuperMicro's X8STE would have been about right here. Intel LAN and IPMI both.
No worries in the end: you've put together a fine build, and if these are the only nits to be picked it's a pretty good day.
Interested how those Samsungs with advanced format are going to be on hardware RAID...
I am really pro JFS myself. I can't trust ext4 for > 16 TiB file-systems due to what I have heard on the mailing lists and the only other 'stable' options on linux are XFS/JFS (although ZFS might be a real option very soon). XFS has good performance but due to past issues I just can't trust it for data reliability. Even on my 36 TB file-system JFS takes only about 15 minutes for an fsck and so far its been super reliable and I haven't lost any data on about 3-4 systems using it with > 2 TiB file-systems. Only one system is > 16 TiB and actually using more than 16 TiB on it as well:
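The 16 TiB figure isn't folklore, by the way: before ext4's 64-bit feature landed, block numbers were 32-bit, and with the standard 4 KiB block size that caps a filesystem at 2^32 blocks:

```shell
# 2^32 blocks x 4096 bytes per block, expressed in TiB:
echo "$(( (1 << 32) * 4096 / (1 << 40) )) TiB"
# prints: 16 TiB
```

Going past that needed both the on-disk 64-bit support and e2fsprogs tooling that could actually create and fsck such volumes, which is what the mailing-list concerns were about.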
Digital TV antenna I believe.
My tiny 26TB file server!!
This file server is for HD and Blu-ray movies.
Mainboard: ZOTAC H55-ITX WiFi (USB 3.0)
CPU: Core i3-530
RAM: G.Skill PI 4GB 1600MHz CAS 8
HDD: 13x Seagate LP 2TB
PSU: Seasonic X Series 650W
SATA controller: Supermicro AOC-SASLP-MV8