Humble Request for Advice: Cheapest 60TB Redundant Array Needed

Hi guys.
Been scratching my head trying to figure this out and wanted to put it to some experts.

I run the media department of a medium-sized video production company. We have an expensive 60TB Facilis Terrablock server that 20 editors can stream video from at a time. We want to offload this to a cheap 60TB RAID array that can sit on the network for one user at a time to browse.

We already have extra Mac Pro towers available. What do you suggest?



I am looking at:
Raw storage connected to a Mac Pro via PCI-E or a SAN connection, shared out over AFP and gigabit from that Mac Pro.
5 cheap consumer RAID 5 enclosures connected to eSATA cards in a Mac Pro, like these:
http://eshop.macsales.com/item/Other World Computing/MEQX2T12.0S/

$4,500 ($900 x 5) plus a $100 eSATA card for 60 terabytes of storage is pretty cheap and has some degree of RAID 5 redundancy; obviously we will keep other backups so it doesn't have to be completely foolproof. $76/terabyte

Or something like:
http://usa.chenbro.com/corporatesite/products_detail.php?sku=44
http://www.seaboom.com/scripts/product.asp?PRDCODE=1676-RM51924M2-R1350G&REFID=FR
$1,800 for 24 3.5" bays, filled up with 3TB drives for around $150 or so
http://www.newegg.com/Product/Product.aspx?Item=N82E16822136874
another $3600
Comes out to $5,400 for 72TB of raw storage (no RAID), then software RAID from there.
$75/terabyte

Or:
http://www.neweggbusiness.com/Produ...la-_-NA-_-NA&gclid=CLLi9tLb1LMCFYYWMgod6xoAKg
Sans Digital 8-Bay USB 3.0 / eSATA Hardware RAID5 Tower Storage Enclosure w/ 6G PCIe $440 with RAID 5, could get 3 of these?

But I don't know if I need to keep things in RAID 5 chunks or in one big RAID 6 or RAID 10 array or what, but if you need semi-redundant storage of 60 terabytes as cheap as possible, like $4k-7k, what do you suggest?

Thanks very much for your time!!
-Chris
 
Unless you're running ZFS or a filesystem which is aware of and can control for timeouts on the block level, you are definitely not going to want to buy Western Digital green drives if you plan to use them in any RAID array with parity. You should also avoid using RAID 5 with that much storage.


72TB (total raw, 24 x 3TB)
59TB = 2.7 x 22 (one large 24-drive RAID 6 array, 2 drives used for parity)
54TB = 2.7 x 20 (24 drives split into two 12-disc RAID 6 arrays, 4 drives used for parity)
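
If you want to play with those figures, here's a minimal sketch (assuming ~2.7TB of formatted capacity per 3TB drive, as in the numbers above):

Code:
# Usable capacity for the RAID 6 layouts above.
# Assumes ~2.7TB formatted per 3TB drive (approximate, not a spec).
DRIVE_TB = 2.7

def raid6_usable_tb(drives_per_array, arrays=1):
    # RAID 6 gives up two drives' worth of space to parity in each array
    return (drives_per_array - 2) * DRIVE_TB * arrays

print(raid6_usable_tb(24))     # one 24-drive RAID 6  -> ~59.4 TB
print(raid6_usable_tb(12, 2))  # two 12-drive RAID 6  -> ~54.0 TB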



HDD (Suggested)
---------------------------------------------------------------------------------
TOSHIBA DT01ACA300 3TB 7200 RPM 64MB Cache SATA 6.0Gb/s 3.5" Internal Hard Drive -Bare Drive

$3,599.76 = $149.99 x 24 - http://www.newegg.com/Product/Product.aspx?Item=N82E16822149408
$?,???.?? = $???.?? x 24 - http://www.neweggbusiness.com/Product/Product.aspx?Item=N82E16822149408
$3,429.12 = $142.88 x 24 - http://www.amazon.com/gp/offer-list...new/177-3242056-1222404?ie=UTF8&condition=new



HBA SAS/SATA RAID CONTROLLER
---------------------------------------------------------------------------------
ARECA ARC-1882IX-24 PCI-Express 2.0 x8 SATA / SAS 28 Ports 6Gb/s SAS/SATA RAID Adapter
Note: Kaleidonet.com is a direct subsidiary and retail outlet for Areca in North America

ARC-1882IX-24-1GB
$1219.99 - http://www.newegg.com/Product/Product.aspx?Item=N82E16816151110
$1219.99 - http://www.neweggbusiness.com/Product/Product.aspx?Item=N82E16816151110
$1179.00 - http://kaleidonet.com/arc1882ix24.html

ARC-1882IX-24-2GB
$1249.00 - http://kaleidonet.com/arc1882ix242gb.html

ARC-1882IX-24-4GB
$1309.00 - http://kaleidonet.com/arc1882ix244gb.html



SERVER CHASSIS
---------------------------------------------------------------------------------
NORCO RPC-4224 4U Rackmount Server Case with 24 Hot-Swappable SATA/SAS Drive Bays
- Speak to Odditory about this case; he may be aware of some quality control issues with it.

$399.99 - http://www.newegg.com/Product/Product.aspx?Item=N82E16811219038
$399.99 - http://www.neweggbusiness.com/Product/Product.aspx?Item=N82E16811219038
 
I run the media department of a medium-sized video production company. We have an expensive 60TB Facilis Terrablock server that 20 editors can stream video from at a time. We want to offload this to a cheap 60TB RAID array that can sit on the network for one user at a time to browse.

This doesn't really make a lot of sense. You have an expensive storage server capable of supporting 20 users and you want to duplicate all of its content on another server so that you can support _one_ additional user?
 
Thanks 1010, very helpful advice. Will that be more redundant than using the OWC drives? It would clearly be faster...

JJ: The idea is that we put our TV show video content on the "online" expensive video server for a few months to finish the project, then we move it all over to a cheap "nearline" data array that is still accessible but only for reference. The expensive video server gets erased and filled up with a new project.
 
Also, why would you do two 12-drive arrays instead of one 24-drive array? Is it safer?
What do you suggest?

"
72TB (total raw, 24 x 3TB)
59TB = 2.7 x 22 (one large 24-drive RAID 6 array, 2 drives used for parity)
54TB = 2.7 x 20 (24 drives split into two 12-disc RAID 6 arrays, 4 drives used for parity)"
 

The larger the discs or the array, the higher the statistical probability that a URE will be observed. You should also consider that drive capacities continue to increase while physical read/write speed gains are marginal with each new generation of mechanical platter HDD, so array rebuilds take longer. While it's unlikely that 3 drives will fail or drop out of an array in unison or over the course of a rebuild, those probabilities increase as you add drives to an array. Choosing between two 12x 3TB arrays and one 24x 3TB RAID 6 array mostly depends on how paranoid you are about protecting your data. You should also be thinking about how you plan to back up your data permanently (RAID is not a backup).
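
If you want to put rough numbers on that URE risk, here's a back-of-the-envelope sketch. The 1-error-per-1e14-bits figure is a typical consumer-drive spec sheet value, not a measurement, and note that with RAID 6 a single URE hit during a one-disk rebuild is still covered by the second parity:

Code:
# Rough odds of hitting at least one URE while reading every surviving drive
# during a rebuild. URE_PER_BIT is an assumed consumer-drive spec value.
URE_PER_BIT = 1e-14

def p_ure_during_rebuild(surviving_drives, drive_tb=3.0):
    bits_read = surviving_drives * drive_tb * 1e12 * 8
    return 1 - (1 - URE_PER_BIT) ** bits_read

print(p_ure_during_rebuild(23))  # 24-drive array rebuild -> roughly 0.996
print(p_ure_during_rebuild(11))  # 12-drive array rebuild -> roughly 0.93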

mwroobel and odditory or others would offer some good advice on this subject.
 
But I don't know if I need to keep things in RAID 5 chunks or in one big RAID 6 or RAID 10 array or what, but if you need semi-redundant storage of 60 terabytes as cheap as possible, like $4k-7k, what do you suggest?

Thanks very much for your time!!
-Chris

RAID 10?! for 60TB worth of archive storage? No offence but you sound like you're just stabbing in the dark...

With your budget you can't really do anything too fancy, but it doesn't sound like performance is going to matter if it's more of an archive than a file server.
From what you're saying the data doesn't sound very critical (correct me if I'm wrong?). Would the company be up Shit Creek if the data was lost?

In my opinion just build something like a single socket Supermicro board with the lowest-power Xeon available, 8gigs-o-ECC-RAM and enough HBAs to support something like 40 2TB SATA drives in 4x10-drive RAID-Z2 vdevs. Oh and a chassis or two to fit all those drives.
If you just use consumer drives they alone will come to about $3.5k, but you could spend twice that or more if you opt for enterprise class drives. The value of the data to the company will dictate the money you should invest to protect it.
 
Having thought about this, you might be better off with fewer larger drives (e.g. 30 3TB drives in 3x10-drive RAIDZ2 vdevs) because although the upfront cost of the drives will be a bit more, you won't have to spend so much on a larger chassis and the electricity to power them all. 3TB drives would probably be the $/GB sweet spot at the moment.
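
Rough numbers for those two layouts, using the ballpark drive prices floating around this thread (~$3.5k for 40 consumer 2TB drives, ~$150 per 3TB drive); treat them as assumptions rather than quotes:

Code:
# Usable space per layout: each RAIDZ2 vdev loses two drives to parity.
def raidz2_usable_tb(vdevs, drives_per_vdev, drive_tb):
    return vdevs * (drives_per_vdev - 2) * drive_tb

layouts = [
    ("40x 2TB in 4x 10-drive RAIDZ2", raidz2_usable_tb(4, 10, 2.0), 3500),
    ("30x 3TB in 3x 10-drive RAIDZ2", raidz2_usable_tb(3, 10, 3.0), 30 * 150),
]
for name, usable_tb, drive_cost in layouts:
    print(f"{name}: ~{usable_tb:.0f}TB usable, ~${drive_cost} in drives")

# 64TB vs 72TB usable; the 3TB build costs more in drives but needs fewer
# bays and less power, which is the trade-off described above.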
 
Hey guys,

Really appreciate everyone's input. I have gotten a little confused at this point and would love some patient clarification. Everyone is mainly suggesting ordering separate components and building out a chassis myself, which I expected for bang-for-buck; sounds great. My conception of what that looks like is this:

1. Buy Chassis (big empty box with fans): $400-ish, eg. NORCO RPC-4224 4U Rackmount Server Case

2. Fill the Chassis with 24 sata drives, eg. TOSHIBA DT01ACA300 3TB 7200 RPM 64MB Cache SATA $3500ish

3. Buy Mac Pro tower (already have one) $1500ish, used.

4. Install HBA SAS/SATA Raid Controller into PCI-E slot in Mac Pro, eg. ARC-1882IX-24-1GB $1300ish

5. Run SATA cables from all 24 hard drives in the chassis over to the HBA SATA card, so they show up on the tower and can be formatted and a RAID array created.

6. Format/create raid array using some form of software interface in OS X in the Mac Pro.

7. Fill with data, and share the large RAID array on my network via AFP and gigabit LAN.

8. Drink root beer, feel smart.

Total hardware cost, roughly $5200

I have a suspicion that I am missing something here, or several things. Can I avoid installing a motherboard and SATA card inside the chassis? How do I connect the SATA drives to the HBA card in my tower? etc.

Would love your input! I am a fairly technical guy, but this particular build is outside my experience. I'm sure I'll be able to figure out what you're talking about but need a bit of clarification. Thanks very much for sharing your time and expertise.
-Chris
 
I have built a number of 60T hosts.
A long time ago I also tried to do what you described (running cables out of one chassis into another). Someone may have more info, but basically the SATA drives were having problems/resetting when I ran long cables from one chassis to the other.

What you do:
1. NORCO CASE
2. 24 x 3TB = 60TB formatted RAID-6
3. (I like 3ware) 9750-24i4e (or your card of choice)
4. 6 multi-lane connectors
5. Plug into motherboard of your choice
6. Share via network.

I do not recommend your idea of running cables between chassis.
 
So this means you build everything inside the case, with no Mac Pro involved?
And you have to buy RAM, a motherboard, a processor, etc., basically building a whole computer?
And you have to run some kind of OS on it? Linux, I assume?

I would much prefer to have it formatted as native Apple HFS and shared via AFP than to get into Linux file systems, server protocols, etc. I intend to run MetaLAN server software on the Mac Pro as well for project sharing. I suppose the storage could still be made accessible via AFP on the network using Linux, though. Is there a safe way to do everything in OS X?
 
To run from your Mac Pro setup a chassis as a DAS (direct attached storage). This would work:

1. Supermicro 24 bay chassis, 846BE16-R1K28B or 45 bay 847E16-R1K28JBOD (E16 stands for single port 6Gb SAS2 expander)
2. if you get the 24 bay version you need the parts to make it a JBOD chassis: CSE-PTJBOD-CB2 and CBL-0351L
3. An external SAS cable to connect the Mac Pro to the drive chassis: CBL-0166L
4. Raid Card to install in Mac Pro: Areca ARC-1882X (no expander needed on card since it is built into enclosure)


This only requires a single cable from the Mac Pro to the drive chassis and the cable is actually designed for this use. We do this in our office, but from a windows machine not a Mac.
 
Dear Chris,

1. You need to identify EXACTLY which item has HIGHEST priority.
2. There is NO perfect scenario for you. Every option has particular circumstances that you need to deal with.
3. When you have 500GB of data, it is easier. When you have 60TB, it takes somewhat more effort currently. You need to cross-examine your internal preferences.

I understand this might need an example, so I'll list one as an observation. Remember there are many perspectives; this is an example only.

1. Buy an eSATA add-on card + an 8-bay eSATA external enclosure. 8 x 3TB gives approximately 18TB usable in RAID 6. The whole thing is one bundle.
2. In this example, it comes and goes as one bundle (except the eSATA card, which obviously stays inside the host).
3. Consistent with your initial expectation: once you finish a project, you move it to secondary storage and reload the primary storage.
4. When even the secondary copy is no longer needed, you disconnect the entire 8-bay and store it as an archive (maybe with the occasional spin-up by the maintenance crew to check it). Or you can throw away the old data if an archive is not needed at all.
4.1 Buy more bundles if you need more storage (see the quick sketch after this list).
4.2 If 8 bays are too much under this example, then go for a 4-bay enclosure.
4.3 If you still want an archive but are not willing to dedicate an entire disk enclosure, then you can use high-capacity LTO tapes as the final-stage archive medium.
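
For reference, a quick sketch of how many of those bundles it would take to cover the ~60TB discussed in this thread (assuming ~18TB usable per 8-bay RAID 6 bundle, as above):

Code:
# How many 8-bay RAID 6 bundles to reach ~60TB of nearline space?
import math

BUNDLE_USABLE_TB = (8 - 2) * 3.0   # 8 x 3TB minus two parity drives -> ~18TB
TARGET_TB = 60

bundles = math.ceil(TARGET_TB / BUNDLE_USABLE_TB)
print(bundles, bundles * BUNDLE_USABLE_TB)   # 4 bundles -> ~72TB usable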

To address counter-views, which are equally valid: some users have different circumstances, so this is not likely to work for them. For example, maybe they want all 60TB completely online, every minute, every second.
 
Thanks very much for your help, tschopp.
So if I understand correctly:

1. Supermicro 24 bay chassis, 846BE16-R1K28B, shows as $1310.95 from atacom.com
2. CSE-PTJBOD-CB2 $40 and CBL-0351L $48 at atacom.com
3. CBL-0166L $51 at atacom.com
4. Areca ARC-1882X $790 at newegg.com
(5.) 24 3TB drives, eg. TOSHIBA DT01ACA300 3TB $3,599.76
(6.) Mac Pro $0 (I have one)
Total cost: ~$5839.71
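
Quick sanity check on that total, using the prices as listed above:

Code:
# Sum of the quoted prices for the DAS parts list above.
parts = {
    "Supermicro 846BE16-R1K28B chassis": 1310.95,
    "CSE-PTJBOD-CB2":                      40.00,
    "CBL-0351L":                           48.00,
    "CBL-0166L":                           51.00,
    "Areca ARC-1882X":                    790.00,
    "24x Toshiba DT01ACA300 3TB":        3599.76,
}
print(f"${sum(parts.values()):,.2f}")  # -> $5,839.71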

Then I can create two 12-drive RAID(6?) arrays using OS X?
And I can format both the big 30TB arrays as two giant HFS+ Volumes?
And they can be formatted/AFP-accessed just like a 4-bay RAID drive with eSATA, like an OWC Mercury Elite Pro or any other redundant hard drive?

I probably need to understand a little more, but this "DAS" solution hard-lined into a Mac Pro tower is probably the best way to go for us, even though it appears to be a bit more expensive.
 
The RAID card has a web interface for creating the array. First you group the drives into raid set(s) - pick the drives for each array - then on each raid set you create volume set(s) - define the type and size of the volume(s) you want on each raid set, i.e. RAID 6, etc.

You could do two 12-drive RAID 6 arrays; I would probably just do a single 24-drive RAID 6, or a 23-drive RAID 6 with a hot spare.

Set the array to do a consistency check every 4 weeks. This looks for the dreaded URE mentioned above and fixes it before it becomes a problem.

The RAID card presents each volume to the OS as a SCSI drive. So if you went with a single large array, the OS sees a 66TB SCSI drive. You then format it like any other drive. I am a Windows person, not Mac, so you might look into whether there are any Mac-specific issues with a 60TB+ drive. Windows has no trouble formatting that, and I would guess Mac would be OK also.

I assume you are putting this in a server room. Supermicro chassis can be really loud, though they can be made quiet if need be. There are two main issues: the chassis fans and the PSU fans. The chassis fans can be quieted with a PWM fan controller; the PSU is best quieted by plugging into a Supermicro MB. Set up as a JBOD with no fan control, this will be loud.
 
If I use a Supermicro motherboard in the box, will that still allow me to access the 66TB volume from my OS and format it as NTFS as normal? Or will it require using a Linux filesystem?
Also, I am looking at using Hitachi 7K3000 drives because they are cheaper; is there any reason I should pay the higher price for the Toshiba DT01ACA300 drives mentioned above?
 
For all intents and purposes, the Toshibas are basically rebadged HGST (Hitachi) drives. Both should work great for what you need, and you can go with whichever you can get cheaper.
 
Why not use fewer 4 TB disks instead of more 3 TB disks? Assuming 3 TB and 4 TB disks are equally reliable, each additional disk reduces the expected uptime of the overall array between disk failures.
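
A toy illustration of that scaling, using an arbitrary per-drive MTBF just to show the 1/N effect (not a spec for any drive mentioned here):

Code:
# If drives fail independently at the same rate, the expected time until the
# first failure anywhere in the array scales as 1/N.
DRIVE_MTBF_HOURS = 1_000_000  # placeholder figure for illustration only

def hours_to_first_failure(n_drives):
    return DRIVE_MTBF_HOURS / n_drives

print(hours_to_first_failure(24))  # 24x 3TB (72TB raw) -> ~41,667 h
print(hours_to_first_failure(18))  # 18x 4TB (72TB raw) -> ~55,556 h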
 
To get around the problem of extended RAID rebuild times, consider RAID 50. Split your drives into groups of 3, making a RAID 5 of each group. Then stripe across the groups. That way, the effect of the failure of any one drive will be minimised, with only 3 drives involved in the rebuild. You could also have 3 controllers, with each controller controlling 1 drive in each set, so you could still function if a controller failed.
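
Just the arithmetic of that suggestion for 24x 3TB drives, compared against a single RAID 6 (a sketch, not a recommendation either way):

Code:
# RAID 50 from 3-drive RAID 5 groups vs. one big RAID 6, 24x 3TB drives.
DRIVES, DRIVE_TB = 24, 3.0

raid50_usable = (DRIVES // 3) * (3 - 1) * DRIVE_TB  # 8 groups, 1 parity drive each -> 48.0 TB
raid50_rebuild_drives = 3                           # only the affected group rebuilds

raid6_usable = (DRIVES - 2) * DRIVE_TB              # 66.0 TB
raid6_rebuild_drives = DRIVES                       # every member is read during a rebuild

print(raid50_usable, raid50_rebuild_drives)  # shorter rebuilds, but a third of raw space goes to parity
print(raid6_usable, raid6_rebuild_drives)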
 