best way of going about a terabyte redundant fileserver

cyr0n_k0r

Don't ask why or what's on it. That is of no concern... no, it's not porn. :D

anyway.

Here in about 5-6 months I will be upgrading my now-full fileserver, and I need some advice on the best way of building my new one.

I estimate my storage needs to be upwards of 400-500 gigs by the time I am ready to build it, along with room to expand.

My question is: what will be available in 6 months that will allow me to build a terabyte fileserver as cheaply as possible?

(The array will be in RAID 5 with minimum storage capacity set at 1,000 gigs.)

I will not be going the SCSI route, as that would be super expensive. I would like to keep the disk array and RAID equipment combined at under $1,000. Think I can do that?
 
I don't think you can meet that target budget now, but in 5-6 months you may be able to squeeze it in. Let me suggest to you that "squeezing it in" is not something I'd recommend. If this data is important to you, I suggest looking very carefully at the RAID controllers available to you and reading about the experiences of others. The Storage Review forums may be of use. Beyond that, I recommend redundant power supplies or at least a very robust server grade power supply. The supporting hardware should be from trusted manufacturers, non-overclocked, etc. etc.

If I were to undertake a project like this, I would employ a server type case, probably rackmount, with 2n + 1 hotswap bays, where n is the number of drives needed to hold the desired capacity. I would use a hardware Serial ATA controller with RAID 1 and hot spare support. I would use a redundant power supply with high +12 V amperage capacities, and total power rating in the high 400 W range at minimum. The rest would fall into place based on the available network and on processing needs.
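
To put rough numbers on that sizing rule, here's a quick sketch in Python (the drive size is just an illustrative assumption):

[code]
# Sketch of the "2n + 1 hotswap bays" rule of thumb above, where n is the
# number of drives needed to hold the desired capacity (n data drives,
# n mirrors for RAID 1, plus one bay for a hot spare).
import math

target_gb = 1000    # desired usable capacity
drive_gb = 250      # assumed drive size, for illustration only

n = math.ceil(target_gb / drive_gb)   # drives needed for capacity alone
bays = 2 * n + 1
print(f"n = {n} drives -> {bays} hotswap bays")   # n = 4 drives -> 9 bays
[/code]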
 
Originally posted by cyr0n_k0r
Don't ask why or what's on it. That is of no concern... no, it's not porn. :D

anyway.

Here in about 5-6 months I will be upgrading my now-full fileserver, and I need some advice on the best way of building my new one.

I estimate my storage needs to be upwards of 400-500 gigs by the time I am ready to build it, along with room to expand.

My question is: what will be available in 6 months that will allow me to build a terabyte fileserver as cheaply as possible?

(The array will be in RAID 5 with minimum storage capacity set at 1,000 gigs.)

I will not be going the SCSI route, as that would be super expensive. I would like to keep the disk array and RAID equipment combined at under $1,000. Think I can do that?

1TB, RAID 5, non-SCSI? Not going to happen, unless you've got about 5 IDE controllers. 1TB RAID 5, 250GB IDE drives = 9 drives, at a minimum, with no spares (and I recommend spares if you're running RAID 5, particularly on IDE, since you can't hot-swap).
 
Originally posted by skritch
1TB, RAID 5, non-SCSI? Not going to happen, unless you've got about 5 IDE controllers. 1TB RAID 5, 250GB IDE drives = 9 drives, at a minimum, with no spares (and I recommend spares if you're running RAID 5, particularly on IDE, since you can't hot-swap).
There are several controllers available that have more than one or two channels. And remember, we have Serial ATA, which offers true hotswapping abilities at (almost) IDE prices. A 9-drive RAID 5 array? That would offer 2 TB of usable space: (250 GB x 9 drives) - 250 GB (for parity) = 2,000 GB. Perhaps you meant 5 drives? Here's a refresher if you're interested:

http://www.storagereview.com/guide2000/ref/hdd/perf/raid/levels/singleLevel5.html
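
That arithmetic generalizes to any drive count; here's the RAID 5 capacity math as a quick sketch:

[code]
# RAID 5 usable capacity: one drive's worth of space in the array goes
# to (distributed) parity, so usable = (drives - 1) * drive_size.
def raid5_usable_gb(drives: int, drive_gb: int) -> int:
    return (drives - 1) * drive_gb

print(raid5_usable_gb(9, 250))   # 2000 GB -- the 9-drive example above
print(raid5_usable_gb(5, 250))   # 1000 GB -- the 1 TB target with 5 drives
[/code]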
 
RAID 1? Pfff.. I said I am doing RAID 5!

RAID 5 capacity is all the drives minus one.

I was also hoping that the 300 GB drives would be less expensive by then. Then I would only need 5 of them for a good array.

Speaking of extras, can I add extra drives to a RAID 5 on top of the ones used for striping?

I was thinking a Tekram or Promise for the controller.
 
Originally posted by xonik
There are several controllers available that have more than one or two channels. And remember, we have Serial ATA, which offers true hotswapping abilities at (almost) IDE prices. A 9-drive RAID 5 array? That would offer 2 TB of usable space: (250 GB x 9 drives) - 250 GB (for parity) = 2,000 GB. Perhaps you meant 5 drives? Here's a refresher if you're interested:

http://www.storagereview.com/guide2000/ref/hdd/perf/raid/levels/singleLevel5.html


You're right, sorry. For some reason I was thinking striping + mirroring + parity. Don't know where I got that from.

And there's a very nice controller (I didn't mention it because it's in my office at work, and I'm not) that provides two IDE channels with true hot-swap capability, and even has Linux-compatible drivers. It's cheap, too ($80). If I were going to build an IDE RAID, I'd use a few of those.

I won't be back in my office for a few weeks (knee surgery), but when I am, I'll try to remember to post it here.
 
Originally posted by xonik
I don't think you can meet that target budget now, but in 5-6 months you may be able to squeeze it in. Let me suggest to you that "squeezing it in" is not something I'd recommend. If this data is important to you, I suggest looking very carefully at the RAID controllers available to you and reading about the experiences of others. The Storage Review forums may be of use. Beyond that, I recommend redundant power supplies or at least a very robust server grade power supply. The supporting hardware should be from trusted manufacturers, non-overclocked, etc. etc.

If I were to undertake a project like this, I would employ a server type case, probably rackmount, with 2n + 1 hotswap bays, where n is the number of drives needed to hold the desired capacity. I would use a hardware Serial ATA controller with RAID 1 and hot spare support. I would use a redundant power supply with high +12 V amperage capacities, and total power rating in the high 400 W range at minimum. The rest would fall into place based on the available network and on processing needs.

If you're going to build your own: 3ware Escalade.

N+1 power is rarely cheap; I've had real good luck with my Zippy MR3-6460P, which often comes bundled with rackmount cases. Got both the case and PSU off eBay for $450.

But 1 TB NAS appliances are running a lot more than $1,000, OEM or build-your-own.

I'd say you're closer to $2,000-$3,000 depending on what components you already have and how lucky you are. I mean, 250 GB SATA HDDs are at $200+ each x 5; there is your $1,000 right there.
 
Originally posted by cyr0n_k0r
RAID 1? Pfff.. I said I am doing RAID 5!
Fine, do whatever you want. If I wanted to store 1 TB+, I would make sure that the array could stand multiple drive failures. With RAID 5 you can withstand a single drive failure without consequence, but what if multiple drives fail (due to a power spike/failure)? The entire array is compromised. I don't think I could recommend RAID 5 to someone with that much data.
 
Originally posted by cyr0n_k0r
Don't ask why or what's on it. That is of no concern... no, it's not porn. :D

anyway.

Here in about 5-6 months I will be upgrading my now-full fileserver, and I need some advice on the best way of building my new one.

I estimate my storage needs to be upwards of 400-500 gigs by the time I am ready to build it, along with room to expand.

My question is: what will be available in 6 months that will allow me to build a terabyte fileserver as cheaply as possible?

(The array will be in RAID 5 with minimum storage capacity set at 1,000 gigs.)

I will not be going the SCSI route, as that would be super expensive. I would like to keep the disk array and RAID equipment combined at under $1,000. Think I can do that?

I'd say it would be fairly possible, not including the cost of the PC itself. Figure about $0.60 a gig depending on the drives. So, $720 in hard drive space (1,200 GB: 6 x 200 GB drives, 5 for data + 1 for parity). That leaves $280 for the controller. I got a 3ware 8-port RAID controller for $350 for the RAID array at work, so you'd get fairly close to your price point.
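
A quick sketch of that estimate (the $0.60/GB figure is this poster's ballpark, not a quoted price):

[code]
# Back-of-the-envelope drive cost for a 6-drive RAID 5 (5 data + 1 parity).
cost_per_gb = 0.60                     # assumed street price, $/GB
drives, drive_gb = 6, 200

raw_gb = drives * drive_gb             # 1200 GB raw
usable_gb = (drives - 1) * drive_gb    # 1000 GB after parity
drive_cost = raw_gb * cost_per_gb      # $720, leaving $280 of the $1,000

print(f"{raw_gb} GB raw / {usable_gb} GB usable -> ${drive_cost:.0f} in drives")
[/code]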
 
Originally posted by freecableguy
Who says you can't hot swap IDE drives???? YES YOU CAN.

Sure, under Win98 :p
Or with sideband technology and a supported RAID card, but it's not the same as SCSI, SATA, or FireWire hotswap without those.

Hotswap Issues
 
Originally posted by Ice Czar
I'd say you're closer to $2,000-$3,000 depending on what components you already have and how lucky you are. I mean, 250 GB SATA HDDs are at $200+ each x 5; there is your $1,000 right there.

Can I expect SATA 250 GB - 300 GB drives to be cheaper in 6 months?

Are SATA RAID cards that reliable yet? I mean, the tech is still pretty new... it hasn't been around nearly as long as SCSI or IDE, and those RAID cards have had years to improve on their models.
 
Originally posted by cyr0n_k0r
Can I expect SATA 250 GB - 300 GB drives to be cheaper in 6 months?

I've never seen a hard drive that wasn't cheaper after 6 months. Of course, if this is as big as drives will ever get, then the price will stay the same, but that is not going to happen.
 
Originally posted by cyr0n_k0r


Are SATA RAID cards that reliable yet? I mean, the tech is still pretty new... it hasn't been around nearly as long as SCSI or IDE, and those RAID cards have had years to improve on their models.

It's just an interface, not a RAID engine or buffer; ATA RAID cards employ SCSI drivers to interface with the OS anyway.

I'd trust those 3ware SATA RAID cards (formerly Mylex); that is a very mature technology.
 
So far Seagate only has a 200 GB version of their SATA drives. They should have a 300 GB version soon though, correct?

Also, with a RAID 5 array, can I use more than one parity disk to make sure I can have more than one drive fail and it will be OK?
 
Well, check out the Promise SX6000 controller; it's fairly affordable for a RAID 5 IDE controller.

About the SATA controllers not being 'reliable', I'd have to say no to that. They're the same controllers as the IDE ones but with a different interface. I suppose you'll just have to wait, though, and see what's going to be in your price range.
 
And I have the SX6000; it's a fine RAID 5 controller. (You need to buy the cache RAM separately; it supports up to 128 MB of ECC or unbuffered.)

But I've never used it in a high-volume multi-user environment.

At this point I'm thinking of selling it (I have external SCSI RAID arrays); dealing with the PATA cables is a pain in my long-range plans.
 
Not a RAID guru like some of you folks, but couldn't he cut some costs by skipping the hardware controller and going with software RAID? Beefing up the CPU a little more is a lot cheaper than many of the RAID 5/1 controllers out there.

Yea or nay?
 
Certainly, but there are tradeoffs; the suitability greatly depends on the access/traffic required and the overhead he is willing to incur. In addition, employing dynamic disks (for software RAID) has its own drawbacks.

Dynamic Disks in W2K

Overall, software RAID is an inferior solution in comparison; the main reason to employ it is the ability to use HDDs of various sizes, as opposed to matched drives. On top of the performance lost to CPU utilization, you also lack the (typical) cache that high-end RAID cards provide. The question of whether that performance hit is going to bottleneck access is really a function of the traffic.
 
Originally posted by cyr0n_k0r
So far Seagate only has a 200 GB version of their SATA drives. They should have a 300 GB version soon though, correct?

Also, with a RAID 5 array, can I use more than one parity disk to make sure I can have more than one drive fail and it will be OK?

No, but you can install any number of drives as "hot spares", meaning that normally they will be unused but will automatically be rebuilt with a failed drive's data in the case of a drive failure. That means that the only way you should have an array failure is if 2 drives go out within the length of time it takes to rebuild a drive to hot spare (unlikely, I'd be more worried about some kind of controller failure/flakiness).
 
But with the increase in areal density, rebuild times go up. A hot spare is a good solution for "normal" wear and eventual drive failure, but the only solution for catastrophic failure (a power event, virus, natural disaster, fire, etc.) is backup to hard media (typically tape) or to another remote array.
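
To see why rebuild windows stretch as drives get bigger, here's a rough sketch (the throughput figure is an assumption for illustration, not a measured number):

[code]
# A rebuild has to read the surviving drives end to end, so the floor on
# rebuild time scales with drive capacity over sustained throughput.
drive_gb = 250      # assumed drive size
rate_mb_s = 30      # assumed sustained rebuild rate, MB/s (illustrative)

hours = drive_gb * 1000 / rate_mb_s / 3600
print(f"~{hours:.1f} h to rebuild a {drive_gb} GB drive at {rate_mb_s} MB/s")
# ~2.3 h; double the drive capacity and the exposure window doubles too
[/code]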

By far the most popular cause of the death of an array is pilot error. I experienced that myself when I migrated my array: the manual said that the card would re-recognize the assorted drives (6), but the recovery manual said under no circumstances to rearrange the drives.
 
Originally posted by Zuht
No, but you can install any number of drives as "hot spares", meaning that normally they will be unused but will automatically be rebuilt with a failed drive's data in the case of a drive failure. That means that the only way you should have an array failure is if 2 drives go out within the length of time it takes to rebuild a drive to hot spare (unlikely, I'd be more worried about some kind of controller failure/flakiness).
This sounds like something I would really want to do. Can you use as many hot spares as you want, say 2 or 3?

And is the array smart enough to automatically start rebuilding if one of the drives fails?
 
I know 3ware cards allow at least one hot spare; I've never added more than that yet. And they do automatically start rebuilding onto it after a drive fails.
 
Almost any card that will support a hot spare will also have management software that will do an automatic rebuild.

The disadvantage to having more than one hot spare is that it cuts into the channels available for the RAID 5 (its strength being the efficiency rating: (number of drives - 1) / number of drives). Besides, the rebuild window would provide the time you'd need to physically replace the failed drive. There really isn't any advantage to more than one hot spare (if you lose 2 drives in a RAID 5 you're done anyway).
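
To put numbers on that efficiency rating, a quick sketch:

[code]
# RAID 5 storage efficiency: (N - 1) / N of the array is usable, since
# one drive's worth of space goes to parity. Every drive parked as a
# hot spare is a channel that no longer counts toward N.
def raid5_efficiency(channels: int, hot_spares: int = 0) -> float:
    n = channels - hot_spares          # drives actually in the array
    return (n - 1) / n

for spares in (0, 1, 2):
    print(f"6 channels, {spares} spare(s): "
          f"{raid5_efficiency(6, spares):.0%} of array space usable")
# 6 channels: 0 spares -> 83%, 1 spare -> 80%, 2 spares -> 75%
[/code]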
 
The whole idea of RAID for most people is redundancy, right? So you've got 10 drives and you don't wanna lose anything if one goes down.

Is RAID really any safer, though? If your controller card goes, you've lost them all. Both ways are what-ifs: there's a chance your drive will die and a chance your controller will kick the bucket.

So how does RAID actually give you any peace of mind? Most likely only one HDD will fail, right? So you've lost what's on one drive; not good, but better than all of it. Your controller goes and it's all gone. Both ways it's only one device failing.

Maybe I'm looking at it wrong, but RAID just doesn't seem too appealing unless you're going to spend big bucks on a controller you're pretty bloody sure won't fail.
 
RAID is weird..

If a controller goes, the drives don't lose any information.

You can usually replace the controller and rebuild the array without losing any data; that's my understanding.
 
Originally posted by Poop
RAID is weird..

If a controller goes, the drives don't lose any information.

You can usually replace the controller and rebuild the array without losing any data; that's my understanding.

Correct. Controller gone -> replace it ...

There are some nice 8-channel SATA controllers from 3ware and RaidCore ...
 
Originally posted by Zuht
No, but you can install any number of drives as "hot spares", meaning that normally they will be unused but will automatically be rebuilt with a failed drive's data in the case of a drive failure. That means that the only way you should have an array failure is if 2 drives go out within the length of time it takes to rebuild a drive to hot spare (unlikely, I'd be more worried about some kind of controller failure/flakiness).

Can't stress this enough. RAID 5 is not a truly effective/fault-tolerant RAID 5 without a hot spare or two. Your info is MORE than worth the cost of an extra drive.
 
Originally posted by SKiTLz
So how does RAID actually give you any peace of mind? Most likely only one HDD will fail, right? So you've lost what's on one drive; not good, but better than all of it. Your controller goes and it's all gone. Both ways it's only one device failing.

Maybe I'm not reading that right, but if one drive in a RAID 5 dies you don't lose anything. You can continue running with a degraded array, but all of your data is still there. The other drives contain the parity information to continue running and rebuild the data onto the lost drive when a new one is inserted.
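
For anyone curious how that works under the hood, RAID 5 parity is just XOR across the stripe; here's a minimal sketch with a hypothetical three-data-disk stripe:

[code]
# RAID 5 parity sketch: the parity block is the XOR of the data blocks
# in a stripe, so any single missing block can be rebuilt by XOR-ing
# everything that survives.
def xor_blocks(*blocks: bytes) -> bytes:
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"   # data blocks in one stripe
parity = xor_blocks(d0, d1, d2)          # stored on the parity drive

# the drive holding d1 dies; rebuild its block from survivors + parity
rebuilt = xor_blocks(d0, d2, parity)
assert rebuilt == d1                     # data recovered intact
[/code]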
 
Correct.

Poop was also correct: simply replace the controller.

And I don't use a hot spare, but then my array isn't for a 24/7 server either. If it were, I'd agree with Nate7311.

I employ all 6 channels for drives. In the event a drive fails, I'll be using the computer and will be immediately notified; I simply replace that drive and rebuild. (I do have several matched spares, just not online as hot spares.)
 
Originally posted by Ice Czar
Correct.

Poop was also correct: simply replace the controller.

And I don't use a hot spare, but then my array isn't for a 24/7 server either. If it were, I'd agree with Nate7311.

I employ all 6 channels for drives. In the event a drive fails, I'll be using the computer and will be immediately notified; I simply replace that drive and rebuild. (I do have several matched spares, just not online as hot spares.)

Thx for clearing that up. I'm pretty shocked that you can just replace the controller and carry on; I thought it would have taken the array down.

Maybe not such a bad alternative then.
 
One thing that can happen with a RAID array and a controller failure is that the controller actually corrupts the array on its way out. This is bad; replacing the controller will not get that data back. It's fairly rare compared to losing a drive itself, but I have seen it happen, so it's not unheard of.
 
Yeah, there are hundreds of ways to corrupt data. You can think of a RAID controller as an add-on PC: many have dedicated processors, their own RAM subsystem, BIOS/hardcoded software, and of course storage.

You probably run near the same risk of corrupting data as you would with a normal PC. The main difference is that if the array information written to the drives is corrupted, reassembling the stripe and parity becomes problematic; there are services that do that, and even utilities like Disk Patch that offer some support.

But generally speaking, you can simply replace the controller.

RAID isn't a substitute for backup. Enough arrays can come close (for enterprise storage), but generally the objective there is that the remote array isn't doing anything but imaging the near-online array (where the action is) and creating hard-media (tape) physical backups, and then repeating.
 
Struth. Safe data storage isn't cheap, especially not for most of us with high storage needs. I'd hate to see the costs of some enterprise-like data storage setup where there's a backup of the backup that made the backup.
 
I'll post in here instead of starting my own thread.

My buddy needs a large drive array, at least 1 TB of safe storage. Speed isn't of the essence, but it needs to be direct-connected to the computer, and there isn't space inside the computer to connect...

And it needs to be Mac compatible (OS X 10.3).

Are there external cases with FC or SCSI for the external connection that use IDE on the inside?

Gigabit Ethernet could work, but I'm not sure. It needs to have low processor consumption.
 
methododical: how about a VIA C3 in a mATX case with a superb PSU, one 4-port PCI add-on card, and 5 drives (1 onboard), plus another small one for boot (onboard too)? Software RAID 5 under Linux/BSD (some support online expansion too), or just a software JBOD array. But definitely invest in a backup solution (be it external hard drives or tape). Fileserve via Samba; it definitely works with a Mac :)

You can buy external FC boxes (but 1 TB with SCSI is a bit costly) or FireWire, but they usually cost a lot more than a DIY solution. What's his budget?

Thread starter: a RAID array is never a backup (except when you have a mirrored system with the same data), so look into a backup solution first. As you don't need 24/7 redundancy (the primary idea behind a RAID array), why waste money on RAID 5 then? A simple JBOD array will do the same for your needs and costs you less; in addition, there's less to worry about setting it up. Is your data important? Back it up. A simple external 5.25" FireWire case with a few removable caddies and several 250 GB drives will work perfectly as a home backup.
 