Here's what I hate about RAID (maybe someone can change my mind)

tdx

Hey y'all,

I have a lot of DVDs to rip (about 5TB) and I've been thinking about setting up a RAID array with a lot of disks. I'm thinking of something like 8x 750GB in hardware RAID 5 (8-port Areca).

This is what sucks though: this would be a very expensive setup, and when I want to upgrade it in the future, it will be totally useless. Why? Because once I choose the size of disks for my RAID 5 array, that's what I'm stuck with. So if, say, in a year I want to stick in some 2TB drives (or whatever is out there by then), my RAID card will only use 750GB of each. What's worse, if I already have 8 drives on my Areca card, I'll have to buy a new one in order to create a new RAID array and "transfer" the data from my first RAID array to the new one.

So, basically, I'm going to spend $5K on an array that is barely big enough and which I'll have to throw away when I want to upgrade to bigger drives.

Am I missing something or is RAID pretty lame in that respect?
 
My plan is to buy all the drives for a new array when the time comes, transfer all my data onto them as individual drives, split my current array into individual drives, transfer the data back onto those, and then transfer it to the new array once it's built. Roundabout, I know, but the time it will take me isn't worth the $250 cost of a new RAID 5 card.

You could always consider encoding the DVDs in high-quality XviD, say 1GB per movie. Somehow I doubt you have 5,000 movies. (Or just using DVD Shrink to strip out all the crap you don't want will reduce most movies by a third or more.)
 
It's not really lame, it's a limitation. Since the controller card is usually the biggest individual cost, get one that has plenty of ports for future expansion. I don't dabble much with SATA/IDE RAID setups; I use all SCSI RAID 5 setups. With just the two controllers in my main server I already have 16 drives, and they can handle up to 84.

If you need more space later, get something with a dozen ports or more.
 
I agree with the points you guys are making, but the main problem remains: if you start with a certain drive size in an array, there is no way to move to bigger drives later on without creating a new array from scratch and throwing out the smaller drives.

Once again, if someone knows a way to upgrade arrays with different sized drives without losing the old ones, I'd be happy to hear it.
 
tdx said:
I agree with the points you guys are making, but the main problem remains: if you start with a certain drive size in an array, there is no way to move to bigger drives later on without creating a new array from scratch and throwing out the smaller drives.

Once again, if someone knows a way to upgrade arrays with different sized drives without losing the old ones, I'd be happy to hear it.

Not without JBOD, and I dunno if JBOD can even do it. What you could always do is get a 16-port card now with a proper case, buy 8 drives and run RAID 5/6 (I suggest 6, or at least a hot spare), and if in a few years 750GB drives aren't dirt cheap and enough space for you, you can always just buy 8 2TB drives and create a separate array. I don't really see what the big deal is here...
 
hokatichenci said:
Not without JBOD, and I dunno if JBOD can even do it. What you could always do is get a 16-port card now with a proper case, buy 8 drives and run RAID 5/6 (I suggest 6, or at least a hot spare), and if in a few years 750GB drives aren't dirt cheap and enough space for you, you can always just buy 8 2TB drives and create a separate array. I don't really see what the big deal is here...
Agreed. You just make a second array, have Windows/Unix extend the filesystem across the two arrays, and you're done...
 
Oh, okay, I didn't realise you could do that with RAID, I thought all drives had to be the same size. My bad.
 
tdx said:
Oh, okay, I didn't realise you could do that with RAID, I thought all drives had to be the same size. My bad.

All drives within the same volume are used at the capacity of the smallest member. This means you could, say, buy four 300GB drives; if a 300GB fails and you can only find a 320GB, the 320GB will work (but might be kinda finicky, since typically you want all identical drives). But like I said, you can always create a completely separate volume set if you plan for future expansion.
 
I see a lot of talk of RAID 5 here; what is it?

I know RAID 0 and 1, and I even get 0+1... but 5? Little help, please.

I'm looking to set up a large storage volume on one of my PCs and am looking for the best solution.
 
RAID explained:

Wikipedia explains RAID.

As far as your RAID limitations go: let's say you have 6x 120GB and want to add 6x 160GB drives. You can do that, but you'd be limited to 120GB for each drive on that specific array. As said before, you can create another array instead and have every drive seen at 160GB.
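
To put rough numbers on it, here's a quick back-of-the-envelope sketch (plain Python, using the example sizes above and assuming RAID 5, where one drive's worth of space goes to parity):

```python
# RAID 5 usable capacity is roughly (drive_count - 1) * smallest_drive,
# because every member is truncated to the smallest drive and one
# drive's worth of space is used for parity.

def raid5_usable_gb(drive_sizes_gb):
    """Usable capacity of a single RAID 5 array, in GB."""
    return (len(drive_sizes_gb) - 1) * min(drive_sizes_gb)

# One big mixed array: six 120GB + six 160GB drives
mixed = [120] * 6 + [160] * 6
print(raid5_usable_gb(mixed))        # 1320 GB -- the 160s get cut down to 120

# Two separate arrays on the same controller
print(raid5_usable_gb([120] * 6)     # 600 GB
      + raid5_usable_gb([160] * 6))  # + 800 GB = 1400 GB total
```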

cheers
 
These many-disk arrays are just not much fun, as you pointed out.

It certainly makes more sense to build several arrays.

Another alternative is using partitions on the drives in different RAIDs.

Example:
- you now buy 5 disks with 500 GB, put them into a 5x 500 GB array
- later you buy 3 more disks with 1000 GB
- you partition each disk to have 2x 500 GB
- the first partition in each drive goes into the old array, now an 8x 500 GB array
- the other three partitions form a new 3x 500 GB array

Of course performance sucks if you use both arrays at the same time, but otherwise it'll work fine.
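
A quick sanity check of the numbers in that scheme (a rough sketch in plain Python; I'm assuming RAID 5 for both arrays, so usable space is (n - 1) x member size):

```python
# Capacity check for the partition-splitting scheme above,
# assuming RAID 5 for both arrays: usable = (members - 1) * member_size.

def raid5_usable_gb(members, member_size_gb):
    return (members - 1) * member_size_gb

# Old array grows from 5 to 8 members of 500 GB each:
# the 5 original disks plus the first 500 GB partition of each new 1000 GB disk.
old_array = raid5_usable_gb(8, 500)   # 3500 GB usable

# The second 500 GB partitions of the three new disks form their own array.
new_array = raid5_usable_gb(3, 500)   # 1000 GB usable

print(old_array + new_array)          # 4500 GB usable out of 5500 GB raw
```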
 
While there is still higher overhead than a plain JBOD solution, you can always look into a middleware layer that separates physical device partitioning from the filesystem, something like LVM (I believe EVMS does the same), combined with a resizable filesystem like XFS. Basically it would allow you to keep your original array of 8x750GB, then when your 2TB drives come out you can create another array (or just add an individual drive) and add the new space to the existing logical volume that LVM/EVMS manages. Then just resize the filesystem (XFS is a good one) and you've efficiently added storage space to a single volume. Most good-quality RAID cards allow multiple arrays per card, so if you had a 12-port Areca and used 8 ports now, you'd have 4 more later for another array or individual drives.
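
If it helps to picture the layering, here's a toy model in plain Python of the idea (purely illustrative; `VolumeGroup` and `LogicalVolume` are made-up names, not the real LVM tools or their API): physical devices go into a pool, the logical volume draws space from the pool, and growing the filesystem is a separate step on top.

```python
# Toy illustration of the LVM idea: physical devices are pooled,
# a logical volume draws from the pool, and the filesystem is
# resized separately on top of the logical volume.
# Illustrative only -- not the real LVM tools or their API.

class VolumeGroup:
    def __init__(self):
        self.free_gb = 0              # space contributed by physical volumes

    def add_physical_volume(self, size_gb):
        # e.g. a whole RAID array, or a single new disk
        self.free_gb += size_gb

class LogicalVolume:
    def __init__(self, vg):
        self.vg = vg
        self.size_gb = 0
        self.fs_size_gb = 0           # the filesystem can lag behind the LV

    def extend(self, size_gb):
        assert size_gb <= self.vg.free_gb, "not enough free space in the VG"
        self.vg.free_gb -= size_gb
        self.size_gb += size_gb

    def grow_filesystem(self):
        # analogous to growing XFS to fill the logical volume
        self.fs_size_gb = self.size_gb

vg = VolumeGroup()
vg.add_physical_volume(8 * 750)       # today's 8x750GB array (raw, ignoring parity)
lv = LogicalVolume(vg)
lv.extend(8 * 750)
lv.grow_filesystem()

vg.add_physical_volume(4 * 2000)      # later: a second array of four 2TB drives
lv.extend(4 * 2000)
lv.grow_filesystem()
print(lv.fs_size_gb)                  # one big volume: 14000 GB
```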

Word of caution: obviously, troubleshooting becomes much more difficult if you have a catastrophic loss of data or disks, since now you can have multiple devices in different configurations all with data on them. People in this forum always say this, but RAID is not a good backup solution; it is meant to increase availability and uptime.
 
I think the closest you're going to get to a single unified volume is the "mount as folder" option in Windows. Then you could have the big stuff (say the DVDs) in that folder and everything else in other directories.

Yes, you could use the two arrays as portions of a JBOD, but that will almost certainly cause problems in the future. What happens when you try to expand the underlying RAID array? I promise it's nothing good, and you need dynamic disks to do software JBOD... ick.

 
Thanks for these very detailed answers, compadres. Looks like there is a lot more for me to learn about RAID than I thought.

Why can't they just make single 10TB drives, so I can forget about RAID completely? ;)
 
tdx said:
Thanks for these very detailed answers, compadres. Looks like there is a lot more for me to learn about RAID than I thought.

Why can't they just make single 10TB drives, so I can forget about RAID completely? ;)

*thinks of the amount of data lost* :(
 
*thinks of the amount of data lost*

Yes, but all we would need is to buy two of them (heck, even three of them) and back up all the data.
 
When it comes to data loss, RAID-0 is also a matter of point of view.

The chance of losing any given file to the first disk death that happens to you is the same for a single disk as for multi-disk RAID-0.

RAID-1 and -5 lovers also forget that there are prime opportunities to lose all the data on your RAID, such as:
- PSU frying all connected components
- OS software error scrambling filesystem contents
- driver/firmware error messing up the RAID (the software complexity of a RAID solution is higher, hence a higher chance of error)
- flooding or mechanical impact to the computer, such as from dogs, kids, flying dinosaurs, plates thrown at you by a disgruntled wife, etc.
 
Retro Rex said:
Check out unraid

I have one and it rocks

http://lime-technology.com/
While that does look pretty flexible, note the difficulty with it: "The only requirement is that the parity drive must be as large as or larger than any of the data drives." So if you buy 8x750 now and go to expand in the future, you'd have to break the array (goodbye, parity), add a new larger drive, rebuild the parity onto it, and then add the new disks. And it's parallel ATA; I can only hope that will be phased out by the time 2TB disks arrive :p And finally, having multiple separate filesystems is a pain in the butt - you have to manually sort out what goes in which folder, etc.

I don't know why everyone seems to treat RAID 3/4 as so much better than RAID 5 (see for example the things by XFX, and these...). Sure, badly done RAID 5 is bad, but there are plenty of good implementations of RAID 5. No reason for this "baby/bathwater" mentality except advertising.

 
Well, changing the parity drive is a bit of a hassle, but you never lose your parity. If another disk dies during the process, you can always throw your first parity drive back in and recover the failed drive.

Also, separate filesystems are a pain for file sorting, but great for recovering data. In a RAID 5, if 2 disks die, you lose all the data on all the drives. In unRAID, if 2 disks die, you only lose those 2 disks' worth of data. Hell, if all the disks die except one, at least you still have that one disk's worth of data.
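
To put rough numbers on that comparison, here's a sketch in plain Python (assuming a 12-drive setup, roughly equal data on every drive, and the worst case where every failure hits a data drive rather than the parity drive):

```python
# Rough comparison: fraction of data that survives k simultaneous disk
# failures, for classic RAID 5 (striped, single parity) versus an
# unRAID-style setup of independent data disks plus one parity disk.
# Assumes ~equal data per drive; worst case, all failures hit data drives.

def raid5_surviving_fraction(failures):
    # RAID 5 tolerates exactly one failure; a second loses the whole array.
    return 1.0 if failures <= 1 else 0.0

def unraid_surviving_fraction(data_drives, failures):
    if failures <= 1:
        return 1.0                    # parity can rebuild a single loss
    # Beyond one failure, only the drives that actually died are lost;
    # the remaining disks still hold readable filesystems.
    return (data_drives - failures) / data_drives

for k in range(4):
    print(k, raid5_surviving_fraction(k), round(unraid_surviving_fraction(11, k), 2))
# 0  1.0  1.0
# 1  1.0  1.0
# 2  0.0  0.82
# 3  0.0  0.73
```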
 
That depends - I would imagine the firmware can't trust any disk that's been unplugged and re-plugged, so it would have to assume that the contents of the disk are corrupt and wouldn't be able to recover. And that one parity disk will likely wear out faster - it has to do a read and a write for every write operation to any other disk. And the fact that the data isn't striped leads to larger amounts of parity generation - longer reads and writes than you'd need for RAID 4/5.

As for "great for recovering data": in RAID 4/5, if you lose one disk you're at RAID 0 reliability, and losing a second disk means you lose everything on the array. With this, if any two disks other than the parity disk die, you effectively lose three disks' worth - the two failed data disks plus the parity disk, since parity is useless without the other (n-2) complete disks. That's better than RAID 5, true, but still not good.

And lastly, it's third-party, closed-source, and appears to use a proprietary system for generating the parity. You can't see the code to verify its correctness even if you want to, and you don't even have the assurance of a big name on its implementation.

 
So you're going to spend 8 x $475 on drives right now for your RAID array. That's roughly $4k on drives alone. You realize that, right? And that's not counting the cost of a huge DVD collection. So you're saying that the problem with RAID is that in a few years, when 2TB drives come out, you won't be able to spend a couple hundred bucks on a used server setup and a decent case to put your new boatload of expensive drives into? You seem to have more money than you know what to do with. Migrating to larger disks won't be a problem.
 
Inability to grow is a limitation of the controller above all else.

However, you just pointed out exactly why all my hot-swap arrays are SCSI-SCA. It's the same 80-pin connector and has been for years upon years; I can put old 9GB and new 300GB drives into the same enclosure. Anything else, keep dreaming. They can't even standardize across brands, and I've no doubt they'll change SATA's connector again soon enough. :p
 
uOpt said:
When it comes to data loss, RAID-0 is also a matter of point of view.

The chance of losing any given file to the first disk death that happens to you is the same for a single disk as for multi-disk RAID-0.

RAID-1 and -5 lovers also forget that there are prime opportunities to lose all the data on your RAID, such as:
- PSU frying all connected components
- OS software error scrambling filesystem contents
- driver/firmware error messing up the RAID (the software complexity of a RAID solution is higher, hence a higher chance of error)
- flooding or mechanical impact to the computer, such as from dogs, kids, flying dinosaurs, plates thrown at you by a disgruntled wife, etc.

This is why you don't just trust a RAID 5 array; any "critical" data on my personal array is backed up in at least 2 other places, with at least 1 of those places being external to the server.

On topic: I have yet to see it in IDE devices (though I haven't been following the market that closely since I bought a 3Ware 7508-12 a few years ago), but higher-end SCSI RAID cards allow you to expand the array. The high-end SCSI RAID cards we use in our HP servers at work (LSI, Adaptec, and such) allow you to replace the drives in a RAID 5 array one at a time (allowing time for the array to rebuild onto each new drive) until they're all the new size, and then you can expand the array to make use of the new space. We've transitioned a number of arrays like this after we ran out of space over the years, from 18 to 36GB drives, and more recently to 72GB drives (mail server).
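
For anyone picturing how that plays out, here's a rough sketch in plain Python (using the 18GB-to-36GB numbers above and the usual RAID 5 rule that usable space is (n - 1) x the smallest member): nothing is gained until the last small drive has been swapped and the array expanded.

```python
# Sketch of the replace-one-drive-at-a-time migration described above.
# RAID 5 usable capacity is (n - 1) * smallest member, so the array
# only grows once every member has been replaced and the array is
# expanded into the new space.

drives = [18, 18, 18, 18]     # GB, the old array
new_size = 36                 # GB, the replacement drives

def raid5_usable(members):
    return (len(members) - 1) * min(members)

for i in range(len(drives)):
    drives[i] = new_size      # pull one old drive, insert the bigger one
    # ...wait here for the array to rebuild onto the new drive...
    print(f"after replacing drive {i + 1}: usable = {raid5_usable(drives)} GB")

# Stays at 54 GB for the first three swaps, then jumps to 108 GB once
# every member is 36 GB and the array is expanded.
```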

And personally, when I migrated from a 4x 60GB RAID 5 array on a 3ware 6410 to my current 12x 200GB RAID 5 array three years ago, I bought an extra 200GB drive and backed up the old array to it. That extra 200GB drive now serves as one of the drives I back the array up to. By the time my 2TB array fills up, 2TB drives should be available, and I'll simply do the same. So I build my arrays to last :)
 
SoulkeepHL said:
On topic: I have yet to see it in IDE devices (though I haven't been following the market that closely since I bought a 3Ware 7508-12 a few years ago), but higher-end SCSI RAID cards allow you to expand the array.

My Promise SX4-M does it (SATA), and so do the Areca cards. I believe the Broadcoms allowed you to do that as well.
 
SoulkeepHL said:
And personally, when I migrated from a 4x 60GB RAID 5 array on a 3ware 6410 to my current 12x 200GB RAID 5 array three years ago, I bought an extra 200GB drive and backed up the old array to it. That extra 200GB drive now serves as one of the drives I back the array up to.

That is a good idea, to use a separate drive to back up your array on. I'll have to remember that when I (eventually) build my server.
 