To RAID or not to RAID

insanarchist said:
then you, my friend, are an idiot. Congrats!

and you, my friend, just flamed someone, and that is not allowed
we are extremely serious about this particular point of the rules
hence it being listed as the first rule

there is a very limited amount of slack given,
while you learn how the forum works
and you have already burned through most of it
like in life, failure to learn is often terminal,
so good luck ;)
 
yeap, i learned from experience; my acct got owned, so now i'm a limp gawd once again, and boy it sucks
 
tcraig_4096 said:
Nothing says that the probability of a drive failing increases just because you have more of them.
The probability of a single drive failing won't increase, no... but if you have multiple drives and they depend on each other to function properly, i.e. RAID, then the probability of a failure (which brings down the system regardless of which or how many drives failed) has increased; the laws of probability and plain old logic say so. ;)
 
Flip 3000 pennies at once. The chances that a penny goes heads is still 1 in 2 for each and every penny in the 3000.
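To put a number on the disagreement: each penny (drive) keeps its own odds, but the chance that at least one out of n fails does grow with n. A minimal sketch, assuming independent drives and an illustrative, made-up 3% per-drive annual failure rate:

```python
# Probability that a striped array loses data: the array fails if ANY
# member drive fails. The per-drive figure p is an assumed illustration,
# not a measured failure rate.

def array_failure_prob(p_single: float, n_drives: int) -> float:
    """P(at least one of n independent drives fails) = 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p_single) ** n_drives

p = 0.03  # assumed 3% chance a single drive fails in a given year
for n in (1, 2, 4, 6):
    print(f"{n} drive(s): {array_failure_prob(p, n):.4f}")
# e.g. 1 -> 0.0300, 2 -> 0.0591, 6 -> 0.1670
```

Each individual penny is still 1 in 2; it's the "any head in the whole batch" event that climbs toward certainty as the batch grows.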

EDIT [this paragraph]: Yes, I agree totally with the prior post. Thanks for making it clearer: where I was set on hardware failure, it is the loss-of-data probability that is more meaningful.

You have plenty of space for backup, it sounds like. Just do it semi-regularly if you are worried about it. I see no reason not to RAID with your current configuration. RAID does not increase the rate of hardware failure, and if you don't trust the hardware, you are stuck anyway.
 
Eh, don't ban insanarchist, plz. He was talking about a completely different subject than the original post dictated. Data loss and hardware failure are not mutually equatable. Furthermore, I take absolutely no offense, as hardware opinions get escalated to volatile points anyway. For the record, I am an idiot.

TC
 
he's not in any current danger of getting banned
(besides, you generally get a few timeout lessons first)
but it's cumulative; due to the very volatile nature that you described,
manners are all the more important
it's entirely possible to rip someone a new one
without resorting to base insults, but if you're too lazy to employ the rhetorical question,
the rapier wit, or the snide aside, you will eventually run afoul of the nice police

there is a very fine line, and everyone (that lasts) eventually learns it
some seem to walk it constantly, and yet they are still here
but generally, it's better to just be polite and helpful; it wins you more respect ;)


on topic: it is in fact the increased chance of data loss, not hardware failure
but there is a small increase in the latter as well, unless the technical infrastructure is also addressed; not much at first, but increasing dramatically with the number of drives
it's almost completely tied to power events: the more you're powering, the higher the probability there will be an issue, and the more serious you need to become about addressing that (power conditioning, correctly determining your power needs, and providing ample capacity)
Eight Ways To Kill Your HDD 1
Eight Ways To Kill Your HDD 2
(somewhat old-school but still mostly applicable)
 
Impulse said:
The probability of a single drive failing won't increase, no... but if you have multiple drives and they depend on each other to function properly, i.e. RAID, then the probability of a failure (which brings down the system regardless of which or how many drives failed) has increased; the laws of probability and plain old logic say so. ;)

Finally, a correct answer on this. If you're running 2 drives in your computer independently, then nothing changes about the MTBF for each drive.

Now, if you're running 2 drives in RAID 0, it is assumed your MTBF is cut in half... in terms of the reliability of your data. The second drive doesn't die as well... but your data stored on it (which is the most important aspect) is gone from both drives... hence the cut-in-half part.

Some even argue that in a RAID 1 setup, your MTBF is doubled... but again, this is purely for the data, not the actual drives.
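The rule of thumb above can be sketched as a first-order model. This assumes independent drives with a constant failure rate and no repair or hot spare; the 500,000-hour per-drive MTBF is an illustrative number, and note that under this simple no-repair model a 2-way mirror works out to 1.5x a single drive rather than the popular "doubled" figure:

```python
# Rough "MTBF of the data" for two-drive arrays, assuming independent
# drives with constant failure rates and no repair/replacement.
# The 500,000-hour per-drive MTBF is an illustrative assumption.

def mttdl_raid0(mtbf_drive: float, n: int) -> float:
    """RAID 0: any drive failing loses the data, so the combined
    failure rate is n times higher and the mean time is divided by n."""
    return mtbf_drive / n

def mttdl_raid1_no_repair(mtbf_drive: float) -> float:
    """2-way mirror, no replacement: the data survives until the SECOND
    drive dies. E[first death] + E[extra wait] = mtbf/2 + mtbf."""
    return 1.5 * mtbf_drive

MTBF = 500_000.0  # hours, assumed
print(mttdl_raid0(MTBF, 2))         # 250000.0
print(mttdl_raid1_no_repair(MTBF))  # 750000.0
```

With prompt drive replacement a mirror does far better than 1.5x, which is where the much larger real-world figures come from.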
 
djnes said:
Finally, a correct answer on this. If you're running 2 drives in your computer independently, then nothing changes about the MTBF for each drive.

all else being equal

which of course.... :p
 
ok, so let's say you have 2 drives in a raid0... one drive fails and is gone completely. Why can you not take that second drive, put it in a pc where the bios supports low-level formatting, and format it through the bios to erase the raid0 partition?
 
you can

but the data is gone
and it gets worse the larger the array: 6-channel RAID 0, one drive goes...
 
phaelinx said:
ok, so let's say you have 2 drives in a raid0... one drive fails and is gone completely. Why can you not take that second drive, put it in a pc where the bios supports low-level formatting, and format it through the bios to erase the raid0 partition?

My explanation probably wasn't the best, but it's what I meant. You can take that second drive, and format it and make it a usable drive. What I meant, and what Ice Czar pointed out....it's the data that's gone. Hardware can be replaced, but sometimes data is irreplaceable. The second drive would be physically fine...it's just the data is gone for good.
 
well, what i was trying to get at is: if you make good backups, all you really lose in a raid0 setup if the array fails is 1 drive, not both drives. True, the data is gone, but as long as you can still format the hd, it's not a total loss
 
sure, or another approach is to simply employ the RAID
for the OS or other high-speed performance space and keep your "data" on a redundant array
or a single disk; you don't typically need that performance for data on a desktop

exceptions being realtime video editing, a launchpad for games, the OS, and Virtual Memory (pagefile)

RAID 0 is really just AID,
as in what you'll need if you don't have backups :p
there is nothing redundant about it
 
I actually have a six-disk RAID0 array running at the moment. The chance of me losing the data there is very high, but I only use it for work space. I write little source files, get tens of thousands of large intermediate files, and then a small final file. I copy the source & final file back to a safer filesystem (a mirrored disk, in this case) when I'm done.

Also - not only isn't RAID0 "really" RAID, but it long predates RAID, and was traditionally referred to as "striping".
 
I don't care what anyone says; from my experience, raid 0 is a lot faster than just a single same-spec'd hd. Things load, copy, and delete faster. That's what I've experienced, and that's why I'm setting up my 2nd raid 0 as we speak.
 
menlatin said:
I don't care what anyone says; from my experience, raid 0 is a lot faster than just a single same-spec'd hd. Things load, copy, and delete faster. That's what I've experienced, and that's why I'm setting up my 2nd raid 0 as we speak.

You'd be surprised if you ran some tests to the levels we have. I know I was shocked at the outcome.
 
menlatin said:
I don't care what anyone says; from my experience, raid 0 is a lot faster than just a single same-spec'd hd. Things load, copy, and delete faster. That's what I've experienced, and that's why I'm setting up my 2nd raid 0 as we speak.

I used to think this too... back in the great days of everyone saying "RAID0 is gawd!". But benchmarks and just plain use turned me away from it in the end. Sure, for large reads/writes it potentially is faster, but that's not something I personally do often. In my recent quest for speed, I bit the bullet and picked up a 15K SCSI drive; with that, the difference in speed is plain as day and night =).
 
Loading Adobe apps is gonna make me turn to SCSI drives again one day... even darned Acrobat loads slow as hell, ugh.
 
menlatin said:
I don't care what anyone says; from my experience, raid 0 is a lot faster than just a single same-spec'd hd. Things load, copy, and delete faster. That's what I've experienced, and that's why I'm setting up my 2nd raid 0 as we speak.


If you feel that you're seeing a performance gain, great, money well spent!! You're happy with your computer setup, and that's what matters, not what someone tells you.


That said, don't make recommendations to others without having something a bit more concrete than feel.
 
Ice Czar said:
and you, my friend, just flamed someone, and that is not allowed
we are extremely serious about this particular point of the rules
hence it being listed as the first rule

there is a very limited amount of slack given,
while you learn how the forum works
and you have already burned through most of it
like in life, failure to learn is often terminal,
so good luck ;)

Yeah, I'm sorry about that: I guess I'm just used to my other Messageboard/forum where flaming is basically standard. My bad.
 
as mentioned before, it just takes a bit more effort ;)
unless of course vitriolic prose and biting witticism come naturally,
which would however make good practice for the other forum :p

whenever I insult someone,
I strive to make them look up exactly what it was I said :p
that delayed burn is another form of dominance

But I would reiterate again that respect isn't won with temper
 
First of all Greetz!

My own experience is about 30-50% more performance in a raid0 vs a single drive (this is with IDE drives... I don't have SATA to test). Just doing a format on a partition is greatly improved. I have been using raid since about '98. I have used Promise, HighPoint, and softraid (from win2000/2003 server) of various sorts. However, there are a few caveats.

1. It depends on your drives: some drives are better than others in a raid0 set. Try to match drives.
2. If using IDE drives, make sure your drives are jumpered to cable select, NOT master/slave. In my own experience, that can make a 10 to 15 MB/s difference in throughput.
3. Stripe size is very important!!!! Do not let anyone tell you otherwise. I use a RocketRAID 404 on my server and my pc. A 16k stripe is best all around (yes, that includes gaming). There is a lot of talk that raid is worthless for gaming... not true at all. If you experience no improvement, then you have it set up wrong.
4. Never create a raid0 set with master/slave on the same cable (sata doesn't apply here). On my RocketRAID 404, I have 4 IDE channels with a 120GB Maxtor on each channel.
5. Create your logical drives with 16k clusters (matched to your stripe size). This is a little hard to do with your boot partition; just use Partition Magic to change the clusters.
6. Stay away from partitions larger than 150GB. NTFS starts slowing down with partitions larger than 50-100GB (depends on your application).
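For context on why stripe size (point 3) matters at all, here is a sketch of the address math a RAID 0 controller effectively does. It is simplified (round-robin striping only, no parity, no controller-specific quirks) and the function name is made up for illustration:

```python
# How RAID 0 striping maps a logical byte offset to a member drive.
# Simplified sketch: plain round-robin striping, no parity.

def locate(offset: int, stripe_size: int, n_drives: int) -> tuple[int, int]:
    """Return (drive_index, offset_within_that_drive) for a logical offset."""
    stripe_no = offset // stripe_size        # which stripe unit overall
    drive = stripe_no % n_drives             # round-robin across drives
    units_on_drive = stripe_no // n_drives   # full units stored on that drive before this one
    return drive, units_on_drive * stripe_size + offset % stripe_size

# 16 KB stripes across 4 drives, matching the setup described above
print(locate(0, 16 * 1024, 4))       # (0, 0)   -> first unit, drive 0
print(locate(20 * 1024, 16 * 1024, 4))  # (1, 4096) -> second unit, drive 1
```

A big sequential read crosses stripe boundaries and keeps all drives busy; with a tiny stripe, even small requests get split, and with a huge one, small requests hit a single drive, which is why the "right" size depends on the workload.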

(A little tip with Windows: never make your system drive (C:) any bigger than 30-35GB. Windows cannot effectively manage that space, and performance suffers for the whole system.)

That said, let me give you some background. I get sustained (not burst) throughput of 65-75 MB/s, with bursts of up to 90 MB/s (depending on where on the disk I am). With the same disks in single mode, I never get above 35 MB/s sustained and 58 MB/s burst (this was with the 4x120G drives). (Yes, I have benched this.) I have 3x6.4G (raid0) on a FastTrak66, 2x13G (raid0) on an embedded hpt100 controller, and 4x120G (raid0) on a RocketRAID 404 here, plus 3x60G (raid0), 2x40G (raid1), a 13G, and an 8G on a RocketRAID 404 in my server.


The moral of the story: if you set up the raid sets wrong, you will have bad performance. To say raid0 has no improvement over a single drive is to say you have not set your drives up correctly. This is definitely faster for me, both in feel and in empirical data.

One final beef I have is everyone bellyaching that having a raid0 set increases your chances of losing your data. At the risk of being brash... it is rubbish. The argument only applies in a vacuum. There is more of a chance of Windows or a virus trashing your data than the hardware. With that said, whether you have a raid0 or a single drive, if you do not back up your important data off your hard drive to something else (cd, dvd, tape, etc.), then you deserve to lose your data if your drive fails. Even having a raid 1 or 0+1 or 5 set should not be a substitute for a good backup. One thing about hard drives... barring any sharp blows to your system... a hard drive will either die within the first few weeks or will be up for years. The 6.4G drives I listed above have been running since '97.

anyways... i would be interested in your comments on this.

Doctor X :D
 
Doctor X said:
...At the risk of being brash... it is rubbish. The argument only applies in a vacuum. There is more of a chance of windows or a virus trashing your data than the hardware. With that said, whether you have a raid0 or a single drive, if you do not back up your important data off your harddrive to something else (cd, dvd, tape, etc) then you deserve losing your data if your drive fails.) ... barring any sharp blows to your system.... a harddrive will die within the first few weeks or will be up for years. the 6.4G drives i listed above have been running since 97.

I could agree, or at least understand your point of view, 'till you went into failure... I've had more complete hard drive failures (three: two IBM Deathstars and a Quantum which came with a Dell) in the past four years than fatal virus infections. I do agree that backups are essential, RAID or no RAID...

But truth be told, backups just won't catch everything every time. Data can be more or less critical to various people depending on what they do with their computers; their backup plan should follow suit and be more or less rigorous given their needs, but the bigger potential for data loss w/RAID 0 is still there and should not be understated.

Lastly, none of the three drives that failed on me died within the first few weeks or even months. The Quantum died unexpectedly (no bad sectors reported previous to its death) after being shut down for a week during a vacation; I'm still puzzled over that one, but it had been running flawlessly for nearly four years.

The IBMs both died slowly (bad sectors, corruption, the typical MO for the faulty 75GXP line), but it only started being noticeable after a full year of use. I'm not saying HDDs by and large are unreliable beasts... I've got drives in a 486 and a PII from various brands that are still working flawlessly and have been for more than six or eight years.

Yet, failures do happen, they don't always happen within the first few weeks, and a RAID 0 setup does increase your chances of data loss regardless of your backup habits. Can't dismiss the facts.
 
Impulse said:
P.S. Anyone try Ghost 9.0 yet? Is it supporting SATA drives correctly? I read an article that said it would at release but it still wasn't when tested, and do older versions perform scheduled backups to external drives? 9.0 won't apparently...

I've traditionally done backups of data only but I wanna start imaging (seems like the easiest way of doing full sys backups) my system on a regular basis to be ready in the event of another drive failure (first an IBM Deathstar two years ago and now an old 40gb Quantum drive I was using for music), trying to figure out the best method.
I am not fond of 9, I went back to 8 Corporate. There's a hell of a lot more flexibility with 8 Corp. You can create an image server and backup images over the network rather than storing them locally.
 
The_Mage18 said:
I am not fond of 9, I went back to 8 Corporate. There's a hell of a lot more flexibility with 8 Corp. You can create an image server and backup images over the network rather than storing them locally.

I prefer 8 Corp as well. Multicast is an awesome awesome feature.

As far as HDD failures... they do happen, they can happen at any time, and RAID0 does increase your risk of data loss. It's a simple bit of math probability.
 
it was not mentioned, but your pc crashing, losing power, or accidentally getting reset can screw with your RAID 0 array. If you must use raid 0, do it in a 0+1 config so it's at least redundant.
 
that can happen with a single drive too, or with any array, especially parity arrays: see > write hole

Corruption 101
don't hard restart, immediately address any power issues, and test your rig with new drivers or other higher-risk "crashables" with the array removed if possible
 
Doctor X said:
That said, let me give you some background. I get sustained (not burst) of 65-75 MB/s with burst of up to 90 MB/s (depending on where on the disk i am) With the same disks in single mode, i never get above 35 MB/s sustained and 58 MB/s burst (this was with the 4x120G drives). (yes I have benched this). I have 3x6.4G (raid0) on a Fasttrak66, 2x13G (raid0) on a embedded hpt100 controller, 4x120G (raid0) on a rocketraid404, 3x60G (raid0) 2x40G (raid1) 13G 8G on a rocketraid404 in my server.

Just curious: what program are you using for benchmarking? Please don't say Sandra, as it tells me my single WD 120GB IDE drive on an ABIT serial converter is faster than a 15k U320 Cheetah drive (both in burst and sustained throughput) LOL.

Also, as far as I know, no one says that it doesn't help some people. For example, when working with large files (video editing) several GB in size, it does help a lot. But for most people, the performance increase is negligible (2-3 sec faster in loading a game at most) and not worth the cost/risk (when working with many small files, it can sometimes slow you down).

==>Lazn
 
Lazn_Work said:
not worth the cost/risk. (as working with many small files it can sometimes slow you down)

Very true, and not many people realize that.
 
Greetz!

This is good... One thing I failed to state before: I boot from my arrays and do everything from there.

Just curious: what program are you using for benchmarking? Please don't say Sandra, as it tells me my single WD 120GB IDE drive on an ABIT serial converter is faster than a 15k U320 Cheetah drive (both in burst and sustained throughput) LOL.

To answer the question on what benchmarks I used, I used HD Tach and HD Speed.

Personally, I hate Ghost. It is too limited and slow.

I use V2i Protector (Drive Image 7). I love this utility. I can image my drive without even rebooting. And yes, it works fine (it has saved my bacon a bunch). :D

Corruption 101
dont hard restart, immediately address any power issues, and test your rig with new drivers or other higher risk "crashables" with the array removed if possible

I have never had an array get corrupted due to power issues, lockups, BSODs, or anything. (Correction: I did hose one during a bios upgrade once, but I recovered it.) Make sure that you have xp/2000 set to clear the page file on shutdown in order to protect the cache when you shut off the pc.

I have had wonderful luck with HighPoint (hpt100, hpt370, hpt374), and with Promise to a lesser extent (not a big fan of their drivers). I had a FastTrak that was good... and now have a FastTrak66 in my son's pc (the one that has the 3x6.4G in a raid0). As I have stated earlier, I run a RocketRaid404 in my pc and in my server. I have removed drives... forgotten to plug drives back in... plugged drives in a different order... and all I had to do was shut off the pc and plug everything in correctly, and everything came back fine.

Now I must say, I have not had good luck with the other chipsets like Intel, SiS, and another one I cannot remember... they all tend to be slow and a little unstable (but I have not used any of the new versions out there now).


I could agree or 'least understand your point of view 'till you went into failure... I've had more complete hard drive failures (three, two IBM Deathstars and a Quantum which came with a Dell) in the past four years than fatal virus infections. I do agree that backups are essential, RAID or not RAID...

But truth be told the backups just won't catch everything every time. Data can be more or less critical to various people depending what they do with their computers, their backup plan should follow suit and be more or less rigorous given their needs but the bigger potential for data loss w/RAID 0 is still there and should not be underscored.

I was just saying: never trust your data to a hard drive in any config. Make sure that you have backups. As long as you do, you can set up the raid to scream. Case in point... I have a VXA-1 drive and a DLT IV drive for my network backups. If it is really important (like my pictures of my family), I also keep it on a raid1, but I do not rely on that exclusively.

I have used IBM Ultrastars, and I have never liked them (too unstable). The only other brands I have used are WD, Maxtor, and Quantum. A breakdown of the drive models I have:

3x6.4G WD and Quantum (cannot remember the mix), bought in '97
3x13G, all WD (bought in '99)
3x60GB, all Maxtor (bought in 2000 and 2001)
2x40GB Maxtors (gift in 2002)
4x120G Maxtors (bought in 2003)

I have personally lost 4 WDs, 2 Maxtors, 1 Quantum, and 1 Conner since '93. In my professional life, I have lost drives from WD, Maxtor, Quantum, IBM, Fujitsu, Seagate, and Conner. I am a little paranoid about my backups because of this.

The purpose of this and my prior post is to open people's minds so they don't automatically think they know it all when it comes to things like raid and performance. Just because someone posts that they saw no improvement from raid in a review, people automatically think it is the law. I have read many of the reviews, and most of the time they do not take the time to set the raid set up properly to maximize performance. Many times they take the defaults (always a bad idea), or they make the wrong assumptions about how the array will be used. I have been working with ide, eide, ata33/66/100/133, sata, and all the different flavors of scsi, SSA, fibre channel, and san (including the old fddi drives from long ago) for the last 12 years, so I know a thing or two about this subject. I have used raid 0, 1, 0+1, 1+0 (yes, there is a difference), 5, 50 (data striped across multiple raid5 arrays), and a couple of bastardized versions of the above.

Well, sorry for the long post, but I wanted to clarify things just a bit.

Doctor X
 
potroast said:
it was not mentioned, but your pc crashing, losing power, or accidentally getting reset can screw with your RAID 0 array. If you must use raid 0, do it in a 0+1 config so it's at least redundant.

1+0 is better, at least when you get into bigger arrays. If you stripe first, then mirror (0+1), then any single drive failure will take out one half of your mirror, and a failure of any drive in the remaining stripe will lose your data. If you mirror first, then stripe across the pairs, two drive failures would have to be from a matched pair to cause a problem.
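The 0+1 vs 1+0 difference can be checked by brute force. This sketch assumes a hypothetical six-drive layout (three mirrored pairs for 1+0; two three-drive stripes mirrored for 0+1) and counts which two-drive failure combinations lose data:

```python
from itertools import combinations

# Brute-force check: with 6 drives, which two-drive failures lose data?
# RAID 1+0: three mirrored pairs (0,1) (2,3) (4,5), striped together.
#   Data is lost only if BOTH drives of some pair fail.
# RAID 0+1: two 3-drive stripes {0,1,2} and {3,4,5}, mirrored.
#   Data is lost if at least one drive fails in EACH stripe.

pairs = [(0, 1), (2, 3), (4, 5)]
stripes = [{0, 1, 2}, {3, 4, 5}]

def fatal_10(failed):
    return any(set(p) <= set(failed) for p in pairs)

def fatal_01(failed):
    return all(s & set(failed) for s in stripes)

two_failures = list(combinations(range(6), 2))
print(sum(map(fatal_10, two_failures)), "of", len(two_failures))  # 3 of 15
print(sum(map(fatal_01, two_failures)), "of", len(two_failures))  # 9 of 15
```

So with six drives, 1+0 survives 12 of the 15 possible double failures while 0+1 survives only 6, which matches the explanation above.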
 
Murphy's law...

Just to prove my point... I lost my raid0 on my server over the weekend... I had backups. And I lost a 40G Maxtor that was on the raid1 in the same box...

Who let Murphy in here?


:D

Doctor X
 