RAID-0 proved ineffective at boosting desktop application/game performance

Well masher, I understand your point, but you can still move large blocks of data faster with how RAID is set up. ;o) I can also see your point about how the extra bandwidth isn't needed for disk-to-disk transfers and other scenarios, and how disheartened you are with your limited testing. I'll do my own testing next time my main system needs major changes, and may post it if it's interesting. But as for now, I think I've heard all your side has to say.

@rouge
hitachi.jpg


I might post some numbers later today with my existing setup when I get back, if you can give me something I can easily reproduce to compare with your system.

Not to mention I'm completely maxing my connection out right now, and have been for the past 3 days. =)
 
DougLite said:
You are welcome to say what you want, but the simple fact remains that no RAID setup will deliver improved desktop performance. Power users looking to increase storage performance should look to faster single drives, or to adding independent spindle(s) to service additional loads.

There is a fundamental flaw in the test that, unfortunately, discredits the conclusions. I'm no fan of RAID 0 and would never use it. However, one simply should not draw conclusions on single drive performance vs 2 / 4 drive RAID-0 from tests that use drives of different densities. For benchmark comparisons to have any hope of validity, one needs to ensure that only one variable differs.
 
Lord of Shadows said:
Your farcry results are flawed because farcry has been proven by mikeblas to not make efficient use of the drive when loading. Not to mention the whole deal about varying file sizes with stripe size I mentioned a while back etc.

While it may be that Farcry is coded poorly, it is still a game. I am sorry, but I play neither 3DMark05 nor HDTach for more than 1 hour/year. While I do not play Farcry, there are plenty of people that do.

Your argument is that "the Farcry test is bad, since it does not stress the advantage of a RAID-0 array", however that is the point that DL, UM and masher have been trying to get across the whole time:

While RAID-0 will boost your sequential read/write times, you will not see much, if any, real-world benefit from it.

Lord of Shadows said:
As for the reliability, I think you're playing that a bit hard. Yes, you lose everything if you lose one drive in the array. If my drive lasts four years, raiding it with another equivalent drive doesn't suddenly make either die any sooner. Having a non-raid setup doesn't save your files if your main drive goes down. Duplicating files is still important whether you're raided or not.

If your drive lasts 4 years on average and we assume the failure rate to be distributed uniformly, we can say the following:

4 years * 365.25 days/year * 24 hours/day * 3600 seconds/hour = 126,230,400 seconds.

So the probability that a drive will fail this second is 1 / 126230400. Now, if you have two drives, the probability that EITHER of them will fail is 1 / 126230400 + 1 / 126230400, which is equal to 2 / 126230400, provided that we assume that these are independent events.

So it is very clear that the probability of your data being lost due to a hard drive failing has doubled. Since it is still distributed uniformly, we can also say that the average lifespan of the array is 2 years (126230400 / 2 = 63115200 seconds, divided by 3600 * 24 * 365.25 seconds per year). Someone correct me if my math is wrong; I am open to criticism.
Lord of Shadows said:
I can agree with a bottleneck of a device, but again a faster single drive doesn’t help there either.
A single drive vs. RAID does help your wallet, though. It also reduces complexity....
Lord of Shadows said:
As for file transfer, your test is flawed. You copied a file from one drive to another, while raid copied from one drive to itself, which introduced countless seeks from position A to B as buffers filled.

Why, that is exactly why we selected the test: to point out that for the same money, one can have improved performance with a well-thought-out setup of two disks. For the same investment (assuming that a RAID controller is free), you can do better than RAID-0. You seem to want to measure with two different yardsticks: keeping only tasks that RAID-0 excels at (i.e., do not bench Farcry) while discarding tests where other options are better (i.e., copying from A to B).


Lord of Shadows said:
Now you're working at a higher level than you need to with your testing; you can do two interesting things with raid.
I remember that there was some famous guy that said "Where's the beef?". Nobody is saying that "RAID-0 cannot read/write faster than a single disk", but rather:
While RAID-0 may be faster at the hardware level, this improvement does very little, if anything, for real-world performance.

It surely is not that difficult to understand that we are interested in usable performance rather than theoretical benchmarks. In the end, you do not buy a video card because of the hardware performance it has, but rather for how well it does in the games you play.


broberts said:
There is a fundamental flaw in the test that, unfortunately, discredits the conclusions. I'm no fan of RAID 0 and would never use it. However, one simply should not draw conclusions on single drive performance vs 2 / 4 drive RAID-0 from tests that use drives of different densities. For benchmark comparisons to have any hope of validity, one needs to ensure that only one variable differs.

Go to this URL:
http://www.storagereview.com/articles/200601/WD1500ADFD_5.html

Scroll down about 4/5 of the page; there is a graph of various SATA HDDs running the WoW benchmark. As you will notice, the WD740GD (no TCQ) scores 671. Now compare this to the graph shown on page 1 of this thread, where both the 2- and 4-disk arrays show performance significantly below 600 IO/sec. This clearly shows that RAID-ing the drives does not increase performance. You surely know that SR uses the same hardware platform for all their tests, which they call their "testbed", therefore allowing us to compare results across tests.
 
Lord of Shadows said:
Well masher...I think I've heard all your side has to say.
Unfortunately, the only thing we've heard your side say is "gosh I just _know_ Raid has to work better". You've given no benchmarks, no real-world scenarios where users would benefit, no price-performance analysis, nothing but starry-eyed wishful thinking. Hey, I'm not immune to that...Raid definitely has a "cool factor". But we should look beyond that to make rational decisions.

broberts said:
There is a fundamental flaw in the test that...one simply should not draw conclusions...from tests that use drives of different densities.
No, because you don't understand the question that was asked. The question was "why buy a pricey 150GB Raptor when 2 of the 74s in Raid 0 would be much faster?" That was proven false. And by extension-- since 150GB Raptors are the fastest desktop drives out there now-- so was Raid 0 for ANY drive, except possibly those. If you want faster performance, buy a faster single drive...don't spend money on Raid.
 
drizzt81 said:
someone correct me if my math is wrong, I am open to criticism...
Actually, probabilities are multiplicative, not additive. In this particular case, if the chance of one drive failing is n, the chance of either or both failing isn't 2n, but rather 2n-n^2.

However, for very small probabilities such as drive failure rates, the difference between this and 2n is very small.
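masher's correction can be sanity-checked numerically. A minimal sketch (the 25% yearly figure is purely illustrative, not real failure data):

```python
# For independent events, P(A or B) = P(A) + P(B) - P(A and B),
# which for two identical drives is 2n - n^2, not 2n.
def either_fails(n):
    return 1 - (1 - n) ** 2   # algebraically equal to 2*n - n*n

# With a large (hypothetical) 25% yearly failure chance, the gap shows:
print(either_fails(0.25))     # 0.4375, versus the additive 0.5

# With drizzt81's tiny per-second probability, 2n - n^2 and 2n are
# indistinguishable in practice, just as masher says.
n = 1 / 126230400
print(either_fails(n), 2 * n)
```

The complement form `1 - (1 - n)**2` is the easy way to see it: both drives must survive for the array to survive.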
 
drizzt81 said:
So it is very clear that the probability of your data being lost due to a hard drive failing has doubled. Since it is still distributed uniformly, we can also say that the average lifespan of the array is 2 years ( 126230400 / 2 * 3600 * 34 * 365.25). someone correct me if my math is wrong, I am open to criticism.

The math is correct given the assumption of independence in failures. Thus, the probability of having a RAID-0 array of two disks fail at time x is twice that of a single drive. However, I'm not sure I buy the simplifying assumption of uniformity.

Now, this doesn't change the qualitative answer - any sort of distribution (other than point) would result in there being, on average, a decrease in the expected lifespan of the array as the number of disks increases. It's just not likely that the mean lifespan of the array would be half of the single drive. (If the probability of a single drive failure is normally distributed, I'd say it's in the ballpark of 75%-80% of the single drive)
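That ballpark can be checked with a quick Monte Carlo. Everything here is an illustrative assumption, not vendor data: a 4-year mean lifetime and a 1.5-year standard deviation, normally distributed:

```python
import random

# Mean lifetime of a 2-disk RAID-0 array = E[min(X1, X2)], where X1, X2
# are the (assumed independent) single-drive lifetimes. Mean and standard
# deviation below are illustrative guesses, not real failure statistics.
random.seed(1)
MEAN_YEARS, SD_YEARS = 4.0, 1.5
TRIALS = 200_000

total = 0.0
for _ in range(TRIALS):
    a = random.gauss(MEAN_YEARS, SD_YEARS)
    b = random.gauss(MEAN_YEARS, SD_YEARS)
    total += min(a, b)

ratio = (total / TRIALS) / MEAN_YEARS
print(round(ratio, 2))   # roughly 0.79: in the 75-80% ballpark, not 50%
```

A tighter spread (smaller standard deviation) pushes the ratio toward 1, which is why the answer depends so much on the distribution assumed.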
 
LhasaCM said:
The math is correct given the assumption of independence in failures.
Not correct...but close. I posted the exact number immediately above.

I'm not sure I buy the simplifying assumption of uniformity.
It's not perfectly uniform, obviously. However, within the service life period, and excluding the immediate infant mortality period, it's remarkably linear.

It's just not likely that the mean lifespan of the array would be half of the single drive. (If the probability of a single drive failure is normally distributed, I'd say it's in the ballpark of 75%-80% of the single drive)
Lifespan is not the same as failure rate. If the service life of a drive is five years, the life of an array is five years. However, the failure rate does double. Right off the line.

More than double, as an array can fail due to intersync errors...without any of the drives in it failing. I have no statistics on how common this is, however.
 
masher said:
Not correct...but close. I posted the exact number immediately above.

My bad - the additive math would only be correct if the failures were mutually exclusive. You're right; given just independence:

P(A or B) = P(A) + P(B) - P(A and B)

masher said:
It's not perfectly uniform, obviously. However, within the service life period, and excluding the immediate infant mortality period, it's remarkably linear.

Good to know - I guess I must just be lucky (knock on wood). Outside of DOA, I've never had a drive physically fail sooner than two years.

masher said:
Lifespan is not the same as failure rate. If the service life of a drive is five years, the life of an array is five years. However, the failure rate does double. Right off the line. ..

Isn't lifespan the same as the expected time of failure? Or am I being too cavalier with the terminology?
 
LhasaCM said:
Isn't lifespan the same as the expected time of failure? Or am I being too cavalier with the terminology?
It's not you, it's the drive manufacturers who are being cavalier. They like quoting massive MTBF figures (some stretching 140 years or more), even though obviously no drive will last that long. So instead of calculating a true MTBF, they clip off the higher-risk area outside some arbitrary period, which they call the "service life" or "useful lifespan" or some other such name.

So you can't really calculate a 'true' expectation value for drive lifespan based on those two values. You have a failure rate that's valid only within a given period...and no data outside that. Worse, the MTBF they do quote is, for new drive models, usually based not off ANY real failure data, but on the assumption that failure rates will align with other "similar" drives made by them or others.
 
masher said:
No, because you don't understand the question that was asked. The question was "why buy a pricey 150GB Raptor when 2 of the 74s in Raid 0 would be much faster?" That was proven false. And by extension-- since 150GB Raptors are the fastest desktop drives out there now-- so was Raid 0 for ANY drive, except possibly those. If you want faster performance, buy a faster single drive...don't spend money on Raid.

I was very specific about what conclusions shouldn't be drawn. You clipped that specificity in your quote and then suggested that I did not understand the question. I wasn't responding to the question, just to some of the conclusions that seemed to be drawn from the data. I stand by what I said. One should never draw performance conclusions for different RAID types or RAID vs non-RAID when the tested configurations involve different drive types or capacities.

I have no problem with drawing conclusions about relative drive performance when different drive sizes are tested in the same RAID or non-RAID configuration. But I sure don't think that the data supports the conclusion that one should buy a faster single drive instead of RAID-0 two or more units. Not that I disagree with the conclusion. In fact I've never felt that RAID-0 provides better performance on the average desktop and I think there are some valid benchmarks available to support the conclusion. My only contention is that the posted benchmark doesn't contribute any valid data to the conclusion.
 
broberts said:
I sure don't think that the data supports the conclusion that one should buy a faster single drive instead of RAID-0 two or more units
But that was exactly what was tested. A faster single drive vs. two slower drives. In your first post, you claimed the test was invalid because he should have tested drives of the same density. But had the author done that, the results wouldn't have answered the question, "faster drive, or two slower ones", but something altogether different.

You need to make up your mind, and think through the reasoning here. It's really very simple.
 
I have been running a Raid 0 with 3 HDs and there is definitely a performance gain....

These benchmarks using 4 HDs: were they plugged into a 4-channel raid card or a 2-channel one? Obviously, using 2 HDs on one channel is going to significantly decrease performance, as you're only allowing half the bandwidth for each drive while increasing seek times...

My card is a Rocketraid 404.

I do notice, however, that my seek times are slower when using raid, but when loading contiguous files it's noticeably faster, as long as the hard drives are not completely fragmented.

So yeah, there are a lot of ways to show Raid-0 might not be worth it, but when loading those huge BF2 maps or other large files, that's when I see the performance increase.

Also, all benchmarks will be a result of how you set up the array.... Whether you use smaller clusters or larger ones can make a huge difference in different benchmarks.

However, when opening Windows Explorer (for example), all 3 drives have to spin up and seek; I feel it takes a second or so longer. And when my HD is fragmented, I also think it slows me down.
 
I guess it's time to stop arguing this, but for old times' sake:
  • Raid 0 is faster in *some* situations
  • it's at least twice the cost of a single disk
  • and it more than doubles your chance of failure.

Thus, I'm not going to use raid 0 for a performance increase, and I believe you shouldn't, either.

 
I had to use my drives as a standard setup without RAID 0 during the first few days with my new motherboard. The difference "felt" significantly slower than having the RAID 0. Windows certainly booted much faster immediately after the RAID array and OS were installed. To me, simple stuff like clicking on different windows and programs seems much quicker than on a non-RAID system. I've been using RAID 0 for over a year, and those 3 days on the standard SATA setup were almost unbearable for my patience.
 
Yes I am digging this up again. Clicky
JayisunJ said:
This is possibly a n00b question, but here it goes: for the last ten years, I have used only single hard drive configurations in my computers and have never used a partition either. Building my new system, I decided (b/c of a recommendation from dfi-street.com) to get 2 WD2500JB hard drives and set up RAID 0 on my DFI Lanparty UT Ultra-D. People at dfi-street.com are absolutely convinced that the benchmark increases shown in RAID 0 (as compared to a single drive of the same type) translate to real-world performance increases in Windows and games, but I am wholly underwhelmed with my RAID 0 performance.
How about I(illa Bee? Clicky
I(illa Bee said:
I just went from RAID0 with 2 SATA 8mb cache drives to a 74gb Raptor tonight, and the single Raptor is faster overall..
Ironically enough, I(illa Bee is a convert Clicky
Definitely not the case. I ran 2 Raptors in RAID, and then I let everyone tell me that raid didn't help in real life. So I sold my out-of-space Raptors (my only grief with them was size) and bought a single larger HDD. My system was not the same at all. It was not as responsive, slower to boot, load times increased in my favorite games... and yes, I timed them with a stopwatch.

I then ran 2x80gb drives I had bought new but never used, so I put them in RAID; BF2 load time improved by about 10 seconds, 3 drives improved it another 10-15 seconds, and with 4 drives I am currently beating my buddy into BF2 games by a good 30 seconds. He has dual 74gb Raptors and an FX57... and I am leaving my BIOS alone. The only thing I changed was the ARRAY.

I have run with RAID, and I have run without it. I have run anywhere from 2 to 6 drives in it. I play around with Photoshop and Bryce3D, I play BF2 online, and all the most recent single-player games. (FEAR is a WIP right now.) I multitask a lot. And I must say that no matter what you non-RAID runners say or think you can prove, I am faster than your single drive, and it shows in everything I do on my system anyway.
There's a lot of garbage on the Internet, a lot of garbage that shouldn't be believed. A lot of garbage that won't get a free ride here.
 
Well, DougLite went through all the trouble of digging it back up, so why not comment in it? I ran a 3-disk RAID-0 array in a fileserver for about two months. The content was nothing worth backing up, and I figured it'd be fun to play with RAID. Why'd I run it? Because I was tired of having 7GB left on three drives, and needing somewhere to put an 8GB file. When I got my first 320GB drive, I threw all the data on there, and reconfigured the drives in a software-RAID 5.

While I was formatting the array, a friend called up asking if I could donate any parts to build his girlfriend a computer. Seeing as to how he was a good friend, and his girlfriend had introduced me to mine, I wasted no time in canceling the format and tossing a 40GB drive his way. I hung out at his place and built his gal a computer with him, tossing back a brew or two and spouting random BS.

When I came back, I decided to try out RAID-1. 30% into the format, a random drunk guy walks into my dorm room and calls me "the dorm's computer guy." Being as polite as I can, I acknowledge the title. He hands me a piece of paper with some specs on it. Apparently, he'd taken his box to his brother because it wasn't booting, and it was a HDD error. The brother told him to go out and buy a new HDD, and he decided to come to me first. Sighing, I stopped the format, pulled out another 40GB HDD, and said, "twenty-five bucks." He pulled out his wallet, handed me a ten and a twenty, and wandered out before I could give him change.

The moral of the story is, "Don't run RAID unless you're trying to get rid of HDDs." :)
 
DougLite said:
Those two games are reflective of game loading access patterns in general. UT2004 is remarkably similar to FarCry, while games like Doom3 exhibit patterns much like WoW.
How do we know that the problem isn't games having very poor access patterns when loading, and not that RAID 0 is any slower? It's hard for me to think that a proprietary benchmark proves anything.

masher said:
Your logic is flawed. It doesn't matter WHY any particular game doesn't benefit from Raid-0. What matters is the benefit doesn't exist.

Some games do have a very slight benefit from Raid. Some none at all. Some run slower. Net result: a wash.

Then why does DougLite posit "RAID-0 proved ineffective at boosting desktop application/game performance"? Some games benefit. Those that don't, fitting the category that Doug says is representative of all games, have performance bugs that leave them not taking advantage of all the performance of any drive subsystem, SLED or RAID.

Similarly, the SR results only exercise one set of desktop apps in one particular way -- which may or may not be representative of other desktop users.

I think RAID 0 is overprescribed, sure. But there's a little too much hyperbole in this thread, starting at the top.
 
It's not all about random access performance like you guys make it out to be.

Access time is the same as 1 HDD, but copying large files over a gigabit network depends on the HDD's ability to copy. I was not able to benchmark a single 120GB SATA drive, but I do have a single 80GB SATA drive that I benched. For raw speed, the 120GBx2 is WAY faster. A lot of the time my server has to copy files to 2 different computers at the same time. These files will be well over 1GB (archives, and maybe streaming/copying TV shows, etc); it can serve 2 computers with single HDDs at full write speed. A single drive will only serve 2 computers at maybe 1/4 of the speed, if you're lucky.

SeagateRaid0120GBx2.jpg


Maxtor80GB.jpg


I use my Raid arrays for copying large files, and no one can tell me that raid sucks in my situation; I will never go back to a single drive again. Loading Windows, FEAR, and BF2 ALL have performance increases. Once a game loads it doesn't swap, so I don't know if swapping is slower, but I can tell you that when defragging, my HDDs can read AND write 32MB/s at the same time. Not a lot of single drives can do that.

What's really fast is copying from a desktop that has RAID 0 to another desktop that has RAID 0 through my gigabit network. A CD image can copy in around 10 seconds; a single drive will take more than 20-25. Yes, 25 seconds is not a long time to wait, but multiply that by 20-30 discs.
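For what it's worth, the batch arithmetic in that last paragraph works out like this (using the poster's own rough timings):

```python
# Time saved copying a batch of CD images, per the rough numbers above:
# ~10 s per image over RAID-0 vs ~25 s from a single drive, 30 discs.
raid_s, single_s, discs = 10, 25, 30
saved = (single_s - raid_s) * discs
print(saved, "seconds, i.e.", saved / 60, "minutes")   # 450 seconds, 7.5 minutes
```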
 
Silly kids and their double digit access times / high cpu utilization :p
Old Atlas 10k IV's in RAID 1 off a LSI U320 controller.
hdtune.jpg
 
ambit said:
Silly kids and their double digit access times / high cpu utilization :p
Old Atlas 10k IV's in RAID 1 off a LSI U320 controller.

I'm pretty sure my SATA drives, at $150 CDN combined, are cheaper than that Atlas + controller. Also, that was run on an Athlon XP 2500@3200, Asus A7V600, with SOFTWARE RAID from the mobo.
 
I think the biggest mistake people make when setting up a RAID array is picking the right stripe size. The right stripe size is critical for RAID0 performance. The nForce 4 chipset defaults to 64k stripe when setting up the array. I was doing a search on stripe sizes and according to this guy 128k is the best stripe size for a Windows XP install.

I would like to update this RAID0 guide with my experiences, I have used RAID for a few years now, and have no problems at all.

Upon installing WinXP you will find the average file size to be 373kb. The general rule is correct: you divide the file size by two, and go to the next lowest setting:

373/2=186.5 next lowest 128k
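The rule as stated can be sketched like this. This is just my reading of the guide's heuristic, and the available settings are the ones the guide tries; whether average file size is even the right input is questioned further down the thread:

```python
# "Divide the average file size by two, then go to the next lowest
# available stripe setting" - the guide's heuristic, as I read it.
def suggested_stripe_kb(avg_file_kb, settings=(16, 32, 64, 128, 256)):
    target = avg_file_kb / 2
    eligible = [s for s in settings if s <= target]
    return eligible[-1] if eligible else settings[0]

print(suggested_stripe_kb(373))   # 373/2 = 186.5 -> 128, matching the post
```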

I tried experimenting with file sizes and found the following:

16k: seek errors quadrupled, slow file access, slow loading times, and HD Tach benchmarks showing inadequate HD performance, circa 37MB/sec sequential read (the speed of one of my single drives). I rechecked the settings; I had indeed made a RAID0 partition, but I figured the problem lay with the stripe size and accordingly the RAID structure.

32k: seek errors halved from the previous setting, the OS was faster, loading times reduced, benchmarks improved, etc., but still not as fast as my previous default of a 64k stripe size.

I had already tried the 64k stripe size, and had been having what I thought were good results, with HDTach showing 60MB/sec sequential read, few seek errors, good loading times, etc., so I tried 128k.

128k: OMGWTF. Load times blisteringly fast, HD Tach benchmark at 80MB/sec sequential read, burst at the theoretical max of 92MB/sec (ATA 100), game load times cut in half, hardly any seek errors... like I said, OMGWTF.
So naturally the urge to fiddle further had caught me.......time for 256k.

256k: load times same as 128k, HDtach showed 78MB/sec sequential read, but.........the HDD drop-off occurred a lot earlier, which is slower than 128k. And here is the kicker: burst speed 85MB/sec, and the seek errors rose quite a bit. So I did the only thing that was left to do and went back to 128k.

Stripe size is critical with RAID 0. A standard, brand-new WinXP installation has an average file size of 373kb, so the RAID stripe size should be 128k for WinXP. This should be an industry standard, but as per normal, things like RAID take a few years to catch up.
RAID results should also not be taken from a full drive; that can lead to spurious and false results. If you ran a test on a full drive, got a result of 2MB (mentioned previously), and then installed a RAID 0 with 2MB stripes, you would lose a whole wad of your hard drive, as each stripe would hold one half of one file only. That is why you should only use a standard, new installation to find the OS stripe size: any files added later will conform to the stripe size set, and having the wrong size will cause HD space wastage/overtaxing, which is not really good, and you will not receive the performance you should get.

So my advice is, if you are using WinXP make your stripe size 128k, and leave it as that until the next new OS arrives.

All these tests were done on a Silicon Image standalone RAID card with a Sil 0680 chipset, with the newest BIOS and drivers.

http://discuss.futuremark.com/forum...mber=2515844&page=&view=&sb=&o=&fpart=13&vc=1
 
Seek errors? What is he talking about?

Reading further: file size doesn't matter; read request size does. If the average file size after installing windows is 390-something K, then why would games be helped? They're going to install files in the many-megabytes of size. That's what you want to optimize for, isn't it?

I don't deny that getting the strip size wrong is a problem (you said getting it right was the biggest mistake, burningrave101). But this guy's post is pretty iffy.
 
Wow! Synthetic benchmarks! Nobody has ever run those before! So, let's see... on my data volume, the average file size is 4.2 MB - 137,239 files in 586,028,416KB. And one 3GB folder has 50k of those files, so if I moved that to another drive the average'd be more like 7 megs. Using a stripe size of 4 MB seems a little ridiculous.
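Averages like that are easy to reproduce. Here is one way one might compute them; the directory-walking function is my own illustration, not anything from the thread, and the path argument is a placeholder:

```python
import os

# Average file size under a directory tree, the kind of figure quoted
# above (586,028,416 KB over 137,239 files).
def average_file_size_kb(root):
    total_bytes, count = 0, 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                total_bytes += os.path.getsize(os.path.join(dirpath, name))
                count += 1
            except OSError:    # file vanished or unreadable; skip it
                pass
    return total_bytes / 1024 / count if count else 0.0

# Sanity check of the quoted figure: ~4270 KB, i.e. about 4.2 MB per file.
print(round(586_028_416 / 137_239))   # 4270
```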

*I* think the biggest mistake people make when setting up a RAID array is not testing it for themselves. Try it both ways, stopwatch it, and see what happens.

Mikeblas: he's probably talking about the SMART attribute.

 
unhappy_mage said:
Wow! Synthetic benchmarks! Nobody has ever run those before! So, let's see... on my data volume, the average file size is 4.2 MB - 137,239 files in 586,028,416KB. And one 3GB folder has 50k of those files, so if I moved that to another drive the average'd be more like 7 megs. Using a stripe size of 4 MB seems a little ridiculous.

I don't think you're supposed to calculate it that way...

unhappy_mage said:
*I* think the biggest mistake people make when setting up a RAID array is not testing it for themselves. Try it both ways, stopwatch it, and see what happens.

And I think the biggest mistake made by people who preach against the performance advantages of RAID comes from their limited experience with RAID arrays and their limited knowledge of picking the correct stripe sizes and other such things when setting up an array to meet their specific needs. RAID0 isn't for everyone, but the performance increases are there if you're willing to set it up right. It's true that the biggest advantages will be seen while working with large files, though.

There is the disadvantage of an increased point of failure, but the average user doesn't have more than a few GB of data at most that can actually be considered critical, and that should be backed up externally on something like DVD media no matter what type of array you have, because drive arrays can get screwed up.

If you were only running one drive and it failed, you would still lose all the data on that drive. Even though you have two drives, which "doubles" the chance of failure, I don't really look at it that way. In order for it to "double" the chance of failure, both drives would have to be putting forth the same workload, and they don't. With RAID the workload is distributed between the drives. With two drives in RAID0, half of the data is written to one drive and half is written to the other. Theoretically, the drives are doing the work of a single drive. This would compound if you had 3+ drives. I would say the biggest failure rate is caused by lack of efficient cooling and the imposed heat of having multiple drives close together, especially if you have some that get rather warm, like Raptors.
 
burningrave101 said:
I don't think you're supposed to calculate it that way...
Okay, here's the algorithm:
Upon installing WinXP you will find the average file size to be 373kb. The general rule is correct: you divide the file size by two, and go to the next lowest setting:
Average file size: 4.2 MB. Divide by two, and go to the next lowest setting. I get a 2MB stripe. Not much better...
burningrave101 said:
And I think the biggest mistake made by people who preach against the performance advantages of RAID comes from their limited experience with RAID arrays and their limited knowledge of picking the correct stripe sizes and other such things when setting up an array to meet their specific needs. RAID0 isn't for everyone, but the performance increases are there if you're willing to set it up right. It's true that the biggest advantages will be seen while working with large files, though.
Nope. I ran raid 0 on a pair of 120GB disks; I played with stripe sizes and filesystem cluster sizes and got it tuned just right. Then I noticed it wasn't all that much faster, and it had cost me twice what a single disk would have. I'm not arguing that there's absolutely no performance gain overall, just that it's not cost-effective for gaming machines. And since that's 90% of this forum, it makes a pretty good mantra.

burningrave101 said:
If you were only running one drive and it failed, you would still lose all the data on that drive. Even though you have two drives, which "doubles" the chance of failure, I don't really look at it that way. In order for it to "double" the chance of failure, both drives would have to be putting forth the same workload, and they don't. With RAID the workload is distributed between the drives. With two drives in RAID0, half of the data is written to one drive and half is written to the other. Theoretically, the drives are doing the work of a single drive. This would compound if you had 3+ drives. I would say the biggest failure rate is caused by lack of efficient cooling and the imposed heat of having multiple drives close together, especially if you have some that get rather warm, like Raptors.
Nope. The drives, especially in seek-heavy workloads (loading small files, etc.), are doing just as much work as if they were separate. Remember, most of what we're talking about losing with raid 0 is seek time, and both disks have to seek to fulfill a request. Single drives have pretty high STRs these days, and raid 0 doesn't gain you much in that respect - maybe 170 or 180% of a single disk. But if you're only spending (let's be generous) half your time STRing, you only actually gain 35-40% over a single disk. And those are hypothetical numbers; reality is worse.
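That back-of-envelope estimate is essentially Amdahl's law. Plugging in the same assumed numbers (half the time in sequential transfer, RAID-0 at 175% of a single disk) gives an overall gain of roughly 27%, if anything a bit lower than the rough figure above, which only strengthens the point:

```python
# Amdahl's law: if a fraction f of run time is sped up by a factor s,
# overall speedup = 1 / ((1 - f) + f / s).
def overall_speedup(f, s):
    return 1.0 / ((1.0 - f) + f / s)

# f = 0.5 (generously, half the time doing sequential transfer),
# s = 1.75 (RAID-0 at "170 or 180%" of a single disk):
gain = overall_speedup(0.5, 1.75) - 1.0
print(f"{gain:.0%}")   # 27%
```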

mikeblas said:
How could that possibly be affected by the stripe size in the controller (or the driver software), several layers above?
Changing the stripe size could cause a larger or smaller number of seeks to occur, and the error rate would be proportional, I would guess. Not that I'm defending him, but I think that's what he's saying.

 
unhappy_mage said:
Changing the stripe size could cause a larger or smaller number of seeks to occur, and the error rate would be proportional, I would guess. Not that I'm defending him, but I think that's what he's saying.

So the seek error rate is the number of errors per the unit of time? I would have thought it was the number of errors per the number of seeks.
 
mikeblas said:
So the seek error rate is the number of errors per the unit of time? I would have thought it was the number of errors per the number of seeks.
Seek Error Rate
A count of seek errors. When your HDD reads data, it positions its heads at the needed place. If there is a failure in the mechanical positioning system, a seek error arises. More seek errors (i.e., a lower normalized attribute value) indicate a worse condition of the disk surface and the mechanical subsystem.

I dunno, it's not absolutely clear whether it's a ratio or a raw count of errors. In any case, depending on what software he's using to report the SMART values, he could be reading them completely wrong - for normalized SMART attributes, higher is good. So having the count quadruple could actually be fine.

I'll give him the benefit of the doubt. But I'd say having the STR decrease that much is a pretty good indication that Something is Wrong with his test setup. Maybe both drives on the same cable?

 
So I'm new to this thread - just wondering how long it took to point out to the OP that they compared a single, faster new drive to a RAID of the slower old drives.

That says nothing about comparing RAID to non-RAID. Didn't he learn anything about controlling variables in grade school?

Compare 2 74s in RAID vs. 1 in non-RAID. Compare 2 150s in RAID vs. 1 in non-RAID. Jesus, man.
 
Deusfaux said:
So I'm new to this thread - just wondering how long it took to point out to the OP that they compared a single, faster new drive to a RAID of the slower old drives.

That says nothing about comparing RAID to non-RAID. Didn't he learn anything about controlling variables in grade school?

Compare 2 74s in RAID vs. 1 in non-RAID. Compare 2 150s in RAID vs. 1 in non-RAID. Jesus, man.
The point of this thread is that it isn't cost effective. Two 150s would perform better than one, sure, but they cost twice as much and don't perform anything like twice as fast. And two 74s cost about the same as a single 150. So it's a good idea not to run RAID if you have $300 to put into a disk subsystem.
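A quick perf-per-dollar sketch of that argument, using the post's ~$300 price point and an assumed (not measured) ~30% real-world gain from striping two drives:

```python
# The performance figures below are assumptions for illustration, not benchmarks.
options = {
    "1x 150GB Raptor":        {"cost_usd": 300, "relative_perf": 1.0},
    "2x 150GB Raptor RAID-0": {"cost_usd": 600, "relative_perf": 1.3},
}
for name, o in options.items():
    print(f"{name}: perf per $100 = {o['relative_perf'] / o['cost_usd'] * 100:.2f}")
```

Doubling the spend for roughly 30% more performance means the striped setup delivers noticeably less performance per dollar, which is exactly the cost-effectiveness point.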

 
unhappy_mage said:
The point of this thread is that it isn't cost effective. Two 150s would perform better than one, sure, but they cost twice as much and don't perform anything like twice as fast. And two 74s cost about the same as a single 150. So it's a good idea not to run RAID if you have $300 to put into a disk subsystem.

What you're saying is the point of this thread, Unhappy, but it disagrees with the thread's title: "RAID-0 proved ineffective at boosting desktop application/game performance". This thread hasn't proven anything - at least, not for any stringent definition of the word.

While it might not be cost effective, there are still applications for it. There's always diminishing returns, and performance doesn't always scale linearly with investment.
 
Well, from experience, I have been running RAID0 for 5 years now and have yet to have a failure. As you can see from my setup, I have RAID0 for games and the OS while I have RAID1 for storage, and I back up to DVD. Data integrity should not be an issue if you back up your data.

When I benched a single 150GB Raptor I got about 80MB/s, and when I RAIDed them I got about 130MB/s, so for me it's worth it. Say the array fails - big deal, I've got the CDs and the time + data on the RAID1, and if that fails I have the DVD backup.

 
Exactly.

Doesn't take a rocket scientist to know that raid-0 IS FASTER than a single disk of the same make/model. My raid-0 Raptors get me loaded into maps in CoD4 faster than any of my friends with their non-raid setups. ... and at least 2 of them have almost identical machines to mine, but they don't have a raid-0.

I'm sure I'm just making that up so companies can sell more drives, though. :rolleyes:
 
Doesn't take a rocket scientist to know that raid-0 IS FASTER than a single disk of the same make/model.
It doesn't take a rocket scientist; it just takes someone who knows something about the scientific method. Or maybe someone who understands how hard drives work, and how programs access storage devices. Then they understand why access patterns matter, and how sometimes RAID-0 might be faster, and sometimes it might be slower.
 
I could rattle off everything I know about RAID, and probably make your head spin around your ♥♥♥♥♥♥♥♥. ... but I won't.

Rather, I'll just say that every single one of my customers that I've moved from a single disk to raid-0 for their OS and application needs has told me how much more responsive their machine is afterward. I couldn't care less what other people think about this subject, so I won't argue the point further.

I'll agree that in certain situations a raid-0 would be slower than a single disk... but not many. More importantly, a 2-disk raid-0 is much better than a 4-disk raid-0, for reasons already listed in this lengthy and BS-filled thread.

For anyone to assume anything is certain across all of the hardware configurations out there, in every application instance, is complete idiocy. Generalizations like this are foolish and nearly impossible to prove.
 
Benchmarks and BS aside, I have been running RAID0 on and off (mostly on) for 7-8 years now. My general computing experience has always been much better running RAID0. I build high-end PCs for many folks, and when I've bought another identical drive for them and RAID0'd it, they've always been very pleased. Granted, the main benefits I've seen are boot times, game load times, level load times, Windows install times, defrag times, disk-to-disk backup times, and any heavy, sustained disk activity.

I've always had Raptors, so things like opening IE and similar programs are basically instantaneous, so I've never seen where RAID0 slows the access times down. Also, I've always short-stroked the OS onto the outside of the disk(s). Amazing, reading 10 pages here, and yet I know the positive experiences I've had with RAID0 far outweigh the single drive. Why on earth would I want to fork out another $290 for another VelociRaptor when one has plenty of space?!? Because RAID0 is faster for my computing habits. Period.
 
Exactly.

All the things you listed are a very big deal. Games and OSes loading faster is one of the primary reasons I like raid-0, followed closely by apps opening up MUCH faster.
 
I could rattle off everything I know about RAID, and probably make your head spin around your ♥♥♥♥♥♥♥♥. ... but I won't.
Yeah, I know you won't.
 