Why is raid 0 so hated?

Lunas

[H]F Junkie
Joined
Jul 22, 2001
Messages
10,048
I was looking for information on RAID 0 on a laptop for my 2 FireCudas, and I got nothing but "eww, RAID 0, if you really want to risk your data like that" and "gross, if you want to have your array fail on you..."

And everywhere I went, it was "RAID 0 is the devil."

I remember before SSDs, RAID 0 was hot shit, and if you were not running it you were missing out, with people touting 4-disk arrays of 10k RPM WD Raptors...

So, to get on with this:

My new laptop has 2 internal bays for 2.5" drives and 2 M.2 slots, with support for 1 of those being NVMe.

It came with 1 WD Blue 512GB M.2 SATA SSD.

I put 2x 2TB FireCuda SSHDs inside.

Right now the FireCudas are a spanned volume via Win 10.

I can turn on Intel RST and put the 2 FireCudas in hardware RAID 0. Should I? It requires me to reinstall Windows, and Seagate discourages RAID with the FireCuda too. Too bad Seagate is the only game in town for 2.5" spinning disks bigger than 1TB.
 
Mostly it is the difference between how SSDs and HDDs operate. Putting your SSDs in RAID 0 removes space and gives you negligible returns in performance. You would have to be doing a lot of IOPS for it to truly benefit you.

Not to mention, if you RAID 0 SSDs of different speeds, the array will run at the slower SSD's speed, decreasing your performance.

So all in all, for the average system it is better not to use RAID 0 for the SSD drives.
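A toy model of that mixed-speed point (the drive speeds below are made-up nominal numbers, not benchmarks): a stripe reads and writes all members in lockstep, so its sequential throughput is roughly member count times the slowest member, which is why mixing drive speeds wastes the faster one.

```python
def stripe_seq_throughput(member_speeds_mb_s):
    """RAID 0 reads/writes all members in lockstep, so sequential
    throughput is roughly n * (slowest member)."""
    return len(member_speeds_mb_s) * min(member_speeds_mb_s)

# Matched pair of 550 MB/s SATA SSDs:
print(stripe_seq_throughput([550, 550]))  # 1100
# Mismatched 550 + 300 MB/s pair: the faster drive is held back.
print(stripe_seq_throughput([550, 300]))  # 600
```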

EDIT: Some articles on this here and here and here

There are more out there, some that say RAID 0 is still good, others that are lukewarm. I used RAID 0 with SSDs for a while, but I found it was better to just get a faster SSD later on, utilize it, and keep all the storage. If you are going to buy identical SSDs, then it can give you a boost, perhaps around a 50% boost, but how often, and would you notice it? Meh. I would suggest reading the last article I linked, as it is from someone who ran RAID 0 with NVMe drives for over a year and liked it.
 
There are quite a number of people that may poo poo on RAID 0 because it really isn't RAID at all. There is no redundancy to it. RAID 0 offers zero protections for your data. It is a strategy for trying to boost performance, mainly when you are using it with another strategy such as mirroring.
 
To be clear, I was going to put the 2TB SSHDs in RAID 0. I don't care about redundancy; that is what backups are for...
 
To be clear, I was going to put the 2TB SSHDs in RAID 0. I don't care about redundancy; that is what backups are for...

That is what I figured, which is why I gave the first response on the performance gains from RAID 0 and some complications.
 
I've used Raid 0 for years across multiple systems.

I'll put more down when I get home from work.
 
That is what I figured, which is why I gave the first response on the performance gains from RAID 0 and some complications.
Hmm, what I got from the NVMe article was that read speeds significantly increase and writes don't change at all.

I am planning on getting an NVMe drive at least 512GB in size to replace the WD Blue... For $1700, I expected the laptop's performance SSD to be NVMe, not a first-revision WD Blue.
 
Never put anything on a notebook that you want to keep. One drop in a puddle or a 5-finger discount and it's gone.

So yeah, RAID 0 away and have double the sustained reads/writes, but your randoms might slow down a little, or at least be a hair laggier.
 
I have 4 SSDs in R0 and like it just fine. And I do back up. Regularly. But it's fast. 512GB 850 Pros. It's my game array, so it doesn't contain anything critical should it be lost. But I still back up the game files and settings.
 
It's awesome for power users who understand the risk: drives wear out (more so the mechanical varieties), so it's not a matter of if but when the entire array is in jeopardy. For the space utilization and performance gains, scratch data makes a good case for usage. For just about every other type of dataset, if it's not mirrored a la RAID 10, it's asking for trouble. I would rather users be cautious about it than gung-ho, personally.
 
I do not consider IRST-PCH RAID-capable. There is no BBM, and no Linux support either.

RAID 0 = AID 0, without the "R": an Array of Inexpensive/Independent Drives, nothing "Redundant"!

I would not do it. I have done it, and it is by far not as reliable as running Adaptec BBU controllers.

You lose sync faster than you'd wish; whereas this is only "not so nice" with RAID 1, it is a neck-breaker with RAID 0.

If you wanna gamble, go for it and learn ;)

I use Adaptec or Dell PERC/LSI, or no RAID at all.
 
Hmm, what I got from the NVMe article was that read speeds significantly increase and writes don't change at all.

I am planning on getting an NVMe drive at least 512GB in size to replace the WD Blue... For $1700, I expected the laptop's performance SSD to be NVMe, not a first-revision WD Blue.

I think that was generally true for his applications. He didn't seem to be doing much heavy I/O with large files. There are plenty of people out there using RAID 0 or other RAID types, such as 10 and 5, that are seeing increased speeds doing heavier I/O. SSD writes generally run faster the larger the files and the more of them there are. He may have just been doing a bunch of small writes with occasional larger ones, so he never really got into the higher write-speed averages.
 
Right now it is set up to use the 2 drives as a spanned volume, and the SSD is the boot drive... My main Steam library will live on the SSHD volume, as will movies and music, and my SSD will hold frequently played games and Windows...
 
Mostly it is the difference between how SSDs and HDDs operate. Putting your SSDs in RAID 0 removes space and gives you negligible returns in performance. You would have to be doing a lot of IOPS for it to truly benefit you.

Some misinformation in here. I'm not sure where the idea comes from that in RAID 0 you lose space? This is inaccurate. In other versions of RAID you will lose space to the mirroring/parity drives; since there are no mirroring/parity drives in RAID 0, you don't lose any space using it. You lose the redundancy, but not any space. An example: two 256GB drives will give you the same amount of usable space as a single 512GB.

RAID 1 through 10, on the other hand, will cost you usable space, but you get redundancy in case of a failure.
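The capacity math per level, as a quick sketch (simplified: equal-size members assumed, and real arrays size everything off the smallest member):

```python
def usable_capacity(level, drives_gb):
    """Usable space for common RAID levels, equal-size drives assumed."""
    n, size = len(drives_gb), min(drives_gb)
    if level == 0:           # striping: all space is usable
        return sum(drives_gb)
    if level == 1:           # mirroring: one copy's worth
        return size
    if level == 5:           # one drive's worth of parity
        return (n - 1) * size
    if level == 10:          # striped mirrors: half the total
        return (n // 2) * size
    raise ValueError(f"unhandled level {level}")

print(usable_capacity(0, [256, 256]))   # 512 -- same as one 512GB drive
print(usable_capacity(1, [256, 256]))   # 256
print(usable_capacity(10, [256] * 4))   # 512
```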

Hmm, what I got from the NVMe article was that read speeds significantly increase and writes don't change at all.

Also not sure where this comes from. In the case of RAID 0, your speed will in fact increase, NVMe or not. I have NVMe RAID 0 and it is definitely faster than a single NVMe drive (see photo).
Were they possibly talking about RAID 1? With RAID 1 (mirroring) you will get basically double the read, but writing is unchanged; this is because it is able to read from both disks, but mirrors writes to both for redundancy (in case a drive fails).

I have been using RAID since the Promise controller days and have never had an issue with it. Luckily I have never had a RAID array fail. I had one corrupt one time, but that was because I was pushing the PCI bus out of spec overclocking while using a Promise controller. Software RAID I dislike, for the simple fact that you can't really move it. Going from one machine to another, you couldn't take the array with you without backing it up. I was able to swap chassis and keep my RAID 0 array long ago, since the drives were connected to a Promise card; that was nice.

If the processor in the machine is a high-end i5 or i7, I would probably RAID them. The hit from software RAID 0 on a higher-end box will be negligible. However, I don't know if the benefits of an SSHD will still be there with a RAID 0 array of those drives? Maybe somebody else can chime in on that?
 

Attachments

  • nvme.jpg (74 KB)
Short version:

RAID 0 multiplies the odds of losing the whole volume: one dead drive kills everything on it.
SSDs obliterate all the performance use cases of spinning-rust RAID 0.
It generally only helps sequential, aka big dumb transfers; poor benefit for random or small-file performance.
Sequential performance on recent hardware hits other bottlenecks with a single drive anyway; the network is usually the first. Even spinning rust can easily max 1GbE with big files, and single SSDs approach the limits of the system bus interfaces with sequential.
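A sketch of that "moves the bottleneck" point, with assumed nominal link ceilings (round illustrative numbers, not measurements):

```python
# Nominal usable ceilings in MB/s for paths the data ultimately
# crosses (illustrative figures, assumed here for the sketch).
LINKS_MB_S = {
    "1GbE network": 125,   # 1 Gb/s / 8 bits per byte
    "PCIe 3.0 x4": 3940,
}

def effective_seq(drive_mb_s, n_drives, link):
    """Sequential speed of an n-drive stripe, capped by the
    slowest path the data has to cross."""
    return min(drive_mb_s * n_drives, LINKS_MB_S[link])

# Two ~150 MB/s spinners served over gigabit Ethernet: the
# network caps the transfer, so striping gains nothing there.
print(effective_seq(150, 2, "1GbE network"))  # 125
# Two 550 MB/s SSDs behind a PCIe 3.0 x4 controller: the stripe
# speed is what you get, the link is not yet the limit.
print(effective_seq(550, 2, "PCIe 3.0 x4"))   # 1100
```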
 
I used to RAID 0 everything on every box that could do it. But nowadays with NVME drives, there isn't really a pressing need anymore. My current home box only has it because I had a bunch of SSD drives and didn't want to deal with a bunch of drive letters.
 
I should also note my previous G752VY and current G703 laptops both sported NVMe, the latter with RAID. I gave my G752 to my son. The difference I see between the 2 is virtually nothing performance-wise in everyday use. You would only really see a benefit over a slow spinner drive, unless you are doing something seriously HDD-intensive.

The few-years-old SSD in my desktop is only slightly slower than my laptop, even with the benchmark scores being fourfold. But again, since SSHDs use prediction to cache some files, I'm not sure if you would lose that with those drives in RAID 0 in your particular case. I would research that before going RAID with those drives.
 
Some misinformation in here. I'm not sure where the idea comes from that in RAID 0 you lose space? This is inaccurate. In other versions of RAID you will lose space to the mirroring/parity drives; since there are no mirroring/parity drives in RAID 0, you don't lose any space using it. You lose the redundancy, but not any space. An example: two 256GB drives will give you the same amount of usable space as a single 512GB.

RAID 1 through 10, on the other hand, will cost you usable space, but you get redundancy in case of a failure.

That was just careless editing on my part. My original post was regarding RAID 10 configurations, gearing it towards the people who were not keen on 0. My original point was that doing RAID 10 instead of 0 takes away space and will give negligible gains to the average user because of the increased ops for mirroring.
 
The FireCudas are SSHDs; frequently used files get cached to the on-board 8GB SSD automagically. Currently they are spanned, so when one fills up the other takes over. I was told that at best, if I RAID them, I'll lose the SSD part on one; I can not find anything on spanned volumes.

But these FireCudas are slow... they write at 80MB/s. I really wish I did not need to rely on Seagate's SMR shit discs.
 
I think RAID 0 was awesome in the spinner-drive days. 4 Raptor 10k RPM HDDs in RAID 0 was amazing for the time. But even an extremely low-end SSD would outperform that nowadays. I cannot think of a reason why you would want RAID 0 on a modern PC. This is especially true with NVMe SSDs. The performance gain you would obtain would not even be noticeable outside of synthetic benchmarks.

Even if you were doing heavy-load tasks, such as pushing a VM, it would be much better to have the VM on its own drive separate from your OS drive than putting it on an OS drive created out of two SSDs in RAID 0.

Basically, RAID 0 gives you a speed advantage that isn't needed and generally won't help you.
 
MrRuckus cleared up some of the chatter here that seemed inaccurate. Thanks for writing the post I was about to. heh.

All that said, I would not RAID 0 those hybrid disks (FireCuda). That flash buffer seems like it would be problematic, especially in a striped-data situation.
 
For people that say you increase your risk of losing data going RAID 0, I usually refer them to this XKCD comic:

increased_risk.png



Here is a secret... most people don't understand statistics or the numbers they are talking about. They just repeat shit to have an opinion instead of understanding.

If you feel you understand the benefits and risks of RAID 0, go ahead and do it. Proper planning is way better than following some random person's unsupported opinion.
 
Been using RAID 0 with SSDs since they were introduced, and I have one machine with 840 Pros still functional. I always make a backup image monthly in the event one of the drives goes bad, and I put all data folders on a separate drive which is also backed up to the cloud. I don't care what drive or protocol you use; backing up is essential, as any drive could succumb to disaster at any time. RAID 0 has always improved performance, from HDs to M.2 drives; to say otherwise is ill-informed. Where folks have had issues with RAID 0 only proves they don't know what they are doing to begin with. That said, I have never put two SSHDs in RAID 0 and would have to research that before proceeding. I doubt the cache would/could be RAIDed, and I suspect MrRuckus makes a valid point. GL
 
For people that say you increase your risk of losing data going RAID 0, I usually refer them to this XKCD comic:

increased_risk.png



Here is a secret... most people don't understand statistics or the numbers they are talking about. They just repeat shit to have an opinion instead of understanding.

If you feel you understand the benefits and risks of RAID 0, go ahead and do it. Proper planning is way better than following some random person's unsupported opinion.
The problem with this logic is that most people don't understand the published data on drives (or statistics). Sure, the odds of *most* drives failing in the first year or two of use are tiny, so doubling those odds isn't much risk, but give it some time (or a shitty drive like the Seagate ST4000DX000) and your odds of losing all your data go WAY up. Even with drives unlikely to fail, I don't think the benefits of RAID 0 outweigh the risks for anything other than quickly and easily replaced data. With NVMe drives I haven't seen the need to use a RAID 0 in a while, but when I did, it was little more than a "temp" drive (or my boot drive with the OS and programs, nothing else), and any files on it were backed up daily.
 
If speed is important, RAID 0 is a good idea; MMO/RPG games especially can benefit from much higher texture load rates.

A good PCIe-based RAID controller will do much better than an on-board or any software RAID solution.

A single SATA 3 connection can do at best 6 Gb/s (roughly 600 MB/s), if the drive can even go that fast.

A PCIe 3.0 x4 RAID controller can do 3.94 GB/s.

1 GB = 8 Gb.

So a decent RAID controller can scale way past just a few drives.
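That unit math in a few lines, with nominal line rates and the encoding overhead older links pay (round illustrative numbers):

```python
# Interface line rates are quoted in gigabits/s; file copies show
# up in megabytes/s. 8 bits = 1 byte, and some links lose part of
# the line rate to encoding overhead.
def line_rate_to_mb_s(gbit_s, encoding_overhead=0.0):
    """Convert a link line rate in Gb/s to usable MB/s."""
    return gbit_s * 1000 / 8 * (1 - encoding_overhead)

# SATA 3: 6 Gb/s line rate with 8b/10b encoding (20% overhead)
# leaves ~600 MB/s usable.
print(round(line_rate_to_mb_s(6, encoding_overhead=0.2)))       # 600
# PCIe 3.0 x4: ~32 Gb/s with 128b/130b encoding -> ~3.94 GB/s.
print(round(line_rate_to_mb_s(32, encoding_overhead=2 / 130)))  # 3938
```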

Yes you need to deal with either drive failing. I have not had it happen, guess I am just lucky.

My 8350 system in my sig has 2 RAID 0 arrays in it: two HDs and two SSDs. They both significantly outperform a single drive of the same make/model.

An M.2 NVMe drive also runs at PCIe x4 speed.

My last 3 or 4 builds have had RAID 0 in them. My Ryzen doesn't, due to NVMe and very limited PCIe lanes.

For a desktop, I would go with one RAID array and one non-RAID drive for important data. If your RAID does blow up, you're not going to recover anything easily or cheaply.

For a laptop, one with NVMe in particular, or even a good SSD, I would not do any RAID unless battery life is a total non-issue. RAID uses both drives at 100% when accessing data, which means a full power drain from both.


In a slightly different direction I would like to throw up the definition of redundant:

re·dun·dant
rəˈdəndənt/
adjective
  1. not or no longer needed or useful; superfluous.
    "this redundant brewery has been converted into a library"
    synonyms: unnecessary, not required, inessential, unessential, needless, unneeded, uncalled for
    • (of words or data) able to be omitted without loss of meaning or function.
      "our peculiar affection for redundant phrases"
    • ENGINEERING
      (of a component) not strictly necessary to functioning but included in case of failure in another component.
The definition of RAID:

RAID (Redundant Array of Independent Disks, originally Redundant Array of Inexpensive Disks) is a data storage virtualization technology that combines multiple physical disk drive components into one or more logical units for the purposes of data redundancy, performance improvement, or both.

The redundancy in Raid 0 is in using multiple drives and not in having multiple copies of data.

My thoughts on the whole "RAID 0 isn't really RAID" conversation, anyway.
 
......RAID 0 is effectively hated because it takes the [R]edundancy out of the equation.

RAID was originally created so many hard drives could act as one. Once created, making two hard drives act as one brought about striping, effectively odd/even sharing of data.

RAID 0 is odd/even sharing of data: in a data stream, every other chunk of data goes to the other HD. So in order to read a data stream, a request makes both HDs complete the request.

RAID 0 therefore is up to twice as fast as one HD delivering the data stream, because it takes both HDs working in parallel to reproduce the data.
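A toy sketch of that odd/even chunk layout (the 64 KiB stripe unit is just an assumed common default, not anything from this thread):

```python
STRIPE_UNIT = 64 * 1024  # bytes per chunk; a common default, assumed here

def locate(logical_byte, n_drives):
    """Map a logical byte offset in a RAID 0 volume to (drive, offset)."""
    chunk = logical_byte // STRIPE_UNIT
    drive = chunk % n_drives            # chunks alternate across drives
    stripe_row = chunk // n_drives      # which stripe row on that drive
    offset = stripe_row * STRIPE_UNIT + logical_byte % STRIPE_UNIT
    return drive, offset

# With 2 drives, consecutive chunks alternate 0, 1, 0, 1, ...
print(locate(0, 2))             # (0, 0)
print(locate(64 * 1024, 2))     # (1, 0)
print(locate(128 * 1024, 2))    # (0, 65536)
```

A big sequential read touches both drives at once, which is where the speedup comes from; a tiny read lands entirely on one drive, which is why small random I/O barely benefits.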

Haters understand that if you are using RAID 0 as your ONLY storage array, you are chancing the loss of all data if even one drive fails. It makes RAID 0 seem like a bad decision.

It's not.

Performance increases are gained by using RAID 0, but you have to include the potential for failure in the calculation. Thus RAID 1.

RAID 0 = two (or more) drives acting as one storage device, sharing the data between them. Since the data is shared, read speed is multiplied, since two drives are acting as one.

RAID 1 = two or more drives acting as a support group. If one drive fails, the other is an active backup. Places the [R] back into RAID.

RAID 1 the Intel way = [R]edundancy of RAID 1, but reads of RAID 0.

RAID 0 is only (mostly) used when someone wants performance increases and knows that backups are necessary if the data on a RAID 0 array is valuable.

The longevity of SSDs, the newer ones especially, makes it a tough argument why a minimum of two drives should be the "only" storage on a PC. SSDs have such a long life expectancy that using a RAID 0 array as your only storage, just for the performance increase, only invites debate. I would also have backup storage; others risk it.

Haters don't understand why someone would use RAID 0 since it doesn't provide redundancy.

Haters dont drive[H]. "waht means this overclock thing you spek of""???

RAID 0 is intended for the overclocking crowd, but we know how to back up.
 
Also, I did not think it needed to be outright said, but this is on my laptop. A real RAID controller is not an option, and my options are Intel mode in the BIOS or Windows 10 soft RAID. What I have available right now: twin 2.5" 2TB FireCudas and 1 M.2 SATA 512GB WD Blue SSD gen 1.

1 empty m.2 supporting 80mm
1 empty m.2 shorter for LTE card 2 antenna and sim slot present.
1 wd 512gb ssd m.2
2 sata 3.0 ports with firecudas on them

If I put an NVMe drive in, the second M.2 is disabled, as the SATA 1 and 2 port resources are taken.
 
Per every single Backblaze stats release Seagates die much more frequently than other drives to begin with. I won't even buy their products, much less put them in RAID 0.
 
Per every single Backblaze stats release Seagates die much more frequently than other drives to begin with. I won't even buy their products, much less put them in RAID 0.

If I remember correctly, it's only certain capacities where Seagate drives should be avoided. Their other drives are as reliable as expected. Gotta remember that Backblaze is sometimes using consumer drives in an enterprise situation; heat and constant access are vastly different. I am not in the market for their 4TB drives (IMHO). The others aren't as flaky.
 
To understand the current issues/thought process with RAID 0 and SSDs, you have to go back to when the general public only had mechanical storage. The mindset that commonly carried over when PC builders began to use SSDs was that the same benefits would be there if they were to RAID 0 2 x SSDs together. To perpetuate this even further, benchmark programs would report those benefits back to the system builder.

And this is where we stand today. Granted, any longtime PC builder will know those benefits actually do not exist, as this thread will inform you.

However, there is still one benefit I've personally found in going RAID 0 with SSDs, and that's flexibility. Though the need for that flexibility is slowly eroding as SSD prices come down. What I mean is that in the past, and in some cases today, SSDs were expensive. 3 or 4 years ago, 250GB SSDs on sale were still around $250 - $275. A 500GB SSD would cost you well over $400 and most likely $500. What I did, and many others were forced to do, was pair up a less expensive size/brand-matching SSD to give yourself that 500GB of SSD. Of course this all depends on what you need; even today I find I don't need 500GB. However, in the past there were several times over the years when I would get an 80GB SSD, or a 120GB, and pair it with a second SSD for RAID. To be honest, early on I thought the benefits were there, as I've described. But apparently not so. I personally always liked the flexibility of having two drives. It allows me to use one in a new build, or, if I went to a 250GB as prices came down, I could use the 2 I had in new builds that I would later resell. There were times when I would hand one down to a friend or girlfriend.

So while the performance benefits are not there, it's not entirely a waste of your time depending on your budget, needs, goals, and future plans.

I know a few guys that could not afford a 1TB SSD, so instead they RAIDed two 500GB SSDs together.
 
I use 4 1TB enterprise drives in a RAID 0 for my games; since you can just redownload a game and it's all backed up to Steam, it's pretty OK by me if the array goes south. That being said, I do it because in that config the 4 drives are as fast as my OS SSD. Your drives are hybrid; if it's not going to mess with the predictive caching, I would say do it. I mean, no reason not to if it's games and you know the risks. Also, Seagate drives scare me. My personal experience with them has been very bad; they always seem to die when used as main OS drives, and fail a lot, to the point where I pretty much don't buy their drives anymore, period, though others, including enterprise, do.
 
RAID 0 with spinning rust...why the hell not as long as you know the slight risk.

RAID 0 with SSD/NVME...you won't notice the difference. Not really worth it.
 
Sure, I used RAID 0 back in the 36GB WD Raptor days; it provided noticeable load-time differences for the OS and games.

I also used Raid 0 when SSDs were first a thing, still saw noticeable differences.

Stopped Raid 0 with current SSDs and NVME drives since my current applications no longer benefit from it.
 
The problem with this logic is that most people don't understand the published data on drives (or statistics). Sure, the odds of *most* drives failing in the first year or two of use are tiny, so doubling those odds isn't much risk, but give it some time (or a shitty drive like the Seagate ST4000DX000) and your odds of losing all your data go WAY up. Even with drives unlikely to fail, I don't think the benefits of RAID 0 outweigh the risks for anything other than quickly and easily replaced data. With NVMe drives I haven't seen the need to use a RAID 0 in a while, but when I did, it was little more than a "temp" drive (or my boot drive with the OS and programs, nothing else), and any files on it were backed up daily.

Doubling is doubling no matter the previous factor; yes, I agree on that...
but the issue is still whether the person is OK with that. If they just need it for the speed and have redundancy for their important data by another method, RAID 0 is fine.

I have been running RAID 0 on drives since before the famous IBM GXP Deathstar (4 of those in RAID 0 with a 256MB buffer, no battery) and never had a disk in my RAID die on me. It's a small sample, but still: if what you lose is just a bit more of the programs/games you can reinstall, then it's not a significant issue to dismiss RAID 0 over.
My data drive on my main desktop is still a couple of VelociRaptor 10k drives in RAID 0.

I believe my point still stands: there is no need for instant hate on RAID 0 without looking into the user's requirements.


And if we have to be all mathy: you don't double the chance of losing data; that arguably remains the same. You just lose twice as much when it happens. The failure rate of the drives remains (roughly) the same.
 
And if we have to be all mathy: you don't double the chance of losing data; that arguably remains the same. You just lose twice as much when it happens. The failure rate of the drives remains (roughly) the same.

Actually, it's not going to remain the same. It won't be double either; it will be slightly less than double, but the *odds* of it happening do go up, because you increase your exposure by using two drives. I saw a study a while back, maybe it was on Backblaze, I don't remember where, that compared RAID 0 to RAID 5 for data-loss odds per year. RAID 0 had a 2.6 and RAID 5 had a 1.6, I think.
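The "slightly less than double" arithmetic, assuming independent failures and a made-up 5% per-drive annual failure rate:

```python
def array_fail_prob(p_drive, n):
    """RAID 0 dies if ANY member dies: 1 - (1 - p)^n,
    assuming independent drive failures."""
    return 1 - (1 - p_drive) ** n

p = 0.05  # hypothetical 5% annual failure rate per drive
single = p
stripe = array_fail_prob(p, 2)
print(round(stripe, 4))           # 0.0975 -- a bit less than double 0.05
print(round(stripe / single, 2))  # 1.95, not 2.0
```

The overlap term (both drives dying the same year) is what keeps it a hair under double; the worse the per-drive odds, the bigger that gap gets.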
 