Why is RAID 0 so hated?

Actually, it's not going to remain the same. It won't be double either; it will be slightly less than double, but the *odds* of it happening do go up, because you increase your exposure by using two drives. I saw a study a while back, maybe it was on Backblaze, I don't remember where, that compared RAID 0 to RAID 5 for data-loss odds per year. RAID 0 had a 2.6 and RAID 5 had a 1.6, I think.
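The "slightly less than double" point can be sketched numerically. A minimal sketch, assuming a hypothetical 5% annual failure rate per drive (an illustrative placeholder, not a figure from the study mentioned above):

```python
p = 0.05  # hypothetical annual failure probability per drive (illustrative only)

# Chance that at least one of two independent drives fails in a year.
# Any single failure kills a RAID 0 array.
p_any = 1 - (1 - p) ** 2  # 0.0975 -- slightly less than the naive 2 * p = 0.1
```

The gap between `1 - (1 - p)**2` and `2p` is just `p**2`, the (small) chance that both drives fail, which the naive doubling counts twice.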


You are comparing having 1 drive vs. 2 drives in RAID 0, yes.
But if you are comparing having 2 drives vs. 2 drives in RAID 0, making the only difference the RAID itself, then no: you still have "2 chances to fail," RAID or not.


Scenario A: 2x 4 TB drives, no RAID
Drive 1 fails: you lose 4 TB of data
Drive 2 fails: you lose 4 TB of data
Both fail: you lose 8 TB of data
Neither fails: you lose 0 TB of data

Scenario B: 2x 4 TB in RAID 0
Drive 1 fails: you lose 8 TB of data
Drive 2 fails: you lose 8 TB of data
Both fail: you lose 8 TB of data
Neither fails: you lose 0 TB of data


You can play around with the probability of the drives failing, but it remains the same between A and B (when not looking at drive usage patterns).


But trying to bring hate on RAID 0 by bringing in faults that have nothing to do with the RAID itself (i.e., having 2 drives) is just biasing your results with another factor. (That factor might be important to include in a risk assessment, depending on the situation, but it's not due to RAID.)
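The loss tables above can be put in expected-loss terms. A minimal sketch, assuming a hypothetical independent 5% annual failure probability per drive (illustrative only):

```python
p = 0.05  # hypothetical annual failure probability per drive (illustrative only)

# Probability that at least one failure event occurs -- identical in A and B,
# which is the point above: "2 chances to fail," RAID or not.
p_any = 1 - (1 - p) ** 2  # 0.0975 in both scenarios

# Expected data lost, taken from the tables:
# Scenario A (no RAID): each failed drive costs its own 4 TB.
exp_a = p * 4 + p * 4     # 0.4 TB

# Scenario B (RAID 0): any failure costs the full 8 TB.
exp_b = p_any * 8         # 0.78 TB
```

So the *chance* of a loss event is the same either way; what RAID 0 changes is how much data each event takes with it.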
 
You are comparing having 1 drive vs. 2 drives in RAID 0, yes. But if you are comparing 2 drives vs. 2 drives in RAID 0, making the only difference the RAID itself, then no: you still have "2 chances to fail," RAID or not. ... But trying to bring hate on RAID 0 by bringing in faults that have nothing to do with the RAID itself (i.e., having 2 drives) is just biasing your results with another factor.

I am not adding any hate. I only asked questions earlier about ensuring the data protections are real. And in reply to your comment, I merely pointed out that, yes, it can increase risk if you increase the number of drives. Though one could argue a mirror would reduce that risk: same odds of a failure, but you have a mitigation against data loss.
 
Back before SSDs were anywhere near the price per GB they are today (I mean, back when I had my 512MB 8800 GT, so ~8 years ago) I bought 4x 1TB WD Blacks (for ~$100 each) and threw them into a 4-disk RAID-0 array. It was just one big 4TB partition too for OS, storage, steam, everything.

Lasted for years, until I went with a pair of Intel 160 GB SSDs in RAID 0. One failed about 4 years ago and was replaced by Intel. I still run those, and that's from when I built the 2500K system brand new! None of those WD Blacks ever did fail, though... they're still kicking around in various systems as Steam cold storage, media, etc.

It wasn't the smartest idea, and I was lucky. There's no NEED to do it in today's market of relatively high capacity SSDs with low cost per GB.

Oh well, just my anecdotal rambling.
 
You are comparing having 1 drive vs. 2 drives in RAID 0, yes. But if you are comparing 2 drives vs. 2 drives in RAID 0, making the only difference the RAID itself, then no: you still have "2 chances to fail," RAID or not. ... But trying to bring hate on RAID 0 by bringing in faults that have nothing to do with the RAID itself (i.e., having 2 drives) is just biasing your results with another factor.
Or you can think of your array as a single 8 TB drive...


For the people who are saying "Seagate failgate": there are no 2 TB drives available from more reputable companies, unless I could fit something thicker than 7 mm, or had the $$ for an SSD that big. So for the low price of $89 per drive, I bought 2 FireCuda 2 TB drives.
 
I am not adding any hate. ... Though one could argue a mirror would reduce that risk. Same odds of failure, but you have a mitigation against data loss.

We totally agree that the risk of data loss increases with the number of drives. That was simply my point: a lot of the hate on RAID 0 is really coming from the increase in drives rather than the RAID itself.
And I totally agree that a mirror would decrease the chance of data loss.


Scenario C: 2x 4 TB in RAID 1
Drive 1 fails: you lose 0 TB of data
Drive 2 fails: you lose 0 TB of data
Both fail: you lose 4 TB of data
Neither fails: you lose 0 TB of data

It's clear that you have fewer situations where you lose data with a mirror, but you also "pay" for it in the fact that you lose half the space, so you would need to double the drives once more to store the same amount of data.



My point is that there is nothing wrong with RAID 0 if it fits your purpose. You just need to take into account what you are using the drives for.
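Scenario C can be put in the same expected-loss terms as A and B. A sketch, again with a hypothetical 5% annual per-drive failure probability (illustrative only):

```python
p = 0.05  # hypothetical annual failure probability per drive (illustrative only)

# Scenario B recap: 2x 4 TB in RAID 0 -- any single failure loses all 8 TB.
exp_raid0 = (1 - (1 - p) ** 2) * 8  # 0.78 TB expected loss

# Scenario C: 2x 4 TB in RAID 1 -- data is lost only if BOTH drives fail,
# and even then only the 4 TB of usable (mirrored) capacity.
exp_raid1 = p ** 2 * 4              # 0.01 TB expected loss
```

The trade-off is exactly as stated above: the mirror's expected loss is tiny, but usable space is halved (4 TB vs. 8 TB), so matching the RAID 0 capacity would mean doubling the drives again.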
 
The hate for raid 0 is largely emotional and people feel what they feel.

Hate is after all an emotion.

To the OP: you have a solid plan and know the pluses and minuses. If the balance is right for you, please go for it.
 
For the people who are saying "Seagate failgate": there are no 2 TB drives available from more reputable companies... so for the low price of $89 per drive, I bought 2 FireCuda 2 TB drives.

Today, now, I'd have to agree with this. But I did have a better experience, in my limited experience, with WD Blacks back when I had platters. Now I'm SSD-only, but WD seemed to be better for me. And I began my hard-drive experience with Conners. :) But I also had IBM SCSIs (which worked excellently), Seagates, WDs, and Toshibas. Man, aging myself; I remember MFM/RLL add-in cards. I'm old. :cry:
 
RAID 0 with no BBU and no UPS IS RUSSIAN ROULETTE. I'd fire any admin who dared to set this up. The next sudden power-off, outage, or maybe even BSOD will likely teach you what a BBU is for.

Guys, Intel PCH RAID is a comic RAID; get an Adaptec or LSI card or such, or better yet leave it, unless you can accept data loss and a steep learning curve. lol

There is no substitute for a BBU, and none for a UPS.
 
RAID 0 with no BBU and no UPS IS RUSSIAN ROULETTE. ... There is no substitute for a BBU, and none for a UPS.
It's a LAPTOP; I can't just add a proper card... it's comic RAID or Windows software RAID. Incidentally, if I enable Intel RAID, I am required to reinstall Winblows. I think for now I am going to leave it as a spanned volume, as that should let me get the performance Seagate promises from the SSHD and still have 4 TB overall.

Eventually I'll pick up an NVMe drive, and maybe a single 15 mm or twin 2 TB SSDs if they ever drop below $100.
 
RAID 0 with no BBU and no UPS IS RUSSIAN ROULETTE. ... There is no substitute for a BBU, and none for a UPS.
This isn't an "admin" type of situation; it's home/personal use, and he knows the "risks"...
 
Most any data rescue program will likely run into trouble and refuse to find the logical device as well; usually a thing you learn when it's too late.

Boot a USB multi-tool stick and check how many of your tools will work: likely none, maybe 1 or 2 if you are lucky.

This is not the case with common proper controllers, oh well...

In addition, access times are lowest with... 1 single drive. Any RAID is slower by definition. For small files and desktop use, RAID 0 is nonsense; better to span it or such. The risk is fairly high vs. little to gain, imho + experience.
 
If you are after speed mostly, perhaps ditch those slow drives altogether and simply look for a good deal on a mid-range 2 TB (or 1.6 TB / 1.9 TB) SSD and be done with it?
 
If you are after speed mostly, perhaps ditch those slow drives altogether and simply look for a good deal on a mid-range 2 TB (or 1.6 TB / 1.9 TB) SSD and be done with it?
I am after speed, but after testing a number of laptop drives, I have found that of all my non-SSD laptop drives, the FireCudas are as fast as any others...

So you know what I had to test:
7200 RPM HGST 750 GB drive
2 TB Samsung Spinpoint drive
1 TB Seagate SSHD
2 TB FireCuda

The fastest speed I could coax out of any of them was 130 MB/s read, 90 MB/s write; that was on the FireCuda.
My 525 GB MX300 SSD and 512 GB WD Blue SSD do 500+ MB/s read and write.

I want the 4 TB of space more than I want speed; the idea of having all of my Steam library installed appealed to me.
 
Ah, garbage collection on SSD-type drives may not function properly in RAID... it depends. A quarterly full defrag obviates the problem (note: I am saying every 3 months, 4 times a year, not daily, which can cause significant wear on the drives).
 
In the real world, RAID 0 doesn't really make sense anymore, much less with SSDs.

Will you see a performance increase in benchmarks? Certainly. Anywhere near 100%? Not in your wildest dreams. 50%? Maybe in some cases.

Will it turn into real-world gains? Probably not.

SSDs are already pretty fast compared to HDDs, and some games don't benefit that much even from going from HDD to SSD, so the difference with RAID 0 will probably be negligible.

Sure, you'll shave a couple of seconds, but is it really worth risking losing your data?

I used RAID 0 with a couple of Raptors back in the day. The drives were already pretty fast by themselves, and RAID 0 did feel somewhat faster. But after having to reinstall the OS a few times after BSODs and power loss, I gave up.
 
RAIDs have to stay in sync to be okay, and that's where I think Seagate is potentially seeing the problem. If one drive is hitting the SSD part of its SSHD and the other isn't, there's a huge disparity in speed, which can look like a dropped drive to some RAID controllers. The other problem is that when a file is written to the SSHD on a 'normal' partition, the SSD part of the drive must somehow recognize a frequently used file and move it to the SSD (unless it does this by sectors, which would make the point irrelevant). But if a drive is being used in RAID 0, the drive itself doesn't hold a partition or files, just sectors of data that the controller reassembles into files. Hence, the SSD part of the drive may become confused, or even useless.

Depending on the data you have stored on the Seagates, I don't see much more than a marginal read-speed benefit from RAID 0 for large files, and for smaller ones it would be even less. I'd just keep it how you have it as a spanned volume, although I'm old-school in that I would just have two different drive letters.
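The "just sectors of data" point is the crux: a RAID 0 controller interleaves the logical address space across the drives, so no single drive ever sees a whole file. A minimal sketch of the usual striping math (the 128-sector stripe unit and the function name are illustrative, not any specific controller's settings):

```python
STRIPE_UNIT = 128  # sectors per stripe unit (hypothetical controller setting)
N_DRIVES = 2       # two-drive RAID 0

def locate(lba):
    """Map a logical sector number to (drive index, sector on that drive)."""
    stripe_index, offset = divmod(lba, STRIPE_UNIT)
    drive = stripe_index % N_DRIVES                               # round-robin
    sector_on_drive = (stripe_index // N_DRIVES) * STRIPE_UNIT + offset
    return drive, sector_on_drive

# Logical sector 0 lands on drive 0, sector 128 on drive 1, sector 256
# back on drive 0 -- each drive holds only every other 128-sector chunk.
```

Under this kind of mapping, the SSHD firmware sees only its own interleaved chunks, with no filesystem context, which is why its file-level caching heuristics may have nothing to work with.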
 
RAIDs have to stay in sync to be okay. ... I'd just keep it how you have it as a spanned volume, although I'm old-school in that I would just have two different drive letters.
I was hoping that when my NVMe drive gets here I could just add it alongside the SATA M.2, but I was RTFMing and it states that adding an NVMe drive disables M.2 port #2.

And yeah, Seagate uses the 8 GB of SSD on the spinners to cache. I am hoping keeping it as a spanned volume won't mess with that, but I could find nothing either way... The only thing I could find was a Seagate tech strongly advising against RAID, saying they'd want Barracudas if they wanted RAID drives.

I am fine with it spanned, tbh. I just wanted it to be 1 big drive instead of 2. I had thought about getting the 5 TB Barracuda instead, but decided I wanted the SSHDs paired.
 