Why is RAID 1 not popular any more?

I'm looking to build a new work computer with NVMe SSDs in RAID 1.

While researching parts and builds, I noticed that people don't seem to use RAID 1 much anymore. Does anyone know why?
 
SSDs generally have a much higher reliability rate compared to spinners, so the risk of going RAID 0 is small. This is why RAID 1 is unpopular: it costs a lot for reliability that may never be realized, since SSDs are so much more reliable.
 
[Attached image: CrystalDiskMark benchmark screenshot]
 
Even for a professional work environment with very important data? RAID 1 is of no use?
 
I'm looking to build a new work computer with NVMe SSDs in RAID 1.

While researching parts and builds, I noticed that people don't seem to use RAID 1 much anymore. Does anyone know why?

Your question presumes that RAID 1 was ever a popular solution. RAID 1 has NEVER been all that popular. At least, not that I've seen. I've worked in many large IT environments in a variety of industries. I've built plenty of DIY workstations. But I've deployed thousands of them at the enterprise level using OEM options from HP and Dell. RAID 1 was never used widely for workstations in any deployments I've done, regardless of size. Whether we are talking about 6 machines, 600 or 6,000 of them, it just isn't done. It comes down to one factor. Ultimately, it's cost. Buying multiple drives (especially SSDs) is more expensive. Also, many of the cheaper motherboards do not support NVMe RAID at all. RAID 1 doubles the cost of your storage while effectively only allowing you to use half of it. That's a hard pill for individuals and enterprises to swallow. Meanwhile, RAID 5/6 lets you use a larger fraction of the raw capacity (two thirds of it in a three-drive RAID 5, for example) while retaining redundancy.

The only place RAID 1 was ever utilized heavily once other options became available was on DIY enthusiasts' machines. Even then, RAID 0 was often the more common solution, as the primary emphasis is on speed and not uptime. The two do not have the same purpose, but RAID 0 was something I saw far more commonly, which tells you what frame of mind and what priorities users have at the desktop level. Keep in mind RAID 1 is not a backup solution. RAID 1 only allows for redundancy of the volume. All that does is allow uptime to be maintained when a single drive, half of a two-drive mirror, fails.
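
To put rough numbers on that capacity trade-off, here is a quick sketch (plain Python; the drive counts and the 2TB size are illustrative, not a recommendation):

```python
# Rough usable-capacity math for the RAID levels discussed above.
# Drive counts and the 2TB size are illustrative only.
def usable_tb(level: str, drives: int, size_tb: float) -> float:
    if level == "RAID0":
        return drives * size_tb          # striping: all capacity, no redundancy
    if level == "RAID1":
        return size_tb                   # mirroring: pay for N drives, use one drive's worth
    if level == "RAID5":
        return (drives - 1) * size_tb    # one drive's worth of capacity goes to parity
    if level == "RAID6":
        return (drives - 2) * size_tb    # two drives' worth of capacity go to parity
    if level == "RAID10":
        return (drives // 2) * size_tb   # mirrored pairs: half the raw capacity
    raise ValueError(level)

for level, n in [("RAID1", 2), ("RAID5", 3), ("RAID6", 4), ("RAID10", 4)]:
    print(f"{level} with {n} x 2TB drives -> {usable_tb(level, n, 2.0)} TB usable")
```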
 
So RAID is never used for redundancy?

What do people do instead, use an external hard drive? I was hoping there would be a way to automate the backing up of the data.
 
The message you quoted is saying the exact opposite; RAID 1 is all about redundancy: "Keep in mind RAID 1 is not a backup solution. RAID 1 only allows for redundancy of the volume."

Yes, people tend to have an external hard drive; NAS-type solutions have become quite popular over time (https://www.pcmag.com/picks/the-best-nas-network-attached-storage-devices) for doing backups, and they can be automated. Using RAID as a backup is quite a dangerous way to do it, and one where you can lose everything. That's when people's work isn't already saved in some form of cloud to start with (a git server, home-made or otherwise).
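
For anyone wondering what "automated" can look like without a NAS appliance, here is a minimal sketch; the source folder and destination path are hypothetical, and in practice you would schedule it with cron or Task Scheduler, or just use the NAS vendor's own sync software:

```python
# Minimal automated-backup sketch: mirror a work folder to a NAS share or
# external drive as a dated snapshot. Both paths are hypothetical; schedule
# the script with cron or Task Scheduler instead of running it by hand.
import shutil
from datetime import date
from pathlib import Path

SOURCE = Path.home() / "work"            # hypothetical folder with the important data
DEST = Path("/mnt/nas/backups")          # hypothetical NAS mount or USB drive

def backup() -> Path:
    target = DEST / f"work-{date.today():%Y%m%d}"
    # Dated copies mean a file deleted or corrupted today can still be pulled
    # from yesterday's snapshot, which a RAID 1 mirror cannot give you.
    shutil.copytree(SOURCE, target, dirs_exist_ok=True)
    return target

if __name__ == "__main__":
    print("Backed up to", backup())
```

The point is that the copy lives on a separate device and keeps history, which is exactly what a mirror does not give you.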
 
So RAID is never used for redundancy?

What do people do instead, use an external hard drive? I was hoping there would be a way to automate the backing up of the data.

I didn't say that. I said, RAID 1 isn't generally used for redundancy on the desktop. Especially not with SSD's. And again, RAID is NOT a backup solution. Period. Thinking of it as such will bite you in the ass.
 
I back up the stuff I actually care about onto MDiscs, regular optical discs or USB thumb drives, and/or use cloud-backed storage. Family photos? MDisc. Résumés? Mostly cloud, but I'll include them on a disc too. Tax returns? USB or regular optical. Not putting those in the cloud, and I only need to keep them for 3 years.
I run a 2 disk RAID1 at home to store stuff that I could lose, like my pr0n collection of course.
I've worked for companies that liked to use RAID-1 for servers. I'm in the financial trading business. A lot of trading systems really don't do much with their disks. Basically just boot up, load software, and write out log files. Disks are one of the most likely hardware components to fail, so a RAID-1 is nice just because it makes it less likely a machine will go down due to a disk failure.
 

Home systems are a different deal. Again, people don't usually do RAID 1 with SSD's. With spinning disks, sure. I've worked for banks, a financial trading firm, defense contractors and the like. I've never seen RAID 1 employed on the desktop. Anything of value is stored on the servers. The servers have always used RAID 5 or RAID 6. I can't recall ever seeing RAID 1 in use on a server in any decent sized institution. Disk storage is a premium resource anywhere I've ever worked. They do not pay double the price for half the capacity. That's what you do with RAID 1.
 
Wouldn't it just be better to back up your data to cloud storage or an external hard drive? You have limited M.2 slots compared to SATA, so it seems like a waste to use them for redundancy.
 
So what usage scenario would make r1 enticing?
Machines that don't really do much of anything with their disks but need to stay up are the main scenario I've seen. Basically, the main use for RAID-1 is when you don't need much space but want to make the machine more reliable.

Most of the boxes with RAID-1 I've seen at work were trading systems, or rather part of a trading system and had a total of two drives. They only need one to work but run RAID-1 so a single drive failure won't take the machine down. These kinds of systems going down can cost some serious money, so one extra drive is cheap insurance.

Now if you really want to talk about wasting space, I had a machine with a couple dozen 10k SAS drives in RAID-10 in the test rack I sort of managed a couple of jobs ago, back in 2013-14. It also had a RAID-1 boot volume. Its job was to saturate two 10-gig Ethernet ports with data replayed off of disk. That takes a read speed of roughly 2GB/s. Not sure why they used RAID-10. From my perspective it was more or less "here's a new replay server for the test rack." How times change. A single PCIe M.2 SSD would provide that speed, though I'd probably need a U.2 drive to get the required capacity.

I also run RAID-1 on the file server/general purpose Linux box in my basement. It has some crap on it I like enough to spend $110 on another 4TB drive for redundancy against a single disk failure but not enough to bother doing remotely regular backups of a couple TB of crap I don't actually need. It's mostly video files. I could go secure erase those drives right now and nothing bad would happen. I'd run RAID-6 if I had more crap, but I don't so it's a waste of $.
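
For reference, the "roughly 2GB/s" figure is just line-rate arithmetic:

```python
# Line-rate arithmetic for saturating two 10 Gigabit Ethernet ports.
ports = 2
line_rate_gbit = 10                 # gigabits per second, per port
raw_gb_per_s = ports * line_rate_gbit / 8
print(raw_gb_per_s)                 # 2.5 GB/s at line rate, in the same ballpark as the "roughly 2GB/s" above
```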
 
So what usage scenario would make r1 enticing?
Where you will see RAID 1 is for server boot drives. It has been common for well over 10 years to boot a server with two hard drives in RAID 1. That is the most common scenario. But as others have mentioned, with SSDs becoming more and more reliable, you now even see some machines being booted off a single SATADOM or a single NVMe drive.
 
I could blow that away with a ramdisk, haha, but good numbers - is that a couple of 4.0 drives in RAID?

You need four Gen4 drives to hit that range. It could be even faster with the newly released Gen4 drives.
 
So what usage scenario would make r1 enticing?

For commercial applications, there generally isn't one. It's just not desirable due to cost reasons. As OFaceSIG said, it's useful for boot drives. However, I can't remember seeing that a whole lot myself. I've generally seen RAID 5 / RAID 6 used for boot drives. Still, what he describes is probably the most common scenario for it. In the consumer world, again, it's not really desirable. For SSDs the cost per gigabyte is rather high, and for any RAID 1 array it's double what you would normally see with a single drive, since you only get half the raw drive space. The risk of losing a boot drive for most people doesn't justify the cost of a RAID 1 array for an OS drive using SSDs. I've seen plenty of RAID 1 arrays for storage volumes on consumer machines, though.

For consumers, it's enticing for non-SSD, mechanical storage. Implementing a RAID 5 or RAID 6 configuration costs more, as you need more drives to create such an array compared to RAID 1. Dedicated controllers are often needed for the parity calculations as an alternative to motherboard-based controllers, which increases costs yet again. You can do software RAID implementations from the OS, but that doesn't erase the requirement for additional drives.
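
For anyone curious what those parity calculations actually involve, here is a toy sketch of the XOR parity idea behind RAID 5 (byte strings standing in for disk blocks; a real controller or software RAID layer does this per stripe):

```python
# Toy illustration of the XOR parity behind RAID 5: the parity block is the
# XOR of the data blocks, so any single missing block can be rebuilt from the
# surviving blocks plus parity.
def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

disk1 = b"hello w"
disk2 = b"orld!!!"
parity = xor_blocks(disk1, disk2)    # what gets written to the parity disk

# Pretend disk1 died: rebuild its contents from disk2 and the parity block.
rebuilt = xor_blocks(parity, disk2)
assert rebuilt == disk1
print(rebuilt)                       # b'hello w'
```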
 
I've generally seen RAID 5 / RAID 6 used for boot drives.

That's actually kind of interesting. The main reason I can assume someone would configure it like that is if you had data on the boot drive. RAID 1 can suffer the same number of drive failures as RAID 5, it doesn't have a write amplification penalty, and it requires fewer disks, so for a straight boot drive it's probably the more attractive option.

I'd say RAID 1 as a boot drive has been common as long as I can remember (20+ years), because generally speaking, if the server is going to have data on it, that data will be on a separate volume. Any time you have a system with DAS (direct attached storage), all of the data disks are on the enclosure, so the only disks in the server are there so it can boot. A RAID 1 is the cheapest method for redundancy, so you don't have to rebuild the server in the event of a disk failure. I would say that at the beginning of virtualization it was common to see a pair of drives in RAID 1 to boot the VM environment, but nowadays it's more likely you just have an SD card, or possibly a mirrored pair if you were worried about one of the cards going bad.
 
RAID 1 is fine for home use and small office use. Remember, not everyone works in a massive media agency. Not everyone needs to have masses of data stored or thrown around. Most of the companies I work for have maybe 500GB of data, tops. I've rolled out many 1/2TB two-disk RAID 1 NAS boxes and all have worked perfectly for what they were required to do. These are offices with 2-10 individuals, just needing file dumps/file sharing and PC backups. Most decent two-disk NAS units also have cloud backup functionality, which goes nicely with a OneDrive 1TB account. Cheap and low hassle. No issues in the 8 or so years I've been pushing them out. In fact, I've swapped out many big servers and killed the £200-a-month support contracts with these. Stupid what some IT firms push to customers. "Ahhh, you have a 12-core Dell Xeon server with 64GB of RAM for 5 staff with, let's see... oh... 90GB of data on it. Nice!"

RAID 1 QNAP/Synology or an external USB drive of unknown heritage... your choice.
 
I see the biggest problem with RAID 1 usage at home being that people use it instead of a backup. With RAID 1 you still need to take backups from time to time. Also, at least one of the backup copies should either be off-site or, at a minimum, not powered on, or if powered on, kept in a completely different system.
 
I mean, I use RAID 10 on my server... so it's not RAID 1 per se, but it does use RAID 1 (mirroring). I have 6 disks, I lose half of my space, but the reads are ridiculously fast, ~6x a single drive (same as RAID 5 or 6), and writes are ~3x a single drive, which is much better than RAID 5/6. As mentioned though, on a desktop it's just not common or useful most of the time. The only use is if you're really worried about a long-running job crashing if the SSD should die. Backups should be a separate drive/device, so it's really just redundancy to keep the machine running/accessible in case of a hardware failure. On my home server I can replace a failed drive and rebuild with no downtime. This isn't typical on a desktop, where you need to shut down to replace the drive anyway.
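
Those ~6x and ~3x figures are just idealized mirror arithmetic; here is a rough sketch, assuming small random writes for the RAID 5 comparison, where the classic four-I/O read-modify-write penalty applies:

```python
# Idealized scaling for the six-disk RAID 10 example above; real controllers
# and workloads vary. The RAID 5 figure assumes small random writes, where each
# logical write costs ~4 disk I/Os (read data, read parity, write data, write parity).
disks = 6

raid10_reads = disks          # reads can be spread across all six disks: ~6x one drive
raid10_writes = disks // 2    # every write hits both halves of a mirror pair: ~3x
raid5_writes = disks / 4      # small-write penalty: ~1.5x one drive

print(raid10_reads, raid10_writes, raid5_writes)
```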
 
I'm not sure where people are getting the impression that RAID 1 is not popular. On the contrary, RAID 5 is basically completely deprecated, especially for large arrays, because the entire array could fail during a rebuild, and the probability of a failure goes up as the number of drives goes up. For applications such as cloud infrastructure, the extra cost incurred by a mirror is negligible compared to the downtime caused by an array rebuild or, even worse, a recovery from a cold backup. Cloud storage is worth so much that drives typically pay for themselves in a couple of months (HDD storage is on the order of $0.02/GB, SSD on the order of $0.10/GB). Usually, volumes will be built as a RAID 10 or similar topology, where you can lose up to half the drives (as long as no mirror pair loses both members) without total data loss.

The argument on desktops (where there are no ROI considerations, downtime is less fatal, and drive counts are lower) is somewhat different. Very few home users can justify enough SSDs to build a reasonable RAID 10 of SSDs; with fewer drives, the chance that any one of them will fail is lower as well. Small RAID 5s make more sense, since the failure rate during rebuilds goes down. I'd say a 3- or 4-drive RAID 5 of 4-8TB drives is not a capital sin if you have additional backups. Large RAID 5s are a fantastic false sense of security though: if one drive fails, it only takes a single uncorrectable read error during the rebuild to take out *all* of your data. You are arguably better off with a simple span, since if a drive fails there you are only out the contents of that drive... or just use RAID 10 like a reasonable human.
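
The rebuild risk can be put in back-of-the-envelope terms. Here is a sketch using the commonly quoted consumer spec of one unrecoverable read error per 1e14 bits and a hypothetical 4-drive RAID 5 of 8TB disks; real drives often beat that spec, so treat the output as illustrative of the trend rather than a precise prediction:

```python
# Back-of-the-envelope odds of hitting at least one unrecoverable read error
# (URE) while rebuilding a degraded RAID 5, using the commonly quoted consumer
# spec of 1 URE per 1e14 bits read. Drive count and size are hypothetical.
def p_rebuild_hits_ure(surviving_drives: int, drive_tb: float,
                       ure_per_bit: float = 1e-14) -> float:
    bits_read = surviving_drives * drive_tb * 1e12 * 8   # a rebuild reads every surviving drive in full
    return 1 - (1 - ure_per_bit) ** bits_read

# Example: a 4-drive RAID 5 of 8TB disks leaves 3 surviving drives to read.
print(f"{p_rebuild_hits_ure(3, 8.0):.0%}")
```

The exact number matters less than the shape of it: the probability scales with the total bits read during the rebuild, which is why bigger arrays and bigger drives make RAID 5 rebuilds scarier.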
 
Where you will see RAID 1 is for server boot drives. It has been common for well over 10 years to boot a server with two hard drives in RAID 1. That is the most common scenario. But as others have mentioned, with SSDs becoming more and more reliable, you now even see some machines being booted off a single SATADOM or a single NVMe drive.

This is exactly what I was going to say. I deal with VMs primarily nowadays, but 10-15 years ago we were racking and stacking HP servers day and night, and RAID 1 on two small-ish HDDs (36GB or 72GB) was always the OS option. The rest was usually done as RAID 5, depending on the HW config.

I've also become wary of Crystal bench numbers... my RAID 5 setup in my home Plex server puts up monster benchmark numbers, but in real-life usage it is not nearly that fast. Moving around a bunch of 5-10GB files puts that to the test right quick.
 
I didn't say that. I said, RAID 1 isn't generally used for redundancy on the desktop. Especially not with SSD's. And again, RAID is NOT a backup solution. Period. Thinking of it as such will bite you in the ass.

Dunno about you, but RAID 1 has saved many of my customers' asses time and time again. In restaurant environments there's constant staff turnover, and getting anyone to be proactive about backing up anything is basically impossible. Politics with upper management and POS companies that have their software on the local terminals (Aloha, Micros, etc.) also preclude offsite backup solutions, because they want complete control over any machine on the POS network as well as the incoming internet connection. Offsite backups also cause additional nightmares with PCI compliance that nobody wants to deal with.

The POS companies don't really give a damn if a restaurant goes down, standard practice is to just send out an entire new terminal/server and have someone stand it up, which can take days. The only POS company I have any respect for is Focus, because they'll bend over backwards for the restaurant to keep it from experiencing lengthy downtimes. Aloha on the other hand is shit.

RAID 1 has saved days and thousands of dollars of downtime, just slap another drive and rebuild it on the fly.
 

It doesn't matter. RAID is still not a backup solution. Its purpose is redundancy. That's literally what the "R" in "RAID" stands for. RAID 1 isn't the same as a backup no matter how often people use it that way.


Well, you are talking about a specific use case scenario. First off, I'd never let some outside vendor tell me how to conduct business. I wouldn't give two squirts of piss what their opinions are concerning backups. I'm not even sure how they could preclude the possibility of offsite backups. It shouldn't affect their software. They shouldn't know whether backups were occurring or not.

I don't think RAID 1 is useless. As you've stated, it's useful in the scenario you've described. It can certainly be helpful, but it's rarely employed because it doubles storage costs. Redundancy isn't usually a key factor for companies outside of servers. Even your comments above are in line with that. A workstation can be replaced rather easily. Data can't. Therefore data isn't left to chance. It's stored on file servers, SANs, NAS or various appliances. It's backed up in a centralized location. Depending on RAID as a backup is foolish, as the local data is replicated between the two volumes in real time. Effectively, a virus, data corruption, or something like data deletion or modification by user error will affect both volumes. If there is a fire, if the workstation gets fried, or whatever, the data can be lost. RAID 1 can't do anything to address those issues. That's not what it was designed to do.

It also depends on the size of the business. Smaller businesses can rarely afford to do things the right way when it comes to IT. They rarely have dedicated personnel for it either. As a result, RAID 1 is probably more beneficial in a desktop scenario for smaller businesses, as they lack a proper infrastructure. I've done some work with smaller businesses, and the way they handle their computing needs is almost always appalling at best. It still doesn't change the fact that RAID 1 isn't a backup. It's just for redundancy. Yes, it can save them from total data loss from a failed drive, but that's just one scenario. There are many other risks for data loss that RAID simply can't address.
 
So what usage scenario would make r1 enticing?
We used (software) RAID 1 at work for holding multimedia messages in transit; something like 6 or 8 pairs of disks in a machine. RAID 1 isn't space efficient, but it is CPU efficient, and it was fairly simple to set up. When disks went bad, the datacenter techs would swap them out, we'd rebuild the array, and boom, ready to go again. In my experience from that, spinning drives failed at probably 10-30x the rate of SSDs, but when the SSDs failed, they'd almost always just completely disappear, with no warning signs or ability to recover anything; the spinning drives would show bad sectors in SMART before they got really bad, and even the really bad ones could read some of the data. The other servers didn't get RAID 1, because the cost wasn't worth it; we used SSDs for everything except the servers that really needed a lot of storage, and the SSDs didn't fail very often (except we got a couple of bad batches of SSDs, where 50%+ of the drives failed within 2 weeks), so rebuilding one or two servers here and there wasn't a big deal.
 
I can't remember the last time I used RAID 1 in a personal computer, but it's used in industrial devices all the time. It's also used in enterprise firewalls. That said, it is rarer these days with SSDs.
 
RAID 1 is for boot volumes or things that are small, need redundancy, but do not touch enterprise storage (my server uses two 120GB SSDs in RAID 1: $15 each and simple). RAID 5/6 is for things that are big and need redundancy. Software RAID (erasure coding/ZFS/various enterprise solutions) is for things that are big, need redundancy, and are fast too, but it generally costs more.
 