RAID seems like a real pain.

1. Apparently WD Red drives are important because normal drives can drop out of the array.
2. With ZFS you apparently need a ton of memory, and it should be ECC, because a single bit flipped in memory can corrupt the whole array.
3. If one drive fails, it's not uncommon for a second drive to fail during recovery.
4. You can't just add a new drive to get more space, you have to rebuild the entire array?


I just started reading about it so there are likely a lot of other problems that I don't know about. It looks like I'd be spending a ton of money and still won't be sure my data is safe.
 
Also raid does not remove the need for a backup. So if this is data that absolutely can not be lost make sure you have a backup plan even with ZFS.
 
You should not confuse RAID and ZFS. ZFS is a complete solution that includes RAID functionality, a volume manager, and a filesystem.

I used software RAID for a very long time without TLER drives (all Samsung) and only very seldom had a drive drop from the array. I expanded a 3-drive RAID5 array of 500 GB drives first to a 6-drive RAID6 with 1 TB drives and finally to an 8-drive RAID6 with 2 TB drives over the course of 8 years, all while the array was online and accessible. Only recently did I move all the data to an 11-drive RAIDZ3 with 3 TB drives. I don't really see the problem with not being able to expand that online... my experience says you should not build a single pool bigger than you can still completely back up.
 
1. Apparently WD Red drives are important because normal drives can drop out of the array.
Yes, non-TLER drives will try to correct errors for upwards of 1-2 minutes, which is far too long for a RAID card. It needs to be less than 8 seconds, which is what the Red drives provide (see the sketch at the end of this post for checking or changing that timeout on drives that support it).
3. If one drive fails, it's not uncommon for a second drive to fail during recovery.
I suppose it depends on the age and type of the drives, as well as if you have hardware or software raid.
4. You can't just add a new drive to get more space, you have to rebuild the entire array?
If you are expanding the array, yes, it will need to rebuild the entire array, hence "array". If you're looking to add storage without having to expand, just plug in the new drive and run JBOD.
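
For point 1, if you're curious what your current drives do: on drives that support SCT Error Recovery Control you can usually read or change that timeout with smartmontools. The Python wrapper below is only a rough sketch; it assumes smartctl is installed, /dev/sda is just a placeholder, and plenty of desktop drives simply don't support the command.

Code:
import subprocess

def show_erc(device):
    # Print the drive's current read/write error-recovery timeouts, if supported.
    subprocess.run(["smartctl", "-l", "scterc", device], check=True)

def set_erc(device, deciseconds=70):
    # 70 deciseconds = 7 seconds, the usual TLER-style value. Needs root,
    # and on most drives the setting is lost again after a power cycle.
    subprocess.run(["smartctl", "-l", "scterc,{0},{0}".format(deciseconds), device], check=True)

show_erc("/dev/sda")
# set_erc("/dev/sda")  # uncomment to actually change it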
 
Also raid does not remove the need for a backup. So if this is data that absolutely can not be lost make sure you have a backup plan even with ZFS.

It's not technically a backup, but they're both ways to survive data loss, right? So if I want to make sure my data isn't lost, I should just get a couple external drives and backup to them manually?

Edit: Okay, I just did some googling and yeah, redundancy isn't a backup. Not only that, but a couple of extra drives is going to be a lot cheaper. Thanks!

I clearly don't know what I'm doing, so is there anything important about backing up data to two external drives I need to know? Any software I should use?
 
It's not technically a backup, but they're both ways to survive data loss, right? So if I want to make sure my data isn't lost, I should just get a couple external drives and backup to them manually?

Edit: Okay, I just did some googling and yeah, redundancy isn't a backup. Not only that, but a couple of extra drives is going to be a lot cheaper. Thanks!

I clearly don't know what I'm doing, so is there anything important about backing up data to two external drives I need to know? Any software I should use?

NO, RAID provides uptime. Backup is a way to survive data loss. Two very distinct problems with different solutions.

As you have already concluded, RAID is probably not the best option for you. For most people RAID is not a good solution, due to a number of the items you have pointed out.

Also, ECC memory is not just for ZFS but for any critical system where stability is paramount. The same type of corruption that can happen to ZFS could happen with, say, an Oracle DB or any file write. The data must go through memory before it is written to disk; if an uncaught bit flip happens in memory, your data is hosed and you won't know it. That sounds scary, but in home use the odds of this happening are pretty damn slim.
 
I clearly don't know what I'm doing, so is there anything important about backing up data to two external drives I need to know? Any software I should use?
RAID provides performance and fault tolerance in certain scenarios, which means everything stays running when a drive eventually fails.

Backing up to the cloud or to a hard drive that is stored at another location is the best backup method. Backing up to a drive and then storing that drive next to your computer does little for you if your home is destroyed. The same concept applies to a business. A good backup provides the ability to recover deleted data as well as to recover from a catastrophe.
 
1. Definitely also avoid power-saving drives that go into a low-power state based on their own firmware.

2. You don't necessarily need a ton of RAM if you have a card that supports ZFS. Also, I don't think ZFS is that susceptible to issues and errors, or it would not be so extensively used in critical systems.

3. I've never heard of this being a common occurrence.

4. For most complex RAID configuration other than JBOD, you will have to rebuild the array. Some RAID cards allow you to expand a RAID10 array without breaking the RAID, so long as you add pairs of identical disks at a time.

IMHO, RAID0 is a fun thing to play with if you're curious, but it has obvious downsides and isn't really a true RAID level. As for actual RAID levels, if you have a critical system that cannot tolerate downtime when a hard drive fails, then RAID will be necessary. But RAID itself should never really be thought of as a data backup solution.
 
ZFS doesn't "need" ECC. It's better with it, but so is any other filesystem. It's still significantly safer than any other filesystem without ECC. And you only need 2GB RAM for basically any size ZFS system unless it's under very heavy load and/or you use deduplication.
 
ZFS doesn't "need" ECC. It's better with it, but so is any other filesystem. It's still significantly safer than any other filesystem without ECC. And you only need 2GB RAM for basically any size ZFS system unless it's under very heavy load and/or you use deduplication.
This.

A hardware RAID card typically has a disk cache, maybe 256 or 512 MB of RAM, and if that RAM is not ECC, the card is susceptible to random bit flips which might corrupt your data. If your network card doesn't have ECC RAM on it, it might corrupt your data, etc. All types of hardware need ECC if you want to avoid corrupted data. And if you run NTFS or ext4 or ZFS on a PC without ECC RAM, it will not detect bit flips in RAM, which might corrupt your data. Thus, any really safe solution requires ECC RAM. ZFS in itself is already very safe, and if you combine it with ECC RAM you get the best protection out there.
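
To make the bit-flip point concrete, here's a tiny Python illustration of the idea (not ZFS code, just the principle): a checksum stored with a block catches corruption that happens after the checksum was computed, e.g. on disk, but a flip in RAM before the checksum is computed gets baked in and verifies as "good". That is the gap ECC RAM closes.

Code:
import hashlib

def checksum(block):
    return hashlib.sha256(block).hexdigest()

def flip_bit(block, bit):
    b = bytearray(block)
    b[bit // 8] ^= 1 << (bit % 8)
    return bytes(b)

data = b"family photos, tax records, and so on"
good_sum = checksum(data)                 # checksum taken while the data was still good

# Bit rot on disk, after the checksum was written: detected on the next read.
on_disk = flip_bit(data, 42)
print(checksum(on_disk) != good_sum)      # True -> the filesystem notices and can repair

# Bit flip in RAM, before the checksum is computed: the bad data checksums
# "correctly", gets written out, and nothing downstream can ever tell.
in_ram = flip_bit(data, 42)
bad_sum = checksum(in_ram)                # this is what ends up on disk
print(checksum(in_ram) != bad_sum)        # False -> nothing to notice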

BTW, ECC RAM typically detects and corrects single-bit flips, but it cannot correct two bit flips that happen simultaneously. The point is, even ECC RAM is not perfectly safe.

And yes, 2 GB of RAM suffices for ZFS servers. If you have more RAM, you get a huge disk cache, which speeds everything up, but a huge disk cache is not necessary.
 
A little more research and I was thinking of getting a 4-bay JBOD. I was under the impression it was just a bunch of disks. Then using software to copy one drive to another every so often. So it would be like having 4 separate external hard drives, but apparently JBOD mounts the disks as one virtual array so if one drive fails you lose your entire array?!

Sounds like JARA (Just Another Raid Array) more than Just A Bunch of Disks.
 
A little more research and I was thinking of getting a 4-bay JBOD. I was under the impression it was just a bunch of disks. Then using software to copy one drive to another every so often. So it would be like having 4 separate external hard drives, but apparently JBOD mounts the disks as one virtual array so if one drive fails you lose your entire array?!

Sounds like JARA (Just Another Raid Array) more than Just A Bunch of Disks.

Lookup Drive Bender or Drive Pool, I think it will explain the ability to have a storage pool with JBOD.
 
Lookup Drive Bender or Drive Pool, I think it will explain the ability to have a storage pool with JBOD.

Okay, those seem cool. So it's software you install in Windows that manages multiple drives. Unfortunately I think there are only two SATA ports in this Dell and they're already used. Is there any way to use one of those external USB 4-bay JBODs with DrivePool?
 
I've always used RAID for my data, mainly RAID 0 for OS and RAID 1 for data.

But yes, this is not a backup. Many RAID levels, like 1 or 5, provide uptime and guarantee your data stays accessible in case of hardware failure. But if you erase a file or a file gets corrupted, the RAID can't bring it back. You need a backup to restore it.

RAID is very useful in my case. Having many TB of data with single-drive-failure protection (RAID 1 in my current setup) is a must for me.
 
A little more research and I was thinking of getting a 4-bay JBOD. I was under the impression it was just a bunch of disks. Then using software to copy one drive to another every so often. So it would be like having 4 separate external hard drives, but apparently JBOD mounts the disks as one virtual array so if one drive fails you lose your entire array?!

Sounds like JARA (Just Another Raid Array) more than Just A Bunch of Disks.

JBOD is like RAID-0 without the performance benefits. But if you lose one drive in a JBOD, you can still recover a good chunk of your data because you'll only lose what was on the dead disk. In RAID-0 you'd have chunks of most files on several drives, which also means that if just one drive dies, you lose at least some data from most of your files, which is close enough in practice to losing all of your data.
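
A toy Python model of why that is (disk counts and file sizes are made up): with concatenation a file's blocks usually sit on one disk, while striping spreads every file's blocks across all disks, so one dead disk touches almost every file.

Code:
DISKS = 3
files = {"file%02d" % i: 8 for i in range(12)}   # 12 files, 8 blocks each

def concat_layout(files, blocks_per_disk=32):
    # Fill disk 0, then disk 1, then disk 2 -- roughly how a spanned/JBOD volume grows.
    layout, disk, used = {}, 0, 0
    for name, blocks in files.items():
        placement = []
        for _ in range(blocks):
            if used == blocks_per_disk:
                disk, used = disk + 1, 0
            placement.append(disk)
            used += 1
        layout[name] = placement
    return layout

def striped_layout(files):
    # RAID-0 style: round-robin every block across all disks.
    return {name: [b % DISKS for b in range(blocks)] for name, blocks in files.items()}

def files_hit(layout, dead_disk):
    return [name for name, disks in layout.items() if dead_disk in disks]

print("concat, disk 1 dies :", files_hit(concat_layout(files), 1))   # only the files that lived there
print("striped, disk 1 dies:", files_hit(striped_layout(files), 1))  # essentially every file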
 
JBOD is like RAID-0 without the performance benefits. But if you lose one drive in a JBOD, you can still recover a good chunk of your data because you'll only lose what was on the dead disk. In RAID-0 you'd have chunks of most files on several drives, which also means that if just one drive dies, you lose at least some data from most of your files, which is close enough in practice to losing all of your data.

That's not what this guy says at 4:20-5:20

https://www.youtube.com/watch?v=cbOqbR-ou3E
 
He's wrong. It won't boot normally, but the files on good disks will be recoverable by many different programs, many of them being free, with pretty much zero risk if you're recovering to other disks.

I am not vouching for JBOD, however. I am not a fan.

What programs do people use to recover from a JBOD drive failing? If RAID isn't what I want, and you don't like JBOD, what would you recommend I do? Just stick with a number of separate external drives?
 
What programs do people use to recover from a JBOD drive failing? If RAID isn't what I want, and you don't like JBOD, what would you recommend I do? Just stick with a number of separate external drives?

You can either get maximum space out of your drives with something like JBOD or you can lower your usable capacity by adding some sort of redundancy. The choice is yours to make. You could do a JBOD. You could do 2 JBODs or 2 RAID-0's or 2 RAID-1's and keep important files on both drives manually or via synchronization software. You could just go for RAID-5 after all. Or you could go for RAID-10. It all depends on your budget, how much data you have, and how important it is to you.
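
If it helps to see that trade-off in numbers, here's a small Python sketch of usable capacity for the layouts mentioned above. Simplified on purpose: identical drives, and filesystem/metadata overhead ignored.

Code:
def usable_tb(layout, drives, size_tb):
    if layout in ("jbod", "raid0"):
        return drives * size_tb              # all space, zero redundancy
    if layout == "raid1":
        return size_tb                       # everything mirrored, one drive's worth usable
    if layout == "raid5":
        return (drives - 1) * size_tb        # one drive's worth of parity
    if layout == "raid6":
        return (drives - 2) * size_tb        # two drives' worth of parity
    if layout == "raid10":
        return (drives // 2) * size_tb       # striped mirrors
    raise ValueError(layout)

for layout in ("jbod", "raid0", "raid1", "raid5", "raid6", "raid10"):
    print("%6s: %.0f TB usable from 4 x 4 TB" % (layout, usable_tb(layout, 4, 4.0)))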

Testdisk is one. Recuva another. Haven't used 'em. I've used the paid but cheap R-Studio and like it.
 
R-Studio definitely, and probably Recuva too, can recover from a failed JBOD.

Basically what I'm seeing is: RAID and/or backups are too expensive for me.

That's fine, they aren't for everyone. But consider that a 4 TB hard drive is $150; most people can afford $150 if they quit going to movies/bars/etc., quit eating fast food for a few weeks, or quit buying video games for a month.

On that 4 TB drive that you saved for are probably pictures, files, and memories that meant something to you at one time, or you would have already deleted them. Consider that at literally any moment, even now, your existing hard drive could fail and you could lose all of that, or in the best case have to pay a recovery service $1000-$2000 to get those files back.

$150 is cheap insurance.


That is the premise of RAID: a large cost up front, but increased safety and reliability, especially with ZFS.

JBOD and many externals/internals all separate are no insurance at all.
 
4. You can't just add a new drive to get more space, you have to rebuild the entire array?

RAID controllers will have specific software, but my Dell PERC5i using LSI's MegaRAID Storage Manager allowed for a 5th 2TB drive to be added while the array was fully read/write. It did take 3 days or so for it to finish, but it was a seamless expansion overall.

I believe I did have to go into computer management in Windows and "expand" the drive to use the new unallocated space.
 
If you don't need RAID5/6 at the moment (they are not ready for general use in btrfs) and can do with RAID10, you may consider using a new kernel and btrfs, which in its current state is actually very suitable for large home setups, as long as parity RAID is not a requirement and you don't depend on some of the more advanced ZFS features that are not yet implemented in btrfs. Hopefully migrating from RAID10 to RAID5/6 will be trivial in the future, once the support is more robust, if that becomes a necessity.

Btrfs is nice because it has full checksumming and snapshot features like ZFS (which gives you integrity and protection against almost all user errors or unwanted manipulation of data, because you can walk back in filesystem history using the snapshots), but drives can easily be added to and removed from the same array online, with no negative effects other than having to do a rebalance. It is, in other words, extremely flexible for long-term multi-disk arrays, without the usual restrictions traditional RAID and RAID-Z enforce when you need to increase or reduce the size or number of disks.
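
For reference, the add-a-disk-and-rebalance step described above is normally just two btrfs commands; the Python wrapper below is only a sketch, the device name and mount point are placeholders, and both commands need root.

Code:
import subprocess

def grow_btrfs(new_device, mountpoint):
    # Add the new disk to the mounted filesystem...
    subprocess.run(["btrfs", "device", "add", new_device, mountpoint], check=True)
    # ...then rebalance so existing data is spread across all members. This can
    # take hours on a big array, but the filesystem stays mounted and usable.
    subprocess.run(["btrfs", "balance", "start", mountpoint], check=True)

# grow_btrfs("/dev/sdX", "/mnt/pool")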

Hopefully RAID5/6 will also be generally usable in the near future, but judging from previous btrfs development it is best not to wait for it, as things have a tendency to take longer than anticipated. I suspect these important features will be implemented faster than previous ones, though, as I know several distros are going to adopt btrfs in the near future, allowing for much more general use.
 
BTRFS is really immature and riddled with problems. Just read the mailing lists. I would avoid BTRFS myself.

If you are mostly storing media files, then you could go for SnapRAID, FlexRAID, or a similar solution. These let you use individual disks, and then you add another disk used for parity. So if a single disk crashes, you can replace it and the parity disk will repair the crashed disk. It is not a RAID; every disk is individual. You can pull out a single disk and use it as a normal disk and access all the files on it. If you access a file on a RAID, all disks will be active; with SnapRAID, only one single disk will be active, and the rest of the disks can stay spun down.
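
The parity idea behind that kind of setup fits in a few lines of Python. This is only the concept (single parity via XOR), not SnapRAID's actual code; the disk contents are obviously made up.

Code:
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# Three independent data disks (padded to equal length) plus one parity disk.
disk1 = b"movies......".ljust(16, b"\0")
disk2 = b"photos......".ljust(16, b"\0")
disk3 = b"documents...".ljust(16, b"\0")
parity = xor_blocks([disk1, disk2, disk3])   # what the "sync" step writes

# Disk 2 dies: XOR of the survivors plus the parity reconstructs its contents.
rebuilt = xor_blocks([disk1, disk3, parity])
print(rebuilt == disk2)                      # True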

JoeComp has more to say about this, ask him. Or search for threads here on SnapRAID. (It is possible to run SnapRAID on top of individual ZFS disks.)
 
I use the current stable kernel 3.11.1 and btrfs utils 0.20 rc1 and have had zero issues with btrfs on my fileserver. As a bonus, it is much easier to use than fdisk/LVM/mdadm too. I use it with RAID10 and compress=lzo and utilize snapshots heavily. I also have a 22 TB EXT4/MDADM RAID6 array in the same server.

btrfs fsck now actually does useful things too (like actually repairing stuff), and it is nothing like the old outdated articles you find in a quick Google search, but you need the newest version of course.

I wouldn't say it is really immature; it just depends on the feature set you need. The important stuff is working really well.
But of course, if you are referring to the default kernels and btrfs-utils shipped in most distros, I would agree. It is not an out-of-the-box experience. You need the latest kernel and btrfs-utils.

But since it is actually very usable and seemingly safe for home use now, the distro versions will catch up over the next months, and it also means you can install a "future-proof" FS today, without having to redo the system and temporarily store several TB of data elsewhere in a year.

(If traditional ZFS and RAID is not an option that is)
 
If you are mostly storing media files, then you could go for SnapRAID, FlexRAID, or a similar solution. These let you use individual disks, and then you add another disk used for parity. So if a single disk crashes, you can replace it and the parity disk will repair the crashed disk. It is not a RAID; every disk is individual. You can pull out a single disk and use it as a normal disk and access all the files on it. If you access a file on a RAID, all disks will be active; with SnapRAID, only one single disk will be active, and the rest of the disks can stay spun down.

Good explanation. One other thing worth pointing out about SnapRAID is that it maintains block level checksums on your data, similar to ZFS or btrfs, although being snapshot RAID, the checksum is only verified when you are running a sync or check operation. (FlexRAID also maintains checksums, but they are file level, so not as fine-grained)

Also, the SnapRAID developer has been adding new features recently. He has improved the check (scrub) functionality so that you can configure it to automatically check older data for silent errors without having to check the entire data set.

And the next version (v5) will have triple parity support (and somewhat restricted quad-parity support). He already has the code for that in git and the performance is excellent.
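
The scrub idea is easy to picture too: keep a checksum per block from the last sync and periodically re-read only the blocks that were verified longest ago. Below is a rough Python sketch of that pattern; it shows the general idea, not SnapRAID's actual format or code.

Code:
import hashlib, time

blocks = {i: bytes([i % 256]) * 4096 for i in range(100)}                 # pretend data blocks
stored = {i: hashlib.sha256(b).hexdigest() for i, b in blocks.items()}    # saved at the last "sync"
last_checked = {i: 0.0 for i in blocks}                                   # never scrubbed yet

def scrub(fraction=0.1):
    # Re-verify only the oldest-checked slice of the data, not the whole set.
    oldest_first = sorted(last_checked, key=last_checked.get)
    for i in oldest_first[: int(len(blocks) * fraction)]:
        if hashlib.sha256(blocks[i]).hexdigest() != stored[i]:
            print("silent error found in block", i)
        last_checked[i] = time.time()

scrub()   # checks 10% now; run it again later to cover the next-oldest slice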
 
Good explanation. One other thing worth pointing out about SnapRAID is that it maintains block level checksums on your data, similar to ZFS or btrfs, although being snapshot RAID, the checksum is only verified when you are running a sync or check operation. (FlexRAID also maintains checksums, but they are file level, so not as fine-grained)

Also, the SnapRAID developer has been adding new features recently. He has improved the check (scrub) functionality so that you can configure it to automatically check older data for silent errors without having to check the entire data set.

And the next version (v5) will have triple parity support (and somewhat restricted quad-parity support). He already has the code for that in git and the performance is excellent.

So SnapRAID is like an open-source version of DriveBender or DrivePool? Is there such a thing as a simple 4-bay hard drive enclosure that doesn't use RAID/JBOD, where I can use SnapRAID or DrivePool?
 
So SnapRAID is like an open-source version of DriveBender or DrivePool? Is there such a thing as a simple 4-bay hard drive enclosure that doesn't use RAID/JBOD, where I can use SnapRAID or DrivePool?

No, SnapRAID is not about pooling, although it does have a basic pooling function. SnapRAID is about parity and checksum data. If you want pooling, you can use SnapRAID's basic functionality, or another (3rd party) pooling solution.

Regardless, I do not recommend any HDD enclosure that does not give you access to each drive individually.
 
No, SnapRAID is not about pooling, although it does have a basic pooling function. SnapRAID is about parity and checksum data. If you want pooling, you can use SnapRAID's basic functionality, or another (3rd party) pooling solution.

Regardless, I do not recommend any HDD enclosure that does not give you access to each drive individually.

Okay, I'm thinking about getting this JBOD enclosure. Mediasonic HF2-SU3S2 ProBox 4 Bay Hard Drive Enclosure with USB 3.0 & eSATA

Some of the reviews mention they use it with storage pools. So that enclosure would work with SnapRAID and DrivePool?
 
I wouldn't say [BTRFS] is really immature; it just depends on the feature set you need. The important stuff is working really well.
But of course, if you are referring to the default kernels and btrfs-utils shipped in most distros, I would agree. It is not an out-of-the-box experience. You need the latest kernel and btrfs-utils.
If your kernel crashes, you lose a few hours of work. If your filesystem crashes, you lose years of work. The filesystem is the skeleton upon which your entire system rests. Filesystems need to be more reliable than a kernel; the crashes have far greater consequences.

So, you should not use a filesystem that is under heavy development, where the newest features were implemented in the last month or even the last year. Some enterprise storage companies think ZFS is too new and immature (a decade old now); they only use mature storage solutions that have existed for several decades and have had no critical bug reports in the last five years or so. If they lose data, the entire company might halt and lose millions or billions of USD. So, no, they would never put something that is under development into production. Neither would I; my data is important to me. I would be very careful about using BTRFS. After BTRFS reaches v1.0 I would wait another 3 years before using it. BTRFS RAID5 was implemented only recently, in the last month or so? BTRFS is very, very far from v1.0. Sorry, but to my ears it sounds immature. Sure, if you have backups then fine, go ahead with something under development, why not. But if not, rethink your decision. Besides, ZFS runs just fine on Linux, in the kernel, today. Why BTRFS when you can run mature ZFS instead?
 
Greetings

A little more research and I was thinking of getting a 4-bay JBOD. I was under the impression it was just a bunch of disks. Then using software to copy one drive to another every so often. So it would be like having 4 separate external hard drives, but apparently JBOD mounts the disks as one virtual array so if one drive fails you lose your entire array?!
Sounds like JARA (Just Another Raid Array) more than Just A Bunch of Disks.

If the machine does this internally, I'm guessing that it just concatenates all the drives together so as to appear as one big drive. If I had to guess why it might be done this way, I'd say it would be to

(a) Make it simpler for people who just see one big drive, and/or

(b) probably set up this way because perhaps their PC can't use the SATA port multiplier function.

JBOD is like RAID-0 without the performance benefits. But if you lose one drive in a JBOD, you can still recover a good chunk of your data because you'll only lose what was on the dead disk. In RAID-0 you'd have chunks of most files on several drives, which also means that if just one drive dies, you lose at least some data from most of your files, which is close enough in practice to losing all of your data.

and

That's not what this guy says at 4:20-5:20

https://www.youtube.com/watch?v=cbOqbR-ou3E

RAID-0 and a concatenated JBOD mode like the one described in the YouTube clip would have the NTFS metadata scattered across all the disks, so yes, I would expect one hard drive failure to cause massive problems.

HOWEVER, if you were to get Windows to create a spanned volume, you could select all 4 disk partitions and format them as one large volume. You can also increase the size by adding more partitions/disks, up to a maximum of 32.

http://technet.microsoft.com/en-us/library/cc772180.aspx

Microsoft's official position is that if you lose one drive or partition in the spanned volume, you lose all the data. In practice I am led to believe that consistent NTFS structures are left on the remaining drives and that, for example, individual files reside on one hard drive only and never span onto another. The problems you have with this setup on a drive failure are:

(a) you don't know which file gets written to which drive, so it's hard to tell what gets lost, and

(b) there's no guarantee that you can access/recover the data left on the remaining drives.

What programs do people use to recover from a JBOD drive failing? If RAID isn't what I want, and you don't like JBOD, what would you recommend I do? Just stick with a number of separate external drives?

If there's no redundancy then usually whatever was lost is simply gone, with no data left to recover it from. Separate external drives would be preferable, as this would give you a backup; you just need some good file-syncing software.
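
If you do go the plain-external-drives route, even a tiny one-way mirror script covers the basics. The paths below are placeholders, and real tools (rsync, robocopy, FreeFileSync and friends) handle deletions, retries and verification far better; this is only to show the idea.

Code:
import os, shutil

def mirror(source, destination):
    # Copy files that are new or changed since the last run; never deletes anything.
    for root, _dirs, names in os.walk(source):
        rel = os.path.relpath(root, source)
        target_dir = os.path.join(destination, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in names:
            src = os.path.join(root, name)
            dst = os.path.join(target_dir, name)
            if (not os.path.exists(dst)
                    or os.path.getsize(src) != os.path.getsize(dst)
                    or os.path.getmtime(src) > os.path.getmtime(dst) + 1):
                shutil.copy2(src, dst)   # copy2 keeps timestamps

# mirror("D:/data", "E:/backup")   # e.g. NAS/internal drive -> external drive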

Okay, I'm thinking about getting this JBOD enclosure. Mediasonic HF2-SU3S2 ProBox 4 Bay Hard Drive Enclosure with USB 3.0 & eSATA

Some of the reviews mention they use it with storage pools. So that enclosure would work with SnapRAID and DrivePool?

Make sure your eSATA works with port multipliers; read this note from the link you provided:

*********************************************************************************************
Note: Motherboard's SATA port MUST support Port Multiplier in order for your computer to recognize multiple hard drive if the unit is connected via eSATA
*********************************************************************************************

If your computer's eSATA port is OK, then when you plug the SATA cable from this into your PC (assuming you already have the 4 hard drives in there), I'm assuming 4 drives will pop up on your PC. If this happens you are then at liberty to

(a) format them individually (as if they were 4 drives physically plugged into your PC).

(b) format them as one big spanned volume set as described above (with all its disadvantages).

Final comments:

If you don't want to back up your data (or can't afford to do so, or for any other reason), then it makes sense to choose the most robust filesystem available, which at this point is ZFS. If people can afford to dedicate at the very least one or two drives to parity, they instead usually create a ZFS RAID-Z or RAID-Z2 array out of the 5 to 20 drives they have. The main reason people don't back up their data is that they may have a 20-bay Norco filled with drives, with all their data on it, and backing that up means having to buy another identical case with another 20 drives, and they may not have the finances to do it.
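
For completeness, creating such a pool is essentially one zpool command; the Python wrapper below is only a sketch, the pool name and device names are placeholders, and the command will destroy whatever is on those disks.

Code:
import subprocess

def create_raidz2(pool, devices):
    # RAID-Z2: any two of the member disks can die without data loss.
    subprocess.run(["zpool", "create", pool, "raidz2"] + devices, check=True)

# create_raidz2("tank", ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde", "/dev/sdf"])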

In your case I'd suggest you get the 4-bay NAS and format the drives as 4 individual disks; then all you need is 4 externals, and you periodically sync the drives in the NAS with the externals. You don't want something that's too complicated or time consuming.

P.S.
A little more research and I was thinking of getting a 4-bay JBOD. I was under the impression it was just a bunch of disks. Then using software to copy one drive to another every so often.

You could do this by duplicating the data manually on two drives, but if lightning strikes the overhead power lines leading into your house it will fry everything, including your NAS. So if your data is really important, have two externals for every relevant drive you want to back up in your NAS and sync them individually, one at a time (i.e. make sure you don't have both copies plugged in at the same time).

Hope this helps you somewhat.

Cheers
 
RAID-0 and a concatenated JBOD mode like the one described in the YouTube clip would have the NTFS metadata scattered across all the disks, so yes, I would expect one hard drive failure to cause massive problems.

Yes, it will definitely cause problems, but JBOD will cause far fewer problems than RAID-0 would. With a program like R-Studio, a good amount of data would be recoverable with JBOD but not with RAID-0. Nobody's claiming that if you lose a drive in a JBOD it still works, just with some files missing; that's obviously not the case.

Again, not sticking up for JBOD here. I'm all about RAID-10 and occasionally 5/Z or 6/Z2. Usually RAID-10 for me though.
 
Make sure your eSATA works with port multipliers [...]

Hope this helps you somewhat.

Yeah, that helps a lot, thanks! About the eSATA port: I got my computer just before USB 3.0, so I need to find a low-profile PCIe 1x card to add USB 3.0 ports and connect the external bay to that. It has a real PCIe slot, but Dell disabled it for whatever reason. :(

I'm making progress now, thanks!
 
Use Hitachi CoolSpin drives. Quiet-ish, low-ish power, and cool. They work on almost every RAID controller.
 
This thread should be pinned. There is a lot of common knowledge here about onboard motherboard RAID, RAID controllers, hard drive types, SAS vs SATA, and where ZFS and Linux software RAID fit into what you want to do with your storage.

I feel like 90% of the forum comes here wanting to do RAID because they think it will be twice as fast or more reliable, and the truth is that unless you spend the cash, that's generally not the case.
 
Normally I stick with WD or Seagate... I'll check out the coolspin drives.

From my observations in the data center (tens of thousands of drives), Hitachi has been the most reliable by far (Deskstar and Ultrastar). Seagate has been the worst *by far*, with around a 50% failure rate after several years of heavy use (by year 3 or 4). Honestly I wouldn't touch Seagate with a 10-foot pole. WD hasn't been horrible, but not as good as Hitachi.
 
From my observations in the data center (tens of thousands of drives), Hitachi has been the most reliable by far (Deskstar and Ultrastar). Seagate has been the worst *by far*, with around a 50% failure rate after several years of heavy use (by year 3 or 4). Honestly I wouldn't touch Seagate with a 10-foot pole. WD hasn't been horrible, but not as good as Hitachi.

Yes, my Seagate needs to be RMAd, which is the reason for the whole thread. :eek:


I need some USB 3.0 ports to connect the 4-bay enclosure, but I noticed some of the PCIe 1x cards require extra power from a SATA connector for some reason. I'm not going to need that if my 4-bay enclosure already has its own power adapter, right?
 