No RAID for green drives?

budec

Both the WD and Seagate green drives have Newegg comments saying they shouldn't be, or aren't meant to be, used for RAID. Why? I wanted to put 3x 2 TB in a ZFS raidz; is that OK, or will it somehow wear these drives out fast?
 
Both the WD and Seagate green drives have Newegg comments saying they shouldn't be, or aren't meant to be, used for RAID. Why? I wanted to put 3x 2 TB in a ZFS raidz; is that OK, or will it somehow wear these drives out fast?

Actually, this has something to do with the lack of TLER (WD) or ERC (Seagate) support in newer versions of these drives (more specifically, TLER or ERC is permanently disabled and cannot be enabled at all). When used in a RAID that spreads redundancy over multiple drives (such as RAID 3 or RAID 5), these drives not only spin themselves down in firmware but also take much longer than 8 seconds to spin back up. This results in the RAID controller reporting these drives as "FAILED" even though the drives are still good! That leads to an unneeded rebuild of the RAID array, and you would permanently lose all data on the spun-down green drive(s). As a result, you are now required to purchase enterprise (RAID-ready) versions of those same drives, which cost significantly more money per GB than the consumer versions.

In other words, these consumer drives should be used only as single drives or in RAID 0, 1 or 10. No RAID 3, RAID 5, RAID 6 or any multi-level RAID involving one of those three parity levels with these Green drives.
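
A rough sketch of the timing mismatch being described: the controller gives up long before a desktop drive finishes its own deep error recovery. The timeout and recovery figures below are illustrative assumptions, not vendor specifications.

```python
# Illustrative sketch of why a long in-drive error recovery can get a healthy
# disk kicked out of a hardware RAID set. The 8-second controller timeout and
# the recovery times below are assumed/typical values, not vendor specs.

CONTROLLER_TIMEOUT_S = 8.0   # many hardware controllers drop an unresponsive drive after a few seconds
TLER_LIMIT_S = 7.0           # TLER/ERC caps in-drive recovery (typically 7 s)
DESKTOP_RECOVERY_S = 60.0    # a consumer drive may retry a bad sector far longer

def controller_verdict(drive_response_time_s: float) -> str:
    """Return what a strict hardware RAID controller would conclude."""
    if drive_response_time_s <= CONTROLLER_TIMEOUT_S:
        return "drive stays in the array"
    return "drive marked FAILED -> array degraded, rebuild triggered"

for name, t in [("enterprise drive with TLER", TLER_LIMIT_S),
                ("green/desktop drive, deep recovery", DESKTOP_RECOVERY_S)]:
    print(f"{name}: responds after {t:>5.1f} s -> {controller_verdict(t)}")
```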
 
Thanks for the info.

What about software RAID like ZFS? Also I don't understand why it would work in RAID 1, but not RAID 5?
 
Thanks for the info.

What about software RAID like ZFS? Also I don't understand why it would work in RAID 1, but not RAID 5?

RAID 5 on software RAID is about the worst in terms of both performance and reliability. And when one drive drops out, it can take days or even weeks to rebuild the array - and during the rebuild, CPU utilization would be high enough to prevent using the computer at all, even for non-critical use.
 
Regarding ZFS:
Do I need to use TLER or RAID edition hard drives?
No, and if you use TLER you should disable it when using ZFS. TLER is only useful for mission-critical servers that cannot afford to be frozen for 10-60 seconds, and to cope with bad-quality RAID controllers that panic when a drive does not respond for multiple seconds because it's performing recovery on some sector. Do not use TLER with ZFS!

Instead, allow the drive to recover its errors. ZFS will wait, and the wait time can be configured. You won't have broken RAID arrays, which are common with Windows-based FakeRAID setups.

from http://hardforum.com/showthread.php?t=1500505
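
For what it's worth, on a Linux ZFS box the "wait time" mentioned in that quote is mostly governed by the kernel's per-device command timeout rather than by ZFS itself. A minimal read-only sketch, assuming Linux with SATA/SCSI disks exposed as /dev/sdX:

```python
# Minimal sketch: read the kernel's per-device command timeout on Linux.
# /sys/block/<dev>/device/timeout is the SCSI-layer timeout (in seconds) after
# which the kernel gives up on a command; the sd* device names are assumptions.
from pathlib import Path

def device_timeouts():
    for dev in sorted(Path("/sys/block").glob("sd*")):
        t = dev / "device" / "timeout"
        if t.exists():
            yield dev.name, int(t.read_text().strip())

if __name__ == "__main__":
    for name, seconds in device_timeouts():
        print(f"{name}: kernel command timeout = {seconds} s")
```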
 
RAID 5 on software RAID is about the worst in terms of both performance and reliability. And when one drive drops out, it can take days or even weeks to rebuild the array - and during the rebuild, CPU utilization would be high enough to prevent using the computer at all, even for non-critical use.

Under Linux it takes hours to rebuild if you have modern hardware. For example, the 9-drive x 2 TB Linux software RAID 6 I created a few weeks back takes about 8.5 hours to rebuild on a Core 2 Quad.
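
Backing that figure out as a sanity check: a rebuild has to write roughly one full drive, so 2 TB in 8.5 hours corresponds to about 65 MB/s sustained. A tiny estimator along those lines (pure arithmetic, nothing is queried from real hardware):

```python
# Back-of-the-envelope rebuild-time arithmetic: a rebuild must write roughly
# one full drive's capacity, so time ~= capacity / sustained write speed.
def rebuild_hours(capacity_tb: float, throughput_mb_s: float) -> float:
    bytes_total = capacity_tb * 1e12
    return bytes_total / (throughput_mb_s * 1e6) / 3600

# The 2 TB / 8.5 h figure above implies roughly this sustained rate:
implied_mb_s = 2e12 / (8.5 * 3600) / 1e6
print(f"implied rebuild throughput: {implied_mb_s:.0f} MB/s")        # ~65 MB/s
print(f"2 TB at 65 MB/s: {rebuild_hours(2, 65):.1f} h")              # ~8.5 h
print(f"2 TB at 10 MB/s (throttled/slow): {rebuild_hours(2, 10):.1f} h")
```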
 
RAID 5 on a software RAID is about the worst in terms of both performance and reliability. And when one drive drops out, it would take days or even weeks to rebuild that array - and during the rebuilding, the CPU utilization would be high enough to prevent using the computer at all even for non-critical use.

This is FUD. Rebuilding an array is not that intensive on the CPU - you READ from a few drives and write to ONE drive. And in the case of ZFS it is way more reliable than a hardware RAID 5. Most people today use raidz2 to address the real problem here - a disk failure in the middle of a recovery.
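
To see why the CPU cost is modest: single-parity reconstruction is just an XOR of the surviving blocks - a cheap streaming operation (raidz2 adds a second parity computed differently, but the idea is the same). A toy sketch with made-up block contents; real arrays of course work on large on-disk stripes:

```python
# Toy illustration of single-parity reconstruction: parity = XOR of all data
# blocks, so a lost block is recovered by XORing the survivors with parity.
import os
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [os.urandom(16) for _ in range(3)]      # three "data disks"
parity = xor_blocks(data)                      # the "parity disk"

lost = data[1]                                 # pretend disk 1 died
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)                # read survivors, XOR, write one disk

assert rebuilt == lost
print("reconstructed block matches the lost one")
```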
 
Regarding ZFS:

Do I need to use TLER or RAID edition hard drives?
No, and if you use TLER you should disable it when using ZFS. TLER is only useful for mission-critical servers that cannot afford to be frozen for 10-60 seconds, and to cope with bad-quality RAID controllers that panic when a drive does not respond for multiple seconds because it's performing recovery on some sector. Do not use TLER with ZFS!

Instead, allow the drive to recover its errors. ZFS will wait, and the wait time can be configured. You won't have broken RAID arrays, which are common with Windows-based FakeRAID setups.
from http://hardforum.com/showthread.php?t=1500505

I have no idea why someone would write this. TLER does not make a huge difference on a day-to-day basis on ZFS - but on failing drives I would much rather have errors occur that are detectable than have a drive trying to fix them for me, with terrible performance as a result.

I would just get F4EG drives. They are great with ZFS. Use the recommended drive amounts (6 or 10 for raidz2) and ashift=12 - but even odd drive numbers and the default ashift give quite acceptable performance. Stay away from WDGP drives!
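
A quick way to see where those recommended drive counts come from: with ashift=12 the sector size is 4 KiB, and the default 128 KiB record splits evenly across the data disks only when their number is a power of two. A small check, with recordsize and sector size assumed at their usual defaults:

```python
# Rule-of-thumb check behind the "6 or 10 disks for raidz2" advice: with
# ashift=12 the sector size is 4 KiB, and a default 128 KiB record divides
# evenly across the data disks only when their count is a power of two.
RECORD_KIB = 128          # default ZFS recordsize (assumed)
SECTOR_KIB = 4            # ashift=12 -> 2**12 bytes = 4 KiB sectors

def split_is_even(total_disks: int, parity_disks: int) -> bool:
    data_disks = total_disks - parity_disks
    per_disk_kib = RECORD_KIB / data_disks
    return per_disk_kib % SECTOR_KIB == 0

for n in range(4, 12):
    tag = "even split" if split_is_even(n, parity_disks=2) else "padding/overhead"
    print(f"raidz2 with {n:2d} disks ({n - 2} data): {tag}")
```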
 
This is FUD. Rebuilding an array is not that intensive on the CPU - you READ from a few drives and write to ONE drive. And in the case of ZFS it is way more reliable than a hardware RAID 5. Most people today use raidz2 to address the real problem here - a disk failure in the middle of a recovery.

I stand corrected on reliability. On my particular rebuild of a RAID 5 array, it pegged the CPU at 100% - and stayed there for 10 days. I checked the progress of that rebuild, and it was still only 10% complete after 24 hours. But that was with a buggy driver version and hard drives that were very slow by then-current standards.
 
You must be running Windows software RAID or FakeRAID.

On Linux, or basically any non-Windows system, software RAID normally takes less than 10% CPU usage on a single core. During a rebuild this does go up depending on your array, but nowhere near 100%, and the operating system sets the RAID rebuild to a low-priority task so it does not hog your CPU. As for rebuild times, I told you mine: typically less than an evening, provided you are using a PCIe-based HBA or the motherboard ports. And I manage 12 to 15 RAID 5/6 arrays, some of which were created in 2004.
 
The WD Green drives did let you enable TLER up to a point; then WD caught on that people were using these drives for RAID instead of their more expensive enterprise-level drives and put an end to it. Now the popular choice seems to be Hitachis, as they work in RAID arrays without any TLER adjustments required.
 
I have no idea why someone would write this. TLER does not make a huge difference on a day-to-day basis on ZFS - but on failing drives I would much rather have errors occur that are detectable than have a drive trying to fix them for me, with terrible performance as a result.
I wrote that text a long time ago, but it still applies:

With TLER disks: ZFS still tries to fix uBER (bad sectors), but may fail when the array is degraded and hits an uBER at the same time. In that case using TLER disks is dangerous, as with a degraded array your disks are the last opportunity to retrieve the data; there is no alternative source left.

Without TLER disks: if you still have redundancy, ZFS will rewrite the bad sector; this should not cause long delays or hiccups. But if you do not have a redundant data source, then it falls back to normal 120-second recovery. This should be the setup that you want:
1) fixes uBER (bad sectors) automatically if redundant source available
2) only 'stalls' for recovery without redundant data source

So to answer the OP: for ZFS you do not want TLER-capable disks. If your disks are capable of TLER you may want to disable this feature, since it is not required (ZFS won't kick out disks like hardware RAID does) and it is even potentially harmful.
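
For drives that do expose the setting, smartmontools can read or change it via SCT Error Recovery Control; the values are in tenths of a second, and 0,0 disables it. A hedged sketch - the device path is only an example, root is required, and not every drive supports SCT ERC:

```python
# Sketch: query SCT Error Recovery Control (ERC/TLER) via smartmontools.
# `smartctl -l scterc /dev/sdX` reports read/write recovery limits in tenths
# of a second; `smartctl -l scterc,0,0 /dev/sdX` disables ERC entirely.
import subprocess

def show_erc(device: str = "/dev/sda") -> None:     # example device path
    out = subprocess.run(["smartctl", "-l", "scterc", device],
                         capture_output=True, text=True)
    print(out.stdout)

def disable_erc(device: str) -> None:
    # Matches the advice above: let the drive do full recovery under ZFS.
    subprocess.run(["smartctl", "-l", "scterc,0,0", device], check=True)

if __name__ == "__main__":
    show_erc()    # read-only; call disable_erc("/dev/sdX") only deliberately
```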

Green low-rpm, non-TLER disks are thus the preferred storage type for use with ZFS. Low-rpm, because you get your IOPS performance from combining with SSDs (L2ARC + SLOG), so 4200 rpm with very high data density would be the most logical setup. Today's standard is 5400 rpm, however.

Samsung F4 2TB drives appear to be the best ZFS disks at the moment. You should take care to stick to an optimal number of disks, however, which is:
RAID-Z: 2, 3, 5 or 9 disks (usually 3 or 5)
RAID-Z2: 6 or 10 disks
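
For completeness, the space overhead each of those widths implies is simple arithmetic (parity disks over total disks):

```python
# Parity overhead for the widths listed above: parity_disks / total_disks.
def overhead(total: int, parity: int) -> str:
    return f"{parity}/{total} = {parity / total:.0%}"

for total in (2, 3, 5, 9):
    print(f"RAID-Z  {total:2d} disks: {overhead(total, 1)} lost to parity")
for total in (6, 10):
    print(f"RAID-Z2 {total:2d} disks: {overhead(total, 2)} lost to parity")
```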
 