Looking for a RAID card

Migelo

Hi!

Currently my HTPC is configured like this:
HW:
  • AMD Athlon II x2 260
  • GIGABYTE GA-M68M-S2P
  • 4GB DDR2 800MHz RAM
  • 1xWD 500GB 7.2k
  • 3xWD Green 2TB in RAID5

SW:
  • Win 7 x64
  • XBMC
  • Folder Sync
  • .....
On my array, I mostly save large media files and important family photos (they're also synced to another PC).
The array itself is managed by the on-board RAID controller and is using NTFS.

Now I want to expand my array, switch to a dedicated RAID card that's future-proof, and maybe change the filesystem, although the latter is probably not feasible if I stay on Windows.
I'm fully aware that I'll have to delete the array in order to migrate to another RAID card.

I want to go with Hardware RAID, not any Software solutions like FlexRaid, RaidZ, or any kind of snapshot RAID.
I also want the array to be expandable, and as far as I know the only way to do that with HW RAID is Online Capacity Expansion (OCE/OLCE).
NTFS has a 16TB volume limit (at the default 4KB cluster size); is there any way around it? And is it possible to avoid those nasty silent errors (bit rot) that come with storing a lot of data for long periods of time?

Is it possible to get a RAID card with, let's say, 4 ports and then just add expansion cards later to increase the SATA port count?

Does the kind of RAID card I'm looking for exist in the under-$150 range, and if so, which one?

Cheers
 
WD Green drives (other than the Green RE2 models) aren't usually on hardware RAID card HCLs (hardware compatibility lists), and people who use them with such cards risk the data on them.

Looking at what you have, I would highly suggest making sure you have a good backup of your data. Many others on this forum and I have had poor luck using motherboard RAID5 with WD Green disks like that; the disks may try to recover from an error and take so long to report back to the array manager that they get dropped from the array.

Most WD consumer drives have firmware behaviours that make them toxic to traditional hardware RAID cards. These include aggressive head parking and error-recovery times far longer than those of the disks that are built to work with RAID HBAs.
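If you're curious how aggressive the parking really is on your Greens, smartmontools will show it; a quick sketch (the device name is just an example):

Code:
  # attribute 193 (Load_Cycle_Count) climbing fast = aggressive head parking
  smartctl -A /dev/sdb | grep -i load_cycle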

There are inexpensive RAID cards out there that do RAID5 and RAID6, like the Dell PERC 6/i and IBM M1015. You can usually find these cards on eBay within the price range you mentioned. I can't recommend using them with your existing drives, though; I think you will have issues.

If you want to stick with your Green drives, I'd suggest reconsidering software like Drive Bender or FlexRAID. Alternatively, use the drives as single drives and tie them together with NTFS junctions, as in the sketch below.
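For the junction route, something like this from a Windows command prompt makes separate drives appear under one folder tree; just a sketch, and the drive letters and paths are made up:

Code:
  rem expose D: and E: as subfolders of one media tree on C:
  mklink /J C:\Media\Movies D:\Movies
  mklink /J C:\Media\TV E:\TV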

What you wrote raises the question: why are you set on hardware RAID? Hardware RAID works best with high-quality drives (any SAS drives, Western Digital RE drives, Seagate Constellation/ES.2 drives, Hitachi Ultrastars).

Hitachi Deskstar drives also seem to work pretty well with most hardware RAID HBAs.

If you're serious about going with hardware RAID, then you would do well to get drives that don't have known issues with the cards.
 
I'm leaning towards HW RAID because I've looked at a lot of SW solutions and none were what I was looking for. The best was FlexRAID, but back then the real-time version was only in beta and I don't like the snapshot version. Now, though, with the problems you mentioned (which I remember reading about), I'm willing to reconsider SW solutions again, because buying a completely new set of drives is out of the question. It's been a while since I last researched SW RAID.

That Drive Bender seems nice; what about reliability?

I've also had those problems, where a disk dropped out of the array and I had to rebuild it, but luckily I didn't lose anything. =)

Before the Windows 8 release I was very enthusiastic about the new "Storage Spaces" feature, but the initial version at least was buggy: it wouldn't redistribute files once you added a new drive, and even after a new 2TB drive was added to the pool it wouldn't let you add any more files because all the other, older drives were full. Has that been fixed?
 
I've heard/read about many people using it, but haven't tried it myself. I did pay for a license for FlexRAID and I'm using it without issue, though I don't know if I have a lot of faith in its parity protection.

I've also tried Storage Spaces, and I was also disappointed, but this was before the RTM.

Linux software RAID via mdadm is supposed to work with pretty much every drive ever made, can do RAID5 or RAID6, etc., but I wouldn't suggest it if you're not comfortable with Linux.
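To give a feel for it, a minimal mdadm RAID6 plus a later expansion looks roughly like this; a sketch assuming disks named /dev/sdb through /dev/sdf:

Code:
  # create a 4-disk RAID6 array and put a filesystem on it
  mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
  mkfs.ext4 /dev/md0

  # later: add a fifth disk, reshape the array onto it, then grow the filesystem
  mdadm --add /dev/md0 /dev/sdf
  mdadm --grow /dev/md0 --raid-devices=5
  resize2fs /dev/md0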

I've had very poor performance with the Green drives in ZFS-based RAID setups, to the point that I ended up going the FlexRAID route. With FlexRAID I at least get native disk speeds and one big "pool", though likewise it doesn't autobalance or do anything fancy like that. I do have backups, so I guess I'm not too worried about it.
 
I don't have a backup for the data, so I need a reliable storage solution. I'm comfortable with Linux, but I'm using software that's only available on Windows, so I'll only make the switch if the Linux solutions provide much, much better data protection and reliability.

I've looked at Drive Bender again and I'm confused now: does it support PARITY redundancy, or just data duplication?
 
No hardware card is ever going to be a viable replacement for a backup. You're making a huge mistake not having a backup plan. Services like CrashPlan are a helluva lot cheaper than the time it'd take to reload everything from source media (presuming you have it).

Nor do hardware cards have much support for reliably expanding the array with more drives over time. Some "can", but it generally requires a degree of reconfiguring that isn't trivial.

Using a software RAID setup for what you're doing is probably your best choice. One angle to consider is to set up a ZFS server. ZFS makes it less hassle to add more storage to a pool, and that plus its data-integrity features makes it pretty much second to none for avoiding data loss due to corruption. One common way to set one of these up is via OpenIndiana and the napp-it add-on. Once you get it configured (which isn't all that difficult following napp-it's instructions), it's pretty set-and-forget.
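To give a rough idea, creating a RAIDZ2 pool and later growing it by adding a second RAIDZ2 vdev looks like this; just a sketch with made-up disk names (on OpenIndiana they'd be cXtYdZ devices):

Code:
  # create a 6-disk double-parity pool
  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

  # grow it later by adding another whole RAIDZ2 vdev (you add vdevs, not single disks)
  zpool add tank raidz2 c1t6d0 c1t7d0 c1t8d0 c1t9d0 c2t0d0 c2t1d0

  # scrub periodically so silent corruption gets found and repaired
  zpool scrub tank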
 
$6/month for unlimited GB, WOW! But no thanks; I'm a bit paranoid and I like to have my data in my home, not on some server somewhere where nothing stops foreign governments from looking. I don't have anything to hide, but privacy is a basic human right that is very much forgotten on the internet nowadays.

Will a ZFS server automatically redistribute the parity data onto a newly added drive? Do drives have to be exactly the same size, down to the last bit, like with most HW solutions?

With ZFS, is it possible to migrate from RAIDZ to RAIDZ2?
 
At home with no backup is a bad plan. CrashPlan (among others) has means for you to encrypt your data before it's sent over. If "the government" wants to know something about you, not having offsite backups isn't going to be much of an impediment to them. There are far more ways for "the government" to determine your activities than your backups, so depriving yourself of a viable backup plan gains you nothing.

If the data is important enough to spend the energy to set it up and keep it online then it's pretty stupid not to have a backup/recovery method implemented. But you're free to make whatever mistakes you like.

There's a ZFS thread here on [H]. http://hardforum.com/showthread.php?t=1573272
 
It's nice that they offer encryption, but it's the monthly fees that I'm not comfortable with.

The most important data is backed up to 3 different disks in the household.

Let's just say that I won't put it online, and if you think that's a mistake, well....

A PC with RAIDZ2, hot swaps, lightning protection and a UPS is quite good, if you ask me.
 
They do periodically have specials on their subscriptions. I just picked up the family unlimited for $50/year. My time is WELL worth that, or even the full retail. Yeah, it's not "as cheap" as doing something on-site (or not at all) but it really is a bargain for the peace of mind.

A fire at the house or a thorough burglary isn't a risk worth 'saving' the subscription fee over, to me anyway.

I hear ya, I've got my data sync'd locally on more than one machine (in more than one location in the house). I've got one fileserver set up on Windows Server 2008 R2 with an Areca 1260, and the other running OI/napp-it with a RAIDZ2 array. All are on UPSes, backed by a 20kW generator. The main server gear is all rack-mounted, so thieves would be slowed down or deterred somewhat.

I've had mixed success in the past dealing with hot swaps. Never underestimate the risk to the rest of the drives in a chassis when using some (most?) hot-swap setups; most chassis really don't isolate the drives very well. For my backup swaps I prefer to keep those drives in a whole other eSATA chassis. Better to have those drives spun down and in a whole other box than risk disturbing other drives that might still be spinning. Call me paranoid, but I don't trust swapping drives in/out of a 24-bay chassis while *any* of them are live.
 
First, I meant "hot spares", not hot swaps, my mistake. And I agree with you, it'd be really stupid if you lost your data because of a hot swap. xD

It seems you really have things thought over! I'm not going to implement a generator or anything like that, but burglaries are non-existent where I live, and if a fire destroys everything, well, shit happens.

Do you know anything about the Green drives' performance on ZFS?
 
Ah, yes, hot spares are very handy. You just have to make sure your setup is properly configured to use them AND that it DOES use them when needed. I've had my share of cases where a spare wasn't automagically rotated into use when one of the live array members died. So if you set them up, go back and check from time to time to make sure they're being put to use when needed; see the sketch below.
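On ZFS, for example, that check only takes a minute; a sketch with placeholder pool and disk names:

Code:
  # add a hot spare to the pool
  zpool add tank spare c2t2d0

  # check now and then: the spare should show AVAIL, and INUSE once it has replaced a failed disk
  zpool status tank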

I've been bitten enough times over 30+ years to have learned how to avoid letting simple short-cuts screw me in the long run. Inconsistent power (low voltages, mainly) has taught me the benefits of UPSes for power conditioning. A generator became necessary when it was clear the local utility's outages were going to be frequent enough, and long enough, to make it too much trouble to keep enough UPS battery capacity online.

The trouble with the so-called 'green' drives is that their aggressive, desktop-oriented "features" often cause a lot of trouble in file server applications. I try to avoid them whenever possible. Drives are cheap enough to make it worth putting the right kinds of drives in a server; relegate the green ones to near-line or offline backups instead. My time and data are worth more than the headaches of trying to force a controller to work with them. This isn't to say they can't be used; some folks seem to have figured out how. Me, I just avoid them like the plague.

You say 'shit happens', but fire, floods, leaks and other forms of disaster aren't something to just 'put up with' instead of having a plan. Maybe that was defensible a decade ago, when internet uplinks and machines were slow and storage services were hideously expensive, but today it's so trivially simple and cheap to set up that it's really a no-brainer.
 
Thanks for all the info. I'll be going the ZFS RAIDZ2 route.

Now I just need an HBA, or some other way to expand the number of SATA ports on my GA-M68M-S2P mobo, that works with FreeBSD. Any suggestions?
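Whatever HBA you end up with, on FreeBSD it's easy to confirm the card and disks are visible before you build the pool; a sketch, and the device names will vary:

Code:
  # list the disks the HBA presents (LSI SAS2 cards like a crossflashed M1015 use the mps(4) driver)
  camcontrol devlist

  # then build the RAIDZ2 pool straight on the da devices
  zpool create tank raidz2 da0 da1 da2 da3 da4 da5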
 
Intel SASUC8i. Works great, but it's only SAS 3Gb/s. I'm using many of these cards in FreeNAS boxes to good effect.

People on this forum like the IBM M1015 as well, and it's much less expensive on eBay.
 
Same comment about the IBM M1015. It's a fine simple HBA for about $75. It can be flashed to LSI firmware that'll let you run disks in JBOD mode. Cables will run another $20. I'm led to believe the Intel SASUC8i doesn't support drives over 2TB.

I'm partial to hardware RAID for speed, simplicity, and array isolation, but there's no question that RAID HBAs have lower drive compatibility than software solutions.

My approach was to buy a bunch of 2TB Hitachi drives for about $75 each to pair with an Adaptec 5805 or IBM M5014 (an LSI 9260-8i minus RAID-6 and with half the cache), both around $175. They seem less prone to problems than other consumer drives.
 
Which exact Hitachi model?

I'm in the EU, so the best deal I can get on the IBM card is $120 over eBay...
 
I ordered the Seagate one. It could be a bad choice, but even my WD Greens aren't rated for 24/7 and I've had no problems so far.
 
I personally use 5K3000s, but I bought them back when Hitachi was still selling drives directly... Most Hitachi drives available now are used or new old stock.

Actually, Western Digital ("WD") bought them out (HGST, "Hitachi Global Storage Technologies") in March of this year. They still sell new drives (and are developing new ones), though availability currently isn't as widespread as for WD drives or as it used to be.
 
Ah, you chose too fast :(. If you want an affordable (as opposed to enterprise) drive that's designed to work properly in a RAID array and/or NAS, I'd have suggested a WD Red instead of what you linked. It's designed for 24/7 operation, has a 1-million-hour MTBF, has "time-limited error recovery" enabled in firmware and so on, at a fraction of the price of enterprise Seagate Constellations, WD REs, Hitachi Ultrastars and the like. I'd also look at more vendors for better prices.
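If you ever want to verify whether a given drive actually has that time-limited error recovery, smartmontools can query it (and set it on drives that allow changes); a sketch, with the device name made up:

Code:
  # show the current SCT error-recovery (TLER/ERC) timeouts, if the drive supports them
  smartctl -l scterc /dev/sdb

  # set read/write recovery limits to 7 seconds (values are in tenths of a second)
  smartctl -l scterc,70,70 /dev/sdb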

Seagate Barracudas, WD Greens and Hitachi Deskstars are not designed for hardware RAID array operation. They are designed for low prices ;). Not to say you will for sure have a problem with them if you do use them in a RAID array, but I'd personally never do it.
 
The IBM M1015 card is no good for me; I don't have an empty PCIe slot, just old PCI.

The long white slots on this mobo: http://www.gigabyte.com/products/product-page.aspx?pid=3498

I'd recommend you get a new motherboard then, or it may be possible to use the graphics-card slot. Your mileage may vary with PCI HBAs; in my opinion there aren't really any "good" ones, and most have no more than 2 ports on them.
 
I'm getting a new mobo and that IBM card.

It says it supports up to 16 drives, but I haven't found any SAS breakout cables for 8x SATA II; just curious.
 
Regarding the RAID-friendly drive suggestions: I'm going to use ZFS, so no problem.
 