RAID 5 Questions / Intel Onboard

Digital Viper-X-

Got a few questions! Here is my current setup.

4 x 3TB drives in RAID 5 on the Intel on-board controller (H87 chipset board)

I want to either expand it to 5 drives, or get an add-on RAID card that can do online expansion (if such a beast exists that is reasonably priced).

What options do I have?

If I want to add another drive on the Intel setup, does it support online expansion?

If I want to move to a dedicated raid card, can I add 4 drives, move the data onto them, then add 2 drives later and expand it?
 
I would not use Intel onboard RAID 5. I had to recover such an array for a friend and it really was a hassle. Additionally, it has low performance. My suggestion would be to buy a NAS and put all the data drives into that, or buy a dedicated RAID card.

The Intel RAID does not support online expansion. How are you going to move the data to the new RAID without enough drives? You need enough new drives to contain all your data, and then move everything over. Do NOT do online expansion without a proper backup (which you should have anyway); those expansions can go horribly wrong and can be impossible to recover.
 

Thanks for the reply

A NAS is out of the question; this computer was mainly built to be a file server / HTPC, so I'd like to keep it as such. Any suggestions on a card?

I have enough non-raided storage to hold the data temporarily.
 

The advice against Intel onboard RAID sounds more like an opinion formed from a single bad experience.


My 2 cents: I have used Intel Matrix (IM) RAID for years on ICH6, 7, 9 and 10 chipsets, and it always worked well in mirror, stripe and parity modes (RAID 1, 0, 10 & 5). Use half-decent drives, don't run wild overclocks or an unstable system, and keep in mind that it is on-board.

On 6-drive RAID 5 arrays, I would average 130 MB/sec writes and 500+ MB/sec reads. More than enough.

Be wary of the IM software; use stable editions only.
 

How many failed drives, rebuilds or recoveries did you do with Intel RAID 5? I'm talking specifically about RAID 5. RAID 0 is dead anyway if a drive fails, and RAID 1 yields 2 separate drives usable on any computer, while a split-up RAID 5 array is extremely difficult to bring back together. I doubt you can recreate an array without initialization with the Intel software, a feature that is required for any implementation I would entrust my data to. And it has no OCE (online capacity expansion).
 
What kind of disks do you have already? Many consumer-level disks do not function well with hardware RAID cards; some models will work better than others.

Good brands are LSI and Areca. Don't be surprised when you see the price tag.

LSI 9240 / 9260 / 9280, depending on speeds and # of ports.

LSI 9240-8i

Areca ARC-1224-8i

I've had okay luck with some Adaptec cards too.
 
I currently have 4 x 3TB Hitachi 7200 RPM SATA drives, not enterprise ones, just the regular desktop ones.
 
You lucked out; in general those work pretty well with just about any kind of RAID card.

Keep in mind you will probably have to destroy the existing array to get it from the Intel onboard to the dedicated card.
 

That part I'm aware of :)

Could I add another drive that is NOT the same model/brand?
 

You can, and it will probably work, assuming it's something in the Seagate Constellation ES2 / Hitachi Anything / Western Digital RE series.

I do have a Frankenstein box at work that has a mix of Hitachi Ultrastar drives and Seagate Constellation drives (though I remember it's running ZFS now, so it doesn't matter).

My gut tells me I wouldn't risk it.
 

Hmmm, so even if I switch to 4TB drives, I'd need to use more expensive drives, or go Hitachi again? I can't just use the regular Seagate 7200 RPM drives for RAID?
 
I am assuming you're talking about the least expensive 4TB Seagates. I would advise against those Seagate drives unless they're on the card's HCL (hardware compatibility list). They may be for the Areca card.

It's not best practice to mix drive sizes in the same volume on a RAID card.

You could pick up two new 4TB drives and make them their own LUN/array on the same card, while having the 3TB drives in a separate LUN.
 

Sorry, I meant creating a whole new array with 4TB drives and moving the data over. Though I doubt it's worth it; I might look into running a VM with something that does ZFS and using that vs. RAID 5.

If I stick with Intel RAID, do I still need to worry about non-RAID model drives?
 

If you have the hardware laying around to have a NAS do it, it's definitely worth it.

ZFS RAIDZ/Z2/Z3 arrays can't be expanded non-destructively as far as I know. You can extend the pool by adding mirrored pairs, or just not worry about striping data and keep a good backup routine.

As for whether you still need to worry about non-RAID model drives if you stick with Intel RAID: in general, yeah. Hitachi drives work okay with the motherboard fakeraid in my experience, as do the models listed earlier (WD RE / Seagate Constellation).
 

I've been using Intel RST for years in multiple systems. With many failed drives or simple malfunctions, the rebuild always worked without a problem, and it's very fast; at least 5x faster than "cheap" (compared to RST they're still expensive) dedicated cards with Marvell chips.

I migrated RAID 5 arrays between systems with different Intel chipsets and it worked flawlessly; even drive order is unimportant. I only once had to do a rebuild after migration, when I failed to connect a drive, so only 5 of the 6 drives in the RAID were visible. But you don't need any software to do the rebuild; it's done completely in the background. The software is only useful for monitoring the status of the array, or to force a rebuild/verify from the OS.

As a matter of fact, right now I'd rather trust my data to Intel fakeraid than to many dedicated cards that have crap software support. I couldn't find proper monitoring software for the Adaptec 2805 card, for example, so I'm completely blind to the RAID status.

The only drawback is the lack of OCE, but if you have enough drives to do offline capacity expansion it's no problem (OCE is a slow process anyway).
 
UPDATE:

Picked up 5 Hitachi 4TB 5900 RPM drives; I only had time to create a 4-drive array before selling my previous four 3TB drives.

Also got a new Gigabyte mobo (Z87-based). I installed the new Intel RST driver/software and loaded it up, and to my surprise there was a nice option called "Add Disk" when I selected my array. Sure enough, after some digging, it turns out it allows live expansion of the array, so I will add my 5th drive after copying all of the data off of it onto the array.

Not bad for a cheap/free implementation.

Note: I have an offer of $100 for a Perc 5/i with 256MB + backup battery. (Worth it?)
 
Thanks for the info, I will pass then =] Plus I'm actually happy with the onboard performance; I rebuilt a 12TB array in about 24 hours last time.

I have to throw an exception here.

Dell, IBM, et al. are very firm in no longer recommending single-parity RAID (RAID 5) with drives in excess of 1TB. Personally, I think even double parity / diagonal parity is a bad idea with very large drives.

It's all about unrecoverable read errors (UREs).

Say you have a drive come off the array. And we're not talking some "fake" failure, such as a consumer drive that actually recovers and physically housekeeps a read error but takes too long due to the lack of TLER -- no, here I'm talking about a head crash or other "real" failure.

Now you're rebuilding. To rebuild, every single byte of every remaining drive has to be read. That's how the magic of XOR lets the RAID system rebuild that lost drive. But magnetic storage pretty much guarantees about one unrecoverable read error per 10^14 or 10^15 bits read. Rebuild an array of six 3TB drives and that means FIVE times 3TB needs to be read, and read without a single unrecoverable read error on any of those remaining drives, or the rebuild fails.
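
If the XOR part sounds like magic, here's a minimal sketch (plain Python, with tiny made-up 4-byte "drives" instead of real disks) of how parity lets the lost member be rebuilt from the survivors:

# Three "data drives", each holding 4 bytes.
data = [bytes([1, 2, 3, 4]), bytes([5, 6, 7, 8]), bytes([9, 10, 11, 12])]

# Parity is the byte-wise XOR of all data members.
parity = bytes(a ^ b ^ c for a, b, c in zip(*data))

# Pretend drive 1 dies. Rebuild it by XOR-ing every surviving member with parity --
# which is exactly why every byte of every remaining drive must read back cleanly.
survivors = [data[0], data[2], parity]
rebuilt = bytes(a ^ b ^ c for a, b, c in zip(*survivors))

assert rebuilt == data[1]   # the lost drive comes back byte for byte

(In a real RAID 5 the parity blocks are rotated across all the drives, but the reconstruction math is the same.)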

Go play with this: http://www.raid-failure.com/raid5-failure.aspx

It will give you food for thought.
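
For the curious, here's roughly the back-of-the-envelope math the calculator is doing, as a sketch in Python. It assumes the quoted 10^-14 / 10^-15 error-per-bit specs and the six-drive example above, and ignores everything except UREs:

# Odds of finishing a RAID 5 rebuild without hitting a single URE.
# Rebuilding one failed drive in a 6 x 3TB array means reading the 5 survivors in full.
bits_to_read = 5 * 3e12 * 8            # 15 TB of surviving data, in bits

for ure_per_bit in (1e-14, 1e-15):     # typical consumer vs. enterprise URE spec
    p_clean_rebuild = (1.0 - ure_per_bit) ** bits_to_read
    print(f"URE rate 1 in {1 / ure_per_bit:.0e} bits: "
          f"~{p_clean_rebuild:.0%} chance the rebuild completes")

With 10^-14 class drives that works out to only about a 30% chance of a clean rebuild for this example; with 10^-15 class drives it's roughly 89%.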

Don't get me wrong. I'm running a RAID 5 array on my workstation with 3x 1TB drives. And gosh darn it, I'm even using Intel Matrix/RS RAID -- and I must be the only guy to have never had a problem with it. But I use RE3 drives, I scrub the array monthly, and I have daily backups.

Even with better SAS drives, I would never run devices larger than 1TB under RAID 5.

I think ZFS or other solutions will provide a stop-gap for the next half decade, but after that we'll have monster SSD devices with double or triple redundant parity and we'll never ever look back. Especially if some newer memory technologies emerge that are better than current Flash tech.
 

I've had to rebuild the array once already; it's not too bad for me. I understand what you are saying, and RAID 6 is a better option, but everything that I "can't" stand to lose is backed up on a separate external RAID 1 enclosure.

@ 10^14 it's 27% (odds of a successful rebuild from that calculator)
@ 10^15 it's 88%
 