Should I move off of Intel fakeraid?

Pylor

Limp Gawd
Joined
Feb 9, 2012
Messages
422
Currently I have a RAID 5 of four 2TB drives. While it's served me well over the past two and a half years (it was transferred from an old socket 775 Core 2 Duo build), between all the things I've read about it and the dependence I've developed on it, I'm concerned about its reliability. Almost every forum I've ever seen makes motherboard RAID solutions out to be the worst thing ever made.

Currently I have:

X9SCM-F providing 2x SATA3 ports and 4x SATA2 ports
HighPoint Rocket 640L providing 4x SATA3 ports for the additional drives

4x 2TB drives in RAID 5 fakeraid
2x 2TB drives in Storage Spaces, split between mirroring and striping for generic storage and backup - these are hooked up through the Rocket 640L
2x 256GB SSDs in RAID 0 fakeraid (backed up every night to a storage space)

All running on Windows Server 2012.

Basically I'm looking at the pros and cons of moving off of the Intel solution. It seems to be pretty speedy; any time I transfer files to and from it, it's usually around 200MB/s depending on what's being transferred. I also ordered a 4TB external drive I plan on doing backups to; I'm only using a little over 2TB of the RAID, so that'll last me for a while.
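For context, the nightly SSD backup doesn't need to be anything fancy; a rough Python sketch of that kind of one-way sync (the paths here are made up, and a scheduled robocopy job would do the same thing) would look something like this:

Code:
import filecmp
import shutil
from pathlib import Path

# Hypothetical paths - point these at the actual SSD volume and backup target.
SOURCE = Path(r"C:\Data")          # the RAID 0 SSD volume
DEST = Path(r"E:\NightlyBackup")   # storage space or the 4TB external

def sync(src: Path, dst: Path) -> None:
    """One-way copy: anything new or changed under src is copied to dst."""
    dst.mkdir(parents=True, exist_ok=True)
    for item in src.iterdir():
        target = dst / item.name
        if item.is_dir():
            sync(item, target)
        elif not target.exists() or not filecmp.cmp(item, target, shallow=True):
            shutil.copy2(item, target)  # copy2 preserves timestamps

if __name__ == "__main__":
    sync(SOURCE, DEST)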

I've looked at FlexRAID and other snapshot-RAID-type solutions, and they don't really do it for me. I do torrent to the array from time to time, and I know FlexRAID has issues with that (it's not recommended). Mostly it's just stored data sitting there; it has a lot of movie rips and TV show rips on it, along with some documentation, but it's also where my torrents are saved from my torrent VM.

I've also looked at something like FreeNAS, but they say right on their forums "DO NOT RUN IN VIRTUAL MACHINE," and the only way I could run something like that would be through Hyper-V with the disks passed through. I've considered experimenting with it; I have 32GB of ECC memory, so I still think it might work out if configured properly (assuming it even works on Hyper-V; the posts I saw as recently as March said it didn't).

Storage Spaces seems lackluster as well; although I haven't tested its parity option, from what I've read it has absolutely horrible write speeds.

Any ideas? Or am I better off just not messing with what has worked for me so far?
 
This very recent thread has good answers to your question.

It's pretty universally agreed that FakeRAID is the worst possible choice and you should move off it, but there are pros and cons between software RAID and hardware RAID.
 
This very recent thread has good answers to your question.

It's pretty universally agreed that FakeRAID is the worst possible choice and you should move off it, but there are pros and cons between software RAID and hardware RAID.

I actually read through that thread, and it was part of what makes me want to move to a better system. Snapshot RAID doesn't seem to be what I want; it's an option, but I was looking for something more along the lines of ZFS RAID-Z2, and I'm not sure how to implement that without an entirely separate system dedicated to it. Every ZFS system I've seen runs on Linux and they all say not to run them in virtual machines.
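For reference, the ZFS end state I'm picturing is just a double-parity pool; on a box (or VM) that could see the raw disks it would only be a couple of commands, something like this rough sketch (pool name and device paths are made up):

Code:
import subprocess

# Hypothetical device names - on a real box you'd use /dev/disk/by-id paths.
disks = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]

# Double-parity pool (survives two drive failures), 4K-sector friendly.
subprocess.run(["zpool", "create", "-o", "ashift=12", "tank", "raidz2", *disks],
               check=True)

# A dataset for the media rips, with lightweight compression.
subprocess.run(["zfs", "create", "-o", "compression=lz4", "tank/media"],
               check=True)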

Hardware RAID seems overly expensive to me for something as simple as a home storage solution; a good controller costs as much as three or four hard drives.

I'd prefer a real-time RAID-based solution that can survive up to two drive failures. Ideally it would allow mixing and matching larger and smaller drives, as I'd like to be able to move to bigger drives in the future.
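To put rough numbers on the mix-and-match part: with double parity (RAID 6 or RAID-Z2), each member only contributes as much as the smallest drive in the set, so mixing 2TB and 4TB drives wouldn't buy me much until they were all upgraded. A quick back-of-the-envelope sketch:

Code:
def usable_tb(drive_sizes_tb, parity_drives=2):
    """Rough usable capacity for a double-parity array (RAID 6 / RAID-Z2 style).

    Every member contributes only as much as the smallest drive,
    and two drives' worth of space goes to parity.
    """
    n = len(drive_sizes_tb)
    if n <= parity_drives:
        return 0
    return (n - parity_drives) * min(drive_sizes_tb)

print(usable_tb([2, 2, 2, 2]))   # 4 TB usable from the current drives
print(usable_tb([2, 2, 4, 4]))   # still 4 TB - the extra space on the 4TB drives is wasted
print(usable_tb([4, 4, 4, 4]))   # 8 TB once every drive is upgraded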

Given my setup, I wouldn't be completely averse to using a storage space for torrenting and then copying the data over to the snapshot RAID if that's what I have to do.
 
I don't know what the problem with Intel RAID is. I moved to a new motherboard without any problem. I didn't even have to pay attention to drive order: just connect the drives to the new motherboard, switch the controller to RAID mode, and it recognised the array and worked flawlessly. The idea that you have to start from scratch with a new OS is strictly BS, at least for Intel RST.

It had to be rebuilt from time to time (I failed to properly connect one of the drives on occasion), but the only thing you notice during a rebuild is that the performance of that particular array is lower. It's done completely in the background, and if interrupted it continues where it left off. The computer's usability is unaffected, and my 6x1.5TB RAID 5 array is verified/rebuilt in a few hours.
 
Every ZFS system I've seen runs on Linux and they all say not to run them in virtual machines.

Most ZFS systems are Unix-based (mainly Solaris and BSD), not Linux.
And you can run a ZFS system perfectly well in a virtualized environment under ESXi
if you can pass through the whole storage (controller and disks) to your storage VM -
together with other VMs (BSD, Linux, OSX, any Windows, Solaris).

See my All-In-One concept. You have the perfect mainboard for such a setup.
You would only need to replace your HighPoint with an LSI HBA controller (the cheapest is an IBM M1015).

http://www.napp-it.org/doc/downloads/all-in-one.pdf
 
Most ZFS systems are Unix-based (mainly Solaris and BSD), not Linux.
And you can run a ZFS system perfectly well in a virtualized environment under ESXi
if you can pass through the whole storage (controller and disks) to your storage VM -
together with other VMs (BSD, Linux, OSX, any Windows, Solaris).

See my All-In-One concept. You have the perfect mainboard for such a setup.
You would only need to replace your HighPoint with an LSI HBA controller (the cheapest is an IBM M1015).

http://www.napp-it.org/doc/downloads/all-in-one.pdf

I'd prefer not to start over with ESXi, that is an option, but I would rather not set everything up all over again, especially since all my VMs are already set up through Hyper-V.

I don't know what the problem with Intel RAID is. I moved to a new motherboard without any problem. I didn't even have to pay attention to drive order: just connect the drives to the new motherboard, switch the controller to RAID mode, and it recognised the array and worked flawlessly. The idea that you have to start from scratch with a new OS is strictly BS, at least for Intel RST.

It had to be rebuilt from time to time (I failed to properly connect one of the drives on occasion), but the only thing you notice during a rebuild is that the performance of that particular array is lower. It's done completely in the background, and if interrupted it continues where it left off. The computer's usability is unaffected, and my 6x1.5TB RAID 5 array is verified/rebuilt in a few hours.

It seems pretty hit-or-miss from what I've seen. I've had this RAID since it was just two drives in RAID 1 in an x59 motherboard; then it got moved to a socket 775 computer, where I added two drives and made it a RAID 5 array; then I moved it yet again to the current server. I've never had a problem with it or my other fakeraid instances until I got my most recent Z87 computer, where the RAID 0 general Steam/games drives have fallen out of RAID twice, once while I was updating the firmware.

The strangest thing is that I just went to run ATTO on it to show that the performance was pretty decent, only the write speeds were absolutely abysmal. What's really weird is that transferring files to the array still happens quickly; I can move an ISO from my desktop to that RAID at 200MB/s.
 
You see the high write speeds of 200MB/s because Windows is caching the data in memory and then writing it to the array in the background.

I'm sure if you transferred a 100GB file it would drop to its real speed.
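If you want to see the sustained number without waiting on a 100GB copy, a rough sketch like this (file size and target path are just placeholders) writes a large file and forces it to disk before stopping the clock, which takes the RAM cache mostly out of the equation:

Code:
import os
import time

TARGET = r"D:\bench.tmp"            # placeholder path on the RAID 5 array
SIZE_GB = 20                        # big enough to blow well past the RAM cache
CHUNK = b"\0" * (4 * 1024 * 1024)   # 4 MiB per write

start = time.time()
with open(TARGET, "wb") as f:
    for _ in range(SIZE_GB * 1024 // 4):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())            # push the data out of the OS cache
elapsed = time.time() - start

print(f"{SIZE_GB * 1024 / elapsed:.0f} MB/s sustained write")
os.remove(TARGET)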
 
How do Linux and the tried-and-true mdadm do under Hyper-V? Actually kind of curious for my own information.
 
Is this a solution looking for a problem?

I've been using Intel integrated RAID controllers for years without issue in a number of configurations (0, 1, 5 with 2 - 3 drives). I've been able to migrate the arrays to new equipment, upgrade the operating systems without crashing the arrays, etc.

I cannot say the same for my experimentation with AMD's integrated RAID controllers. On my 890 and 990 boards the arrays degraded and crashed in short order. This was even using the same drives that had served me faithfully in my Intel machines and continue to do so now that they are once again in an Intel machine.

I understand the benefits of hardware RAID, but Intel's solution, fake as it may be, doesn't seem to leave me wanting and doesn't cost extra. It seems that it has been the same for you, plus you have a backup solution (your 4TB external) in place already in the event of disaster.

If it ain't broke, don't fix it?
 
I'd prefer not to start over with ESXi, that is an option, but I

It seems pretty hit-or-miss from what I've seen. I've had this RAID since it was just two drives in RAID 1 in an x59 motherboard; then it got moved to a socket 775 computer, where I added two drives and made it a RAID 5 array; then I moved it yet again to the current server. I've never had a problem with it or my other fakeraid instances until I got my most recent Z87 computer, where the RAID 0 general Steam/games drives have fallen out of RAID twice, once while I was updating the firmware.
Updating the firmware on drives inside a RAID is asking for trouble, IMHO. I'm not surprised that happened.
The strangest thing is that I just went to run ATTO on it to show that the performance was pretty decent, only the write speeds were absolutely abysmal. What's really weird is that transferring files to the array still happens quickly; I can move an ISO from my desktop to that RAID at 200MB/s.
I was never looking for performance, only a somewhat better safeguard against data loss in case of disk failure. But if performance is a big concern for you, then yes, moving to hardware RAID is an option. But keep in mind that the controllers you can get under $400 are still using software RAID. They might be easier to migrate to a new computer, but I don't think you'd see a significant performance boost over Intel RST.
 
Is this a solution looking for a problem?
LOL!

Sure looks like it.

Remember, RAID is NOT a backup; its purpose is to protect against a failed drive.

This thread will provide a little more insight.
 
But keep in mind that the controllers you can get under $400 are still using software RAID. They might be easier to migrate to a new computer, but I don't think you'd see a significant performance boost over Intel RST.

You can find plenty of used LSI 9260s for half that, or a Dell PERC 6 for a quarter. Those are hardware, are they not?
 
Most ZFS systems are Unix-based (mainly Solaris and BSD), not Linux.

ZFS on Linux is production ready and works extremely well. It ditches the Solaris baggage and has wonderful target options (SCST), unlike FreeBSD.

<--- has ZoL Fibre Channel SAN up and running for several months and is happy with it.
 
You can find plenty of used LSI 9260s for half that, or a Dell PERC 6 for a quarter. Those are hardware, are they not?

The LSI 9260 series is considered hardware RAID, but it's a ROC (RAID-on-Chip) design and has a few minor drawbacks compared to a true hardware solution.

Read about the differences here.
 
Is this a solution looking for a problem?

If it ain't broke, don't fix it?

This is probably the truth. I think I've just been reading forums bashing fakeraid to the point that some people make it seem like your data will instantly disappear the moment you sneeze wrong. My biggest problem was that I was trying to use it as a backup solution when it isn't one.

I hadn't realized external drives had become so cheap. On Tuesday my 4TB Seagate, a Molex power splitter, and a USB 3.0 PCI Express card arrive (I figured a USB 2.0 transfer of 2TB would take forever), and I got them all for under 160 dollars on Amazon.

Thanks for the answers, guys; it was more of an "exploring my options" and "is it really that bad?" thread than anything.
 
Updating the firmware on drives inside a RAID is asking for trouble, IMHO. I'm not surprised that happened.

Yeah, that was a mistype: I updated the BIOS of the motherboard, not the firmware of the drives; I was distracted while posting. I figured the BIOS update wouldn't matter much since I had moved the arrays between motherboards before. One disk showed up as part of a failed array and the other showed up as a non-member disk.
 
The LSI 9260 series is considered hardware RAID, but it's a ROC (RAID-on-Chip) design and has a few minor drawbacks compared to a true hardware solution.

Read about the differences here.

The more you know.

Can you give me an example of a card that is a true hardware RAID adapter?
 