Any experience with Intel C224 chipset RAID?

Concentric

Has anyone had any experience using the RAID onboard the Intel C224 chipset (e.g. on a Supermicro X10SLM+-F), particularly RAID 10?

I'd like to know whether performance is acceptable, because otherwise I'll need to shell out for a hardware RAID card.

Thanks in advance for any responses.
 
Is it for SSDs or HDDs? Do you want high IOPS or throughput? Read or write heavy? On the comparable desktop chipset, RAID0 speed with SSDs is very high. What you have to consider is that the PCH has only a PCIe 2.0 x4 uplink, so no more than ~1.6-1.8 GiB/s, and that's before accounting for Ethernet traffic over the integrated controllers.
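
If it helps, here's the back-of-envelope behind that ceiling (a rough sketch; the ~15% protocol overhead figure is just an assumption):

# rough ceiling of the C224 PCH uplink (a PCIe 2.0 x4 equivalent link)
lanes = 4
rate_per_lane = 5.0e9          # PCIe 2.0 signalling rate, transfers/s per lane
encoding = 8 / 10              # 8b/10b line encoding
raw = lanes * rate_per_lane * encoding / 8    # usable bytes/s before protocol overhead
print(raw / 2**30)             # ~1.86 GiB/s theoretical
print(raw * 0.85 / 2**30)      # ~1.6 GiB/s with an assumed ~15% protocol overhead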
 
Apologies, should have included more detail.

It's for 4 hard drives. I'm considering the Seagate Constellation ES.3 1TB 7200rpm 128MB (ST1000NM0033).

It's for a general purpose server running 4 VMs. So I'm guessing no need for massive throughput, just decent IOPS? It certainly won't be pushing GiB/s territory.
 
HDD-based arrays in particular can benefit a lot from hardware RAID controllers with write caching and BBUs, since HDDs have very low IOPS compared to even the slowest SSDs. I would say that for the price of one good RAID controller you can get two relatively large SSDs to put in RAID1 on the motherboard controller.

I have used the Intel onboard RAID for RAID0 (SSDs, HDDs) and RAID1 (HDDs) and was satisfied with its performance. It was undoubtedly limited by the member devices, not the RAID implementation, and I'm certain you will not hit its limits with just 4 HDDs.
 
Apologies, should have included more detail.

It's for 4 hard drives. I'm considering the Seagate Constellation ES.3 1TB 7200rpm 128MB (ST1000NM0033).

It's for a general purpose server running 4 VMs. So I'm guessing no need for massive throughput, just decent IOPS? It certainly won't be pushing GiB/s territory.
If you already have the drives, use them on the onboard controller. If you haven't bought the drives yet, then look at SSHD options; the prices are good and they may offer better performance once things settle down.

HDD-based arrays in particular can benefit a lot from hardware RAID controllers with write caching and BBUs, since HDDs have very low IOPS compared to even the slowest SSDs. I would say that for the price of one good RAID controller you can get two relatively large SSDs to put in RAID1 on the motherboard controller.

I have used the Intel onboard RAID for RAID0 (SSDs, HDDs) and RAID1 (HDDs) and was satisfied with its performance. It was undoubtedly limited by the member devices, not the RAID implementation, and I'm certain you will not hit its limits with just 4 HDDs.
I would normally disagree with the first paragraph; onboard chipsets can do rather well with the OP's chosen combo. However, the rest of your post rings very true once total costs are looked at.


OP, a pair of bigger SSDs in dynamic (Windows software) RAID-1 will see full single-SSD write speeds and up to twice the read speed while still having redundancy. The beauty of it is that you get rid of dependencies on particular hardware (onboard chipsets or HW RAID); all you need is a pair of AHCI-enabled SATA ports.
 
Thanks for the replies.

I don't have anything yet; I'm speccing a new setup.

I know that the best way to get good performance would be to use SSDs, but this is not that type of scenario. I'm trying to maximise speed if I can, but the budget won't allow me to splash out.

I should add: I need at least 1TB of storage space overall because this server will host everything, including user files etc. It's not just a VM host.

The cost of swapping even two of the drives to large enough SSDs would be hard to justify. One of the suppliers I'm looking at charges $135 for each of these HDDs, but over $300 for the cheapest and smallest SSD I would consider using.


Normally I'd just put two of the HDDs in RAID1 and call it a day, but I wanted to find out whether using the onboard RAID10 would boost performance enough to justify buying two extra drives for approx $270.
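
For what it's worth, here's my rough back-of-envelope on what RAID10 might buy over RAID1 (the per-drive figures of ~80 random IOPS and ~150 MB/s sequential are only assumptions for a 7200rpm disk, and real-world scaling will be worse):

# idealised RAID1 (2 drives) vs RAID10 (4 drives) comparison
PER_DRIVE_IOPS = 80     # assumed random IOPS for a 7200rpm HDD
PER_DRIVE_MBPS = 150    # assumed sequential MB/s

def estimate(drives):
    read_iops  = drives * PER_DRIVE_IOPS         # reads can be spread over all members
    write_iops = drives // 2 * PER_DRIVE_IOPS    # every write lands on both sides of a mirror
    read_mbps  = drives * PER_DRIVE_MBPS
    write_mbps = drives // 2 * PER_DRIVE_MBPS
    return read_iops, write_iops, read_mbps, write_mbps

print("RAID1  (2 drives):", estimate(2))   # (160, 80, 300, 150)
print("RAID10 (4 drives):", estimate(4))   # (320, 160, 600, 300)

On paper that's roughly double across the board, which is what the extra ~$270 would be buying.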
 
I would normally disagree with the first paragraph; onboard chipsets can do rather well with the OP's chosen combo. However, the rest of your post rings very true once total costs are looked at.

The Intel (fake) RAID is not bad, but in certain workloads it can't compare against a hardware RAID controller. The battery-backed write cache is basically a mini-SSD that can handle tens to hundreds of thousands of IOPS and has multiple GB/s of bandwidth. If the number of VMs grows and you do not want to use unsafe caching options (which can corrupt VMs in case of power loss), a hardware controller is a real alternative.
 
No need for the lesson, I do this shit for a living and know that every card is very different, and getting the figures you've just quoted means deep pockets. There is a time and place for everything; the OP isn't talking enterprise here.
 
It was mainly meant as an explanation for the OP, because he did consider a hardware controller. If placing the performance-critical stuff on SSDs exceeds the budget, a controller with a BBU certainly will too.
 
It was mainly meant as an explanation for the OP, because he did consider a hardware controller. If placing the performance-critical stuff on SSDs exceeds the budget, a controller with a BBU certainly will too.

Yes, true. I would be going for something like an LSI 9260-4i without BBU. Not ideal, but the system will be on a UPS.

Can I expect about 30-40% better performance with a card, especially with small reads/writes? Might help me to sell it if I could quantify the difference.
 
OP, can I ask what all the data storage is needed for? I'm just wondering if you could offload it to a separate drive, or if you're running some sort of Active Directory type thing. I have a somewhat similar situation running at the moment. I have:

2x 256GB SSDs in Intel RAID 0 (if something happens I have daily incremental backups for a year, and all I'd lose is access to my VPN and movies/TV shows, so this isn't a big deal for me, but I probably should have gone with a single 512GB and been done with it)
4x 2TB drives in Intel RAID 5
2x 2TB in Storage Spaces (split between mirroring and striping).

What I'm mainly wondering is whether you could just get an SSD to run the operating systems on, then get two 2TB drives (or 3TB; those are the best GB/$ I can see for most drives at the moment) and stick them in RAID 1 for the storage you need. Really, that question depends on your workload though.

I highly recommend SSDs for multiple OS VMs; once you get two OSes doing some sort of work, be it updates or searches or something, hard drives tend to start choking up, at least in my limited personal experience. This is doubly true if people are going to be doing file transfers constantly.
 
OP, can I ask what all the data storage is needed for? I'm just wondering if you could offload it to a separate drive, or if you're running some sort of Active Directory type thing. I have a somewhat similar situation running at the moment. I have:

2x 256GB SSDs in Intel RAID 0 (if something happens I have daily incremental backups for a year, and all I'd lose is access to my VPN and movies/TV shows, so this isn't a big deal for me, but I probably should have gone with a single 512GB and been done with it)
4x 2TB drives in Intel RAID 5
2x 2TB in Storage Spaces (split between mirroring and striping).

What I'm mainly wondering is whether you could just get an SSD to run the operating systems on, then get two 2TB drives (or 3TB; those are the best GB/$ I can see for most drives at the moment) and stick them in RAID 1 for the storage you need. Really, that question depends on your workload though.

I highly recommend SSDs for multiple OS VMs; once you get two OSes doing some sort of work, be it updates or searches or something, hard drives tend to start choking up, at least in my limited personal experience. This is doubly true if people are going to be doing file transfers constantly.

Yes, I'm running "some Active Directory type thing" :D This is a business server to run a network for a few hundred users. All the storage space is needed for the 5 OS installs plus the applications on each (things like WSUS data can take a lot of space) and all the user data - personal and shared.

I'm open to the idea of having two mirrors instead, but as I said, the SSDs are considerably more expensive. I would need to be looking at ~500GB enterprise models.
 