Areca 1880 - WD Red (Raid6 throughput)

bekax5

Limp Gawd
Joined
Jun 4, 2012
Messages
132
Hello dear e-friends,

I am opening this thread since I am having a speed problem with a RAID 6 of 6x WD Red drives.
My speeds are kinda low/strange.

I first found this problem because my gigabit transfers were hitting only 60-70MB/s when copying from the RAID 6 to other devices...

These are the speeds I got (benchmark screenshots attached):



I guess I should be hitting 350-500MB/s with these, not 50-200MB/s...
Is there any way I can test a single drive, to find out whether one drive is at fault or whether there's some other reason a 6-drive array on an Areca is doing 50MB/s?
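If you want to spot-check a drive without pulling it into another box, here's a minimal sketch of a raw sequential-read test in Python on Windows. PhysicalDrive1 is an assumption (check Disk Management for the real index) and it needs admin rights:

```python
# Minimal sketch: sequential read test of a single physical drive on Windows.
# Assumptions: run as Administrator; the drive under test is PhysicalDrive1.
# Raw reads bypass the filesystem, so the array stays untouched.
import time

DEVICE = r"\\.\PhysicalDrive1"  # hypothetical device index -- verify yours first!
BLOCK = 1024 * 1024             # 1 MiB per read (sector-aligned)
TOTAL = 1024 * BLOCK            # read 1 GiB in total

with open(DEVICE, "rb", buffering=0) as disk:
    start = time.perf_counter()
    done = 0
    while done < TOTAL:
        chunk = disk.read(BLOCK)
        if not chunk:
            break
        done += len(chunk)
    elapsed = time.perf_counter() - start

print(f"{done / 1e6:.0f} MB read at {done / elapsed / 1e6:.0f} MB/s")
```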

By the way, I am hitting 800-1000MB/s with a RAID 1 of two SSDs on the same card, so the PCIe bus is not the problem.

This is Windows Home Server 2011
16GB RAM (capped at 8GB by the OS)
i7 860
Areca 1880 (2x Corsair Force GT RAID 1 | 6x WD Red RAID 6)

If someone please could help me debug this problem it would be awesome.

Thanks.
 
Whenever I see well-defined sawtooth patterns like you have above, it is generally a contention issue of some kind. First thing to try when benchmarking is to completely disable all antivirus/antispyware apps. What motherboard are you using, and on which BIOS version? Which other PCIe cards do you have installed, and which cards are in which slots? What firmware is your 1880 running?
 
The system had no antivirus running when I ran those benchmarks.

Motherboard is:
Asus Maximus III Formula
Latest BIOS

PCIe x16 #1 = Radeon 5450
PCIe x16 #2 = Areca 1880
PCIe x16 #3 = empty (x4 electrical)
PCIe x1 #1 = Intel NIC 1Gbit
PCIe x1 #2 = empty

I believe the Areca is running at x8 (5 GT/s), since the board halves the lanes when both x16 PCIe slots are used.
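Even at x8 the slot has headroom to spare; a quick back-of-the-envelope check (assuming the usual ~500MB/s effective per PCIe 2.0 lane after 8b/10b encoding):

```python
# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding -> ~500 MB/s usable per lane
lanes = 8
per_lane_mb_s = 500
print(f"x{lanes} link: ~{lanes * per_lane_mb_s} MB/s")  # ~4000 MB/s, far more than 6 HDDs need
```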

Firmware of the Areca (screenshot attached):



Is there any way I can check the Areca's bus link? I want to check whether it's x8 at 5GT/s or 2.5GT/s.
Other than rebooting, maybe? I'm not near the system and don't want to mess with it right now.
 
The PCIe bandwidth is most likely not the problem.
I suppose you do not run your system from this array, and the initialization had finished before this test?
 
Yes, initialization was finished!

I have 2 arrays:
RAID 1 - 2x Force GT 60GB
RAID 6 - 6x WD Red 2TB

The RAID 1 runs between 800MB/s and 1GB/s, which is brutal :D
The RAID 6 runs between 50-200MB/s, which is odd, and different benchmark tools show strange peaks and dips... which leads me to believe it has some kind of problem.
 
You're not the only person getting baffling RAID 5 speeds with WD Reds.

I bought 6 for use with an Intel card (can't remember the model), but I get about 14MB/s write and 70MB/s read, while a mirror is fine at over 100MB/s for read and write.
 
You're not the only person getting baffling RAID 5 speeds with WD Reds.

I bought 6 for use with an Intel card (can't remember the model), but I get about 14MB/s write and 70MB/s read, while a mirror is fine at over 100MB/s for read and write.

Way to go, Western :)
Desktop-grade RAID drives are bad at RAIDing :p

So this is probably drive related. Is there any way to test whether a single drive is causing this, or whether it's the drives in general?
I mean, without having to rebuild the whole thing a few times...
 
Hey,

WD Fanboy here.

Good to see some input here from other WD Red owners.

I would suggest trying a RAID 0 of three drives and a RAID 5 of the other three, and trying different settings for the RAID 5 array than you have right now (different stripe size, different partition/allocation unit size in NTFS).

Speeds should be decent on both; it is rare for me to see such poor WD performance.
My 11x 2TB Green RAID6 array hits 2.2GB/s reads and 650MB/s writes.

I hope a solution is found for this - I'll be holding off on Reds for a bit.
 
The drives seem to be able to handle full speed, since the peaks in HD Tach are 500MB/s, which would be the true maximum of six disks in RAID 6.
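That ceiling checks out on paper, assuming roughly 125MB/s sequential per 2TB Red (my assumption, not a measured figure):

```python
# RAID 6 sequential ceiling: parity costs two drives' worth of bandwidth
drives, parity = 6, 2
per_drive_mb_s = 125  # assumed outer-track sequential rate of a 2TB WD Red
print((drives - parity) * per_drive_mb_s, "MB/s")  # 500 MB/s, matching the HD Tach peaks
```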

Now I can't figure out what is dragging the sustained speeds down...


And the only solution I see is to dismantle this array, which I didn't want to do since it holds 5TB of data that isn't backed up.

Maybe remove a single drive at a time, test it standalone, then rebuild the array and move on to the next drive.
That would take me 5 days, lol. But it would confirm that every drive is delivering its true performance.
 
Get a backup of the important things first. You will be putting the array under a lot more stress than usual, and you will be doing it with only one drive of redundancy, so it's basically a RAID 5 for that period of time.

I'm currently using 3 Areca controllers with WD Reds without any speed issues, but I use different operating systems. It could of course be firmware/driver related; did you check with Areca?
 
Just break up the array and test all drives separately. Maybe there is a single drive or a faulty connection slowing the array down.
 
Get a backup of the important things first. You will be putting the array under a lot more stress than usual, and you will be doing it with only one drive of redundancy, so it's basically a RAID 5 for that period of time.

I'm currently using 3 Areca controllers with WD Reds without any speed issues, but I use different operating systems. It could of course be firmware/driver related; did you check with Areca?

Actually I don't have backup space for 5TB, so I guess the Areca will be serving its purpose, which is to keep my array stable :D

However, I have a 2TB disk backing up the most important files, so let's hope the RAID is strong :)

I didn't check with Areca; I thought I'd debug this myself first to make sure it is a disk problem.


Just break up the array and test all drives separately. Maybe there is a single drive or a faulty connection slowing the array down.


How can I remove a disk from the array and test it standalone?
I mean via the web interface, without having to physically remove it and test it in a separate system?

I saw something related to pass-through?
 
After a while playing with this card, suddenly:

(benchmark screenshot: kzfi.png)


It seems that they don't play nice with the Areca's NCQ enabled!

These are 6x 2TB Reds in RAID 6.
After all, the Reds seem to me a nice bunch of drives to RAID :D
 
Depending on what you use it for, disabling NCQ could totally tank your small-block read and write speeds at QD > 1, since disabling NCQ effectively makes QD = 1 at all times.
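To make the queue-depth point concrete, here's a toy Python sketch (not Areca-specific; testfile.bin is a hypothetical large file on the array, and OS caching will blur the numbers) comparing 1 vs 8 outstanding random reads:

```python
# Toy QD comparison: issue random 4K reads with 1 worker (QD=1, what you get
# with NCQ off) vs 8 workers (roughly QD=8, what NCQ allows). Use a file much
# larger than RAM, or the OS cache will hide the difference.
import os, random, time
from concurrent.futures import ThreadPoolExecutor

PATH = "testfile.bin"  # hypothetical large file on the array
BLOCK = 4096
READS = 2000
SIZE = os.path.getsize(PATH)

def random_read(_):
    with open(PATH, "rb") as f:
        f.seek(random.randrange(0, SIZE - BLOCK))
        return len(f.read(BLOCK))

for workers in (1, 8):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(random_read, range(READS)))
    print(f"~QD{workers}: {time.perf_counter() - start:.1f}s for {READS} reads")
```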
 
It's serving as a file server at the moment.
It was slowing down from 600MB/s to 100MB/s!

But now my SSDs don't get NCQ32 but NCQ0 =(
It seems NCQ is enabled or disabled for the whole controller, not for specific RAID sets =/
I just saw a drop from 1GB/s+ to 600MB/s.

By the way, I just added a Chenbro expander today; is there any way to check the Areca's link speed to the expander?

Regards.
 
Yes, initialization was finished!

I have 2 arrays:
RAID 1 - 2x Force GT 60GB
RAID 6 - 6x WD Red 2TB

The RAID 1 runs between 800MB/s and 1GB/s, which is brutal :D

The RAID 6 runs between 50-200MB/s, which is odd, and different benchmark tools show strange peaks and dips... which leads me to believe it has some kind of problem.

RAID-1 or RAID-0? The figures don't add up...
 
So use the Intel integrated controller for the RAID 1 array.

RAID 1 for SSDs is silly anyway. Get a Sammy 830 and call it a day.
 
Probably because, in order to get the speed boost out of RAID-1, it has to read X sectors, skip X, then read the next X (where X is however many clusters are in the RAID stripe). With NCQ enabled there's no wait time, since it can request multiple chunks at a time; without it, it has to wait for each request to complete first.
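A toy illustration of that interleave (nothing controller-specific, just the read plan):

```python
# RAID 1 read splitting: both mirrors hold the full data, so a controller can
# hand alternating stripes to each disk. Each disk then reads half the stripes
# and skips the other half -- which only pays off if it can queue ahead (NCQ).
def plan_raid1_read(total_stripes):
    return {disk: [s for s in range(total_stripes) if s % 2 == disk]
            for disk in (0, 1)}

print(plan_raid1_read(8))
# {0: [0, 2, 4, 6], 1: [1, 3, 5, 7]}
```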

And RAID-1 for SSDs is far from silly.
 
Is there any reason why I cannot change NCQ independently for each RAID set?
That would be the best of both: on for the SSDs, off for the HDDs.

So use the Intel integrated controller for the RAID 1 array.

RAID 1 for SSDs is silly anyway. Get a Sammy 830 and call it a day.


The thing is that this motherboard (Asus Maximus III Formula) doesn't have SATA3, so I'm still way better off with the Areca 1880 for all arrays, even with SSD NCQ off.
 