Areca 1882i/1883i slow RAID 5/6 writes

Nixtar

Hi all,

Sorry, this turned into a bit of a TL;DR.

I am in the process of setting up a new home storage server and I am experiencing very bad write speeds on both an Areca 1882i and 1883i.

The current setup is the following:
HDDs: 12x Seagate ST3000VN000 3TB 5900RPM (8 of the drives are ~14 months old, from a previous RAID 6 controlled by an ARC-1224-8i)
RAID Controller: Areca 1883i FW: V1.53 2016-05-13
Chassis: Supermicro 846E16-R1200B with BPN-SAS2-846EL1 backplane (connected to the 1883i with 2 SFF-8087 to SFF-8643 cables)
OS: VMware ESXi 6.0 / testing with Server 2012 R2 bare metal

Once I built the RAID 6 and started restoring my data from backup I noticed it seemed to be choking and locking up; in Windows Resource Monitor the disk queue would jump up to ~30 seconds.
I paused the restore and ran a quick CrystalDiskMark on the volume and everything looked fine, with about ~1800MB/s read and ~800MB/s write, yet after about 5 minutes of restoring it would start to die again.
I changed the cache mode from write back to write through and...
[screenshot: CrystalDiskMark results after switching to write through]



I then tried another test with write back enabled and a 32GiB test size and saw similar results.
My instinct was that one or more of the drives was bad and causing the RAID to crawl.
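Before blaming a disk, though, one thing that makes that "fast for a few minutes, then dies" pattern easier to see than a short benchmark run is logging write throughput over time during one long sustained write. A rough Python sketch of what I mean (the path, block size and duration are placeholders, not anything specific to my setup):

```python
# Rough sustained-write probe (path, sizes and duration are placeholders).
# Writes 64 MiB blocks for ~5 minutes and prints throughput every 5 seconds, so a
# "fast until the cache fills, then falls off a cliff" pattern shows up clearly.
import os, time

TEST_FILE = r"E:\writetest.bin"    # hypothetical path on the RAID volume
BLOCK = 64 * 1024 * 1024           # 64 MiB per write
DURATION = 300                     # seconds to keep writing
REPORT_EVERY = 5                   # seconds between throughput reports

buf = os.urandom(BLOCK)            # incompressible data, like CrystalDiskMark's default
written = 0
start = last = time.time()

with open(TEST_FILE, "wb", buffering=0) as f:
    while time.time() - start < DURATION:
        f.write(buf)
        f.flush()
        os.fsync(f.fileno())       # keep the OS write cache out of the numbers
        written += BLOCK
        now = time.time()
        if now - last >= REPORT_EVERY:
            print(f"{now - start:6.0f}s  {written / (now - last) / 1024 / 1024:8.1f} MB/s")
            written = 0
            last = now

os.remove(TEST_FILE)
```

If it holds the ~800MB/s CrystalDiskMark number for a while and then falls off a cliff, that would point at the array's sustained write path rather than any one disk.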

To test that instinct I nuked the RAID, set up 12 pass-through disks and kicked off 12 CrystalDiskMarks at the same time.
[screenshot: CrystalDiskMark results for all 12 pass-through disks]

None of the disks show any sign of performance issues; this was done with write through enabled on the pass-through disks.
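(Side note for anyone repeating this: juggling a dozen CrystalDiskMark windows gets fiddly, so something like the rough Python sketch below can hammer all the pass-through disks at once and print per-disk write throughput. The drive letters and sizes are placeholders.)

```python
# Rough concurrent write probe for the pass-through disks (drive letters are placeholders).
# One thread per disk writes 4 GiB sequentially and reports MB/s, so a slow outlier
# stands out without juggling a dozen benchmark windows.
import os, time
from concurrent.futures import ThreadPoolExecutor

DRIVES = ["E:", "F:", "G:", "H:"]        # extend to all 12 pass-through volumes
BLOCK = 64 * 1024 * 1024                 # 64 MiB per write
TOTAL = 4 * 1024 * 1024 * 1024           # 4 GiB per disk

def write_test(drive):
    path = os.path.join(drive + os.sep, "writetest.bin")
    buf = os.urandom(BLOCK)
    start = time.time()
    with open(path, "wb", buffering=0) as f:
        done = 0
        while done < TOTAL:
            f.write(buf)
            done += BLOCK
        f.flush()
        os.fsync(f.fileno())             # make sure it actually hit the disk
    elapsed = time.time() - start
    os.remove(path)
    return drive, TOTAL / elapsed / 1024 / 1024

with ThreadPoolExecutor(max_workers=len(DRIVES)) as pool:
    for drive, mb_s in pool.map(write_test, DRIVES):
        print(f"{drive}  {mb_s:7.1f} MB/s")
```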

My old setup was an ARC-1224-8i with 8x ST3000VN000 in RAID 5; while I don't have exact throughput numbers, it was easily over 300MB/s sustained during Veeam backups of VMs that were ~200GB.
I currently have it set up with 6x 8TB NAS drives in RAID 6 storing all my data, so I don't want to mess around with it too much.
But I did a CrystalDiskMark on that volume and got around 250MB/s write with Write Through.


I hit up Areca support and got one reply asking me to rule out caching; since then I have emailed them 3 times over the span of a month with no reply. So I turned to Supermicro, and they were happy to help me rule out the expander.

We tested a number of RAID 0 configs, with the results below (all with Write Through):
1 disk(s) plugged in, pass-through mode: 114.6MB/s Read 129.7MB/s Write
2 disk(s) plugged in, RAID 0: 606.9MB/s Read 320.9MB/s Write
3 disk(s) plugged in, RAID 0: 493.0MB/s Read 487.0MB/s Write
4 disk(s) plugged in, RAID 0: 619.3MB/s Read 613.2MB/s Write
12 disk(s) plugged in, RAID 0: 1413MB/s Read 932.9MB/s Write
So that seems to rule out the expander somehow being a bottleneck.

I also made some test RAID 5 volumes:

Disks 1-3, RAID 5: 628.7MB/s Read 33.05MB/s Write
Disks 4-6, RAID 5: 525.1MB/s Read 32.75MB/s Write
Disks 7-9, RAID 5: 524.3MB/s Read 36.10MB/s Write
Disks 10-12, RAID 5: 468.1MB/s Read 66.69MB/s Write
These results are all from CrystalDiskMark with a 32GiB test size; I used the sequential (non-Q32) result.
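As a sanity check on how far off these are: for big sequential writes that fill whole stripes, a 3-disk RAID 5 should land somewhere near 2x a single disk's write speed, since the parity can be computed from the data being written rather than read back from disk. A quick back-of-envelope comparison against the 1-disk pass-through number from above:

```python
# Back-of-envelope: for large sequential (full-stripe) writes, an N-disk RAID 5
# should land somewhere near (N-1) x single-disk write speed, because parity can
# be computed from the data in flight instead of being read back from disk.
single_disk_write = 129.7   # MB/s, from the 1-disk pass-through test above

raid5_results = [("Disks 1-3", 3, 33.05), ("Disks 4-6", 3, 32.75),
                 ("Disks 7-9", 3, 36.10), ("Disks 10-12", 3, 66.69)]

for name, n_disks, measured in raid5_results:
    expected = (n_disks - 1) * single_disk_write
    print(f"{name}: expected ~{expected:.0f} MB/s, measured {measured} MB/s "
          f"({measured / expected:.0%} of expected)")
```

That works out to roughly 13-26% of what full-stripe writes should manage, which to me smells like the controller falling back to read-modify-write on every I/O instead of coalescing full-stripe writes.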

I purchased 3x HGST HDN724030ALE640 (3TB NAS series) for testing and they actually performed worse in RAID 5, despite being faster drives by themselves than the Seagates.
HGST RAID 5: 613.4MB/s Read 15.52MB/s Write


I picked up a refurbished Areca ARC-1882i on the cheap to test (I was looking for a cold spare card anyway) and the results are identical.

Other testing I have done:
Disabled NCQ
Disabled SES2 Support
Disabled all disk power options
Ran SeaTools for DOS short and long tests on all disks outside the server
Replaced all SAS cables
Confirmed links between disks > expander are all 6G
Confirmed links between expander > RAID card are all 6G per channel
Confirmed RAID card(s) are getting an x8 PCIe link
Upgraded the motherboard, CPU and RAM
Set up all 12 disks in ZFS RAIDZ2 using an LSI 3008 flashed with IT-mode FW and got around 290MB/s write (tested by copying a 620GB VMDK over a 10GbE network); that one seems to be capped by CPU (see the quick check sketched after this list)
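The "capped by CPU" part is only a hunch; one rough way to check it (on whichever OS the pool is running under) is to watch per-core usage during the copy, since a single pegged core points at a single-threaded bottleneck (SMB or the copy process) rather than the pool running out of aggregate CPU. A minimal sketch, assuming psutil is installed:

```python
# Quick per-core CPU check to run while the VMDK copy is going (needs psutil:
# pip install psutil). If one core sits near 100% while the rest idle, the
# ~290MB/s is more likely a single-threaded bottleneck (SMB / the copy process)
# than the pool running out of aggregate CPU.
import psutil

for _ in range(60):                                   # sample for ~1 minute
    per_core = psutil.cpu_percent(interval=1, percpu=True)
    print(" ".join(f"{c:5.1f}" for c in per_core), f"| max {max(per_core):5.1f}%")
```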


Due to the Windows-heavy setup I run, and the $$ already invested, I would prefer to stick with HW RAID for now.

If anyone is running either a 1882i or 1883i with a similar setup do you mind testing what write speeds you get with caching set to Write Through?

If anyone else can think of what might be going on here, or something I should test, I would appreciate any advice.


Cheers
Nick
 
Have you tried it in another slot on the same motherboard, or another motherboard?
 