ARECA Owner's Thread (SAS/SATA RAID Cards)

Hi everyone,

I'm new here and new to HW RAID (or any kind of RAID for that matter), and recently bought a used Areca 1882ix-24. Haven't installed it yet, still waiting for it to clear customs (taking forever!).
So I have some questions about setting everything up.

For starters, I bought 3 x 10TB HGST drives (model 0F27352), which will be set up in RAID 5 until I expand with more drives later this year. I wanted 4Kn drives, but when I ordered, these 512e units were the only 10TB HGST drives available. Someone more knowledgeable than me has told me I won't see any performance loss with either 512e or 4Kn drives, but what difference would I actually feel between the two in real-world use? This will be a storage server, so no heavy I/O.

Another thing: my current setup is older and has no PCIe 2.0 or 3.0 slots apart from a second PCIe 2.0 slot that was intended for a second GPU. I do plan to upgrade, but I'm waiting for Cannonlake to launch; I'm short on funds right now anyway, and I want to see whether they change the LGA socket. So my second question is whether the Areca will work properly in that second PCIe 2.0 slot, given that the first one holds the GPU.

Other than this, I don't know... any advice on BIOS setup for the RAID is appreciated.
 
Just wanted to drop in to thank everyone in this thread. I'm learning so much already, and it helped me solve some major issues I had w/ my Dat Optic Thunderbolt 2 setup (it comes w/ an Areca ARC-1264IL-12). Unfortunately, I had (stupidly) bought some Seagate 8TB desktop drives for this solution. It worked great on the Mac for over a year, but when I switched it over to my new Win 10 rig (via a Thunderbolt 3 to Thunderbolt 2 adapter), after a few weeks of extremely heavy use drives 5 and 8 failed and were kicked out of the volume. Oddly enough, after fiddling around and rebooting several times, I ended up w/ drives 6 and 8 failed (5 was OK). I took them out and put them back in one at a time and they rebuilt (the first drive took about 36 hours, the second only about 32), and I got everything working again. Now I'm doing a volume check with the "Scrub Bad Block If Bad Block Is Found, Assume Parity Data Is Good" and "Re-compute Parity If Parity Error Is Found, Assume Data Is Good" options set, about 30% complete now. Once done, I'll put my Plex server back into play.

Hoping I can overcome the Seagate consumer drives not having TLER, otherwise I'm in deep doo-doo if it keeps kicking out perfectly good drives. Curious why it worked perfectly on the Mac (w/ HFS+) for so long before. When I switched to Win 10, I reformatted the drive to NTFS and restored the data from a backup.

To be fair, I initially bought Seagate enterprise drives to use, but 60°C idle temps were just not something I could live with, even though I know that's normal for them.

Now looking at the Seagate IronWolf Pro 10TB drives... tempting.
 
p.s. Also having a weird issue. AJA test (default settings, cache disabled) = 900+ MB/s writes, but only ~500 MB/s reads. Very concerning. This always tested ~950 MB/s write and ~1000+ MB/s read before this incident.

Waiting for the volume check to finish before rebooting to see if that corrects it.
 
Wow, after all this time, I FINALLY figured out what was causing the low read speeds that I and many others have reported. I uninstalled ArcHTTP, ArcSAP and the CLI (you can just log in to the card directly via its Ethernet connection instead). After uninstalling them, I just rebooted and BAM, read speeds instantly back! whooooo!!! Not sure which one of those was causing the issue, but none of them are necessary... so for all those other people w/ RAID 6 setups seeing half (or lower) read speed vs. write speed, you might give this a try. Mine is finally flying again.
 
Hi, has anyone used the encryption option when creating a volume set? How does it work? I selected it and chose Passkey for encryption, but it never asks me to enter a passkey, and I can't see the volume on my Windows Server. Thanks!
 
I need an ITX (file) server, so I have this huge case with an Areca 1231ML collecting dust. Should I reuse it, or is it best to abandon it altogether? I kinda liked that thing, but I understand RAID 6 is not that smart anymore with huge drives, so should I use it anyway for RAID 0+1, or is it just not worth it anymore?
 
Another thing: my current setup is older and has no PCIe 2.0 or 3.0 slots apart from a second PCIe 2.0 slot that was intended for a second GPU. I do plan to upgrade, but I'm waiting for Cannonlake to launch; I'm short on funds right now anyway, and I want to see whether they change the LGA socket. So my second question is whether the Areca will work properly in that second PCIe 2.0 slot, given that the first one holds the GPU.

Other than this, I don't know... any advice on BIOS setup for the RAID is appreciated.

I don't know about your old motherboard, but I had issues with my LGA2011 GA-X99P-SLI when I put the Areca card in the secondary slot of the pair of full-bandwidth PCIe 3.0 slots. It had to go in the primary slot in order for the card to boot (just the card, not booting the OS off the RAID array; I have an SSD for that). Anything is possible with different motherboards. Also, the card boots a lot faster if you set it to UEFI BIOS (again, the card, not the PC), but I doubt you can do that on your old setup.
 
Thanks for the reply. I've opened a separate thread here, and after a lot of troubleshooting I've sorted things out and all is fine now.
I'm using the second PCIe 2.0 slot, which operates at x16, same as the primary one holding the GPU. Not that it's too relevant; the controller doesn't do more than x8 anyway.
My next motherboard will probably be from the Asus Workstation series, as I've heard they're great compatibility-wise.
 
No, but I think these would work really well. Definitely going w/ these on my next upgrade for sure.

I'm working with the IronWolf ST10000VN0004 right now. I originally built a RAID 10 array using 4 of these. I've had timeouts when copying files, so I'm in the troubleshooting phase. I'm running an Areca 1883i, V1.54 2017-03-29 (firmware/BIOS). When I go into the device information for the drives, it shows Error Recovery Control Read/Write as Disabled/Disabled, so I am not sure if this is causing an issue. Areca support told me the following:

"the parameter is reported by the drive firmware, controller firmware just forward it.
so it looks like this drive do not support ERC feature, configure controller TLER setting should not helps on this drive. you have to configure the timeout setting instead."

Of course this drive does support TLER/ERC, and if I plug any of these drives into my really old ARC-1210 they show up as Error Recovery Control Read/Write 7secs/7secs.

Anyway, just sharing... I'm currently going through this thread and others across the net attempting to troubleshoot this timeout issue. Leaning toward dropping NCQ, or perhaps pushing the regular timeout setting to 20 seconds or higher to see what happens. I have to rebuild the RAID 10 array again because the timeouts killed it. The drives seem perfectly fine; I'm running Seagate's Long Generic test on them now and 3 have already passed. If I find a solution I'll try to remember to post back in case anyone else runs into this issue.
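One thing I still plan to check from the OS side: smartmontools can talk to drives behind the Areca, so I can see what the drive itself reports for SCT ERC rather than relying on the controller's device-info page. Roughly this on Linux (the /dev/sg0 node and the disk 3 / enclosure 1 numbers are just placeholders for whatever your setup uses):

# query the drive's own SCT Error Recovery Control timers through the Areca card
smartctl -l scterc -d areca,3/1 /dev/sg0

If that reports 7.0s/7.0s while the controller's device info still says Disabled, the Disabled display may just be cosmetic.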
 
I'm expanding a RAID6 on an online server running a 1280ML. I've expanded the RAIDSet and now I need to expand the VolumeSet. What is the difference between foreground and background initialization? Does foreground wipe the data or make it temporarily unavailable?
 
Foreground is faster. Depending on what else is going on with the array, the difference can be drastic (2x-6x).
 
I'm not worried about speed unless everything else is equal. In trying to find the difference online, I came across a document for 3ware controllers saying, "Foreground initialization does this by simply writing zeroes to all the drives so that they all have the same values, overwriting any existing data in the process. In contrast, background initialization uses an algorithm to resynch the parity information on the drives and does not rewrite existing data." Is this also true for Areca controllers? I'm guessing the safest approach is just to stick with background initialization.
 
It is speed and availability. Per the 1280 manual: "Foreground (Fast Completion) Press Enter key to define fast initialization or Selected the Background (Instant Available). In the background Initialization, the initialization proceeds as a background task, the volume set is fully accessible for system reads and writes. The operating system can instantly access to the newly created arrays without requiring a reboot and waiting the initialization complete. In Fast Initialization, the initialization proceeds must be completed before the volume set ready for system accesses." As far as I know, there is no difference in how it completes the task, just in speed and availability (I have never seen anything published or heard from Areca either way, so I cannot speak to the method for sure, only the results).
 
Thanks...I just found the 1280 manual. It's tricky finding info for their older cards!

So the manual uses an example for a newly created VolumeSet. I just want to confirm that modifying an existing VolumeSet with data on it as a background task won't clear the data. I want the array to stay available throughout this process, but availability won't matter much if it zeroes out the array!
 
Obviously... if possible... you should have backups available. In the past, what I did on an old 1210 card was expand my array by replacing drives. I started with 4 x 1TB drives years ago, then at some point upgraded to 2TB drives. I did this by removing one drive at a time and allowing the RAID 5 to rebuild completely each time. Once all the drives were replaced, I expanded the RAID set, then modified the volume set, and once all that was done I went into Windows and expanded the drive. It worked very well and I never lost any data. I upgraded drives like this a total of three times, finally stopping at 4TB drives. Every time, I took the array totally offline once it was ready to expand and simply did it from the BIOS. Anyway, it can be done, but I would sure recommend a backup if you can make one. The thing I cannot remember, though, is whether it asked me to initialize during the volume expansion. I think I used the "no init" option, but it's been a few years and I really cannot remember. I just know that each time I did this I was very apprehensive, but I never lost data.
 
Hello everyone.
I own the new ARC-1882ix-24 controller with the basic 1GB cache, and my question is: which DDR3 ECC memory do these controllers support? What do I need for it to work, single rank or dual rank?

The manufacturer's original memory is overpriced nonsense...

Thank you for any reply.
 
I'm pairing an ARC-1880i with a Chenbro SAS CK23601 expander.
How can I get the drive, temperature, and other info from the expander passed through to the Areca?
Is it through the I2C connector?

If it is, is there any special cable to link the Areca's 8-pin connector to the Chenbro's 4-pin one?

They are connected through an SFF-8087 to SFF-8087 cable with sideband, but it appears no info is being exchanged.

The other day I almost lost the expander: the fan gave up after a few years of running 24/7, and the card was badly overheating.
It was pure luck that I happened to open the server and look inside.
 
Hello everyone,
I still can't decide which stripe size to choose for RAID 6 :-(

I have 5 x 10TB IronWolf Pro disks. The box will store .mp3 files, application install .isos, and photo backups, and about 80% of it will be movies in .mkv.

Which stripe size should I choose: 64, 128, 256, or 512 KB? Or a mega big 1 MB?

This is for home use, limited by a 1Gbit LAN. A home DLNA server runs on it, and up to 7 users in total access the server via Samba (Windows sharing) or DLNA.

I also have to count on some disk dying over time, so when I insert a new disk the rebuild should take a reasonable time, if stripe size affects that.
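For what it's worth, here is my own back-of-envelope math on full-stripe width with 5 drives in RAID 6 (3 data + 2 parity strips per stripe); treat it as a rough sketch and correct me if I have the layout wrong:

# full stripe = stripe size x number of data disks (here 3)
for s in 64 128 256 512 1024; do echo "stripe ${s}K -> full stripe $((s * 3))K"; done

So a 64K stripe means a 192K full stripe and 1 MB means 3 MB; that is the amount a single full-stripe write has to fill.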
 
OK, so I have an odd issue with an 1882 controller. I have two servers, both with an 1882 controller.

I'm attempting to move a 4-disk RAID 0 set from server A to server B. In server A it works great, but when I move it to server B it is not picked up by the BIOS, yet shows a Normal state in the web GUI. See below:

[screenshot: web GUI volume list showing the Burst03 volume and its Ch/Id/Lun assignment]


So the volume in question is called Burst03, and on server A it is assigned drive letter F:. Normally, when hot-swapping a volume from one server to another, that volume should just pop up on the other server.

Since the volume doesn't even show up in the BIOS when rebooting the server, I think we can rule out it being OS-related (Windows 10 Pro Creators Update on both machines, btw), so what else could it be?

If I move the volume back into Server A, it pops right up in the OS with drive letter F:.

Note how the Burst03 volume has the same Ch/Id/Lun assignment (0/0/2) as the 6TB pass-through disk I have in that server. That doesn't seem right, does it?
 
Hello,
I had the same problem. I have two of the same controllers as you.

The problem was that both volumes were sitting under the same SCSI Channel : SCSI ID : SCSI LUN.
Try this:

[screenshot: volume set listing in the web GUI]


Click on the volume:
[screenshot: modify volume set page with the SCSI Channel/ID/LUN fields]

and change the first volume's LUN to 0 and the second to the next number, and it will work.
 
I have two Areca cards in a server.
The ARC-1680 appears just fine. Web GUI works.

The ARC-1223 times out now after attempting to create a virtual disk from the RAID set. The RAID 5 set is built from 7 IronWolf 8TB drives and appears to work fine.

I switch to the second controller (1223) with "set curctrl=2"
The "rsf create drv=1~7 name=RaidSet1" command completes successfully.

When I try to create the virtual disk with "vsf", it crashes essentially. Anything after that results in the output below:

[18:33:17][[email protected] ~]# cli64

Copyright (c) 2004-2015 Areca, Inc. All Rights Reserved.
Areca CLI, Version: 1.14.7, Arclib: 350, Date: May 19 2015( Linux )

S # Name Type Interface
==================================================
[*] 1 ARC-1680 Raid Controller PCI
GuiErrMsg<0x14>: Timeout.

CLI> set curctrl=2
GuiErrMsg<0x00>: Success.

CLI> disk info
# Ch# ModelName Capacity Usage
===============================================================================
===============================================================================
GuiErrMsg<0x00>: Success.


Just looks dead to me :/


Rebooting with the drives in, nothing works.
Rebooting with the drives unplugged, works well.
Inserting the drives after reboot works well...drives appear again and RAID group appears.
Attempting to create the virtual disk again, crashes system.
Repeat until insanity sets in.



edit -- also!!!
I created each drive as a pass-through disk to just use Linux RAID (mdadm) to set up the disks. It creates the /dev/md0 device and starts a build; drive 5 pops out of pass-through mode and back to Free after an hour or so. I do the same again, leaving the 5th drive out entirely, and build the md RAID again; this time it pops out drive 4 during the build.

Starting to believe these disks are just not compatible with the controller.
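Roughly what the mdadm side looked like, for reference; the RAID level and device names below are placeholders rather than my exact command line, since the pass-through disks just show up as ordinary block devices:

# build a software array out of the pass-through disks (adjust level/devices to taste)
mdadm --create /dev/md0 --level=5 --raid-devices=7 /dev/sd[b-h]
# watch the initial sync; this is about when a drive pops back to Free on the Areca
cat /proc/mdstat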
 
Hello,
Please, I need to convert a SMART raw value to days. Isn't there a converter?


CLI> disk smart drv=12

S.M.A.R.T Information For Drive[#12]

# Attribute Items Flag Value Worst Thres Raw State

===============================================================================

9 Power-on Hours Count 0x32 99 99 0 114963389613658 OK

Thank you for the help.
 
The conversion from Raw value to a quantity with physical units is not specified by the SMART standard. It varies from manufacturer to manufacturer and even model to model or firmware to firmware.
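That said, a common trick when a raw value looks absurd is that the 48-bit raw field often packs more than one counter, so it can be worth eyeballing the low and high parts separately. A quick sanity check on your value; the split below is pure guesswork about the encoding, so only the drive vendor can say which number (if either) is the real power-on hours:

raw=114963389613658
echo "low 32 bits : $((raw & 0xFFFFFFFF)) hours (~$(( (raw & 0xFFFFFFFF) / 24 )) days)"
echo "high 16 bits: $((raw >> 32)) hours (~$(( (raw >> 32) / 24 )) days)"

That prints roughly 1626 hours (~67 days) for the low part and 26767 hours (~1115 days) for the high part.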
 
Ah, I see. It's just that I have two SAS drives at home, and I'd like to know how many hours they've run.
 
Anybody try the Seagate Ironwolf 10TB yet?

I have been using 10TB IronWolfs with an Areca 1880, firmware 1.54, in RAID 6 for roughly 3-4 months. So far no problems or anything abnormal; spin-down when idle also works fine. Before installing them in their final location, I had the following issues:

I ran some checks on all drives before installing. One drive was returned despite all functional tests being fine, because it was louder and made clicking noises (roughly once per second) even during sequential reads/writes.

I formatted and copied files to the drives before installing, using an old Areca 1281 with firmware 1.49, and while there was no serious problem, I found the LCC SMART value to be rather high when I finished copying from the server. I haven't investigated it further on the 1281. On the 1880 with firmware 1.54 the LCC values are fine and there is no unusual increase. So maybe there is an issue with LCC and the old 1.49 firmware... maybe not....
 
I am also using 12 x 10TB IronWolf with an Areca 1882ix, firmware 1.54, as RAID 6 and I have seen no issues either. I was wondering one thing, though: shouldn't AgileArray be detected for Error Recovery Control? All my drives show it as Disabled.

[screenshot: drive information page showing Error Recovery Control as Disabled]
 
Hello,
Yes, same here. I tried multiple disks and models and it always shows as disabled :-(
 
I also noticed the error recovery showing up as disabled. But when checking the drive directly, the controller does seem to set/enable it:

>smartctl -l scterc --device=areca,9/2 /dev/sg2
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.13.0-121-generic] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

SCT Error Recovery Control:
Read: 70 (7.0 seconds)
Write: 70 (7.0 seconds)

Actually, the ERC display from the Areca and from smartctl is identical to my WD Red 6TB drives. So if it is really disabled inside the Areca, I guess that applies to the WD Reds too.
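If a drive ever does come back with the timers off, smartctl can also set them through the controller. Something like the line below should work (same areca,9/2 addressing as above; the values are tenths of a second, and on many drives the setting does not survive a power cycle, so take this as a sketch rather than a guaranteed fix):

>smartctl -l scterc,70,70 --device=areca,9/2 /dev/sg2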
 
Thanks, I will use the tool to check and let you know the results.
 
Hey,

I'm looking into building a hardware RAID array under Windows Server 2016 with SATA 6Gb/s drives.

The main use is home/media server.

Which (used) card is best for me?

1880i/1882i?

Thanks.
 
You really won't see much of a difference between the 1880i and 1882i for a media server. The 1882i is still receiving firmware updates, though, and will rebuild a bit faster.
 
So the 1880 is the way to go?

Or perhaps a lower-tier card, like the LSI 9266?
 
I agree with Blue Fox, in most cases the difference is small. I actually returned the 1882 for an 1880 because I preferred the lower power usage and fanless design, which might also be important for a home/media server. Of course, the price also matters...
 
When it comes to home use, should I choose the 1880 or opt for an LSI card, which is 50% cheaper?

Also, are there any particular compatibility issues? I'm using an ASRock Rack mobo with a Xeon CPU...
 
I personally went for the Areca because its Linux kernel support was better, and I prefer not installing any extra software and instead just using the LAN port on the Areca with a simple web browser. At least for me, that was worth the cost.

Yes, there can be compatibility issues with certain motherboards; you might want to check here whether someone has the same board. I only have a Sandy Bridge desktop board from ASRock (Z68 Extreme4 Gen3), and there the 1880 has problems booting: only roughly every second boot works, and only in the GPU slot. But server boards seem to have better compatibility with these RAID controllers...
 
I did a little digging; none of the WD Red drives are on the compatibility list, and neither are the new Seagate IronWolfs.

Am I missing something?
 