ARECA Owner's Thread (SAS/SATA RAID Cards)

Some RAM I've found that works in an Areca 1883ix card, that is affordable in used condition:
  • Hynix PC3-14900R 4GB 1Rx8 DDR3 Server Memory HMT451R7AFR8C-RD - $8 on eBay
  • Micron MT18JSF1G72AZ-1G9E2ZE PC3-14900E DDR3 1866 8GB ECC 2RX8 - $30 on eBay
 
Blah... I unplugged my ARC-1170 while shuffling things around in the case, and now my machine refuses to POST with it attached. The web interface doesn't appear to be coming up either.

I'm now extremely frustrated, but might as well check: do the various Areca cards use the same on-disk layouts? If I buy something (much) newer, will there be any issue just using the current arrays?
 

I've upgraded throughout the years without issue, but from 1680->1880->1882->1883

I have an 1883i and an 1883ix-12 I'm looking to sell; PM me if interested.
 
I upgraded from a 1220 to an 1883 and my old array was detected fine and still worked, although performance had odd speed spikes. I can't rule out that I did something wrong, but I remade the array on the new controller and all has been fine since then, so the disks weren't to blame. So you should be fine, but don't be surprised if you notice some oddities.
 
Maybe a bit of a silly question, but is it possible to expand an Areca setup with a second card?

So instead of (for example) Areca 1882 => HP SAS Expander (in a different case, like a DAS), can I also do this with Areca 1882 => Areca 1882?

So use the second card only as an expander. Is this possible?
 
I bet someone already asked this but I can't find a solid answer to it.

Currently I'm using SFF-8087 to 4x SATA cables from my HP SAS expander to all the disks in the array, but I'm planning to move to a new case that has a built-in SAS backplane for all the disks, and I'm having a hard time finding out which bay in the new case is nr 1, 2, 3... and so on.
Is it critical when I move the disks that each disk has the same "Device location" in the new case?
 

Location doesn't matter with an Areca array.
 
Ok, thanks.
So I can put the disks in any port on the SAS expander and it will work?

It's never caused an issue for me. To be safe, turn on the feature to not enable an incomplete array, then go into the BIOS on boot to confirm it found the array.
 

Yes, the feature to not enable an incomplete array is already active :)
 
Saying location doesn't matter with an Areca array is not completely accurate. For regular use, yes. But if you ever need to recreate the array with the SIGNAT or LeVeL2ReScUe commands, the drives need to be in the order the array was originally created in for it to have the best chance. This has been shown time and time again in attempts to recreate arrays over the years.
 

Can't say I have ever had a need to use these commands. Should one need to do this, how would you know which drive goes where?
 
Label drives to drive cage positions, label the cable ends, etc. - whatever is appropriate for your situation. Imagine if the drives got pulled and all the cables disconnected while you weren't there: what would you have had to do to be able to re-assemble them exactly as they were?
 

LOL, I think we all know how to make labels. Is there no identifying data on the drive itself that can be read by the card? It is probably a good idea to label them just in case.
 
Yeah, one of the benefits is that for the most part that data is readable by the Areca controller card, which is why putting them back in order usually doesn't matter. It's when you've had a drive failure, and some of that data may be corrupted, that knowing which went where is key.
 
The problem I have is that the SFF-8087 to 4x SATA cables I use today don't say which lead is 1, 2, 3, 4, and in the new case I will only use SFF-8087 to SFF-8087 to the backplane, and the case doesn't say which bay is 1, 2, 3, 4, 5, 6, 7, 8, 9... 24.
So the only solution is to check serial numbers against ports/positions now, and then in the new case trial-and-error to find out which one is which :/
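One way to take the trial-and-error out of it is to snapshot a slot-to-serial map before the move (the serials can be read from the controller's web interface or with smartctl) and compare it to a second snapshot after the move. A minimal sketch of the bookkeeping only; the slot names and serial numbers below are made up:

```python
def changed_slots(before, after):
    """Return the slots whose drive serial differs between the two
    snapshots (or is missing), i.e. drives that ended up somewhere new."""
    return sorted(slot for slot in before
                  if after.get(slot) != before[slot])

# Hypothetical snapshots taken before and after the case swap.
before = {"slot1": "SN-AAA", "slot2": "SN-BBB", "slot3": "SN-CCC"}
after  = {"slot1": "SN-AAA", "slot2": "SN-CCC", "slot3": "SN-BBB"}
print(changed_slots(before, after))  # → ['slot2', 'slot3']
```

Feeding it real data is the only fiddly part; the comparison itself is trivial, and it beats guessing which backplane bay maps to which port.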
 
A couple days ago my Windows 10 Pro PC with an Areca ARC-1882ix-16 crashed and rebooted in the middle of the night, but got stuck at the Windows logo screen with the spinning circle. Thinking it was a corrupted Windows NVMe SSD, I did a fresh Windows install to an old spinny hard drive to try and troubleshoot. It booted fine, of course. So I installed the Areca drivers and rebooted. The new install of Windows took a long time at the restart screen, then blue-screened with a comment about "driver power state error". When it tried to boot up it got stuck at the Windows logo screen just like my original install on the SSD!

From this I drew the conclusion I have a problem with the RAID card, and it is likely causing Windows not to boot. Also, when the PC is powered off, about every 15 seconds the RAID card emits a short beep. It was getting late last night so I did not run any further experiments like removing the RAID card to confirm my theory. As I thought about it this morning, I wonder if these symptoms are an indication the RAID card fan has failed? Has anybody else experienced a fan failure, and is the beeping supposed to be an indicator of this? If this is the case, what is a good quality replacement fan?
 
If the RAID card is beeping with the power off, that kind of tells me you have a battery backup unit attached to it as well? Otherwise I can't figure out why a failed fan would cause a beep with the system powered off. Anyway, upon boot get into the card's setup before Windows and see if all looks well in that environment. You should also unplug the RAID card as a test to see if Windows boots without it, as you mentioned you didn't have time for this.
 

Yes, I have a battery backup. PCs have a 5V standby power supply as well, which is hot even when the PC is "off". I have no idea which is powering the beeper, or if the periodic beeping is trying to tell me something. I'll report back after work this evening. I think after I switched to UEFI boot in the RAID card the option to get into RAID BIOS using the keyboard disappeared. How do I do this? Or maybe I won't be able to if the overheat self-protect is active. But I should be able to see that by watching the fan.
 
I don't know; my server motherboard boots UEFI and still shows the Areca initialization and status, with a few seconds' opportunity to enter the setup afterwards. To be fair it's still consumer-grade stuff, just an Asus Prime X370-Pro with a Ryzen 1700X. I almost wish I could turn it off, as that's 80% of my boot-up time, but I never even considered it. You might also be able to connect over Ethernet if you had that set up.
 
So I removed the 1882ix-16 and Windows booted fine. Then I removed the battery backup from the RAID card, and it stopped beeping. It was the beeper on the battery backup that had been beeping, not the RAID beeper. I put the RAID card back in the PC and it booted fine. So it looks like the battery backup had some sort of problem, crashed the computer causing a reboot, then wouldn't allow it to boot? WTF? This is nuts! I don't even know what happened to the backup battery. It is only 4 months old, although I don't know the battery manufacture date.
 
My Areca 1883i (out of warranty) has frozen up now twice, randomly, about 2 weeks apart. There's nothing in the logs on the Areca device. The http web manager goes dark when it happens and I can't access the card. All attached storage goes into read only mode and/or appears "corrupted" to the server. Upon reboot, everything is perfectly fine. It has me quite worried. I've run volume checks and filesystem checks successfully. I am in the process of doing updated full backups on drives attached via an LSI IT mode card using dm-raid, in case something terrible happens. Does anyone know what could be causing this all of a sudden? Nothing else on the system-level has changed. The only thing I've done on the Areca card in the past month was add some pass through disks, which I have now removed as a test. A few months ago I added a battery backup unit.
 
So I have an Areca 1883ix-24 RAID card with an external SFF-8644 port for expansion. Once this box filled up, I went out and built another expansion box using the Areca ARC-8028 24-port expander box. I connected the two via each device's SFF-8644 port, but the 1883ix can't see the expansion box.

The light on the back of the 1883ix card where the port is has a slow blinking green light, but nothing on the ARC-8028 expander box indicates the connection is active. I've scoured the manuals to see if I need to do anything to initialize the connection, but I can't find anything. Also, my ARC-8028 didn't come with the RJ-11 serial cable, so I'm SOL there.

Has anybody gotten these two to work together before?

Thanks.
 

You're really going to need the serial cable so you can use the CLI. You should be able to build one with an RJ11 and a DB9: RJ11 pin 1 is RTS, 2 is RX, 3 is TX, and 4, 5, 6 are ground (there are pinout pics in the manual). First, do you have the enclosure powered up with either two Molexes to the adapter or one PCIe power connector? If you bought this pre-owned, I am guessing that either you have a bad expander, bad power, or it is in Zone mode instead of normal mode, which you can only change with the CLI (hence the need for the cable).
 

I have one PCIe connector powering it. It was bought from arecadirect.com, but it could have been an open-box unit because it didn't come with anything other than the expander itself. The expander powers up fine and I can move through the LCD menu, and everything shows as OK. I'll make the cable and report back, thanks.
 

And to be sure, this requires a 6P6C RJ11 connector?
 

So I made the cable but was not able to serial in. When I powered the expander box off and on, I got output over serial asking for a password, but after typing in 0000 nothing happened. Frustrated, I powered off both the expander box and the main box with the RAID card, and voilà, now they are both connected. The manual fails to mention that the main card needs to be rebooted in order to pick up the expander box. Typical Taiwanese trash documentation, but whatever. Thanks for the help.
 
Has anyone tried any of the 14TB HGST drives?
I've got an Areca 1882ix and wanted to soon purchase some drives with model WUH721414AL5204 (i.e. P/N 0F31052) and hook them up.
They're not on the compatibility list but that means no one tested them, it doesn't necessarily mean they don't work.
I honestly see no reason why they wouldn't; just thought I'd ask here first.
 
You are correct: just because they are not on the hardware compatibility list does not mean they will not work. That being said, it doesn't mean they will work without issue either. Since you are looking at the SAS model you are far less likely to have issues (compared to some of the large enterprise SATA drives that have had issues with timing and dropouts), but until each is on the other's HCL (and sometimes even that is not a guarantee) you aren't assured issue-free compatibility. One of the reasons you pay through the nose for HPE, Dell/EMC, NetApp, etc. is the fact that if they sell you something and it doesn't work, you can at least be put in touch with the people who can make it work, even if it involves spinning up a beta patch to bring you back up. White box can get you better performance for (lots) less money, but you are further out of the loop for weird compatibility issues.
The best idea is to order them from someone with a good return policy, spin up a dozen in an array, and torture test them for a week. The most likely compatibility problems, dropouts and panics, are likely to show up in that time frame.
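The torture-test idea can be sketched in miniature: write pseudo-random blocks, read them back, and verify checksums, which is roughly the write/verify pass that real burn-in tools like fio or badblocks perform at scale and for much longer. A toy version, assuming you point it at a scratch file on the new array, never at a raw device holding data:

```python
import hashlib

def burn_in(path, block_size=1 << 20, blocks=8, seed=b"burnin"):
    """Write pseudo-random blocks to `path`, read them back, and return
    True if every block verifies. block_size must be a multiple of 32
    (the SHA-256 digest length used to generate the data)."""
    digests = []
    with open(path, "wb") as f:
        key = seed
        for _ in range(blocks):
            # Derive the next block deterministically so the read pass
            # can be checked without keeping the data in memory.
            key = hashlib.sha256(key).digest()
            block = key * (block_size // len(key))
            digests.append(hashlib.sha256(block).hexdigest())
            f.write(block)
    with open(path, "rb") as f:
        for expected in digests:
            if hashlib.sha256(f.read(block_size)).hexdigest() != expected:
                return False
    return True
```

This only exercises the filesystem path for a few megabytes; for a real week-long test you'd want a dedicated tool with direct I/O, mixed random/sequential workloads, and SMART monitoring alongside.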
 
Does anyone know if it is likely that Areca will create a driver for ESXi 7.0 for the 1882 series? The 6.7 driver (1.40) doesn't work.
 
How in God's name is this thread already a decade old?
Cards just won't die. I still have a 1882i in daily use in my colocated Supermicro 846. Never imagined I'd still be using Areca 13 years later after purchasing an 1130ML in 2007. Things were a little more exciting back then. Here's what it looked like in action with a whole 7TB:

[Attached image: DSC00553.jpg]
 
I've noticed recently that my old array of 4TB HGST drives (#000) has started taking longer and longer to do its monthly checks. The 8TB WD Red array (#001) has maintained its usual time frame. The old array had always taken 14-15 hours, all the way back to April 2018 (as far back as the log goes). I haven't started using the computer or array in a different way; does this mean anything?
[Attached screenshot: 1586967697648.png]
 

Probably just more full, and fragmented.
 
Curious. I know it's different since it's spread across disks, but Windows does defragment them weekly. Anything else I should do?
They have been at 99% full since early 2018 when I filled up unused space with burstcoin plots. As time goes on I delete plots as I need space. On this array, 17tb out of 21.7tb usable are burstcoin plot files that haven't changed since then. The bulk of new data goes to the other array (#001) so I would say easily 95-98% of the data on this array has been static for a long time.
 
It doesn't much matter if the data is static; performance degrades because new writes are much more likely to be fragmented. Everybody knows that if you want maximum performance you should keep your hard drive(s) less than full. If you're under 80% full and still seeing poor performance, you might try a real defragment, which I don't think Windows does. Auslogics is good, but there are others.
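For the "keep it under ~80% full" rule of thumb, a quick scripted check is easy; a stdlib-only sketch, where the mount points listed are placeholders for your array's mount point or drive letter:

```python
import shutil

def percent_used(path):
    """Percentage of the volume holding `path` that is in use."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total

# Flag volumes past the ~80% mark discussed above.
for mount in ["."]:  # e.g. ["D:\\"] or ["/mnt/array0"] on a real setup
    if percent_used(mount) > 80:
        print(f"{mount}: over 80% full - expect more fragmented writes")
```

The 99%-full array in the post above would trip this immediately; the check says nothing about existing fragmentation, only about the headroom the allocator has left.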
 
I've got a 1223-8i in my desktop machine. I bought an Adaptec 5805 for my backup computer; I really just wanted it to connect JBOD. The problem is that the Adaptec doesn't seem to do passthrough, so any drives with existing data will get zilched in the process of adding them.

I'm thinking of getting another Areca card. Partly because I believe they pass through drives that aren't part of a RAID set (so the OS will see them), and partly because if I have a hardware failure I'll have a compatible RAID board I can swap in in a hurry.

Bearing in mind that 10-12 TB disks might be affordable in the next few years, what is the oldest board I should consider? (Speed is not important for the backup machine.) I found an 1882i x8 for about $200 US, which sounds a bit too good to be true. Otherwise it's 1680 x12s that are generally available for the same price.

(or should I just save my money and get a cheap LSI card?)
 