ARECA Owner's Thread (SAS/SATA RAID Cards)

Just joined this forum, first time OP here...

Dell PowerEdge R900 - VMware ESXi 5.5 U3 - Areca 1882X configured as RAID, attached to a Proavio DS316J enclosure with 16 x 3 TB Seagate SV35 drives

1 RaidSet 14 disks with 2 hot spares

1 RAID5, 40 TB VolumeSet on this RaidSet

7/20: E#2SLOT#09 failed, the hot spare picked up, and a rebuild began onto E#2SLOT#11

Rebuild ran through the night

7/21: At 6:00 AM, E#2SLOT#11 failed

Both E#2SLOT#9 and E#2SLOT#11 showed as failed on the enclosure

I pulled both failed drives, replaced them, and added the replacements as hot spares

Areca RAID Storage Manager shows the RaidSet as 'Rebuilding', the VolumeSet as 'Failed', 1 slot as failed, 2 hot spares, and 1 free disk. See pics, please.

In addition, I had an 1883X configured as JBOD in the same R900. A PCIe Fatal Error started flashing on the server. I shut it down and pulled the RAID controller - reboot - still the PCIe error. Put in an 1882 as a replacement and pulled the 1883 JBOD - no more PCIe Fatal Error.

[Attached: screenshots of the RAID Storage Manager status]

So the questions I have are:
1) Why didn't the second hot spare pick up when the second drive failed?
2) What is the real status of the drives?
3) Am I shooting myself in the foot trying to run 2 PCIe controllers in this server?

Any advice/information is appreciated.


Jeff-
A 40 TB / 14-drive RAID5 array is like playing Russian roulette: it's not a question of if you will lose, just when. Why oh why did you create a R5 with 2 HS instead of a R6 with 1 HS? Can you please post the complete log from the card? Unfortunately, it sounds like you lost 2 drives in an array that could only handle losing 1, which is why it shows as failed, but the log will tell all. Do you have the original RAIDSET and VOLUMESET specifics? Do you know which drives came out of which slots? Unfortunately, even though the SV series drives support ERC, I have not seen great results with them in arrays.
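For anyone curious why a wide RAID5 on 3 TB drives is such a gamble, here's a rough back-of-the-envelope sketch in Python. It only models unrecoverable read errors (UREs) during the rebuild, assumes a URE rate of 1 in 1e14 bits (a common consumer-drive spec; the SV35 datasheet may quote something different), and ignores outright second-drive failures, so treat the numbers as ballpark only.

```python
import math

def rebuild_success_probability(surviving_disks, disk_tb, ure_per_bit=1e-14):
    """Rough odds of reading every surviving disk in full during a rebuild
    without hitting a single unrecoverable read error (URE).

    surviving_disks: members that must be read cleanly to rebuild parity
    disk_tb:         capacity per disk in decimal TB
    ure_per_bit:     quoted URE rate, e.g. 1e-14 for many consumer drives
    """
    bits_to_read = surviving_disks * disk_tb * 1e12 * 8
    # P(no URE) = (1 - p)^bits, well approximated by exp(-p * bits) for tiny p
    return math.exp(-ure_per_bit * bits_to_read)

# 14-disk RAID5 with one failed member: 13 surviving 3 TB disks must read cleanly.
p = rebuild_success_probability(surviving_disks=13, disk_tb=3.0)
print(f"Chance of a RAID5 rebuild with no URE: {p:.1%}")   # roughly 4%

# With RAID6, a URE hit during a single-disk rebuild is still covered by the
# second parity, so one bad sector no longer takes the whole array offline.
```

Under those assumptions the rebuild only completes cleanly a few percent of the time, which is the "not if, but when" point in a nutshell.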
 
Finally moving my 4TBx8 array from RAID5+HS to RAID6. Migration is estimated to take 7 DAYS. Pretty crazy.
 
Can anybody help me figure out how to get my Areca 1882-ix-16 RAID card to boot in my new Windows 10 PC? It is a Gigabyte GA-X99P-SLI motherboard. It is probably something stupid, but I haven't been able to figure it out. I loaded the .INF drivers once in Windows, but that was no help, as the card doesn't even go through its own BIOS when it boots - it just goes straight to Windows. The boot drive is an SSD connected directly to the motherboard.

I've tried changing some BIOS configurations regarding PCIe storage boot from legacy to UEFI, but that seems to make no difference.

Thanks!
 
Not seeing the BIOS of the card is a bad sign. First, can you try it in another machine to make sure the card is POSTing properly? Next, try another PCIe slot in your X99P that is at least x8 electrical. Unfortunately, a lot of the high-end enthusiast boards are tuned for SLI/Crossfire (even more so the ones touted for Quad), and that has caused issues with HBAs in the past. When using enterprise-level hardware, it is best to use a server-level board for compatibility.
 

Thanks for the reply. The card was running fine two days ago after a firmware update to the latest in anticipation of assembling the new computer. Are we trying to figure out if unplugging the 1882-ix-16 card from the old motherboard and plugging it into the new motherboard killed it? I guess anything is possible. Testing this would require the whole teardown/reassembly cycle of the computer innards. I suspect it will boot just fine when plugged into the old motherboard. Just to be clear, I am using the same Norco 4220 chassis, SAS backplane and Hitachi 3TB disk array that were used with the old motherboard. There is a possibility I did not re-connect the SAS connectors in the same order between the card and the SAS backplanes, although I don't think this would cause my problem.

Currently I have the single GTX-770 video card plugged into slot 1, and the HBA plugged into slot 3, which do not share bandwidth according to the motherboard manual (pasted below). I am using the cheaper i7-6800K with 28 lanes instead of 40, similar to the older i7-5820K. Trying a different slot is easy enough, perhaps I should try slot 2.

Currently the UEFI firmware is configured to auto-select PCIe2.0 or 3.0. I guess I could manually set this to PCIe3.0 as both the video and HBA cards support this. Just trying to figure out if there is some UEFI configuration that is breaking compatibility that I could change.

- Lifespeed

2 x PCI Express x16 slots, running at x16 (PCIE_1/PCIE_2)
* For optimum performance, if only one PCI Express graphics card is to be installed, be sure to install it in the PCIE_1 slot; if you are installing two PCI Express graphics cards, it is recommended that you install them in the PCIE_1 and PCIE_2 slots.
2 x PCI Express x16 slots, running at x8 (PCIE_3/PCIE_4)
* The PCIE_4 slot shares bandwidth with the PCIE_1 slot and the PCIE_3 slot shares bandwidth with the PCIE_2 slot. When the PCIE_4/PCIE_3 slot is populated, the PCIE_1/PCIE_2 slot will operate at up to x8 mode.
* When an i7-5820K CPU is installed, the PCIE_2 slot operates at up to x8 mode.
(All of the PCI Express x16 slots conform to the PCI Express 3.0 standard.)
2 x PCI Express x1 slots
(The PCI Express x1 slots conform to the PCI Express 2.0 standard.)
 
You probably have legacy option ROMs disabled in UEFI. Try changing that (you'll need CSM enabled too). That should at least get you into the card's management interface where you can change the BIOS from INT13 to UEFI (under Advanced Configuration).

If that doesn't work, stick it in another computer or use in-band or web management to change said setting. Even if the option ROM isn't loaded, the card should still be usable for everything but booting. If changing said setting doesn't help, might just be an incompatibility issue as mentioned earlier.
 

What you mention certainly makes sense; INT13 sounds vaguely familiar now that you mention it. I will get on it this evening. Enthusiast motherboard notwithstanding, this thing should run a PCIe card. I am not trying to boot Windows from the RAID array, it's just storage.

Thank you!

- Lifespeed
 
Just a quick update, the motherboard really seemed to want the raid card in slot 2, not slot 3. These are shared-bandwidth slots, but it seemed to want the HBA in the "primary" of the two. The video card is in slot 1 of the slot 1/4 shared bandwidth pair. The 1.53 firmware for the 1882-ix-16 does not seem to include the INT13 or UEFI settings anymore. Perhaps it is configured automagically these days. Also, I disabled legacy boot ROM compatibility and CSM without causing any issues booting the HBA.

Seems not entirely correct but as long as the RAID array works in an "enthusiast" motherboard I am happy. Something tells me a Supermicro board wouldn't include USB 3.1, USB C, and Thunderbolt 3. Should future-proof me for a few years. Funny thing is I don't have a single device using TB or USB C . . . but I will.

Thanks for the suggestions,

- Lifespeed
 
The 16gb stick of ECC memory I ordered arrived today, I'll give it a shot when I get home in a few hours and let you folks know the results... Standby

Is this memory the ECC DDR3 1333MHz to replace the 1GB in the 1882 HBA? How did it work out, was there a noticeable improvement in latency and/or throughput? Memory is a lot cheaper now than when I bought my 1882-ix-16, I am thinking about upgrading the memory if it helps.
 
Can anybody here address the utility of upgrading an 1882-ix-16 from 1 GB DDR3 ECC RAM? If it speeds it up I'm all for it.

Also, the CPU on this card has always run blazing hot in my Norco 4220 with its 3 x 120mm fan mid-plate. If I let the fans turn too slowly it gets up to 95C, so I keep the fans spinning fast enough to cool it down to about 80-82C, which appears to be the best it will do without putting the fans in loud mode. And these are quiet Arctic Cooling PWM fans. I also have two 80mm rear exhaust fans in the case, also PWM controlled. The RAID card fan is functioning, turning over 7000 RPM. I guess this RAID processor is known to get hot?
 

The 1882 runs hot. Per Areca's Tech Note on it, safe temps are up to 105C; most of mine run at 80 +/- 2 in a 2U/4U Supermicro case with good airflow. In a ProLiant with wind-tunnel fans, they run at 70.
 
I'm hoping someone here can help me out.

I have an Areca ARC-1882IX-12 card with the battery module, and recently the battery module has started showing 'failed' notifications. Today the card started beeping, and in the event viewer it has failed again, but this time with an undervoltage warning.

Can I remove the battery module and use the card without it for now until I buy a replacement battery? I've been using the battery module since I purchased the card so I have never tried setting it up without it, not sure what difference it will make.

Any settings I should change on my raid setup to prevent problems with the battery removed?


Any help would be appreciated!
 

Yes, you can disconnect it and run without the battery. If you do so, it is safest to switch the cache from "write-back" to "write-through", lessening the risk that a power outage would leave data in HBA RAM, unwritten to disk. This lessens performance somewhat.
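To illustrate why write-through is the safer choice without a working BBU, here's a toy sketch in Python. It is purely illustrative (not how the firmware actually works): with write-back, the controller acknowledges writes while the data still only lives in its DRAM, so a power cut loses it; with write-through, every acknowledged write has already hit the disks.

```python
class ToyController:
    """Toy model of a RAID controller's cache policy. Illustrative only."""

    def __init__(self, policy):
        self.policy = policy      # "write-back" or "write-through"
        self.cache = []           # data sitting in controller DRAM
        self.disk = []            # data safely on the platters

    def write(self, block):
        if self.policy == "write-through":
            self.disk.append(block)      # ack only after the disk has it
        else:  # write-back
            self.cache.append(block)     # ack immediately, flush later
        return "ack"

    def power_loss(self):
        lost = list(self.cache)          # no battery: DRAM contents are gone
        self.cache.clear()
        return lost

wb = ToyController("write-back")
for b in ("A", "B", "C"):
    wb.write(b)
print(wb.power_loss())   # ['A', 'B', 'C'] - acknowledged but never written

wt = ToyController("write-through")
for b in ("A", "B", "C"):
    wt.write(b)
print(wt.power_loss())   # [] - nothing acknowledged was lost
```

That lost-on-power-cut window is exactly what the battery (or switching to write-through) protects against, at the cost of some write performance.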
 
Re: the earlier suggestion about enabling legacy option ROMs/CSM and switching the card's BIOS from INT13 to UEFI under Advanced Configuration:

One more note on this issue, as UEFI motherboard firmware has been standard for a while now.

Looking through the web interface for my ARC-1882-ix-16 PCIe 3.0 (I did not see this in the BIOS), under Advanced Configuration I saw a dropdown menu for auto, EFI, UEFI, BIOS, and INT13. It had been on auto, and I was still seeing the usual 44-second countdown at bootup like I have for the past 4 years. I switched it to UEFI and no longer see that slow roll. In fact, I thought it had failed to boot the card, but the RAID array is still online and apparently working fine.

Sure boots a lot faster!
 

Good to know 82C to 84C temps are within reason. This computer also does HTPC duty in the living room, so keeping it relatively quiet is important. The case fans don't have to spin too fast to keep the RAID card under 84C. Another nice enthusiast motherboard feature is highly-configurable PWM fan control.

I have to say that is quite a setup of computers, RAID HBAs and hard drives you have. I thought I was something of an enthusiast having a single RAID array in the media server, LOL.
 
I have a 1214-4i with 4 (3TB) drives in a Raid 1+0, single volume configuration. The RaidSet Hierarchy shows a capacity of 4.5TB. Shouldn't it be closer to 6TB?
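Nobody answered this one in the thread, but a quick sanity check on the arithmetic may help (Python, using decimal TB the way Areca reports capacity). Four 3 TB drives in RAID 1+0 should show about 6 TB (roughly 5.5 if something is displaying binary TiB). 4.5 TB is what you'd get if only three of the four disks actually ended up mirrored in the RaidSet, so it may be worth checking whether one drive was left as free/spare - that's a guess, not a diagnosis.

```python
def mirror_stripe_usable_tb(num_disks, disk_tb):
    """Usable capacity of a mirror+stripe set in decimal TB
    (half of the raw capacity goes to mirroring)."""
    return num_disks * disk_tb / 2

print(mirror_stripe_usable_tb(4, 3.0))                 # 6.0 TB expected for 4 disks
print(mirror_stripe_usable_tb(4, 3.0) * 1e12 / 2**40)  # ~5.46 if reported in TiB
print(mirror_stripe_usable_tb(3, 3.0))                 # 4.5 - matches the reported
                                                       #   size if only 3 members
                                                       #   are actually in the set
```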
 
I'm replacing my 1882ix-16 with a 1883ix-24. I had forgotten how loud that little fan really is. The rest of the rig is watercooled, so that 40mm blower fan they put on the heatsink was driving me nuts. So....I'm attempting to watercool it....I found some Koolance blocks that fit decently (but not perfect). Here is my current status....

[Attached: three photos of the Koolance water blocks test-fitted on the 1883ix-24]


I'm planning on draining the system tonight and sticking this in. Will report back on temps. My 1882ix-16 ran ridiculously hot - probably averaged 110C when it wasn't being used. I had taken the fan off and replaced it with a big 120mm fan that was quiet... but probably wasn't doing the job as well. Anyway, I'm hoping this will get me more normal temps, and a slightly faster card.

Any other failure/success stories in here with watercooling an Areca card? When I put it in with air cooling, I was pleased with how quickly it built some test arrays, but I didn't have any formal benchmarks to compare to. I'll run some disk benchmarks once I get this back in the build. It will control a 16-SSD RAID5 array and a 4-SSD RAID0 array.
 
I'm going to claim a tentative and initial success. I drained the system, got the 1883 installed, and filled it back up. After a minor leakage issue... got it leak-free. Then plugged everything back in and... voila! Temps are good and zero fan noise. I had it migrate a RAID5 volume to a RAID3 volume to stress the card, and the CPU never got above 55C, the controller never got above 39C, and the chip stayed in the 40s the entire time.

The only thing I'm disappointed in is that the fittings are about 2 mm too long to still use the PCIe slot below the card. I have the 1883 in the 3rd x16 slot, and it leaves the 4th one juuuuuussst blocked. It's real close. There is another block on the Koolance site I might try in order to reclaim this PCIe slot. It's actually a little taller than this one... but has ports on the side of the block instead of just the top. If I can make that one work... I would reclaim the 4th PCIe slot.
 
Hi, I am creating a new volume on my 1882 card and would like it to be encrypted. However, when I select encryption while creating the volume, the volume does not show up in Windows (my OS). When I do not encrypt, the volume shows up and I can format it in Windows, etc. What am I doing wrong? Thanks! Hammer.
 
Do your drives support self-encryption? Pretty certain that is required as the Areca controllers don't actually do any of said encryption themselves.
 
My drives are WD 8TB Reds, which do not support encryption, but enabling encryption is an option on the Areca... do you mean it only works with drives that support encryption? Thanks.
 

Blue-
The 1882 and 1883 have on-chip encryption support at the hardware level, with no performance penalty separate from the RAID itself. It supports HDDs and SSDs. It only works properly on data volumes, not boot/system volumes. Here is a link to the original Areca notification.
 
Hmm, did not know. First result I found was this, so I presumed it was SED or bust: Areca Technology Corporation

Unfortunately can't really give any insight then. You could try contacting Areca support. They seem to be decent about responding.
 
So I'm having some performance issues that I just can't track down. I have 2 media servers as follows:

Server 1 - Supermicro X10SRL-F / E5-2683 v3 / 8x128GB Samsung PRO840 in RAID0, 12x6TB WD RED in RAID6, 12x4TB ST4000DM000 in RAID6
Server 2 - Supermicro X10SRL-F / E5-2630L v3 / 60x2TB in RAID60 (12x5), misc drives, but mostly Hitachi 7200 Deskstars

I just upgraded Server 1 from a 1882IX-16 to a 1883IX-24 thinking that my issue was with the 1882IX-16, but I found performance stayed pretty much the same. Both servers are running Win10 64 build 14393 with the .32 Areca drivers. Also running the latest 1.54 firmware on all the controllers. Server 2 is running a 1882LP btw.

So I ran some benchmarks using Anvil, and also tested unraring a ~40GB file from/to the SSD RAID0 array. After that, I yanked the 1883 controller and put it in another rig (Asus Z97-A / i5-4690K) just to rule out issues with the X10 mobos and/or SAS2 backplanes used for the spinners. The SSD 8 pack is direct connected to the 1882/1883 via fan-out cables.

So here's what I got:

[Attached: benchmarks.JPG - benchmark comparison table]


That last row in yellow is showing the performance I was hoping to get. Those numbers were pulled from this post:

http://www.xtremesystems.org/forums...e7168d11b796&p=5236003&viewfull=1#post5236003

So seeing he was running the ancient .29 Areca Windows driver and Server 2012 R2, I figured I'd roll my test rig back to Win 7 and use the .31 drivers. As you can see from the above chart, it helped in some areas and made things worse in others. Most impressive was the 207 MB/s on the unrar test.

With 8x 840 PROs in RAID0 behind a 1883, I should be getting some good numbers, but I don't. Hell the RAID60 with 60 spinners beats it in the sequential read and write tests.

Any ideas what could be going on here or suggestions on other tests to run?
 
I decided to run the built-in Areca HDD xfer speed test and here is what I found:

[Attached: per-slot xfer speed screenshots - SLOT09/SLOT10 (850 PRO) and SLOT11 through SLOT16 (840 PRO)]


So it looks like a pair of bad 840PROs (slots 14 and 16) are pulling down the performance.

I'll pull them and test in Samsung Magician.
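For anyone following along, here's a rough rule of thumb in Python for why a couple of slow members drag the whole RAID0 down. The per-drive numbers below are hypothetical, and real-world results depend on stripe size, queue depth, and the controller, so treat it as a ballpark sketch: sequential throughput of a stripe set is roughly member count times the slowest member, because every full-stripe transfer has to wait on that drive.

```python
def raid0_sequential_estimate(member_mbps):
    """Very rough sequential throughput estimate for a RAID0 set:
    every full stripe touches every member, so the slowest member
    paces the whole array. Ignores controller and cache effects."""
    return len(member_mbps) * min(member_mbps)

healthy  = [500] * 8                # eight SSDs all doing ~500 MB/s (hypothetical)
degraded = [500] * 6 + [150, 150]   # two tired members (hypothetical numbers)

print(raid0_sequential_estimate(healthy))   # ~4000 MB/s
print(raid0_sequential_estimate(degraded))  # ~1200 MB/s - two slow drives cap
                                            #   the whole 8-disk stripe
```

That pattern matches what the per-slot test showed: a couple of weak SSDs can hold an otherwise fast array well below where it should be.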
 
Re: the watercooled 1883ix-24 build above:

I love it.
 
Has anyone tried 4Kn disks on an Areca ARC-1261ML?
Firmware version V1.49, 2010-12-02

I don't think it will work, because Areca says they support 4Kn from FW 1.52 and above, but V1.49 seems to be the last firmware version for the ARC-1261ML.

I would like to use the HGST HE10 10TB drives, they are available in 4Kn or 512e and I prefer 4K native

*Edit*

Areca Support confirmed it would not work with 4Kn disks and V1.49 is the last FW, so I will go with 512e drives.
 
If I were you, I'd upgrade RAID cards instead. Doing a rebuild on 10TB drives will take forever on something that old. If you can afford a handful of 10TB drives, then surely you have the budget for one of the 1883 series cards or the likes.
 
I thought about that, but it's in an old server (PCIe 1.0 / SATA 300), so there would be no speed boost from all of that. Then I'd need to replace the server, which would cost at least 6 grand, and there is no budget planned for that. I don't need extra speed at the moment; the server is doing fine and is reliable.
The disks are still slower than what the machine can do, so it will be alright.
I'll wait until PCIe 4.0 for a server upgrade. :cool:
 
Hey guys new here and first post.

So I have the ARC-1883i which has 2 internal SFF-8643 ports and one external RJ45.

I have eight 3TB HGST NAS drives connected in RAID5 and its been working flawlessly for years but now I'm ready to expand.

I'm a bit of a noob to all this stuff, so forgive my ignorance, but from my understanding I need an external SFF-8644 port to be able to expand, and my card doesn't have one.

So the question is: how can I expand? This card is supposed to be able to support up to 256 drives.

Thanks for any help on this.
 

You'll need an expander.
Some expanders have external SFF-8644 ports.

Example: http://www.areca.com.tw/products/sascableexpander8028.htm
 

Thanks for the reply. My question is how would I connect the expander you listed to my current RAID controller?

My controller has 2 internal SFF-8643 ports that are occupied by my 8 internal HDDs.

The expander you referenced has 6 internal SFF-8643 ports and 3 external SFF-8644 ports (1 in 2 out)

Could you explain how I would connect my raid controller to this expander?
 

As mentioned above, you will need a SAS expander to expand past the eight drives you currently have. Also, because your large drive size raises the chance of an additional failure during a rebuild, PLEASE consider migrating your array to RAID6 when you expand it (migrate to RAID6 first).
 
Also, I looked at that expander before, but my RAID controller doesn't have the external SFF-8644 port to connect to it. With that said, I'm assuming that I'll need to connect one of the SFF-8643 internal ports on my controller card to one of the internal SFF-8643 ports on the expander?
 
My above post is the most logical answer - sorry for the dumb questions, I'm very new to all of this.
 
Okay, so I contacted Areca support asking the same question; here are the responses...

Dear Sir/Madam,
There are two methods to connect more drives with the 1883i:
1. Use enclosures with an expander chip on the backplane.
2. Use an adapter board to convert SFF-8643 to SFF-8644.

--

So to clarify, I could use your ARC-8028 expander box connecting SFF8643 to SFF8643?

--

Dear Sir/Madam,
The SFF-8643 connectors on the ARC-8028 are used for drive connections, not for the host connection.
You have to use an SFF-8643 to SFF-8644 cable if you would like to connect the 1883i to an ARC-8028.

--

Hope this helps anyone in the same boat as me :)
 
I would just like someone to confirm my logic here:

My 1280ML comes with "6*Min SAS 4i" which I'm assuming are 6x SFF-8087 ports. The card supports up to SATA II ("SATA300") disks. Each SAS port has a breakout cable with 4x SATA II ports.

Since SFF-8087 ports run up to 12Gb/s, and SATA II ports run up to 3Gb/s, there is no performance difference if it is running 4x SATA II disks on a single SFF-8087 port compared to a single SATA II disk each spread out among 4x SFF-8087 ports. Am I correct?

A newer version of this card, the 1284ML, supports SATA III disks over SFF-8087 ports. Since SATA III disks can run up to 6Gb/s, there would be a performance impact if 4x SATA III disks are running on a single SFF-8087 port on the 1284ML. Correct?
 
Correct, performance is identical. You just get the convenience of using a single cable for four drives. Note that you'll still be limited by the processor on the respective cards for total throughput (like any other HBA).
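To put some numbers on the lane math, here's a small Python sketch. It assumes the usual 4 PHY lanes per SFF-8087 connector and the quoted per-lane link rates; real throughput is lower after 8b/10b encoding and controller overhead, so these are raw link numbers only.

```python
LANES_PER_8087 = 4          # a mini-SAS SFF-8087 connector carries 4 PHY lanes
SATA2_GBPS_PER_LANE = 3.0   # SATA II link rate per lane

def port_aggregate_gbps(lane_gbps, lanes=LANES_PER_8087):
    """Raw aggregate link rate of one SFF-8087 port (ignores encoding overhead)."""
    return lanes * lane_gbps

def per_disk_gbps(disks_on_port, lane_gbps):
    """Each disk on a breakout cable sits on its own lane, so up to four
    disks per port each get a full lane to themselves."""
    assert disks_on_port <= LANES_PER_8087
    return lane_gbps

print(port_aggregate_gbps(SATA2_GBPS_PER_LANE))   # 12.0 Gb/s per SFF-8087 port
print(per_disk_gbps(4, SATA2_GBPS_PER_LANE))      # 3.0 Gb/s: 4 disks on 1 port
print(per_disk_gbps(1, SATA2_GBPS_PER_LANE))      # 3.0 Gb/s: 1 disk per port
# Either layout gives every SATA II disk a dedicated 3 Gb/s lane, which is why
# spreading the disks across ports makes no difference on the 1280ML.
```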
 