ARECA Owner's Thread (SAS/SATA RAID Cards)

Offering: The 1280Ml v 2.0 heatsink is 45mm x 45mm x 10mm, mounting holes 55mm apart.

Wanted: A summary of the differences between versions A, B and 2.0 (which mine is)... any idea which is the latest and what changed between them all?
 
Offering: The 1280Ml v 2.0 heatsink is 45mm x 45mm x 10mm, mounting holes 55mm apart.

Wanted: A summary of the differences between versions A, B and 2.0 (which mine is)... any idea which is the latest and what changed between them all?

Thanks! Glad I have a few VF700s here... :) More than enough, possibly even without the fan spinning.
 
Hi,
Could someone tell me which battery part number is compatible with the Areca ARC-1680ix-16 card? I've seen several part numbers but have no idea which one is compatible with my card.
Thanks!
 
Does anyone know the status of the 1883 on the stock Linux kernel drivers? I'm looking at an upgrade from the 1260, and I've seen reports of bug fixes needed in the Areca drivers with the 1883, but I can't work out how to translate the revisions quoted by Areca into kernel releases. I can patch if necessary, but I've gotten very tired of needing multiple patches for multiple bits of hardware and I'm trying to eliminate that.
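(For what it's worth, this is roughly how I've been checking which driver version a given kernel actually ships, so I at least have one side of the mapping. Just a quick sketch, assuming the in-tree Areca module is still named arcmsr and exports a version field to modinfo:)

Code:
# Rough sketch: print the arcmsr driver version bundled with the installed kernel,
# so it can be compared against the revisions Areca quotes in their release notes.
# Assumes the in-tree Areca module is named "arcmsr" and that modinfo is available.
import subprocess

version = subprocess.run(
    ["modinfo", "-F", "version", "arcmsr"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print("in-tree arcmsr version:", version)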
 
Finally, I found a compatible battery. Now I have an issue with my card: one of my RAID sets is in the rebuilding state, the rebuild is really slow and then gets stuck. Right now the RAID is rebuilding at 91.6%, but it will get stuck again in about another 1%. Any idea?
 
Hi,

Two quick questions

1) Is the Areca 1261 supported in 8.1?

2) Will I lose data if I rearrange the drives of a RAID6 array (e.g. move drives between channels on the card itself)?

Thanks,
 
Hi,
Could someone tell me which battery part number is compatible with the Areca ARC-1680ix-16 card? I've seen several part numbers but have no idea which one is compatible with my card.
Thanks!
I ordered an ARC-6120BA-T113, which is fully compatible with the ARC-1680ix-16.

Finally, I found a compatible battery. Now I have an issue with my card: one of my RAID sets is in the rebuilding state, the rebuild is really slow and then gets stuck. Right now the RAID is rebuilding at 91.6%, but it will get stuck again in about another 1%. Any idea?

I found the cause: another drive was defective. I changed the drive and created a new array, and everything is good now.
 
I've found the Arc HTTP software has a terrible resource leak under Windows 8.1 and Windows Server 2012. I haven't been able to get Areca to respond.

How is their tech support? I am trying to decide between Areca and Adaptec. I have contacted Areca with some questions and never received a response. Adaptec has been very good with responding, which is a big plus.

The main thing I like about the 1883 is how the ports are laid out coming off the back of the card. Makes water cooling video cards cleaner for one thing.
 
I still haven't got a response. The bug reproduces trivially with two different cards on two different machines.
 
Could you just use the out-of-band management through the Ethernet port? It's one of the best features of the Areca cards, after all. Not really sure why anyone even bothers with in-band these days unless you have an ancient card that only has that option.
 
Not really sure why anyone even bothers with in-band these days unless you have an ancient card that only has that option.
Probably because there still aren't a lot of out of band card options.
 
Probably because there still aren't a lot of out of band card options.
Maybe with other manufacturers, but seeing as we're talking Areca cards here, the only ones that didn't include it were the 2/4/8 port SATA ones and non-RAID HBAs. Every other card has it and I've been using them since 2007 (back when I bought an 1130ML). The 1261ML they mentioned has it too.
 
Could you just use the out-of-band management through the Ethernet port? It's one of the best features of the Areca cards, after all. Not really sure why anyone even bothers with in-band these days unless you have an ancient card that only has that option.
One card has it, one card doesn't. But why should I have to cable, administer, and secure the extra Ethernet device when I'd rather use the in-band option? Why do you think that Areca provides the in-band option if we're not meant to use it?
 
I have had a very similar experience and am curious how yours turned out, because I just got an RMA number from Tekram. I built my rig in Sept 2010 and have been running an 8x2TB Hitachi RAID-6 and a 4xTB RAID-5 off an Areca 1680i + HP SAS Expander in a Norco 4224. The motherboard is an EVGA 680i SLI with a Q6600 2.4GHz, Enermax Infiniti 720, and 6GB RAM, running Windows Server 2008 R2. (Also been on a UPS since day 1.)

Background
A few weeks ago the controller started freezing anytime a large file (greater than 1-2GB) is transferred to or from the array. The light stays on on the card and the RAID drive becomes frozen and unusable, but I can still access drives not on the RAID adapter and Windows continues to run. Windows logs Event 129, "Reset to device \Device\RaidPort3 was issued," and there have also been Event 11 entries, "The driver detected a controller error on \Device\Harddisk1\DR1." There are no events registered in the card's event buffer. While this is happening, the Ethernet web interface to the card is inaccessible, so I can't see what is going on with the array. Only a hard reboot brings the array back, until a large file is transferred and it freezes again. I have swapped NICs and tried different firmwares and drivers with the same problem. This started about 2-3 weeks ago without any HW or SW changes to the system, and the system has been on a UPS. I even tried a fresh OS install; short of swapping in a 1000W power supply, nothing has solved it. I started the volume check process, which has not yet completed, but I don't see how a parity issue in RAID 6 would freeze the entire card; correct me if I'm wrong.

Diagnosis
While a power problem is plausible as an explanation for why the card freezes under load, I don't know why it would start after a year of perfect operation on mostly new hardware, with decent power to 12 drives total, which can't be much more than 100 watts of draw.

From Areca support "it sounds like a defected controller if it stop response after crash."

Areca support sent me to Tekram, who issued me an RMA. I'm in the middle of scrambling to back up the 8TB before I ship out the card. Jacked-up HDD prices aren't exactly helping.

Questions
I'm very curious about other people's experience dealing with the Areca RMA process during the 1-3 year period. It says 3-year warranty, but there's some fine print that anything over a year may incur costs.

Also curious what the typical turnaround is, so I know how long I may be out of commission and whether I want to rebuild on a newer 1880 or 1882, or another brand.

EDIT: I never had any power messages however.

This was my second time RMA'ing my 1680ix-24 card. My system would boot fine, but upon reading from or writing to the card, the card would drop from the system. The error log was plagued with 1.2V errors. The repair fee was about $150 and the turnaround took 5 weeks. They replaced diode "D15". I believe this was the diode they replaced the first time around :rolleyes: Works fine now.
 
I am using an ARC-1223 controller with a 4036ML enclosure. The system contains 8x Hitachi drives. I planned to upgrade the system to bigger 4TB drives. After pulling the "old" 2TB drives (all working fine so far) to use them in other systems, NONE of the drives works anymore. I tested the drives on different PCs/servers... none of the 8 drives' motors spins up. I just connected the power cable to the drive. Any idea what causes this mystery?
Thanks!!
 
I am using an ARC-1223 controller with a 4036ML enclosure. The system contains 8x Hitachi drives. I planned to upgrade the system to bigger 4TB drives. After pulling the "old" 2TB drives (all working fine so far) to use them in other systems, NONE of the drives works anymore. I tested the drives on different PCs/servers... none of the 8 drives' motors spins up. I just connected the power cable to the drive. Any idea what causes this mystery?
Thanks!!

I can't remember how the Hitachi drives are configured, but if drives are configured to power up spun down, you'd see this behaviour. You'd either need the controller to send a spin-up command to the drives, which the Areca will do (at least with staggered spin-up enabled), or configure the drives to spin up at power-on. WD drives used to have a jumper for this, but I think my Hitachis are either soft-configured or "just worked"; it's been a while since I had to play with them.
 
I can't remember how the Hitachi drives are configured, but if drives are configured to power up spun down, you'd see this behaviour. You'd either need the controller to send a spin-up command to the drives, which the Areca will do (at least with staggered spin-up enabled), or configure the drives to spin up at power-on. WD drives used to have a jumper for this, but I think my Hitachis are either soft-configured or "just worked"; it's been a while since I had to play with them.

You almost have my name! And last name initial! :eek:. I am with 2 L's though. Greetings :D
 
Finally bit the bullet and got an ARC-1883i!
I'm coming from an ARC-1220 that I've had for years. I know that going from SATA to SAS controller is OK, and I won't be able to go back. But should the drives and current array just be auto-recognized by the new controller? Thanks!
 
Yes, just plug them in (best to maintain the same order) and you're done. If I'm not mistaken, your array will still be able to go back to a SATA controller because it was created with metadata that specified 16 arrays maximum per card instead of 128.
 
One card has it, one card doesn't. But why should I have to cable, administer, and secure the extra Ethernet device when I'd rather use the in-band option? Why do you think that Areca provides the in-band option if we're not meant to use it?

I've always had issues trying to dial into my Areca. Once I plugged in the Ethernet and slapped an IP on it, I've never been happier.
 
For some reason, when I plug in my new controller, its BIOS never comes up at boot. The machine goes right past the mobo POST screens and into Windows, and Windows doesn't see my drives, the array, or the card.

Machine specs:
ARC-1883i
ASRock 880GM-LE FX mobo
Phenom X6 processor
8GB ECC RAM

I checked the manual, and the controller says it's PCIe backwards compatible. My current ARC-1220 works just fine. I've tried it in two different motherboards (same make/model) and already RMAed the controller once. I've tried it both with and without the battery backup plugged in, and with/without the cables plugged in. What am I missing?
 
Unfortunately, you are using an extremely inexpensive single-PCIe-slot motherboard. Many of those boards have a BIOS/UEFI geared towards graphics cards, not HBAs. You are using a RAID card that is designed for a server motherboard, and it is very possible that your current board (even though it is compliant with your 1220) is not compatible with the 1883i. A few things to try: You said you RMA'd the board once; are you 100% sure the card is good (checked working in another machine)? Are the card and your motherboard updated with the latest BIOS (remember there are 3-4 different upgrade files for the Areca)? Have you disabled EVERYTHING that you aren't using on the motherboard (you can run out of address space), such as serial ports, accessory SATA controllers, etc.?
 
You are using a RAID card that is designed for a server motherboard, and it is very possible that your current board (even though it is compliant with your 1220) is not compatible with the 1883i.
What would be the specific cause of incompatibility?
Have you disabled EVERYTHING that you aren't using on the motherboard (you can run out of address space), such as serial ports, accessory SATA controllers, etc.?
How much address space does the 1883i require? (Wait -- how much address space does a serial port require?)
 
What would be the specific cause of incompatibility?
How much address space does the 1883i require? (Wait -- how much address space does a serial port require?)

Well, most of the incompatibility issues I have seen between consumer-oriented motherboards and HBAs were due to specific tweaks made to support SLI/Crossfire; often the card performs properly (albeit slower, for example in a 4x electrical slot) in a slot other than the primary 16x. On other boards it is more of a this-board-just-doesn't-like-this-card problem, with no particular reason showing. This is why (especially with enterprise equipment) there are QVL and HCL lists.
As for address space, it isn't the old problem of running out of interrupts but of how a particular BIOS was designed. Add-in devices (both plug-in cards and motherboard-bound controllers) all take small chunks of memory starting at C000/C800, which the system checks for option-ROM/bootable code. Some boards only allow a certain amount of space from add-in boards; it often isn't the total amount of space (usually 2KB blocks) but where it is hooked. They initialize the on-board devices they know about first and then give an add-on board address space above them. I have seen this specifically with some Dell Precision workstation boxes and Mellanox FDR IB boards, where they only worked if you disabled the secondary SATA controller. Most of these problems could probably be fixed with BIOS updates if the motherboard vendor tested against more devices, but they don't expect that many people running an $80 motherboard to be running a $700 RAID card, so they don't test against that.
Whenever you take higher-end "enterprise" adapters and put them on lower-end motherboards (in this case the RAID card) and they are not specifically on the HCL, there is no 100% guarantee. (That is why there are HCLs.)

tl;dr: I am not an EE. If the AIB is not on your motherboard's HCL, there is no 100% guarantee it will work. It is still plug-and-pray.
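If you're curious what's actually sitting in that window on a given box, here's a rough sketch (my assumptions: Linux, run as root, and /dev/mem allowing reads below 1MB, which it normally does even with STRICT_DEVMEM) that scans the legacy region for the 0x55AA expansion-ROM signature:

Code:
# Rough sketch: list option ROMs shadowed into the legacy C0000h-EFFFFh window.
# Assumes Linux, root, and that /dev/mem permits reads below 1MB.
with open("/dev/mem", "rb") as f:
    f.seek(0xC0000)
    mem = f.read(0x30000)  # 0xC0000 .. 0xEFFFF

# Expansion ROMs are scanned on 2KB boundaries and begin with 0x55 0xAA,
# followed by the ROM size in 512-byte units.
for off in range(0, len(mem), 0x800):
    if mem[off] == 0x55 and mem[off + 1] == 0xAA:
        size_kb = mem[off + 2] * 512 // 1024
        print(f"option ROM at {0xC0000 + off:#07x}, ~{size_kb} KB")

That at least shows how much of the region the devices you leave enabled are already claiming.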
 
A few things to try: You said you RMA'd the board once; are you 100% sure the card is good (checked working in another machine)? Are the card and your motherboard updated with the latest BIOS (remember there are 3-4 different upgrade files for the Areca)? Have you disabled EVERYTHING that you aren't using on the motherboard (you can run out of address space), such as serial ports, accessory SATA controllers, etc.?

I RMAed the RAID controller, not the motherboard, so it's definitely not the card at this point.
The MB BIOS is definitely up to date, but I can't update the RAID controller since I can't even "see" it yet to update it.
Everything that I could turn off on the board has been turned off (serial, IrDA, etc.)
 
OK, me being smart and all (durr), I finally realized I should check whether the out-of-band management was working, so I plugged in an Ethernet cable and checked my router to see if it was pulling an IP. Sure enough, it was. I logged in via the web page and updated the FW/BIOS/MBR to the latest versions, turned off PCIe 3.0, made sure that it was using the Legacy boot method, and....

Nothing. Still doesn't show its own BIOS on boot. :confused:

I tried UEFI before I left for work; that didn't work either, though I didn't expect it would. Does anyone see anything else in the manual that I should be looking at?
 
Today we had a power failure at the office that lasted longer than our UPSes could handle, and unfortunately I was too late to shut down our SAN server (as in: sick, nobody else at the office, you know how those things work ;))

I'm using an ARC-1260 on 1.49 firmware (the latest for that model). The RAID6 array uses 13 disks, types WDC WD1002FBYS-02A6B0 and WDC WD1003FBYX-01Y7B1. Two of the disks are Hot Spares.

Before the power failure the RAID6 array was fine. No failed disks. But after switching the machine back on, the array is missing and 4 out of 13 disks are in the Failed status. One of them is a Hot Spare (the other Hot Spare is still showing).

An overview of the raidset hierarchy:

[screenshot: raidset hierarchy overview]


It seems unlikely that all 4 disks failed at the same time, but it's possible. Is there a way to re-check the disks or clear the Failed state and keep trying?
Will the rescue command do anything in this situation? I've seen people with missing raidsets, but they always seem to have all their disks in normal operation.

Any ideas or things I can try?
I don't have 13 spare disks to do a dd clone, and that would take too much time. Most of the data on the disks is recoverable from backups, but that will take some time. If possible, I would like to try to bring the array back up. I'll notice soon enough whether the disks are corrupt, because they contain a lot of VMware images.
 
What kind of enclosure are they in? Are the 4 disks in question on the same backplane, on the same SFF cable, or connected to the same power supply wire tree? Once the drives come up we can attempt an array rescue, but let's see why they failed first.
 
What kind of enclosure are they in? Are the 4 disks in question on the same backplane, on the same SFF cable, or connected to the same power supply wire tree? Once the drives come up we can attempt an array rescue, but let's see why they failed first.

Thanks for the quick reply. It's a custom build based on a Chenbro RM31616 chassis.

I'll have to check the exact cable layout at work tomorrow, but here is an image of the front in its current state:

[photo: front of the chassis]


(One disk cage is open; that's because it's faulty and needs to be replaced. That's unrelated to this problem.)

Channel layout is:

13 14 15 16
09 10 11 12
05 06 07 08
01 02 03 04

Slots 2 to 4 don't have disks. I believe the backplane has 4 standard 4-pin power connectors, one for each row of 4 disks. It also has 4 Mini-SAS connectors, again one for each row of 4 disks. The failure spread does not seem to indicate that one specific cable is at fault.

Is it possible that the drives lost power a fraction of a second before the server and RAID controller did, and that the controller therefore sees the disks as failed?
 