ARECA Owner's Thread (SAS/SATA RAID Cards)

Yes. I remember seeing a review out there comparing the two if you want to see exactly how much. Shouldn't be hard to find.
 
Depends on your workload, but you'll probably find the battery allows you to disable write flushing in the OS and ends up providing better performance. It will also help a bit with durability. If you're reading lots, the memory might be better but it will depend on the pattern of reads you're doing. How much memory do you have on the card now?
 
Which would be a better upgrade? The battery or 4GB of RAM for the 1882ix?

Better for what? That's vague. For performance: the 4GB module, unless you're planning on handling lots of >4GB files most of the time, in which case it'll matter less. Also, the controller doesn't care whether or not you have a BBU in terms of enabling Write Back caching - it allows it regardless, and it's enabled by default.

For a home environment, a lot of people skip the BBU and mitigate the 'sudden power loss' threat by running off a UPS. That doesn't cover the PSU suddenly dying and a write not being flushed. That's a slim chance, but you'll have to determine for yourself whether the data is that critical - i.e., if you're just storing media files and/or you have a backup, then you may not care.
 
File size doesn't matter for cache sizing; locality and repetition of access have more of an impact. The OS might not [easily] allow write flushing to be turned off if there's no BBU.
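To make "disabling write flushing" concrete, here's a minimal sketch assuming a Linux box with an ext4 volume sitting on the array (the device and mount point are just placeholders); on Windows the rough equivalent is the "Turn off Windows write-cache buffer flushing" checkbox on the disk's Policies tab in Device Manager:

# skip flush/FUA barriers so writes can stay in the controller's cache longer
# only sensible if the card has a BBU or the whole box is on a UPS
mount -o nobarrier /dev/sdb1 /mnt/array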
 
I will be reading mostly 40GB files. Let me get this straight: if my PSU dies or the power goes out, is all my data lost without a BBU? And is it safe to shut down my PC in a RAID 6?
 
If there are cached writes that are never sent to physical disk (because they're in cache in the OS or the card), and power goes out, it's possible that the data on the drive becomes corrupted.

Most people would want to err on the side of durability rather than performance. Even if you have backups, you're buying into worry and time that's otherwise avoidable.

When you're reading 40 GB files, are you reading them sequentially or randomly? Are you rereading sections of the file in any particular pattern? Are you reading only one 40 GB file at a time, or many different files at the same time? You said "mostly", which is nonspecific -- what's the mix? Are you having performance problems with your present configuration? What are the IO rates that you want to see, and what are the IO rates that you're actually realizing?
 
Odds are he's just talking about BD rips, which is why I say BFD if a PSU blows up and the write cache is lost while you're what - ripping a media file? You have to ask yourself how frequently you plan to write data that you wouldn't be able to recreate easily, and then factor what the chances are of a PSU blowing up when that happens. If you can see that happening often enough to make you uncomfortable about being without a BBU, then spend the money.

Personally, I don't bother with BBUs anymore for my Arecas that handle home media storage. I've got high quality PSUs and UPSs, and if a PSU dies during a write, then it dies. I've had instances in the past where even WITH a BBU, sudden power loss during a write caused corruption, so it's not a guarantee against corruption, just a small added layer of protection.

However, for a business environment, a BBU is a no-brainer.
 
Almost every write also puts unrecoverable data on the drive: the metadata that controls the file system, such as directory entries, allocation tables, MFTs, i-nodes, whatever the file system uses. If that metadata is corrupted, the whole volume might be corrupted completely.

There are people who don't value their time very much and are willing to take chances, even if they're pretty small. I'm not one of them, and my advice reflects that. I think $100 or so for a BBU is worth it to assure there are no problems with the file system and the data it holds. For people who don't think their time to recreate the data or recover the volume is worth $100, the decision is a lot harder.
 
Hey guys,

I'm having big problems running an Areca 1880IX-24 and 1882IX-12 on ESXi 5.1, and I was hoping someone can help out...
I've been running the 1880IX-24 for a while with 4 Samsung 830 256GB (separate volumes, no raid).
Everything seemed to be fine for a few months... until I decided recently that I want to RAID the SSDs.
I bought 4 more of those SSDs, so now I have 8 in total. I wanted to do Raid-5 with them.
I took out the 1880IX-24 card, I put the new 1882IX-12 card and created the raid-5 volume with the SSDs.
I created the datastore and moved some VMs onto it. However, when I move VMs or when the VMs are under high IO, ESXi crashes (the purple diagnostic screen appears, and it says something about arcxxx - I think it's related to the Areca driver?).
I tried reinstalling ESXi 5.1 from scratch and reloading the driver, and I created a raid-5 volume with only 4 drives, but the crash still happens.
I even replaced the raid card with the 'old' 1880IX-24, but it doesn't help.
Before, it seemed to work fine for months with 4 of these SSDs, but not in RAID... do you think that's related? I can try creating volumes and then keeping them under stress for a while to see if it still crashes.
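For anyone else chasing something similar, I believe the installed arcmsr driver package and the loaded module can be checked from the ESXi shell with something like the following (the 'arc' filter is just a guess at the package name):

esxcli software vib list | grep -i arc    # which arcmsr VIB is installed
vmkload_mod -l | grep -i arc              # whether the arcmsr module is actually loaded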

Any advice would be really appreciated...
 
Hi Areca Experts - is there a way to get a 1880x card to rescan for attached devices after it's booted up?

I have an ESXi 5.0 box that the 1880x is installed in. The 1880x is set up as hardware passthrough to a Windows 2008R2 VM. Attached to the 1880x is a Supermicro SC847 JBOD chassis. Given the noise level of the SC847, it is not on all the time, so I would like the ability to turn it on and off as needed without having to shut down the ESXi box each time.

I'm finding that sometimes the 1880x will automatically "see" the SC847 and the drives in it, but sometimes it won't - so my current setup is a bit flaky.

Is there a way to get the 1880x card to rescan for new attached devices so I can turn on the SC847 after the ESXi server is booted and the 2008R2 VM is running?

Thanks!
 
Hello, I certainly wish I had found this thread years ago (lots of good information here), but I must have never quite hit the right google search until now...

Long winded background information:

We have used 1231MLs and a variety of different Seagate 1.5TB and 3TB drives as cheap filers for several years, with generally good results. Very few drive issues, etc.

Based on this, I bought a bunch of the ST3000DM001-9YN166 drives earlier this year when I noticed the 5 year warranty had changed to a 1 year warranty (so I stocked up on the last of the 5 year drives I could find). So these drives are not returnable and I need to find a way to make them work reliably.

I was finally able to build up a new machine to put them into, and I bought the 1882ix-24 4GB card, not realizing the SAS/SATA subtleties, etc. My thoughts were along the lines of being able to put 24 drives inside the case, and then still be able to expand out the back if required... Also, the 1231ML, etc. have been end of life'd from a FW support perspective, so it seemed to make sense to go with Areca's latest and greatest, since I plan to use this system for many years.

Anyway, I am seeing random drive time out errors under stress testing (I am doing things like copy data from the array back to itself, copy data between arrays, read data from the array, run HD Tune Pro, sometimes one of these things, sometimes multiples of them at the same time, etc.). The time outs are on different drives, spread across multiple 1882 ports (i.e., not always within the same 4 drives that share a cable).

2013-01-13 15:20:58 Enc#2 SLOT 04 Time Out Error
2013-01-13 15:08:52 Enc#2 SLOT 01 Time Out Error
2013-01-13 11:57:44 Enc#2 SLOT 07 Time Out Error

Examining the drive SMART data with smartmontools shows no apparent problems (no reallocated sectors, no extra load/unload cycles, no command timeouts, etc.). So I am guessing the drives are simply responding too slowly under load, and there is not anything physically wrong with them.
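For reference, this is roughly the sort of smartctl invocation involved (the 4/2 just mirrors the Enc#2 SLOT 04 drive from the log above, and /dev/arcmsr0 assumes the 1882 is the first Areca controller in the box):

rem print full SMART data for the drive in slot 4 of enclosure 2 on the first Areca controller
smartctl -a -d areca,4/2 /dev/arcmsr0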

System information:
Areca 1882ix-24 4GB, PCB Version 3.0 (just purchased from Newegg)
Breakout cables, BBU (also just purchased from Newegg)
Seagate ST3000DM001-9YN166 3 TB drives
Asus Rampage 4 Extreme, i7-3820, 16 GB G.Skill RAM
Asus GTX 660 Ti Video
Corsair AX1200i PSU
Windows 7 Ultimate 64-bit
Lian Li PC-D8000 case (note: the HDs and front fans mount with rubber grommets, so hopefully vibration is not an issue)

The 1882 has been updated to the latest FW (v1.51 2012-09-20) and I am running the latest windows drivers. There are only two cards in the system, the video card in the 16x slot 1, and the 1882 in 16x/8x slot 4. The 1882 reports 8X/5G.

The controller has two RAID sets at the moment. I initially built up a 3 disk set for some rapid early testing, and then I built up a second 10 disk set that I am hoping will get used for real data... So far, the time out errors are in this 10 disk array (which has been the focus of my stress testing). All 13 drives are connected to the internal ports on the 1882 with SAS/SATA breakout cables.

The 3 disk set has three volumes: 1500 GB RAID0, 1500 GB RAID5 and 1750 GB RAID6. The 10 disk set has 1 volume: 24000 GB RAID6.

I have configured the card with mostly default/normal settings such as NCQ at 32, read aheads, caching, etc. all enabled. I have disabled failing a drive due to time out errors, and I have disabled all of the power saving items.

There are 7 front fans blowing on the drives. The temps under load are 30-31C for the 10 disk set (it is in the main front drive area) and 33-34C for the 3 disk set, which sits in a 4-in-3 hot swap cage using 3 of the 5.25" slots above the main 3.5" drive area (that cage's fan sits behind the drives rather than next to them, so they run a bit warmer). The controller is using its supplied (noisy) fan; the CPU is 62-63C loaded or unloaded, and the controller is 38-39C, loaded or unloaded.

So here come the questions and observations:

1) I would like to understand the actual effects of these time out errors. Is data being lost/corrupted? I am still running a volume check on the 10 disk set, and it shows no errors at 99% complete. I don't yet have a way set up to check the integrity of the data itself, but will work on getting MD5s set up on the files (see the example after this list).

On the 1231MLs, a time out means a failed drive and a rebuild, but I have only seen them a few times (once with a drive that was barfing large numbers of reallocated sectors all at once, and showing command time outs in its SMART data, and once, early on, when I tried to actually use the hot swap features those cases offer; it seemed like inserting a drive caused the power to dip enough that nearby drives would sometimes drop out). With the 1882 having more advanced firmware, I am not sure just how bad a time out error is, but it scares me...

2) I have read that several people are running these Seagate drives on the 1882 without problems. What drive firmware versions are you using? My drives are CC4C. There is an updated firmware, CC4H, but since I cannot revert the FW on a Seagate drive, and since Seagate won't say what the actual changes are between CC4C and CC4H, I have been hesitant to update them as an initial attempt to solve the problem, in case the CC4H firmware makes things worse, for example.

3) Any ideas on what to do to solve the time out error problem? Is it something that can be safely lived with, or does it mean I need to get rid of the 1882 if I cannot solve the time out errors in the next week or so? After the volume check finishes, I will experiment somewhat randomly with the 1882 configuration, but that is a hit or miss proposition, so I would like to better understand what is going on.

4) If I can't use the 1882, what would you recommend for a 24 drive solution?

5) I did notice very poor read performance on the 3 disk set with volume read ahead enabled - only 60 MB/s for the RAID6. With volume read ahead disabled, the read speed increased to 145 MB/s. For comparison, when I moved that 3 disk set onto a 1231ML, I got 160 MB/s with regular read ahead settings, which is about what I would expect (a 3 disk RAID6 should read at about the same speed as a single bare drive, as I understand it). Any ideas on what is causing this lower performance on the 1882? Note: once I built the 10 disk RAID6, I had to turn on volume read ahead to get decent performance. With it off, the 10 disk RAID6 read at around 300 MB/s, and with it on, it reads at around 800 MB/s. I am guessing that Areca does not have an algorithm that considers set size in its read ahead decision making and is spending too much BW reading ahead, but that is a complete, ignorant guess on my part, and I would love to hear the thoughts of those who know...

6) I have not spent a great deal of time testing the performance with the two RAID sets active (once I saw time out errors, I focused on them, since they scare me at this point). Ultimately, this system will have 2 10 disk RAID6 volumes, and then 4 slots for pass through disks and spares. The discussion on the 1680 performance with multiple arrays is also frightening. Has this issue been fixed on the 1882s?

7) For those people who have also run into the ignored volume initialization bug, Areca has known about it for years. I have seen it on the 1231MLs several times and have discussed it with Areca support in the past. I don't know exactly what causes it (it seems to happen only on larger volumes for me, but I just worked around it by rebooting the system, rather than building up different arrays to try to characterize it, since I did not have any extra disks at the time).
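As mentioned in item 1, here is the sort of MD5 check I have in mind; just a sketch using the built-in certutil tool on Windows 7, and the path is only an example:

rem prints the MD5 of a file on the array; re-run after a copy and compare the two hashes
certutil -hashfile D:\Array\somefile.mkv MD5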

Thanks in advance for any information and advice that you can give me.
 
The 1882ix card has died, and the system will not boot with the card installed. Before it died, I played with various settings, including lengthening the time out limit. It seems like when it decided to hang, all controller activity stopped (the global activity light went out, and whatever was happening on the PC would stop if it involved data on the RAID). It would hang all data to/from the card for whatever time out limit I had set (which makes me think it was a controller issue, and not the drives taking too long to do something internally). I did notice that if I requested SMART data from a drive by clicking a drive in the out of band web management interface while the time out was happening, that seemed to be enough to unstick things, and the controller went back to work. I only did that once, though, so I am not 100% sure that really was what did it, or if it was just a coincidence.

So I wonder if some commands from the controller to the drives were getting lost in the controller somewhere (the drives don't show any command CRC errors, etc. in their SMART data). I can't imagine that drives would really hang for 30 seconds or more and not log any SMART errors, would they? No reallocated sectors, etc. on any of these drives.

The 1280ML 4GB card is showing discontinued in some places like Newegg, although Newegg still has the 256MB version. Has Areca really stopped making them? I'm a little confused about which 4GB RAM module to buy for it, so I am hesitant to order that 256MB one (is that style of RAM even still available today?).

Does anyone know the difference between the T113 battery (listed for the 1280ML, etc.) and the T121 battery (listed for the 1882 in some places, and the 1680, 1880 and 1882 in others)? I am trying to figure out if I need to return the T121 as well as the 1882ix, or if I can keep the T121 and use it with the 1280ML.

Thanks
 
So it turns out my project shifted and I don't need my ARECA card anymore. What's a fair discount, %-wise, for an ARECA card? (<2 mo old, only tested, BBU never used)
 
OK, I've had my second 1680X failure (that's 2 for 2) in 25 months of operation. I'm going to pursue warranty repair, but in the meantime I need to replace it. This lives in a SuperMicro 1U server, connects via external SFF-8088 socket to a SATA Enclosure.

Since I need to buy a replacement to run during repair, should I get another 1680X or upgrade to an 1880 model? TIA, Paul
 
Hello,

I'm having some performance/reliability issues with my Areca ARC-1231 running Server 2008 R2. It's roughly 2 years old. As far as I was concerned, everything was running fine up until about a year ago. Some basic current system information:

Firmware Version: V1.49
Drives: 5xWestern Digital WD15EADS, 2x Samsung HD154UI, 1xWD20EFRX

Anyways, I started running into situations where transfers involving the ARC-1231, especially writes, would be extremely slow (kB/s). On occasion I've had complete system hangs for a period of time, a couple of times needing a system restart to fix. Reads are generally fine, unless I try to write something to the disk. Then, of course, the array will hang. Thinking that there was some system/OS/driver issue, I reinstalled the OS. Then I swapped out the last two drives I had added, thinking they were the issue, which also did nothing. The issue seems to be getting worse.

The Areca event log doesn't list any problems. Consistency checks have all come back fine. Server 2008 event viewer does have this entry: "The driver detected a controller error on \Device\Harddisk2\DR2" but I don't know if that means anything. My only guess is that one of my drives is dying.

Any thoughts or help is appreciated
 
I had 3 drives fail in a RAID6 (or what I thought were failures). I guess they just dropped out, as I pulled them and was able to see them in another PC. This happened while I was expanding the array and adding another drive.

I ended up finding the option "activate failed drive", so I activated all 3. But what I didn't notice was that the migration process wasn't done, so it kept going (from approx 98% on).

It finished and I saw this:

[Image 1]

If I go back to the "activate failed drive" option, it doesn't show any drive.

I pulled the 2 slots and swapped them, then saw this:

[Image 2]

If I restart I get back to Image 1.

How can I fix this?
 
You could format the two drives and rebuild the whole array, but then you're risking losing all your data again if another drive dies during the rebuild.
 
Is there an easier way? Maybe the "set disk to be failed" or "schedule volume check" option, or something like that. I feel like this could be an easy fix; I just do not know the way to do it. If all else fails, I'll rebuild the array, but I'd rather not risk it.
 
Looks like there are a lot of cards failing at 2 years. I sure hope my 1882 lasts more than that for $900.
 
Are they 1222s? 80-90% of them fail around ~1.5 years of use. It's only a problem with the 1222, though. I have 1220s, 1280s, 1230s, etc. that I have had for a lot of years now and never had a problem.
 
Does anyone know the difference between the T113 battery (listed for the 1280ML, etc.) and the T121 battery (listed for the 1882 in some places, and the 1680, 1880 and 1882 in others)? I am trying to figure out if I need to return the T121 as well as the 1882ix, or if I can keep the T121 and use it with the 1280ML.

Thanks

The T121 is only supported by ARC-188X series cards; for older models you need to go with the T113, as the T121 simply won't work with those.
 
Jus - Thanks, I guessed that would be the case and erred on the side of caution, so my 1280ML and T113 BBU got here today.

Some random observations, in case anyone is interested:

1) As long as they are created using the 16 volume option and not the 128 volume option, RAID sets created on the 1882 are completely backwards compatible with the 1231ML and 1280ML cards (and presumably all other cards, but those are the only two I can test with).

2) The 1280ML boots up more slowly than the 1882ix-24 with no drives installed, but it closes the gap as more drives are added, and with 13 drives installed the 1280ML boots a bit faster than the 1882.

3) Using the RAID sets created with the 1882 on the 1280ML, HD Tune Pro shows the 1280ML being ~200 MB/s slower for the 10 disk RAID6 volume, which is not a surprise at all. For the 3 disk RAID6, the 1280ML is 100 MB/s faster than the 1882ix. So the 1280ML architecture handles smaller arrays much better, at least in RAID6. There was also much less variability in the HD Tune Pro results on the 1280ML compared to the 1882: if I run the benchmark several times in a row, the average speed result for the 1882 moves around by 50 MB/s or more, while the results on the 1280ML move by less than 1 MB/s.

4) I do not see any time out errors so far, nor do I see the global activity light frequently going dark, nor do I see the file-access-related system lockups whenever the controller is heavily loaded. So the time out problem I was having was specific to the 1882, or to the 1882 combined with these Seagate drives; the 1280ML and these same Seagate drives do not show the same problem. Stress testing will continue for a while longer, along with deleting the RAID sets and recreating everything with the 1280ML. Then comes the process of loading it with a copy of real data and verifying MD5s, etc.

5) There is no fan on the 1280ML, just as there is no fan on the 1231ML, so it is much quieter than the 1882. I put a slot cooler next to them, set on low, and that gentle breeze keeps them in the 40-50C range when under load.

6) I did not specifically bench the speeds on the 1882ix with both RAID sets active, but it seems like the 1280ML performs better with both sets busy than the 1882ix did, though I have nothing quantitative to back up that impression.

7) Newegg no longer carries the 1280ML (shows as discontinued). This one was made in July, 2012.

Thanks
 
SOLVED 1680X failure
This is embarrassing, but since I complained here I thought I'd post the resolution. These are things that I could and should have done before asking for help.
- Updated the BIOS from 1.49 to 1.51 on my two ARC-1680X cards (seems to have fixed my "spontaneous disconnect" problem)
- Stopped trying to run two RAIDs with one card (solved a "spontaneous restart" problem)
Thanks to Jus for his sage off-forum advice.
 
I just installed my 1882ix, and it's sooooo loud! Any other card I should be looking at? Like the 1882i low profile?
 
My 1882ix fan was quite a bit louder than the other 12 fans in that system combined. The 1280ML is silent of course. ;) No issues with the 1280ML after a week of banging on it with data copies and MD5 integrity checks, etc.

You can try getting a slot cooler or some other fan and then removing the fan on the 1882ix. I think I read somewhere in this thread that one person was even water cooling theirs.
 
I would return my loud 1882ix-12 and get something else, but I got such a great deal on it ($664 shipped). Where would I get a water block for the 1882ix?
 
Yep, finally.. I was playing around with 1.51 earlier today; thanks for posting up the links. I successfully tested that smartmontools is now able to see through the Areca 1880 + expander to the SMART info of individual disks - not just pass-through disks but even those in raidsets - and to perform drive tests, etc. I tested with the Windows version of smartctl, using the following command to print all SMART info for the 1st disk of the 1st enclosure (expander) on the 1st Areca controller:

smartctl -a -d areca,1/1 /dev/arcmsr0

the numeric values in the example mean the following:

smartctl -a -d areca,<drive#>/<expander#> /dev/arcmsr<areca controller# ie 0 for first, 1 for second, etc>

Anyone got this to work on Linux?

I've tried different variants of N,E and x but it does not work.

smartctl -a -d areca,N/E /dev/sgx

I'm using:
Controller Name ARC-1880IX-24
Firmware Version V1.51 2012-07-04


smartctl 6.1 2013-01-23 r3754 [x86_64-linux-3.2.0-35-generic] (local build)
Linux FS 3.2.0-35-generic #55-Ubuntu SMP Wed Dec 5 17:42:16 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux


UPDATE: I managed to get it to work. The N value confused me a bit, but it should be the Slot # shown when you list the disks via cli64.
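For anyone else trying this on Linux, a worked example along those lines (slot 3, enclosure 1; /dev/sg2 is just a guess at which SCSI generic node the controller gets, so adjust for your system):

# full SMART info for the disk in slot 3 of enclosure 1 on the Areca controller at /dev/sg2
smartctl -a -d areca,3/1 /dev/sg2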
 
Last edited:
How do I get Windows 7 to recognize my RAID set? I am not seeing it under "My Computer" in Windows. Thanks
 
That is where I am at. Disk Management said I have to initialize the disk, and it asks me to pick Master Boot Record (MBR) or GUID Partition Table (GPT). I don't know which one to pick. It's a 17TB RAID 6.
 
You want GPT, since MBR doesn't support partitions larger than 2TB. The partition type and the 2TB limitation aren't Areca-specific.
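If you'd rather do it from the command line than through the Disk Management GUI, a minimal diskpart sketch looks like this (disk 2 is just an example; pick the disk whose size matches your 17TB volume):

diskpart
rem find the disk whose size matches the Areca volume
list disk
rem substitute the disk number from the list above
select disk 2
rem initialize it with a GUID Partition Table
convert gpt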
 
And for the love of all that is holy, format with an 8K or 16K cluster size, since it sounds like you'd otherwise pick the default 4K, which caps the NTFS partition size at 16TB.
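Something like this from an elevated command prompt would do it (the drive letter and volume label are just placeholders):

rem quick-format the Areca volume as NTFS with a 16K allocation unit size
format E: /FS:NTFS /A:16K /V:Raid6 /Q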
 
Hello all,

I am getting an Areca ARC-1680ix-16 next week. I currently have a HighPoint Rocketraid 2720SG. Even though the Areca is not SATA 3, would I still see some improvements? I will be running RAID 10 for my VMs and probably another RAID 10 for storage. I might just play around before I set my mind on the storage RAID and try out other RAID levels such as 5 and 6. Still, I am excited about getting the Areca and look forward to comparing it to the HighPoint.
 
RAID 10 might not be much better, but any RAID level with parity should be considerably faster. Unless you're running SSDs, 6Gbit support isn't going to help you much. Do keep in mind that, in general, 6Gbit cards tend to use newer processors and as a result tend to be faster than the 3Gbit variants (at least for Areca), but that has little to do with being 6Gbit.
 