ARECA Owner's Thread (SAS/SATA RAID Cards)

My initial build using Samsung 2TB 5400rpm drives took about 30 hours with only 8 drives.

If I have something set up wrong I'd sure like to know about it. I should qualify that there is another 12-drive array on the same controller; the 30 hours was with that other array present. When I built it the first time, without another array, it took about 8 hours.

So is it a bad idea to have multiple RAID6 arrays on one controller? Would I be better off with a single 20-drive RAID6 array (perhaps with a hot spare or two)?
 
Rebuilds are fairly fast on Areca cards, as long as you have write cache set to "Enabled" (NOT auto).
I just did a rebuild of 7x3TB in RAID5 and it took approximately 7 hours on my 1880i.
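
As a sanity check on that number: a rebuild has to write one full member drive, so time is roughly capacity divided by sustained rebuild rate. A minimal sketch (Python), assuming a round 120 MB/s sustained rate, which happens to line up with the 7-hour figure:

    def rebuild_hours(capacity_tb, rate_mb_s):
        # A rebuild writes one full member drive, so time ~= capacity / rate.
        capacity_mb = capacity_tb * 1e6  # drives are sold in decimal TB
        return capacity_mb / rate_mb_s / 3600

    print(rebuild_hours(3.0, 120))  # ~6.9 hours for a 3TB drive at 120 MB/s

By the same math, a 30-hour build of 2TB drives implies the controller was only sustaining around 20 MB/s, which is why competing I/O from a second array on the same card hurts so much.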
 
I've got a RAID10 on an Areca 1280 that seems to have gotten corrupted during a recent move from one chassis to another. I believe one of the SATA cables in the new chassis was acting up. Anyway, now I've got two partitions in trouble: one is showing up as a RAW type in Windows diskpart, and the other isn't showing up as a partition at all.

I can reload one from backups but the other has a bit of data on it that's newer than the most recent backup. I'd like to be able to see if that data can be recovered.

So whose partition recovery tools work the best lately? These are NTFS partitions from a Windows 2008R2 machine.
 
I like R-Studio, although they're stingy with program updates. Another one is Active Partition Recovery, but that software is a bit more limited in functionality. I suppose because RAID10 is non-parity RAID, the card couldn't figure out that one of the ports was going bad? You'd think it would realize one of the mirrors has different data on it, though... any messages in the event log?
 
If you know which drive has the bad data (and it is only one drive), can't you just force it to rebuild that drive from the mirror?

Of course that will not work if the bad data is on both drives.
 
Just want to post a heads-up that the Areca 1880i + Intel SAS expander + 7 Hitachi 3TB drives in RAID5 combination is working flawlessly. Unfortunately, my Chenbro SAS expander is NOT working with my Areca: the card becomes unresponsive after a few percent of array initialization through the Chenbro. I tried both the enhanced and original firmware jumper settings. I'll be replacing the Chenbro with another Intel SAS expander, and hopefully everything will be sorted out.
 
Is there any info on how the 1280 sets up a RAID10? How do I tell which drive has which parts on it? Is the info stored on the drives or in the card?
 
Can't really tell you all the fine details of how Areca implemented all the RAID levels on the card, but all the RAID metadata is stored on the drives themselves.
 
Is there any info on how the 1280 sets up a RAID10? How do I tell which drive has which parts on it? Is the info stored on the drives or in the card?

I'm guessing you're scratching your head in R-Studio or similar, trying to determine the correct block pattern for setting up a virtual RAID device. The good news is that RAID10 doesn't use parity, so it's less complicated than recreating Areca's proprietary RAID6 matrix.

How many member drives in the original array, make & models? Did you maintain the exact drive order when moving cases? Do you remember the original stripe size? How much is the difference in data since your backup copy worth in dollars and cents?

Absolutely do not do anything that modifies data on the drives, don't look at them in DISKPART (or if you do, be careful not to accidentally hit OK when asked to initialize a disk), and don't use any of the recovery keywords in the Areca management GUI until you've provided more details or contacted Areca support. Some people blindly issue the RESCUE and SIGNAT commands in the GUI, which can actually make some recovery scenarios worse.
 
The reason I ask is that I have a 5th drive that was part of the array (4-drive RAID10). It 'failed' and a hot spare took over. Once that had rebuilt, I shut down and made the move. I'm wondering whether this 5th drive might be useful in untangling the mess. I want a clear assessment of what is or isn't on these drives before I risk anything else going wrong.

I've had 10 drives attached to the card, one RAID5 and one RAID10 (4 drives each), along with two hot spares: 6 internal to the case and 4 external via an SFF-8470 cable. I attempted to move the other 4 drives to the external chassis. Along the way I discovered that some part of the 2nd SATA-to-8470 adapter and cabling is defective. The array wouldn't come back up. I may have harmed the process by moving the drives around to different connectors, and perhaps the card got confused.

So at this point I've stopped trying to do anything with the current setup and am looking at what other tools I could use, via another machine, to untangle the mess.
 
You didn't answer all the questions.

You say 4-drive RAID10; what make & model of drives? Did you maintain the exact drive order when moving cases, or did the order change? Do you remember the original stripe size? It also helps if you take a screenshot of your Areca management GUI showing the drives and array status.
 
I didn't answer because I was posting at the same time. I must've missed your post in the process.

The problematic array is made up of five Seagate 7200rpm 1.5TB drives, all the same model. It's a 4-drive RAID10 array with one hot spare (same drive type). There are three volumes within the raidset: 32GB C:, 80GB D:, and the remainder as E:. All were NTFS. The first one appears to be OK. The 2nd shows up as just a RAW partition in diskpart. The 3rd shows up as just a drive, with no partitions on it at all.

I brought up diskpart from the OS boot CD. I made no changes to the drives, partitions or volumes, nor did I see any messages about initializing; I've seen those before and know what to avoid. I have not used any other tools to look at the drives. I have specifically avoided using anything in the Areca web interface, other than to look at the state of the array, which right now is claiming everything is OK.

I may have gotten the drive order wrong when I made the move. The way the connectors are laid out on the 1280 (and unlabeled) makes getting the cabling right a real hassle. I moved them all at once, while the machine was powered off. I examined everything from the web interface through the card's own ethernet port.

I made no changes to the stripe sizes; I went with whatever is the default for this type of setup. They're 64k stripes and 512-byte blocks.
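
If I understand the layout right, a conventional 4-drive RAID10 is striped mirrors: stripes alternate across two mirrored pairs, and both drives in a pair hold identical data. Here's a minimal sketch of the mapping I'd expect, assuming Areca follows the common RAID1+0 layout (their exact on-disk layout may differ):

    STRIPE = 64 * 1024   # 64k stripe size, per the defaults above
    SECTOR = 512         # 512-byte blocks

    def raid10_map(lba, n_pairs=2):
        # Map a logical LBA to (mirrored pair, LBA within the member drives).
        # Stripes alternate across pairs; both drives in a pair are identical.
        sectors_per_stripe = STRIPE // SECTOR              # 128 sectors
        stripe_no, within = divmod(lba, sectors_per_stripe)
        pair = stripe_no % n_pairs
        offset = (stripe_no // n_pairs) * sectors_per_stripe + within
        return pair, offset

    print(raid10_map(300))  # logical LBA 300 -> pair 0, member LBA 172

Mirrored pairs should be easy to spot since their first sectors match, and with only two pairs there are few stripe-order permutations to try.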

Before I shell out for R-Studio, is it known to be able to deal with Areca RAID10 arrays? And would I be best served using a whole other machine with non-RAID SATA ports? I do have another Areca card I could use if necessary (a 4-port 1210).
 
I haven't tried R-Studio with dead Areca arrays, but I've used it with dead Intel ICH and Adaptec arrays. There's a demo-version you can try to see if it at least sees your data.
 
R-Studio worked with one of my Areca RAID 5 arrays, so I can't imagine it will have difficulties with RAID 10.
 
Anyone else notice a new feature/bug with the latest software/firmware on the 1880? I have scheduled RAID checks in place on my RAID6 array; however, now every time I reboot the system it performs a system check upon boot. Thoughts?
 
I have four Seagate Barracuda LP ST32000542AS 2TB 5900 RPM drives today... do these currently work with the 1880IX-12?

In general, if you mix 3G drives with 6G drives on an 1880IX-12 but on separate raid sets, will all drives on the card fall back to 3G?
 
I had those Seagates working fine on my 1231ML, so they should work on the 1880, but I don't know for sure. You can mix 3G and 6G drives and the 6G drives should still run at 6G; however, only SSDs are fast enough for 3G to actually be a bottleneck.
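
For context on why 3G is rarely the bottleneck for spinning disks: SATA/SAS links use 8b/10b encoding, so usable bandwidth is 80% of the line rate. A quick back-of-envelope calc (Python):

    def usable_mb_s(line_rate_gbit):
        # 8b/10b encoding: 8 payload bits per 10 line bits.
        return line_rate_gbit * 1e9 * 0.8 / 8 / 1e6

    print(usable_mb_s(3.0))  # ~300 MB/s per 3G lane
    print(usable_mb_s(6.0))  # ~600 MB/s per 6G lane

A 5900 RPM drive sustains well under 150 MB/s, so even a 3G link leaves headroom; only SSDs get close to saturating it.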
 
thanks alamone.

Anyone come across a worthwhile review of the ARC-1231ML against the HighPoint 3530?
 
How is overall performance? I'm considering just sending them back and waiting for the 7K3000 3TB drives to go on sale.
 
I am running an ARC-1880iX-24 card in a Norco RPC-4224 case with a Supermicro X8DT6 motherboard. I built a RAID6 array of 12 Seagate ST32000542AS LP drives using foreground initialization. It initialized fine, then dropped two drives, which showed "failed". On reboot only one was missing, and it showed "free". I made it a global hot spare and the array rebuilt without error. The data was mostly good, though a couple of directories were corrupted.

I disabled all power saving modes, flashed all of the drives to CC35 (some had to be forced from CC34), updated the Areca firmware from 1.48 to 1.49, rebuilt the array using background initialization, and it has run fine since (about two weeks). I built a second RAID6 array with 9 of the same drives (also flashed with CC35) using foreground initialization about ten days ago. It has also been running fine. Last Friday I decided to expand the second raid set from 9 to 11 drives, again using the same Seagate drives. It showed "Migrating" all weekend; then last night, at about 64% migrated, the alarm on the controller started beeping constantly. Both RAID sets were missing from the OS. When I restarted the server, the first RAID set was up and normal; the second showed "failed migration". One of the original 9 drives was listed as "free" and my global hot spare had been assigned in its place. The array was not recoverable using RESCUE or SIGNAT. It is now being rebuilt with 12 drives.

The disk failures have been random, not tied to a particular backplane, drive or slot. Areca tells me that "only enterprise drives are supported" - which I already know. I have had great luck with these drives so far, especially compared to their WD and Samsung counterparts - albeit not in a RAID array.

The questions:
  • Are the Seagate drives really not going to work?
  • Could it be a bad 1880 card?
  • Any other thoughts?

Have you upgraded to the latest expander firmware for the ARC-1880iX-24?
 
Okay, I've read as many posts as I can about getting the two to work together, but I'm still turning up nothing. I've upgraded to the latest version 1.49 and even figured out how to upgrade the 1680's expander with the newer 5.89.1.39 firmware from their FTP site. When I console into the expander and type 'li' it sees the HP drives attached via the external cable, but I get no love and the system reboots after 300 seconds. I'm using a 2m Tekram SFF-8088 to SFF-8088 cable to attach the two together. I've even set the external cable length using 'dr -O 0x2' and saved it with 'st 0xff'.

Now if I attach the two together using an SFF-8087 to SFF-8087 cable, it works fine and the HP expander shows up along with the drives. (It shows up as external enclosure #3, HP Expander 2.02... which I assume is the firmware version?) Bad part: the performance is lousy. If I keep all 20 drives on the HP expander with 0 on the 1680, I can expect around 250 MB/s. If I move 8 HDs over to the 1680 with 12 on the HP expander, I get 450 to 500 MB/s. But if I only use 12 HDs on the 1680 with the HP expander unplugged, I get 650+ MB/s (with 8 HDs just sitting on the shelf!)
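
For what it's worth, I did the back-of-envelope math and the cable itself shouldn't be the cap: SFF-8087/8088 cables each carry a 4-lane wide port, so using the same 8b/10b math as earlier in the thread, 3Gb/s lanes should give roughly 1.2 GB/s in aggregate (Python):

    lanes, line_rate = 4, 3e9   # 4-lane wide port, 3 Gb/s per lane
    usable = lanes * line_rate * 0.8 / 8 / 1e6
    print(f"{usable:.0f} MB/s")  # ~1200 MB/s theoretical

So seeing only ~250 MB/s through the expander looks more like a link that negotiated down (or traffic crossing a single lane) than a saturated cable.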

Any suggestions besides just buying a 1680ix-24 or 1880ix-24?
 
I think your main problem is that you're using a high port-count Areca card with a built-in expander. I believe all the Areca cards that are not low-profile (i.e. the full-height cards) use an integrated SAS expander, which causes issues when you try to connect to another SAS expander. Use a low port-count, low-profile card (like the 8-port 1880i) and you will have no issue using SAS expanders, since those cards have no built-in expander of their own.
 
Have you upgraded to the latest expander firmware for the ARC-1880iX-24?
All four binaries for the v1.49 update were installed. I am not sure which of them was for the expander. The HP card was flashed with 2.06 from the vendor prior to shipping.
 
Okay, I've read as many posts as I can about getting the two to work together, but I'm still turning up nothing. I've upgraded to the latest version 1.49 and even figured out how to upgrade the 1680's expander with the newer 5.89.1.39 firmware from their FTP site. When I console into the expander and type 'li' it sees the HP drives attached via the external cable, but I get no love and the system reboots after 300 seconds. I'm using a 2m Tekram SFF-8088 to SFF-8088 cable to attach the two together. I've even set the external cable length using 'dr -O 0x2' and saved it with 'st 0xff'.

Now if I attach the two together using an SFF-8087 to SFF-8087 cable, it works fine and the HP expander shows up along with the drives. (It shows up as external enclosure #3, HP Expander 2.02... which I assume is the firmware version?) Bad part: the performance is lousy. If I keep all 20 drives on the HP expander with 0 on the 1680, I can expect around 250 MB/s. If I move 8 HDs over to the 1680 with 12 on the HP expander, I get 450 to 500 MB/s. But if I only use 12 HDs on the 1680 with the HP expander unplugged, I get 650+ MB/s (with 8 HDs just sitting on the shelf!)

Any suggestions besides just buying a 1680ix-24 or 1880ix-24?

I'd connect all the drives to the HP expander, connect the 1680ix-12 to the HP expander with an SFF-8088 cable, and then play with the SAS Mux setting in the Areca management GUI. Whatever you have it set to now, reverse it - I think it needs to be either enabled or disabled to work correctly with something on the external SFF-8088.

I used to have an 1680ix-24 and don't quite remember if I was ever able to simultaneously use both the internal SFF-8087 ports on the Areca card *and* the HP expander ports, so I just connected all the drives to the HP expander, and I think I disabled SAS Mux, which disabled the internal ports on the Areca card. The story might be different with a different model expander, like the Intel, perhaps.

NOTE: DO NOT use Seagate drives, and some models of WD drives, on the internal Areca 1680ix-12 ports. They're known to be problematic with the integrated expander on those cards.
 
SAS MUX should be Auto or External if using an 8088 cable.

I have a 1680x and use the Auto setting.
 
I'd connect all the drives to the HP expander, connect the 1680ix-12 to the HP expander with an SFF-8088 cable, and then play with the SAS Mux setting in the Areca management GUI. Whatever you have it set to now, reverse it - I think it needs to be either enabled or disabled to work correctly with something on the external SFF-8088.

I used to have an 1680ix-24 and don't quite remember if I was ever able to simultaneously use both the internal SFF-8087 ports on the Areca card *and* the HP expander ports, so I just connected all the drives to the HP expander, and I think I disabled SAS Mux, which disabled the internal ports on the Areca card. The story might be different with a different model expander, like the Intel, perhaps.

NOTE: DO NOT use Seagate drives, and some models of WD drives, on the internal Areca 1680ix-12 ports. They're known to be problematic with the integrated expander on those cards.
My Seagate drives work fine on the 1680ix.

ST31000340NS and ST32000444SS
 
My Seagate drives work fine on the 1680ix.

ST31000340NS and ST32000444SS

Those are also the ES SAS models and are on the approved HCL.
We are mostly discussing the use of drives not on the HCL.
 
Ok guys, I ordered my new stuff. It should all be here Wednesday assuming the weather holds. I have the following on the way and I'll need a little help:

Norco 4224
Areca 1880i
4 x 2TB Hitachi 7K3000 Drives

So...after reading through pages and pages of threads, I have several questions.

The settings I seem to see recommended, and plan to use, are:
128k Stripe
64-bit LBA
Write Cache Enabled
Foreground Initialization (I don't have the BBU, but I will have it hooked to a UPS)
Format using NTFS with a 64k cluster (the largest cluster NTFS supports on 2008 R2).
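
One thing I want to check with those numbers is that the filesystem lines up with the stripe: the NTFS cluster should divide evenly into the stripe, and the partition should start on a stripe boundary (Windows Server 2008 R2 defaults to a 1MB partition offset, which works for a 128k stripe). A minimal sanity check (Python), assuming the values above:

    STRIPE = 128 * 1024          # array stripe size in bytes
    CLUSTER = 64 * 1024          # NTFS cluster size (64k is the NTFS max here)
    PART_OFFSET = 1024 * 1024    # 2008 R2 default partition offset

    # No cluster should straddle two stripes.
    assert STRIPE % CLUSTER == 0, "cluster does not divide stripe"
    assert PART_OFFSET % STRIPE == 0, "partition not stripe-aligned"
    print("aligned:", STRIPE // CLUSTER, "clusters per stripe")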

I will be using this server primarily for Hyper-V. The rest of the build:
Core i7 950
Gigabyte X58-USB3
Windows Server 2008 R2

Any problems I should watch out for in this configuration?
Do my settings look reasonable?
What other things should I do to prepare?
Should I update the firmware on the Areca card?
I plan on using one of the X16 slots on the board (leaving another x16 slot open). Are there issues with this?
I plan on eventually adding an HP SAS expander. Will these drives move to the Expander later leaving the array intact?

Help an Areca noob! :)
 
Has anyone placed a fan on the 1880IX series cards? I noticed there is a small place for a fan to connect, and it would be ideal for helping cool the processor. Anyone have recommendations for a fan?
 
Has anyone placed a fan on the 1880IX series cards? I noticed there is a small place for a fan to connect, and it would be ideal for helping cool the processor. Anyone have recommendations for a fan?

I'd also like to find a source for *reliable* replacement fans. I've got a 1220 without a fan (just the heatsink) and I'd like to add one, but I'm not sure who makes fans that will hold up to constant server use. I've replaced the odd graphics card fan here and there and have been less than pleased with the reliability of the replacements.
 
I have 16 of them in a RAID6 on an Areca 1880iX. Bought them last week. They initialized very fast and are working fine. They run VERY cool.

Hi rprade,

I'm looking at picking up an Areca 1880i; is that the same as the 1880iX minus the SAS expander?

The reason I'm asking is that I want to get 8 of the 5K3000 2TB Hitachi drives in a RAID6 setup (eventually expanding to 16 in RAID6 via a SAS expander), but I'm really worried about those drives dropping out of the array, or other idle/sleep issues, since they're not "enterprise/RAID" drives.

Have you had any issues at all with them? Did you do any tweaks/firmware upgrades/config changes on the drives or Areca before using them?

I just want to make sure I won't have any issues with them in RAID6 on my 1880i (details of what I'm looking to do are posted here:
http://hardforum.com/showthread.php?t=1584599)

Thanks! :)
 
Yes, the 1880i is the same RoC as the 1880ix, minus the integrated expander. And I would be very surprised if you or anyone COULD get a new-gen Hitachi drive like the 5K3000 2TB to drop out of an array, at least if its 500GB, 1TB and 2TB predecessors are any indicator (the 5K3000/7K3000 generation is still new and people haven't had them that long). If it did happen, it would have nothing to do with the TLER/CCTL/ERC issues that have plagued WD drives and been the cause of a lot of FUD about ERC's role (or lack of one) in otherwise healthy drives dropping from arrays. That simply doesn't happen with Hitachi. I've got my first 8 of likely many more 2TB 5K3000's and have been disk-thrashing them for the last 24 hours on an 1880i; they're performing even better than my 7K2000's.
 
What odditory said, plus it's my opinion that a lot of the compatibility issues between drives and RAID controllers in the past were because of the integrated expanders used. With SAS 2.0 the requirements are much stricter and more streamlined, so there is not nearly as much variation from card to card.
 