ARECA Owner's Thread (SAS/SATA RAID Cards)

@odditory, the idea was to have parking-lot storage of maybe 10 drives using 3TB disks on the 6Gbps ports of an ARC-1880IX-12, and to use the last two ports for a RAID-0 volume running a Windows 7 virtual machine for dealing with PAR2 archives and RARs. But since Newegg lists two flavors of the 3TB Deskstar, I didn't know which one was considered "safe" for an array. Hitachi's website wasn't much help either because their model number doesn't match Newegg's. Amazon has them for a few dollars cheaper but also uses a different model number than Hitachi.
 
For those of you with the 1880*, what are your temps looking like?

Did you have to add fans to the heatsinks on there?
 
My 1880i has an integrated fan, and the CPU core temp generally hovers around 56C.

Lately I've been getting 3.3V undervoltage errors in the event log (checking the hardware monitor shows the 3.3V line measuring around 3.08V), which causes the system to freeze up while it recovers from the undervoltage condition. I think it's been happening since I upgraded my rig from a single 580GTX to two 580GTXs in SLI. I have an Antec Signature 850W, but maybe the load is too much for this PSU? I'll be upgrading to Sandy Bridge (Maximus IV Extreme, 2600K) and switching to a Corsair 850W (the fully modular one). Since Sandy Bridge is more efficient than X58, I'm hoping this will clear up the problem. The Maximus also has a dedicated molex input, which I assume is extra power for the PCIe slots, so that may help as well (my current board is the P6T Deluxe V1).
 
I just looked at a few systems; they're between 41C and 53C.
 
I've had a system running for over a year using a 1220 and 8 150GB Velociraptors in RAID6. Other than the occasional drive dying and being replaced by a spare, everything has been great. About a month ago I built two new systems with 1222 cards and put 8 300GB Velociraptors in RAID6; everything built fine and I haven't had a problem. One of the 1222s has been updated to v1.49 and the other is still on v1.48.

This weekend I decided to rebuild the older system just like the two new ones and ran into issues. The new card had v1.48 on it, but I didn't upgrade it right away. As the array was initializing, drives immediately started failing until I was left with just 3 working drives. At first I suspected a bad card, so I used the 3 remaining drives to build a small array, and that seemed to work fine. While that was going on I did some drive research, came across the various problems the Velociraptors have had, and of course started assuming a drive issue. Since each batch of 8 drives was bought at a separate time and the two earlier batches had no problems, I assumed the problem drives had earlier build dates and maybe older firmware.

After looking them over, they turned out to be newer, with firmware either the same as the drives in the good systems or later, so that didn't make any sense. I then updated the firmware on the 1222 to v1.49 and tried to build the array again. Once again I had 2 drives drop out, although the build continued without any further problems.

After the build was done I started looking around and noticed that the one thing in common with all the failures was drive #6 so I deleted the array, replaced drive #6 with the same model but a different build date and built a new array. This time there were no failed drives and the array built at a much faster rate.

So I'm left with more questions than answers and I'm hoping someone has some ideas.

1) Is it a problem with these drives? If so, then I've got two other systems I need to worry about. Again, I've been running the 150GB models for a long time with no problems.
2) Is it truly a defective card? Tomorrow I'm going to try some of these tests again with a spare 1222.
3) If all of this was caused by some electronics failure or weirdness on drive #6, that scares me even more; a bad drive should not bring down other drives in an array.

Any ideas?

Thanks,

John
 
alamone,

Here is a PSU calculator: http://extreme.outervision.com/psucalculator.jsp -- just enter your equipment and it will tell you what size PSU you need.

I also have SLI 580s and an 1880x. I moved my 8x 15K SAS-SS drives to an external cabinet because of heat issues; I was over-temping the 1880 when pushing lots of data on the array. Now my 1880 temp is about 42C with only 4x SSD in the main box.
 
An update regarding my 3.3V undervoltage problem.

I switched my rig from X58 (P6T Deluxe) with an i7-950 to P67 (Maximus IV Extreme) with an i7-2600K. The power savings is only around 20-30 watts going from Bloomfield to Sandy Bridge. I suppose some power has to go to the NF200 PCIe switch that provides the extra PCIe lanes.

My old rig idled at around 380W; my new rig idles at around 350W. Part of this is because the GTX 580 will not clock down to low speeds if you connect more than one monitor. My old ATI 5970 had the same behavior. I also have 8x 3TB Hitachi 7200RPM drives, 2x 2TB Samsung 5400RPM drives, and several 2.5" drives connected, so it's a rather heavy load.

My arrangement on the Maximus IV Extreme using the HAF X case:
Slot 1 - GTX 580
Slot 2 - GTX 580
Slot 3 - Areca 1880i
Slot 4 - Empty (I tried putting a card in here but the mobo only supports up to Tri-SLI)
Slot 5 - GTX 580
Slot 6 - GTX 580
Slot 7 - External Mini-SAS bracket
Slot 8 - Intel SAS expander

Anyway, the Maximus IV Extreme has two molex inputs for giving extra power to the PCIe slots. So, that may have helped stabilize the 3.3V line, as it's much closer to 3.3 now (around 3.28 or so). I also swapped the PSU from the Antec to the Corsair, although honestly I think the Antec was running fine and the P6T deluxe was just getting overloaded on the PCIe power lines.

Just as an aside, I really hate how stiff those cables are on the Corsair PSU. The cables aren't long enough either, especially for a bottom mounted PSU case like the HAF X. I had to use numerous molex Y-splitters. The molexes are rather annoying to get a good connection as well - you really have to mash them in. I do like that it's completely modular rather than half modular like the Antec, though.
 
alamone,

Moving the 8x 15K SAS-SS drives to an external enclosure lowered my 1880 temp from 70C to 43C. I further liquid-cooled the 580 SLI rig and added an external radiator, so the SLI rig runs 32C idle and 46C running Heaven 2.5 with everything on extreme and an OC. The 1880 runs 43C no matter what I throw at it. With all the removed HDDs and fans, I bet I shaved about 100-150W off the PSU load.

How did you get your CPU load in the different run states? If you get a chance, the new Heaven benchmark for the 580 is out (rev 2.5). Can you run it with everything maxed out and let me know the temps you get? I seem to remember my 580s used to run 82C-86C with air cooling.
 
The 56C figure was in my old i7-950 rig where the 1880i was jammed up against a 580. In my Sandy Bridge build the adjacent slot to the 1880i has some breathing room and my temps are reading 43C controller 50C CPU on the 1880i. I'll run the Heaven benchmark and report my temps as well.
 
Thanks. The Heaven benchmark places far more load on the GPU than previous versions. I suspect the 580 series raised the bar.
 
Hey guys,

Just got the Areca 1880i and loving it!

Hope you're having a great time with it. ;)

After reading much on this thread, tempted to get one too.
 
Hi,

I've run into a delicate problem with my RAID. My 1680ix-12 has been running for three years now with 8 x 1 TB Seagate Barracuda ES.1 drives in a RAID 6 configuration, but during an upgrade last weekend I decided to finally add more drives and expand the RAID set.

The new drives were 2 TB Western Digital Caviar Black (I decided on 2 TB after Areca confirmed that I would later be able to expand the RAID set further if I replaced the original 1 TB drives with 2 TB drives).

Due to a broken SATA connector for one of the slots, I had earlier moved one of the Seagate drives to another slot. The RAID was rebuilt with problems after this, but only three slots were available in my disk enclosure, so I added three WD drives and selected to expand the RAID with those three drives.

Almost immediately after the migration had started, the controller complained that one drive out of the three new drives had failed. Migration continued, but then another new drive failed. With no apparent way of cancelling, I let the migration continue in the hope that the controller would know best what to do in this situation.

Yesterday, I came home from a bike ride to a controller that was complaining loudly. Another drive had failed and the migration had stopped. The RAID set was now offline and it was reporting that this time, it was the moved Seagate drive that had failed.

All four drives (3 x WD + Seagate) are in the same 4 slot drive cage from Kingwin. They are all connected to the Areca controller with the same SAS cable on the controller's external port (the other drives are connected to the controller's internal ports through internal-to-external panels in the server).

So, without knowing the cause of the multiple failure and the RAID being offline, I took down the entire server and RAID enclosure and rebooted. When things got back up, the controller found the "failed" Seagate drive and the last of the "failed" WD drives. However, the controller now lists TWO RAID sets; one encompassing the original 8 Seagate drives, with a state of "Migrating (0.0%)", and the other encompassing only one WD drive with a state of "Incomplete".

The original volume set of 6 TB (8 TB raw disk in RAID 6) is still listed as belonging to the original RAID set. The "new" RAID set has no volumes.

At the bottom of this post is the event log from the controller showing the progress of the migration. The second failure occurred as I was removing the first failed drive (#7) from the hot-swap cage and this somehow caused the other drive (#8) to be listed as removed (no, I did not remove the wrong drive...).

The interesting part is the final two statuses: Failed Volume Revived and Rebuild RaidSet. I have emailed Areca about this log, but they have not replied yet. Does this mean there's still hope?

Also, in the RAID re-activation form in the ArcHttp GUI, my two RAID sets are both listed as being 11 TB / 11 drives. Obviously, neither of them actually is. I have the option of re-activating both of them. Will I be able to re-activate my old RAID set and restore the data?

Thanks in advance for any input.

Jens

031:14:44 Raid Set # 000 Rebuild RaidSet
031:14:44 ARC-1680-VOL#000 Failed Volume Revived
031:14:45 H/W Monitor Raid Powered On
031:15:54 Enc#1 Slot#6 Device Removed
031:15:54 Raid Set # 000 RaidSet Degraded
031:15:56 ARC-1680-VOL#000 Volume Failed
031:47:18 Enc#1 Slot#5 Device Failed ---- THIRD DRIVE FAILURE -----
031:47:18 Raid Set # 000 RaidSet Degraded
031:47:20 ARC-1680-VOL#000 Stop Migration 050:46:50
031:47:20 ARC-1680-VOL#000 Volume Failed
031:47:53 Enc#1 Slot#5 Time Out Error
032:32:01 Enc#1 Slot#5 Time Out Error
071:15:42 Enc#1 Slot#8 Device Removed ---- SECOND DRIVE FAILURE -----
071:15:43 Raid Set # 000 RaidSet Degraded
071:15:45 ARC-1680-VOL#000 Volume Degraded
071:16:21 Enc#1 Slot#7 Device Removed
073:48:06 Enc#1 Slot#5 Time Out Error
082:33:30 Enc#1 Slot#7 Device Failed ---- FIRST DRIVE FAILURE -----
082:33:31 Raid Set # 000 RaidSet Degraded
082:33:33 ARC-1680-VOL#000 Volume Degraded
082:34:05 Enc#1 Slot#7 Time Out Error
082:34:11 ARC-1680-VOL#000 Start Migrating
082:34:13 Raid Set # 000 Expand RaidSet
 
I think one is just the newer revision with the plastic cover over the battery. That's the only difference I've noticed in all the BBUs I've owned.
 
That looks like an ugly set of problems - this is why I generally avoid migrations and simply back up, recreate the new array, and restore. I'd wait and see what Areca has to say.

I think what's happening is that your WD drives are causing power resets to the other drives connected to the drive cage. I'm not sure why, but I've had a lot of problems where inserting a WD drive "resets" other drives on the same power cable - that is, the drives act like they momentarily lose and regain power, which a RAID card would interpret as a failed drive. You can observe the effect if your drive slots have access indicator lights - they will briefly blink amber, indicating a reset. I've had this happen to drive rows in my Norco case and in various hot-swap multibays, to the point where I don't use WD drives in RAID configurations. Some people say they have no problems with WD drives, but this has been my experience.
 
So I have my new Areca 1880ix-12, and maybe it's just been that long since I created a new array, but it seems really slow.

I started a RAID 6 init on 7x 5K3000s two nights ago and it's only at 72%.

2011-03-13 22:29:02 Data - 7x 5K3000 Start Initialize
2011-03-13 22:29:00 Data - 7x 5K3000 Create Volume
2011-03-13 22:23:08 Data - 7x 5K3000 Create RaidSet

EDIT: I also noticed that for my other array on the controller, none of the member disks are reporting SMART data. The new array's SMART data is showing up fine.
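Not a fix for the missing SMART data in the GUI, but if you want to cross-check SMART from the OS side, smartmontools can tunnel through Areca controllers. A rough sketch on Linux -- the /dev/sg node and the drive numbers below are placeholders for whatever your system actually exposes:

```python
# Rough sketch: poll SMART for drives behind an Areca controller via smartmontools.
# Assumes Linux with smartmontools installed; the /dev/sg node and slot range
# are placeholders -- adjust them for your own setup.
import subprocess

ARECA_SG_NODE = "/dev/sg2"   # SCSI generic node the Areca card shows up as (placeholder)
SLOTS = range(1, 8)          # drive numbers to query (placeholder)

for slot in SLOTS:
    # "-d areca,N" tells smartctl to tunnel through the Areca controller to drive N
    cmd = ["smartctl", "-a", "-d", f"areca,{slot}", ARECA_SG_NODE]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(f"=== drive {slot} ===")
    print(result.stdout or result.stderr)
```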
 
nitro, I would've advised against an 1880ix-12 and recommended an 1880i instead. You haven't specified it, but I'm guessing you've connected your HP expander between the drives and the Areca card, which means you're daisy-chaining expanders, since the ix-12 has one onboard. When I created a 12-drive RAID6 with 5K3000s it was done in about 5 hours on an 1880i + HP expander. The LSISAS2x24 expander on your Areca is a good chip (probably the best right now), but daisy-chaining it with the PMC8005 on the HP is uncharted territory and possibly glitchy.

Do you know for sure that you specified Foreground Initialization?
 
Yeah, I specified Foreground Initialization. I have background priority set to 80% and read-ahead set to Aggressive in the system configuration.

Yeah, I was concerned about the expanders, but when I got it I tested it out with my "Recovered" array and was getting awesome sequential throughput. Though I did have the cache enabled.

Yeah, I bought this because I got a good deal on it. Might sell it and get an 1880i.
 
Alamone,

Did some experimenting on the 1880 and SLI 580s, similar to your config, with a Toughpower 1000W PSU and all drives external.

Running the i7 at 4.6GHz with 24GB DDR3-2000+ and Prime95 to max out CPU/mem, the +5V rail drops to 4.96.

CPU/mem now at 100%.

Adding the Heaven 2.5 extreme-profile benchmark on the SLI 580s drops the +3.3V rail to 3.296.

CPU/Mem/GPU1/GPU2 now at 100%.

Adding an HD Tune benchmark across the entire array drops the +3.3V to 3.264.

So CPU/Mem/GPU1/GPU2 at 100% + 1880 RoC activity.

Adding an HD video render (obviously very slow) did not have much impact on the already burdened system.

At no time did I get an alarm from any device.

Final PSU: +3.294, +4.965, +12.135
Temps: MB: 43C, CPUs: 63C, NB: 56C, GPUs: 72C, 1880: 43C
 
Yeah, I specified Foreground Initialization. I have background priority set to 80% and read-ahead set to Aggressive in the system configuration.

Yeah, I was concerned about the expanders, but when I got it I tested it out with my "Recovered" array and was getting awesome sequential throughput. Though I did have the cache enabled.

Yeah, I bought this because I got a good deal on it. Might sell it and get an 1880i.

If anything, I'd keep the 1880ix-12, max out the cache to 4GB (that really is the one reason to get an ix-12 card), and swap the HP expander for an Intel RES2SV240, since at least then you'd be daisy-chaining the exact same expander chip.
 
Yeah, I think I'll take the HP out tonight, redo the initialization, and see if it goes faster connected directly to the Areca.
 
Yeah, I think I'll take the HP out tonight, redo the initialization, and see if it goes faster connected directly to the Areca.

I recently had to initialize a new set of 5 of the 3TB Hitachis. The initialization was taking forever and would have taken over two days. What I found was that changing the "Enable Disk Write Cache" setting to Enable (not Auto or Disabled) cut the init time by roughly a factor of ten; 6 or 7 hours later it was all done. I changed it back to Auto afterwards. Might be worth trying. Granted, I have the older 1280ML card, so results may vary.
 
Hi Folks,

I am buying a 24-port Areca to update my 1260. I'm hesitating between a 1280ML and an 1880ix. Price is not a concern. I plan to use it with my WD20EARS drives (I know, not a good setup...). I am running Win2K8 with RAID6 without problems, but I plan to move to ZFS with the new setup.

I might use SATA 6Gb/s in the future, so the 1880ix is interesting, but otherwise I have no idea which one to choose.

Do you have any advice? What about running SATA disks on the 1880ix?

thanks in advance,

cheers,

Sergio
 
I'm considering the Areca 1880i connected to the HP SAS Expander for a software ZFS RAID pool. Is this a good or bad choice, given that it's a hardware RAID card? And can I deactivate its hardware RAID logic when using it for software RAID?
 
Today, after saving for almost a year, I finally ordered my ARC-1880ix-24!!!
I plan on hooking 8 Hitachi 7K3000 3TB disks on it in a RAID6 configuration.
The main purpose of my home data server is storing all of my Blu-rays and DVDs so I can easily stream them to my HTPC.
Now I know a bit about computers and data storage, but I'm definitely not an expert like most of the people here :)
So I wanted to ask if there are any tips on what settings and values I should use to configure this card, so that I'm prepared when the beauty finally arrives (the order was going to take 2 weeks).
Like I said, I would like to use a RAID6 config, and reliability is much more important to me than speed.
I'm guessing speed will improve a lot for me since I'm currently running a RAID5 with 5 disks on an ICH10.
Also, would the 4GB memory upgrade be useful for my setup and use?
Thanks in advance for any help!!!
 
I'm considering the Areca 1880i connected to the HP SAS Expander for a software ZFS RAID pool. Is this a good or bad choice, given that it's a hardware RAID card? And can I deactivate its hardware RAID logic when using it for software RAID?

You'd be wasting a perfectly good RAID card on ZFS. Sure, you can set the disks to JBOD, but you'd be better off (IMHO) with a 'simple' HBA.
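For what it's worth, once the controller is just passing the disks through and the OS sees them individually, the ZFS side is a plain pool build. A minimal sketch (wrapped in Python for illustration), assuming Linux-style device names that will certainly differ on a real box -- and note that zpool create wipes the listed disks:

```python
# Minimal sketch: build a raidz2 (double-parity, roughly RAID6-equivalent) pool
# from disks the controller passes through to the OS. Device names are
# placeholders -- use your real by-id paths. This destroys data on the disks.
import subprocess

POOL_NAME = "tank"                                   # placeholder pool name
DISKS = [f"/dev/sd{letter}" for letter in "bcdefg"]  # placeholder device nodes

subprocess.run(["zpool", "create", POOL_NAME, "raidz2", *DISKS], check=True)
subprocess.run(["zpool", "status", POOL_NAME], check=True)
```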
 
Thanks, I had already tried what you suggested but it was a no-go: I created a JBOD array and then blew it away, and it doesn't work.

That BIOS is very basic, not many options to play with. Nothing in the GUI to do anything either. I just find this weird.

Thanks again for the help.

Well, I realized what the issue is: it seems WD 1TB Green drives with the 00ZJB0 suffix are just not compatible with the Areca card. The 4 drives that are not detected are all the same model, WD10EACS-00ZJB0; my other 4 drives are WD10EACS too, all built before Sept 2009 as well, but they have a different part number.
 
Well, I have 8 Seagate drives that ran for over 2 years without problems, and then 1 disk on channel 5 started to fail. I replaced it, and now, 3 months later, another disk on the same channel has failed.

What are the odds? I could understand if a disk on one of the other channels failed, but this disk is brand new and the Areca card is telling me it's broken?

Well, maybe it is broken, maybe not, but I'll bet something is fubar with channel 5!

I had a similar issue with a RAID card. Did you try swapping out the cable on that channel? Seems crazy that a cable would go bad, but it certainly resolved my issue when a drive dropped out of my RAID 5 array every other day :)
 
Here are my results after running the Heaven 2.5 benchmark. I'm running stock on CPU and GPUs.

GPU1: 82C
GPU2: 75C
1880i CPU: 58C
1880i Controller: 52C
3.3V: 3.25V


Heaven Benchmark v2.5 Basic

FPS: 30.7
Scores: 774
Min FPS: 5.0
Max FPS: 84.4

Hardware
Binary: Windows 32bit Visual C++ 1600 Release Mar 1 2011
Operating system: Windows 7 (build 7601, Service Pack 1) 64bit
CPU model: Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
CPU flags: 3411MHz MMX SSE SSE2 SSE3 SSSE3 SSE41 SSE42 HTT
GPU model: NVIDIA GeForce GTX 580 8.17.12.6658 1536Mb

Settings
Render: direct3d11
Mode: 2560x1440 8xAA fullscreen
Shaders: high
Textures: high
Filter: trilinear
Anisotropy: 16x
Occlusion: enabled
Refraction: enabled
Volumetric: enabled
Tessellation: extreme
 
I just installed my new 1280ML card for the first time and want to upgrade the firmware. I downloaded the .zip file from the Areca site but it has four different .bin files in it. Which one do I flash the card with? Do I flash the card 4x, once with each .bin? The instructions aren't clear. Thanks in advance!
 
Yes, just flash them one by one, each file through the web management firmware update page; it will figure out which part of the card to update automatically. Generally I skip the MBR0 file, as I think that one is only necessary if you had some boot issue with the card.
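If you want to sanity-check the package before flashing, a throwaway sketch like this lists the images inside the zip (assuming a standard Areca firmware download; the zip file name is a placeholder and the .bin names vary by model):

```python
# Throwaway sketch: list the firmware images inside an Areca firmware zip
# before flashing each one through the web management page. The zip path
# below is a placeholder -- use whatever package you downloaded for your card.
import zipfile

FIRMWARE_ZIP = "arc1280_firmware.zip"  # placeholder path to the downloaded package

with zipfile.ZipFile(FIRMWARE_ZIP) as zf:
    for info in zf.infolist():
        if info.filename.lower().endswith(".bin"):
            print(f"{info.filename:<32} {info.file_size:>10} bytes")
```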
 
Thanks for the reply. Everything appeared to update successfully.

So I'm setting up my first RAID-6 with four Hitachi 3TB 7200RPM SATA drives on a 1280ML controller... here are my settings:

maximum SATA mode: 300+NCQ
background task priority: 50%
volume readahead policy: aggressive (expecting a lot of sequential data access)
HDD readahead: Enabled
HDD startup delay: 2 seconds
HDD idle head parking: disabled
HDD idle RPM slowdown: 30 minutes (what speed does this take them to?)
HDD idle spindown: 60 minutes

RAID level: 6
stripe size: 128k
Over 2TB support: 64bit LBA
SCSI Channel: 0 (Should I keep these SCSI settings to 0?)
SCSI ID: 0
SCSI LUN: 0
Cache Mode: Unsure, do I want write-through or write-back? I do have an Areca BBU attached to this card.
Tag Queuing: I'm assuming I want this enabled even though I'd be using NCQ

Could someone answer the above questions and make sure I set everything to sane settings?
 
So I installed my SAS expander and all went well... at least so it seems.

I went ahead and expanded the RAID set to include the two drives I added, but that by itself didn't add the space. So I went in and added the space, and now it is initializing. First it migrated, and now it is initializing. Did I do something wrong? Is there a quicker way to do this? So far, adding a pair of 2TB drives (to the existing 4 drives) took the whole weekend for the migration, and now it looks like it will be another day or so to initialize.

Thoughts?
 
First the raidset gets expanded, then the volumeset gets expanded - two separate steps. It's standard procedure. Because you can have multiple volumesets per raidset, the controller doesn't make any assumptions and doesn't expand the volumeset when the raidset is expanded. The third step is extending your partition in Windows Disk Management (rough sketch below).

OCE is time-consuming on any RAID controller. That's why, if I need to expand space and want it done fast or need the extra space right away, I'll often just delete and recreate the raidset with the additional drives and then copy the data back from my backup copy on another array. Takes about 1/4 of the time.
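For the Windows Disk Management step, if you'd rather script it, diskpart can do the extend non-interactively. A rough sketch -- the volume number is a placeholder, so check "list volume" output first, and run it as Administrator:

```python
# Rough sketch of step 3 (extending the partition into the newly added space)
# using diskpart on Windows. The volume number is a placeholder -- confirm it
# against "list volume" before extending.
import os
import subprocess
import tempfile

VOLUME_NUMBER = 2  # placeholder: the volume sitting on the expanded volumeset

script = f"select volume {VOLUME_NUMBER}\nextend\n"

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(script)
    script_path = f.name

# diskpart /s runs the commands from the script file non-interactively
subprocess.run(["diskpart", "/s", script_path], check=True)
os.remove(script_path)
```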
 
Last edited:
I was wondering that as well. Thanks odditory. It should be done expanding sometime tomorrow morning. I still have to do this on my primary array. Can I up the background process percentage to make it go faster?

-Brian
 
Even at 80% priority, you still have plenty of bandwidth left over, so no reason not to have it at that.

Also, if anyone is interested, I have a couple of 1280ML 4GB cards that I'm wanting to part with (listed in the FS subforum).
 
What is the purpose of installing the Areca driver in Linux? In Ubuntu 8.04 it already sees the card (though it incorrectly identifies a 1280ML as a 1230), and I can see the volume set using fdisk -l (haven't tried partitioning/formatting it yet). Is there any reason I should compile and install this driver?
 
When you install a video card for the first time, the screen still works and you can see everything using the built-in Windows driver.

You still install the NV or ATI drivers, right? Of course, because you get better performance from drivers designed for that specific card than from drivers written just to get things functioning.

The same logic applies here.
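If you want to see what the box is currently using before deciding whether the vendor driver is worth building, something like this shows whether the in-kernel arcmsr module (the stock Linux driver for Areca cards) is loaded and what version it reports -- a rough Linux-only sketch:

```python
# Rough sketch: check whether the in-kernel Areca driver (arcmsr) is loaded
# and print the version modinfo reports, to compare against the vendor driver.
import subprocess

def module_loaded(name: str) -> bool:
    """Return True if the kernel module shows up in /proc/modules."""
    with open("/proc/modules") as f:
        return any(line.split()[0] == name for line in f)

if module_loaded("arcmsr"):
    # modinfo reports the file and version string of the driver the kernel uses
    info = subprocess.run(["modinfo", "arcmsr"], capture_output=True, text=True)
    for line in info.stdout.splitlines():
        if line.startswith(("filename", "version")):
            print(line)
else:
    print("arcmsr not loaded -- the card may be handled by a different driver")
```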
 