ARECA Owner's Thread (SAS/SATA RAID Cards)

It's important to note you must use a switch that supports link aggregation on its ports. Not all switches do. Prepare to spend big dollars for a switch that can do this.

The HP 1810G and 1820G switches do 802.3ad aggregation, are quite cheap and, importantly for home use, very quiet. I have a 24-port one here and it works great with Intel NICs in both Windows and Linux. Just over 200 MB/s from a Windows box to a CentOS box.
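For anyone curious how that figure sits against the theoretical ceiling, here's a rough back-of-envelope sketch in Python. The two-port team and the framing-overhead allowance are my assumptions, not details from the post; also remember a single TCP stream only ever rides one member link, so sustained ~200 MB/s implies multiple concurrent streams.

# Rough ceiling for an 802.3ad team of gigabit ports (assumed: two ports).
ports = 2
line_rate_bps = ports * 1_000_000_000      # 2 Gbit/s raw
framing_efficiency = 0.94                  # rough allowance for Ethernet/IP/TCP overhead
ceiling_mb_s = line_rate_bps * framing_efficiency / 8 / 1_000_000
print(f"~{ceiling_mb_s:.0f} MB/s theoretical ceiling")   # ~235 MB/s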
 
That's true, I just found that. I saw their Canadian distributor's website, but I don't think I can buy from them as a consumer. Thanks guys. Can't wait for my 20TB server :).
 
Question for owners of 1880i cards: what "G" are you seeing at the top of your 1880i's BIOS banner?


[attached image: arecapciex825.jpg]



I've seen this at 2.5 or 5.0 depending on how the PCIe width is configured but never higher than 5.0.

Is my array performance suffering when this value is smaller?

Is there something higher than 5.0 that should be sought?
 
Set up an Areca 1880ix-16 over the weekend. It took about 4.5hrs to initialise a 12TB RAID 6 array! Phenomenal! So fast!

I'm using 8x 2TB Hitachi CoolSpin 5K3000 disks.
 
Blue Fox said:
PCIe 1.0 supports 2.5 gigatransfers per second per lane. PCIe 2.0 supports double that.

I've been fiddling with an ARC-1880IX-24 with 24 SAS 6G drives in one RAID-6 array for an MS terminal server.

I can get that Areca banner to read PCIEx8/2.5G or PCIEx4/5.0G, but not PCIEx8/5.0G.

Which of those two would be less likely to impact overall array performance?
 
I've been fiddling with an ARC-1880IX-24 with 24 SAS 6G drives in one RAID-6 array for an MS terminal server.

I can get that Areca banner to read PCIEx8/2.5G or PCIEx4/5.0G, but not PCIEx8/5.0G.

Which of those two would be less likely to impact overall array performance?
I don't think it will really make a difference either way. You're going to be limited by the card's CPU.
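As a quick sanity check of why those two link configurations come out the same, here's a minimal sketch based on the per-lane rates quoted above (both PCIe 1.0 and 2.0 use 8b/10b encoding, which the calculation accounts for):

# Usable PCIe payload bandwidth: lanes x GT/s, less 8b/10b encoding overhead.
def pcie_bandwidth_gb_s(lanes, gt_per_s):
    return lanes * gt_per_s * 8 / 10 / 8    # GB/s

print(pcie_bandwidth_gb_s(8, 2.5))   # x8 @ 2.5 GT/s -> 2.0 GB/s
print(pcie_bandwidth_gb_s(4, 5.0))   # x4 @ 5.0 GT/s -> 2.0 GB/s (same ceiling)
print(pcie_bandwidth_gb_s(8, 5.0))   # x8 @ 5.0 GT/s -> 4.0 GB/s

So PCIEx8/2.5G and PCIEx4/5.0G give the same raw link bandwidth; the card's own processor is the more likely bottleneck either way, as noted above.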
 
I'm currently running an ARC-1260 on an Intel DX48BT2 motherboard as a home storage server. Since this is now on 24x7, I'm keen to get power consumption as low as possible. I've recently switched to 8x 2TB 5K3000 drives, which has at least halved the power draw, largely because I can now spin them down (the WD RE3s I had really didn't like that). The system boots from a separate disk.

The system spends most of its time idle, when it draws ~120W. Well over 80% of this seems to be the CPU+motherboard+graphics (NVS280), so I'm considering swapping them for something a bit more efficient. I'm keen to keep hold of the DDR3 I've already got, which is non-ECC.

The new Sandy Bridge parts seem to be far leaner at idle, and H67 supports the on-chip graphics: the Intel DH67BL apparently runs at ~16W idle with a Core i5-2500, which would pay for itself in just over 2 years. It has a PCIe2 x16 slot, but it's sometimes described as graphics-only; even the Intel site can't seem to make its mind up. I've heard PCIe2 eradicated the graphics-only PEG slot nonsense, but I've been burned before and I'm wary of wasting ~£250 on a CPU + mobo.
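For what it's worth, the payback arithmetic roughly checks out. A quick sketch; the ~100W idle saving and the ~£0.13/kWh electricity price are my assumptions, not figures from this post:

# Back-of-envelope payback for swapping to a lower-idle platform.
idle_saving_w = 100           # assumed: ~120 W now vs ~20 W for the new board + CPU
price_per_kwh = 0.13          # assumed UK tariff
hours_per_year = 24 * 365
annual_saving = idle_saving_w / 1000 * hours_per_year * price_per_kwh
print(f"~£{annual_saving:.0f} per year saved")              # ~£114
print(f"payback on £250: {250 / annual_saving:.1f} years")  # ~2.2 years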

Has anyone tried an Areca in an H67 mobo? I'm aware a server-grade board would be a safer bet, but it would be ~£150 more expensive, would also require new RAM, and seems unlikely to be as power efficient (happy to be corrected if you have data).

Thanks.

For the benefit of anyone else who cares, I took a chance and the card works fine in the x16 slot on this motherboard. I don't have a monitor with a digital interface to check, but the VGA output of the board certainly still works.

Power draw of mobo + i5-2400 + ARC-1260 + BBU + 8x5K3000 drives + 1 RE3 boot drive is ~70W at the wall when idle with the storage drives spun down (Zalman ZM1000-hp PSU, UK mains). With the drives spun up it rises to just over 100W at the wall.

[edit: I'm not sure yet why the mobo+cpu are drawing ~50W idle, when everyone else was quoting 16-25W for this combo, but that's a bit off-topic]
 
I can confirm that my 1880ix-24 4G works perfectly in an ASUS Rampage Formula X48, as long as you're willing to sacrifice your video card by backing it down to x8 in the other x16 slot.

In no small part thanks to what I learned from this thread (big time, thank you), the process of getting fully up and running was shockingly easy and seamless for this TOTAL RAID n00b. I saw plenty of the weirdness I expected. In retrospect, however, I think most of that weirdness was probably down to one bad cable among my several cables, and not a sign that dropping an 1880ix into a Rampage Formula X48 is anything other than far easier than I was anticipating.

If you have a choice, I recommend putting your 1880ix in the lower of the appropriate slots if there's no downside for you. Your 1880ix might work fine in the first x16 slot, but after hearing that OCZ PCIe RAID cards only work consistently in the second x16 slot, I figured why take even the remotest risk of a similar compatibility issue? So I installed it in the 2nd slot and never had to look back.

There may or may not be a speed ceiling with this X48. It's hard to know whether maxing out too early on one of the RAIDs is me doing something wrong or a bandwidth limitation of the board (if that's even possible; heck if I know, being the n00b). Alternatively, I could simply be blowing it or using the wrong software for speed checking. But in the real world outside of benchmarks, transfers have never seemed sluggish to this speed freak so far, so if I botched a setup, at least it wasn't a total trainwreck.

FYI, my 1880ix-24 did come with the single heatsink, but I told my vendor until they were sick of hearing it that if somebody tried to slip me a new-old-stock 1880ix-24 with the pathetic dual heatsinks, it would be returned. Temperatures of my 1880ix-24 in my "can't hear yourself think" fan-airflow-noise tower have been surprisingly low. But in fairness, I have a couple of 60mm fans blowing at it, because I just don't see any downside to that.

I felt n00b-stupid buying the $60-70 ARC-1000 LCD panel, but I'm pleasantly surprised that it's actually useful (to me, at least) when the ArcHTTP panel (for me) just doesn't want to cooperate.

Finally, for any dummies like me who need to be told: if your router isn't the out-of-the-box 192.168.1.1 and its address is, say, 192.168.6.1, make sure you set your 1880ix to an address in the same subnet, e.g. 192.168.6.x. If you don't adjust the card's address (during boot or from a networked computer) to sit on the same subnet as your router's non-stock address (for instance, 192.168.6.100 per our earlier example), you're in for a well-earned bit of frustration.

But yes, bottom line: for the one other person in the world silly enough to be considering dropping an 1880ix into an obsolete non-server Rampage Formula X48, not only does it work, it works very well (at least for me, obviously).

Most importantly, thanks BIG TIME to everybody in this thread. I'm absolutely shocked at how easily, after painstakingly reading what you all wrote, I got several different RAIDs running, including a RAID 50, with fully respectable speed numbers for the intended use of each. Without this thread, I'd probably be staring at the darn screen (if I got the RAIDs up at all) wondering why my speed numbers were paltry relative to what should reasonably be expected. Further, add me to the apparently lucky ones in the vote count: the Samsung HD203WIs do actually work fine even in large arrays (at least when hooked straight up to the card and not in an expander enclosure), thank god.

Thanks guys. I'm well aware that even though I'm totally objective about Areca and the many faults of the 1880ix series, this post makes me sound like an Areca shill instead of the perfectionist, nothing-is-ever-good-enough, impossible-to-please guy that I am. But without your help, I might not even have attempted a run with the 1880ix on my foolish mobo/drive combination, and this post would probably read "this card sucks, I don't get it" instead of sounding like a !@#$@#$#@ Areca ad written by somebody at Areca as a bogus shill post. :)
 
I have an ARC-1680 in a Norco 4224.
I'm using a 750w corsair power supply.

Periodically I see the following in my areca logs (and the beeper sounds):

2011-06-18 12:47:53 Ctrl DDR-II 0.9V Recovered
2011-06-18 12:47:23 Ctrl DDR-II 0.9V Under Voltage

I'm wondering what this means and whether I should worry? I've had this happen every once in a while since the card was purchased and the 'under voltage' always recovers in less than a minute or two.

I was previously running 15x 7200RPM Hitachis and recently added another 9x 5K3000 Hitachis (as a second RAID set and volume set) to fill up the bays in the case, and now I'm getting dropouts on two of the new drives (the original RAID and volume set are working fine). I'm currently debugging this, but I'm beginning to wonder if I have insufficient power (or possibly 2 bad drives; working on narrowing that down). I also have 2x 7200RPM non-RAIDed drives + 2x Intel SSDs. In addition I have a dual-port gigabit Intel NIC and an HP SAS card on a Xeon 3440.

Anyway, if anybody knows what the undervolting means I'd appreciate some guidance/advice. Thanks
 
I have an ARC-1680 in a Norco 4224.
I'm using a 750w corsair power supply.

Periodically I see the following in my areca logs (and the beeper sounds):

2011-06-18 12:47:53 Ctrl DDR-II 0.9V Recovered
2011-06-18 12:47:23 Ctrl DDR-II 0.9V Under Voltage

I'm wondering what this means and whether I should worry? I've had this happen every once in a while since the card was purchased and the 'under voltage' always recovers in less than a minute or two.

I was previously running 15x 7200RPM Hitachis and recently added another 9x 5K3000 Hitachis (as a second RAID set and volume set) to fill up the bays in the case, and now I'm getting dropouts on two of the new drives (the original RAID and volume set are working fine). I'm currently debugging this, but I'm beginning to wonder if I have insufficient power (or possibly 2 bad drives; working on narrowing that down). I also have 2x 7200RPM non-RAIDed drives + 2x Intel SSDs. In addition I have a dual-port gigabit Intel NIC and an HP SAS card on a Xeon 3440.

Anyway, if anybody knows what the undervolting means I'd appreciate some guidance/advice. Thanks



As long as the PSU is running a single physical rail (it can be virtually split) and the amperage is sufficient, you shouldn't have any issues resulting from lack of power or amps. Corsair PSUs are typically underrated when it comes to max power output (watts); they can usually supply about 50 watts more at max load than the rating.

In the past, Corsair units were rebadged Seasonic PSUs, very reliable. I'm not sure who Corsair is using today. On one of my servers I'm running a Corsair 620W modular PSU with 15 drives without issue; in that setup the server draws between 360-425 watts measured at the wall outlet. Your drives aren't going to be the cause of inadequate power; if anything it will be a video card. Running two video cards on 750W would cause problems. However, it sounds like you're running a dedicated server given the specs supplied, so I can't imagine you're running into power issues with the PSU.

It sounds like your issue is related to the motherboard BIOS setting for voltage on the PCI-E bus. You can adjust this manually in the motherboard BIOS. Don't crank it up too high: use small adjustments in 0.1 V increments, then test for stability. If you still see problems after increasing the bus voltage by 0.4-0.5 V, I wouldn't go further; I'd contact Areca directly, as your card may be failing.
 
I've got an 1880 that seems to really like doing volume checks on most, but not all, of my arrays every time I boot the system up. I've got one 5-drive RAID-6 and several smaller 3-drive RAID-5s, and for whatever reason, any time the system is fully powered down, whether from a bad shutdown due to loss of power or a proper shutdown from Windows, the next time I power it on it does a full volume check on the RAID-6 and two of the RAID-5s, but not the last one (which happens to be the newest).

I don't see any options to control this behavior one way or the other, nor do I see any errors in the logs other than the power-on and start-checking events.

Might it be that for whatever reason Windows isn't giving the card sufficient time to power down the RAID properly, and the volumes are somehow being labeled as dirty, and that's why they get checked? I didn't really see any other instances of this happening for anyone else... Not sure I really like the extra wear and tear on the drives either.
 
Drives don't wear out from access activity. If Windows isn't giving the devices enough time to flush at shutdown, it's a problem with the device or driver, which is telling Windows that it has completed its hard flush and that it's safe to shut down.

What kind of drives are you using in what configuration?
 
Thanks for your help!

As long as the PSU is running a single physical rail (it can be virtually split) and the amperage is sufficient, you shouldn't have any issues resulting from lack of power or amps. Corsair PSUs are typically underrated when it comes to max power output (watts); they can usually supply about 50 watts more at max load than the rating.

The power supply is a Corsair HX750W 750-Watt Modular Power Supply - Single +12V Rail Design.

I was using it with the 24 RAIDed drives, 2 SSDs, and 2 leftover 7200RPM Seagate drives.

There's a total of 7 fans (including the CPU fan). I'm using the onboard Tyan graphics. As I wrote, there are 3 PCIe cards: the 1680, the HP SAS expander and a dual-port Intel gigabit NIC.

It sounds like your issue is related to the motherboard BIOS setting for voltage on the PCI-E bus. You can adjust this manually in the motherboard BIOS. Don't crank it up too high: use small adjustments in 0.1 V increments, then test for stability. If you still see problems after increasing the bus voltage by 0.4-0.5 V, I wouldn't go further; I'd contact Areca directly, as your card may be failing.

I'll see if I can find the adjustment for that in the BIOS (it's a Tyan server motherboard, not the easiest to drive).

I also sent off an email to areca support just in case it is the card.

The HW monitor shows:

CPU Temperature 52 ºC
Controller Temp. 40 ºC
CPU Fan 0 RPM
12V 12.160 V
5V 5.026 V
3.3V 3.280 V
DDR-II +1.8V 1.792 V
PCI-E +1.8V 1.792 V
CPU +1.8V 1.792 V
CPU +1.2V 1.200 V
DDR-II +0.9V 0.880 V
Battery Status Not Installed

I'm not sure why the CPU fan shows 0 RPM. I guess the voltages are all a bit low (the 1.8 V rails read 0.008 V under nominal, the 0.9 V rail 0.020 V under).

After removing one SSD and the two additional 7200RPM SATA drives, the 2nd RAID has been running stable, so I think I may be at the limits of the PSU. I'm still running some tests on the RAID to see if I can provoke it to fail again.

Anyway, I suspect the two events (RAID dropout and DDR undervoltage) are unrelated. Let's see what Areca say as well. Hopefully the card is not faulty.

Thanks again!
 
Thanks for your help!



The power supply is a Corsair HX750W 750-Watt Modular Power Supply - Single +12V Rail Design.

I was using it with the 24 RAIDed drives, 2 SSDs, and 2 leftover 7200RPM Seagate drives.

There's a total of 7 fans (including the CPU fan). I'm using the onboard Tyan graphics. As I wrote, there are 3 PCIe cards: the 1680, the HP SAS expander and a dual-port Intel gigabit NIC.



I'll see if I can find the adjustment for that in the BIOS (it's a Tyan server motherboard, not the easiest to drive).

I also sent off an email to areca support just in case it is the card.

The HW monitor shows:

CPU Temperature 52 ºC
Controller Temp. 40 ºC
CPU Fan 0 RPM
12V 12.160 V
5V 5.026 V
3.3V 3.280 V
DDR-II +1.8V 1.792 V
PCI-E +1.8V 1.792 V
CPU +1.8V 1.792 V
CPU +1.2V 1.200 V
DDR-II +0.9V 0.880 V
Battery Status Not Installed

I'm not sure why the CPU fan shows 0 RPM. I guess the voltages are all a bit low (the 1.8 V rails read 0.008 V under nominal, the 0.9 V rail 0.020 V under).

After removing one SSD and the two additional 7200RPM SATA drives, the 2nd RAID has been running stable, so I think I may be at the limits of the PSU. I'm still running some tests on the RAID to see if I can provoke it to fail again.

Anyway, I suspect the two events (RAID dropout and DDR undervoltage) are unrelated. Let's see what Areca say as well. Hopefully the card is not faulty.

Thanks again!


Based on the Areca stats you posted, I think it confirms my suspicion that your PCI-E bus is not receiving enough power (compare them with the readings from my ARC-1231ML below). You should try adjusting this manually in 0.1 V increments in the Tyan motherboard BIOS settings: bump the PCI-E bus voltage by 0.1 V, then check the Areca controller stats again.


Stats from an ARC-1231ML
------------------------------
Fan Speed N.A.
Battery Status Not Installed
CPU Temperature 69 ºC
Ctrl Temperature 57 ºC
Power +12V 12.038 V
Power +5V 5.026 V
Power +3.3V 3.296 V
SATA PHY +2.5V 2.496 V
DDR-II +1.8V 1.840 V
PCI-E +1.8V 1.840 V
CPU +1.8V 1.840 V
CPU +1.2V 1.200 V
DDR-II +0.9V 0.912 V







Corsair HX750 PSU rating
-----------------------------------------------------------
[attached image: hx750rating.png]



- Your PSU only provides 25A on the +5V rail
- Onboard graphics won't draw much power
- SSDs draw a fraction of the power of traditional mechanical HDDs


Now let's assume the worst-case scenario (not counting peak draw at startup, which can be avoided with staggered spin-up):

26 platter HDDs (+5V power usage)
==================
26x 0.7A = 18.2A


Consider that the 7 fans may draw 2-3 amps. That still leaves 4-5 amps on the +5V rail, which is plenty for any remaining IC components drawing from +5V.
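Here's the same +5V budget written out as a quick sketch, using the ~0.7 A/drive figure above and the HX750's 25 A +5V rating; the fan allowance is the rough estimate from this post, not a measured value:

# +5V rail budget, assuming the numbers quoted in this post.
rail_5v_amps = 25.0
platter_drives = 26          # 24 RAIDed + 2 extra 7200RPM drives
amps_per_drive = 0.7         # typical +5V label rating (see the drive examples below)
fan_allowance = 3.0          # rough upper estimate for 7 fans
drive_amps = platter_drives * amps_per_drive
used = drive_amps + fan_allowance
print(f"drives: {drive_amps:.1f} A")                                   # 18.2 A
print(f"with fans: {used:.1f} A, headroom: {rail_5v_amps - used:.1f} A")  # ~21.2 A used, ~3.8 A spare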






Power requirements for platter based HDD (examples)
-----------------------------------------------------------
You'll notice below that the WDC and Seagate power ratings on the drive labels for +5V and +12V differ from Hitachi's.


Seagate Barracuda ES (500GB) 7200RPM - ST3500630NS
http://www.seagate.com/staticfiles/support/disc/manuals/enterprise/Barracuda ES/SATA/100424667b.pdf

Rating (drive label)
===================
+5V 0.72A
+12V 0.52A

Startup (peak)
===================
+12V 2.8A



Western Digital GP (1TB) 5400RPM - WD10EACS (similar power req as WD10EADS & WD10EARS)
http://websupport.wdc.com/rd2.asp?u...s/detail/search/1/a_id/1679#&aid=1679&lang=en

Rating (drive label)
===================
+5V 0.70A
+12V 0.55A

Startup (peak)
===================
+12V 1.65A

Idle usage (watts)
===================
3.3w


Hitachi Deskstar (2TB) 7200RPM - 0F10311
http://www.hitachigst.com/tech/tech...251D9A9862577D50023A20A/$file/DS7K3000_ds.pdf

Rating (drive label)
===================
+5V 400mA = 0.40A
+12V 850mA = 0.85A

Startup (peak)
===================
+5V 1.2A
+12V 2.0A

Idle usage (watts)
===================
7.5w




Power requirements for PCI-E RAID controller
-----------------------------------------------------------
Areca ARC-1260
http://www.areca.us/support/downloa...Manual_Spec/ARC_1xx0_1xx0ML_Specification.zip

The RAID controller doesn't appear to require power (amps or watts) from the +5V rail

Rating (watts)
===================
+3.3V 4.95W
+5V 0W
+12V 6.22W


Power requirements for Intel 82571EB (Dual gig-E)
-----------------------------------------------------------
Intel 82571EB
http://www.intel.com/products/ethernet/resource.htm#s1=all&s2=82571EB&s3=all

The Intel NIC operates on +3.3V

Rating
===================
+3.3V 226mA = 0.226A
 
Hi guys,

I've been searching the web since yesterday and also read the entire 189-page manual without finding an answer to my question: do you know if the Areca 1880i supports RAID with disks of mixed sizes (1TB, 1.5TB and 2TB), or do I have to use the same size for all of them?

Thanks

EDIT: I think I found the answer right after I posted: "If physical disks of different capacities are grouped together in a RAID set, then the capacity of the smallest disk will become the effective capacity of all the disks in the RAID set."

Damn
 
Hi guys,

I've been searching the web since yesterday and also read the entire 189-page manual without finding an answer to my question: do you know if the Areca 1880i supports RAID with disks of mixed sizes (1TB, 1.5TB and 2TB), or do I have to use the same size for all of them?

Thanks

EDIT: I think I found the answer right after I posted: "If physical disks of different capacities are grouped together in a RAID set, then the capacity of the smallest disk will become the effective capacity of all the disks in the RAID set."

Damn

That's a RAID limitation, not an Areca limitation. You should do more reading ;)
 
There aren't conventional RAID systems that handle mixed-capacity drives, but there are proprietary systems that do. Areca doesn't (to my knowledge), so not supporting mixed drive sizes really is both an Areca limitation and a limitation of conventional RAID.
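To make the "smallest disk wins" rule from the manual concrete, here's a small sketch with the sizes asked about above; the four-drive mix is illustrative, not from the original post, and the parity overheads shown are just the standard RAID 5/6 ones, nothing Areca-specific:

# Conventional RAID with mixed sizes: every member counts as the smallest disk.
def usable_capacity_tb(drive_sizes_tb, parity_drives):
    return min(drive_sizes_tb) * (len(drive_sizes_tb) - parity_drives)

mix = [1.0, 1.5, 2.0, 2.0]                 # illustrative four-drive mix
print(usable_capacity_tb(mix, 1))          # RAID 5: 3.0 TB usable
print(usable_capacity_tb(mix, 2))          # RAID 6: 2.0 TB usable
# 0.5 TB of the 1.5 TB drive and 1 TB of each 2 TB drive go unused.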
 
I thought RAID had evolved since my last lessons... FlexRAID, for instance, works well with different-sized HDs.
 
I thought RAID had evolved since my last lessons... FlexRAID, for instance, works well with different-sized HDs.

That's why mikeblas was careful to say "conventional RAID" rather than just RAID. FlexRAID is not a conventional RAID setup.
 
Hi, I'm very new to this forum and to this controller.
My server: SuperMicro X8SIA, Areca 1880, HP expander, Norco 4220 backplanes, Samsung and Seagate 2TB drives, Windows Server 2008 R2
My problem: I put the RAID 5 in write-through mode to avoid any power-failure problems... and the result was a system freeze as soon as I moved data to the RAID 5. Note that the RAID 1 arrays are OK in write-through mode.
I switched the RAID 5 to write-back mode and everything was OK.
Note: I've seen that my RAID 5 drives are in SATA150 mode, while the supported mode is SATA300+NCQ (depth 32).
Any ideas from the experts?
Thank you in advance.
Alain
 
Can Areca cards be run in write-back mode without a BBU? I appreciate this isn't recommended, and if I'm happy with the card I'll likely get a BBU eventually, but I'd like to experiment and benchmark with the cards I'm considering (either a 1222 or an 1880) without having to buy the BBU up front. LSI lets you force write-back on without a BBU (or with a failed BBU), but I searched every bit of the 1222 manual (and every post in this thread mentioning "BBU") for information on forcing write-back, to no avail. Thanks.
 
I have noticed on some systems that when PCIe doesn't auto-negotiate the x4/x8/x16 width for the devices plugged into the motherboard, the Areca BIOS banner doesn't display.
See if your BIOS will let you specify those values.

I don't think so, it's auto-set to x16 I believe. Is there anything I can do? Or do I have to put it in the last slot? I believe that slot runs at x4, maybe?
 
I don't think so, it's auto-set to x16 I believe. Is there anything I can do? Or do I have to put it in the last slot? I believe that slot runs at x4, maybe?

Tried the last slot, to no avail. Anyone have any ideas :(? There are two solid lights on the back, a green one and a blue one, so it seems to be working. I don't have any drives plugged in; it just doesn't boot :(.
 
Anyone else using an EVGA X58 motherboard with an Areca 1880? I'm having issues getting it to boot off my SSD. It works fine with all drives unplugged from the Areca 1880, but not with any of them plugged in. When drives are plugged into the Areca 1880, it recognizes them all and then I just get a blank screen. More details here: http://hardforum.com/showthread.php?p=1037460778#post1037460778

Is anyone successfully using an Areca 1880 on a motherboard with SATA 6Gb/s for their boot disk? If so, what motherboard? I found that my EVGA board will not let me set my Corsair Force 3 SSD as the boot disk if it is on the 6Gb/s SATA connection. It works when the Areca is not connected. When the Areca is connected and recognizes all the disks, my Corsair Force 3 SSD is no longer offered as a boot option; the BIOS only shows 6 disks to choose from, all of them on the Areca controller. It appears the Areca initializes before the separate onboard Marvell 6Gb/s controller on my EVGA motherboard. Is this an option ROM issue? Any idea whether all 1366 motherboards with a SATA 6Gb/s option will have this same issue?
 
My server: SuperMicro X8SIA, Areca 1880, HP expander, Norco 4220 backplanes, Samsung and Seagate 2TB drives, Windows Server 2008 R2
My problem: I put the RAID 5 in write-through mode to avoid any power-failure problems... and the result was a system freeze as soon as I moved data to the RAID 5. Note that the RAID 1 arrays are OK in write-through mode.
I switched the RAID 5 to write-back mode and everything was OK.
Note: I've seen that my RAID 5 drives are in SATA150 mode, while the supported mode is SATA300+NCQ (depth 32).

No ideas?
 
Hello!

I recently received my 1880i controller, and started doing some testing with it.

I first created a RAID 6 array with 4x 2TB Samsung F4 drives. This took about 5 hours with foreground init.

Then I started expanding the array with 2 more 2TB Samsung F4 drives. However, the migration process seems very slow: it has been running for roughly 15 hours and it's only at 25%.

Is this normal?

Server specs:
- Asus Maximus Formula
- Intel Q9450
- 8GB DDR2



edit:
I am using the HTTP configurator.
 
Hello!

I recently received my 1880i controller, and started doing some testing with it.

I first created a RAID 6 array with 4x 2TB Samsung F4 drives. This took about 5 hours with foreground init.

Then I started expanding the array with 2 more 2TB Samsung F4 drives. However, the migration process seems very slow: it has been running for roughly 15 hours and it's only at 25%.

Is this normal?

What is the priority for the migration set to? Expanding a RAID 6 is pretty much the most demanding thing you can do with one of these cards, and I'm not at all surprised to hear it will take longer than a day. It also really beats the crap out of your drives, so ideally you want to build the array with all of the drives that will be in it from the beginning, and make big jumps when you do expand.
 