ARECA Owner's Thread (SAS/SATA RAID Cards)

I just got my Areca 1880i today.

I'll be testing it hopefully by the end of the week, starting with 4 of the Hitachi 2TB drives in RAID 6. I'll also test multiple arrays on the controller, since I've seen some people report issues with that configuration on the Intel-based 1680s.

I don't need to expand past 6x 2TB drives in RAID 6 quite yet, but once I do I'll be adding a SAS expander and a Norco 4220, or whatever is out at the time. I'm sure someone will have tested that configuration with the 1880i by then.

I will be very interested to hear your results with multiple RAID5/6 arrays on the 1880 series. I was planning on buying a 1680 and it was a bit disappointing to know that I could really only use 1 array if I wanted acceptable performance. Hopefully someone will be able to test out the 1880 series with an HP SAS expander and can confirm if it works/doesn't work.
 
Aren't those two reasons good enough? I call them vital...

No. I'll deal with a crap GUI and management if it means enough of a gain in performance - there have been cases like that over the years. Unfortunately, in LSI's case it's historically been "none of the above". Hopefully their new stuff shows otherwise.
 
We're talking about two different things - I didn't mean 1680 vs. 1880. I meant in general: if something has enough of a performance margin, I'll forgive what a PITA it might be to set it up. /derail

BTW, were you the one that tore his hair out over the LSI 6G card, or am I thinking of BlueFox?
 
It was BlueFox; he came over to my house and we tried all sorts of shit. Huge PITA. I don't think I could buy one of those cards if it was going to act like that.
 
I just got my Areca 1880i today.

I'll be testing it hopefully by the end of the week, starting with 4 of the Hitachi 2TB drives in RAID 6. I'll also test multiple arrays on the controller, since I've seen some people report issues with that configuration on the Intel-based 1680s.

I don't need to expand past 6x 2TB drives in RAID 6 quite yet, but once I do I'll be adding a SAS expander and a Norco 4220, or whatever is out at the time. I'm sure someone will have tested that configuration with the 1880i by then.

Please keep us up to date with your results. I am on the verge of ordering an 1880i as well, but I need to be able to run two RAID arrays at once.
 
Seriously, I still don't understand what the issue was with the LSI card... they are so simple it's insane. They have such limited options LOL, it is freakishly simple. That is actually a perk of having a bare-ass GUI and such. Maybe a bad card? That is a possibility!
 
I'm in at least two minds about the Areca-OEMing-LSI thing. On one side it's a plus - given that LSI are everywhere, from ROCs to expanders to chips on SAS hard drives, it should improve compatibility. On the other side it reduces competition, and I have to wonder exactly what Areca is adding that you don't get with an LSI card. Up to 4GB cache - very nice, but not on the 8-external-port versions, which is what I'd prefer. Ethernet management port - fine, but I only use that when the ArcHTTP proxy thing has gone flaky. Plus in a Windows domain environment, with multiple admins, MSM's authentication works better than Areca's.

On top of that, LSI has had more time to mature their firmware than Areca has, and has more features than I'm interested in (e.g. the SSD stuff). I'm quite happy with LSI as a SAS ROMB solution and have had good experiences with the Dell PERC cards. For me the killer feature of my 1680ix-24s is the 4GB cache, but the multi-array problem lessens its effectiveness. Plus, now that my iSCSI target software (StarWind) can use system RAM as WB cache, I'm less in need of it on the RAID card.

It's funny that nearly all the RAID cards have gone PowerPC. I'd have thought that ARM stuff, which has been getting a lot of attention in the mobile space, would be just as capable. Bai hints at the problem being more of a political one; if Marvell are the only ARM-based ROC supplier, and want to limit what their OEMs can do, then I can see why Areca made the switch.
 
Seriously, I still don't understand what the issue was with the LSI card... they are so simple it's insane. They have such limited options LOL, it is freakishly simple. That is actually a perk of having a bare-ass GUI and such. Maybe a bad card? That is a possibility!
There is nothing wrong with the LSI hardware; the problem lies within MSM, as MegaCli shows the correct info...
 
Plus in a Windows domain environment, with multiple admins, MSM's authentication works better than Areca's.

On top of that, LSI has had more time to mature their firmware than Areca has, and has more features than I'm interested in (e.g. the SSD stuff).

In my opinion MSM should be labeled as a "technology preview" or maybe "beta 1", as testing MSM vs. MegaCli vs. lsiutil reveals some pretty disturbing flaws in MSM, but that's just my two cents...
I'm sure BlueFox would agree with me on this, if not completely then probably in general...
 
LSI's management software...ugh. Don't get me started on it. I only buy plain LSI HBAs now because of it. I could never get my 9280-8e working, so I gave up on it and returned it. Was not worth the headache. After having out-of-band management on the Areca cards, I want nothing less.
 
Out-of-band management on the Arecas is great, but I find I often have to use it rather than the in-band ArcHTTP route, especially after a reboot, because the embedded web server - or maybe the proxy service - runs so slowly.

I guess I'm not doing very much with MSM, so maybe that's why I don't see the same problems - just creating RAID-1s, enabling/disabling on-disk cache, etc. And I've not used it on the proper LSI RAID cards, just the ROMB on my motherboards.

Can I ask for a clarification of the multithreading issue that some are seeing when you have multiple volumes? Do you see this when you have one volume per RAID set, or only when you have more than one volume per RAID set? Is it clear whether it's a firmware issue or a driver issue?

Also, does anyone know how the Arecas allocate cache? Are they supposed to divide the cache among the volumes equally, or is it a global cache (in which case whichever volume has been getting the most IOs recently will be using more of the cache)?
 
I have no experience with the CLI, so I dunno 'bout that.
I only use my RAID cards for benching, gaming, showing off, etc. :)
 
Well, here's one that's just about Areca, and got me stumped.

I upgraded a server's 1680ix-24 to v1.48 firmware last night (from v1.46).

First, I upgraded the firmware bin. I'm told I need to restart the system. I do. It comes up OK, then I do the other three bins and the Windows drivers. Reboot. Then... nothing.

So today I had to visit the datacenter to sort this out. The Areca firmware wasn't starting up properly; it would time out and reboot the server. Ad nauseam.

I tried powering down the server. Same problem. Got into the mobo BIOS, and rebooted. Now the Areca firmware would initialize OK, but it couldn't see any drives.

I powered down again, removed all external cables, put them back in, tried again. Still no joy.

Noticed a flashing blue light on the back of the Areca. Hmm, maybe the battery is a problem? Disconnected the battery. Still no joy.

So what worked? I still can't quite believe this, but unplugging the Areca card and then reinserting it worked. System came up! Yay! Rebooted OK. Tried a shutdown. Bad idea. Back to same problem of not seeing drives. Tried removing all power again, including the power supplies, waiting a few minutes, and starting again. No joy. Removed the card and reinserted it again. System came up!

Did this a third time just to check that I wasn't insane.

How can this work? The only thing with any charge in it in the whole server that I can think of is the CMOS battery. Surely that doesn't send any power over the PCIe slots?! How can the Areca card know it's been disconnected and reinserted into a completely powered-down system, especially when the BBU has been disconnected?
 
Well, my 1880i's arrived today. I unboxed one, stuffed it into a Supermicro X8ST3-F, and headed home. I've got six Seagate ST3146356SS drives hooked up to it. I'm open to some test scenarios from those who want to know this or that.
 
Well, here's one that's just about Areca, and got me stumped.

I upgraded a server's 1680ix-24 to v1.48 firmware last night (from v1.46).

First, I upgraded the firmware bin. I'm told I need to restart the system. I do. It comes up OK, then I do the other three bins and the Windows drivers. Reboot. Then... nothing.

So today I had to visit the datacenter to sort this out. The Areca firmware wasn't starting up properly; it would time out and reboot the server. Ad nauseam.

I tried powering down the server. Same problem. Got into the mobo BIOS, and rebooted. Now the Areca firmware would initialize OK, but it couldn't see any drives.

I powered down again, removed all external cables, put them back in, tried again. Still no joy.

Noticed a flashing blue light on the back of the Areca. Hmm, maybe the battery is a problem? Disconnected the battery. Still no joy.

So what worked? I still can't quite believe this, but unplugging the Areca card and then reinserting it worked. System came up! Yay! Rebooted OK. Tried a shutdown. Bad idea. Back to same problem of not seeing drives. Tried removing all power again, including the power supplies, waiting a few minutes, and starting again. No joy. Removed the card and reinserted it again. System came up!

Did this a third time just to check that I wasn't insane.

How can this work? The only thing with any charge in it in the whole server that I can think of is the CMOS battery. Surely that doesn't send any power over the PCIe slots?! How can the Areca card know it's been disconnected and reinserted into a completely powered-down system, especially when the BBU has been disconnected?

To reflash the firmware, you have to upload all four files before you reboot. If you reboot between files, you'll mix up the different firmware images. (The firmware is actually flashed during bootup.)

I recommend you do it again (with the BBU taken off as well).
 
Well, my 1880i's arrived today. I unboxed one, stuffed it into a Supermicro X8ST3-F, and headed home. I've got six Seagate ST3146356SS drives hooked up to it. I'm open to some test scenarios from those who want to know this or that.

With six 3G drives, you'll not see much improvement compared to the existing 1680. The 6G performance difference only becomes noticeable when pairing 8+ SAS drives. This is one reason why Areca don't plan to release a 4-port 6G RAID controller.
 
If you had a couple more disks, vr., I would be interested in the performance of multiple RAID6 arrays :) vs. just a single RAID6 array. Though I suppose you could try two RAID5 arrays, as I believe people had performance issues with that on the 1680s. I assume you don't have an HP SAS expander to test with?
 
Correct, no HP expander.

What bench tool do you want run against the two RAID5s, and on what OS?
 
To reflash the firmware, you have to upload all four files before you reboot. If you reboot between files, you'll mix up the different firmware images. (The firmware is actually flashed during bootup.)

I recommend you do it again (with the BBU taken off as well).

Thanks Bai. I've tried this, and it was OK for a couple of reboots/shutdowns. Then it stopped working again. Removing the BBU, reseating the card, even changing slots hasn't helped. I've removed the card from the server to test at home. The rear of the card has a solid green LED and a blinking blue LED - neither is illuminated on my other 1680ix-24. Solid red LED on the BBU, even though the Areca UI (BIOS/text mode) reports it as 100% charged. There's also a blinking green LED on the card itself, near the expander chip.

Am in touch with Areca support, hopefully they will be able to help me sort this out.
 
Has anyone ever had any luck with Samsung? I was really interested in the new F4 3-platter 2TB drives for a new RAID 6 array, but everything I have read about the F3 points to compatibility issues. I was thinking that since the 1880 uses the ROC from LSI, these compatibility issues might be solved. Any thoughts or opinions? I'd be a guinea pig, but everyone is sold out of 1880s right now.
 
If Samsung say the drives are suitable for RAID, then you will probably be OK. If not, then you will probably have problems, like drives dropping out of the RAID when they try to do extended error correction, etc. Areca only seem to test "enterprise" SATA drives for their compatibility list. And you can even have problems with drives that are on the compatibility list - e.g. the VelociRaptor 45-day bug...
 
Correct, no HP expander.

What bench tool do you want run against the two RAID5s, and on what OS?

I'll have to dig through the SAS expander thread to determine what benchmark software people were using to test the performance issues, but I believe it was HDTune. For the OS, I would say Windows at least, but Server 2008 R2 would be ideal. Thanks!
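If HDTune isn't handy on the test box, even a crude sequential-read timer like this gives a rough number to compare one array against two. It's a plain Python sketch, not a real benchmark; the path and sizes are placeholders, so point it at a big file on each array:

```python
import time

def seq_read_mbps(path, block_mb=4, max_mb=1024):
    """Crude sequential-read test: read up to max_mb from path
    in block_mb chunks and return the average throughput in MB/s."""
    block = block_mb * 1024 * 1024
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:   # unbuffered binary reads
        while total < max_mb * 1024 * 1024:
            chunk = f.read(block)
            if not chunk:                      # hit end of file
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return (total / (1024 * 1024)) / elapsed if elapsed else 0.0

# e.g. print(seq_read_mbps(r"E:\bigfile.bin"))  # hypothetical test file
```

It won't give you HDTune's access-time graph, but running it against a file on each array, and then on both at once, should show whether throughput collapses once two arrays are active.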
 
Is there any way to use Seagate Tools with drives connected to an Areca 1880 controller? I have 5 Samsung drives that SEEM to work fine with the controller, but I want to test them (SMART, etc.) using Seagate Tools, and it only sees the card.
 
Is there any way to use Seagate Tools with drives connected to an Areca 1880 controller? I have 5 Samsung drives that SEEM to work fine with the controller, but I want to test them (SMART, etc.) using Seagate Tools, and it only sees the card.

To the best of my knowledge, to get that info you need to put the drives in JBOD or pass-through mode. Probably pass-through, actually. (On Linux, smartmontools can often query drives behind an Areca controller directly with smartctl -d areca,N, which may save you the pass-through dance.) The web interface does give you a ton of info though... this is what I get from my Deskstar drives... (i.e. non-enterprise)

Device Type SATA(5001B4D70012F012)
Device Location Enclosure#2 SLOT 06
Model Name Hitachi HDS722020ALA330
Serial Number XXXXXXAHLLR7X
Firmware Rev. JKAOA28A
Disk Capacity 2000.4GB
Current SATA Mode SATA300+NCQ(Depth32)
Supported SATA Mode SATA300+NCQ(Depth32)
Disk APM Support Yes
Device State Normal
Timeout Count 0
Media Error Count 0
Device Temperature 44 ºC
SMART Read Error Rate 100(16)
SMART Spinup Time 115(24)
SMART Reallocation Count 100(5)
SMART Seek Error Rate 100(67)
SMART Spinup Retries 100(60)
SMART Calibration Retries N.A.(N.A.)

and from my SAS Drives:

Device Type SAS(5000C50000FD3105)
Device Location Enclosure#2 SLOT 11
Model Name SEAGATE ST936751SS
Serial Number XXXXXXXXX09802DS86
Firmware Rev. 0001
Disk Capacity 36.7GB
Device State Normal
Timeout Count 0
Media Error Count 0
Rotation Speed 15015(RPM)
Device Temperature 42 ºC
Read Errors Recovered W/O Delay 0x0000000000428D28
Read Errors Recovered W Delay 0x0000000000000000
Read Errors Recovered W Retry 0x0000000000000000
Read Errors Recovered 0x0000000000428D28
Read Total Bytes 0x0000043DB9406C00
Read Errors Unrecovered 0x0000000000000000
Write Errors Recovered W/O Delay N.A.
Write Errors Recovered W Delay 0x0000000000000000
Write Errors Recovered W Retry 0x0000000000000000
Write Errors Recovered 0x0000000000000000
Write Total Bytes 0x00ABE182A9844200
Write Errors Unrecovered 0x0000000000000000
Verify Errors Recovered W/O Delay 0x0000000000000001
Verify Errors Recovered W Delay 0x0000000000000000
Verify Errors Recovered W Retry 0x0000000000000000
Verify Errors Recovered 0x0000000000000000
Verify Errors Unrecovered 0x0000000000000000
Non-Medium Errors 0x0000000000000000
Device Smart Status O.K.
 
How can I read:

SMART Read Error Rate 100(16)
SMART Spinup Time 115(24)
SMART Reallocation Count 100(5)
SMART Seek Error Rate 100(67)
SMART Spinup Retries 100(60)
SMART Calibration Retries N.A.(N.A.)

That?

Additionally, how can I tell if this drive will ultimately work with this controller? In the past, Samsung drives have not worked; these appear to work fine so far. What issues should I look for?
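For anyone else decoding those lines: the first number appears to be the normalized SMART value and the one in parentheses the vendor threshold (an attribute is considered failing once the value drops to or below the threshold). That's an assumption based on the usual SMART layout, not anything Areca documents in the UI. A throwaway Python parser on that basis:

```python
import re

# Matches web-UI lines like "SMART Spinup Time 115(24)";
# entries reported as "N.A.(N.A.)" are simply skipped.
_LINE = re.compile(r"^SMART (.+?) (\d+)\((\d+)\)$")

def parse_smart(text):
    """Return {attribute: (normalized_value, threshold)} from a UI dump."""
    attrs = {}
    for line in text.splitlines():
        m = _LINE.match(line.strip())
        if m:
            attrs[m.group(1)] = (int(m.group(2)), int(m.group(3)))
    return attrs

def failing(attrs):
    """Attributes whose normalized value is at or below its threshold."""
    return [name for name, (value, thresh) in attrs.items() if value <= thresh]
```

Feeding it the six lines above yields e.g. Spinup Time -> (115, 24), and failing() comes back empty, i.e. nothing is tripping a threshold on that drive.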
 
For starters you can create and delete a RAID5/6 array a few times - let it build with foreground initialization and keep an eye on the event log. If that succeeds, then you're 90% of the way there. After that you can use disk-thrashing tools (I use Winthrax, which is kind of outdated/not very user friendly) to see if you can get a drive kicked out of the array.

Personally I ignore HCLs (they're usually outdated or incomplete anyway) - the only way to know for sure if something works is to *do your own testing*. If I'm evaluating a new hard disk to standardize on for RAID use, I'll usually buy 4 of them and test all I can before buying any more. By the way, if there's a compatibility issue, the SMART stats aren't the place you're going to see anything meaningful about it. Seagate Tools is also useless.
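If Winthrax isn't an option, even a minimal write/read-back verifier will generate sustained I/O and catch corruption while you watch the Areca event log for timeouts or dropped drives. A Python sketch as a stand-in (the path, pass count and size are placeholders; point it at a scratch file on the array under test):

```python
import hashlib
import os

def thrash(path, passes=4, size_mb=64, seed=0):
    """Write pseudo-random data to path, fsync, read it back and verify
    the checksum. Returns how many passes verified cleanly."""
    ok = 0
    for p in range(passes):
        # Repeating a 32-byte SHA-256 digest yields size_mb MB of
        # deterministic "random-looking" data per pass.
        pattern = hashlib.sha256(f"{seed}:{p}".encode()).digest()
        data = pattern * (size_mb * 1024 * 1024 // len(pattern))
        digest = hashlib.sha256(data).hexdigest()
        with open(path, "wb", buffering=0) as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # push the write past the OS cache
        with open(path, "rb", buffering=0) as f:
            if hashlib.sha256(f.read()).hexdigest() == digest:
                ok += 1
    return ok
```

Loop it for a few hours; if a pass fails to verify, or the array drops a drive mid-run, you've found your incompatibility before it happens in production.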
 
Thanks Bai. I've tried this, and it was OK for a couple of reboots/shutdowns. Then it stopped working again. Removing the BBU, reseating the card, even changing slots hasn't helped. I've removed the card from the server to test at home. The rear of the card has a solid green LED and a blinking blue LED - neither is illuminated on my other 1680ix-24. Solid red LED on the BBU, even though the Areca UI (BIOS/text mode) reports it as 100% charged. There's also a blinking green LED on the card itself, near the expander chip.

Am in touch with Areca support, hopefully they will be able to help me sort this out.

Any more info on this issue? I have this same problem with my brand-new Areca 1880ix-24 and a bunch of Samsung disks. Haven't contacted anyone yet, hoping to get it working. If I boot up with 1 drive attached all is fine; then I can add the drives in Windows and it'll work. Just booting screws it up again.
 
Any more info on this issue? I have this same problem with my brand-new Areca 1880ix-24 and a bunch of Samsung disks. Haven't contacted anyone yet, hoping to get it working. If I boot up with 1 drive attached all is fine; then I can add the drives in Windows and it'll work. Just booting screws it up again.

Have you had any compatibility issues with the Samsung drives?
 
Unfortunately, yes. I had 17 of those drives and 3 Seagates hooked up to a HighPoint RocketRAID 3560 working flawlessly (except for the occasional Seagate dying on me). With the 1880 I have to have 1 drive hooked up while booting to get through the "Waiting for F/W" screen, then I have to hook up the rest of the drives after Windows has started. Hooking up more than 1 drive reboots the system after 2 or 3 minutes of waiting for F/W, and this loops. If I log into the web console I see drives throwing timeouts.
It's not a specific set of drives; it depends on the order they are hooked up in and how many drives are hooked up. I have been fooling around with this since Saturday morning, so for about 3 whole days, and by now I'm almost desperate enough to go out and buy WD RE disks :(
 
Unfortunately, yes. I had 17 of those drives and 3 Seagates hooked up to a HighPoint RocketRAID 3560 working flawlessly (except for the occasional Seagate dying on me). With the 1880 I have to have 1 drive hooked up while booting to get through the "Waiting for F/W" screen, then I have to hook up the rest of the drives after Windows has started. Hooking up more than 1 drive reboots the system after 2 or 3 minutes of waiting for F/W, and this loops. If I log into the web console I see drives throwing timeouts.
It's not a specific set of drives; it depends on the order they are hooked up in and how many drives are hooked up. I have been fooling around with this since Saturday morning, so for about 3 whole days, and by now I'm almost desperate enough to go out and buy WD RE disks :(

What are your power settings?

Try these
http://hardforum.com/showpost.php?p=1036188405&postcount=299
 
I have 5 2TB Samsung F4 disks hooked up to an 1880i on their 24th hour of drive testing and I have not had any issues so far (fingers crossed)
 

Stagger power-on has been tested with everything between the lowest value (0.4) and the highest value (6.0). Low power, low RPM and spindown are disabled. So far nothing seems to work.

I have 5 2TB Samsung F4 disks hooked up to an 1880i on their 24th hour of drive testing and I have not had any issues so far (fingers crossed)

Well, keep us informed. It seems I'll have to swap out my 20 1.5TB drives for 2TB drives after all... so much for a cheap upgrade to an Areca :p
 