ARECA Owner's Thread (SAS/SATA RAID Cards)

If the card just sits there and doesn't come ready as normal, it is due to some kind of communication failure between the controller and the drive(s) and/or expander(s) it is connected to. When it does successfully boot, are there any errors visible in the BIOS? Can you post your logs? Have you updated to the 1.50 firmware (you must update all four files, not just the FIRM file)? If you remove the two SSDs and the four additional drives, does the problem go away? (If you try something like this, make sure you have a complete backup, that "Activate Incomplete RaidSet" is disabled, and that you don't go further than the BIOS prompts.) If it does, re-add the drives one at a time until you find the failure.

The LUN etc. is 0/0/0 for the SSD boot drive. I did upgrade to F/W 1.50 and did all four files. A lack of communication between the card and the drives sounds like a logical explanation. Earlier this morning I booted, shut down and rebooted, then restarted. Each time I tested this way, there was no problem; that is, the POST went as it's supposed to. When I get home later tonight, I'll try again. If I run into problems, I will try to post the logs. I assume you mean the Areca logs.

BTW, each time I have added drives, there has been a problem with the card recognizing them. The first was a non-boot RAID 0 array (two WD 2TB drives). That time I got a message that the BIOS file was not loaded. It took a few boots and reboots over a couple of days before everything finally worked. It's almost as if the card/firmware has to be "burned in" each time I add a RAID set.
 

While it is always possible that you are using and reusing a bad cable or two, it sounds more like you have a flaky card. Which exact card model do you have? -8, -12, -24, ix, x, etc.?
 

I have an ARC-1882ix-12-4G

I just bought two cables from one source and a third cable from another source. I think they were Highpoint (two black) and 3Ware (one red) cables. The red one goes from the first connector (closest to the M/B) to the two SSDs and the two HDDs in RAID 0. One of the black cables goes to the four pass-through HDDs.
 
@mrwill

If you have not done so, test the SSDs individually; do not connect them to the controller at the same time.
 
Connect one of the drives, create a single-drive RAID set and a volume set on it, install your OS, and try to reboot your system several times.
 
Jus - I tried your suggestion, but I also changed some of the cables. In a previous post I mentioned the two brands I bought. I removed the red 3Ware cable and used only the two black Highpoint cables. The black ones are sleeved and look a bit better. After all that, I have not had a problem yet. I even deleted all the arrays and volumes and started over, and did not get one error message. Not sure, but it seems like the 3Ware cable may have been the problem.
 
Those usually come with drive caddies and need to be supported by a disk enclosure. However, fault LEDs are not utilized when you install SATA drives.
 
Took a spin around inside the web GUI of an 1880ix-24 in a Windows environment... is there a way to dump the RAID set, volume set, drive serial numbers and model numbers to a text file without drilling into each individual disk?
 
The CLI utility can do that for you... You can download it from the Areca FTP server.
 
I have a bit of a Twilight Zone moment that is more informational than anything, since it doesn't really require an answer, but...

I downloaded the CLI from Areca's site. It came down as "V1.9.0_120314.zip", which I didn't pay any attention to. I installed that on a non-production Win7 x64 workstation (C:\Program Files (x86)\ARCSAS\CLI) and tried to run the CLI, but this system doesn't have an Areca card in it, so the cmd prompt wouldn't open.

I hopped on a production server with an Areca card in it and re-downloaded the CLI, which installed to C:\Program Files (x86)\MRAID\CLI; again, I didn't pay attention.

Somehow the paths being different triggered something in my brain. So I flipped back to the Win7 system, revisited the Areca site, hovered over the link, and it lists "V1.86_111101.zip".

The different version numbers have me scratching my head as to where/how I managed to pull different versions from the same URL. The CLI itself reports v1.9.0, dated March 14, 2012, if anybody wants it.
 

The most likely explanations would be either (a) you clicked a different link, (b) they have a front-end load balancer which sent you to a backend with an older file, or (c) they updated the site between the time you clicked the first link and the second.
 

They still list v1.8 as the downloadable version, so the random 1.9 I pulled is the newer one, not an older file.
 
Those usually come with drive caddies and need to be supported by a disk enclosure. However, fault LEDs are not utilized when you install SATA drives.

Ours work; you need a mobile rack with sideband support. iPass functionality is the best since it uses one simple connection.

One strange thing I've noticed regarding the LED indicators, though: only Intel SSDs will actually fire the activity LEDs. I've tried other SSDs with SandForce, Indilinx, MCX (Samsung), and Marvell controllers. The Intel 520, which is SandForce based, does work, but the LEDs glow steady and blink only when the drive is being accessed. Weird, eh? All other (non-Intel) SSDs have the activity lamps dark. If I use the "identify drive" function in the Areca BIOS it will still flash the respective bay's fault (red) LED, though, no matter what kind of drive is installed.
 

The sideband physical interface is SGPIO or I2C (depending on the card; some allow a choice of I2C or SGPIO, some don't). Some enclosures/backplanes support SGPIO/I2C activation of the drive LEDs with some SATA drives. On top of that is SES-2, which is a logical communications protocol and is only available with SAS. SES-2 can alert users about drive, temperature and fan failures and allows the controller to activate the relevant activity LED(s).
 
I have:
20x2TB in a single RAID 6 (7200 RPM 3Gb/s SATA Hitachi)
30x3TB in a single RAID 6 (5700 RPM 6Gb/s SATA Hitachi Coolspin)
8x2TB in a single RAID 6 (7200 RPM 3Gb/s SATA Hitachi)

None of these have had failures.

Also quite a few different 8x2TB Hitachi RAID 6 arrays at work (4 or 5 of them); I believe one is using the 6Gb/s SATA version of the 7200 RPM drives (all 7200 RPM at work).

I wish I was so lucky. I have nearly the same setup and I cannot get this to work correctly. I've tried using both 1680i and 1880i cards, and I cannot get the Areca card to work through either the HP SAS or the Chenbro expanders when attached to my Hitachi 3TBs. They (the cards, expanders and drives) work fine when attached to my Samsung 2TBs. So this box currently has 48 2TB Samsungs and 24 (non-functioning) 3TB Hitachis.

Each batch is running off one expander per Norco 24-bay chassis. I've just about given up and may just put the Hitachis in a separate standalone 24-bay box, since I have both a 1680i and an 1880i card. I would much prefer a single-server solution, as it's easier to manage when all my drives are on one VM.
 

I have a few boxes with 1880 and 1882 cards, HP SAS expanders (make sure they are updated) and Hitachi 5K and 7K 3TB drives. Do the drives work if hooked up directly to the card?
 
@Zerosum

What kind of problems are you experiencing? Define "cannot get them to work correctly"
 

I am curious what he means by that too. I am using an ARC-1880x with the HP SAS expander and it's been running fine:

Code:
root@dekabutsu: 10:30 PM :~# cli64 vsf info
CLI>   # Name             Raid Name       Level   Capacity Ch/Id/Lun  State
===============================================================================
  1 WINDOWS VOLUME   40TB RAID SET   Raid6    129.0GB 00/00/00   Normal
  2 MAC VOLUME       40TB RAID SET   Raid6     30.0GB 00/00/01   Normal
  3 LINUX VOLUME     40TB RAID SET   Raid6    129.0GB 00/00/02   Normal
  4 DATA VOLUME      40TB RAID SET   Raid6   35712.0GB 00/00/03   Normal
===============================================================================
GuiErrMsg<0x00>: Success.

CLI> GuiErrMsg<0x00>: Success.

CLI>   # Name             Raid Name       Level   Capacity Ch/Id/Lun  State
===============================================================================
  1 DATA 2 VOLUME    90TB RAID SET   Raid6   84000.0GB 00/01/00   Normal
===============================================================================
GuiErrMsg<0x00>: Success.

CLI>

Code:
root@dekabutsu: 10:30 PM :~# cli64 disk info
CLI>   # Ch# ModelName                       Capacity  Usage
===============================================================================
  1  1  Hitachi HDS722020ALA330         2000.4GB  40TB RAID SET
  2  2  Hitachi HDS722020ALA330         2000.4GB  40TB RAID SET
  3  3  Hitachi HDS722020ALA330         2000.4GB  40TB RAID SET
  4  4  Hitachi HDS722020ALA330         2000.4GB  40TB RAID SET
  5  5  Hitachi HDS722020ALA330         2000.4GB  40TB RAID SET
  6  6  Hitachi HDS722020ALA330         2000.4GB  40TB RAID SET
  7  7  Hitachi HDS722020ALA330         2000.4GB  40TB RAID SET
  8  8  Hitachi HDS722020ALA330         2000.4GB  40TB RAID SET
  9  9  Hitachi HDS722020ALA330         2000.4GB  40TB RAID SET
 10 10  Hitachi HDS722020ALA330         2000.4GB  40TB RAID SET
 11 11  Hitachi HDS722020ALA330         2000.4GB  40TB RAID SET
 12 12  Hitachi HDS722020ALA330         2000.4GB  40TB RAID SET
 13 13  Hitachi HDS722020ALA330         2000.4GB  40TB RAID SET
 14 14  Hitachi HDS722020ALA330         2000.4GB  40TB RAID SET
 15 15  Hitachi HDS722020ALA330         2000.4GB  40TB RAID SET
 16 16  Hitachi HDS722020ALA330         2000.4GB  40TB RAID SET
 17 17  Hitachi HDS722020ALA330         2000.4GB  40TB RAID SET
 18 18  Hitachi HDS722020ALA330         2000.4GB  40TB RAID SET
 19 19  Hitachi HDS722020ALA330         2000.4GB  40TB RAID SET
 20 20  Hitachi HDS722020ALA330         2000.4GB  40TB RAID SET
 21 21  N.A.                               0.0GB  N.A.
 22 22  N.A.                               0.0GB  N.A.
 23 23  N.A.                               0.0GB  N.A.
 24 24  N.A.                               0.0GB  N.A.
===============================================================================
GuiErrMsg<0x00>: Success.

CLI> GuiErrMsg<0x00>: Success.

CLI>   # Enc# Slot#   ModelName                        Capacity  Usage
===============================================================================
  1  01  Slot#1  N.A.                                0.0GB  N.A.
  2  01  Slot#2  N.A.                                0.0GB  N.A.
  3  01  Slot#3  N.A.                                0.0GB  N.A.
  4  01  Slot#4  N.A.                                0.0GB  N.A.
  5  01  Slot#5  N.A.                                0.0GB  N.A.
  6  01  Slot#6  N.A.                                0.0GB  N.A.
  7  01  Slot#7  N.A.                                0.0GB  N.A.
  8  01  Slot#8  N.A.                                0.0GB  N.A.
  9  02  PHY#0   N.A.                                0.0GB  N.A.
 10  02  PHY#1   Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 11  02  PHY#2   Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 12  02  PHY#3   Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 13  02  PHY#4   Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 14  02  PHY#5   Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 15  02  PHY#6   Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 16  02  PHY#7   Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 17  02  PHY#8   Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 18  02  PHY#9   Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 19  02  PHY#10  Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 20  02  PHY#11  Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 21  02  PHY#12  Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 22  02  PHY#13  Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 23  02  PHY#14  Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 24  02  PHY#15  Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 25  02  PHY#16  N.A.                                0.0GB  N.A.
 26  02  PHY#17  N.A.                                0.0GB  N.A.
 27  02  PHY#18  N.A.                                0.0GB  N.A.
 28  02  PHY#19  N.A.                                0.0GB  N.A.
 29  02  PHY#20  N.A.                                0.0GB  N.A.
 30  02  PHY#21  N.A.                                0.0GB  N.A.
 31  02  PHY#22  N.A.                                0.0GB  N.A.
 32  02  PHY#23  N.A.                                0.0GB  N.A.
 33  02  PHY#28  N.A.                                0.0GB  N.A.
 34  02  PHY#29  N.A.                                0.0GB  N.A.
 35  02  PHY#30  N.A.                                0.0GB  N.A.
 36  02  PHY#31  N.A.                                0.0GB  N.A.
 37  02  PHY#32  N.A.                                0.0GB  N.A.
 38  02  PHY#33  N.A.                                0.0GB  N.A.
 39  02  PHY#34  N.A.                                0.0GB  N.A.
 40  02  PHY#35  N.A.                                0.0GB  N.A.
 41  03  PHY#0   N.A.                                0.0GB  N.A.
 42  03  PHY#1   Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 43  03  PHY#2   Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 44  03  PHY#3   Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 45  03  PHY#4   Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 46  03  PHY#5   Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 47  03  PHY#6   Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 48  03  PHY#7   Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 49  03  PHY#8   Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 50  03  PHY#9   Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 51  03  PHY#10  Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 52  03  PHY#11  Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 53  03  PHY#12  Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 54  03  PHY#13  Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 55  03  PHY#14  Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 56  03  PHY#15  Hitachi HDS5C3030ALA630          3000.6GB  90TB RAID SET
 57  03  PHY#16  N.A.                                0.0GB  N.A.
 58  03  PHY#17  N.A.                                0.0GB  N.A.
 59  03  PHY#18  N.A.                                0.0GB  N.A.
 60  03  PHY#19  N.A.                                0.0GB  N.A.
 61  03  PHY#20  N.A.                                0.0GB  N.A.
 62  03  PHY#21  N.A.                                0.0GB  N.A.
 63  03  PHY#22  N.A.                                0.0GB  N.A.
 64  03  PHY#23  N.A.                                0.0GB  N.A.
 65  03  PHY#28  N.A.                                0.0GB  N.A.
 66  03  PHY#29  N.A.                                0.0GB  N.A.
 67  03  PHY#30  N.A.                                0.0GB  N.A.
 68  03  PHY#31  N.A.                                0.0GB  N.A.
 69  03  PHY#32  N.A.                                0.0GB  N.A.
 70  03  PHY#33  N.A.                                0.0GB  N.A.
 71  03  PHY#34  N.A.                                0.0GB  N.A.
 72  03  PHY#35  N.A.                                0.0GB  N.A.
===============================================================================
GuiErrMsg<0x00>: Success.

CLI>


The HP sas expander is running firmware 2.08 on mine:

Code:
CLI> The Hardware Monitor Information
=====================================================
[Controller H/W Monitor]
  CPU Temperature  : 50 C
  Controller Temp. : 48 C
  12V              : 12.099 V
  5V               : 5.080 V
  3.3V             : 3.328 V
  DDR-II +1.8V     : 1.840 V
  CPU    +1.8V     : 1.840 V
  CPU    +1.2V     : 1.264 V
  CPU    +1.0V     : 1.056 V
  DDR-II +0.9V     : 0.912 V
  Battery Status   : 100%
[Enclosure#1 : ARECA   SAS RAID AdapterV1.0]
[Enclosure#2 : HP      HP SAS EXP Card 2.08]
[Enclosure#3 : HP      HP SAS EXP Card 2.08]
=====================================================
GuiErrMsg<0x00>: Success.
 
I have a few boxes with 1880 and 1882 cards, HP SAS expanders (make sure they are updated) and Hitachi 5K and 7K 3TB drives. Do the drives work if hooked up directly to the card?

I'll take another look when I get home tonight. This is what I have available:
ARC-1680i and ARC-1880i
2 Chenbro CK23601 SAS expanders
2 HP SAS Expanders
3 Norco 24-Bay chassis
48 2TB Samsung Drives
24 3TB Hitachi Drives.

I've tried different combinations of those products to get each of my 3 chassis working. Still nothing. I do know that my HP SAS expanders are not at the newest firmware version. I have no way of updating them unless I find someone local or send them out (which I'm willing to do if anyone has the equipment or happens to be in the Washington DC area). I spent about a week on this, as troubleshooting was a big pain.

I'll see what else I can do, as right now I'm almost out of space while I have almost 90TB of 'spare' drives, not even powered on, taunting me.

From memory, I think the HP expanders are running 2.02 and 2.06. I do know they are not the newest; I got them a while ago. My 1680/1880 are updated, though.
 
HP Expander at 2.02 or 2.06 doesn't really matter and shouldn't affect whether or not the 3TB Hitachis show up, as the items that changed between revs were somewhat HP-specific. Personally I've got an 1880i + HP Expander driving my Hitachi 3TBs. If I were you I'd focus on getting the ARC-1880 and HP SAS Expander working together; there should be no problem with any of those drives or with that combination of hardware. What other variables might you have missed testing/swapping? Also, how exactly is each box set up? You mention 3 Norco cases: does each one have a motherboard, or are you using a JBOD power board and some other combo of hardware to run the expander in the empty case? Lastly, how big is the power supply in terms of watts?

The ARC-1680 is probably okay, but the 1880 is just superior in so many ways, especially when driving that many disks, so I would back-seat the 1680 until you've got everything working with the 1880. And it goes without saying, make sure the 1880 is on firmware rev 1.50.

Lastly if you want to flash your HP expanders I can probably help you with that but try everything else first.
 
@Zerosum

After all is said and done, try swapping the cable you use to connect the Areca card to the expander.
It may be at fault. Also, connecting the drives directly to the Areca controller might shed some light on the problem as well.
 
Would anyone have an idea why an ARC-1882i + Intel SAS expander would be having shockingly slow read speeds off an 8-disk RAID 6 of 2TB Hitachi 7200 RPM drives?

ARC-1680lp (v1.49 firmware, aka latest firmware):
[benchmark screenshot]

ARC-1882i (latest v1.50 firmware, originally shipped with an earlier build of v1.50):
[benchmark screenshot]


Yes, this is the same array on the exact same setup besides the SAS controller. Just swapping the controller introduces massive differences in performance, for the worse with the 1882i. Areca's support suggests cabling, but there is literally no difference between what works on the 1680 and what is on the 1882i.

Drives, cabling, and SAS expander haven't changed. The 1882i isn't reporting any SAS link errors (after running benchmarks), and the drives' SMART status is fine (looking at the raw values via the 1882i CLI shows good values, and all drives are near-identical).

The following don't change anything:
  • Rebooting.
  • Swapping the PSU & internal cabling.
  • Changing Motherboard/chipset (I will try with a supermicro X9SCM-F I've just finished building soon, but I'm not expecting it to help).
  • The PCIe link reports 8X/5G; moving to an x1 PCIe v1 slot just degrades performance further.
  • Trying another intel sas expander.
  • dual or single link between sas expander & card.
  • Using a friend's norco-2442's backplane directly attached to the 1882i's ports, swapping between which backplanes are used.
  • Reflashing the firmware on the areca card.
Fairly sure the card isn't overheating either; the most I've seen it reach is ~65C during an array build, at which point I turned on the AC in the room, which forced the temp down to 45-50C (yes, I'm aware the 1882i runs hot).

All I can think of is that the 1882i has a bad port, which causes the rest of them to be dragged down. But I'm unsure how to isolate that.
 
Definitely weird. With an ARC-1882 I saw excellent performance with the 6Gb/s SAS expander built into Supermicro cases: 2GB/sec+ read and 2GB/sec write with 24 disks in RAID 6.


the drives' SMART status is fine (looking at the raw values via the 1882i CLI shows good values, and all drives are near-identical).

Heh. I was the one who got Areca to do that (completely redo SMART support in their CLI so it's useful, with the raw values in decimal, and introduce smartctl support on Linux for their newer SAS controllers).
 
What's weird is how turning on the disk write cache improves read speeds ([benchmark screenshot]). I'm probably going to have to RMA the controller, as something is borked with it.

I'm planning to see how an Intel G2 SSD performs on each port in pass-through (or a one-disk RAID 0) to see if one of the ports is doing something stupid.

Heh. I was the one who got Areca to do that (completely redo SMART support in their CLI so it's useful, with the raw values in decimal, and introduce smartctl support on Linux for their newer SAS controllers).
This was quite handy, as there are a few raw values you don't want to see change at all. Plus it lets you see the cycle & online hours counter.
 
@Xon: on the 1882 what have you got under System Information for PCI-E Link Status? Should be "8X/5G"

As to your question about isolating the ports: what you could do is break the array and set the disks individually as pass-through disks, then fire up HDTune and do a quick read test on each drive for 10 seconds, abort, next drive. That will make sure all the drives are giving you full speed individually. Don't even bother initializing or formatting them NTFS, as HDTune's read bench works at the disk level.

Also what motherboard are you using?
 
I have been slowly migrating my RAID from 5 to 6 and adding drives. I have two drives left, but wanted to check whether the change to RAID 6 took. In the storage manager, under Expand Raid Set, it shows my 11 member disks with state Normal and a capacity of 22000.0GB.
When I go to Modify Volume Set it shows me the Max Capacity Allowed as 20000.0GB,
Volume Capacity 14000.0GB (where it was at RAID 5),
and the Raid Level drop-down menu still only shows me the option for RAID 5.
Now I have 2 disks left to expand, for a total of 13 disks.
When I go to expand the raid level I am able to select RAID 6, stripe size 64KB.
Do I need to hit Yes on "Change the Volume Attribute During Raid Expansion"?
[screenshot: Raid6_Final2Disks (2).jpg]

I've been taking my time with this expansion and don't want to screw it up and get stuck with one large RAID 5.
Thanks
 
@Xon: on the 1882 what have you got under System Information for PCI-E Link Status? Should be "8X/5G"
Yup.

As to your question about isolating the ports: what you could do is break the array and set the disks individually as pass-through disks, then fire up HDTune and do a quick read test on each drive for 10 seconds, abort, next drive. That will make sure all the drives are giving you full speed individually. Don't even bother initializing or formatting them NTFS, as HDTune's read bench works at the disk level.
I've got 16 2TB 7200 RPM Hitachis in two RAID 6 sets of 8 drives each. Both sets have the same massive performance issues, regardless of whether it's a single set attached directly to the controller or both sets via the SAS expander (or just one set via the SAS expander), even after migrating motherboards.

I've broken up one 8-disk set already, so I'll use those once I can power & connect them back up to try to isolate if it's a particular channel on the controller.

Also what motherboard are you using?
The following CPU/motherboard combinations all have exactly the same (lack of) performance:
  • Q6600, GA EP45 DQ6 (PCIe link: 8X/5G)
  • i7-920, GA X58A UD3R (PCIe link: 8X/5G)
  • Xeon E3-1230, Supermicro X9SCM-F (4th PCIe port, syncs 4X/5G, but exact same performance)
No overclocking, just stock clocks.
 
Hello

I spent some time browsing for a solution but I think I'd be safer asking for specific help.

I've an Areca 1680 in a Mac Pro connected to a Proavio 8 disk enclosure housing 8 Seagate 2TB drives configured as a RAID 6. Drive model is ST32000542AS.

I went to the Mac today and saw that the RAID wasn't mounted. I rebooted the Mac and heard that infernal alarm from the Areca card.

When I fired up the Areca web console (it took many attempts to load that localhost:81 page) I saw that there was no RAID set. I then rebooted again and awaited the alarm. I was not disappointed.

This time I got the console up quicker and saw that there were 3 drives in the RAID set and the other 5 were not. Meanwhile the alarm was still beeping; it stopped after a few minutes (a timeout, perhaps?). I then thought it might be a good idea to try to recover the array using the web interface. (I realise now that might have been a mistake.)

[screenshots of the Areca web console]


After the timeout/alarm condition cleared, I noticed that all drives in all slots appeared to be Free.

I checked each and this is typical of the condition of the drives as reported by the Areca console:

[screenshot of a drive's status in the Areca console]


So I powered the whole lot down, reseated the RAID controller, reseated the drives in the JBOD one by one, rebooted, and got the same symptoms.

I've fired off an email to Areca but in the meantime I hope that one of you kind people might be able to advise me. I also called a data recovery specialist who informed me that it was likely to be a "multiple hard drive failure due to heat" and that it could cost around £2,500 to repair. Ouch!

Here's hoping some magic commands will bring my drive back to life.

Thanks!

Jim
 
BrilliantJim: First step is always to contact Areca support, which you did, because even though there are knowledgeable people on forums and the issue is easily solvable with some help, they need to be harassed about this particular issue by as many people as possible so that they'll be forced to deal with the root of the problem, or put some logic in place so that the card recovers an array from this issue more intelligently.

That said, it's extremely unlikely you lost any data, so don't sweat that. This is a fairly common issue and can be triggered by things like excessive heat, such that the controller board on the drive stops responding and the Areca controller marks the drive as no longer part of the array. There are a few other scenarios in which I've seen this issue crop up, but it shouldn't really happen that often, and every Areca owner usually goes through this at least once. On the bright side these controllers are excellent in every other way, and I run half a dozen Arecas exclusively. Things to do for now:

1. DO NOT change the order of the drives while the array is in this state. Sometimes people start moving drives around thinking that will accomplish something. Keeping original drive order is very important in case raid metadata (signatures) cannot be recovered and other steps need to be taken to regenerate the array config/meta without compromising the partition data.

2. In all likelihood Areca will email you a list of instructions including one or more keywords to enter into the "Rescue Raid Set" submenu. One of them that you can try for now, because it is a read-only operation and will not write any data to the disks, is the "RESCUE" command. Enter that, reboot, and see if the drives show themselves as being part of the RAID SET instead of marked "Free". If that is successful then one would normally enter the "SIGNAT" command, which commits the changes, but I would advise against that until hearing back from Areca. If "RESCUE" doesn't work, there are some additional commands available which Areca will instruct you on, and I'd recommend against searching them out on the forum and running them all, because I've seen people throw them all at the card without realizing the impact of each command, and I've seen people make matters worse because they're in a panic.
 
Thanks odditory. I'm so grateful for a response. Areca hasn't even acknowledged receipt of my email.

I tried the RESCUE command, and when my Mac reboots, that's when I see three of the drives identified as part of the RAID set. After a load of beeping the alarm eventually stops and all drives appear "Free". I would imagine this is a timeout.

I'll hold off until I hear back from Taiwan. Do you know if they'll accept a telephone call and understand English?

Thanks again. Much appreciated.
 
Yes, I was going to mention not to get worried about the lack of response so far, given they're in Taiwan. They're not a 24x7 global operation like LSI, but on the bright side their code and firmware have always been slightly superior to LSI's, even on cards that use LSI's own chips like the newer 188x series. And the out-of-band management (web interface, CLI) is second to none.

It's 8:20 am over there, so I'd expect an answer any time.
 

That's funny to me, I've never gotten a reply with numerous attempts contacting LSI. But I've gotten a reply every single time I've contacted Areca.
 
Two business days have passed now and I am getting a little concerned that my emails are not being picked up.

Can anyone suggest the next step I should take, other than "sit tight?"

Thanks ;)
 
I have two Areca 1260s today, and I have been pretty satisfied with them (the exception being that if something causes a drive or two to fall out, like for instance a loose power cable, often one drive isn't part of the array anymore and the array needs to be rebuilt). So my plan is to get an Areca 1882-24 with 4GB of RAM (if no one convinces me otherwise; are Intel's offerings a more solid choice, for instance?), put that in my main fileserver, and move the two old 1260s to my backup server, run them in JBOD mode, and have some sort of software RAID there (probably the new Win8 thing or ZFS).

But where to buy it? Does anybody have a recommended shop where I can get the 1882 at a reasonable price with 4GB preinstalled? And shipped reasonably quickly? (I do not live in the US, so it must hit a 10-day period when I am staying there at a hotel.) I used Flickerdown for my old cards, but they don't seem to exist as a store anymore?
 

Confirm what you sent is concise and articulate, no rambling, includes the serial number and was sent to their correct support address. Resend.
 