ARECA Owner's Thread (SAS/SATA RAID Cards)

While my previous post is important to sort out, I have a more urgent issue.

Has anyone had an issue with an array frozen in Initialization?
It's showing a completely inaccurate capacity of 0.1GB and is stuck on 0.0% initialization.

When I try to delete either the raidset or array, it just freezes. A manual reboot just puts it back into the same state.
 
What you could try is to use diskpart to clean each drive in the array and restart from there.
Are there any useful errors in the eventlog on the Areca?
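
For reference, the per-drive sequence in diskpart is only a few commands - something like this (the disk number here is just an example; check the output of list disk carefully first so you don't clean the wrong drive):

diskpart
list disk
select disk 3
clean
exit

Then repeat the select/clean for each member drive.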
 
eek. 34 drives in this array... that'll take a while!

I can't even get to the logs anymore because it's now stuck in a time-out loop.
It spends 300 seconds trying to read the array at boot, doesn't find config info then reboots the PC.
 
While my previous post is important to sort out, I have a more urgent issue.

Has anyone had an issue with an array frozen in Initialization?
It's showing a completely inaccurate capacity of 0.1GB and is stuck on 0.0% initialization.

When I try to delete either the raidset or array, it just freezes. A manual reboot just puts it back into the same state.
Sounds like a drive compatibility issue to me (if not a failed card). Are all the drives the same make/model?
 
eek. 34 drives in this array... that'll take a while!

I can't even get to the logs anymore because it's now stuck in a time-out loop.
It spends 300 seconds trying to read the array at boot, doesn't find config info then reboots the PC.
I think that's your problem. Only 32 drives per array are supported last I checked. Running that many isn't the best idea anyway.
 
Just checked my manual - you are correct. So if it allowed him to try 34 and then failed for that reason, I'd call that a pretty serious bug in the UI. That said, maybe removing two drives from the array and then trying to delete it again is the solution? And I agree, 16 drives per array max, IMHO.
 
hmm. I thought with the use of expanders, you can address up to 128 disks?
I am using the 1880ix-24 card + Astek SAS2 card
 
My array timed out again last night. This is a production machine, so this is bad. The only thing I recently did was migrate the array to RAID 6 - would that have anything to do with it? Anyone know if the 1880 is any better with regard to the timeouts? I have another machine with an 1880 and am thinking about swapping the cards to see if this helps, because I think I might have a bad card. 3 timeouts in the last 3 days on 3 different disks! I can't imagine all 3 disks are going bad.

Ah yes, forgot to mention the disks. I'm using the popular 2TB Hitachi drives (HDS722020ALA330). I hadn't thought about heat being an issue, but it's something I'll look into. The drives are currently at 30 degrees C idle, but I'm not sure how warm they get when they're being hammered on. The server is in an AC-controlled room, so the temps shouldn't get too high.
 
Dailo

I've been plagued with drop-outs on one of my setups - I have 4 x 12-disk arrays, and on the 4th array I have been getting drop-outs all the time - different disks each time, with no problems on any of the other arrays. This is what support had to say. I've upgraded the power supply to match what I have in my first enclosure, but haven't tested this much yet!

"normally device been removed and back in 3 seconds is a link reset, the connection between controller and drive is broken a while.
and in our experience, it could be an environment problem or hard drive related.
is it possible a power consumption related problem ? it happens before the drive firmware may reset itself because the power supply is unstable while heavy loading.
or you can also contact with hard drive vendor for possible firmware update.
as i remembered, hitachi had ever provide a new firmware for some drives before because a compatible issue between drives and SAS controllers. but i am not sure which drive they provide firmwares, so you may needed to contact with them for more detail about the available firmware models."
 
hmm. I thought with the use of expanders, you can address up to 128 disks?
I am using the 1880ix-24 card + Astek SAS2 card
Yes, the card supports 128 drives, but only 32 per array. I've run 40 drives on mine for an extended period of time without any problems.
 
Thanks for that tip. I'll try it out. I just remembered that someone recently swapped the chassis from a 750W PSU to a 500W PSU. I only have 6 drives and one CPU, though, so I would have imagined that'd be enough to power everything.

Dailo

I've been plagued with drop-outs on one of my setups - I have 4 x 12-disk arrays, and on the 4th array I have been getting drop-outs all the time - different disks each time, with no problems on any of the other arrays. This is what support had to say. I've upgraded the power supply to match what I have in my first enclosure, but haven't tested this much yet!

"normally device been removed and back in 3 seconds is a link reset, the connection between controller and drive is broken a while.
and in our experience, it could be an environment problem or hard drive related.
is it possible a power consumption related problem ? it happens before the drive firmware may reset itself because the power supply is unstable while heavy loading.
or you can also contact with hard drive vendor for possible firmware update.
as i remembered, hitachi had ever provide a new firmware for some drives before because a compatible issue between drives and SAS controllers. but i am not sure which drive they provide firmwares, so you may needed to contact with them for more detail about the available firmware models."
 
Thanks for that tip. I'll try it out. I just remembered that someone recently swapped the chassis from a 750W PSU to a 500W PSU. I only have 6 drives and one CPU, though, so I would have imagined that'd be enough to power everything.
If you have only 6 drives and a CPU then 500W is more than enough, but that doesn't mean the power is conditioned properly or doesn't drop in output at times. If this is a production machine and reliability is important, and you don't have redundant PSUs, you really need one very high-quality server-grade PSU, like this: http://www.newegg.com/Product/Product.aspx?Item=N82E16817139016
 
thanks Blue Fox and haileris

Do hot-spares count towards that 32-disk limit?
Let's say I built a 32-disk array with 2 hot spares (either Global or assigned specifically to the array).

Would that work?
 
thanks Blue Fox and haileris

Do hot-spares count towards that 32-disk limit?
Let's say I built a 32-disk array with 2 hot spares (either Global or assigned specifically to the array).

Would that work?
If you assign them as global spares you should be okay. I'm not sure about array-specific spares, however.
 
I am running an ARC-1880iX-24 card in a Norco RPC-4224 case with a Supermicro X8DT6 motherboard. I built a RAID6 array of 12 Seagate ST32000542AS LP drives using foreground initialization. It initialized fine, then dropped two drives. The drives showed "failed". On reboot only one was missing, and it showed "free". I made it a global hot spare and the array rebuilt without error. The data was mostly good, though a couple of directories were corrupted.

I disabled all power saving modes, flashed all of the drives to CC35 (some had to be forced from CC34), updated the Areca from 1.48 to 1.49, and rebuilt the array using background initialization; it has run fine since (about two weeks). I built a second array (RAID6) with 9 of the same drives (also flashed with CC35) using foreground initialization about ten days ago. It has also been running fine. Last Friday I decided to expand the second raid set from 9 to 11 drives - again using the same Seagate drives. It showed Migrating all weekend, then last night at about 64% migrated the alarm started beeping constantly on the controller. Both RAID sets were missing from the O/S. When I restarted the server, the first RAID set was up and normal; the second showed "failed migration". One of the original 9 drives was listed as "free" and my global hot spare drive had been assigned in its place. The array was not recoverable using RESCUE or SIGNAT. It is now being rebuilt with 12 drives.

The disk failures have been random - not on a particular backplane, drive, or slot. Areca tells me that "only enterprise drives supported" - which I already knew. I have had great luck with these drives so far, especially compared to their WD and Samsung counterparts - albeit not in a RAID array.

The questions:
  • Are the Seagate drives really not going to work?
  • Could it be a bad 1880 card?
  • Any other thoughts?
 
I am running an ARC-1880iX-24 card in a Norco RPC-4224 case with a Supermicro X8DT6 motherboard. I built a RAID6 array of 12 Seagate ST32000542AS LP drives using foreground initialization. It initialized fine, then dropped two drives. The drives showed "failed". On reboot only one was missing, and it showed "free". I made it a global hot spare and the array rebuilt without error. The data was mostly good, though a couple of directories were corrupted.

I disabled all power saving modes, flashed all of the drives to CC35 (some had to be forced from CC34), updated the Areca from 1.48 to 1.49, and rebuilt the array using background initialization; it has run fine since (about two weeks). I built a second array (RAID6) with 9 of the same drives (also flashed with CC35) using foreground initialization about ten days ago. It has also been running fine. Last Friday I decided to expand the second raid set from 9 to 11 drives - again using the same Seagate drives. It showed Migrating all weekend, then last night at about 64% migrated the alarm started beeping constantly on the controller. Both RAID sets were missing from the O/S. When I restarted the server, the first RAID set was up and normal; the second showed "failed migration". One of the original 9 drives was listed as "free" and my global hot spare drive had been assigned in its place. The array was not recoverable using RESCUE or SIGNAT. It is now being rebuilt with 12 drives.

The disk failures have been random - not on a particular backplane, drive, or slot. Areca tells me that "only enterprise drives supported" - which I already knew. I have had great luck with these drives so far, especially compared to their WD and Samsung counterparts - albeit not in a RAID array.

The questions:
  • Are the Seagate drives really not going to work?
  • Could it be a bad 1880 card?
  • Any other thoughts?
It's the Areca (or its firmware). I have the same card. I had problems with the first one (see prior posts in this thread), got a replacement, and it worked fine at first. During several terabytes of file transfers to the array after it was built, I twice had the transfer lock up. It was odd, but a reboot corrected it. It had been running fine since (just a couple of days). Then tonight I was copying data again, back from a bunch of pass-through disks that held the data (I had to rebuild the array because I inadvertently chose 4K block instead of LBA64), and it was going slow. Writing the data to the pass-through disks had been a lot faster (90 MB/s) than reading it back from those disks eight terabytes later (65 MB/s).

Thinking that the drives themselves were slow (I had gotten 300+ MB/s sustained transfers array-to-array, so I knew I had bandwidth available), I tried to copy data from a third drive, and this immediately caused all arrays and disks on the entire controller to drop. When I looked at the RAID set hierarchy, it was as if there was no expander on the card at all: it saw no slots and no drives.

I rebooted the server and the arrays reappeared; however, two pass-through disks had lost their settings and had to be set up as pass-through disks again. Once I reset them they worked fine. I tried restarting the file transfers that had been running previously, and they worked, only now at much faster speeds - 114 and 117 MB/s concurrently. I then tried a network transfer and was getting 108 MB/s concurrent with the other two.

So clearly something odd is happening here. I see one of two possibilities: either their firmware still has some kinks that need to be worked out (the card is new, after all), or the expander chip they source from LSI (or an associated component) is not up to spec - all of my issues have manifested as having something to do with the expander.

(NOTE: My arrays are a mix of 4x 2TB WD EARS + 4x Samsung 2TB F4 RAID6, and a 5x Seagate 1.5TB 7200.11 RAID 6)
 
Try taking those Seagate drives out. I've had issues where the Seagate 1.5TBs don't cooperate with the SAS expander, causing drives to not be detected, to drop out, or to have other issues.
 
Try taking those Seagate drives out. I've had issues where the Seagate 1.5TBs don't cooperate with the SAS expander, causing drives to not be detected, to drop out, or to have other issues.
Ironically, the Seagates were the array that was not being used. It was the others that caused the whole card to "disappear" when I attempted three concurrent transfers. It has continued to run smoothly since I rebooted - I'd say it's been running for an hour or so - and I'm getting 109 MB/s over the network, and 109 and 111 MB/s from two pass-through disks, all writing to the Samsung/WD array.

One thing I did do after the card disappeared (the drives disappeared, but I was still able to access the web interface) was disable the power-saving settings on the drives. So for now I'll keep them all spinning and see how it goes. Maybe the spin-down was the cause.
 
Try taking those Seagate drives out. I've had issues where the Seagate 1.5TBs don't cooperate with the SAS expander, causing drives to not be detected, to drop out, or to have other issues.
I knew of the issues with the 1.5TB drives, but I heard that the 2TB drives were good - with the CC35 firmware.

I guess I'll throw some Hitachi drives in there and test them. The 7K2000 drives sound like the ticket. I wonder if the new 5K3000 drives are as good.
 
I have had problems with the spin-down settings dropping the array. When flashing the Seagate ST32000542AS firmware from CC34 to CC35, I would suggest doing one drive at a time and verifying that the drive is seen correctly in the array. When I flashed mine I did three at a time; that caused a failure, but I was able to recover after a few rescue resets. I set my hard drive power management back to disabled, and haven't had any problems since.
 
Not sure what happened to the last RAID expansion event or my previous posts. I was able to modify the volume and expand the raid set to the increased size by adding another hard drive to the mix. This time everything went well in the storage manager area of Windows 7 Ultimate, and the new size now matches what the MY COMPUTER area of Windows shows. Thanks all for the replies :D
 
One thing I did do after the card disappeared (the drives disappeared, but I was still able to access the web interface) was disable the power-saving settings on the drives. So for now I'll keep them all spinning and see how it goes. Maybe the spin-down was the cause.

It's the spin-up rather than the spin-down that's often the cause of false timeouts, and it was more pronounced on the 1680 than the 1880 series. The default staggered spin-up time is 0.7 seconds on Areca cards, but with some drives you have to back that off to 1.0 or 2.0 seconds, and in some cases longer. Basically, start with the default and then go up one value at a time until the timeouts go away.

Granted, there are other factors that may be causing false timeouts, but the staggered spin-up time is a starting point for troubleshooting. I've harped on Areca about looking at this issue more and have gotten the canned response back - "it's a non-enterprise drive, which means if we ask the HD manufacturer for support they'll tell us sorry" - but my gut tells me it's an issue that Areca could fix on their end without requiring the HD vendors to make any firmware changes on theirs.
 
It may not be related, but check the drive temps. I found the RAID would drop some drives if their temps got too high. In the chassis they were getting up to 39C. Out in an external chassis, or even lying on the bench, the temps dropped to 33C. Keeping them under 35C has eliminated the intermittent problems.
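
If you want to spot-check temps from the OS rather than from the Areca web GUI, smartmontools can usually reach drives behind the controller. On Linux it's something along these lines (the areca disk number and the /dev/sg device are just examples - adjust for your setup, and drives behind an expander may need the areca,N/E form):

smartctl -a -d areca,1 /dev/sg2 | grep -i temperature

The Areca web GUI also shows drive temperature on the device information page if you'd rather not touch the command line.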
 
To answer a few questions above:
  • The drives were all flashed one at a time on a separate machine, then scanned to verify the version.
  • The staggered power-on is set at 1.5 seconds
  • All drive power management is disabled
  • The drives are at 33-34C when in a normal RAID state and 34-35C when initializing.

So far the new array of twelve Seagate drives has been foreground initializing since yesterday morning - currently at about 25%. No hiccups yet.

I have 12 of the new 5K3000 Hitachi drives on the way to build a third array. I should be able to start them on Friday.
 
To answer a few questions above:
  • The drives were all flashed one at a time on a separate machine, then scanned to verify the version.
  • The staggered power-on is set at 1.5 seconds
  • All drive power management is disabled
  • The drives are at 33-34C when in a normal RAID state and 34-35C when initializing.

So far the new array of twelve Seagate drives has been foreground initializing since yesterday morning - currently at about 25%. No hiccups yet.

I have 12 of the new 5K3000 Hitachi drives on the way to build a third array. I should be able to start them on Friday.

Is that 3TB 5K3000?
 
It's the spin-up rather than the spin-down that's often the cause of false timeouts, and it was more pronounced on the 1680 than the 1880 series. The default staggered spin-up time is 0.7 seconds on Areca cards, but with some drives you have to back that off to 1.0 or 2.0 seconds, and in some cases longer. Basically, start with the default and then go up one value at a time until the timeouts go away.

Granted, there are other factors that may be causing false timeouts, but the staggered spin-up time is a starting point for troubleshooting. I've harped on Areca about looking at this issue more and have gotten the canned response back - "it's a non-enterprise drive, which means if we ask the HD manufacturer for support they'll tell us sorry" - but my gut tells me it's an issue that Areca could fix on their end without requiring the HD vendors to make any firmware changes on theirs.
Thanks for the info, odd! I did mean spin-up when I said spin-down; I was just referring to the whole process. Ever since I disabled that power-saving mode I've had no issues. I may try the low-RPM mode in the future.

My concern that this might be a larger issue is that it occurred while it was spinning up a pass-through disk, so I wouldn't really expect that to cause issues. The RAID 6 arrays were spun up and in use. I also have a 750W single rail server-level power supply, so power should never be a problem I would think. Even if there were an issue with a single disk, that should only affect the disk, or the array it is attached to. It shouldn't bring down 5 arrays and the entire card.

Methinks there are some bugs in the firmware - some rare conditions are throwing errors that aren't handled properly by the Areca firmware. Individual drive incompatibility should never affect the stability of the entire card, and you're right, Areca shouldn't need to work with the HDD manufacturers to be able to fix this.
 
I rebooted the server and the arrays reappeared; however, two pass-through disks had lost their settings and had to be set up as pass-through disks again.

FYI, this behavior is by design (true non-destructive pass-through) and is the nature of pass-through disks on Areca controllers, which is actually a blessing considering the way certain other brands of controllers attempt to handle this (destructively). Since no metadata is written to the disk, the controller doesn't know the disk is supposed to be pass-through at boot time. The controller also doesn't remember these types of disk settings in NVRAM; doing so would create all sorts of additional headaches. The controller makes all of its configuration assumptions based on the metadata it reads from array member disks, which is the behavior of most array controllers.

The one exception to that is if the controller is switched from RAID to JBOD mode; then every port is a pass-through and there's no need to create pass-through disks manually.
 
FYI, this behavior is by design (true non-destructive pass-through) and is the nature of pass-through disks on Areca controllers, which is actually a blessing considering the way certain other brands of controllers attempt to handle this (destructively). Since no metadata is written to the disk, the controller doesn't know the disk is supposed to be pass-through at boot time. The controller also doesn't remember these types of disk settings in NVRAM; doing so would create all sorts of additional headaches. The controller makes all of its assumptions based on what's in the metadata of array members, which is the behavior of most array controllers for various reasons.
Didn't know that - good to know, thanks! Any reason why two would reset (including the oldest of the four) while two would not?
 
Not sure what happened to the last RAID expansion event or my previous posts. I was able to modify the volume and expand the raid set to the increased size by adding another hard drive to the mix. This time everything went well in the storage manager area of Windows 7 Ultimate, and the new size now matches what the MY COMPUTER area of Windows shows. Thanks all for the replies :D

Glad you got it working.
 
Thanks for that tip. I'll try it out. I just remembered that someone recently swapped the chassis from a 750W PSU to a 500W PSU. I only have 6 drives and one CPU, though, so I would have imagined that'd be enough to power everything.

It wouldn't surprise me at all if you were having a power issue.

When I originally built my home server I started out with 2 drives in RAID 1. At the time I knew that with a 24-port RAID card I would eventually have a lot of drives to power, so I decided to get a 1000W power supply to future-proof a bit. Over time the array grew until I got to 8 drives. After I got to 8 drives I began to experience a total RAID card crash whenever I put any sort of load on the drives (long sustained reads or writes). It actually caused the RAID card to reboot/reset and not show any disks connected. The only way to fix it was to reboot the server (and thankfully the array was fine each time).

At first I blamed the RAID card as being faulty, but after some back and forth with Areca support (and their helpful suggestion to replace the 8 drives with enterprise models) I determined I had a power issue. I replaced the 1000W power supply with a highly rated PC&P 750W unit, and as soon as I did that all my problems went away. I have since grown to hosting 21 drives in this server without a hint of power issues, even through a 12-hour, full-data (9TB) bit-by-bit compare (testing the integrity of my data on a new set of 3TB Hitachis).
 
Thanks for the tips everyone. I swapped the chassis back to the better PSU and am hoping it becomes more stable. However, I ran into a bigger problem: I also swapped my RAID controllers, and for some reason my RAID 6 array is now corrupt. All the volumes are there, but when I booted into the OS it complained that all of the volumes were corrupt. I was able to fsck the volumes that have filesystems and it looks like the data is there, but I use raw logical volumes as iSCSI LUNs for ESX and those are all corrupt. For example, when I view the contents of a .vmx file, instead of showing me the configuration of a VM it shows a log file. Any ideas how something like this could happen? It doesn't look like there is anything I can really do to save the data besides backing up what I have and re-creating the volume.
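
Before I re-create anything I'm going to image the raw LVs and stick to read-only checks, roughly like this (the VG/LV names and backup path are just placeholders for mine):

fsck -n /dev/vg0/datavol
dd if=/dev/vg0/esx_lun of=/backup/esx_lun.img bs=1M

The -n keeps fsck from writing anything, and the dd gives me a raw image of each iSCSI LUN before I touch it further.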
 
Thanks for the tips everyone. I swapped the chassis back to the better PSU and am hoping it becomes more stable. However, I ran into a bigger problem: I also swapped my RAID controllers, and for some reason my RAID 6 array is now corrupt. All the volumes are there, but when I booted into the OS it complained that all of the volumes were corrupt. I was able to fsck the volumes that have filesystems and it looks like the data is there, but I use raw logical volumes as iSCSI LUNs for ESX and those are all corrupt. For example, when I view the contents of a .vmx file, instead of showing me the configuration of a VM it shows a log file. Any ideas how something like this could happen? It doesn't look like there is anything I can really do to save the data besides backing up what I have and re-creating the volume.
Run CHKDSK. I had an issue in the past where a RAID 5 array went offline, then came back online, but any time I tried to open a folder it said the drive was corrupt. CHKDSK fixed it.
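
For what it's worth, the usual form is just this (the drive letter is whatever the array shows up as - E: here is only an example):

chkdsk E: /f

If it reports bad sectors you can add /r, but that takes a lot longer on a big array.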
 
Unfortunately there is no chkdsk for VMware, so I can't run anything like that. I just finished running fsck on one of the other volumes (Linux FS) and all the data is still gone from the filesystem, so I might be screwed here.
 
I decided to order an 1880ix-24, so I'll post some dd results once I get it.

23x ST32000444SS in RAID 6. Unfortunately the writes are already CPU-limited at around 1.1GB/s on my 1680ix-24, but the reads should be able to scale quite a bit higher - currently 1.3GB/s on the 1680 with around 50-58% CPU usage on a single core.
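
For reference, the kind of dd run I use for these numbers is roughly the following (the mount point and file size are just examples; the direct I/O flags keep the page cache from inflating the results):

dd if=/dev/zero of=/mnt/array/ddtest bs=1M count=32768 oflag=direct
dd if=/mnt/array/ddtest of=/dev/null bs=1M iflag=direct

First line is the sequential write test, second is the sequential read of the same file.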
 
Today I installed an Areca ARC-1100 RAID controller in my home server. The write speeds (tested with dd and bonnie++) are really low; the read speeds, however, are fine.
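
For reference, the tests were nothing fancy - roughly along these lines (the mount point and sizes are just examples from memory; conv=fdatasync makes dd flush to disk so the cache doesn't hide the real write speed):

dd if=/dev/zero of=/mnt/disk/testfile bs=1M count=4096 conv=fdatasync
bonnie++ -d /mnt/disk -s 8192 -n 0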

Setup:
Motherboard: Gigabyte GA-MA78GM-S2H (Latest BIOS)
CPU: AMD Athlon X2 5050e
Hard disk: Samsung EcoGreen 2TB
OS: Ubuntu Live from USB (also tried in Windows 7)

Speeds:
Read: 109 MB/s
Write: 23 MB/s

I tried both write-back and write-through; it is a single-disk setup with the Areca in disk pass-through mode.

When I put the same Areca controller, same disk and same SATA cable in my other computer (Asus P5E, Intel E8400), the reads are the same, however the writes are much higher (100 MB/s). The OS is exactly the same as on the home server (same USB drive).

Can someone help me solve this problem? Why is the write speed so low in my home server?

Thanks!
 
Areca 1880
Looking for up to 20 2TB drives; currently have 8 in RAID 6.

This is for my media server - what is the max number of drives I should put in one RAID array?

My plan was two 10-drive RAID 6 arrays. I know build and rebuild times are long with this many drives, but I think more than 10 might be extreme. Any advice or thoughts?
 
Areca 1880
Looking for up to 20 2TB drives; currently have 8 in RAID 6.

This is for my media server - what is the max number of drives I should put in one RAID array?

My plan was two 10-drive RAID 6 arrays. I know build and rebuild times are long with this many drives, but I think more than 10 might be extreme. Any advice or thoughts?
My 23x 2TB in RAID 6 can rebuild in about 5 hours with average usage going on.

That's on an older 1680ix-24 so your 1880 might go even faster with the right drives.
 