Areca 1170 RAID6 failed

So I received some email alerts with errors from my NAS. After looking into it, my data was no longer available. Next I looked at the Areca and noticed two drives had failed in the RAID6 set (raid degraded). I thought, OK, I will replace a drive, it should rebuild, and hopefully the data will show up again. So I used the Areca drive locator to make the failed drive flash. When I pulled the flashing drive (what was supposed to be drive 7), the drive next to it immediately started flashing. I went ahead and replaced the drive I was working with. Then I looked at my screen and noticed the raid was now failed (due to drive 6 being removed). I immediately powered off the box and put the old drive back in.
After that I wasn't able to boot at all; I was getting stuck on the RAID f/w screen trying to initialize. Next I powered off the box and unplugged the battery for about 30 seconds. Now when I boot up I have three copies of Raid Set # 00: two with one drive each and the rest of the drives "Missing", plus the original Raid Set # 00 with two failed drives and one missing.

I was hoping some of the Areca gods could help me put my drives back into one raidset and magically make it work.

Code:
Raid Set 	IDE Channels (Volume Set / Volume State / Capacity columns were empty for all three)
Raid Set # 00 	Ch01-Ch05, Missing, Failed, Ch08-Ch22, Failed, Ch24
Raid Set # 00 	Missing x5, Ch06, Failed, Missing x15, Failed, Missing
Raid Set # 00 	Missing x22, Ch23, Missing
IDE Channels
Channel 	Usage 	Capacity 	Model
Ch01 	Raid Set # 00 	750.2GB 	ST3750640NS
Ch02 	Raid Set # 00 	750.2GB 	ST3750640NS
Ch03 	Raid Set # 00 	750.2GB 	ST3750640NS
Ch04 	Raid Set # 00 	750.2GB 	ST3750640NS
Ch05 	Raid Set # 00 	750.2GB 	ST3750640NS
Ch06 	Raid Set # 00 	750.2GB 	ST3750640NS
Ch07 	Failed 	0.0GB 	
Ch08 	Raid Set # 00 	750.2GB 	ST3750640NS
Ch09 	Raid Set # 00 	750.2GB 	ST3750640NS
Ch10 	Raid Set # 00 	750.2GB 	ST3750640NS
Ch11 	Raid Set # 00 	750.2GB 	ST3750640NS
Ch12 	Raid Set # 00 	750.2GB 	ST3750640NS
Ch13 	Raid Set # 00 	750.2GB 	ST3750640NS
Ch14 	Raid Set # 00 	750.2GB 	ST3750640NS
Ch15 	Raid Set # 00 	750.2GB 	ST3750640NS
Ch16 	Raid Set # 00 	750.2GB 	ST3750640NS
Ch17 	Raid Set # 00 	750.2GB 	ST3750640NS
Ch18 	Raid Set # 00 	750.2GB 	ST3750640NS
Ch19 	Raid Set # 00 	750.2GB 	ST3750640NS
Ch20 	Raid Set # 00 	750.2GB 	ST3750640NS
Ch21 	Raid Set # 00 	750.2GB 	ST3750640NS
Ch22 	Raid Set # 00 	750.2GB 	ST3750640NS
Ch23 	Raid Set # 00 	750.2GB 	ST3750640NS
Ch24 	Raid Set # 00 	750.2GB 	ST3750640NS

Here are the System Events
Code:
2014-8-20 11:52:13 	Proxy Or Inband 	HTTP Log In 	  	 
2014-8-20 11:50:13 	Proxy Or Inband 	HTTP Log In 	  	 
2014-8-19 17:11:49 	Proxy Or Inband 	HTTP Log In 	  	 
2014-8-19 17:5:18 	Proxy Or Inband 	HTTP Log In 	  	 
2016-4-20 17:3:29 	Incomplete RAID 	Discovered 	  	 
2016-4-20 17:3:29 	H/W Monitor 	Raid Powered On 	  	 
2014-8-19 17:1:18 	Raid Set # 00 	RaidSet Degraded 	  	 
2014-8-19 16:59:20 	RS232 Terminal 	VT100 Log In 	  	 
2014-8-19 16:27:23 	RS232 Terminal 	VT100 Log In 	  	 
2016-4-20 16:27:12 	Incomplete RAID 	Discovered 	  	 
2016-4-20 16:27:12 	H/W Monitor 	Raid Powered On 	  	 
2014-8-19 16:23:36 	Raid Set # 00 	RaidSet Degraded 	  	 
2014-8-19 16:7:4 	RS232 Terminal 	VT100 Log In 	  	 
2016-4-20 16:6:46 	Incomplete RAID 	Discovered 	  	 
2016-4-20 16:6:46 	H/W Monitor 	Raid Powered On 	  	 
2016-4-20 16:5:31 	Incomplete RAID 	Discovered 	  	 
2016-4-20 16:5:30 	H/W Monitor 	Raid Powered On 	  	 
2014-8-19 15:53:27 	Proxy Or Inband 	HTTP Log In 	  	 
2016-4-20 15:51:39 	Incomplete RAID 	Discovered 	  	 
2016-4-20 15:51:39 	H/W Monitor 	Raid Powered On 	  	 
2014-8-19 10:40:15 	Proxy Or Inband 	HTTP Log In 	  	 
2016-4-20 10:37:22 	H/W Monitor 	Raid Powered On 	  	 
2016-4-20 10:35:51 	H/W Monitor 	Power On With Battery Backup 	  	 
2016-4-20 10:34:20 	H/W Monitor 	Power On With Battery Backup 	  	 
2016-4-20 10:32:49 	Incomplete RAID 	Discovered 	  	 
2016-4-20 10:32:49 	H/W Monitor 	Power On With Battery Backup 	  	 
2016-4-20 10:31:37 	Incomplete RAID 	Discovered 	  	 
2016-4-20 10:31:37 	H/W Monitor 	Power On With Battery Backup 	  	 
2016-4-20 10:29:24 	Incomplete RAID 	Discovered 	  	 
2016-4-20 10:29:24 	H/W Monitor 	Power On With Battery Backup 	  	 
2014-8-13 16:17:11 	Raid Set # 00 	Rebuild RaidSet 	  	 
2014-8-13 16:17:11 	IDE Channel 6 	Device Inserted 	  	 
2014-8-13 16:16:42 	IDE Channel 6 	Device Removed 	  	 
2014-8-13 16:16:41 	Raid Set # 00 	RaidSet Degraded 	  	 
2014-8-13 16:16:41 	ARC-1170-VOL#00 	Volume Failed 	  	 
2014-8-13 16:12:36 	Proxy Or Inband 	HTTP Log In 	  	 
2016-4-14 15:59:43 	Incomplete RAID 	Discovered 	  	 
2016-4-14 15:59:43 	H/W Monitor 	Raid Powered On
 
Before you do ANYTHING! Do you have the RAID layout (physical drive order, block size, stripe size, etc.) of your original array? Do you have enough blank drives to make bit-perfect backups of the existing drives, so you can undo and retry recoveries if the first attempt goes awry? What firmware/BIOS levels are on your Areca currently? Can you post the complete log, including the events prior to 8/13/14 where it sees the volume failed?
 
Is it possible to pull block size and stripe size from the Areca BIOS? I will work on getting this info from the original NAS provider if it isn't there. I believe I can get the drive layout from the Areca BIOS. Sad to say, but no, I don't have a place to copy my drives off to.

BIOS V1.17b
FW V1.42

Code:
2014-8-20 11:52:13 	Proxy Or Inband 	HTTP Log In 	  	 
2014-8-20 11:50:13 	Proxy Or Inband 	HTTP Log In 	  	 
2014-8-19 17:11:49 	Proxy Or Inband 	HTTP Log In 	  	 
2014-8-19 17:5:18 	Proxy Or Inband 	HTTP Log In 	  	 
2016-4-20 17:3:29 	Incomplete RAID 	Discovered 	  	 
2016-4-20 17:3:29 	H/W Monitor 	Raid Powered On 	  	 
2014-8-19 17:1:18 	Raid Set # 00 	RaidSet Degraded 	  	 
2014-8-19 16:59:20 	RS232 Terminal 	VT100 Log In 	  	 
2014-8-19 16:27:23 	RS232 Terminal 	VT100 Log In 	  	 
2016-4-20 16:27:12 	Incomplete RAID 	Discovered 	  	 
2016-4-20 16:27:12 	H/W Monitor 	Raid Powered On 	  	 
2014-8-19 16:23:36 	Raid Set # 00 	RaidSet Degraded 	  	 
2014-8-19 16:7:4 	RS232 Terminal 	VT100 Log In 	  	 
2016-4-20 16:6:46 	Incomplete RAID 	Discovered 	  	 
2016-4-20 16:6:46 	H/W Monitor 	Raid Powered On 	  	 
2016-4-20 16:5:31 	Incomplete RAID 	Discovered 	  	 
2016-4-20 16:5:30 	H/W Monitor 	Raid Powered On 	  	 
2014-8-19 15:53:27 	Proxy Or Inband 	HTTP Log In 	  	 
2016-4-20 15:51:39 	Incomplete RAID 	Discovered 	  	 
2016-4-20 15:51:39 	H/W Monitor 	Raid Powered On 	  	 
2014-8-19 10:40:15 	Proxy Or Inband 	HTTP Log In 	  	 
2016-4-20 10:37:22 	H/W Monitor 	Raid Powered On 	  	 
2016-4-20 10:35:51 	H/W Monitor 	Power On With Battery Backup 	  	 
2016-4-20 10:34:20 	H/W Monitor 	Power On With Battery Backup 	  	 
2016-4-20 10:32:49 	Incomplete RAID 	Discovered 	  	 
2016-4-20 10:32:49 	H/W Monitor 	Power On With Battery Backup 	  	 
2016-4-20 10:31:37 	Incomplete RAID 	Discovered 	  	 
2016-4-20 10:31:37 	H/W Monitor 	Power On With Battery Backup 	  	 
2016-4-20 10:29:24 	Incomplete RAID 	Discovered 	  	 
2016-4-20 10:29:24 	H/W Monitor 	Power On With Battery Backup 	  	 
2014-8-13 16:17:11 	Raid Set # 00 	Rebuild RaidSet 	  	 
2014-8-13 16:17:11 	IDE Channel 6 	Device Inserted 	  	 
2014-8-13 16:16:42 	IDE Channel 6 	Device Removed 	  	 
2014-8-13 16:16:41 	Raid Set # 00 	RaidSet Degraded 	  	 
2014-8-13 16:16:41 	ARC-1170-VOL#00 	Volume Failed 	  	 
2014-8-13 16:12:36 	Proxy Or Inband 	HTTP Log In 	  	 
2016-4-14 15:59:43 	Incomplete RAID 	Discovered 	  	 
2016-4-14 15:59:43 	H/W Monitor 	Raid Powered On 	  	 
2014-8-13 11:6:50 	Proxy Or Inband 	HTTP Log In 	  	 
2014-8-13 9:53:9 	IDE Channel 24 	Reading Error 	  	 
2014-7-11 18:45:49 	IDE Channel 24 	Reading Error 	  	 
2014-7-11 18:24:5 	IDE Channel 24 	Reading Error 	  	 
2014-7-11 17:18:55 	IDE Channel 24 	Reading Error 	  	 
2014-7-11 17:12:58 	IDE Channel 24 	Reading Error 	  	 
2014-7-11 15:42:25 	IDE Channel 24 	Reading Error 	  	 
2014-7-11 15:15:46 	IDE Channel 24 	Reading Error 	  	 
2014-7-11 14:42:15 	IDE Channel 24 	Reading Error 	  	 
2014-7-11 13:48:32 	IDE Channel 24 	Reading Error 	  	 
2014-7-11 13:42:38 	IDE Channel 24 	Reading Error 	  	 
2014-7-11 13:39:42 	IDE Channel 24 	Reading Error 	  	 
2014-7-11 12:33:11 	IDE Channel 24 	Reading Error 	  	 
2014-7-11 12:21:40 	IDE Channel 24 	Reading Error 	  	 
2014-7-11 11:28:23 	IDE Channel 24 	Reading Error 	  	 
2014-7-11 9:17:13 	IDE Channel 24 	Reading Error 	  	 
2014-7-11 8:38:41 	IDE Channel 24 	Reading Error 	  	 
2014-7-11 7:35:39 	IDE Channel 24 	Reading Error 	  	 
2014-7-11 7:32:5 	IDE Channel 24 	Reading Error 	  	 
2014-7-11 6:9:7 	IDE Channel 24 	Reading Error 	  	 
2014-7-11 3:27:2 	IDE Channel 24 	Reading Error 	  	 
2014-7-11 1:15:37 	IDE Channel 24 	Reading Error 	  	 
2016-3-9 10:13:41 	Incomplete RAID 	Discovered 	  	 
2016-3-9 10:13:40 	H/W Monitor 	Raid Powered On 	  	 
2014-7-7 21:50:15 	IDE Channel 7 	Device Failed 	  	 
2014-7-7 21:50:13 	Raid Set # 00 	RaidSet Degraded
 
This might be a recovery job for R-Studio. And yes, you will need to make backups of each drive first if you want to be safe.

Hardware RAID cards are just stupid. Too damn many times have I had them decide an array is fucked up beyond repair when it in fact was not. Too many of them just love to drop disks that have not failed. This is why I only use ZFS now. Hardware RAID card manufacturers ought to lift some code out of ZFS, because I've never had any of these issues with it. Whereas Arecas are known for this shit.
 
Unfortunately I don't have the resources to back up the disks.

I might be shot, but I went ahead and did a level2rescue, which placed all my drives back into one Raid Set # 00, but the volume was not showing up. So I activated the raidset, and now my volume shows up but is failed, as I somewhat expected. However, it only shows 2 drives failed in a RAID6, so now I'm wondering if there is a way to tell it that it is not failed, just degraded.
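That expectation is right in principle: RAID6 carries two disks' worth of parity, so losing two members should leave a volume degraded but readable, and only a third loss should fail it. A rough sketch of that state logic (my own Python illustration of generic RAID6 behavior, not Areca's actual firmware code):

Code:
# Sketch of generic RAID6 state logic -- an illustration only,
# not Areca firmware code.
def raid6_state(failed_members: int) -> str:
    PARITY_DISKS = 2  # RAID6 tolerates up to two lost members
    if failed_members == 0:
        return "Normal"
    if failed_members <= PARITY_DISKS:
        return "Degraded"  # readable, but no redundancy left at two failures
    return "Failed"        # a third loss makes the data unreconstructable

print(raid6_state(2))  # -> Degraded, which is what a 24-disk RAID6
                       #    with only 2 failures ought to report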

Found this in the volume info:
Code:
Volume Set Name 	ARC-1170-VOL#00
Raid Set Name 	Raid Set # 00
Volume Capacity 	16500.0GB
SCSI Ch/Id/Lun 	0/0/0
Raid Level 	Raid 6
Stripe Size 	64KBytes
Block Size 	512Bytes
Member Disks 	24
Cache Mode 	Write Back
Tagged Queuing 	Enabled
Volume State 	Failed
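The reported capacity is at least self-consistent: RAID6 reserves two disks' worth of space for parity, so usable capacity is (members - 2) x per-disk size. A quick check in Python, using the numbers from the readout above:

Code:
# Sanity-check the reported 16500.0GB against the RAID6 capacity formula.
member_disks = 24
parity_disks = 2      # RAID6 reserves two disks' worth of space for parity
disk_gb = 750.0       # 750.2GB drives, rounded down by the controller

usable_gb = (member_disks - parity_disks) * disk_gb
print(usable_gb)      # -> 16500.0, matching the Volume Capacity above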

Raid Drive Info
Code:
Raid Set 	IDE Channels 	Volume Set(Ch/Id/Lun) 	Volume State 	Capacity
Raid Set # 00 	Ch01-Ch06, Failed, Ch08-Ch22, Failed, Ch24 	ARC-1170-VOL#00 (0/0/0) 	Failed 	16500.0GB
 
How could you build a single 24-disk pool without the means to back it up? I mean, even two 12-drive pools would have been so much more flexible.
 
Unfortunately this was something I came into. My guess as to why it was done this way is that they used to be a Windows-only shop and only had so many drive letters to use. So I am stuck with this mess for now.
 
What it comes down to now is how valuable the data is. If you MUST have the data and you don't want to go the professional recovery route, buy enough new drive space to make bit-perfect images of the drives you already have in the array. You can then start trying alternate recovery routes (forcing the array good, R-Studio, or anything else you want to attempt) with the ability to return to the current state after an unsuccessful attempt.
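If you do get the drive space, GNU ddrescue is the usual imaging tool, but even a crude read-everything-and-skip-errors copy beats nothing. Here is a minimal sketch; the device and image paths are placeholders, you would run it once per member drive, and it's a stand-in for ddrescue rather than a replacement:

Code:
# Minimal bit-perfect drive imaging sketch (a poor man's ddrescue).
# SRC/DST are placeholders -- adjust per drive. Unreadable regions are
# zero-filled so the image stays byte-aligned with the source disk.
import os

SRC = "/dev/sdb"          # placeholder: one of the 750GB members
DST = "/backup/ch01.img"  # placeholder: file on the new drive space
CHUNK = 1024 * 1024       # 1 MiB per read

with open(SRC, "rb", buffering=0) as src, open(DST, "wb") as dst:
    while True:
        try:
            data = src.read(CHUNK)
        except OSError:
            data = b"\x00" * CHUNK          # pad the bad region with zeros
            src.seek(CHUNK, os.SEEK_CUR)    # and skip past it on the source
        if not data:
            break
        dst.write(data)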
 
Why are you being so cheap? :p
Just buy 6x 4TB disks and start saving the disk images; 24 drives at 750GB is roughly 18TB of raw images, so 6x 4TB covers it with room to spare. You can then resell those disks to recoup some of the cost after all is done.

You are ignoring all advice asking you to do so without providing any real explanation as to why you won't/can't.

Moreover, a degraded array should not cause you to lose access to the data.
So, there is something missing in that story.
 
Because I am not being given approval to do so. This is an old NAS box that we want to phase out anyway. Honestly, the IT department is the red-headed step-department here, but that's a whole other story. So I'm stuck trying to make lemonade out of rotten lemons, and they want it made yesterday.

Just an update.
Long story short: I deleted the raidset and recreated it, and it is now rebuilding.
 
Wait... so, you were given approval to lose all the data on the array?

Must not have been important data, then, to choose losing it over merely $1k in expenses.
 