ARECA Owner's Thread (SAS/SATA RAID Cards)

Hi everyone, I'm new to this forum and have some questions about ARC-1883i.

I'm running Windows 10 as a NAS, with 6 Seagate SATA EXOS 20TB drives in RAID6 on the ARC-1883i.

The system is running fine, but I'm trying to put the HDDs into standby.

Under "System Controls > Hdd Power Management" I set "Time To Spin Down Idle HDD" to 30 minutes, and it worked fine.

BUT after a while the array wakes up from "spun down" without any direct access.

The typical background-polling software (AIDA64 or any other S.M.A.R.T. tool) is not installed.

I could not find a solution in this forum, but maybe there are some users here who have experience with this setting or this problem?

Is there any way to keep the array permanently in "sleep"?
 
It is most likely that something on the network (or even on the local machine) is polling the shared drive, which forces it to wake to respond. If you pull the Ethernet cable, does the problem go away? That's an easy way to see whether it is coming from the local network or from the local machine itself.
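If pulling the cable isn't practical, another way to narrow it down is to watch the volume's I/O counters and note when something touches it. Here is a minimal sketch in Python using psutil; the disk name "PhysicalDrive1" and the 60-second poll interval are assumptions, so adjust them for your system:

Code:
# Minimal sketch: report whenever anything reads or writes the RAID volume.
# Assumes Python 3 with psutil installed (pip install psutil) and that the
# Areca volume shows up as "PhysicalDrive1" in psutil's per-disk counters.
import time
import psutil

DISK = "PhysicalDrive1"   # assumption: change to the Areca volume's disk name
POLL_SECONDS = 60

last = psutil.disk_io_counters(perdisk=True)[DISK]
while True:
    time.sleep(POLL_SECONDS)
    now = psutil.disk_io_counters(perdisk=True)[DISK]
    if (now.read_count, now.write_count) != (last.read_count, last.write_count):
        print(f"{time.ctime()}: I/O seen on {DISK} "
              f"(+{now.read_count - last.read_count} reads, "
              f"+{now.write_count - last.write_count} writes)")
    last = now

The timestamps it prints can then be matched against Task Scheduler entries, backup jobs, or clients on the network.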
 
That's a good idea - I will try it and let you know.
 
Hi everyone. I'm hoping that this crew will have the info I desperately need.

I was in the process of migrating data from one RAID volume to a new (larger capacity) volume. During this process, I had several drives fail in my original volume. I offlined the original volume until I was able to identify what was happening. I read that after you offline the volume, you need to remove the drives and reseat them in order for the ARC-1880 to pick them back up.

After doing this, I found that instead of recovering the original set, it formed two different sets (each with only 2 disks of the 6). The 3 drives removed were 1 global spare and 2 drives from the original R6.

Is there a way to recover this original Raid6 of these Hitachi drives?

Code:
CLI> rsf info
 #  Name                     Disks      Total       Free  State
===============================================================================
 1  Raid Set # 000               6  12000.0GB      0.0GB  Degraded
 2  Raid Set # 000               6  12000.0GB      0.0GB  Degraded
 3  Raid Set # 001               4  32000.0GB      0.0GB  Normal
===============================================================================
CLI> rsf info raid=1
Raid Set Information
===========================================
Raid Set Name        : Raid Set # 000
Member Disks         : 6
Total Raw Capacity   : 12000.0GB
Free Raw Capacity    : 0.0GB
Min Member Disk Size : 2000.0GB
Supported Volumes    : 16
Raid Set Power State : Operating
Raid Set State       : Degraded
Security Status      : ISE Disks
Member Disk Channels : x.E2S9.x.x.E2S5.x.
===========================================
CLI> rsf info raid=2
Raid Set Information
===========================================
Raid Set Name        : Raid Set # 000
Member Disks         : 6
Total Raw Capacity   : 12000.0GB
Free Raw Capacity    : 0.0GB
Min Member Disk Size : 2000.0GB
Supported Volumes    : 16
Raid Set Power State : Operating
Raid Set State       : Degraded
Security Status      : ISE Disks
Member Disk Channels : x.x.E2S10.E2S11.x.x.
===========================================
CLI> vsf info
  # Name             Raid Name       Level   Capacity Ch/Id/Lun  State
===============================================================================
  1 ARC-1880-VOL#000 Raid Set # 000  Raid6   8000.0GB 00/00/00   Failed
  2 ARC-1880-VOL#001 Raid Set # 001  Raid6   16000.0GB 00/00/01   Normal
  3 ARC-1880-VOL#000 Raid Set # 000  Raid6   8000.0GB 00/00/00   Failed
===============================================================================
CLI> vsf info vol=1
Volume Set Information
===========================================
Volume Set Name   : ARC-1880-VOL#000
Volume Serial     :
Raid Set Name     : Raid Set # 000
Volume Capacity   : 8000.0GB
SCSI Ch/Id/Lun    : 00/00/00
Raid Level        : Raid6
Stripe Size       : 16K
Member Disks      : 6
Cache Mode        : Write Back
Write Protection  : Disabled
Volume Encryption : Non-Encrypted
IO Mode           : Cached IO
Tagged Queuing    : Enabled
Volume State      : Failed
===========================================
CLI> vsf info vol=3
Volume Set Information
===========================================
Volume Set Name   : ARC-1880-VOL#000
Volume Serial     :
Raid Set Name     : Raid Set # 000
Volume Capacity   : 8000.0GB
SCSI Ch/Id/Lun    : 00/00/00
Raid Level        : Raid6
Stripe Size       : 16K
Member Disks      : 6
Cache Mode        : Write Back
Write Protection  : Disabled
Volume Encryption : Non-Encrypted
IO Mode           : Cached IO
Tagged Queuing    : Enabled
Volume State      : Failed
===========================================
CLI> disk info
  # Enc# Slot#   ModelName                        Capacity  Usage
===============================================================================
  1  01  Slot#1  N.A.                                0.0GB  N.A.
  2  01  Slot#2  N.A.                                0.0GB  N.A.
  3  01  Slot#3  N.A.                                0.0GB  N.A.
  4  01  Slot#4  N.A.                                0.0GB  N.A.
  5  01  Slot#5  N.A.                                0.0GB  N.A.
  6  01  Slot#6  N.A.                                0.0GB  N.A.
  7  01  Slot#7  N.A.                                0.0GB  N.A.
  8  01  Slot#8  N.A.                                0.0GB  N.A.
  9  02  SLOT 01 WDC WD80EFBX-68AZZN0             8001.6GB  Raid Set # 001
 10  02  SLOT 02 N.A.                                0.0GB  N.A.
 11  02  SLOT 03 N.A.                                0.0GB  N.A.
 12  02  SLOT 04 N.A.                                0.0GB  N.A.
 13  02  SLOT 05 Hitachi HDS5C3020ALA632          2000.4GB  Raid Set # 000
 14  02  SLOT 06 WDC WD80EFBX-68AZZN0             8001.6GB  Raid Set # 001
 15  02  SLOT 07 WDC WD80EFBX-68AZZN0             8001.6GB  Raid Set # 001
 16  02  SLOT 08 WDC WD80EFBX-68AZZN0             8001.6GB  Raid Set # 001
 17  02  SLOT 09 Hitachi HDS5C3020ALA632          2000.4GB  Raid Set # 000
 18  02  SLOT 10 Hitachi HDS5C3020ALA632          2000.4GB  Raid Set # 000
 19  02  SLOT 11 Hitachi HDS5C3020ALA632          2000.4GB  Raid Set # 000
 20  02  SLOT 12 N.A.                                0.0GB  N.A.
 21  02  EXTP 01 N.A.                                0.0GB  N.A.
 22  02  EXTP 02 N.A.                                0.0GB  N.A.
 23  02  EXTP 03 N.A.                                0.0GB  N.A.
 24  02  EXTP 04 N.A.                                0.0GB  N.A.
===============================================================================

Is there a way to recover this original Raid6 of these Hitachi drives?
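For reference, the volume capacities in the listing are consistent with plain RAID6 arithmetic (usable space is (N - 2) x member size), which is a quick sanity check on what the controller thinks each set contains. A minimal sketch using the figures from the log above:

Code:
# Minimal sketch: RAID6 usable capacity is (members - 2) * member size.
def raid6_usable_gb(members: int, member_size_gb: float) -> float:
    if members < 4:
        raise ValueError("RAID6 needs at least 4 members")
    return (members - 2) * member_size_gb

print(raid6_usable_gb(6, 2000.0))  # 8000.0  -> matches ARC-1880-VOL#000 (6 x 2TB Hitachi)
print(raid6_usable_gb(4, 8000.0))  # 16000.0 -> matches ARC-1880-VOL#001 (4 x 8TB WD)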
 
First, please post the complete log from the card. Are the drives still in their original drive order? Next, you mentioned "2 disks of 6": did you originally have a 4-drive R6 of 2TB drives, which you were migrating to a second 4-drive R6 of 8TB drives? And do you just need to recover the data on the 2TB drives? In other words, do you have 8TB drives (holding no data you currently need) that you can use to back up images of the 2TB drives, so you can attempt repairs on the copies without damaging the original data in case different attempts fail?
 
Thank you for the reply. Allow me to clarify:
I was in the process of migrating the data away from a 6x 2TB R6 (+1 Spare) and onto a 4x 8TB R6. During this process, I had a single drive fail. This caused the spare to begin rebuilding. Then the spare failed, followed by yet another disk.

At that point I stopped the copy process. I was then worried about losing another disk, so I offlined the RAID. I then removed all the disks, sorted out the failed ones, and put the operational drives back. I believe they're in the correct slots.

Once I put them back and brought the server online, it appeared to be in this state. I may have gotten the drives in the wrong order?

I'd be happy to produce a log, but where do I get that from?
 
Based upon what you stated, only 2 members of the R6 failed (the first two, which were array members; the spare had not yet been graduated to an array member). While that leaves you no remaining parity, it should (if there are no other issues) still leave you the ability to activate and/or repair the array. As for the log, you can get it through McBIOS or over Ethernet in the Raid System Console -> System Controls -> View Events.
 
I believe this might be what you're after...
Please let me know if there is anything else I can grab for you:

Code:
CLI> main
Copyright (c) 2004-2018 Areca, Inc. All Rights Reserved.
Areca CLI, Version: 1.15.8, Arclib: 373, Date: May 29 2018( Linux )

 S  #   Name       Type             Interface
==================================================
[*] 1   ARC-1880   Raid Controller  PCI
==================================================

CMD     Description
==========================================================
main    Show Command Categories.
set     General Settings.
rsf     RaidSet Functions.
vsf     VolumeSet Functions.
disk    Physical Drive Functions.
sys     System Functions.
adsys   Advanced System Functions.
hddpwr  Hdd Power Management.
net     Ethernet Functions.
event   Event Functions.
hw      Hardware Monitor Functions.
mail    Mail Notification Functions.
snmp    SNMP Functions.
ntp     NTP Functions.
sef     Security Functions.
exit    Exit CLI.
==========================================================
CLI> event info
Date-Time            Device           Event Type            Elapsed Time Errors
===============================================================================
2024-01-26 18:06:42  H/W MONITOR      Raid Powered On
2024-01-26 17:55:36  RS232 Terminal   VT100 Log In
2024-01-26 17:55:13  H/W MONITOR      Raid Powered On
2024-01-26 17:49:57  H/W MONITOR      Raid Powered On
2024-01-26 17:38:07  Enc#2 SLOT 12    Device Removed
2024-01-26 17:36:59  Enc#2 SLOT 04    Device Removed
2024-01-26 17:36:44  Raid Set # 000   Offlined
2024-01-26 17:33:10  Enc#2 SLOT 02    Device Removed
2024-01-26 17:32:52  Raid Set # 000   Offlined
2024-01-26 17:32:21  SW API Interface API Log In
2024-01-26 17:30:43  Enc#2 SLOT 03    Device Removed
2024-01-26 17:28:50  Enc#2 SLOT 09    Device Removed
2024-01-26 17:28:07  Enc#2 SLOT 10    Device Removed
2024-01-26 17:27:32  Enc#2 SLOT 11    Device Removed
2024-01-26 17:24:41  Raid Set # 000   Offlined
2024-01-26 17:24:27  Raid Set # 000   Offlined
2024-01-26 17:24:19  SW API Interface API Log In
2024-01-26 17:14:10  H/W MONITOR      Raid Powered On
2024-01-26 17:11:55  Raid Set # 000   Offlined
2024-01-26 17:11:42  Raid Set # 000   Offlined
2024-01-26 17:11:27  RS232 Terminal   VT100 Log In
2024-01-26 17:11:15  H/W MONITOR      Raid Powered On
2024-01-26 15:48:45  H/W MONITOR      Raid Powered On
2024-01-26 15:43:23  H/W MONITOR      Raid Powered On
2024-01-26 15:34:18  H/W MONITOR      Raid Powered On
2024-01-26 15:30:44  H/W MONITOR      Raid Powered On
2024-01-26 15:28:18  RS232 Terminal   VT100 Log In
2024-01-26 15:27:58  H/W MONITOR      Raid Powered On
2024-01-26 15:26:51  Raid Set # 000   Rebuild RaidSet
2024-01-26 15:26:50  Enc#2 SLOT 03    Device Inserted
2024-01-26 15:24:48  Enc#2 SLOT 03    Device Removed
2024-01-26 15:24:48  Raid Set # 000   RaidSet Degraded
2024-01-26 15:24:48  ARC-1880-VOL#000 Volume Failed
2024-01-26 15:23:49  Raid Set # 000   Offlined
2024-01-26 15:23:37  Raid Set # 000   Offlined
2024-01-26 15:23:22  RS232 Terminal   VT100 Log In
2024-01-26 15:23:03  H/W MONITOR      Raid Powered On
2024-01-26 15:21:24  RS232 Terminal   VT100 Log In
2024-01-26 15:20:48  H/W MONITOR      Raid Powered On
2024-01-26 14:55:48  Raid Set # 000   RaidSet Degraded
2024-01-26 14:55:48  ARC-1880-VOL#000 Volume Failed
2024-01-26 14:55:48  Raid Set # 000   RaidSet Degraded
2024-01-26 14:55:48  ARC-1880-VOL#000 Volume Failed
2024-01-26 14:55:47  Raid Set # 000   RaidSet Degraded
2024-01-26 14:55:47  ARC-1880-VOL#000 Volume Failed
2024-01-26 14:55:47  Raid Set # 000   RaidSet Degraded
2024-01-26 14:55:47  ARC-1880-VOL#000 Volume Failed
2024-01-26 14:55:47  Raid Set # 000   RaidSet Degraded
2024-01-26 14:55:47  ARC-1880-VOL#000 Volume Degraded
2024-01-26 14:55:41  Raid Set # 000   RaidSet Degraded
2024-01-26 14:55:41  ARC-1880-VOL#000 Volume Failed
2024-01-26 14:55:41  Raid Set # 000   RaidSet Degraded
2024-01-26 14:55:41  ARC-1880-VOL#000 Volume Failed
2024-01-26 14:55:40  Raid Set # 000   RaidSet Degraded
2024-01-26 14:55:40  ARC-1880-VOL#000 Volume Failed
2024-01-26 14:55:40  Raid Set # 000   RaidSet Degraded
2024-01-26 14:55:40  ARC-1880-VOL#000 Volume Degraded
2024-01-26 14:55:40  Raid Set # 000   RaidSet Degraded
2024-01-26 14:55:40  ARC-1880-VOL#000 Volume Degraded
2024-01-26 14:55:34  Raid Set # 000   RaidSet Degraded
2024-01-26 14:55:34  ARC-1880-VOL#000 Volume Failed
2024-01-26 14:55:33  Raid Set # 000   RaidSet Degraded
2024-01-26 14:55:33  ARC-1880-VOL#000 Volume Failed
2024-01-26 14:55:33  Raid Set # 000   RaidSet Degraded
2024-01-26 14:55:33  ARC-1880-VOL#000 Volume Failed
2024-01-26 14:55:33  Raid Set # 000   RaidSet Degraded
2024-01-26 14:55:33  ARC-1880-VOL#000 Volume Failed
2024-01-26 14:55:26  Raid Set # 000   RaidSet Degraded
2024-01-26 14:55:26  ARC-1880-VOL#000 Volume Failed
2024-01-26 14:55:16  Raid Set # 000   RaidSet Degraded
2024-01-26 14:55:16  ARC-1880-VOL#000 Volume Failed
2024-01-26 14:55:16  Raid Set # 000   RaidSet Degraded
2024-01-26 14:55:16  ARC-1880-VOL#000 Volume Failed
2024-01-26 14:50:32  RS232 Terminal   VT100 Log In
2024-01-26 14:50:31  Enc#2 SLOT 04    Device Inserted
2024-01-26 14:50:28  Enc#2 SLOT 03    Device Inserted
2024-01-26 14:50:25  Enc#2 SLOT 02    Device Inserted
2024-01-26 14:50:15  Enc#2 SLOT 12    Device Inserted
2024-01-26 14:50:11  Enc#2 SLOT 11    Device Inserted
2024-01-26 14:50:07  Enc#2 SLOT 10    Device Inserted
2024-01-26 14:50:01  Enc#2 SLOT 09    Device Inserted
2024-01-25 22:56:53  Enc#2 SLOT 05    Device Inserted
2024-01-25 22:56:26  Enc#2 SLOT 03    Device Removed
2024-01-25 22:55:18  Enc#2 SLOT 04    Device Removed
2024-01-25 22:55:03  Enc#2 SLOT 02    Device Removed
2024-01-25 22:54:48  Enc#2 SLOT 01    Device Removed
2024-01-25 22:53:24  Enc#2 SLOT 12    Device Removed
2024-01-25 22:53:08  Enc#2 SLOT 11    Device Removed
2024-01-25 22:52:42  Enc#2 SLOT 10    Device Removed
2024-01-25 22:52:05  Enc#2 SLOT 09    Device Removed
2024-01-25 22:40:05  RS232 Terminal   VT100 Log In
2024-01-25 22:39:31  H/W MONITOR      Raid Powered On
2024-01-25 22:30:36  H/W MONITOR      Raid Powered On
2024-01-25 22:06:40  Enc#2 SLOT 02    Time Out Error
2024-01-25 22:06:12  Enc#2 SLOT 10    Time Out Error
2024-01-25 22:05:44  Enc#2 SLOT 10    Time Out Error
2024-01-25 22:05:17  H/W MONITOR      Raid Powered On
2024-01-25 21:52:33  Raid Set # 000   RaidSet Degraded
2024-01-25 21:52:33  ARC-1880-VOL#000 Volume Failed
2024-01-25 21:52:23  Enc#2 SLOT 11    Time Out Error
2024-01-25 21:52:23  Enc#2 SLOT 10    Time Out Error
2024-01-25 21:52:23  Enc#2 SLOT 09    Time Out Error
2024-01-25 21:52:23  Enc#2 SLOT 01    Time Out Error
2024-01-25 21:51:51  Enc#2 SLOT 09    Time Out Error
2024-01-25 21:51:50  Enc#2 SLOT 01    Time Out Error
2024-01-25 21:51:22  Enc#2 SLOT 09    Time Out Error
2024-01-25 21:50:54  Enc#2 SLOT 01    Time Out Error
2024-01-25 21:49:53  Enc#2 SLOT 01    Time Out Error
2024-01-25 21:49:20  Enc#2 SLOT 10    Time Out Error
2024-01-25 21:48:52  Enc#2 SLOT 09    Time Out Error
2024-01-25 21:48:25  H/W MONITOR      Raid Powered On
2024-01-25 21:37:35  SW API Interface API Log In
2024-01-25 21:17:14  Raid Set # 000   Offlined
2024-01-25 21:17:06  SW API Interface API Log In
2024-01-25 20:58:27  SW API Interface API Log In
2024-01-25 20:56:30  H/W MONITOR      Raid Powered On
2024-01-25 19:35:53  Enc#2 SLOT 10    Device Removed
2024-01-25 19:35:53  Enc#2 SLOT 01    Device Removed
2024-01-25 19:35:53  Enc#2 SLOT 11    Device Removed
2024-01-25 19:35:53  Enc#2 SLOT 09    Device Removed
2024-01-25 19:35:53  Enc#2 SLOT 10    Device Failed
2024-01-25 19:35:53  Enc#2 SLOT 08    Device Removed
2024-01-25 19:35:53  Enc#2 SLOT 07    Device Removed
2024-01-25 19:35:53  Enc#2 SLOT 06    Device Removed
2024-01-25 19:35:52  Raid Set # 000   RaidSet Degraded
2024-01-25 19:35:52  Raid Set # 000   RaidSet Degraded
2024-01-25 19:35:52  Raid Set # 000   RaidSet Degraded
2024-01-25 19:35:52  Enc#2 SLOT 04    Device Removed
2024-01-25 19:35:52  Enc#2 SLOT 03    Device Removed
2024-01-25 19:35:52  Enc#2 SLOT 02    Device Removed
2024-01-25 19:35:52  ARC-1880-VOL#000 Volume Failed
2024-01-25 19:35:52  Enc#2 SLOT 12    Device Removed
2024-01-25 19:35:52  ARC-1880-VOL#000 Volume Failed
2024-01-25 19:35:52  ARC-1880-VOL#000 Volume Failed
2024-01-25 19:35:51  Enclosure#2      Removed
2024-01-25 19:35:29  Raid Set # 000   RaidSet Degraded
2024-01-25 19:35:29  ARC-1880-VOL#000 Volume Failed
2024-01-25 19:35:06  Enc#2 SLOT 10    Time Out Error
2024-01-25 19:35:00  Enc#2 SES2Device Time Out Error
2024-01-25 19:34:05  Enc#2 SES2Device Time Out Error
2024-01-25 19:33:22  Enc#2 SES2Device Time Out Error
2024-01-25 19:33:09  Enc#2 SES2Device Time Out Error
2024-01-25 19:32:27  Enc#2 SES2Device Time Out Error
2024-01-25 19:32:15  Enc#2 SES2Device Time Out Error
2024-01-25 19:31:33  Enc#2 SES2Device Time Out Error
2024-01-25 19:31:31  SW API Interface API Log In
2024-01-25 19:31:20  Enc#2 SES2Device Time Out Error
2024-01-25 19:30:56  Enc#2 SLOT 12    Device Failed
2024-01-25 19:30:55  ARC-1880-VOL#000 Abort Rebuilding      000:10:17
2024-01-25 19:30:55  Raid Set # 000   RaidSet Degraded
2024-01-25 19:30:55  ARC-1880-VOL#000 Volume Degraded
2024-01-25 19:30:46  Enc#2 SLOT 12    Time Out Error
2024-01-25 19:29:45  Enc#2 SLOT 11    Time Out Error
2024-01-25 19:29:37  Enc#2 SLOT 04    Device Failed
2024-01-25 19:29:14  Raid Set # 000   RaidSet Degraded
2024-01-25 19:29:14  ARC-1880-VOL#000 Volume Degraded
2024-01-25 19:29:05  Enc#2 SLOT 10    Time Out Error
2024-01-25 19:29:05  Enc#2 SLOT 09    Time Out Error
2024-01-25 19:29:05  Enc#2 SLOT 04    Time Out Error
2024-01-25 19:29:05  Enc#2 SLOT 01    Time Out Error
2024-01-25 19:28:55  Enc#2 SLOT 10    Time Out Error
2024-01-25 19:28:55  Enc#2 SLOT 04    Time Out Error
2024-01-25 19:28:55  Enc#2 SLOT 01    Time Out Error
2024-01-25 19:28:45  Enc#2 SLOT 04    Time Out Error
2024-01-25 19:28:35  Enc#2 SLOT 10    Time Out Error
2024-01-25 19:28:26  Enc#2 SLOT 10    Time Out Error
2024-01-25 19:28:26  Enc#2 SLOT 09    Time Out Error
2024-01-25 19:27:43  Enc#2 SLOT 10    Time Out Error
2024-01-25 19:27:43  Enc#2 SLOT 09    Time Out Error
2024-01-25 19:27:43  Enc#2 SLOT 04    Time Out Error
2024-01-25 19:27:25  Enc#2 SLOT 11    Time Out Error
2024-01-25 19:27:16  Enc#2 SLOT 11    Time Out Error
2024-01-25 19:27:16  Enc#2 SLOT 10    Time Out Error
2024-01-25 19:27:16  Enc#2 SLOT 04    Time Out Error
2024-01-25 19:26:49  Enc#2 SLOT 11    Time Out Error
2024-01-25 19:26:40  Enc#2 SLOT 11    Time Out Error
2024-01-25 19:26:40  Enc#2 SLOT 10    Time Out Error
2024-01-25 19:26:31  Enc#2 SLOT 11    Time Out Error
2024-01-25 19:26:21  Enc#2 SLOT 04    Time Out Error
2024-01-25 19:26:12  Enc#2 SLOT 01    Time Out Error
2024-01-25 19:25:41  Enc#2 SLOT 01    Time Out Error
2024-01-25 19:25:32  Enc#2 SLOT 11    Time Out Error
2024-01-25 19:24:57  Enc#2 SLOT 10    Time Out Error
2024-01-25 19:24:47  Enc#2 SLOT 01    Time Out Error
2024-01-25 19:24:21  Enc#2 SLOT 09    Time Out Error
2024-01-25 19:24:12  Enc#2 SLOT 01    Time Out Error
2024-01-25 19:24:02  Enc#2 SLOT 12    Time Out Error
2024-01-25 19:23:44  Enc#2 SLOT 11    Time Out Error
2024-01-25 19:23:44  Enc#2 SLOT 04    Time Out Error
2024-01-25 19:23:35  Enc#2 SLOT 04    Time Out Error
2024-01-25 19:23:25  Enc#2 SLOT 04    Time Out Error
2024-01-25 19:23:16  Enc#2 SLOT 01    Time Out Error
2024-01-25 19:23:06  Enc#2 SLOT 04    Time Out Error
2024-01-25 19:23:06  Enc#2 SLOT 01    Time Out Error
2024-01-25 19:22:31  Enc#2 SLOT 12    Time Out Error
2024-01-25 19:21:47  Enc#2 SLOT 12    Time Out Error
2024-01-25 19:21:38  Enc#2 SLOT 01    Time Out Error
2024-01-25 19:21:29  Enc#2 SLOT 04    Time Out Error
2024-01-25 19:20:38  ARC-1880-VOL#000 Start Rebuilding
2024-01-25 19:20:28  Raid Set # 000   Rebuild RaidSet
2024-01-25 19:20:22  SW API Interface API Log In
2024-01-25 19:08:44  SW API Interface API Log In
2024-01-25 19:03:20  H/W MONITOR      Raid Powered On
2024-01-25 19:03:22  Enc#2 SES2Device Time Out Error
2024-01-25 19:02:26  Enc#2 SES2Device Time Out Error
2024-01-25 19:01:31  Enc#2 SES2Device Time Out Error
2024-01-25 19:00:36  Enc#2 SES2Device Time Out Error
2024-01-25 19:00:10  Enc#2 SLOT 11    Time Out Error
2024-01-25 18:59:58  Enc#2 SLOT 10    Time Out Error
2024-01-25 18:59:43  Enc#2 SLOT 09    Time Out Error
2024-01-25 18:59:37  Enc#2 SES2Device Time Out Error
2024-01-25 18:59:26  Enc#2 SLOT 04    Time Out Error
2024-01-25 18:59:14  Enc#2 SLOT 01    Time Out Error
2024-01-25 18:58:39  Enc#2 SES2Device Time Out Error
2024-01-25 18:57:44  Enc#2 SES2Device Time Out Error
2024-01-25 18:56:48  Enc#2 SES2Device Time Out Error
2024-01-25 18:55:54  Enc#2 SES2Device Time Out Error
2024-01-25 18:54:59  Enc#2 SES2Device Time Out Error
2024-01-25 18:54:04  Enc#2 SES2Device Time Out Error
2024-01-25 18:53:09  Enc#2 SES2Device Time Out Error
2024-01-25 18:52:14  Enc#2 SES2Device Time Out Error
2024-01-25 18:51:19  Enc#2 SES2Device Time Out Error
2024-01-25 18:50:24  Enc#2 SES2Device Time Out Error
2024-01-25 18:49:42  Enc#2 SES2Device Time Out Error
2024-01-25 18:49:29  Enc#2 SES2Device Time Out Error
2024-01-25 18:48:47  Enc#2 SES2Device Time Out Error
2024-01-25 18:48:34  Enc#2 SES2Device Time Out Error
2024-01-25 18:48:22  SW API Interface API Log In
2024-01-25 18:47:52  Enc#2 SES2Device Time Out Error
2024-01-25 18:47:39  Enc#2 SES2Device Time Out Error
2024-01-25 18:47:26  Enc#2 SLOT 12    Device Failed
2024-01-25 18:47:25  ARC-1880-VOL#000 Abort Rebuilding      000:25:39
2024-01-25 18:47:25  Raid Set # 000   RaidSet Degraded
2024-01-25 18:47:25  ARC-1880-VOL#000 Volume Degraded
2024-01-25 18:47:16  Enc#2 SLOT 12    Time Out Error
2024-01-25 18:47:06  Enc#2 SLOT 10    Time Out Error
2024-01-25 18:47:06  Enc#2 SLOT 09    Time Out Error
2024-01-25 18:46:56  Enc#2 SLOT 09    Time Out Error
2024-01-25 18:46:56  Enc#2 SLOT 01    Time Out Error
2024-01-25 18:46:46  Enc#2 SLOT 04    Time Out Error
2024-01-25 18:46:37  Enc#2 SLOT 01    Time Out Error
2024-01-25 18:46:10  Enc#2 SLOT 10    Time Out Error
2024-01-25 18:45:52  Enc#2 SLOT 09    Time Out Error
2024-01-25 18:45:52  Enc#2 SLOT 01    Time Out Error
2024-01-25 18:45:17  Enc#2 SLOT 10    Time Out Error
===============================================================================

Please let me know if there is anything else I can grab for you.
 
Hello folks - still looking for guidance here. Anyone have any suggestions?
I'm not positive on the steps to recover the R6 array.
 
Unfortunately, based upon your log you have additional serious issues going on with your array. You have whole-array drive drops and timeouts which, together with the failed drives, have left your array in an unknown state. My first question is how important the data on these arrays is (if it is very important, there are some things we can try). In any case, before you do anything you need to make bit-perfect copies of each of the array members (one by one, each taken out of and returned to its exact position in the array). Once that is done, you have a baseline you can go back to if one (or more) attempts do not work. Then we can try onlining the array and see if anything is there. We can force-rewrite the signatures to the drives if you can successfully bring up one of them. I can't tell from the log whether your drives failed due to actual drive failure, a communications failure (bad cable), or some other reason; once you have the image copies (if you can make them), we can start to investigate each drive.
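To illustrate what a bit-perfect copy means here (this is only a rough sketch, not a recommendation of a specific tool): a sector-level image reads the raw device from start to end and writes it to an image file, something along these lines. The \\.\PhysicalDrive number and the destination path are placeholders, it must run as Administrator, and the source disk should not be mounted or in use:

Code:
# Rough illustration of a sector-level (bit-for-bit) image on Windows.
# Assumptions: Python 3, run as Administrator, the member disk is
# \\.\PhysicalDrive5, and the destination has room for a full-size image.
SOURCE = r"\\.\PhysicalDrive5"      # placeholder: pick the correct member disk
DEST   = r"E:\images\member5.img"   # placeholder: destination image file
CHUNK  = 1024 * 1024                # 1 MiB reads, a multiple of the sector size

with open(SOURCE, "rb", buffering=0) as src, open(DEST, "wb") as dst:
    copied = 0
    while True:
        try:
            block = src.read(CHUNK)
        except OSError:
            # End of device or an unreadable region; a dedicated imaging
            # tool handles device length and bad sectors far more gracefully.
            break
        if not block:
            break
        dst.write(block)
        copied += len(block)
print(f"Copied {copied} bytes from {SOURCE}")

A dedicated imaging tool is still the better choice for drives that are already throwing timeouts, since it can retry and map out bad sectors instead of just stopping.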
 
mwroobel, Thank you for getting back to me!
This array contains nothing that I couldn't live without; however, it would be a terrible loss if I can't recover it: family photos, all my music digitized in FLAC, dumps of the webpages I previously designed, etc.
It is worth my time to try to recover these files, but if in the end it's a no-go, only the wife and I will be upset.
That said, if you're willing and able to provide instructions, I'll be happy to follow them (word for word) in the hope that we'll be successful in the end. Just PM me your recommendation for a copy tool and I'll get rolling on creating images of the member disks.
 
Hi, long time Areca owner here, 12+ years, but first time poster.
Got a 1222-8 internal and two external boxes for backup, one on-site, one off-site.

My normally ultra-reliable old ARC-1222-8 has started dropping offline. It starts up fine and works for a while, but then the RAID array 'R' drops out after ~30 minutes (or goes to sleep, I'm not sure which, though it's set to never sleep / never spin down / never low RPM). Do I have something set wrong that stops it staying awake?

It seems to stay online if I keep using it, but if I pause to do other stuff, like email, for a while, it will drop offline and stay out until I reboot.

Is there anything I can do, or is it time for a new card?
I have an ARC-1223-8 sitting here that I bought as a used spare, but... do you think I should upgrade to something more recent?
 
Anyone used the "classic" ARC-6120 battery backup unit for the 11xx/12xx/1680/1880 series with an 1882 card?
Do we have a pinout for the BBU cable?
From my understanding it won't work. PCB has to be v2.1 (model number ending in 21) and not the V1.3 from the other batteries.
 
Hi everybody,

My old geek server has given up (the power supply died), and I think it's time to renew the hardware.
The only things I will keep are the case and the Areca 1880ix-24.

I'm a plug-and-play guy now, and the EVGA SR-2 with its Xeons consumes a little too much energy just to serve a file share.

What would be a good current platform to keep using this Areca 1880ix-24 card?
Can I just pick the first motherboard with a PCIe x4 slot and go ahead, or do you have any recommendation for something that works great?
I have seen some annoying UEFI stuff mentioned...
I'm thinking about future-proofing, maybe something compatible with more bandwidth later on (more than 1Gb).

Note: I'm definitely a Windows guy.


I have another question: have we hit a capacity limit with this card? Can it host 24 x 20TB drives, for example?

Thanks for any advice you can give me.
 
Supermicro X10 series with dual E5 2670 v3 Xeon CPU. 24 cores/48 threads. Works great and available for cheap!
 
Hi Cpufrost,

Thanks for your really quick answer.
But no, no, no: 120W for each processor just to run a file share is really too much; I have already played enough with the SR-2.

What about something like a Gigabyte B550, which has an ECC option in the BIOS, paired with a Ryzen PRO 3000G-series CPU that supports ECC?

For about 160€ (at first look), I get a motherboard and CPU that consume around 35 watts and are ECC compatible.

I think it could be a good starting point, and then I can see what to improve... (brand, features: IPMI, 10Gb support...)

Has anybody here tried that with the Areca 1880?

Any advice?
 
Not the same of course, but I had my Areca ARC-1883ix-16-2G running on an X370 motherboard with a 1700X for many years; a few years back I upgraded to a B550 with a 5600X. It's actually on a riser cable now, out of the first PCIe x16 slot, because the new case is only half-height in the motherboard area. I didn't use ECC RAM, but I don't see how that would be a problem for you or the 1880. I have a Mellanox fiber card giving it 10Gbit to my main PC, and just a simple low-profile GT 710 for video out.

I run Windows 10, and the 1880 should be supported via the drivers available through Areca's website under discontinued products. If a 3000G gives you enough CPU for what the system needs, I'd say it will work very nicely! My system draws quite a lot of power, but the majority of that is the 24 drives rather than the CPU.
 
I've never tried it on a newer non-server board.
Server 2022 drivers work.
Even though each CPU has up to a 120W TDP, a dual board isn't going to use that much power, roughly half of an SR-2. Of course, the latter was never designed with power efficiency in mind, and when you have dual X5690s at 4.66GHz with SpeedStep disabled, the power draw is formidable!
 
OK, so from your message I understand that the 1880ix should be compatible with current hardware; that was my main concern.


On the power consumption I agree; that's why I would like to benefit from the advances in this area with a new platform. :)
 
Hi everyone,

I'm in need of help/advice once more.

I have an ARC-1882ix-24 controller with multiple SAS drives attached to it: 14TB WUH721414AL5204 (SKU# 0F31052) and 10TB HUH721010AL5200 (SKU# 0F27352)
They've all been working flawlessly for years, connected through Norco SC-H500 hot-swap drive bays via the controller's original SFF-8087 to 4x SATA breakout cables that came with it.
The controller is running on the last firmware released: v1.56 (2021-01-12).

I have recently purchased some of the newer 22TB WUH722222AL5204 drives (SKU# 0F48052), and when I tried connecting just one of them, it was not recognized at all by the controller.
It doesn't appear listed together with all the other drives in the System Console, so the controller is not identifying it at all... as if it doesn't even exist.
Here's a screenshot so that people understand what I mean:
[screenshot of the System Console attached]

I also checked directly from the controller's BIOS screens, before Windows even loads. Same thing: it's not listed there either.

I should mention that the sole drive I tried connecting was sold as brand new but not factory sealed. It was supposedly tested by the supplier before being resold.
But before I go tearing the factory seal off one of the other drives, I wanted to ask here if anyone knows of any reason why these drives wouldn't be identified by the controller.
I thought incompatibility issues were rare with SAS drives; it's the main reason I chose this interface in the first place.

I'm open to any suggestions, though I'll admit my main fear is a software incompatibility issue... easily fixable with a firmware update, but that's not going to happen since the controller is discontinued.

As a side note, is Areca's contact form even working?
Every time I try submitting it, it says the captcha code is incorrect.
Is their old address [email protected] still up and running? Is anyone on their side answering if you send something?
 
Hi everyone,

I'm in need of help/advice once more.

I have an ARC-1882ix-24 controller with multiple SAS drives attached to it: 14TB WUH721414AL5204 (SKU# 0F31052) and 10TB HUH721010AL5200 (SKU# 0F27352)
They're all working flawlessly for years, connected using Norco SC-H500 hot swap drive bays and via the controller's original SFF-8087 to 4xSATA breakout cables that came with it.
The controller is running on the last firmware released: v1.56 (2021-01-12).

I have recently purchased some of the newer 22TB WUH722222AL5204 (SKU# 0F48052) and while I've tried connecting just one of them, it's not being recognized at all by the controller.
It doesn't appear listed together with all the other drives in the System Console, so the controller is not identifying it at all... as if it doesn't even exist.
Here's a screenshot so that people understand what I mean:
[screenshot: System Console drive list, with the new 22TB drive absent]

I also checked directly from the controller's BIOS screens, before Windows even loads. Same thing, it's not listed there either.

I should mention that the sole drive I tried connecting was sold as brand new but not factory sealed. It was supposedly tested by the supplier before being resold.
But before I go tearing the factory seal off one of the other drives, I wanted to ask here if anyone knows any reason why these drives wouldn't be identified by the controller.
I thought incompatibility issues were rare with SAS drives; it's the main reason I chose this interface in the first place.

I'm open to any suggestions, though I'll admit my main fear is a software incompatibility issue... one that would be easily fixable with a firmware update, except that's not going to happen now that the controller is discontinued.

As a side note, is Areca's contact form even working?
Every time I try submitting it, it says the captcha code is incorrect.
Is their old address [email protected] still up and running? Does anyone from their side answer if you send something?
Of course, it could be a bad drive. Try another. If that isn't the issue, then unfortunately you have been introduced to enterprise-grade HCL issues. The drive model in question is NOT on the Areca Drive Compatibility List. It is POSSIBLE that it isn't compatible; you need to increase your sample size before we can point in either direction.
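While you're swapping drives, it can also help to confirm from the OS side what the controller actually enumerates. A minimal sketch using the Areca CLI, assuming it is installed and that the drv= parameter syntax matches your CLI version (the parenthetical notes are mine, not CLI output):

Code:
CLI> disk info
      (lists every drive the controller can see, with slot, model and capacity;
       a drive that is powered but missing here usually points to a link or
       cabling problem rather than an HCL problem)
CLI> disk info drv=1
      (per-drive details such as model and firmware revision; drv=1 is just an
       example slot number)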
 
This is going to sound embarrassing... but I tried again and it worked.
Same cable, same port, same enclosure, same drive.

The only reasons I can think of for why it didn't work the first time:
- either I didn't properly insert the SFF-8087 cable into the controller's port,
- or I did insert it, but the controller's port had gathered too much dust over the years and just didn't establish the connection.
The drive was definitely powered on, its power LED was lit... so the enclosure's power cables were properly connected.
But I do recall now not seeing the drive's activity LED blinking... so the problem was the connection with the controller.

Whatever it was, I'm just happy it all works. The array is initializing as we speak.
So it looks like incompatibility issues are still rare with SAS drives :)
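For keeping an eye on the initialization, the Areca CLI can report the volume state. A minimal sketch, assuming the CLI is installed (the exact column layout varies by firmware, and the notes in parentheses are mine):

Code:
CLI> vsf info
      (each volume set is listed with its state, e.g. Normal or Initializing
       with a percentage, so you can follow the progress)
CLI> rsf info
      (shows the health of the underlying raid sets, Normal/Degraded, as in the
       earlier posts)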

And as a side note, the old e-mail address [email protected] is still up and running.
I did send an e-mail a couple of days ago and got a reply.
 
So it looks like incompatibility issues are still rare with SAS drives :)
Congrats on everything working. I SO wish that the quote above were true; unfortunately, years and years of gremlins have proven it is not the case if you use something NOT on the HCL (hell, even using things ON the HCL doesn't guarantee anything if you have more than one manufacturer in the mix).
 
Congrats on everything working. I SO wish that the quote above were true; unfortunately, years and years of gremlins have proven it is not the case if you use something NOT on the HCL (hell, even using things ON the HCL doesn't guarantee anything if you have more than one manufacturer in the mix).
I'm sorry to hear that 😞
I'll admit I haven't tried mixing things up with different brands.
I initially started this journey 7 years ago, when I purchased the controller used; the guy who sold it to me said it was pretty much brand new, stored in its box as a backup and only gathering dust.
And I bought HGST because I knew they were rock solid and some of the most reliable drives on the market.
One RAID6 array consisting of 7 x 10TB HUH721010AL5200 (SKU# 0F27352) was built then and it's been working flawlessly for the past 7 years.
4 years ago I added another RAID6 array consisting of 8 x 14TB WUH721414AL5204 (SKU# 0F31052), which is also working flawlessly.
The drives in the smaller RAID6 array, 5 x 4TB HDN724040ALE640 (SKU# 0S03665) and HDS724040ALE640, are even older (SATA) drives that I had prior to the controller.
All of the above have been running 24/7 and I haven't had one single hiccup, but maybe I've just been lucky.

And now I've added 9 x 22TB WUH722222AL5204 (SKU# 0F48052), after retiring the old 4TB SATA drives, and I can only hope they will work just as well for many years.
 
Hey. I'm the lucky owner of an ARC-1882ix-24. After upgrading my server with a new motherboard/CPU and trying to boot it, it starts up into Unix and everything like it should, but the BIOS setup option for the controller isn't showing up. Any suggestions on what's wrong or what I've been doing wrong? The motherboard is an ASUS Prime B660M-K D4. When I just let the server boot into Unix I can see the partitions from the drives on my controller, but after some minutes it starts to beep.
 
Hey. I'm the lucky owner of an ARC-1882ix-24. After upgrading my server with a new motherboard/CPU and trying to boot it, it starts up into Unix and everything like it should, but the BIOS setup option for the controller isn't showing up. Any suggestions on what's wrong or what I've been doing wrong? The motherboard is an ASUS Prime B660M-K D4. When I just let the server boot into Unix I can see the partitions from the drives on my controller, but after some minutes it starts to beep.
Welcome to enterprise hardware on consumer/prosumer boards. Things to try: change the PCIe slot of the HBA, and see if your BIOS/UEFI has options for some/any of the following: Legacy PCIe Mode, Legacy INT13 Mode, Show BIOS Message, or things similar to that (and toggle the choices one at a time). It is also possible that no matter what you do, nothing will show the BIOS POST from your HBA. If that is the case and you can't get into the McBIOS, you can always install the Areca utilities (ARCHTTP/MRAID), which will let you set the same options, or connect via the Ethernet port and you will have the same options. What are the 4 versions of the software on your card (BOOT, BIOS, FIRM, & MBR)?
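If the POST banner never appears, one way to read version strings without it is from the OS. A minimal sketch, assuming the Areca CLI (or ArcHTTP) is already installed; I'm not certain all four strings are exposed this way on that card:

Code:
CLI> sys info
      (should report at least the Firmware Version and BOOT ROM Version; the
       ArcHTTP / McRAID web GUI shows the same data on its System Information
       page)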
 
Hi, does anyone have any experience with the "Areca ARC 8060 1.1 Raid Controller ARC8200 Areca REV-C SAS Expansion Module"? What is it used for? It looks like it has integrated SFP slots. I looked for a manual or product info, but was unable to find any…
 
Hi, does anyone have any experience with the "Areca ARC 8060 1.1 Raid Controller ARC8200 Areca REV-C SAS Expansion Module"? What is it used for? It looks like it has integrated SFP slots. I looked for a manual or product info, but was unable to find any…
If you are referring to the 8068, there are no SFP ports. It has 2 SFF-8088 SAS channels, an SFF-8088 SAS passthrough (as well as the internal SFF-8087 connectors), a serial terminal adapter and an xBaseT Ethernet port. It is a SAS RAID adapter/expander.
 
If you are referring to the 8068, there are no SFP ports. It has 2 SFF-8088 SAS channels, an SFF-8088 SAS passthrough (as well as the internal SFF-8087 connectors), a serial terminal adapter and an xBaseT Ethernet port. It is a SAS RAID adapter/expander.
Hi, I was referring to this item on eBay:

https://www.ebay.com/itm/145814611612

It would be great if I could access the array via an IP address on the local network… but I don't see how that'd be possible. So I'm trying to understand what this device does.
 