Need help restoring Areca Raidset and Volume

Discussion in 'SSDs & Data Storage' started by frier, Jan 18, 2019.

  1. frier

    frier n00bie

    Messages:
    9
    Joined:
    Jan 18, 2019
    I have an Areca ARC-1210 controller with four drives in RAID 5. I do not know the original drive order, but I do have a screenshot of the original settings. Windows 7 lives on a different drive.

    One of my drives had failed, so I decided to replace all of the drives. I have been hot-swapping them one at a time over the last three days, waiting for each rebuild to complete before swapping the next.

    Once all of the drives were swapped, I selected "Check Volume Set", but after realizing that it was going to take a long time I selected "Stop Volume Check", rebooted, and started booting into Windows 7.
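
    As an aside, the same check can be started and stopped from Areca's command-line tool as well as from the BIOS/web UI. A rough sketch, assuming the cli64 utility and volume set #1; I am going from memory on the exact parameters, so check "help vsf" on your CLI version:

        cli64 vsf info           (list volume sets and their current state)
        cli64 vsf check vol=1    (start a consistency check on volume set 1)
        cli64 vsf stopcheck      (abort a running check)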

    Upon booting into Windows 7, an automatic chkdsk pass started reporting "deleting orphan file record segment" and deleted a whole bunch of records, to the point where files are definitely missing.
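
    To keep that from happening again while I sort this out, I plan to exclude the volume from the automatic boot-time check. A minimal sketch using the standard chkntfs tool, assuming the array shows up as D::

        chkntfs /X D:    (exclude D: from the automatic boot-time check)
        chkntfs /D       (restore the default behavior for all drives later)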

    I then switched back to the original four drives. The RAID set was not automatically detected, so I tried "RESCUE" with no luck, and then "LeVeL2ReScUe" with some luck: it shows "3/4 Disks: Incomplete". (Does this simply mean the one drive is failed, or that I am missing some data? Unfortunately I do not remember the original status message from when the one drive failed, so I cannot compare.) So I did "SIGNAT" and rebooted... and it no longer displays the RAID set upon reboot. I can run "LeVeL2ReScUe" again and it does the same thing all over again.
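
    In case it helps diagnosis, I will dump the controller state between rescue attempts so the results can be compared. A sketch of what I plan to capture, assuming the cli64 utility (the same screens exist in the VT100 menus):

        cli64 rsf info      (RAID set name, state, and member count)
        cli64 vsf info      (volume set state, if it reappears)
        cli64 disk info     (per-slot drive model and usage)
        cli64 event info    (controller event log)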

    The volume set does not reappear after any of the rescue attempts.

    At this point, my main objective is to get the data visible in Windows so I can back it all up.

    I do have backups but they are remote and will take weeks to recover.
     
  2. frier

    frier n00bie

    Messages:
    9
    Joined:
    Jan 18, 2019
    The failed drive, which had been listed as "Failed" (iirc), is now listed as "Free". But it must be unreadable, since when I run "LeVeL2ReScUe" it is not included in the RAID set.
     
  3. Joust

    Joust [H]ard|Gawd

    Messages:
    1,957
    Joined:
    Nov 30, 2017
    I am sure someone here knows. I am not that guy - I have worked exclusively with ZFS stuff.

    Not very helpful, I'm afraid.
     
  4. frier

    frier n00bie

    Messages:
    9
    Joined:
    Jan 18, 2019
    No worries. Going forward I am looking into a better way of handling these drives...
     
  5. bigddybn

    bigddybn [H]ardness Supreme

    Messages:
    6,807
    Joined:
    Nov 21, 2006
    And by a better way you should really mean a backup plan. RAID isn't it.
     
  6. Joust

    Joust [H]ard|Gawd

    Messages:
    1,957
    Joined:
    Nov 30, 2017
    Yes, yes. We all know RAID isn't a backup. He has an off-site backup; he just doesn't want to use it because it's slow. That's a thing.
     
  7. frier

    frier n00bie

    Messages:
    9
    Joined:
    Jan 18, 2019
    Exactly. I should have had a local backup, but I didn't. Lesson learned.

    I am now looking into Unraid going forward, using it as a NAS to store all of the files for my Windows-based server.
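
    The idea is that the Windows server would just map the Unraid share. A minimal sketch, assuming a share named "backup" on Unraid's default hostname "tower":

        net use Z: \\tower\backup /persistent:yes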
     
  8. mwroobel

    mwroobel [H]ardness Supreme

    Messages:
    4,865
    Joined:
    Jul 24, 2008
    Please post a COMPLETE log from the card. Do you know the original positions of all the drives in the array? Did the original drive that failed actually fail for cause or did it just drop out of the array on a timeout?
     
  9. frier

    frier n00bie

    Messages:
    9
    Joined:
    Jan 18, 2019
    The log is as follows. It appears to have been reset: when I started troubleshooting, the only events were from reboots, plus the drive failure a year ago.

    System Events Information
    Time Device Event Type Elapse Time Errors
    2019-01-19 17:08:37 Proxy Or Inband HTTP Log In
    2019-01-19 17:07:03 H/W Monitor Raid Powered On
    2019-01-18 14:20:28 IDE Channel 1 Device Removed
    2019-01-18 14:18:32 RS232 Terminal VT100 Log In
    2019-01-18 14:18:20 H/W Monitor Raid Powered On
    2019-01-18 14:17:19 IDE Channel 4 Device Removed
    2019-01-18 14:17:18 IDE Channel 3 Device Removed
    2019-01-18 14:17:16 IDE Channel 2 Device Removed
    2019-01-18 14:16:58 IDE Channel 1 Device Inserted
    2019-01-18 14:16:54 IDE Channel 2 Device Inserted
    2019-01-18 14:16:50 IDE Channel 1 Device Removed
    2019-01-18 14:16:46 IDE Channel 2 Device Removed
    2019-01-18 14:16:15 RS232 Terminal VT100 Log In
    2019-01-18 13:53:07 RS232 Terminal VT100 Log In
    2019-01-18 13:49:31 Incomplete RAID Discovered
    2019-01-18 13:49:31 H/W Monitor Raid Powered On
    2019-01-18 13:47:38 IDE Channel 2 Device Inserted
    2019-01-18 13:47:34 IDE Channel 2 Device Removed
    2019-01-18 13:44:57 Proxy Or Inband HTTP Log In
    2019-01-18 13:43:33 Incomplete RAID Discovered
    2019-01-18 13:43:33 H/W Monitor Raid Powered On
    2019-01-18 13:42:43 RS232 Terminal VT100 Log In
    2019-01-18 13:42:18 H/W Monitor Raid Powered On
    2019-01-18 13:37:06 RS232 Terminal VT100 Log In
    2019-01-18 13:34:48 RS232 Terminal VT100 Log In
    2019-01-18 13:34:37 H/W Monitor Raid Powered On
    2019-01-18 13:33:54 RS232 Terminal VT100 Log In
    2019-01-18 13:33:35 Incomplete RAID Discovered
    2019-01-18 13:33:35 H/W Monitor Raid Powered On
    2019-01-18 13:32:28 RS232 Terminal VT100 Log In
    2019-01-18 13:32:16 H/W Monitor Raid Powered On
    2019-01-18 13:28:37 RS232 Terminal VT100 Log In
    2019-01-18 13:19:18 RS232 Terminal VT100 Log In
    2019-01-18 13:16:24 RS232 Terminal VT100 Log In
    2019-01-18 13:16:15 H/W Monitor Raid Powered On
    2019-01-18 13:13:09 RS232 Terminal VT100 Log In
    2019-01-18 13:12:57 Incomplete RAID Discovered
    2019-01-18 13:12:57 H/W Monitor Raid Powered On
    2019-01-18 13:12:04 RS232 Terminal VT100 Log In
    2019-01-18 13:11:55 H/W Monitor Raid Powered On
    2019-01-18 13:11:11 IDE Channel 2 Device Inserted
    2019-01-18 13:10:40 IDE Channel 2 Device Removed
    2019-01-18 13:09:34 RS232 Terminal VT100 Log In
    2019-01-18 13:09:22 Incomplete RAID Discovered
    2019-01-18 13:09:22 H/W Monitor Raid Powered On
    2019-01-18 13:08:15 RS232 Terminal VT100 Log In
    2019-01-18 13:08:05 H/W Monitor Raid Powered On
    2019-01-18 13:02:34 H/W Monitor Raid Powered On
    2019-01-18 13:00:46 IDE Channel 2 Device Removed
    2019-01-18 13:00:45 IDE Channel 1 Device Removed
    2019-01-18 13:00:43 IDE Channel 3 Device Removed
    2019-01-18 13:00:33 RS232 Terminal VT100 Log In
    2019-01-18 12:25:26 IDE Channel 4 Device Removed
    2019-01-18 12:19:52 RS232 Terminal VT100 Log In
    2019-01-18 12:19:43 Incomplete RAID Discovered
    2019-01-18 12:19:43 H/W Monitor Raid Powered On
    2019-01-18 12:15:26 IDE Channel 4 Device Inserted
    2019-01-18 12:14:53 IDE Channel 3 Device Inserted
    2019-01-18 12:14:46 IDE Channel 4 Device Removed
    2019-01-18 12:14:45 IDE Channel 3 Device Removed
    2019-01-18 12:14:08 RS232 Terminal VT100 Log In
    2019-01-18 12:13:59 Incomplete RAID Discovered
    2019-01-18 12:13:59 H/W Monitor Raid Powered On
    2019-01-18 12:12:27 RS232 Terminal VT100 Log In

    No, I do not know the original positions of the drives.

    I haven't done any testing on it, but I suspect it failed due to heat, since it was positioned with no airflow (I've learned a lot since I built this system nearly 10 years ago). I had rebooted the system multiple times before replacing the drive and it never rejoined the array (I didn't actually realize it had failed until recently).
     
  10. frier

    frier n00bie

    Messages:
    9
    Joined:
    Jan 18, 2019
    I am implementing new solutions as we speak, such as better documentation of drive positions, saving logs out more often, etc.
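
    For the logs, I am thinking of a scheduled dump via the Areca CLI rather than copying them out by hand. A rough sketch, assuming cli64.exe lives in C:\areca and accepts one-shot commands (the task name and paths are placeholders):

        schtasks /create /tn "ArecaEventLog" /sc daily /st 03:00 /tr "cmd /c C:\areca\cli64.exe event info >> C:\areca\events.log"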