Areca ARC-8050T - DrivePulse shows damaged areas, consistency check shows zero errors

czohoori · n00b · Joined Jan 16, 2019 · Messages: 3
Hey all,
I've got an Areca ARC-8050T 6-bay (less than 6 months old) with 5 Seagate Ironwolf drives in a RAID 5 array. It's been performing great. However, I've been running Drive Genius from Prosoft, which monitors drive health in the background. It recently started giving error messages showing bad blocks on the ARC (see error log below), including a critical error message recommending backup and drive replacement.

[Attached screenshot: Screen Shot 2019-01-16 at 10.47.18 AM.png]


I got scared and decided to run a full consistency check on the array from the Areca RAID manager console, with the 'scrub bad blocks' and 'recompute parity' options unchecked. After the check completed, it reported 0 errors (see event log below).

[Attached screenshot: Screen Shot 2019-01-16 at 10.48.54 AM.png]


Should I still be worried? Would Drive Genius be showing errors that the Areca check is somehow missing? Should I trust the Areca results and move on? Or am I misinterpreting these conflicting results somehow?

This is the first Areca array I've used and I'm fairly inexperienced in troubleshooting RAID arrays in general, so any input from more experienced folks would be much appreciated! Let me know if I can provide any more info.

I did order a spare matching drive I could add to the array, either as a hot spare or by converting it to a RAID 6. My original plan had been to expand the size of the RAID 5 at some point, but this experience is making me think more fail-safe measures may be a smarter route!
 
Drive Genius can't see past the Areca card into the individual attached drives, which is why it can't accurately report problems on them. I don't know this particular chassis; does this Areca give you the option to query the drives for their SMART info? Please post a complete log from the card. As for RAID levels, stay far away from RAID5 with multi-terabyte drives: the chance of a second drive failing (or some other error occurring) during a rebuild is high enough that you're far better off with double parity.
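To put rough numbers on that (purely illustrative assumptions here, not your exact drives or their URE rating), here's a sketch of the usual back-of-the-envelope estimate:

```python
# Rough illustration of why RAID5 rebuilds on big drives are risky: a rebuild
# has to read every surviving drive end to end, so the chance of hitting at
# least one unrecoverable read error (URE) grows with the total bits read.
# Drive count, size, and the 1e-14 URE rate below are placeholder assumptions.

def rebuild_ure_probability(surviving_drives, drive_tb, ure_per_bit=1e-14):
    """P(at least one URE) if every surviving drive is read once, errors independent."""
    bits_read = surviving_drives * drive_tb * 1e12 * 8   # TB -> bits
    return 1.0 - (1.0 - ure_per_bit) ** bits_read

# Example: a 5-drive RAID5 loses one drive and the remaining four 8 TB drives are read
print(f"{rebuild_ure_probability(4, 8):.0%}")   # roughly 92% with these pessimistic assumptions
```

Real drives are often rated better than 1e-14 and the independence assumption is pessimistic, but it's why double parity is the standard advice at these capacities.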
 
If you ran a full pass and checked the SMART data for each drive in the Areca control software, it's more likely Drive Genius couldn't get access to the drives for some reason. What you should be looking for is any reallocation events; if there are more than 0 I wouldn't trust the drive, though some people say it's OK to have a few reallocated sectors.

Also, use at least RAID 6, as there's a higher chance of a multi-disk failure or error when you're dealing with multi-TB disks. If it's ZFS, use RAID-Z2 or higher (the number after Z is how many disks can fail).
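If you want to automate that "more than 0 relocations" check, here's a minimal sketch. It assumes a host where smartmontools can reach the drives behind the controller (smartctl has an areca device type for that on some platforms); the pass-through device path and slot numbers below are placeholders, not values from this setup:

```python
# Minimal sketch: flag any drive whose Reallocated_Sector_Ct (SMART attribute 5)
# has a raw value above zero. Assumes smartmontools is installed and that
# smartctl can see the drives behind the Areca controller via its areca device
# type; DEVICE and SLOTS are placeholders, not taken from this thread.
import subprocess

DEVICE = "/dev/sg2"      # placeholder pass-through device for the controller
SLOTS = range(1, 6)      # placeholder: drive slots 1-5

for slot in SLOTS:
    out = subprocess.run(
        ["smartctl", "-A", "-d", f"areca,{slot}", DEVICE],
        capture_output=True, text=True,
    ).stdout
    for line in out.splitlines():
        fields = line.split()
        # Attribute rows start with the numeric ID; the raw value is the last column.
        if len(fields) >= 10 and fields[0] == "5":
            raw = int(fields[9])
            verdict = "OK" if raw == 0 else "CHECK THIS DRIVE"
            print(f"slot {slot}: reallocated sectors = {raw} -> {verdict}")
```

The raw value is the actual count of reallocated sectors; the normalized VALUE/WORST/THRESH columns only matter once they approach the threshold.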
 
Thanks both for the replies. I just figured out how to get SMART status via the CLI and I've attached the results. Like I said, I'm still learning all this, so I'm not sure exactly what to watch out for in the results. I see the 'Reallocated Sector Count' line, but which value should I be watching?

I'm also attaching the full errorlog file; not sure if it's helpful or not.

And I'm totally convinced on the RAID6 thing. As soon as I'm sure there's nothing more urgent to address here I'll plan on converting it from RAID5.
Thanks again.
 

Attachments

  • Areca ARC-8050T3 SMART.txt (9.3 KB)
  • errorlog.txt (182 KB)
Per the SMART data you have no reallocated sectors, and I don't see any hard errors in the log around the date/time Drive Genius flagged. The raw value of Reallocated Sector Count is the one to watch, and it should stay at 0, which yours does. DrivePulse can't talk to the drives themselves behind the RAID controller, so anything it or the other Drive Genius apps report in regard to the array is suspect. The log and the Areca utility will tell you for sure. You seem to be fine.
 
That's great to hear. I appreciate you taking the time to confirm that. Makes sense about DrivePulse. It was a good exercise for me to learn which data to actually monitor going forward.
Thanks to both of you!
 