ARECA Owner's Thread (SAS/SATA RAID Cards)

Feh, the powerchute software is crap. Yes, some versions of it have worked now and then, but more often than not it's seemed like a better idea to just avoid using it entirely. That and there's a generator on site, so the UPSes really only have to carry the systems for about 45 seconds. Most of the time they just handle brown-outs and quick blips.

Yes, units with a management card are a better idea, as would budgeting for them instead of the ones we've got. Hindsight and all that....

Depending on your unit(s), you can get either pulls or brand-new management cards on eBay for as little as $30 for pulls and $50 for new. It's a cheap add-on you can charge to your expense budget, since you didn't plan for it last year, and to your capital budget going forward.
 
I'm expanding a raid set and getting read errors on slot 11's disk. I had gotten read errors a while back, but I proactively swapped slot 11's disk out before the Areca card ejected it from the array. (I shipped that disk in and used a spare off the shelf.) The replacement disk has started tossing random read errors, so maybe I have a faulty cable...

The Areca's Timeout Setting is currently 8 seconds. Would it be prudent to bump this up to 12 or 17 seconds to try and buy a little time for the raid set expansion to finish? I've got about 7 hours to go. Or is there a better tweak that could buy time?
 
I'm expanding a raid set and getting read errors on slot 11's disk. I had gotten read errors a while back, but I proactively swapped slot 11's disk out before the Areca card ejected it from the array. (I shipped that disk in and used a spare off the shelf.) The replacement disk has started tossing random read errors, so maybe I have a faulty cable...

The Areca's Timeout Setting is currently 8 seconds. Would it be prudent to bump this up to 12 or 17 seconds to try and buy a little time for the raid set expansion to finish? I've got about 7 hours to go. Or is there a better tweak that could buy time?

Is it just a cable or is it connected to a backplane? If the former, I would change both the power and data cables and let it continue. If it is in a backplane and you have a free slot, put the drive in that slot, set it to pass-through, and run the drive fitness tests (DFTs) from your particular drive's manufacturer on it.
 
It's connected to a backplane and was in the process of expanding, so I wouldn't have been able to change the power and data cables and let it continue. The raid set expansion finished a few minutes ago, so now I can.
 
I did type in 18000.0 (technically I just changed the 6 to an 8) and clicked the check box but it drops off the point zero.

I just walked through it again and grabbed screen shots.

01arecaconfirm.jpg


02volumesetmodified.jpg


Going right back into Modify Volume Set shows no modification was made.
03volumesetunmodified.jpg


A shot of the log as proof that I clicked Submit.
04arecalog.jpg


Powering it down is similar enough to a reboot, which is what I started out asking whether I should do; I just didn't want to reboot too soon. I will power it down instead, but I'll leave it as-is for now in case anyone else weighs in with additional suggestions.

Try doing it through the CLI. I recall I had problems with the web-interface going from 20x2TB -> 24x2TB myself. Might be an issue with very large arrays.
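For anyone who hasn't used it, the Areca CLI (cli32/cli64) can make the same volume modification as the web GUI. A rough sketch only - the info subcommands below are standard, but the exact modify parameters vary by CLI version, so run a subcommand with no arguments (or check its built-in help) before trusting my parameter names:

Code:
./cli64 rsf info    # confirm the raid set shows the expanded raw capacity
./cli64 vsf info    # list volume sets and their current capacities
./cli64 vsf modify vol=1 capacity=18000    # parameter names assumed - verify against the CLI help
./cli64 event info    # a Start Initialize entry should follow the Modify Volume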


FYI, from my event history you can see the multiple times I did a 'Modify Volume' and the initialize did not happen afterward:

Code:
2012-05-28 07:59:31  DATA VOLUME      Complete Init         003:25:10
2012-05-28 04:34:20  DATA VOLUME      Start Initialize
2012-05-28 04:34:18  DATA VOLUME      Modify Volume
2012-05-28 04:34:09  SW API Interface API Log In
2012-05-28 04:32:27  DATA VOLUME      Modify Volume
2012-05-28 04:31:59  DATA VOLUME      Modify Volume
2012-05-28 04:30:09  DATA VOLUME      Modify Volume
2012-05-28 04:29:20  DATA VOLUME      Modify Volume
2012-05-28 04:28:22  001.001.001.003  HTTP Log In
2012-05-19 19:05:54  DATA VOLUME      Complete Migrate      111:56:44
2012-05-16 22:39:46  001.001.001.003  HTTP Log In
2012-05-15 03:09:46  DATA VOLUME      Start Migrating
2012-05-15 03:09:46  LINUX VOLUME     Complete Migrate      000:24:19
2012-05-15 02:45:27  LINUX VOLUME     Start Migrating
2012-05-15 02:45:27  MAC VOLUME       Complete Migrate      000:05:42
2012-05-15 02:39:44  MAC VOLUME       Start Migrating
2012-05-15 02:39:44  WINDOWS VOLUME   Complete Migrate      000:24:00
2012-05-15 02:15:44  WINDOWS VOLUME   Start Migrating
2012-05-15 02:15:42  40TB RAID SET    Expand RaidSet
 
Try doing it through the CLI. I recall I had problems with the web-interface going from 20x2TB -> 24x2TB myself. Might be an issue with very large arrays.

Nah. Done precisely that dozens of times ( -> 24x2TB RAID6, even -> 32x2TB RAID6 tested multiple times).

But it would be worth letting Areca know in case there's some other root problem they need to find and weed out.
 
Nah. Done precisely that dozens of times ( -> 24x2TB RAID6, even -> 32x2TB RAID6 tested multiple times).

But it would be worth letting Areca know in case there's some other root problem they need to find and weed out.

Very weird. I did it multiple times through the web-interface like the guy in question, and you can even see the modification in the event history, but again it didn't actually do it until I did it through the CLI.

I actually email Billion quite often to get new features/etc., but I have only done a migration once, and only on a home machine, so I didn't feel a need to bring it up.

Things I have gotten Areca to do though:

Ability to fail drives without pulling (initially from web-interface only then later had them add support to the CLI)
Smartmontools support for their newer SAS controllers
Smartmontools patch for SAS expanders
CLI smart info support for SAS controllers + format change (so it includes raw values)
Ability to see logical/physical disk order from the CLI (he told me this will be in their next release)

And probably some other stuff that I am forgetting.
 
I never thought to try it via the CLI. I ended up rebooting the server between each raid set expansion and volume expansion to get my disks added.
 
Things I have gotten Areca to do though:

Ability to fail drives without pulling (initially from web-interface only then later had them add support to the CLI)
Smartmontools support for their newer SAS controllers
Smartmontools patch for SAS expanders
CLI smart info support for SAS controllers + format change (so it includes raw values)
Ability to see logical/physical disk order from the CLI (he told me this will be in their next release)

Now that's a true Areca fan :) Yeah, the first thing I did after installing FW 1.51 and rebooting was a smartmontools query of the disks behind my expanders, and it worked beautifully, so thanks for that. I can see the Fail-Drive also being very useful for remotely managed or otherwise colocated systems.
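For anyone wanting to replicate that query, smartmontools addresses disks behind Areca controllers with the areca device type; on the SAS cards with expanders it takes a disk/enclosure pair. A minimal sketch, assuming Linux and that the controller shows up as /dev/sg0 (the device node and the disk/enclosure numbers below are placeholders for your own):

Code:
# SATA cards: disk N attached to the controller
smartctl -a -d areca,3 /dev/sg0
# SAS cards (1880/1882) with expanders: disk N in enclosure E
smartctl -a -d areca,3/2 /dev/sg0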

I do shoot Mr. Wu the occasional "we need this feature" email from time to time, once in a while he responds to my insane ramblings. Besides triple parity which I've suggested a few times (current dualcore RoC's easily have the horsepower) and he states they're considering, the big one I've been harping on for years is evolving HW raid to the controller becoming filesystem-aware, which no doubt is very complex and would necessitate a much more intelligent host O/S driver and/or client mechanism. In that scenario the controller becomes more of an extension of the host driver rather than the other way around; the controller is still autonomous but the host driver gives it a lot more insight, to do its job in less time. That really is the holy grail since it unlocks so many possibilities which systems like ZFS already utilize to some extent. End-to-end checksumming, non-striping JBOD/passthrough with parity, TRIM for SSD's in RAID, rebuilding parity only for sectors that contain active data, the list goes on.

It's a herculean task for sure and may not even be feasible, and no doubt a large gap separates waxing theoretical and engineers actually implementing it - hell, even Intel continues to struggle just to get TRIM for a couple of SSDs in RAID0 working correctly. But they have to evolve in order to survive, since all the innovation in the last 5-7 yrs has been in software-based solutions.
 
I can see the Fail-Drive also being very useful for remotely managed or otherwise colocated systems.

That was only half the reason. One of the datacenters we have is remote (15 miles away from our main one) and we don't have onsite staff. That being said, the biggest reason is that when a drive is causing issues on the array, it's much better to fail it before you pull it, just in case the backplane is wired incorrectly. Sometimes that happens with our Supermicro builders. Since the disk LED uses the actual SATA signaling from the controller, disk LED activity should always be accurate as to which drive has been failed (it won't show disk activity).

There have been cases where someone pulled a drive and either the SFF-8087 cables were backwards (so they pulled slot 5 instead of slot 1) or, worse, just two of the drives were swapped (they pulled slot 4 instead of slot 3) when slot 3 was the failing disk, and now we have an array that has to rebuild from a disk on its last legs (in RAID10).
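A cheap sanity check that pairs well with failing the drive first: read the serial number through the controller and match it against the printed label before anyone pulls anything. A hedged sketch, again assuming Linux, smartmontools with Areca support, and made-up enclosure/slot numbers:

Code:
# Serial of the disk the controller reports as enclosure 2, slot 3
smartctl -i -d areca,3/2 /dev/sg0 | grep -i serial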
 
Cascading expanders? I had read in the past that it was a bad idea to cascade SAS expanders, like using a 1680ix with a Supermicro expander, or daisy-chaining expanders. Is this still a bad idea with the SAS2 expanders? Was the previous warning really FUD? Specifically, an 1882ix with a Supermicro 846E16.

Also, does anyone know if the 846E16 can be dual-linked? Any known issues with Areca and the 846E16?
 
Hey guys,

We recently built a server using a Supermicro CSE-743TQ-865B chassis. It comes with the SAS743TQ backplane, which is for all intents & purposes a SATA backplane. Connected to this backplane are 6x WD RE4 SATA drives. Anyway, the controller is an Areca ARC-1222 & everything with the array works fine except for the failure LEDs. The backplane has both SGPIO/sideband & I2C connectors. Unfortunately the fault LED connector on the ARC-1222 doesn't seem compatible with the I2C connector (Areca has 8 pins & the backplane has 4), so that's out. The card itself is one of the early-edition cards which came with 2x mini-SAS > SATA fanout cables without the sideband connector. I purchased these from Monoprice http://www.monoprice.com/products/product.asp?c_id=102&cp_id=10254&cs_id=1025406&p_id=8192&seq=1&format=2 & changed the jumpers on the backplane to use SGPIO instead of I2C, but the fault LEDs are still not working. From what I could find via Google, it might be possible that the pinouts are different between Supermicro & Areca? I can get the pinout diagram from Supermicro, but there's nothing from Areca. Then again, I would assume that being part of the SFF-8087 connector, there's not much deviation possible from whatever the standard is.

Any ideas? Has anyone run into something similar to this? Thanks!
 
Hey guys,

We recently built a server using a Supermicro CSE-743TQ-865B chassis. It comes with the SAS743TQ backplane, which is for all intents & purposes a SATA backplane. Connected to this backplane are 6x WD RE4 SATA drives. Anyway, the controller is an Areca ARC-1222 & everything with the array works fine except for the failure LEDs. The backplane has both SGPIO/sideband & I2C connectors. Unfortunately the fault LED connector on the ARC-1222 doesn't seem compatible with the I2C connector (Areca has 8 pins & the backplane has 4), so that's out. The card itself is one of the early-edition cards which came with 2x mini-SAS > SATA fanout cables without the sideband connector. I purchased these from Monoprice http://www.monoprice.com/products/product.asp?c_id=102&cp_id=10254&cs_id=1025406&p_id=8192&seq=1&format=2 & changed the jumpers on the backplane to use SGPIO instead of I2C, but the fault LEDs are still not working. From what I could find via Google, it might be possible that the pinouts are different between Supermicro & Areca? I can get the pinout diagram from Supermicro, but there's nothing from Areca. Then again, I would assume that being part of the SFF-8087 connector, there's not much deviation possible from whatever the standard is.

Any ideas? Has anyone run into something similar to this? Thanks!

The I2C pinouts are in your manual on page 27. I don't know about the non-8087 SGPIO pinouts though.
 
The I2C pinouts are in your manual on page 27. I don't know about the non-8087 SGPIO pinouts though.

Hmm, the I2C connector on the backplane is way different from this 8-pin connector on page 27. It's hard to explain, & the picture I have of it is too crappy to make it out. It's a small, white, 4-pin connector on the backplane. If you look at the manual for the backplane ( http://www.supermicro.com/manuals/other/BPN-SAS-743TQ.pdf ), you can kind of see it on page 2-1. On page 2-3 it diagrams the pinouts as:

1 - Data
2 - Ground
3 - Clock
4 - No Connection

Vs. the Areca manual saying:

1 - +5V
2 - Ground
3 - LCD I2C Int
4 - Protect Key
5 - LCD Data
6 - ACS CLK
7 - ACS Data
8 - LCD Clock.

So I think I2C is totally out since they are so vastly different.
 
I have 8 128GB Vertex 4 SSDs in Raid 0 on an Areca ARC-1882ix-24 with BBU -- I have owned Areca cards from every generation except the 11xx series, so I am pretty familiar with the configuration etc -- just need some help finding out why reads are so much slower than writes.

SysSpecs:
i7 970 @ 4GHz
Gigabyte EX58-UD5 - BIOS F13 (latest)
24GB 1600MHz RAM
Areca ARC-1882ix-24 1GB Cache, 1.51 FW, BBU attached
8x OCZ Vertex 4 1.5 FW Raid 0, 64K Stripe
6x Hitachi 1TB mixed HDS721010DLE630 and HDS721010CLA332
1250 Watt PSU

See Link for some screen shots:
http://imgur.com/a/6RTEg


Volume Set Information
Volume Set Name SSD-8x128GB
Raid Set Name SSD-8x128GB
Volume Capacity 512.0GB
SCSI Ch/Id/Lun 0/0/0
Raid Level Raid 0
Stripe Size 64KBytes
Block Size 512Bytes
Member Disks 8
Cache Mode Write Back
Write Protection Disabled
Tagged Queuing Enabled
Volume State Normal

Raid Set Information
Raid Set Name SSD-8x128GB
Member Disks 8
Total Raw Capacity 1024.3GB
Free Raw Capacity 512.3GB
Min Member Disk Size 128.0GB
Supported Volumes 128
Raid Set Power State Operating
Raid Set State Normal

Raid Subsystem Information
Controller Name ARC-1882IX-24
Firmware Version V1.51 2012-07-04
BOOT ROM Version V1.51 2012-07-04
PL Firmware Version 13.0.59.0
Serial Number Y227CADHAR810196
Unit Serial #
Main Processor 800MHz PPC440 RevD1
CPU ICache Size 32KBytes
CPU DCache Size 32KBytes/Write Back
CPU SCache Size 1024KBytes/Write Through
System Memory 1024MB/1333MHz/ECC
PCI-E Link Status 8X/5G
Current IP Address 192.168.10.131
 
Since you have Disk Write Cache enabled, that explains why the writes are so much faster. Still, I'd certainly expect faster reads from 8x SSDs in RAID0.
Have you tested the drives individually to see if one (or possibly more than one) of them cripples the speed?
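If you do test them individually, a quick sequential read per drive is usually enough to spot a laggard. A rough sketch, assuming each SSD can be set to pass-through/JBOD and read from a Linux environment, with sdX standing in for each disk (on Windows, ATTO or CrystalDiskMark per disk does the same job):

Code:
# Raw sequential read, bypassing the page cache
dd if=/dev/sdX of=/dev/null bs=1M count=4096 iflag=direct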
 
Hi Areca users, I have an 1880x card with two external enclosures attached. The first enclosure has 13 drives in a RAID6 array and has been working flawlessly over the past 6 months. Recently, I added a Supermicro 847JBOD chassis (partially filled with 17 drives in RAID6) to the second connector and am now experiencing time-out errors on both enclosures. If I remove the 847JBOD chassis from the card, the original enclosure/array works fine again. Since the array in the 847JBOD is new, the time-out errors I'm getting are occurring during the init phase of this new array. Any thoughts? Is this too much for 1 card to handle? Thanks for your help!
 
Hi Areca users, I have an 1880x card with two external enclosures attached. The first enclosure has 13 drives in a RAID6 array and has been working flawlessly over the past 6 months. Recently, I added a Supermicro 847JBOD chassis (partially filled with 17 drives in RAID6) to the second connector and am now experiencing time-out errors on both enclosures. If I remove the 847JBOD chassis from the card, the original enclosure/array works fine again. Since the array in the 847JBOD is new, the time-out errors I'm getting are occurring during the init phase of this new array. Any thoughts? Is this too much for 1 card to handle? Thanks for your help!

What kinds of drives are in the 2 enclosures? All SAS, All SATA or a mixture of the 2. Have you upgraded all 4 of the firmware files to 1.51 from the areca site? Can you please post the log from the card?
 
They are all SATA drives. And yes, the card has been updated to the latest firmware (all files). I don't have access to the log itself right now, but here is a sample of the emails I received:

2012-08-21 22:04:38 Enc#3 Slot 17 : Time Out Error

It's not always the same drive or the same enclosure, but I get a series of these.

Thank you.
 
They are all SATA drives. And yes, the card has been updated to the latest firmware (all files). I don't have access to the log itself right now, but here is a sample of the emails I received:

2012-08-21 22:04:38 Enc#3 Slot 17 : Time Out Error

It's not always the same drive or the same enclosure, but I get a series of these.

Thank you.

When you do get them, how many drives drop out at a time (is it just one per enclosure) and how long between the drops? When you can, please post as much of the log as you can. Do you have 2 new SFF-8088 cables you can swap in and test?
 
They don't time out long enough for either raidset to be degraded. Yes, I've tried swapping out cables, but still the same issue.

Thanks again.

Here's more of the log:
2012-08-21 22:05:45 Enc#5 Disk #3 : Time Out Error
2012-08-21 22:05:45 Enc#5 Disk #4 : Time Out Error
2012-08-21 22:05:45 Enc#5 Disk #5 : Time Out Error
2012-08-21 22:14:06 Enc#2 Slot#14 : Time Out Error
2012-08-21 22:14:06 Enc#2 Slot#15 : Time Out Error
2012-08-21 22:14:06 Enc#2 Slot#16 : Time Out Error
2012-08-21 22:14:06 Enc#4 Disk #0 : Time Out Error
2012-08-21 22:14:06 Enc#4 Disk #3 : Time Out Error
2012-08-21 22:14:06 Enc#5 Disk #0 : Time Out Error
2012-08-21 22:14:06 Enc#5 Disk #2 : Time Out Error
2012-08-21 22:14:06 Enc#5 Disk #3 : Time Out Error
2012-08-21 22:14:06 Enc#5 Disk #4 : Time Out Error
2012-08-21 22:22:13 192.168.001.028 : HTTP Log In
2012-08-21 22:29:07 Enc#2 Slot#15 : Time Out Error
2012-08-21 22:29:07 Enc#4 Disk #0 : Time Out Error
2012-08-21 22:29:07 Enc#4 Disk #2 : Time Out Error
2012-08-21 22:29:07 Enc#5 Disk #1 : Time Out Error
2012-08-21 22:29:07 Enc#5 Disk #2 : Time Out Error
2012-08-21 22:32:36 Enc#2 Slot#14 : Time Out Error
2012-08-21 22:32:36 Enc#2 Slot#15 : Time Out Error
2012-08-21 22:32:36 Enc#4 Disk #0 : Time Out Error
2012-08-21 22:32:36 Enc#4 Disk #1 : Time Out Error
2012-08-21 22:32:36 Enc#4 Disk #2 : Time Out Error
2012-08-21 22:32:36 Enc#5 Disk #2 : Time Out Error
2012-08-21 22:32:36 Enc#5 Disk #3 : Time Out Error
2012-08-21 22:32:36 Enc#5 Disk #4 : Time Out Error
2012-08-21 22:32:36 Enc#5 Disk #5 : Time Out Error
2012-08-21 22:36:56 Enc#2 Slot#14 : Time Out Error
2012-08-21 22:36:56 Enc#2 Slot#15 : Time Out Error
2012-08-21 22:36:56 Enc#4 Disk #0 : Time Out Error
2012-08-21 22:36:56 Enc#5 Disk #3 : Time Out Error
2012-08-21 22:36:56 Enc#5 Disk #4 : Time Out Error
 
Pull just the drives out of the 847J, leave everything else connected. If you get the stability back, then start plugging them back in one by one and checking for stability between each drive.
Out of curiosity, are the drives of the same brand? Samsung? Seagate? what model? If you get the stability back, how many drives do you need to plug back in before things become unstable?
 
Anyone tried using an Areca 1260 inside a Windows 8 ESXi guest VM with pass-through for the 1260 PCIe card?

When I try to load the storport driver the install just hangs. It brings up the "Installing driver software" dialog with the progress-bar and it just endlessly loops. Meanwhile it pegs 100% CPU. It's been hung that way for about 15 minutes.

I had the same sort of thing happen with windows 2012 (the same basic OS) and when I did a hard reset of the guest the driver had apparently been installed.

I can use the drives attached to the 1260 as RDMs. This is where the ESXi host has the drives and they're passed to a virtual controller, instead of the controller itself being passed through whole as a PCIe device. So it's not an issue with actually being able to communicate with the drives.

Anyone else seen this, and found a solution?
 
Anyone tried using an Areca 1260 inside a Windows 8 ESXi guest VM with pass-through for the 1260 PCIe card?

When I try to load the storport driver the install just hangs. It brings up the "Installing driver software" dialog with the progress-bar and it just endlessly loops. Meanwhile it pegs 100% CPU. It's been hung that way for about 15 minutes.

I had the same sort of thing happen with windows 2012 (the same basic OS) and when I did a hard reset of the guest the driver had apparently been installed.

I can use the drives attached to the 1260 as RDMs. This is where the ESXi host has the drives and they're passed to a virtual controller, instead of the controller itself being passed through whole as a PCIe device. So it's not an issue with actually being able to communicate with the drives.

Anyone else seen this, and found a solution?

What motherboard are you using, and what other PCIe cards are installed?
 
Pull just the drives out of the 847J, leave everything else connected. If you get the stability back, then start plugging them back in one by one and checking for stability between each drive.
Out of curiosity, are the drives of the same brand? Samsung? Seagate? what model? If you get the stability back, how many drives do you need to plug back in before things become unstable?
Hi, thanks for the suggestion, but a disaster occurred this morning. The power supply of the old chassis died and now the volume on those disks has failed.

Unfortunately, I cannot quickly replace the power supply and so I wanted to know if it will be possible to remove the drives and install them into the 847JBOD chassis. Since it's a different chassis with a different expander in it, the drives will not be in the same order as in the old chassis. Is that a problem?

If the volume is marked failed, how can I "unfail" it? I've not done anything except turn off the server for now... and I'm hoping for some good advice from the experts as to what I should do next.

Please help!

I have attached the log from the 1880x card if that is helpful. You can see the problems started at 12:58:56. You'll see the log contains info on 2 volumes #000 and #009. I only care about the #000 volume and not the #009 volume. The #009 is on another Raidset with a separate set of disks.

2012-08-22 21:02:26 192.168.001.008 HTTP Log In
2012-08-22 21:01:09 H/W Monitor Raid Powered On
2012-08-22 18:19:45 Enc#3 Slot 19 Device Removed
2012-08-22 18:19:45 Enc#3 Slot 19 PassThrough Disk Deleted
2012-08-22 18:19:44 Enclosure#3 Removed
2012-08-22 18:19:44 Enc#3 SES2Device Device Removed
2012-08-22 18:19:44 Enc#3 Slot 17 Device Removed
2012-08-22 18:19:44 Enc#3 Slot 16 Device Removed
2012-08-22 18:19:44 Enc#3 Slot 15 Device Removed
2012-08-22 18:19:44 Enc#3 Slot 14 Device Removed
2012-08-22 18:19:44 Enc#3 Slot 12 Device Removed
2012-08-22 18:19:44 Enc#3 Slot 11 Device Removed
2012-08-22 18:19:44 Enc#3 Slot 10 Device Removed
2012-08-22 18:19:44 Enc#3 Slot 09 Device Removed
2012-08-22 18:19:44 Enc#3 Slot 08 Device Removed
2012-08-22 18:19:44 Enc#3 Slot 07 Device Removed
2012-08-22 18:19:44 Enc#3 Slot 05 Device Removed
2012-08-22 18:19:44 Enc#3 Slot 04 Device Removed
2012-08-22 18:19:44 Enc#3 Slot 03 Device Removed
2012-08-22 18:19:44 Enc#3 Slot 02 Device Removed
2012-08-22 18:19:44 Enc#3 Slot 01 Device Removed
2012-08-22 18:19:44 Raid Set # 0009 RaidSet Degraded
2012-08-22 18:19:44 Raid Set # 0009 RaidSet Degraded
2012-08-22 18:19:44 Raid Set # 0009 RaidSet Degraded
2012-08-22 18:19:44 Raid Set # 0009 RaidSet Degraded
2012-08-22 18:19:44 Raid Set # 0009 RaidSet Degraded
2012-08-22 18:19:44 Raid Set # 0009 RaidSet Degraded
2012-08-22 18:19:44 Raid Set # 0009 RaidSet Degraded
2012-08-22 18:19:44 Raid Set # 0009 RaidSet Degraded
2012-08-22 18:19:44 Raid Set # 0009 RaidSet Degraded
2012-08-22 18:19:44 Raid Set # 0009 RaidSet Degraded
2012-08-22 18:19:44 Raid Set # 0009 RaidSet Degraded
2012-08-22 18:19:44 Raid Set # 0009 RaidSet Degraded
2012-08-22 18:19:44 Raid Set # 0009 RaidSet Degraded
2012-08-22 18:19:44 Raid Set # 0009 RaidSet Degraded
2012-08-22 18:19:44 Raid Set # 0009 RaidSet Degraded
2012-08-22 18:19:43 Enc#3 Slot 18 Device Removed
2012-08-22 18:19:43 ARC-1880-VOL#009 Volume Failed
2012-08-22 18:19:43 ARC-1880-VOL#009 Volume Failed
2012-08-22 18:19:42 ARC-1880-VOL#009 Volume Failed
2012-08-22 18:19:42 ARC-1880-VOL#009 Volume Failed
2012-08-22 18:19:41 ARC-1880-VOL#009 Volume Failed
2012-08-22 18:19:41 ARC-1880-VOL#009 Volume Failed
2012-08-22 18:19:41 ARC-1880-VOL#009 Volume Failed
2012-08-22 18:19:40 ARC-1880-VOL#009 Volume Failed
2012-08-22 18:19:40 ARC-1880-VOL#009 Volume Failed
2012-08-22 18:19:39 ARC-1880-VOL#009 Volume Failed
2012-08-22 18:19:39 Enc#3 Slot 06 Device Removed
2012-08-22 18:19:39 ARC-1880-VOL#009 Volume Failed
2012-08-22 18:19:38 ARC-1880-VOL#009 Volume Failed
2012-08-22 18:19:38 ARC-1880-VOL#009 Volume Failed
2012-08-22 18:19:38 ARC-1880-VOL#009 Volume Degraded
2012-08-22 18:19:38 ARC-1880-VOL#009 Volume Degraded
2012-08-22 18:18:50 192.168.001.008 HTTP Log In
2012-08-22 12:59:01 Enc#5 SES2Device Device Removed
2012-08-22 12:59:01 Enclosure#5 Removed
2012-08-22 12:59:01 Enclosure#4 Removed
2012-08-22 12:59:01 Enc#5 Disk #5 Device Removed
2012-08-22 12:59:01 Enc#5 Disk #4 Device Removed
2012-08-22 12:59:01 Enc#5 Disk #3 Device Removed
2012-08-22 12:59:01 Enc#5 Disk #2 Device Removed
2012-08-22 12:59:01 Enc#5 Disk #1 Device Removed
2012-08-22 12:59:01 Enc#5 Disk #0 Device Removed
2012-08-22 12:59:01 Enc#4 Disk #1 Device Removed
2012-08-22 12:59:01 Enc#4 Disk #0 Device Removed
2012-08-22 12:59:01 Enc#2 Slot#16 Device Removed
2012-08-22 12:59:01 Enc#2 Slot#15 Device Removed
2012-08-22 12:59:01 Enc#4 Disk #3 Device Removed
2012-08-22 12:59:01 Enc#2 Slot#14 Device Removed
2012-08-22 12:59:01 ARC-1880-VOL#000 Abort Checking 015:33:23 3
2012-08-22 12:59:01 Raid Set # 000 RaidSet Degraded
2012-08-22 12:59:01 Raid Set # 000 RaidSet Degraded
2012-08-22 12:59:01 Raid Set # 000 RaidSet Degraded
2012-08-22 12:59:01 Raid Set # 000 RaidSet Degraded
2012-08-22 12:59:01 Raid Set # 000 RaidSet Degraded
2012-08-22 12:59:01 Raid Set # 000 RaidSet Degraded
2012-08-22 12:59:01 Raid Set # 000 RaidSet Degraded
2012-08-22 12:59:01 Raid Set # 000 RaidSet Degraded
2012-08-22 12:59:01 Raid Set # 000 RaidSet Degraded
2012-08-22 12:59:01 Raid Set # 000 RaidSet Degraded
2012-08-22 12:59:01 Raid Set # 000 RaidSet Degraded
2012-08-22 12:59:01 Enc#4 Disk #2 Device Removed
2012-08-22 12:59:00 ARC-1880-VOL#000 Volume Failed
2012-08-22 12:59:00 ARC-1880-VOL#000 Volume Failed
2012-08-22 12:59:00 ARC-1880-VOL#000 Volume Failed
2012-08-22 12:58:59 ARC-1880-VOL#000 Volume Failed
2012-08-22 12:58:59 ARC-1880-VOL#000 Volume Failed
2012-08-22 12:58:58 ARC-1880-VOL#000 Volume Failed
2012-08-22 12:58:58 Enc#4 SES2Device Device Removed
2012-08-22 12:58:58 ARC-1880-VOL#000 Volume Failed
2012-08-22 12:58:57 ARC-1880-VOL#000 Volume Failed
2012-08-22 12:58:57 Enclosure#2 Removed
2012-08-22 12:58:57 Enc#2 SES2Device Device Removed
2012-08-22 12:58:56 ARC-1880-VOL#000 Volume Failed
2012-08-22 12:58:56 ARC-1880-VOL#000 Volume Failed
2012-08-22 12:58:56 ARC-1880-VOL#000 Volume Degraded
2012-08-22 12:58:56 Raid Set # 000 RaidSet Degraded
2012-08-22 12:58:56 ARC-1880-VOL#000 Volume Degraded
2012-08-22 12:58:56 Enc#2 Slot#13 Device Removed
2012-08-22 06:37:50 192.168.001.028 HTTP Log In
2012-08-22 06:23:11 192.168.001.028 HTTP Log In
2012-08-22 05:49:52 192.168.001.028 HTTP Log In
 
What motherboard are you using, and what other PCIe cards are installed?

This is a Supermicro X7DWE with 16GB of RAM, dual E5440 Xeons and one other PCIe card, an IBM M1015 SAS interface. It's running ESXi with the latest patches. The 1260 has the latest 1.49 firmware flashed to it. I'm likewise using the most recent download of the Windows x64 driver.

I've set up the 1260 as passed-through in ESXi and then added to a guest VM running x64 windows 2012 or 8. Installed the OS as per usual and then added the driver after that was complete. The OS guest boots from a VHD on ESXi media, not from the 1260.

I can use drives attached to the card if I set them up as RDMs in ESXi and pass them individually as virtual disks on a SCSI controller. So clearly the card, cabling and drives are all in working order. But if I have the card passed through, it just hangs the boot of the guest VM that has drivers for it. I've been trying this as the sole guest VM operating.

I've tried moving the card to a different PCIe slot and that did not change the problem. I have not tried it without any other cards connected to the motherboard as I do need the other card for another VM's use.
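One more thing that might be worth a try, offered only as a guess: ESXi keeps per-device passthrough reset behaviour in /etc/vmware/passthru.map, and forcing a different reset method has helped other cards that hang a guest during driver init. Areca's PCI vendor ID is 17d3 and the device ID normally matches the model number, but confirm both from the host's device listing before relying on this line:

Code:
# append to /etc/vmware/passthru.map, then reboot the host
# vendor-id  device-id  reset-method  fptShareable
17d3         1260       d3d0          default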
 
So I'm expanding my RAID with a new drive. I've added it to the Raid Set and modified the Volume Set to increase the size, and the Volume State is back to Normal. Now I need to expand my partition, but Ubuntu Server 12.04 is still reporting the old size for the volume. What can I do to update the size without rebooting the machine?
 
So I'm expanding my RAID with a new drive. I've added it to the Raid Set and modified the Volume Set to increase the size, and the Volume State is back to Normal. Now I need to expand my partition, but Ubuntu Server 12.04 is still reporting the old size for the volume. What can I do to update the size without rebooting the machine?

What card, how many drives, what filesystem? Have you run gparted to expand your filesystem yet?
 
What card, how many drives, what filesystem? Have you run gparted to expand your filesystem yet?
Areca 1280ML, 5x3TB Hitachi 7K3000, one Raid Set and one Volume Set partitioned to XFS. I know how to resize the partition, but parted and fdisk still report the same size as before the new drive was added.
 
Areca 1280ML, 5x3TB Hitachi 7K3000, one Raid Set and one Volume Set partitioned to XFS. I know how to resize the partition, but parted and fdisk still report the same size as before the new drive was added.

What raid level are you using with these 5 drives?
Can you post the results from a df -h?
Have you done an xfs_growfs yet?
Please post the result from sudo xfs_growfs -n /$mounted_filesystem_name (this will not make any changes, just show me the current layout). Most likely you will just have to do a sudo xfs_growfs -d /$mounted_filesystem_name, but don't do it until you post the prior info.
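For the record, the nice part about XFS here is that xfs_growfs operates on a mounted filesystem, so no downtime is needed once the kernel sees the new space. A minimal run-through with a hypothetical /storage mount point:

Code:
df -h /storage               # current size
sudo xfs_growfs -n /storage  # dry run: prints the geometry, changes nothing
sudo xfs_growfs -d /storage  # grow the data section to fill the device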
 
What raid level are you using with these 5 drives?
Can you post the results from a df -h?
Have you done an xfs_growfs yet?
Please post the result from sudo xfs_growfs -n /$mounted_filesystem_name (this will not make any changes, just show me the current layout). Most likely you will just have to do a sudo xfs_growfs -d /$mounted_filesystem_name, but don't do it until you post the prior info.
It's RAID6. I think you're missing my point: I just need a way to resync/reload the device in the kernel without unmounting or rebooting so it detects the new volume size.

xfs_growfs does nothing because it thinks there isn't any space left to expand.

[Edit] I should add I'm not expanding an OS partition, it's just under heavy/frequent use and I can't unmount it or reboot the machine to expand it.
 
It's RAID6. I think you're missing my point: I just need a way to resync/reload the device in the kernel without unmounting or rebooting so it detects the new volume size.

xfs_growfs does nothing because it thinks there isn't any space left to expand.

[Edit] I should add I'm not expanding an OS partition, it's just under heavy/frequent use and I can't unmount it or reboot the machine to expand it.

Ah, sorry, I thought you just needed to expand and it already saw the space. As far as I know, with XFS in Ubuntu Server you will have to either reboot or at least kill any processes using the filesystem and umount/mount to see the upgraded size. I will be the first person to say I am not an XFS expert, so I invite others to either confirm or correct my comment.
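For what it's worth, recent kernels do have an online path that sometimes avoids the reboot: ask the SCSI layer to re-read the device's capacity, then grow the filesystem while it stays mounted. A hedged sketch, with sdX standing in for whatever device node the Areca volume appears as, and assuming XFS sits directly on the device (if it lives inside a partition, the partition table still has to be grown first, which is the awkward part to do online):

Code:
cat /sys/block/sdX/size                          # old size, in 512-byte sectors
echo 1 | sudo tee /sys/block/sdX/device/rescan   # re-read capacity from the controller
cat /sys/block/sdX/size                          # should now show the expanded size
sudo xfs_growfs -d /mountpoint                   # XFS grows while mounted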
 
So it looks like all the drives in my media server are deciding to fail on me simultaneously. I have 9xWD2002FYPS (RE4-GP 2TB) drives on my 1680ix-24 in RAID6... luckily in RAID6.

One drive failed last week (not the first time I've had one die, no worries). RMA the drive, and while rebuilding with the replacement, another drive times out repeatedly for a few hours, works for a few hours, then times out again, but never drops. A few other drives have started to pop media errors and timeouts as well. Luckily, everything held together long enough for the array to rebuild. The new drive gets SIDS the next day. So now I've got 1 dead and one that's going to die, and a few others that are showing their age. Obviously I'm going to RMA the dead drives, but I'm down to the last few months on warranty, so I think it's time to consider the replacements.

I'm in the process of backing up the critical data (trimmed it down to just under 6TB, I can re-rip my movie collection if it comes to it) to a couple external drives. But with the RE4-GP drives now discontinued, I was just wondering what a good replacement would be for this.

I looked at the new WD Red drives, but it looks like the jury is still out on these on a RAID card rather than in a NAS (that, and the 3-year warranty). So what's the consensus on a good 2-3TB drive these days? I don't care if it's 5400 or 7200 RPM; speed is not a huge deal when you have enough spindles to compensate - I'm looking mostly for reliability and a 5-year warranty.

...and a new chassis. The Norco 4220 I have now has a few bad bays. Only 1 backplane has 4 working bays. It's my own fault for not testing them all when I first got it, the ones that don't work never did, so I don't think it's related to my current issue.
 
So what's the consensus on a good 2-3TB drive these days? I don't care if it's 5400 or 7200 RPM; speed is not a huge deal when you have enough spindles to compensate - I'm looking mostly for reliability and a 5-year warranty.

...and a new chassis. The Norco 4220 I have now has a few bad bays. Only 1 backplane has 4 working bays. It's my own fault for not testing them all when I first got it, the ones that don't work never did, so I don't think it's related to my current issue.

My suggestion for drives for a hardware RAID card like an Areca is as follows. Personally I would buy them in this order (for reasons of performance, longevity & compatibility). The enterprise drives will cost the most but will likely be the ones with 5-year warranties. Some of the retail-pack drives can still offer up to 5 years (depending on where you buy them and what SKU you get).

1. Enterprise SAS
2. Enterprise SATA
3. Hitachi 5k/7k
4. WD Red
5. Seagate ST3000DM001

When buying them in quantities greater than 4, try to buy 1/3 of the order from each of 3 different resellers, reducing your chance of getting a whole batch from a bad or problematic production run.
As for a new enclosure, what is your need for drive bays (do you still need the 24 the Norco afforded you?) and what is your budget? You can't go wrong buying Supermicro enclosures, but they are going to cost you a bit more.
 
I just bought an 1882x and 2 Norco DS-24e external SAS enclosures; they have Areca ARC-8026 expanders in them. I have this hooked into a Supermicro Atom 1U. I'm having issues with the expanders locking the card up and not detecting many drives. It detects them randomly. I have consoled into the enclosures and they are detecting fine and look normal, but with drives plugged in, when I boot the server it seems to lock the card up, and it counts the 300 seconds and reboots until I unplug it. Think it could be the Atom server? I haven't had time to test this in a different server.

The card functions fine until the expanders are plugged in, but I can't find any issues with the expanders.

Any ideas?
 
Hi, thanks for the suggestion, but a disaster occurred this morning. The power supply of the old chassis died and now the volume on those disks has failed.

Unfortunately, I cannot quickly replace the power supply and so I wanted to know if it will be possible to remove the drives and install them into the 847JBOD chassis. Since it's a different chassis with a different expander in it, the drives will not be in the same order as in the old chassis. Is that a problem?

If the volume is marked failed, how can I "unfail" it? I've not done anything except turn off the server for now... and I'm hoping for some good advice from the experts as to what I should do next.

Please help!

I have attached the log from the 1880x card if that is helpful. You can see the problems started at 12:58:56. You'll see the log contains info on 2 volumes #000 and #009. I only care about the #000 volume and not the #009 volume. The #009 is on another Raidset with a separate set of disks.

I would suggest getting a new PSU, or using one temporarily, just to get the array back in a normal state before screwing with the order. When I had a power outage, my DAS was not on a UPS and the same thing happened to me as happened to you.

With everything being in the same chassis, I just shut down the machine, turned the enclosure back on, and when it came back up everything came back like normal (in the Normal state):

Code:
2012-03-05 05:21:45  H/W MONITOR      Raid Powered On
2012-03-05 05:04:56  001.001.001.003  HTTP Log In
2012-03-05 02:57:42  Enc#3 PHY#14     Device Removed
2012-03-05 02:57:42  Enc#3 PHY#13     Device Removed
2012-03-05 02:57:42  Enc#3 PHY#12     Device Removed
2012-03-05 02:57:42  Enc#3 PHY#11     Device Removed
2012-03-05 02:57:42  Enc#3 PHY#10     Device Removed
2012-03-05 02:57:42  Enc#3 PHY#9      Device Removed
2012-03-05 02:57:42  Enc#3 PHY#8      Device Removed
2012-03-05 02:57:42  Enc#3 PHY#7      Device Removed
2012-03-05 02:57:42  Enc#3 PHY#6      Device Removed
2012-03-05 02:57:42  Enc#3 PHY#5      Device Removed
2012-03-05 02:57:42  Enc#3 PHY#4      Device Removed
2012-03-05 02:57:42  Enc#3 PHY#3      Device Removed
2012-03-05 02:57:42  Enc#3 PHY#2      Device Removed
2012-03-05 02:57:42  Enc#3 PHY#15     Device Removed
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:42  DATA 2 VOLUME    Volume Failed
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:42  DATA 2 VOLUME    Volume Failed
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:42  DATA 2 VOLUME    Volume Failed
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:42  DATA 2 VOLUME    Volume Failed
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:42  DATA 2 VOLUME    Volume Failed
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:42  DATA 2 VOLUME    Volume Failed
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:42  DATA 2 VOLUME    Volume Failed
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:42  DATA 2 VOLUME    Volume Failed
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:42  DATA 2 VOLUME    Volume Failed
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:42  DATA 2 VOLUME    Volume Failed
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:42  DATA 2 VOLUME    Volume Failed
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:42  DATA 2 VOLUME    Volume Failed
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:42  DATA 2 VOLUME    Volume Failed
2012-03-05 02:57:42  Enc#3 PHY#1      Device Removed
2012-03-05 02:57:42  Enc#2 PHY#15     Device Removed
2012-03-05 02:57:42  Enc#2 PHY#14     Device Removed
2012-03-05 02:57:42  Enc#2 PHY#13     Device Removed
2012-03-05 02:57:42  Enc#2 PHY#12     Device Removed
2012-03-05 02:57:42  Enc#2 PHY#11     Device Removed
2012-03-05 02:57:42  Enc#2 PHY#10     Device Removed
2012-03-05 02:57:42  Enc#2 PHY#9      Device Removed
2012-03-05 02:57:42  Enc#2 PHY#8      Device Removed
2012-03-05 02:57:42  Enc#2 PHY#7      Device Removed
2012-03-05 02:57:42  Enc#2 PHY#6      Device Removed
2012-03-05 02:57:42  Enc#2 PHY#5      Device Removed
2012-03-05 02:57:42  Enc#2 PHY#4      Device Removed
2012-03-05 02:57:42  Enc#2 PHY#3      Device Removed
2012-03-05 02:57:42  Enc#2 PHY#2      Device Removed
2012-03-05 02:57:42  Enc#2 PHY#1      Device Removed
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:42  90TB RAID SET    RaidSet Degraded
2012-03-05 02:57:00  DATA 2 VOLUME    Volume Failed
2012-03-05 02:57:00  DATA 2 VOLUME    Volume Failed
2012-03-05 02:57:00  DATA 2 VOLUME    Volume Failed
2012-03-05 02:56:59  DATA 2 VOLUME    Volume Failed
2012-03-05 02:56:59  DATA 2 VOLUME    Volume Failed
2012-03-05 02:56:58  DATA 2 VOLUME    Volume Failed
2012-03-05 02:56:58  DATA 2 VOLUME    Volume Failed
2012-03-05 02:56:58  DATA 2 VOLUME    Volume Failed
2012-03-05 02:56:57  DATA 2 VOLUME    Volume Failed
2012-03-05 02:56:57  DATA 2 VOLUME    Volume Failed
2012-03-05 02:56:56  DATA 2 VOLUME    Volume Failed
2012-03-05 02:56:56  DATA 2 VOLUME    Volume Failed
2012-03-05 02:56:56  DATA 2 VOLUME    Volume Failed
2012-03-05 02:56:55  DATA 2 VOLUME    Volume Failed
2012-03-05 02:56:55  DATA 2 VOLUME    Volume Degraded
2012-03-05 02:56:55  DATA 2 VOLUME    Volume Degraded
2012-03-05 02:56:55  Enclosure#2      Removed
2012-02-15 22:10:03  001.001.001.003  HTTP Log In
2012-02-14 11:18:42  001.001.001.003  HTTP Log In
2012-02-04 22:37:05  001.001.001.003  HTTP Log In
2012-01-14 22:49:09  001.001.001.003  HTTP Log In
 
Yeah, just wait to get a PSU or borrow one from another system.

Don't make a simple problem a complicated one by moving stuff around.
 
Yeah, just wait to get a PSU or borrow one from another system.

Don't make a simple problem a complicated one by moving stuff around.

+1 on this. Especially if anything happens and you lose the raid config in the switchover, you MUST have the same order. In any case, it is always a good idea to have the drive/cable order and all the characteristics (stripe size/block size/etc) of all your arrays written down and filed somewhere.
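On the write-it-down point, the Areca CLI makes it painless to snapshot the config into a text file you can file away. A rough sketch, assuming cli32/cli64 is installed and the subcommand names match your version:

Code:
{
  ./cli64 sys info     # controller model, firmware, serial
  ./cli64 rsf info     # raid sets and member counts
  ./cli64 vsf info     # volume sets: raid level, stripe size, capacity
  ./cli64 disk info    # physical slot-to-drive mapping
} > areca-config-$(date +%F).txt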
 
Hello everybody. I'm a newbie, and I'm not very good at English.
I use:
Asus M5A99X EVO (sAM3+, AMD 990X/SB950, PCI-Ex16)
ARC-1882I PCI-Express 2.0 x8 Low Profile SATA / SAS 8 Ports 6Gb/s
HP SAS expander


After some read errors the RAID 5 became degraded.
I fixed that using these commands:
LeVeL2ReScUe -> SIGNAT

Now it says that it's in a Normal state, BUT in one folder that used to hold about 50,000 folders there are now fewer than 2,000 (only folders whose names start with A).
Free space shows the same as it was before the degrade. I don't know how to recover the almost 50,000 missing folders.

And one more question: when I plug in 2 RAID arrays through the SAS expander, Windows 7 can't see them, but in the EFI BIOS it's OK.

Please help
 
Hello everybody. I'm a newbie, and I'm not very good at English.
I use:
Asus M5A99X EVO (sAM3+, AMD 990X/SB950, PCI-Ex16)
ARC-1882I PCI-Express 2.0 x8 Low Profile SATA / SAS 8 Ports 6Gb/s
HP SAS expander


After some read errors the RAID 5 became degraded.
I fixed that using these commands:
LeVeL2ReScUe -> SIGNAT

Now it says that it's in a Normal state, BUT in one folder that used to hold about 50,000 folders there are now fewer than 2,000 (only folders whose names start with A).
Free space shows the same as it was before the degrade. I don't know how to recover the almost 50,000 missing folders.

And one more question: when I plug in 2 RAID arrays, Windows can't see them, but in the EFI BIOS it's OK.

Please help

A few suggestions... If you just had a drive start kicking up errors and drop out of the array, replacing the drive as a spare and letting it rebuild itself is the standard methodology. Don't mess with documented (or undocumented) recovery methods until/unless you have already tried the standard practices; you can actually make things worse. First, make sure that your HBA is updated to the latest code (all 4 files). As to the missing files/folders, do you have a full backup you can restore from? How large would the missing 48,000 folders of information have been? Can you please post the entire log from the card so we have a better idea of what is going on and can offer more ideas? What manufacturer/model/capacity drives are in the array(s), and how many do you have?
As to not being able to see the second array, are both arrays connected to the Areca/expander chain? What OS version are you running? Did you prepare/partition/format the second array? How large was the second array?
 