ARECA Owner's Thread (SAS/SATA RAID Cards)

Also, is it just me or does the ArcHttpProxyServer service on Windows seem to crash a lot?
It's gotten better, but it's still unstable, and the whole proxy arrangement is a real contraption of a mechanism for getting data out of the card. I wish they'd come up with something more reliable and simpler to configure.
 
It's gotten better, but it's still unstable, and the whole proxy arrangement is a real contraption of a mechanism for getting data out of the card. I wish they'd come up with something more reliable and simpler to configure.

Doesn't get much more bulletproof than the out-of-band Ethernet on Areca cards -- it has never crashed on me once in 6+ years across dozens of different Areca cards, and I use it almost every day. That's the one killer feature I can't live without, and it makes it impossible for me to use something like an LSI, or an OEM LSI such as the IBMs with the same SAS2108 under the hood that can be had for cheaper, because the ability to look at my controllers and arrays away from the server -- on a tablet or laptop -- is priceless.
 
I set up Hdd Power Management the other day and used these settings:

Time To Hdd Low Power Idle: 2
Time To Hdd Low RPM Mode: 10
Time To Spin Down Idle HDD: 20

I wouldn't mess with anything but Time To Spin Down Idle HDD. The reason is that very few drives actually support all those power modes, and some may behave strangely if they don't support them but are issued the commands anyway. So Low Power Idle and Low RPM Mode should be disabled, at least until the dropouts disappear, so you can rule them out. Those modes are also the epitome of splitting hairs in terms of actual energy conservation -- especially with a low drive count.

Lastly, make sure that 1280ML has the latest V1.49 firmware.
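If you want to check support up front rather than waiting for dropouts: newer smartmontools builds can report whether a drive advertises the APM feature at all, which is a rough proxy for how gracefully it will take power-mode commands. A minimal sketch, assuming your controller is reachable through smartctl's Areca passthrough (more on that later in this thread), for the 1st disk behind the 1st enclosure on the 1st controller:

smartctl -g apm -d areca,1/1 /dev/arcmsr0

A drive that reports the APM feature as unavailable is a poor candidate for the low-power idle and low-RPM modes.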
 
Those modes are also the epitome of splitting hairs in terms of actual energy conservation -- especially with a low drive count.
There are power savings, and there's noise reduction. I've found it's usually a lot less trouble to just get newer drives that don't make a racket and consume less power in the first place. Futzing around to save the typical 10 watts a drive might consume is often more trouble than it's worth -- certainly in terms of the amount of my time wasted wrestling with it.
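(For scale, assuming a drive really does idle at about 10 W: 10 W x 8,760 h/yr = 87.6 kWh/yr, or roughly $10 a year per drive at a typical $0.12/kWh.)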
 
Well sure, you can save the 7-8 running watts by spinning a drive down -- that wasn't in dispute. The point is that low power idle and low RPM are relatively meaningless in terms of any perceived benefit, even when they do work on the small subset of drives that support those modes.
 
ARC1880 and ARC1882 V1.51 Firmware Release Note (2012-07-04):


2012-2-21
1 Fix PLI SATA/SMP passthrough error not reported to upper layer (for SATA write same + WDC4000YS: the WDC4000YS reports support for write same but returns an error if write same SCT is used)

2012-2-24
1 ARC1882:Add more timeout setting for TOSHIBA
2 Add SCSI Passthrough

2012-3-7
1 Change BUZZER logic when HDD unplug or rebuild complete


2012-3-13
1 Add PL library of 13.0.2.0 support
2 Add MPT of 13.0.1.0 support

2012-3-14

1 Enable PCI-E 3.0 support

2012-3-22

1 Fix ARC-1882 old firmware may corrupt the SBR (24C64)

2012-3-23

1 Fix NESSUS testing of TCPIP warning, filter out source IP of 224.0.0.0->239.255.255.255

2012-4-20
1 Fix write same command timeout handling (pl library)
2 Fix 3HDD R6 in terminal config when quick create is used
3 Disable ASPM

2012-4-25
1 Add SUPPORT_512E_DISK
2 Add SUPPORT_HOST_PORT_CONFIG
2012-5-4
1 Add LOG_FAILED_DISK

2012-5-23

1 HITACHI 2T/3T HDD SCT Write Same Problem

2012-5-24
1 SUPPORT_512E_DISK:display HDD attribute

2012-6-5
1 Add LOG_FAILED_DISK (Failed disk is logged in FLASH)
2 FAILED_TIMEOUT_HDD_REMOVED
3 Add "Hot Plugged Disk For Rebuilding" option

2012-6-11

1 Fix event log: if only the volume is revived and no rebuild is required, do not log a rebuild raidset event
2 ARC1882: PCIe Gen3 support, add option to disable Gen3 (use Gen2)

2012-6-20
1 Fix SATA passthrough write same command mixed with SMART read attribute
2 Patch ARC1680/1880/1882 CPU fan function


2012-6-28
1 Fix SCSI Pass through
2 Rebuild V1.51 2012-06-28



Download:

ARC1880: http://www.areca.us/support/download/RaidCards/BIOS_Firmware/ARC1880-151-20120704.zip

ARC1882: http://www.areca.us/support/download/RaidCards/BIOS_Firmware/ARC1882-151-20120704.zip

ArcHTTP v2.2.1-20120503: ftp://ftp.areca.com.tw/RaidCards/AP_Drivers/Windows/HTTP/v2.2.1_120503.zip

CLI v1.9.0-20120503: ftp://ftp.areca.com.tw/RaidCards/AP_Drivers/Windows/CLI/v1.9.0_120503.zip
 
Yep, finally... was playing around with 1.51 earlier today; thanks for posting the links. I successfully verified that smartmontools can now see through an Areca 1880 + expander to the SMART info of individual disks -- not just pass-through disks but even those in raidsets -- and can run drive tests, etc. I tested with the Windows version of smartctl, using the following command to print all SMART info for the 1st disk of the 1st enclosure (expander) on the 1st Areca controller:

smartctl -a -d areca,1/1 /dev/arcmsr0

the numeric values in the example mean the following:

smartctl -a -d areca,<drive#>/<expander#> /dev/arcmsr<controller#, i.e. 0 for the first, 1 for the second, etc.>
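For example, to kick off a short self-test (one of the drive tests mentioned above) on the 3rd disk behind the 2nd expander of the first controller -- the disk and expander numbers here are just for illustration:

smartctl -t short -d areca,3/2 /dev/arcmsr0

Then run smartctl -a against the same disk a few minutes later and check the self-test log section for the result.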
 
1.51 also added about four new options to the System Configuration page, including an option to either fail a drive or just alert in the event of SMART errors, and the Physical Drives tab also got a few new options, including manually failing a disk and reactivating a failed disk. They also added Seagate to the list of supported drives for manually setting link speed on the Advanced Configuration page. I'm sure there's more I'm not seeing; I haven't gone through the whole changelog brick by brick.
 
Doesn't get much more bulletproof than the out-of-band Ethernet on Areca cards -- it has never crashed on me once in 6+ years across dozens of different Areca cards, and I use it almost every day.
Out-of-band definitely works better, but we've had the interfaces go non-responsive in high-load applications. The real problem, though, is that not all the cards have this interface.
 
Doesn't get much more bulletproof than the out-of-band Ethernet on Areca cards -- it has never crashed on me once in 6+ years across dozens of different Areca cards, and I use it almost every day. That's the one killer feature I can't live without, and it makes it impossible for me to use something like an LSI, or an OEM LSI such as the IBMs with the same SAS2108 under the hood that can be had for cheaper, because the ability to look at my controllers and arrays away from the server -- on a tablet or laptop -- is priceless.

I switched to the out-of-band management and it seems pretty solid; strangely, the ArcHttpProxyServer service hasn't crashed since switching either.

I wouldn't mess with anything but Time To Spin Down Idle HDD. The reason is that very few drives actually support all those power modes, and some may behave strangely if they don't support them but are issued the commands anyway. So Low Power Idle and Low RPM Mode should be disabled, at least until the dropouts disappear, so you can rule them out. Those modes are also the epitome of splitting hairs in terms of actual energy conservation -- especially with a low drive count.

Lastly, make sure that 1280ML has the latest V1.49 firmware.

Yeah, it seems a bit silly now that I think about it, since they don't use much power anyway, but it's odd that only the newer Samsung/Seagate drives were messed up by it. The firmware is up to date.

I think I worked out what was happening: the power management settings were making the Samsung/Seagate drives drop out momentarily, which resulted in the array running for a short time in a degraded state until a second drive dropped. I watched the drive LEDs, and they showed the same sort of thing as when booting, with the LEDs lighting up to show drive spin-ups. The thing was, the power settings were not being applied until a reboot, and I hadn't rebooted until that day -- typical PEBKAC :D.

Here's some screenshots from when it was happening: http://imgur.com/a/NumyQ
 
Would having an older SATA drive connected to a 1260 pull down the performance of any others on the same card, with the drive in JBOD mode? I'm looking to use one for ZFS and want to pass through all the drives. I haven't narrowed down why, but a bit of testing in PassMark on Windows showed a pair of drives as being QUITE a lot slower than the same ones connected to an M1015 (also in JBOD mode). The only difference was that I had an older 250GB drive still connected to the 1260. I have not yet gone back and tested without that drive connected. It wasn't being used and had no partitions on it. I'm just wondering if its presence would have affected how the 1260 performed with the other drives, and whether there are any configuration options that would improve the situation.

I already have the 1260 and the M1015 in a 24-bay chassis. I'm trying to determine how to arrange the drives to get the best performance from a 6-drive raidz2 array. Do I put them all on the M1015 or all on the 1260, or spread them between the two? An alternative would be to remove the 1260 and add two more M1015 controllers instead.
 
I switched to the out-of-band management and it seems pretty solid; strangely, the ArcHttpProxyServer service hasn't crashed since switching either.
That's not strange, as the OOB management interface doesn't use the in-band proxy server.
 
Would having an older SATA drive connected to a 1260 pull down the performance of any others on the same card, with the drive in JBOD mode? I'm looking to use one for ZFS and want to pass through all the drives. I haven't narrowed down why, but a bit of testing in PassMark on Windows showed a pair of drives as being QUITE a lot slower than the same ones connected to an M1015 (also in JBOD mode). The only difference was that I had an older 250GB drive still connected to the 1260. I have not yet gone back and tested without that drive connected. It wasn't being used and had no partitions on it. I'm just wondering if its presence would have affected how the 1260 performed with the other drives, and whether there are any configuration options that would improve the situation.

I already have the 1260 and the M1015 in a 24-bay chassis. I'm trying to determine how to arrange the drives to get the best performance from a 6-drive raidz2 array. Do I put them all on the M1015 or all on the 1260, or spread them between the two? An alternative would be to remove the 1260 and add two more M1015 controllers instead.

You're comparing apples and oranges -- the SAS2008 controller on the M1015, even though it doesn't do parity-based RAID, is years newer than the RoC on the Areca 1260, so it has more internal bandwidth: it's a bigger pipe that can pass more traffic in and out in less time. For ZFS you'd absolutely want to go with multiple M1015s, or an M1015 plus an HP or Intel expander. As for mixing different-speed SATA disks like SATA-I and SATA-II on the same controller, that's something to avoid. Mixing SATA-II and SATA-III on the same controller, however, doesn't make much difference, since spinning disks aren't saturating SATA-II bandwidth anyway and probably won't until 1.25/1.33/1.5TB platters.
 
That's not strange, as the OOB management interface doesn't use the in-band proxy server.

There are two different IP addresses, one for in-band and one for OOB, right? Unless the in-band IP address now redirects to OOB, the in-band service on Windows has been running for a few days without a single crash, which is unusual.
 
Will the OS proxy even speak to an Ethernet-equipped card? I seem to recall ones that wouldn't, but I wouldn't swear to it...
 
Yes. You can use the proxied web interface to disable, reset, or configure the OOB interface.
 
I've got an 1880ix-24, firmware v1.49, and I've added a disk to the raidset that holds the "last volume set", which the manual describes as supported.

I've successfully expanded that raidset, but when I clicked through "Modify Volume Set" to use the new space, nothing happened.

I did get an email notification confirming a "Modify Volume" event was registered, but if I go back into "Modify Volume Set" it still shows the volume as expandable and at the original volume size.

Do you have to reboot for these cards to pick up the change, or did I miss a step?
 
I've got an 1880ix-24, firmware v1.49, and I've added a disk to the raidset that holds the "last volume set", which the manual describes as supported.

I've successfully expanded that raidset, but when I clicked through "Modify Volume Set" to use the new space, nothing happened.

I did get an email notification confirming a "Modify Volume" event was registered, but if I go back into "Modify Volume Set" it still shows the volume as expandable and at the original volume size.

Do you have to reboot for these cards to pick up the change, or did I miss a step?


Are you running Windows? It's a three-step process, and you need to let each step complete before advancing. First, make a backup of your data -- if something is going to go wrong, this is when it will. Next, add the drive and expand the RAIDset. After that, expand the volumeset. Both of those are operations that can take a while; make sure the operation you initiated has completed before you go on to the next. You probably didn't wait for the modify volume set to complete. Lastly, you need to use the diskpart.exe utility (again, for Windows) -- a sample session follows the steps below:
Run a Command Prompt (run cmd.exe)
diskpart.exe (starts the diskpart utility)
help (displays a list of commands, for reference only)
list volume (lists the volumes available so you can figure out the number of the volume you want to expand)
select volume # (# being the number of the volume you want to extend -- in this case, your RAID array)
extend (extends the selected volume into the available free space you just added)
list volume (to make sure the volume is in fact bigger now)
exit (to exit diskpart)
Reboot your computer
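A sample session might look like the following -- the volume number (2) and the sizes are made up for illustration:

C:\> diskpart
DISKPART> list volume
  Volume ###  Ltr  Label  Fs    Type       Size
  Volume 0    C    OS     NTFS  Partition  120 GB
  Volume 2    E    Data   NTFS  Partition   16 TB
DISKPART> select volume 2
Volume 2 is the selected volume.
DISKPART> extend
DiskPart successfully extended the volume.
DISKPART> exit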

If any of this does not go as expected, come back and chat more before you continue, to minimize the chance of any potential nasties creeping into the operation.
 
Yes, it's Windows 2008 R2. It's also a data volume separate from the OS volume (and raidset).

1) Backed up data - done, did not reboot.

2) Added disk & Expanded raid set. - done, did not reboot. Confirmed in log:
2012-08-02 08:17:51 RaidSet-Data Expand RaidSet
Raid set expansion also logged these, which I did not initiate:
2012-08-02 08:17:53 VOL-Data Start Migrating
2012-08-03 07:15:47 VOL-Data Complete Migrate 022:57:53

3) After "Complete Migrate 022:57:53" happened (still not rebooted), I went under "Volume Set Functions" and I do not have an expand-volume option. I do have "Modify Volume Set", so I did that and upped my volume capacity to occupy all the space listed as available -- 16000 upped to 18000. Also, I did pick LBA back when I originally created this set.
This was logged as I clicked submit:
2012-08-03 14:07:39 VOL-Data Modify Volume

But nothing is happening. If I go back into "Modify Volume Set" the volume capacity is back at 16000.

Windows does not see the extra space, which is expected since the card shows it as free space too. But I still have not rebooted.
 
Reboots have nothing to do with the process; the host OS has no part in the controller's internal operations. Can you post a screenshot of the Modify Volume Set page, as well as a screenshot of Windows Disk Management (Start -> Run -> diskmgmt.msc)?

On a side note, I hope that when you formatted the Windows partition you didn't leave the default 4K cluster size, since that imposes a 16TB partition size limit. Regardless, diskmgmt.msc should show a segment of free space to the right of the existing partition if everything is good on the controller end -- you just wouldn't be able to extend the existing 16TB partition into it if you did in fact format with 4K clusters.

And after you resolve this issue, get yourself onto the latest firmware -- 1.50 has been out for a while, and 1.51 was released two days ago. Also make sure you're on the latest storport driver; 6.20.0.22 is the latest.
 
How do I recreate a RAID set that is not being recognized?

It's Friday afternoon and one of my servers crashed badly. I have an 1880ix-24 RAID controller with 3 separate RAID sets. After rebooting a couple of times, I got the system to recognize 2 of my RAID sets, but the 3rd is still unrecognized. All 8 disks in this 3rd RAID set show up as Free but not part of the RAID set. I do not want to do anything to corrupt the data, so I'm looking for help on how to force the system to reestablish the 3rd RAID set.
 
@jones: I would power down and disconnect the drives in the working RAID sets (if you've got hot-swap drive trays, just pull them out an inch so they're disconnected); that way you isolate just the problematic raidset to work on. Since the rescue commands are global, you don't want your working raidsets caught up in that process.

Next, go to "Raid Set Functions" -> "Rescue Raid Set", type RESCUE, <submit>, and reboot, and see if it picks it up. That's a read-only operation which doesn't modify any disks. Report back either way with a screenshot of your Information -> Raid Set Hierarchy page in the webGUI.
 
Pulled all the other drives, did the RESCUE command and rebooted, without success. Here is the current RAID hierarchy view.
raid1a.jpg
 
Reboots have nothing to do with the process; the host OS has no part in the controller's internal operations. Can you post a screenshot of the Modify Volume Set page, as well as a screenshot of Windows Disk Management (Start -> Run -> diskmgmt.msc)?

On a side note, I hope that when you formatted the Windows partition you didn't leave the default 4K cluster size, since that imposes a 16TB partition size limit. Regardless, diskmgmt.msc should show a segment of free space to the right of the existing partition if everything is good on the controller end -- you just wouldn't be able to extend the existing 16TB partition into it if you did in fact format with 4K clusters.

And after you resolve this issue, get yourself onto the latest firmware -- 1.50 has been out for a while, and 1.51 was released two days ago. Also make sure you're on the latest storport driver; 6.20.0.22 is the latest.

I believe I used 8192 when I originally set it up in diskpart and set the offset, because I remember reading back then about 4K limiting it to 16TB, and I knew this volume would grow above 16TB because of the slots I had available in the chassis. Here are the screenshots:
modifyvolumeset.jpg


diskmgr.jpg
 
Pulled all the other drives, did the RESCUE command and rebooted, without success. Here is the current RAID hierarchy view.

raid1a.jpg
 
Yes, it's Windows 2008 R2. It's also a data volume separate from the OS volume (and raidset).

1) Backed up data - done, did not reboot.

2) Added disk & Expanded raid set. - done, did not reboot. Confirmed in log:
2012-08-02 08:17:51 RaidSet-Data Expand RaidSet
Raid set expansion also logged these, which I did not initiate:
2012-08-02 08:17:53 VOL-Data Start Migrating
2012-08-03 07:15:47 VOL-Data Complete Migrate 022:57:53

3) After "Complete Migrate 022:57:53" happened (still not rebooted), I went under "Volume Set Functions" and I do not have an expand-volume option. I do have "Modify Volume Set", so I did that and upped my volume capacity to occupy all the space listed as available -- 16000 upped to 18000. Also, I did pick LBA back when I originally created this set.
This was logged as I clicked submit:
2012-08-03 14:07:39 VOL-Data Modify Volume

But nothing is happening. If I go back into "Modify Volume Set" the volume capacity is back at 16000.

Windows does not see the extra space, which is expected since the card shows it as free space too. But I still have not rebooted.

Yeah, you are caught in the cluster-size trap. When you originally created the RAIDset and the volume, you probably chose 4096 as your default cluster size. If you do that, there is a 16TB limit on the volume size you can have. You need to choose 8K for a 32TB max, or 16K for a 64TB limit. Changing cluster size after the fact is not recommended, and reclustering has had uneven results. Since you have a full backup, you are best off either creating another volume and continuing there, or blowing the array away and repopulating it after creating a new volumeset.
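(The arithmetic behind those limits, assuming standard NTFS with its 32-bit cluster addressing: max volume size = cluster size x 2^32 clusters, so 4KB x 2^32 = 16TB, 8KB x 2^32 = 32TB, and 16KB x 2^32 = 64TB. Cluster size is chosen at format time, e.g. format E: /FS:NTFS /A:16K.)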
 
@mwroobel, if you mean in Windows, no, I did not choose the default 4K.
Here is my volume:

bytespercluster.jpg
 
Yeah, it's not a cluster-size problem, because diskmgmt.msc would still show a block of free space to the right of the existing partition; you just wouldn't be able to extend into it if the cluster size was 4K.

So based on the screenshot of the Modify Volume Set page above, in your case the normal process would be to type in 18000.0, click confirm, and click submit. Then on the Raid Set Hierarchy page you'd see a completion percentage as it works.

If it still doesn't work, then if it were me I'd power the system all the way down so the card resets, power back up, and try again. If you were to contact Areca support, my *guess* is they'd tell you to get on the latest firmware, which isn't a bad idea, but I don't like introducing extra variables in the middle of a migration unless I've tried everything else. And there's 100% no issue with firmware V1.49 and expanding volume sets, because I've done it many times on that rev.
 
Pulled all the other drives, did the RESCUE command and rebooted, without success. Here is the current RAID hierarchy view.

The second step is to try LeVeL2ReScUe in the Rescue Raid Set menu -- yes, that's case sensitive and must be typed exactly as-is. Reboot and see if it picks it up.
 
Not before the level2 command, no. SIGNAT writes the running RAID configuration back to the disks, which you ONLY want to do if one of the rescue commands picks the config back up successfully.

If you were to do SIGNAT before trying a LeVeL2ReScUe, you could wreck your chances of it recovering signatures, since it would overwrite potentially useful metadata with essentially blank data.

And the last resort is a NO-INIT raid creation, which relies on a) the disks still being in the same order as at the time of the original raidset creation, and b) remembering the original volume attributes, such as stripe size, RAID level, etc.
 
Yeah, it's not a cluster-size problem, because diskmgmt.msc would still show a block of free space to the right of the existing partition; you just wouldn't be able to extend into it if the cluster size was 4K.

So based on the screenshot of the Modify Volume Set page above, in your case the normal process would be to type in 18000.0, click confirm, and click submit. Then on the Raid Set Hierarchy page you'd see a completion percentage as it works.

If it still doesn't work, then if it were me I'd power the system all the way down so the card resets, power back up, and try again. If you were to contact Areca support, my *guess* is they'd tell you to get on the latest firmware, which isn't a bad idea, but I don't like introducing extra variables in the middle of a migration unless I've tried everything else. And there's 100% no issue with firmware V1.49 and expanding volume sets, because I've done it many times on that rev.

I did type in 18000.0 (technically I just changed the 6 to an 8) and clicked the checkbox, but it drops off the point zero.

I just walked through it again and grabbed screen shots.

01arecaconfirm.jpg


02volumesetmodified.jpg


Going right back into "Modify Volume Set" shows no modification was done.
03volumesetunmodified.jpg


And a shot of the log as proof of clicking submit.
04arecalog.jpg


Powering it down is similar enough to a reboot, which is what I started out asking whether I should do; I just didn't want to reboot too soon. I will power it down instead, but I'll leave it as-is for now in case anyone else weighs in with additional suggestions.
 
I believe I used 8192 when I originally set it up in diskpart and set the offset, because I remember reading back then about 4K limiting it to 16TB, and I knew this volume would grow above 16TB because of the slots I had available in the chassis. Here are the screenshots:

Ah, OK. Usually when I hear about problems at 16,000 it's the cluster-size problem...
 
@Jones -- if the LeVeL2ReScUe doesn't work, I can help you rebuild the sector data manually instead of doing a NO-INIT. I had an issue where I upgraded the SAS firmware with the drives left plugged in, and had multiple raidsets and volume sets -- it nuked all of my data, but I was able to recover everything with a disk hex editor.

The important data, in my experience, is in sectors 1-3 (the first sector is 0).

In sector 1, row 250, the sixth column in is the disk 'number', starting at 00 -- by number I mean the order in which the disks sat in the raidset. Depending on what data is damaged, you may be able to get everything back.

Here are a few rows to show what I mean (sector 1):
200: 24 52 61 69 64 53 44 59 - 08 00 00 00 08 00 00 00 $RaidSDY........
210: FE 04 00 00 66 0A 66 0A - 76 44 76 44 76 44 67 0F þ...f.f.vDvDvDg.
220: 67 0F 76 44 00 00 00 00 - 00 00 00 00 00 00 00 00 g.vD............
230: 00 00 00 00 15 52 28 2C - D7 54 5F 6E 9F 6A 26 3F .....R(,×T_nŸj&?
240: 13 15 CF 83 00 00 00 00 - 00 00 00 00 00 00 00 00 ..Ïƒ............
250: 00 00 00 00 06 00 00 00 - 02 00 00 00 00 00 00 00 ................
260: 00 24 BA 03 00 00 00 00 - 53 53 44 2D 38 78 33 30 .$º.....SSD-8x30
270: 47 42 20 20 20 20 20 20 - 00 00 00 00 00 00 00 00 GB ........
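If I'm reading that dump against the rule above correctly, the disk number here is the 06 in row 250 (counting the offset label as the first column) -- meaning this particular member was the seventh disk, numbering from 00, of the SSD-8x30GB set whose name appears in rows 260-270.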

Sector 2

400: 24 56 6F 6C 75 6D 45 24 - 00 80 5B 1B 00 00 00 00 $VolumE$..[.....
410: E3 00 00 00 00 80 5B 1B - 00 00 00 00 E3 00 00 00 ã.....[.....ã...
420: 00 00 00 00 02 00 00 00 - 00 01 00 01 00 00 00 00 ................
430: 00 E0 00 01 53 53 44 2D - 38 78 33 30 47 42 20 20 .à..SSD-8x30GB
440: 20 20 20 20 34 30 34 38 - 65 61 31 34 38 34 34 35     4048ea148445
450: 37 38 31 36 00 00 00 00 - 00 00 00 00 00 00 00 00 7816............
460: 00 00 00 00 00 00 00 00 - 00 00 00 00 00 00 00 00 ................
470: 00 00 00 00 00 00 00 00 - 84 0E 8E B2 00 00 00 00 ........„.Ž²....


Send me a PM if the other methods don't work out -- or if anyone has questions, I don't mind helping/explaining. And yes, this was after emailing Areca and answering the same questions three times before they stopped replying.
 
Thanks for the suggestion.

Not sure if it is good or bad, but the card appears to have died completely, so that narrows down the problem. It hangs most of the time at "Waiting for F/W", and when it does boot, it fails all ports on any significant file transfer. I tried a different slot and disconnecting the SAS cables, with no effect. The slots work fine with another RAID card and a graphics card, so it doesn't appear to be a motherboard issue.

I'm in the Areca email support merry-go-round at the moment; the latest reply asks for the same screenshots I already sent them. UGHHH! I'll work with support a couple more days before probably going out and purchasing a new card to get the system back up in a timely manner. Since I want to get at the existing data, I am forced to stay with Areca.

My understanding is that the newer 1882 card should be able to pick up the RAID configuration from the drives, so I'll probably go in that direction.
 
Anyone have a mysterious beeping you think is from one of your Areca boards, but you can't pin down which one is doing it? And nothing shows up in the logs?

That's because it's not from the RAID boards... it's from an APC battery backup UPS.

The damned thing has been plaguing me for a week. Randomly I'd hear what sounded like an array failure, with no signs in any logs. The UPSes are mounted on a shelf in the same rack assembly, in a tight location, and there's a fair amount of ambient noise, so it's sometimes a little hard to find the source. I went one by one through everything on the rack and, lo and behold, an APC UPS showed a battery-replace icon. It beeps for a minute once every 5 hours, and it sounds exactly like the beeping of a failed array.

So at least my stress level over drive array failures has gone down. New batteries are on order...
 
Anyone have a mysterious beeping you think is from one of your Areca boards, but you can't pin down which one is doing it? And nothing shows up in the logs?.....

There's no way I'd want to listen for that, plus I'm not in the data center enough. I set all my controllers to disable that beep and configured the email alerts.
 
There's no way I'd want to listen for that, plus I'm not in the data center enough. I set all my controllers to disable that beep and configured the email alerts.
Heh, I 'hear you'. I've got mine configured for email alerts too, but I've found it's sometimes a little quicker to leave the beeps enabled -- somebody usually hears it before the email gets read.
 
Anyone have a mysterious beeping you think is from one of your Areca boards, but you can't pin down which one is doing it? And nothing shows up in the logs?

That's because it's not from the RAID boards... it's from an APC battery backup UPS.

The damned thing has been plaguing me for a week. Randomly I'd hear what sounded like an array failure, with no signs in any logs. The UPSes are mounted on a shelf in the same rack assembly, in a tight location, and there's a fair amount of ambient noise, so it's sometimes a little hard to find the source. I went one by one through everything on the rack and, lo and behold, an APC UPS showed a battery-replace icon. It beeps for a minute once every 5 hours, and it sounds exactly like the beeping of a failed array.

So at least my stress level over drive array failures has gone down. New batteries are on order...

Don't you have PowerChute set to monitor all your APC UPSes via RS-232 or USB? All of their better units also have the option of a management card, which gives the UPS Ethernet connectivity. That would have alerted you the second the problem started.
 
Feh, the PowerChute software is crap. Yes, some versions of it have worked now and then, but more often than not it's seemed like a better idea to just avoid using it entirely. That, and there's a generator on site, so the UPSes really only have to carry the systems for about 45 seconds; most of the time they just handle brownouts and quick blips.

Yes, units with a management card are a better idea, as would be budgeting for them instead of the ones we've got. Hindsight and all that...
 