ARECA Owner's Thread (SAS/SATA RAID Cards)

Hello,

I searched the thread and Areca's website but I can't find a definite answer. Hopefully someone here knows.

I own an ARC-1231ML. Do this card and its drivers work without hassle on Windows 8 64-bit and Windows Server 2012?

Many thanks!
FB.

Working fine for me in Windows 8.1 64 bit
 
Gurrag, any luck with the battery you ordered from eBay? My battery shows that it is working, but is bloated and looks as if it is about to explode. Way past time to replace! Would be nice to just replace the battery and not the whole circuit board as well.

I never actually placed the order, since the old Galaxy S battery has been working perfectly with the Areca battery circuit board. The one I linked from eBay should work just fine too, since it's a standard 3.7 V Li-ion battery. You might have to do some soldering, though, if the connectors don't match.
 
Thanks. I actually ordered it and it does work. Did have to solder on the connector because the one included was too small. Much cheaper option than replacing the battery and card!

WP_20140303_001.jpg


WP_20140303_002.jpg


WP_20140303_003.jpg


Areca.jpg
 
Hello folks. I installed a RAID 10 array using 4 Western Digital Red 2 TB hard drives, an Areca ARC-1214-4i RAID controller with an accompanying Areca ARC-6120-T121 Battery Backup Module. The RAID build was fine without a hiccup. The problem I'm having is that the RAID controller, when viewed in ArcHTTP utility, is giving me a temperature of 2 degrees Celsius or sometimes 160 degrees Celsius. In other words, it is producing bogus temps. It's also giving me zero voltage output on almost everything. Finally, it's telling me the battery is "not installed" although the battery itself appears to be charging (it produced a red light which eventually turned into a green light after a while).

The motherboard is an ASUS H87M-Plus and the processor an Intel 4770S Haswell processor. What gives? Any ideas? I keep getting under voltage warnings in my Windows Server 2012 notification area. The power supply is a Corsair RM-750 750 watt power supply.

The RAID controller is the only thing installed in the PCI-E 16x 3.0 slot.

Edit: Well, I found out what was happening. I noticed in the owner's manual that the cable running from the battery to the battery backup module circuit board should be plugged into socket J2. If you look very closely on the circuit board, you'll see the label. However, there's another socket, same shape and form, right next to the J2 socket. It's labeled J4. Well, the battery cable was plugged into socket J4. I took it out, plugged it into J2, re-installed the BBM into the computer, booted the server up, and went into ArcHTTP to view the temps and voltages. Everything was reading normally. It also says the battery is "installed" now. Hopefully this can help someone in the future.
 
I've had my Areca for a year and have used raid 0 and raid 5. This morning I wanted to try raid 30, but was not able to do that. Is there something else I need to do for raid 30?


PC with i7 3930k with 32GB memory and Win7-64
Areca 1882ix-12-4GB with 8x 1TB HDDs
 
I am going to answer my own question. I tried a few things, but what works is to create two RAID 3 raid sets, then create a RAID 30 raid set from them, then create a volume, then format it using Disk Management.
 
So, the new 1883 cards are just around the corner. The 1883ix models actually require you to plug in a 6 pin PCIe power connector due to them using almost 40 watts. Also new is a flash based backup module option (instead of the usual BBU, but that's still possible too). Performance seems to have gone up nicely too.
 
Areca is sending me two ARC-1883x cards and two of their 12Gb expanders (which allow increased performance from existing 6Gb disks on a 12Gb expander). This is for a 30-day evaluation.

Anyway I will be testing these out in a couple of different configurations and let you guys know how it goes.

I don't see the extra power connector on:

http://www.newegg.com/Product/Product.aspx?Item=N82E16816151150
 
And my shipment from Taiwan came in (after shipping to the wrong address first, which caused a few days of delays):






I get to use it for the next 30 days while I run lots of tests on it.

Two ARC-1883x cards and two ARC-8028-24 (12G SAS expanders). Very curious to see what kind of speeds I get. I will be doing tests with arrays of 30 disks or more.
 
Hey guys, a question about the areca BBU unit. I just had an external power failure on my server running a 1280ML + BBU, which I set up a couple years ago. While the power was out for ~5-7 seconds, the system was completely off - no lights on the BBU, no beeps, nothing. Power came back, and the motherboard light turned on as usual. After booting the server the only feedback I can see is that my battery is "Charging (96%)" and the Volume Set is being checked. How do I know the BBU actually worked?
 
Question:
I replaced my OS drive with an SSD and reinstalled WHS 2008 R2. The original OS drive was connected to a Gigabyte SATA port on the mobo. The new SSD is connected to an Intel SATA port on the mobo. After installation of all drivers, when I go into Disk Management, my existing 16TB RAID 6 volume isn't showing up. Instead I'm getting a dynamic foreign disk like so:
DyanmicDisk1_2.png


Booting into the RAID controller, everything is still normal, and the volume is intact. I still have the original OS drive; do I need to go in and unmount the RAID volume in the old Windows install, then remount it in the new OS installation? A little help, as I'm scared to do anything.
Using a 1680ix-24 and a GA-H55-USB3 mobo
 
To answer my own question: all I did was click "Import Foreign Disk" and the drive appeared! Victory! I was hesitant about trying anything. There's no undo button for messing up that much data. Thx for your help.
 
How do I get ArcHTTP working on Windows 8.1? I've got an older 1220 card, and the CLI works fine. I downloaded winhttp_V2.4.0_131028.zip and ran the installer. When I run ArcHttpSrvGUI, it gives an icon in the task bar. Clicking on that icon results in a message box saying "No Service Found".

Code:
---------------------------
ArcHttpSrvGUI
---------------------------
No Service Found
---------------------------
OK   
---------------------------

How do I get a working install?
 
I'm looking for feedback on my expansion plan before I dig a foot and shoot myself in the hole ;-)

Running a pair of ARC-1680 RAIDS here, primary and backup. Each card is in a SuperMicro 1U, with a Habey 10-drive box attached. I've just added 2 drives to the primary and finished integration, now to resize the single RAID 6 Volume Set from 12000.0GB to 18000.0GB, not planning to change any parameters, just the size. The volume hosts a single LVM2 pv, the volume group has a logical volume, formatted as ext4. It hosts variety of NFS shares, actively used by 6 people including homedirs.

Live option: Since I have a good backup and the RAID Set integration ran in the background without a hitch in 54.4 hours, I could resize the Volume Set in the background, then run pvresize, then lvresize, then resize2fs. All of these tools are robust and should handle a live expansion without a hitch, right?
Pros: Much simpler if it works, and I maintain my nightly backups to the backup RAID throughout the process (I'm estimating 2.5 days per resize operation but don't really know).
Cons: There are 4 resize operations to run, if any operation fails I could lose a day's work.
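For what it's worth, the live-expansion sequence described above can be sketched as a short shell recipe. All names here are placeholders (/dev/sdb for the device the Volume Set exports, vg0/lv0 for the volume group and logical volume); adjust to your actual layout before running anything.

```shell
# Grow the LVM stack after the Areca Volume Set resize completes.
# Placeholders: /dev/sdb = device exported by the volume set,
#               vg0/lv0  = volume group / logical volume on it.
pvresize /dev/sdb                   # grow the physical volume to the new device size
lvresize -l +100%FREE /dev/vg0/lv0  # hand the new extents to the logical volume
resize2fs /dev/vg0/lv0              # grow ext4 to fill the LV (works while mounted)
```

Running each step separately and checking `pvs`, `lvs`, and `df -h` between them makes it easy to see exactly where a failure happened, which matters given the "lose a day's work" concern.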

Off-line option: Make everyone log out, run an rsync to the backup RAID (1-2 hours), bring up the backup RAID as the active system. Resize the primary, reverse the RAID swap, etc.
Pros: Feels safer, will finish faster.
Cons: With the backup RAID live and the primary off-line, I won't get daily backups to a second device until I'm done. The backup RAID has been less stable than the primary, it will "fail" a drive and start integrating the Hot Spare. The failed drive invariably tests OK & becomes the new Hot Spare. This happens about once a month (no consistent pattern as to which drive or slot will fail).

Any thoughts?
 
I have an Areca 1261, every time the host reboots, for windows updates or whatever, the raid controller has to do a 'check' which always completes with no errors, but it takes 12 hrs. Is there a setting somewhere where I can stop it from doing a check every time the host reboots?
 
OK, I never had this problem before when expanding my RAID 6 HW RAID. The migration part went well with no errors, but now when I modify the volume set from 28000.0 to 30000.0 and submit, it remains 28000.0. I can't figure out why this is happening. The array is built on 17x 2TB Hitachi drives.
 
I ran into this today also. I couldn't select Background Initialization either. Following the advice of older threads on this forum, I restarted my server and used the BIOS menus to get it done. I was able to select Background; so far no problems. YMMV.
 
Can I recover an accidentally deleted RAID volume set?

Yes, but you want to make sure none of the disk order has changed since its initial creation, and the volume set has to be re-created in an identical way, with the exact same settings (16 or 128 LUN capable, size, RAID level, stripe size, etc.). When the volume sets are being re-created, 'No Init (Rescue)' should be selected.

If disks have failed in the past, it can be more complicated, especially if you had hot spares or more than 2 disks failed at one point (which can also change the disk order).
 
houkouonchi - thanks for pointing me in the right direction. I have no signs of failed disks, now or in the past, just a disk add about a month and change ago, and the other day I added another drive, hence how I accidentally selected Delete instead of Modify.

What I see different now that it has completed initialization is that the Volume Capacity shows 2199.0GB and the Max Capacity Allowed 30000.0GB, whereas before the accident the Volume Capacity was 28000.0GB and the Max Capacity Allowed 30000.0GB.

Should I try and modify the Volume set again and try and get it back to 28000.0GB?

the array is built on 17x2TB Hitachi drives (if that matters)

Thanks again.
 
If it's parity and you re-created it and it initialized, then the data is probably all lost, as a parity initialization will overwrite data on the disks =( This is why I mentioned you have to choose 'No Init (Rescue)' when re-creating the volume sets.

Sounds like LBA64 was not selected for >2TB support when the volume set was created, hence the 2 TiB limit (2048 GiB / 2199 GB).
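For reference, that 2 TiB figure falls straight out of the 32-bit LBA limit: 2^32 addressable sectors at 512 bytes each. A quick shell check:

```shell
# 32-bit LBA limit: 2^32 sectors x 512 bytes/sector (4294967296 = 2^32)
bytes=$((4294967296 * 512))
echo "$bytes"                                # 2199023255552 bytes
echo "$((bytes / 1000000000)) GB"            # 2199 GB (decimal, as ArcHTTP reports)
echo "$((bytes / 1024 / 1024 / 1024)) GiB"   # 2048 GiB (binary)
```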
 
In a panic I may have re-created the raid set and ran the init... my heart is in my throat. I should have posted here before touching anything after I accidentally deleted the volume... so I take it I am now at a complete loss :(:(:(
 
In a panic I must have re-created the raid set and ran the init (I just looked at the logs)... my heart is in my throat. I should have posted here before touching anything after I accidentally deleted the volume... so I take it I am now at a complete loss :(:(:(
 
Someone can correct me if I am wrong, but when it's a parity RAID the controller has to make sure the array is consistent, so that a consistency check (against the parity) won't fail; part of the initialization is therefore writing over the data. It's technically possible it could just read the data and write new parity, which would mean you could be safe, but only if the settings were identical and it was overwriting existing parity rather than existing data. I don't think there's a good chance of that, since the new volume set was not LBA64 like the original. I am sorry to say I am pretty sure you are fux0rd, man =(

You could use Linux or some other utility that can read the raw blocks on the block device and verify whether they are all zeros or not.
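A minimal sketch of that check, assuming the array is visible to Linux as a block device (/dev/sdX below is a placeholder; substitute the device the volume set exports):

```shell
# Check whether the start of the device has been zeroed by the init pass.
# /dev/sdX is a placeholder -- point it at the real block device.
DEV=/dev/sdX
# Count non-zero bytes in the first 64 MiB.
NONZERO=$(head -c $((64 * 1024 * 1024)) "$DEV" | tr -d '\0' | wc -c)
if [ "$NONZERO" -eq 0 ]; then
    echo "first 64 MiB are all zeros - consistent with an init pass"
else
    echo "$NONZERO non-zero bytes found - some data may survive"
fi
```

If the beginning of the device reads back as all zeros, the filesystem metadata there is gone; non-zero regions further in may still be worth handing to a recovery tool.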
 
Yeah, it looks like I am going to be depleting a bottle this weekend as I mourn. I can't believe I was so tired/stupid as to do something like this. This has to be one of the hardest lessons I've learned.

houkouonchi Thanks for all your support
 
No problem, and sorry man =( It probably wouldn't have been that difficult to recover otherwise. I have recovered 60+ arrays at my old job (thousands of shared web servers, all with RAID arrays on various brands: 3ware, LSI, Areca).

As long as you don't take actions that write data to the disks, you can almost always recover data off them. I assume it was all in one file system or raid set that the first 2 TiB ate into?
 
I figured you had tons of experience with this stuff since I first saw pics of your setup and YouTube vid. Yup, you're right, it was all one file system. I guess I now have 17 2TB Hitachi drives for sale.
 
I am still trying to figure out why every time I reboot my host (windows 7 pro) my Areca 1261 has to do a 'check' which takes 12 hrs. Does anything jump out from these screen shots of the config?

raid%20controller-M.jpg


raid%20controller%201-M.jpg
 
28TB RAID 5 - a braver man than I (with, hopefully, up-to-the-minute backups a hand's length away from you). In any case, for the log go to System Controls -> View Events/Mute Beeper.
 
This is a backup of what is on my NASes. I can see the event log, and there is nothing to tell; every time I reboot, it does a check that takes 12 hrs, and I don't know why. Could it be a setting in the BIOS that is not visible in the web GUI?
 
Halp! It's been so long since I configured this ARC-1680 card. I went to pull data off one of our old servers and the machine beeped after the RAID card initialized, but proceeded to boot just fine.

After rebooting and entering the RAID BIOS, I see the raid status as 22/24 disks (RAID 6) degraded. The other two disks are there but marked as "Free". What's the safest way to proceed?
 
Sounds like you have scheduled volume checks enabled. Go to VolumeSet Functions -> Schedule Volume Check.

From there you can change the schedule or the "checking after system idle" option (which I am guessing is enabled).
 