ARECA Owner's Thread (SAS/SATA RAID Cards)

Is it very hot in your case? Based on the symptoms, it almost sounds like the expander chip is either overheating or just flaky. The expander chip is under the heatsink closest to the back of the board, where the SFF-8087 connectors are. Assuming the expander chip is good and within temperature limits, the only other likely explanation for the RAID build issues is those WD20EARS drives, though it could also be an issue with mixing the EARS with the F4. I've only ever had problems with EARS drives on hardware RAID controllers, though that was on the previous-gen 1680 controllers, and plenty of other people report no trouble with EARS in hardware RAID.

1. Disconnect the SFF-8087 cables from the card and re-flash all four firmware files via the Ethernet management port, then reboot the card and make sure SES2 is set to "Enabled".

2. Once your drives are visible again, try creating a RAID 5 or RAID 6 with just the WD drives (F4 excluded) and see if it finishes. Hopefully that will narrow down whether mixing and matching is the culprit, or the EARS drives, again assuming it's not the expander.
 
Disconnected and flashed; unfortunately, same result. The case is not hot at all; in fact, the top has been open for the most part. Still a blinking blue LED, and it says only 8 channels are available.

After reconnecting drives they are still not visible.
 
Also, what is SES2? I couldn't find a setting for that...

EDIT: Nevermind, found it, and it's enabled...
 
That's what I'd do. It sounds like the onboard expander died, which is the first time I've heard of that happening, but it's the only reasonable explanation for why you'd only be seeing the 8 native/internal ports (which the expander is in turn hardwired into).

Any particular reason you went with a $1500 model? Are you planning on adding lots of drives very soon? If not, I'd get an 1880i and then add a SAS expander later (either an HP SAS Expander or an Intel RES2SV240 from Provantage). Yes, you'd lose the 4GB cache, but if you're going to be storing mostly large media files anyway, the 4GB cache isn't very useful. However, if it's for a multi-user file server, or you're going to use it for an OS, VMs, or lots of small random files, then cache is useful; even then I would get an 1880ix-12 w/ 4GB cache and add an expander later when needed.
 
I've tried the cold boot and power drain several times, with no success. I'm always greeted by the double beep and blinking blue LED.
The double beep is probably from the motherboard sending the PCIe reset signal twice on boot. It should be completely harmless, if somewhat annoying.
 
I bought the 4G card because Newegg had it for 15% off, which made it cheaper than the 1GB version. Yes this is mainly just a file server, but I also will be running a VM or two, and most importantly, I wanted ECC reliability throughout. I've used a mix of motherboard hardware and software RAID for a couple years now, and I have had every array fail at some point in time, so I figured it's time to buy the best. Of course it's my luck that the card is defective (and I agree it's the expander chip, makes the most sense as to why every drive would drop at once, and then reappear at once, and now it only sees 8 slots).

Also, I currently already have 13 HDDs, and will probably need more in a month or two, so I would need to buy the expander card now anyways. As a side note, I currently have two system drives resting in the place where the redundant power supply usually goes, but they are not mounted. Does anyone know of a good way to mount them in the Norco RPC-4224? Thanks!
 
Chriscicc, I had the same problem with my 1880i card and HP SAS expander dropping drives and losing the array. Thanks to Odditory for past posts and advice on getting the setup stable.

I noticed the HP expander chip runs extremely hot, so I made myself a temporary cooling setup: an old CPU cooler fan attached to a card slot, directing airflow onto the chip. I also changed some of my drive spin-down settings back to default, and I force-flashed the firmware on all the Seagate drives I was using. Seeing drives with the same firmware made me feel better about my array. This was a fight with Seagate, as the same model of drive could ship with the wrong firmware: my latest 2TB drive from Seagate came with firmware CC95, while Seagate stated the current model firmware should only be CC35. All of my other drives had CC34 before the flash, so the recent one seems to have been pulled from an external enclosure. I have also had problems with dual link on my Areca 1880i card and am waiting on a few answers from Areca; I have been able to get the array stable through the 9c port of the card, but not through both 8c & 9c. I haven't read what the temperature limits are on the HP SAS expander chipset, but maybe someone can chime in.
 
I can't find any information about the model of the ARC-1880's expander on Areca's website.

What's your concern? Areca not mentioning the brand name of the chips they OEM'd for their cards isn't exactly an uncommon practice, especially when it's a direct competitor.
 
I have a replacement from Newegg arriving tomorrow; I'll post back and let you all know how I made out. Thanks for the advice and help!
 
Can an array that was created as RAID 0 be migrated to RAID 6 or RAID 5? If I have RAID 0 with, say, two drives, can I add a hot-swap drive and modify the array to, say, RAID 5? Or is the only way to delete the array and migrate the data?
 
If I don't need the extra storage, would a 10-disk RAID 10 be worthwhile? I want to make an array to be used for iSCSI storage and want the write performance. I have a 7-disk RAID 6 right now, and as soon as a heavy I/O application hits the storage, everything else on it crawls. Reads are fine, but writes slow the whole array down dramatically. I was thinking about creating two 10-disk arrays: one RAID 10 for my iSCSI and one RAID 6 for archive storage.
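For a rough sense of why RAID 10 helps with this workload, the classic write-penalty arithmetic can be sketched out (the per-disk IOPS figure below is an illustrative assumption, not a measurement):

```python
# Back-of-the-envelope random-write comparison using the textbook RAID
# write penalties: RAID 10 costs 2 disk I/Os per host write (mirror pair),
# RAID 6 costs 6 (read data + read two parities, then write all three).
# The 75 IOPS per 7200 rpm spindle is an assumed, illustrative figure.

def random_write_iops(disks: int, iops_per_disk: float, penalty: int) -> float:
    """Effective host-visible random-write IOPS for the whole array."""
    return disks * iops_per_disk / penalty

IOPS_PER_DISK = 75  # assumed for a single 7200 rpm SATA drive

raid10 = random_write_iops(10, IOPS_PER_DISK, 2)  # 10-disk RAID 10 -> ~375
raid6 = random_write_iops(10, IOPS_PER_DISK, 6)   # 10-disk RAID 6  -> ~125
print(f"RAID 10: ~{raid10:.0f} IOPS, RAID 6: ~{raid6:.0f} IOPS")
```

Sequential writes are a different story, since RAID 6 can do full-stripe writes, but for small random writes that roughly 3x gap matches the "everything crawls" symptom.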
 
Are most of you just using the 1880i with an expander (for example, the HP) rather than the 1880ix?
 
Hello Everyone,
Just wanted to post a follow-up. Yesterday, after a snow delay, the replacement card from Newegg arrived. It has worked perfectly ever since: the RAID 6 array built without issue in just over six hours, and so far over 3TB of data has been copied to it. This is with the same hard drives and backplanes in the same configuration as the first card, so it was definitely a defective card. I was also in touch with Areca tech support, and nothing they recommended helped.

One thing of note: I stated that after the "failure" of the first card, it would only report 8 channels on boot and show 8 slots in the RAID hierarchy. It turns out 8 channels is the correct number, but it should show 24 slots in the hierarchy. Just wanted to clarify that.

Thanks for all your help, esp Odditory!

-Chris
 
Can someone help explain the performance results I've been seeing?

Linked here (so as not to bog this thread down with too many pictures).

I am not confident my settings are optimal.
 
What's wrong with those numbers? 62 hours to build an array that large makes perfect sense, assuming they are 2TB disks. 65 MB/s over the network is slow, but your local metrics are right on, so it may be a read-performance limit on the source disk.
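As a sanity check on that build time: a background initialization walks the full surface of each member disk in parallel, so the implied per-disk rate is easy to work out (a sketch, assuming 2TB drives and the 62-hour figure):

```python
# Implied per-disk initialization rate for a 62-hour build of 2 TB drives.
# Background init touches every sector of every member disk in parallel,
# so elapsed time ~= disk capacity / effective init rate per disk.

disk_bytes = 2e12   # 2 TB drive, decimal bytes as marketed
hours = 62          # reported build time

rate_mb_s = disk_bytes / (hours * 3600) / 1e6
print(f"~{rate_mb_s:.1f} MB/s effective init rate per disk")
```

That works out to roughly 9 MB/s, far below the drives' sequential speed, which is about what you'd expect from a low-priority background init.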
 

That post is very long and doesn't give details on your configuration, which is what's important: which exact Areca card model, how the drives are connected, what they're connected to (an expander?), what chassis and backplane, etc.

Also, you need to use LBA64 for addressing, not 4K. The "Use 4K block" option has nothing to do with 4K-sector drives.
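The arithmetic behind those two options is the standard sector-addressing limit (a sketch; the exact controller behavior is Areca's, only the math below is general):

```python
# Why both options exist: with 32-bit block addresses and 512-byte sectors,
# a volume tops out at 2 TiB. "Use 4K block" keeps 32-bit addresses but
# exposes 4096-byte logical sectors to reach 16 TiB on legacy OSes;
# LBA64 keeps 512-byte sectors and widens the address itself to 64 bits.

def max_volume_tib(address_bits: int, sector_bytes: int) -> float:
    """Largest addressable volume, in TiB."""
    return (2 ** address_bits) * sector_bytes / 2 ** 40

print(max_volume_tib(32, 512))   # 32-bit LBA, 512 B sectors -> 2.0 TiB
print(max_volume_tib(32, 4096))  # 32-bit LBA, 4 KiB blocks -> 16.0 TiB
print(max_volume_tib(64, 512))   # LBA64 -> effectively unlimited
```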
 
Odd, sorry - I only linked the performance results. The full thread should answer some of the questions. But to address them specifically:

  • RAID card: Areca 1880ix-24 (w/ BBU)
  • Expander: Astek A33606 24-port
  • Cables: All Areca SATA breakout cables
  • 30 drives reside in 6 x iStarUSA BPU350SATA hot-swap enclosures, split about evenly between the Areca card and the Astek expander
  • 4 drives are directly linked to the SAS expander
 
Anyone have issues trying to install mcraid archttp v1.83_81103? Specifically getting "javaw.exe has stopped working" when running the install.exe.

Interestingly I installed it fine before on my old build but since reformatting I can't seem to install this anymore.

EDIT - I installed it in safe mode and it works fine. Not sure why I was having issues with it though.
 
Well, I had issues getting my 1880i or LSI 9211-8i working with an HP SAS expander. I just got my Areca 1880ix-16 today. With the 1880i and the HP SAS expander, the firmware would always time out first, and then after a restart my drives would usually show up. With the 1880ix-16 I found the exact opposite: my computer would always boot up with all 12 drives, but if I did a restart it would never recognize the drives. What are the odds of that! Anyway, I have nine 1.5TB Seagate ST31500341AS, two 2TB WD WD20EARS-00MVWB0, and one 1.5TB Samsung HD154UI. This time, however, in the mcraid event log I saw consistent timeouts for the Samsung drive. After unplugging it, I get the same successful initial boot, and the restart works too. Hopefully this is stable now, because I've gone through 3 controllers and 3 expanders just to get my additional 12 drives working!
 

You may try v1.84_100119 or the new Beta Build 110104 to avoid the java.exe issue.

Beta Build 110104 change log:

==============================
2011/01/04

This beta installer uses a new JRE version to avoid the java.exe conflict issue with some nVIDIA graphics card drivers.

==============================

ftp://ftp.areca.com.tw/RaidCards/AP_Drivers/Windows/HTTP/v1.84_100119.zip
 
I expanded my RAID 6 array from 4TB to 6TB on my Areca 1880i. Why would Windows 7 64-bit Ultimate not recognize the full 6TB after the expansion in My Computer? All drive sizes read correctly in Windows disk manager and in the mcraid software. Is there a way to correct this?
 

A couple of things:

1. Did you expand the volume set as well?
2. Is the volume partitioned using GPT and not MBR?
3. Reboot, or do a Rescan from Disk Management.
 
Partitions don't expand on their own; you'll need to extend the partition if you want to use the rest of the space.
 
I don't see an expand volume set option. There is expand raid set, or modify volume set. I see the 6TB fine in Windows storage manager. I think it was MBR, but I cannot see that; where would I find that info? My Partition Wizard software does not see the drive letter for that array for some reason. It won't let the space be redistributed because it already sees the correct amount in the manager. The issue is just in the My Computer area.
 

Sorry, I should have said to use the modify volume set option (going from memory). In there, make sure the Max Capacity Allowed and Volume Capacity values are the same.

If the two values are the same, then everything on the Areca side is done. Now, in Windows, go to Disk Management (in Vista or 7, just type "Disk Management" into the Start search box and hit Enter). The volume should show the correct capacity there. If it does not, choose Rescan Disks from the Action menu. If that doesn't work, a reboot might fix it.

Now, assuming the disk is showing the correct capacity, right-click the gray box on the left (not the partition on the right) that shows the disk you modified and go to Properties. In the new window, on the Volumes tab, it should say GPT under Partition Style.

If this is fine, then all you should need to do is expand the partition. Just right-click it and choose Extend Volume, then follow the wizard to increase the size. The new size should be instantly available for use.
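The GPT requirement comes down to simple arithmetic (a sketch of the standard partition-table limits, not anything Areca-specific):

```python
# MBR stores partition start and length as 32-bit sector counts, so with
# 512-byte sectors nothing beyond 2 TiB is addressable; GPT uses 64-bit
# values. Any volume past that line has to be GPT.

MBR_LIMIT_BYTES = (2 ** 32) * 512  # 2 TiB

def needs_gpt(volume_bytes: float) -> bool:
    """True if the volume is too large for an MBR partition table."""
    return volume_bytes > MBR_LIMIT_BYTES

print(needs_gpt(6e12))    # 6 TB volume -> True, must be GPT
print(needs_gpt(1.5e12))  # 1.5 TB -> False, MBR would still work
```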
 
OK, it looks like it's set to GPT and will not allow an MBR switch. Windows sees it differently and won't show anything larger than 6TB in My Computer.
 
I've gotten two timeout errors on my 1680 this week on two different disks. How can I test if the problem is the card or the disks? I've been running for the past year mostly with no problems, but in the past few months I've been getting timeout errors and I'm not sure if it's because my disks are dying or I'm just pushing the array too hard. Power management has been disabled on the array.
 

It helps to mention what disks you're using; that's sort of critical to answering disk questions. The type of case, CPU, and OS helps too.

When I was having issues with my drives (Seagate 1.5TB), it turned out to be heat-related. I moved the drives out into a separate chassis, and that totally solved the problems. It was the difference between 38C in the same case as the CPU and 33C in an external one; the external case has less in it generating heat (just drives). I chose not to put more fans on the CPU chassis because I wanted to avoid making it even noisier.

Were I to replace this setup today, I'd probably still go with an external chassis because of noise.
 

I think at this point a screen shot of disk management might help.
 
Ah yes, forgot to mention the disks. I'm using the popular 2TB Hitachi drives (HDS722020ALA330). I hadn't thought about heat being an issue, but it's something I'll look into. The drives are currently at 30C idle, but I'm not sure how warm they get when they're being hammered on. The server is in an AC-controlled room, so the temps shouldn't get too high.
 