ARECA Owner's Thread (SAS/SATA RAID Cards)

Looks like Areca removed the firmware from the main site now? I wonder why. Areca hasn't responded to the last five or so emails I've sent them. Not sure if they aren't getting them or what.

The firmware is still on the Areca FTP site.
 
Yeah, I knew that, but I was wondering why it was removed from the main site. I didn't say it was removed from their site/server completely, just from the main download page.
 
Quick question...

I recently declared a LOCAL hot spare for one of my raid 6 arrays. I've noticed that the HDD activity light is almost always lit up. Any reason why?
 
Yes, the firmware counter is a new feature. My guess is those WD20EARS drives are what's holding it back from booting the first time. Also try disconnecting the Samsung if there's no change when the WDs are disconnected, since that's the only other unique drive.

Well, I spent about 1.5 hours today just trying different combos of drives hooked up to the expander, changing cables, and so on. I am sure it does not matter which drives are used, and I don't have a bad cable in the mix either. I can connect all 3 different types of drives to one set of SAS-to-4x-SATA cables and they all come up just fine. I can usually get 8 to work. However, no matter what other variables I change, I cannot get more than 8 drives to boot up through the expander on the first try. This includes using just the 9 Seagates. If I connect the additional 4 drives I have straight into the extra port of the 1880i, I get all 12 successfully. I have tried different cables for everything, different ports, etc. I also tried single and dual link to the 1880i.

The good news is that I didn't have any drives drop out today with the power mgmt change.

I know of no other troubleshooting to attempt with this expander except to RMA it for the second time. If there weren't so many people here with this working, I would have given up on this expander by now. I have never received a faulty item twice in a row, but I suppose it is possible. Of note, this card does behave differently from the last one with my exact same setup: the prior card would only connect at 150MB/s per drive, not 300. Do you think an RMA is in order, or do you have other ideas?
 
My guess is that the Areca is doing staggered spin-up (one or a few drives at a time), but when they're connected to the expander they all spin up at the same time, causing a huge load on the PSU, as the drives you use probably pull between 20-30W each during spin-up.

Does your PSU have a single or multiple 12v rails?
How many amps/watts do the 12v provide?
What is its efficiency?
Is it old?

I'd recommend you try a PSU with lots of amps on the 12V rail, preferably a single 12V rail PSU so that no math is involved in evening out the load across different rails...
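For a rough sense of the numbers (purely illustrative, using the 20-30W-per-drive spin-up figure above and an assumed drive count):

```python
# Back-of-the-envelope spin-up load estimate (illustrative numbers only).
DRIVES = 12            # drives behind the expander (assumption)
SPINUP_WATTS = 30.0    # worst-case per-drive draw during spin-up, from the estimate above
RAIL_VOLTS = 12.0

simultaneous_amps = DRIVES * SPINUP_WATTS / RAIL_VOLTS
print(f"All {DRIVES} drives at once: ~{DRIVES * SPINUP_WATTS:.0f} W "
      f"(~{simultaneous_amps:.0f} A on the 12 V rail)")

# With staggered spin-up only a couple of drives hit the rail at any one time.
STAGGER_GROUP = 2      # drives spun up per step (assumption)
print(f"Staggered in groups of {STAGGER_GROUP}: "
      f"~{STAGGER_GROUP * SPINUP_WATTS:.0f} W of spin-up load at a time")
```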
 
No way it is the PSU. I have a Corsair 1000W in there, which is if anything underrated at 1000 watts. It has 2x 40A 12V rails and is only a few months old.

Not to mention that this issue doesn't occur when the drives are plugged straight into the Areca. I don't think the expander affects staggered spin-up.
 
I am about to push the button on getting an Areca 1280ML card.

I've only heard good things about these cards, but in RAID 6 mode do they have any drawbacks I should be aware of? What is the magic number of hard drives this card can handle before diminishing performance returns really start to set in?

I am looking to start out with a 10 drive array of 2TB drives, with near term expansion to 15 + hot spare, and ultimately a full 24 drives creating a 48TB array.

What are your thoughts?
 
The only drawback of RAID 6 is the longer rebuild time, but with that many drives you definitely want to use it over RAID 5. The 1280ML caps out at about 700 MB/s for reads, so you can figure out how many drives it will take to hit that. It's a good card and I used one for quite some time, until I needed more than 24 drives.
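As a rough sketch of that math (the per-drive sustained read speed is an assumption on my part, not a measured figure):

```python
# Rough estimate of how many drives saturate the controller's read ceiling.
CARD_LIMIT_MBPS = 700.0          # approximate 1280ML sequential read cap mentioned above
PER_DRIVE_MBPS = (70.0, 100.0)   # assumed sustained read range for 2TB drives of that era

for speed in PER_DRIVE_MBPS:
    drives_needed = CARD_LIMIT_MBPS / speed
    print(f"At {speed:.0f} MB/s per drive the card saturates around {drives_needed:.0f} drives")
# Roughly 7-10 drives; beyond that, extra spindles mostly add capacity, not sequential speed.
```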
 
I am about to push the button on getting an Areca 1280ML card.

I've only heard good things about these cards, but in RAID 6 mode do they have any drawbacks I should be aware of? What is the magic number of hard drives this card can handle before diminishing performance returns really start to set in?

I am looking to start out with a 10 drive array of 2TB drives, with near term expansion to 15 + hot spare, and ultimately a full 24 drives creating a 48TB array.

What are your thoughts?

A 24-drive RAID 6 is too big IMHO. I would go with two 12-drive arrays.
 
A 24-drive RAID 6 is too big IMHO. I would go with two 12-drive arrays.

Do you have some math to back that up?

What convinced me was a paper I read about the average number of bytes read before an unrecoverable sector error occurs, and how once a RAID 5 array surpasses some number of TB it becomes nearly a statistical certainty that such an error will exist and likely wreck your rebuild after you lose a drive to natural causes.

What is the TB limit for RAID 6 before this happens?
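For reference, the rough version of that math (the 10^-14 unrecoverable-error rate is just the commonly quoted consumer-drive spec, an assumption rather than anything measured on these drives):

```python
# Chance of hitting at least one unrecoverable read error (URE) while
# reading a given amount of data, e.g. during a rebuild.
UBER = 1e-14            # unrecoverable errors per bit read (typical consumer-drive spec)
BITS_PER_TB = 8e12      # 1 TB = 8e12 bits

def p_ure(read_tb):
    """Probability of >= 1 URE when reading read_tb terabytes."""
    bits = read_tb * BITS_PER_TB
    return 1 - (1 - UBER) ** bits

for tb in (4, 12, 24, 48):
    print(f"Reading {tb:2d} TB: ~{p_ure(tb) * 100:.0f}% chance of at least one URE")
# A single URE during a RAID 5 rebuild kills the rebuild; RAID 6 can still
# recover that sector from the second parity, which is why it tolerates larger arrays.
```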
 
The problem has more to do with the stress the array is put under whilst rebuilding. The loss of one drive often causes another to fail during the rebuild. Once you get to very large arrays, the chance of 2 or more drives failing during a rebuild becomes quite real. I have lost a second drive during a rebuild before, and that was only a 12-drive RAID 6 array; I would not feel comfortable running a 24-drive RAID 6 array.
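To put a very rough number on that (the per-drive chance of dying during the rebuild window is an assumed figure purely for illustration):

```python
# Rough odds of losing two MORE drives during a rebuild, which is what kills a RAID 6.
from math import comb

P_FAIL = 0.01   # assumed chance that any one surviving drive dies during the rebuild window

def p_at_least_two_fail(surviving, p=P_FAIL):
    """Binomial probability that >= 2 of the surviving drives fail during the rebuild."""
    p0 = (1 - p) ** surviving
    p1 = comb(surviving, 1) * p * (1 - p) ** (surviving - 1)
    return 1 - p0 - p1

for total in (12, 24):             # 12-drive vs 24-drive RAID 6, one drive already lost
    n = total - 1
    print(f"{total}-drive RAID 6 rebuilding: ~{p_at_least_two_fail(n) * 100:.1f}% "
          f"chance of losing two more drives")
```

Whatever per-drive figure you plug in, the risk comes out roughly four times higher for 24 drives than for 12.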
 
Wouldn't the use of 2-3 hot spares help diminish that risk significantly?

As I understand it, a hot spare is just a drive that is plugged in and ready to take over for a failed drive in an array at a moment's notice, without any user intervention to initiate the rebuild process.

The hot spare is not actively used to store parity data. I do not know the name of the RAID level that can handle 3 drive failures before data loss occurs, but I am sure it can be set up.
 
The only drawback of RAID 6 is the longer rebuild time, but with that many drives you definitely want to use it over RAID 5. The 1280ML caps out at about 700 MB/s for reads, so you can figure out how many drives it will take to hit that. It's a good card and I used one for quite some time, until I needed more than 24 drives.

RAID 5 and RAID 6 seemed to rebuild at about the same speed on an ARC-1280ML.

An ARC-1280ML's max read speed in RAID 6 is 830 MB/s and its max write speed is 800 MB/s. You will be bottlenecked by the card after about 10-12 drives (modern-speed drives). The controller is very fast IMHO, even being limited to 800 MB/s sequential speeds.

I use 20x 2TB Hitachi drives on an ARC-1280ML and it works great.

Quick question...

I recently declared a LOCAL hot spare for one of my raid 6 arrays. I've noticed that the HDD activity light is almost always lit up. Any reason why?


Might wanna check the settings. There is a setting for idle drive LED ON or LED OFF that could be causing that behavior. Also, some 1.5TB Seagate drives had bad firmware (SD17) which would cause the disk activity light to be on almost constantly (not just on RAID controllers).
 
Might wanna check the settings. There is a setting for idle drive LED ON or LED OFF that could be causing that behavior. Also, some 1.5TB Seagate drives had bad firmware (SD17) which would cause the disk activity light to be on almost constantly (not just on RAID controllers).

Hmmm, I think it is the disk enclosure it is contained in; removing the hot spare does not change the LED. However, in the same enclosure I have two WD 1TB Black 6Gb/s drives and they are behaving properly LED-wise. The drive itself is not impacted, it's just a weird scenario... Any thoughts? I am running the same FW on all the other 16 ST32000641AS drives without issues, so it must be something weird with this enclosure.
 
I am currently running 12x Samsung F4 2TB drives on my 1680 + HP SAS expander and I get a few timeouts on random drives, but at the moment it is about one timeout per month or less.

Are you kidding? I've been going crazy these past few weeks/months because a drive will drop out. I'll reset and everything will work perfectly for weeks or a month+ again.

I'm running almost the same setup as you: an Areca 1680i, the HP SAS expander and 32 Samsung F3 (not F4) 2TBs. I was just about to give this whole thing up and pick up a bunch of AOC-SASLP-MV8s so I could get a stable machine, since I'm not sure what's actually causing the drops: the Areca, the HP, or the Samsungs. I have a lot of data on these drives now, so I can't just go blowing them away to do a large amount of testing.
 
^ not an expander issue. post your exact configuration (RAID level, drives per array, power management settings). most common reason for timeouts is if too low a "staggered spin up" value is set.

with 1680i + HP expander and Hitachi 2TB drives I had to go as high as 2.0 seconds for staggered spin up in order for timeouts to disappear completely when array was waking up from spindown. with 1880i I can get away with 0.7 with same expander and drives.
 
Does the staggered spin up value matter if you have all of the power management stuff disabled? I don't set my drives to spin down when they are idle because they are never really idle, but I have my spin up set to 3 seconds and just experienced another timeout. I have the 1680i with 5 Hitachi drives in RAID 5.

^ not an expander issue. post your exact configuration (RAID level, drives per array, power management settings). most common reason for timeouts is if too low a "staggered spin up" value is set.

with 1680i + HP expander and Hitachi 2TB drives I had to go as high as 2.0 seconds for staggered spin up in order for timeouts to disappear completely when array was waking up from spindown. with 1880i I can get away with 0.7 with same expander and drives.
 
Is it always the same drive that shows up in the event log as timing out? The spinup value would only come into play as a culprit for timeouts if you were spinning the array down. Can you post a screenshot of your power management settings?

by the way there's no need to quote when you're replying to the previous post. just adds noise :)
 
Does anyone know whether, when a drive is marked as FAILED, there is any detailed information as to why the Areca card thinks it has failed? I've had a few that work perfectly well in other computers and pass the drive fitness test with flying colours, but as soon as I put them into my RAID they are almost immediately marked as FAILED.
Thanks!
 


TIPS:

1) For proper array spindown on RAID arrays with Hitachi 2TB (7K2000) drives, the "Stagger Power On Control" value needs to be increased to 1.0, 1.5 or 2.0 seconds; you'll need to experiment to find the lowest setting that works. For testing, set "Time To Spin Down Idle HDD" to 1, wait a minute for the array to spin down, then access the array data and wait for it to spin back up. Watch the event log for any drive timeouts - if you see any, you'll need to hard-reboot the computer and try a higher stagger power on value (i.e. try 1.0, then 1.5, then 2.0 seconds).
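For anyone weighing the tradeoff, here's roughly how the stagger value translates into wake-up time (simple arithmetic; the per-drive spin-up time is an assumption):

```python
# How long a spun-down array takes to come back at different stagger settings.
DRIVES = 24
SPINUP_SECONDS = 8.0   # assumed time for one drive to reach full speed

for stagger in (0.7, 1.0, 1.5, 2.0):
    # The last drive starts (DRIVES - 1) * stagger seconds after the first one.
    wake_time = (DRIVES - 1) * stagger + SPINUP_SECONDS
    print(f"Stagger {stagger:.1f}s: array fully awake in ~{wake_time:.0f}s")
# The higher stagger values that cure the timeouts also mean a noticeably
# longer wait before the array responds after a spindown.
```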



I keep meaning to ask this - what happens if a drive does time out, does the RAIDset need to rebuild? Currently I leave all of my disks spinning. I would like to set this up, but as it's a card-wide setting I'm not sure my heart could stand the excitement of putting all my RAIDsets at risk in one go! Thanks.
 
Is it always the same drive that shows up in the event log as timing out? The spinup value would only come into play as a culprit for timeouts if you were spinning the array down. Can you post a screenshot of your power management settings?

It is usually the same drive. Sometimes it's another, but more often than not it's drive #6, which I've posted the info for. I've been meaning to get another drive so I can have that one exchanged (lame that Hitachi doesn't have advanced replacement), but I don't get why a timeout stops the whole array. I would think it would just mean that the array would be degraded, but when a timeout happens every filesystem on the array is not accessible by the OS until I reboot the whole machine.

[screenshots of the power management settings attached]
 
Hitachi will do advanced replacement if you email them and explain why you need it. ;)
 
Is it normal that a 60TB array (27x 2TB + 3 hot spares) took 62 hours to initialize? Seems pretty long.

Hardware: Areca 1880i-24, Astek A33606, StarTech SBAY5BK Backplanes, Samsung F4 2TB
 
Is it normal that a 60TB array (27x 2TB + 3 hot spares) took 62 hours to initialize? Seems pretty long.

Hardware: Areca 1880i-24, Astek A33606, StarTech SBAY5BK Backplanes, Samsung F4 2TB


It is long for a foreground initialization... Not long if it was a background initialization.
 
I would think it would just mean that the array would be degraded, but when a timeout happens every filesystem on the array is not accessible by the OS until I reboot the whole machine.

that symptom - the whole card freezing and requiring a reboot - can usually only be triggered with the "Time To Spin Down Idle HDD" power setting set to too low a value on the 1680 series, at least that's the only way i've been able to reproduce it. and btw the reason the array isn't considered 'degraded' is that the card never marks the drive as failed. like i said, i think it's a unique bug, one i've raised with areca before.

but in your case you aren't spinning down your drives, which is the first time i'm hearing about that timeout issue happening that way. it *could* be that the drive is just going bad, but I think it's worth first updating the drive to the latest 3MA firmware, then seeing if it happens again. PM me your email address.
 
Is it normal that a 60TB array (27x 2TB + 3 hot spares) took 62 hours to initialize? Seems pretty long.

Hardware: Areca 1880i-24, Astek A33606, StarTech SBAY5BK Backplanes, Samsung F4 2TB

My 24-drive RAID6 took the following with foreground initialization and default 64k stripe size:

Areca 1880i + Dual Link to HP Expander + 24 x Hitachi 2TB + RAID6: 9h 18m
Areca 1880i + Single Link to HP Expander + 24 x Hitachi 2TB + RAID6: 14h 19m

(BTW, your 3 hot spares seem like overkill, unless you were planning to leave the system at a weather station in Antarctica and not return for 5 years.)
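If you want to sanity-check an initialization time, the implied per-drive write rate is easy to estimate, since a foreground init has to write essentially every sector of every drive (the 2TB drive size below is taken from the builds above):

```python
# Implied per-drive write rate during a foreground initialization.
DRIVE_TB = 2.0
BYTES_PER_TB = 1e12

def per_drive_mbps(hours):
    """MB/s each drive must sustain for the init to finish in `hours`."""
    return DRIVE_TB * BYTES_PER_TB / (hours * 3600) / 1e6

for label, hours in (("dual link, 9h18m", 9.3),
                     ("single link, 14h19m", 14.32),
                     ("62-hour init", 62.0)):
    print(f"{label}: ~{per_drive_mbps(hours):.0f} MB/s per drive")
# Roughly 60 and 39 MB/s for the two runs above versus about 9 MB/s for a
# 62-hour init, which is why 62 hours looks slow for a foreground initialization.
```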
 
Not sure if it's a bug either, but I find it odd that the Timeout Count is 0 on the drive when it's probably timed out over 5 times now.
 
Hitachi will do advanced replacement if you email them and explain why you need it. ;)

Thanks! I'll give it a shot next time. I already ordered two new drives for the replacement, but at least I'll have more disks when the replacements come in :)
 
Not sure if it's a bug either, but I find it odd that the Timeout Count is 0 on the drive when it's probably timed out over 5 times now.

It's not odd at all, considering chances are it's a glitch with the RAID card and not the drive.
 
Did you get my PM by any chance? I don't see anything in my sent items so wasn't sure.
 
OK, I think this might be a cross-post, sorry if so.

Are there people here using the Hitachi Deskstar 7K1000.C drives in RAID 5 on an Areca? I see lots of people using the 7K2000 to good effect. Does the same go for the 7K1000.C? I'm planning on running 6 of them on an Areca 1220 in RAID 5.

Can I use them OK, or are they going to give trouble with the CCTL/TLER stuff? Hope someone can give me an answer.
 
My 24-drive RAID6 took the following with foreground initialization and default 64k stripe size:

Areca 1880i + Dual Link to HP Expander + 24 x Hitachi 2TB + RAID6: 9h 18m
Areca 1880i + Single Link to HP Expander + 24 x Hitachi 2TB + RAID6: 14h 19m

(BTW, your 3 hot spares seem like overkill, unless you were planning to leave the system at a weather station in Antarctica and not return for 5 years.)

Thanks for the reply, odditory. While I see the dual link adding quite an improvement for you, my results are still dramatically different from yours even without it.
Rather than post too much troubleshooting in this thread, I posted further details in my build log thread.

I made sure to detail each step I took in Post #3, along with some of my settings that may be causing this very long initialization time.

As for the 3 hot spares... I figured that with an immediate storage requirement of only about 12TB, that's quite a large amount of room to grow, so why not play it extra safe.
I travel internationally quite frequently and would hate to be half a world away for even a single night and risk it.
 
OK, I think this might be a cross-post, sorry if so.

Are there people here using the Hitachi Deskstar 7K1000.C drives in RAID 5 on an Areca? I see lots of people using the 7K2000 to good effect. Does the same go for the 7K1000.C? I'm planning on running 6 of them on an Areca 1220 in RAID 5.

Can I use them OK, or are they going to give trouble with the CCTL/TLER stuff? Hope someone can give me an answer.

They will probably work fine, but you should update them to the more recent 3EA firmware. I use 4 in RAID 5 with an IOP-based (same as the Areca 1680) RocketRAID 4320 and they work fine: no drop-outs or errors. The HDTach graph isn't quite perfect, but all other benches show good performance and the system is responsive.

The Arecas I use are at work, so I can't try the Hitachis for you on a 1212 or 1680, but I can make an educated guess that they will work OK.
 
They will probably work fine, but you should update them to the more recent 3EA firmware. I use 4 in RAID 5 with an IOP-based (same as the Areca 1680) RocketRAID 4320 and they work fine: no drop-outs or errors. The HDTach graph isn't quite perfect, but all other benches show good performance and the system is responsive.

The Arecas I use are at work, so I can't try the Hitachis for you on a 1212 or 1680, but I can make an educated guess that they will work OK.

Thanks. I just have to find out where and how to get that 3EA firmware installed. I'm taking the leap and ordered 5 7K1000.C drives (and yet another external HDD for backup, haha).
Hope to start assembling and testing soon.

Thanks so far, appreciate it.
 
Thanks. I just have to find out where and how to get that 3EA firmware installed. I'm taking the leap and ordered 5 7K1000.C drives (and yet another external HDD for backup, haha).
Hope to start assembling and testing soon.

Thanks so far, appreciate it.

I couldn't get the crappy Hitachi Windows utility to work, but the DOS utility on a DOS-bootable USB stick worked fine:

http://files.hddguru.com/download/Firmware updates/Hitachi/

Plug the first drive into your motherboard SATA controller (it should be set to compatibility mode in the BIOS)
Get JPDL_SP.exe from the 3EA rar file
Take the BD files from the 3MA zip file and rename them to BDX
Put all of this together on a bootable floppy or bootable USB stick
Boot and run the exe; it should confirm the firmware is updated
Unplug the drive and plug in the next drive - repeat
 
I couldn't get the crappy Hitachi Windows utility to work, but the DOS utility on a DOS-bootable USB stick worked fine:

http://files.hddguru.com/download/Firmware updates/Hitachi/

Plug the first drive into your motherboard SATA controller (it should be set to compatibility mode in the BIOS)
Get JPDL_SP.exe from the 3EA rar file
Take the BD files from the 3MA zip file and rename them to BDX
Put all of this together on a bootable floppy or bootable USB stick
Boot and run the exe; it should confirm the firmware is updated
Unplug the drive and plug in the next drive - repeat

Wow, perfect service, can't thank you enough.

My drives should arrive tomorrow, so I can get prepared, hehe.
Today I got my first SSD. Damn, those things are fast... They blow my current 3-disk RAID 5 and 3-disk RAID 0 away... 265MB/s read/write... Damn...

But I think the Hitachis will do a nice, safe job with my data :)
 
Whoops, talked too fast.

OK, so the files I put on the USB stick are:

JPDL_SP.exe ( from 3EA file )
JP0NB3MA.BDX ( from 3MA file )

Those two on the stick, remove all other HDDs, insert the Hitachi, boot, run the exe, and that's it?
Correct?
 
Whoops, talked too fast.

OK, so the files I put on the USB stick are:

JPDL_SP.exe ( from 3EA file )
JP0NB3MA.BDX ( from 3MA file )

Those two on the stick, remove all other HDDs, insert the Hitachi, boot, run the exe, and that's it?
Correct?

Correct, but remember to put the SATA controller in compatibility mode in the BIOS, and the USB stick must be a DOS-bootable one. I don't think you need to remove the other hard disks; just make sure the Hitachi is on one of the first 4 SATA ports (the JPDL utility ignores other brands of hard disk and asks which drive to use if more than one Hitachi is detected).
 