ARECA Owner's Thread (SAS/SATA RAID Cards)

Just push the drives somehow. Copy several TB to the array, delete it, and then copy again. You could even use something like robocopy to do that. The new Mesa ZFS webui has a drive burn-in tool if I remember correctly; you could also use that. Really, what I personally want to know is whether the drives will drop from the array. Only time will tell.
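If you want to script the push, even a dumb batch loop will do it - something like this (the paths and pass count are just placeholders, adjust to taste):

@echo off
rem Crude burn-in: repeatedly fill the array (E: here) from a folder of big files
rem on another disk, then wipe it and go again. Paths are hypothetical examples.
set SRC=D:\burnin_source
set DST=E:\burnin
for /L %%p in (1,1,10) do (
    echo === burn-in pass %%p ===
    robocopy "%SRC%" "%DST%" /E /R:1 /W:1 /NFL /NDL
    rmdir /S /Q "%DST%"
)

Ten passes of a few hundred GB of source data keeps the array busy for a good while; bump the pass count if you want it running over a weekend.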
 
Good news from support: the beta Build101105 firmware for the 1680 series supports local / dedicated hot-spares. I hope this function will remain in the next stable release.
 
Does anyone have experience with the Seagate LP ST32000542AS on Areca controllers? Newegg is practically giving them away, and if they'll work with an 1880 I might just buy 10 or 20 right now.
 
They didn't work very well back when I had all my 1680 cards, but I'm not sure about now.
 
They didn't work very well back when I had all my 1680 cards, but I'm not sure about now.

In early to mid 2009 Areca fixed quite a few problems with Seagate SATA drives. My guess is they would be OK: they are approved for the Adaptec 5 series, which is similar hardware-wise, and previous Barracuda drives have worked well with my 1222s. Just an educated guess though....
 
Anyone with an 1880 able to tell me how long the included SFF-8087 cables are? Trying to judge for a new case. Thanks!
 
Any word on the status of the Samsung Spinpoint F4 HD204UI 2TB 5400rpm drives in everyone's RAID systems? I've heard mixed reviews so far about them with Areca cards while reading through this forum. I was looking to pair 10 of them in a RAID 6 with a 1680ix-24 in the next week or so.
 
What mixed reviews have you read saying that a Samsung 2TB F4 had an issue on an Areca? Mine work fine on an Areca 1880, but I have yet to test them on a 1680.
 
Now I have done some more testing.

I expanded my RAID 5 of 4x Samsung F4 2TB disks to 6x, and moved some data while it was expanding: 0 timeouts so far. I have also done some restarts and power on/off cycles.

I did get timeouts during the first 4-5 hours, but after that I have had 0 timeouts, so I don't know if it was some temporary failure or not.

I have 5 2TB Samsung F4 disks hooked up to an 1880i, on their 24th hour of drive testing, and I have not had any issues so far (fingers crossed).

Mixed reviews about the F4 and Areca cards on this forum as well as on others. I may just bite the bullet, buy them, and post reviews based on my own experience.
 
I wouldn't call that mixed reviews. I'd question whether there were actual drive issues or maybe just user error - often that's the case. But it's hardly a review sample. That's why I always do my own testing. I don't even trust vendor HCLs, because they've at times been erroneous and testing proved otherwise. A vendor isn't going to spend weeks or months stress testing a drive like we can.
 
Anyone have any issues running these Areca cards with Gigabyte, Asus, or Supermicro motherboards?

I'm thinking of picking up an 1880i w/ BBU for a RAID 5 fileserver I'm building...
 
Anyone have any issues running these Areca cards with Gigabyte, Asus, or Supermicro motherboards?

I'm thinking of picking up an 1880i w/ BBU for a RAID 5 fileserver I'm building...

Me! I'm having an issue with an 1880IX-24 on two Supermicro X8DTU-F mainboards. The Intel PXE boot agent text won't appear when the Areca card is in the system, and trying to boot from the network drops the system to a blinking underscore.

I've got other Supermicro systems with X8ST3-F, X8STi-F, and X8DTU-6TF+ that are working great with 1880i cards.
 
I'm having an issue with an 1880IX-24 on two Supermicro X8DTU-F mainboards. The Intel PXE boot agent text won't appear when the Areca card is in the system, and trying to boot from the network drops the system to a blinking underscore.

Do you have it plugged directly into the motherboard PCIe slot, or are you using a riser of some sort? If a riser, is it active or passive?
 
Do you have it plugged directly into the motherboard PCIe slot, or are you using a riser of some sort? If a riser, is it active or passive?

A Supermicro riser. I want to say model RSC-R2UU-3E8G. I believe it's passive.
 
A Supermicro riser. I want to say model RSC-R2UU-3E8G. I believe it's passive.

If it was an active riser (which would have a controller chip, usually with a heatsink), I was going to suggest you try the Areca directly in the motherboard slot. With a passive riser, I guess it is less likely to be a riser problem, but it still may be worth trying it directly if you can.

Do you suppose it could be a shadow ROM size problem?
 
Anyone have any issues running these Areca cards with Gigabyte, Asus, or Supermicro motherboards?

I'm thinking of picking up an 1880i w/ BBU for a RAID 5 fileserver I'm building...

I'm running the 1880ix-24 with 4GB and a BBU in an Asus M3A32-MVP Deluxe with ECC memory and an AMD 4850e proc. This is running 12 Samsung EcoGreen F4 2TB drives fine.
 
Do you suppose it could be a shadow ROM size problem?

It could be? Supermicro support mentioned something about possibly running out of shadow ROM in one of their emails.

Update: they sent me a beta BIOS which seems to have resolved my issue so far.
 
On an EVGA 680 mobo with an Areca 1880ix, bootup will halt if there is optical media (bootable or not, it doesn't matter) in the optical drive when it's looking for optical media to boot from; this doesn't happen with the LSI 9280.
With no optical media inserted, both cards work fine, at least individually...
 
Threw 23 like drives into RAID 0 out of curiosity... then did a default Windows 2008 R2 install.
I guess I expected the numbers to be higher. Should they be?

ATTO results: (screenshot)

HDTune results: (screenshot)
 
What is your stripe size?
What NTFS cluster size did you use? (If you didn't specify, it's 4KB.)

23 is an odd number of drives. Why don't you try something like 8 with a 128KB stripe, and then when you format the volume choose a 128KB cluster size.
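If you end up formatting from the command line rather than the GUI, the cluster size is the /A switch - for example (drive letter is just a placeholder, and note that stock Windows 2008 R2 only offers clusters up to 64K, so pick whatever your OS actually supports):

rem Quick-format the array volume with an explicit NTFS allocation unit size.
format E: /FS:NTFS /A:64K /Q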
 
I was going to do 24 but one of the drives is DOA.
Volume stripe size is 64KB. NTFS cluster size is the default. I just changed the stripe size to 128KB, but now it says it needs to migrate. The volume info says migrating, but it doesn't appear to be. (Edit: until I rebooted; now it is.)

(this isn't a production config, I'm just poking around)

EDIT:
What a difference an updated version of ATTO made!



And luckily my drive wasn't DOA, but unluckily my cable is defective instead. I swapped cables around and the problem followed the cable.
 
Don't overlook setting the right "transfer size" in benchmark tools, because when it's at the default (which is generally geared toward a single disk) the results often aren't relevant for your planned usage pattern, in this case presumably storing lots of large files.

In the ATTO test you left it at the default 256MB. Try 1GB or 2GB.
In HDTune I assume you also left it at 64KB (the default). Try 2MB, and slide the "accuracy" slider all the way to the bottom.

The stripe size of the array - whether it's 64K or 128K - is irrelevant here. The NTFS cluster size also isn't skewing your results. In any case, make sure to use 16K clusters when you do finalize and format your partition. And in case you're still sitting there waiting for the volume set to migrate to a 128K stripe on the Areca controller, don't bother - you're just testing, so just delete the volume set and recreate it.

Lastly, a lot depends on the individual drives; I don't recall which one you said you were using. You can see scaling efficiency by first benching one drive in JBOD mode (set it to a pass-through disk on the Areca controller). Scaling also isn't quite linear - it is up until about 16-18 drives and then drops off steadily.
 
I wouldn't call them garbage. ATTO and HDTune are just fine for the ballpark estimates he and most people are interested in, when you use them right and set the right parameters. IOMeter is less than user-friendly to dial in for testing a particular usage pattern, so most people aren't going to bother with it anyway. It all depends on what type of usage you want to simulate.
 
Every time I've used CDM, ATTO, or HDTune they give different results for every run. You're right, SQLIO/IOMeter are not user-friendly, but they have the most 'accurate' data: subsequent runs will be pretty close to the previous ones.
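For anyone who wants a starting point with SQLIO, a large-block sequential read pass looks roughly like this - switches are from memory, so treat it as a sketch and check the readme that ships with the tool; the test file path and sizes are just examples:

rem 60-second sequential read test, 256KB IOs, 4 threads, 8 outstanding IOs, with latency stats.
sqlio -kR -fsequential -b256 -t4 -o8 -s60 -LS E:\sqlio_test.dat

Pre-create a test file much bigger than the controller cache (or use a param file) so you're measuring the disks and not the RAM on the card.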
 
So what are the best drives for RAID 5/6? Should I spend the extra money to get the ES or RE4 drives that have ERC/TLER?
 
Anyone with an 1880 able to tell me how long the included SFF-8087 cables are? Trying to judge for a new case. Thanks!

the ix-24 I received came with 2 of these:
areca CB-R8787-75M Right Angle SFF-8087 MiniSAS to SFF-8087 MiniSAS 0.75 Meter Cable - OEM
http://www.newegg.com/Product/Product.aspx?Item=N82E16816151079

and 4 of these:
areca CB-8787-75M SFF-8087 MiniSAS to SFF-8087 MiniSAS 0.75 Meter Cable - OEM
http://www.newegg.com/Product/Product.aspx?Item=N82E16816151078
 
the ix-24 I received came with 2 of these:
areca CB-R8787-75M Right Angle SFF-8087 MiniSAS to SFF-8087 MiniSAS 0.75 Meter Cable - OEM
http://www.newegg.com/Product/Product.aspx?Item=N82E16816151079

and 4 of these:
areca CB-8787-75M SFF-8087 MiniSAS to SFF-8087 MiniSAS 0.75 Meter Cable - OEM
http://www.newegg.com/Product/Product.aspx?Item=N82E16816151078
Yeah, I was hoping it came with SFF-8087 to SATA fan-out cables, but it didn't! So now I have 16 2TB drives waiting for the fan-out cables to arrive before I can use them :(
 
Quick question.

I'm using an 1880ix-16 with all 16 internal ports occupied serving a RAID 6 raid set, and using the external SFF-8088 port to serve a second raid set (RAID 1) plus a global hot spare, and I'm thinking of adding a 17th drive to the RAID 6 raid set. Would there be any performance hit from adding an external drive off the SFF-8088 to the set of 16 already on the internal SFF-8087s?
 
Quick question.

I'm using an 1880ix-16 with all 16 internal ports occupied serving a RAID 6 raid set, and using the external SFF-8088 port to serve a second raid set (RAID 1) plus a global hot spare, and I'm thinking of adding a 17th drive to the RAID 6 raid set. Would there be any performance hit from adding an external drive off the SFF-8088 to the set of 16 already on the internal SFF-8087s?

There shouldn't be a performance hit; the controller should deal with the differences in cable length and backplane.
 
There shouldn't be a performance hit; the controller should deal with the differences in cable length and backplane.

I was looking at it more from the standpoint of the drive's location. I was willing to take a hit on the hot spare, since it would just be a protection measure in case of a failure while a replacement is obtained. My concern was adding a full-time member on the external chain to a set that includes all the internal members.
 
I was looking at it more from the standpoint of the drive's location. I was willing to take a hit on the hot spare, since it would just be a protection measure in case of a failure while a replacement is obtained. My concern was adding a full-time member on the external chain to a set that includes all the internal members.

Provided you aren't using an additional expander in the external chassis (only 4 drives), it shouldn't be a problem having array sets mixed across internal and external chassis.
 
Quick question.

I'm using an 1880ix-16 with all 16 internal ports occupied serving a RAID 6 raid set, and using the external SFF-8088 port to serve a second raid set (RAID 1) plus a global hot spare, and I'm thinking of adding a 17th drive to the RAID 6 raid set. Would there be any performance hit from adding an external drive off the SFF-8088 to the set of 16 already on the internal SFF-8087s?

I don't think you'll see a performance hit and it should work fine, but that doesn't mean it's a good idea or good practice. It's an issue I've wrestled with too when I've outgrown a chassis. Ultimately I decided against spanning any single array across multiple enclosures. I think creating a new array in the second enclosure is the better practice. Otherwise you're doubling your points of failure and creating additional risk for the array in the first chassis. Plus there are scenarios down the road that you may not realize now, that might make you regret spanning any arrays across multiple enclosures.

I also think you'd be fine with an HP expander in the second chassis serving those disks, and if you want to assign one global hotspare for both arrays, and locate the hotspare in the second chassis, that's fine since the hotspare is just a stopgap anyway. If a drive in the first chassis fails, and the hotspare in the second chassis kicks in, once the rebuild finishes you just power down and move that drive to the slot of the disk that failed in the first chassis.

That's how I go about it. Keep your arrays:enclosures 1:1.
 
I don't think you'll see a performance hit and it should work fine, but that doesn't mean it's a good idea or good practice. It's an issue I've wrestled with too when I've outgrown a chassis. Ultimately I decided against spanning any single array across multiple enclosures. I think creating a new array in the second enclosure is the better practice. Otherwise you're doubling your points of failure and creating additional risk for the array in the first chassis. Plus there are scenarios down the road that you may not realize now, that might make you regret spanning any arrays across multiple enclosures.

I also think you'd be fine with an HP expander in the second chassis serving those disks, and if you want to assign one global hotspare for both arrays, and locate the hotspare in the second chassis, that's fine since the hotspare is just a stopgap anyway. If a drive in the first chassis fails, and the hotspare in the second chassis kicks in, once the rebuild finishes you just power down and move that drive to the slot of the disk that failed in the first chassis.

That's how I go about it. Keep your arrays:enclosures 1:1.

I don't disagree with you, but from the way I understand the card's mappings, isn't the external port tied into the same pathways as the 4 internal 8087s? In theory I would also lose the internal ports if the external port itself failed.
 
Well, so much for the HD204UI drives working. I saw 3 simultaneous timeouts while expanding from a 12-disk set to 16 disks, which failed the expansion. The result is that I can't access my data and am mailing back and forth with Areca support to regenerate the set so I can get to my data again.
That and the silent corruption happening on this disk have made sure I won't buy any more Samsungs. Time to switch to Hitachi 3TB drives, maybe...
 
Well, so much for the HD204UI drives working. I saw 3 simultaneous timeouts while expanding from a 12-disk set to 16 disks, which failed the expansion. The result is that I can't access my data and am mailing back and forth with Areca support to regenerate the set so I can get to my data again.
That and the silent corruption happening on this disk have made sure I won't buy any more Samsungs. Time to switch to Hitachi 3TB drives, maybe...

So Samsungs do not work with the 1880s? I have about 40 drives I am about to order, and I need to make damn sure I order the right drives. What 2TB drives aside from the Hitachis actually do work?

I thought the consensus here was that the Samsungs were okay for the 1880 series.

I need to order these drives ASAP, but I don't want to be stuck with drives that won't work. Please help me out, guys.
 
Not sure why you need help. You know the Hitachis work, so why not just get those?
 
The enterprise or the desktop versions? I'm unsure which versions people are referring to; the model numbers are very similar. They also seem to be out of stock everywhere and, most importantly, way out of my price range.
 