ARECA Owner's Thread (SAS/SATA RAID Cards)

Can anyone give me a quick guide to Winthrax?

I basically add my RAID array to the task list, then select which option?

Right now my task list looks like:
RaidDrive->RaidDrive

Option: FileFlags: Write Through
Line Size: 64
 
FWIW, I've had some trouble with Seagate 7200.11 1.5TB drives on my 1260. It seems to have been heat related. I had them all in the same chassis and their temps were reading 41C. I moved them to a separate chassis (using a multilane cable setup) and their temps dropped to 35C. I've had no trouble with the drives since the move a week ago. Before the move, the drives would drop out randomly but would not show any errors when immediately tested in SeaTools. Suffice it to say I keep a pair of hot spares online to deal with the possible drop-outs.

Meanwhile I've had 5 of the 2TB Seagate drives running without a hitch (4-drive RAID10 + hot spare). One difference is that the 1.5TB drives are 7200 RPM versus 5900 RPM for the 2TB ones. The 2TB ones run at around 34C. I swapped them into where the 1.5TB drives were installed and they stayed at that temp. So clearly the 1.5TB drives run hotter and are more susceptible to trouble because of it.

Tangentially, is anyone running a network monitoring console watching drive and motherboard temps?
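For what it's worth, the drive-temp logging side doesn't need much. Here is a rough sketch of the kind of poller I have in mind, assuming smartmontools is installed; the device list and interval are placeholders for your own setup, and drives sitting behind the Areca will likely need smartctl's -d areca,N option added:

import subprocess
import time

# Placeholder device list -- adjust for your system. Drives behind the Areca
# controller will likely need smartctl's "-d areca,N" option added below.
DRIVES = ["/dev/sda", "/dev/sdb"]

def drive_temp(dev):
    """Return the SMART Temperature_Celsius raw value for a drive, or None."""
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Temperature_Celsius" in line:
            return int(line.split()[9])  # raw value column of the attribute row
    return None

while True:
    stamp = time.strftime("%Y-%m-%d %H:%M:%S")
    for dev in DRIVES:
        print(stamp, dev, drive_temp(dev), "C")
    time.sleep(300)  # poll every 5 minutes

Feeding those lines into whatever console you already run (or just a flat file) is the easy part; motherboard temps would need something board-specific on top of this.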
 
This is Windows Server 2008 R2 x64, a fresh install with no service packs.
Used the Areca-provided Storport driver, loaded during Windows installation.
All drives are Seagate ST3146356SS, 146 GB, 3.5" SAS with firmware 0007.
Created two RAID sets as RAID-5 with 3 drives each, then 1 volume on each RAID set.

HDTune by itself:
HDTune_v2.55_ARC1880iRAID5-01_standalone.png

HDTune on both volumes at the same time:
HDTune_v2.55_ARC1880iRAID5-01_simultaneous.png
HDTune_v2.55_ARC1880iRAID5-02_simultaneous.png
 
Hi guys,

Got a question about my Areca 1680 4GB version.

Since I have the 4GB of cache memory, which should I choose when I create the RAID: write-through or write-back cache? I thought write-back cache is the one that would make better use of my 4GB cache.

On my Areca card I have 6 internal ports, which are all used up. How do I utilize the external port to add more hard drives? Do I purchase a SAS expander, connect that to the external port, and run that out to another chassis?

Thanks !
 
Hi guys,

Got a question about my Areca 1680 4GB version.

Since I have the 4GB of cache memory, which should I choose when I create the RAID: write-through or write-back cache? I thought write-back cache is the one that would make better use of my 4GB cache.

On my Areca card I have 6 internal ports, which are all used up. How do I utilize the external port to add more hard drives? Do I purchase a SAS expander, connect that to the external port, and run that out to another chassis?

Thanks !

Write-back is generally better; write-through is only better in very rare cases where all you do is sequential data transfer and the data size is identical to the stripe size, etc. Basically, such a condition probably only happens when all the stars line up, LOL.

After the internal ports are saturated (24 drives in your case), the external port must be connected to a SAS expander to handle additional drives. The SAS expander can be inside your chassis or located inside a separate chassis (some call it a JBOD chassis).
 
hello.

Stupid question. I have a 1680ix-8 in a Norco 4224 enclosure, hooked up to 6 x 2tb drives. I added 2 more 2tb drives over the weekend and expanded the raid set during the week (it took an age).

Finally, the raid set is expanded, but I notice that the underlying volume did not increase in capacity.

Apparently I need to 'modify the volume set' capacity in order to apply the additional 4TB.

I am in the process of ripping my DVD collection (the existing RAID 6 array is already quite full) and I am loath to start again. Will modifying the volume set increase the capacity without wiping my pre-existing DVD rips, or will it blat the entire array and force me to start again?

BTW, the physical disks are my backup, so if I have to build the entire lot again I will have another week of ripping ahead :(
 
It was unnerving to discover that it is possible to log into an Areca controller via out-of-band management and have it falsely list all the attached drives as "FAILED" because the controller hasn't finished booting.

In fact, the "HTTP log in" message managed to get logged a good 8 seconds before the "Raid Powered On" message in the event log.
 
Status Update

1 x Areca 1880i
1 x HP SAS Expander
5 x Samsung F4 2TB HDDs (HD204UI)

Raid array built and rebuilt 3 times with no errors
24+ hours total of Winthrax with no errors

I THINK these drives work with the 1880!
 
Oh good, I guess that confirms the SAS expander works with the 1880 cards. Time to actually order one then.
 
Finally, the raid set is expanded, but I notice that the underlying volume did not increase in capacity.

Apparently I need to 'modify the volume set' capacity in order to apply the additional 4TB.

I am in the process of ripping my DVD collection (the existing RAID 6 array is already quite full) and I am loath to start again. Will modifying the volume set increase the capacity without wiping my pre-existing DVD rips, or will it blat the entire array and force me to start again?

The underlying volume doesn't automatically increase in capacity after a raid set expansion because the card doesn't know whether you want to expand the existing volume set OR create additional volume sets. The key to understanding the distinction is to remember that a raid set can contain multiple volume sets. This was confusing to me at first, so think of the raid set as a big slab, and the volume set(s) as what you carve out of that slab into virtual hard disks. That said, I think Areca could improve the GUI to make this process simpler, like a checkbox to "automatically expand volume set" if only one volume set is present on a raid set, the reason being that most people only bother with one volume set per raid set.

Bottom line: no, you aren't going to lose any data, and it's standard procedure to have to modify the volume set size after a raid set expansion. So in the web GUI, modify "Volume Capacity" to match what it says for "Max Capacity Allowed" and kick that off. Once that completes, go into Windows Disk Management (diskmgmt.msc) and extend the partition.
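If you'd rather not click around in the GUI for the Windows half, the same extend step should also work from an elevated command prompt with diskpart (volume 3 below is just an example; use whichever number diskpart lists for your array):

diskpart
DISKPART> list volume
DISKPART> select volume 3
DISKPART> extend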

In the future I'd also suggest "Foreground initialization" instead of "background initialization" when creating/expanding raid sets. I'm guessing you chose Background when you expanded yours, which is why it took so long.
 
And that raises the question: for those with problems running multiple arrays, does their performance differ if they use separate volume sets?
 
Status Update

1 x Areca 1880i
1 x HP SAS Expander
5 x Samsung F4 2TB HDDs (HD204UI)

Raid array built and rebuilt 3 times with no errors
24+ hours total of Winthrax with no errors

I THINK these drives work with the 1880!

Woo hoo! Areca 1880i + SAS expander works! If there were any 1880i's actually available for purchase I think I might buy one. They look to be out of stock at most of the vendors I know of with decent pricing; darn.

@vr.

Thanks for the performance benchmarks! Those don't look THAT bad to me, but not owning a 1680 it's hard to tell. I will have to see if I can find past 1680 numbers with multiple RAID arrays to compare against before I can say for sure, but that looks pretty acceptable compared to what I believe was reported before. All the more reason to get an 1880i, I think!

---EDIT---

Here is one of the poor performance reports from earlier in the thread, from 'Danman'. Given this, vr.'s numbers look great for 2 RAID5 arrays.

"I've noticed with my 1680LP that it works great with one RAID 5 array but once you add another array (RAID 0 or 5) the performace drops significantly. If I'm streaming from the RAID 5 array and copying files from the other RAID 0 array from/to the RAID 5 array the performance will drop down to as low as ~10MB/s. "

@vr. Could you try streaming a video file from the RAID5 array and then do a copy to the other array, just like Danman? What is the copy performance like then?
 
Status Update

1 x Areca 1880i
1 x HP SAS Expander
5 x Samsung F4 2TB HDDs (HD204UI)

Raid array built and rebuilt 3 times with no errors
24+ hours total of Winthrax with no errors

I THINK these drives work with the 1880!

Just one question: have you also rebooted numerous times without any drive time-outs? 'Cause that's where mine screws up with my HD154UI disks: it works fine once it's up, but I can't boot with the disks attached.
 
I'm not sure why everyone is experiencing such low performance with multiple arrays. I just did a test with my system recently. I have a 1680i and 2 HP SAS expanders. I started off with one volume (20 x 2TB in RAID 6) and created another (12 x 2TB in RAID 6) with background initialization (set at 80% priority). During initialization of the second array, I copied 16TB of data to it from the primary one. It averaged ~285 MB/s. I imagine I would have gotten higher numbers if I had set the initialization priority to something lower.
 
Just one question: have you also rebooted numerous times without any drive time-outs? 'Cause that's where mine screws up with my HD154UI disks: it works fine once it's up, but I can't boot with the disks attached.

I have rebooted 10-12 times with no issues.
 
OK, thanks for the info. Just ordered 4 of these drives to start off with; if they work alright I will start selling off the HD154UI drives and buy more of these.
 
The underlying volume doesn't automatically increase in capacity after a raid set expansion because the card doesn't know whether you want to expand the existing volume set OR create additional volume sets.

Thanks Odditory for your guidance. I also discovered another thread on here yesterday along the same lines as mine, where you posted about the default 4K allocation unit size in Windows Server 2008 R2. I fear I have made the same mistake and thus will be scuppered when it comes to resizing the drive partition in Windows.
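If I understand that issue correctly, it comes down to NTFS being limited to 2^32 clusters per volume, so the allocation unit size caps how far the partition can ever grow:

2^32 clusters x 4 KB (the default) = 16 TB maximum volume size
2^32 clusters x 64 KB = 256 TB maximum volume size

So a partition formatted with the default 4K allocation unit can't be extended past 16 TB without reformatting.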

I will have a ponder about what to do about that. Probably an order of 8 more drives will do the trick. Anybody seen any good offers on Hitachi 2TBs? :)
 
Check the Hitachi 2TB thread -- yes, Fry's has them for $99 right now.
 
So this might be a hiccup, I don't know. I'm guessing it's the card and not the drives.

I made and re-made several RAID 6 arrays with my 5 Samsung drives and nothing else attached. The arrays took about 12 hours (ish?) to make.

I have since connected my old array from my 1231 to it, which was recognized and works.

However, now I am doing another remake with the Samsung drives and it is about 7 hours in at only 13.9%. At this rate it's going to take until Monday morning to make a 5-drive RAID 6 array.

Why would the raid initialization be suffering so much now with the other array attached?
 
Did you choose background or foreground initialization? Might the card also be rebuilding the array from the 1231? Always look at your event log.
 
Foreground, and no, it is not rebuilding the array from the 1231; that array is in there, Normal and active.
 
Well, I used background initialisation with high priority (80%) on mine, initialising an 8x 1.5TB RAID 6 array, and that also took a couple of days. I should be getting my 4x 2TB drives either today or tomorrow; I will let you know how long the foreground initialisation takes for that.
 
I emailed Areca tech support; he basically said to disconnect the other array before the build, and to make sure you set the SCSI channel, etc. to something different. He said that when there is another active array on the card, the build will take longer no matter what, because some of the card's resources are dedicated to that array even when it is not being actively accessed.
 
Well, that is interesting news, seeing as I will be getting my 2TB drives tomorrow, but all my data is on the array I have now, so I can't really disconnect it for as long as the array initialisation takes. Unless I kick it off at night; that might work...
 
With my other array disconnected the array initialization took 12 hours; right now, with the other array connected, it's going to come in right at 54 hours.

You need to make sure the SCSI channel/LUN are not the same, though (pretty sure, anyway).

Is there any way I can remove a drive from an array while online?

Online RAID reduction (not expansion)?
 
I've actually heard it is possible, but I have never tried it. I have confirmation that my disks will be in tomorrow, so hopefully by the weekend I will be able to check it out.
 
It's not possible. The RAID card has no way of differentiating between data and unused space since it doesn't operate on that level.
 
It's not possible. The RAID card has no way of differentiating between data and unused space since it doesn't operate on that level.

Yeah, this is inherent to all hardware RAID solutions.
 
LOL! points for comedy.

Hehe, wasn't meant that way, but after doing some research you guys are right.

Blue Fox, your reasoning does make sense, my bad. I also figured out where I got mixed up. The topic I mostly read (on Tweakers.net) about home-built RAID NAS systems mentions software RAID on Linux or BSD quite a bit. Using software RAID under one of those OSes will actually let you shrink the array, but of course, being software RAID inside the OS, that is a completely different situation and also a way around the reasons you mentioned.

Learned something again! :)
 
Foreground, and no, it is not rebuilding the array from the 1231; that array is in there, Normal and active.

Hmmm, initializing a 4-disk RAID 5 array of 2TB disks is only at 8.5% so far; it has been running for about four and a half hours now. Running a foreground init, background priority is on high, although that shouldn't matter as I disconnected the other array for this.
 
Final tally: the 5-disk 2TB RAID 6 array took about 53 hours to initialize with the other array connected.
 
So if you had a choice between a 16XX or an 18XX card at this moment which would you choose?

Also does anyone have any numbers or data to show the increases (or decreases) in performance of the RAID array as more memory is added to the Areca cards?

NOTE: this would be for a RAID 6 array with 10x 2TB drives.
 
I'd take those benches with a grain of salt. They were done back in May and the guy didn't specify whether it was using the old Marvell silicon or the newer PowerPC production silicon. As always, I don't believe benches until I see them for myself, so I'll post some more definitive 1680 vs 1880 results this weekend.

As for cache memory and whether to get a 512MB card or one of the "ix" cards with an expandable memory slot, it depends on your planned usage. If you want to mix an O/S boot volume and storage on the same card, then cache will come in handy. If all you're doing is storing large media files on a multi-terabyte array then the cache doesn't matter - after the first 4GB of data is read the cache is depleted. Cache mainly helps in scenarios where the same data is read frequently, aka "hotspots" - O/S, db servers, etc.
 
I'd take those benches with a grain of salt. They were done back in May and the guy didn't specify whether it was using the old Marvell silicon or the newer PowerPC production silicon. As always, I don't believe benches until I see them for myself, so I'll post some more definitive 1680 vs 1880 results this weekend.

I can't wait to see your results (I know others will appreciate them as well)! If the 1880 does well I'm pulling the trigger and ordering it immediately! I'm already 90% sure I will buy it anyhow, as it APPEARS to have much better performance with multiple RAID5/RAID6 arrays.
 
Just giving you guys a heads up... over at XS, serra has found, and tested, a super cheap 4GB stick of RAM for the 1880 series :)
The big problem with the sticks recommended by Areca is the price: $450 USD!!

991822 - 4GB DDR2 PC2-6400 6-6-6-18 Proline ECC Registered

Web priced at a mere $133.42 (retrieved 10-2-2010).

EDIT: STICK IS HAVING ISSUES ON ANOTHER SYSTEM. ABORT! ABORT!
 
Quick question on the Areca cards again.

If the card has 3 internal ports and 1 external but is labeled as a -12 card, does that mean only 3 of the 4 ports can be used at a time, or could you buy an external cable that breaks out into 4 SATA drives and utilize 16 drives on a system using a SATA cable setup?
 