ARECA Owner's Thread (SAS/SATA RAID Cards)

There definitely isn't much to choose from these days. Too much consolidation.
 
Hi

Just to summarize.

Last response from Areca
----
Hi,

We will implement the SR-IOV function on our next version controller firmware that ROC processor has supported the SR-IOV architecture.

Regards
B.... W.
----

You will see this problem when you have a lot of outstanding I/O requests (as created with IOmeter).

Real life example:
- first raidset - OS
- second raidset - data

If you write 100k files in Windows, the work goes to the System process, which will try to write all 100k files in parallel. The Areca will buffer a lot of these requests in memory. That is all fine. The problem occurs when the OS tries to write just a few files of its own and waits for a sync. Due to the architecture, all I/O requests sit in the same queue, so the OS request only gets handled after something like 1-10 s. The whole time, the OS is frozen!

Real life example 2:
You install XenServer and create a separate RAID set for each VM. If any of the RAID sets has a large number of outstanding I/Os, ALL VMs will stall on writes.
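
To make the stall reproducible without IOmeter, here is a rough Python sketch of the same idea: flood one volume with parallel writes, then time a single small fsync'd write on another volume. All paths, worker counts and sizes are placeholders for illustration, not anything Areca-specific.

Code:
# Rough repro sketch (placeholder paths/sizes, adjust to your own raid sets):
# flood the data volume with parallel writes, then time one small fsync'd
# write on the OS volume. If everything shares one queue in the controller,
# the small write waits behind the flood.
import os
import time
import threading

FLOOD_DIR = r"D:\flood"        # placeholder: volume on the second raid set
PROBE_FILE = r"C:\probe.tmp"   # placeholder: volume on the OS raid set
WORKERS = 32
FILES_PER_WORKER = 100
CHUNK = b"\0" * (1 << 20)      # 1 MiB per file

def flood(worker_id):
    """Queue up lots of outstanding writes, like the 100k-file copy."""
    for i in range(FILES_PER_WORKER):
        path = os.path.join(FLOOD_DIR, "f{}_{}.bin".format(worker_id, i))
        with open(path, "wb") as f:
            f.write(CHUNK)

def timed_sync_write():
    """Small write + fsync; measures how long the OS-volume request waits."""
    start = time.perf_counter()
    with open(PROBE_FILE, "wb") as f:
        f.write(b"x" * 4096)
        f.flush()
        os.fsync(f.fileno())
    return time.perf_counter() - start

if __name__ == "__main__":
    os.makedirs(FLOOD_DIR, exist_ok=True)
    threads = [threading.Thread(target=flood, args=(w,)) for w in range(WORKERS)]
    for t in threads:
        t.start()
    time.sleep(2)              # give the controller's queue time to fill
    print("sync write latency under load: {:.2f} s".format(timed_sync_write()))
    for t in threads:
        t.join()

If the sync write takes multiple seconds while the flood is running but is near-instant on an idle card, that matches the single-queue behaviour described above.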

Damjan Pipan
 
Since that response is in "engrish," does that mean it will be in the next update for the 1680ix, or the next update for the 1880ix?
 
I was having similar performance issues with my ARC-1680i in JBOD mode on Windows Server 2008 R2.

Here is what I noticed:

- Copying files (40GB+ each) from a USB connected MyBook drive to HDD1
- Moving a file (40GB+) from HDD2 to HDD3
- Copying files (40GB+ each) over the network onto HDD4

If I just copy a file from the USB connected MyBook drive to HDD1, I usually see about 28 to 30MB/s throughput.
Moving a large file from HDD2 to HDD3 on its own is decently fast.
Copying a large file over the gigabit Ethernet connection usually transfers at 95+ MB/s

Doing two or all three of the above tasks at the same time causes the USB transfer speed to drop below 20MB/s, the HDD-to-HDD move to take FOREVER, and the network transfer to become sporadic, averaging less than 20MB/s!

Individually they seem fine and are fast. Combined they are awful. I thought there was something wrong with my setup. I am using Areca's latest drivers, released a few days ago. I also noticed that the system does a LOT of data caching for each transfer and appears to move what seems like one block of data at a time. Looking at the activity LEDs, the drive with the highest incoming throughput seems to be active the most, so the HDD-to-HDD transfer LEDs are on the most and are usually on at the same time. Sometimes they alternate, but they appear to be solid on. This gets interrupted every now and then when the buffer for the network transfer gets full, at which point the HDD4 LED will be solid on and all others turn off. The HDD1 LED (data coming from the USB drive) seems to be the lowest priority at this point and is only on for short periods when there is 'time' in between the other transfers.
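
For what it's worth, here is a rough Python sketch of how I'd try to quantify it: time each copy alone, then all of them at once, and compare per-stream MB/s. The source/destination paths are just placeholders for the MyBook, HDD1-HDD4 and the network share.

Code:
# Rough contention test: per-stream MB/s when copies run alone vs. in parallel.
# Source/destination paths are placeholders; adjust to your own drives/shares.
import os
import shutil
import time
from concurrent.futures import ThreadPoolExecutor

COPIES = [
    (r"E:\usb\big1.bin", r"F:\hdd1\big1.bin"),          # MyBook -> HDD1
    (r"G:\hdd2\big2.bin", r"H:\hdd3\big2.bin"),         # HDD2 -> HDD3
    (r"\\server\share\big3.bin", r"I:\hdd4\big3.bin"),  # network -> HDD4
]

def timed_copy(pair):
    """Copy one file and return the achieved rate in MB/s."""
    src, dst = pair
    size_mb = os.path.getsize(src) / (1 << 20)
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    return size_mb / (time.perf_counter() - start)

if __name__ == "__main__":
    for pair in COPIES:                              # one at a time: the fast case
        print("alone    {0[0]} -> {0[1]}: {1:.1f} MB/s".format(pair, timed_copy(pair)))
    with ThreadPoolExecutor(len(COPIES)) as pool:    # all at once: the slow case
        for pair, rate in zip(COPIES, pool.map(timed_copy, COPIES)):
            print("parallel {0[0]} -> {0[1]}: {1:.1f} MB/s".format(pair, rate))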

This behavior, or implementation, is by far the worst I have seen! The Storport driver architecture was supposed to 'fix' these kinds of bottlenecks/transfer issues, not make them worse!

After reading some of the previous posts here I am relieved to see that it does seem to point in the direction of the Areca driver as the source of my problem. I hope they fix this soon, or I may need to find a different controller for my setup...
 
I had the same problem with either the SCSIport or Storport driver. I find it weird that people are just now noticing this. How long have the 1680 cards been on the market?
 
The odd thing is that if I run defragmentation (O&O Defrag Server Edition) on, say, 8 drives simultaneously, or run an rsync to create the parity (FlexRAID), which also uses multiple drives at the same time, there doesn't seem to be any performance issue. The resource meter, for example, shows me that with 6 drives active (5 on the ARC-1680i and one on the ICH10R) I get a disk throughput of 700+ MB/s! Granted, at that stage the ARC-1680i is doing mostly reads and only the drive connected to the ICH10R is actually doing the writes, but during the defrag process there are tons of reads and writes to all of the drives simultaneously and it doesn't seem to cause any issues. Or maybe I just haven't paid much attention when it happens... I'll have to take a closer look at the throughput during a defrag next time I run it.
 
I only experienced it with multiple RAID sets on the same Areca controller. When I only had one RAID 5/6 set on the Areca and moved my other array onto the built in ICH10 everything ran great.
 
I, too, noted performance issues when copying huge amounts of data between two RAID6 volumes. To me it seems that the drives I am using might at least be partially responsible. In the past I had two arrays on my 1680-ix24:

- a RAID6 volume consisting of seven 1TB Hitachi HDT721010SLA360 and five Samsung F1 RAID HE103UJ drives
- a RAID0 volume consisting of two 1.5TB Seagate ST31500341AS and five Samsung F2 HD154UI drives

Copies between both volumes were fast and high transfer rates sustained throughout, even with multiple parallel transfers.

A while ago I started to experiment with the HP SAS expander. Since then I have moved the RAID0 volume to the HP expander and created a new RAID6 volume consisting of four 2TB Samsung F3 HD203WI drives directly attached to the ARC1680ix24. With this array the trouble started. First of all, the F3 drives have compatibility issues. With NCQ support enabled on the Areca card the Samsung drives cause timeout errors in the event log and eventually drop out completely. The drives do work with NCQ support disabled, but overall performance goes down the drain. I also notice the "pumping" performance: very fast as long as data is being read into the cache (I have 4GB installed) and slowing to a crawl when data is written back. Copying data between two JBOD disks simultaneously will sometimes freeze Windows for ten seconds or more.
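
In case anyone wants to see the "pumping" for themselves, a quick Python sketch along these lines makes it visible: write a large file in fixed-size chunks with an fsync after each one and print the per-chunk rate. While the controller cache absorbs writes the rate looks great; once it has to flush, the per-chunk times spike. The file path and sizes below are placeholders.

Code:
# Visualize the "pumping" write behaviour: per-chunk MB/s for a large write.
# Path and sizes are placeholders; 8 GiB total to get well past a 4GB cache.
import os
import time

TARGET = r"D:\pump_test.bin"   # placeholder: file on the affected volume
CHUNK = b"\0" * (64 << 20)     # 64 MiB per chunk
TOTAL_CHUNKS = 128             # 128 * 64 MiB = 8 GiB

with open(TARGET, "wb") as f:
    for i in range(TOTAL_CHUNKS):
        start = time.perf_counter()
        f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())   # push past Python and the OS cache each chunk
        rate = 64 / (time.perf_counter() - start)
        print("chunk {:3d}: {:7.1f} MB/s".format(i, rate))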

Originally I had the new RAID6 array connected to the HP expander, but that gave me an average write rate of around 6MB/s only. It took 24 hours to copy 500GB from a JBOD disk to the array. I attribute this more to the HP expander, though. The write performance was much better after I connected the array directly to the Areca controller. Then I achieved an average write rate of around 15MB/s. Still not fast, but better. But this is a discussion more suited - at least partially - for the HP owner's thread.

Next week I will try the latest Areca beta firmware, dated 2010-07-15. Maybe it will improve things. I installed the beta dated 2010-06-07 to get the HP SAS expander working, but noticed that this beta had some other issues. For example, it took the Areca three minutes longer to become ready during boot than with the official 1.48 firmware. Nothing else was changed: no additional drives attached, no attributes changed in the Areca firmware, and the HP SAS expander was not even connected.

There is also a firmware upgrade available for the Samsung drives to address some compatibility issues between the drives and the AMD SB850 chipset. I don't think it will help, but maybe I'll give it a try anyhow.

At the moment I can only conclude that the Samsung F3 drives are not really compatible with the Areca card. I didn't have issues with the Samsung F1 RAID or F2 drives, so I thought I should give the F3 drives a try. Seems that this wasn't such a good idea.

By the way: the Samsung F1 non-RAID 1TB drives really are not compatible with the Areca controller, just as Areca's HDD compatibility overview states. I experimented with an F1 HD103UJ I had lying around, connected as a JBOD, and experienced timeouts with that drive too. But it works with NCQ support disabled, like the F3 drives do.
 
I'm still wondering how some drives can be incompatible with some controllers if they all adhere to the specs. The F3 series shows some odd behaviour compared to the F2, like ignoring the staggered spin-up settings on backplanes and sometimes just dropping off my LSI controllers if they are set to standby. Maybe I should also try disabling NCQ.
 
I recently bought an ARC1680-ix8, and I'm at a bit of a loss. At the time I figured I'd just get a SFF-8087 to 4x SATA breakout cable, but then it occurred to me that I may be able to get an internal drive cage with a single SFF-8087 connector. However, I've not been able to find many and those I've found seem to be out of production. In hindsight I guess that makes sense, since most people with SAS would have special server grade equipment.

Anyway, hope springs eternal. If anyone knows of a drive cage that will take 4 SATA disks, hot swappable, fit in 3x 5.25" drive bays, and provides a single SFF-8087 connector on the back... that'd be very helpful.

Every online store seems to have Intel's AXX4DRV3GEXP offering, but finding information on that is impossible. I don't even know if you can mount it in a standard case; somehow I suspect not. Anyone know for sure?
 
http://www.rackmountpro.com/product.aspx?prodid=1715&catid=270

I have 3 of these on my 1680ix-12. They work great.
 
I am losing faith in my two ARC-1680ix-8 cards. But it may not be their fault. At work the people in charge of money chose to put fat 2TB consumer drives in ESXi servers with these RAID controllers. Any heavy disk I/O (snapshotting, etc) can cause a large number of SCSIAborts because one of the SATA disks timed out (according to the Areca card).

I haven't seen any noticeable improvements with a BBWC, which is a bit of a bummer.

We are putting together a new server and have chosen enterprise SAS disks: two 600GB 15k RPM drives for a RAID1 mail server and six 1TB Seagate Constellation disks. I hope these work without issue, or we will definitely be going with a different RAID controller.
 
You could probably convince those guys to ship there. I ordered from them and they seemed eager to please.

They're willing to ship, but it looks like postage alone will cost over US$300, which is more than the cost of two units themselves. :p
 
Found some benchmarks of the 1880i on Areca's forums.
http://arecaraid.com/forum/viewtopic.php?f=7&t=14&start=30#p59

I scanned the file attached to the post with ESET; it's clean.


Also saw this
Here is the official word on the status of the Areca ARC1880 series.

"Sorry for the delay schedule on 6G SAS/SATA RAID Cards. We have solved the fundamental problem on this series finally since it was announced to deliver months ago. Please be patient for few days after out final verification and new F/W updated."

This was received on 7/29/10
 
Due to the cost of shipping a SAS-enabled RAID cage to where I live, I'm thinking of getting a mSAS-to-4xSATA fan-out cable and a regular SATA cage, since both of those are available locally.

So to summarize:
Areca 1680ix-8
Adaptec ACK-I-mSASx4-4SATAx1 1m R cable (2236700-R)
And some kind of cage for 4x regular SATA drives.

Anyone see a problem with this idea? One thing that worries me is that all the sites selling these cables say something like 'Internal mini Serial Attached SCSI x4 (SFF-8087) to (4)x1 Serial ATA (controller based) fan-out cable measuring 1 meter.'

Assuming there isn't a problem, I guess one benefit of this is I could reuse the cages later with my other, non-SAS Areca card, when I eventually upgrade again.
 
Forward means you have a multilane connector on the HBA and 4 connectors for the drives/backplane. Reverse means you have 4 connectors for the HBA and multilane for the backplane. If your cables are that bloody expensive, I would consider importing some. I just bought 6 on eBay for $54 shipped.
 
So 'yes' in other words? :p
Thanks for using the terminology though. I need to learn it.

Also, I will definitely look into cheaper cables, but yes, they are not exactly cheap over here. Keep in mind our dollar is worth less, though.



New question now: Does anyone know if I can just connect my existing RAID set up to the new controller and have it just work? My existing set is on an Areca 1110.
 
Something like this? https://www.techbuy.com.au/p/76805/3Ware/CAB-8087OCF-10M.asp

What exactly is the difference? Is it that a reverse cable connects to multiple SATA ports on a RAID card or motherboard and goes to a SAS backplane, whereas a forward cable goes from a SAS RAID card to multiple SATA hard disks?

They are wired differently. See this:

http://www.cs-electronics.com/PDF/34-iSAS-737P-xm p2.pdf

and this

http://www.cs-electronics.com/PDF/35-iSAS-7P73-xm p2.pdf
 
Yes, you can move arrays between different Areca models. Nothing to it really. I've moved arrays between 4 different cards.
 
So 'yes' in other words? :p

New question now: Does anyone know if I can just connect my existing RAID set up to the new controller and have it just work? My existing set is on an Areca 1110.

I believe I read in the Areca KB that you could move a SATA array to a SAS controller (using SATA drives) but not the other way around.
 
Very interesting question. It actually isn't listed in their documentation, other than:

RAID-on-Chip 800MHz

Do you have any insight as to why they didn't go with the Marvell? Wonder if they are being intentionally vague?
My main interest is low-QD random I/O and how well it scales. The problem with most SSD-array solutions right now is that low-QD (1-3) performance isn't much higher than that of a single SSD.
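
For reference, this is roughly how I picture a low-QD test, independent of IOmeter: N worker threads each keep one 4K random read outstanding, so queue depth is about N. This plain-Python sketch uses a placeholder test file and reads go through the OS cache, so it only shows the structure of the measurement, not real array IOPS; a proper run needs unbuffered/direct I/O or IOmeter itself.

Code:
# Low-QD 4K random read sketch: QD ~= number of worker threads (1-3 here).
# TEST_FILE is a placeholder for a large pre-created file on the array.
# Reads here can be served by the OS cache, so treat results as illustrative.
import os
import random
import threading
import time

TEST_FILE = r"D:\testfile.bin"
BLOCK = 4096
RUN_SECONDS = 10

def worker(idx, counts, stop):
    """Issue back-to-back 4K reads at random aligned offsets until stopped."""
    size = os.path.getsize(TEST_FILE)
    blocks = size // BLOCK
    with open(TEST_FILE, "rb", buffering=0) as f:
        while not stop.is_set():
            f.seek(random.randrange(blocks) * BLOCK)
            f.read(BLOCK)
            counts[idx] += 1        # each thread only touches its own slot

def run(queue_depth):
    counts = [0] * queue_depth
    stop = threading.Event()
    threads = [threading.Thread(target=worker, args=(i, counts, stop))
               for i in range(queue_depth)]
    for t in threads:
        t.start()
    time.sleep(RUN_SECONDS)
    stop.set()
    for t in threads:
        t.join()
    return sum(counts) / RUN_SECONDS

if __name__ == "__main__":
    for qd in (1, 2, 3):
        print("QD {}: ~{:,.0f} reads/s (illustrative)".format(qd, run(qd)))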
 
They did go with Marvell. Technically the Intel ones were Marvell too, since Marvell bought out the division a while back. I bet they don't hit Newegg for another month.
 
Intel is using LSI re-brands currently, same as their 6Gb/s line right now. For the 6Gb/s line Intel is definitely going with LSI hardware; you can even cross-flash LSI firmware onto Intel cards.
Interesting stuff with Areca. Wonder why they aren't naming the RoC...
 
Very interesting question. It actually isn't listed in their documentation, other than:

RAID-on-Chip 800MHz

Do you have any insight as to why they didn't go with the Marvell? Wonder if they are being intentionally vague?
My main interest is low-QD random I/O and how well it scales. The problem with most SSD-array solutions right now is that low-QD (1-3) performance isn't much higher than that of a single SSD.

The available Marvell documentation, like all things Marvell, is incredibly sparse, so they may have gone with a Marvell controller. I checked the PMC controllers and, from everything I could find, they run at 600 MHz, though there are a couple of chips where, like Marvell, the documentation is sparse. The only controller I know of that supports 800 MHz DDR2 and has a documented 800 MHz core is the 2108, but they've never used LSI parts before, and there is no LSI documentation for a 4GB maximum RAM size. Honestly, I don't understand why Marvell and PMC are so stingy with simple things like datasheets and other documentation.

Neither the PMC parts nor the LSI parts are ARM based (they are MIPS and PowerPC respectively), so that might explain the delay in getting the cards out there.

If you are worried about low QD and SSDs, the only option I know of that really works right now is host/software-based RAID 0/1/10/1e. Even the 9211, though, is slower than the ICH10 when it comes to SSDs, due to tunneling the SATA protocol. There just aren't that many native SAS SSDs available, and they all cost a significant multiple for roughly the same performance. The only two SAS SSDs I know of right now are the STEC and Hitachi ones, and I don't think the Hitachi one has hit general availability yet.
 
Well, the 9260 w/ FastPath key is definitely fast... I can reach 125k IOPS, and others have reached 150-180k, and this is with random 4K. It is just low QD where I would like to see some improvement; however, with the FastPath key on the 9260 it is probably safe to say it is faster than the ICH.
http://www.xtremesystems.org/forums/showpost.php?p=4413416&postcount=141
another breakdown:
http://www.xtremesystems.org/forums/showpost.php?p=4415804&postcount=185

At low QD, you are going to be limited by the transaction latency. Unless you have some data for the ICH with software-based RAID that says otherwise, all the data I've seen for ICH10 w/ software RAID points to it having by far the lowest latency for 0/1/10/1e. I didn't see any data for ICH10 in those graphs, from what I can tell.
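
To put rough numbers on that: with a fixed per-I/O latency, the ceiling is at most QD divided by the latency, which is why shaving firmware latency matters so much more at QD 1-3 than at QD 32. A quick illustrative calculation (the latency values are made-up examples, not measurements of any particular card):

Code:
# Back-of-the-envelope ceiling: max IOPS <= queue_depth / per-I/O latency.
# The latency values are illustrative only, not measurements of any card.
def max_iops(queue_depth, latency_ms):
    return queue_depth / (latency_ms / 1000.0)

for latency_ms in (0.2, 0.1):
    for qd in (1, 2, 3, 32):
        print("latency {} ms, QD {:2d}: <= {:,.0f} IOPS".format(
            latency_ms, qd, max_iops(qd, latency_ms)))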
 
Well, yes, soft RAID is faster :) However, for a hardware RAID (bootable being key for OS use) you would be hard pressed to find a faster solution.
However, it should scale higher than that. That is my interest in the 18xx series: how they handle it. Not really interested in soft RAID at all, tbh.
 
Well, yes, soft RAID is faster :) However, for a hardware RAID (bootable being key for OS use) you would be hard pressed to find a faster solution.
However, it should scale higher than that. That is my interest in the 18xx series: how they handle it. Not really interested in soft RAID at all, tbh.

Yeah, I understand that, though the point still stands that at low QD with SSDs you are effectively latency bound, which means the critical issue is how fast the firmware of the RAID card is. It's not that easy for them to lower it significantly due to the complications of the software stack and the firmware interface.
 
Well yes, I understand that as well. However, I have had the 9260 since the first week it was released and have tested it significantly. Over the course of 5 or 6 firmware revisions, plus the FastPath key, I have seen tremendous performance gains: on the order of 2x to 3x in small-file random I/O, and latency going from 0.18 to 0.08. Also, in my testing with the 9211 I saw the same kinds of performance increases with its integrated RAID; even on an HBA there can be advances...
So yes, latency is the factor, but expecting them to be able to make significant gains is not unreasonable, which is exactly why I am interested in Areca's offering. Maybe they have a better solution... time will tell.
 
Intel is using LSI re-brands currently, same as their 6Gb/s line right now. For the 6Gb/s line Intel is definitely going with LSI hardware; you can even cross-flash LSI firmware onto Intel cards.
Interesting stuff with Areca. Wonder why they aren't naming the RoC...
Marvell bought the Intel IOP line.
 
Not trying to derail the Areca thread here :) but just for comparison's sake, here is first-gen firmware vs. latest-gen with the LSI 9260-8i and 8R0 Vertex.
[attached image: 4kcomparsions.png]

Tremendous gains can be made!

@blu fox - ok, I see now :)
 
Computurd: what are the left benches and what are the right benches in that picture? And what kind of config, since >300MB/s cannot be one drive? What kind of I/O workload? 4K read, 4K mixed?

But 1.2M IOPS is a nice score anyhow. :)
 