ARECA Owner's Thread (SAS/SATA RAID Cards)

LOL

LSI software management is complete shit compared to Areca's

+1 on that. The Linux dependencies aren't quite as bad as Adaptec's, though I haven't used Adaptec or LSI in about 2 years. The Highpoint out-of-band management looks very similar to Areca's.

Alamone,

What are your general thoughts on the 4320?

My problem with the expander is price; the 24-port Highpoint 3560 is £545. I think the Chenbro expander is about £200, and although it does do 28 ports, I would still need a card with 2 external and 2 internal connectors (or 4 internal and a bracket).

I can fit 8 drives in my case. I was thinking of an 8-drive RAID 5 to start with, then running the other 4 ports to an SFF-8088 bracket like this:
http://www.span.com/product_info.php?cPath=28_1214&products_id=16642

I could then add cheap 8-drive passive enclosures for additional 8-drive RAID 5 arrays as needed.
 
The Highpoint 3XXX series is SATA only, so you can't use it with expanders.
I'm not sure that running the cables directly out to passive enclosures is a good idea. The SAS expander acts kind of like an amplifier/repeater node: the signal doesn't have to travel all the way from your RAID card to the drive, just to the expander, which can then actively reroute the traffic. This should give you some more leeway in using longer cables and SAS adaptors.

I rather like my 4320, and it's a good deal if you can get it from Newegg on sale at around $300. I especially like how easy it is to unplug and detect arrays (online array roaming), as I use it with external JBODs that I power on and off as necessary. I tried this feature on the Areca but it didn't work properly, and LSI/Adaptec don't even offer it as a feature. Also, it runs fairly cool with the included HSF, unlike the Adaptec. Personally I took the fan off and use a 120mm fan to cool all my RAID cards. You can see my build in the showoff thread.
 
Thanks for the info. Looking at this, I will be limited to 1.5m cables for SATA when using SFF-8088/8087. Since the passive boxes would stand on either side of the case, I think that should be OK (I could get away with a 0.75m external cable). The array roaming feature sounds useful.
 
Don't any of you feel hardware RAID is becoming less useful every year, both due to the rise of SSDs and to filesystems like ZFS?

In ZFS, the filesystem itself contains a (software) RAID engine, and it supports features no hardware RAID can ever offer:
  • dynamic stripe sizes (impossible with hardware RAID, because it requires integration with the filesystem)
  • checksum-based self-healing of corrupted data (when running a redundant array, any corruption on disk is detected by checksums and instantly repaired from the redundant copy; on top of hardware RAID this would not work, because the redundancy is hidden from ZFS; see the sketch after this list)
  • instant and safe rebuilds (hardware RAID rebuilds can damage filesystems in some cases and can take very long; ZFS only rebuilds your actual data, so a 600-terabyte filesystem with 20 megabytes of data on it would take only a second to rebuild/scrub)
  • write-back caching without needing a BBU (Battery Backup Unit). With hardware RAID you need a battery unit to protect the controller's write-back memory against failures; ZFS does not need this, because the ZFS Intent Log (ZIL) copes with such failures without requiring the write-back memory to be protected. ZFS uses system RAM as its write-back cache, which is usually much larger than the buffer cache on a hardware RAID card.
  • software is universal and not tied to any physical hardware. With hardware RAID you rely on that specific controller; if it fails, you need another of the exact same type. With software RAID you don't have this problem.
  • lastly, the I/O latency of any hardware RAID is higher; that's why Areca controllers with the Intel IOP are limited to about 70,000 IOPS when testing SSDs. A RAID 0 of two Intel SSDs may already reach this limit, so SSD performance is weak on most hardware RAID; your host CPU is much more powerful, and offloading to a slower processor only lowers performance in this case.
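
To make the checksum point concrete, here is a toy sketch in Python (my own illustration of the principle, not actual ZFS code): keep a checksum per block, verify it on every read, and repair a silently corrupted copy from a good redundant copy. A hardware RAID card cannot do this, because it has no checksums to tell it which copy is the correct one.

Code:
import hashlib

def checksum(block):
    return hashlib.sha256(block).hexdigest()

def read_with_self_heal(copies, stored_sum):
    """Return the block from whichever mirror copy matches the stored
    checksum, and rewrite any copy that has silently gone bad."""
    good = None
    for copy in copies:
        if checksum(bytes(copy)) == stored_sum:
            good = bytes(copy)
            break
    if good is None:
        raise IOError("all copies corrupt; no redundancy left to heal from")
    for copy in copies:
        if checksum(bytes(copy)) != stored_sum:
            copy[:] = good                      # self-heal the bad copy
    return good

# Example: two mirrored copies, one of which suffers silent bit rot.
data = b"important data"
mirror = [bytearray(data), bytearray(data)]
stored = checksum(data)
mirror[1][0] ^= 0xFF                            # flip a bit on "disk 2"
print(read_with_self_heal(mirror, stored) == data)   # True, and "disk 2" is repaired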

All in all, hardware RAID is quickly becoming obsolete once you consider advanced filesystems like ZFS. Its only remaining stronghold is Windows-oriented environments, where the user only has access to older and less advanced technology; NTFS is little more than FAT plus metadata-only journaling, ancient in design.

I predict hardware RAID will disappear quickly; its usefulness has already dropped significantly.
 
Not until ZFS is built into Windows. Right now the only people who attempt to use it are hardcore geeks. It's not for the faint of heart. I value my data enough not to put it in the hands of a company that no longer exists.
 
With open source you do not need any company. There are only a few companies that do what consumers want; virtually all big companies keep consumers addicted and adjust their products to benefit the company at the consumer's expense.

It's true ZFS is not widespread, though let's be honest: only the open source world has advanced filesystems, and most Windows technologies are old and inferior. Still, many people who only know Windows are playing with FreeNAS (which also has ZFS), and Mac OS X almost had ZFS integrated, though Apple didn't go through with that.

If what you say is true, that would mean Windows users benefit from hardware RAID because the OS is not advanced enough, while other operating systems would not need it, or would even be harmed by it. That limits its usefulness to people who have no knowledge beyond Windows and are not prepared to learn anything new in that regard.

I do want to point out many 'Windows-people' look for alternate operating systems to store their data. That's not without reason; Windows simply isn't up to the task.
 
  1. This thread is not about ZFS or the replacement of RAID.
  2. If ZFS was so great big companies would have jumped on it.
  3. Thank you for spamming and taking yet another thread off-topic.
 
In this case it is for my home computer. I don't want or need a dedicated storage server and am quite happy to let Windows share the files; I'm pretty sure it can manage 3 simultaneous HD streams (150 Mbit/sec) over gigabit LAN, which is all I need it for.
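
A quick back-of-envelope check in Python (the per-stream bitrate and usable gigabit throughput are just my assumed numbers, not measurements):

Code:
streams = 3
mbit_per_stream = 50            # assumed bitrate of one HD stream
gbe_usable_mbit = 940           # roughly 1 Gbit/s minus Ethernet/TCP overhead

total = streams * mbit_per_stream
print(f"{total} Mbit/s needed, about {total / gbe_usable_mbit:.0%} of a gigabit link")
# prints: 150 Mbit/s needed, about 16% of a gigabit link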

I use Linux quite a bit at work (mainly CentOS) and am looking at implementing a failover InfiniBand/iSCSI-SRP SAN using DRBD over SDP for replication later in the year, but I would still likely not use ZFS. Clear and reliable out-of-band management and CPU offloading are just too useful. Maybe Intel's new storage-acceleration-oriented Xeons will allow better performance for CPU RAID, but I still don't like the idea of having to rely on the OS for RAID management. It may well be old-fashioned, but I've been burnt by software RAID in the past.
 
@sub.mesa:

Why not start a separate thread to discuss ZFS? I have a few things I'd like to talk about concerning ZFS, but I don't want to hijack this thread or start a thread myself.
 
I am an Areca owner, and I posted my experiences and thoughts about my product in relation to the new age where we don't have to use FAT anymore. That changes the usefulness of the product, and that is what I expressed in my post. I fail to see how this is not related to the topic of the thread.

I'm happy to help anyone with specific ZFS questions in a separate thread, though. But my point was that my product is now little more than a dumb SATA controller that can do staggered spin-up; it adds little beyond that and is not worth its price when using modern storage technologies.
 
Hey Odditory, quick question: in one of your screenshots you have the HDD Queue Depth set to 32. What's the difference between 1 and 32? Is there a big performance impact?

I didn't see any noticeable difference in benchmarks whether I had it set to 1, or 32. Therefore I just left it at default. I know that many people disable NCQ for raid arrays but for me it hasn't made enough of a difference to care one way or the other. Perhaps other people have had a different experience.
 
A few details about the SAS 2 cards - the only coverage I've seen from CeBIT so far...

Original Dutch: http://tweakers.net/nieuws/66023/cebit-areca-presenteert-arc-1880-sas-6g-raid-adapters.html

English highlights from Google translation:

8 port version available "immediately" / May
Marvell 88RC9580 @ 800Mhz
PCIe 2.0
512MB RAM (up to 4GB on models with DIMM slot)
Ethernet management port

ARC-1880LP 8 ports 1x SFF-8087 1x SFF-8088
ARC-1880i 8 ports 2x SFF-8087
ARC-1880x 8 ports 2x SFF-8088
ARC-1880ix-12 12(+4 ext) ports 3x SFF-8087 1x SFF-8088, DIMM slot
ARC-1880ix-16 16(+4 ext) ports 4x SFF-8087 1x SFF-8088, DIMM slot
ARC-1880ix-24 24(+4 ext) ports 6x SFF-8087 1x SFF-8088, DIMM slot

No details on whether there's an onboard expander for the high port models as in the 1680, but I assume that there is.

Areca is relatively late to SAS2 (after LSI, HP, Highpoint and Promise) - hopefully performance will be quite special...
 
Maybe you should've said HP and LSI were just too early to SAS-2, because from what I've experienced myself and heard from others those offerings are dreadful and offered no compelling reason to upgrade from an Areca 1680.

Nice to see they've upgraded their non-raid HBA's to SAS-2 based on those photos of the 1320 cards. Thanks for the update!
 
odditory,

So far I've only seen good stuff written about the newer LSI cards; better start searching for more negative reviews! I'm currently using on-board LSI SAS-2 (8 ports, but no cache and limited to 2 RAID volumes) and ARC-1680ix-24, and unfortunately have an incompatibility between the Areca and a Supermicro (LSI) expander, so for my next build I was drifting towards LSI (or an OEM of LSI) to avoid this kind of problem.

The SAS market could do with a bit more competition, so I really hope the new Areca's are great...

cheers,

Aitor
 
I returned my LSI 9280 since it was pretty awful. I can't say performance was any better since the bloody thing never worked properly...and don't get me started on their management software. I kinda wish there was a version with a RAM slot like the original 1680. That won't stop me from buying one however.
 
Well, don't fret too much about the lack of upgradeable cache on the non-ix versions. For storing big sequential files (like video) the cache is useless anyway, and you won't really see a difference compared to the cards that have the fixed 512MB cache onboard.

I'm going to ask Areca if the expander chip they're using on the 1880ix is the same as on the 1680ix.
 
I've got a pair of 1230s and can't for the life of me figure out how to set up email notification. Anybody know how to set this up?
Thanks
 
I have set up email notification on Areca 1261MLs, but I don't have any experience with the 1230s.

On the 1261s, there are two ways to set up email notification, either through the onboard ethernet port, or using the host computer's own network equipment. If you access the web interface through port 81 and choose the email notification item there, it will use the onboard ethernet port. If you access the web interface through port 82, then it will use the host computer's network equipment.
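
If you are not sure which path you are hitting, a quick probe of both ports can help. Here is a small Python sketch; the address is just a placeholder for your own setup (depending on how things are wired, the two ports may even live on different addresses), and the port meanings are as described above.

Code:
import socket

HOST = "192.168.1.50"    # placeholder: whatever address you use for the web interface

for port, path in [(81, "card's onboard ethernet port"), (82, "host computer's NIC")]:
    try:
        with socket.create_connection((HOST, port), timeout=3):
            print(f"port {port} ({path}): web interface reachable")
    except OSError:
        print(f"port {port} ({path}): no response")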
 
Question or two for the Areca owners/gurus in this thread.

Background: I have a server that will be colo'd and used for data backups. Norco 4220 case, 20 hot swap drive bays + 2 internal drives. Currently it has an Adaptec 5405 controller with 4 drives and is at a client's office but I will be rebuilding it soon with 20 7200 RPM SATA drives in 2 RAID 6 arrays (will boot off a RAID 1 internal pair). The 5405 doesn't seem to work with the HP SAS expander that I'm planning to use, so I'll probably put in an Areca 1680.

Question: am I better off going with the 8 drive 1680 with the HP SAS expander or just getting the 24 drive capable 1680 and skipping the SAS expander? A slot to power the expander is available. I don't care about the minor cost difference between the two setups. I value reliability over performance for this server.

Ancillary question: should I consider a different controller/approach?

Constructive advice welcome!
 
What HDDs are you planning to use?

The internal ports on a 1680ix are rather picky about which HDDs they work with. I know from experience that a 1261ML works with more HDDs than a 1680ix. As for whether the HP SAS Expander combined with, say, a 1680LP is more or less picky than a 1680ix, I do not know.

Another option would be an Areca 1280ML.
 
Had I been able to get the server set up when I originally planned I would have been using Seagate 1.5 TB 7200 RPM drives. Now I'm considering the 2 TB Hitachi units but I'm open to others.
 
I haven't tried it myself, but I'd be wary of using the Seagate 1.5 TB HDDs with a 1680ix. If you check the HP SAS Expander thread, there are several reports of success with the Hitachi 2 TB drives on the HP SAS Expander with an Areca controller.
 
As far as I remember, any Areca product with "ix" in the name means it's running off a port multiplier, with cramped bandwidth and compatibility issues as a result. Please someone verify; I'm lazy today. :)
 
Yes, the ix models use a SAS expander, as does almost every card with more than 8 ports.
Although I would not say that it causes cramped bandwidth.

You are limited by STP (SAS Tunneling Protocol), which is the case whether you are running on an expander or not.
SAS2 raises the bar for STP quite a bit, and this is evident from today's SAS2 cards.
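
To put some rough numbers on the bandwidth question (all assumed values on my part, not measurements):

Code:
lanes = 4                  # one SFF-8087/8088 wide port = 4 SAS lanes
mb_per_lane = 300          # SAS-1: 3 Gbit/s per lane, ~300 MB/s after 8b/10b encoding
drives = 24
mb_per_drive = 120         # assumed sequential rate of a 7200rpm SATA disk

uplink = lanes * mb_per_lane        # ~1200 MB/s through the expander's wide port
demand = drives * mb_per_drive      # ~2880 MB/s if all 24 drives streamed flat out
print(f"uplink {uplink} MB/s vs worst-case demand {demand} MB/s")

Very few real workloads stream from all 24 drives at once, which is why the expander rarely feels like a bottleneck in practice, and SAS2 doubles the per-lane rate to 6 Gbit/s.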
 
Question: am I better off going with the 8 drive 1680 with the HP SAS expander or just getting the 24 drive capable 1680 and skipping the SAS expander?

This question has been addressed in the HP expander thread up, down, and sideways. Besides cost, the biggest benefit for me in offloading drive connectivity to the HP expander is being able to swap RAID cards in and out more easily. When new tech comes out and you feel like upgrading, you get the $500-$600 8-port card rather than sink $1300 into the 24-port version.

Another example of the value in making the drive connectivity independent of the RAID card: take the scenario of having to copy 36-38 TB from server #1 to server #2, each with its own 1680x card and 20 x 2TB drives. By having the drives connected to the HP expander instead of a 24-port RAID card, I just unplug the SFF-8088 cable from server #2's 1680x and plug it into the second SFF-8088 port on server #1's 1680x. Now the massive data transfer goes directly array-to-array rather than over ethernet. The same goes for the JBOD enclosures I built with no motherboard and just a PCIe power board powering the HP expander: with the SFF-8088 cable I just connect it to any server I need fast direct transfers to or from. That's the great value of "externalizing" the connectivity between RAID card and expander; patching becomes very flexible, especially when you have multiple servers.
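
Just to illustrate why that direct hookup is worth it, here is a rough estimate (the sustained rates are my own guesses, not benchmarks):

Code:
tb_to_copy = 37                      # roughly the 36-38 TB mentioned above
bytes_total = tb_to_copy * 1e12

rates = {
    "gigabit LAN": 110e6,            # ~110 MB/s sustained over GbE
    "direct SAS (SFF-8088)": 600e6,  # assumed array-to-array copy rate
}

for label, rate in rates.items():
    hours = bytes_total / rate / 3600
    print(f"{label}: ~{hours:.0f} hours ({hours / 24:.1f} days)")
# gigabit LAN comes out around 4 days; the direct SAS copy well under a day.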

@sub.mesa: no, the HP expander is not picky about drives the way the onboard expander on "ix" model 1680s is.
 
OP UPDATE: added a TIP about proper array spindown settings when using Hitachi 2TB drives with Areca 1680 cards.

Background: I've noticed that Hitachi 2TB drives on Areca 1680 series cards, in arrays configured for spindown, did not wake up properly with the default 0.4-second staggered spin-up value. The event log would show timeout errors for half the drives when the staggered spin-up value was set too low, and the system required a hard reboot. I'm still doing more testing and have submitted the issue to Areca, but for now I've had to resort to the following values:

System #1 - Areca 1680 + 20 x Hitachi 2TB in RAID 6: lowest spin-up setting that worked was 2.0 seconds.
System #2 - Areca 1680x + 12 x Hitachi 2TB in RAID 5: lowest spin-up setting that worked was 1.0 seconds.

At this point I'm not sure this issue is limited to the Hitachi 2TB drives, but those happen to be the drives I noticed this behavior on. It could be that the Areca card is being too impatient, or that these particular drives have a longer power-on sequence than other drives. The same Areca controllers with 1.5TB Seagates or 1TB Western Digitals in RAID arrays woke up fine from spindown even with the lowest 0.4-second setting.
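
The practical cost of the longer stagger is just a slower wake-up. A quick sketch (the ~10 seconds per drive spin-up is an assumed figure):

Code:
def wake_time(drives, stagger_s, spin_up_s=10):
    # the last drive starts after (drives - 1) * stagger_s, then needs ~spin_up_s to be ready
    return (drives - 1) * stagger_s + spin_up_s

print(wake_time(20, 0.4))   # ~17.6 s at the default 0.4 s stagger
print(wake_time(20, 2.0))   # ~48 s at the 2.0 s value that worked on system #1
print(wake_time(12, 1.0))   # ~21 s at the 1.0 s value that worked on system #2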

We'll see what Areca says, but for now, waiting a few seconds longer to be able to access the array data after it was spun down is not the end of the world.
 
Thanks for the input. I hadn't realized the "ix" meant a SAS expander was built into the card. I'll likely go with the HP SAS expander and an 8-port card.

I'm reading through the HP SAS expander thread now. Interesting stuff.
 
The pics: [four photo attachments]
 
Any word on what the Areca 1880 cards will be selling for? Will they replace the 1680 cards at their current MSRP, or live alongside them at a higher MSRP? I hope they plan to match the prices of the LSI cards, since the new Areca 1880 has slightly lower hardware specs (533MHz DDR2 vs. 800MHz DDR2 on the LSI cards). I guess it all depends on how well the new 800MHz Marvell ROC on the Areca 1880 matches up to the 800MHz LSI ROC, as well as firmware/driver optimizations. Marvell's less-than-stellar SATA 6Gbps controllers up to this point don't exactly build my confidence that they'll pull off an amazing SAS 6Gbps ROC/controller combo chip, but who knows. Still, the larger Areca cards have more than 8 ports and allow up to 4GB of cache, so they'll keep that advantage.

Pricing hasn't been announced yet. I'd assume the different-gen cards will coexist; why not? The 1280ML still sells for over $1000 after all these years. Comparing Marvell 6Gbps SATA controllers in the same sentence with the SAS ROC on the Areca 1880 = apples versus oranges. Remember Intel sold their ARM division off to Marvell, so the ROC on the 1880 is the first major byproduct of that transition happening in this category. I doubt Areca would put out a card that didn't measure up; it would be surprising to say the least, but we'll see. The biggest long-term benefit I see is Marvell committing to being a bit more open about giving OEMs like Areca access to the ROC, a big departure from Intel's attitude of trade-secret paranoia and lockdown, which meant bugs couldn't be fixed quickly or at all. Once Intel sold the division off, it marked the end of the line for any updates to the IOP348.

Also, the 1880 having slower cache memory or CPU clock than the LSI card tells us nothing; it's all in the implementation and optimization. As we saw with the Adaptec 5 series versus the Areca 1680, both cards used the same IOP348 and similar memory, yet the 1680 blew the doors off the Adaptec in I/O throughput, build speed, OCE speed, repair speed, etc. by 2 to 1, and in some of my tests 3 to 1. Even the previous-gen Areca 1280ML was faster than the Adaptec 5 series in most tests.
 
Pricing hasn't been announced yet. I'd assume the different-gen cards will coexist; why not? The 1280ML still sells for over $1000 after all these years.
If they do co-exist, I hope the 1680 cards get a price drop and they don't charge a $500 premium over the current prices just to get SAS2.

Comparing Marvell 6Gbps SATA controllers in the same sentence with the SAS ROC on the Areca 1880 = apples versus oranges.
Cebit 2010 news said:
Test samples of the ARC-1880 were already shown at last year's CeBIT, with the promise that the cards would come to market in the first quarter of 2010. The new adapters are no longer based on Intel I/O processors but run on a Marvell 88RC9580 processor. This chip is clocked at 800MHz and features an integrated SAS 6G controller. The cards have an eight-lane PCI Express 2.0 interface and come standard with 512MB of DDR2-533 memory.
Why do you say it's apples to oranges when the Marvell 88RC9580 ROC on the 1880 has an integrated SAS/SATA 6Gbps controller built-in?

Isn't it possible the 88RC9580 ROC will just be the 88SE9480 & 88SE9485 in a single package?

88SE9480 (PCIe 2.0 x8 to 8 SAS/SATA 6 Gb/s ports RAID controller)
88SE9485 (PCIe 2.0 x8 to 8 SAS/SATA 6 Gb/s ports I/O controller)

Remember Intel sold their ARM division off to Marvell, so the ROC on the 1880 is the first major byproduct of that transition happening in this category. Also, the 1880 having slower cache memory or CPU clock than the LSI card tells us nothing; it's all in the implementation and optimization. As we saw with the Adaptec 5 series versus the Areca 1680, both cards used the same IOP348 and similar memory, yet the 1680 blew the doors off the Adaptec in I/O throughput, build speed, OCE speed, repair speed, etc. by 2 to 1, and in some of my tests 3 to 1. Even the previous-gen Areca 1280ML was faster than the Adaptec 5 series in most tests.
Very true. I also didn't realize the Marvell chips were ARM based (for some reason I was thinking they were PowerPC based, my mistake) or that they bought Intel's ARM division. I wonder if the ARM portion will end up being based on that quad-core they showed at CES. If so, it really may end up killing the competition.

If the Areca 1880 ends up being a top performer, I could definitely see picking one up this coming summer when I plan to build a new main PC. Looking forward to reviews when you guys pick them up.
 
I am looking at probably buying a 1680IX-8 and 9 Hitachi 2TB drives for a RAID 6 plus a cold spare. I currently have a Highpoint RR2320 with 6 WD RE3's in RAID 5 and 2 Raptors in RAID 1 for the OS. If I want both to coexist, would I be best off leaving the RR in place or should I move all the data off it to the new RAID and then, using SAS expanders, rebuild the existing RAID 5 on the Areca?

Additionally which cables do I need to either run the 8 SATA drives alone or the total of 16 SATA drives off the Areca? HP SAS expanders in both cases? It seems like it would get a bit pricey for my simple file storage intent. If the cheapest option is just to get some fanout cables and run the 8 drives off the Areca and leave the existing drives on the 2320 then I'd probably just do that.

Are there any issues with prioritizing which RAID card the OS will boot off of? Not sure if this depends on mobo or not but this would probably be going on a DFI P35 board.

Thanks!
 
Are there any issues with prioritizing which RAID card the OS will boot off of? Not sure if this depends on mobo or not but this would probably be going on a DFI P35 board.

If the mobo BIOS supports BBS (the BIOS Boot Specification), you will just get a list of arrays and single drives, and you can choose which to boot from in the boot order.

If the BIOS does not support BBS, then boot order is somewhat luck-dependent when dealing with multiple "SCSI" cards.
 
As far as I remember, any Areca product with "ix" in the name means it's running off a port multiplier, with cramped bandwidth and compatibility issues as a result. Please someone verify; I'm lazy today. :)

Is it port-multiplied internally? If it is not, then each disk has its own dedicated bandwidth.

The spec indicates there are dedicated lanes for each channel (disk).

Just bought two Areca ARC-1680ix-8 cards at work; I did my homework. :)
 