ARECA Owner's Thread (SAS/SATA RAID Cards)

I think you misunderstood me. If the RAID lives on the Areca card, get a battery backup for the Areca. If the RAID lives on the computer (software) or the drives are pass-through, you'll want your battery backup on the COMPUTER side (not the Areca).
 
Does anyone here have experience using Xyratex JBOD/EBOD units with the Areca cards?
 
I think you misunderstood me. If the RAID lives on the Areca card, get a battery backup for the Areca. If the RAID lives on the computer (software) or the drives are pass-through, you'll want your battery backup on the COMPUTER side (not the Areca).

Oh, I get it. It's gonna be running ZFS, so the RAID card is basically just a dumb controller. Which is why I was asking if a battery would even help in that situation. All my PCs are on a UPS, and this one will be as well.

So I can skip the Areca BBM if I'm on ZFS, then.
 
You should still have the BBU; otherwise you can have data loss on power loss from the hard drive's write cache, whereas the BBU lets you turn off the drive write cache and still get acceptable performance from the controller's write cache.

Most consumer drives lie about when data is actually, finally written, and randomly having 8-32MB of recent writes evaporate will trash any filesystem. And you can damage a ZFS pool practically beyond recovery if the 'sync' commands are not being honored as expected.
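A quick way to sanity-check whether flushes are actually reaching the platters is to time a loop of small synchronous writes. This is only a rough sketch (it assumes Linux/Unix and a scratch file on the volume you care about; the path is made up), but if a single 7200rpm disk reports thousands of fsyncs per second, something in the chain is almost certainly caching writes it claimed were flushed:

    import os, time

    # Hypothetical scratch file on the volume under test -- adjust the path.
    PATH = "/mnt/array/fsync_test.bin"
    SECONDS = 10

    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o600)
    count = 0
    start = time.time()
    while time.time() - start < SECONDS:
        os.write(fd, b"x" * 512)   # small write, roughly like a sync log record
        os.fsync(fd)               # ask the OS (and drive) to flush it to media
        count += 1
    os.close(fd)
    os.unlink(PATH)

    elapsed = time.time() - start
    print("%d fsyncs in %.1fs = %.0f/s" % (count, elapsed, count / elapsed))
    # A lone spinning disk that honors every flush manages roughly 100-200/s;
    # numbers in the thousands usually mean a cache is lying somewhere.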
 
I would be surprised if someone was using an $800 RAID card for ZFS.
 
I wasn't getting a 1680i for the purpose of a ZFS setup, especially since I already have an AOC-USAS card. I asked about the 1680 because I'm getting one for free, and since I only need one card, I may as well use the Areca.

I can justify using another Ethernet port on the Areca as well. :p
 
You should still have the BBU; otherwise you can have data loss on power loss from the hard drive's write cache, whereas the BBU lets you turn off the drive write cache and still get acceptable performance from the controller's write cache.

Most consumer drives lie about when data is actually, finally written, and randomly having 8-32MB of recent writes evaporate will trash any filesystem. And you can damage a ZFS pool practically beyond recovery if the 'sync' commands are not being honored as expected.

Hmm, so if the system loses power immediately and the controller has it cached, but the RAID doesn't live on the controller, how and when does the controller tell the OS "here's the rest of what I was writing"? Is that built into the Areca driver?
 
Hmm, so if the system loses power immediately and the controller has it cached, but the RAID doesn't live on the controller, how and when does the controller tell the OS "here's the rest of what I was writing"? Is that built into the Areca driver?

Assuming you have the battery backup, the Areca firmware (not the driver) will finish flushing the cache to disk as soon as power is restored and the disks can be spun up. Neither the OS nor the driver is involved at all.
 
The Areca 1880 cards will be out next week!
They will not release prices until next week, sadly.

I'll update when I learn anything further.
 
Anyone know what was updated yesterday/two days ago for the 1680? It says the firmware was updated but shows the same version...
 
Saw the firmware (beta 100607) was updated on the FTP.

Also, there's a new Linux driver (arcmsr.1.20.0X.15-91001.896).
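If anyone wants to check which arcmsr build is actually loaded versus what's sitting on the FTP, something like this works on Linux. It's just a sketch and assumes the module exports its version string (recent arcmsr releases do):

    import subprocess

    # Version of the currently loaded module, if the driver sets MODULE_VERSION.
    try:
        with open("/sys/module/arcmsr/version") as f:
            print("loaded arcmsr:", f.read().strip())
    except IOError:
        print("arcmsr not loaded or no version exported")

    # Version of the module file on disk, for comparison after installing the update.
    out = subprocess.check_output(["modinfo", "-F", "version", "arcmsr"])
    print("on-disk arcmsr:", out.decode().strip())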
 
Update: pgjensen, where were you able to determine it was beta firmware? Am I missing a firmware description or update page or something?

Saw the firmware (beta 100607) was updated on the FTP.

Also, there's a new Linux driver (arcmsr.1.20.0X.15-91001.896).



FYI...

Not sure of all you guys' thoughts on this, but I figured I would post it for your info...

Background: Sent an inquiry Friday about the change; just got this reply in the last few hours. [removed]

Update #2: Cut out all the long email crap... I'll summarize for you. Areca claimed the new firmware was beta software but is now just a bugfix. I'm trying to get them to update their version numbering to reflect a change from the previous release. Either way, something has changed but they haven't changed the version number, which I find annoying. Wow... just realized I got an email from Billion Wu himself.
 
Hi

We have quite a few Areca 1680ix cards and we are facing a problem!

We created 3 pass-through disks, and when any of them is stressed with a lot of small writes, the other disks' performance decreases tenfold.

We have created tests with IOMeter:
- 3 workers, custom access specifications, 100% writes
- we ran two tests
- first: every worker has 1 outstanding I/O
- second: 2 workers have 20000 outstanding I/Os and the third disk has 1 outstanding I/O

In the second test, performance on the less-stressed disk drops around 30x and we get fewer than 20 writes per second!!

This seems like a major Areca problem. The configuration was disk cache disabled and volume cache enabled, the default config with a BBU.

Does anyone have the same problems?

Damjan Pipan
 
I am about to create a new RAID array and will do some testing to confirm.

Do you have the latest BIOS & driver (both from July 2010)?
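In case it helps anyone reproduce this without IOMeter, here's a very rough stand-in in Python. It is not a faithful copy of the test (no real control over outstanding I/Os like IOMeter has); it just hammers two disks with small synchronous writes while a third thread counts how many single writes per second the "quiet" disk can still manage. The paths are made up -- point them at files on the pass-through disks:

    import os, threading, time

    # Hypothetical test files, one per pass-through disk -- adjust the paths.
    BUSY = ["/mnt/disk1/stress.bin", "/mnt/disk2/stress.bin"]
    QUIET = "/mnt/disk3/probe.bin"
    SECONDS = 30
    stop = threading.Event()

    def hammer(path):
        # Continuous stream of 1 kB synchronous writes to keep the disk saturated.
        fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
        while not stop.is_set():
            os.write(fd, b"\0" * 1024)
            os.fsync(fd)
        os.close(fd)

    workers = [threading.Thread(target=hammer, args=(p,)) for p in BUSY]
    for t in workers:
        t.start()

    # Probe the third disk: one outstanding 1 kB write at a time.
    fd = os.open(QUIET, os.O_WRONLY | os.O_CREAT, 0o600)
    done = 0
    start = time.time()
    while time.time() - start < SECONDS:
        os.write(fd, b"\0" * 1024)
        os.fsync(fd)
        done += 1
    os.close(fd)
    stop.set()
    for t in workers:
        t.join()

    print("quiet disk: %.0f writes/s" % (done / (time.time() - start)))
    # If this collapses to a handful of writes per second only while the other
    # two disks are being hammered, it looks like the same shared-queue issue.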
 
Hey guys, if your power gets cut unexpectedly and you're fortunate enough to have the BBU connected, you wind up with this:

http://www.youtube.com/watch?v=jaoIDEnbR0o

Without the display on the chassis telling you what's going on, one may think they have a bigger problem! (The system will hang at PCI-E initialization of the HBA until the flush finishes.)
 
Hi

We installed the latest firmware from Areca. We then upgraded to the beta version from June, I think.

The OS is Win 2008 R2, and in IOMeter we set writes to a size of 1 kB.

Damjan
 
Hi

We have quite a few Areca 1680ix cards and we are facing a problem!

We created 3 pass-through disks, and when any of them is stressed with a lot of small writes, the other disks' performance decreases tenfold.

We have created tests with IOMeter:
- 3 workers, custom access specifications, 100% writes
- we ran two tests
- first: every worker has 1 outstanding I/O
- second: 2 workers have 20000 outstanding I/Os and the third disk has 1 outstanding I/O

In the second test, performance on the less-stressed disk drops around 30x and we get fewer than 20 writes per second!!

This seems like a major Areca problem. The configuration was disk cache disabled and volume cache enabled, the default config with a BBU.

Does anyone have the same problems?

Damjan Pipan

Sounds like you're running into the same problem this guy had:

http://forums.2cpu.com/showthread.php?t=96602

I have it on my machine, but it's not that bad.
 
Hi

I got a response from Areca to my question "I suspected that queued requests in cache impact other volumes. Will you fix this?"

----
Your suspicion is correct.
It is true that the queued requests in cache will impact other volumes, because all volumes use the same queue thread.
And we will not be able to fix this behavior, because it is the firmware architecture.
---

Damjan Pipan
 
I had this same problem with my 1680 controller (reported pages back in this thread). When 2 RAID 5 volumes were present, writes would drop to 5-10 MB/s or slower. With just one array they were 200 MB/s or so. It's the same with my 12xxML.

They are great cards for just one array, but not enterprise cards. Home use only.
 
I had this same problem with my 1680 controller (reported pages back in this thread). When 2 RAID 5 volumes were present, writes would drop to 5-10 MB/s or slower. With just one array they were 200 MB/s or so. It's the same with my 12xxML.

They are great cards for just one array, but not enterprise cards. Home use only.

Hmmm, I wonder if the architecture will change with the 1880 series cards? I may have to sell my 1680ix-24 and get the 1880.

I am about to add a large 2nd array to my 1680ix-24, and now I'm scared :(


danman, what card would you recommend over this one for 2TB Hitachis, with 24 internal ports and also an external SFF port?
 
1 array set is for backups that are run at night
1 array set is for the OS
1 array has data and VMs
1 array is backups for VMs and a working directory for video encoding

Really, 3 arrays are always active.
 
Dear Sir/Madam,

All our RAID controllers have a similar architecture, and as far as I know, most of these array controllers have a similar architecture.

The controller's total throughput is fixed, so if there are multiple arrays, all arrays have to share the total throughput, no matter whether the controller has one queue thread or many threads.

Best Regards,

Kevin Wang

Just got that in reply to: does the 1880ix-24 have the same firmware architecture as the 1680ix-24 (meaning, can queues get blocked with multiple arrays on the same card)?
 
They are great cards for just one array, but not enterprise cards. Home use only.

Lots of enterprise environments do not use multiple volumes. We use these in an enterprise environment where I work. Here is our brand count:

294 Arecas: average iowait 2.976, load 2.230, CPU (user/nice/sys) 18.121
318 LSIs: average iowait 4.617, load 3.020, CPU (user/nice/sys) 33.952
115 3wares: average iowait 6.913, load 2.922, CPU (user/nice/sys) 38.057


We switched to Areca after tons and tons of problems with both 3ware and LSI; so far we have yet to lose data on an Areca array, and they have proved to be *MUCH* better in the enterprise environment.

All these cards are used on shared web servers that are under heavy I/O load (way more than a regular home system), which is where 3ware and LSI seem to fall flat on their faces.

LSI cards have horrible quality control: they will randomly die with "REJECTING I/O to offline device" and lose their cache data even with a BBU. I had to restore two machines just this week from file-system corruption caused by bad LSI cards. We have had to return about 20-25% of our LSI cards due to bad cards that do this very often (and replacing the card with the same disks, cables, etc. fixes the problem).

I think Areca has more than proved itself in the enterprise environment. That being said, all our disks are always part of a single RAID set with two volume sets. The OS volume set does hardly any I/O, so this particular problem does not affect us at all.
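For what it's worth, numbers like the iowait and CPU figures above can be pulled straight from the kernel's counters on each box and then averaged across the fleet. A minimal sketch (Linux only, and just one way of sampling it):

    import time

    def cpu_times():
        # First line of /proc/stat: cpu user nice system idle iowait irq softirq ...
        with open("/proc/stat") as f:
            return [int(x) for x in f.readline().split()[1:]]

    a = cpu_times()
    time.sleep(5)
    b = cpu_times()
    delta = [after - before for before, after in zip(a, b)]
    total = float(sum(delta))
    print("iowait %%: %.3f" % (100.0 * delta[4] / total))
    print("user+nice+sys %%: %.3f" % (100.0 * sum(delta[0:3]) / total))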
 
Lots of enterprise environments do not use multiple volumes. We use these in an enterprise environment where I work. Here is our brand count:

....

Plenty of enterprises use multiple RAID sets - RAID 1 for the OS and RAID 5 for data. Areca cards are almost never used in an enterprise. This is because the big OEMs don't include them with their servers.
 
Plenty of enterprises use multiple RAID sets - RAID 1 for the OS and RAID 5 for data. Areca cards are almost never used in an enterprise. This is because the big OEMs don't include them with their servers.

I am just saying that lots of enterprise environments do not use different RAID sets (us being one of them). Also, if it's just OS and data, then I don't see why you would be affected by this issue, as there should be very few reads/writes going to the OS volume/RAID set. For example, my machine:

http://box.houkouonchi.jp/rrd/

sda = boot/OS (raid6)
sdb = database (raid10)
sdc = storage (mechanical raid6)
sdd = storage (SSD raid0)

As you can see, there is almost no activity on sda. It's mostly writes, and most of that is writing logs and stuff, as that is where they are stored.

I do agree that Arecas aren't used a lot in the enterprise, and it's a damn shame. LSI and 3ware are crap. I have heard good things about Adaptec, but they seem to have the worst disk compatibility of any RAID controller.
 
Since Adaptec sold their RAID business to PMC, who knows what the future holds for the cards.
 
I think the main reason why Areca isn't really used in enterprise environments is support. They don't have a 24/7 number (or anything but email support) like the competition.
 
I think the main reason why Areca isn't really used in enterprise environments is support. They don't have a 24/7 number (or anything but email support) like the competition.

Yeah, good point! Although I waited on hold for 2.5 hours and gave up when trying to contact 3ware. It took 2 weeks of going back and forth before they even got the system partly working, and I had to finish the rest myself, whereas Areca support (minus some Engrish) has actually shown itself to be much better than what I have dealt with at 3ware, especially for an issue I could have recovered from in 5 minutes on an Areca.

Areca, like all other RAID brands, has problems; but compared to all the others, Areca seems to be far more reliable and better suited for the enterprise environment. We have over 700 servers with RAID arrays from 3ware, LSI, and Areca, so I think I have had a big enough spectrum of hardware and cards to get an idea of the reliability of the different brands.

We have only had 2-3 problems on Areca controllers. So far only one (of the now almost 300) controllers has died, and we had problems with one array because someone used the wrong (incompatible) Seagate 1TB 340AS drives. One time someone pulled the wrong disk because Silicon Mechanics didn't wire the backplane correctly, causing a failed volume, which was pretty easy to recover. We have yet to completely lose an array or file-system on any Areca machine, whereas I had 2 file-systems lost on LSI just in the last week and I can't even count how many in total...
 
I've been quite pleased with them as well. I've owned a 1130ML, 1230, 1260, 1280ML, 1680i, and 1680ix-24. I keep buying them for myself for a reason. :p
 
Areca's email response is very, very fast and they have great Linux drivers. LSI actually has great email/phone support, but I don't like their RAID cards (just their SAS HBAs). Their in-kernel Linux drivers are good, but I can't ever get their released drivers to work. Adaptec drivers suck for Linux and I just can't stand them :)
 
Since Adaptec sold their RAID business to PMC, who knows what the future holds for the cards.

Who knows, we might actually get back up to 3 different RAID controller designs. Right now everything is either LSI-based or Intel (now Marvell) IXP-based, and it has been like that for a while.
 