I think you misunderstood me. If the RAID lives on the Areca card, get a battery backup for the Areca. If the RAID lives on the computer (software) or the drives are pass-through, you'll want your battery backup on the COMPUTER side (not the Areca).
You should still have the BBU; otherwise you can have data loss on power loss from the hard drive's write cache, whereas the BBU lets you turn off the drive write cache and still get acceptable performance from the controller's write cache.
Most consumer drives lie about when data has actually been committed, and randomly having 8-32 MB of recent writes evaporate will trash any filesystem. You can damage a ZFS pool practically beyond recovery if 'sync' commands are not honored as expected.
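To make that durability contract concrete, here is a minimal Python sketch (the filename and record are just placeholders) of how an application or filesystem asks for a durable write. A drive that acknowledges the flush while the data is still sitting in its volatile write cache breaks exactly this guarantee:

```python
import os

def durable_write(path, data):
    """Write data and ask the OS to push it to stable storage.

    os.fsync() requests that the data (and the drive's write cache)
    be flushed to the medium. A consumer drive that lies about cache
    flushes can still acknowledge this and then lose the data on
    power loss, which is the failure mode described above.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        os.write(fd, data)
        os.fsync(fd)  # flush request: data should now survive power loss
    finally:
        os.close(fd)

durable_write("journal.bin", b"committed record")
```

Disabling the drive's write cache and letting a BBU-backed controller cache absorb the writes restores the guarantee while keeping performance acceptable.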
Hmm, so if the system loses power immediately and the controller has data cached, but the RAID doesn't live on the controller, how and when does the controller tell the OS "here's the rest of what I was writing"? Is that built into the Areca driver?
The Areca 1880 cards will be out next week!
Sadly, they will not release prices until next week.
I'll update when I learn anything further.
Saw that the firmware (beta 100607) was updated on the FTP.
Also, there's a new Linux driver (arcmsr.1.20.0X.15-91001.896).
Hi,
We have quite a few Areca 1680ix cards and we're facing a problem.
We create 3 pass-through disks, and when any one of them is stressed with a lot of small writes, the other disks' performance decreases tenfold.
We have created tests with IOMeter:
- 3 workers - custom access specifications - 100% writes
- we ran two tests
- first every worker has 1 outstanding io
- second, 2 workers have 20000 outstanding IOs and the third worker has 1 outstanding IO.
In the second test, performance on the less stressed disk drops around 30x and we get fewer than 20 writes per second!!
This seems like a major Areca problem. The configuration was disk cache disabled and volume cache enabled (the default config) with BBU.
Does anyone have the same problem?
Damjan Pipan
I had this same problem with my 1680 controller (reported pages back in this thread). When 2 RAID 5 volumes were present, writes would drop to 5-10 MB/s or slower. With just one array they were 200 MB/s or so. It's the same with my 12xxML.
They are great cards for just one array but not enterprise cards. Home use only.
4 array sets - 6 volumes
Dear Sir/Madam,
All our RAID controllers have a similar architecture, and as far as I know, most array controllers do.
The controller's total throughput is fixed, so if there are multiple arrays, all arrays have to share that total throughput, no matter whether the controller has one queue thread or many.
Best Regards,
Kevin Wang
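Kevin's fixed-throughput point can be sketched as a toy model (purely illustrative, not Areca's actual firmware scheduling): a single FIFO queue serviced at a fixed rate, with one worker keeping 20,000 I/Os outstanding and another keeping 1, mirroring the IOMeter test above:

```python
from collections import deque

def simulate(ticks=100_000, a_depth=20_000):
    """Toy model: one shared FIFO queue, fixed service rate of one
    completion per tick. Worker A keeps a_depth I/Os outstanding;
    worker B keeps 1 outstanding and resubmits on completion."""
    q = deque(["A"] * a_depth + ["B"])
    done = {"A": 0, "B": 0}
    for _ in range(ticks):
        worker = q.popleft()  # controller completes the oldest I/O
        done[worker] += 1
        q.append(worker)      # worker immediately resubmits
    return done

done = simulate()
print(done)
```

With a shared FIFO and fixed throughput, B completes roughly one I/O per (a_depth + 1) ticks, so its share collapses by orders of magnitude once A floods the queue, which is consistent with the ~30x drop and <20 writes/s reported above.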
They are great cards for just one array but not enterprise cards. Home use only.
Lots of enterprise environments do not use multiple volumes. We use these in an enterprise environment where I work. Here is our brand count:
....
Plenty of enterprises use multiple RAID sets - RAID 1 for the OS and RAID 5 for data. Areca cards are almost never used in the enterprise, because the big OEMs don't include them with their servers.
I think the main reason Areca isn't really used in enterprise environments is support: they don't have a 24/7 number (or anything but email support) like the competition.
Since Adaptec sold their RAID business to PMC, who knows what the future holds for those cards.