About Areca ARC-1680ix

Advanced power management: only Hitachi(?) SATA drives are supported on it right now. It can spin down the disks after certain time periods and whatnot.

My WD drives say "No" for APM support in the latest BIOS. It's only in the 1.47 BIOS.
 
I wonder if the new firmware can speed up rebuilds. My 1680ix-24 is pretty slow when doing RAID-1 rebuilds (I only ever use RAID 1, for various reasons which I won't go into here). Dell PERC5i and 6i rebuild pretty much at the native speed of the drives, but the 1680ix is about 3x slower with the same drives. Well, SATA drives; with SAS drives it's fast, so maybe it's due to the emulated SATA layer. Normal I/O is blazing fast.

I've got another 1680ix on order so will be able to compare new vs old firmware on a non-production system soon...

Do you have the background process priority set high, to 80%? I have noticed that when it is set to the default (10 or 20%), rebuilds go really slow even when the system is completely idle I/O-wise. Rebuilds sped up about 3x or more on a RAID 10 rebuild when I changed the priority.
 
houkouonchi:

Yes, priority was at 80%. I think the behaviour may be drive-specific: Hitachi 7.2K RPM 2.5" and WD VelociRaptor = slow; Intel X25-M and Seagate Savvio 10K.2/10K.3 = fast.

It matters a lot less now that I've replaced all my VelociRaptors with real enterprise drives.
 
Do it in the foreground and it will be much faster. 8x 1.5TB drives took around 15 hours for me, I think; it was 30 under background at 80%.
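Rough sanity check on those numbers (my own arithmetic, using only the figures above and assuming a rebuild has to pass over each drive's full capacity once):

```python
# Back-of-envelope from the figures in this post: capacity / time = average rate.
capacity_tb = 1.5       # per-drive capacity (1.5TB drives)
hours_foreground = 15   # foreground rebuild time reported
hours_background = 30   # background at 80% priority

def rebuild_rate_mb_s(tb: float, hours: float) -> float:
    """Average rebuild rate in MB/s (1 TB = 1e6 MB, decimal, as drives are sold)."""
    return tb * 1e6 / (hours * 3600)

print(f"foreground: ~{rebuild_rate_mb_s(capacity_tb, hours_foreground):.0f} MB/s")  # ~28 MB/s
print(f"background: ~{rebuild_rate_mb_s(capacity_tb, hours_background):.0f} MB/s")  # ~14 MB/s
```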
 
We're running 8 WD20EADS drives (RAID 5) here with a 9690SA card, and we were experiencing random drive timeouts for a while. Then we found out that the actual hard drive goes to sleep and doesn't respond quickly enough when the RAID card does its heartbeat. The solution we came up with was to flush the RAID's cache every 15 minutes, WHICH IS HORRIBLE. Anyone else experiencing this issue with the 9690SA and/or WD20EADS drives? For the folks that said it was a fluke, I can assure you, it'll happen again :(
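For anyone who wants to script that workaround, here is a minimal sketch of the same idea from the OS side: force a small write to the array on a 15-minute timer so the drives never sleep through the controller's heartbeat. The mount point is hypothetical; point it at any filesystem that lives on the affected array.

```python
import os
import time

# Sketch of the workaround above: touch the array every 15 minutes so the
# drives never idle long enough to spin down and miss the heartbeat.
KEEPALIVE_PATH = "/mnt/array/.keepalive"  # hypothetical mount point on the array
INTERVAL_SECONDS = 15 * 60                # every 15 minutes, as in the post

while True:
    with open(KEEPALIVE_PATH, "w") as f:
        f.write(str(time.time()))
        f.flush()
        os.fsync(f.fileno())  # push the write through the OS cache to the disks
    time.sleep(INTERVAL_SECONDS)
```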
 
I thought I'd post this here to show the performance I have been achieving using the 1680ix-24 controller:

OK, I finally got around to testing the drive performance of the rig I made in Jan 2009:
  • Dell XPS 730x
  • Intel® Core i7 processor Extreme Edition 965 (3.20GHz, 8MB)
  • 6GB RAM (3 x 2048) 1067MHz DDR3 Dual-Channel
  • 6X Blu-Ray Burner Drive including software
  • Creative Sound Blaster X-Fi Xtreme
  • Dell 3008WFP Monitor - Ultrasharp 30" Wide Aspect Flat Panel
  • 2x nVidia GTX 295 in SLi (quad SLi configuration)

I used HD Tach (running in Windows 2000 compatibility mode; there's got to be a better way than this) and got the following results:


OCZ Vertex (4x SSD) RAID Array:
[HD Tach screenshot: OCZVertex.gif]

Avg read 814.9 MB/s
Burst 1039.6 MB/s

Samsung F1 (4x SATA) RAID Array:
[HD Tach screenshot: samsungf1.gif]

Avg read 339.1 MB/s
Burst 1163.2 MB/s
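
A quick back-of-envelope on those averages (my own arithmetic, assuming sequential reads scale roughly linearly across a 4-drive stripe; the burst numbers mostly measure controller cache, so only the averages are divided):

```python
# Per-drive math from the HD Tach averages above.
arrays_avg_read_mb_s = {
    "OCZ Vertex x4 (SSD)": 814.9,
    "Samsung F1 x4 (SATA)": 339.1,
}
for name, avg in arrays_avg_read_mb_s.items():
    print(f"{name}: ~{avg / 4:.0f} MB/s per drive")
# ~204 MB/s per Vertex and ~85 MB/s per F1 -- both about right for 2009-era
# drives, which suggests the reads are drive-limited, not controller-limited.
```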


A couple of points:
I have no idea whether this is good, bad or indifferent; any feedback would be useful.
The RAID controller is an Areca ARC-1680ix-24 with 4GB of cache.
I have a second array on my server with the same controller, but I have had 8 drive failures out of the 18 Samsung F1 1TB drives I purchased. The latest batch are OK, but I will replace these with the 2TB WD Caviar Blacks which are waiting for me in the office.
Has anyone else seen this kind of high failure rate with the Samsung F1s, or is this just a godawful batch?


How does this translate to real life?
Well, apps are almost instant, including things like Adobe Photoshop CS4 and Premiere CS4. Win7 boots in about 10-15 secs. The PC is generally a pleasure to use. Oddly enough, this rig has been the most stable platform I have had the pleasure to work with. I suspect that has much to do with Dell's testing rigour and tolerances: rock solid, never a crash or blue screen in the last 12 months.


What next?
Hmm, avidly watching SATA3 atm. Carl found an LSI RAID card based on the new 6Gb/s bandwidth. LSI aren't bad, but they are never at the forefront of performance, so I'll wait until this becomes far more commonplace and re-evaluate the market then. That said, there are only a handful of SATA3 HDDs around, and most are 2TB drives, which I am not keen on atm, especially knowing that SSD costs have plummeted in the last few months. MLC drives are massively cheaper, which means SLC won't be far behind. And I much prefer SSD to SATA3 on a platter-based HDD.

I am not that impressed with SATA3 at this time, though; early tests have not been encouraging. However, if SAS becomes the I/O standard of choice, then I think removing those bottlenecks will unlock the capability of SSDs even further. Watching the current market with bated breath.

I have 6x OCZ Agility EX SLC drives sitting in a box next to me to replace these 1-year-old OCZ Vertex drives. I thought about the Intels, but these Agility drives are nearly the same performance, and besides, a new generation of Intel SSDs is due to launch imminently.

Any advice as to how to extract more throughput is gratefully appreciated.
 
Hello LionelW

I was very happy to read about your success with the Areca 1680ix card. Thanks for sharing!! I have been looking around for some success stories with this card and the WD 2TB Caviar Blacks, and since you mentioned that this is your next step, I'm very eager to hear about it :D Are those the RE4 WD2003FYYS versions or the WD2001FASS you are using?
 
Hi Yoheen, actually I ordered them before Christmas; I'll check when I go in tomorrow ;).
 
@Lionel nice rig! I would be wary of buying any more Samsung F1s (1TB or 2TB) for RAID arrays. Heard too many complaints about those.

RE: the SAS2 6Gb/s RAID card: keep your eye on the Areca 1880 series; it should be a screamer. They claim twice the performance of the IOP348, and now that Marvell has taken over Intel's former stake in the ARM biz and is supplying the new-gen chips, hopefully they'll allow the firmware tweakability by OEMs that Intel never allowed on the IOP348 (i.e. Areca 1680ix, Adaptec 5-series).

First video of the Areca 1880LP, back at Computex '09: http://www.dailymotion.com/video/x9hwx4_areca-computex-2009_tech

the euro douche asking the questions in the video is borderline retarded but try to ignore him ("durrrr - it haz ethernet port?")
 
I can't wait on the 1880 unfortunately. I bought an LSI 9280 last night. I'll probably still buy one when it comes out though.
 
Interesting: Q1 is mentioned, but so is April, which will probably translate to Q2. Looks like a tasty bit of kit ;).
 
I can't wait on the 1880 unfortunately. I bought an LSI 9280 last night. I'll probably still buy one when it comes out though.

Nice card, BlueFox -- what made you settle on that? I've never owned an LSI card, but it seems like you can't go wrong with that brand; I always read they have the broadest compatibility and weren't plagued by the drive incompatibility issues of IOP348-based cards. A 1-year warranty is kind of bullshit when everyone else does 3, and the price of the battery module is less exciting than getting a prison shank to the spleen, but chances are if it survives a year it'll probably run forever anyway. The "LSI SafeStore" encryption thing sounds cool as well, if it'll actually encrypt drives and/or an entire array.

I'm hoping you'll post benchmarks for us to salivate over; hopefully it pushes some good numbers. Theoretical versus actual can vary wildly, as we know. For example, my 800MHz IOP341-based Areca 1280ML beats my "next gen" *cough* dual-core 1.2GHz IOP348-based Adaptec 52445 in RAID 6 read AND write performance - WHY, JESUS, WHY? :headdesk:

If the new 6G SAS2 cards really deliver "double" the bandwidth/throughput/performance, I'll be shocked and amazed.
 
Nice card, BlueFox -- what made you settle on that? I've never owned an LSI card, but it seems like you can't go wrong with that brand; I always read they have the broadest compatibility and weren't plagued by the drive incompatibility issues of IOP348-based cards. A 1-year warranty is kind of bullshit when everyone else does 3, and the price of the battery module is less exciting than getting a prison shank to the spleen, but chances are if it survives a year it'll probably run forever anyway.
Apparently you can change the drive timeout values on LSI cards, so I figured they might work with the 20 WD drives I just bought. I would have gone with one of the 9260 cards, but they are limited to 32 drives (I need at least 40). As for the 1-year warranty, it really doesn't bother me; Areca's warranty and support scare me more. LSI has 24/7 phone support as well, so that's really nice. Yeah, the BBU being $175 is bloody ridiculous. I thought that paying $120 for one of the Areca ones was bad enough, but that's just crazy (though I did manage to get it for $65 from Ockie when he sold his).
 
Areca's not bad, at least for phone support; they send you over to Tekram in California, which is their U.S. distributor. OTOH, I'm going through an RMA process right now for a bad port that developed on my 1680ix-24, and it's LOOKING like they may not just swap it automatically but "send it for repair". If I end up having to wait weeks for a replacement, then that may be the end of the line for Areca and friends as far as getting any more of my lizard skins.

When you say you need support for 40+ drives, do you mean in a single array, or just the total number seen by the card?
 
I thought it was all email-based support with the guys in Taiwan. LSI also has an office within walking distance of where I work, so I can always just storm in and complain if I ever have issues (or at least I'll try :p ).
 
Can I please just check with you experts that there are no outstanding issues with the 1680ix-24 and Seagate 1TB 333AS and 1TB 340AS drives before I place my order? I saw there were some issues in the middle of '09, but have they been cleared up now?
SD04 and SD15 firmware on the 340AS
SD15 and CC1F firmware on the 333AS

thx!
 
Can I please just check with you experts that there are no outstanding issues with the 1680ix-24 and Seagate 1TB 333AS and 1TB 340AS drives before I place my order? I saw there were some issues in the middle of '09, but have they been cleared up now?
SD04 and SD15 firmware on the 340AS
SD15 and CC1F firmware on the 333AS

thx!

AFAIK they drop out of the array like the WDs did without TLER on. But now WD has removed TLER from their desktop drives.
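One hedged alternative for drives that still honor SCT Error Recovery Control: smartmontools can cap the drive's internal error recovery at about 7 seconds, which behaves like TLER. A sketch follows; the device paths are placeholders, the setting is usually lost on power cycle, and many desktop drives (including later WD Greens) reject the command outright, so treat it as a per-boot, per-drive experiment.

```python
import subprocess

# Try to set SCT Error Recovery Control (a TLER-like timeout) on each array
# member via smartctl. 70 = 7.0 seconds (the value is in units of 100 ms),
# applied to both reads and writes.
ARRAY_MEMBERS = ["/dev/sda", "/dev/sdb"]  # placeholder device paths

for dev in ARRAY_MEMBERS:
    result = subprocess.run(["smartctl", "-l", "scterc,70,70", dev])
    print(f"{dev}: {'ok' if result.returncode == 0 else 'rejected or failed'}")
```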


odditory, last year it took about 4 weeks for them to send my 1220 back.
 
Don't do it. I was one of the unlucky people that purchased the 1680 and used those Seagate 340AS drives before the compatibility report was posted. They timed out / dropped out constantly. I put Hitachis (HDS721010KLA330) on it last year and haven't even had to check on it since then. The Seagate 340AS is still not compatible going by the compatibility report. However, I was looking through the release notes and found this:
2009-3-31: "Fix SATA raid controller seagate HDD error handling." They might be referring to the ES line.
 
We just bought two of the ARC-1680ix-24 cards and intend to use them with SATA HDDs. I can confirm that the Crucial CT51272AA667 4GB DDR2-667 ECC unbuffered RAM works in an Areca ARC-1680ix-24 card, as I tested it yesterday. I bought the RAM from PGN and had it shipped to Australia. We have 12x enterprise WD 2TB drives (NOT the low-power drives) and 12x Seagate ES2 1TB drives with the latest firmware, and will post some speed tests when we set it all up (early next week-ish). We originally had some 3ware 9650s and they were HORRIBLE performance-wise.
 
Don't do it. I was one of the unlucky people that purchased the 1680 and used those Seagate 340AS drives before the compatibility report was posted. They timed out / dropped out constantly. I put Hitachis (HDS721010KLA330) on it last year and haven't even had to check on it since then. The Seagate 340AS is still not compatible going by the compatibility report. However, I was looking through the release notes and found this:
2009-3-31: "Fix SATA raid controller seagate HDD error handling." They might be referring to the ES line.

To clarify, don't blame all 1680s for the problems you experienced with the Seagates. In my testing the issue is limited to the 1680ix cards, where the "ix" signifies the presence of an onboard SAS expander (easy to tell by the two heatsinks on the card). Thus the issue and incompatibility with some drives relates to this onboard expander, and it isn't even limited to the one Areca chose: Adaptec 5-series cards with more than 8 ports suffer a similar problem.

The Areca 1680, 1680i, 1680LP and 1680X do NOT have an onboard expander, and you'll find the drives mentioned will work in RAID arrays. I know because I've tested a lot of the known problematic ones, including the CC1H 1.5TB Seagates: flawless in RAID 6 with no dropping when combined with an HP SAS expander.
 
I have built two arrays, one with the Seagate 1TB ES2 drives, and received one timeout error; I also received one timeout error with the Western Digital 2TB enterprise drives :(. Will do some benchmarking after updating the RAID card's firmware (both sets of drives have the latest HDD firmware).
 
I have updated the ARC-1680ix-24 RAID card firmware to 1.48 and will do some speed tests and post the results.
 
I have not received another timeout error since updating the firmware (although it hasn't been that long yet), and I have done two benchmarks which have caused me some concern (screenshots below):

Seagate ES2 NS 1TB drives, RAID 6, 12 drives:
[benchmark screenshot: SeagateES2_1TB_Raid6_12drives_64k_NTFS_Run1.JPG]


Western Digital RE4 WD2003FYYS 2TB, RAID 5 (11 drives, plus a 12th as hot spare):
[benchmark screenshot: WDRE4-WD2003FYYS_2TB_Raid5_11drives_64k_NTFS_Run1.JPG]


The cluster size for the NTFS format is 64k. What on earth could be causing the slowdown at or just before the 128k test in these benchmarks?

EDIT: also, the RAID stripe size is 128k.
 
Which version of Windows is that, and which driver are you using: SCSIPORT or STORPORT? You can try the SCSIPORT driver and see if your benchmark numbers improve. You can also try some benchmarks with HD Tune Pro (trial).

Also, how long ago did you buy those Seagate and WD drives?
 
Windows Server 2003 SBS SP2 (32-bit). The Western Digital drives are brand new (bought in the last few weeks), while the Seagate drives are 1.5 years old. I will try to search for a newer driver, as this one says it's from 2008, but how can I check whether I used the SCSI or STOR version?
 
I used the STORPORT driver. I will reboot and install the SCSI driver after lunch, and also turn off NCQ and retest with HD Tune Pro in RAID 0.
 
With NCQ turned off and with the SCSI driver (I had to force this to install by renaming the .sys file to the same name as the STOR driver, as it refused to use anything other than the STOR driver), HD Tune Pro shows around 450MB/s read for both sets of disks (with one or two slowdown spikes to 300) and about the same for write. I will revert back to the STOR driver.
 
I have a suspicion that ATTO has a bug with regard to going over 1GB a second. I ran CrystalDiskMark 2.2, and sequential read and write are over 1200MB/s, with random 512k at 1100+MB/s. So do I take CrystalDiskMark to be accurate or not?
 
I have recreated the arrays, RAID 5 for the 2TB WDs and RAID 6 for the 1TB Seagates (as the Seagates are old), and the 12x 2TB WD RAID 5 is giving me 1100MB/s sequential read/write in CrystalDiskMark. I also did an ATTO-like benchmark in HD Tune Pro and it shows the same results. No timeouts whatsoever since the firmware update on the RAID card. The only thing holding us back now is the Windows 2003 32-bit operating system.
 
An update: when I was restoring a few large backup images to the RAID arrays on this card, I hit massive problems. First the RAID card crapped out mid-transfer, saying that drives had been removed and the RAID had failed when it hadn't (I rebooted and it was all magically fixed), and now I am getting thousands of drive timeout errors. I have emailed Areca support with my specs, which are as follows:

Motherboard: Tyan Tempest i5000PX S5380G2NR
Memory: 4GB DDR2-667 FBDIMM
Raid Card: Areca 1680ix-24
Hard Drives: 12 brand new Western Digital RE4 WD2003FYYS 2TB SATA hard drives and 12 Seagate ES2 1TB SATA drives with firmware SN06
Controller Name: ARC-1680
Firmware: V1.48 2010-01-04
Boot ROM Version: V1.48 2010-01-04
SAS Firmware: 4.7.3.0
System Memory: 4096MB/533MHz/ECC
OS: Windows Server 2003 SBS 32bit

Settings: NCQ is turned off, and the hard drive power settings are all turned off. There are two RAID sets on the card: the 12 Western Digital drives in a RAID 5 array, and 10 Seagate drives in RAID 5 with two hot spares. Since the huge failure of the RAID card yesterday, and after a reboot, I am getting massive numbers of timeout error emails from the card (around 10,000 in one day) when I know the drives are OK.
 
1) Is this a corporate or business server?
2) Exactly which drives does the event log report timeout errors for? Which RAID array was reported as failed: the Seagate array, the WD array, or BOTH?
3) Which version of the 1680 driver, and which one: STORPORT or SCSIPORT?

My money is on the onboard SAS expander causing these issues. I'm not thrilled about that Tyan motherboard in the mix; do yourself a favor and stick to Supermicro mobos going forward. The Tyan *probably* isn't the issue, but I mention it because I have seen issues between certain RAID cards and certain motherboards.
 
So folks say OCE with the Arecas is fast, but how fast? I just got a 1680ix-12 and am expanding a 7x 2TB RAID 6 array to an 8x 2TB RAID 6, and the migration seems to be going at roughly 1% per hour, so we're talking about 4 days to finish. Is that normal?
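At roughly 1% per hour, the math behind that "about 4 days" figure works out like this:

```python
# Straight extrapolation from the observed migration progress rate.
percent_per_hour = 1.0
hours_total = 100.0 / percent_per_hour
print(f"~{hours_total:.0f} hours = ~{hours_total / 24:.1f} days")  # ~100 h = ~4.2 days
```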
 
That's unusual. What's the "background task priority" set to in the system configuration? Make sure it's at the max: 80%.
 
Last time I did an array expansion, it didn't even take that long with a failed drive.
 
It is set to 80%. The only thing I can think of is that when you click Submit on the form for the RAID expansion, it asks "Change the volume attributes?" (RAID level and stripe size, I think). I clicked Yes (leaving them the same as before) since I wasn't sure, and assumed the controller would see that. Perhaps it didn't, and it is doing an expansion PLUS a useless RAID6/64k-to-RAID6/64k migration that's eating up the extra time?
 