About Areca ARC-1680ix

It hasn't mattered whether I've done 4 drives, 6 drives or 8 drives in RAID 6; it always fails. There is an issue with the IOP348 chip that Intel has confirmed as a bug related to the problem people are having, which is why other controllers like the Adaptec 52445 have issues as well. Part of the problem is that the IOP348 was designed for SAS with SATA as an afterthought (it has a SATA <-> SAS translation layer built in), and combined with the supposedly slow processor Seagate put on the 1.5TB's board, that's why all these problems are happening.

Cock!

That's really, really annoying. What are my options if I want a 16-24 port PCI-E controller to run with my existing Seagate 1TB and 1.5TB drives? I was looking at the 3ware 9650SE, but I'm not sure whether it uses the same IOP.
 
The 9650SE should work (it's not a SAS controller, so it doesn't have the same IOP as the 1680ix). I might have one available to sell in about a week, so send me a PM if you are interested (I'm asking a rather reasonable price compared to retail).
 
The 9650 has some performance issues. I'm running one with 6 Seagate 500GB ES.2 drives in RAID 50 and I average 127MB/s reads; a single drive on its own averages 90MB/s.
 
Don't go with 3ware. They suck. Also, if you have any issues with an array using drives over 1TB, you can kiss your array goodbye, because 3ware can't help you.

The ARC-1222 worked fine for me with the newer-firmware Seagate 1TB drives. I actually went through several rebuilds without issues. It also uses the IOP348, but I think it's the non-dual-core, slower (800MHz) version, so that might be why.

If you need a different controller, go for the ARC-1280ML. It will beat the crap out of the POS 3ware.
 
The WD 2TB WD20EADS works on this card, in case anyone is wondering.
 
Is it normal for 8 drives to take over 24 hours to initialize in RAID 6? These are 2TB drives.

Foreground init on my hardware RocketRAID 3560 took 5.5 hours with (13) 1TB drives. Background init while the array is in use could take much, much longer.
 
How is the 3560 working out for you? Have any benchmarks?

HighPoint confirmed the 3560 would work with the WD15EADS and WD20EADS (with TLER), and I may consider it if my RAID 6 speeds end up being too slow.
 
Kinda off-topic, but has anyone with a 3ware 9650SE tried the new 9.5.2 firmware? It seems to have added 256K striping for RAID 6, so I'm wondering what performance boosts are associated with that. 3ware claims a greater sequential read/write boost, which would be a nice improvement over the current speed of the array; just updating the firmware seems to have sped up my 64K-stripe RAID 6 by around 20-25 MB/s in sequential reads/writes.
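For anyone who wants to compare before and after under Linux, a rough sequential check looks like the one below; /dev/sdb and /mnt/array are only example names, both commands need root, and the write test creates (and then removes) an 8GB file.

# Sequential read off the raw array device, dropping the page cache first:
echo 3 > /proc/sys/vm/drop_caches
dd if=/dev/sdb of=/dev/null bs=1M count=8192

# Sequential write to a file on the array, flushed before dd reports a speed:
dd if=/dev/zero of=/mnt/array/ddtest bs=1M count=8192 conv=fdatasync
rm /mnt/array/ddtest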
 
Hi Guys,

I'm thinking about getting one of these cards for a media server that I am planning. I do have a few questions, if someone could help.

Is this card overkill for a media-server type of application? Is it worthwhile upgrading the RAM on the card to 2GB?

I was thinking of swapping out my current day-to-day hardware and using it for the server. I currently have a Gigabyte 965P-DS3P motherboard with a C2D E6300 and 4GB of DDR2-667.

Will this be OK? The CPU and RAM are currently overclocked; is this a good idea for a server, or should I take it back to stock?

E6300 2.13GHz @ 3.2GHz, RAM @ 800MHz.

Another thing about the motherboard is that it has one x16 slot at 16 lanes and one x16 slot at 4 lanes. Will there be any performance degradation from running the card in the 4-lane slot? I have a PCI-E graphics card that I will have to use as well, so could I put this card in the 4-lane slot?

Any help would be appreciated.
 
For a server, I would kill your overclock and put the video card in the slot with 4 lanes. My current file server has an E2180 and it is plenty. I also have a PCIe x1 video card in there (yes, such things do exist) and it's enough as well.
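On the 4-lane question: some back-of-the-envelope math (assuming the 965P board's slots are PCIe 1.x, at roughly 250MB/s per lane) suggests even the x4 slot shouldn't be the bottleneck for the array speeds people are quoting in this thread.

# rough PCIe 1.x bandwidth, ~250MB/s per lane in each direction
echo $(( 4 * 250 ))    # x4 slot  -> ~1000MB/s
echo $(( 16 * 250 ))   # x16 slot -> ~4000MB/s
# even x4 leaves headroom over the 400-700MB/s array reads reported here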
 
Kinda off-topic, but has anyone with a 3ware 9650SE tried the new 9.5.2 firmware? It seems to have added 256K striping for RAID 6, so I'm wondering what performance boosts are associated with that. 3ware claims a greater sequential read/write boost, which would be a nice improvement over the current speed of the array; just updating the firmware seems to have sped up my 64K-stripe RAID 6 by around 20-25 MB/s in sequential reads/writes.

I got one. I plan to move it from RAID 50 to RAID 10, maybe tomorrow, but I can upgrade the firmware first on the RAID 50 and see if it performs any better. I'm using 7 Seagate 500GB ES.2 drives on it, 6 in RAID 50 with a hot spare.
 
Does anyone who currently has this card want to do some testing with me to verify a firmware issue with Linux partitioning? You would need 4 or more drives in RAID 6 to mess around with for about 15-30 minutes, without worrying about the data.
 
I have the 16-port version and 8 Hitachi 750GB drives, but I don't have Linux installed on it, just Windows Server 2008 :(
 
It seems port #4 has gone bad on me... any drive I put on it fails. I have switched out cables, PCI-E slots, etc., and it still does it. If I use the same cable on ports 21-24 it's fine and doesn't make the drive fail on port 24.
 
Got my first "timed out / failed" WD20EADS drive; testing it now to see if it's the drive or the wonderful IOP348 piece of shit.
 
I'm going to do some testing when I get my 1680ix-24 on Monday... should be interesting.
 
Not quite sure yet. I still have a dozen WD RE2s left. I can probably squeeze some Samsung and Hitachi drives in there too. Probably some more WD ones too.
 
The card is freakin' incredible if your drives don't time out. I'm hoping the recent WD20EADS dropout was a fluke. If it tests OK, I'll put it back in and run some more tests. If it does it again, I'll sell the 1680ix-24 and get a 1280ML-24 or HighPoint 3560 (since it has the 24 internal and 4 external ports I need).
 
Well, if it doesn't work out for me, I have 30 days to return it to Newegg. I still have my 1280ML too.
 
If it does work for you, I may buy your 1280ML, assuming more drives drop for me :)
 
RAID rebuilt fine. No more drive drops. Giving it another 2-3 weeks to test.


If you're doing Linux, run the following tests to see how much RAID geometry affects performance:

Create a partition from 0 to -0 (all free space), then clear the page cache and read from it:

echo 3 > /proc/sys/vm/drop_caches
dd if=/dev/sdb1 of=/dev/null bs=128k count=80000

Do that a couple of times (echo/dd) to get an average speed.

Delete the partition, then create a new one from 63s to -0 (offset 63 sectors from the beginning).

Repeat the echo/dd tests.

The speed should be 2-3x faster with the 63s offset; it was for me, anyway. It has to do with the Areca card somehow expecting an MS-DOS partition table, so when you use parted (GPT) you must offset by 63 sectors.
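If it saves anyone some typing, here's the echo/dd part wrapped up as a small script to run against each layout; /dev/sdb1 is only an example partition, the script needs root, and the repartitioning between runs (0 vs. 63s start, with parted as above) is still done by hand.

#!/bin/sh
# Runs the read test above three times so the reported speeds can be averaged.
# The target partition is an example -- pass your own as the first argument.
PART=${1:-/dev/sdb1}

for i in 1 2 3; do
    # drop the page cache so we measure the array, not RAM
    echo 3 > /proc/sys/vm/drop_caches
    # dd prints its throughput on stderr when it finishes
    dd if="$PART" of=/dev/null bs=128k count=80000
done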
 
As it so happens, the 3ware 9650SE works a treat with my Seagate 1.5TB drives. Performance was rubbish until I applied the latest firmware update to the controller, but I'm still only seeing 400MByte/sec reads off a 13-disk RAID 5 array. I'm starting to consider the Areca again. Will the 1280 work with my existing Seagate 7200.11 drives (13 x 1.5TB, 8 x 1TB)? Where's the best place to buy these online from somewhere that will ship to Australia?
 
The 1280ML will work fine with the 7200.11 drives. I personally have 16 x 1TB drives on mine. I couldn't get more than 700MB/s read out of my array, but that's nothing to complain about. As for who will ship to Australia, no idea.
 
To repair a RAID set on this, should I just set the free drive as a hot spare and let it rebuild that way?

Answer: yes.
 
Has anyone tried out the new firmware yet? Mine is running like a rock, and I read through the changes and didn't see much that would improve performance or anything for my setup...
 
When you say "running like a rock", do you mean stable or slow? I could never get that phrase down. :confused:

If it's fine, I guess the old saying "If it ain't broke, don't fix it" applies. :)
 
My 1680ix-12 is running fine... my RE4-GPs in an EnhanceBox are a different story. A time-out here and there with NCQ disabled, though no dropouts. With NCQ enabled there are dropouts, with 1.46 too. I've read of someone running a larger array of RE4-GPs just fine (without an external enclosure).

I have 8 x 10K.2 drives attached too, no issues with that, but it's only been up for 3 days.
 
I wonder if the new firmware can speed up rebuilds. My 1680ix-24 is pretty slow when doing RAID 1 rebuilds (I only ever use RAID 1, for various reasons which I won't go into here). Dell PERC 5i and 6i controllers rebuild pretty much at the native speed of the drives, but the 1680ix is about 3x slower with the same drives. Well, with SATA drives; with SAS drives it's fast. So maybe it's due to the emulated SATA layer. Normal I/O is blazing fast.

I've got another 1680ix on order, so I will be able to compare new vs. old firmware on a non-production system soon...
 
Anyone running the new firmware notice any differences?
Not really; the most noticeable thing is the addition of APM support... which I don't recall being there before.

Also, RE4-GPs are working solidly when jumpered down to SATA150.

RE4-GP RAID 6 array: migrating from a 5-disk array to a 6-disk array took 40 hours to complete the migration + init (80% background, and yes, data was being written to the array during the process).
 
On APM:

(F) Only Hitachi HDDs are supported.
(G) Seagate and WDC HDDs report no APM support.
(H) Samsung HDDs report APM support but the function is incorrect, so Samsung HDDs are excluded.
 
When using the external port with a SAS enclosure: if that SAS enclosure is being used even slightly, my RAID on the internal ports grinds to a halt. Anyone have a similar setup to test?
 
I have the card and the SAS expanders, just not enough drives to test it with at the moment. Kinda odd though, as the internal ports run off a SAS expander too (IOP348 provides 8 lanes, 4 connected internally and 4 are external).
 
When using the external port with a SAS enclosure: if that SAS enclosure is being used even slightly, my RAID on the internal ports grinds to a halt. Anyone have a similar setup to test?
Me. 1 ext + 1 int->ext to an external EnhanceBox (SAS) 8-drive box. All eight bays are occupied: 6 x RE4-GP, 2 x 7200.11.

The other two internal ports go to 8 x 10K.2 (SAS) drives.

Performance seems fine. I can't really say whether heavy I/O or writes would be the same, but when a backup is running to the external enclosure (transfer rates about 50MB/s-80MB/s, source-disk limited), benchmarks and usability on my internal (system) disk seem normal.
 
I'm wondering if it's an incompatibility between my external enclosure and the RAID card, or between the external enclosure and the WD15EADS drives. I run all WD20EADS drives internally and they're fine. Seems odd. I'm getting a non-RAID SAS HBA to test and make sure the drives work fine in that enclosure. I may end up just doing software RAID on them since I'm having these issues on the 1680ix.
 