ARECA Owner's Thread (SAS/SATA RAID Cards)

It's been said many times in many forums: RAID is not a substitute for backups. But I can see where a hammer-fest like something out of Office Space would be very therapeutic.
Well I'm not gonna 1:1 backup 40 TB of data on another 40 TB setup. 2 drive protection seems pretty safe to me.

In theory it should be fine, although you will wear your drives out a lot faster by going that route. Also, unless you need that much cache for performance, it is cheaper to get an ARC-1880i and an HP SAS expander; the savings alone would cover a few drives.
Thanks for the suggestion, but not including hard drive count, I wanted to go pretty balls out from the start. But also stick with one company to reduce complexity. I like to keep and use things for a very long time through careful use and proper maintenance... which makes me think, what happens 6-8 years down the road if the Areca card dies and it's impossible to find a replacement anywhere? I'm probably pretty fucked, huh. But by then we should have 50 TB drives and I can do that full backup the other poster suggested.
 
Well I'm not gonna 1:1 backup 40 TB of data on another 40 TB setup. 2 drive protection seems pretty safe to me.
And it very well could be... right up to the point where you start playing with the volumes. I'm not saying don't do it. But at the same time, one can't act surprised when something goes whoops.

what happens 6-8 years down the road if the Areca card dies and it's impossible to find a replacement anywhere? I'm probably pretty fucked, huh.

Given the number of Areca cards on the market today, and loosely comparing what was current 6-8 years ago against what you can pick up aftermarket now, this seems unlikely, at least to me. Also, some have reported success where their old card died and a newer-generation (same series) card picked up their existing array.
 
Chances are you won't have the same Areca card 6-8 years down the road.

I mean, 6-8 years ago, setting up a RAID array with 40+ TB of storage was not even a dream... who knows what's going to happen in 6 years' time.
 
I've been buying ARC-1880i cards from assorted vendors and they've come with SFF-8087 to 4x SATA breakout cables. The latest two 1880i's I received have 8087 to 8087. I'm concerned maybe the card is an 1880IX, because the ARC-1880IX-24's I've gotten in the past had 8087 to 8087. When I contacted Areca by email they said they don't provide cables and to contact the seller, who in this case has no idea which cable should or shouldn't be in the package. So... is the 1880IX physically distinct from the 1880i card? If not, what's the best way to tell them apart?
 
I've been buying ARC-1880i cards from assorted vendors and they've come with SFF-8087 to 4x SATA breakout cables. The latest two 1880i's I received have 8087 to 8087. I'm concerned maybe the card is an 1880IX, because the ARC-1880IX-24's I've gotten in the past had 8087 to 8087. When I contacted Areca by email they said they don't provide cables and to contact the seller, who in this case has no idea which cable should or shouldn't be in the package. So... is the 1880IX physically distinct from the 1880i card? If not, what's the best way to tell them apart?

The X denotes that the card has an external SAS port, so anything -IX will have both internal and external ports. The 1880i only has 2 internal ports.
 
The X denotes that the card has an external SAS port, so anything -IX will have both internal and external ports. The 1880i only has 2 internal ports.
OK, mine only have the external RJ-45 (Ethernet management) port. Thank you!
 
The X denotes that the card has an external SAS port, so anything -IX will have both internal and external ports. The 1880i only has 2 internal ports.

If the X dictates external ports, what specifies the models with an onboard expander?


I just picked up an 1880i last week, and was surprised that it came with the SAS to SATA breakout cable.
 
If the X dictates external ports, what specifies the models with an onboard expander?


I just picked up an 1880i last week, and was surprised that it came with the SAS to SATA breakout cable.

Anything with "ix" is going to have an expander: those models offer more than the 8 ports supported by the RoC, so to provide the extra ports they must use an onboard expander chip.
 
I just picked up an 1880i last week, and was surprised that it came with the SAS to SATA breakout cable.
I got some from CDW and was surprised they came without breakout cables. Want to trade? :)
 
Yeah, the 1880i I ordered from Newegg came with 2 breakout cables. I just ordered a Norco RPC-4220, so I had to order SFF-8087 cables with it.
 
So I got my Norco 4220 in today and I am going to be moving my drives from breakout cables to the backplane. How does the drive order work? From what I understand it is based on the controller?

0,1,2,3
4,5,6,7

Is that the right order for an ARC-1880i?
 
I got some from CDW and was surprised they came without breakout cables. Want to trade? :)

As I do this professionally as well as personally, I'm going to hold onto them for future needs. sorry/thanks.
 
Just as a data point for future readers, my newest Norco 4220 buildout has the following setup for storage:

Areca 1880i
Intel RES2SV240 expander
Norco SFF-8087 cables
(8) Hitachi 7K2000 2TB drives, RAID-6, 64k stripe size and two cold spares
(12) Hitachi 5K3000 3TB drives, RAID-6, 64k stripe size and three cold spares.


My previous Norco builds have been as media storage devices. This box is running Storage Server 2008 R2 - I'm going to be utilizing it for VM storage via iSCSI. The other (larger) array is going to be used for storing client PC images, OS reinstallation images, software installs, and some media.

Being somewhat new to RAID builds, one of my biggest questions was stripe size - as I understand it, it's about balancing maximum IOPS against maximum transfer speed. I have elected for the 64k stripe size to be "in the middle" - as I learn more, I might end up backing up and rebuilding.
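To put rough numbers on that, here's a quick back-of-the-envelope sketch (my own, assuming a textbook RAID-6 layout where two chunks per stripe hold parity; the controller's actual layout may differ):

# Full-stripe write size for the two RAID-6 arrays above, at a few stripe sizes.
# Assumes (disks - 2) data chunks per stripe, i.e. textbook RAID-6.
def full_stripe_kib(disks, stripe_kib):
    return (disks - 2) * stripe_kib

for name, disks in [("8-drive 7K2000 array", 8), ("12-drive 5K3000 array", 12)]:
    for stripe_kib in (16, 64, 128):
        print(f"{name}: {stripe_kib}k stripe -> {full_stripe_kib(disks, stripe_kib)} KiB per full stripe")

Smaller stripes let smaller sequential writes fill a whole stripe (so parity can be computed without a read-modify-write), while bigger stripes favour large transfers.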

I am very interested to see the difference in performance between these two arrays. One is using 7200rpm drives, but there are only 8 spindles and it is SATA2/3Gbps. The second array is built with the newer lower-speed Hitachi drives (5400 or 5900 rpm?), but there are 12 spindles and these drives are SATA3/6Gbps.


Kicked off the initialization around 3pm ET - now 9:30 ET, current status is 73%. Hoping to get to do some testing tonight on the new array (if I can stay up late enough!)
 
Please do ask! I received my last two cards from you, but it was some years ago already. Glad to see you are still around. :)

All 16-port cards arrived with the older/smaller heatsinks, but after opening a box with a 24-port controller I found out it came with the newer/bigger one. Go figure. :)
If you happen to be shopping for a new ARC-1880ix card, I guess I could help you get one with the big heatsink. ;)
 
Has anyone managed to get the 1880i to work with CentOS 5.6 / 2.6.18-238.9.1.el5?
 
The 5.5 driver doesn't work on 5.6. :rolleyes:
You could always compile your own kernel from source; not sure why you wouldn't, really. Using a high-performance card like an 1880 with a 2.6.18 kernel is going to kill its performance...
 
Can't build it from source, and loading the 5.5 driver into it as instructed in the manual doesn't work. Problem being that the boot disk is attached to the same controller. :)
Booting off the 5.6 disk with the 5.5 drivers loaded doesn't work either, so I'm kinda wondering if anyone at all managed to get it running.
I'm not expecting any performance out of it; it currently holds an SSD and two mirrored 2TB RE4 disks. ;)
I need the kernel for some bugfixes in KVM.
 
Don't know if this helps you, but I got an 1880i working in Linux (Debian unstable) after reading this:

Areca support said:
Dear Sir/Madam,

yes, the driver version with 1880 support already available since kernel 2.6.36.

Best Regards,

Kevin Wang
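In case it helps anyone else, here's a small sanity check I'd run (my own sketch, assuming a Linux box and the mainline driver module name arcmsr) to see whether the running kernel is past that 2.6.36 cutoff and whether the module is actually loaded:

# Compare the running kernel against 2.6.36 and check for the in-tree arcmsr module.
import os
import re
import platform

release = platform.release()                                  # e.g. "2.6.38-2-amd64"
nums = re.findall(r"\d+", release)
ver = tuple(int(x) for x in (nums + ["0", "0", "0"])[:3])

if ver >= (2, 6, 36):
    print(f"Kernel {release}: in-tree arcmsr should know about the 1880.")
else:
    print(f"Kernel {release}: too old for in-tree 1880 support; you'd need Areca's out-of-tree driver.")

print("arcmsr loaded:", os.path.isdir("/sys/module/arcmsr"))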
 
My 1280ML seems to freeze upon bootup into Windows (well, everything freezes). Everything else works just fine without the card. I have a BBU connected to it; my video card is in the top PCIe slot and the Areca is in the second slot. Both are x8 slots (ASRock P67 Extreme4 board).

Anything I am doing wrong? Is there a setting in the Areca I need to change? I see it in the BIOS. No drives are connected yet. I'm running Windows 7 Ultimate x64. I installed Windows first with all updates, then the Areca, and installed the driver fine. Now the computer is starting to freeze again upon booting into Windows.

It says "No BIOS disk found, RAID controller BIOS not installed!" or something like that after the Areca boots.
 
Hi all, I'm looking to join the Areca club in the near future and have a few questions. I'm planning to pair a 24-port 1880ix card with a 24-bay Norco case, eventually filled with 24 2TB Hitachi hard drives in RAID-6. However, I really can't afford all that hardware along with 24 drives at the same time, so I would like to start with 10 drives, gradually expanding the array with 2 drives at a time until I hit 24. Is this possible? Could starting with a 10-drive RAID-6 and expanding 2 at a time, maybe even 1 at a time occasionally, ever present any problems? All drives would be the same 2 TB model, start to finish in the project.
I plan to rip and remux my 500+ physical Blu-rays and store them on this unit, and if anything ever went wrong and I lost all my data, I would probably just smash everything to pieces with a sledgehammer. That's why I'm looking at RAID-6 protection, but if many expansions could cause problems (it has to rebuild itself on every expansion, right?) I might take a different route.

Thanks for any info

I did this when going from 1TB to 2TB drives, but make sure your array is stable (run a check on it and make sure it completes) before attempting it, and also *definitely* do it one drive at a time (not two).

Given the number of Areca cards on the market today, and loosely comparing what was current 6-8 years ago against what you can pick up aftermarket now, this seems unlikely, at least to me. Also, some have reported success where their old card died and a newer-generation (same series) card picked up their existing array.

Every Areca controller I have used is backwards compatible with the others (as long as you're running a decently new firmware).

An ARC-1220 array (iop333) can be read by:
ARC-1231 (iop341)
ARC-1680i (iop348)
ARC-1222 (iop348)
ARC-1880i (PPC/LSI)

The ARC-1220 is a really old card. I have a Promise IOP333 card and Promise won't even make a 64-bit CLI for the damn thing on Linux, because to them it's EOL and was no longer supported even several years ago. Compare that to Areca, where even their old 1220 is still getting BIOS updates to keep the version the same across all their cards.

I wouldn't worry about this.
 
Need some help....
So I just bought a 1680ix-24 with 12 Samsung F4 2TB HD204UI drives.
I flashed the drives' firmware on my H55-USB3 mobo's SATA ports, as recommended.

Here's where the fun part begins. The Areca controller recognizes all 12 drives, and I set up a RAID-6 + hot spare with them (7.5 hrs to initialize). WHS 2008 R2 recognizes the RAID. Read/write speeds are fine with HD Tune. I began to write a large amount of data to the RAID and it froze up after an hour. Trying to read the data on the RAID also randomly results in a freeze. I've tried disabling all write/read caching and disabled NCQ.

The weird thing is, it's not as though the RAID drops out due to timeouts on multiple drives (I assume the RAID would just appear as degraded or failed until rebooted), but rather the entire card drops from the system, which hangs for 30 seconds before I can use it again. When it stops hanging I can't access the 1680 through either the web interface or the CLI. It's like the PCIe slot stopped communicating with the card entirely, or the card froze even though the card's lights are still on, so I'm not convinced it's the drives.

I've had problems with this card since day one not wanting to boot. The system event log is full of 'ctrl CPU 1.2V Under Voltage' errors, and I've seen it drop below 1.08V myself. Everything else in the error log is normal.

Any thoughts/suggestions???
 
Need some help....
So I just bought a 1680ix-24 with 12 Samsung F4 2TB HD204UI drives.
I flashed the drives' firmware on my H55-USB3 mobo's SATA ports, as recommended.

Here's where the fun part begins. The Areca controller recognizes all 12 drives, and I set up a RAID-6 + hot spare with them (7.5 hrs to initialize). WHS 2008 R2 recognizes the RAID. Read/write speeds are fine with HD Tune. I began to write a large amount of data to the RAID and it froze up after an hour. Trying to read the data on the RAID also randomly results in a freeze. I've tried disabling all write/read caching and disabled NCQ.

The weird thing is, it's not as though the RAID drops out due to timeouts on multiple drives (I assume the RAID would just appear as degraded or failed until rebooted), but rather the entire card drops from the system, which hangs for 30 seconds before I can use it again. When it stops hanging I can't access the 1680 through either the web interface or the CLI. It's like the PCIe slot stopped communicating with the card entirely, or the card froze even though the card's lights are still on, so I'm not convinced it's the drives.

I've had problems with this card since day one not wanting to boot. The system event log is full of 'ctrl CPU 1.2V Under Voltage' errors, and I've seen it drop below 1.08V myself. Everything else in the error log is normal.

Any thoughts/suggestions???
Might be unrelated, but using a desktop board + hardware RAID controller is usually not such a good idea.
 
Hello to you all, I really need some good advice. I have an Areca ARC-1220 and 4x 1TB Samsung Spinpoint F3 hard drives, plus 1x 3TB connected to the motherboard's SATA for backup, and I can't decide whether to use RAID 10, 6, or 5 for speed. I want to boot the OS from the RAID and will use Win 7 x64. I don't use the computer as a server, just for games and fun. I want speed, but data security as well. If I buy 2 more 1TB drives to have 6, will it make a difference whether I use 10, 6, or 5?
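For reference, here's roughly how the usable space works out with these drives (my own rough numbers, assuming 1 TB each and ignoring formatting overhead):

# Rough usable-capacity comparison: RAID 5 loses one drive to parity,
# RAID 6 loses two, RAID 10 loses half to mirroring.
def usable_tb(level, drives, size_tb=1.0):
    lost = {"RAID 5": 1, "RAID 6": 2, "RAID 10": drives / 2}[level]
    return (drives - lost) * size_tb

for drives in (4, 6):
    for level in ("RAID 5", "RAID 6", "RAID 10"):
        print(f"{drives} drives, {level}: {usable_tb(level, drives):.0f} TB usable")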
 
I've built a system with a 1280-ML and am quite happy with it, except that the failure LEDs don't work. This makes hot swapping practically impossible. But I can't find a pinout for the activity or failure headers.

What is the pinout on the Areca 1280-ML failure LED header?
 
Nope; they identify the jumper block, but don't give a pinout. Do you have a different manual? I'm looking at ARC_V4.1.pdf. I found Supermicro_Enclosure.pdf, which gives detailed pinouts for some controllers, but not for the ARC-1280ML.
 
They're not numbered; just indicated "FAULT" and "ACT". The numbers are for the BGA very near the headers. Even if they were numbered, that would just provide pin numbers, not signal names.
 
Definitely numbered. Should be pretty obvious. 24 pins for fault and 24 pins for activity... on a 24-channel card. Excuse the image quality, as this is an old photo and I no longer have the cards.

[attached image: 1280mlpinouts.png]
 
Interesting; my card certainly doesn't have the corner numbers. Piecing this together with the partial information in the manual, it looks like I have to find a 3.3V source and use that with these single-channel drives. The manual implies that one row is the "cathode side", but it turns out all the pins are cathode drivers, unlike the pinouts for the other cards.

Thanks for the help, and sorry that I tried your patience.
 
No worries. I'm still helpful even when slightly annoyed. :p If all this becomes too much work, you could just label your drives/drive trays and rely on email/SMS notifications (or the official LCD). It's what I do because my Norco cases don't support it either (unlike the Supermicro chassis I used to have).
 
Actually, I think I can catch a break. I'm using IcyDock MB-455SPF bays in a big tower case, and I think I can repurpose some 1x5-pin motherboard USB cables to connect from the "fail" LED inputs on the backplane of each module to the "fail" header on the controller.

I just tried grounding one of the backplane pins, and it looks like the single-sided cathode output will end up making the fail light on the bay work right. Once I get that hooked up, I can just straighten out all my cables and I'm in business.
 
I was going to build a fail/activity LED panel with two-color LEDs for my ARC-1880i that would fit in a 3.5" bay, but I moved it to a Norco case before I got around to making it. I found a few places where you could get 2x8 connectors with ribbon cables.
 
Can the Areca 1880ix support presenting LUNs to an OS?

I would like to use this card with VMware to RAID-6 8x 3TB drives and present 2TB LUNs to VMware (to bypass the VMware iSCSI 2TB limitation). Is this possible?
 
Yes, you could create an 18TB RAID set like you want and then create nine 2TB volumes on it, all with different SCSI LUNs.
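For anyone wondering where the nine comes from, the arithmetic is just this (a sketch in decimal TB, ignoring controller overhead):

# 8 x 3 TB in RAID-6 -> two drives' worth of parity, six drives' worth of data.
drives, drive_tb, parity_drives = 8, 3, 2
raidset_tb = (drives - parity_drives) * drive_tb   # 18 TB RAID set
lun_limit_tb = 2                                   # the per-LUN ceiling being worked around
luns = raidset_tb // lun_limit_tb                  # nine 2 TB volumes, one LUN each
print(f"{raidset_tb} TB RAID set -> {luns} x {lun_limit_tb} TB volumes/LUNs")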
 
Need some help....
So I just bought a 1680ix-24 with 12 Samsung F4 2TB HD204UI drives.
I flashed the drives' firmware on my H55-USB3 mobo's SATA ports, as recommended.

Here's where the fun part begins. The Areca controller recognizes all 12 drives, and I set up a RAID-6 + hot spare with them (7.5 hrs to initialize). WHS 2008 R2 recognizes the RAID. Read/write speeds are fine with HD Tune. I began to write a large amount of data to the RAID and it froze up after an hour. Trying to read the data on the RAID also randomly results in a freeze. I've tried disabling all write/read caching and disabled NCQ.

The weird thing is, it's not as though the RAID drops out due to timeouts on multiple drives (I assume the RAID would just appear as degraded or failed until rebooted), but rather the entire card drops from the system, which hangs for 30 seconds before I can use it again. When it stops hanging I can't access the 1680 through either the web interface or the CLI. It's like the PCIe slot stopped communicating with the card entirely, or the card froze even though the card's lights are still on, so I'm not convinced it's the drives.

I've had problems with this card since day one not wanting to boot. The system event log is full of 'ctrl CPU 1.2V Under Voltage' errors, and I've seen it drop below 1.08V myself. Everything else in the error log is normal.

Any thoughts/suggestions???

Maybe not enough power under load; could that be why it disappears under load?
I've had a similar problem where my 1880ix would disappear from the system while initializing drives, but it didn't hang the system; everything else kept running. My system didn't complain about voltage, but after I put the card in a different system, it fully initialized the drives.
 
I'm currently running an ARC-1260 on an Intel DX48BT2 motherboard as a home storage server. Since this is now on 24x7 I'm keen to get power consumption as low as possible. I've recently switched to 8x 2TB 5K3000 drives, which has at least halved power draw - accounting for the fact I can spin them down (the WD RE3's I had really didn't like that). The system boots from a separate disk.

The system spends most of its time idle, when it draws ~120W. Well over 80% of this seems to be the CPU+motherboard+graphics (NVS280), so I'm considering swapping them for something a bit more efficient. I'm keen to keep hold of the DDR3 I've already got, which is non-ECC.

The new Sandy Bridge chips seem to be far leaner when idle, and H67 supports the on-chip graphics: the Intel DH67BL apparently runs at ~16W idle with a Core i5-2500, which would pay for itself in just over 2 years. It has a PCIe 2.0 x16 slot, but it's sometimes described as being for graphics - even the Intel site can't seem to make its mind up. I've heard PCIe 2.0 eradicated the graphics-only PEG slot nonsense, but I've been burned before and I'm wary of wasting ~£250 on a CPU + mobo.
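Rough sketch of that payback sum (the electricity price and the ~100 W figure are assumptions of mine; the latter is just the 80%+ share of the ~120 W idle draw):

# Back-of-the-envelope payback for the CPU/mobo swap, running 24x7.
old_w, new_w = 100, 16      # old CPU+mobo+graphics idle share vs. new setup
price_per_kwh = 0.13        # GBP per kWh, assumed
hardware_cost = 250         # GBP, approx. CPU + motherboard

kwh_saved_per_year = (old_w - new_w) / 1000 * 24 * 365
saving = kwh_saved_per_year * price_per_kwh
print(f"~{kwh_saved_per_year:.0f} kWh/yr saved, ~£{saving:.0f}/yr, payback ~{hardware_cost / saving:.1f} years")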

Has anyone tried an Areca in an H67 mobo? I'm aware a server-grade board would be a safer bet, but it would be ~£150 more expensive, would also require new RAM, and seems unlikely to be as power efficient (happy to be corrected if you have data).

Thanks.
 