ARECA Owner's Thread (SAS/SATA RAID Cards)

I have a 1680ix-12 with 1GB cache and 8 Hitachi 5K3000 2TB drives, and was wondering if anyone has tested and confirmed the best stripe size and NTFS cluster size for RAID 6 on a video storage server?

I really don't want to do all that work if it's already been performed.
 
The reality is you're not going to notice a difference between 64 and 128 stripe on a small array of slow spinning disks, especially if it's just storing videos, so the default of 64 is fine.
 
Odditory - I believe you mentioned in a post somewhere that the sectors per cluster should be 32, i.e. a 16K cluster size.

Is this correct?
 
A 16K cluster size gives you a 64TB upper limit for volume size (partition), so I'd say that's fine. There is no one-size-fits-all; it depends on how large you think your volume will grow. Some people leave it at the default 4K when formatting, not knowing better, and then wonder why they can't exceed a 16TB volume size. 8K goes to 32TB, 16K goes to 64TB, etc. And obviously a larger cluster size means more space wasted on volumes with lots of small files, but that's really a "who cares" thing these days given how inexpensive hard disks have become in terms of price/GB.

http://support.microsoft.com/kb/140365
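For reference, those limits fall straight out of NTFS's 2^32-cluster cap; a quick sketch of the arithmetic:

# NTFS addresses at most 2**32 clusters per volume, so the maximum
# volume size is simply cluster_size * 2**32.
for kb in (4, 8, 16, 32, 64):
    max_tb = kb * 1024 * 2**32 / 2**40
    print(f"{kb:>2} KB clusters -> {max_tb:.0f} TB max volume size")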

Ultimately, trying to tune cluster sizes and stripe sizes on data archival volumes is as pointless as trying to tune 0-60 times on a cargo van.
 
Maybe it's not getting enough power under load? That could be why it disappears during load.
I've had a similar problem where my 1880ix would disappear from the system while initializing drives, but it didn't hang the system; everything else kept running. My system didn't complain about voltage, but after I put the card in a different system, it fully initialized the drives.

I've got an HX-650W PSU in there, so it should be more than enough. I tried swapping in a 1000W to no avail. I called Areca and they said the card is most likely bad, so I RMA'd it. I'll give a status update when I get the new one in.
 
I just picked up the areca 1880i for my server, which is using an evga 730i motherboard. It has onboard video and one 16x pcie 2.0 slot. I'm using the onboard video as output, and when I put the areca card in the 16x slot, it is not recognized at all. The card has power; during boot it beeps once, along with a few LED flashes from the card.

When I put the card in my desktop (in a 16x pcie slot) and booted, it was detected fine (allowing me to adjust raid sets, etc, in the bios).

Does anyone have any idea why it's not working in the pcie slot on the other motherboard?
 
I just picked up the areca 1880i for my server, which is using an evga 730i motherboard... Does anyone have any idea why it's not working in the pcie slot on the other motherboard?

I have heard some low-end boards with onboard video and only one PCI-E x16 slot sometimes can't use any type of expansion card in that PCI-E slot other than a video card. This means no NICs, RAID controllers, sound cards, etc. Might that be the case for you?
 
I have heard some low-end boards with onboard video and only one PCI-E x16 slot sometimes can't use any type of expansion card in that PCI-E slot other than a video card. This means no NICs, RAID controllers, sound cards, etc. Might that be the case for you?

There are also sometimes BIOS settings to manually set the PCIe width from Auto to x8, which might make the card accessible.
 
I just picked up the areca 1880i for my server which is using an evga 730i motherboard.
Check with eVga regarding that board. I had a 730i-based one from evga that was very picky about what it could use in the 1st PCI-e slot. Mine had no on-board video and would only allow a video card in the 1st slot. This doesn't relate directly to your board, of course, but I found the evga forums were pretty good for getting information.
 
After determining that bad SATA power termination on some custom cables was causing my resume-from-spin-down issues, a few months later I've now encountered something else.

On my ARC-1680LP I'm getting the following error, and the external SAS expander is just not showing up:
> Enc#1 Time Out Error
 
Having a wicked time with an 1880 on RHEL6; anyone have any tips?

I get ~50-100MBps read/write to an 8-drive RAID-0. I have tried setting up the array many different ways and none seem to work. At this point I'm led to believe it's the driver, but even that doesn't make a lot of sense, since I would imagine that would be a binary (works or doesn't work) scenario.

I have tried XFS and EXT4; both perform equally terribly. I have tried multiple FS configurations, to no avail.

These cards should be pumping out well over 500MBps, so what am I doing wrong? How can I get more info and troubleshoot this problem more quickly? It's a "server that needed to go up yesterday" scenario. Did I mention I'm also a novice Linux user?
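One way to narrow it down is to benchmark the raw array locally, outside of any network or application stack. A minimal sequential-write sketch (the /mnt/array path, file size, and block size are placeholders for whatever your setup uses, and because of the page cache the number is still only rough even with the fsync):

import os, time

TEST_FILE = "/mnt/array/throughput_test.bin"   # hypothetical mount point for the RAID-0 volume
BLOCK = 1024 * 1024                            # write in 1 MiB chunks
TOTAL = 4 * 1024 * 1024 * 1024                 # 4 GiB total

buf = os.urandom(BLOCK)
start = time.time()
with open(TEST_FILE, "wb") as f:
    written = 0
    while written < TOTAL:
        f.write(buf)
        written += BLOCK
    f.flush()
    os.fsync(f.fileno())                       # force the data out to the array before stopping the clock
elapsed = time.time() - start
print(f"Sequential write: {TOTAL / elapsed / 1e6:.0f} MB/s")
os.remove(TEST_FILE)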
 
@JoeKillaMofo

Stock RHEL kernel and stock driver? From an email I got from Areca support, full 1880 support is in kernel 2.6.36 or later.
 
Well that would explain a few things.... 2.6.32 kernel.
The RHEL6 kernel probably has it backported, just like they do with many other things.

It would still be a good idea to compile the latest source driver, however, or at least take a look at the one they provided.

A few more things to check: are you running the latest firmware? Also, make sure the card is running at PCIe 2.0 x8.
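If you want a quick sanity check of what kernel you're actually on, something like this works (a sketch; because of RHEL backports, an older version string doesn't by itself rule out arcmsr support for the 1880):

import platform, re

release = platform.release()                   # e.g. "2.6.32-131.17.1.el6.x86_64"
major, minor, patch = (int(x) for x in re.match(r"(\d+)\.(\d+)\.(\d+)", release).groups())
if (major, minor, patch) >= (2, 6, 36):
    print(f"{release}: mainline arcmsr support for the 1880 series expected")
else:
    print(f"{release}: older base kernel; check for a backported or vendor-supplied arcmsr driver")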
 
How fast does my server need to be?

I'm building a 24-drive media server to serve Blu-ray ISOs to Win7 Media Center HTPCs. So far all I've purchased is an Areca ARC-1880ix-24 controller and a Norco RPC-4224 case. I ordered the 1880 with the 4GB cache, but just got the standard 1GB version, which I was sort of expecting based on the price.

Initially the server will only be loaded with 5-10 3TB 6Gb/s SATA HDDs in RAID 6 and I will add more storage as needed, and I will only be streaming to 2 HTPCs. However, I would like to make sure that whatever I build has the potential to run 24 x 3TB drives and serve as many HTPCs as possible at once (minimum of 6, maybe 10 or more?).

I was initially planning to use Western Digital RE4 drives, but users here seem to be recommending Hitachi drives. What do I need, Deskstar or Ultrastar? The Ultrastar is just as expensive as the WD RE4 drives. For the system drive I'll either use a 2.5" laptop drive or SSD if it makes a big difference. I'll have to customize a system drive mount, as there are no internal bays on my case. The PCI based SSDs seem like a really good idea, especially due to the lack of a bay.

I've seen similar systems built with server-grade MBs and dual Xeon processors, but when I was adding up all the costs it gets very expensive, very fast. I know a lot of companies who sell media servers don't go to this extreme. My question is, how much does CPU speed have to do with RAID performance? I would think that with a hardware RAID card most of the heavy lifting would be done by the controller card, and the CPU wouldn't be as important, but I've never built a system like this before.

Also, how much memory do I need to get full performance from the system? I've read that you should have 1GB per 1TB of storage, but don't know if this is true.

Will I get a big performance boost from upgrading to the 4GB cache? This seems to be a pretty inexpensive place to start improving performance, but there won't be a lot of repeatedly accessed data, mostly sequential read/write.

I'm not averse to buying a server MB and only installing one CPU and limited memory to start, then upgrading as needed, if this is possible.

TIA!
 
@aero0T2:

The Hitachis most of the people here use are the Deskstars; they work very well with hardware RAID controllers and have proven to be fairly reliable (those that don't arrive DOA). Their new line consists of both 7200 and 5400 RPM drives; I would suggest the latter for a large array.

The RAID controller is largely responsible for the performance of the array; the rest of the hardware makes little difference, and you do not _need_ to use server-grade hardware for the core system (mobo/proc/memory).

As for memory, the standard should be fine - 2GB per channel or so. If you were going the ZFS route memory is more important, but that is not the case here. Also I don't see you needing the upgraded cache for your purposes.

For general performance, you will quickly find yourself limited by your network ports more so than the array. The theoretical max of a GigE port is 125MB/s before overhead; even my 8-drive RAID 6 could handle several of these maxed in both directions simultaneously, so you don't have much to worry about there.
 
Have you tested the drives individually?

No, but they mount properly and add to the array without error, and they are decent drives; the drives probably aren't the issue, not their performance anyway. I suppose I could test the individual drives through the controller, but if this is a driver problem that wouldn't confirm much. Everything seems to point to a driver issue at the moment. I am going to be throwing in a new system drive today and installing RHEL 6.1 to see how that works out. The driver installation readme from Areca is a bit strange; I have one last test to do this morning, and after that I will be installing the latest version with the built-in drivers.
 
@OldSchool:

Thanks for confirming my thoughts that the system speed isn't a big factor in RAID performance. Why would you recommend the 5400 RPM drives? Because the RAID will be so fast with multiple drives anyway that the NIC will be my bottleneck before hard drive performance? Drive prices have come down so far that I don't mind paying for 7200 RPM drives, but the power savings on 5400 RPM drives might be better in the long run too.

Can windows 7 load share with two ethernet connections to the switch to increase bandwidth? The Blu-Ray specs say that I need 54Mbps per disc playing, so even one maxed out gigabit port should be able to serve 18 players.
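Rough math on that claim (a sketch; 54Mbps is the spec's maximum transfer rate per playing disc, and real-world gigabit throughput lands somewhat below the 125MB/s theoretical figure, so the overhead factor here is an assumption):

GIGE_MBPS = 1000      # theoretical line rate of one gigabit port, in Mbps
BLURAY_MBPS = 54      # max Blu-ray transfer rate per playing disc
OVERHEAD = 0.90       # assumed fraction of line rate usable after TCP/SMB overhead

print(f"Raw line rate:  {GIGE_MBPS / BLURAY_MBPS:.1f} streams per port")             # ~18.5
print(f"With overhead:  {GIGE_MBPS * OVERHEAD / BLURAY_MBPS:.1f} streams per port")  # ~16.7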
 
The website says 2W/drive. So about 50W if you have 24 drives running. How this relates to real life, I'm not sure.
 
Thanks for confirming my thoughts that the system speed isn't a big factor in RAID performance. Why would you recommend the 5400 RPM drives?...

Can windows 7 load share with two ethernet connections to the switch to increase bandwidth?...

Yes, I suggested the 5400s for cost reasons, and because a large array of them will still be very fast. But if you can afford the 7200s and aren't concerned with the slightly higher cost of running them, then by all means go with those. While you won't see much benefit in terms of speed out to the network, you will have faster array build and verify times.

And yes, you can bond 2 (or more) GigE ports together; there are several names for this (link aggregation, port trunking, bonding, LACP, etc). The best approach is to get either a motherboard with dual Intel NICs or a standalone 2-4 port Intel NIC, plus a managed GigE switch that supports link aggregation (Like an HP or Cisco).
 
And yes, you can bond 2 (or more) GigE ports together...

...a managed GigE switch that supports link aggregation (Like an HP or Cisco).

It's important to note you must use a switch that understands how to interact with the bonded ports. Not all switches will do this. Prepare to spend big dollars for a switch that can do this.
 
It's important to note you must use a switch that understands how to interact with the bonded ports. Not all switches will do this. Prepare to spend big dollars for a switch that can do this.
Managed gigabit switches are not expensive. You can easily pick up a 24 port one for under $100.
 
Managed gigabit switches are not expensive. You can easily pick up a 24 port one for under $100.

Where? What make/model switch is that? One that properly supports combining more than one GIGABIT port for the purpose of increasing bandwidth in/out of a single computer? Presumably also with jumbo frames. Show me the device that supports managing THAT many ports for under $100.
 
Where? What make/model switch is that? One that properly supports combining more than one GIGABIT port for the purpose of increasing bandwidth in/out of a single computer? Presumably also with jumbo frames. Show me the device that supports managing THAT many ports for under $100.
eBay? Jumbo frames don't matter at all and of course it's gigabit...we were only just talking about that. I know what teaming is thanks. :rolleyes:

They're really not hard to find if you actually look around a bit:
http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&item=220783692303
http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&item=310318625548
 
eBay? Jumbo frames don't matter at all and of course it's gigabit...we were only just talking about that. I know what teaming is thanks. :rolleyes:

They're really not hard to find if you actually look around a bit

Not sure that used crap off fleabay makes for an accurate price range.

Run away from that Netgear switch. I suffered through having to support three sites with them. Between the crappy web interface, the flaky power supplies, and the failure to hold up under heavy network traffic loads, all three sites got rid of them. Better to have NO management than that junk.
 
I would like to know if anyone knows whether the 1880i uses 4 PCIe lanes per 4-port connector, or shares all 8 lanes between the 2 connectors.

I have no idea how to see the activity on the PCIe bus and I was hoping an inquiring mind has checked this.
 
The PCIe bus does not interface directly with the SAS ports.

All 8 PCIe lanes are utilized if the card is connected to an x8 link.
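For a sense of scale, here's the bandwidth math under the usual PCIe 2.0 assumptions (5 GT/s per lane, 8b/10b encoding), which is why the lane-to-connector split doesn't really matter:

LANES = 8
GT_PER_LANE = 5.0e9                              # PCIe 2.0: 5 GT/s per lane
USABLE_BYTES = GT_PER_LANE * 8 / 10 / 8          # 8b/10b encoding, then bits -> bytes
print(f"x{LANES} link: ~{LANES * USABLE_BYTES / 1e6:.0f} MB/s usable")   # ~4000 MB/s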
 
This is a new build and I wanted to split 20 drives between the two 8 port connectors.

I have one Intel RES2SV240 SAS expander single linked and was wondering if I should have two expanders split between the two connectors.

I'm not sure how all this shares the ports on the 1880i and want the best configuration for the RAID sets that I will have.

I'm looking to build 2 RAID 6 arrays with 8 drives each.

I can add another single-link Intel expander if need be?
 
@BigXor:

I assume you mean 4 port connectors? And yes you can add a 2nd Intel expander although I think it would be overkill for your setup.

In theory the Intel can link to your Areca card at 4 x 6Gbps (single link); that's ~3000MB/s before overhead. Since I do not own one of the Intel expanders I am unsure if it will only link at 6Gbps if you are using 6Gbps drives; maybe someone else can chime in on this? At any rate, based on this assumption, let's say you connect the expander to 4 of your backplanes and connect the 5th backplane directly to the Areca card. 16 Hitachi drives are going to be hard pressed to saturate a 24Gbps link.
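To put rough numbers on that last point (a sketch; the 130MB/s per-drive figure is my assumption for outer-track sequential throughput on a 5K3000-class drive, and the link math assumes 8b/10b encoding on the SAS 2.0 uplink):

LINK_GBPS = 4 * 6                              # single SFF-8087 uplink: 4 lanes x 6 Gbps
USABLE_MBPS = LINK_GBPS * 1000 * 8 / 10 / 8    # 8b/10b encoding, then bits -> megabytes/s
DRIVES = 16
PER_DRIVE_MBPS = 130                           # assumed sequential MB/s per Hitachi 5K3000

print(f"Expander uplink:     ~{USABLE_MBPS:.0f} MB/s usable")        # ~2400 MB/s
print(f"{DRIVES} drives flat out: ~{DRIVES * PER_DRIVE_MBPS} MB/s")  # ~2080 MB/s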
 
I see what you mean.

Yes, I'm using 5K3000 Hitachis and the Intel will connect at 6Gb/s.

That's why the word "n00bie" is next to my noble moniker.

Thanks all for the help.
 
I read the manual for the 1880i and I cannot figure out how the boot drive or array is designated.

My MB boot bios only shows 1 Areca device and I have no idea which drive or array is selected.

Also, can the bios on the controller be turned off if you don't want to use the controller as a boot device?

Can someone enlighten me?
 
I read the manual for the 1880i and I cannot figure out how the boot drive or array is designated... Also, can the bios on the controller be turned off if you don't want to use the controller as a boot device?

You can typically specify which device you boot from in your motherboard BIOS.

In the Areca settings, during new volume creation you can specify the SCSI Ch/Id/Lun. The first volume created defaults to 0/0/0, which would become the boot device if no other device was selected in your motherboard's BIOS.
 
You can typically specify which device you boot from in your motherboard BIOS.

My motherboard will only show one RAID device plus the other drives connected to the SATA ports. I have to select which array or pass-through drive connected to the 1880i will boot in the 1880i's controller bios. I come from an Adaptec world, and they simply had a hot-key to select the boot array.

According to your reply, the first drive or device (#000) will be the boot array or pass-through drive then.

Thanks for replying.
 